\section{Introduction} Feedback is one of the fundamental techniques of classical control theory \cite{Bechhoefer2005} and its translation into the quantum realm \cite{Wiseman2009, Gough2012} seems certain to play an equally important role in the rapidly-developing field of quantum technology. This work concerns itself with the application of feedback control to quantum transport, where a number of interesting effects have already been predicted, such as the freezing of current fluctuations \cite{Brandes2010}, stabilisation of quantum states \cite{Kiesslich2011,Poeltl2011,Kiesslich2012}, realisation of a mesoscopic Maxwell's demon \cite{Schaller2011, Esposito2012,Strasberg2013} and delay effects \cite{Emary2013b}. In these hitherto-proposed schemes, the feedback loops employed were examples of measurement-based quantum control \cite{Wiseman1994,Wiseman2009}, in which the full counting statistics of electron transport \cite{Levitov1996,Bagrets2003,Gustavsson2006,Fujisawa2006} were monitored and control operations applied to the system in response to individual electron tunnelling events. The feedback loop in such cases is classical, as is the information that flows between system and controller. In contrast, we are here interested in the application of {\em coherent feedback control} to quantum transport. In coherent control, the system (or, to borrow the engineering term, the {\em plant}), the controller and their interconnections are all quantum-mechanical and phase coherent. The system-controller complex therefore evolves under joint unitary dynamics, and the information that flows between plant and controller is quantum, rather than classical \cite{Lloyd2000}. 
The main advantages of coherent feedback control over its measurement-based cousin are held to be \cite{Zhang2011}: reduced noise, since the additional disturbance produced by the quantum-mechanical measurement process is absent; and speed, since the coherent controller is likely to operate on the same time scales as the plant (in contrast, a classical controller will be limited to speeds associated with traditional electronics). Various forms of coherent control have been discussed in the literature, e.g. Refs.~\onlinecite{Lloyd2000,Mabuchi2008,Zhang2011}, but the type we will focus on here is the {\em quantum feedback network} developed by one of the authors with James \cite{Gough2008,Gough2009,Gough2009a,James2008,Nurdin2009,Zhang2011} (see \citer{Zhang2012} for a recent review). The proposal of such networks can be traced to the cascading of open systems due to Carmichael and Gardiner \cite{Carmichael1993a,Gardiner1993}, feedback connections for linear quantum systems \cite{Yanagisawa2003}, as well as the all-optical measurement-based feedback schemes of Wiseman and Milburn \citer{Wiseman1994a}, which can also be placed in this setting \cite{Gough2009a}. A number of experiments have been performed in this paradigm, including disturbance rejection \cite{Mabuchi2008} and the control of optical squeezing \cite{Iida2012,Crisafulli2013}. Further proposals include automatic quantum error correction \cite{Kerckhoff2010,Kerckhoff2011}, suppression of switching in bistable optical systems \cite{Mabuchi2011}, cavity cooling \cite{Hamerly2012,Hamerly2013}, and the generation of entangled photons \cite{Hein2014}. Whilst these developments have taken place largely in the context of quantum optics, our aim here is to study coherent feedback control in quantum transport. In particular, we are interested in how a quantum feedback network can be used to modify the conduction properties of a mesoscopic device. 
To be specific, our focus will be on four-terminal devices, \fig{FIG:SFB}a, which we embed in a feedback network by connecting two of the four leads in a loop via some external control circuit or device, \fig{FIG:SFB}b. We will assume that the motion of electrons through plant and controller is phase coherent and that electron-electron interactions can be neglected. In this limit, transport can be described by Landauer-B\"uttiker theory \cite{Blanter2000}, where both the plant and controller are described by scattering matrices. Analysis of the feedback loop amounts to finding the composite scattering matrix of the system-controller complex and relating this to the conduction properties. From this, the main formal result is that when the number of controller channels, $M$, equals the number of plant channels that remain after feedback, $N$, then free choice of the control scattering matrix allows us to set the scattering matrix of the combined system as desired. We refer to this situation as ``ideal control'', and since the scattering matrix can be set at will, so can the conduction properties (within natural limits set by the dimensionality of the scatterer). We then explore the issue of what happens away from this ideal limit. The first question we address is to what extent the conductance can be controlled when the size of the controller is smaller than required for ideal control, i.e. when $M < N$. To answer this we consider the concrete example of a chaotic quantum dot, the scattering through which we describe with random matrix theory \cite{Beenakker1997}. We assume an unconstrained controller and choose its parameters so as to optimise the conductance through the dot as a function of $0 \le M \le N$. We compare these results with those obtained from a second control geometry in which the quantum dot is connected to the controller in series, \fig{FIG:series}. 
We find that, for all $0<M<N$, the feedback geometry significantly outperforms the series geometry for conductance maximisation. \begin{figure}[t] \psfrag{S}{\scalebox{3}{$S$}} \psfrag{S0}{\scalebox{3}{$S_0$}} \psfrag{K}{\scalebox{2.7}{$K$}} \psfrag{K0}{\scalebox{2.3}{$K_0$}} \psfrag{K1}{\scalebox{1.7}{$K_1$}} \psfrag{K2}{\scalebox{1.7}{$K_2$}} \psfrag{A}{\scalebox{1.5}{A}} \psfrag{B}{\scalebox{1.5}{B}} \psfrag{C}{\scalebox{1.5}{C}} \psfrag{D}{\scalebox{1.5}{D}} \psfrag{N}{\scalebox{1}{$N$}} \psfrag{M}{\scalebox{1}{$M$}} \psfrag{N'}{\scalebox{1}{$N'$}} \psfrag{(a)}{\scalebox{1}{\textbf{(a)}}} \psfrag{(b)}{\scalebox{1}{\textbf{(b)}}} \psfrag{(c)}{\scalebox{1}{\textbf{(c)}}} \begin{center} \includegraphics[width=\columnwidth,clip=true]{./1_Smatrix_schematic.eps} \end{center} \caption{(color online) Schematics of quantum feedback network consisting of a mesoscopic device, $S$, and controller, $K$. \textbf{(a)} The isolated device with four leads labelled A through D. Leads A and B possess $N$ (bidirectional) channels; leads C and D possess $M$. \textbf{(b)} The feedback loop is realised by connecting the leads C and D together via the controller. After feedback, the device becomes a scatterer between the $N$ channels of lead $A$ and the $N$ channels of lead B. \textbf{(c)} A feedback network where the original device is a two-terminal device, $S_0$. The addition of scatterers $K_1$ and $K_2$ converts $S_0$ into a four-terminal device (enclosed in the dashed box here), such that this network maps on to that in part (b). \label{FIG:SFB} } \end{figure} Secondly, we consider the effects of dephasing on both feedback and series geometries and show that conductance maximisation in the feedback case is far more robust to dephasing than in the series case. Indeed, the feedback geometry can provide some degree of conductance increase even in the presence of total dephasing. The series geometry cannot. This paper proceeds as follows. 
In \secref{SEC:scat}, we discuss the scattering problem and derive an expression for the scattering matrix of the combined system-controller network in both feedback and series geometries. \secref{SEC:ideal} introduces the notion of ideal control and examines the conditions under which it can be achieved. Secs.~\ref{SEC:QD} and \ref{SEC:deph} discuss numerical results for conductance optimisation for the quantum dot and focus on the effects of controller dimension and dephasing, respectively. Finally, \secref{SEC:disc} contains some concluding remarks and perspectives. \section{Quantum feedback network \label{SEC:scat}} The plant here is a four-terminal mesoscopic conductor with leads labelled A and C on the left and B and D on the right (\fig{FIG:SFB}a). Leads A and B each support $N$ conduction channels; leads C and D each support $M$ \footnote{ We describe two groupings of channels on each side (e.g. A and C on the left) as inhabiting physically distinct leads, but this need not necessarily be the case. The key requirement is that the two groups of channels be independently accessible. }. Should the plant of interest actually be a two-terminal conductor, use can be made of the geometry shown in \fig{FIG:SFB}c. Here two three-terminal scatterers, presumably very simple, are added before and after the original two-terminal plant. The composite of these three elements is then a four-terminal device, as assumed by the following formalism \footnote{ The layout \fig{FIG:SFB}c is reminiscent of the canonical feedback loop of classical control theory. The important thing to realise is that here the interconnects represent leads that support transport in both directions, rather than transfer in just a single direction. Thus, there is no issue with the cloning of quantum information here, as there is in the straightforward quantum generalisation of the classical transfer model. }. 
Let $b^\mathrm{in}_{\mathrm{X}, n}(E)$ be the annihilation operator for an incoming electron of energy $E$ in channel $n$ of lead $\mathrm{X=A,B,C,D}$, and let $b^\mathrm{out}_{\mathrm{X},n}(E)$ be the corresponding operator for an outgoing electron. In Landauer-B\"uttiker theory \cite{Blanter2000}, the device is treated as a phase-coherent scatterer of electrons with incoming and outgoing states related by \beq {b}^\mathrm{out}(E) = S(E) \; b^\mathrm{in}(E) , \label{eq:io} \eeq where $b^\mathrm{in}(E)$ and $b^\mathrm{out}(E)$ are vectors containing the appropriate annihilation operators of all leads, and $S(E)$ is the scattering matrix of the device at energy $E$. Note that the scattering matrix $S$ must be unitary. In the current work we will consider linear transport only and, in this case, the only relevant energy is the Fermi energy of the leads A and B. All quantities, in particular the scattering matrix $S$, will be evaluated at this Fermi energy, and we suppress the energy index from now on. We write \eq{eq:io} as \beq \rb{ \begin{array}{c} b^\mathrm{out}_{\mathrm{A}} \\ b^\mathrm{out}_{\mathrm{B}} \\ b^\mathrm{out}_{\mathrm{C}} \\ b^\mathrm{out}_{\mathrm{D}} \end{array} } = \rb{ \begin{array}{cccc} S_{\mathrm{AA}} & S_{\mathrm{AB}} & S_{\mathrm{AC}} & S_{\mathrm{AD}} \\ S_{\mathrm{BA}} & S_{\mathrm{BB}} & S_{\mathrm{BC}} & S_{\mathrm{BD}} \\ S_{\mathrm{CA}} & S_{\mathrm{CB}} & S_{\mathrm{CC}} & S_{\mathrm{CD}} \\ S_{\mathrm{DA}} & S_{\mathrm{DB}} & S_{\mathrm{DC}} & S_{\mathrm{DD}} \\ \end{array} } \rb{ \begin{array}{c} b^\mathrm{in}_{\mathrm{A}} \\ b^\mathrm{in}_{\mathrm{B}} \\ b^\mathrm{in}_{\mathrm{C}} \\ b^\mathrm{in}_{\mathrm{D}} \end{array} } , \; \; \; \; \label{eq:io_matrix} \eeq where the component $S_{\mathrm{XY}}$ is the matrix relating the input in lead X (i.e., $b^\mathrm{in}_\mathrm{Y}$) to the output in lead X (i.e., $b^\mathrm{out}_\mathrm{X}$). In \fig{FIG:binary_io} we show two representations of the scattering by this device. 
In the first representation the modes are organised in terms of the physical leads (e.g. $b^\mathrm{in}_{\mathrm{A}}$ and $b^\mathrm{out}_{\mathrm{A}}$ are grouped together); the second representation reflects the structure of the scattering matrix. \subsection{Feedback \label{SUBSEC:FB}} We introduce feedback by connecting the channels in lead C to those in lead D via the controller. We describe the latter with the $2M \times 2M$ scattering matrix, $K$. To facilitate our description, we partition the scattering matrix in terms of those channels that will form the feedback loop (those in leads C and D) and those that will persist (A and B). We therefore write \beq S = \rb{ \begin{array}{cc} S_\mathrm{I} & S_\mathrm{II} \\ S_\mathrm{III} & S_\mathrm{IV} \end{array} } \label{EQ:Sfull} , \eeq with blocks \beq S_\mathrm{I} = \rb{ \begin{array}{cc} S_{\mathrm{AA}} & S_{\mathrm{AB}} \\ S_{\mathrm{BA}} & S_{\mathrm{BB}} \end{array} } ;\quad S_\mathrm{II} = \rb{ \begin{array}{cc} S_{\mathrm{AC}} & S_{\mathrm{AD}} \\ S_{\mathrm{BC}} & S_{\mathrm{BD}} \end{array} } ; \nonumber\\ S_\mathrm{III} = \rb{ \begin{array}{cc} S_{\mathrm{CA}} & S_{\mathrm{CB}} \\ S_{\mathrm{DA}} & S_{\mathrm{DB}} \end{array} } ;\quad S_\mathrm{IV} = \rb{ \begin{array}{cc} S_{\mathrm{CC}} & S_{\mathrm{CD}} \\ S_{\mathrm{DC}} & S_{\mathrm{DD}} \end{array} } . \eeq The scattering matrix for the complete feedback network, $S_\mathrm{fb}$, can then be derived by considering all scattering processes between leads A and B. Firstly, there is direct scattering, which is described by scattering block $S_\mathrm{I}$. Electrons can also be scattered into the feedback loop, traverse it once, and then re-emerge into the ``AB system''. This is described by the sequence of matrices $S_\mathrm{II} K S_\mathrm{III} $. 
Further processes are then possible in which the electron makes $n$ traversals of the feedback loop, to give the scattering term $ S_\mathrm{II} \rb{ K S_\mathrm{IV}}^{n} K S_\mathrm{III} ;~ n=1,2,\ldots $. Summing all such processes, the total scattering matrix of the system with feedback reads: \beq S_\mathrm{fb} = S_\mathrm{I} + S_\mathrm{II} \frac{1}{\mathbbm{1}- K S_\mathrm{IV}} K S_\mathrm{III} \label{EQ:SFB1} , \eeq with $\mathbbm{1}$ the unit matrix (here of dimension $2M\times 2M$). This form relies on the existence of the inverse of $\mathbbm{1}- K S_\mathrm{IV}$; physically, this corresponds to the condition that all $M$ channels connect through the controller. Similar results have been derived previously, see, e.~g., \citer{Gough2008}. \begin{figure}[tbp] \flushleft{\hspace{5mm}\textbf{(a)}} \includegraphics[width=\columnwidth]{./2_binary_io.eps} \flushleft{\hspace{5mm}\textbf{(b)}} \includegraphics[width=\columnwidth]{./3_io.eps} \caption{ Two equivalent representations of an input-output device describing the scattering of fields $b_\mathrm{X}^\mathrm{in}$ into $b_\mathrm{X}^\mathrm{out}$ in leads $\mathrm{X=A,B,C,D}$. These fields are multidimensional with $N$ being the multiplicity of modes $b^\mathrm{in}_{\mathrm{A}}$, $b^\mathrm{out}_{\mathrm{A}}$, $b^\mathrm{in}_{\mathrm{B}}$, and $b^\mathrm{out}_{\mathrm{B}}$, and with $M$ being the multiplicity of modes $b^\mathrm{in}_{\mathrm{C}}$, $b^\mathrm{out}_{\mathrm{C}}$, $b^\mathrm{in}_{\mathrm{D}}$, and $b^\mathrm{out}_{\mathrm{D}}$. In \textbf{(a)}, as in \fig{FIG:SFB}a, fields within a given lead are grouped together, with leads A and C on the left and B and D on the right. In this representation, the device is seen to be a four-lead device, where each lead is bidirectional. \textbf{(b)} shows a representation that mirrors the action of the scattering matrix in which all input fields are drawn to the right, and all output fields leave on the left. 
\label{FIG:binary_io} } \end{figure} \subsection{Series \label{SUBSEC:series}} To gain an appreciation of the utility of the feedback geometry, we will compare it with a further plant-controller network, namely the {\em bidirectional series} connection: this is the generalization of the series product for unidirectional fields introduced in \citer{Gough2009a}. Our first order of business is to describe how we model two-port bidirectional systems. These arise as unidirectional four-port systems (see \fig{FIG:2port}), where the inputs $b^\mathrm{in}_\mathrm{A} ,b^\mathrm{in}_\mathrm{B}$ and outputs $b^\mathrm{out}_\mathrm{A} ,b^\mathrm{out}_\mathrm{B}$ each have multiplicity $N$. The scattering matrix may then be written in block form as \beq S= \rb{ \begin{array}{cc} r & t' \\ t & r' \end{array} } , \label{eq:S-matrix} \eeq that is, $r=S_\mathrm{AA}$ is the $N\times N$ complex matrix describing the reflection coefficients of input $b^\mathrm{in}_\mathrm{A}$ into $b^\mathrm{out}_\mathrm{A}$, $t=S_\mathrm{BA}$ describes the transmission coefficients of $b^\mathrm{in}_\mathrm{A}$ into $b^\mathrm{out}_\mathrm{B}$, etc. The bidirectional series construction between a plant with scattering matrix $S$ and a controller with scattering matrix $K$ is shown in \fig{FIG:series}. Here both $S$ and $K$ are $2N\times 2N$ unitary matrices which act between input and output fields as \begin{eqnarray} \rb{ \begin{array}{c} b^\mathrm{out}_\mathrm{A} \\ c \end{array} } = \rb{ \begin{array}{cc} r_S & t^\prime_S \\ t_S & r^\prime_S \end{array} } \rb{ \begin{array}{c} b^\mathrm{in}_\mathrm{A} \\ d \end{array} } , \nonumber \\ \rb{ \begin{array}{c} d \\ b^\mathrm{out}_\mathrm{B} \end{array} } = \rb{ \begin{array}{cc} r_K & t^\prime_K \\ t_K & r^\prime_K \end{array} } \rb{ \begin{array}{c} c \\ b^\mathrm{in}_\mathrm{B} \end{array} } . 
\end{eqnarray} From this we see that \begin{eqnarray} \rb{ \begin{array}{cc} \mathbbm{1} & -r^\prime_S \\ -r_K & \mathbbm{1} \end{array} } \rb{ \begin{array}{c} c \\ d \end{array} } = \rb{ \begin{array}{c} t_S \, b^\mathrm{in}_\mathrm{A} \\ t^\prime_K \, b^\mathrm{in}_\mathrm{B} \end{array} } . \end{eqnarray} The network will be well-posed if we can invert the matrix to solve for $c$ and $d$. Assuming that this is indeed the case, then we can use the block matrix inversion (Banachiewicz) formula \begin{eqnarray} \rb{ \begin{array}{cc} \mathbbm{1} & -r^\prime_S \\ -r_K & \mathbbm{1} \end{array} }^{-1} = \rb{ \begin{array}{cc} \Delta_{SK} & \Delta_{SK} \, r^\prime_S \\ \Delta_{KS} r_K & \Delta_{KS} \end{array} } , \; \; \; \end{eqnarray} where \beq \Delta_{SK} = (\mathbbm{1}-r^\prime_{S} r_K )^{-1} , \nonumber \\ \Delta_{KS} = (\mathbbm{1}- r_K r^\prime_S )^{-1} . \eeq This allows us then to write \beq \rb{ \begin{array}{c} b^\mathrm{out}_\mathrm{A} \\ b^\mathrm{out}_\mathrm{B} \end{array} } = \rb{ \begin{array}{cc} r & t' \\ t & r' \end{array} } \rb{ \begin{array}{c} b^\mathrm{in}_\mathrm{A} \\ b^\mathrm{in}_\mathrm{B} \end{array} } , \eeq with the blocks \beq r & = & r_S +t^\prime_S r_K \Delta_{SK} t_S \nonumber \\ t^\prime & = & t^\prime_S \Delta_{KS} t^\prime_K \nonumber \\ t & = & t_K \Delta_{SK} t_S \nonumber \\ r^\prime & = & r^\prime_K + t_K r^\prime_S \Delta_{KS} t^\prime_K \label{EQ:seriestr} . \eeq This then is the combined scattering matrix for the bidirectional series product $S \Diamond K$ as defined in \fig{FIG:series}. The joint scattering matrix, $S_{S\Diamond K}$, obtained this way agrees with the standard calculation for two scatterers $S$ and $K$ connected in series \cite{Datta1997}. We remark that the formula should generalize to the situation where both the plant and controller are Markovian quantum systems, which involve an internal Hamiltonian $H$ and coupling/collapse operators $L$ in addition to the scattering matrix $S$. 
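Both compositions are straightforward to check numerically. The following sketch (Python with numpy; the function and variable names are ours and purely illustrative) closes the feedback loop of \eq{EQ:SFB1} and forms the bidirectional series product of \eq{EQ:seriestr} for Haar-random unitaries, and verifies that each composite scattering matrix is again unitary, as losslessness requires:

```python
import numpy as np

def haar_unitary(n, rng):
    """n x n Haar-random unitary via QR decomposition with phase correction."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def feedback(S, K, N, M):
    """S_fb = S_I + S_II (1 - K S_IV)^(-1) K S_III, channels ordered (A,B,C,D)."""
    SI, SII = S[:2*N, :2*N], S[:2*N, 2*N:]
    SIII, SIV = S[2*N:, :2*N], S[2*N:, 2*N:]
    return SI + SII @ np.linalg.solve(np.eye(2*M) - K @ SIV, K @ SIII)

def series(S, K, n):
    """Bidirectional series product S <> K of two 2n x 2n scattering matrices."""
    rS, tSp, tS, rSp = S[:n, :n], S[:n, n:], S[n:, :n], S[n:, n:]
    rK, tKp, tK, rKp = K[:n, :n], K[:n, n:], K[n:, :n], K[n:, n:]
    D_SK = np.linalg.inv(np.eye(n) - rSp @ rK)   # Delta_SK
    D_KS = np.linalg.inv(np.eye(n) - rK @ rSp)   # Delta_KS
    return np.block([[rS + tSp @ rK @ D_SK @ tS, tSp @ D_KS @ tKp],
                     [tK @ D_SK @ tS,            rKp + tK @ rSp @ D_KS @ tKp]])

rng = np.random.default_rng(7)
N, M = 3, 2
S4 = haar_unitary(2*N + 2*M, rng)                 # four-terminal plant
Sfb = feedback(S4, haar_unitary(2*M, rng), N, M)  # close the loop through K
Ssr = series(haar_unitary(2*N, rng), haar_unitary(2*N, rng), N)
# both compositions of lossless scatterers are lossless (unitary)
assert np.allclose(Sfb.conj().T @ Sfb, np.eye(2*N))
assert np.allclose(Ssr.conj().T @ Ssr, np.eye(2*N))
```

For generic (Haar-random) matrices the well-posedness inverses exist, so no special handling is needed in this sketch.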
\begin{figure}[tbp] \begin{center} \includegraphics[width=\columnwidth]{./4_io_2port.eps} \end{center} \caption{ \textbf{(a)} A two-lead device with leads A and B each of which supports $N$ bidirectional channels. Scattering is described by the unitary matrix $S$. \textbf{(b)} Equivalently, the system can be described as a four-port unidirectional device, where the fields $b^\mathrm{in}_\mathrm{A} ,b^\mathrm{in}_\mathrm{B},b^\mathrm{out}_\mathrm{A} ,b^\mathrm{out}_\mathrm{B}$ each have multiplicity $N$. } \label{FIG:2port} \end{figure} To make a direct comparison between series and feedback cases, we construct the control matrix $K$ here to have $N-M$ trivially-transmitting channels and $M$ channels that are actually subject to a control scattering matrix. \begin{figure}[tb] \begin{center} \includegraphics[width=\columnwidth]{./5_biseries.eps} \end{center} \caption{ (color online) \textbf{(a)} A mesoscopic device, $S$, and controller, $K$, connected in the {\em bidirectional series configuration}, which we denote by $S \Diamond K$. All leads support $N$ channels in each direction. \textbf{(b)} The bidirectional series connection of devices $S$ and $K$ may be expressed in terms of unidirectional models where we see explicitly the presence of a single algebraic feedback loop where output $c$ from the plant $S$ is fed into $K$ and contributes to output $d$ with gain $r_K$, while $d$ enters $S$ and contributes to output $c$ with gain $r'_S$. \label{FIG:series} } \end{figure} \subsection{Conductance} In the limit of low temperature and small bias about a Fermi energy $E_F$, the conductance of a two-terminal sample with a scattering matrix as in \eq{eq:S-matrix} is given by \cite{Blanter2000} \beq G &=& G_0\mathrm{Tr}[t^\dag t] , \eeq where $G_0 = {\textstyle \frac{2e^2}{h}}$ is the conductance quantum (all channels assumed spin-degenerate) and where the transmission block $t$ is evaluated at the Fermi energy $t = t(E_F)$. 
With $T_{n}$ the transmission probabilities given by the eigenvalues of matrix $t^\dag t$, the conductance can be written \beq G = G_0 \sum_n T_{n} \label{EQ:cond} . \eeq \section{Ideal control \label{SEC:ideal}} When the number of controller channels equals the number of output channels, i.e. when $M=N$, the matrices $S_\mathrm{II}$ and $S_\mathrm{III}$ are square. Assuming that the determinants of these two matrices are non-zero (see below), these matrices are {\em invertible} and it becomes possible to rearrange \eq{EQ:SFB1} for the control matrix as \beq K = \frac{1}{S_\mathrm{IV} + S_\mathrm{III} \rb{S_\mathrm{fb}-S_\mathrm{I}}^{-1}S_\mathrm{II}} \label{EQ:Kideal} . \eeq Thus, given an arbitrary plant matrix $S$, we can obtain any given target $S_\mathrm{fb}$ by choosing the control matrix as in \eq{EQ:Kideal}. If $S_\mathrm{fb}$ can be chosen arbitrarily, then so can the transmission eigenvalues $T_n$ and all desired conductance properties. The inversion of $S_\mathrm{fb}$ to obtain \eq{EQ:Kideal} requires that the inverses $S_\mathrm{II}^{-1}$, $S_\mathrm{III}^{-1}$ and $\rb{S_\mathrm{fb}-S_\mathrm{I}}^{-1}$ exist. Physically, the absence of these inverses corresponds to the case when one or more of the channels in leads A or B are completely decoupled from leads C and D. In this case, it is clear that these modes cannot be affected by the feedback loop and thus ideal control is not possible. The possibility of ideal control also exists for the series case. Provided that the inverses $t_S^{-1}$ and ${t_S'}^{-1}$ exist, equation set~(\ref{EQ:seriestr}) can be inverted to obtain the ideal control matrix $K$. As above, this requires that the number of channels in control and output spaces be equal, $M=N$. \section{Conductance optimisation of a chaotic quantum dot \label{SEC:QD}} When the dimension of the controller equals that of the output ($M=N$), ideal control means that we can shape the conductance properties of the system as we like. 
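Before turning to $M<N$, we note that \eq{EQ:Kideal} is readily verified numerically. In the following sketch (Python with numpy; the target matrix and all names are our own illustrative choices, assuming the generic invertibility conditions above hold) we pick as target the ballistic scatterer between leads A and B, construct the ideal controller, and confirm that closing the loop reproduces the target:

```python
import numpy as np

def haar_unitary(n, rng):
    """n x n Haar-random unitary via QR decomposition with phase correction."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(0)
N = M = 2
S = haar_unitary(4 * N, rng)                 # plant, channels ordered (A,B,C,D)
SI, SII = S[:2*N, :2*N], S[:2*N, 2*N:]
SIII, SIV = S[2*N:, :2*N], S[2*N:, 2*N:]

# target: ballistic transmission between A and B (r = r' = 0, t = t' = 1)
I = np.eye(N)
S_target = np.block([[0 * I, I], [I, 0 * I]])

# Eq. (EQ:Kideal): K = (S_IV + S_III (S_fb - S_I)^(-1) S_II)^(-1)
K = np.linalg.inv(SIV + SIII @ np.linalg.solve(S_target - SI, SII))

# closing the loop with this K recovers the target scattering matrix ...
S_fb = SI + SII @ np.linalg.solve(np.eye(2*M) - K @ SIV, K @ SIII)
assert np.allclose(S_fb, S_target)
# ... and the required controller is itself unitary, hence physical
assert np.allclose(K.conj().T @ K, np.eye(2*M))
```

With this target all transmission eigenvalues are $T_n = 1$, so \eq{EQ:cond} gives the ballistic conductance $G = N G_0$.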
In this section, we consider what happens for $M < N$. We focus on the example of the optimisation of the conductance of an open chaotic quantum dot \cite{Oberholzer2002} and look at both the feedback and series configurations. \subsection{Random matrix theory} We will use random matrix theory to describe the dot \cite{Baranger1994,Jalabert1994,Beenakker1997} and take its scattering matrix to be a $4N \times 4N$ random unitary matrix drawn from Dyson's circular ensemble. To study the effects of changing the size of the control space on a single system, we implement the control matrix, $K$, as a $2N \times 2N$ matrix consisting of a $2M \times 2M$ sub-matrix that represents the actual control operation, with the rest of the entries corresponding to simple reflections. Thus, for $M=0$, the scattering matrix of the dot consists of the four-lead random $S$ with leads C and D completely sealed off such that electrons are simply reflected back into the dot. This is the scattering matrix of the dot without control and, as such, will be used as the basis of the series calculation. For $1 \le M \le N$, a total of $N-M$ channels reflect back into the dot and the remaining $M$ channels are scattered by the control matrix. In this way we mimic the opening up of the dot to increase the number of channels that are affected by the controller. We then use the controller to optimise the conductance of the dot. Concentrating first on the feedback-loop geometry, we generate the random matrix $S$, and construct the feedback matrix $S_\mathrm{fb}$ based on an arbitrary $2M\times2M$ unitary control matrix $K$ parameterised as in \citer{Zyczkowski1994}. We then calculate the conductance of $S_\mathrm{fb}$ using \eq{EQ:cond} and numerically maximise its value over the choice of feedback controller $K$. This procedure is then repeated for the series geometry. 
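The optimisation just described can be sketched as follows (Python with numpy/scipy; for brevity this sketch optimises directly over a $2M\times 2M$ unitary parameterised as $e^{iH}$ with $H$ Hermitian, rather than using the parameterisation of \citer{Zyczkowski1994} or the reflection-padded $2N\times 2N$ construction; all names are ours):

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def haar_unitary(n, rng):
    """n x n Haar-random unitary via QR decomposition with phase correction."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def conductance(S, K, N, M):
    """G/G0 = Tr[t^dag t] of the closed loop; t is the B<-A block of S_fb."""
    SI, SII = S[:2*N, :2*N], S[:2*N, 2*N:]
    SIII, SIV = S[2*N:, :2*N], S[2*N:, 2*N:]
    Sfb = SI + SII @ np.linalg.solve(np.eye(2*M) - K @ SIV, K @ SIII)
    t = Sfb[N:, :N]
    return np.trace(t.conj().T @ t).real

def unitary(x, m):
    """Unitary exp(iH), with Hermitian H built from m*m real parameters x."""
    A = x.reshape(m, m)
    return expm(1j * ((A + A.T) + 1j * (A - A.T)))

rng = np.random.default_rng(1)
N, M = 2, 1                                  # controller smaller than output space
S = haar_unitary(2*N + 2*M, rng)             # chaotic-dot scattering matrix

cost = lambda x: -conductance(S, unitary(x, 2*M), N, M)
x0 = 0.1 * rng.standard_normal((2*M) ** 2)
res = minimize(cost, x0, method="Nelder-Mead", options={"maxiter": 5000})
G_opt = -res.fun
# the optimiser can only improve on the starting point, and G is bounded by N*G0
assert -cost(x0) - 1e-9 <= G_opt <= N + 1e-9
```

Since the closed loop is lossless even for $M<N$, the optimised conductance is automatically bounded by the ballistic value $N G_0$.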
\begin{figure}[t] \psfrag{MN}{$M/N$} \psfrag{GNG0}{$G/(NG_0)$} \begin{center} \includegraphics[width=\columnwidth,clip=true]{./6_pure_compare_aggregate.eps} \end{center} \caption{ Optimised conductance $G$ of a chaotic quantum dot under coherent control in both feedback (solid circles) and series (open squares) configurations as a function of the ratio of control to output dimension, $M/N$. The conductance shown is the average over 100 random scattering matrices (scaled by its maximum possible value $N G_0$) with controller $K$ chosen to maximise the conductance. The error bars indicate the standard deviation of the conductance distribution. Without control ($M=0$) the conductance takes a value of $G /(N G_0) \approx \frac{1}{2} $, in line with the random-matrix-theory prediction. When $M=N$, ideal control is possible for both series and feedback setups and the ballistic conductance $G/(N G_0) = 1$ is obtained. For $M$ increasing from $M=0$ we see a monotonic increase in the conductance for both series and feedback geometries. The feedback results, however, are clearly higher than those with series control across the entire range of $M$. The dotted line is a straight interpolation between start and end points: $G / (NG_0)= \frac{1}{2}(1+\frac{M}{N})$. \label{FIG:QDFBconductance} } \end{figure} \subsection{Results} \fig{FIG:QDFBconductance} shows the mean control-optimised conductance of 100 random $S$-matrices in both feedback and series geometries with $2\le N \le 8$ and $0\le M \le N$. The end points of this graph are easily understood. For $M=0$, there is no control and no optimisation. The average conductance is then very close to the random-matrix ensemble-average value \cite{Beenakker1997} \beq G &=& \frac{1}{2}G_0 N . 
\eeq The reflections used in constructing the $M=0$ scattering matrix therefore appear to give similar conductance properties to a random unitary, and this was confirmed further by examining the distribution of transmission eigenvalues for these matrices (not shown). At the other end of the graph, for $M=N$, ideal control is possible in both feedback and series cases, and the conductance can be maximised by setting all transmission probabilities $T_{n} = 1; ~ \forall n$. The conductance is then $G=NG_0$, which is the maximum possible for an $N$-channel conductor (the ballistic limit). We mention that our numerical optimisation reliably finds this maximum, regardless of the starting point for the $K$-optimisation. Between these points, the optimised conductance increases with controller size. Interestingly, with the conductance scaled by $NG_0$ and plotted as a function of the ratio $M/N$, the optimised-conductance results for different $N$ all appear to fall on or around a single curve for each of the two geometries. Moreover, as is clear from \fig{FIG:QDFBconductance}, in the ensemble average (and away from the known endpoints) the optimised conductance in the feedback case is always superior to that obtained from the series configuration. In fact, in the series case, the conductance drops off rapidly as $M$ moves away from $N$, whereas the drop off for the feedback geometry is far shallower. The maximum difference between series and feedback conductances is $\approx 0.25 N G_0$, occurring for a ratio $M/N \approx 0.63$. Since this is fully one half the difference between the uncontrolled and ballistic conductances, the advantage of the feedback geometry in this regime is considerable. \fig{FIG:QDFBconductance} also shows the standard deviation of the optimised-conductance distributions. In the regime $M/N \gtrsim 0.35$, we observe that the feedback and series distributions are clearly distinct from one another. 
This point is further reinforced by \fig{FIG:both_scatter}, which plots the optimised conductance in the feedback configuration against that from the series configuration for individual instances of the quantum-dot scattering matrix. We emphasise that, for a given $M$, both series and feedback calculations have the same number of free control parameters. Thus, for this system at least, the feedback geometry is far more effective at conductance optimisation than the series configuration. \begin{figure}[t] \psfrag{GFBG0}{$G_\mathrm{fb}/G_0$} \psfrag{GSRG0}{$G_\mathrm{sr}/G_0$} \begin{center} \includegraphics[width=\columnwidth,clip=true]{./7_both_scatter.eps} \end{center} \caption{ Direct comparison of the optimised conductance in series ($x$-axis) and feedback ($y$-axis) geometries for individual random matrices. The number of target channels was $N=4$ and results for $M=1,2,3$ are shown. The dotted line corresponds to $G_\mathrm{fb} = G_\mathrm{sr} $. For $M=1$ (blue diamonds) a few points lie below the dotted line and in these cases, the series configuration was found to be better than the feedback. In the vast majority of cases, however, the conductances with feedback exceed those of the series configuration. For $M=2,3$, we found $G_\mathrm{fb} > G_\mathrm{sr} $ in all cases considered, such that the feedback geometry offers a clear advantage. This advantage increases with increasing $M < N$. \label{FIG:both_scatter} } \end{figure} \section{Dephasing \label{SEC:deph}} We now study the effects of dephasing with a simple, classical dephasing model. We will assume that transport through the plant remains phase coherent and that the controller is the only source of dephasing. In a minimal model, we write the scattering matrix of the controller as $K \to e^{i \phi} K$ and allow the phase $\phi$ to fluctuate between $-\Delta/2$ and $\Delta/2$. The parameter $0\le \Delta \le \pi$ is therefore a measure of the strength of the dephasing. 
With this phase in place, the transmission block of the scattering matrix with feedback becomes $t_\mathrm{fb} \to t_\mathrm{fb}(\phi) $. The conductance in the presence of dephasing is then calculated as \beq G_\mathrm{deph.}(\Delta) = \frac{G_0}{\Delta} \int_{-\Delta /2}^{\Delta /2} d\phi ~ \mathrm{Tr} \left[ t^\dag_\mathrm{fb}(\phi) t_\mathrm{fb}(\phi) \right] . \eeq This integral can be carried out analytically by expanding the inverse in $S_\mathrm{fb}$ as a geometric series, but the resulting expression cannot be resummed. A similar calculation can also be carried out for the series configuration. Since the expressions so obtained are both lengthy and unenlightening, we do not reproduce them here. \begin{figure}[t] \psfrag{DP}{$\Delta / \pi$} \psfrag{GG0}{$ G_\mathrm{deph.}/G_0$} \begin{center} \includegraphics[width=\columnwidth,clip=true]{./8_dephasing_combine.eps} \end{center} \caption{ Optimised conductance $G_\mathrm{deph.}$ for the chaotic dot as a function of $\Delta$, which characterises the strength of the dephasing: $\Delta =0$ corresponds to no dephasing, and $\Delta = \pi$, to a complete randomisation of the phase associated with the controller. The two plots correspond to two instances of random matrix $S$. For each we plot the minimum and maximum conductance obtained with numerical optimisation of controller $K$ in both feedback and series configurations. We also show the conductance with no control (dashed line). The most significant point is that the maximum conductance in the feedback case drops far slower as a function of $\Delta$ than does the series case. Even in the completely-dephased limit, $\Delta = \pi$, the feedback geometry offers a degree of conductance gain. For these plots, the channel-numbers were $N = M = 2$. \label{FIG:dephasing} } \end{figure} \fig{FIG:dephasing} shows results of this calculation for both series and feedback configurations with $N = M = 2$. 
We show both the minimum and maximum conductance in the presence of dephasing for two particular instances of the random matrix $S$ (other instances gave very similar results). For $\Delta=0$, ideal control means that in both feedback and series cases, the maximum conductance is $G = N G_0$ and the minimum is $G=0$. As $\Delta$ increases, the minimum conductance in the series case remains zero, since $K$ can always be set to reflect all incident electrons. In contrast, the minimum in the feedback case increases away from zero with increasing $\Delta$. The maximum conductance drops as $\Delta$ increases in both cases. The drop, however, is precipitous in the series case and far more gradual in the feedback case. Also significant is that, for $\Delta = \pi$, when the phase of the controller is completely scrambled, the maximum series conductance is reduced to its value without control (the optimum $K$ in this case is simple transmission), whereas the feedback geometry still gives a significant increase in conductance over the value without control. This result can be explained as follows. Imagine an electron incident from the left in the series case. To increase transmission, paths reflected at $K$ must destructively interfere with those reflected at $S$. This can only occur when the system is phase coherent. On the other hand, the feedback geometry is able to increase conductance even when $S$ and $K$ are classical scatterers, since the feedback loop enables the transmission of electrons that would otherwise have been reflected back the way they came. In this sense, then, the series configuration is the more quantum-mechanical of the control strategies, as it relies exclusively on coherence to optimise the conduction. Conversely, as the feedback loop has an action that can be viewed as partially classical, it is more robust in the presence of dephasing. In both cases, however, ideal control requires perfect coherence.
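The behaviour of the dephasing average can be illustrated with a minimal numerical sketch. The model below is a toy, single-channel series setup, not the multichannel random-matrix calculation of the text: two lossless scatterers with hypothetical transmissions $T_1 = T_2 = 0.5$ are composed with a fluctuating link phase, and the transmission probability is averaged over $\phi \in [\phi_0 - \Delta/2, \phi_0 + \Delta/2]$ about the optimal working point $\phi_0$.

```python
import numpy as np

def t_series(phi, T1=0.5, T2=0.5):
    """Transmission amplitude of two lossless single-channel scatterers
    in series, with a tunable phase phi accumulated between them."""
    r1, t1 = 1j * np.sqrt(1.0 - T1), np.sqrt(T1)
    r2, t2 = 1j * np.sqrt(1.0 - T2), np.sqrt(T2)
    # standard composition of two scattering matrices joined by a phase link
    return t1 * t2 * np.exp(1j * phi) / (1.0 - r1 * r2 * np.exp(2j * phi))

def G_dephased(delta, phi0=0.5 * np.pi, n=4001):
    """Conductance G/G0 averaged over phi in [phi0-delta/2, phi0+delta/2]."""
    if delta == 0.0:
        return np.abs(t_series(phi0)) ** 2
    phi = phi0 + np.linspace(-delta / 2, delta / 2, n)
    return np.mean(np.abs(t_series(phi)) ** 2)

for d in (0.0, 0.5 * np.pi, np.pi):
    print(f"Delta/pi = {d/np.pi:.2f}:  G/G0 = {G_dephased(d):.3f}")
```

For $\Delta = \pi$ the average recovers the classical series composition $T_1 T_2/(1 - R_1 R_2) = 1/3$, echoing the observation above that a fully dephased series controller acts as a purely classical scatterer.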
\section{Discussion \label{SEC:disc}} In this paper we have introduced and studied the use of coherent control in quantum transport. We have considered the connection of a controller to a mesoscopic scatterer in both feedback-loop and series geometries. In both cases we have seen that if the number of controller channels is equal to the number of output channels ($M=N$), and the controller is otherwise unconstrained, then the output scattering matrix can be set at will. From the studies of conductance optimisation for chaotic quantum dots, two distinct advantages of the feedback geometry over the series geometry were manifest. Firstly, away from the ideal case with $M<N$, the feedback geometry was observed to give higher conductance. The difference between feedback and series results was substantial --- a difference of up to 50\% of the maximum possible improvement was observed. The second advantage concerns dephasing --- the gains in conductance made by the feedback control were seen to be far more robust against dephasing than those of the series control. Although further investigations are necessary, we speculate that the relative advantages described here are general features of the feedback geometry and will translate to other systems. It will also be interesting to see how these results compare with a more realistic treatment of the dephasing \cite{Seelig2001,Foerster2005}. Our focus here has been on the optimisation of the conductance of the mesoscopic device. The control schemes described here, however, can also be used to modify other transport properties, and in particular, the noise. Indeed, the suppression of current fluctuations was one of the first applications of measurement-based control in quantum transport \cite{Brandes2010}. We have not addressed this issue directly here because, by optimising the conductance, one automatically reduces the noise. Ideal control optimises the conductance by achieving a value of unity for all transmission probabilities, $T_n=1$.
With the zero-frequency shot noise given by \cite{Blanter2000} $ \mathcal{S}_\mathrm{noise} = {\textstyle \frac{2e^3 V}{h}} \sum_n T_n(1-T_n) $ , we see that optimising the conductance simply reduces the noise to zero. Away from ideal control ($M < N$), maximisation of the conductance still results in a concurrent reduction in the noise. Direct optimisation of the noise itself would bring further gains. More interesting still will be to see how coherent control can influence the full counting statistics \cite{Levitov1996}. One final way in which we envisage this study could be expanded is to consider a dynamic controller, and hence the role of frequency-dependence and time-delay in coherent feedback control in quantum transport. \acknowledgments{ JG wishes to thank EPSRC for funding under the grant EP/L006111/1, Quantum Stochastic Analysis For Nanophotonic Circuit Design. } \end{document}
\section{INTRODUCTION} \label{introduction} Infrared observations of star-forming regions show that the fraction of infrared excess, a signature of the existence of protoplanetary disks (PPDs), decreases with the age of the system, suggesting a finite lifetime of $3-6\,{\rm Myr}$ for PPDs in solar metallicity environments \citep{2001_Haisch,2007_Hernandez,2007_Meyer,2009_Mamajek,2010_Fedele,2014_Ribas}. Several physical mechanisms proposed theoretically predict disk dispersal times of a few million years, but the exact processes that drive the destruction or evaporation of a PPD remain poorly known. Interestingly, recent observations suggest that PPDs in sub-solar metallicity environments may have significantly shorter lifetimes of $\lesssim 1 \,{\rm Myr}$ \citep{2009_Yasui,2010_Yasui,2016_Yasui_I,2016_Yasui_II}. Photoevaporation, a physical process in which outflows are excited by irradiation from the central star, has been proposed as a promising disk dispersal mechanism \citep{1994_Hollenbach,2001_Clarke,2004_Alexander, 2004_Font,2009_Ercolano,2009_GortiHollenbach,2010_Owen}. Far-ultraviolet (FUV; $6{\rm \,eV} \lesssim h \nu \leq 13.6{\rm \,eV}$), extreme-ultraviolet (EUV; $13.6{\rm \,eV} \leq h\nu \lesssim 0.1\,{\rm keV}$), and X-rays ($0.1\,{\rm keV} \leq h\nu \leq 10\,{\rm keV}$) can cause photoevaporation through different physical processes. FUV radiation heats the disk gas by photoelectric heating and/or photo-pumping of \ce{H2} \citep{2017_Wang}, whereas photoionization heating by EUV and X-ray radiation can also drive evaporative flows from a PPD. EUV radiation is mainly absorbed by hydrogen atoms in the disk gas, and the absorption cross section per hydrogen atom, of the order of $\sim 10^{-17} \cm{2}$, is much larger than those for FUV and X-ray radiation, which are absorbed by dust grains and hydrogen/helium/heavy elements, respectively.
Therefore, FUV and X-ray radiation penetrate the deep interior of a disk with column densities of $\col{H} \sim 10^{21} \cm{-2}$, whereas EUV radiation effectively heats lower density regions close to the disk surfaces with column densities of $\col{H} \sim 10^{19} - 10^{20} \cm{-2} $. Analytic models and numerical simulations suggest that EUV-driven photoevaporative flows originating from low-density regions (disk surfaces) yield a mass-loss rate of $\sim 10^{-10} - 10^{-9} \, M_{\odot}\, {\rm yr}^{-1}$ \citep{1994_Hollenbach,2013_Tanaka}. This is smaller, by a factor of ten to several hundred, than FUV- and X-ray-driven photoevaporation rates \citep[][hereafter Paper I] {2009_GortiHollenbach,2009_Ercolano,2010_Owen,2018_Nakatani}. Recent studies on photoevaporation of a PPD by FUV and X-ray radiation show rather diverse results, and the relative importance of FUV and X-rays is under debate \citep{2009_GortiHollenbach, 2009_Ercolano, 2010_Owen, 2012_Owen, 2015_Gorti}. Interestingly, \cite{2010_ErcolanoClarke} show that X-ray photoevaporation is more efficient at lower metallicities owing to the reduced opacity. Unfortunately, these previous studies adopt different stellar models, disk models, and even different sets of chemistry, and thus one cannot compare the results directly. Furthermore, simplifying assumptions are often made, such as a hydrostatic disk structure and/or radiative equilibrium, which limits the realism of the calculations when considering the actual, dynamic evolution of a PPD. In order to examine the effect of FUV and X-ray radiation on PPD photoevaporation, it is necessary to perform hydrodynamics simulations with all of the above physical processes included self-consistently. Two recent studies, \cite{2017_Wang} and our Paper I, use hydrodynamics simulations with radiative transfer and non-equilibrium chemistry to follow the disk photoevaporation around a solar-type star.
Both studies conclude that FUV photons effectively drive photoevaporation, although there are a few differences regarding the most effective heating process. In Paper I, we investigate the metallicity dependence of UV photoevaporation rates. We conclude that the FUV-driven photoevaporation rate increases with decreasing metallicity for $10^{-0.5} ~\smetal \lesssim \metal \lesssim 10~\smetal$. We also find that photoelectric heating due to FUV becomes inefficient as metallicity decreases, compared with dust-gas collisional cooling. This reduced FUV heating lowers the temperatures of the neutral region. For $\metal \lesssim 10^{-1.5} \, \smetal$, the neutral region temperatures are too low for photoevaporative flows to be excited. Only EUV-driven photoevaporative flows contribute to the mass loss in this case, and thus the photoevaporation rates are smaller by about an order of magnitude than those with $\metal \gtrsim 10^{-1}\, \smetal$. It is worth mentioning that the simulations of \cite{2017_Wang}, which incorporate X-ray heating, show that the X-ray radiation itself is not the primary cause of photoevaporation. In the present study, we perform a suite of simulations of photoevaporating protoplanetary disks with various metallicities $10^{-3}\, \smetal \leq \metal \leq 10^{0.5}\, \smetal$. Our simulations incorporate X-ray heating and ionization coupled with our chemistry model of Paper I. We examine how X-ray radiation affects the disk photoevaporation rate, and determine the relative importance of FUV and X-rays in the process of photoevaporation. We also investigate the metallicity dependence of photoevaporation rates due to both UV and X-rays. The paper is organized as follows. In \secref{sec:method}, we present the methods of our simulations. In \secref{sec:result}, we discuss the simulation results. A final discussion and a summary are given in \secref{sec:discussion} and \secref{sec:summary}, respectively.
\section{Methods} \label{sec:method} We perform a suite of simulations of photoevaporating protoplanetary disks with various metallicities of $10^{-3}~\smetal \leq \metal \leq 10^{0.5}~\smetal $. We solve coupled equations of hydrodynamics, radiative transfer, and non-equilibrium chemistry. We largely follow the method of Paper I, except that we include X-ray radiation and add \ce{H2+} as a chemical species in the present study. In this section, we briefly summarize our model and refer the readers to Paper I for numerical methods. Details of the X-ray implementation are described in \appref{app:X-ray}. We take into account FUV, EUV, and X-ray irradiation from the central star. The central star is assumed to have the stellar parameters tabulated in \tref{tab:model}. Although the stellar properties may well depend on metallicity, we adopt the fixed parameters in all our simulations in order to make it easy to compare the results directly. The FUV and EUV luminosities and the SED are the same as in Paper I. We adopt the X-ray SED presented in \cite{2007_Nomura_II}, which is derived by fitting the observational {\it XMM-Newton} data for TW Hydrae using a two-temperature thin thermal plasma model \citep{1985_Mewe,1995_Liedahl}. In our model, we set the minimum and maximum energy of the SED to be $\Emin = 0.1 \,{\rm keV}$ and $\Emax = 10 \,{\rm keV}$, respectively\footnote{We refer to photons with $h\nu \geq 0.1 \,{\rm keV}$ as X-rays in the present study.}. The disk gas is composed of eight chemical species: H, \ce{H+}, \ce{H2}, \ce{H2+}, CO, O, \ce{C+}, and electrons. Note that we add \ce{H2+} in order to follow \ce{H2} ionization by X-rays. Hereafter, we refer to H, \ce{H+}, \ce{H2}, and \ce{H2+} as H-bearing species and CO, O, and \ce{C+} as metal species.
The amounts of the dust and heavy elements are determined by the ISM values for our solar metallicity disk, and assumed to be proportional to the ratio of the metallicity $\metal$ to the local interstellar metallicity $\smetal$. Thus, we use the dust to gas mass ratio $\dgratio = 0.01 \times \metal/\smetal$, and the gas-phase elemental abundances of carbon $ \abn{C} = 0.927\e{-4}~ \metal/\smetal$ and oxygen $\abn{O} = 3.568\e{-4}~ \metal /\smetal$ \citep{1994_Pollack,2000_Omukai}. The dust-to-gas mass ratio and the elemental abundances are the same as in Paper I. \begin{table}[htp] \caption{Properties of the model} \begin{center} \begin{tabular}{l r} \hline \hline {\bf Stellar parameters} & \\ Stellar mass & $ 0.5~M_\odot$ \\ Stellar radius & $ 2~R_\odot$ \\ FUV luminosity & $ 3\e{32}~\unit{erg}{}~\unit{s}{-1}$ \\ EUV photon number rate & $ 6\e{41}~\unit{s}{-1}$ \\ X-ray luminosity & $ 10^{30}~\unit{erg}{}~\unit{s}{-1}$\\ \hline {\bf Gas/dust properties} & \\ Species & \ce{H}, \ce{H+}, \ce{H2}, \ce{H2+}, CO, \ce{O}, \ce{C+}, \ce{e-} \\ Carbon abundance & $ 0.927\e{-4} \times Z/ Z_\odot$ \\ Oxygen abundance & $ 3.568\e{-4} \times Z/Z_\odot$ \\ Dust to gas mass ratio & $ 0.01 \times Z/ Z_\odot$ \\ \hline \end{tabular} \end{center} \label{tab:model} \end{table} The simulations are performed in 2D spherical polar coordinates $(r, ~\theta)$. The disk is assumed to be symmetric around the rotational axis $(\theta = 0)$ and to the mid-plane $(\theta = \pi /2)$. The time evolution of the gas density, velocity, energy, and chemical abundances are solved. Although the computational domain is defined in 2D, we solve the azimuthal velocity evolution as well as the poloidal velocity $\bm{v}_\text{p} = (v_r,~ v_\theta)$. In the energy equation, relevant heating/cooling sources are included (Paper I). For the chemical evolution, we take into account both the advection and chemical reactions. 
X-ray heating, X-ray ionization, and the associated chemical reactions involving \ce{H2+} are added to our chemistry model. We describe the implementation of these physical processes in \appref{app:X-ray}. The equation of state is given as in Paper I, but the ratio of specific heats $\gamma$ is calculated including the contribution of \ce{H2+}, \begin{equation} \gamma = 1 + \frac{\abn{\ce{H}} + \abn{\ce{H+}} + \abn{\ce{H2}} + \abn{\ce{H2+}} + \abn{e}} {\frac{3}{2} \abn{\ce{H}} + \frac{3}{2} \abn{\ce{H+}} + \frac{5}{2}\abn{\ce{H2}} + \frac{5}{2}\abn{\ce{H2+}} + \frac{3}{2} \abn{e}}. \label{eq:specific_heat} \end{equation} FUV, EUV, and X-ray radiative transfer is solved by ray-tracing, neglecting scattering. The diffuse EUV produced by recombination of hydrogen has been proposed as an important component in driving photoevaporation \citep{1994_Hollenbach}, but a recent study by \cite{2013_Tanaka} shows that the direct EUV is dominant over the diffuse component. \cite{1994_Hollenbach} uses a disk model that is infinitesimally thin outside the gravitational radius and has a finite scale height inside it. \cite{2013_Tanaka} uses a model in which the disk scale height is finite at all radii. The difference in conclusions between these studies may partly derive from the difference in the adopted disk models. This implies that the direct EUV can be dominant in the outer region unless there is a geometrically thick region that completely attenuates the direct EUV, as in \cite{1994_Hollenbach}. We note that \cite{2017_Hollenbach} discusses the causes of the differences in conclusions between these two studies. Our disk model does not have such a geometrically thick region. Therefore, we neglect the diffuse EUV component in our model. The FUV and EUV radiative transfer is done as in Paper I, whereas the X-ray radiative transfer is described in \appref{app:X-ray}.
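As a quick numerical check of the expression for $\gamma$ above, the sketch below evaluates it for a few limiting compositions; the abundance values are illustrative, normalised per hydrogen nucleus.

```python
def gamma_eos(y_H=0.0, y_Hp=0.0, y_H2=0.0, y_H2p=0.0, y_e=0.0):
    """Ratio of specific heats for the H / H+ / H2 / H2+ / e- mixture,
    following the expression for gamma in the text."""
    monatomic = y_H + y_Hp + y_e   # (3/2) k_B of internal energy per particle
    diatomic = y_H2 + y_H2p        # (5/2) k_B of internal energy per particle
    return 1.0 + (monatomic + diatomic) / (1.5 * monatomic + 2.5 * diatomic)

print(gamma_eos(y_H=1.0))            # fully atomic gas: 5/3
print(gamma_eos(y_H2=0.5))           # fully molecular gas: 7/5
print(gamma_eos(y_Hp=1.0, y_e=1.0))  # fully ionized gas: 5/3
```

The familiar monatomic and diatomic limits ($5/3$ and $7/5$) are recovered, and any mixture interpolates between them.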
The dust temperature is calculated by following radiative transfer for the direct stellar irradiation component and the diffusive dust (re-)emission component, using a hybrid scheme of \cite{2010_Kuiper}. We set the computational domain to $r = [1, ~400] \,{\rm au}$ and $\theta = [0, ~\pi/2] {\rm ~ rad}$. Since the gravitational radius for a $0.5\, M_\odot$ central star is $\sim 0.7\,{\rm au}$ for $10^4{\rm \, K}$ ionized gas \citep{2003_Liffman}, setting the inner boundary at $1\,{\rm au} $ may result in an underestimate of the photoevaporative mass-loss rate. However, the contribution from within $ 1\,{\rm au}$ is only a small fraction of the total mass-loss rate.\footnote{In Paper I, we ran simulations with smaller inner boundaries of $0.1\,{\rm au}$, $0.35\,{\rm au}$, and $0.5\,{\rm au}$. We found that the density of the ionized atmosphere is too small to shield EUV photons and the EUV photons actually reach the outer region. We thus justify our setting of the inner boundary at $1\,{\rm au}$.} Therefore, we set the inner boundary at $r = 1 \,{\rm au}$ in our simulations. \cite{2017_Hollenbach} show that using a finite-size sink neglects the attenuation of the direct EUV inside it. This effect may reduce the direct EUV reaching the outer disk and yield a somewhat smaller EUV photoevaporation rate. Ideally, it would be better to define a computational domain which extends down to the stellar surface, but this is beyond the capabilities of currently available numerical methods. We do not impose detailed boundary conditions at the stellar surface, but note here that gas accreting onto the star might shield the high-energy photons \citep{2018_Takasao}. We choose a sufficiently large domain to avoid spurious reflection of sound waves and gas flows. In Paper I, we show that a sufficiently large outer boundary can eliminate this spurious effect.
We run a set of simulations where all of the photoionization heating by EUV (hereafter, EUV heating), photoelectric heating by FUV (hereafter, FUV heating), and X-ray heating are taken into account. In order to isolate the effects of X-ray heating, we also run simulations without FUV heating. The resulting photoevaporation rates are compared with the results of Paper I, where X-rays are not included. Hereafter, we label the sets of our simulations according to which (or the combination) of FUV, EUV, and X-ray heating is taken into account. A simulation is labeled as ``Run YYY'', where ``YYY'' specifies which of the photo-heating sources are included. For example, in ``Run FEX'', all the photo-heating effects are included. If only EUV heating is taken into account, the simulation is referred to as ``Run E''. In addition, when we refer to a simulation with metallicity $\metal = 10^C \, \smetal$, we append ``/Z\,$C$'' to the above labels. For instance, when we refer to the simulation with $\metal = 10^{-0.5}\, \smetal$ in which FUV and EUV heating are taken into account, we refer to the simulation as ``Run FE/Z\,-0.5''. \section{RESULTS} \label{sec:result} In this section, we study the photoevaporation rate and the structure of PPDs. We first present the results of the solar metallicity case in \secref{sec:result1}. We then discuss the metallicity dependence in \secref{sec:result2} and \secref{sec:result3}. \subsection{Solar Metallicity Disks} \label{sec:result1} \begin{figure}[htbp] \begin{center} \includegraphics[width=\linewidth, clip]{figs/evaporations.eps} \caption{ Solar-metallicity disk structures in Run FEX/Z0 (top), Run FE/Z0 (middle), and Run EX/Z0 (bottom). The left panels show the chemical structures and the right panels show the density structures. The arrows indicate the velocity fields with poloidal velocities $\vp > 1 {\rm \,km\,s^{-1}}$.
The dotted lines are the density contours of $\nh = 10^5\cm{-3}$ (red), $\nh = 10^6\cm{-3}$ (black), $\nh = 10^7\cm{-3}$ (blue), and $\nh = 10^8\cm{-3}$ (purple). The pink dashed lines indicate the sonic surface. } \label{fig:evaporations} \end{center} \end{figure} \fref{fig:evaporations} shows that photoevaporative flows are excited in all cases. Dense neutral photoevaporative flows composed of atomic and molecular hydrogen are driven in Run FEX/Z0 and Run FE/Z0, but not in Run EX/Z0. Ionized photoevaporative flows, which consist of ionized hydrogen, are driven in all three simulations. FUV heating is an important process for driving the neutral flows. X-ray heating is ineffective at driving photoevaporation in Run EX/Z0, where EUV-driven flows are excited only from the low-density disk surface regions. Following Paper I, we measure the photoevaporation rate $\mdotph$ by integrating the mass flux normal to a spherical surface $S$ \begin{equation} \dot{M}_{\rm ph} = \int_{S} d\bm{S} \cdot \rho \bm{v} =r_S^2 \int_{S} d\theta d\phi \sin \theta \rho v_r , \label{eq:pratesim} \end{equation} where $d\bm{S}$ is the surface element vector and $r_S$ is the radius of the sphere. A gas parcel is regarded as unbound, and its contribution to $\mdotph$ is counted, only if the total specific enthalpy of the gas \begin{equation} \eta = \frac{1}{2} \bm{v}^2 + \frac{\gamma}{\gamma -1 } c_s^2 - \frac{GM_*}{r} , \label{eq:eta} \end{equation} is positive at the surface \citep{2003_Liffman}. With this condition, contributions from the bound disk region $(\eta < 0)$ to $\mdotph$ are effectively excluded. In Section 3.3 of Paper I, we show that photoevaporation rates measured as in \eqnref{eq:pratesim} generally increase with $r_S$.
This is because the temperatures in the launching regions of the photoevaporative flows, the so-called base temperatures, are generally higher than necessary for the gas to escape the gravitational potential in the region beyond the gravitational radius; there are therefore additional contributions to the mass loss from the outer region. Thus, photoevaporation rates should be estimated with several values of $r_S$ to capture these contributions. We calculate $\mdotph$ by setting $r_S = 20\,{\rm au},~100\,{\rm au}, ~200\,{\rm au}$ for each of Run FEX, Run FE, and Run EX. For the solar metallicity disk, we also perform Run X/Z0 to examine explicitly the contribution of X-ray-driven flows to $\mdotph$. The photoevaporation rates with $r_S= 20\,{\rm au}$ include only the contribution from the inner part, while those with $r_S= 100\,{\rm au}$ and $r_S=200\,{\rm au}$ include contributions from the whole disk. Generally, in our simulations, photoevaporation rates vary in time for the first $\sim 5000$ years, but converge afterward. We take the time-averaged photoevaporation rate over $5000$ years. The resulting photoevaporation rates for Run FEX/Z0, Run FE/Z0, Run EX/Z0, and Run X/Z0 are given in \tref{tab:prate} for each case of $r_S = 20\,{\rm au}$, $r_S = 100\,{\rm au}$, and $r_S = 200\,{\rm au}$, and are also plotted in \fref{fig:prate_z}. \begin{table}[htp] \caption{Resulting photoevaporation rates $\mdotph$ of the solar metallicity disk in $\, M_{\odot}\, {\rm yr}^{-1}$} \begin{center} \begin{tabular}{c | c c c c} $r_S$ & Run FEX/Z0 & Run FE/Z0 & Run EX/Z0 & Run X/Z0 \\ \hline\hline $20\,{\rm au}$ & $6\e{-9}$ & $3\e{-9}$ & $2\e{-10}$ &$5\e{-12}$\\ $100\,{\rm au}$ & $2 \e{-8}$ & $2\e{-8}$ & $1\e{-9} $ & $1\e{-11}$\\ $200\,{\rm au}$ &$3 \e{-8}$ & $3\e{-8}$ & $2\e{-9}$ &$2\e{-11}$\\ \hline \end{tabular} \end{center} \label{tab:prate} \end{table} We note that $\mdotph$ in Run FEX/Z0 is slightly higher than in Run FE/Z0.
Also, $\mdotph$ in Run EX/Z0 is about an order of magnitude smaller than in Run FEX/Z0 and in Run FE/Z0. We find a very small $\mdotph$ in Run X/Z0. Overall, these results suggest that FUV is a crucial radiation component for producing a high $\mdotph$, whereas X-rays give a minor contribution to $\mdotph$, although X-rays affect the structure of the neutral flows (see Run FEX/Z0 and Run FE/Z0 in \fref{fig:evaporations}). \begin{figure*}[htbp] \begin{center} \includegraphics[clip, width = \linewidth]{figs/Theatcool_CHI_PLUTO_ver170907171005Z-0_400AUdatacool_n04000r100_Rolf_PLUTOPhotoEvaporation161006Z-0datacool_n04000r98_CHI_PLUTO_ver170907171005Z-0_FUVoff400AUdatacool_n04000r100.eps} \caption{Meridional distributions of the physical quantities at $r = 100\,{\rm au}$: gas and dust temperatures (top), specific heating/cooling rates (middle), and chemical abundances of H, \ce{H+}, \ce{H2}, and electrons (bottom). In the middle row, the solid lines show the heating rates of EUV (red), FUV (orange), X-ray (black), and dust-gas collision (purple), while the dashed lines show the cooling rates of dust-gas collision (purple), OI (cyan), \ce{H2} (blue), CO (green), and adiabatic cooling (gray). Each column shows the physical quantities of Run FEX/Z0, Run FE/Z0, and Run EX/Z0 for solar metallicity disks, from left to right. } \label{fig:theatcol} \end{center} \end{figure*} The X-ray heating rate in Run EX/Z0 is smaller than the FUV heating rate in Run FEX/Z0 and in Run FE/Z0 (see the second row of \fref{fig:theatcol}). The neutral regions in the disk are not heated by X-rays to temperatures sufficiently high for the gas to escape from the star-disk system. The gas temperatures are nearly coupled with the dust temperatures in this region (the top right panel of \fref{fig:theatcol}). We conclude that X-ray heating itself is not efficient enough to excite photoevaporative flows. Only the EUV-driven, ionized flows contribute to the mass loss in Run EX/Z0.
\fref{fig:evaporations} shows that the ionized flows have densities typically several orders of magnitude smaller than the neutral flows. The resulting $\mdotph$ of Run EX/Z0 is much smaller than those of Run FEX/Z0 or Run FE/Z0, where FUV-driven neutral flows give a large contribution to $\mdotph$. FUV heating is dominant in the neutral regions in Run FEX/Z0 and Run FE/Z0 (\fref{fig:theatcol}). By studying these runs in detail, we find that the FUV heating rate is higher in Run FEX/Z0 than in Run FE/Z0, especially in the regions close to the ionization front. There, the gas is weakly ionized by X-rays. With the electron abundance slightly increased, dust grains recombine more readily, their positive charges are reduced, and the photoelectric heating efficiency increases \citep{2009_GortiHollenbach}. To summarize, X-ray ionization effectively strengthens FUV heating in the neutral regions of a disk by increasing the photoelectric effect efficiency. Owing to the strengthened FUV heating, the temperatures of the neutral regions are higher in Run FEX/Z0 than in Run FE/Z0. With the combined FUV+X-ray heating effect, the neutral gas near the central star evaporates in Run FEX/Z0. We have checked and found that there is a large difference in $\mdotph$ measured with $r_S = 20\,{\rm au}$ between Run FEX/Z0 and Run FE/Z0 (\tref{tab:prate}). However, in these runs, the photoevaporative flows excited in the inner regions account for only a small fraction of the total mass-loss rate, and the dominant contribution comes from the outer disk regions. Therefore, there is only a small difference in $\mdotph$ between Run FEX/Z0 and Run FE/Z0 when measured with $r_S = 100\,{\rm au}$ or $r_S = 200\,{\rm au}$.
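The measurement of $\mdotph$ via the surface integral \eqnref{eq:pratesim} with the unbound condition $\eta > 0$ can be sketched numerically. The wind profile below is entirely hypothetical, chosen only so that fast, warm gas near the pole is unbound while the slow, cold gas near the midplane is excluded; it is not taken from the simulations.

```python
import numpy as np

# physical constants and parameters in cgs
G_GRAV = 6.674e-8
M_SUN = 1.989e33
AU = 1.496e13
YR = 3.156e7
M_STAR = 0.5 * M_SUN     # stellar mass used in the paper
R_S = 100.0 * AU         # radius of the measuring sphere
GAMMA = 5.0 / 3.0

theta = np.linspace(0.0, 0.5 * np.pi, 400)          # pole to midplane
# hypothetical wind profile: dilute fast warm flow near the pole,
# dense slow cold gas near the midplane
rho = 1e-20 * np.exp(4.0 * theta / (0.5 * np.pi))   # g cm^-3
v_r = 8e5 * np.cos(theta)                           # cm s^-1
c_s = 3e4 + 2.7e5 * np.cos(theta)                   # cm s^-1

# unbound condition: total specific enthalpy eta > 0 (Eq. for eta)
eta = 0.5 * v_r**2 + GAMMA / (GAMMA - 1.0) * c_s**2 - G_GRAV * M_STAR / R_S
flux = np.where(eta > 0.0, rho * v_r, 0.0) * np.sin(theta)

# 2*pi from axisymmetry, an extra factor 2 from midplane mirror symmetry
dtheta = theta[1] - theta[0]
mdot = 2.0 * 2.0 * np.pi * R_S**2 * np.sum(flux) * dtheta   # g s^-1
print(f"Mdot ~ {mdot * YR / M_SUN:.1e} Msun/yr")
```

With these made-up numbers the bound gas near the midplane ($\eta < 0$) is masked out of the integral, and the rate comes out at a plausible order of magnitude for this kind of wind.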
\subsection{FUV Heating in Low Metallicity Disks} \label{sec:result2} \begin{figure}[htbp] \centering \includegraphics[clip, width = \linewidth]{figs/prateCHIPLUTOver170907171005400AUtimeaveragedprr101AUoutCHIPLUTOver170529170714400AUtimeaveragedprr101AUoutCHIPLUTOver170907171005FUVoff400AUtimeaveragedprr101AUtmin1000out.eps} \includegraphics[clip, width = \linewidth]{figs/prateCHIPLUTOver170907171005400AUtimeaveragedprr203AUoutCHIPLUTOver170529170714400AUtimeaveragedprr203AUoutCHIPLUTOver170907171005FUVoff400AUtimeaveragedprr203AUtmin1000out.eps} \caption{ Each panel shows the difference in the metallicity dependences of the photoevaporation rates for Run FEX (blue), Run FE (orange), and EX (green). The purple regions show approximate EUV-driven flow contributions ($\dot{M}_\text{EUV} \simeq 0.4 - 1 \e{-9} \, M_{\odot}\, {\rm yr}^{-1}$), separating the EUV photoevaporation rates from the total photoevaporation rates. The photoevaporation rates are calculated with $r_S = 100\,{\rm au}$ and $r_S = 200\,{\rm au}$ in the top and bottom panels, respectively. } \label{fig:prate_z} \end{figure} FUV radiation can reach the deeper interior of a disk with a lower metallicity because the amount of dust grains and hence its opacity are correspondingly smaller. This results in exciting photoevaporative flows with higher densities. On the other hand, the FUV heating becomes progressively inefficient at low metallicities compared to dust-gas collisional cooling. The relatively inefficient FUV heating lowers the base temperatures, and FUV-driven flows are not excited in the ``cool'' disk. Consequently, $\mdotph$ of Run FE increases in the range of $10^{-1}\, \smetal \lesssim \metal \lesssim 10^{0.5}\, \smetal $ but decreases in the range of $10^{-2}\, \smetal \lesssim \metal \lesssim 10^{-1}\, \smetal $ (see \fref{fig:prate_z}). Note that a major contribution to the mass loss rate comes from FUV-driven photoevaporative flows in runs with $\metal \gtrsim 10^{-2} ~\smetal$. 
EUV-driven flows are important in runs with $\metal \lesssim 10^{-2} ~\smetal$, where the metallicity dependence is small. Although including X-rays affects the metallicity dependence of $\mdotph$ (compare Run FEX and Run FE in \fref{fig:prate_z}), the overall trend appears quite similar; $\mdotph$ of Run FEX increases as metallicity decreases for $\metal \gtrsim 10^{-1.5}\, \smetal$ because of the reduced opacity, while $\mdotph$ decreases with decreasing metallicity for $\metal \lesssim 10^{-1.5}\, \smetal$ because FUV photoelectric heating becomes less effective than dust-gas collisional cooling. \subsection{The Effect of X-ray Ionization} \label{sec:result3} There is a significant difference between Run FEX and Run FE at very low metallicities of $\metal \lesssim 10^{-1.5}\, \smetal$. In Run FEX, X-ray ionization raises the electron abundance in the neutral region of the disk, and effectively {\it strengthens} the FUV heating. The gas temperature in the neutral region is higher than in Run FE. This causes the neutral gas to evaporate from regions even closer to the star. However, for metallicities of $\metal \gtrsim 10^{-1.5}\, \smetal$, the contribution from the inner region is still small with respect to the total photoevaporation rate (\fref{fig:prate_z}). This is similar to the solar metallicity case discussed in \secref{sec:result1}. In other words, FUV heating can drive neutral photoevaporative flows even without the strengthening effect of X-rays in this metallicity range; the photoevaporation rates of Run FEX and Run FE are close to each other. This is in marked contrast to the runs with $ 10^{-2.5}\,\smetal \lesssim \metal \lesssim 10^{-1.5}\,\smetal$. In the range of $\metal \lesssim 10^{-1.5} \, \smetal$, $\mdotph$ of Run FEX decreases with decreasing metallicity because of the efficient dust-gas collisional cooling relative to FUV heating, but we find a more gradual decline of $\mdotph$ than in Run FE.
In Run FEX, the electron abundance in the neutral region is increased by the effect of X-rays. Hydrogen is the dominant X-ray absorber, and thus the electron abundance in the neutral region is essentially independent of metallicity (\fref{fig:eabun}). \begin{figure}[htbp] \begin{center} \includegraphics[clip, width = \linewidth]{figs/e_vs_z.eps} \caption{Meridional distributions of electron abundance at $r = 100\,{\rm au}$ for various metallicity disks. All of the data are taken from Run FEX. Note that, unlike in \fref{fig:theatcol}, we here use the hydrogen nucleus column density $\col{H}$ instead of $\theta$. } \label{fig:eabun} \end{center} \end{figure} Since the photoelectric effect efficiency depends on electron density only through the ratio of the dust/PAH photoionization rate to the dust/PAH recombination rate $\gamma_\text{pe} = G_\text{FUV} \sqrt{T} / \nspe{e}$ (see Eq.~[41] of Paper I for details), it is not affected by metallicity, at least explicitly, when the electron abundance is largely set by hydrogen ionization. As a result, FUV heating remains more effective at low metallicities ($ \metal \lesssim 10^{-1.5}\,\smetal$) in Run FEX than in Run FE, and thus $\mdotph$ in Run FEX decreases more gradually. We find a large difference in $\mdotph$ between Run FEX and Run FE, especially in the range $ 10^{-2.5}\,\smetal \lesssim \metal \lesssim 10^{-1.5}\,\smetal$. This can be attributed to the effect of the X-ray radiation through partial ionization. For example, in the disk with $\metal = 10^{-2} \, \smetal$, the electron abundance in Run FEX/Z-2 is about two orders of magnitude larger than in Run FE/Z-2 in the low-density part ($\nh \sim 10^5 - 10^6 \cm{-3}$) at around $100\,{\rm au}$. Correspondingly, the ratio of the dust photoionization rate to the dust recombination rate $\gamma_\text{pe} = G_\text{FUV} \sqrt{T}/n_\text{e}$ (cf. Appendix A of Paper I) is small, with a typical value of $\sim 10^3 (n_\text{e} / 100\cm{-3})^{-1}$ in the low-density region.
This is about two orders of magnitude smaller than in Run FE/Z-2. Therefore, the photoelectric effect efficiency \begin{eqnarray} \epsilon_{\rm pe} & = & \frac{4.87\e{-2}}{1+4\e{-3} ~ \gamma_{\rm pe}^{0.73}} \nonumber \\ & & + \frac{3.65\e{-2}(T/10^4~{\rm K})^{0.7}}{1+2\e{-4} ~ \gamma_{\rm pe}} , \end{eqnarray} is larger by about an order of magnitude \citep{1994_BakesTielens}. The temperature is increased by a factor of a few, so that the gas satisfies the enthalpy condition $\eta > 0$. \begin{figure*}[htbp] \begin{center} \includegraphics[clip, width = \linewidth]{figs/eta.eps} \caption{Snapshots of FEX/Z-2 (left) and FE/Z-2 (right). The top panels show the density distributions, and the bottom panels show the distributions of $\eta$ (\eqnref{eq:eta}), where the red and blue regions indicate $\eta > 0$ (unbound) and $\eta < 0$ (bound) regions, respectively. The arrows show the velocity where $ 0.25\, {\rm \,km\,s^{-1}} < \vp < 0.5 \, {\rm \,km\,s^{-1}}$, and they are scaled by its magnitude. The velocity field in the \ion{H}{2}~regions $(\abn{\text{H{\cal II}}} > 0.5)$ is not shown for clarity. The solid lines show density contours of $\nh = 10^5 \cm{-3}$ (cyan), $\nh = 10^6 \cm{-3}$ (yellow), $\nh = 10^7 \cm{-3}$ (black), and $\nh = 10^8 \cm{-3}$ (red). Note that the velocity arrows and density contours are drawn differently from \fref{fig:evaporations}. The white dot-dashed lines show contours of $\abn{\text{H{\cal II}}} = 0.5$, which is a rough boundary between the \ion{H}{2}~region and the neutral region. The pink dashed lines indicate the sonic surface. } \label{fig:eta} \end{center} \end{figure*} X-rays also affect other regions of the disk in a similar manner; the total specific enthalpy of the gas is raised to positive values. \fref{fig:eta} shows that the neutral region of Run FEX/Z-2 partly satisfies $\eta > 0$, whereas the region with $\eta > 0 $ appears to overlap with the \ion{H}{2}~region in Run FE/Z-2.
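The photoelectric efficiency formula above can be evaluated directly. The following Python sketch uses an illustrative neutral-gas temperature of $10^3\,{\rm K}$ and the typical $\gamma_{\rm pe}$ values quoted in the text (these inputs are illustrative assumptions, not outputs of the simulations); it shows that lowering $\gamma_{\rm pe}$ by two orders of magnitude raises $\epsilon_{\rm pe}$ by roughly an order of magnitude:

```python
def epsilon_pe(gamma_pe, T):
    """Photoelectric heating efficiency of Bakes & Tielens (1994),
    with gamma_pe = G_FUV * sqrt(T) / n_e and T in kelvin."""
    term1 = 4.87e-2 / (1.0 + 4e-3 * gamma_pe**0.73)
    term2 = 3.65e-2 * (T / 1e4)**0.7 / (1.0 + 2e-4 * gamma_pe)
    return term1 + term2

T = 1.0e3                        # illustrative neutral-gas temperature [K]
eff_fex = epsilon_pe(1.0e3, T)   # typical gamma_pe quoted for Run FEX/Z-2
eff_fe = epsilon_pe(1.0e5, T)    # ~two orders of magnitude larger (Run FE/Z-2)
print(eff_fex / eff_fe)          # ~12: about an order of magnitude
```

The ratio is insensitive to the exact temperature chosen, since the temperature enters only through the second, subdominant term at these $\gamma_{\rm pe}$ values.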
In runs with $\metal = 10^{-2} \, \smetal $, incorporating X-ray ionization drives neutral photoevaporative flows, which significantly contribute to the mass loss rate. Without X-rays, however, the neutral gas flows are not excited, and only EUV-driven ionized gas flows contribute to the mass loss. Consequently, Run FEX/Z-2 shows a significantly larger $\mdotph$ than Run FE/Z-2. The same conclusion holds for Run FEX and Run FE with metallicities in the range of $ 10^{-2.5}\,\smetal \lesssim \metal \lesssim 10^{-1.5}\,\smetal$. At very low metallicities of $\metal \lesssim 10^{-3}\, \smetal$, even though FUV heating is strengthened by the X-ray ionization effect, neutral flows are not excited because dust-gas collisional cooling becomes more efficient than FUV heating. Therefore, there is not a significant difference between the photoevaporation rates of Run FEX/Z-3 and Run FE/Z-3. \subsection{PPD Lifetime} Regarding the metallicity dependence of PPD lifetimes, observations suggest that typical lifetimes of protoplanetary disks are $3\,{\rm Myr}$ for solar metallicity disks and $1 \,{\rm Myr}$ for those with $\metal = 0.2 \,\smetal$ \citep{2009_Yasui, 2010_Yasui, 2016_Yasui_I, 2016_Yasui_II}. This metallicity dependence of the lifetimes can be fit as $T_\text{life} \propto \metal ^ {0.7}$. In the present study, the resulting photoevaporation rate of Run FEX has a metallicity dependence of $\mdotph \propto \metal ^{-0.6}$ for $r_S = 200\,{\rm au}$, in $0.1\, \smetal \leq \metal \leq 10^{0.5}\, \smetal$, while in Run FE $\mdotph \propto \metal ^ {-0.4}$. These metallicity dependences are consistent with the observational metallicity dependence of the lifetimes because disk lifetimes are approximately calculated as $T_\text{life} \propto \mdotph^{-2/3}$ \citep{2010_ErcolanoClarke}. Since X-ray radiation itself does not excite photoevaporative flows in a direct manner, the photoevaporation rate of Run EX is dominated by the EUV-driven flows.
Therefore, $\mdotph$ is generally metallicity-independent and is significantly smaller than in Run FEX or Run FE, where FUV-driven flows contribute to the mass loss. This suggests that, when EUV heating is the main driver of photoevaporation, EUV and X-ray radiation do not cause a metallicity dependence in PPD lifetimes. Hence, if the metallicity dependence of the lifetimes originates from the metallicity dependence of photoevaporation, our model indicates that FUV photoevaporation has a major effect on the disk lifetimes. \section{DISCUSSION} \label{sec:discussion} Our conclusion regarding the effects of X-ray radiation is qualitatively consistent with that of \cite{2009_GortiHollenbach} (hereafter, GH09), who conclude that X-ray photoionization increases the efficiency of photoelectric heating and enhances the FUV photoevaporation rate. Although X-ray heating has been proposed as an important cause of photoevaporation in several studies \citep{2008_Ercolano,2009_Ercolano,2012_Owen}, our direct comparison shows that X-rays alone do not drive strong photoevaporation, in agreement with the conclusions of \cite{2004_Alexander}, \cite{2009_GortiHollenbach}, and \cite{2017_Wang}. In the following, we discuss a few aspects of our X-ray radiation model. \subsection{X-Ray Spectral Hardness} \label{sec:speff} The hardness of the adopted X-ray spectrum affects the strength of X-ray photoevaporation. Table 1 of \cite{2009_Ercolano} shows that the photoevaporation rate decreases with the ``pre-screening'' column, i.e., the hardness of the incident flux on a disk. With a pre-screening column of $\col{H} \geq 10^{21} \cm{-2}$, there are virtually no photons with $\lesssim 0.1\,{\rm keV}$ reaching the interior of a disk (see Figure 3 of Ercolano et al. 2009).
In this case, the resulting photoevaporation rate is of the order of $10^{-11} \, M_{\odot}\, {\rm yr}^{-1}$, and is smaller by two orders of magnitude than in the case with a pre-screening column of $\col{H} \leq 10^{20} \cm{-2}$, where the EUV component ($\leq 0.1 \,{\rm keV}$) also heats the disk. The result suggests that using a hard X-ray spectrum results in {\it inefficient} X-ray photoevaporation, as has also been pointed out by \cite{2015_Gorti}. The X-ray spectrum used in the present study is similar to that with a pre-screening column of $\col{H} \sim 10^{21} \cm{-2}$, for which the X-ray photoevaporation rate is small $(\sim 10^{-11} \, M_{\odot}\, {\rm yr}^{-1})$ \citep{2009_Ercolano}. In order to examine whether using a softer spectrum affects the resulting X-ray photoevaporation rate, we additionally perform test simulations where our fiducial X-ray spectrum is shifted to lower energies. We shift the fiducial X-ray spectrum $F(E)$ as $F(E\times \sqrt{10})$ and $F(E\times 10)$ while fixing the total luminosity at $10^{30}\,\unit{erg}{}\,\unit{s}{-1}$. \begin{figure}[htbp] \begin{center} \includegraphics[clip, width = \linewidth]{figs/SED_comparison.eps} \caption{Fiducial SED (blue) and logarithmically shifted SEDs (red and green) for X-rays. The shifted SEDs are given as $F(E\times \sqrt{10})$ (red) and $F(E\times 10)$ (green), where $F(E)$ is the fiducial SED function. } \label{fig:SEDs} \end{center} \end{figure} The shifted spectra are shown by the red and green lines in \fref{fig:SEDs}. Hereafter, we refer to the shifted spectrum colored in green as the soft spectrum, and to the one colored in red as the intermediate spectrum. The photo-heating rates are calculated by \eqnref{eq:X-rayheat} as in our fiducial model. For the heating efficiency, we use the same $f_h$ given by \eqnref{eq:fh}.
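The shift-and-renormalize construction of the test spectra can be sketched as follows. This is a minimal stand-in that assumes a toy power-law SED rather than the actual fiducial TW Hya spectrum; only the procedure (shift $F(E) \to F(E\times s)$, then renormalize to a fixed total luminosity) follows the text:

```python
import numpy as np

def shift_sed(E, F, s, L_target):
    """Shift a tabulated SED, F(E) -> F(E*s), moving the spectrum to
    lower energies for s > 1, then renormalize so the integrated
    luminosity equals L_target (10^30 erg/s in the text)."""
    F_shifted = np.interp(E * s, E, F, left=0.0, right=0.0)
    # Trapezoidal integral of F dE over the tabulated grid
    L = float(np.sum(0.5 * (F_shifted[1:] + F_shifted[:-1]) * np.diff(E)))
    return F_shifted * (L_target / L)

# Toy stand-in SED: a power law between 0.1 and 10 keV (arbitrary norm)
E = np.logspace(-3, 1, 2000)                        # photon energy [keV]
F = np.where((E > 0.1) & (E < 10.0), E**-1.5, 0.0)  # [erg/s/keV]
F_int = shift_sed(E, F, np.sqrt(10.0), 1e30)   # "intermediate" spectrum
F_soft = shift_sed(E, F, 10.0, 1e30)           # "soft" spectrum
```

The shifted spectra peak at energies lower by the shift factor while the total luminosity is held fixed by construction.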
Although $f_h$ might be larger for the softer spectra, because all the primary electron energy goes into heat through Coulomb interactions with the ambient electrons when $\abn{e} \sim 1$ \citep{1996_Maloney}, we do not model the heating efficiency as a function of photon energy. Instead, in \secref{sec:f_h}, we study simulations with $f_h = 1$, corresponding to the limiting case where all the absorbed energy goes into heating. FUV heating is not taken into account in our test simulations presented here. \begin{figure*}[htbp] \begin{center} \includegraphics[clip, width = \linewidth ]{figs/Theatcool_PLUTON180208180220_log001Z-0_FUVoff400AUdatacool_n04000r100_PLUTON180208180220_log0032Z-0_FUVoff400AUdatacool_n04000r100_PLUTON180208180220_log0032fh1Z-0_FUVoff400AUdatacool_n04000r100.eps} \caption{ Meridional distributions of the physical quantities at $r = 100\,{\rm au}$ in the simulations where the soft spectrum (left), the intermediate spectrum (middle), and the intermediate spectrum with $f_h = 1$ (right) are used. The panels are shown in the same manner as \fref{fig:theatcol}. Note that though the soft and intermediate spectra technically contain the EUV component ($13.6{\rm \,eV} \leq h\nu \leq 100{\rm \,eV}$), we refer to the photo-heating calculated with these spectra as X-ray heating.} \label{fig:X-raytest} \end{center} \end{figure*} \fref{fig:X-raytest} shows that the maximum values of the photo-heating rates (the black lines) are larger by about an order of magnitude for the soft and intermediate spectra than for our fiducial spectrum, which is shown in the right column of \fref{fig:theatcol}. The specific photo-heating rate is smaller for higher energy photons. \fref{fig:X-raytest} also shows that low energy photons are nearly completely absorbed in the region close to the ionization front.
Since the cross section of the disk medium is larger for lower energy photons, adopting a softer spectrum results in a higher specific photo-heating rate, but the low energy photons are absorbed in regions with small gas densities. The photo-heating raises the gas temperature only in the region close to the ionization front, whereas a large part of the neutral region remains at relatively low temperatures, as seen in \fref{fig:X-raytest}. The gas is not hot enough to launch neutral outflows, and the EUV-driven, ionized flows dominate the mass loss rate. \fref{fig:pratediscussion} compares the photoevaporation rates for the soft, intermediate, and fiducial spectra. Clearly, in our test simulations, the spectral hardness does not critically affect the photoevaporation rate, although it changes the thermal and chemical structure of the disk (\fref{fig:X-raytest}). \begin{figure}[htbp] \begin{center}\includegraphics[clip, width = \linewidth]{figs/prate_180220_logFUVoff400AU_180220_logfhFUVoff400AU_180220_fiducialFUVoff400AU_r_eva=100_r_eva=200_tmin1000.eps} \caption{Resulting photoevaporation rates due to EUV and X-rays as functions of the cross-section-weighted average photon energy of the soft, intermediate, and fiducial spectra. The black and cyan lines show the photoevaporation rates derived from the simulations where $f_h$ is calculated by \eqnref{eq:fh} and $f_h = 1$, respectively. The blue points show the photoevaporation rates of Run X/Z0 (\tref{tab:prate}). } \label{fig:pratediscussion} \end{center} \end{figure} \subsection{Effect of the Heating Efficiency $f_h$} \label{sec:f_h} Another important factor is the heating efficiency $f_h$. Adopting $f_h = 1$ raises the photo-heating rate. We see in \fref{fig:X-raytest} that the neutral region has a higher temperature when using $f_h = 1$ (the right column) than when using $f_h$ of \eqnref{eq:fh} (the left and middle columns).
Consequently, the photoevaporation rate with the intermediate spectrum and $f_h = 1$ is the highest (\fref{fig:pratediscussion}). Note that the cross-section-weighted mean energy $\bar{E}$ is $0.03\,{\rm keV}$ and $0.11\,{\rm keV}$ for the soft and intermediate spectra, respectively. For our fiducial spectrum, $\bar{E} = 0.34\,{\rm keV}$, and $f_h = 1$ does not significantly change the photoevaporation rate. We conclude that high energy photons with $\gtrsim 0.1 \,{\rm keV}$ are ineffective at heating the neutral gas and do not excite dense photoevaporative flows. The same is true for the soft and intermediate spectra. Our test simulations show that the effective component for photoevaporative mass loss is not X-ray $(0.1\,{\rm keV} \leq h \nu \leq 10\,{\rm keV})$, but EUV $(13.6{\rm \,eV} \leq h \nu \leq 100{\rm \,eV})$. Hard EUV photons with $\sim 10^2{\rm \,eV}$ most efficiently drive disk mass loss. \fref{fig:pratediscussion} directly compares $\mdotph$ in the above test runs with that of Run X/Z0. Again, we confirm that X-rays are ineffective at exciting photoevaporative flows. When FUV heating is absent (a somewhat artificial condition, but useful for direct comparison), EUV mainly contributes to the photoevaporative mass loss. This is consistent with Table 1 of \cite{2009_Ercolano}, where we find the photoevaporation rates are significantly lower when the EUV component is screened out (see also \cite{2015_Gorti}). In \cite{2009_Ercolano}, the EUV component reaches the disk surface if the pre-screening column density $\col{H}$ is smaller than $10^{20} \cm{-2} $. The spectrum model of \cite{2009_Ercolano} with $\col{H} = 10^{19} -10^{20} \cm{-2}$ is similar to our intermediate spectrum. The resulting photoevaporation rate of \cite{2009_Ercolano} is $\mdotph \simeq 4 \e{-9} \, M_{\odot}\, {\rm yr}^{-1}$, which is close to $\mdotph$ of the test simulation with the intermediate spectrum and $f_h = 1$ (\fref{fig:pratediscussion}).
Further, the photoevaporation rate differs only by a factor of two from that of \cite{2012_Owen}, where an ``ionization parameter'' approach is used with the unscreened spectrum of \cite{2009_Ercolano} to calculate gas temperatures in the hydrodynamics simulations. The agreement between the photoevaporation rates in \cite{2012_Owen} and the present study implies that, for a spectrum with a large amount of $\sim 0.1\,{\rm keV}$ photons, the ionization parameter approach yields essentially the same results as those derived by treating heating and radiative cooling in a consistent manner. Nonetheless, self-consistent calculations such as ours are necessary to investigate the relative importance of X-ray and FUV photons. The opening angle of the \ion{H}{2}~region is narrower and the resulting photoevaporation rate is larger by about a factor of two in \cite{2012_Owen} than in our intermediate EX/Z0 with $f_h = 1$ (the right column in \fref{fig:X-raytest}). Since these differences likely originate from a number of differences in the adopted methods, it would be difficult to specify the causes. Nevertheless, the results of our intermediate EX/Z0 with $f_h = 1$ and with the fiducial $f_h$ (\eqnref{eq:fh}) provide important clues, because different values of $f_h$ result in different opening angles and photoevaporation rates (\fref{fig:X-raytest} and \fref{fig:pratediscussion}). The results suggest that a higher heating efficiency $f_h$ tends to generate a narrower \ion{H}{2}~region and a larger photoevaporation rate. Thus, we expect that the heating efficiency is higher in \cite{2012_Owen} than in ours. Indeed, the temperature is typically $\sim 4000-5000{\rm \, K}$ in the $2-10 \,{\rm au}$ region in \cite{2012_Owen}, while it is $2000-4000 {\rm \, K}$ in our intermediate EX/Z0 with $f_h = 1$. Hence, we conclude that the narrower opening angle in \cite{2012_Owen} may be attributed to a larger heating efficiency.
The high heating efficiency could be realized when the ionization degree is large, owing to more efficient Coulomb interactions between the ejected electron and the ambient electrons. This effect is neglected in the present study for simplicity. Incorporating the effect can yield a larger heating efficiency \citep{1985_ShullSteenberg}, and X-ray heating rates could be increased especially in the region where the ionization degree is high. On the other hand, since a highly ionized medium hardly absorbs X-rays, the heating rate can also be lowered in such regions. With the electron abundances in the neutral regions of our model being $\abn{e} \sim 10^{-4} - 10^{-2} $ (\fref{fig:theatcol} and \fref{fig:X-raytest}), taking account of the dependence on the electron abundance could increase X-ray heating rates by a small factor in the neutral regions. We note that a higher heating rate also results in a narrower \ion{H}{2}~region in Run FEX and Run FE, where the opening angle is smaller when FUV heating is strengthened by X-rays (\fref{fig:evaporations} and \fref{fig:eta}). \cite{2017_Wang} show that disabling efficient cooling processes results in large photoevaporation rates due to hard X-rays with $1\,{\rm keV}$. In order to examine if this is also the case in the fiducial model of the present study, we further perform a test simulation, where $f_h = 1$ and the same thermal processes as Run X/Z0 are used except that all line cooling processes are disabled. The resulting photoevaporation rate is $5\times 10^{-9} \, M_{\odot}\, {\rm yr}^{-1}$, which is modestly larger than our EUV photoevaporation rates. Thus, if all of the primary electron energy goes into heating and line cooling processes are not effective, which might be unrealistic, hard X-rays can also cause a relatively efficient mass loss.
Overall, the effectiveness of X-rays at driving photoevaporation significantly depends on the heating efficiency $f_h$ and the exact shape of the spectrum of high energy photons, especially around $\sim 0.1 \,{\rm keV}$. Hence, for a comprehensive study of X-ray photoevaporation, it is necessary to model the heating efficiency in a consistent manner and to adopt a realistic spectrum. Finally, we note that the photoevaporation rate likely depends on metallicity when a hard EUV spectrum is assumed. Since the efficiency of radiative cooling due to metals in the neutral region, such as \ion{O}{1}~cooling and dust-gas collisional cooling, decreases with metallicity, there may be a metallicity dependence of EUV/X-ray photoevaporation when the exact spectral shape is taken into account. Similarly, different FUV spectra should also result in different photoevaporation rates. Further studies are warranted to address these issues. \subsection{Uncertainties in Input Parameters} We have concluded that FUV effectively drives dense neutral flows, while X-rays are ineffective. This conclusion may depend on input parameters such as the abundance of polycyclic aromatic hydrocarbons (PAHs) and the FUV/X-ray luminosities and spectra. PAHs and very small grains significantly contribute to photoelectric heating, and thus their abundances affect FUV photoevaporation rates \citep{2008_GortiHollenbach, 2009_Gorti, 2015_Gorti, 2018_Nakatani}. Recent observations suggest that the PAH abundances around T Tauri stars might be smaller than the interstellar value, which we adopt in our fiducial model, although there are uncertainties in the observational results \citep{2007_Geers, 2010_Oliveira, 2013_Vicente}. In Paper I, we investigate the effect of a reduced PAH abundance on FUV photoevaporation. We find that the net effect of the reduced PAH abundance is to weaken photoelectric heating, but FUV-driven flows are excited anyway even without the PAH contribution.
The resulting mass loss rates are of the same order of magnitude as those with the PAH contribution to FUV heating. Besides the PAH abundance, photoelectric heating rates depend on the local size distribution and amount of grains, which can be spatially variable in PPDs owing to the effects of dust growth, settling, and entrainment \citep{2011_Owen_b, 2016_Hutchison_a, 2016_Hutchison}. Indeed, such variations are detected in several PPDs \citep[e.g.,][]{2016_Pinte}. The disk opacity varies according to the spatial distribution of dust grains, which also strongly affects photoevaporation rates. Hence, in order to derive FUV photoevaporation rates accurately, it is necessary to take account of different spatial distributions of grains with various sizes as well as the reduced abundances of smaller grains. The FUV and X-ray luminosities of young stars are also uncertain factors. A large fraction of FUV photons is considered to be produced within accretion shocks around a classical T Tauri star (CTTS). CTTSs show a wide range of FUV luminosities, $10^{-6}\, L_\odot \lesssim L_\text{FUV} \lesssim L_\odot$, which roughly correlates with the accretion rate $\dot{M}_\text{acc}$ as $L_\text{FUV} \propto \dot{M}_\text{acc}$ \citep{1998_Gullbring,2012_Yang}. X-ray luminosities of T Tauri stars range between $10^{28}\unit{erg}{}\unit{s}{-1} \lesssim L _\text{X} \lesssim 10^{31} \unit{erg}{}\unit{s}{-1}$ \citep{2007_Gudel,2014_Vidotto}, and measured plasma temperatures are typically $(5-30)\e{6} {\rm \, K}$, which corresponds to a peak X-ray energy of $\sim 1\,{\rm keV}$ \citep{2009_Gudel, 2014_Alexander}. A small number of T Tauri stars are known to have a ``soft X-ray excess'' with temperatures of a few million kelvin $(0.3-0.4\,{\rm keV})$ \citep{2009_Gudel}. TW Hya, whose X-ray spectrum is adopted in our fiducial model, is one of them.
Thus, our fiducial X-ray spectrum might be softer than the typical X-ray spectrum of a T Tauri star without the soft X-ray excess. Our results show that X-rays are ineffective at driving photoevaporation in our fiducial model even though the fiducial spectrum contains the soft X-ray excess. Since the softer components of an X-ray spectrum play a major role in driving photoevaporative flows (\secref{sec:speff} and \secref{sec:f_h}), using a typical X-ray spectrum without the soft X-ray excess would not change the conclusion that X-rays are ineffective at driving photoevaporation. We note that X-ray spectra constructed from emission measure data can be different from `processed' spectra that actually reach PPD surfaces, considering the possibility of screening close to the X-ray source. We also note that the observed FUV/X-ray luminosities and spectra vary widely, as described above; FUV and X-ray luminosities vary independently with the evolutionary stage of a PPD. Hence, it is worth investigating the relative importance of FUV and X-rays in photoevaporation for a variety of luminosities and spectra. \section{SUMMARY} \label{sec:summary} We have performed a suite of radiation hydrodynamics simulations of photoevaporating protoplanetary disks to study the metallicity dependence of photoevaporation due to FUV, EUV, and X-ray radiation. Direct comparison between a variety of cases has shown that X-rays alone do not heat disk gas up to sufficiently high temperatures to cause a significant photoevaporative mass loss. Although the net heating effect is unimportant, X-rays effectively ionize the neutral region in a disk. The electron abundance in the neutral region is thereby raised, and charged dust grains recombine more efficiently. The FUV photoelectric heating efficiency is increased by the fast recombination, and the temperature in the neutral region becomes higher because of the strengthened FUV heating.
Consequently, including the X-ray radiation results in a larger photoevaporation rate, compared with the cases with FUV heating only. With FUV, EUV, and X-ray radiation, the disk photoevaporation rate increases as metallicity decreases in the range of $\metal \gtrsim 10^{-1.5} \, \smetal$ because of the reduced opacity of a disk for FUV photons. At $\metal \lesssim 10^{-1.5}\, \smetal$, dust-gas collisional cooling becomes efficient compared to FUV photoelectric heating and suppresses photoevaporation. In this metallicity range, the strengthening effect of X-rays is crucial to driving FUV photoevaporation. Without X-rays, the FUV heating does not excite photoevaporation and only EUV-driven flows contribute to the mass loss. Therefore, the photoevaporation rate is significantly larger in the simulations with very low metallicities when the X-ray effects are incorporated. We derived the metallicity dependence of the resulting photoevaporation rates. The metallicity dependence of photoevaporation rates due to FUV or X-ray-strengthened FUV heating is consistent with the observational metallicity dependence of the disk lifetimes. Our model predicts that protoplanetary disks in an extremely low metallicity environment have longer lifetimes than in solar or sub-solar metallicity environments. \acknowledgments We thank Neal Turner, Mario Flock, Uma Gorti, Shu-ichiro Inutsuka, Kengo Tomida, and Kei Tanaka for fruitful discussions and helpful comments on the paper. We also thank the anonymous referee for insightful comments that improved the manuscript. RN has been supported by the Grant-in-aid for the Japan Society for the Promotion of Science (16J03534) and by the Advanced Leading Graduate Course for Photon Science (ALPS) of the University of Tokyo. TH appreciates the financial support by the Grants-in-Aid for Basic Research by the Ministry of Education, Science and Culture of Japan (16H05996).
HN appreciates the financial support by Grants-in-Aid for Scientific Research (25400229). RK acknowledges financial support via the Emmy Noether Research Group on Accretion Flows and Feedback in Realistic Models of Massive Star Formation funded by the German Research Foundation (DFG) under grant no. KU 2849/3-1. All the numerical computations were carried out on Cray XC30 and Cray XC50 at the Center for Computational Astrophysics, National Astronomical Observatory of Japan.
\section{Introduction} \label{SecI} High-energy collisions between hadron and/or atomic nuclei produce multi-particle final-states for which single- and two-particle number distributions have been measured~\cite{STARspectra,PHENIXspectra,ALICEspectra}. Two-particle correlations, constructed from these distributions, have been shown to be sensitive to the underlying dynamics in the collision process. Parton fragmentation into jets~\cite{Tomjetfrag}, hadronization from soft processes in quantum chromodynamics (QCD)~\cite{LUND}, identical particle quantum interference~\cite{HBT}, parton collectivity (flow)~\cite{flow}, parton-parton quantum interference~\cite{Levin,Pomerons}, resonance decays, and conservation law effects are among the many dynamical processes predicted to contribute to two-particle correlations. The majority of two-particle correlation measurements reported for relativistic heavy-ion collisions at the Relativistic Heavy-Ion Collider (RHIC) and at the Large Hadron Collider (LHC) are angular correlations on sub-spaces $(\phi_1,\phi_2)$, $(\eta_1,\eta_2)$, $(\phi_1 - \phi_2)$, $(\eta_1 - \eta_2)$, or $(\eta_1 - \eta_2,\phi_1 - \phi_2)$, where $\phi$ and $\eta$ are the azimuth and pseudorapidity~\footnote{Pseudorapidity is defined as $\eta = -\ln[\tan(\theta/2)]$, where $\theta$ is the polar scattering angle relative to the beam direction.} of arbitrary particles 1 and 2. Particles are selected within fixed $p_t$ ranges depending on the physics goals of the analysis. In this paper the complementary correlations on transverse momentum $(p_{t1},p_{t2})$ for a fixed $(\eta,\phi)$ binning scale (bin size) and acceptance range~\cite{ptscale,STARscale} are considered. Measurements of two-particle correlations on $(p_{t1},p_{t2})$ have been reported by experiments NA49~\cite{NA49ptpt,JeffReid}, CERES~\cite{CERESptpt}, and STAR~\cite{JeffReid,JeffQM01,Ayamtmt}. 
In general, this type of correlation depends on the angular $(\eta,\phi)$ bin scale, acceptance range, and location in $(\eta,\phi)$ space as discussed in Refs.~\cite{STARscale,TrainorMeth,Trainormeanpt}. Here, the $(\eta,\phi)$ bin-scale is fixed at $\Delta\phi = 2\pi$, $\Delta\eta = 2$ for $|\eta| \leq 1$ corresponding to the STAR experiment's time projection chamber (TPC) tracking detector acceptance~\cite{STARNIM,STARTPC} at the RHIC. For symmetric collision systems, e.g. Au+Au and Pb+Pb, near mid-rapidity the correlations are approximately constant with respect to coordinate $(\eta_1 + \eta_2)$~\cite{AyaCD}. For unpolarized ion + ion collisions the correlations are invariant on coordinate $(\phi_1 + \phi_2)$. Two-particle correlations for these conditions can therefore be considered four-dimensional (4D) functions of variables $p_{t1}$, $p_{t2}$, $\eta_1 - \eta_2$ and $\phi_1 - \phi_2$. Two-dimensional measurements on $(\eta_1 - \eta_2,\phi_1 - \phi_2)$ as a function of 2D $(p_{t1},p_{t2})$, in principle, contain all of the two-particle correlation information. However, as discussed in this paper, angular correlations include an undetermined constant offset~\cite{axialCI} and are therefore incomplete. We will use the relation between two-particle correlations on 2D $(p_{t1},p_{t2})$ space and non-statistical fluctuations in mean-$p_t$~\cite{Trainormeanpt,meanptpaper} to determine the overall, absolute magnitude of the correlations on $(p_{t1},p_{t2})$ and thus resolve the above indeterminacy and allow access to all the information in 4D two-particle correlations. Two-particle correlations on transverse momentum may provide access to dynamical processes beyond that which can be studied with angular correlations alone. 
For example, in thermodynamic models, event-wise fluctuations in the final-state temperature of the observed collision system produce fluctuating slopes of the event-wise single-particle $p_t$ distributions resulting in a saddle-shape in the $(p_{t1},p_{t2})$ correlation~\cite{Ayamtmt}. Fluctuating slopes would not produce angular correlations unless they originate in regions with a characteristic angular scale. Another example is the fragmentation of minimum-bias jets~\cite{axialCI} which occurs within a characteristic angular scale and within a relatively local $p_t$ range at intermediate momentum of order 1-4~GeV/$c$~\cite{Tomjetfrag}. Fluctuations in the number of minimum-bias jets and/or the number of charged hadrons per jet cause the intermediate $p_t$ distribution to fluctuate resulting in positive correlations in $(p_{t1},p_{t2})$ along the $p_{t1}=p_{t2}$ diagonal. Angular correlations from minimum-bias jets~\cite{axialCI} determine only the average number of correlated particle-pairs from these processes. Together, angular and $(p_{t1},p_{t2})$ correlations provide access to independent information (i.e. averages and variances) about the event-wise number of correlated particle-pairs from dynamical processes such as jet fragmentation which tend to be localized in both angular and transverse momentum spaces. In the RHIC and the LHC experiments correlations are measured as functions of global properties of the collision events. Typically events are grouped according to an extensive, event-wise observable such as total charged-particle multiplicity, number of neutrons at zero-degree scattering angle, total transverse energy, etc. which serve as proxies for the degree of overlap, or centrality, between the colliding heavy-ions. In order to achieve sufficient statistical accuracy events must be collected into finite width bins (e.g. 
centrality bins) in the extensive observable within which the number of produced particles in the collision, or multiplicity, as well as the shape of the single-particle distribution vary. These variations over finite width centrality bins can {\em bias}, or distort, the measurements. This bias is inconsequential for angular correlations~\footnote{Referring to Eq.~(\ref{Eq0}), statistical bias mainly adds a constant offset ($\xi$) to angular correlations and does not affect the analysis of correlation structure. Systematic bias caused by changes in the shape of the single-particle distribution or acceptance within a centrality bin, e.g. dependence of the raw pseudorapidity distribution on collision vertex position within the fiducial volume of the detector, can be minimized by requiring event-mixing within narrow sub-bins.}~\cite{axialCI} but can be quite severe for $(p_{t1},p_{t2})$ correlations~\cite{LizThesis}, being comparable to or larger than the intrinsic correlation structures of interest. The purposes of this work are to derive candidate $(p_{t1},p_{t2})$ correlation measures from non-statistical mean-$p_t$ fluctuation quantities in the literature, estimate the severity of the measurement bias for each form, and determine the range of centrality, multiplicity and kinematics where each correlation quantity is minimally biased. In the present context {\em bias} refers to any effect which causes the measured correlation to be non-zero when the true correlations vanish, or which distorts the correlation measurement in the presence of true correlations. For example, consider a typical correlation measurement where particle pairs from the same event (sibling pairs) are binned on variable $x$ (e.g. the above coordinate variables) in histogram $N^{\rm sib}(x)$ and mixed-event pairs (the two particles are taken from different events) are collected in $N^{\rm mix}(x)$. Both histograms are averaged over the events.
Event-averaged particle pair densities are given by the ratio of the histograms to bin area. Correlation quantity $r(x)-1$~\cite{axialCI} is given by \begin{eqnarray} r(x)-1 & = & \frac{N^{\rm sib}(x) - N^{\rm mix}(x)}{N^{\rm mix}(x)} \nonumber \\ & = & \frac{\overline{N(N-1)} \hat{N}^{\rm sib}(x) - \bar{N}^2 \hat{N}^{\rm mix}(x)}{\bar{N}^2\hat{N}^{\rm mix}(x)} \nonumber \\ & = & \frac{\left( \overline{N(N-1)}/\bar{N}^2 \right) \hat{N}^{\rm sib}(x) - \hat{N}^{\rm mix}(x)}{\hat{N}^{\rm mix}(x)} \nonumber \\ & = & (1+\xi) \left( \frac{\hat{N}^{\rm sib}(x) - \hat{N}^{\rm mix}(x)} {\hat{N}^{\rm mix}(x)} \right) + \xi , \label{Eq0} \end{eqnarray} where $\bar{N}$ is the mean multiplicity, $\overline{N(N-1)}$ is the mean number of sibling pairs, ``hat'' symbols denote unit-normalized histograms, e.g. $\sum_x \hat{N}^{\rm sib}(x) = 1$, and $\xi \equiv \overline{N(N-1)}/\bar{N}^2 - 1$, where $|\xi| <\!< 1$ if $\bar{N} >\!> 1$ and the range of event-wise multiplicities is $<\!< \bar{N}$. The algebraic steps used in going from the first to the second line in Eq.~(\ref{Eq0}) are explained in the next section [see Eq.~(\ref{Eq7})] where, for this example, it is assumed that the shapes of the densities do not vary with event multiplicity. Variable $\xi$ is non-zero due to pair counting statistics, a {\em statistical} bias, where factor $(1+\xi)$ is a {\em multiplicative} bias and constant factor $\xi$ on the right-hand side (RHS) of the last line of Eq.~(\ref{Eq0}) is an {\em additive} bias. Variations in the shape of the single-particle distributions within the centrality bin will also cause the numerator in Eq.~(\ref{Eq0}) not to vanish in the absence of true correlations, a {\em systematic} bias. The advantage of $(p_{t1},p_{t2})$ correlations for constraining the 4D correlation measurements is negated by the additive bias $\xi$.
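The statistical bias $\xi$ in Eq.~(\ref{Eq0}) is easily evaluated from an event sample. The following minimal numerical sketch (illustrative only; the Poisson and uniform multiplicity distributions stand in for a narrow and a wide centrality bin and are not taken from the analysis) shows that $\xi$ vanishes for Poisson-distributed multiplicities and grows with the multiplicity variance:

```python
import numpy as np

rng = np.random.default_rng(1)

def xi_bias(mults):
    """Statistical bias xi = mean[N(N-1)]/mean[N]^2 - 1 of Eq. (Eq0)."""
    m = np.asarray(mults, dtype=float)
    return (m * (m - 1.0)).mean() / m.mean() ** 2 - 1.0

# Narrow "centrality bin": Poisson multiplicities (variance = mean) give xi ~ 0.
xi_poisson = xi_bias(rng.poisson(100.0, size=200_000))

# Wide bin: uniform multiplicities in [50, 150] (variance >> mean) give xi > 0.
wide = rng.integers(50, 151, size=200_000)
xi_wide = xi_bias(wide)

# Equivalent closed form: xi = (var_N - mean_N) / mean_N**2.
xi_closed = (wide.var() - wide.mean()) / wide.mean() ** 2
```

For the uniform case $\bar{N} = 100$ and $\sigma^2_N \approx 850$, giving $\xi \approx 0.075$, much larger than typical correlation amplitudes of order $10^{-3}$ to $10^{-2}$.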
For $(p_{t1},p_{t2})$ correlations reported as the number of correlated pairs per final-state particle~\cite{axialCI} the additive bias introduces large shape distortions in the correlation structures, as shown below. A major goal of this paper is to derive $(p_{t1},p_{t2})$ correlation measures for which additive bias effects are negligible. This paper is organized as follows. In Sec.~\ref{SecII} charge-identified (CID) $(p_{t1},p_{t2})$ correlation quantities are derived both from simple definitions and from mean-$p_t$ fluctuation quantities in the literature. In Sec.~\ref{SecIII} analytic leading-order bias contributions are derived which are due to systematic variations in single-particle distributions and correlation shapes within the finite-width centrality bin. In Sec.~\ref{SecIV} a Monte Carlo simulation model is described which was used to estimate bias effects corresponding to a realistic analysis of RHIC data. Simulation results are presented and discussed in Sec.~\ref{SecV}. A summary and conclusions are given in Sec.~\ref{SecVI}. \section{Transverse momentum correlations} \label{SecII} Correlation quantities based on conventional definitions used in angular correlation analysis are discussed first. In Refs.~\cite{Ayamtmt,CLTTom} it was shown that a measure of nonstatistical, event-wise fluctuations in mean-$p_t$ is proportional to the $p_t$-weighted integral of a two-particle, transverse momentum correlation. This is an example of the general relationship between correlations and nonstatistical fluctuations~\cite{ptscale,Trainormeanpt,CLTTom}. Multiple definitions of mean-$p_t$ fluctuation measures can be found in the literature~\cite{meanptpaper,phipt,Voloshin,Fpt,ceres,NA49}.
Those advocated by experimental collaborations at the CERN Super Proton Synchrotron (SPS), RHIC, and the LHC are considered here; in each case the corresponding two-particle, transverse momentum correlations for like-sign (LS) and unlike-sign (US) charged-particle pairs are derived. \subsection{Simple definitions} \label{SecIIA} Many of the definitions of correlations in the literature~\cite{AyaCD,axialCI,AyaCI} arbitrarily assume total pair normalization, in which correlation quantity $(r_{\rm pair}-1)$ on binned space $x$ is given by \begin{eqnarray} r_{\rm pair}(x)-1 & = & \frac{N^{\rm mix}}{N^{\rm sib}} \frac{h^{\rm sib}(x)}{h^{\rm mix}(x)} -1 \nonumber \\ & = & \frac{N^{\rm mix}}{N^{\rm sib}} \frac{\left( h^{\rm uncorr}(x) + h^{\rm corr}(x) \right)}{h^{\rm mix}(x)} - 1 \label{Eq1} \\ & = & \frac{N^{\rm mix}}{N^{\rm sib}} \frac{h^{\rm corr}(x)}{h^{\rm mix}(x)} + \left[ \frac{N^{\rm mix}}{N^{\rm sib}} \frac{h^{\rm uncorr}(x)}{h^{\rm mix}(x)} - 1 \right]. \nonumber \\ \label{Eq2} \end{eqnarray} In these equations $h^{\rm sib}(x)$ and $h^{\rm mix}(x)$ are histograms of sibling and mixed-event particle pairs, respectively, in bin $x$ ($x$ may represent bins in 1D or 2D angular or $p_t$ sub-spaces), $h^{\rm sib}(x) = h^{\rm uncorr}(x) + h^{\rm corr}(x)$ corresponding to the number of uncorrelated (random) and correlated pairs, and the total pair counts are given by $N^{\rm sib} = \sum_x h^{\rm sib}(x)$ and $N^{\rm mix} = \sum_x h^{\rm mix}(x)$. The conventional definition of the unit-normalized two-particle density~\cite{Feshbach,Cramer}, $\hat{\rho}(x_1,x_2)$, is given by \begin{eqnarray} \hat{\rho}(x_1,x_2) & = & \hat{\rho}(x_1) \hat{\rho}(x_2) + C(x_1,x_2) \label{Eq3} \end{eqnarray} where the one-particle density, $\hat{\rho}(x_1)$, is the marginal distribution of the two-particle density, $\hat{\rho}(x_1) = \int dx_2 \hat{\rho}(x_1,x_2)$, and $C(x_1,x_2)$ is the two-particle correlation density.
From this definition we find that \begin{eqnarray} \int dx_2 C(x_1,x_2) & = & \int dx_1 C(x_1,x_2) = 0. \label{Eq4} \end{eqnarray} However, the integral of $C(x_1,x_2)$ over a reduced portion of the full space (e.g. the detector acceptance $\Delta x$), given by \begin{eqnarray} \int_{\Delta x} dx_2 C(x_1,x_2) & \neq & 0, \label{Eq5} \end{eqnarray} does not vanish in general. For the bin counts in Eqs.~(\ref{Eq1}) and (\ref{Eq2}) the preceding nonvanishing integral requires that $\sum_x h^{\rm corr}(x) \neq 0$ and therefore $\sum_x h^{\rm uncorr}(x) \neq N^{\rm sib}$. It follows that the factor in square brackets in Eq.~(\ref{Eq2}) does not vanish in general and is approximately a constant $\lambda$ over the domain of $x$, where $|\lambda| <\!< 1$ if $h^{\rm corr}(x) <\!< h^{\rm uncorr}(x)$. Both quantities on the RHS of Eq.~(\ref{Eq2}) are small $(<\!<1)$, but may be comparable, i.e. the arbitrary $\lambda$ could be of the order of the correlation amplitude. It is conventional~\cite{Trainormeanpt,axialCI} to report angular correlations as a normalized covariance (Pearson's normalized covariance~\cite{Pearson}) by multiplying $[r_{\rm pair}(x)-1]$ in Eq.~(\ref{Eq1}) by a single particle quantity or histogram, $\sqrt{\rho_{\rm ref}(x)}$, where \begin{eqnarray} \sqrt{\rho_{\rm ref}(x)} \left[ r_{\rm pair}(x) - 1 \right] & = & \sqrt{\rho_{\rm ref}(x)} \frac{N^{\rm mix}}{N^{\rm sib}} \frac{h^{\rm corr}(x)}{h^{\rm mix}(x)} \nonumber \\ & + & \lambda \sqrt{\rho_{\rm ref}(x)}. \label{Eq6} \end{eqnarray} For angular correlations from symmetric collision systems (e.g. p+p, Au+Au, Pb+Pb) at mid-rapidity the {\em prefactor}, $\sqrt{\rho_{\rm ref}(x)}$ (see also Sec.~\ref{SecIIG}), is approximately constant and is given by $d^2N_{\rm ch}/d\eta d\phi$~\cite{axialCI} where $N_{\rm ch}$ is the charged-particle multiplicity. 
Factor $\lambda \sqrt{\rho_{\rm ref}(x)}$ contributes an unknown, constant offset to the angular correlations meaning that only the non-constant angular correlation structures are physically significant as explained in Ref.~\cite{axialCI}. For transverse momentum correlations the prefactor is given by $\sqrt{(d^2N_{\rm ch}/dp_{t1}d\eta_1)(d^2N_{\rm ch}/dp_{t2}d\eta_2)}$ which varies exponentially with $(p_{t1},p_{t2})$. This correlation per final-state particle measure provides much greater visual access to the correlation structures at low and intermediate $p_t$ less than a few GeV/$c$. In this case the structure of the unknown factor $\lambda \sqrt{\rho_{\rm ref}(p_{t1},p_{t2})}$ may be comparable to or larger than the true correlations, making the $(p_{t1},p_{t2})$ pair-normalized correlations unreliable. Equation~(\ref{Eq6}) and bias factor $\lambda$ apply to both LS and US charged particle-pairs. Another correlation definition, referred to in the Introduction, invokes event averaging where the sibling pair histogram is given by \begin{widetext} \begin{eqnarray} N^{\rm sib}(x) & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} n^{\rm sib}_j(x) = \sum_m \frac{\epsilon_m}{\epsilon} \frac{1}{\epsilon_m} \sum_{j=1}^{\epsilon_m} n^{\rm sib}_{jm}(x) = \sum_m \frac{\epsilon_m}{\epsilon} \bar{n}^{\rm sib}_m(x) \nonumber \\ & = & \sum_m \frac{\epsilon_m}{\epsilon} m(m-1) \hat{\bar{n}}^{\rm sib}_m(x) \approx \sum_{\delta_m} \frac{\epsilon_m}{\epsilon} ( \bar{N} + \delta_m)(\bar{N} + \delta_m - 1) \hat{\bar{n}}^{\rm sib}(x) \nonumber \\ & \approx & \left[ \bar{N}( \bar{N} - 1) + \sigma^2_N \right] \hat{\bar{n}}^{\rm sib}(x) \label{Eq7} \end{eqnarray} where in the first line $\epsilon$ is the number of collision events, $j$ is the event index, $n^{\rm sib}_j(x)$ is the number of sibling pairs in event $j$ in bin $x$, $m$ is an event multiplicity value within the centrality range, $\epsilon_m$ is the number of events which have multiplicity $m$, and $\bar{n}^{\rm sib}_m(x)$ is 
an average over all events with multiplicity $m$. In the second line the event-wise pair count $m(m-1)$ includes both permutations of particles 1 and 2, $\hat{\bar{n}}^{\rm sib}_m(x)$ is normalized to unity where $\hat{\bar{n}}^{\rm sib}_m(x) = \bar{n}^{\rm sib}_m(x)/\sum_x\bar{n}^{\rm sib}_m(x)$, $\bar{N} = \sum_m (\epsilon_m/\epsilon) m$ is the mean multiplicity, $m = \bar{N} + \delta_m$, and in the last line $\sigma^2_N$ is the variance of the multiplicity distribution of the $\epsilon$ events given by $\sum_m (\epsilon_m/\epsilon) (m-\bar{N})^2$. In the second line of Eq.~(\ref{Eq7}) the possible multiplicity dependence of the shape of $\hat{\bar{n}}^{\rm sib}_m(x)$ was neglected. The mixed-event pair histogram is given by \begin{eqnarray} N^{\rm mix}(x) & = & \frac{1}{\epsilon_{\rm mix}} \sum_{j \neq j^{\prime}} \left[ n_j(x_1) n_{j^{\prime}}(x_2) \right]_{(x)} = \bar{N}^2 \left[ \hat{\bar{n}}(x_1) \hat{\bar{n}}(x_2) \right]_{(x)} \label{Eq8} \end{eqnarray} where $\epsilon_{\rm mix}$ is the number of pairs of mixed events used in the summation, $n_j(x_1)$ and $n_{j^{\prime}}(x_2)$ are the binned single-particle counts in events $j$ and $j^{\prime}$ where $j \neq j^{\prime}$, and notation $\left[ \hat{\bar{n}}(x_1) \hat{\bar{n}}(x_2) \right]_{(x)}$ means that all mixed-event particle-pairs which contribute to pair-wise bin $x$ are included in the summation. For transverse momentum correlations this factor is explicitly given by $\hat{\bar{n}}(p_{t1}) \hat{\bar{n}}(p_{t2})$ where in this context $p_{t1}$ and $p_{t2}$ represent $p_t$ bins. The normalized, event-averaged correlation is given by \begin{eqnarray} \sqrt{\rho_{\rm ref}}(r_{\rm event}-1) & = & \sqrt{\frac{d^2N_{\rm ch}}{dp_{t1}d\eta_1} \frac{d^2N_{\rm ch}}{dp_{t2}d\eta_2}} \frac{N^{\rm sib}(p_{t1},p_{t2})-N^{\rm mix}(p_{t1},p_{t2})} {N^{\rm mix}(p_{t1},p_{t2})}.
\label{Eq9} \end{eqnarray} Multiplying $N^{\rm sib}(x)$ by $\bar{N}/(\bar{N}-1)$ removes the trivial pair counting difference between sibling and mixed-event pairs [see Eqs.~(\ref{Eq7}) and (\ref{Eq8})]. Equation~(\ref{Eq9}) can then be re-expressed as \begin{eqnarray} \sqrt{\rho_{\rm ref}}(r_{\rm event}-1) & = & \sqrt{\frac{d^2N_{\rm ch}}{dp_{t1}d\eta_1} \frac{d^2N_{\rm ch}}{dp_{t2}d\eta_2}} \frac{\frac{\bar{N}}{\bar{N} - 1} N^{\rm sib}(p_{t1},p_{t2})-N^{\rm mix}(p_{t1},p_{t2})} {N^{\rm mix}(p_{t1},p_{t2})} \nonumber \\ & \approx & \sqrt{\frac{d^2N_{\rm ch}}{dp_{t1}d\eta_1} \frac{d^2N_{\rm ch}}{dp_{t2}d\eta_2}} \left[ \frac{\hat{\bar{n}}^{\rm sib}(p_{t1},p_{t2}) - \hat{\bar{n}}(p_{t1}) \hat{\bar{n}} (p_{t2})}{\hat{\bar{n}}(p_{t1}) \hat{\bar{n}} (p_{t2})} + \frac{\sigma^2_N}{\bar{N}(\bar{N} - 1)} \frac{\hat{\bar{n}}^{\rm sib}(p_{t1},p_{t2})} {\hat{\bar{n}}(p_{t1}) \hat{\bar{n}} (p_{t2})} \right] . \label{Eq10} \end{eqnarray} \end{widetext} In the absence of true correlations $\hat{\bar{n}}^{\rm sib}(p_{t1},p_{t2})$ equals $\hat{\bar{n}}(p_{t1}) \hat{\bar{n}}(p_{t2})$. However, $(r_{\rm event}-1)$ is not zero in that limit due to the additive bias term proportional to $\sigma^2_N$ which is determined by the multiplicity or centrality bin width. The bias is approximately $\sqrt{\rho_{\rm ref}} (\sigma_N/\bar{N})^2$ and can be much larger than the typical correlations as shown in Sec.~\ref{SecV} where for heavy-ion collisions $|\hat{\bar{n}}^{\rm sib}(p_{t1},p_{t2}) / (\hat{\bar{n}}(p_{t1}) \hat{\bar{n}}(p_{t2})) - 1|$ is of order $10^{-3}$ to $10^{-2}$~\cite{Ayamtmt}. Alternatively, the ratio $r = N^{\rm sib}(x)/N^{\rm mix}(x)$ could be normalized by the factor $\bar{N}^2/[\bar{N}(\bar{N} - 1) + \sigma^2_N]$ which produces the same result and possible distortion as found for the above pair-normalization method. Equation~(\ref{Eq10}) directly applies to non-charge-identified particle-pairs and to LS pairs. 
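The additive bias term in Eq.~(\ref{Eq10}) can be demonstrated with a toy Monte Carlo in which events are uncorrelated by construction (a sketch under illustrative assumptions: an exponential $p_t$ spectrum, uniform multiplicities mimicking a wide centrality bin, and four coarse $p_t$ bins; none of these choices are taken from measured data):

```python
import numpy as np

rng = np.random.default_rng(2)
nev = 50_000
edges = np.array([0.2, 0.5, 1.0, 2.0, 4.0])   # illustrative pt bin edges (GeV/c)
nbins = len(edges) - 1

mults = rng.integers(20, 61, size=nev)        # wide "centrality bin"
sib = np.zeros((nbins, nbins))
single = np.zeros(nbins)
for m in mults:
    pt = 0.2 + rng.exponential(0.5, size=m)   # iid particles: no true correlations
    h, _ = np.histogram(pt, bins=edges)
    sib += np.outer(h, h) - np.diag(h)        # sibling pairs, both orders, no self-pairs
    single += h
sib /= nev
mix = np.outer(single, single) / nev**2       # factorized mixed-event reference, Eq. (Eq8)

Nbar = mults.mean()
r_minus_1 = (Nbar / (Nbar - 1.0)) * sib / mix - 1.0
bias = mults.var() / (Nbar * (Nbar - 1.0))    # predicted additive bias, Eq. (Eq10)
```

Although no true correlations are present, every bin of `r_minus_1` is displaced to approximately $\sigma^2_N/[\bar{N}(\bar{N}-1)] \approx 0.09$ for this toy multiplicity distribution.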
For US pairs statistical bias persists where $\sigma^2_N/[\bar{N}(\bar{N} - 1)]$ in Eq.~(\ref{Eq10}) is replaced with $\overline{(n^+ - \bar{N}^+)(n^- - \bar{N}^-)}/(\bar{N}^+ \bar{N}^-)$, the normalized covariance between positive and negative charged-particle number fluctuations. \subsection{Correlation derived from $\Delta \sigma^2_{p_t:m}$} \label{SecIIB} Several authors have proposed mean-$p_t$ fluctuation quantities which minimize statistical bias, all of which rely on the scale invariance (i.e. angular bin-size invariance), in the absence of correlations, of the quantity $m\sigma^2_{\langle p_t \rangle}$, where $m$ is the multiplicity within the angular bin and $\sigma^2_{\langle p_t \rangle}$ is the variance (defined below) of the event-wise mean $p_t$ within that bin. This scale invariance is a consequence of the central limit theorem (CLT)~\cite{CLTTom,CLT}. Non-statistical fluctuations, which correspond to correlations, break this scale invariance, causing the difference $[(m\sigma^2_{\langle p_t \rangle})_{\delta x_2} - (m\sigma^2_{\langle p_t \rangle})_{\delta x_1}]$ to be non-zero, where subscripts $\delta x_1$ and $\delta x_2$ refer to different angular bin-sizes, or scales. However, there is not a unique method for implementing this scale difference quantity in the definitions of non-statistical mean-$p_t$ fluctuation measures. For example, difference $[(m\sigma^2_{\langle p_t \rangle})_{\delta x_2} - (m\sigma^2_{\langle p_t \rangle})_{\delta x_1}]$ can be multiplied by arbitrary powers of $m$ in order to minimize bias due to the $m$-dependence in the non-statistical fluctuations. A linear width-difference, $\sqrt{(m\sigma^2_{\langle p_t \rangle})_{\delta x_2}} - \sqrt{(m\sigma^2_{\langle p_t \rangle})_{\delta x_1}}$, could also be used. This ambiguity allows multiple forms for mean-$p_t$ fluctuation quantities to be defined.
In Refs.~\cite{JeffReid,LiuTrainor} Liu, Trainor and Reid proposed the quantity $\Delta \sigma^2_{p_t:m}$ based directly on the above variance difference. This quantity was used by the STAR Collaboration in the analysis of Au+Au collisions at $\sqrt{s_{\rm NN}}$ = 130~GeV~\cite{meanptpaper}. Subscript $p_t:m$ emphasizes that this quantity measures non-statistical fluctuations of transverse momentum with negligible contribution from fluctuations in multiplicity ($m$)~\cite{meanptpaper}. This quantity was designed to eliminate bias when $[(m\sigma^2_{\langle p_t \rangle})_{\delta x_2} - (m\sigma^2_{\langle p_t \rangle})_{\delta x_1}]$ varies as $(f_0 + mf_1)$ in the presence of non-statistical fluctuations. For non-charge-identified particles this quantity at the acceptance scale is given by \begin{eqnarray} \Delta \sigma^2_{p_t:m} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} n_j \left( \langle p_t \rangle_j - \hat{p}_t \right)^2 - \sigma^2_{\hat{p}_t} \label{Eq11} \end{eqnarray} where $n_j$ is the multiplicity within the acceptance for event $j$, and event-wise mean-$p_t$, inclusive mean-$p_t$ and inclusive $p_t$ variance are respectively given by \begin{eqnarray} \langle p_t \rangle_j & = & \frac{1}{n_j} \sum_{i=1}^{n_j} p_{t,ji}, \label{Eq12} \\ \hat{p}_t & = & \frac{1}{\epsilon \bar{N}} \sum_{j=1}^{\epsilon} \sum_{i=1}^{n_j} p_{t,ji}, \label{Eq13} \\ \sigma^2_{\hat{p}_t} & = & \frac{1}{\epsilon \bar{N}} \sum_{j=1}^{\epsilon} \sum_{i=1}^{n_j} \left( p_{t,ji} - \hat{p}_t \right)^2. \label{Eq14} \end{eqnarray} The inclusive $p_t$ variance represents $(m\sigma^2_{\langle p_t \rangle})_{\delta x_1}$ in the limit of very small bin sizes where occupied bins contain exactly one particle and only occupied bins are included in the summations~\cite{JeffReid,meanptpaper,LiuTrainor}. In this paper angle brackets ``$ \langle$\,$\rangle$'' denote event-wise averages and over-lines denote averages over an event collection. 
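Equations~(\ref{Eq11})-(\ref{Eq14}) can be evaluated directly from per-event $p_t$ lists. A minimal sketch (with illustrative assumptions: exponential spectra and, for the second sample, Gaussian event-wise slope fluctuations of the kind discussed in the Introduction):

```python
import numpy as np

rng = np.random.default_rng(3)

def delta_sigma2_ptm(events):
    """Delta sigma^2_{pt:m} of Eqs. (Eq11)-(Eq14) from per-event pt arrays."""
    n = np.array([len(e) for e in events], dtype=float)
    allpt = np.concatenate(events)
    pt_hat = allpt.mean()                           # inclusive mean pt, Eq. (Eq13)
    var_hat = ((allpt - pt_hat) ** 2).mean()        # inclusive pt variance, Eq. (Eq14)
    ev_mean = np.array([e.mean() for e in events])  # event-wise <pt>_j, Eq. (Eq12)
    return (n * (ev_mean - pt_hat) ** 2).mean() - var_hat

nev = 20_000
# Purely statistical events: Delta sigma^2 -> 0 by the CLT.
stat = [0.2 + rng.exponential(0.5, size=rng.integers(20, 61)) for _ in range(nev)]
# Event-wise slope (temperature) fluctuations: nonstatistical signal > 0,
# roughly (Nbar - 1) * Var(T) for this toy model.
fluc = [0.2 + rng.exponential(rng.normal(0.5, 0.05), size=rng.integers(20, 61))
        for _ in range(nev)]
ds_stat = delta_sigma2_ptm(stat)
ds_fluc = delta_sigma2_ptm(fluc)
```

For these toy parameters the nonstatistical signal is of order $0.1$~(GeV/$c)^2$ while the statistical sample returns a value consistent with zero.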
In the absence of non-statistical fluctuations $\Delta \sigma^2_{p_t:m}$ equals zero, where \begin{widetext} \begin{eqnarray} \Delta \sigma^2_{p_t:m} & = & \sum_m \frac{\epsilon_m}{\epsilon} m \frac{1}{\epsilon_m} \sum_{j=1}^{\epsilon_m} \left( \langle p_t \rangle_j - \hat{p}_t \right)^2 - \sigma^2_{\hat{p}_t} = \sum_m \frac{\epsilon_m}{\epsilon} m \sigma^2_{\langle p_t \rangle} - \sigma^2_{\hat{p}_t} \rightarrow 0 \label{Eq15} \end{eqnarray} using the CLT result $m\sigma^2_{\langle p_t \rangle} = \sigma^2_{\hat{p}_t}$ where $\sigma^2_{\langle p_t \rangle}$ is the variance of the distribution of event-wise mean-$p_t$ for events with $m$ particles in the angular bin. The expression for $\Delta \sigma^2_{p_t:m}$ in Eq.~(\ref{Eq11}) can be expanded in terms of particle pairs by substituting the definitions in Eqs.~(\ref{Eq12}) - (\ref{Eq14}), collecting terms proportional to sums of pairs of particles from the same event (siblings) and sums of pairs of particles from different events (mixed-event pairs), and assuming a large number of events $\epsilon >\!> 1$. The result is given by \begin{eqnarray} \bar{N}\Delta \sigma^2_{p_t:m} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} \frac{\bar{N}}{n_j} \sum_{i \neq i^{\prime} = 1}^{n_j} p_{t,ji} p_{t,ji^{\prime}} - \frac{\bar{N} - 1}{\bar{N}} \frac{1}{\epsilon_{\rm mix}} \sum_{j \neq j^{\prime}} \sum_{i=1}^{n_j} \sum_{i^{\prime} = 1}^{n_{j^{\prime}}} p_{t,ji} p_{t,j^{\prime} i^{\prime}} + \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} \left( \frac{\bar{N}}{n_j} - 1 \right) \sum_{i=1}^{n_j} p_{t,ji}^2 . \label{Eq16} \end{eqnarray} \end{widetext} The last term in Eq.~(\ref{Eq16}) is a self-pair term which is non-vanishing when average $p_t^2$ is correlated with event multiplicity, but vanishes otherwise. It may contribute to the fluctuation measure but does not contribute to the correlation. 
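The pair decomposition in Eq.~(\ref{Eq16}) can be checked numerically without explicit pair loops: the sibling and self-pair sums reduce to event-wise sums of $p_t$ and $p_t^2$, and the mixed-event double sum, taken over all $\epsilon(\epsilon-1)$ ordered event pairs (an assumption about $\epsilon_{\rm mix}$ consistent with the text), reduces to sums of $S_j = \sum_i p_{t,ji}$. A sketch using illustrative toy events (exponential spectra with Gaussian slope fluctuations):

```python
import numpy as np

rng = np.random.default_rng(4)
events = [0.2 + rng.exponential(rng.normal(0.5, 0.05), size=rng.integers(20, 61))
          for _ in range(20_000)]

n = np.array([len(e) for e in events], dtype=float)
S = np.array([e.sum() for e in events])            # event-wise sum of pt
Q = np.array([(e ** 2).sum() for e in events])     # event-wise sum of pt^2
eps, Nbar = float(len(events)), n.mean()
pt_hat = S.sum() / n.sum()
var_hat = Q.sum() / n.sum() - pt_hat ** 2

# Left-hand side of Eq. (Eq16): Nbar * Delta sigma^2_{pt:m} from Eq. (Eq11).
lhs = Nbar * ((n * (S / n - pt_hat) ** 2).mean() - var_hat)

# Right-hand side of Eq. (Eq16): sibling, mixed-event and self-pair terms.
sib = ((Nbar / n) * (S ** 2 - Q)).mean()
mix = ((Nbar - 1.0) / Nbar) * (S.sum() ** 2 - (S ** 2).sum()) / (eps * (eps - 1.0))
self_pair = ((Nbar / n - 1.0) * Q).mean()
rhs = sib - mix + self_pair
```

The two sides agree to $O(1/\epsilon)$, consistent with the large event number assumed in deriving Eq.~(\ref{Eq16}).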
The particle sums, when binned on 2D transverse momentum, can be expressed as \begin{eqnarray} \sum_{i \neq i^{\prime} = 1}^{n_j} p_{t,ji} p_{t,ji^{\prime}} & = & \sum_{k,l} p_{t,k} p_{t,l} n_{j,kl}^{\rm sib} \label{Eq17} \\ \sum_{i=1}^{n_j} \sum_{i^{\prime} = 1}^{n_{j^{\prime}}} p_{t,ji} p_{t,j^{\prime} i^{\prime}} & = & \sum_{k,l} p_{t,k} p_{t,l} n_{jk} n_{j^{\prime} l} \label{Eq18} \end{eqnarray} where subscripts $k,l$ are transverse momentum bin indices, $p_{t,k}$ and $p_{t,l}$ are the average $p_t$ within those bins (approximately $p_t$ at the bin centers), $n_{j,kl}^{\rm sib}$ is the number of sibling pairs in 2D bin $(k,l)$ in event $j$, and $n_{jk}$ and $n_{j^{\prime} l}$ are the number of particles in $p_t$ bins $k$ and $l$ in events $j$ and $j^{\prime}$, respectively. By substituting Eqs.~(\ref{Eq17}) and (\ref{Eq18}) into Eq.~(\ref{Eq16}) and omitting the self-pair term, the relationship between the mean-$p_t$ fluctuation measure and the two-particle correlation for this case can be expressed as \begin{eqnarray} \bar{N} \Delta \sigma^2_{p_t:m} & \approx & \sum_{k,l} p_{t,k} p_{t,l} \Delta N_{kl,\Delta \sigma^2} \label{Eq19} \\ \Delta N_{kl,\Delta \sigma^2} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} \frac{\bar{N}}{n_j} n_{j,kl}^{\rm sib} - \frac{\bar{N} - 1}{\bar{N}} \frac{1}{\epsilon_{\rm mix}} \sum_{j \neq j^{\prime}} n_{jk} n_{j^{\prime} l} . \nonumber \\ \label{Eq20} \end{eqnarray} For like-sign charged-particle pairs ($++$ and $--$) the preceding equation can immediately be written as \begin{eqnarray} \Delta N_{kl,\Delta \sigma^2}^{\pm \pm} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} \frac{\bar{N}^{\pm}}{n_j^{\pm}} n_{j,kl}^{{\rm sib}\pm \pm} \nonumber \\ & - & \frac{\bar{N}^{\pm} - 1}{\bar{N}^{\pm}} \frac{1}{\epsilon_{\rm mix}} \sum_{j \neq j^{\prime}} n_{jk}^{\pm} n_{j^{\prime} l}^{\pm}. 
\label{Eq21} \end{eqnarray} From Ref.~\cite{meanptpaper} the mean-$p_t$ fluctuation measure for unlike-sign charged-particle pairs is \begin{eqnarray} \Delta \sigma^{2,US}_{p_t:m} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} \sqrt{n_j^+ n_j^-} \left( \langle p_t^{\pm} \rangle_j - \hat{p}_t^{\pm} \right) \left( \langle p_t^{\mp} \rangle_j - \hat{p}_t^{\mp} \right) . \nonumber \\ \label{Eq22} \end{eqnarray} After multiplying by $\sqrt{\bar{N}^+ \bar{N}^-}$ and using the CID versions of the summations in Eqs.~(\ref{Eq12}), (\ref{Eq13}), (\ref{Eq17}) and (\ref{Eq18}), the unlike-sign charged-particle pair correlation can be expressed as \begin{widetext} \begin{eqnarray} \Delta N_{kl,\Delta \sigma^2}^{\pm \mp} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} \sqrt{\frac{\bar{N}^+ \bar{N}^-}{n_j^+ n_j^-}} n_{j,kl}^{{\rm sib}\pm \mp} - \frac{1}{\epsilon_{\rm mix}} \sum_{j^{\prime} \neq j^{\prime \prime}} \left[ \sqrt{ \frac{\bar{N}^{\pm} n_{j^{\prime}}^{\mp}}{\bar{N}^{\mp} n_{j^{\prime}}^{\pm}}} + \sqrt{ \frac{\bar{N}^{\mp} n_{j^{\prime \prime}}^{\pm}}{\bar{N}^{\pm} n_{j^{\prime \prime}}^{\mp}}} - \overline{ \sqrt{ \frac{ n_j^+ n_j^-}{\bar{N}^+ \bar{N}^-} } } \right] n_{j^{\prime}k}^{\pm} n_{j^{\prime \prime} l}^{\mp} \label{Eq23} \end{eqnarray} \end{widetext} where the over-lined quantity in the mixed-event summation is averaged over all events $j = 1,2, \cdots \epsilon$. In obtaining the second weight factor in the mixed-event expression summation indices $j^{\prime}$ and $j^{\prime \prime}$ were interchanged. Including the weight factors in Eqs.~(\ref{Eq21}) and (\ref{Eq23}) is essential for eliminating the finite bin-width statistical bias. Note that all weight factors equal unity when the CID event multiplicities are constant. 
The form of Eq.~(\ref{Eq20}) for nonidentified particles suggests the following (simple) CID expression where the sibling-pair term and the single-particle terms with CID labels $a,b$ are written out as \begin{eqnarray} n_{j,kl}^{\rm sib} & = & \sum_{a = \pm} \sum_{b = \pm} n_{j,kl}^{{\rm sib},ab} \label{Eq24} \\ n_{jk} & = & \sum_{a = \pm} n_{jk}^a . \label{Eq25} \end{eqnarray} The resulting, alternate CID form for the correlation is given by \begin{eqnarray} \Delta N_{kl,{\rm alt}}^{ab} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} \frac{\bar{N}}{n_j} n_{j,kl}^{{\rm sib},ab} - \frac{\bar{N} - 1}{\bar{N}} \frac{1}{\epsilon_{\rm mix}} \sum_{j \neq j^{\prime}} n_{jk}^a n_{j^{\prime}l}^b \nonumber \\ \label{Eq26} \end{eqnarray} for $a,b = \pm,\pm$. In Sec.~\ref{SecV} it will be shown that this correlation definition is strongly biased; only the charge-nonidentified form in Eq.~(\ref{Eq20}) is useful. The CERES Collaboration introduced a mean-$p_t$ fluctuation quantity $\sigma^2_{p_t,{\rm dyn,Ceres}}$ in Ref.~\cite{ceres} at about the same time $\Delta \sigma^2_{p_t:m}$ was being developed by Liu, Trainor and Reid. It is algebraically identical to $\Delta \sigma^2_{p_t:m}/\bar{N}$ and therefore leads to the same correlation quantities given in Eqs.~(\ref{Eq21}) and (\ref{Eq23}). \subsection{Correlation derived from $\Phi_{p_t}$} \label{SecIIC} A mean-$p_t$ fluctuation width difference quantity $\Phi_{p_t}$~\cite{phipt} is defined as \begin{eqnarray} \Phi_{p_t} & = & \sqrt{ \overline{Z^2}/\bar{N}} - \sigma_{\hat{p}_t} \label{Eq27} \\ \overline{Z^2} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} \sum_{i,i^{\prime} = 1}^{n_j} \left( p_{t,ji} - \hat{p}_t \right) \left( p_{t,ji^{\prime}} - \hat{p}_t \right) \nonumber \\ & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} n_j^2 \left( \langle p_t \rangle_j - \hat{p}_t \right)^2. 
\label{Eq28} \end{eqnarray} Direct conversion of $\Phi_{p_t}$ into a form proportional to a weighted integral of the correlation is complicated by the square root in Eq.~(\ref{Eq27}) and by the linear form of $\Phi_{p_t}$, which is based on a fluctuation width difference rather than the variance difference used in the preceding sub-section. An approximate quantity can be defined which depends on variance differences similar to that used for $\Delta \sigma^2_{p_t:m}$. Multiplying Eq.~(\ref{Eq27}) by $\left( \sqrt{ \overline{Z^2}/\bar{N}} + \sigma_{\hat{p}_t} \right)$ yields \begin{eqnarray} \Phi_{p_t} \left[\sqrt{ \overline{Z^2}/\bar{N}} + \sigma_{\hat{p}_t} \right] & = & \overline{Z^2}/\bar{N} - \sigma^2_{\hat{p}_t} \label{Eq29} \end{eqnarray} and then substituting from Eq.~(\ref{Eq27}) into the factor on the left-hand side (LHS) results in \begin{eqnarray} \Phi_{p_t} \left[ \Phi_{p_t} + 2\sigma_{\hat{p}_t} \right] & \equiv & 2 \sigma_{\hat{p}_t} \Phi_{p_t}^{(0)} , \label{Eq30} \end{eqnarray} where the RHS defines approximate measure $\Phi_{p_t}^{(0)}$. For heavy-ion collisions $\Phi_{p_t} <\!< \sigma_{\hat{p}_t}$~\cite{phipt} and solving Eq.~(\ref{Eq30}) for $\Phi_{p_t}$ yields the rapidly converging expansion \begin{eqnarray} \Phi_{p_t} & \approx & \Phi_{p_t}^{(0)} \left[1 - \Phi_{p_t}^{(0)}/(2\sigma_{\hat{p}_t}) + \cdots \right] \label{Eq31} \end{eqnarray} when $\Phi_{p_t}^{(0)} <\!< \sigma_{\hat{p}_t}$. From Eqs.~(\ref{Eq28})-(\ref{Eq30}) we obtain \begin{eqnarray} \Phi_{p_t}^{(0)} & = & \left[ \overline{Z^2}/\bar{N} - \sigma^2_{\hat{p}_t} \right] /(2\sigma_{\hat{p}_t}) \nonumber \\ & = & \frac{1}{2\sigma_{\hat{p}_t} \bar{N} \epsilon} \sum_{j=1}^{\epsilon} \left[ n_j^2 \left( \langle p_t \rangle_j - \hat{p}_t \right)^2 -n_j \sigma^2_{\hat{p}_t} \right]. \label{Eq32} \end{eqnarray} Equation~(\ref{Eq32}) can be directly applied to like-sign pairs $(++)$ and $(--)$.
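The defining identity Eq.~(\ref{Eq30}) is exact, and the expansion Eq.~(\ref{Eq31}) converges quickly; both are easy to verify numerically (a sketch with illustrative toy events, exponential spectra plus Gaussian slope fluctuations, not measured data):

```python
import numpy as np

rng = np.random.default_rng(5)
events = [0.2 + rng.exponential(rng.normal(0.5, 0.05), size=rng.integers(20, 61))
          for _ in range(20_000)]

n = np.array([len(e) for e in events], dtype=float)
S = np.array([e.sum() for e in events])
allpt = np.concatenate(events)
pt_hat, sig_hat = allpt.mean(), allpt.std()
Nbar = n.mean()

Z2 = (n ** 2 * (S / n - pt_hat) ** 2).mean()            # Eq. (Eq28)
phi = np.sqrt(Z2 / Nbar) - sig_hat                      # Eq. (Eq27)
phi0 = (Z2 / Nbar - sig_hat ** 2) / (2.0 * sig_hat)     # Eq. (Eq32)
phi_approx = phi0 * (1.0 - phi0 / (2.0 * sig_hat))      # Eq. (Eq31), two terms
identity = phi * (phi + 2.0 * sig_hat) - 2.0 * sig_hat * phi0  # Eq. (Eq30): exactly 0
```

Even for a toy signal with $\Phi_{p_t}/\sigma_{\hat{p}_t} \sim 0.2$, much larger than for heavy-ion data, the two-term expansion reproduces $\Phi_{p_t}$ to a few percent.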
Quantity $\Phi_{p_t}^{(0)}$ includes an additional factor $n_j$ compared to $\Delta \sigma^2_{p_t:m}$ in Eq.~(\ref{Eq11}). The STAR Collaboration adopted the quantity $2\sigma_{\hat{p}_t}\Phi_{p_t}^{(0)}$ for the scale-dependent fluctuation analysis in Refs.~\cite{ptscale,STARscale}. For unlike-sign charged-particle pairs quantity $\overline{Z^2}$ in Eq.~(\ref{Eq28}) is evaluated for $(\pm \mp)$ pairs where the variance $\sigma_{\hat{p}_t}^2$ in Eq.~(\ref{Eq32}) is not included as was the case for $\Delta \sigma^{2,US}_{p_t:m}$ [see Eq.~(\ref{Eq22})], and the scaling factors $\sigma_{\hat{p}_t}$ and $\bar{N}$ are replaced with geometric means as in Eq.~(\ref{Eq22}). The result from Eqs.~(\ref{Eq28}) and (\ref{Eq32}) is \begin{eqnarray} \Phi_{p_t}^{(0)+-} & = & \frac{1}{2\sqrt{\sigma^+_{\hat{p}_t} \sigma^-_{\hat{p}_t} \bar{N}^+ \bar{N}^-} } \nonumber \\ & \times & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} n_j^+ n_j^- \left( \langle p_t^+ \rangle_j - \hat{p}_t^+ \right) \left( \langle p_t^- \rangle_j - \hat{p}_t^- \right). \nonumber \\ \label{Eq33} \end{eqnarray} The LS and US correlations are derived by substituting the explicit summations for $\langle p_t^{\pm} \rangle_j$, $\hat{p}_t^{\pm}$ and $(\sigma^{\pm}_{\hat{p}_t})^2$ into Eqs.~(\ref{Eq32}) and (\ref{Eq33}), collecting sibling and mixed-event pair terms, using the $p_t$ binning in Eqs.~(\ref{Eq17}) and (\ref{Eq18}), and factoring out constants. 
The results are given by \begin{widetext} \begin{eqnarray} \Delta N^{\pm \pm}_{kl,\Phi} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} n_{j,kl}^{{\rm sib}\pm \pm} - \frac{1}{\epsilon_{\rm mix}} \sum_{j^{\prime} \neq j^{\prime \prime}} \left[ \frac{ n_{j^{\prime}}^{\pm} - 1}{\bar{N}^{\pm}} + \frac{ n_{j^{\prime \prime}}^{\pm} - 1}{\bar{N}^{\pm}} - \frac{\overline{n_j^{\pm} (n_j^{\pm} - 1)}}{\bar{N}^{\pm^2}} \right] n_{j^{\prime} k}^{\pm} n_{j^{\prime \prime} l}^{\pm} \label{Eq34} \\ \Delta N^{\pm \mp}_{kl,\Phi} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} n_{j,kl}^{{\rm sib}\pm \mp} - \frac{1}{\epsilon_{\rm mix}} \sum_{j^{\prime} \neq j^{\prime \prime}} \left[ \frac{ n_{j^{\prime}}^{\mp}}{\bar{N}^{\mp}} + \frac{ n_{j^{\prime \prime}}^{\pm}}{\bar{N}^{\pm}} - \frac{\overline{n_j^+n_j^-}}{\bar{N}^+ \bar{N}^-} \right] n_{j^{\prime} k}^{\pm} n_{j^{\prime \prime} l}^{\mp} \label{Eq35} \end{eqnarray} where self-pair terms cancel in this case. The ALICE Collaboration defined a mean-$p_t$ fluctuation quantity $C_{p_t}$\cite{ALICECpt} given by \begin{eqnarray} C_{p_t} & = & \frac{1}{N_{\rm pairs}} \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} \sum_{i \neq i^{\prime} = 1}^{n_j} \left( p_{t,ji} - \hat{p}_t \right) \left( p_{t,ji^{\prime}} - \hat{p}_t \right), \label{Eq36} \end{eqnarray} a variance difference as shown in Ref.~\cite{Trainormeanpt}, where $N_{\rm pairs}$ is the event-average number of particle pairs. The correlations are derived by inserting the expansions for $\hat{p}_t$ and collecting sibling and mixed-event pair terms for like-sign and unlike-sign charged-particle pairs as above. The resulting correlations are the same as those derived for $\Phi^{(0)}_{p_t}$ in Eqs.~(\ref{Eq34}) and (\ref{Eq35}). 
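Per event, the $i \neq i^{\prime}$ pair sum in Eq.~(\ref{Eq36}) equals $(S_j - n_j\hat{p}_t)^2 - \sum_i (p_{t,ji} - \hat{p}_t)^2$ with $S_j = \sum_i p_{t,ji}$, so $C_{p_t}$ can also be evaluated without an explicit pair loop. A sketch with illustrative toy events (the exponential spectrum and Gaussian slope fluctuations are assumptions, not measured inputs):

```python
import numpy as np

rng = np.random.default_rng(6)

def c_pt(events):
    """C_pt of Eq. (Eq36); the i != i' pair sum per event equals
    (S_j - n_j*pt_hat)**2 - sum_i (pt_ji - pt_hat)**2."""
    n = np.array([len(e) for e in events], dtype=float)
    allpt = np.concatenate(events)
    pt_hat = allpt.mean()
    S = np.array([e.sum() for e in events])
    q = np.array([((e - pt_hat) ** 2).sum() for e in events])
    n_pairs = (n * (n - 1.0)).mean()   # event-average number of (ordered) pairs
    return ((S - n * pt_hat) ** 2 - q).mean() / n_pairs

nev = 20_000
stat = [0.2 + rng.exponential(0.5, size=rng.integers(20, 61)) for _ in range(nev)]
fluc = [0.2 + rng.exponential(rng.normal(0.5, 0.05), size=rng.integers(20, 61))
        for _ in range(nev)]
c_stat = c_pt(stat)   # ~ 0 for purely statistical events
c_fluc = c_pt(fluc)   # ~ Var(T) for event-wise slope fluctuations
```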
\subsection{Correlation derived from $\sigma^2_{p_t,{\rm dynamical}}$} \label{SecIID} Mean-$p_t$ fluctuation quantity $\sigma^2_{p_t,{\rm dynamical}}$~\cite{Voloshin} is defined for like-sign and unlike-sign particle pairs, using a variance difference form (see Ref.~\cite{Trainormeanpt}) given by \begin{eqnarray} \sigma^{2\pm \pm }_{p_t,{\rm dynamical}} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} \frac{1}{n_j^{\pm}(n_j^{\pm} - 1)} \sum_{i \neq i^{\prime} = 1}^{n_j^{\pm}} \left( p_{t,ji}^{\pm} - \hat{p}_t^{\pm} \right) \left( p_{t,ji^{\prime}}^{\pm} - \hat{p}_t^{\pm} \right) \label{Eq37} \\ \sigma^{2\pm \mp }_{p_t,{\rm dynamical}} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} \frac{1}{n_j^+ n_j^-} \sum_{i=1}^{n_j^{\pm}} \sum_{i^{\prime} = 1}^{n_j^{\mp}} \left( p_{t,ji}^{\pm} - \hat{p}_t^{\pm} \right) \left( p_{t,ji^{\prime}}^{\mp} - \hat{p}_t^{\mp} \right). \label{Eq38} \end{eqnarray} This quantity is directly proportional to a weighted integral of a two-particle correlation. Following the same steps as in Sec.~\ref{SecIIB} the corresponding correlations are given by \begin{eqnarray} \Delta N^{\pm \pm}_{kl,\sigma-{\rm dyn}} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} \frac{\bar{N}^{\pm^2}}{n_j^{\pm}(n_j^{\pm} - 1)} n_{j,kl}^{{\rm sib}\pm \pm} - \frac{1}{\epsilon_{\rm mix}} \sum_{j^{\prime} \neq j^{\prime \prime}} \left[ \frac{\bar{N}^{\pm}}{n_{j^{\prime}}^{\pm}} + \frac{\bar{N}^{\pm}}{n_{j^{\prime \prime}}^{\pm}} - 1 \right] n_{j^{\prime}k}^{\pm} n_{j^{\prime \prime}l}^{\pm} \label{Eq39} \\ \Delta N^{\pm \mp}_{kl,\sigma-{\rm dyn}} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} \frac{\bar{N}^{\pm}\bar{N}^{\mp}}{n_j^{\pm} n_j^{\mp}} n_{jk}^{\pm} n_{jl}^{\mp} - \frac{1}{\epsilon_{\rm mix}} \sum_{j^{\prime} \neq j^{\prime \prime}} \left[ \frac{\bar{N}^{\pm}}{n_{j^{\prime}}^{\pm}} + \frac{\bar{N}^{\mp}}{n_{j^{\prime \prime}}^{\mp}} - 1 \right] n_{j^{\prime}k}^{\pm} n_{j^{\prime \prime}l}^{\mp}. 
\label{Eq40} \end{eqnarray} \end{widetext} Self-pair terms do not appear in $\sigma^2_{p_t,{\rm dynamical}}$. For like-sign sibling pairs, events with $n_j^{\pm} = 1$ (for which $n_{j,kl}^{{\rm sib}\pm \pm} = 0$) are skipped in the sibling-pair summation but are included in the mixed-event summation. \subsection{Correlation derived from $F_{p_t}$} \label{SecIIE} Mean-$p_t$ fluctuation quantity $F_{p_t}$, developed by the PHENIX Collaboration~\cite{Fpt}, is based on a fluctuation width difference similar to $\Phi_{p_t}$. $F_{p_t}$ is defined by \begin{eqnarray} F_{p_t} & = & \frac{\omega_{p_t,{\rm data}} - \omega_{p_t,{\rm mix}}} {\omega_{p_t,{\rm mix}}} \label{Eq41} \\ \omega_{p_t} & = & \left[ \overline{\langle p_t \rangle^2} - \overline{\langle p_t \rangle}^2 \right]^{1/2} / \overline{\langle p_t \rangle} , \label{Eq42} \end{eqnarray} where $\omega_{p_t}$ is calculated from the measured events (subscript ``data'') or from mixed events (subscript ``mix''). The latter are uncorrelated pseudo-events constructed by sampling from all particles in all events in the collection. For mixed events Eq.~(\ref{Eq42}) becomes \begin{eqnarray} \left( \overline{ \langle p_t \rangle}_{\rm mix} \omega_{p_t,{\rm mix}} \right)^2 & = & \overline{ \langle p_t \rangle^2_{\rm mix}} - \overline{ \langle p_t \rangle}^2_{\rm mix} \nonumber \\ & = & \overline{ \left( \langle p_t \rangle - \overline{ \langle p_t \rangle} \right)^2_{\rm mix} } \label{Eq43} \end{eqnarray} where \begin{eqnarray} \overline{ \langle p_t \rangle}_{\rm mix} & = & \frac{1}{\epsilon^{\prime}} \sum_{j=1}^{\epsilon^{\prime}} \frac{1}{n_j} \sum_{i=1}^{n_j} p_{t,ji} = \sum_m \frac{\epsilon^{\prime}_m}{\epsilon^{\prime}} \frac{1}{m \epsilon^{\prime}_m} \sum_{j=1}^{\epsilon^{\prime}_m} \sum_{i=1}^m p_{t,ji} \nonumber \\ & = & \sum_m \frac{\epsilon^{\prime}_m}{\epsilon^{\prime}} \hat{p}_{t,m} = \hat{p}_t.
\label{Eq44} \end{eqnarray} In Eq.~(\ref{Eq44}) $\epsilon^{\prime}$ is the number of mixed events, $\epsilon^{\prime}_m$ is the number of mixed events having multiplicity $m$, $\hat{p}_{t,m}$ is the inclusive mean-$p_t$ for all mixed events with multiplicity $m$, and in the last step any systematic dependence on multiplicity of the inclusive mean-$p_t$ for the real events is suppressed because each mixed event is composed of a random sample of particles from all events in the collection. For real events, in which $\hat{p}_{t,m}$ may systematically vary with multiplicity, $\overline{ \langle p_t \rangle}_{\rm data} \neq \hat{p}_t$ in general. However, the ratio $\zeta = \hat{p}_t/\overline{ \langle p_t \rangle}_{\rm data}$ is expected to be approximately 1. Continuing from Eq.~(\ref{Eq43}) we obtain \begin{eqnarray} \left( \hat{p}_t \omega_{p_t,{\rm mix}} \right)^2 & = & \frac{1}{\epsilon^{\prime}} \sum_{j=1}^{\epsilon^{\prime}} \left( \langle p_t \rangle_j - \hat{p}_t \right)^2_{\rm mix} \nonumber \\ & & \hspace{-0.8in} = \sum_m \frac{\epsilon^{\prime}_m}{\epsilon^{\prime}} \frac{1}{\epsilon^{\prime}_m} \sum_{j=1}^{\epsilon^{\prime}_m} \left( \langle p_t \rangle_{j,m} - \hat{p}_t \right)^2_{\rm mix} \nonumber \\ & & \hspace{-0.8in} = \sum_m \frac{\epsilon^{\prime}_m}{\epsilon^{\prime}} \sigma^2_{\langle p_t \rangle :m,{\rm mix}} = \sum_{m>0} \frac{\epsilon^{\prime}_m}{\epsilon^{\prime}} \frac{\sigma_{\hat{p}_t}^2}{m} = \overline{m^{-1}} \sigma_{\hat{p}_t}^2 \label{Eq45} \end{eqnarray} where $\sigma^2_{\langle p_t \rangle :m,{\rm mix}}$ is the mean-$p_t$ variance for mixed events with multiplicity $m>0$ (the summation includes only those events with non-vanishing bin content), and in the last line the central limit theorem can be used because the mixed events are uncorrelated. Quantity $\overline{m^{-1}}$ is statistically biased and will be discussed below. 
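The central-limit step in Eq.~(\ref{Eq45}) can be checked numerically. The following minimal Python sketch is an illustration only; the exponential parent spectrum, its parameters, the multiplicity mixture and all names are our choices, not part of the published analysis. It builds uncorrelated pseudo-events and compares the variance of the event-wise mean $p_t$ with the prediction $\overline{m^{-1}}\,\sigma^2_{\hat{p}_t}$:

```python
import random

random.seed(1)

# Illustrative check of Eq. (45): for uncorrelated (mixed) events the variance
# of the event-wise mean p_t equals mean(1/m) times the variance of the
# inclusive single-particle p_t distribution.  All parameter choices are ours.
TEMP = 0.4                          # exponential spectrum "temperature" (GeV/c)
pt_hat, sigma2_pt = TEMP, TEMP**2   # exact mean and variance of the exponential

mults = [10, 20, 40]                # event multiplicities m within the "bin"
n_events = 50000
mean_pts, inv_m = [], []
for _ in range(n_events):
    m = random.choice(mults)
    pts = [random.expovariate(1.0 / TEMP) for _ in range(m)]
    mean_pts.append(sum(pts) / m)
    inv_m.append(1.0 / m)

# LHS of Eq. (45): variance of event-wise <p_t> about the inclusive mean.
var_mean_pt = sum((x - pt_hat) ** 2 for x in mean_pts) / n_events
# RHS of Eq. (45): central-limit prediction mean(1/m) * sigma^2_{p_t-hat}.
clt_prediction = (sum(inv_m) / n_events) * sigma2_pt
```

Because the particles are sampled independently, the agreement holds for any mixture of multiplicities, which is the content of the last line of Eq.~(\ref{Eq45}).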
In order to access the correlation, the width difference form of $F_{p_t}$ must be transformed to a variance difference similar to what was done for $\Phi_{p_t}$ in Sec.~\ref{SecIIC}. This transformation can be accomplished by multiplying $F_{p_t}$ by $\bar{N}^2 \hat{p}_t^2 \omega_{p_t,{\rm mix}} (\omega_{p_t,{\rm data}} + \omega_{p_t,{\rm mix}})$. The result is defined with new symbol ${\cal F}_{p_t}$ where $\bar{N}^2$ is required in order that the resulting expression be proportional to pair number. In Ref.~\cite{Fpt} $F_{p_t}$ for RHIC collision data was found to be of order 2\% which implies that \begin{eqnarray} \omega_{p_t,{\rm data}} & \approx & \omega_{p_t,{\rm mix}} = \left[ \overline{ m^{-1} } \right] ^{1/2} \sigma_{\hat{p}_t} / \hat{p}_t \label{Eq46} \end{eqnarray} using Eq.~(\ref{Eq45}). The above multiplicative factor is then approximately $2 \bar{N}^2 \overline{ m^{-1}} \sigma_{\hat{p}_t}^2$, which is a constant or scaling factor. The result for ${\cal F}_{p_t}$ is given by \begin{eqnarray} {\cal F}_{p_t} & = & \bar{N}^2 \hat{p}_t^2 \left( \omega^2_{p_t,{\rm data}} - \omega^2_{p_t,{\rm mix}} \right) \nonumber \\ & = & \frac{\bar{N}^2 \hat{p}_t^2}{\overline{ \langle p_t \rangle}^2_{\rm data}} \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} \left( \langle p_t \rangle_j - \overline{ \langle p_t \rangle} \right)^2_{\rm data} - \bar{N}^2 \overline{ m^{-1}} \sigma_{\hat{p}_t}^2 \nonumber \\ & = & \bar{N}^2 \zeta^2 \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} \left( \langle p_t \rangle_j - \hat{p}_t/\zeta \right)^2_{\rm data} - \bar{N}^2 \overline{ m^{-1}} \sigma_{\hat{p}_t}^2 \label{Eq47} \end{eqnarray} where the second quantity on the RHS was obtained from Eq.~(\ref{Eq45}). If $\zeta \neq 1$, the statistical bias from the average factor $\overline{ m^{-1}}$ contributes to the final $\Delta N_{kl}$ correlation as an additive bias which may produce significant artifacts. 
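The width-to-variance conversion above rests on the elementary identity $(\omega_{p_t,{\rm data}} - \omega_{p_t,{\rm mix}})(\omega_{p_t,{\rm data}} + \omega_{p_t,{\rm mix}}) = \omega^2_{p_t,{\rm data}} - \omega^2_{p_t,{\rm mix}}$. A minimal numeric sketch, with purely illustrative values (a 2\% width excess as quoted for RHIC data, and arbitrary $\bar{N}$, $\hat{p}_t$):

```python
# Hypothetical illustrative values (not measured data): a 2% width excess,
# as quoted for F_pt at RHIC, and arbitrary N-bar and p_t-hat.
omega_mix = 0.010
omega_data = 1.02 * omega_mix        # corresponds to F_pt = 2%
N_bar, pt_hat = 100.0, 0.5           # GeV/c

F_pt = (omega_data - omega_mix) / omega_mix

# Multiply F_pt by N^2 p^2 omega_mix (omega_data + omega_mix) ...
factor = N_bar**2 * pt_hat**2 * omega_mix * (omega_data + omega_mix)
cal_F_from_factor = F_pt * factor

# ... which is algebraically identical to the variance difference of Eq. (47):
cal_F_direct = N_bar**2 * pt_hat**2 * (omega_data**2 - omega_mix**2)
```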
It is expected that $\zeta \approx 1$ for applications in high-energy heavy-ion collisions, and therefore setting $\zeta = 1$ permits a formal, statistically unbiased correlation to be defined which approximately corresponds to fluctuation quantity ${\cal F}_{p_t}$. By evaluating Eq.~(\ref{Eq47}) for $(++)$ and $(--)$ charged-particle pairs and inserting the summations for $\langle p_t \rangle_j$, $\hat{p}_t$ and $\sigma_{\hat{p}_t}^2$ as in the above derivations, the like-sign correlation quantity can be derived and is given by \begin{eqnarray} \Delta N^{\pm \pm}_{kl,{\rm F}} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} \frac{\bar{N}^{\pm^2}} {n_j^{\pm^2}} n_{j,kl}^{{\rm sib} \pm \pm} \nonumber \\ & & \hspace{-0.8in} - \frac{1}{\epsilon_{\rm mix}} \sum_{j^{\prime} \neq j^{\prime \prime}} \left( \frac{\bar{N}^{\pm}}{n^{\pm}_{j^{\prime}}} + \frac{\bar{N}^{\pm}}{n^{\pm}_{j^{\prime \prime}}} - 1 - \overline{{m^{\pm}}^{-1}} \right) n^{\pm}_{j^{\prime}k} n^{\pm}_{j^{\prime \prime}l} \label{Eq48} \end{eqnarray} where self-pair terms are not included in the correlation. The statistical bias term in the mixed-event sum, proportional to $\overline{{m^{\pm}}^{-1}}$, is equal to $\bar{N}^{\pm^2} \overline{{m^{\pm}}^{-1}} \hat{\bar{n}}^{\pm}_k \hat{\bar{n}}^{\pm}_l$ if multiplicity dependence of the single-particle $p_t$ distribution shape is neglected. This bias term is cancelled by a similar bias term in the sibling-pair sum given by \begin{eqnarray} \bar{N}^{\pm^2} \overline{ \left( \frac{m^{\pm} (m^{\pm} - 1)}{{m^{\pm}}^2} \right) } \hat{\bar{n}}^{{\rm sib}\pm \pm}_{kl} & = & \bar{N}^{\pm^2} \left( 1 - \overline{{m^{\pm}}^{-1}} \right) \hat{\bar{n}}^{{\rm sib}\pm \pm}_{kl} \nonumber \\ & & \hspace{-0.5in} \approx \bar{N}^{\pm^2} \left( 1 - \overline{{m^{\pm}}^{-1}} \right) \hat{\bar{n}}^{\pm}_k \hat{\bar{n}}^{\pm}_l \label{Eq49} \end{eqnarray} where the bias contribution comes from the second term on the RHS.
Multiplicity dependence in the shape of the two-particle distribution is also neglected in Eq.~(\ref{Eq49}). The last line in Eq.~(\ref{Eq49}) represents the limit of zero correlations. For realistic applications with non-vanishing correlations this statistical bias contributes to $\Delta N^{\pm \pm}_{kl,{\rm F}}$. In Sec.~\ref{SecV} the possible significance of this bias will be studied using simulations. For unlike-sign pairs quantity ${\cal F}_{p_t}$, with $\zeta = 1$, becomes \begin{eqnarray} {\cal F}_{p_t}^{\rm US} & = & \frac{\bar{N}^+ \bar{N}^-}{\epsilon} \sum_{j=1}^{\epsilon} \left( \langle p_t^+ \rangle_j - \hat{p}_t^+ \right) \left( \langle p_t^- \rangle_j - \hat{p}_t^- \right). \label{Eq50} \end{eqnarray} The resulting unlike-sign correlation is given by \begin{eqnarray} \Delta N^{\pm \mp}_{kl,{\rm F}} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} \frac{\bar{N}^+ \bar{N}^-}{n_j^+ n_j^-} n_{j,kl}^{{\rm sib} \pm \mp} \nonumber \\ & & \hspace{-0.4in} - \frac{1}{\epsilon_{\rm mix}} \sum_{j^{\prime} \neq j^{\prime \prime}} \left( \frac{\bar{N}^{\pm}}{n^{\pm}_{j^{\prime}}} + \frac{\bar{N}^{\mp}}{n^{\mp}_{j^{\prime \prime}}} -1 \right) n^{\pm}_{j^{\prime}k} n^{\mp}_{j^{\prime \prime}l} \label{Eq51} \end{eqnarray} which is statistically unbiased. \subsection{Correlations derived from $\Delta [P_T,N]$ and $\Sigma [P_T,N]$} \label{SecIIF} The NA49 Collaboration recently published transverse momentum and multiplicity fluctuation measures $\Delta [P_T,N]$ and $\Sigma [P_T,N]$~\cite{NA49}, defined in the present notation by \begin{eqnarray} \Delta [P_T,N] & = & \frac{1}{\bar{N} \omega(p_t)} \left\{ \bar{N} \omega[P_T] - \overline{P}_T \omega[N] \right\}, \label{Eq52} \\ \Sigma [P_T,N] & = & \frac{1}{\bar{N} \omega(p_t)} \left\{ \bar{N} \omega[P_T] + \overline{P}_T \omega[N] \right. \nonumber \\ & - & \left. 2\left[ \overline{P_T N} - \overline{P}_T \bar{N} \right] \right\}. 
\label{Eq53} \end{eqnarray} In these equations $P_{T,j} = \sum_{i=1}^{n_j} p_{t,ji}$ is the event-wise sum of $p_t$ magnitudes over all particles in the $j^{\rm th}$ event. The other symbols are defined as follows: \begin{eqnarray} \overline{P}_T & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} P_{T,j}, \label{Eq54} \\ \overline{P_T N} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} n_j P_{T,j}, \label{Eq55} \\ \omega[P_T] & = & \left( \overline{P_T^2} - \overline{P}_T^2 \right) /\overline{P}_T, \label{Eq56} \\ \omega[N] & = & \left( \overline{N^2} - \bar{N}^2 \right) / \bar{N}, \label{Eq57} \\ \omega(p_t) & = & \sigma^2_{\hat{p}_t} / \hat{p}_t . \label{Eq58} \end{eqnarray} Note that symbol $\omega$ in the above equations from Ref.~\cite{NA49} is proportional to a fluctuation variance, in contrast to the definition in the previous subsection, where $\omega$ was proportional to a fluctuation width. Equation~(\ref{Eq52}) can be simplified by multiplying both the numerator and denominator of the RHS by $\bar{N}\overline{P}_T$. The result is \begin{eqnarray} \overline{P}_T \left( \sigma^2_{\hat{p}_t} / \hat{p}_t \right) \Delta [P_T,N] & = & \left[ \bar{N}^2 \overline{P_T^2} - \overline{N^2} \, \overline{P}_T^2 \right] / \bar{N}^2 . \nonumber \\ \label{Eq59} \end{eqnarray} The correlated particle-pair difference is derived by inserting summations in the numerator of the RHS of this equation, omitting a self-pair term, and binning on 2D transverse momentum. The result is given by \begin{eqnarray} \Delta N_{kl,\Delta} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} n^{\rm sib}_{j,kl} - \frac{\overline{N^2}}{\bar{N}^2} \frac{1}{\epsilon_{\rm mix}} \sum_{j \neq j^{\prime}} n_{jk} n_{j^{\prime}l} .
\label{Eq60} \end{eqnarray} The multiplicity bin-width dependence of $\Delta N_{kl,\Delta}$ can be estimated using the same steps as in Eq.~(\ref{Eq7}) and neglecting possible systematic variations in the shapes of the single- and two-particle distributions on $p_t$ with event multiplicity. The result is \begin{eqnarray} \Delta N_{kl,\Delta} & \approx & \left[ \bar{N} (\bar{N} - 1) + \sigma^2_N \right] \hat{\bar{n}}^{\rm sib}_{kl} - \left( \bar{N}^2 + \sigma^2_N \right) \hat{\bar{n}}_k \hat{\bar{n}}_l \nonumber \\ & = & \left( \bar{N}^2 + \sigma^2_N \right) \left( \hat{\bar{n}}^{\rm sib}_{kl} - \hat{\bar{n}}_k \hat{\bar{n}}_l \right) - \bar{N} \hat{\bar{n}}^{\rm sib}_{kl} . \label{Eq61} \end{eqnarray} The last term on the RHS of Eq.~(\ref{Eq61}) represents an additive bias, i.e., $\Delta N_{kl,\Delta} \neq 0$ in the no-correlation limit, $\hat{\bar{n}}^{\rm sib}_{kl} = \hat{\bar{n}}_k \hat{\bar{n}}_l$. The correlated particle-pair difference for $\Sigma [P_T,N]$ can be derived by following the same steps as above, assuming a large event number, $\epsilon \gg 1$. The result is given by \begin{eqnarray} \Delta N_{kl,\Sigma} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} n^{\rm sib}_{j,kl} - \frac{1}{\epsilon_{\rm mix}} \sum_{j \neq j^{\prime}} \left( \frac{2n_{j^{\prime}}}{\bar{N}} - \frac{\overline{N^2}}{\bar{N}^2} \right) n_{jk} n_{j^{\prime}l} \nonumber \\ \label{Eq62} \\ & \approx & \left[ \bar{N}(\bar{N}-1)+\sigma^2_N \right] \hat{\bar{n}}^{\rm sib}_{kl} -2\left( \bar{N}^2 + \sigma^2_N \right)\hat{\bar{n}}_k \hat{\bar{n}}_l \nonumber \\ & + & \left( \bar{N}^2 + \sigma^2_N \right)\hat{\bar{n}}_k \hat{\bar{n}}_l \nonumber \\ & = & \left( \bar{N}^2 + \sigma^2_N \right) \left( \hat{\bar{n}}^{\rm sib}_{kl} - \hat{\bar{n}}_k \hat{\bar{n}}_l \right) - \bar{N} \hat{\bar{n}}^{\rm sib}_{kl} \label{Eq63} \end{eqnarray} where the second and third equations apply if the shapes of the single- and two-particle $p_t$-distributions do not vary with event multiplicity.
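The additive bias in Eq.~(\ref{Eq61}) can be made concrete with a small Monte Carlo sketch. All assumptions here are ours for illustration: Poisson multiplicity, two $p_t$ bins with fixed occupation probabilities, and particles uncorrelated by construction. In the no-correlation limit the pair difference of Eq.~(\ref{Eq60}) converges to $-\bar{N} \hat{\bar{n}}_k \hat{\bar{n}}_l$ rather than zero:

```python
import math
import random

random.seed(2)

# Sketch of the additive bias in Eq. (61): for uncorrelated events the pair
# difference of Eq. (60) converges to -Nbar * nhat_k * nhat_l, not to zero.
# Bin probabilities and the multiplicity scale are illustrative choices.
N_BAR, PK, PL = 20.0, 0.3, 0.2
EPS = 60000

def poisson(lam):
    """Knuth's Poisson sampler (adequate for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

nk, nl, nn = [], [], []
for _ in range(EPS):
    n = poisson(N_BAR)
    k = l = 0
    for _ in range(n):            # distribute particles over p_t bins k, l
        u = random.random()
        if u < PK:
            k += 1
        elif u < PK + PL:
            l += 1
    nn.append(n); nk.append(k); nl.append(l)

prod = sum(a * b for a, b in zip(nk, nl))
sib = prod / EPS                                      # sibling-pair term
mix = (sum(nk) * sum(nl) - prod) / (EPS * (EPS - 1))  # j != j' mixed pairs
n2_bar = sum(n * n for n in nn) / EPS
n_bar = sum(nn) / EPS
delta_N = sib - (n2_bar / n_bar**2) * mix             # Eq. (60)
bias_prediction = -N_BAR * PK * PL                    # last term of Eq. (61)
```

With these inputs the bin-wise contents are independent Poisson variables, so the measured $\Delta N_{kl,\Delta}$ is pure bias.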
In this limit $\Delta N_{kl,\Delta}$ and $\Delta N_{kl,\Sigma}$ are equal and both are additively biased. The bias can be minimized by selecting multiplicity ranges where $\sigma^2_N \ll \bar{N}^2$ and multiplying the mixed-event pair term by the factor $(\bar{N}-1)/\bar{N}$, or by selecting multiplicity ranges where $\sigma^2_N \approx \bar{N}$ (Poisson limit) and $\bar{N} \gg \sigma_N$. \subsection{Multiplicity-dependent non-statistical fluctuations} \label{SecIIFb} In the preceding subsections two-particle correlations on $(p_{t1},p_{t2})$ were derived from different non-statistical mean-$p_t$ fluctuation quantities. For example, $\Delta \sigma^2_{p_t:m}$ in Eq.~(\ref{Eq15}) is expressed as an average over all events in the centrality bin, given by \begin{eqnarray} \Delta \sigma^2_{p_t:m} & = & \overline{m \sigma^2_{\langle p_t \rangle} - \sigma^2_{\hat{p}_t}} \label{Eq63a} \end{eqnarray} where $m$ is the $(\eta,\phi)$ bin-wise multiplicity and $\sigma^2_{\langle p_t \rangle}$ is the variance of the fluctuating mean-$p_t$ for the events containing $m$ particles. Similar expressions for the non-charge-identified forms of $\Phi_{p_t}^{(0)}$, $\sigma^2_{p_t,{\rm dynamical}}$ and ${\cal F}_{p_t}$ are given by \begin{eqnarray} 2 \sigma_{\hat{p}_t} \Phi_{p_t}^{(0)} & = & \overline{m ( m \sigma^2_{\langle p_t \rangle} - \sigma^2_{\hat{p}_t})}/\bar{N} \label{Eq63b} \\ (\bar{N} - 1) \sigma^2_{p_t,{\rm dynamical}} & = & (\bar{N} - 1) \overline{\left( \frac{m \sigma^2_{\langle p_t \rangle} - \sigma^2_{\hat{p}_t}}{m-1}\right)} \label{Eq63c} \\ {\cal F}_{p_t}/\bar{N} & \approx & \bar{N} \overline{( m \sigma^2_{\langle p_t \rangle} - \sigma^2_{\hat{p}_t})/m} \label{Eq63d} \end{eqnarray} using Eqs.~(\ref{Eq32}), (\ref{Eq37}) and (\ref{Eq47}) and rearranging the event summation as in Eq.~(\ref{Eq15}). In the last equation $\zeta \approx 1$ was assumed for quantity ${\cal F}_{p_t}$.
Non-statistical fluctuations, or two-particle correlations, cause $( m \sigma^2_{\langle p_t \rangle} - \sigma^2_{\hat{p}_t})$ to be non-zero and to depend on the $(\eta,\phi)$ bin-wise multiplicity. Defining \begin{eqnarray} f(m) & \equiv & m \sigma^2_{\langle p_t \rangle} - \sigma^2_{\hat{p}_t} \label{Eq63e} \end{eqnarray} each quantity can be written as a multiplicity-weighted average of $f(m)$, given by \begin{eqnarray} \Delta \sigma^2_{p_t:m} & = & \overline{f(m)} \label{Eq63f} \\ 2 \sigma_{\hat{p}_t} \Phi_{p_t}^{(0)} & = & \overline{mf(m)}/\bar{N} \label{Eq63g} \\ (\bar{N} - 1) \sigma^2_{p_t,{\rm dynamical}} & = & (\bar{N} - 1) \overline{ f(m)/(m-1)} \label{Eq63h} \\ {\cal F}_{p_t}/\bar{N} & \approx & \bar{N} \overline{f(m)/m} \label{Eq63i} \end{eqnarray} where the explicit dependence of $f(m)$ on multiplicity is determined by the dynamical processes which produce the non-statistical fluctuations. For example, event-wise fluctuations in global temperature involve all particles, such that the number of correlated particle pairs, $\Delta{N}_{kl}$, is proportional to $m^2$, and $\Delta \sigma^2_{p_t:m}$, being proportional to the number of correlated pairs per final-state particle [see Eq.~(\ref{Eq19})], is proportional to $m$. For this example $f(m) \propto m$, and the averages over finite-width multiplicity bins in Eqs.~(\ref{Eq63f}) - (\ref{Eq63i}) above result in a bias (results depend on bin width) for $\Phi_{p_t}^{(0)}$ but not for the other three quantities. The bias in $\Phi_{p_t}^{(0)}$ occurs because $\overline{mf(m)} \propto \overline{m^2} = \bar{N}^2 + \sigma^2_N$, which depends on the bin width through $\sigma^2_N$. If the number of correlated particle pairs is proportional to $m$, then $f(m)$ is constant and quantities $\sigma^2_{p_t,{\rm dynamical}}$ and ${\cal F}_{p_t}$ are biased while $\Delta \sigma^2_{p_t:m}$ and $\Phi_{p_t}^{(0)}$ are not.
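These weightings can be compared directly. The sketch below evaluates Eqs.~(\ref{Eq63f})-(\ref{Eq63i}) for the two limiting cases over a uniform illustrative multiplicity bin, $m = 10$--$30$ (our choice, not taken from data). Note that for $f(m) \propto m$ the residual bias in the $\sigma^2_{p_t,{\rm dynamical}}$ form is suppressed by $\sim 1/\bar{N}$ relative to the leading bias in $\Phi_{p_t}^{(0)}$, consistent with the leading-order statements above:

```python
# Exact bin averages over a finite-width multiplicity bin, illustrating the
# f(m) weightings of Eqs. (63f)-(63i).  The uniform bin m = 10..30 is an
# illustrative choice (mean Nbar = 20), not taken from the data.
ms = list(range(10, 31))

def bin_avg(g):
    return sum(g(m) for m in ms) / len(ms)

NBAR = bin_avg(lambda m: m)     # = 20.0 for this bin

def measures(f):
    """The four fluctuation measures as weighted averages of f(m)."""
    return {
        "dsig2": bin_avg(f),                                      # Eq. (63f)
        "phi":   bin_avg(lambda m: m * f(m)) / NBAR,              # Eq. (63g)
        "dyn":   (NBAR - 1) * bin_avg(lambda m: f(m) / (m - 1)),  # Eq. (63h)
        "calF":  NBAR * bin_avg(lambda m: f(m) / m),              # Eq. (63i)
    }

pairs_m2 = measures(lambda m: float(m))   # pairs ~ m^2  =>  f(m) ~ m
pairs_m1 = measures(lambda m: 1.0)        # pairs ~ m    =>  f(m) constant
```

In the first case the unbiased value is $f(\bar{N}) = 20$ and only the $\Phi_{p_t}^{(0)}$ form departs strongly from it; in the second case the unbiased value is 1 and the $\sigma^2_{p_t,{\rm dynamical}}$ and ${\cal F}_{p_t}$ forms are the biased ones.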
If an analysis of data were focused on a specific dynamical process which was known to produce a certain $f(m)$, then the set of possible correlation quantities could be ranked with respect to optimal suppression of the above bias effect. For practical analysis of data from relativistic heavy-ion collisions multiple dynamical processes contribute to the non-statistical fluctuations and those processes are expected to follow different functions of multiplicity, for example the number of nucleon participants or the number of binary nucleon-nucleon collisions~\cite{axialCI}. Dynamical processes also depend on the charge combination of particle-pairs (e.g. in hadronization) and the location in $(p_{t1},p_{t2})$ space (e.g. for soft versus semi-hard processes). Given this complexity it is preferable to evaluate the bias caused by multiplicity dependent non-statistical fluctuations, or correlations, by using realistic estimates of those correlations on $(p_{t1},p_{t2})$. This is the approach followed here and discussed in detail in Secs.~\ref{SecIII} and \ref{SecIV}. \subsection{Normalized covariance} \label{SecIIG} In the preceding subsections the bin-wise number of correlated particle pairs $\Delta N_{kl}$ was calculated. In terms of particle densities $\Delta N_{kl}$ is proportional to $C(p_{t1},p_{t2})$ in Eq.~(\ref{Eq3}) which can be expressed as \begin{eqnarray} \hat\rho (p_{t1},p_{t2}) & = & \hat\rho(p_{t1}) \hat\rho(p_{t2}) + C(p_{t1},p_{t2}) \nonumber \\ & & \hspace{-0.6in} = \hat\rho(p_{t1}) \hat\rho(p_{t2}) r(p_{t1},p_{t2}) \nonumber \\ & & \hspace{-0.6in} = \hat\rho(p_{t1}) \hat\rho(p_{t2}) \left\{ 1 +\left[ r(p_{t1},p_{t2}) - 1 \right] \right\}, \label{Eq64} \end{eqnarray} where the correlated pair density is given by \begin{eqnarray} C(p_{t1},p_{t2}) & = & \hat\rho(p_{t1}) \hat\rho(p_{t2})\left[ r(p_{t1},p_{t2}) - 1 \right]. 
\nonumber \\ \label{Eq65} \end{eqnarray} Quantities $C$ and $\Delta N_{kl}$ therefore include a trivial dependence on multiplicity squared, which is easily removed by dividing by $\hat\rho(p_{t1}) \hat\rho(p_{t2})$. Furthermore, tracking inefficiency and detector acceptance effects cancel in this ratio if the product $\hat\rho(p_{t1}) \hat\rho(p_{t2})$ is calculated using the same data which were used for $C(p_{t1},p_{t2})$. In heavy-ion collisions, quantum interference between identical particles in the final state produces correlations which scale with the number of identical-particle pairs~\cite{HBT}. The per-pair ratio $C(p_{t1},p_{t2})/\hat\rho(p_{t1}) \hat\rho(p_{t2})$ is therefore approximately constant with increasing centrality. All other processes which are expected to produce correlations (see Sec.~\ref{SecI}) scale with either the number of participating nucleons, the number of binary nucleon + nucleon collisions, or the number of final-state particles. A per final-state particle ratio~\cite{Trainormeanpt} is therefore more appropriate for studying the centrality dependence of most correlation structures other than that caused by final-state quantum interference. In the statistics literature Pearson's normalized covariance~\cite{Pearson}, given by \begin{eqnarray} \frac{\overline{(n_k - \bar{n}_k)(n_l - \bar{n}_l)}} {\sqrt{\sigma^2_{n_k} \sigma^2_{n_l}}} & = & \frac{\overline{n_k n_l} - \bar{n}_k \bar{n}_l}{\sigma_{n_k} \sigma_{n_l}} \nonumber \\ & \approx & \frac{\overline{n_k n_l} - \bar{n}_k \bar{n}_l}{\sqrt{\bar{n}_k \bar{n}_l}}, \label{Eq66} \end{eqnarray} provides the necessary per final-state particle correlation measure by using the geometric-mean particle number in the denominator, where $n_k$ and $n_l$ are the event-wise numbers of particles in bins $k$ and $l$. In Eq.~(\ref{Eq66}) overlines indicate event averages within the event collection, and $\sigma_{n_k}^2$ is the variance of the event-wise multiplicity frequency distribution in bin $k$.
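As a concrete illustration of Eq.~(\ref{Eq66}), the sketch below (toy bin contents; the numbers are invented for illustration) evaluates Pearson's normalized covariance in both the central-moment and product-moment forms, together with the Poisson-limit approximation used in the last step:

```python
# Toy event-wise multiplicities in two p_t bins k and l (illustrative numbers).
nk = [3, 5, 4, 6, 2, 4, 5, 3, 4, 4]
nl = [2, 2, 3, 4, 1, 3, 2, 2, 3, 3]
eps = len(nk)

mk = sum(nk) / eps
ml = sum(nl) / eps
# Covariance written two equivalent ways (the identity used in Eq. (66)):
cov_central = sum((a - mk) * (b - ml) for a, b in zip(nk, nl)) / eps
cov_moment  = sum(a * b for a, b in zip(nk, nl)) / eps - mk * ml

# Pearson's normalized covariance, and the Poisson-limit approximation
# sigma^2 ~ nbar used in the last step of Eq. (66):
sk = (sum((a - mk) ** 2 for a in nk) / eps) ** 0.5
sl = (sum((b - ml) ** 2 for b in nl) / eps) ** 0.5
pearson = cov_moment / (sk * sl)
poisson_approx = cov_moment / (mk * ml) ** 0.5
```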
In the last step in Eq.~(\ref{Eq66}) the bin-wise multiplicity frequency distributions are assumed to be accurately represented by Poisson distributions with means $\bar{n}_k$ and $\bar{n}_l$. In order to cancel efficiency and acceptance effects, applications of Eq.~(\ref{Eq66})~\cite{axialCI} take the form \begin{eqnarray} \frac{\overline{n_k n_l} - \bar{n}_k \bar{n}_l}{\sqrt{\bar{n}_k \bar{n}_l}} & = & \sqrt{\bar{n}_k \bar{n}_l} \left[ \frac{\overline{n_k n_l} - \bar{n}_k \bar{n}_l}{\bar{n}_k \bar{n}_l} \right] \label{Eq67} \end{eqnarray} where the ratio in square brackets is obtained from the data and the leading square root, or prefactor (see Sec.~\ref{SecIIA}), is calculated from a model representation of the efficiency-corrected product of single-particle yields. For transverse momentum correlations the prefactor $\sqrt{\rho_{\rm ref}}$ is given by \begin{eqnarray} \sqrt{\rho_{\rm ref}(p_{t1},p_{t2})} & = & \left[ \frac{d^2N_{\rm ch}}{dp_{t1} d\eta_1} \frac{d^2N_{\rm ch}}{dp_{t2} d\eta_2} \right] ^{1/2} \label{Eq68} \end{eqnarray} where $N_{\rm ch}$ includes all charged particles within the $p_t$, $\Delta\eta$ and $\Delta\phi$ acceptance. Parton fragmentation into jets is of considerable interest in the analysis of heavy-ion collision data. It has been shown that jet fragment distributions scale with transverse rapidity $y_t$~\cite{Tomjetfrag}, defined by \begin{eqnarray} y_t & = & \ln [(p_t + m_t)/m_0], \label{Eq69} \end{eqnarray} where $m_t = \sqrt{p_t^2 + m_0^2}$ and the arbitrary mass $m_0$, which regulates the singularity at $p_t = 0$, is set equal to the pion mass when non-identified particles are used in the analysis and to the true particle mass when the particle species is identified.
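The $y_t$ transform and its Jacobian can be sketched as follows (function names are ours; the pion-mass default mirrors the non-identified-particle convention above). For this definition $dp_t/dy_t = m_t$ holds identically, which is the replacement used in Eq.~(\ref{Eq70}):

```python
import math

M_PI = 0.1396  # pion mass in GeV/c^2, used for non-identified particles

def yt_from_pt(pt, m0=M_PI):
    """Transverse rapidity y_t = ln[(p_t + m_t)/m_0], Eq. (69)."""
    mt = math.sqrt(pt * pt + m0 * m0)
    return math.log((pt + mt) / m0)

def pt_from_yt(yt, m0=M_PI):
    """Inverse transform p_t = m_0 sinh(y_t)."""
    return m0 * math.sinh(yt)

# Round trip and Jacobian check: dp_t/dy_t = m_t for this definition.
pt = 0.5
yt = yt_from_pt(pt)
h = 1e-6
jacobian = (pt_from_yt(yt + h) - pt_from_yt(yt - h)) / (2 * h)
mt = math.sqrt(pt**2 + M_PI**2)
```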
The single-particle distribution on $y_t$ is given by \begin{eqnarray} \frac{d^2N_{\rm ch}}{dy_t d\eta} & = & \frac{dp_t}{dy_t} \frac{d^2N_{\rm ch}}{dp_t d\eta} \approx m_t \frac{d^2N_{\rm ch}}{dp_t d\eta} \label{Eq70} \end{eqnarray} where $p_t = m_0\sinh (y_t)$ and in the last step $dp_t/dy_t = m_t$ at mid-rapidity ($\eta = 0$) is assumed. In the present application 2D transverse rapidity correlations will be calculated, where the final quantity, $\Delta\rho/\sqrt{\rho_{\rm ref}}$, is given by \begin{eqnarray} \frac{\Delta\rho}{\sqrt{\rho_{\rm ref}}} (y_{t,k},y_{t,l}) & = & \left[ \frac{d^2N_{\rm ch}}{dy_{t,k} d\eta} \frac{d^2N_{\rm ch}}{dy_{t,l} d\eta} \right] ^{1/2} \frac{\Delta N_{kl}}{N^{\rm mix}_{kl}}. \label{Eq71} \end{eqnarray} Pair differences $\Delta N_{kl}$ for like-sign and unlike-sign particle pairs, for all the various methods derived here, are given in the preceding subsections. The mixed-event particle-pair averages $N^{\rm mix}_{kl}$ are given by the second terms on the right-hand sides of Eqs.~(\ref{Eq21}), (\ref{Eq23}), (\ref{Eq26}), (\ref{Eq34}), (\ref{Eq35}), (\ref{Eq39}), (\ref{Eq40}), (\ref{Eq48}) and (\ref{Eq51}) for each mean-$p_t$ fluctuation quantity considered here. Sibling-pair averages $N^{\rm sib}_{kl}$ are given by the first terms on the right-hand sides in the preceding list of equations. The prefactor in Eqs.~(\ref{Eq68}) and (\ref{Eq71}) applies when all charged-particle pairs in the acceptance are used in the correlations. The number of like-sign or unlike-sign pairs is one-half of the total, assuming the numbers of positive and negative charged particles are equal, which is approximately true for relativistic heavy-ion collisions~\cite{STARspectra,PHENIXspectra,ALICEspectra}. The appropriate prefactor for these cases is $\sqrt{\rho_{\rm ref}/2}$.
In Sec.~\ref{SecIV} Monte Carlo simulations are described for each mean-$p_t$ fluctuation quantity in which LS, US, charge-independent (CI) and charge-dependent (CD) correlations, plus alternate CI and CD forms are included. For each correlation form, charged-particle pair combinations $(a,b) = (++)$, $(--)$, $(+-)$ and $(-+)$ are calculated and combined to give the LS, US, CI and CD combinations. Those combinations and the corresponding prefactors are listed in the following equations: \begin{eqnarray} \frac{\Delta\rho}{\sqrt{\rho_{\rm ref}}} ({\rm LS}) & = & \frac{\sqrt{\rho_{\rm ref}}}{2\sqrt{2}} \sum_{ab=++,--} \left[ \frac{\Delta N_{kl}}{N^{\rm mix}_{kl}} \right]_{ab} \label{Eq72} \\ \frac{\Delta\rho}{\sqrt{\rho_{\rm ref}}} ({\rm US}) & = & \frac{\sqrt{\rho_{\rm ref}}}{2\sqrt{2}} \sum_{ab=+-,-+} \left[ \frac{\Delta N_{kl}}{N^{\rm mix}_{kl}} \right]_{ab} \label{Eq73} \\ \frac{\Delta\rho}{\sqrt{\rho_{\rm ref}}} ({\rm CI}) & = & \frac{\sqrt{\rho_{\rm ref}}}{4} \sum_{a,b=\pm,\pm} \left[ \frac{\Delta N_{kl}}{N^{\rm mix}_{kl}} \right]_{ab} \label{Eq74} \\ \frac{\Delta\rho}{\sqrt{\rho_{\rm ref}}} ({\rm CI,alt}) & = & \sqrt{\rho_{\rm ref}} \frac{\sum_{a,b=\pm,\pm} \Delta N^{ab}_{kl}} {\sum_{a,b=\pm,\pm} N^{{\rm mix},ab}_{kl}} \label{Eq75} \\ \frac{\Delta\rho}{\sqrt{\rho_{\rm ref}}} ({\rm CD}) & = & \nonumber \\ & & \hspace{-0.9in} \sqrt{\rho_{\rm ref}} \frac{ \left( N^{\rm sib,++}_{kl} + N^{\rm sib,--}_{kl} \right) - \left( N^{\rm sib,+-}_{kl} + N^{\rm sib,-+}_{kl} \right) } {\sum_{a,b=\pm,\pm} N^{{\rm mix},ab}_{kl}} \nonumber \\ \label{Eq76} \\ \frac{\Delta\rho}{\sqrt{\rho_{\rm ref}}} ({\rm CD,alt}) & = & \nonumber \\ & & \hspace{-0.9in} \sqrt{\rho_{\rm ref}} \frac{ \left( \Delta N_{kl}^{++} + \Delta N_{kl}^{--} \right) - \left( \Delta N_{kl}^{+-} + \Delta N_{kl}^{-+} \right) } {\sum_{a,b=\pm,\pm} N^{{\rm mix},ab}_{kl}} \nonumber \\ \label{Eq77} \end{eqnarray} where summation indices $a,b$ denote charged-particle pair combinations. 
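A minimal sketch of the charge-combination bookkeeping in Eqs.~(\ref{Eq72})-(\ref{Eq74}); the input ratios and $\rho_{\rm ref}$ below are placeholder numbers, not data. With these prefactors the CI combination equals $({\rm LS}+{\rm US})/\sqrt{2}$:

```python
import math

def charge_combinations(ratios, rho_ref):
    """LS, US and CI combinations of Eqs. (72)-(74).  `ratios` maps each
    charge-pair combination to its measured Delta N / N_mix value."""
    pref = math.sqrt(rho_ref)
    ls = pref / (2 * math.sqrt(2)) * (ratios["++"] + ratios["--"])
    us = pref / (2 * math.sqrt(2)) * (ratios["+-"] + ratios["-+"])
    ci = pref / 4.0 * sum(ratios.values())
    return ls, us, ci

# Illustrative placeholder values only:
r = {"++": 0.010, "--": 0.012, "+-": 0.020, "-+": 0.018}
ls, us, ci = charge_combinations(r, rho_ref=100.0)
```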
\section{Systematic bias} \label{SecIII} In addition to the pair-counting statistical bias caused by finite centrality bin widths, systematic variations in the shapes of the single-particle distribution and the true correlation itself within the range of the centrality bin lead to bias in the correlation quantities. Systematic bias due to shape variation in the single-particle distribution occurs because: (1) mixed-event particle pairs are selected from arbitrary pairs of events within the centrality bin, where each mixed-event particle pair may sample two different parent distributions;~\footnote{A parent distribution is the infinite statistics limit of a measured distribution which in this analysis is approximated by a mathematical function. Finite statistics random samples of the parent distribution produce fluctuating event-wise distributions.} (2) sibling pairs from a single event sample the same parent distribution, while sibling pairs from other events may sample different parent distributions; (3) ``cross-terms'' in the mixed-event pairs, where different parent distributions are sampled, have no corresponding terms in the sibling pairs, giving rise to non-vanishing contributions in the absence of true correlations. Systematic bias due to multiplicity-dependent shape variations in the true correlation is a matter of definition. Here, the goal is to measure the true correlation at a fixed multiplicity or centrality. The amount by which the measured correlation, when averaged over the finite centrality bin width, differs from the true correlation at the mid-point of the bin is considered a bias. On the other hand, if the goal is to measure the average correlation within the finite centrality bin, then the bias, if any, will depend on the averaging method. The effects of these sources of systematic bias will be discussed in Sec.~\ref{SecIV} in terms of Monte Carlo simulations.
In this section we illustrate the origin of systematic bias by calculating systematic contributions to the $\Delta\sigma^2_{p_t:m}$ based correlation to leading order, where for simplicity, charge identification is ignored. The purpose of the analytical treatment in this section is to provide an intuitive understanding of the above two sources of systematic bias. From Eq.~(\ref{Eq21}) the sibling-pair number can be written as \begin{eqnarray} N^{\rm sib}_{kl,\Delta\sigma^2} & = & \frac{1}{\epsilon} \sum_{j=1}^{\epsilon} \frac{\bar{N}}{n_j} n^{\rm sib}_{j,kl} = \sum_m \frac{\epsilon_m}{\epsilon} \frac{\bar{N}}{m} \frac{1}{\epsilon_m} \sum_{j=1}^{\epsilon_m} n^{\rm sib}_{jm,kl} \nonumber \\ & = & \sum_m \frac{\epsilon_m}{\epsilon} \frac{\bar{N}}{m} \bar{n}^{\rm sib}_{m,kl} = \sum_m \frac{\epsilon_m}{\epsilon} \frac{\bar{N}}{m} m(m-1) \hat{\bar{n}}^{\rm sib}_{m,kl} \nonumber \\ \label{Eq78} \end{eqnarray} where, as in Eq.~(\ref{Eq7}), $m$ is a multiplicity within the finite bin, $\bar{N}$ is the mean multiplicity, $\epsilon_m$ is the number of events having multiplicity $m$, $\bar{n}^{\rm sib}_{m,kl}$ is the average sibling-pair histogram for all $\epsilon_m$ events, and $\hat{\bar{n}}^{\rm sib}_{m,kl}$ is the unit-normalized distribution on $y_t$ bins $(k,l)$ where $\sum_{k,l} \hat{\bar{n}}^{\rm sib}_{m,kl} = 1$. The sibling-pair distribution consists of an uncorrelated (factorized) part plus a non-factorized correlation, which is written for the unit-normalized distributions as \begin{eqnarray} \hat{\bar{n}}^{\rm sib}_{m,kl} & = & \hat{\bar{n}}_{mk} \hat{\bar{n}}_{ml} + C_{m,kl} \label{Eq79} \end{eqnarray} where $\hat{\bar{n}}_{mk}$ is the average, unit-normalized single-particle distribution on $y_t$ bin $k$ for events having multiplicity $m$, where $\sum_k \hat{\bar{n}}_{mk} = 1$. True correlation quantity $C_{m,kl}$ is defined such that $\sum_{kl}C_{m,kl} = 0$. 
We consider the possibility that the shapes of both the single-particle distribution and the true correlation vary with multiplicity $m$, and therefore express these quantities as \begin{eqnarray} \hat{\bar{n}}_{mk} & = & \hat{\bar{n}}_{k}^{(0)} + \delta \hat{\bar{n}}_{mk}, \label{Eq80} \\ C_{m,kl} & = & C_{kl}^{(0)} + \delta C_{m,kl} \label{Eq81} \end{eqnarray} where $\hat{\bar{n}}_{k}^{(0)}$ is the single-particle distribution at the mid-point of the bin and $C_{kl}^{(0)}$ is the true correlation at a fixed multiplicity which is the primary object to be measured. Substituting Eqs.~(\ref{Eq79}) - (\ref{Eq81}) into Eq.~(\ref{Eq78}) yields \begin{widetext} \begin{eqnarray} N^{\rm sib}_{kl,\Delta\sigma^2} & = & \bar{N}(\bar{N}-1) \left[ \hat{\bar{n}}_{k}^{(0)} \hat{\bar{n}}_{l}^{(0)} + C_{kl}^{(0)} \right] + \bar{N} \sum_m \frac{\epsilon_m}{\epsilon} (m-1) \left[ \hat{\bar{n}}_{k}^{(0)} \delta \hat{\bar{n}}_{ml} + \hat{\bar{n}}_{l}^{(0)} \delta \hat{\bar{n}}_{mk} + \delta \hat{\bar{n}}_{mk} \delta \hat{\bar{n}}_{ml} + \delta C_{m,kl} \right]. \label{Eq82} \end{eqnarray} Similarly the mixed-event pair distribution from Eq.~(\ref{Eq21}) is expressed as \begin{eqnarray} N^{\rm mix}_{kl,\Delta\sigma^2} & = & \bar{N}(\bar{N}-1) \hat{\bar{n}}_{k}^{(0)} \hat{\bar{n}}_{l}^{(0)} + (\bar{N}-1) \sum_m \frac{\epsilon_m}{\epsilon} m \left( \hat{\bar{n}}_{k}^{(0)} \delta \hat{\bar{n}}_{ml} + \hat{\bar{n}}_{l}^{(0)} \delta \hat{\bar{n}}_{mk} \right) \nonumber \\ & + & \frac{\bar{N}-1}{\bar{N}} \left( \sum_m \frac{\epsilon_m}{\epsilon} m \delta \hat{\bar{n}}_{mk} \right) \left( \sum_{m^{\prime}} \frac{\epsilon_{m^{\prime}}}{\epsilon} {m^{\prime}} \delta \hat{\bar{n}}_{{m^{\prime}}l} \right). 
\label{Eq83} \end{eqnarray} The correlated pair-difference is given by \begin{eqnarray} \Delta N_{kl,\Delta\sigma^2} & = & \bar{N}(\bar{N}-1) C_{kl}^{(0)} + \sum_m \frac{\epsilon_m}{\epsilon} (m-\bar{N}) \left( \hat{\bar{n}}_{k}^{(0)} \delta \hat{\bar{n}}_{ml} + \hat{\bar{n}}_{l}^{(0)} \delta \hat{\bar{n}}_{mk} \right) \nonumber \\ & + & \sum_m \frac{\epsilon_m}{\epsilon} \bar{N} (m-1) \left( \delta \hat{\bar{n}}_{mk} \delta \hat{\bar{n}}_{ml} + \delta C_{m,kl} \right) - \frac{\bar{N}-1}{\bar{N}} \left( \sum_m \frac{\epsilon_m}{\epsilon} m \delta \hat{\bar{n}}_{mk} \right) \left( \sum_{m^{\prime}} \frac{\epsilon_{m^{\prime}}}{\epsilon} {m^{\prime}} \delta \hat{\bar{n}}_{{m^{\prime}}l} \right). \label{Eq84} \end{eqnarray} \end{widetext} Leading-order estimates of the bias contributions in Eq.~(\ref{Eq84}) are calculated as follows. We define $\delta_m \equiv m - \bar{N}$ as in Eq.~(\ref{Eq7}) and expand the change in shape of the single-particle distribution as \begin{eqnarray} \delta \hat{\bar{n}}_{mk} & \approx & \frac{\partial \hat{\bar{n}}_{mk}} {\partial m} \mid _{m=\bar{N}} \delta_m + \cdots \nonumber \\ & \equiv & f_k \delta_m + \cdots , \label{Eq85} \end{eqnarray} and the change in shape of the true correlations as \begin{eqnarray} \delta C_{m,kl} & = & \frac{\partial C_{m,kl}} {\partial m} \mid _{m=\bar{N}} \delta_m + \cdots \nonumber \\ & \equiv & g_{kl} \delta_m + \cdots ~. \label{Eq86} \end{eqnarray} The first-order expansion for the correlated pair-difference is derived by substituting the leading terms in Eqs.~(\ref{Eq85}) and (\ref{Eq86}) into Eq.~(\ref{Eq84}), replacing $m$ with $(\bar{N} + \delta_m)$, and retaining terms only through order $(\delta_m)^2$. 
The final result is given by \begin{eqnarray} \Delta N_{kl,\Delta\sigma^2} & \approx & \bar{N}(\bar{N}-1) C_{kl}^{(0)} + \left( f_l \hat{\bar{n}}_{k}^{(0)} + f_k \hat{\bar{n}}_{l}^{(0)} \right) \sigma^2_N \nonumber \\ & & \hspace{-0.5in} + \bar{N}(\bar{N}-1) f_k f_l \sigma^2_N \left( 1 - \sigma^2_N / \bar{N}^2 \right) + \bar{N} g_{kl} \sigma^2_N \label{Eq87} \end{eqnarray} where $\sigma^2_N = \sum_m (\epsilon_m/\epsilon) (\delta_m)^2$ is the variance of the finite-width multiplicity bin. In the limit of zero multiplicity bin width, quantity $\Delta N_{kl,\Delta\sigma^2}$ is proportional to the true correlation at a specific multiplicity. However, in general, Eq.~(\ref{Eq87}) shows that linear variations alone in the single-particle distribution and/or correlation shapes are sufficient to produce systematic bias. This bias occurs for the LS and US forms of all correlation quantities in Sec.~\ref{SecII}. In the next section, Monte Carlo simulations are used to study a variety of realistic systematic variations in the shapes of the input distributions. These variations are general and are not limited to the linear terms assumed in this section. \section{Monte Carlo simulations} \label{SecIV} Simulations were done for the correlation quantities derived in Sec.~\ref{SecII} using realistic multiplicity frequency distributions, centrality-bin widths, charged-particle $p_t$ spectra, and transverse momentum correlations in order to estimate realistic levels of statistical and systematic biases. The simulated collision system consists of minimum-bias (unrestricted, random nucleus + nucleus impact parameters) Au + Au collisions at $\sqrt{s_{NN}}$ = 200~GeV per colliding nucleon + nucleon pair.
Collisions were selected for peripheral, mid-central and more-central nuclear overlap corresponding to total reaction cross section ranges~\cite{axialCI} 84-93\%, 55-64\% and 9-18\%, respectively, where, for example, the 5\% most-central (head-on) collisions account for the 0-5\% range of the multiplicity frequency distribution. Charged-particle production for $p_t > 0.15$~GeV/$c$, $|\eta| \leq 1$ and full $2\pi$ range in azimuth was assumed corresponding to the acceptance of the STAR TPC~\cite{STARTPC}. Monte Carlo Glauber model~\cite{MCG} estimates of the number of nucleon participants ($N_{\rm part}$) were taken from Ref.~\cite{axialCI}. The minimum-bias multiplicity frequency distribution for relativistic heavy-ion collisions is accurately approximated by the power-law function~\cite{MCG} \begin{eqnarray} \frac{dN_{\rm events}}{dN_{\rm ch}} & \propto & N_{\rm ch}^{-3/4} \label{Eq88} \end{eqnarray} except near the lower and upper multiplicity end-points where the measured distribution falls off rapidly. For the selected centralities the multiplicity ranges are [2, 14], [66, 117] and [644, 910] with mean multiplicities $\bar{N}_{\rm ch}$ = 6.6, 89.7 and 771, respectively~\cite{MikeThesis}. Measurements~\cite{Prabhat} of the frequency distribution on charge difference $n_{\Delta} \equiv n^+ - n^-$ for positive $(n^+)$ and negative $(n^-)$ charged-particle multiplicities ($N_{\rm ch} = n^+ + n^-$) for all centralities indicate an approximate Gaussian dependence, $\exp[-(n_{\Delta} - \bar{n}_{\Delta})^2/2\sigma^2_{n_{\Delta}}]$, where the mean $(\bar{n}_{\Delta})$ and width $(\sigma_{n_{\Delta}})$ increase monotonically with centrality. 
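The power-law form of Eq.~(\ref{Eq88}) can be sampled on a finite centrality bin by inverse-CDF transformation, since its cumulative distribution grows as $N_{\rm ch}^{1/4}$. The following Python sketch (not the analysis code) uses the mid-central bin limits [66, 117] and the mid-central $n_{\Delta}$ parameters from Table~\ref{TableI}; the integer rounding and resampling conventions are illustrative choices.

```python
import random

def sample_nch(n_min, n_max):
    """Inverse-CDF draw from dN/dN_ch proportional to N_ch^(-3/4), Eq. (88),
    restricted to [n_min, n_max]; the CDF grows as N_ch^(1/4)."""
    a, b = n_min**0.25, n_max**0.25
    return int(round((a + random.random()*(b - a))**4))

def split_charges(n_ch, nbar_d, sigma_d):
    """Draw n_Delta = n+ - n- from the approximately Gaussian charge-difference
    distribution and return (n+, n-); resample until the split is consistent
    (|n_Delta| <= N_ch and matching parity)."""
    while True:
        n_d = round(random.gauss(nbar_d, sigma_d))
        if abs(n_d) <= n_ch and (n_ch + n_d) % 2 == 0:
            return (n_ch + n_d)//2, (n_ch - n_d)//2

random.seed(1)
events = [sample_nch(66, 117) for _ in range(5000)]        # mid-central bin
n_pos, n_neg = split_charges(events[0], nbar_d=1.56, sigma_d=6.83)
```

The resampling loop in `split_charges` discards parity-inconsistent draws, which mimics the requirement that $n^+ + n^-$ reproduce the sampled $N_{\rm ch}$ exactly.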
Within each centrality bin this dependence can be approximated with linear functions given by \begin{eqnarray} \bar{n}_{\Delta}(N_{\rm ch}) & = & \bar{n}_{\Delta}^0 + \delta\bar{n}_{\Delta} (N_{\rm ch} - \bar{N}_{\rm ch}) \label{Eq89} \\ \sigma_{n_{\Delta}}(N_{\rm ch}) & = & \sigma_{n_{\Delta}}^0 + \delta\sigma_{n_{\Delta}} (N_{\rm ch} - \bar{N}_{\rm ch}), \label{Eq90} \end{eqnarray} where the centrality trends of the data are well described with the parameters listed in Table~\ref{TableI}. The above power-law and Gaussian distributions were sampled to obtain the event-wise number of positive and negative charged particles within the acceptance. Nonidentified charged-particle $p_t$ spectra, $d^2N_{\rm ch}/2\pi p_t dp_t d\eta$, were reported by the STAR~\cite{STARspectra} and PHENIX~\cite{PHENIXspectra} Collaborations for Au + Au minimum-bias collisions at 200~GeV. These data can be accurately described in the range 0.15~GeV/$c$ $\leq p_t \leq$ 4~GeV/$c$ by a Levy distribution~\cite{Ayamtmt,Levy} given by \begin{eqnarray} \frac{d^2N_{\rm ch}}{2\pi p_t dp_t d\eta} & = & \frac{A}{[1+\beta(m_t-m_0)/q]^q} , \label{Eq91} \end{eqnarray} where $A$, $\beta \equiv 1/T$ and $q$ are fitting parameters, $m_t = \sqrt{p_t^2 + m_0^2}$ and $m_0$ is assumed to be the pion mass. Inverse exponent $q^{-1}$ can be interpreted as the relative variance $\sigma_{\beta}^2 / \bar{\beta}^2$ of an event-wise fluctuating slope parameter $\beta$ of a $p_t$ distribution $\exp{(-\beta m_t)}$~\cite{Ayamtmt}. Fit parameters for the STAR and PHENIX spectra data were interpolated to the centralities defined in Ref.~\cite{axialCI} which were used here. The $p_t$-integrated yields were then normalized to the efficiency corrected $dN_{\rm ch}/d\eta$ for each centrality given in Ref.~\cite{axialCI}. The parameters are listed in Table~\ref{TableII}. 
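Event-wise $p_t$ values can be drawn from the Levy form of Eq.~(\ref{Eq91}) by rejection sampling, since its inverse CDF is not available in closed form. A minimal Python sketch using the mid-central parameters $T = 0.1828$~GeV and $q = 11.858$ from Table~\ref{TableI}; the acceptance window, grid-based envelope, and safety factor are illustrative choices, and the normalization $A$ is omitted.

```python
import math, random

M0 = 0.1396  # pion mass in GeV/c^2, as assumed in the text

def levy(pt, T, q):
    """Shape of d^2N_ch/(2 pi pt dpt deta), Eq. (91), with beta = 1/T
    (overall normalization A omitted)."""
    mt = math.sqrt(pt*pt + M0*M0)
    return (1.0 + (mt - M0)/(q*T))**(-q)

def make_pt_sampler(T, q, pt_min=0.15, pt_max=4.0, ngrid=2000):
    """Rejection sampler for pt weighted by pt * Levy on [pt_min, pt_max]."""
    grid = [pt_min + i*(pt_max - pt_min)/ngrid for i in range(ngrid + 1)]
    fmax = 1.05*max(pt*levy(pt, T, q) for pt in grid)  # flat envelope bound
    def draw():
        while True:
            pt = random.uniform(pt_min, pt_max)
            if random.random()*fmax <= pt*levy(pt, T, q):
                return pt
    return draw

random.seed(2)
draw_pt = make_pt_sampler(T=0.1828, q=11.858)   # mid-central, Table I
pts = [draw_pt() for _ in range(2000)]
```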
The $p_t$ distribution can be expressed as a $y_t$ distribution using $dp_t/dy_t \approx m_t$ near $\eta = 0$ where \begin{eqnarray} \frac{d^2N_{\rm ch}}{dy_td\eta} & = & 2\pi p_t \frac{dp_t}{dy_t} \frac{d^2N_{\rm ch}}{2\pi p_t dp_t d\eta}. \label{Eq92} \end{eqnarray} The resulting parent distributions were sampled $n^{\pm}$ times to determine the transverse rapidities for the particles in each simulated event. \begin{table}[htb] \caption{Centrality bins, multiplicity ranges, and multiplicity dependent single-particle distribution parameters for the Monte Carlo simulations discussed in the text.} \label{TableI} \begin{tabular}{cccc} \hline \hline & \multicolumn{3}{c}{Centrality} \\ \hline Parameter & 84-93\% & 55-64\% & 9-18\% \\ \hline $[N_{\rm ch,min}$,$N_{\rm ch,max}]$ & [2,14] & [66,117] & [644,910] \\ $\bar{N}_{\rm ch}$ & 6.6 & 89.7 & 771 \\ $\bar{n}^0_{\Delta}$ & 0.27 & 1.56 & 6.48 \\ $\delta \bar{n}_{\Delta}$ & 0.058 & 0.015 & 0.0038 \\ $\sigma^0_{n_{\Delta}}$ & 1.20 & 6.83 & 21.5 \\ $\delta\sigma_{n_{\Delta}}$ & 0.26 & 0.035 & 0.010 \\ $T_0$ (GeV) & 0.1540 & 0.1828 & 0.2184 \\ $T_1$ (GeV) & 0.00075 & 0.000174 & 0.0000224 \\ $q_0$ & 10.425 & 11.858 & 16.120 \\ $q_1$ & 0.033 & 0.0124 & 0.00393 \\ \hline \hline \end{tabular} \end{table} \begin{table*}[t] \caption{Efficiency corrected multiplicity and estimated number of participant nucleons~\cite{axialCI}, single-particle $p_t$ Levy distribution parameters, and 2D Levy distribution parameters for eleven centrality bins for the Monte Carlo simulations discussed in the text.} \label{TableII} \begin{tabular}{cccccccccc} \hline \hline Centrality(\%) & $\frac{dN_{\rm ch}}{d\eta}$ & $N_{\rm part}$ & $A$~(GeV/$c$)$^{-2}$ & $T$~(GeV) & $q$ & $\Delta(1/q)^{\rm LS}_{\Sigma}$ & $\Delta(1/q)^{\rm LS}_{\Delta}$ & $\Delta(1/q)^{\rm US}_{\Sigma}$ & $\Delta(1/q)^{\rm US}_{\Delta}$ \\ \hline 84-93 & 5.2 & 4.6 & 14.70 & 0.1540 & 10.425 & 0.0058 & -0.0218 & 0.00120 & -0.0236 \\ 74-84 & 13.9 & 10.5 & 35.65 & 0.1647 & 10.822 & 
0.0042 & -0.0143 & 0.00129 & -0.0165 \\ 64-74 & 28.8 & 20.5 & 68.33 & 0.1740 & 11.290 & 0.0024 & -0.0087 & 0.00115 & -0.0100 \\ 55-64 & 52.8 & 36.0 & 117.0 & 0.1828 & 11.858 & 0.0016 & -0.0060 & 0.00096 & -0.0068 \\ 46-55 & 89 & 58.1 & 185.3 & 0.1914 & 12.560 & 0.0009 & -0.0047 & 0.00091 & -0.0048 \\ 38-46 & 139 & 86.4 & 275.1 & 0.1989 & 13.321 & 0.0009 & -0.0035 & 0.00062 & -0.0037 \\ 28-38 & 209 & 124.6 & 395.5 & 0.2059 & 14.173 & 0.0007 & -0.0028 & 0.00059 & -0.0028 \\ 18-28 & 307 & 176.8 & 558.4 & 0.2124 & 15.117 & 0.0006 & -0.0022 & 0.00048 & -0.0019 \\ 9-18 & 440 & 244.4 & 772.6 & 0.2184 & 16.120 & 0.0005 & -0.0015 & 0.00040 & -0.0018 \\ 5-9 & 564 & 304.1 & 968.0 & 0.2224 & 16.872 & 0.0004 & -0.0013 & 0.00035 & -0.0014 \\ 0-5 & 671 & 350.3 & 1129.7 & 0.2258 & 17.547 & 0.0004 & -0.0012 & 0.00032 & -0.0013 \\ \hline \hline \end{tabular} \end{table*} Systematic bias due to variations in the shape of the single-particle $p_t$ spectrum was simulated by allowing parameters $T$ and $q$ in Eq.~(\ref{Eq91}) to vary within each centrality bin. A linear dependence of $T$ and $q$ with respect to $N_{\rm ch}$ within each centrality bin was assumed based on the trends in Table~\ref{TableII}. Systematic variations, if any, in the separate shapes of the positive and negative charged-particle $p_t$ distributions with respect to charge difference, $n_{\Delta}$, have not been reported. Possible linear (antisymmetric on $n_{\Delta}$) and quadratic (symmetric) dependences were therefore assumed in this study. 
Within each centrality bin $T$ and $q$ for positive and negative charged particles were described by the functions \begin{eqnarray} T^{\pm}(N_{\rm ch},n_{\Delta}) & = & T_0 + T_1(N_{\rm ch} - \bar{N}_{\rm ch}) \nonumber \\ & + & T_2^{\pm} n_{\Delta} + T_3^{\pm} n_{\Delta}^2/\sigma_{n_{\Delta}} \label{Eq93} \\ q^{\pm}(N_{\rm ch},n_{\Delta}) & = & q_0 + q_1(N_{\rm ch} - \bar{N}_{\rm ch}) \nonumber \\ & + & q_2^{\pm} n_{\Delta} + q_3^{\pm} n_{\Delta}^2/\sigma_{n_{\Delta}} \label{Eq94} \end{eqnarray} where four of the parameter values are listed in Table~\ref{TableI}. Parameters $T_0$, $T_1$, $q_0$ and $q_1$, determined by fitting the centrality-dependent trends in Table~\ref{TableII}, were assumed to be the same for positive and negative charged particles. In lieu of further measurements, parameters $|T_2^{\pm}|$ and $|T_3^{\pm}|$ were assumed equal to the magnitude of $T_1$, i.e., the same variation with particle number. Similarly, $|q_2^{\pm}|$ and $|q_3^{\pm}|$ were set equal to the magnitude of $q_1$. Linear and quadratic variations with $n_{\Delta}$ were studied separately and the relative algebraic signs for positive and negative charged-particle shape variations were alternated by assuming $T^+_{2,3} = \pm T^-_{2,3} = T_1$ and $q^+_{2,3} = \pm q^-_{2,3} = \pm q_1$. The systematic bias effects resulting from the assumed dependence on charge difference should be taken as an estimate of possible systematic bias, pending future measurements of charge-identified $p_t$ distributions with respect to $n_{\Delta}$. Preliminary $\Delta\rho/\sqrt{\rho_{\rm ref}}$ transverse rapidity correlations for 200~GeV Au + Au collisions were reported by Oldag~\cite{LizThesis,LizHotQuarks} as a function of centrality. Although these correlations are preliminary and precede the present work they provide a reasonable estimate of the expected magnitudes and centrality dependence of the correlations and can be used to study systematic bias.
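The parametrizations of Eqs.~(\ref{Eq93}) and (\ref{Eq94}) are straightforward to evaluate. A minimal Python sketch for the ``diff'' sign configuration ($T_2^+ = -T_2^- = T_1$, $q_2^+ = -q_2^- = q_1$) with the quadratic terms switched off, using mid-central values from Table~\ref{TableI}; the parameter-dictionary layout is an illustrative convention.

```python
# Mid-central parameters from Table I; T2/q2 follow the "diff" configuration
# (T2+ = -T2- = T1, q2+ = -q2- = q1) and the quadratic T3/q3 terms are zero.
PARS = dict(Nbar=89.7, sigma_d=6.83,
            T0=0.1828, T1=0.000174, T2=0.000174, T3=0.0,
            q0=11.858, q1=0.0124,   q2=0.0124,   q3=0.0)

def levy_params(sign, n_ch, n_delta, p=PARS):
    """T^(+-) and q^(+-) of Eqs. (93)-(94); sign = +1 for positive charges,
    -1 for negative charges (sign applied to the linear n_Delta term only)."""
    dn = n_ch - p['Nbar']
    T = (p['T0'] + p['T1']*dn
         + sign*p['T2']*n_delta + p['T3']*n_delta**2/p['sigma_d'])
    q = (p['q0'] + p['q1']*dn
         + sign*p['q2']*n_delta + p['q3']*n_delta**2/p['sigma_d'])
    return T, q

Tp, qp = levy_params(+1, n_ch=95, n_delta=3)
Tm, qm = levy_params(-1, n_ch=95, n_delta=3)
```

With this sign choice the positive- and negative-particle spectrum shapes move in opposite directions as $n_{\Delta}$ varies, which is the configuration found to produce the largest bias in Sec.~\ref{SecV}.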
Parametrizations of these data were used to construct sibling pair weights for the Monte Carlo simulations where the $N_{\rm ch}$-dependence was calculated by interpolating each parameter to the selected value of $N_{\rm ch}$. Simulated correlations computed by averaging over finite centrality bin widths will be compared to the input correlation at the centrality-bin mid-point determined by $\bar{N}_{\rm ch}$. The correlation data in Refs.~\cite{LizThesis,LizHotQuarks} were defined as \begin{eqnarray} \frac{\Delta\rho}{\sqrt{\rho_{\rm ref}}} & = & \frac{S N^{\rm mix}_{kl}}{\sqrt{N^{\rm soft}_{kl}}} \left[ \frac{\hat{N}^{\rm sib}_{kl} - \hat{N}^{\rm mix}_{kl}}{\hat{N}^{\rm mix}_{kl}} \right], \label{Eq95} \end{eqnarray} where $S$ is the prefactor scaling coefficient described below and ``hat'' symbols denote unit-normalized distributions as defined previously. In Eq.~(\ref{Eq95}) the ratio in square brackets is equivalent to the total pair-number normalization discussed in Sec.~\ref{SecIIA}. Mixed-event and soft-particle pair distributions are given by \begin{eqnarray} N^{\rm mix}_{kl} & = & \frac{d^2N_{\rm ch}}{dy_{t,k}d\eta} \frac{d^2N_{\rm ch}}{dy_{t,l}d\eta}, \label{Eq96} \\ N^{\rm soft}_{kl} & = & \frac{d^2N_{\rm ch,soft}}{dy_{t,k}d\eta} \frac{d^2N_{\rm ch,soft}}{dy_{t,l}d\eta}, \label{Eq97} \end{eqnarray} using Eqs.~(\ref{Eq91}) and (\ref{Eq92}). The soft-particle production spectrum, defined as that part of the single-particle $p_t$ spectrum which scales with $N_{\rm part}$, can be estimated from the $N_{\rm ch} \rightarrow 0$ limit of the shape of the $p_t$ spectrum as explained in Ref.~\cite{TomSpectrum}. 
Solving Eq.~(\ref{Eq95}) for the sibling-pair distribution gives \begin{eqnarray} \hat{N}^{\rm sib}_{kl} & = & \hat{N}^{\rm mix}_{kl} \left\{ 1 + \frac{\sqrt{N^{\rm soft}_{kl}}}{S N^{\rm mix}_{kl}} \left[ \frac{\Delta\rho}{\sqrt{\rho_{\rm ref}}} \right]_{\rm model} \right\} \label{Eq98} \end{eqnarray} where $[\Delta\rho/\sqrt{\rho_{\rm ref}}]_{\rm model}$ is the model function used to fit the preliminary correlation data in Refs.~\cite{LizThesis,LizHotQuarks}. That model uses a 2D Levy distribution~\cite{Ayamtmt} given by \begin{eqnarray} N^{\rm sib}_{kl,{\rm 2DLevy}} & = & (2\pi)^2 p_{t,k} p_{t,l} m_{t,k} m_{t,l} \left( 1 + \frac{\beta m_{t,\Sigma}}{2q_{\Sigma}} \right)^{-2q_{\Sigma}} \nonumber \\ & \times & \left[ 1 - \left( \frac{\beta m_{t,\Delta}}{2q_{\Delta} + \beta m_{t,\Sigma}} \right)^2 \right]^{-q_{\Delta}} \label{Eq99} \end{eqnarray} where $m_{t,\Sigma} = m_{t,k} + m_{t,l} - 2m_0$, $m_{t,\Delta} = m_{t,k} - m_{t,l}$ and inverse exponents are given by \begin{eqnarray} \Delta(1/q)_{\Sigma} & = & \frac{1}{q_{\Sigma}} - \frac{1}{q}, \label{Eq100} \\ \Delta(1/q)_{\Delta} & = & \frac{1}{q_{\Delta}} - \frac{1}{q}. \label{Eq101} \end{eqnarray} Differences $\Delta(1/q)_{\Sigma,\Delta}$ were shown in Ref.~\cite{Ayamtmt} to represent the covariance of the 2D, event-wise distribution of slope parameters $(\beta_1,\beta_2)$ for arbitrary particles 1 and 2. Numerical values which fit the preliminary LS and US correlation data in Eq.~(\ref{Eq95}) for away-side particle pairs (relative azimuth $>$ $\pi/2$) are listed in Table~\ref{TableII}. For these cases the prefactor scaling coefficient is $1/\sqrt{4} = 1/2$. 
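The 2D Levy form of Eq.~(\ref{Eq99}) and the inverse-exponent differences of Eqs.~(\ref{Eq100}) and (\ref{Eq101}) translate directly into code. A minimal Python sketch using the mid-central (55-64\%) LS away-side parameters from Table~\ref{TableII}; the function is un-normalized and the sample momenta are arbitrary.

```python
import math

M0 = 0.1396  # pion mass in GeV/c^2

def mt(pt):
    return math.sqrt(pt*pt + M0*M0)

def sib_2d_levy(pt_k, pt_l, beta, q, dq_sum, dq_del):
    """Un-normalized sibling-pair density of Eq. (99); dq_sum and dq_del are
    Delta(1/q)_Sigma and Delta(1/q)_Delta of Eqs. (100)-(101)."""
    q_sum = 1.0/(dq_sum + 1.0/q)     # inverting Eq. (100)
    q_del = 1.0/(dq_del + 1.0/q)     # inverting Eq. (101)
    mt_s = mt(pt_k) + mt(pt_l) - 2.0*M0
    mt_d = mt(pt_k) - mt(pt_l)
    pref = (2.0*math.pi)**2 * pt_k*pt_l*mt(pt_k)*mt(pt_l)
    return (pref
            * (1.0 + beta*mt_s/(2.0*q_sum))**(-2.0*q_sum)
            * (1.0 - (beta*mt_d/(2.0*q_del + beta*mt_s))**2)**(-q_del))

# mid-central (55-64%) LS away-side parameters from Table II
beta, q = 1.0/0.1828, 11.858
w = sib_2d_levy(0.5, 0.8, beta, q, dq_sum=0.0016, dq_del=-0.0060)
```

The density is symmetric under $p_{t,k} \leftrightarrow p_{t,l}$ because $m_{t,\Delta}$ enters only quadratically, as required for identical pair weights.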
Sibling-pair weights computed using Eqs.~(\ref{Eq98}) and (\ref{Eq99}) are obtained from \begin{eqnarray} N^{\rm sib}_{kl} & = & N^{\rm mix}_{kl} \left[ 1 + \left( \hat{N}^{\rm sib}_{kl,{\rm 2DLevy}} - \hat{N}^{\rm mix}_{kl} \right) / \hat{N}^{\rm mix}_{kl} \right] \nonumber \\ \label{Eq102} \end{eqnarray} where the weight factor in square brackets is of order unity. Equation~(\ref{Eq102}) was calculated for LS and US pairs. Parameters $\beta$, $q$, $q_{\Sigma}$ and $q_{\Delta}$ in Eqs.~(\ref{Eq91}) and (\ref{Eq99}) for arbitrary $N_{\rm ch}$ were interpolated from the values in Table~\ref{TableII}. Monte Carlo simulations were carried out for each selected centrality using the following steps: (1) The power-law and Gaussian frequency distributions were sampled to obtain $n^+$, $n^-$ and $N_{\rm ch} = n^+ + n^-$ for each event. (2) The single-particle $y_t$ parent distributions were computed using Eqs.~(\ref{Eq91}) - (\ref{Eq94}) for event-wise values of $N_{\rm ch}$ and $n_{\Delta}$ and were then sampled $n^{\pm}$ times for positive/negative charged-particle multiplicities, where parameter $A$ in Eq.~(\ref{Eq91}) was normalized to the event-wise number of particles. (3) Correlation pair weights were calculated from Eqs.~(\ref{Eq96}), (\ref{Eq99}) and (\ref{Eq102}) using interpolated values at variable $N_{\rm ch}$ or at fixed $\bar{N}_{\rm ch}$ for each sibling pair. (4) Sibling pair and mixed-event pair histograms were accumulated for $(++)$, $(--)$, $(+-)$ and $(-+)$ charged-particle pairs and for each correlation definition in Sec.~\ref{SecII}. (5) Event averages were constructed for LS, US, CI, CD, CI-alternate and CD-alternate for correlation definitions based on $\Delta\sigma^2_{p_t:m}$ and its alternate form (Sec.~\ref{SecIIB}), $\Phi_{p_t}^{(0)}$ (Sec.~\ref{SecIIC}), $\sigma^2_{p_t,{\rm dynamical}}$ (Sec.~\ref{SecIID}), and $F_{p_t}$ (Sec.~\ref{SecIIE}). (6) Prefactors (Sec.~\ref{SecIIG}) were then calculated and applied. 
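The six steps can be reduced to a compact event loop. The Python sketch below is a toy illustration, not the analysis code: multiplicities are fixed rather than sampled, the $y_t$ parent is a schematic exponential rather than the Levy form of Eq.~(\ref{Eq91}), pair weights are unity, and only the sibling and mixed-event histograms of step (4) are accumulated; in the full analysis the $(+-)$ and $(-+)$ combinations are also kept separately.

```python
import math, random
random.seed(3)

M0 = 0.1396  # pion mass in GeV/c^2

def sample_yt():
    # schematic stand-in for step (2): exponential m_t spectrum, not Eq. (91)
    pt = 0.15 - 0.5*math.log(random.random())
    m_t = math.sqrt(pt*pt + M0*M0)
    return math.log((m_t + pt)/M0)     # transverse rapidity y_t

NB = 10                                # y_t histogram bins on [0, 5]
def ibin(y):
    return min(int(y/0.5), NB - 1)

sib = {c: [[0]*NB for _ in range(NB)] for c in ('++', '--', '+-')}
mix = [[0]*NB for _ in range(NB)]
prev = []                              # previous event's y_t values, for mixing

for _ in range(500):
    n_pos, n_neg = 8, 6                # step (1), fixed here for brevity
    parts = ([('+', sample_yt()) for _ in range(n_pos)] +
             [('-', sample_yt()) for _ in range(n_neg)])
    for i, (ci, yi) in enumerate(parts):      # step (4): sibling pairs
        for cj, yj in parts[i + 1:]:
            sib[ci + cj][ibin(yi)][ibin(yj)] += 1
    for _, y in parts:                        # mixed-event reference pairs
        for ym in prev:
            mix[ibin(y)][ibin(ym)] += 1
    prev = [y for _, y in parts]
```

Event averaging, correlation pair weights, and the prefactors of steps (3), (5) and (6) would then be applied to the accumulated `sib` and `mix` histograms.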
The resulting correlations for finite centrality bin-widths were compared with that expected in the absence of bias as discussed in the next section. \begin{table}[h] \caption{Monte Carlo model parameter settings for the eleven types of simulation runs in this paper. The entry $\Delta N_{\rm ch} > 0$ means that the finite bin widths in Table~\ref{TableI} were used. For $n_{\Delta}$, the notation ``vary'' means that non-zero values of parameters $\bar{n}^0_{\Delta}$, $\delta\bar{n}_{\Delta}$, $\sigma^0_{n_{\Delta}}$ and $\delta\sigma_{n_{\Delta}}$ in Table~\ref{TableI} were used. $T_1$ and $q_1 \neq 0$ refer to the values in Table~\ref{TableI}. Labels ``same,'' ``diff'' and ``mix'' mean that $T^+_2 = T^-_2 = T_1$ and $q^+_2 = q^-_2 = q_1$, $T^+_2 = -T^-_2 = T_1$ and $q^+_2 = -q^-_2 = q_1$, and $T^+_2 = -T^-_2 = T_1$ while $q^+_2 = -q^-_2 = -q_1$, respectively. Similar values apply when $T^{\pm}_3$ and $q^{\pm}_3$ are non-zero. Correlation weights are ``fixed'' when they are calculated for $N_{\rm ch} = \bar{N}_{\rm ch}$ and are ``varied'' when calculated as a function of $N_{\rm ch}$.} \label{TableIII} \begin{tabular}{ccccccc} \hline \hline Simulation & & & & & & Correlation \\ Run Type & $\Delta N_{\rm ch}$ & $n_{\Delta}$ & $T_1,q_1$ & $T^{\pm}_2,q^{\pm}_2$ & $T^{\pm}_3,q^{\pm}_3$ & pair weights \\ \hline 1 & 0 & 0 & 0,0 & 0,0 & 0,0 & 1.0 \\ 2 & $>0$ & vary & 0,0 & 0,0 & 0,0 & 1.0 \\ 3 & $>0$ & vary &$\neq 0$& 0,0 & 0,0 & 1.0 \\ 4 & $>0$ & vary &$\neq 0$& same & 0,0 & 1.0 \\ 5 & $>0$ & vary &$\neq 0$& diff & 0,0 & 1.0 \\ 6 & $>0$ & vary &$\neq 0$& mix & 0,0 & 1.0 \\ 7 & $>0$ & vary &$\neq 0$& 0,0 & same & 1.0 \\ 8 & $>0$ & vary &$\neq 0$& 0,0 & diff & 1.0 \\ 9 & $>0$ & vary &$\neq 0$& 0,0 & mix & 1.0 \\ 10 & $>0$ & vary & 0,0 & 0,0 & 0,0 &$\neq 1$,fixed \\ 11 & $>0$ & vary & 0,0 & 0,0 & 0,0 &$\neq 1$,varied \\ \hline \hline \end{tabular} \end{table} \begin{figure} \includegraphics[keepaspectratio,width=3.5in]{MCBias_ytyt_Paper_Figure1} \caption{\label{Fig1} 
Simulated correlations $\Delta\rho/\sqrt{\rho_{\rm ref}}(y_{t1},y_{t2})$ for Au + Au collisions at 200~GeV for LS pairs (panels a, c, e) and US pairs (panels b, d, f) corresponding to peripheral (panels a, b), mid-central (panels c, d) and more-central (panels e, f) collisions. For these calculations finite centrality bin-widths and the correlations from Refs.~\cite{LizThesis,LizHotQuarks} were assumed as explained in the text.} \end{figure} \begin{figure} \includegraphics[keepaspectratio,width=3.5in]{MCBias_ytyt_Paper_Figure2} \caption{\label{Fig2} Statistical bias effects for the event-averaged correlation $\Delta\rho/\sqrt{\rho_{\rm ref}}(y_{t1},y_{t2})$ for 200~GeV mid-central Au + Au collisions. Results assuming fixed $N_{\rm ch}$ and finite multiplicity bin widths with no input correlations are shown in panels (a,b) and (c,d), respectively. Results for LS pairs and US pairs are shown in panels (a,c) and (b,d), respectively.} \end{figure} \begin{figure*} \includegraphics[keepaspectratio,width=6.5in]{MCBias_ytyt_Paper_Figure3} \caption{\label{Fig3} Statistical bias effects in $\Delta\rho/\sqrt{\rho_{\rm ref}}(y_{t1},y_{t2})$ for 200~GeV Au + Au collisions due to finite multiplicity bin-widths corresponding to simulation run type~2 in Table~\ref{TableIII}. No input correlations were used. LS pairs in peripheral collisions (panels a, e, i), US peripheral (panels b, f, j), LS mid-central (panels c, g, k), and US mid-central (panels d, h, l) are shown in the columns of panels from left-to-right, respectively.
Simulation results based on $\Delta \sigma^2_{p_t:m}$, its alternate form in Eq.~(\ref{Eq26}), and $\sigma^2_{p_t,{\rm dynamical}}$ are shown in successive rows of panels from upper-to-lower.} \end{figure*} \begin{figure} \includegraphics[keepaspectratio,width=3.5in]{MCBias_ytyt_Paper_Figure4} \caption{\label{Fig4} Statistical bias effects for the charge-dependent (CD = LS$-$US) $\Delta\rho/\sqrt{\rho_{\rm ref}}(y_{t1},y_{t2})$ quantity for mid-central, 200~GeV Au + Au collisions including finite multiplicity bin-width only with no input correlations. Nominal and alternate CD forms were used for the results shown in the left-hand panels (a, c, e, g) and right-hand panels (b, d, f, h), respectively. Possible bias effects are shown for correlation quantities derived from $\Delta \sigma^2_{p_t:m}$, $\Phi_{p_t}^{(0)}$, $\sigma^2_{p_t,{\rm dynamical}}$, and $F_{p_t}$ in successive rows of panels from upper-to-lower.} \end{figure} \begin{figure*} \includegraphics[keepaspectratio,width=6.5in]{MCBias_ytyt_Paper_Figure5} \caption{\label{Fig5} Systematic bias effects for 200~GeV Au + Au collisions due to the $N_{\rm ch}$-dependent shape of the single-particle $p_t$ distribution as discussed in the text. LS and US pair correlations for peripheral and mid-central collisions are shown in the columns of panels as in Fig.~\ref{Fig3}. Results are shown for correlations derived from $\Delta \sigma^2_{p_t:m}$ (panels a-d), $\Phi_{p_t}^{(0)}$ (panels e-h), $\sigma^2_{p_t,{\rm dynamical}}$ (panels i-l), and $F_{p_t}$ (panels m-p).} \end{figure*} \begin{figure*} \includegraphics[keepaspectratio,width=6.5in]{MCBias_ytyt_Paper_Figure6} \caption{\label{Fig6} Systematic bias effects for 200~GeV Au + Au collisions due to possible $n_{\Delta}$ dependence in the positive and negative single-particle $p_t$ distributions as discussed in the text. LS and US pair correlations for peripheral and mid-central collisions are shown in the columns of panels as in Fig.~\ref{Fig3}. 
Results are shown for correlations derived from $\Delta \sigma^2_{p_t:m}$ (panels a-d) and $F_{p_t}$ (panels e-h).} \end{figure*} \begin{figure*} \includegraphics[keepaspectratio,width=6.5in]{MCBias_ytyt_Paper_Figure7} \caption{\label{Fig7} Systematic bias effects for 200~GeV Au + Au collisions due to $N_{\rm ch}$-dependence of the correlation shapes as discussed in the text. LS and US pair correlations for peripheral and mid-central collisions are shown in the columns of panels as in Fig.~\ref{Fig3}. Results are shown for correlations derived from $\Delta \sigma^2_{p_t:m}$ (panels a-d), $\Phi_{p_t}^{(0)}$ (panels e-h), $\sigma^2_{p_t,{\rm dynamical}}$ (panels i-l), and $F_{p_t}$ (panels m-p).} \end{figure*} \section{Results} \label{SecV} For the correlation quantities in Sec.~\ref{SecII} eleven types of simulation calculations were done for the peripheral, mid-central and more-central bins using $10^6$, $10^6$ and $0.5\times10^6$ collision events in each run, respectively. The event sample sizes were sufficient to ensure that statistical and/or systematic bias effects large enough to compromise correlation measurements are clearly visible in the simulations. Typical statistical errors in the simulated $\Delta\rho/\sqrt{\rho_{\rm ref}}$ are of order $\pm0.01$, $\pm0.01$ and $\pm0.015$ for the peripheral, mid-central and more-central collisions, respectively. These errors are about one-tenth of the expected correlation magnitudes~\cite{LizThesis,LizHotQuarks}. The eleven simulation runs progressively included additional bias producing effects. The first assumed fixed $N_{\rm ch}$ with $n_{\Delta}$ = 0 and no correlation weights. Subsequent calculations included finite multiplicity bin widths and non-zero $n_{\Delta}$. Then $N_{\rm ch}$-dependent single-particle $p_t$ spectrum shapes were added, followed by $n_{\Delta}$-dependent $p_t$ spectrum shapes for positive and negative charged particles. 
Finally, $N_{\rm ch}$-dependent correlation shape variation was included. Explicit parameter settings for the eleven types of calculations are explained in Table~\ref{TableIII}. Simulated correlations $\Delta\rho/\sqrt{\rho_{\rm ref}}(y_{t1},y_{t2})$ for each centrality and for like-sign and unlike-sign charged-particle pairs are shown in Fig.~\ref{Fig1} corresponding to simulation run type~10 in Table~\ref{TableIII} and assuming the $\Delta \sigma^2_{p_t:m}$ based correlation. Single-particle spectrum shapes and 2D Levy parameters were fixed at their respective $\bar{N}_{\rm ch}$ values in each centrality bin. The essential features of the correlations include: (1) an overall increase in the amplitude with increasingly more-central collisions, (2) the evolution of the shape of the unlike-sign correlations along the diagonal $y_{t1} \approx y_{t2}$ bins, and (3) the prominent peak at $(y_{t1},y_{t2}) \approx (3,3)$. The magnitudes of the bias effects discussed below may be compared to these estimated correlations. In Fig.~\ref{Fig2} simulations for the mid-central bin assuming the event-averaged correlation form in Eq.~(\ref{Eq10}) are shown for LS and US pairs in upper panels (a) and (b) for fixed multiplicity $\bar{N}_{\rm ch}$ and no input correlation (simulation run type~1). The results are statistically consistent with zero indicating no bias. In the lower panels LS and US results for finite multiplicity bin-width and no input correlations (simulation run type~2) are shown where large structure appears indicating strong statistical bias. In the absence of correlation weights [weight factor in Eq.~(\ref{Eq102}) equals unity] quantity $\Delta\rho/\sqrt{\rho_{\rm ref}}$ should ideally be statistically consistent with zero. Statistically significant non-zero correlations resulting from simulation run types~2 - 9 in Table~\ref{TableIII} indicate the presence of bias. 
Figure~\ref{Fig2} shows that the simple, event-averaged correlation of Eq.~(\ref{Eq10}) induces large statistical bias for both LS and US pairs for finite-width multiplicity bins. Applications of this event-averaged correlation require the variance of the selected event-wise multiplicity fluctuations to be much less than $\bar{N}^2$. Statistical biases due to pair-counting in finite-width multiplicity bins (simulation run type~2) for correlations derived from fluctuation quantities $\Delta \sigma^2_{p_t:m}$ and its alternate charge-identified form in Eq.~(\ref{Eq26}), $\Phi_{p_t}^{(0)}$, $\sigma^2_{p_t,{\rm dynamical}}$, and $F_{p_t}$ were studied for LS and US charged-particle pairs and for the three centrality bins. Results for $\Delta \sigma^2_{p_t:m}$, $\Phi_{p_t}^{(0)}$ and $F_{p_t}$ were statistically consistent with zero (unbiased) as shown for the $\Delta \sigma^2_{p_t:m}$ results in the upper row of panels in Fig.~\ref{Fig3}. Results for the alternate, charge-identified form in Eq.~(\ref{Eq26}) are strongly biased (i.e., bias effects are larger than the expected correlations) as shown in the middle row of panels in this figure. Only the charge-nonidentified results (not shown) from Eq.~(\ref{Eq20}) are statistically unbiased. The explicit treatment of unlike-sign charged-particle pairs as in Eq.~(\ref{Eq23}) is essential for eliminating statistical bias. Results for the quantity based on $\sigma^2_{p_t,{\rm dynamical}}$ (lower row of panels in Fig.~\ref{Fig3}) are highly biased for the peripheral LS correlations but are statistically unbiased for the other cases. The large bias effect can be avoided by restricting $N_{\rm ch} > 2$, based on calculations with $N_{\rm ch} \in [3,14]$ (not shown). Additional simulations with $N_{\rm ch} \in [1,14]$ (also not shown) indicate that large statistical bias appears in the LS correlations obtained from $\Delta \sigma^2_{p_t:m}$ and $F_{p_t}$, requiring $N_{\rm ch} > 1$ for these quantities.
For high multiplicity events and/or large angular-bin size (scale) these restrictions are of little practical importance. However, for scale-dependent analysis~\cite{ptscale,STARscale} in which the average angular-bin occupancy may be small ($\sim$2), the $N_{\rm ch} >$ 1 or 2 restrictions would distort the event sample used to compute the correlations. Results based on $\Phi_{p_t}^{(0)}$ are statistically unbiased for bin-wise occupancies with $N_{\rm ch} \geq 1$. The $\Phi_{p_t}^{(0)}$ quantity was used in the scale-dependent, mean-$p_t$ fluctuation analysis of STAR data in Refs.~\cite{ptscale,STARscale}. In Eqs.~(\ref{Eq76}) and (\ref{Eq77}) two forms for the charge-dependent (CD = LS$-$US) correlation are given. The results for mid-central collisions with finite multiplicity bin width (simulation run type~2, no input correlation) are shown in Fig.~\ref{Fig4}. The nominal [Eq.~(\ref{Eq76})] and alternate [Eq.~(\ref{Eq77})] CD results are shown in the left and right columns of panels, respectively. Bias effects for the correlation forms derived from fluctuation quantities $\Delta \sigma^2_{p_t:m}$, $\Phi_{p_t}^{(0)}$, $\sigma^2_{p_t,{\rm dynamical}}$ and $F_{p_t}$ are shown in descending order from upper to lower rows, respectively. For the nominal CD results large bias occurs for the $\Delta \sigma^2_{p_t:m}$ and $\Phi_{p_t}^{(0)}$ forms. A small degree of bias is present at lower $y_t$ for the $\sigma^2_{p_t,{\rm dynamical}}$ form. The $F_{p_t}$ results are consistent with zero (unbiased). For each quantity the alternate CD results shown here are unbiased. Note that any difference in the biases for LS and US pairs will contribute directly to both the nominal and alternate CD correlations. The point of Fig.~\ref{Fig4} is to show that the LS$-$US difference computed using the sibling minus mixed differences given in Eq.~(\ref{Eq77}) minimizes the bias and is therefore the recommended form to use for charge-dependent (LS$-$US) correlations. 
No significant differences were found between the simulated correlation results for the nominal and alternate charge-independent (CI=LS+US) correlations in Eqs.~(\ref{Eq74}) and (\ref{Eq75}), respectively. Similar results for these two forms were produced for the respective correlation methods discussed in Sec.~\ref{SecII} as all of the statistical and systematic bias sources were successively added to the simulations. The mathematical differences between the two CI correlation forms are contained in the charge-identified weights, given by 1/4 for the nominal CI and $N_{kl}^{{\rm mix},ab}/\sum_{a^{\prime},b^{\prime}=\pm \pm} N_{kl}^{{\rm mix},a^{\prime} b^{\prime}}$ for the alternate CI. When the positive and negative charged-particle $p_t$ distributions differ in shape, the alternate CI weight factors vary with $(y_{t1},y_{t2})$. The effects of these variations are insignificant in the present examples. Systematic bias due to $N_{\rm ch}$-dependence of the single-particle $p_t$ spectrum shape is shown in Fig.~\ref{Fig5} for LS and US charged-particle pairs for peripheral and mid-central collisions. The panels show differences for simulation run type~3 for $\Delta\rho/\sqrt{\rho_{\rm ref}}$ minus that for run type~2 (no input correlations). Systematic bias effects for $\Delta\rho/\sqrt{\rho_{\rm ref}}(y_{t1},y_{t2})$ derived from $\Delta \sigma^2_{p_t:m}$, $\Phi_{p_t}^{(0)}$, $\sigma^2_{p_t,{\rm dynamical}}$ and $F_{p_t}$ are shown in successive rows of panels from the upper-most row to the bottom row, respectively. Modest bias effects (increases) are seen at lower $y_t$ in all cases for the mid-central collisions. Larger bias is seen for LS pairs in peripheral collisions for the $\Delta \sigma^2_{p_t:m}$ and $\sigma^2_{p_t,{\rm dynamical}}$ based forms. A larger systematic bias appears in LS pair, peripheral collisions for the $F_{p_t}$ based correlation. This bias is as large as the expected correlation signal (see Fig.~\ref{Fig1}). 
No significant systematic bias effects were found for US pairs in peripheral collisions. Systematic bias effects due to assumed variations in the $p_t$ distribution shapes for positive and negative charged particles as a function of $n_{\Delta} = n^+ - n^-$ were estimated using the different sets of model parameters described in Sec.~\ref{SecIV}. The largest estimated bias effects resulted from assuming non-zero values for either parameters $T^{\pm}_2$ and $q^{\pm}_2$, or $T^{\pm}_3$ and $q^{\pm}_3$, in the ``mix'' configuration corresponding to simulation run types~6 or 9 in Table~\ref{TableIII}. Results for non-zero $T^{\pm}_2$ and $q^{\pm}_2$ (``mix''), where $\Delta\rho/\sqrt{\rho_{\rm ref}}$ for run type~3 was subtracted from that for run type~6, are shown in Fig.~\ref{Fig6}. The columns of panels show results for LS and US charged-particle pairs for peripheral and mid-central collisions as in Fig.~\ref{Fig3}. Bias results for correlations derived from $\Delta \sigma^2_{p_t:m}$, $\Phi_{p_t}^{(0)}$ and $\sigma^2_{p_t,{\rm dynamical}}$ are similar with small bias effects at lower $y_t$ as shown for $\Delta \sigma^2_{p_t:m}$ in the upper row of panels. Bias effects for the correlation quantity based on $F_{p_t}$ (lower row of panels) are similar except for LS pairs in peripheral collisions, where they are larger. In Fig.~\ref{Fig7} the systematic bias due to $N_{\rm ch}$-dependence in the assumed correlation shape [see the pair weight-factor in Eq.~(\ref{Eq102})] relative to the correlation at the mid-point of the multiplicity bin at $\bar{N}_{\rm ch}$ is shown for LS and US charged-particle pairs for peripheral and mid-central collisions. Specifically, the results shown correspond to $\Delta\rho/\sqrt{\rho_{\rm ref}}$ computed in simulation run type~10 subtracted from that for run type~11 (see Table~\ref{TableIII}).
The bias effects vary in shape and overall amplitude for LS versus US charged-particle pairs, for peripheral versus mid-central collisions, and for each correlation measure. In general, these systematic bias effects are negligible relative to the expected correlation magnitudes. Finally, systematic bias effects for the more-central collisions listed in Tables~\ref{TableI} and \ref{TableII} were found to be approximately twice as large as those shown here for mid-centrality. In realistic correlation analyses~\cite{MikeThesis,LizThesis,axialCI} broad multiplicity bin-widths such as the more-central bin considered here with $N_{\rm ch} \in [644, 910]$ must be sub-divided in order that event-mixing for the reference pair densities does not produce artifacts in the correlation structure. Typically, the maximum allowed range for $N_{\rm ch}$ is 50 which would reduce systematic bias effects to levels no greater than those shown in Figs.~\ref{Fig5} - \ref{Fig7}. Statistical biases for the more-central collisions for correlations derived from $\Delta \sigma^2_{p_t:m}$, $\Phi_{p_t}^{(0)}$, $\sigma^2_{p_t,{\rm dynamical}}$ and $F_{p_t}$ are negligible, even for the full bin-width. \begin{table}[h] \caption{Range of applicability of LS and US charged particle pair correlations based on four mean-$p_t$ fluctuation quantities in terms of the allowed range of multiplicity ($N_{\rm ch} = n^+ + n^-$) within an arbitrary, angular bin.} \label{TableIV} \begin{tabular}{ccc} \hline \hline Fluctuation & & \\ Quantity & LS & US \\ \hline $\Delta \sigma^2_{p_t:m}$ & $N_{\rm ch} > 1$ & $N_{\rm ch} \geq 1$ \\ $\Phi_{p_t}^{(0)}$ & $N_{\rm ch} \geq 1$ & $N_{\rm ch} \geq 1$ \\ $\sigma^2_{p_t,{\rm dynamical}}$ & $N_{\rm ch} > 2$ & $N_{\rm ch} \geq 1$ \\ $F_{p_t}$ & $N_{\rm ch} > 1$ & $N_{\rm ch} \geq 1$ \\ \hline \hline \end{tabular} \end{table} The ranges of applicability of four of the LS and US correlation quantities derived here are summarized in Table~\ref{TableIV}. 
Multiplicity, $N_{\rm ch} = n^+ + n^-$, refers to the event-wise, charged-particle occupancy in arbitrary $(\eta,\phi)$ bins. For the present analysis the $(\eta,\phi)$ bin size was assumed to be equal to the full acceptance of the STAR TPC tracking detector. Requiring $N_{\rm ch} >$ 1 (quantities $\Delta \sigma^2_{p_t:m}$ and $F_{p_t}$ for LS) or $N_{\rm ch} >$ 2 (quantity $\sigma^2_{p_t,{\rm dynamical}}$ for LS) only affects the most-peripheral collision centrality bin in these cases. In general, however, the $(\eta,\phi)$ bin size can be less than the acceptance as explained in Refs.~\cite{ptscale,STARscale}. The $(\eta,\phi)$ bin-wise multiplicity requirement, $N_{\rm ch} >$ 1 or 2, for these three correlation quantities could affect more central collisions and, depending on the collision system and energy and the angular bin-scale, could cause the correlation analyses for these three quantities for LS particle pairs to be unreliable for much of the centrality range. Correlations derived from $\Phi_{p_t}^{(0)}$ do not suffer from the above restrictions on $N_{\rm ch}$. Statistical bias in the simple, event-averaging correlation (Sec.~\ref{SecIIA}) and in the recent NA49 fluctuation quantity (Sec.~\ref{SecIIF}) can only be eliminated by restricting the event acceptance to nearly zero multiplicity width such that $\sigma^2_N$ is negligible. Systematic biases of the types discussed here can be reduced by restricting the centrality range of the events. \section{Summary and Conclusions} \label{SecVI} Two-particle correlations on transverse momentum $(p_{t1},p_{t2})$, or transverse rapidity $(y_{t1},y_{t2})$, contain additional, independent information beyond that accessible with angular correlation measurements. These correlations therefore play an important role in efforts to understand the dynamics involved in relativistic heavy-ion collisions. 
It is essential that transverse momentum correlation measurements, which can be vulnerable to bias effects in the form of distorted shapes and structures, be as free of statistical and systematic biases as possible. Several correlation quantities were studied, most of which were derived from non-statistical mean-$p_t$ fluctuation measurement quantities in the literature. Bias effects were studied both analytically and numerically via Monte Carlo simulations for Au + Au collisions at $\sqrt{s_{NN}}$ = 200~GeV. For the simulations, event multiplicity distributions, $p_t$-spectrum parameters, and estimated correlation distributions were taken from measurements reported in the literature. The simple correlation definition based on pair-number normalization, Eq.~(\ref{Eq1}), includes unknown distortions, while that based on event-averaging, Eq.~(\ref{Eq10}), incurs large statistical bias when the event collections used have a finite range of multiplicities. Five distinctly different correlation quantities were then studied, which were derived from mean-$p_t$ fluctuation quantities $\Delta \sigma^2_{p_t:m}$, $\Phi_{p_t}^{(0)}$, $\sigma^2_{p_t,{\rm dynamical}}$, $F_{p_t}$, and $\Delta [P_T,N]$, $\Sigma [P_T,N]$ in order to ascertain the statistical and systematic biases associated with each. A simplified charge-identified correlation form based on the charge-nonidentified $\Delta \sigma^2_{p_t:m}$ fluctuation quantity was found to have large statistical bias which exceeded the expected magnitude of the correlation signal. A charge-identified correlation form derived from an explicit charge-identified $\Delta \sigma^2_{p_t:m}$ definition did not contain significant statistical bias. Explicit charge identification was therefore included in all of the other mean-$p_t$ fluctuation quantities considered here. 
Statistical bias for like-sign pairs can be problematic (bias effects as large as or larger than the expected correlations) for the $\Delta \sigma^2_{p_t:m}$, $\sigma^2_{p_t,{\rm dynamical}}$ and $F_{p_t}$ based correlations for peripheral collisions or for scale-dependent analyses at any centrality where the event-wise bin occupancies can be as low as 1, 2, and 1, respectively. Statistical bias is not an issue for the like-sign and unlike-sign charged-pair correlations based on $\Phi_{p_t}^{(0)}$; the same is true for unlike-sign correlations for the above three correlation quantities. The applicable ranges in $(\eta,\phi)$ bin-wise multiplicities for the LS and US correlations derived from mean-$p_t$ fluctuation quantities $\Delta \sigma^2_{p_t:m}$, $\Phi_{p_t}^{(0)}$, $\sigma^2_{p_t,{\rm dynamical}}$, and $F_{p_t}$ are listed in Table~\ref{TableIV}. Statistical bias in the correlations derived from $\Delta [P_T,N]$ and $\Sigma [P_T,N]$ can only be controlled by severely limiting the bin-wise multiplicity range such that $\sigma^2_N$ is negligible. Systematic bias due to multiplicity, or centrality, dependence in the single-particle $p_t$ spectrum shape is more evident at lower $(y_{t1},y_{t2})$ and, in the present simulations, is largest for the like-sign, peripheral collision correlations based on quantity $F_{p_t}$. Systematic bias caused by the overall multiplicity dependence of the correlation shape varies with correlation model, charge pair combination, location in $(y_{t1},y_{t2})$, and centrality. For each case studied here this bias is one-tenth or less of the expected correlation magnitudes. Systematic bias magnitudes are proportional to centrality bin-width and therefore can be reduced by limiting the accepted centrality range in the data analysis. Reducing the centrality bin-width for statistically unbiased quantities until stable correlations are achieved is a straightforward way to minimize this type of systematic bias. 
Charge-dependent (CD = LS$-$US) correlations were also studied for each correlation quantity. The like-sign minus unlike-sign sibling pair difference form in Eq.~(\ref{Eq76}) produces spurious results in most cases. The correlated-pair difference form in Eq.~(\ref{Eq77}), $[(\Delta N_{kl}^{++} + \Delta N_{kl}^{--}) - (\Delta N_{kl}^{+-} + \Delta N_{kl}^{-+})]$, does not produce any additional bias beyond that already present in the LS and US charged-particle pair correlations. In conclusion, two-particle correlations on transverse momentum derived from mean-$p_t$ fluctuation quantities $\Delta \sigma^2_{p_t:m}$, $\Phi_{p_t}^{(0)}$, $\sigma^2_{p_t,{\rm dynamical}}$, and $F_{p_t}$ reproduce intrinsic correlation structure at the mid-point of the centrality bin with reasonable fidelity, if the event-wise range of multiplicities ($N_{\rm ch}$) in the $(\eta,\phi)$ angular bin is appropriately restricted as in Table~\ref{TableIV}, and if the collision centrality bin-width is sufficiently narrow. Multiplicity restrictions are an issue for low event-multiplicities and for $(y_{t1},y_{t2})$ correlation analysis at any collision centrality when mean-$p_t$ fluctuations are measured in small $(\eta,\phi)$ bins. For applications to other collision systems, energies, detector acceptances or angular bin scales than that studied here, the impact of the multiplicity restrictions in Table~\ref{TableIV} should be evaluated and the stability of the correlations with respect to centrality bin-width investigated. The analytical analysis and Monte Carlo simulations presented here can be readily extended to other such applications in order to facilitate the reduction of bias in two-particle correlations on transverse momentum. \vspace{0.2in} {\bf Acknowledgements} \vspace{0.1in} The authors would like to thank Professor Tom Trainor of the University of Washington for many informative discussions relevant to this work. This research was supported in part by the Office of Science of the U. S. 
Department of Energy under Grants No. DE-FG02-94ER40845 and No. DE-SC0013391.
\section{Introduction} The power flow problem is \emph{the} cornerstone problem for power systems analyses: find all (complex) quantities in an \textsc{ac} electrical network in steady state. The information drawn from the solution of the power flow problem is relevant for planning power systems, as well as expanding and operating them~\cite{Grainger94book}. Hence, the power flow problem is relevant to all stakeholders maintaining a well-functioning electrical grid, most importantly \gls{tsos} and \gls{dsos}. Traditionally, \gls{tsos} and \gls{dsos} solve and use power flow problems independently of each other, each making modeling assumptions with respect to the other system, e.g. treating the distribution system as a lumped load for the transmission system~\cite{Sun2008}. Clearly, these modeling assumptions---even if they were valid---may lead to real-world mismatches in both power and voltage. Hence---what is a sovereignty-preserving way to solve power flow problems for large power systems that may be composed of several \gls{tsos} and/or \gls{dsos}?\footnote{Sometimes the literature refers to a power flow problem for a combination of \gls{tsos} and \gls{dsos} as a \emph{global power flow} problem~\cite{Sun2008, Sun2015}.\label{foot:global-power-flow}} This is the motivating question of the present paper. To answer this question we focus on the mathematical formulation and solution of the power flow problem. Mathematically, the power flow problem is modeled as a system of nonlinear equations, traditionally solved by Newton-Raphson methods or Gauss-Seidel approaches. These solution techniques may be classified as \emph{centralized approaches}, i.e. the full grid model is available to a single central entity. This entity solves the power flow problem, having access to all information, not just about the problem itself but also about the solution. 
Hence, this established approach is in principle able to solve power systems composed of \gls{tsos} and \gls{dsos} (so-called global power flow problems~\cite{Sun2008, Sun2015})---but only at the cost of giving up sovereignty. Recently, so-called \emph{distributed approaches} have drawn significant academic attention. These are methods for which several entities (or agents) solve sub-problems independently of each other, then broadcast some---but not all---information to a coordinator~\cite{Erseghe2014,Kim1997,Hug2015,Kim2000,Engelmann2019Aladin}. The coordinator then solves a coordination problem, and sends to all entities the information they need to solve their sub-problems again.\footnote{We clearly distinguish between \emph{distributed} and \emph{decentralized} approaches, the latter requiring no central coordinator whatsoever.} This process is repeated until convergence is achieved. The high-level description of distributed approaches suggests several advantages relative to centralized methods: \begin{itemize} \item distribute the computational effort, \item preserve sovereignty and/or privacy, e.g. grid models, \item decrease the vulnerability due to a single-point-of-failure, and \item add flexibility. \end{itemize} The interest in distributed approaches is not just academic; there exists a genuine desire by industry to leverage the advantages for real-world problems. In Germany, for example, the horizontal connection between the four \gls{tsos}---50 Hertz, TenneT, Amprion, and TransnetBW---is based on centralized power flow solutions. However, new legislation and the ongoing German \emph{Energiewende} toward more renewables force the German \gls{tsos} to focus on new vertical cooperation with the numerous \gls{dsos}. For this vertical cooperation, centralized approaches are not favorable mainly due to privacy concerns: the host then combines the roles of data owner and product owner, and introduces a possible single-point-of-failure. 
Hence, it is \emph{not mainly} the distributed computational effort, but rather the increased privacy, reliability, and flexibility that spur the interest of \gls{tsos} in distributed approaches. Large Chinese cities are another example where the combined power flow problem for \gls{tsos} and \gls{dsos} is of relevance; in \cite{Sun2008} it is argued that many Chinese cities are operating both transmission and distribution systems, which are, however, studied and operated separately. If a computational method were available to solve the combined power flow problem in terms of a privacy-preserving distributed problem, this would be helpful~\cite{Sun2008,Sun2015}. In light of the above considerations the present paper contributes as follows: \begin{mylist} \item \label{item:formulations} We present two mathematical formulations of distributed power flow problems as privacy-preserving and physically-consistent distributed optimization problems. \item \label{item:admm-aladin} We rigorously evaluate the applicability of the \acrfull{admm} and the \acrfull{aladin} to distributed power flow problems with up to several thousand buses. \item \label{item:rapidpf} We introduce \gls{rapidpf}: open-source \textsc{matlab}\xspace code fully compatible with \textsc{matpower}\xspace that allows one to generate \textsc{matpower}\xspace case files for distributed power flow problems tailored to distributed optimization; the code is available on \url{https://github.com/KIT-IAI/rapidPF/} under the \textsc{bsd}-3-clause\xspace license. \item \label{item:aladin-alpha} We extend \textsc{aladin}-$\alpha$\xspace, a \textsc{matlab}\xspace rapid-prototyping toolbox for distributed and decentralized non-convex optimization, to allow for user-defined sensitivities and three new solver interfaces (\texttt{fmincon}\xspace, \texttt{fminunc}\xspace, \texttt{worhp}\xspace). \end{mylist} We explain our contributions relative to the state-of-the-art. 
Ad \ref{item:formulations}: The idea to solve a global power flow problem that stands for the combination of \gls{tsos} and \gls{dsos} was popularized by the works~\cite{Sun2008, Sun2015}. Specifically,~\cite{Sun2008} coined the term ``Master-slave-splitting'' to highlight the idea that there is a master system to which several workers are connected;\footnote{We prefer the more appropriate term ``worker'' instead of ``slave''.} also so-called ``boundary systems'' are introduced which make up the physical connection between the master and its workers~\cite{Sun2008}. The solution of the overall power flow problem is done iteratively: initialize the boundary voltages, solve the power flow for the worker systems, then substitute the solution to the boundary system, and solve the master system. This process is executed until the difference of voltage iterates is sufficiently small. Any power flow solver can be used to solve the sub-problems. Unfortunately, no convergence guarantees are provided, and the method was applied to systems with fewer than 200 buses. In the follow-up work~\cite{Sun2015} a convergence analysis is carried out, but its practicability is limited due to mathematical settings---such as the implicit function theorem---that are difficult to relate to real-world criteria and/or data. Also, the simulation results from~\cite{Sun2015} are the ones from~\cite{Sun2008}. It hence remains unclear how well this method scales. Unfortunately, neither \cite{Sun2008} nor \cite{Sun2015} provide plots on the actual convergence behavior of their method, or wall-clock simulation times, or the influence of different initial conditions---all of which are aspects relevant to practitioners. In light of~\cite{Sun2008, Sun2015} the focus of the present paper is on the following: \begin{itemize} \item clear distinction between problem formulation and problem solution; \item two different mathematical problem formulations that make no assumptions on the sub-problems (e.g. 
meshed grids vs. radial grids); \item convergence properties follow from theory of distributed optimization; \item reproducible numerical results for test systems with up to $\approx$ 4000 buses. \end{itemize} Ad \ref{item:admm-aladin}: Recently, distributed optimization techniques have drawn attention for distributed optimal power flow problems. It is especially \gls{admm} that finds widespread application for optimal power flow~\cite{Erseghe2014,Guo2017,Kim2000}. However, \gls{admm}, being a first-order optimization method, often converges relatively quickly to the vicinity of the optimal solution, but then takes numerous iterations to approach satisfying numerical accuracy~\cite{Boyd2011}. In addition, \gls{admm} is known to be rather sensitive to both tuning and the choice of initial conditions~\cite{Sun2013}; line flow limits pose a significant obstacle for \gls{admm}~\cite{Erseghe2014}. Furthermore, convergence guarantees for \gls{admm} apply to convex optimization problems, but optimal power flow is known to be non-convex. There exist distributed optimization methods that are devised for non-convex problems, for instance \gls{aladin}. Being a second-order method, \gls{aladin} has access to curvature information that speeds up convergence, at the expense of having to share more information among the sub-problems. The proof-of-concept applicability of \gls{aladin} to distributed optimal power flow problems has been demonstrated in several recent works~\cite{Engelmann2017Aladin, Engelmann18b, Engelmann2019Aladin}; how to reduce the information exchange among the sub-problems is discussed in~\cite{Engelmann2020BiLevelAladin}. Compared to \gls{admm}, \gls{aladin} has more favorable convergence properties: within a few dozen iterations, convergence to the optimal solution is achieved with satisfying numerical accuracy~\cite{Engelmann2019Aladin}. Just like with \gls{admm}, however, tuning remains a challenge with \gls{aladin}. 
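The first-order behavior of \gls{admm} described above can be illustrated on a toy consensus problem (our own illustration, not the power flow formulation itself): two agents with private quadratic costs $f_i(x) = (x - a_i)^2/2$ agree on a common value of $x$, whose consensus minimizer is the average of the $a_i$. The penalty parameter $\rho$ is chosen arbitrarily here; as noted above, its tuning matters considerably in practice.

```python
# Consensus ADMM for min_x f_1(x) + f_2(x) with f_i(x) = (x - a_i)^2 / 2.
# Each agent performs a closed-form local minimization of
# f_i + (rho/2)(x_i - z + u_i)^2; the coordinator averages; duals update.
rho = 1.0
a = [1.0, 5.0]                       # private data of the two agents
x = [0.0, 0.0]; u = [0.0, 0.0]; z = 0.0
for _ in range(100):
    x = [(a_i + rho * (z - u_i)) / (1.0 + rho) for a_i, u_i in zip(a, u)]
    z = sum(x_i + u_i for x_i, u_i in zip(x, u)) / len(x)   # coordinator step
    u = [u_i + x_i - z for x_i, u_i in zip(x, u)]           # dual update
assert abs(z - 3.0) < 1e-6           # consensus value = mean(a) = 3
```

For this strongly convex toy problem the error contracts geometrically; for the non-convex power flow problems considered below, no such guarantee carries over to \gls{admm}.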
Also, the largest test case to which \gls{aladin} was successfully applied is the 300-bus test case. To summarize: both \gls{admm} and \gls{aladin} have demonstrated their potential for solving distributed optimal power flow problems. It is fair to ask how both methods apply to distributed power flow problems---a question that has not been tackled before to the best of the authors' knowledge. Our findings suggest that for the distributed power flow problem \gls{aladin} outperforms \gls{admm} far more significantly than it does for the optimal power flow problem (in terms of scalability, speed, performance, and tuning). The main advantage of applying established techniques from distributed optimization to distributed power flow is that the convergence guarantees can be leveraged. Ad \ref{item:rapidpf}: For academic power system analyses \textsc{matpower}\xspace is a mature, well-established, and widely adopted open source collection of \textsc{matlab}\xspace code~\cite{Zimmerman2011matpower}. It is not just the many computational facets that \textsc{matpower}\xspace provides that make it popular (power flow and several relaxations, optimal power flow, unit commitment, etc.), but also the so-called \textsc{matpower}\xspace case file format has inspired other open source packages, for instance {PowerModels.jl}~\cite{Coffrin2018PowerModels} or {PyPSA}~\cite{PyPSA}. The \textsc{matpower}\xspace case file format describes a power system with respect to its bus data, generator data, and branch data. Additionally, there is a base MVA value for per-unit conversions, and for optimal power flow problems there is an entry on generator costs. 
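For illustration, the layout of such a case file can be mimicked by a plain Python dict (the numeric entries below are invented placeholders, and the columns are abbreviated relative to the full format documented in the \textsc{matpower}\xspace manual):

```python
# A minimal MATPOWER-style case, transcribed into a Python dict. Bus types
# follow the MATPOWER convention (1 = PQ, 2 = PV, 3 = slack/reference);
# columns shown are a truncated subset of the real format.
case = {
    "baseMVA": 100.0,               # base power for per-unit conversion
    "bus": [                        # bus_i, type, Pd [MW], Qd [MVAr], ...
        [1, 3, 0.0, 0.0],
        [2, 1, 90.0, 30.0],
    ],
    "gen": [                        # bus, Pg [MW], Qg [MVAr], ...
        [1, 100.0, 20.0],
    ],
    "branch": [                     # from-bus, to-bus, r, x, b (per unit), ...
        [1, 2, 0.01, 0.1, 0.02],
    ],
    "gencost": [],                  # only needed for optimal power flow
}
assert case["bus"][0][1] == 3       # bus 1 is the slack bus
```

This is exactly the information \gls{rapidpf} stitches together across several case files when it assembles a distributed power flow problem.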
Based on both the popularity and the maturity of \textsc{matpower}\xspace we provide glue code that solves the following laborious task: given several \textsc{matpower}\xspace case files, and given connection information for these case files, construct a \textsc{matlab}\xspace struct that corresponds to the mathematical problem formulation, and that is amenable to distributed optimization methods. This glue code is called \gls{rapidpf}, and it is publicly available with a rich documentation---and full \textsc{matpower}\xspace compatibility. In addition, \gls{rapidpf} decreases the time-from-idea-to-result, it computes relevant sensitivities (gradients, Jacobians, Hessians), and it comes with post-processing functionalities. The idea of \gls{rapidpf} is inspired by the \textsc{matlab}\xspace packages TDNetGen~\cite{Pilatte2019TDNetGen} and AutoSynGrid~\cite{Sadeghian2020SynGrid}; the code for \gls{rapidpf} is hosted under \url{https://github.com/KIT-IAI/rapidPF/}. At first glance, TDNetGen seems to provide functionality similar to \gls{rapidpf}. As written in the abstract, TDNetGen is \textsc{matlab}\xspace code ``able to generate synthetic, combined transmission and distribution network models''~\cite{Pilatte2019TDNetGen}. Unfortunately, TDNetGen is not as flexible as desired: there is currently no straightforward way to generate TDNetGen's so-called templates from arbitrary \textsc{matpower}\xspace case files. Also, the focus of TDNetGen is on \emph{generating} large test systems, not on \emph{solving} them. In turn, \gls{rapidpf} makes it possible to both generate test systems and \emph{prepare} them for solution by distributed optimization methods such as \gls{admm} and \gls{aladin}. This preparation step must not be underestimated, because providing an interface to distributed solvers is key in making distributed techniques more popular. The focus of AutoSynGrid is on generating numerous test systems with similar statistical properties~\cite{Sadeghian2020SynGrid}. 
Hence, AutoSynGrid is not directly comparable to either TDNetGen or \gls{rapidpf}. Ad \ref{item:aladin-alpha}: The recently published \textsc{matlab}\xspace toolbox \textsc{aladin}-$\alpha$\xspace provides several implementations of both \gls{admm} and \gls{aladin}~\cite{Engelmann2020Aladin}. Its user interface allows the user to provide merely the cost functions, the equality constraints, and the inequality constraints. Besides providing several default parameter settings, \textsc{aladin}-$\alpha$\xspace computes derivatives required for either \gls{admm} or \gls{aladin}. To do so, \textsc{aladin}-$\alpha$\xspace relies internally on \textsc{c}as\textsc{ad}i\xspace, an automatic differentiation framework that also parses the optimization problem to the low level interface of Ipopt~\cite{Andersson2019, Waechter06}. The idea of \textsc{aladin}-$\alpha$\xspace is to provide rapid prototyping capabilities for general distributed optimization problems; it is not specifically tailored to distributed (optimal) power flow problems. From the authors' experience, this all-purpose character in combination with \textsc{c}as\textsc{ad}i\xspace being hard-wired into \textsc{aladin}-$\alpha$\xspace hinders it from being applicable to mid- to large-scale power flow problems. We forked the code and tailored it to the needs of distributed power flow problems: the exact power flow Jacobian is passed from \textsc{matpower}\xspace, Hessian approximations are provided, and three new solvers are interfaced (\texttt{fmincon}\xspace, \texttt{fminunc}\xspace, \texttt{worhp}\xspace). Also, the interface of \textsc{aladin}-$\alpha$\xspace needed substantial changes to allow for passing of user-supplied sensitivities instead of auto-computed sensitivities from \textsc{c}as\textsc{ad}i\xspace. Although the motivation for the present paper is to solve power flow problems for systems composed of \gls{tsos} and \gls{dsos}, the authors stress that this setup is not a requirement. 
The presented methodology is generic in the following sense: \begin{quote} Given $i \in \{1, \hdots, \nregions \}$ power flow problems, and given suitable connection information, what is a coherent methodology for solving the overall power flow problem in a distributed manner? \end{quote} It may be that the individual power flow problems happen to coincide with \gls{tsos} and/or \gls{dsos}, but they can as well be sub-problems of a genuinely large power flow problem that should be solved in a distributed way. In either case, the answer the present paper can provide to the above question is: \begin{quote} If the distributed power flow problem is formulated as a distributed-least squares problem, and if this problem is solved with \gls{aladin} using a Gauss-Newton Hessian approximation, then the solution is found within half a dozen \gls{aladin} iterations for systems ranging from 53 to 4662 buses. \end{quote} \begin{remark}[Partitioning] The present paper assumes that the partitioning of the grid is \emph{given}. For insights on how to partition large grids in computationally advantageous ways we refer to~\cite{Guo2016Partitioning,Murray2019Partitioning,Kyesswa2020Partitiong}. \end{remark} The paper is organized as its title suggests: formulation, solution, implementation, followed by an extensive section on results, and concluding comments. The formulation \autoref{sec:formulation} introduces nomenclature and the mathematical formulation of the distributed power flow problem. The solution \autoref{sec:solution} covers two methods from distributed optimization: \gls{admm} and \gls{aladin}. The implementation is covered in \autoref{sec:implementation}, with a strong focus on the open source \textsc{matlab}\xspace code \gls{rapidpf}. The results \autoref{sec:results} gives both qualitative and quantitative assessments of the approach, clearly demonstrating that the least-squares formulation in combination with \gls{aladin} is the most suitable solution approach. 
Concluding comments in \autoref{sec:conclusion} close the paper. \section{Problem formulation} \label{sec:formulation} Given a single-phase equivalent of a connected AC electrical network in steady state with $\nbus \in \mathbb{N}$ buses, solving the power flow problem means solving a set of nonlinear equations such that the complex voltage and apparent power of all buses of the network are found. The standard way to solve power flow problems is to apply a \emph{centralized} method: a single machine determines the solution, for instance, via Gauss-Seidel or Newton-Raphson methods. An alternative is to distribute the computational effort to several machines, leading to so-called \emph{distributed} approaches. Distributed approaches are promising because they eliminate single points of failure, better preserve privacy, scale up more easily, and foster cooperation between transmission and distribution system operators. The idea of \emph{distributed} power flow is to solve \emph{local} power flow problems within each subsystem, independently of each other, and to find consensus on the physical values of the exchanged power between the subsystems, see \autoref{fig:example-decomposition}. 
\subsection{Nomenclature} \label{sec:nomenclature} \begin{figure*} \centering \begin{subfigure}{0.45\textwidth} \includegraphics[scale=0.45]{tex/tikz-fig-of-pf-problem-a.pdf} \subcaption{Example how to decompose a power grid into three regions $\{1, 2, 3\}$, $\{4, 5, 6, 7 \}$, and $\{8, 9, 10, 11, 12 \}$.% \label{fig:example-decomposition}} \end{subfigure} \hspace{0.4cm} \begin{subfigure}{0.45\textwidth} \includegraphics[scale=0.45]{tex/tikz-fig-of-pf-problem-b.pdf} \subcaption{From the perspective of region $\mathcal{R}_1$, the core buses are buses $\{1, 2, 3 \}$, and the copy buses are buses $\{4, 8\}$.% \label{fig:example-copy-and-core-buses}} \end{subfigure} \caption{Graphical depiction of nomenclature for distributed power flow problems, see \autoref{sec:nomenclature}.} \end{figure*} Before introducing suitable mathematical formulations for distributed power flow, we first fix some nomenclature. To that end, consider \autoref{fig:example-decomposition}, which shows a 12-bus system divided into three subsystems (or so-called \emph{regions}). Suppose we are the operator of region $\mathcal{R}_1 = \{1, 2, 3 \}$, for which we know all electrical parameters as well as all bus specifications, and for which we would like to solve a power flow problem. This requires additional information: the complex voltages of buses $\{4, 8\}$, and the branch parameters of the tie lines---hence, connection information about the neighboring subsystems $\mathcal{R}_2 = \{4, \hdots, 7 \}$ and $\mathcal{R}_3 = \{8, \hdots, 12 \}$.\footnote{We stress that no information about the \emph{net power} of the neighboring buses $\{4, 8 \}$ is required to formulate the power flow equations.} We shall call buses $\{ 1, 2, 3\}$ the \emph{core buses} of region~$\mathcal{R}_1$, and buses $\{4, 8\}$ the \emph{copy buses} of region~$\mathcal{R}_1$; \autoref{fig:example-copy-and-core-buses} highlights the distinction. 
The combination of core buses \emph{and} copy buses allows one to formulate a self-contained power flow problem for every region. \begin{table} \centering \scriptsize \caption{List of symbols for distributed power flow.} \label{tab:list-of-symbols} \begin{tabular}{ll} \toprule Symbol & Meaning \\ \midrule $\nregions$ & Number of regions \\ $\nconnections$ & Number of connecting lines between regions \\ $\ncore_{i}$ & Number of core buses in region $i$ \\ $\ncopy_{i}$ & Number of copy buses in region $i$ \\ \midrule $\stateCore_{i}$ & Electrical state of core buses in region $i$ \\ $\stateCopy_{i}$ & Electrical state of copy buses in region $i$\\ \bottomrule \end{tabular} \end{table} \subsection{Distributed power flow} \autoref{tab:list-of-symbols} introduces the notation we use from here on: we consider a finite number $i \in \{1, \hdots, \nregions \}$ of regions. The (electrical) state~$\stateCore_i$ of region~$i$ contains the voltage angles, the voltage magnitudes, the net active power, and the net reactive power of all \emph{core} buses \begin{align} \label{eq:state-core} \stateCore_i = \begin{pmatrix} {\theta_i^\text{core}} & {v_i^\text{core}} & {p_i^\text{core}} & {q_i^\text{core}} \end{pmatrix} \in \mathbb{R}^{4 \ncore_i}. \end{align} The (electrical) state~$\stateCopy_i$ of region~$i$ contains the voltage angles and the voltage magnitudes of all \emph{copy} buses \begin{align} \label{eq:state-copy} \stateCopy_i = \begin{pmatrix} \theta_i^\text{copy} & v_i^\text{copy} \end{pmatrix} \in \mathbb{R}^{2 \ncopy_i}. \end{align} Hence, each region~$i$ is represented by a total of $4 \ncore_i + 2 \ncopy_i$ real numbers. 
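As a quick sanity check of this bookkeeping, the state dimension can be evaluated for the example regions of \autoref{fig:example-decomposition} (the helper function is ours, introduced for illustration only):

```python
def n_state(n_core, n_copy):
    """Number of real decision variables of a region: (theta, v, p, q)
    for each core bus plus (theta, v) for each copy bus."""
    return 4 * n_core + 2 * n_copy

# Region R1 = {1, 2, 3} has 3 core buses and the 2 copy buses {4, 8}:
assert n_state(3, 2) == 16
```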
For all core buses of region~$i$ the respective $2 \ncore_i$ power flow equations $\pf_i \colon \mathbb{R}^{4 \ncore_i} \times \mathbb{R}^{2 \ncopy_i} \rightarrow \mathbb{R}^{2 \ncore_i}$, and the respective $2 \ncore_i$ bus specifications $\busspecs_i \colon \mathbb{R}^{4 \ncore_i} \rightarrow \mathbb{R}^{2 \ncore_i}$ make up the power flow problem for this very region~\cite{Frank2016}.\footnote{The copy buses are required solely to formulate the power flow equations.} Subtracting the number of equations from the number of decision variables gives a total of \begin{align} \label{eq:dofs} \underbrace{4 \ncore_i + 2 \ncopy_i}_{\text{Decision vars.}} - \underbrace{2 \ncore_i}_{\text{Power flow eqns.}} - \underbrace{2 \ncore_i}_{\text{Bus specs.}} = 2 \ncopy_i \end{align} missing equations per region~$i$.\footnote{Note that $\textstyle\sum_{i = 1}^{\nregions} \ncopy_i \equiv 2 \nconnections$---we introduce two copy nodes for every line connecting two regions, yielding a total of $4 \nconnections$ missing equations.\label{foot:missing-equations}} It remains to formalize the information that every \emph{copy} bus from region~$i$ corresponds to a \emph{core} bus from a neighboring region~$j \neq i$. An example: in \autoref{fig:example-copy-and-core-buses}, bus~$4$ is a copy bus of region~$\mathcal{R}_1$, and it is a core bus of region~$\mathcal{R}_2$. Hence, their complex voltage must be identical. Having introduced the nomenclature we formulate distributed power flow mathematically as follows \begin{subequations} \label{eq:dist-power-flow-problem} \begin{align} \label{eq:dist-power-flow-problem-pf} \pf_i( \stateCore_i, \stateCopy_i ) &= 0 & \forall i \in \{1, \hdots, \nregions \}\\ \label{eq:dist-power-flow-problem-busspecs} \busspecs_i ( \stateCore_i ) &= 0 & \forall i \in \{1, \hdots, \nregions \}\\ \label{eq:dist-power-flow-problem-consensus} \sum_{i = 1}^{\nregions} A_i \begin{bmatrix} \stateCore_i \\ \stateCopy_i \end{bmatrix} &= 0. 
\end{align} \end{subequations} The local power flow problem for region~$i$ is given by \eqref{eq:dist-power-flow-problem-pf} and \eqref{eq:dist-power-flow-problem-busspecs}, see \autoref{rema:power-flow-equations} and \autoref{rema:bus-specifications}; the so-called consensus matrices $A_i \in \mathbb{R}^{4 \nconnections \times (4 \ncore_i + 2 \ncopy_i)}$ enforce equality of the voltage angle and the voltage magnitude at the copy buses and their respective core buses, hence they provide the remaining $4 \nconnections = \textstyle\sum_{i = 1}^{\nregions} 2 \ncopy_i$ missing equations, see \autoref{foot:missing-equations}. \begin{remark}[Power flow equations] \label{rema:power-flow-equations} The specific form of the regional power flow equations~$\pf_i(\cdot)$ in~\eqref{eq:dist-power-flow-problem} is arbitrary. Nevertheless, we chose polar coordinates for the voltage phasors when defining the electrical state in~\eqref{eq:state-core}. In that case, the regional power flow equations are \begin{subequations} \begin{align} p_j &= v_j \sum_{k = 1}^{n_i} v_k \left( G_{jk} \cos(\delta_j - \delta_k) + B_{jk} \sin(\delta_j - \delta_k) \right) \\ q_j &= v_j \sum_{k = 1}^{n_i} v_k \left( G_{jk} \sin(\delta_j - \delta_k) - B_{jk} \cos(\delta_j - \delta_k) \right) , \end{align} \end{subequations} for all buses~$j$ from region~$i$; the bus admittance matrix entries are~$Y_{jk} = G_{jk} + \mathrm{j} B_{jk}$. For further details we refer to the excellent primer~\cite{Frank2012Primer}. \end{remark} \begin{remark}[Bus specifications] \label{rema:bus-specifications} For conventional power flow studies, each bus is modelled as one of the following: \begin{itemize} \item \emph{Slack bus:} The voltage magnitude and the voltage angle are fixed; the net active and the reactive power are determined by the power flow solution. 
\item \emph{\textsc{pq}\xspace/load bus:} The active power and the reactive power are fixed; the voltage magnitude and the voltage angle are determined by the power flow solution. \item \emph{\textsc{pv}\xspace/voltage-controlled bus:} The active power and the voltage magnitude are fixed; the reactive power and the voltage angle are determined by the power flow solution. \end{itemize} Mathematically, these requirements are simple equality constraints of the form~$\busspecs_i(\cdot)$ for every region~$i$. \end{remark} \begin{remark}[Physical consistency] \label{rema:consistence} The concept of \emph{core buses} and \emph{copy buses} makes it possible to compose the distributed power flow problem in a physically consistent manner: no additional modeling assumptions are introduced or required. If a solution to the distributed problem is found, then it is also a solution to the respective centralized power flow problem. In other words, copying buses \emph{does not} introduce a structural numerical error~\cite{Erseghe2014}. Other approaches, such as ``cutting'' connecting tie lines and enforcing equality of the electrical state at the intersection~\cite{Engelmann2020Aladin}, are in general \emph{not} physically consistent (consistency holds only in the absence of line capacitance). Hence, even if the true solution to the distributed problem is found, this solution is not numerically identical to the solution of the centralized power flow problem. In other words, cutting lines generally \emph{does} introduce a structural numerical error. \end{remark} \begin{remark}[Privacy] \label{rema:privacy} To formulate the power flow equations for region~$i$, the voltage information of the copy buses needs to be shared among neighboring regions; this is inherent to the idea of \emph{core} and \emph{copy} buses.
Although this means having to share data, the copy bus voltage data (i) contains little privacy-sensitive information, yet (ii) allows for a physically consistent problem formulation, see \autoref{rema:consistence}. \end{remark} \subsection{Distributed optimization problem} The distributed power flow problem from~\eqref{eq:dist-power-flow-problem} is a system of nonlinear equations. In contrast to the standard power flow problem, however, Problem~\eqref{eq:dist-power-flow-problem} is in a form amenable to distributed optimization. Specifically, we propose to solve Problem~\eqref{eq:dist-power-flow-problem} either as a \emph{distributed feasibility problem} \begin{subequations} \label{eq:dist-feasibility-problem} \begin{align} \underset{\underset{\forall i \in \{1, \dots, \nregions\}}{\stateCore_i, \stateCopy_i }}{\operatorname{min}} \: 0 \quad \operatorname{s.t.}\\ \pf_i( \stateCore_i, \stateCopy_i ) &= 0 \label{eq:pfeq}\\ \busspecs_i ( \stateCore_i ) &= 0 \label{eq:busSpec} \\ \sum_{i = 1}^{\nregions} A_i \begin{bmatrix} \stateCore_i \\ \stateCopy_i \end{bmatrix} &= 0, \end{align} \end{subequations} or as a \emph{distributed least-squares problem}\footnote{If not stated otherwise, we have $\| \cdot \| \equiv \| \cdot \|_2$.} \begin{align} \label{eq:dist-least-squares-problem} \underset{\underset{\forall i \in \{1, \dots, \nregions\}}{\stateCore_i, \stateCopy_i }}{\operatorname{min}} \sum_{i = 1}^{\nregions} \: \norm{\begin{bmatrix} \pf_i( \stateCore_i, \stateCopy_i ) \\ \busspecs_i ( \stateCore_i ) \end{bmatrix}}^2 \quad \operatorname{s.t.} \quad \sum_{i = 1}^{\nregions} A_i \begin{bmatrix} \stateCore_i \\ \stateCopy_i \end{bmatrix} = 0. \end{align} Necessarily, any solution of the distributed feasibility problem is also a solution of the distributed least-squares problem.
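To make the regional building blocks concrete, the polar power flow mismatch from \autoref{rema:power-flow-equations} and the regional summand of the least-squares objective~\eqref{eq:dist-least-squares-problem} can be written down in a few lines. The following Python sketch is purely illustrative (the toolchain presented later is \textsc{matlab}\xspace code; all function names here are hypothetical):

```python
import math

def power_flow_residual(v, delta, p, q, G, B):
    """Mismatch of the polar power flow equations for one region.

    Stacks p_j - v_j * sum_k v_k (G_jk cos(d_jk) + B_jk sin(d_jk)) and
    q_j - v_j * sum_k v_k (G_jk sin(d_jk) - B_jk cos(d_jk)) for all buses j,
    where d_jk = delta_j - delta_k and Y_jk = G_jk + j B_jk.
    """
    res = []
    for j in range(len(v)):
        pj = v[j] * sum(v[k] * (G[j][k] * math.cos(delta[j] - delta[k])
                                + B[j][k] * math.sin(delta[j] - delta[k]))
                        for k in range(len(v)))
        qj = v[j] * sum(v[k] * (G[j][k] * math.sin(delta[j] - delta[k])
                                - B[j][k] * math.cos(delta[j] - delta[k]))
                        for k in range(len(v)))
        res += [p[j] - pj, q[j] - qj]
    return res

def least_squares_cost(res):
    """Regional contribution to the distributed least-squares objective."""
    return sum(r * r for r in res)
```

For a hypothetical two-bus region joined by a purely resistive unit line (so G = [[1, -1], [-1, 1]] and B = 0), a flat voltage profile carries no power, and the residual vanishes exactly when zero injections are specified.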
To summarize, we propose to formulate distributed power flow problems as either a distributed feasibility problem~\eqref{eq:dist-feasibility-problem} or a distributed least-squares problem~\eqref{eq:dist-least-squares-problem}. Both formulations follow a divide-and-conquer strategy: formulate power flow problems for every region, and couple them by enforcing equal voltages at the connecting buses. The privacy overhead for the regional power flow problems is limited: only the voltage information of the connecting buses is required to formulate the regional power flow equations. Both of the given formulations---feasibility~\eqref{eq:dist-feasibility-problem} and least-squares~\eqref{eq:dist-least-squares-problem}---are special cases of a more general problem formulation. We shall state the general problem formulation in order to simplify the solution algorithms to follow. Using \begin{subequations} \begin{equation} \regions = \{1, \hdots, \nregions \}, \end{equation} we define \label{eq:sepForm} \begin{align} \underset{\underset{\forall i \in \regions}{\DistOptLocalState_i }}{\operatorname{min}}~ \sum_{i \in \regions} f_i(\DistOptLocalState_i) \\ \text{subject to }\quad g_i(\DistOptLocalState_i)&=0 \qquad \forall i \in \regions \label{eq:sepProbGi} \\ \sum_{i \in \regions} A_i \DistOptLocalState_i &= 0, \label{eq:consConstr} \end{align} \end{subequations} where $\DistOptLocalState_i = ( \stateCore_i, \stateCopy_i )$ combines the core bus state and the copy bus state for region~$i$. The consensus constraints~\eqref{eq:consConstr} are identical for either problem formulation; the correspondence of the cost and the equality constraints is summarized in \autoref{tab:correspondences}. \begin{table}[h!]
\scriptsize \centering \caption{Correspondence of terms from general problem~\eqref{eq:sepForm} to feasibility problem~\eqref{eq:dist-feasibility-problem} and to least-squares problem~\eqref{eq:dist-least-squares-problem}.} \label{tab:correspondences} \begin{tabular}{ccc} \toprule Term from~\eqref{eq:sepForm}& Feasibility problem~\eqref{eq:dist-feasibility-problem} & Least-squares problem~\eqref{eq:dist-least-squares-problem} \\ \midrule $f_i(\DistOptLocalState_i) = $ & 0 & $\norm{\begin{bmatrix} \pf_i( \stateCore_i, \stateCopy_i ) \\ \busspecs_i ( \stateCore_i ) \end{bmatrix}}^2$ \\ $g_i(\DistOptLocalState_i) = $ & $\begin{bmatrix} \pf_i( \stateCore_i, \stateCopy_i ) \\ \busspecs_i ( \stateCore_i ) \end{bmatrix} $ & n/a \\ \bottomrule \end{tabular} \end{table} \begin{remark}[Nonlinear least-squares problems~\cite{Nocedal2006}] \label{rema:nonlinear-least-squares} Least-squares problems have been studied extensively. Besides being an intuitive formulation of the problem at hand, least-squares problems provide rich structure that can be exploited. For nonlinear least-squares problems, it is well-known that Gauss-Newton methods work well. Instead of applying a full Gauss-Newton method, we merely exploit the fact that the Hessian matrix can be approximated by the product of the transposed residual Jacobian with the Jacobian itself. \end{remark} \section{Problem solution} \label{sec:solution} Two viable methods to tackle distributed optimization problems of the form~\eqref{eq:dist-feasibility-problem} or~\eqref{eq:dist-least-squares-problem} are \gls{admm} and \gls{aladin}; we provide a brief overview of both. In the following, the superscript \textsuperscript{$k$} denotes the $k$\textsuperscript{th} iterate; the superscript \textsuperscript{$0$} hence denotes the initial condition. \begin{remark}[Wording] \label{rema:wording} Different problems bring about different wording.
In the problem formulation in \autoref{sec:formulation} we speak of ``regions'', because the power flow problem is usually related to an existing physical region. In the problem solution to follow, however, we prefer to speak of ``subsystems'', and ``local problems'', because the optimization problems that need to be solved in parallel need not resemble anything that exists in the physical world. \end{remark} \subsection{\gls{admm}} \label{sec:solution-admm} \input{admm.tex} \subsection{\gls{aladin}} \label{sec:solution-aladin} \input{aladin.tex} \section{Implementation} \label{sec:implementation} The problem formulations (\autoref{sec:formulation}) and suggested solutions (\autoref{sec:solution}) are moot without means to actually implement, execute, and validate them. We introduce \gls{rapidpf}, an open source \textsc{matlab}\xspace code that tackles the problem formulation, and we present an extension to \textsc{aladin}-$\alpha$\xspace, an open source \textsc{matlab}\xspace code that deals with the problem solution. \subsection{\Acrfull{rapidpf}} Although there exist several excellent open-source tools to model, study, and solve (optimal) power flow problems (e.g. \textsc{matpower}\xspace in \textsc{matlab}\xspace~\cite{Zimmerman2011matpower}, PowerModels\xspace in Julia~\cite{Coffrin2018PowerModels}, or pandapower\xspace in Python~\cite{pandapower.2018}), the same cannot be said for \emph{distributed} (optimal) power flow problems---to the best of the authors' knowledge. 
To overcome the tedious, error-prone, and laborious task of formulating distributed power flow problems and of interfacing distributed optimization methods, we provide open source \textsc{matlab}\xspace code for \gls{rapidpf}, which automates the following task: \begin{figure*} \scriptsize \centering \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto, node distance = 2mm and 5mm] \pgfdeclarelayer{bg} \pgfsetlayers{bg, main} \node[rectangle, fill=blue!20](connection_information){Connection information}; \node[rectangle, fill=teal!20](case_file_1)[below = of connection_information]{Case file 1}; \node[rectangle, fill=teal!20](case_file_2)[below = of case_file_1]{Case file 2}; \node[rectangle, fill=teal!20](case_file_dots)[below = of case_file_2]{\dots}; \node[rectangle, fill=teal!20](case_file_n)[below = of case_file_dots]{Case file \nregions}; \node[rectangle, fill=gray!40](case_file_generator)[right = of case_file_2]{Case file generator}; \node[rectangle, fill=gray!40](case_file_splitter)[right = of case_file_generator]{Case file splitter}; \node[rectangle, fill=gray!40](case_file_parser)[right = of case_file_splitter]{Case file parser}; \node[rectangle, fill=orange!20](aladin)[right = of case_file_parser, yshift=-9mm]{Distributed least-squares problem~\eqref{eq:dist-least-squares-problem}}; \node[rectangle, fill=orange!20](admm)[right = of case_file_parser, yshift=9mm]{Distributed feasibility problem~\eqref{eq:dist-feasibility-problem}}; \begin{pgfonlayer}{bg} \node[draw, draw=none, rounded corners, fill=gray!10, fit={($(case_file_generator.south west)+(-2.5mm, -6mm)$) ($(case_file_parser.north east)+(2.5mm, 6mm)$)}, inner sep=0pt, label=above:Open source \textsc{matlab}\xspace code \gls{rapidpf}] {}; \end{pgfonlayer} \draw[->] (connection_information.east) to [](case_file_generator); \draw[->] (case_file_1.east) to (case_file_generator); \draw[->] (case_file_2.east) to (case_file_generator.west); \draw[->] (case_file_dots.east) to
(case_file_generator); \draw[->] (case_file_n.east) to (case_file_generator); \draw[->] (case_file_generator.east) to (case_file_splitter.west); \draw[->] (case_file_splitter.east) to (case_file_parser.west); \draw[->] (case_file_parser) to (admm.west); \draw[->] (case_file_parser) to (aladin.west); \end{tikzpicture} \caption{Flow chart for \gls{rapidpf} depicting its inputs (case files \& connection information) and its output (\textsc{matlab}\xspace struct compatible with \textsc{aladin}-$\alpha$\xspace).} \label{fig:rapidpf-flow-chart} \end{figure*} \begin{quote} Given $\nregions$ \textsc{matpower}\xspace case files for all $i \in \{1, \hdots, \nregions \}$ regions, and given information about how the $i \in \{1, \hdots, \nregions \}$ regions are connected, generate a \textsc{matlab}\xspace struct compatible with \textsc{aladin}-$\alpha$\xspace. \end{quote} The features of \gls{rapidpf} include: \begin{itemize} \item \emph{Rapid prototyping:} \gls{rapidpf} decreases the time-from-idea-to-result. \item \emph{Compatibility:} \gls{rapidpf} is compatible with \textsc{matpower}\xspace and \textsc{aladin}-$\alpha$\xspace. All generated case files can be visualized, for example, with the excellent ``Steady-State AC Network Visualization in the Browser''\footnote{Available at \url{https://immersive.erc.monash.edu/stac/}.}. \item \emph{Comparability:} \gls{rapidpf} generates \textsc{matpower}\xspace case files that can be validated by \textsc{matpower}\xspace functions such as \texttt{runpf()}. \item \emph{Sensitivities:} \gls{rapidpf} generates function handles for gradients, Jacobians, and Hessians. \item \emph{Documentation:} \gls{rapidpf} comes with a self-contained and user-friendly online documentation. \item \emph{Open source:} \gls{rapidpf} is publicly available under the \textsc{bsd}-3-clause\xspace license on \url{https://github.com/KIT-IAI/rapidPF/}.
\item \emph{Post-processing:} \gls{rapidpf} provides rich post-processing functionalities to analyze the results quickly and intuitively. \end{itemize} The code of \gls{rapidpf} is made up of three components: the case file generator, the case file splitter, and the case file parser, see \autoref{fig:rapidpf-flow-chart}. The case file generator requires as inputs several \textsc{matpower}\xspace case files in combination with their connection information; the connection information encodes \emph{who} is connected to \emph{whom} and \emph{by what} (kind of branch and/or transformer). The regions can be connected in (almost) arbitrary ways, see \autoref{fig:rapidpf-supported-connections}.\footnote{The only exception is that any two buses may be connected by at most one line. \autoref{rema:connecting-buses} provides further guidance about the assumptions on how buses can be connected.} The output of the case file generator is a \textsc{matpower}\xspace-compatible merged case file. This merged case file is generated for validation purposes: it provides a reference solution that can be computed, for instance, by running \textsc{matpower}\xspace's \texttt{runpf()} command. The splitter adds information to each of the $\nregions$ case files about its core buses and copy buses. Finally, the parser takes the augmented case files, and generates an \textsc{aladin}-$\alpha$\xspace-compatible \textsc{matlab}\xspace struct that describes the problem either as a distributed feasibility problem~\eqref{eq:dist-feasibility-problem} or as a distributed least-squares problem~\eqref{eq:dist-least-squares-problem}. The parser also generates sensitivities of the power flow problem, namely the Jacobian of the power flow equations and bus specifications as well as their Hessian information. \begin{remark}[Sensitivities] \label{rema:sensitivities} All first- and second-order optimization methods require derivative information. Hence, \gls{rapidpf} provides it for the user.
The gradient of the local cost function and the Jacobian of the power flow problem are provided as exact analytical expressions. The Hessian matrix---required only for \gls{aladin} but not \gls{admm}---is approximated by one of four methods: finite differences, \gls{bfgs}, limited-memory \gls{bfgs}, or Gauss-Newton. The first three methods can be applied to both problem formulations (feasibility~\eqref{eq:dist-feasibility-problem} and least-squares~\eqref{eq:dist-least-squares-problem}); Gauss-Newton is a method tailored to nonlinear least-squares problems~\cite{Nocedal2006}, hence applies only to the least-squares formulation~\eqref{eq:dist-least-squares-problem}. \end{remark} \begin{remark}[Connecting buses] \label{rema:connecting-buses} A few more words are in order about how systems can be connected within the case file generator. First, we formally distinguish between the \emph{master system} and its \emph{worker systems}. The sole difference is that (without loss of generality) the slack bus of the overall system is the slack bus of the master system. The connection between two systems is directed, imposing a natural distinction between the \emph{from}- and \emph{to}-system. For instance, consider the line connecting the \emph{Master} and \emph{Worker 1} in \autoref{fig:rapidpf-supported-connections}: the \emph{Master} is the \emph{from}-system, \emph{Worker 1} is the \emph{to}-system. The connecting buses in both the \emph{from}- and the \emph{to}-system must be generation buses, hence either a slack bus or a \textsc{pv}\xspace bus. If the connecting bus in the \emph{to}-worker-system is the slack bus, then this slack bus is replaced by a \textsc{pq}\xspace bus with zero generation and zero demand. If the connecting bus in the \emph{to}-worker-system is a \textsc{pv}\xspace bus, then this \textsc{pv}\xspace bus is replaced by a \textsc{pq}\xspace bus with zero generation and its original demand.
If no connecting bus in the \emph{to}-worker-system is the slack bus, then the worker system's slack bus is replaced by a \textsc{pv}\xspace bus; the respective set points for the active power and the voltage magnitude are taken from the \textsc{matpower}\xspace case file entries in \texttt{mpc.gen}. \end{remark} \begin{figure} \centering \scriptsize \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto, node distance = 4mm and 4mm, minimum width=1.0cm] \node[rectangle_algo](master){Master}; \node[rectangle_algo, draw=none, fill=white](dummy)[right = of master]{}; \node[rectangle_algo](worker_2)[right = of dummy]{Worker 2}; \node[rectangle_algo](worker_1)[above=of dummy]{Worker 1}; \draw[->] (master) to (worker_1.west); \draw[->] (master) to (worker_2); \draw[->] (worker_1.east) to (worker_2); \draw[->] (worker_2) to (worker_1); \end{tikzpicture} \caption{Supported types of connections between regions.} \label{fig:rapidpf-supported-connections} \end{figure} \subsection{Extensions to \textsc{aladin}-$\alpha$\xspace} Whereas \gls{rapidpf} is \textsc{matlab}\xspace code tailored to simplify and streamline the \emph{problem formulation}, the open source \textsc{matlab}\xspace code \textsc{aladin}-$\alpha$\xspace is used to tackle the \emph{problem solution}~\cite{Engelmann2020Aladin}. \textsc{aladin}-$\alpha$\xspace provides tested implementations and several variants of both \gls{admm} and \gls{aladin}. Under the hood, \textsc{aladin}-$\alpha$\xspace depends to a large degree on \textsc{c}as\textsc{ad}i\xspace---an open source tool for algorithmic differentiation---and Ipopt as the solver for nonlinear programs. Unfortunately, the sole dependency on \textsc{c}as\textsc{ad}i\xspace and Ipopt prevents the distributed methods of \textsc{aladin}-$\alpha$\xspace from being applied to medium- to large-scale power systems (as we shall discuss in \autoref{sec:results}).
Hence, we created a separate branch of \textsc{aladin}-$\alpha$\xspace that allows using the user-defined sensitivities from \gls{rapidpf}, and that allows interfacing different solvers such as \texttt{fmincon}\xspace, \texttt{fminunc}\xspace,\footnote{The solvers \texttt{fmincon}\xspace and \texttt{fminunc}\xspace are part of \textsc{matlab}\xspace's Optimization Toolbox\texttrademark.} or \texttt{worhp}\xspace~\cite{Kuhlmann2018WORHP}, see also the right-hand side of \autoref{fig:formulation-solution-solvers}. \section{Results} \label{sec:results} \begin{figure*} \scriptsize \centering \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto, node distance = 2mm and 5mm] \pgfdeclarelayer{bg} \pgfsetlayers{bg, main} \node[rectangle_algo](feasibility){Feasibility~\eqref{eq:dist-feasibility-problem}}; \node[rectangle_algo](least-squares)[below = of feasibility]{Least-squares~\eqref{eq:dist-least-squares-problem}}; \node[rectangle_algo](sensitivities)[right = of feasibility]{Sensitivities}; \node[rectangle_algo](admm)[right = of sensitivities]{\gls{admm}}; \node[rectangle_algo](aladin)[below = of admm]{\gls{aladin}}; \node[rectangle_algo](fminunc)[right = of admm]{\texttt{fminunc}\xspace}; \node[rectangle_algo](casadi)[above = of fminunc]{\textsc{c}as\textsc{ad}i\xspace \& Ipopt}; \node[rectangle_algo](fmincon)[below = of fminunc]{\texttt{fmincon}\xspace}; \node[rectangle_algo](worhp)[below = of fmincon]{\texttt{worhp}\xspace}; \draw[->] (feasibility.east) to (sensitivities.west); \draw[->] (least-squares.east) to (sensitivities.west); \draw[->] (sensitivities.east) to (admm.west); \draw[->] (sensitivities.east) to (aladin.west); \draw[->, dashed] (admm.east) to (casadi.west); \draw[->, dashed] (aladin.east) to (casadi.south west); \draw[->, dotted] (aladin.east) to (fminunc.west); \draw[->, dotted] (aladin.east) to (fmincon.west); \draw[->, dotted] (aladin.east) to (worhp.west); \draw[->, dashed] (current bounding box.north west)++(0, 0em) -- ++(2em, 0) node[right]
{\textsc{aladin}-$\alpha$\xspace}; \draw[->, dotted] (current bounding box.north west)++(0, -2em) -- ++(2em, 0) node[right] {Newly interfaced solvers}; \end{tikzpicture} \caption{Problem formulation, problem solution, and interfaced solvers.} \label{fig:formulation-solution-solvers} \end{figure*} We turn to numerical results for power systems of various sizes. We examine several combinations of the two problem formulations---feasibility~\eqref{eq:dist-feasibility-problem} and least-squares~\eqref{eq:dist-least-squares-problem}---and the two solution methods---\gls{admm} and \gls{aladin}, paired with different ways to compute sensitivities and to interface different solvers, see \autoref{fig:formulation-solution-solvers}. The section is organized top-down: we begin with qualitative comparisons of \gls{admm} and \gls{aladin}, then examine the least-squares problem in combination with \gls{aladin} (for different solvers and different Hessian approximations). The final subsection analyzes the convergence behavior for a 4662-bus system. Our main finding is that the least-squares formulation with \gls{aladin} and a Gauss-Newton Hessian approximation outperforms all other combinations. \begin{remark}[Settings common to all examples] \label{rema:settings} For all following examples, the connecting lines between all regions are modelled as transformers with a per-unit reactance of 0.00623, and a tap ratio of 0.985; the resistance, the total line charging susceptance, and the transformer phase shift angle are set to zero.\footnote{In light of \autoref{rema:wording} we switch back to referring to ``subsystems'' as ``regions'' and so on.} The initial condition for the primal state (i.e.
the state of the electrical grid) is created from the \textsc{matpower}\xspace case files as follows: the voltage angle and voltage magnitude are initialized with their respective entries in the \texttt{bus} struct; similarly, the net active power and the net reactive power are initialized as the difference between the respective summed entries in the \texttt{gen} struct and the \texttt{bus} struct. All dual variables are set to 0.01 initially. All computed solutions are verified relative to the reference solution provided by the \textsc{matpower}\xspace command \texttt{runpf()}. \end{remark} \subsection{Qualitative comparison} \label{sec:qualitative-comparison} \begin{figure} \centering \begin{subfigure}{1\textwidth} \includegraphics[width=\textwidth]{tex/different_rho_error.pdf} \includegraphics[width=\textwidth]{tex/different_rho_violation.pdf} \caption{Varying penalty parameter~$\rho = 10^{n}$ for $n \in \{-1, 0, 3, 5 \}$.} \label{fig:admm-rho-dependence} \end{subfigure} \begin{subfigure}{1\textwidth} \includegraphics[width=\textwidth]{tex/different_initial_point_error.pdf} \includegraphics[width=\textwidth]{tex/different_initial_point_violation.pdf} \caption{Varying initial conditions $\DistOptLocalState^\star + \sigma \hat{\DistOptLocalState}$, where $\DistOptLocalState^\star$ is the true solution, $\hat{\DistOptLocalState}$ is a vector whose entries are samples from a standard normal distribution, and $\sigma \in \{0.01, 0.1, 1, 10 \}$ is the standard deviation of the perturbation.} \label{fig:admm-initial-point-dependence} \end{subfigure} \caption{Convergence behavior of \gls{admm} for a feasibility problem formulation of the 53-bus test case from \autoref{tab:computing-times}. In each subplot, the upper plot shows the distance to the optimal solution, and the lower plot shows the violation of the consensus constraints.} \end{figure} \begin{table} \centering \scriptsize \caption{Qualitative comparison of both problem formulations (feasibility vs.
least-squares) and their solution by either \gls{admm} or \gls{aladin}.} \label{tab:qualitative-comparison} \renewcommand{\arraystretch}{1.3} \begin{tabular}{lcccg} \toprule & \multicolumn{2}{c}{Feasibility problem~\eqref{eq:dist-feasibility-problem}} & \multicolumn{2}{c}{Least-squares problem~\eqref{eq:dist-least-squares-problem}}\\ & \gls{admm} & \gls{aladin} & \gls{admm}& \cellcolor{white}\gls{aladin} \\ \midrule Scalability & -{}- & - & - & ++ \\ Speed & -{}- & + & - & ++ \\ Performance & -{}- & + & - & ++ \\ Tuning & -{}- & - & - & + \\ \bottomrule \end{tabular} \end{table} We begin with a qualitative comparison of the applicability of both solution methods (\gls{admm} and \gls{aladin}) to both problem formulations (feasibility and least-squares); we base our qualitative findings on a total of 7 test cases that are summarized in the first four columns of \autoref{tab:computing-times}. From \autoref{tab:qualitative-comparison}, which summarizes our qualitative findings, it appears that \gls{admm} is unsuitable for either problem formulation. The performance of \gls{admm} depends critically on both the choice of the penalty parameter and the initial condition. \autoref{fig:admm-rho-dependence} shows the convergence behavior for \gls{admm} applied to the feasibility formulation of the 53-bus test case from \autoref{tab:computing-times}. For various choices of the penalty parameter~$\rho$, \gls{admm} exhibits erratic and overall unsatisfactory convergence properties. Most of the considered cases (the ones from \autoref{tab:computing-times}) did not converge successfully even after extensive parameter sweeps. \autoref{fig:admm-rho-dependence} shows the influence of the choice of the penalty parameter~$\rho$, and the often-encountered convergence behavior with \gls{admm}: after relatively few iterations, the solution is in the vicinity of the optimal solution, but it takes several hundred iterations before further refinement occurs.
And even then, the solution is far from accurate. \autoref{fig:admm-initial-point-dependence} shows the critical dependence on the (primal) initial condition: when the primal initial condition is perturbed around the optimal solution, the entire optimization process is prolonged significantly. In contrast to \gls{admm}, \gls{aladin} appears well suited to solving the distributed power flow problems from \autoref{tab:computing-times}. In all qualitative aspects we consider (scalability, speed, performance, tuning), the least-squares formulation outperforms the feasibility counterpart by far, see \autoref{tab:qualitative-comparison}. It is especially the aspect of scalability that hinders the feasibility problem: for instance, the 354-bus test case from \autoref{tab:computing-times} already took 38.2 s to solve with \texttt{fmincon}\xspace, and converged within 14 \gls{aladin} iterations. An explanation for this behavior may be the zero-cost objective function for the feasibility problem~\eqref{eq:dist-feasibility-problem}; sensitivities of the objective hence contain no information. The advantage of the least-squares formulation is not just a non-zero objective function, but the absence of (in-)equality constraints in the local nonlinear programs, cf. \autoref{rema:local-optimization-problems} and \autoref{rema:nonlinear-least-squares}. \subsection{Least-squares formulation with \gls{aladin}} Based on our findings from \autoref{sec:qualitative-comparison}, we consider only the least-squares formulation with \gls{aladin} in what follows. \subsubsection{Different solvers} \label{sec:different-solvers} We investigate how the different solvers mentioned in \autoref{fig:formulation-solution-solvers} cope with the different test cases from \autoref{tab:computing-times}; we use the sensitivities provided by \gls{rapidpf} in all cases, i.e. analytical gradients of the cost function, exact Jacobians, and the Gauss-Newton Hessian approximation.
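Such exact Jacobians can be sanity-checked against a finite-difference approximation---the simplest of the derivative strategies named in \autoref{rema:sensitivities}. A minimal, library-independent Python sketch (purely illustrative; the actual sensitivities in \gls{rapidpf} are \textsc{matlab}\xspace function handles):

```python
import math

def fd_jacobian(f, x, h=1e-6):
    """Forward-difference Jacobian of a vector-valued function f at x."""
    r0 = f(x)
    J = [[0.0] * len(x) for _ in range(len(r0))]
    for j in range(len(x)):
        xp = list(x)
        xp[j] += h          # perturb one coordinate at a time
        rp = f(xp)
        for i in range(len(r0)):
            J[i][j] = (rp[i] - r0[i]) / h
    return J

# Toy check against a hand-derived analytic Jacobian.
f = lambda x: [x[0] * x[1], math.sin(x[0])]
analytic_jacobian = lambda x: [[x[1], x[0]], [math.cos(x[0]), 0.0]]
```

Agreement to roughly the step size $h$ (here about $10^{-6}$) indicates that the analytic expressions are consistent with the function they claim to differentiate.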
Interestingly, \autoref{tab:computing-times} suggests that just half a dozen \gls{aladin} iterations are sufficient to solve the test cases, which range from a total of 53 buses to 4662 buses. Hence, the applicability of \gls{aladin} itself is clearly demonstrated. Of course, the overall solution time differs significantly with the choice of the local solver.\footnote{All computations were carried out on a standard desktop computer with an \textit{Intel\textsuperscript{\textregistered} Core\texttrademark\, i5-6600K CPU @ 3.50GHz} processor and 16.0 \textsc{gb} installed \textsc{ram}; no efforts were made towards parallelization.\label{foot:computing-times}} As a negative result we find that plain \textsc{aladin}-$\alpha$\xspace, which interfaces only \textsc{c}as\textsc{ad}i\xspace with Ipopt, is not suitable for the problem at hand. That is why we chose to implement interfaces for the three other solvers: \texttt{fminunc}\xspace, \texttt{fmincon}\xspace, and \texttt{worhp}\xspace. Although \texttt{fminunc}\xspace is seemingly the best fit---the local subproblems are unconstrained optimization problems---its practical applicability is limited to subproblems of a few hundred buses. For the 2708- and 4662-bus test systems, \texttt{fminunc}\xspace takes significantly longer, because the dimension of the local subproblem grows too large. The solution times for \texttt{fmincon}\xspace and \texttt{worhp}\xspace are acceptable for all considered cases. It stands to reason that \texttt{worhp}\xspace will outperform \texttt{fmincon}\xspace for even larger test cases, because it is able to exploit the sparsity of the optimization problem.
\begin{table} \centering \scriptsize \caption{Computing times for different test cases and different solvers when solving the distributed least-squares problem~\eqref{eq:dist-least-squares-problem} with \gls{aladin} and sensitivities from \gls{rapidpf}, see \autoref{foot:computing-times}.} \label{tab:computing-times} \rowcolors{2}{gray!10}{white} \begin{tabular}{rrp{1.2cm}rrrrrrr} \toprule & & \textsc{matpower}\xspace & & \multicolumn{3}{c}{Solution time in s for} & \gls{aladin}\\ \multirow{-2}{*}{Buses}& \multirow{-2}{*}{\nregions} & case files & \multirow{-2}{*}{\nconnections} & \texttt{fminunc}\xspace & \texttt{fmincon}\xspace & \texttt{worhp}\xspace & iterations\\ \midrule 53 & 3 & 9, 14, 30& 3 & 2.5 & 2.2 & 2.4 & 4\\ 354 & 3 & 3 $\times$ 118& 5& 2.5 & 3.1 & 4.8 & 5\\ 418 & 2 & 118, 300 & 2 & 4.5 & 5.2 & 7.0 & 5\\ 826 & 7 & 7 $\times$ 118 & 7 & 3.7 & 5.3 & 7.2 & 5\\ 1180 & 10 & 10 $\times$ 118 & 11 & 4.9 & 6.7 & 9.8 & 6\\ 2708 & 2 & 2 $\times$ 1354 & 1 & 212.7 & 41.9 & 53.6 & 4\\ 4662 & 5& 3 $\times$ 1354, 2 $\times$ 300& 4 & 387.9 & 90.1 & 113.8 & 5\\ \bottomrule \end{tabular} \end{table} \subsubsection{Different Hessian approximations} \label{sec:different-Hessian-approximations} With \gls{aladin} being a second-order optimization method, the Hessian matrix is required---or an accurate yet easy-to-compute approximation thereof. We compare four different Hessian approximations for the least-squares problem~\eqref{eq:dist-least-squares-problem} with \gls{aladin}: finite differences, \gls{bfgs}, limited-memory \gls{bfgs}, and the Gauss-Newton method. The results, which are shown in \autoref{tab:Hessian-approximations}, confirm what is to be expected: the Gauss-Newton method outperforms all other methods. This is in accordance with the fact that exploiting the structure of the least-squares formulation correctly pays off tremendously. 
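To illustrate why Gauss-Newton is so cheap for least-squares objectives: the Hessian is approximated by the product of the transposed residual Jacobian with itself, so no second derivatives of the residual are ever formed. The following self-contained Python sketch (illustrative only; \gls{rapidpf} and \textsc{aladin}-$\alpha$\xspace are \textsc{matlab}\xspace code) applies a plain Gauss-Newton iteration to a toy two-dimensional residual:

```python
def solve(A, b):
    """Dense Gaussian elimination with partial pivoting (tiny systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gauss_newton(residual, jacobian, x, iters=15):
    """Gauss-Newton for min ||residual(x)||^2: the Hessian is approximated
    by J^T J, so only first derivatives of the residual are needed."""
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        m, n = len(r), len(x)
        JtJ = [[sum(J[i][p] * J[i][q] for i in range(m)) for q in range(n)]
               for p in range(n)]
        neg_Jtr = [-sum(J[i][p] * r[i] for i in range(m)) for p in range(n)]
        step = solve(JtJ, neg_Jtr)   # normal equations: J^T J dx = -J^T r
        x = [xi + si for xi, si in zip(x, step)]
    return x
```

On a zero-residual problem such as solving a square nonlinear system in least-squares form, this iteration converges rapidly from a reasonable starting point, which mirrors the small iteration counts reported above.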
The finite difference approximation, just like the two \gls{bfgs} methods, is an all-purpose Hessian approximation unaware of the underlying problem structure. Gauss-Newton, in turn, is a Hessian approximation tailored to nonlinear least-squares problems, see also \autoref{rema:sensitivities}. The results from \autoref{tab:Hessian-approximations} make it clear that already for small system sizes, the all-purpose Hessian approximations should be avoided, because they lead to longer computation times.\footnote{Note however that the total number of \gls{aladin} iterations is unaffected.} Consequently, the Gauss-Newton method is the default Hessian approximation for least-squares problems in \gls{rapidpf}. \begin{table}[h!] \centering \scriptsize \caption{Computing times for least-squares problem~\eqref{eq:dist-least-squares-problem} with \gls{aladin} and \texttt{fmincon}\xspace, for different Hessian approximations. The entries in the column ``Buses'' refer to the entries in \autoref{tab:computing-times}.} \label{tab:Hessian-approximations} \rowcolors{2}{gray!10}{white} \begin{tabular}{rrrrr} \toprule Buses & Finite difference & \gls{bfgs} & Limited-memory \gls{bfgs} & Gauss-Newton \\ \midrule 53 & 10.0 & 28.6 & 22.9 & 2.2 \\ 354 & 61.5 & 287.8 & 107.4 & 3.1 \\ 418 & 185.6 & 1086.4 & 148.2 & 5.2 \\ 826 & n/a & n/a & n/a & 5.3 \\ $\hdots$ & n/a & n/a & n/a & See \autoref{tab:computing-times} \\ 4662 & n/a & n/a & n/a & See \autoref{tab:computing-times} \\ \bottomrule \end{tabular} \end{table} \subsection{4662-Bus system -- Convergence behavior} Next, we study the convergence behavior of the 4662-bus test case. This test case is composed of three 1354-bus \textsc{matpower}\xspace test cases, and two 300-bus \textsc{matpower}\xspace test cases. \autoref{tab:4662-bus-test-case} shows the connecting buses between the regions. For other relevant information such as how the connecting lines are modelled, and how the initial conditions are chosen, see \autoref{rema:settings}.
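Convergence of the distributed iterates can be monitored through the $\infty$-norm of the three residual blocks of Problem~\eqref{eq:dist-power-flow-problem}; a minimal Python sketch of such a termination check (illustrative only; the tolerance of $10^{-10}$ is chosen here for demonstration):

```python
def max_violation(*residual_blocks):
    """Infinity norm over stacked residual blocks, e.g. the power flow
    mismatches, the bus specification residuals, and the consensus
    constraint violations of all regions."""
    return max(abs(v) for block in residual_blocks for v in block)

def converged(pf_res, spec_res, cons_res, tol=1e-10):
    """Terminate once every violation is below the tolerance."""
    return max_violation(pf_res, spec_res, cons_res) < tol
```

Tracking this quantity per iteration and per region gives exactly the kind of per-block convergence plot discussed for the 4662-bus case.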
To solve the distributed power flow problem we choose a least-squares formulation with \gls{aladin}. We use the Gauss-Newton Hessian approximation, and \texttt{fmincon}\xspace is used to solve the local problems. From \autoref{tab:computing-times} we see that this setup requires 5 \gls{aladin} iterations and about 90 seconds. \autoref{fig:4662-violations} shows, for every \gls{aladin} iteration and for every region, the $\infty$-norm of the power flow equations~\eqref{eq:dist-power-flow-problem-pf}, of the bus specifications~\eqref{eq:dist-power-flow-problem-busspecs}, and of the consensus constraint violations~\eqref{eq:dist-power-flow-problem-consensus}. After 5 \gls{aladin} iterations, all violations are below $10^{-10}$, and the computations are terminated. \begin{table} \centering \scriptsize \caption{Regions and used test cases for 4662-bus test case (left). Connecting buses between regions (middle). Connection graph (right)} \label{tab:4662-bus-test-case} \begin{tabular}{ccc} \begin{tabular}{rr} \toprule Region & \textsc{matpower}\xspace case file \\ \midrule 1 & \texttt{case1354pegase} \\ 2 & \texttt{case1354pegase} \\ 3 & \texttt{case1354pegase} \\ 4 & \texttt{case300} \\ 5 & \texttt{case300} \\ \bottomrule \end{tabular} & \begin{tabular}{rrrr} \toprule \multicolumn{2}{c}{From-system} & \multicolumn{2}{c}{To-system} \\ Region & Bus & Region & Bus \\ \midrule 1 & 17 & 2 & 46 \\ 1 & 111 & 3 & 271 \\ 2 & 64 & 4 & 10 \\ 2 & 837 & 5 & 8 \\ \bottomrule \end{tabular} & \begin{tabular}{c} \begin{tikzpicture} [ grow = down, edge from parent/.style = {draw, -latex}, every node/.style = {font=\scriptsize, circle, draw}, sibling distance = 4em, level distance = 4em, inner sep = 2pt, ] \node {$\mathcal{R}_1$} child { node {$\mathcal{R}_3$} } child { node {$\mathcal{R}_2$} child { node {$\mathcal{R}_4$} } child { node {$\mathcal{R}_5$}} }; \end{tikzpicture} \end{tabular} \end{tabular} \end{table} \begin{figure} \centering \begin{subfigure}{1\textwidth} 
\includegraphics[width=\textwidth]{tex/sim-result-4662-residual-pf.pdf} \end{subfigure} \begin{subfigure}{1\textwidth} \includegraphics[width=\textwidth]{tex/sim-result-4662-residual-bus.pdf} \end{subfigure} \begin{subfigure}{1\textwidth} \includegraphics[width=\textwidth]{tex/sim-result-4662-residual-consensus.pdf} \end{subfigure} \caption{Decrease of the $\infty$-norm of the power flow equations, the bus specifications, and the consensus violations, each per \gls{aladin} iteration for the 4662-bus system from \autoref{tab:computing-times}.} \label{fig:4662-violations} \end{figure} \section{Conclusion \& Outlook} \label{sec:conclusion} The relevance of distributed power flow problems is increasing, because their solutions allow for better cooperation between different stakeholders, e.g. \gls{tsos} and \gls{dsos}. Distributed optimization is a viable technique to tackle such distributed power flow problems. It is specifically the \acrfull{aladin} with its convergence guarantees that yields promising results: if the distributed power flow problem is formulated as a distributed least-squares problem, and if a Gauss-Newton Hessian approximation is used, then about half a dozen iterations suffice to converge to the correct solution. To facilitate rapid prototyping we introduce \acrfull{rapidpf}, which is fully \textsc{matpower}\xspace-compatible \textsc{matlab}\xspace code that takes over the laborious task of creating code amenable to distributed optimization. Future steps will focus mainly on further structure exploitation for solving the problem, and on implementing larger test cases. The least-squares formulation is promising, hence further improvements are possible, such as relying not on an all-purpose solver but devising a solver dedicated to nonlinear least-squares problems. A first step might be a tailored Gauss-Newton method, or a tailored Levenberg-Marquardt method~\cite{Nocedal2006}.
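As a minimal illustration of what such a tailored solver could look like, a bare-bones Gauss-Newton loop fits in a few lines: each iteration solves the linearized problem $\min_{\Delta}\|J(x)\Delta + r(x)\|_2$ with an SVD-based least-squares routine, so the Gauss-Newton Hessian $J^\top J$ is never formed explicitly. The following NumPy sketch on a toy residual is purely illustrative and is not part of \gls{rapidpf}:

```python
import numpy as np

# Toy residual; in the power flow setting r(x) would collect the power
# flow mismatches and bus specifications (names here are illustrative).
def residual(x):
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[1]**2 - 5.0])

def jacobian(x):
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

def gauss_newton(x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        if np.linalg.norm(r, np.inf) < tol:
            break
        # SVD-based least-squares solve of the linearized subproblem
        # min_d ||J d + r||_2; the Hessian J^T J is never built.
        d, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + d
    return x

x_star = gauss_newton(np.array([2.0, 1.0]))
print(np.linalg.norm(residual(x_star), np.inf) < 1e-8)  # converged
```

A Levenberg-Marquardt variant would merely add a damping term to the linearized subproblem; the overall loop structure stays the same.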
For the Gauss-Newton method, for example, it is possible to avoid having to compute the Hessian altogether, because a singular-value decomposition or a conjugate gradient method can be applied directly to solve the linearized problem~\cite{Nocedal2006}. This user-defined nonlinear least-squares solver must then be interfaced with \textsc{aladin}-$\alpha$\xspace. The simulation results we presented are all carried out on a single machine. To leverage the literal \emph{distribution} of the optimization, efforts toward parallel computing shall be undertaken when tackling larger test cases. Finally, \gls{rapidpf} can be extended to optimal power flow problems upon adding cost functions per region. \section*{Acknowledgment} The authors would like to thank Jochen Bammert and Tobias Wei\ss bach (both TransnetBW GmbH) for insightful discussions and continuing support and interest in distributed power flow. Finally, Tillmann M\"uhlpfordt thanks Daniel Bacher for his support in migrating from Gitlab to GitHub. \section*{Contributor roles} See \autoref{tab:contributor-roles} for the roles of each author. \begin{table}[h!] \scriptsize \centering \caption{Contributor roles.} \label{tab:contributor-roles} \begin{tabular}{lp{0.57\textwidth}} \toprule Author & Role(s) \\ \midrule Tillmann M\"uhlpfordt & Conceptualization, investigation, methodology, software, writing -- original draft\\ Xinliang Dai & Software, investigation, visualization\\ Alexander Engelmann & Methodology (while at \textsc{kit}), writing -- original draft (\autoref{sec:solution-admm}, \autoref{sec:solution-aladin}) (while at \textsc{tu} Dortmund)\\ Veit Hagenmeyer & Conceptualization, funding acquisition, writing -- review \& editing\\ \bottomrule \end{tabular} \end{table} \footnotesize
\section{Introduction}\label{sec:1} Quantum physics differs in many aspects from our conventional intuition. One such intriguing difference is the celebrated notion of nonlocality, which arose from debates among scientists in the early 20th century. One line of the debate originated with Einstein, Podolsky, and Rosen, who proposed the thought experiment known as the EPR paradox and the so-called ``spooky action at a distance'' \cite{locality}. The predictions of quantum mechanics stand in sharp contrast to the conventional view that physical processes should obey the principle of locality. Nonlocality in quantum physics has been studied from different points of view, such as via Bell inequalities and entanglement. The Bell-type inequalities \cite{Bell,chsh,Augusiak} are derived from local hidden variable theories, and may be violated by quantum measurement outcomes realizable experimentally \cite{physrep}. The violation of a Bell inequality implies the existence of entanglement in a system \cite{chsh}; the converse, however, is not always true, as there exist mixed states that are entangled but do not violate any Bell-type inequality \cite{Werner}. Other studies further establish that there is also nonlocality without entanglement \cite{nowen}, and nonlocality without quantum correlations other than entanglement \cite{nowqu}. Nonlocality, it seems, can thus reasonably be quantified by different measures. Nowadays, it is realized that nonlocality is not only a central concept of quantum mechanics, but may also be used to improve the efficiency of many quantum information processing (QIP) tasks \cite{RMP,qip}. Meanwhile, it is also interrelated with other foundational aspects of quantum mechanics, such as the uncertainty principle \cite{Oppenheim-science}.
These delicate and intriguing features of nonlocality have prompted a surge of interest in the quantum physics community, with notable progress achieved in the past few years \cite{glo1,glo2,glo3,new1}. Apart from the traditional lines of research on quantum nonlocality related to entanglement or Bell inequalities, it is also significant to investigate it from other perspectives. Recently, Luo and Fu \cite{min} presented a new measure of nonlocality, which they termed measurement-induced nonlocality (MIN), motivated by the definition of quantum discord \cite{Ollivier,new2}. As the name itself indicates, the MIN characterizes nonlocality from a measurement perspective, and thus differs from other nonlocality measures. It is a manifestation of the global disturbance to the overall state of a system caused by a locally non-disturbing measurement on one subsystem, and can also be considered as a kind of nonclassical correlation measure different from entanglement and quantum discord. \section{MIN quantified by Hilbert-Schmidt norm}\label{sec:2} The MIN has been a focus of research for years \cite{min2,min3,min4,min5,min6,min7,min8,min9,uin}. However, its quantification based on the Hilbert-Schmidt norm (we call it the conventional MIN for brevity), while intuitively appealing and conceptually significant, has certain discouraging properties. To see this explicitly, we recall its definition, which reads \cite{min} \begin{equation}\label{eq1} N_2(\rho_{AB})=\max_{\Pi^A}||\rho_{AB}-\Pi^A(\rho_{AB})||_2^2, \end{equation} for a bipartite state $\rho_{AB}$ in $\mathcal {H}_{AB}$. Here, $||X||_2=\sqrt{\text{Tr}(X^\dag X)}$ denotes the Hilbert-Schmidt norm, and the maximum is taken over the full set of local projective measurements $\Pi^A=\{\Pi_k^A\}$ that keep the reduced state $\rho_A=\text{Tr}_B \rho_{AB}$ invariant, namely, $\sum_k \Pi_k^A\rho_A\Pi_k^A=\rho_A$.
An analytical formula of the conventional MIN for any $2\times n$ dimensional state $\rho_{AB}$ can be obtained \cite{min}. Here, we argue that the conventional MIN in Eq. \eqref{eq1}, despite being favored for its convenience of calculation, may have certain undesirable properties. More specifically, we will show that it can increase or decrease under trivial local reversible operations on the unmeasured subsystem $B$ of $\rho_{AB}$. Consider, for instance, a channel $\Gamma_B$ acting as $\Gamma_B(\rho_{AB}) = \rho_{AB}\otimes\rho_C$ (i.e., it introduces a local ancilla to $B$); then by making use of the multiplicativity of the Schatten $p$-norm (which reduces to the Hilbert-Schmidt norm when $p=2$) under tensor products, we obtain \begin{equation}\label{eq2} N_2(\rho_{A:BC})=N_2(\rho_{AB})\text{Tr}\rho_C^2. \end{equation} This equality means that $N_2(\rho_{A:BC})\le N_2(\rho_{AB})$, as the purity of a state is no larger than one, $\text{Tr}\rho_C^2 \le 1$. In particular, if $\rho_C = \mathbb{I}_n/n$ with $\mathbb{I}_n$ being the $n$-dimensional identity operator, we obtain $N_2(\rho_{A:BC})= N_2(\rho_{AB})/n$. Then $N_2(\rho_{A:BC})$ approaches zero as $n$ tends to infinity. This behavior differs completely from our intuition that the nonlocal properties of a system should not be affected by trivially adding or removing an uncorrelated local ancillary state. We remark here that the above perplexing behavior is reminiscent of the phenomenon encountered for the geometric measure of quantum discord (GQD) \cite{gqd,gqd1,Piani,Dakic}. In that case, several well-defined measures of GQD have been introduced to remedy this problem \cite{lqu,square,trace,Ciccarello,bures}.
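The behavior of the two norms under attaching an ancilla can be checked directly in a few lines. In the NumPy sketch below (illustrative code; a random Hermitian $X$ stands in for $\rho_{AB}-\Pi^A(\rho_{AB})$), the squared Hilbert-Schmidt norm of $X\otimes\rho_C$ shrinks by the purity $\text{Tr}\rho_C^2$, while the trace norm is left unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

def hs_norm_sq(X):   # squared Hilbert-Schmidt norm ||X||_2^2
    return np.trace(X.conj().T @ X).real

def trace_norm(X):   # trace norm ||X||_1 = sum of singular values
    return np.linalg.svd(X, compute_uv=False).sum()

# Random Hermitian X stands in for rho_AB - Pi^A(rho_AB).
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
X = A + A.conj().T

n = 8
rho_C = np.eye(n) / n            # maximally mixed local ancilla

XC = np.kron(X, rho_C)
# Hilbert-Schmidt norm squared shrinks by Tr(rho_C^2) = 1/n ...
print(np.isclose(hs_norm_sq(XC), hs_norm_sq(X) / n))
# ... while the trace norm is unchanged, since Tr(rho_C) = 1.
print(np.isclose(trace_norm(XC), trace_norm(X)))
```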
\section{MIN based on the trace norm}\label{sec:3} Motivated by the proposition for modifying GQD via the trace norm \cite{trace}, we propose to define the MIN for a bipartite state $\rho_{AB}$ as \begin{equation}\label{eq3} N_1(\rho_{AB})=\max_{\Pi^A}||\rho_{AB}-\Pi^A(\rho_{AB})||_1, \end{equation} where $||X||_1= \text{Tr}\sqrt{X^\dag X}$, and $\Pi^A$ still denotes the projective measurements that satisfy $\Pi^A(\rho_A)=\rho_A$. We call $N_1(\rho_{AB})$ the trace MIN hereafter. The physical interpretation of this new nonlocality measure can still be presented as the maximal global effect, or more explicitly, the maximal trace distance by which the postmeasurement state $\Pi^A(\rho_{AB})$ departs from its premeasurement state $\rho_{AB}$, caused by locally invariant measurements. The nonlocality measure defined above circumvents the problem that occurs for the conventional MIN, as implied by Eq. \eqref{eq2}. Explicitly, let us repeat the analysis of adding an uncorrelated ancilla using the new definition; then $N_1(\rho_{A:BC}) =N_1(\rho_{AB})$ due to the normalization condition $\text{Tr}\rho_C =1$. Therefore, $N_1(\rho_{AB})$ does not increase under the action of $\Gamma_B$, namely, it is unaffected by adding or removing a factorized local ancilla on the unmeasured party. Here, we further show a more general and powerful result related to the trace MIN in Eq. \eqref{eq3}. \emph{Theorem 1.} The trace MIN $N_1(\rho_{AB})$ defined in Eq. \eqref{eq3} is nonincreasing under the action of any completely positive trace-preserving (CPTP) channel $\mathcal {E}_B$ on the unmeasured party $B$, i.e., we always have \begin{equation}\label{eq4} N_1(\rho_{AB})\geqslant N_1[\mathcal {E}_B(\rho_{AB})]. \end{equation} \emph{Proof.} Let $\mathcal {E}_B$ be an arbitrary CPTP channel acting on party $B$ of $\rho_{AB}$, and $\{\tilde{\Pi}_k^A\}$ be the optimal projection-valued measurement on party $A$ that maximizes the trace norm on the right-hand side of Eq.
\eqref{eq3} for $N_1[\mathcal {E}_B(\rho_{AB})]$, namely, \begin{equation}\label{eq-theo11} N_1[\mathcal{E}_B(\rho_{AB})] = ||\mathcal {E}_B(\rho_{AB}) -\tilde{\Pi}^A[\mathcal{E}_B(\rho_{AB})]||_1, \end{equation} then, by noting that any local channel on party $B$ commutes with the measurement made on party $A$, we obtain $\tilde{\Pi}^A [\mathcal{E}_B (\rho_{AB})]=\mathcal{E}_B [\tilde{\Pi}^A (\rho_{AB})]$, and therefore, by denoting by $\bar{\Pi}^A$ (note that $\bar{\Pi}^A \neq \tilde{\Pi}^A$ in general) the optimal measurement for obtaining $N_1(\rho_{AB})$, we have \begin{eqnarray}\label{eq-theo12} N_1(\rho_{AB})&=&||\rho_{AB}-\bar{\Pi}^A (\rho_{AB})||_1 \nonumber\\ &\geqslant& ||\rho_{AB}-\tilde{\Pi}^A (\rho_{AB})||_1 \nonumber\\ &\geqslant& ||\mathcal{E}_B(\rho_{AB}) -\mathcal{E}_B [\tilde{\Pi}^A (\rho_{AB})]||_1 \nonumber\\ &=&N_1[\mathcal {E}_B(\rho_{AB})], \end{eqnarray} where the first inequality comes from the fact that $\tilde{\Pi}^A$ is not necessarily the optimal measurement for $\rho_{AB}$, and the second inequality is due to the contractivity of the trace norm under CPTP maps (Theorem 9.2 of Ref. \cite{Nielsen}). This completes the proof. \hfill{$\blacksquare $} The above theorem means that no physical process on $B$ can increase the maximal trace distance between a state $\rho_{AB}$ and its postmeasurement state $\Pi^A(\rho_{AB})$ obtained after the locally invariant measurements on party $A$, and it therefore successfully circumvents the problem that occurs for the conventional MIN. We now list some other basic properties of the trace MIN. (i) $N_1(\rho_{AB})=0$ for all the product states $\rho_{AB}=\rho_A \otimes \rho_B$, and the classical-quantum states $\rho_{AB} =\sum_k p_k \Pi_k^A\otimes \rho^B_k$ with nondegenerate $\rho_A= \sum_k p_k \Pi_k^A$.
(ii) $N_1(\rho_{AB})$ is invariant under local unitary operations $U=V_A\otimes W_B$ on $\rho_{AB}$, namely, $N_1 (U\rho_{AB}U^\dagger)=N_1(\rho_{AB})$, which is obvious as the trace norm is preserved under unitary transformations \cite{Nielsen}. The proposed trace MIN can be used to detect the effect of a locally invariant measurement on the overall state of a system, and a zero trace MIN implies that the state of the system cannot be disturbed by any locally invariant measurement, namely, the measurement of one subsystem cannot determine the corresponding result of a measurement of the other, and therefore the system obeys the principle of locality. Moreover, from the basic properties listed above, one can note that while any entangled or discordant state possesses nonvanishing trace MIN, there also exist states with nonvanishing trace MIN that contain neither entanglement nor discord. Therefore, the trace MIN is an important complement to, but different from, entanglement and quantum discord. \section{Analytical formulas of the trace MIN}\label{sec:4} The maximization in Eq. \eqref{eq3} over the full set of locally invariant measurements on party $A$ can be performed for certain families of states, and in turn the trace MIN can be evaluated analytically. We present these results via the following theorems. \emph{Theorem 2.} For any $2\times n$ dimensional pure state $|\psi\rangle$ with the Schmidt decomposition $|\psi\rangle=\sum_{k=1}^2 \sqrt{\lambda_k}|\phi^A_k\rangle\otimes |\phi^B_k\rangle$, the trace MIN is given by \begin{eqnarray}\label{eq6} N_1(|\psi\rangle\langle\psi|)=2\sqrt{\lambda_1\lambda_2}.
\end{eqnarray} \emph{Proof.} We denote $\rho^\psi=|\psi\rangle\langle\psi|$, and $\rho_A^\psi= \text{Tr}_B \rho^\psi$ for simplicity, then if $\rho_A^\psi$ is nondegenerate, the optimal measurement $\tilde{\Pi}_k^A =|\phi^A_k\rangle\langle \phi^A_k|$, and we have \begin{eqnarray}\label{eq-theo21} \tilde{\Pi}^A (\rho^\psi)=\sum_{k=1}^2 \lambda_k |\phi^A_k\rangle\langle \phi^A_k|\otimes|\phi^B_k\rangle\langle\phi^B_k|, \end{eqnarray} therefore \begin{eqnarray}\label{eq-theo22} \rho^\psi-\tilde{\Pi}^A (\rho^\psi)&=&\sqrt{\lambda_1\lambda_2}(|\phi^A_1\rangle\langle\phi^A_2| \otimes |\phi^B_1\rangle\langle\phi^B_2|\nonumber\\ && +|\phi^A_2\rangle\langle \phi^A_1| \otimes |\phi^B_2\rangle \langle \phi^B_1|), \end{eqnarray} the singular values of which can be obtained as $\epsilon_{1,2} = \sqrt{\lambda_1\lambda_2}$, and thus \begin{equation}\label{eq-theo23} N_1(\rho^\psi)= 2\sqrt{\lambda_1\lambda_2}. \end{equation} If $\rho_A^\psi$ is degenerate (i.e., $\lambda_{1,2}=1/2$), assuming the optimal locally invariant measurement to be $\tilde{\Pi}_k^A=|\tilde{k}\rangle \langle \tilde{k}|$, with \begin{equation}\label{eq-theo24} |\tilde{k}\rangle=a_1^k |\phi^A_1\rangle+a_2^k |\phi^A_2\rangle, \end{equation} then as one can always find a unitary operator $U_A$ such that $U_A |\phi^A_k\rangle=|\tilde{k}\rangle$, and as $N_1(\rho^\psi)$ is locally unitary invariant, we obtain $N_1(\rho^\psi)= 1 $ after a similar analysis as that performed for the nondegenerate case, and this completes our proof. \hfill{$\blacksquare$} As the entanglement of formation (EoF) for $|\psi\rangle$ was given by $E_f=-\sum_{k=1}^2 \lambda_k\log_2\lambda_k$ \cite{EoF}, the above theorem implies that $N_1(|\psi\rangle\langle\psi|)$ constitutes an entanglement monotone. But this is not the case for general states. 
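Theorem 2 is easy to verify numerically in the nondegenerate case, where the optimal measurement is the projection onto the Schmidt basis of party $A$. A NumPy sketch with illustrative Schmidt coefficients $(\lambda_1,\lambda_2)=(0.7,0.3)$:

```python
import numpy as np

l1, l2 = 0.7, 0.3                          # Schmidt coefficients, l1 != l2
psi = np.zeros(4)
psi[0], psi[3] = np.sqrt(l1), np.sqrt(l2)  # sqrt(l1)|00> + sqrt(l2)|11>
rho = np.outer(psi, psi)

# For nondegenerate rho_A, the unique locally invariant measurement is
# the projection onto the Schmidt basis of party A.
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
post = sum(np.kron(p, np.eye(2)) @ rho @ np.kron(p, np.eye(2)) for p in P)

trace_norm = np.linalg.svd(rho - post, compute_uv=False).sum()
print(np.isclose(trace_norm, 2 * np.sqrt(l1 * l2)))   # Theorem 2
```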
Moreover, one can derive $N_1(|\psi\rangle\langle\psi|)=\sqrt{2 N_2 (|\psi\rangle\langle\psi|)}$, which means that for this special case, both $N_1(|\psi\rangle\langle\psi|)$ and $N_2(|\psi\rangle\langle\psi|)$ give qualitatively the same characterizations of nonlocality. For a general $m \times n$ dimensional pure state in the Schmidt form $|\Psi\rangle =\sum_{k=1}^{d} \sqrt{\lambda_k} |\phi^A_k\rangle\otimes |\phi^B_k\rangle$ with $d=\min\{m,n\}$, and $\rho^\Psi = |\Psi\rangle\langle\Psi|$, we have \begin{eqnarray}\label{eq-theo25} \rho^\Psi-\tilde{\Pi}^A (\rho^\Psi)= \sum_{i\neq j}\sqrt{\lambda_i\lambda_j} |\phi^A_i\rangle\langle \phi^A_j| \otimes |\phi^B_i\rangle\langle \phi^B_j|, \end{eqnarray} when $\rho_A^\Psi= \text{Tr}_B \rho^\Psi$ is nondegenerate, but a closed form of its singular values cannot be derived for $m\geqslant3$, and in turn it is difficult to obtain an analytical formula of $N_1(\rho^\Psi)$. For degenerate $\rho_A^\Psi$, an analysis similar to that for $m=2$ yields $N_1(\rho^\Psi)= 2(m-1)/m$. Now, we calculate the trace MIN for a general two-qubit state $\tau_{AB}$, which has been proved to be locally unitarily equivalent to $\rho_{AB}$ of the following form \cite{Horodecki} \begin{equation}\label{eq7} \rho_{AB}=\frac{1}{4}\bigg(\mathbb{I}_2\otimes\mathbb{I}_2+ \vec{x}\cdot\vec{\sigma}\otimes\mathbb{I}_2+ \mathbb{I}_2\otimes\vec{y}\cdot\vec{\sigma}+ \sum_{i=1}^3 c_i\sigma_i\otimes\sigma_i\bigg), \end{equation} where the vectors $\vec{x}=(x_1,x_2,x_3)$, $\vec{y} =(y_1,y_2,y_3)$, and $x_{i}= \text{Tr}\rho_{AB}(\sigma_i\otimes \mathbb{I}_2)$, $y_{i}= \text{Tr}\rho_{AB}(\mathbb{I}_2 \otimes \sigma_i)$, $c_{i}=\text{Tr}\rho_{AB}(\sigma_i\otimes\sigma_i)$, and $\sigma_{1,2,3}$ are the three Pauli operators. The local unitary invariance of the trace MIN ensures $N_1(\tau_{AB})= N_1(\rho_{AB})$, and therefore it suffices to consider the representative family of states $\rho_{AB}$ in Eq.
\eqref{eq7}, for which $N_1(\rho_{AB})$ can be evaluated analytically. \emph{Theorem 3.} For any two-qubit state of the form of Eq. \eqref{eq7}, the trace MIN can be obtained as \begin{equation}\label{eq8} N_1(\rho_{AB})=\left\{ \begin{aligned} &\frac{\sqrt{\chi_{+}}+\sqrt{\chi_{-}}}{2||\vec{x}||_1} &\text{if}~\vec{x}\neq 0,\\ &\max\{|c_1|,|c_2|,|c_3|\} &\text{if}~\vec{x}=0, \end{aligned} \right. \end{equation} where $\chi_\pm=\alpha\pm 2\sqrt{\beta} ||\vec{x}||_1$, with $\alpha=||\vec{c}||_1^2||\vec{x}||_1^2-\sum_i c_i^2x_i^2$, $\vec{c}=(c_1,c_2,c_3)$, $\beta=\sum_{\langle ijk\rangle}x_i^2c_j^2c_k^2$, and the summation runs over all the cyclic permutations of $\{1,2,3\}$. \emph{Proof.} If $\rho_A=(\mathbb{I}_2 +\vec{x}\cdot \vec{\sigma})/2$ is nondegenerate, that is, if $\vec{x}\neq 0$, then the unique projective measurement leaving $\rho_A$ invariant is induced by its spectral resolution \begin{eqnarray}\label{eq-theo21} \tilde{\Pi}_{1,2}^A=\frac{1}{2} \left(\mathbb{I}_2 \pm \frac{\vec{x}\cdot \vec{\sigma}}{||\vec{x}||_1}\right). \end{eqnarray} Thus we have \cite{glo3} \begin{eqnarray}\label{eq-theo22} \tilde{\Pi}^A(\rho_{AB})&=&\frac{1}{4}\bigg(\mathbb{I}_2\otimes\mathbb{I}_2+ \vec{x}\cdot\vec{\sigma}\otimes\mathbb{I}_2+ \mathbb{I}_2\otimes\vec{y}\cdot\vec{\sigma}\nonumber\\ &&+\frac{\vec{x}\cdot \vec{\sigma}}{||\vec{x}||_1^2} \otimes \sum_{i=1}^3 c_i x_i \sigma_i\bigg), \end{eqnarray} and therefore \begin{eqnarray}\label{eq-theo23} \rho_{AB}-\tilde{\Pi}^A(\rho_{AB})&=&\frac{1}{4}\bigg(\sum_{i=1}^3 c_i\sigma_i \otimes\sigma_i\nonumber\\ && -\frac{\vec{x}\cdot \vec{\sigma}}{||\vec{x}||_1^2} \otimes \sum_{i=1}^3 c_i x_i \sigma_i\bigg). \end{eqnarray} After straightforward algebra, one can obtain the singular values of $\rho_{AB}-\tilde{\Pi}^A(\rho_{AB})$ as \begin{equation}\label{eq-theo24} \varepsilon_{1,2}=\frac{\sqrt{\chi_{+}}}{4||\vec{x}||_1},~~ \varepsilon_{3,4}=\frac{\sqrt{\chi_{-}}}{4||\vec{x}||_1}, \end{equation} with $\chi_{\pm}$ being given below Eq.
\eqref{eq8}, and therefore \begin{equation}\label{eq-theo25} N_1(\rho_{AB})=\frac{\sqrt{\chi_{+}}+\sqrt{\chi_{-}}}{2||\vec{x}||_1}. \end{equation} If $\rho_A$ is degenerate, i.e., $\vec{x}=0$, then by following similar lines as in Ref. \cite{Ciccarello}, one can obtain \begin{equation}\label{eq-theo26} N_1(\rho_{AB})=\frac{1}{2}\sqrt{2\max_{\hat{e}}h(\hat{e})}, \end{equation} with $\hat{e}=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)$ being a unit vector in $\mathbb{R}^3$. By further ordering the singular values of the correlation tensor $\mathcal{R}=\text{diag}\{c_1,c_2,c_3\}$ as $c_+ \geqslant c_0 \geqslant c_-$, we have \cite{Ciccarello} \begin{equation}\label{eq-theo27} h(\hat{e})=Q+\sqrt{H}, \end{equation} where $Q$ and $H$ are given by \begin{eqnarray}\label{eq-theo28} &&Q=c_+^2+c_0^2-\sin^2\theta\left[c_0^2-c_-^2+ \cos^2\phi(c_+^2-c_0^2)\right],\nonumber\\ &&H=A(\theta)\sin^4\phi+B(\theta)\sin^2\phi+C(\theta), \end{eqnarray} with the $\theta$-dependent functions $A(\theta)$, $B(\theta)$, and $C(\theta)$ (note that there is a misprint in \cite{Ciccarello}: $\gamma_2^2+\gamma_3^2$ in the expression for $C(\theta)$ should be $\gamma_3^2-\gamma_2^2$) being given by \begin{eqnarray}\label{eq-theo29} &&A(\theta)=\sin^4\theta(c_+^2-c_0^2)^2, \nonumber\\ &&B(\theta)=2(c_+^2-c_0^2)[\sin^2\theta(c_+^2+c_0^2-2c_-^2) \nonumber\\ &&~~~~~~~~~~~~~-\sin^4\theta(c_+^2-c_-^2)],\nonumber\\ &&C(\theta)=[c_+^2-c_0^2-\sin^2\theta(c_+^2-c_-^2)]^2. \end{eqnarray} By combining Eqs. \eqref{eq-theo28} and \eqref{eq-theo29}, one can note that both $Q$ and $H$ reach their maxima when $\phi=\pi/2$, for which \begin{eqnarray}\label{eq-theo2x} &&Q_{\rm max}=c_+^2+c_0^2-\sin^2\theta(c_0^2-c_-^2), \nonumber\\ &&H_{\rm max}=[c_+^2-c_0^2+\sin^2\theta(c_0^2-c_-^2)]^2. \end{eqnarray} Therefore, we have $\max_{\hat{e}}h(\hat{e})=2c_+^2$, and thus \begin{eqnarray}\label{eq-theo2y} N_1(\rho_{AB})=c_+. \end{eqnarray} This completes our proof.
\hfill{$\blacksquare$} We would like to point out here that for a two-qubit $\rho_{AB}$ with nondegenerate $\rho_A$, the calculation of the trace MIN can also be performed in a manner similar to that for the degenerate case. But then the expression for $h(\hat{e})$ in Eq. \eqref{eq-theo27} is more complicated (see Eq. [31] in Ref. \cite{Ciccarello} for more detail), and therefore we adopted the procedure listed above, as the optimal $\{\tilde{\Pi}_{1,2}^A\}$ can be written explicitly for this case. \begin{figure} \centering \resizebox{0.48\textwidth}{!}{\includegraphics{figure1.eps}} \caption{(Color online) Surfaces of constant trace MIN $N_1(\rho^{\rm BD})= 0.45$ (a), and valid $(c_1,c_2,c_3)$ (the color shaded regions) for which $N_1(\rho^{\rm BD})$ is not destroyed by the phase flip noise (b).} \label{fig:1} \end{figure} In Fig. \ref{fig:1}(a), we present an exemplary plot of the level surfaces of $N_1(\rho^{\rm BD})=0.45$ for the Bell-diagonal states $\rho^{\rm BD}$ [i.e., $\vec{x}=0$ and $\vec{y}=0$ in Eq. \eqref{eq7}]. As the physical $(c_1,c_2,c_3)$ belong to a tetrahedron $\mathcal{T}$ (see Fig. \ref{fig:1}), and $N_1(\rho^{\rm BD})=\max\{|c_1|,|c_2|, |c_3|\}$, the surfaces of constant trace MIN correspond to the cross sections of the six surfaces of a cube $\mathcal{C}$ of side length $N_1(\rho^{\rm BD})$ with $\mathcal{T}$. When $N_1(\rho^{\rm BD})\leq 1/3$, the surfaces of $\mathcal{C}$ are also the surfaces of constant trace MIN, while for $N_1(\rho^{\rm BD})> 1/3$, parts of them are cut by the four surfaces of $\mathcal{T}$. By denoting $c_+ $, $c_0$, and $c_- $ the maximum, intermediate, and minimum values of $\{|c_1|,|c_2|,|c_3|\}$, respectively, one can derive a relation between $N_1(\rho^{\rm BD})$ and $N_2(\rho^{\rm BD})=(c_+^2+c_0^2)/4$ for the Bell-diagonal states $\rho^{\rm BD}$ as \begin{eqnarray}\label{eq16} N_1(\rho^{\rm BD})= \sqrt{4 N_2(\rho^{\rm BD})-c_0^2}.
\end{eqnarray} This implies that the two different MIN measures may impose different orderings of nonlocality: when $c_+$ remains unchanged, $N_1(\rho^{\rm BD})$ also remains unchanged, while $N_2(\rho^{\rm BD})$ increases (decreases) with the increasing (decreasing) value of $c_0$ in the region $c_-\leqslant c_0\leqslant c_+$. Thus there is no one-to-one correspondence between the well-defined trace MIN and the conventional MIN in general, and we hope this simple example may provide some intuition about the subtle issue concerning the appropriateness of using the Hilbert-Schmidt norm as a distance for quantifying nonlocality, just as for the appropriateness of using it for defining GQD \cite{Piani}. Moreover, it is also worthwhile to point out that when $\rho^{\rm BD}$ is subject to the $\$^{(i)}$ channel (with $i=1,2,3$ representing, respectively, the bit flip, bit-phase flip, and phase flip channels), we have $c_i(t)=c_i(0)$, and $c_{j,k}(t)=c_{j,k}(0)p(t)$ ($i\neq j\neq k$), where $p(t)=e^{-\gamma t}$ for the one-sided channel $\$^{(i)}\otimes\mathbb{I}_2$ or $\mathbb{I}_2\otimes \$^{(i)}$, and $p(t)=e^{-2\gamma t}$ for the two-sided channel $\$^{(i)}\otimes \$^{(i)}$, with $\gamma$ being the decay rate. As a consequence, if $|c_i(0)| =\max\{|c_i(0)|,|c_j(0)|,|c_k(0)|\}$ at the initial time, we obtain $N_1[\$^{(i)}(\rho^{\rm BD})]=|c_i(0)|$ by Eq. \eqref{eq8}, which is not destroyed by the $\$^{(i)}$ noise over the whole time evolution. This is in sharp contrast to other nonclassical correlation measures, which remain constant only for a finite time interval \cite{sudden}. This unique and novel characteristic of the trace MIN is not only conceptually significant, but is also appealing for potential quantum algorithms relying on it. Fig. \ref{fig:1}(b) plots the valid regions of $(c_1,c_2,c_3)$ for which $N_1(\rho^{\rm BD})$ can evade the detrimental effects of the phase flip channel.
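The frozen trace MIN under the phase flip channel is easy to reproduce numerically. The NumPy sketch below uses illustrative values $c(0)=(0.3,-0.2,0.6)$, a valid Bell-diagonal state with $|c_3(0)|$ maximal, and $\gamma=1$ for a one-sided phase flip:

```python
import numpy as np

# Bell-diagonal state with |c3| dominant: a one-sided phase flip
# channel keeps c3 and damps c1, c2 by p(t) = exp(-gamma*t).
c_init = np.array([0.3, -0.2, 0.6])
gamma = 1.0
ts = np.linspace(0.0, 3.0, 7)

N1_vals, N2_vals = [], []
for t in ts:
    p = np.exp(-gamma * t)
    c = np.array([c_init[0] * p, c_init[1] * p, c_init[2]])
    N1_vals.append(np.max(np.abs(c)))        # trace MIN, Eq. (8)
    cp, c0, _ = np.sort(np.abs(c))[::-1]
    N2_vals.append((cp**2 + c0**2) / 4)      # conventional MIN

print(np.allclose(N1_vals, 0.6))   # trace MIN frozen at |c3(0)| = 0.6
print(N2_vals[-1] < N2_vals[0])    # conventional MIN decays
```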
These valid regions belong to two hexahedra with vertices $(0,0,0)$, $(\pm 1,\mp1,1)$, $(\pm 1/3,\pm1/3,1/3)$, and $(0,0,0)$, $(\pm 1,\pm 1,-1)$, $(\pm 1/3,\mp 1/3,-1/3)$, respectively. The results for the bit (bit-phase) flip channel are similar, with $c_1$ ($c_2$) playing the role of $c_3$. So far, we have obtained analytical formulas of the trace MIN for any $2\times n$ dimensional pure state and a general two-qubit state, and discussed several interesting implications of them. We now turn to two high-dimensional states with symmetry, for which the analytical expressions of some quantum correlation measures (see Ref. \cite{Vedral-RMP} for a review) have already been obtained \cite{gqd1,square,ana1,ana2}. Consider first the celebrated Werner state on $\mathbb{C}^d \otimes \mathbb{C}^d$ \cite {Werner}, which can be written as \begin{eqnarray}\label{eq17} \rho^{W}=\frac{d-x}{d^3-d}\mathbb{I}_{d^2} +\frac{dx-1}{d^3-d} \sum_{i,j} |ij\rangle\langle ji|,~~ x\in[-1,1], \end{eqnarray} and which is invariant under local unitaries of the form $U\otimes U$, i.e., $\rho^W = (U\otimes U) \rho^W (U^\dag\otimes U^\dagger)$ for any local unitary operation $U$; therefore one can choose the optimal measurement basis to be $\tilde{\Pi}_i^A=|i\rangle\langle i|$, which yields \begin{eqnarray}\label{eq18} \rho^{W}-\tilde{\Pi}^A(\rho^W)=\frac{dx-1}{d^3-d}\sum_{i\neq j} |ij\rangle\langle ji|. \end{eqnarray} As $\sum_{i\neq j} |ij\rangle\langle ji|$ constitutes a permutation matrix (a binary matrix with exactly one entry 1 in each row and each column and zeros elsewhere), the singular values of $\rho^{W}-\tilde{\Pi}^A(\rho^W)$ can be evaluated directly as $|dx-1|/(d^3-d)$ with multiplicity $d(d-1)$. Then, by the definition \eqref{eq3} we obtain \begin{eqnarray}\label{eq19} N_1(\rho^{W})=\frac{|dx-1|}{d+1}, \end{eqnarray} so $N_1(\rho^W)$ vanishes only when $x=1/d$, which implies that for the present case, the trace MIN disappears only when $\rho^W$ reduces to the maximally mixed state.
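Eq. \eqref{eq19} can be checked by brute force: since $\rho^W_A=\mathbb{I}_d/d$ is maximally mixed, the computational basis is a locally invariant (and, by the $U\otimes U$ symmetry, optimal) measurement. An illustrative NumPy sketch with $d=3$ and $x=0.4$:

```python
import numpy as np

d, x = 3, 0.4
SWAP = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        SWAP[i * d + j, j * d + i] = 1.0   # sum_{ij} |ij><ji|

rho_W = (d - x) / (d**3 - d) * np.eye(d * d) \
        + (d * x - 1) / (d**3 - d) * SWAP

# Computational-basis measurement on party A.
post = np.zeros_like(rho_W)
for i in range(d):
    P = np.zeros((d, d)); P[i, i] = 1.0
    PI = np.kron(P, np.eye(d))
    post += PI @ rho_W @ PI

N1 = np.linalg.svd(rho_W - post, compute_uv=False).sum()
print(np.isclose(N1, abs(d * x - 1) / (d + 1)))   # Eq. (19)
```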
Meanwhile, the conventional MIN for $\rho^{W}$ has also been derived analytically \cite{glo3}, from which we obtain \begin{eqnarray}\label{eq-ad1} N_1(\rho^W)=\sqrt{d(d-1)N_2(\rho^W)}. \end{eqnarray} This means that both $N_1$ and $N_2$ give qualitatively the same descriptions of nonlocality for $\rho^W$ with finite $d$. It should be noted, however, that their asymptotic behaviors are different, because $\lim_{d\rightarrow\infty}N_1(\rho^W)=|x|$, while $\lim_{d\rightarrow\infty}N_2(\rho^W)=0$. The second high-dimensional state we consider is the $d\times d$ dimensional isotropic state \begin{eqnarray}\label{eq20} \rho^{I}=\frac{1-x}{d^2-1}\mathbb{I}_{d^2}+\frac{d^2 x-1}{d^2-1} |\Phi\rangle\langle \Phi|,~~ x\in[0,1], \end{eqnarray} with $|\Phi\rangle=\frac{1}{\sqrt{d}}\sum_i |ii\rangle$, where $|i\rangle$ denotes the computational basis of $ \mathbb{C}^d$. For this state, let $\tilde{\Pi}_k^A=|\tilde{k}\rangle\langle\tilde{k}|$ be the measurement basis that maximizes the trace norm in Eq. \eqref{eq3}; then, due to the symmetry of $|\Phi\rangle$, one can always find a local unitary operation $U$ such that $(U\otimes U)|\Phi\rangle=\frac{1}{\sqrt{d}} \sum_k |\tilde{k}\tilde{k} \rangle$, and the local unitary invariance of $N_1$ ensures $N_1(\rho^I)=N_1[(U\otimes U)\rho^I (U^\dag\otimes U^\dag)]$. Therefore, by denoting $\rho^{I}_U=(U\otimes U)\rho^I (U^\dag\otimes U^\dag)$, we obtain \begin{eqnarray}\label{eq21} \rho^{I}_U-\tilde{\Pi}^A(\rho^I_U)=\frac{d^2 x-1}{d(d^2-1)}\sum_{k\neq l} |\tilde{k}\tilde{k}\rangle\langle\tilde{l}\tilde{l}|, \end{eqnarray} the singular values of which can be evaluated analytically as $|d^2 x-1|/(d^2+d)$ with multiplicity $1$ and $|d^2 x-1|/(d^3-d)$ with multiplicity $d-1$, and this yields \begin{eqnarray}\label{eq22} N_1(\rho^{I})=\frac{2|d^2 x-1|}{d(d+1)}. \end{eqnarray} Here, the trace MIN $N_1(\rho^I)=0$ only when $x=1/d^2$, namely, when $\rho^I$ becomes maximally mixed.
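Eq. \eqref{eq22} can likewise be verified numerically; the NumPy sketch below uses illustrative values $d=3$ and $x=0.7$, measuring in the computational basis (locally invariant since $\rho^I_A=\mathbb{I}_d/d$, and optimal because $|\Phi\rangle$ is symmetric in this basis):

```python
import numpy as np

d, x = 3, 0.7
phi = np.zeros(d * d)
for i in range(d):
    phi[i * d + i] = 1.0 / np.sqrt(d)      # |Phi> = sum_i |ii> / sqrt(d)

rho_I = (1 - x) / (d**2 - 1) * np.eye(d * d) \
        + (d**2 * x - 1) / (d**2 - 1) * np.outer(phi, phi)

# Computational-basis measurement on party A.
post = np.zeros_like(rho_I)
for i in range(d):
    P = np.zeros((d, d)); P[i, i] = 1.0
    PI = np.kron(P, np.eye(d))
    post += PI @ rho_I @ PI

N1 = np.linalg.svd(rho_I - post, compute_uv=False).sum()
print(np.isclose(N1, 2 * abs(d**2 * x - 1) / (d * (d + 1))))   # Eq. (22)
```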
Moreover, the analytical expression for the conventional MIN can be obtained from Ref. \cite{glo3}, combining which we obtain \begin{eqnarray}\label{eq-ad2} N_1(\rho^I)=2\sqrt{\frac{(d-1)N_2(\rho^I)}{d}}. \end{eqnarray} The asymptotic values are given by $\lim_{d\rightarrow\infty} N_1(\rho^I)=2x$ and $\lim_{d\rightarrow\infty}N_2(\rho^I)=x^2$, respectively. This implies that the two MIN measures still give qualitatively the same characterizations of nonlocality for the isotropic state. Moreover, it is remarkable that for the special case $x=1$, i.e., when $\rho^I$ reduces to the maximally entangled state, we have $N_1(\rho^{I}) =2(d-1)/d$, which is just twice the conventional MIN. \section{Summary and discussion}\label{sec:5} To summarize, we have introduced a well-defined measure of nonlocality by making use of the trace norm. It remedies the undesirable property of the conventional MIN, which can be changed arbitrarily and reversibly by trivial local actions on the unmeasured subsystem. We proved explicitly that the proposed trace MIN is nonincreasing under the action of general CPTP quantum channels on the unmeasured subsystem. This property is by itself of conceptual significance, as it has already been proved that the Schatten 1-norm (trace norm) is the only $p$-norm that can be used to give a well-defined quantum correlation measure \cite{trace}. Here, the fascinating properties of the trace MIN show again the ubiquity and intrinsic significance of the Schatten 1-norm for defining the MIN. We hope this may shed some new light on the issue concerning the characterization and quantification of nonlocality from a measurement perspective. We have also presented analytical formulas of the trace MIN for any $2\times n$ dimensional pure state, a general two-qubit state, as well as the Werner states and the isotropic states on $\mathbb{C}^d \otimes \mathbb{C}^d$, which possess high symmetry.
These results reveal that the trace MIN captures the nonlocal properties of a system more intrinsically than the conventional MIN does. Moreover, we revealed a unique and appealing characteristic of this newly proposed nonlocality measure, namely, that it can evade the detrimental effects of certain noisy channels for all times, for suitably designed initial states. This may have potential applications in QIP owing to its coherence-protecting property. We remark that the entropic measure of MIN based on the von Neumann entropy \cite{min2}, or its equivalent form based on the relative entropy \cite{min3}, is also monotonically nonincreasing, due to the monotonicity of the quantum mutual information under channels on $B$ (see Ref. \cite{Piani} for a detailed proof). Moreover, it has already been pointed out that one can remedy the MIN via the square root of the considered density matrix \cite{square}. Here, we mention that it is also natural to define the MIN as \begin{eqnarray}\label{eq-ad3} N_B(\rho_{AB})= 2\max_{\Pi^A}\{1 -\sqrt{F[\rho_{AB},\Pi^A(\rho_{AB})]}\}, \end{eqnarray} via the Bures distance \cite{bures}, with $\Pi^A$ being the locally invariant measurement and $F(\rho,\sigma)=[\text{Tr}(\sqrt{\rho} \sigma \sqrt{\rho})^{1/2}]^2$ the Uhlmann fidelity. By using the monotonicity of the Bures distance \cite{Nielsen}, and after an analysis similar to that used to prove Theorem 1, one can show directly that $N_B$ is also nonincreasing under general CPTP channels. However, its evaluation may be intractable, and further investigation is needed; the measure presented in this paper may thus be widely used for its concise and simple form. The significance of this computable trace MIN in QIP can be studied in parallel with that of quantum entanglement and quantum discord; see the reviews \cite{RMP,Vedral-RMP}.
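As an illustration of the Bures-based definition in Eq.~\eqref{eq-ad3}, the sketch below evaluates the Uhlmann fidelity between a two-qubit Bell state and its image under one candidate locally invariant measurement (the computational basis on $A$; the maximization over all $\Pi^A$ is deliberately omitted, so the result is only a lower bound on $N_B$, not its value). The example and all function names are ours, purely illustrative:

```python
import numpy as np

def psd_sqrt(M):
    """Matrix square root of a positive semidefinite matrix via eigh."""
    w, V = np.linalg.eigh(M)
    w = np.clip(w, 0.0, None)
    return (V * np.sqrt(w)) @ V.conj().T

def uhlmann_fidelity(rho, sigma):
    """F(rho, sigma) = [Tr sqrt(sqrt(rho) sigma sqrt(rho))]^2."""
    s = psd_sqrt(rho)
    return np.real(np.trace(psd_sqrt(s @ sigma @ s)))**2

def dephase_A(rho, d_a, d_b):
    """Pi^A(rho) for the computational-basis measurement on subsystem A
    (just one candidate measurement; no optimization is attempted here)."""
    out = np.zeros_like(rho)
    for k in range(d_a):
        P = np.zeros((d_a, d_a)); P[k, k] = 1.0
        Pk = np.kron(P, np.eye(d_b))
        out = out + Pk @ rho @ Pk
    return out

# Example: two-qubit Bell state |Phi+> = (|00> + |11>)/sqrt(2); its marginal
# rho_A = I/2 is left invariant by any projective measurement on A
psi = np.zeros(4); psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)
F = uhlmann_fidelity(rho, dephase_A(rho, 2, 2))   # = 1/2 here
candidate_NB = 2 * (1 - np.sqrt(F))               # = 2 - sqrt(2) for this basis
```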
Additionally, it may be applied as a new tool to various models to study physical phenomena such as quantum phase transitions and the topologies of those systems, similar to the applications of entanglement; see, for example, \cite{Amico-RMP,Cui-NC,Fan-PRL}. Moreover, as nonlocality is quantitatively related to Heisenberg's uncertainty principle \cite{Oppenheim-science}, which provides the basis for the security of quantum cryptography \cite{Berta-NP}, the clear physical significance of the trace MIN and its ease of calculation are also hoped to play a role in these related issues. \section*{ACKNOWLEDGMENTS} This work was supported by NSFC (11205121, 11175248), the ``973'' program (2010CB922904), NSF of Shaanxi Province (2014JM1008), and SRP of the Education Department of Shaanxi Province (12JK0986).
\section{Gravity: An Emergent Phenomenon}\label{sec:gravemerge} While the difference between a hot body and a cold one was known even to the cavemen, physicists struggled for centuries to understand the nature of heat \cite{tpr:heat}. It was known that a macroscopic system like, for example, a gas can be studied by introducing several thermodynamic variables (like temperature, entropy, \textit{etc.}), but for a very long time, nobody knew what these variables really meant. The breakthrough came with the work of Boltzmann, who essentially said: ``If you can heat it, it has microscopic degrees of freedom''. Before this idea was accepted, a gas or a fluid was thought of as a continuum all the way down to the smallest scales, and the notions of heat and temperature were superimposed on it in a rather \textit{ad hoc} manner. Boltzmann introduced a paradigm shift in which matter was treated as discrete at small scales and the thermal phenomena were related to the (suitably averaged) mechanical attributes of these discrete degrees of freedom. This paradigm shift is profound. It stresses that \textit{the existence of microscopic degrees of freedom leaves a tell-tale signature even at the largest macroscopic scales, in the form of temperature and heat}. One could have guessed that a glass of water {must be} made of discrete microscopic degrees of freedom just from the fact that it can be heated, without probing it at Angstrom scales, even though it actually took centuries for physicists to recognize that temperature and heat provide a direct link between microscopic and macroscopic phenomena. In fact, a relation like $Nk_B=E/[(1/2)T]$ directly counts the microscopic degrees of freedom, $N$, in terms of the macroscopic variables $E$ and $T$!
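This counting can be made concrete with ordinary matter. The following sketch (our own illustration; the gas and the numerical values are arbitrary, with standard CODATA constants) recovers the number of degrees of freedom of a monatomic ideal gas from the purely macroscopic pair $(E,T)$:

```python
# Physical constants (CODATA values)
k_B = 1.380649e-23    # Boltzmann constant, J/K
N_A = 6.02214076e23   # Avogadro number, 1/mol

T = 300.0             # temperature, K
n_mol = 1.0           # amount of gas, mol

# Monatomic ideal gas: three translational degrees of freedom per atom,
# each carrying an average energy (1/2) k_B T, so E = (3/2) N k_B T
N_atoms = n_mol * N_A
E = 1.5 * N_atoms * k_B * T

# Boltzmann-style counting: N k_B = E / [(1/2) T] recovers the number of
# degrees of freedom from the macroscopic variables E and T alone
N_dof = E / (0.5 * k_B * T)
assert abs(N_dof / (3 * N_atoms) - 1) < 1e-12
```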
Mathematically, one key variable in thermodynamics, which was absent in the Newtonian mechanics of point particles, is the heat content $TS$ of matter, which is the difference $(E-F)$ between the internal energy and the free energy of the system. In terms of densities, the heat density is $Ts = P+\rho$, where $s$ is the entropy density, $\rho$ is the energy density and $P$ is the pressure. (This is the Gibbs--Duhem relation for systems with zero chemical potential, which are the systems we will be interested in.) Proceed now from normal matter to spacetime. Work done in the last several decades \cite{A1,A2,A3,A4,A5,A6,A7,A8} shows that spacetimes, due to the existence of null surfaces, which block information from a certain class of observers, also possess a heat density $Ts$. The emergent gravity paradigm \cite{A9,A11} builds upon this fact and treats the gravitational field equations as analogous to the equations of fluid dynamics or elasticity. There is a considerable amount of {internal} evidence in the structure of gravitational theories, much more general \cite{A13,A34} than Einstein's theory, to indicate that this is a correct and useful approach to pursue. This review explores several aspects of this approach and extends the ideas to a deeper level. \section{Scope, Structure and Features of this Review} As will become clear soon, it is possible to associate a temperature and entropy density with every event in the spacetime, just as one could have done for a glass of water. On the other hand, one traditionally described the dynamics of spacetime through some field equation for gravity, because Einstein told us that gravity is nothing but the curvature of spacetime. If we take both of these results seriously, we are led to the following conclusions and results described in this review: \begin{enumerate} \item The Boltzmann principle suggests that if spacetime can be hot, it must have a microstructure.
What is more, we should be able to count the atoms of spacetime without having the technology to do Planck-scale experiments, just as Boltzmann could guess the existence of atoms of matter without doing Angstrom-scale experiments. We would then expect a relation like $Nk_B=E/[(1/2)T]$ to exist for the spacetime. We will see in Section~\ref{sec:avogadro} that this is indeed the case. \item If the spacetime is analogous to a fluid made of atoms, the gravitational field equations must have the same conceptual status as the equations describing fluid mechanics. Hence, we should be able to derive them from a purely-thermodynamic variational principle. Just as in the case of matter, such a variational principle \cite{A14,A16} will be a phenomenological input when we approach it from the macroscopic side. Further, we should be able to write the field equation in a purely \textit{thermodynamic} language rather than in the (conventional) \textit{geometrical} language \cite{A19,A20,A21,A22}. Consequently, we would expect several variables, which are usually considered geometrical, to have an underlying thermodynamic interpretation. We will describe these features in Sections~\ref{sec:elegantgravdyn} and~\ref{sec:geotherm}. \item The discreteness of normal matter is usually taken into account in the kinetic theory by introducing a distribution function $f(x^i, p_i)$, such that $dN = f(x^i, p_i) d^3x d^3p$ counts the number of atoms in a phase volume. Such a description recognizes the discreteness, but works at scales such that the volume $d^3x$ is large enough to, say, contain a sufficient number of atoms. We can develop (see Sections~\ref{sec:gravheatden} and~\ref{sec:eventarea}) a similar concept for the spacetime that recognizes the discreteness at the Planck scale and yet allows the use of continuum mathematics to describe the phenomena. 
This provides a deeper level of description of spacetime, such that the thermodynamic variational principle, mentioned in Item (2) above, can be obtained from it. \item Such a reformulation of spacetime dynamics as thermodynamics should provide us with insights into some of the problems of the standard formulation, like, for example, the cosmological constant, spacetime singularities, \textit{etc}. This goes beyond describing what is known in a \textit{new language} and should lead to \textit{new results} \cite{C8,C9}. I will describe in Sections~\ref{sec:eventarea} and~\ref{sec:summary} how this approach leads to a new perspective on cosmology and allows us to {predict the numerical value} of the {cosmological\ constant}! \end{enumerate} There exists a fair amount of previous work (cited above) that shows that the emergent gravity paradigm does achieve Items (1), (2) and (4) above. In Sections~\ref{sec:gravbricks} and \ref{sec:geotherm}, we will review these developments, highlighting some recent results. The main thrust of this article, however, will be to describe (Sections~\ref{sec:gravheatden}--\ref{sec:summary}) the first glimpses of a viable microscopic model, related to Item (3) above, and to explain how one could possibly recover spacetime thermodynamics as a limit of the statistical mechanics of the atoms of spacetime.\footnote{\textit{Notation:} The signature is $(-,+,+,+,\ldots)$. The Latin letters run over all of the spacetime indices $(0,1,2,\ldots,d-1)$; the Greek letters over the spatial indices $(1,2,\ldots,d-1)$; and the uppercase Latin letters, $A,B,C,\ldots$, run over a co-dimension two surface when appropriate. We set $\hbar=1,c=1$ and $16\pi G=1$ for the most part of our discussion (occasionally, when we use the $G=1$ units, it will be mentioned specifically).
Einstein's field equations will then take the form $2G_{ab}=T_{ab}$.} \section{Building Gravity: Brick by Brick}\label{sec:gravbricks} I will begin by describing the logical structure behind a first-principles approach, which obtains the spacetime dynamics as an emergent phenomenon, working from the macroscopic side. To do this, it is convenient to separate the kinematic (``how gravity makes the matter move'') and dynamic (``how matter makes the spacetime curve'') aspects of the gravitational theories. This is important, because there is some amount of emotional resistance in the community to tinkering with general relativity, given its elegance and beauty. However, what is not often recognized (or stressed in the textbooks) is that \textit{all} of the elegance of general relativity is confined to its {kinematic part}, which describes gravity as being due to the curvature of spacetime. The dynamics, encoded in the gravitational field equations, has no real elegance and, in fact, does not follow from any beautiful principle analogous to, for example, the principle of equivalence. The emergent gravity paradigm retains {all} of the elegance of general relativity by keeping its kinematic structure intact; further, it provides a nice thermodynamic underpinning to describe the dynamics. In Sections~\ref{sec:elegantgravkin} and \ref{sec:elegantgravdyn}, I will describe how this comes about. \subsection{The Elegance of Gravitational Kinematics}\label{sec:elegantgravkin} Judicious use of the principle of equivalence tells us that gravity \textit{is} geometry and can be described by a metric $g_{ab}$ of the curved spacetime. Further, the principle of general covariance insists on the democratic treatment of all observers in the spacetime.
By abandoning any special form of the pre-geometric metric (like the $\eta_{ab}$ of special relativity), we accept the fact that one can no longer think of a part of $g_{ab}$ as arising due to acceleration (\textit{i.e.}, coordinate choice) and a part as arising due to genuine curvature. These principles also provide us with a procedure to describe the influence of spacetime geometry on matter fields: we invoke the standard laws of special relativity (SR) in a freely-falling frame (FFF), rewrite them in a generally covariant language valid in arbitrary curvilinear coordinates and postulate that the same form should hold, even in a curved spacetime. As a consequence, the energy momentum tensor $T^a_b$ for the matter (known from SR) will satisfy the equation: \begin{equation} \nabla_a T^a_b = 0 \label{divT} \end{equation} in curvilinear coordinates in SR and, hence, should also hold in arbitrary curved geometry. Generically, this equation will give the equations of motion for matter in the presence of gravity. In our approach, the matter sector will be described by a $T^a_b$, which satisfies \eq{divT}, rather than by an action, \textit{etc}. It is also straightforward to conclude from \eq{divT}, applied to the light rays, that they will bend in the presence of gravity; hence the causal structure of the spacetime will now be determined by the gravitational field. In particular, it is easy to construct observers (\textit{i.e.}, timelike congruences) in any spacetime such that part of the spacetime will be inaccessible to them.\footnote{I stress that (a) this is a purely kinematic feature and (b) it is \textit{always} observer dependent. 
For example, (i) such observers exist even in flat spacetime and (ii) in the case of, say, a black hole spacetime, an observer freely falling into the black hole and one who is stationary outside will access different regions of spacetime.} A generic example of such observers is provided by the local Rindler observers \cite{A35}, constructed as follows: In a region around any event $\mathcal{P}$, introduce the FFF with coordinates $(T, \mathbf{X})$. Boost from the FFF to a local Rindler frame (LRF) with coordinates $(t,\mathbf{x})$ constructed using some acceleration $a$, through the transformations: $X=x\cosh (at), T=x\sinh (at)$. There will be a null surface passing through $\mathcal{P}$, which gets mapped to the $X=T$ surface in the FFF; this null surface will now act as a patch of horizon to the $x=$ constant Rindler observers. This construction leads to the most beautiful result \cite{A5,A6} we have obtained so far by combining the principles of general relativity and quantum field theory: the local vacuum state, defined by the freely-falling observers around an event, will appear as a thermal state to the local Rindler observer with the temperature: \begin{equation} k_BT = \left(\frac{\hbar}{c}\right) \left(\frac{a}{2\pi}\right) \end{equation} where $a$ is the acceleration of the local Rindler observer, which can be related to other geometrical variables of the spacetime in different contexts. This Davies--Unruh temperature tells us that around \textit{any} event, in \textit{any} spacetime, there exists a class of observers who will perceive the spacetime as hot. This fact will play a crucial role in our discussion. There are a couple of related results that we will use later on, which are worth recalling at this stage. The~first is the relation between Euclidean spacetime and the temperature introduced above.
The~mapping, from the FFF to the LRF, $X=x\cosh at,\ T=x\sinh at$, has the Euclidean continuation (under \mbox{$iT = T_E, \ it = t_E$}) given by $X=x\cos at_E,\ T_E=x\sin at_E$. This, in turn, maps a pair of null surfaces $X^2 - T^2 =0$ to the single point in the Euclidean origin given by $X^2 + T_E^2 =0$. Approaching the origin of the Euclidean sector, therefore, corresponds to approaching the null surface in the original spacetime as a limit. We will make use of this fact later on. The second result \cite{A35} is related to the energy flow associated with the matter that crosses the null surface, as viewed from the FFF. A local Rindler observer will see that the matter takes a very long time to cross the local Rindler horizon, thereby allowing for thermalization to take place. (This is similar to the fact that, as seen by the outside observer, matter takes infinite time to cross the black hole horizon). Since the local Rindler observer attributes a temperature $T$ to the horizon, she will interpret the energy associated with the matter that crosses the null surface (asymptotically) as some amount of energy $\Delta E$ being dumped on a \textit{hot} surface, thereby contributing a \textit{heat} content $\Delta Q=\Delta E$. This quantity can be computed as follows: We choose an FFF around any given spacetime event $\mathcal{P}$ and construct an LRF. The LRF provides us with an approximate Killing vector field $\xi ^{a}$, generating boosts, which coincides with the null normal $\ell ^{a}$ at the null surface. The heat current arises from the boost energy current $T_{ab}\xi ^{b}$ of matter. Therefore, the total heat energy dumped on the null surface will be: \begin{align}\label{Paper06_New_11} Q_{m}=\int \left(T_{ab}\xi ^{b}\right)d\Sigma ^{a}=\int T_{ab}\xi ^{b}\ell ^{a}\sqrt{\gamma}d^{2}x d\lambda =\int T_{ab}\ell ^{b}\ell ^{a}\sqrt{\gamma}d^{2}x d\lambda \end{align} where we have used the fact that $\xi ^{a} \to \ell ^{a}$ on the null surface. 
Since the parameter $\lambda $ (defined through $\ell^a = dx^a/d\lambda$) is similar to a time coordinate, we can also define a heating rate: \begin{equation} \frac{dQ_{m}}{d\lambda}=\int T_{ab}\ell ^{b}\ell ^{a}\sqrt{\gamma}d^{2}x \end{equation} and a heating rate density per unit proper area of the surface: \begin{equation} \mathcal{H}_m[\ell_a]\equiv \frac{dQ_{m}}{\sqrt{\gamma}d^{2}xd\lambda}=T_{ab} \ell^a\ell^b \label{hmatter} \end{equation} so that the heat transferred by matter is obtained by integrating $\mathcal{H}_m$ with the integration measure $\sqrt{\gamma}d^{2}xd\lambda$ over the null surface generated by the null congruence $\ell ^{a}$, parametrized by $\lambda$. We will simply call $\mathcal{H}_m$ the heat density (energy per unit area per unit time) of the null surface, contributed by matter crossing a local Rindler horizon, as interpreted by the local Rindler observer. There are two features that are noteworthy regarding this heat density. \begin{itemize} \item If we add a constant to the matter Lagrangian (\textit{i.e.}, $L_{m} \to L_{m} + $ constant), the $T^a_{b}$ changes by $T^a_b \to T^a_b + $ (constant) $\delta^a_b$. The heat density, defined by Equation~(\ref{hmatter}) remains invariant under this transformation. \item The heat density vanishes if $T^a_b \propto \delta^a_b$. Therefore, \textit{the cosmological constant has zero heat density}, though it has non-zero energy density. (In fact, for an ideal, comoving fluid, $T_{ab} \ell^a\ell^b = (\rho + P)$, and hence, the heat density vanishes only for the {cosmological\ constant}\ with equation of state $\rho=-P$.) \end{itemize} We will have occasion to use these facts later on. \subsection{Restoring Elegance to Gravitational Dynamics}\label{sec:elegantgravdyn} The next task is to obtain the field equations describing the evolution of spacetime geometry. 
In the conventional approach, there is no simple guiding principle that allows us to do this, and it ultimately reduces to certain assumptions of simplicity. I will now show how it is possible to approach gravitational dynamics using a guiding principle, which turns out to be as powerful as the principle of equivalence~\cite{A19,A11}. Recall that the equations of motion for matter, obtained from an action principle, remain invariant if we add a constant to the matter Lagrangian, \textit{i.e.}, under $L_{m} \to L_{m} + $ constant.\footnote{To be precise, there is some subtlety if supersymmetry is an unbroken symmetry; since we have no evidence for supersymmetry anyway, I will not discuss this issue.} Mathematically, this is a trivial consequence of the fact that the Euler equations only care about the \textit{derivatives} of the Lagrangian. Physically, this encodes the principle that the zero level of energy density does not affect dynamics. It~seems reasonable to postulate that the gravitational field equations should not break this symmetry, which is already present in the matter sector. Since $T_{ab}$ is the most natural source for gravity (as can be argued from the principle of equivalence and considerations of the Newtonian limit), we demand that: \begin{itemize} \item[$\blacktriangleright$] The variational principle that determines the dynamics of spacetime must be invariant under the change $T^a_b \to T^a_b + $ (constant) $\delta^a_b$. \end{itemize} This principle immediately rules out the possibility of varying the metric tensor $g_{ab}$ in a covariant, local, action principle to obtain the field equations!
It can be easily proven \cite{C7} that if (i) the action is obtained from a local, covariant Lagrangian integrated over a region of spacetime with the standard measure $\sqrt{-g}\, d^4x$ and (ii) the dynamical equations are obtained by the unrestricted variation\footnote{The second condition rules out unimodular \cite{C18,C19,C20} theories and their cousins, in which one varies the metric keeping $\sqrt{-g}$ fixed; I do not think we have good physical motivation for this approach.} of the metric in the action, then the field equations \textit{cannot} remain invariant under $T^a_b \to T^a_b + $ (constant) $\delta^a_b$. In fact, $L_{m} \to L_{m} + $ constant is no longer a symmetry of the action if the metric is treated as the dynamical variable. Therefore, any variational principle we want to work with cannot use $g_{ab}$ as a dynamical variable. This fact, in turn, raises two issues: (1) Normally, you will vary some variables $q_A$ in an action to obtain equations of motion for the \textit{same} variables $q_A$. We, of course, want the dynamical equation to still describe the evolution of $g_{ab}$, but we have just concluded that we cannot vary $g_{ab}$ in any variational principle! How is this possible? (2) In the conventional approach, we vary the metric in the \textit{matter} Lagrangian to obtain $T_{ab}$ as the source. Since we are not varying $g_{ab}$, but still want $T_{ab}$ to be the source, it is necessary to have $T_{ab}$ explicitly included in the variational principle. Therefore, we want the variational principle to \textit{depend} on $T_{ab}$ and, yet, \textit{be invariant} under $T^a_b \to T^a_b + $ (constant) $\delta^a_b$! How can this be done? The answers to these two questions are closely related. The combination $T_{ab} n^a n^b$, where $n_a$ is any null vector, is obviously invariant under the shift $T^a_b \to T^a_b + $ (constant) $\delta^a_b$. 
Therefore, if the variational principle depends on $T_{ab}$ only through the combination $T_{ab} n^an^b$, the requirement in (2) above is automatically satisfied.\footnote{We want to introduce a minimum number of auxiliary variables. In a $d$-dimensional spacetime, the null vector with $(d-1)$ degrees of freedom is the minimum. In contrast, if we use, say, a combination $T^{ab}V_{ab}$ with a symmetric traceless tensor $V_{ab}$, to maintain the invariance under $T^a_b \to T^a_b + $ (constant) $\delta^a_b$, then we would have introduced $(1/2)d(d+1)-1$ degrees of freedom; in $d=4$, this introduces nine degrees of freedom, equivalent to introducing three null vectors rather than one.} This suggests using a variational principle that extremizes a functional defined by: \begin{equation} Q_{\rm tot}\equiv \int dV (\mathcal{H}_m+\mathcal{H}_g); \qquad \mathcal{H}_m[n_a] \equiv T_{ab} n^a n^b \label{Qtot} \end{equation} where $\mathcal{H}_g$ is the corresponding contribution from gravity, which is yet to be determined, and $dV$ is the proper measure for integration over a suitable region of spacetime, which is also currently left unspecified. This approach introduces an arbitrary null vector $n_a$ into the variational principle, which at this stage, is just an auxiliary field. However, since no null vector is special, the extremum should hold for all null vectors, which requires us to vary $n_a$ in \eq{Qtot} and demand that the resulting equations hold for \textit{all} $n_a$ at a given event. This should lead to a constraint on the background metric $g_{ab}$, which will determine the dynamics of spacetime. If we can find such a $\mathcal{H}_g$, we would have taken care of the issue raised in (1) above, as well. 
Therefore, we need to find a suitable functional $\mathcal{H}_g[n_a, g_{ab}]$ of $n_a(x), g_{ab}(x)$, such that the extremum condition $\delta Q_{tot}/\delta n_a=0$, for all null vectors $n_a$ at a given event $\mathcal{P}$, leads to sensible equations for the evolution of $g_{ab}$. Since $Q_{tot}$ is invariant under $T^a_b \to T^a_b + $ (constant) $\delta^a_b$, the source that appears in the field equation must respect this symmetry. Therefore, we would expect the equations of motion to be algebraically equivalent~to: \begin{equation} 2E^a_b=T^a_b+\Lambda \delta^a_b \label{eab} \end{equation} Here, $\Lambda$ is an \textit{undetermined integration constant}, which will allow us to absorb the constant in the shift $T^a_b \to T^a_b + $ (constant) $\delta^a_b$, while $E^a_b$ is constructed from $g_{ab}$ and its derivatives and must satisfy $\nabla_a E^a_b=0$ identically for consistency. By construction, the cosmological constant (for which $T_{ab}^{(\Lambda)}n^an^b=0$) cannot appear in the variational principle. At the same time, it arises as an integration constant in \eq{eab}, and we need a \textit{further} principle to fix its value once and for all. Therefore, the microscopic theory, \textit{viz}. the statistical mechanics of the atoms of spacetime, should lead to: \begin{itemize} \item The explicit form of $\mathcal{H}_g[n_a, g_{ab}]$ in the thermodynamic limit. \item A procedure to determine the value of the cosmological constant in our universe. \end{itemize} I will describe later on (see Section~\ref{sec:kineticsast}) how one could attempt to model such a microscopic theory that will satisfy both of these criteria, but first, I will show how one can obtain the form of $\mathcal{H}_g[n_a, g_{ab}]$ working downward from the macroscopic description.
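The shift invariance that runs through this construction, namely that $T^a_b \to T^a_b + $ (constant) $\delta^a_b$ leaves $T_{ab}n^an^b$ untouched for any null $n^a$, is a pointwise algebraic statement and is easy to verify numerically. A minimal sketch of our own (a flat metric is used purely for illustration; with the index lowered, the shift reads $T_{ab} \to T_{ab} + c\, g_{ab}$, and $g_{ab}n^an^b=0$ kills the added term):

```python
import numpy as np

rng = np.random.default_rng(0)

# Flat metric with signature (-,+,+,+); the statement is pointwise,
# so a single flat-space example suffices for illustration
g = np.diag([-1.0, 1.0, 1.0, 1.0])

# A null vector: n^a = (|v|, v), so that g_ab n^a n^b = 0
v = rng.normal(size=3)
n = np.concatenate(([np.linalg.norm(v)], v))
assert abs(n @ g @ n) < 1e-12

# An arbitrary symmetric stress tensor T_ab (indices down)
A = rng.normal(size=(4, 4))
T = A + A.T

# The shift T^a_b -> T^a_b + c delta^a_b lowers to T_ab -> T_ab + c g_ab;
# its contribution c g_ab n^a n^b vanishes on any null vector
c = 3.7
heat = n @ T @ n
heat_shifted = n @ (T + c * g) @ n
assert np.isclose(heat, heat_shifted)
```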
Everything works out fine \cite{A14,A16} if we take $\mathcal{H}_g$ to be a quadratic in $\nabla_an_b$ of the form: \begin{equation} \mathcal{H}_g= -\left(\frac{1}{16\pi L_P^2}\right) (4P^{ab}_{cd}\nabla_an^c\nabla_bn^d) \label{hgrav} \end{equation} where $L_P^2$ is an arbitrary constant at this stage, with the dimensions of area (this gives $\mathcal{H}_g$ the dimension $L^{-4}$ as required). Demanding that $\delta Q_{tot}/\delta n_a=0$ for all null vectors $n_a$ at a given event should lead to an equation for background geometry allows us to fix the form of $P^{ab}_{cd}$. We find that: \begin{equation}\label{Paper06_SecLL_07} P^{ab}_{cd} \propto \delta ^{aba_{2}b_{2}\ldots a_{m}b_{m}}_{cdc_{2}d_{2}\ldots c_{m}d_{m}} R^{c_{2}d_{2}}_{a_{2}b_{2}}\ldots R^{c_{m}d_{m}}_{a_{m}b_{m}} \end{equation} where $\delta ^{aba_{2}b_{2}\ldots a_{m}b_{m}}_{cdc_{2}d_{2}\ldots c_{m}d_{m}}$ is the totally-antisymmetric $m$-dimensional determinant tensor. If we now extremize $Q_{tot}$ in \eq{Qtot}, using this $P^{ab}_{cd}$ in the expression for $\mathcal{H}_g$ in \eq{hgrav}, we get the field equations of (what is known as) the Lanczos-Lovelock\ model \cite{A13,A14,A16}, given by: \begin{equation} E^{a}_{b}\equiv P^{ai}_{jk}R_{bi}^{jk}-\frac{1}{2}\delta^{a}_{b}\mathcal{R} =(8\pi L_P^2)T^{a}_{b}+\Lambda\delta^a_b \end{equation} where $E^{a}_{b}$ and $ m\mathcal{R}\equiv P^{ab}_{cd}R_{ab}^{cd} $ are the generalizations of the Einstein tensor and the Ricci scalar.\footnote{It is possible to prove that $E_{ab}$ is \textit{symmetric} \cite{A40} and $\nabla_aE^a_b =0$, so that everything is consistent. 
Further, the variational principle works when $dV$ in \eq{Qtot} is the integration measure on the spacetime or on a suitable null surface with $n_a$ as the normal.} These models \cite{A37,A38,A39} have the curious, and unique, feature that, even though the Lagrangians describing them, in the conventional approach, are $m$-th degree polynomials in the curvature tensor, the resulting field equations are still second order in $g_{ab}$! In $d=4$ dimensions, $P^{ab}_{cd}$ reduces to the determinant tensor given by $ P^{ab}_{cd}=(1/2)(\delta^a_c\delta^b_d-\delta^b_c\delta^a_d) $. The resulting equation for $g_{ab}$ is identical to Einstein's equations with an undetermined cosmological~constant: \begin{equation} G^a_b=(8\pi L_P^2)T^a_b+\Lambda \delta^a_b \end{equation} which has the structure in \eq{eab}, as expected. The expression for $P^{ab}_{cd}$ determines the entropy density of horizons (corresponding to the Wald entropy) in the resulting theory through the expression \cite{A8,A13}: \begin{equation} s=-\frac{1}{8} \sqrt{\gamma} P^{abcd}\epsilon _{ab}\epsilon _{cd} \label{entrophys} \end{equation} (where $\epsilon_{ab}$ is the binormal to the horizon surface) which, of course, reduces to $\sqrt{\gamma}/4$ if we choose $P^{ab}_{cd}=(1/2)(\delta^a_c\delta^b_d-\delta^b_c\delta^a_d)$, appropriate for the Einstein gravity. Thus, the specification of horizon entropy specifies the $P^{ab}_{cd}$ and selects the corresponding Lanczos-Lovelock\ model. In the case of normal matter, we know that two different bodies, say, a glass of water and a metal rod, can be kept at the same temperature; so, the temperature of a material is purely kinematic and contains no structural information. On the other hand, the entropy function $S(E,V)$ will be completely different for water and the metal rod at the same temperature; specifying it will allow us to describe the structure of the material. 
Similarly, the temperature of the spacetime, as we saw before, is purely kinematic, but specifying the form of the horizon entropy in \eq{entrophys} specifies the dynamics of the theory. So far, we have not specified the physical nature of the null vector field $n^a$, nor the physical interpretation of $\mathcal{H}_g$ or $\mathcal{H}_m$. We, however, know from Equations~(\ref{Paper06_New_11}) and (\ref{hmatter}) that the combination $T_{ab}n^an^b$ has a physical interpretation (of the heat density contributed by matter to a null surface), if we identify $n^a=\ell^a$, the tangent vector to a null congruence defining a null surface, and choose $dV=\sqrt{\gamma}\, d^2xd\lambda$, which is the natural integration measure on the null surface. The identifications, $n_a\to \ell_a$ with $\mathcal{H}_m[n]\to \mathcal{H}_m[\ell]$, in turn, imply that $\mathcal{H}_g[\ell_a]$ should be interpreted as the corresponding quantity, \textit{viz}. the heat density contributed by gravity to the null surface. Thus, our guiding principle, that the field equations should be invariant under $T^a_b \to T^a_b + $ constant $\delta^a_b$, tells us that the variational principle extremizes the total heat density (since we know the interpretation of $\mathcal{H}_m$ for matter), thereby leading to a direct thermodynamic interpretation of the variational principle based on: \begin{equation} Q_{\rm tot} \equiv \int \sqrt{\gamma}\, d^2xd\lambda\, (\mathcal{H}_g[\ell] +\mathcal{H}_m[\ell]) \end{equation} Since $P^{ab}_{cd}$ is related to the entropy of the horizons in the resulting theory, it is no surprise that the on-shell value of $Q_{\rm tot}$ is closely related to the entropy of null surfaces.
We can show \cite{SCTPnull} in general relativity, for example, that the on-shell value is: \begin{equation} Q_{\rm tot}^{({\rm on-shell})} = Q(\lambda_2)-Q(\lambda_1) \label{eqnx2} \end{equation} with: \begin{equation} Q(\lambda)= \int \frac{\sqrt{\gamma} \, d^2x}{4L_P^2}\, k_B T_{\rm loc} = \int d^2x (T_{\rm loc}\, s) \end{equation} where $T_{\rm loc}$ is the Davies--Unruh temperature attributed to the null surface by appropriate local Rindler observers and $s= (\sqrt{\gamma}/4L_P^2)$ is the entropy density in \eq{entrophys} for general relativity (the interpretation in \eq{eqnx2} works for all Lanczos-Lovelock\ models if we use the $s$ in \eq{entrophys}). It is also possible to provide a direct physical meaning to $L_P^2$. This is most easily found from rewriting \eq{eqnx2} as: \begin{equation} 2Q_{\rm tot}^{({\rm on-shell})} = E_{\rm sur}(\lambda_2) - E_{\rm sur}(\lambda_1) \label{eqnx1} \end{equation} with: \begin{equation} E_{\rm sur}(\lambda) = \int \frac{\sqrt{\gamma} \, d^2x}{L_P^2} \left( \frac{1}{2}k_B T_{\rm loc}\right) =\frac{1}{2} N_{\rm sur} (k_BT_{\rm avg}) \label{equiE} \end{equation} where $T_{\rm avg}$ is the average of $T_{\rm loc}$ over the surface and $N_{\rm sur} =(A_{\rm sur}/L_P^2)$ is the number of surface degrees of freedom \cite{A17,A18} if we attribute one degree of freedom to each cell of area $L_P^2$. \textit {This provides the physical meaning of the fundamental constant $L_P^2$ we have introduced as a quantum of area}; viz., the number of microscopic degrees of freedom associated\footnote{One can, of course, rescale $(1/2)k_BT\to(\nu/2)k_BT,N_{sur}\to A_{sur}/\nu L_P^2$ without changing the result; we have chosen $\nu=1$.} with an area $A$ is $A/L_P^2$. 
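As a simple illustration of \eq{equiE} (a sketch in units with $\hbar=c=k_B=1$ and $L_P^2=G$, taking $T_{\rm avg}$ to be the Hawking temperature, which is constant on the horizon), consider the Schwarzschild horizon: \begin{equation} N_{\rm sur}=\frac{A_{\rm sur}}{L_P^2}=\frac{4\pi(2GM)^2}{G}=16\pi GM^2;\qquad E_{\rm sur}=\frac{1}{2}N_{\rm sur}T_{\rm avg}=\frac{16\pi GM^2}{2}\,\frac{1}{8\pi GM}=M \end{equation} so the surface equipartition energy correctly reproduces the mass $M$, which is also the Komar energy of the spacetime. 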
Therefore, the physical meaning of $Q_{\rm tot}$, used in our variational principle, is reinforced by its on-shell value.\footnote{The relative factor two in the left-hand sides of Equations~(\ref{eqnx1}) and (\ref{eqnx2}) is not \textit{ad hoc} and, in fact, helps to solve a long-standing problem in general relativity related to a factor two in the definition of Komar mass; see, e.g., \cite{C4}; I will not discuss it here.} The following point, however, needs to be stressed. Eventually, one would like to obtain such a thermodynamic variational principle from a deeper, microscopic consideration. All that we require in such a derivation is that (i) \textit{some} auxiliary null vector field $n_a$ should arise in the microscopic theory and (ii) that it should lead to $\mathcal{H}_g [n_a]$ with the correct structure. If we identify this $n_a$ with the normals to the null surfaces, we get the correct field equations in the macroscopic limit. However, at a fundamental level, the auxiliary vector field $n^a$ (which could arise in the microscopic physics) and the $\ell^a$ (associated with the null surfaces in the macroscopic limit) are conceptually distinct. I will discuss this in greater detail in Sections~\ref{sec:gravheatden} and \ref{sec:kineticsast}. The fact that the thermodynamic description transcends general relativity in a unified manner is a feather in the cap for this approach. \textit{In fact, virtually every result in the emergent gravity paradigm obtained for general relativity also holds \cite{A13,A22,A32,A34} for the Lanczos-Lovelock\ models.} At the same time, the paradigm is quite selective; while it leads to the Lanczos-Lovelock\ models with a natural quadratic expression for $Q_{\rm tot}$, there is no natural generalization to obtain, say, the $f(R)$ models of gravity. The fact that the Lanczos-Lovelock\ models are the only ones with field equations that are second order in $g_{ab}$ seems to be encoded in this paradigm. 
For most of the remaining part of the review, I will work with $d=4$ and general relativity. The form of $\mathcal{H}_g$ is, of course, not unique, and we can add to it any scalar function $f(x)$, possibly built from the metric and other background variables; this will not change the result, because we are varying $\ell_a$ and not $g_{ab}$. One can also add to it any total derivative of the form $(1/\sqrt{\gamma})(dF/d\lambda)$, where $F$ can depend on $\ell_a$; such a term will contribute only at the end points $\lambda= \lambda_1,\lambda_2$, where, as usual, we will keep $\ell_a$ fixed. (We can also add a two-divergence $D_Av^A$ in the transverse space, which integrates to zero under the $\sqrt{\gamma}d^2x$ integration and hence is not of much significance.) Therefore, a more general form is: \begin{equation} \mathcal{H}_g=f(x)-\left(\frac{1}{16\pi L_P^2}\right) (4P^{ab}_{cd}\nabla_a\ell^c\nabla_b\ell^d)+\frac{1}{\sqrt{\gamma}}\frac{dF}{d\lambda}+D_Av^A \end{equation} This possibility of adding a $(1/\sqrt{\gamma})(dF/d\lambda)$ term allows us to rewrite $\mathcal{H}_g$ in a simpler form, which makes the final result obvious. It also helps to separate the contributions that arise even in flat spacetime (in, say, a Rindler frame) from the effects of curvature; in fact, we would expect $\mathcal{H}_g$ to become a total divergence in flat spacetime. I will get back to these aspects later in Section~\ref{sec:gravheatden}. There is an important insight about gravity that we can obtain from this exercise, in spite of the fact that the field equations are the same. In the Newtonian limit, the gravitational force is now given by: \begin{equation} F=\left(\frac{c^3 L_P^2}{\hbar}\right)\left(\frac{m_1 m_2}{r^2}\right) \label{newton} \end{equation} in terms of the three constants that we have introduced: $c,\hbar,L_P^2$. 
\textit{You should resist the temptation to write $(c^3L_P^2/\hbar)$ as $G_N$, thereby making $G_N$ independent of $\hbar$!} \eq{newton} tells us that gravity has no classical limit \cite{noclimit}, and the force diverges when $\hbar\to 0$ at finite $L_P^2$, just as all matter collapses when $\hbar\to 0$, because no stable atom can exist. \textit{Gravity is quantum mechanical at all scales.} To summarize, we have succeeded in obtaining the equations for spacetime evolution, such that: (1) The variational principle remains invariant under the shift $T^a_b \to T^a_b + $ (constant) $\delta^a_b$. (2) The variational principle is thermodynamic in character and extremizes the heat content of the null surfaces in the spacetime. (3) The cosmological constant arises as an integration constant of the equations (and its value needs to be fixed by some further microscopic principle once and for all). The really significant result is: \begin{itemize} \item[$\blacktriangleright$] The most natural way of incorporating the fact that gravity is immune to the zero-level of energy \textit{implies} an emergent, thermodynamic, interpretation for gravity! \end{itemize} This result connects what used to be thought of as two completely separate ideas! \section{Geometry in the Thermodynamic Language}\label{sec:geotherm} We have found the dynamical equations for the spacetime, but, as we said earlier (see Item 2 in Section~\ref{sec:gravemerge}), it does not make much sense to use the geometrical language to describe the spacetime evolution if the field equations have the same status as those in other emergent phenomena! We saw that the thermodynamic interpretation of geometry relates $L_P^2$ to the degrees of freedom and entropy of null surfaces. This idea will be reinforced when we express the dynamical equations in a thermodynamic language. This has been described in several previous works on this subject cited earlier. 
For the sake of completeness, I shall review some of the key results, amplifying the conceptual~aspects. One way to do this is to introduce \cite{SCTPnull} a conserved vector current $J^a[v]$, which can be defined in terms of an arbitrary vector field $v^a$ in the spacetime. (We will define $J^a[v]$ in the context of general relativity, but it can be generalized to all the Lanczos-Lovelock\ models). This current, when computed for the time evolution vector field $v^a=\xi^a$ in the spacetime, will have a direct thermodynamic significance. From any arbitrary vector field $v^{a}$, we can construct\footnote{This happens to be the off-shell version of the standard Noether current; but, the conventional way of deriving it using diffeomorphism invariance of the gravitational action is misleading, because it suggests that $J^a[v]$ has something to do with the action and its symmetries. As we see here, it has nothing to do with either, and its conservation is a rather trivial identity. We will continue to call it the Noether current, but its conservation does not require the nice theorems Emmy Noether proved!} a \textit{conserved} current $J^{a}=\nabla _{b}J^{ab}$ where the antisymmetric tensor $J^{ab}$ is defined as: $ (16\pi L_P^2) J^{ab}=\nabla ^{a}v^{b}-\nabla ^{b}v^{a}$. (The normalization of this current is arbitrary; the introduction of the area $L_P^2$ in the proportionality constant gives it the correct dimension and makes the later results transparent and simple. We will usually work in units with $16\pi L_P^2=1$ and reintroduce it in the final formulas.) 
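Incidentally, the conservation claimed in the footnote really is an identity, requiring only the antisymmetry of $J^{ab}$ and the symmetry of the Ricci tensor: \begin{equation} \nabla_aJ^a=\nabla_a\nabla_bJ^{ab}=\frac{1}{2}[\nabla_a,\nabla_b]J^{ab}=R_{mn}J^{mn}=0 \end{equation} with no field equations or action principle invoked anywhere. 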
Elementary algebra now leads to the alternative expression: \begin{equation} \sqrt{-g}\, J^{a}(v) =2\sqrt{-g}\, R^{a}_{b}v^{b}+f^{bc}\pounds _{v}N^{a}_{bc} \label{Paper06_Sec03_Eq02} \end{equation} where: \begin{align}\label{Paper06_Sec_01_Eq01} f^{ab}\equiv\sqrt{-g}g^{ab};\qquad N^{c}_{ab}\equiv-\Gamma ^{c}_{ab}+\frac{1}{2}\left(\delta ^{c}_{a}\Gamma ^{d}_{db}+\delta ^{c}_{b}\Gamma ^{d}_{ad}\right) \end{align} The individual terms in \eq{Paper06_Sec03_Eq02} are generally covariant, because the Lie derivative of the connection $\pounds_v\Gamma^c_{ab}$, given by $ \pounds _{v}\Gamma ^{a}_{bc}=\nabla _{b}\nabla _{c}v^{a}+R^{a}_{~cmb}v^{m} $, is generally covariant. The set $(f^{ab},N^{c}_{ab})$ contains the same amount of information as $(g_{ab},\Gamma^{c}_{ab})$, but has a more direct thermodynamic interpretation \cite{A21}. Let $\mathcal{H}$ be a null surface, which is perceived as a horizon by local Rindler observers who attribute to it a temperature $T$ and entropy density $s=\sqrt{\gamma}/4$. Then, one can show that: \begin{itemize} \item The combination $ N^{c}_{ab} f^{ab}$, integrated over $\mathcal{H}$ with the usual measure $d^3 \Sigma_c=\ell_c\sqrt{\gamma}d^2xd\lambda$, gives its heat content; that is: \begin{equation} \frac{1}{16 \pi L_P^2}\int d^3 \Sigma_c (N^{c}_{ab} f^{ab}) =\int d\lambda\ d^2x\ T s \end{equation} \item Consider the metric variations $\delta f$ that preserve the null surface. Remarkably enough, the combinations $f\delta N$ and $N\delta f$ correspond to the variations $s\delta T$ and $T\delta s$, when integrated over the null surface. 
That is: \begin{eqnarray} \frac{1}{16 \pi L_P^2}\int d^3 \Sigma_c(N^{c}_{ab}\delta f^{ab})&=& \int d\lambda\ d^2x\ T \delta s; \label{stsdt0}\\ \frac{1}{16 \pi L_P^2}\int d^3 \Sigma_c (f^{ab}\delta N^{c}_{ab})&=& \int d\lambda\ d^2x\ s \delta T \label{stSdT} \end{eqnarray} Therefore, the variations ($N\delta f, f\delta N$) exhibit \textit{thermodynamic conjugacy} similar to that in the corresponding variations $(T\delta s, s\delta T)$. \end{itemize} \subsection{The Avogadro Number of the Spacetime and the Spacetime Evolution}\label{sec:avogadro} A crucial relation in the study of, say, gases is the equipartition law $E=(1/2)Nk_BT$, which should be more appropriately written as: \begin{equation} Nk_B=\frac{E}{(1/2)T} \end{equation} Here, both the variables on the right-hand side, $E$ and $T$, have valid interpretations in the continuum, thermodynamic limit, but the $N$ on the left-hand side has no meaning in the same limit. The $N$ actually counts the microscopic degrees of freedom or, more figuratively, the number of atoms, the very existence of which is not recognized in thermodynamics! An equation like this directly relates the macroscopic and microscopic descriptions. Can we obtain a similar relation for spacetime? Can we count the number of atoms of spacetime? It turns out that indeed we can \cite{A17,A18}, and the current $J^a[\xi]$, where $\xi^a$ is the time evolution vector related to the (1 + 3) foliation, shows the way. Consider a section of a spacelike surface $\mathcal{V}$ with boundary $\partial\mathcal{V}$ corresponding to $N=$ constant. 
In any static spacetime, one can show that the gravitating (Komar) energy $E_{\rm Komar}$ of this bulk is equal to the equipartition heat energy of the surface we encountered earlier in \eq{equiE}: \begin{equation} E_{\rm Komar}\equiv\int d^{3}x\sqrt{h} \ 2N\bar{T}_{ab}u^{a}u^{b}=\int \frac{\sqrt{\gamma} \, d^2x}{L_P^2} \left( \frac{1}{2}k_B T_{\rm loc}\right) =\frac{1}{2} N_{\rm sur} (k_BT_{\rm avg}) \label{hequi1} \end{equation} where $\bar{T}_{ab}\equiv {T}_{ab}-(1/2)g_{ab}T$. Therefore, there is a correspondence between the bulk and boundary energies, as well as equipartition, which I will call holographic equipartition. It gets better. When we consider the \textit{most general} spacetime (rather than static spacetimes), we would expect the above relation to break down and the difference between the two energies to drive the evolution of the spacetime. This is precisely what happens. One can associate with the bulk energy $E_{\rm Komar}$ the number $N_{\rm bulk}$, defined as the number of degrees of freedom in a bulk volume \textit{if} the (Komar) energy $E_{\rm Komar}$ contained in the bulk is at equipartition at the temperature $T_{\rm avg}$. That is: \begin{align}\label{Papper06_NewFin01} N_{\rm bulk}\equiv\frac{\epsilon}{(1/2)T_{\rm avg}}\int d^{3}x\sqrt{h} \ 2N\bar{T}_{ab}u^{a}u^{b}=\frac{|E_{\rm Komar}|}{(1/2)T_{\rm avg}} \end{align} where $\epsilon=\pm$ is chosen so as to keep $N_{\rm bulk}$ positive, even if $E_{\rm Komar}$ turns negative. We do \textit{not}, of course, assume that the equipartition is actually realized; this is just a dimensionless measure of the Komar energy in terms of the average boundary temperature. One can then show \cite{A19} that the time evolution of spacetime geometry in a bulk region, bounded by the $N=\textrm{constant}$ surface, is driven by the suitably-defined bulk and boundary degrees of freedom. 
Specifically: \begin{align}\label{Paper06_NewFin02} \frac{1}{8\pi}\int d^{3}x\sqrt{h}u_a g^{ij}\pounds_\xi N^a_{ij} =\frac{\epsilon}{2}T_{\rm avg}\left(N_{\rm sur}-N_{\rm bulk}\right) \end{align} with $\xi_a=Nu_a$ being the time evolution vector, where $u_a$ is the velocity of the observers moving normal to the foliation.\footnote{The Lie variation term in \eq{Paper06_NewFin02} is closely connected with the canonical structure \cite{A19} of general relativity in the conventional approach, through the relation $ \sqrt{h}u_{a}g^{ij}\pounds _{\xi}N^{a}_{ij}=-h_{ab}\pounds _{\xi}p^{ab}$, where $p^{ab}=\sqrt{h}(Kh^{ab}-K^{ab})$ is the momentum conjugate to $h_{ab}$ in the standard approach.} This result shows that \textit{it is the difference between the surface and the bulk degrees of freedom that drives the time evolution of the spacetime!} (A very similar result holds \cite{SCTPnull} for a null surface, as well, in terms of corresponding variables.) A simple, but remarkable corollary is that in all static \cite{A17,A18} spacetimes, we have holographic equipartition, leading to the equality of the number of degrees of freedom in the bulk and boundary: \begin{equation} N_{\rm sur}=N_{\rm bulk}; \qquad (\mathrm {holographic\; equipartition}) \label{nsureqn} \end{equation} which, of course, is a restatement of \eq{hequi1}. \subsection{The Fluid Mechanics of the Null Surfaces} From the $J^a[v]$, one can define another vector field $P^a[v]$, which can be thought of as the gravitational momentum attributed to spacetime \cite{A36}, by an observer with velocity $v^a$. 
It is defined as: \begin{equation} P^a[v] \equiv 2 G^a_b v^b - J^a [v] = -Rv^{a}-g^{ij}\pounds _{v}N^{a}_{ij} \label{deffourvel} \end{equation} The physical meaning of $P^a[v]$ arises from the following fact: the conservation of the total momentum $(P^a + M^a)$, where $M^a$ is the momentum attributed to matter, for all observers will lead to \cite{A36} the field equations of general relativity; the introduction of $P^a(v)$ restores the conservation of momentum in the presence of gravity! When evaluated for the time evolution vector of the Gaussian null coordinates (GNC)\footnote{The GNC system generalizes the notion of the local Rindler frame associated with an arbitrary null surface; see~\cite{A41,A42,A43} for more details. We define the time evolution vector as $\xi^a=Nu^a$, where $u^a$ is the velocity of observers at rest in GNC. One can show that $\xi^a$ will reduce to the timelike Killing vector corresponding to the Rindler time coordinate if we rewrite the standard Rindler metric in the GNC form. Therefore, $\xi^a$ is a natural generalization of the time evolution vector corresponding to the local Rindler-like observers in the GNC, though, of course, $\xi^a$ will not be a Killing vector in a general spacetime.} associated with a given null surface, $P^a[\xi]$ reveals its thermodynamic significance in two contexts. First, we can show that the variational principle used to obtain the field equations has a simple interpretation \cite{SCTPnull} in terms of $P^a[\xi]$ in GNC. Second, the projection of $P^a[\xi]$ along $\ell_a,k_a$ and $q_{ab}$ associated with a null surface leads to three sets of equations, all of which have a direct thermodynamic interpretation. Let us start with the variational principle, which was based on \eq{Qtot}. The $Q_{\rm tot}$ has a simple expression in terms of the total momentum flux through the null surfaces. 
We can show \cite{SCTPnull} that: \begin{align}\label{Paper06_New_21} {Q}_{\rm tot}=-\int d^{2}x d\lambda \sqrt{\gamma}\, \ell_a \,P^{a}_{\rm tot}(\xi) =-\int d^{2}x d\lambda \sqrt{\gamma}\, \ell_a \,\left[P^{a}(\xi)+M^{a}(\xi)\right] \end{align} where the expressions in the integrand can be thought of as the limit of $\xi_a P_{\rm tot}^a(\xi)$ as we approach the null surface, and we have ignored the end point contributions. Clearly, it is the total energy density attributed to the total momentum $P^a_{\rm tot}$ by the local Rindler observers that contributes to the $Q_{\rm tot}$ of the null surface. The variational principle thus has a clear physical significance \textit{even off-shell}, unlike, for example, the action principle for gravity in the conventional approach. The second feature of the gravitational momentum $P^a(\xi)$ is somewhat more technical, and hence, I will only mention its physical content. Given the thermodynamic properties of the null surfaces, one would expect the flow of gravitational momentum vis-\`{a}-vis any null surface to be of primary importance. To explore this, we construct the GNC associated with the given null surface and the $P^a(\xi)$ using the corresponding time evolution vector. The natural basis vectors associated with the null surface are given by the set of vectors $(\ell ^{a},k^{a},e^{a}_{A})$, where $e^{a}_{A}$ spans the two transverse directions. The gravitational momentum can be decomposed using this basis as: $P^{a}=A\ell ^{a}+Bk^{a}+C^{A}e^{a}_{A}$; the components $A,B$ and $C^{A}$ can be recovered from the projections of $P^a$ given by $A=-P^{a}(\xi)k_{a}$, $B=-P^{a}(\xi)\ell _{a}$ and $C^{A}=P^{a}(\xi)e^{A}_{a}$. Therefore, the following combinations, $q^{a}_{b}P^{b}(\xi), k_{a}P^{a}(\xi)$ and $\ell_{a}P^{a}(\xi)$, will give the complete information about the flow of gravitational momentum vis-\`{a}-vis the given null surface. Each of them leads to an interesting thermodynamic interpretation. 
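The expressions quoted above for the components follow directly from the standard normalization of the null basis, $\ell_a\ell^a=k_ak^a=0$, $\ell_ak^a=-1$ and $\ell_ae^a_A=k_ae^a_A=0$; contracting the decomposition with $k_a$, $\ell_a$ and $e^A_a$ gives: \begin{equation} k_aP^a=A\,(k_a\ell^a)=-A;\qquad \ell_aP^a=B\,(\ell_ak^a)=-B;\qquad e^A_aP^a=C^A \end{equation} 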
However, since the calculations are somewhat involved, I will skip the algebraic details (which can be found in \cite{SCTPnull}) and summarize the results: \begin{itemize} \item The component $q^{b}_{a}P^{a}(\xi)$ allows us to rewrite the relevant component of the field equations in a form identical to the Navier--Stokes equation of fluid dynamics \cite{A24,A25} (for a variable that can be interpreted as the drift velocity on the horizons). This is probably the most direct link between the field equations and the fluid mechanics of atoms of spacetime. This result generalizes the corresponding result, known previously for black hole spacetimes \cite{A26,A27}, to \textit{any} null surface in \textit{any} spacetime. \item The projection $k_{a}P^{a}(\xi)$, evaluated on an arbitrary null surface, can be \cite{A33} rewritten in the form: $TdS=dE+PdV$, \textit{i.e.}, as a thermodynamic identity. Here, all of the variables have the conventional meanings, and the differentials are interpreted as changes in the relevant variables when we make an infinitesimal virtual displacement of the null surface in the direction of $k^a$. This generalizes the corresponding results, previously known \mbox{(see, e.g., \cite{A30,A31,A32,A28,A29})} for spacetimes with some symmetry, when the null surface in question is a horizon. This result also allows us to associate the notion of energy with an arbitrary null surface \cite{A44,A45}. \item Finally, the component $\ell _{a}P^{a}(\xi)$ gives \cite{SCTPnull} the evolution of the null surface, in terms of its heating rate involving both $ds/d\lambda$ and $dT/d\lambda$, where $s$ is the entropy density, $T$ is the temperature associated with the null surface and $\lambda$ is the parameter along the null generator $\ell _{a}$. 
\end{itemize} \section{A Closer Look at the Atoms of Spacetime}\label{sec:gravheatden} The results described so far suggest that the dynamics of spacetime is the thermodynamic limit of the statistical mechanics of microscopic degrees of freedom, which we shall call the atoms of spacetime. Our~next task is to obtain the heat density $\mathcal{H}_g$, used in the variational principle based on \eq{Qtot}, from a reasonable model for the microscopic degrees of freedom. Given the enormous conceptual complications in any such attempt, we will approach the problem in a step-by-step manner, proceeding by analogy with more familiar situations. Let us start by recalling certain features in the description of a normal fluid made of atoms. The macroscopic, thermodynamic description ignores the existence of discrete structures and describes the fluid as a continuum using variables, like density $\rho(t, \mathbf{x})$, pressure $p(t, \mathbf{x})$, mean velocity $\mathbf{V}(t, \mathbf{x})$, \textit{etc}. The price we pay for ignoring the discrete structures is that we need to add certain variables (like temperature) purely phenomenologically, say through the equation of state $p=p(\rho,T)$, for this description to work properly. The next layer of description for a fluid, used in physical kinetics, is in terms of the distribution function $f(t, \mathbf{x},\mathbf{v})$. (In the relativistic case, we will use $f(x^i,p_j)$ with $p^ip_i=m^2$, which reduces again to $f(t, \mathbf{x},\mathbf{v})$ with a suitable Jacobian). This description recognizes the fact that the fluid \textit{is} made of atoms. However, it works at a scale sufficiently large compared to the inter-atomic distance, so that we can interpret $dN = f(t, \mathbf{x},\mathbf{v})d^3\mathbf{x} d^3\mathbf{v}$ as the number of atoms in a phase volume $d^3\mathbf{x} d^3\mathbf{v}$. 
The key assumption is that we can introduce a volume element $d^3\mathbf{x}$, which is sufficiently small to be treated as `practically' infinitesimal and yet large enough to contain a sufficiently large number of atoms of the fluid. The main difference between the descriptions in these two layers (thermodynamics \textit{vs}. physical kinetics) lies in the fact that the latter allows us to handle the dispersion in the microscopic variables. For~example, $f(t, \mathbf{x},\mathbf{v})$ tells us that, at a given location $\mathbf{x}$, there could be several atoms moving in different directions with different speeds, thereby leading to velocity dispersion. One could therefore compute \textit{both} the mean velocity \textit{and} the velocity dispersion using $f(t, \mathbf{x},\mathbf{v})$ by: \begin{equation} \mathbf{V}(t,\mathbf{x})\equiv \frac{1}{n}\int \mathbf{v} f(t, \mathbf{x},\mathbf{v}) d^3\mathbf{v};\qquad \sigma_v^2(t,\mathbf{x})\equiv \frac{1}{n}\int (\mathbf{v}-\mathbf{V})^2 f(t, \mathbf{x},\mathbf{v}) d^3\mathbf{v} \end{equation} (where $n(t,\mathbf{x})\equiv\int f\, d^3\mathbf{v}$ is the number density) and relate $\sigma_v^2$ to the temperature by, say, $k_BT\propto \sigma_v^2$. In contrast, the thermodynamic description only has the notion of the mean velocity $\mathbf{V}(t,\mathbf{x})$ of the fluid at an event, but not that of any velocity dispersion, since no discrete structure is recognized. As a result, we have to introduce the temperature (and other variables) in an \textit{ad hoc} manner in such a description. Clearly, the description in terms of a distribution function, recognizing the existence of atoms with different velocities at a given point, is one level closer to reality and is the first step in incorporating the discreteness at the microscopic level. What we seek is a similar description, for the atoms of spacetime, so that we are led to the correct form of the heat density. Working from the macroscopic scales, we know that the auxiliary vector field $n_a$ plays a crucial role. 
However, the discussion in Section~\ref{sec:elegantgravdyn} shows that one can obtain the field equations with \textit{any} null vector $n_a$. In the macroscopic limit, if we identify $n_a$ with $\ell_a$, corresponding to a null congruence, then $T_{ab}\ell^a\ell^b$ has a thermodynamic interpretation. This does not immediately suggest a unique microscopic origin for this vector field $n_a$. There are two natural interpretations one could explore. The first one is to think of $n_a$ as representing something analogous to the velocity $\mathbf{v}$ of the atoms that appear in the distribution function. The fact that $n_a$ is null implies that the atoms of spacetime have no mass scale associated with them, which makes sense. However, in that case, one would have expected the kinetic energy contribution to the gravitational heat density to be of the form: \begin{equation} \mathcal{K}_{g}=\frac{1}{2}M_{ab}n^an^b \label{hk} \end{equation} rather than a quadratic in the \textit{derivatives} of $n_a$. The second possibility is to think of $n_a(x)$ as analogous to the mean velocity field $\mathbf{V}(t,\mathbf{x})$, which appears in the thermodynamic description. Then, one can relate a quadratic term in $\nabla_an_b$ to some kind of viscous heat generation (as indicated by the correspondence with the Navier--Stokes equations~\cite{A24,A25} mentioned earlier) contributing to the heat density. In the description of normal fluids, these two are \textit{completely} different constructs. However, in the description of spacetime, we have only one kind of vector field, $n_a$, and it should somehow play roles analogous to both $\mathbf{v}$ and $\mathbf{V}(t,\mathbf{x})$ simultaneously! Then, both of the descriptions will be valid, and we will have a natural interpretation of the heat density, from both microscopic and macroscopic scales. 
Mathematically, this requires that we should be able to express the heat density $\mathcal{H}_g$ in \eq{hgrav} in an equivalent form as a quadratic in $n_a$ (like \eq{hk}) without any derivatives. \textit{This is a very nontrivial constraint}; but again, everything works out fine! Let me explain how this comes about in some detail. To do this, let us begin by asking the question: How come the variation of a quadratic in $\nabla_an_b$, in \eq{hgrav}, did not lead to second derivatives of $n_a$ in the Euler--Lagrange equations? Algebraically, this is due to the occurrence of the commutator of covariant derivatives $[\nabla_i, \nabla_j] n_k$, which, as we know, is linear in $n_m$ and does not contain any second derivatives. There is, however, a nicer way to see this result \cite{A19}, which is based on the following identity: \begin{equation} 2P^{ab}_{cd}\nabla_an^c\nabla_bn^d= R_{ab}n^an^b + \frac{1}{\sqrt{\gamma}} \frac{d}{d\lambda} (\sqrt{\gamma}\Theta) \label{prsep} \end{equation} where $n^i$ is the affinely parameterized congruence with $\lambda$ being the affine parameter and $\Theta = \nabla_in^i = (d/d\lambda)(\ln \sqrt{\gamma})$. Therefore, $\mathcal{H}_g$ and $R_{ab}n^an^b$ differ by a total derivative term that does not contribute to the variation, and we can write: \begin{equation} Q_{\rm tot} = \int \sqrt{\gamma}\, d^2x d\lambda\left[ - \frac{R_{ab}}{8\pi L_P^2} + T_{ab} \right] n^an^b - \frac{1}{8\pi L_P^2} \int d^2 x \, \sqrt{\gamma}\, \Theta \Bigg|^{\lambda_2}_{\lambda_1} \label{var1} \end{equation} Ignoring the second term, since it contributes only at the end points $\lambda = (\lambda_1,\lambda_2)$, our variational principle reduces to working with $(-R_{ab}/8\pi L_P^2 + T_{ab})n^an^b$. Imposing the $n^2 =0$ condition by a Lagrange multiplier and varying this expression with respect to $n^a$ will lead to $R^a_b = (8\pi L_P^2) T^a_b + f(x) \delta^a_b$. 
Taking the divergence and using the Bianchi identities, as well as $\nabla_a T^a_b =0$, we find that \mbox{$f(x) = (1/2) R \ + $} constant, thereby leading to Einstein's equations with the {cosmological\ constant}\ appearing as an integration constant. \eq{prsep} also shows that $\mathcal{H}_g$ reduces to a total divergence term in flat spacetime (expressed in, say, the Rindler coordinates) and isolates the contribution due to spacetime curvature, which is contained~in: \begin{equation} \mathcal{K}_{g} \equiv - \frac{1}{8\pi L_P^2} R_{ab} n^an^b \label{hk1} \end{equation} Everything would have worked out fine even if we had used an expression for the gravitational heat density\footnote{A conceptually unsatisfactory feature of the standard approach to dynamics is that it equates a purely geometrical object $G^a_b$ to a matter variable $T^a_b$. It is unclear what is common to the two sides of this equation. Our approach shows clearly what is common to both sides of Einstein's equations, if we write it as $ (8\pi L_P^2)^{-1}R_{ab}\ell^a\ell^b=T_{ab}\ell^a\ell^b$. They both represent the heat densities, of spacetime and matter! Moreover, all of these results generalize to Lanczos-Lovelock\ models with $R^a_b$ replaced by $E^a_b$, \textit{etc.}} given by \eq{hk1}. The result in \eq{hk1} has exactly the same structure seen in \eq{hk}, which is what we wanted. Therefore, we could have thought of our $n_a$ as analogous to: (i) the macroscopic, mean velocity field $\mathbf{V}(t,\mathbf{x})$ and interpreted $\mathcal{H}_g$ in \eq{hgrav} as the heat density arising from something analogous to viscous dissipation; or (ii) the microscopic velocity field $\mathbf{v}$, which can be interpreted as analogous to the velocity of the atoms themselves. It is very gratifying that the same heat density allows both of the descriptions. 
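For completeness, let me indicate how the identity in \eq{prsep} arises; it is essentially the Raychaudhuri equation in disguise. For an affinely parameterized null congruence ($n^b\nabla_bn^a=0$) in general relativity, with $P^{ab}_{cd}$ the determinant tensor: \begin{equation} 2P^{ab}_{cd}\nabla_an^c\nabla_bn^d=\Theta^2-\nabla_bn^a\nabla_an^b=\Theta^2+\frac{d\Theta}{d\lambda}+R_{ab}n^an^b=R_{ab}n^an^b+\frac{1}{\sqrt{\gamma}}\frac{d}{d\lambda}\left(\sqrt{\gamma}\,\Theta\right) \end{equation} where the second equality uses the Raychaudhuri equation, $d\Theta/d\lambda=-\nabla_bn^a\nabla_an^b-R_{ab}n^an^b$, and the last uses $\Theta=(d/d\lambda)\ln\sqrt{\gamma}$. 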
The corresponding heating rate, made dimensionless for future convenience, is given by: \begin{equation} \frac{d(Q_{g}/E_P)}{d(\lambda/L_P)}= L_P^2\frac{dQ_{g}}{d\lambda}=L_P^2\int\sqrt{\gamma}d^2x \ \mathcal{K}_{g}= -\int\frac{\sqrt{\gamma}d^2x}{L_P^2}\left(\frac{L_P^2}{8\pi } R_{ab} n^an^b\right) \label{qrate} \end{equation} In fact, one can also work with a variational principle based on $(dQ_{g}/d\lambda)$ (rather than $Q_{g}$), if we use this expression in \eq{var1}. Therefore, the variational principle can be thought of as an extremum condition on the heating rate. It is possible to make some more progress with the expression in \eq{hk1} by recognizing that one could limit oneself to affinely parameterized null vectors $n_a =\nabla_a \sigma$, which are pure gradients. In that case, the gravitational heat density in \eq{hk1} takes the form: \begin{equation} \mathcal{K}_{g} \equiv - \frac{1}{8\pi L_P^2} R^{ab} \nabla_a\sigma \nabla_b\sigma \label{hk2} \end{equation} If we use this expression in \eq{Qtot} and vary $\nabla_a\sigma$, imposing the constraint that $\nabla_a\sigma$ is null, we will again get the correct field equations. As we mentioned earlier, when we approach the problem from the microscopic scales, we really have no idea what the extra degree of freedom $q_A$, on which our extremum principle will depend, actually is; an $n_a$ of the form $\nabla_a\sigma$ is adequate. Therefore, our task now reduces to coming up with a microscopic model, which will have the following~features: \begin{itemize} \item The key new ingredient in our approach is the introduction of a vector field $n_a = \nabla_a\sigma$ into a variational principle. It is not \textit{a priori} clear how an auxiliary variable like $\sigma$ or $n_a$ arises from a microscopic description and why we need to vary it in an extremum principle. The microscopic description should lead to the vector field $n_a = \nabla_a \sigma$, as well as $\sigma$ itself. This is probably the most important task. 
\item There should be a fundamental reason why null vectors, closely associated with local Rindler horizons, play such an important role. This should emerge from the microscopic description. \item Finally, we need to obtain the \textit{explicit} form of the heat density in \eq{hk2} in a natural manner from the microscopic description. \end{itemize} These might appear to be fairly formidable tasks, but I will show that it is possible to come up with a microscopic description that satisfies all of these criteria! It turns out that $\sigma$, as well as the combination $R^{ab} \nabla_a\sigma \nabla_b\sigma$, has a very natural interpretation, which I will now describe. To do this, I want to introduce an alternate way of describing the standard Riemannian geometry using what is known \cite{E5,E8,E9,E10} as Synge's world function $\sigma^2(x,x')$, instead of the metric tensor $g_{ab}(x)$. The world function $\sigma^2(x,x')$ is defined as the geodesic interval between any two events $x$ and $x'$, which are sufficiently close so that a unique geodesic exists. Since the knowledge of all geodesic distances (locally) is equivalent to the knowledge of the metric, anything one can do with the metric tensor can be done using $\sigma^2(x,x')$. The information contained in the ten functions $g_{ab}(x)$, which depends on the choice of the coordinate system, is more efficiently encoded in the single biscalar $\sigma^2(x,x')$. (Of course, one could summarize the information of ten functions in a single function only because $\sigma^2$ is nonlocal and depends on \textit{two} events $x$ and $x'$).
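This equivalence between the world function and the metric can be checked explicitly in flat Euclidean space, where $\sigma^2(x,x') = \sum_a (x_a - x'_a)^2$. The following is a minimal symbolic sketch (using Python's sympy; the variable names are ours):

```python
import sympy as sp

# Flat 4D Euclidean space: sigma^2(x, x') = sum_a (x_a - x'_a)^2
x = sp.symbols('x0:4')
xp = sp.symbols('xp0:4')
sigma2 = sum((xi - xpi)**2 for xi, xpi in zip(x, xp))

# (1/2) d_a d_b sigma^2 (both derivatives at x) reproduces the metric
# g_ab = delta_ab; in flat space this holds exactly, with no need for
# a coincidence limit (in curved space, the limit x -> x' is required).
hessian = sp.Matrix(4, 4, lambda a, b: sp.Rational(1, 2) * sp.diff(sigma2, x[a], x[b]))
assert hessian == sp.eye(4)

# Hamilton--Jacobi relation: g^{ab} grad_a sigma^2 grad_b sigma^2 = 4 sigma^2
hj = sum(sp.diff(sigma2, xi)**2 for xi in x)
assert sp.expand(hj - 4 * sigma2) == 0
print("flat-space world-function checks passed")
```

The same two properties, metric recovery and the Hamilton--Jacobi relation, are the ones exploited in curved spacetime below.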
Mathematically, this arises from the expansion: \begin{equation} \frac{1}{2} \nabla_a \nabla_b \sigma^2 = g_{ab} - \frac{\lambda^2}{3} \mathcal{E}_{ab} +\frac{\lambda^3}{12} n^i\nabla_i \mathcal{E}_{ab} + \mathcal{O} (\lambda^4) \label{sigexp} \end{equation} where $\lambda$ is the affine distance along the geodesic connecting $x$ and $x'$, $\mathcal{E}_{ab} \equiv R_{akbj} n^kn^j$ and: \begin{equation} n_a = \frac{1}{2\sqrt{ |\sigma^2|}}\, \nabla_a \sigma^2 = \nabla_a \sigma \end{equation} (The second equality follows from the fact that $\sigma$ satisfies the Hamilton--Jacobi equation, which leads to $g^{ab}\nabla_a \sigma^2\nabla_b \sigma^2 = 4\sigma^2$; for simplicity, we will assume $\sigma^2 >0$ whenever this causes no problems.) \eq{sigexp} shows that the coincidence limit ($x\to x'$) of $(1/2) \nabla_a \nabla_b\sigma^2$ gives the metric tensor $g_{ab}$. Given the geodesic distance $\sigma^2(x,x')$, we can obtain $g_{ab}$ at any event and, hence, can calculate any other geometrical quantity. Therefore, all of gravitational dynamics can be done, in principle, with $\sigma^2(x,x')$ instead of with the metric. The expansion in \eq{sigexp} shows that the second order term contains the combination $\mathcal{E}_{ab}$, the trace of which is given by: \begin{equation} \mathcal{E} = R^{ab}n_an_b = R^{ab} \nabla_a \sigma \nabla_b \sigma \end{equation} This has an algebraic structure identical to the heat density in \eq{hk2}! This suggests that if we work with $\sigma^2(x,x')$ (rather than with the metric), then some natural variables in the microscopic theory could possibly be related to the heat density in \eq{hk2}. Let me illustrate how $\mathcal{E}$ occurs in several geometrical variables in a natural fashion \cite{D8}. To do this, we will switch from the Lorentzian spacetime to Euclidean spacetime around an event $P'$, so that: (i) $\sigma^2 (P',P)$ treated as a function of $P$ (with fixed $P'$) is positive.
(ii) The local Rindler horizon gets mapped to the Euclidean origin, which we take to be $P'$. (iii) The coincidence limit of $P\to P'$, approaching the origin, corresponds to approaching the local Rindler horizon in the original spacetime. (The coincidence limit $\sigma^2 \to 0$ corresponds to all of the events $P$ in the original spacetime connected to the origin $P'$ by a null ray.) In the Euclidean spacetime, it is convenient to introduce the notion of an equi-geodesic surface that corresponds to all events at the same geodesic distance from the origin \cite{D1,D4,D5,D6}. To describe such a surface, it is convenient to work with a natural coordinate system $(\sigma, \theta_1, \theta_2, \theta_3)$ where $\sigma$ (the geodesic distance from the origin) is the ``radial'' coordinate and $\theta_\alpha$ are the angular coordinates on the equi-geodesic surfaces corresponding to $\sigma =$ constant \cite{D7}. The metric can then be reduced to the form: \begin{equation} ds^2_E = d\sigma^2 + h_{\alpha\beta} dx^\alpha dx^\beta \label{sync} \end{equation} where $h_{\alpha\beta} $ is the induced metric on the equi-geodesic surface with $\sigma =$ constant.\footnote{This is the analogue of the synchronous frame in Minkowski spacetime, with $x^\alpha$ chosen to be angular coordinates.} The most primitive quantities one can introduce in such a spacetime are the volume element $\sqrt{g}\, d^4x$ and the area element of the equi-geodesic surface, $\sqrt{h}\, d^3x$. For the metric in \eq{sync}, we, of course, have $\sqrt{g} = \sqrt{h}$, and hence, both the volume and area measures are identical. It is possible to show \cite{D8} that in the limit of $\sigma \to 0$, this measure is given by: \begin{align} \sqrt{h}= \sqrt{g}=\sigma^3\left(1-\frac{1}{6}\mathcal{E}\sigma ^{2}\right)\sqrt{h_\Omega} \label{gh} \end{align} where $\sqrt{h_\Omega}$ arises from the standard metric determinant of the angular part of a unit sphere.
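This expansion can be verified on a maximally symmetric example. On a Euclidean 4-sphere of radius $a$, the equi-geodesic surfaces have the exact area measure $\sqrt{h} = a^3\sin^3(\sigma/a)\sqrt{h_\Omega}$, and $R_{ab} = (3/a^2)g_{ab}$ gives $\mathcal{E} = 3/a^2$ for the unit normal. A short symbolic sketch (sympy; the choice of the sphere as test case is ours):

```python
import sympy as sp

sigma, a = sp.symbols('sigma a', positive=True)

# Exact area measure (per unit sqrt(h_Omega)) of an equi-geodesic surface on a
# Euclidean 4-sphere: ds^2 = d sigma^2 + a^2 sin^2(sigma/a) dOmega_3^2
exact = a**3 * sp.sin(sigma / a)**3

# For S^4, R_ab = (3/a^2) g_ab, so E = R_ab n^a n^b = 3/a^2 for the unit normal
E = 3 / a**2
claimed = sigma**3 * (1 - sp.Rational(1, 6) * E * sigma**2)

# The exact result agrees with the curvature expansion through O(sigma^5);
# the first discrepancy appears only at O(sigma^7)
series = sp.series(exact, sigma, 0, 7).removeO()
assert sp.expand(series - claimed) == 0
print("equi-geodesic area expansion verified on S^4")
```

The same check goes through, with the appropriate $\mathcal{E}$, for any maximally symmetric space.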
This is the simplest example of the appearance of $\mathcal{E}$ in a primitive geometrical variable. It gives the correction to the area of (or the volume enclosed by) an equi-geodesic surface. This is a very standard result in differential geometry and is often mentioned as a measure of curvature around any event. It seems natural to assume that the number of atoms of spacetime (\textit{i.e.}, the microscopic degrees of freedom, contributing to the heat density) at $P$ should be proportional to either the area or volume ``associated with'' the event $P$. This is because we would expect the number of atoms of spacetime to scale with either area or volume. (Based on the earlier result $N_{sur}=A/L_P^2 = N_{\rm bulk}$ in equipartition, we would expect a scaling with $\sqrt{h}$, which is the ``area'' element of $\sigma=$ constant surface, but it is important to \textit{derive} this and understand why volume scaling does not arise in the microscopic description). To give precise meaning to the phrase, area or volume ``associated with'' the event $P$, we can proceed as follows: (i) we construct an equi-geodesic surface $S$ centered on $P$ with ``radius'' $\sigma$; (ii) we compute the volume enclosed by $S$ and the surface area of $S$; and (iii) we take the limit of $\sigma\to0$ to determine the area or volume associated with $P$. However, as we can see from \eq{gh}, these measures identically vanish in the limit of $P\to P'$, which corresponds to $\sigma \to 0$. Therefore, while the required combination $\mathcal{E} = R^{ab}\nabla_a\sigma \nabla_b\sigma$ does exist in the volume and area measures, it does not contribute in the appropriate limit. A little thought shows that this is certainly to be expected. As we saw from the macroscopic approach, the entropy requires a quantum of area $L_P^2$ for its proper description. 
Classical differential geometry, which is what we have used so far, knows nothing about a quantum of area and, hence, cannot give us the correct heat density. To obtain the heat density from the above considerations, we need to ask how the geodesic interval gets modified in a quantum description of spacetime and whether such a modified description will have a $\sqrt{h}$ (or $\sqrt{g}$) leading to the correct heat density. The last miracle I will describe is how this comes about. \section{The Renormalized Spacetime}\label{sec:kineticsast} The essential idea was to recognize that a primary effect of quantum gravity will be to introduce into the spacetime a zero-point length \cite{D2a,D2b,D2c,D2d,D2e,D2f}, by modifying the geodesic interval $\sigma^2(x,x')$ between any two events $x$ and $x'$ (in a Euclidean spacetime) to a form like $\sigma^2 \to \sigma^2 + L_0^2$ where $L_0$ is a length scale of the order of the Planck length. More generally, such a modification can take the form of $\sigma^2 \to S(\sigma^2)$, where the function $S(\sigma^2)$ satisfies the constraint $S(0) = L_0^2$ with $S'(0)$ finite. (Our results are happily insensitive to the explicit functional form of such $S(\sigma^2)$; so, for the sake of explicit illustration, we will use $S(\sigma^2) = \sigma^2 + L_0^2$.) The theoretical evidence for the existence of such a zero point length is described in several previous works \cite{D2a,D2b,D2c,D2d,D2e,D2f} and will not be repeated here. While we may not know how quantum gravity modifies the classical metric, we do have an indirect handle on it if quantum gravity introduces a zero point length to the spacetime in the manner described above. Since the original $\sigma^2$ can be obtained from the original metric $g_{ab}$ (and \textit{vice versa}), it would be nice if we could obtain the quantum gravity-corrected geodesic interval $S(\sigma^2)$ from a corresponding quantum gravity-corrected metric \cite{D1}, which we will call the q-metric $q_{ab}$.
Obviously, no such local, non-singular $q_{ab}$ can exist because, for any such $q_{ab}$, the resulting geodesic interval will vanish in the coincidence limit, almost by definition. Therefore, we expect $q_{ab}(x,x')$ to be a bitensor, which is singular at all events in the coincidence~limit. One can now determine \cite{D4,D5} the form of such a $q_{ab}(x,x')$ for a given $g_{ab}(x)$ by using two~conditions: (i) It should lead to a geodesic interval $S(\sigma^2)$ with a zero point length; and (ii) The Green function describing small metric perturbations should have a non-singular coincidence limit. It can be shown \cite{D5} that these conditions determine $q_{ab}$ uniquely in terms of $g_{ab}$ (and its associated geodesic interval $\sigma^2$). We get: \begin{align} q_{ab}=Ah_{ab}+ B n_{a}n_{b};\qquad q^{ab}=\frac{1}{A}h^{ab}+\frac{1}{B}n^{a}n^{b} \label{qab} \end{align} where $D$ is the dimension of spacetime, $D_k$ is a shorthand for $D-k$ and: \begin{align} B=\frac{\sigma ^{2}}{\sigma ^{2}+L_{0}^{2}};\qquad A=\left(\frac{\Delta}{\Delta _{S}}\right)^{2/D_{1}}\frac{\sigma ^{2}+L_{0}^{2}}{\sigma ^{2}};\qquad n_a=\nabla_a\sigma \label{defns} \end{align} and $\Delta$ is the Van Vleck determinant related to the geodesic interval $\sigma^2 $ by: \begin{align} \Delta (x,x')=\frac{1}{\sqrt{g(x)g(x')}}\textrm{det}\left\lbrace \frac{1}{2}\nabla _{a}^{x}\nabla _{b}^{x'}\sigma ^{2}(x,x') \right\rbrace \end{align} The $\Delta_S$ is the corresponding quantity computed with $\sigma^{2}$ replaced by $S(\sigma^{2})$ in the above definition. Before proceeding further, I want to introduce the notion of a renormalized (`dressed') spacetime \cite{paperD} and interpret $q_{ab}$ as the renormalized spacetime metric, which incorporates some of the non-perturbative effects of quantum gravity at Planck scales. While this is not essential for what follows, it provides a possible backdrop for understanding the origin of $q_{ab}$.
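As an elementary consistency check on \eq{qab}, one can verify that the two expressions are indeed inverses of each other, using $h_{ab} = g_{ab} - n_a n_b$ for the unit normal $n_a$. A small symbolic sketch (sympy), in a synchronous-type frame where the diagonal entries $f_i$ stand for an arbitrary angular metric (our illustrative choice):

```python
import sympy as sp

A, B = sp.symbols('A B', positive=True)
f1, f2, f3 = sp.symbols('f1 f2 f3', positive=True)

# Background metric in a synchronous-type frame, g = diag(1, f1, f2, f3),
# with unit normal n_a = (1, 0, 0, 0) and induced metric h_ab = g_ab - n_a n_b
g = sp.diag(1, f1, f2, f3)
n = sp.Matrix([1, 0, 0, 0])
h_lower = g - n * n.T

# q_ab = A h_ab + B n_a n_b, and the claimed inverse (1/A) h^{ab} + (1/B) n^a n^b
q_lower = A * h_lower + B * n * n.T
h_upper = g.inv() - n * n.T          # indices raised with g; here n^a = (1, 0, 0, 0)
q_upper = h_upper / A + (n * n.T) / B

# Their product is the identity, confirming the inverse structure of Eq. (qab)
assert (q_lower * q_upper).applyfunc(sp.simplify) == sp.eye(4)
print("q-metric inverse structure verified")
```

The check works because $h_{ab}$ and $n_an_b$ project onto orthogonal subspaces, so each factor of $A$ and $B$ cancels independently.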
An important effect of the interactions in quantum field theory is to replace the bare variables in a Lagrangian by physical variables, which incorporate (some) effects of the interactions. We know that, in general, \textit{such a renormalization changes not only the constants, which appear in the Lagrangian, but also the field variables.} For example, consider the usual $\lambda \phi^4$ theory of a scalar field in $D=4$, described by a Lagrangian $L(\phi_B;m_B,\lambda_B)$ in terms of the bare variables. The perturbation theory (carried up to the two-loop level) tells us that we need to renormalize not only $\lambda_B$ and $m_B$ to their physical values $\lambda$ and $m$, but also change the bare field $\phi_B$ to the physical field $\phi$ if the theory is to make sense. A similar effect arises in QED as well, where field renormalization is required. Though these results are usually obtained in perturbation theory, the requirement of renormalization by itself is a non-perturbative feature. In the Wilsonian interpretation of the field theory, integrating out the high energy modes will lead to the renormalization of the low energy effective Lagrangian, which is a feature that transcends perturbation~theory. It seems, therefore, natural to assume that a similar effect will arise in the case of gravity, as well. The bare Lagrangian for gravity, $L(g_{ab}^B, G_B, \Lambda_B) \propto G_B^{-1}[R(g_{ab}^B) - 2\Lambda_B] \sqrt{-g_B}$ should be interpreted as being expressed in terms of \textit{not only} the bare coupling constants ($G_B$ and $\Lambda_B$), \textit{but also} the bare metric tensor $g_{ab}^B$. We would then expect quantum gravitational processes at the Planck scale to replace ($g^B_{ab},G_B,\Lambda_B$) by their renormalized, physical, counterparts ($g^R_{ab}, G, \Lambda$). We can then compute all other renormalized geometrical variables (e.g., the curvature tensor) by using the $g^R_{ab}$ in the place of $g^B_{ab}$ in the relevant expressions.
This procedure is necessarily approximate, compared to a fully rigorous non-perturbative quantum gravitational approach, which we do not have, but will surely capture some of the effects at the intermediate (``mesoscopic'') scales between the Planck scale and the long wavelength limit at which the classical metric is adequate. Of course, we cannot use perturbation techniques to directly compute $g^R_{ab}$ for a given classical geometry described by a $g_{ab}$, and we would expect $g^R_{ab}$ to be non-local and singular at any given event. (We will drop the superscript $B$ in $g_{ab}^B$ hereafter.) However, since the same quantum gravity effects that replace $g_{ab}$ by $q_{ab}$ are expected to replace $\sigma^2$ by $S(\sigma^2)$, we can identify $g^R_{ab}=q_{ab}$ in \eq{qab}. Therefore, we have an indirect way of determining the renormalized spacetime $g^R_{ab}=q_{ab}$ by this procedure. Let us get back to $q_{ab}$. As shown in previous work \cite{D1,D6}, the q-metric has several interesting properties, which I will now list: (1) The $q_{ab}(x,x')$, unlike $g_{ab}(x)$, is a bitensor depending on \textit{two} events through $\sigma^2(x,x')$. As we said before, this non-locality is essential if spacetime has to acquire a zero-point length. Any local, nonsingular metric will lead to a $\sigma^2(x,x')$, which vanishes in the limit of $x\to x'$. (2) The $q_{ab}$ reduces to the background metric $g_{ab}$ in the limit of $L_0^2 \to 0$, as it should. In the opposite limit of $(\sigma^2 / L_0^2) \to 0$, the $q_{ab}$ is singular, which is again natural if we interpret $q_{ab}$ as the metric of the renormalized spacetime; it is not expected to be well defined at any localized event and will require some kind of smearing over Planck scales. (3) When $g_{ab}=\delta_{ab}$, the $q_{ab}$ is also locally flat in the sense that there exists a coordinate transformation, which will reduce $q_{ab} dx^a dx^b$ to $\delta_{ab} dx^a dx^b$ in the synchronous frame.
(This is, however, rather subtle because the coordinate transformation removes a region of size $L_P$ from the spacetime around \textit{all} events.) (4) Let $\Phi[g_{ab}(x)]$ be some scalar or scalar density (like, for example, the Ricci scalar $R[g_{ab}(x)]$) constructed from the background metric and its derivatives. We can compute the corresponding (bi)scalar $\Phi[q_{ab}(x,x');L_0^2]$ for the renormalized spacetime by replacing $g_{ab}$ by $q_{ab}$ in $\Phi[g_{ab}(x)]$ and evaluating all of the derivatives at $x$ keeping $x'$ fixed. The renormalized value of $\Phi[q_{ab}(x,x');L_0^2]$ is obtained by taking the limit $x\to x'$ in this expression keeping $L_0^2$ non-zero. Several useful scalars like $R$, $K$, \textit{etc.}, remain finite~\cite{D1,D5,D6} and local in this limit, even though the q-metric itself is singular when $x\to x'$ with non-zero $L_0^2$. The algebraic reason for this result~\cite{D1} is that the following two limits do not commute: \begin{equation} \lim_{L_0^2\to 0}\, \lim_{x\to x'} \Phi[q_{ab}(x,x');L_0^2]\neq \lim_{x\to x'}\,\lim_{L_0^2\to 0} \Phi[q_{ab}(x,x');L_0^2] \end{equation} All of the computations involving the $q_{ab}$ are most easily performed \cite{D7} by choosing a synchronous frame for the background metric, given in \eq{sync}, which can always be done in a local region. \section{A Point Has Zero Volume but Finite Area!}\label{sec:eventarea} We will now re-evaluate the area element of an equi-geodesic surface and the volume element for the region enclosed by it using the renormalized q-metric. This will involve $\sqrt{q} \ d^4x$ and $\sqrt{h}\, d^3 x$ as the respective integration measures, where $h$ now stands for the determinant of the induced metric on the equi-geodesic surface, corresponding to $q_{ab}$. (For the q-metric in \eq{qab}, calculated for the $g_{ab}$ in \eq{sync}, these two measures will not be equal, because $q_{00} \neq 1$.) 
If our ideas are correct, $\sqrt{h}$ should lead to the correct density of the atoms of spacetime in the coincidence limit. Further, there must be a valid mathematical reason to prefer the area element $\sqrt{h}$ over the volume element $\sqrt{q}$. I will show that these hopes are indeed realized! It is straightforward to compute these quantities using the q-metric, and we find that (with \mbox{$S(\sigma^2)=\sigma^2+L_0^2$} chosen for illustration, though the final results \cite{D7} hold in the more general case, as well as in $D$ dimensions): \begin{align} \sqrt{q}=\sigma \left(\sigma ^{2}+L_{0}^{2}\right)\left[1-\frac{1}{6}\mathcal{E}\left(\sigma ^{2}+L_{0}^{2}\right)\right]\sqrt{h_\Omega} \label{qfinal} \end{align} and:\footnote{ This result is algebraically subtle. One might think that the expression in \eq{hfinal} (which is actually $\sqrt{h}=A^{3/2}\sqrt{g}$) might arise from the standard result in differential geometry, \eq{gh}, by the replacement $\sigma^2\to(\sigma^2+L_{0}^{2})$. However, note that this trick does \textit{not} work for the expression in \eq{qfinal} (which is $\sqrt{q}=\sqrt{B}A^{3/2}\sqrt{g}$) due to the $\sqrt{B}=\sigma(\sigma ^{2}+L_{0}^{2})^{-1/2}$ factor that has the limiting form $\sqrt{B}\approx\sigma/L_{0}$ when $\sigma\to0$. This is the key reason why the event has zero volume, but finite area. A possible insight into this, rather intriguing, feature is provided by the following fact: The leading order dependence of $\sqrt{q}d\sigma\approx\sigma d\sigma$ makes the volumes scale as $\sigma^2$ (while the area measure is finite). This, in turn, is related to the fact \cite{paperD} that \textit{the effective dimension of the renormalized spacetime becomes $D=2$ close to Planck scales}, a result that has been noticed by several \mbox{people~\cite{z1,z2,z3,z4}} in different, but specific, models of quantum gravity. 
Our approach seems to give this result in a \textit{model-independent} manner, which, in turn, is the root cause of the result that events have zero volume, but finite area.} \begin{align} \sqrt{h}=\left(\sigma ^{2}+L_{0}^{2}\right)^{3/2}\left[1-\frac{1}{6}\mathcal{E}\left(\sigma ^{2}+L_{0}^{2}\right)\right]\sqrt{h_\Omega} \label{hfinal} \end{align} When $L_{0}^{2}\to0$, we recover the result in \eq{gh}, as we should. However, as explained in Item~(4) above, our interest is in the limit $\sigma^2\to0$ at finite $L_P$. Something remarkable happens when we do this. The volume measure $\sqrt{q}$ vanishes (just as in the case of the original metric), showing that it cannot lead to anything nontrivial. The zero point length does not lead to a residual volume measure. However, in the limit of $\sigma^2 \to 0$, we find that $\sqrt{h}$ has a non-zero limit! It is given~by: \begin{align} \sqrt{h}= L_{0}^{3}\left[1-\frac{1}{6}\mathcal{E}L_{0}^{2}\right]\sqrt{h_\Omega} \label{hlimit} \end{align} As the title of this section indicates, the q-metric (which we interpret as representing the renormalized spacetime) attributes to every point in the spacetime a finite area measure, but a zero volume measure! Since $L_0^3\sqrt{h_\Omega}$ is the volume measure of the $\sigma=L_0$ surface, the dimensionless density of the atoms of spacetime, contributing to the gravitational heat, can be taken to be: \begin{equation} f(x^i,n_a)\equiv \frac{\sqrt{h}}{L_0^3\sqrt{h_\Omega}} =1-\frac{1}{6}\mathcal{E}L_{0}^{2} =1-\frac{1}{6} L_{0}^{2} R_{ab}n^an^b \label{denast} \end{equation} How can we interpret this expression for the ``number of atoms of spacetime''? Our intention all along has been to define the analogue of a distribution function $f(x^i,n_a)$ that gives the number of atoms of spacetime at \textit{a given event}. We expected $f(x^i,n_a)$ to depend on an auxiliary vector field $n_a$, as well as on the location $x^i$.
Just as in the usual kinetic theory, we no longer think of this location as a mathematical point, but imagine a region that contains a sufficiently large number of atoms of spacetime to make the description in terms of $f(x^i,n_a)$ valid. (To think of spacetime being filled with atoms is no stranger than thinking of matter being filled with atoms; both descriptions work at scales larger than the inter-atomic spacing, but recognize the existence of discrete structures.) The dependence on $x^i$ can have a universal part (which could exist even in the flat spacetime limit), as well as a part that depends on (what we call in macroscopic physics) the spacetime curvature. Since we want $f(x^i,n_a)$ to arise from the most basic of the geometrical properties of the space, it seems reasonable to explore areas and volumes. We know from classical differential geometry that areas and volumes of a region of size $r$ do have a flat space contribution, which is corrected by curvature-dependent terms. However, now, we want the area $(\sqrt{h}d^3x)$ and volume $(\sqrt{g} d^4x)$ measures to be defined \textit{at a point}, which will require taking the limit $r\to 0$. In a classical spacetime, both the measures vanish in this limit, as to be expected. When we consider the renormalized spacetime incorporating a zero point length, one might have naively expected \textit{both} of them to be finite at a given event. Remarkably enough, the volume measure ($\sqrt{q} d^4x$) still vanishes when the region collapses to a point, but the area measure does not.\footnote{One likes to think of the number of atoms per unit \textit{spatial} volume, rather than unit \textit{spacetime} volume, whether it is atoms of a gas or a spacetime; this is what we get from $\sqrt{h}\, d^3x$.} Briefly stated, quantum gravity endows each event in spacetime with a finite area, but zero volume. It is this area measure that we compute to obtain a natural estimate for $f(x^i,n_a)$. 
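The limiting behaviour described above can be read off directly from \eq{qfinal} and \eq{hfinal}; the following is a short symbolic sketch (sympy) that also illustrates the effective $D=2$ scaling of the enclosed volume mentioned in the footnote:

```python
import sympy as sp

sigma, L0, E, eps = sp.symbols('sigma L_0 E epsilon', positive=True)

S = sigma**2 + L0**2
sqrt_q = sigma * S * (1 - sp.Rational(1, 6) * E * S)              # volume measure
sqrt_h = S**sp.Rational(3, 2) * (1 - sp.Rational(1, 6) * E * S)   # area measure

# Coincidence limit sigma -> 0: zero volume measure, but a finite area measure
assert sp.limit(sqrt_q, sigma, 0) == 0
assert sp.simplify(sp.limit(sqrt_h, sigma, 0) - L0**3 * (1 - E * L0**2 / 6)) == 0

# L0 -> 0 recovers the classical measure sigma^3 (1 - E sigma^2 / 6)
assert sp.expand(sqrt_h.subs(L0, 0) - sigma**3 * (1 - E * sigma**2 / 6)) == 0

# Dimensional reduction: for small epsilon (flat case, E = 0), the enclosed
# volume integral is dominated by the epsilon^2 term, not epsilon^4
vol = sp.integrate(sqrt_q.subs(E, 0), (sigma, 0, eps))
assert sp.expand(vol - (eps**4 / 4 + L0**2 * eps**2 / 2)) == 0
print("zero-volume / finite-area checks passed")
```

The leading $L_0^2\epsilon^2/2$ behaviour of the enclosed volume is the symbolic counterpart of the effective $D=2$ scaling near Planck scales.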
The desirable, but intriguing feature of this result is that a vector field $n_a = \nabla_a \sigma$ has survived in the final expression. At any given event (to which the coincidence limit has been taken), this vector field can point in all directions, because the geodesics emanating from that event can point in all directions. Therefore, the function $f(x^i,n_a)$ depends on the choice of the vector field $n_a$ at a given event. This is, again, very reminiscent of the distribution function $f(x^i, p^j)$ for a bunch of relativistic particles, which gives the number of particles at an event $x^i$ with momentum $p^j$. As I have emphasized earlier, the coexistence of several particles, with different momenta, at a given event is the characteristic feature of the description in physical kinetics. This assumes that one can consider a volume $d^3 \mathbf{x}$ that is small enough to be treated as infinitesimal, but large enough to contain several particles. In the same spirit, we should think of $f(x^i,n_a)$ as the number of atoms of spacetime, or less figuratively, the number of microscopic degrees of freedom, at an event $x^i$ with an extra attribute $n_a$, which is analogous to the momentum that appears in the distribution function in physical kinetics.\footnote{Incidentally, a field redefinition $g^{ab} \to g^{ab} - (L_0^2/6) R^{ab}$ in $g^{ab} \nabla_a \sigma\nabla_b \sigma$ will lead to \eq{denast}; similar field redefinitions have been used (see, e.g., \cite{tpr2}) in quantum gravity, but the connection with our approach is unclear.} \textit{It is also easy to see how null surfaces and null vectors are singled out in this approach.} This is because the coincidence limit $P'\to P$ in the Euclidean sector (with the event $P$ taken to be the origin) corresponds to approaching the null horizon in the Minkowski sector. In all calculations, we will eventually take the limit $\sigma^2 \to 0$ in the Euclidean sector. 
However, this limit, $\sigma^2 \to 0$, will translate into a null surface in the Minkowski spacetime.\footnote{The local Rindler observers who live on the hyperboloid $r^2-t^2=\sigma^2$ see the null cone $r^2-t^2=0$ as the horizon. In the Euclidean sector, the hyperboloid becomes the sphere $r^2+t_E^2=\sigma_E^2$, and approaching the Euclidean origin, $\sigma_E\to0$, translates to approaching the light cone in the Minkowski space.} The normal vector $n_i = \nabla_i\sigma$ (which occurs in the q-metric and all of the resulting constructs) will pick out the null vector, which is the normal to the null surface. More generally, $\sigma^2 (x,x') \to 0$ selects out events that are connected by a null geodesic, and hence, $n_a$ will correspond to a null vector in the Minkowski spacetime. This is how a null vector field $n_i$ is introduced in the description from a microscopic point of view. It is also understandable that we should extremize the expressions with respect to this variable, which is, in some sense, the relic from quantum gravity. In fact, the extremum condition is equivalent to demanding that $Q_{g}$ should not depend on this arbitrary vector field $n_a$, which is another way of interpreting the variational principle. Let us complete the analysis by connecting up with the macroscopic limit. The contribution to the gravitational heat in any volume is obtained by integrating $f(x^i,n_j)$ over the volume. 
Therefore, the expression for the heating rate, in dimensionless form (corresponding to \eq{qrate}), is given by: \begin{equation} L_P^2\frac{dQ_{g}}{d\lambda}=\int\frac{\sqrt{\gamma}d^2x}{L_P^2}f(x^i,n_j) = \int\frac{\sqrt{\gamma}d^2x}{L_P^2}\left[1-\frac{1}{6}L_{0}^{2}(R_{ab}n^an^b)\right] \label{corres} \end{equation} which gives the correct expression in \eq{qrate}, with the crucial minus sign, plus a constant\footnote{If we had used, say, $\mu L_P$, rather than $L_P$ in \eq{corres} to obtain the dimensionless result here (and retained $L_P$ in \eq{qrate}), the constant term will become $\mu^{-4}$, and we will get $L_0^2=(3/4\pi)\mu^4 L_P^2$; we choose $\mu=1$ to get the unit degree of freedom as the constant term. It is also possible to add a proportionality constant in \eq{denast}, which we have set to unity.} if we set $L_0^2=(3/4\pi)L_P^2$. Thus, the consistency of the macroscopic and microscopic descriptions also allows us to determine the value of the zero point length in terms of $L_P$ (which we know observationally from the Newtonian limit). \textit{Therefore, one can indeed interpret the gravitational heat density from the area measure of the renormalized~spacetime.} While the second term in \eq{corres} gives what we want for the variational principle, the first term is important for two reasons: \begin{itemize} \item It tells us that there is a zero-point contribution to the degrees of freedom in spacetime, which, in dimensionless form, is just unity. Therefore, it makes sense to ascribe $A/L_P^2$ degrees of freedom to an area $A$, which is consistent with what we saw in the macroscopic description. \item The result tells us that a two sphere of radius $L_P$ has $4\pi L_P^2/L_P^2=4\pi$ degrees of freedom. This was the crucial input that was used in a previous work to determine the numerical value of the {cosmological\ constant}\ for our universe.
Thus, the microscopic description does allow us to determine \cite{C8,C9} the value of the {cosmological\ constant}, which appeared as an integration~constant. \end{itemize} Let me elaborate a bit on the last point, since it can provide a solution to what is usually considered the most challenging problem of theoretical physics today. Observations indicate that our universe is characterized by three distinct phases: (i) an inflationary phase with approximately constant density $\rho_{inf}$; (ii) a phase dominated by radiation and matter, with $\rho=\rho_{eq}[x^{-4}+x^{-3}]$, where $x(t)\equiv a(t)/a_{\rm eq}$, $\rho_{eq}$ is a (second) constant, and $a_{\rm eq}$ is the epoch at which the matter and radiation densities were equal; and (iii) an accelerated phase of expansion at late times driven by the energy density $\rho_\Lambda$ of the cosmological constant. Values of these three constants $[\rho_{inf},\rho_{eq},\rho_\Lambda]$ will completely specify the dynamics of our universe. Standard high energy physics can, in principle, determine $\rho_{inf}$ and $\rho_{eq}$, but we need a new principle to fix the value of $\rho_\Lambda$, which is related to the integration constant that appears in our approach to field equations. It turns out that such a universe with these three phases has a new \textit{conserved} quantity, \textit{viz}. the number $N$ of length scales, which cross the Hubble radius during any of these phases \cite{C8,C9}. Any physical principle that determines the value of $N$ during the radiation-matter dominated phase, say, will determine $\rho_\Lambda$ in terms of $[\rho_{inf},\rho_{eq}]$. The emergent paradigm tells us that the value of this conserved quantity $N$ can be fixed at the Planck scale as the degrees of freedom in a two-sphere of radius $L_P$, giving $N=4\pi L_P^2/L_P^2 = 4\pi$.
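The resulting prediction, $\rho_\Lambda \approx (4/27)\,\rho_{\rm inf}^{3/2}\,\rho_{\rm eq}^{-1/2}\,\exp(-36\pi^2)$ (quoted in the next equation), is easy to evaluate numerically. The following is a rough sketch; the Planck energy and the representative input scales are assumed fiducial values, not the precise numbers used in the cited works:

```python
import math

# Assumed fiducial inputs (natural units, hbar = c = 1; all energies in eV)
E_planck = 1.22e28            # Planck energy ~ 1.22e19 GeV, so L_P ~ 1 / E_planck
rho_eq = 0.86**4              # rho_eq^{1/4} ~ 0.86 eV
rho_inf = (1.1e24)**4         # inflation scale ~ 1.1e15 GeV

# rho_Lambda ~ (4/27) rho_inf^{3/2} rho_eq^{-1/2} exp(-36 pi^2)
rho_lambda = (4 / 27) * rho_inf**1.5 / math.sqrt(rho_eq) * math.exp(-36 * math.pi**2)

# Dimensionless combination rho_Lambda L_P^4 = rho_Lambda / E_planck^4
result = rho_lambda / E_planck**4
print(f"rho_Lambda * L_P^4 ~ {result:.1e}")   # of order 1e-123
```

The huge suppression comes almost entirely from the $\exp(-36\pi^2) \sim 10^{-154}$ factor, which is what makes the observed magnitude of the {cosmological\ constant}\ natural in this approach.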
This, in turn, leads to the remarkable prediction relating the three \mbox{densities~\cite{C8,C9}:} \begin{equation} \rho_\Lambda \approx \frac{4}{27} \frac{\rho_{\rm inf}^{3/2}}{\rho_{\rm eq}^{1/2}} \exp (- 36\pi^2) \label{ll6} \end{equation} From cosmological observations, we find that $\rho_{eq}^{1/4} = (0.86 \pm 0.09)\ \text{eV}$; if we take the range of the inflationary energy scale as $\rho_{\rm inf}^{1/4} = (1.084-1.241) \times 10^{15}$ GeV, we get $\rho_{\Lambda} L_P^4 = (1.204 - 1.500) \times 10^{-123}$, which is consistent with observations! This novel approach for solving the cosmological constant problem provides a unified view of cosmic evolution, connecting all three phases through \eq{ll6}; this is to be contrasted with standard cosmology in which the three phases are put together in an unrelated, \textit{ad hoc} manner. Further, this approach to the cosmological constant problem \textit{makes a falsifiable prediction}, unlike any other approach I know of. From the observed values of $\rho_\Lambda$ and $\rho_{\rm eq}$, we can constrain the energy scale of inflation to a very narrow band, to within a factor of about five, if we consider the ambiguities in re-heating. If future observations show that inflation took place at energy scales outside the band of $(1-5)\times 10^{15}$ GeV, this model for explaining the value of the {cosmological\ constant}\ is ruled out. \section{Discussion and Outlook}\label{sec:summary} The paradigm described here has two logically distinct parts. The first part (Sections~\ref{sec:gravemerge}--\ref{sec:geotherm}) is mathematically rigorous and paints an alternative picture about the nature of gravity. It is based on the desire to have a strong physical principle to describe the dynamics of gravity, \textit{viz}. that the field equations should be invariant under the shift $T^a_b \to T^a_b + ({\rm constant})\ \delta^a_b$.
This principle is powerful enough to rule out the metric as a dynamical variable and suggests that any variational principle that we use should depend on the matter sector through the combination $T^a_b \ell_a\ell^b$ where $\ell^a$ is a null vector. This combination is interpreted by the local Rindler observers as the heat density contributed to a null surface by the matter crossing it. This, in turn, suggests looking for a corresponding heat density $\mathcal{H}_g$ contributed by gravity, such that extremizing the total heat density will lead to the relevant field equations. As we saw, it is indeed possible to construct such a thermodynamic variational principle \textit{not only} for general relativity, \textit{but also} for all Lanczos-Lovelock\ models. The construction is based on the tensor $P^{ab}_{cd}$, which determines the entropy of horizons in the appropriate theory. Thus, one has a completely self-consistent thermodynamic variational principle for a large class of gravitational theories. This approach also suggests that the standard geometrical variables must have a thermodynamic interpretation, and we should be able to recast the field equations themselves into a thermodynamic language. We illustrated these features in Section~\ref{sec:geotherm}. One finds that the time evolution of the spacetime metric is driven by the difference ($N_{\rm sur} - N_{\rm bulk}$) between the appropriately-defined surface and bulk degrees of freedom. Static spacetimes obey holographic equipartition in which $N_{\rm sur} = N_{\rm bulk}$, thereby leading to the equality of the number of degrees of freedom in the surface and bulk. All of these ideas work both on a spacelike surface and on a null surface. In the case of the latter, the field equations can also be re-written as a Navier--Stokes equation, which is probably the most direct connection between gravity and fluid dynamics. 
Further, just as in the case of normal matter, the equipartition condition allows us to identify the number density of microscopic degrees of freedom. We found that there are $A/L_P^2$ degrees of freedom, which can be associated with an area $A$. The second part of the review (Sections~\ref{sec:gravheatden} and \ref{sec:kineticsast}) takes this analysis one level deeper. The challenge is to obtain the expression for $\mathcal{H}_g$ from more fundamental considerations. We find that if we switch to the description of the differential geometry in terms of the geodesic interval $\sigma^2(x,x')$ rather than the metric, then the combination $R_{ab}n^an^b$ where $n_a=\nabla_a\sigma$ occurs rather ubiquitously in several geometrical expressions. The most primitive of these are the volume ($\sqrt{g}d^4x$) and area measures ($\sqrt{h}d^3x$) related to an equi-geodesic surface. In classical differential geometry, these measures $\sqrt{g}$ and $\sqrt{h}$ vanish when the equi-geodesic surface shrinks to a point. Therefore, even though the expressions for $\sqrt{g}$ and $\sqrt{h}$ contain the combination $R_{ab}n^an^b$, it does not contribute in the appropriate limit and prevents us from `associating' an area or volume with an event. This is, of course, just an indication that the degrees of freedom of spacetime will arise only when we introduce a quantum of area $L_P^2$. There is a fair amount of evidence that suggests that one of the effects of quantum gravity is to introduce a zero-point length $L_0$ in the spacetime, by modifying $\sigma^2 \to \sigma^2 +L_0^2$. When this idea is developed further, in terms of a renormalized spacetime metric, which we called the q-metric, something remarkable happens. The volume measure corresponding to the renormalized metric still vanishes when the equi-geodesic surface collapses to a point; but the area measure remains finite and contains the correct expression for the heat density when we take $L_0^2 = (3/4\pi)L_P^2$. 
Thus, this approach allows us to count the number density of atoms of spacetime, and, by comparing the result with the macroscopic theory, determines the value of $L_0$. We also have a fundamental reason as to why the area measures are more relevant than the volume measures, a feature that has been repeatedly noticed in the thermodynamics of horizons. The description at this layer is more speculative than in the previous part, but, of course, the rewards are also significantly higher. One can compare this layer of description with the kinetic theory of gases, which recognizes the existence of atoms and yet works at scales where a continuum description is~possible. The central quantity in such a description, in the case of a gas, will be the distribution function $f(x^i,p_j)$, which will give the number of atoms of gas at an event $x^i$ with momentum $p_j$. The corresponding quantity for the spacetime is a function $f(x^i,n_j)$, where $n_j = \nabla_j \sigma$ is the tangent vector to the null geodesic at the event $x^i$. Since several null geodesics can emanate from a given event, this is analogous to the distribution function for a gas, which describes several particles with different momenta coexisting at a single event. In neither case can the spacetime event be truly infinitesimal, and one assumes the existence of some intermediate scales, so that a sufficiently large number of atoms (of either gas or spacetime) can be collectively described by a distribution function. In the case of normal matter, one can think of $f(x^i,p_j)$ as counting the number of (i) atoms, or (ii) microscopic degrees of freedom, or (iii) microstates available to the discrete entities, since they all differ only by a numerical factor. In the case of spacetime, it seems appropriate to think of $f(x^i,n_j)$ as counting the number of microstates of geometry at $x^i$, with an internal degree of freedom described (in some suitable limit) by a null vector $n_j$. 
(The broad picture is somewhat reminiscent of Wheeler's spacetime foam idea \cite{jw}, but it is difficult to make a connection in general with only macroscopic inputs; the few computations based on spacetime foam (e.g.,~\cite{remo}) that exist are model dependent.) There are several open questions that this description raises, and their investigation will prove to be fruitful in taking the ideas further. The most crucial question (which has not been tackled so far in the emergent gravity paradigm) is the role of normal matter, which has been introduced through a conserved energy momentum tensor $T^a_b$. While the macroscopic physics did provide an interpretation of $T^a_b\ell_a\ell^b$, which we used to develop the ideas further, this term lacks a microscopic description at present. In fact, it is rather ironic that, in our approach, we get the gravitational sector as a relic from quantum gravity, but have no quantum or semi-classical description of matter!\footnote{An analogy with a gaseous system is the following: think of a gas, confined to a box with a piston and described by a distribution function $f(\mathbf{x},\mathbf{p})$ giving the number density of atoms. Using $f(\mathbf{x},\mathbf{p})$, one could compute not only the pressure exerted on the piston, but also the fluctuations in the pressure, which acknowledge the existence of atoms in the gas. Even though both the piston and the gas are made of atoms and interact with each other, we are now taking into account the discrete constituents of the gas, but not of the piston. The situation in which we recognize the discrete nature of spacetime, but borrow $T_{ab}$ from the classical theory, is roughly analogous.} This is one issue in which the thermodynamic variational principle lags behind the usual action principle; in the latter, one has a uniform description in terms of the sum of the actions, $A_{\rm grav} + A_{\rm matt}$, and the extremum principle for the action is sanctioned by the quantum theory. 
The thermodynamic variational principles for normal systems, for example, the one for entropy $S(q_A)$, however, do not come from any path integral \textit{amplitude}, but instead from the fact that the probability for a configuration is proportional to $\exp S(q_A)$. This would suggest that the gravitational sector of the variational principle should have a similar probabilistic interpretation. If we interpret $f(x^i,n_j)$ as related to the number of microscopic states available to quantum geometry, then, in a suitable limit, one can introduce a probability $P(x^i,n_a)$ for $n_a$ at each event $x^i$ and the partition function: \begin{equation} e^{S(x^i)}\propto \int\mathcal{D}n_i P(x^i,n_a)\exp[\mu L_P^4 T_{ab}n^an^b] \label{geoz} \end{equation} where $\mu$ is a numerical factor of order unity. If we take: \begin{equation} P(x^i,n_a)\propto\exp [\mu f(x^i,n_a)] \propto \exp\left( - \frac{\mu L_P^2}{8\pi} R_{ab} n^an^b\right) \label{eqnx} \end{equation} then the steepest-descent evaluation of \eq{geoz} will pick out the geometry determined by Einstein's equation with an arbitrary cosmological constant. (Further, the choice $\mu=1/4$ will allow $P$ to be interpreted as the number of microstates.) More generally, one can think of $P(x^i,n_a)$ as being such that it gives the correlator: $ \langle n^an^b\rangle \approx(4\pi/\mu L_P^2) R_{ab}^{-1} $ which facilitates writing the field equations in the form: \begin{equation} 2\mu L_P^4\ \langle \bar T_{ab} n^an^b\rangle\approx 2\mu L_P^4\ \langle \bar T_{ab}\rangle \langle n^an^b\rangle =1 \label{mach} \end{equation} where $\bar T^a_b\equiv T^a_b-(1/2)\delta^a_b T$ and $\langle \cdots \rangle$ now indicates both the expectation value for the quantum operator $\bar T_{ab}$ and a probabilistic averaging of $n^an^b$. Equation~(\ref{mach}) has a clear Machian flavor. We cannot set $\langle T_{ab}\rangle =0$ everywhere and study the resulting spacetime, since it will lead to $0=1$! 
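The steepest-descent step can be made explicit in one line. Combining the exponents of \eq{geoz} and \eq{eqnx}, and adding a multiplier $\lambda(x)$ for the null constraint $g_{ab}n^an^b=0$, the quantity to be extremized over $n^a$ is
\[
\mathcal{E}[n]=\mu\left[L_P^4\, T_{ab} - \frac{L_P^2}{8\pi}\, R_{ab} + \lambda(x)\, g_{ab}\right]n^an^b,
\]
and stationarity for all null $n^a$ requires $R_{ab} = 8\pi L_P^2\, T_{ab} + (8\pi\lambda/L_P^2)\, g_{ab}$. (This is only a heuristic sketch; demanding consistency with the Bianchi identity and $\nabla_a T^a_b = 0$ forces $\lambda$ to be a constant, which appears as the arbitrary cosmological constant mentioned above.)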
\textit{Matter and geometry must emerge and co-exist together, suggesting a new perspective on cosmology.} If \eq{geoz} could be obtained from a systematic approach, we would have a nice way of describing the effect of the source on the geometry. This would also throw light on the avoidance of classical singularities in quantum gravity, which is definitely indicated in any spacetime with a zero-point length. In all such approaches, one would consider $f(x^i,n_a)$ as a fundamental (pre-geometric) object; from this point of view, it would also be interesting to study the evolution equation for $f(x^i,n_a)$ in terms of, say, $n^j\nabla_j f(x^i,n_a)$. The choice of $\sqrt{h}$ as a measure of the density of the atoms of spacetime seems reasonable, but one cannot ignore the fact that many other geometrical variables in the renormalized spacetime have \cite{D6} finite limits containing the combination $R_{ab}n^an^b$, which is, in fact, rather ubiquitous. We have made the most basic choice, but it would be nice if one could explore other possibilities as well. One possibility, for example, is the following: We know that in the local Rindler frame, $A_\perp/4$ is interpreted as entropy. We can compute the corrections to $A_\perp$ due to the curvature in the Euclidean sector, by computing the corresponding quantity over a small circle in the $T_E,X$ plane. (This is not quite an equi-geodesic surface, as we have defined it, but a cross-section of it on the $T_E,X$ plane; however, the idea still works.) Classically, we find that the correction does have the factor $[1-(\sigma^2/6)(R_{ab}n^an^b)]$, where $n^a$ is now confined to the $T_E,X$ plane, which, of course, does not contribute in the $\sigma\to0$ limit. Working out the same with the q-metric (now with $g_{ab}$ corresponding to a Riemann normal coordinate system boosted to a local Rindler frame), we will get the correct result. 
Therefore, one can also interpret the entropy density $(R_{ab}n^an^b)$ as corrections to $A_\perp/4$ in flat spacetime. One can also do a similar exercise \cite{D6} with the integral of the extrinsic curvature $K/8\pi$ over a stretched horizon in local Rindler frame, which we know gives its heat content $\kappa A_\perp/8\pi$ in flat spacetime; but in this case, one needs to make some \textit{ad hoc} choices for the numerical factors to get the result. Such attempts, \textit{viz}. interpreting our extra terms as curvature corrections to the standard expressions for entropy (which works only after adding the zero-point length), are rather unsatisfactory as first-principle approaches. There is another natural geometrical quantity that contains $(R_{ab}n^an^b)$. The expression for $f(x^i,n_a)$ comes from the term in square brackets in \eq{hlimit} which, in turn and rather surprisingly, arises from the ratio of Van Vleck determinants in \eq{defns}, which has the leading order behavior: \begin{equation} \frac{\Delta}{\Delta _{S}}=f(x^i,n_a) =1-\frac{1}{6} L_{0}^{2} R_{ab}n^an^b \end{equation} so one could have used this as an alternative definition for $f(x^i,n_a)$. This might be better for the probabilistic interpretation of $f$ in \eq{eqnx}. Finally, it will be interesting to ask how these ideas generalize to Lanczos-Lovelock\ models (for some related ideas, see \cite{entent}). The renormalization of a Lanczos-Lovelock\ theory will, of course, lead to a different expression for $q_{ab}$, the corresponding $S(\sigma^2)$ and, consequently, a different expression for $\sqrt{h}$. However, for consistency, we know that the final $f(x^i,n_a)$ must be the same with $R_{ab}$ replaced by $\mathcal{R}_{ab}$. It will be interesting to explore whether these notions work out for Lanczos-Lovelock\ models, as well. \section*{Acknowledgments} I thank Sumanta Chakraborty, Sunu Engineer, Dawood Kothawala and Kinjalk Lochan for several discussions and comments on the manuscript. 
My work is partially supported by the JC Bose fellowship of the Department of Science and Technology (DST) of the Government of India.
\section{Introduction} Quantum particles can be treated as wave packets (WPs). The use of neutrino WPs clarified some conceptual confusion \cite{Akhmedov2009} from the use of plane waves to discuss neutrino oscillations and introduced a few oscillation-suppressing terms in the flavor transformation probabilities. For that problem, one-dimensional (1D) consideration suffices as only the longitudinal evolution of neutrino WPs is of concern. Nevertheless, the simplified 1D description is not a full account of WP propagation in the 3D space. According to Heisenberg's uncertainty principle, a particle localized in a finite spatial region has intrinsic momentum uncertainty that causes the WP to spread over time. The longitudinal spreading of a neutrino WP is suppressed by the tiny neutrino mass \cite{Giunti1991} and can be neglected. In contrast, the transverse size of the WP increases with the time of travel $t$ as $(\Delta k_\perp / k_0)t$, where $\Delta k_\perp$ and $k_0$ are the transverse momentum uncertainty and the average momentum of the WP, respectively. Limited by the speed of light, the WP asymptotically evolves into a spherical shape that subtends a constant angle from its initial position, as depicted in Fig.~\ref{fig:WP spread}. \begin{figure}[h] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[scale=0.35]{figure1a.jpg} \vspace{0.3 in} \caption{} \label{fig:WP spread} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[scale=0.45]{figure1b.pdf} \caption{} \label{fig:Overlap criteria} \end{subfigure} \caption{(a) Spreading of a massless neutrino WP with initial position uncertainties $a_l$ and $a_t$. The blue dashed arrow shows the classical path. (b) Criteria for overlap. 
The blue and gray strips represent the 90\%-probability volumes of two WPs.} \label{fig:Illustration of WP spreading and overlap} \end{figure} The transverse spreading potentially allows neutrino WPs emitted in slightly different directions but with approximately the same momentum to overlap. If such overlap occurs, the indistinguishability of neutrinos would require a formal many-particle treatment. Here we describe an approximate approach to quantify the overlap based on the evolution of 3D Gaussian WPs. We apply this approach to accelerator, reactor, solar and supernova neutrinos, and find no overlap for the former two sources and severe overlap for the latter two. However, the overlap has no measurable consequences for practical detection of solar and supernova neutrinos. \section{Estimating Overlap of 3D WPs} As the tiny neutrino mass introduces negligible longitudinal WP spreading, we assume massless neutrinos to focus on the transverse spreading. We consider a Gaussian WP with momenta closely centered around the mean value $\vec{k}_0 = k_0\hat{z}$. An initial WP with the position widths shown in Fig.~\ref{fig:WP spread} can be expressed as \begin{equation} \Psi(\vec{r},0)=\frac{1}{(2\pi)^{3/4}a_ta_l^{1/2}} \exp\left( -\frac{\rho^2}{4a_t^2}-\frac{z^2}{4a_l^2} +ik_0z \right), \end{equation} where $\rho \equiv \sqrt{x^2 + y^2} $. At time $t>0$, the above WP evolves into \cite{Li2016} \begin{equation} \label{eq: Psi in the far-field limit at t>0 approximated result} \Psi(\vec{r},t)\propto \frac{1}{z}\exp\left\{ -\frac{\left[z+\rho^2/(2z)-t\right]^2}{4a_l^2} -\frac{\rho^2}{z^2(k_0a_t)^{-2}} +ik_0\left(z+\frac{\rho^2}{2z}-t\right) \right\}. 
\end{equation} With $r \approx z + \rho^2/(2z)$, the corresponding probability density can be expressed in spherical coordinates as $|\Psi(\vec{r},t)|^2 = R(r,t)\Theta(\theta)$, where \begin{equation} \label{eq: radial and angular probability distribtuions } R(r,t)\propto \frac{1}{r^2} \,\exp\left[ -\frac{(r-t)^2}{2a_l^2} \right],\ \Theta(\theta)\propto \exp\left[ -\frac{\theta^2}{2\cdot\left(2k_0a_t\right)^{-2}} \right]. \end{equation} For estimating the overlap of WPs, we define the 90\%-probability volume by $t-2a_l<r<t+2a_l$ and $0\leq\theta\lesssim 1.22(k_0 a_t)^{-1}$, which corresponds to the 95\%-probability regions of the radial and angular distributions. We regard two WPs as overlapping if their 90\%-probability volumes intersect (see Fig.~\ref{fig:Overlap criteria}) and if their energies are the same within the intrinsic uncertainty $\Delta E$. This defines the emission time window $\tau\sim\mathcal{O}[(\Delta E)^{-1}]$ and solid angle $\Delta\Omega_\text{overlap}\sim\mathcal{O}[(k_0 a_t)^{-2}]$ (see Fig.~\ref{fig:Overlap criteria}) for the WPs of concern. With a differential production rate $d^2\Phi/dE_\nu d\Omega$ for the source, the number of WPs expected to overlap with a reference WP is \begin{equation} \label{eq: overlap factor} \eta=\tau\frac{d^2\Phi}{dE_\nu d\Omega}(\Delta E) (\Delta\Omega_\text{overlap}) \sim\frac{d^2\Phi}{dE_\nu d\Omega} \frac{96\pi}{(k_0 a_t)^2}\sim 96\pi\frac{d^2\Phi}{dE_\nu d\Omega} \left(\frac{\Delta k_\perp}{k_0}\right)^2, \end{equation} where $\Delta k_\perp\sim a_t^{-1}$ is the transverse momentum uncertainty of the WPs. Because the WPs have sharply-peaked momentum distributions, $\Delta k_\perp\ll k_0$ and \begin{equation} \eta\ll 96\pi\frac{d^2\Phi}{dE_\nu d\Omega}. \label{eq:eta} \end{equation} The numerical factor in Eqs.~(\ref{eq: overlap factor}) and (\ref{eq:eta}) comes from generous estimates of $\Delta E$, $\tau$, and $\Delta\Omega_\text{overlap}$. We will see that this factor does not affect our results below. 
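The $\eta(k_0 a_t)^2$ column of Table~\ref{tab:overlap factor for sources} can be reproduced from the tabulated fluxes: in natural units $\eta(k_0 a_t)^2 = 96\pi\, d^2\Phi/dE_\nu d\Omega$, and converting the fluxes (per MeV, per second, per steradian) supplies one factor of $\hbar$. A short sketch (the flux values below are the order-of-magnitude table entries, not precise numbers):

```python
import math

# eta * (k0*a_t)^2 = 96*pi * hbar * d^2Phi/dE dOmega, with the flux in
# MeV^-1 s^-1 sr^-1 and hbar converting seconds to MeV^-1.
HBAR_MEV_S = 6.582e-22  # hbar in MeV*s

def eta_times_k0at_sq(flux_per_mev_s_sr):
    """eta*(k0 a_t)^2 for a given differential production rate."""
    return 96.0 * math.pi * HBAR_MEV_S * flux_per_mev_s_sr

# Order-of-magnitude fluxes from Table 1 (lower ends of the quoted ranges).
for source, flux in [("accelerator", 1e18), ("reactor", 1e19),
                     ("sun (low)", 1e32), ("supernova (low)", 1e51)]:
    print(f"{source:16s} eta*(k0 a_t)^2 ~ {eta_times_k0at_sq(flux):.0e}")
```

This recovers the $\sim 0.1$, $\sim 1$, $\sim 10^{13}$ and $\sim 10^{32}$ entries of the table, making the contrast between the terrestrial and astrophysical sources explicit.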
We give characteristic parameters for accelerator, reactor, solar, and supernova neutrinos along with the corresponding estimates of $\eta(k_0a_t)^2$ in Table~\ref{tab:overlap factor for sources}. It can be seen that accelerator and reactor neutrinos can be safely treated as non-overlapping WPs. However, for reasonable guesses of $k_0a_t$, solar and supernova neutrinos correspond to overlapping WPs. \begin{table}[t] \begin{center} \caption{Characteristics of various neutrino sources} \begin{tabular}{lccccc} \hline \hline Source & $d^2\Phi/dE_\nu d\Omega$&$\eta(k_0 a_t)^2$ & $D$&$E_\nu$& HBT\\ &(MeV$^{-1}$~s$^{-1}$~sr$^{-1}$)&&(m)&(MeV)&setup\\ \hline accelerator &$\sim 10^{18}$&$\sim 0.1$ & $\sim10^6$& $\sim10^3$ & No \\ reactor &$\sim 10^{19}$ &$\sim 1$ & $\sim10^3$--$10^5$& $\sim 1$ & No \\ sun &$\sim 10^{32}$--$10^{38}$ &$\sim10^{13}$--$10^{19}$ & $\sim 10^{11}$& $\sim 0.1$--10 & No \\ supernova & $\sim 10^{51}$--$10^{55}$&$\sim 10^{32}$--$10^{36}$ & $\sim 10^{20}$& $\sim 10$ & Yes\\ \hline \hline \end{tabular} \label{tab:overlap factor for sources} \end{center} \end{table} \vspace{-0.2 in} \section{Physics of Quantum WPs: Overlapping or Not} When WPs do not overlap, one might ask how the 3D WP treatment can be reconciled with the picture of bullet-like particles, which is commonly assumed for analyzing neutrino detection. As the transverse size of the WP can easily become macroscopically large, any microscopic detection process only ``sees'' a 1D neutrino wave train weighted by a direction-dependent amplitude, the square of which is $\Theta(\theta)$ in Eq.~\eqref{eq: radial and angular probability distribtuions }. Because of the spherical wave front in Eq.~\eqref{eq: Psi in the far-field limit at t>0 approximated result}, the observed effective plane-wave momentum points in the same direction as that defined in the classical sense. In addition, without interference among the WPs, the observed flux is a sum over WPs emitted in different directions. 
As $\Theta(\theta)$ is normalized, this flux is the same as the particle number flux from a source emitting bullet-like particles. When WPs overlap, formally neutrinos should be described by an anti-symmetric many-particle wave function. However, if among the overlapping WPs, only one neutrino is detected at a time, it can be shown that the interference terms in the one-particle detection probability are proportional to the inner products of the one-particle states. Although these states overlap in both position and momentum spaces at the detector, they are in fact orthogonal because their production processes are spatially separated at the source. Consequently, the one-particle detection rate is not affected by the overlap of WPs. If more than one neutrino can be detected simultaneously, then the Hanbury Brown and Twiss (HBT) effect may take place. For fermions such as neutrinos, the HBT effect causes the detected events to ``anti-bunch'' if certain criteria are met \cite{Fano1961}. For this to occur, the WPs from production points a and b must overlap at detection points c and d, as shown in Fig.~\ref{fig:HBT}. More specifically, the geometric condition \begin{equation} E_\nu \,r_{ab,\perp}r_{cd,\perp} / D \lesssim 1 \label{eq:hbt} \end{equation} must be satisfied and the temporal separation between the two detected events must be less than the coherence time of the source. In Eq.~(\ref{eq:hbt}), $r_{ab,\perp}$ ($r_{cd,\perp}$) is the distance between points a and b (c and d) in the direction perpendicular to the source-detector axis and $D$ is the source-detector distance. When the above conditions are realized, the two neutrinos would be rendered in the same phase space cell by the detection processes. Therefore, such joint detection would be suppressed by the Pauli exclusion principle, which accounts for the anti-bunching. \begin{figure}[t] \centering \includegraphics[scale=0.20]{figure2.pdf} \caption{Overlapping WPs and the HBT effect. 
Ambiguity in pairing the production and detection processes gives rise to the HBT effect.} \label{fig:HBT} \end{figure} Using the parameters given in Table~\ref{tab:overlap factor for sources}, we find that the geometric condition in Eq.~(\ref{eq:hbt}) is only satisfied for neutrinos from a Galactic supernova. However, even with a megaton detector, the expected number of $\bar\nu_e+p\to n+e^+$ events from overlapping WPs is $\sim 10^{-14}(\text{kpc}/D)^{2}$, which is simply too small to show the HBT effect. \section{Summary} Based on the evolution of 3D Gaussian WPs, we have estimated the potential overlap among WPs for accelerator, reactor, solar and supernova neutrinos. We find that no overlap occurs for the former two sources and that the overlap for the latter two does not have measurable consequences. For analyzing detection of neutrinos from the above four sources, it is appropriate to treat neutrinos as separate WPs. \bigskip \bigskip \begin{center} \begin{large} Acknowledgments \end{large} \end{center} This work was supported in part by the U.S. DOE under grant DE-FG02-87ER40328. C.H.L. gratefully acknowledges the organizers of NuPhys2015 for hospitality and the Council of Graduate Students at the University of Minnesota for a travel grant.
\section{Introduction}} \IEEEPARstart{W}{ith} the current trend towards industrial and private digitization, the building automation and smart home sectors have become fast-growing industries, since the demand for such digitization has seen a steady increase in recent years \cite{StatistaSHR2019}. As these smart home systems are becoming more commonly used, the internal and external security of such systems is getting more and more crucial. For this reason, this paper presents an analysis of the Z-Wave smart home protocol and its implementation with regard to its security. Z-Wave is a proprietary standard, which today is owned by \emph{Silicon Labs}, with the former owners being \emph{Zen-Sys} and \emph{Sigma Designs}. While most of the protocol standard, especially the security aspects, is kept secret, previous works, e.g. \cite{BehrangGhanounBH2013}, have shown that the ``security through obscurity'' approach is ultimately doomed to fail. While Silicon Labs customers need to implement both the provided hardware, a Z-Wave SoC (System on Chip, \cite{ZwaveSoC}) or module \cite{ZWaveModule}, and the proprietary software protocol stack library, and are also legally bound to a nondisclosure and confidentiality agreement regarding the secret details of the protocol to keep the standard secure, programmers as well as attackers have shown that the protocol can be reverse engineered anyway, which led to projects like the Open Z-Wave library \cite{OpenZWave}. With more than 2600 different Z-Wave certified products currently available, manufactured by approx. 700 different companies, and a U.S. market share in the home security area of approx. 90\% \cite{ZWaveAllianceReport}, the protocol is, besides Zigbee, the most commonly used one for home automation, which makes it interesting not only for security researchers. 
The following analysis focused on finding security vulnerabilities which could be exploited using Software Defined Radio (SDR) to send fake messages. A \emph{HackRF One} \cite{HackRF} acted as transmitter and receiver, controlled by GNU Radio \cite{GNU}. The Python module \emph{Scapy-radio} \cite{Scapy-Radio} was included in attack scripts for decoding and encoding packets, with the goal of finding unencrypted messages. These messages are part of the protocol's design and therefore intended, but have the potential to be misused to cause unwanted system states or to control devices. The results of this research are two novel Denial of Service (DoS) attacks, which overload the Z-Wave gateway with relatively little effort. A gateway in this state will no longer process events from connected devices or the smartphone app, which disables the entire smart home network for all participants. \section{State of the Art} Previous researchers like \emph{PenTestPartners} \cite{Pentestpartners} discovered a downgrade attack against the newer version of the \textit{Z-Wave} security standard. They did this in a test case using a locked door in inclusion mode while manipulating the \textit{NodeInfo} packet and exploiting its backwards compatibility. Before this attack, a security evaluation by Behrang Fouladi and Sahand Ghanoun \cite{SecurityEvaluation} focused on controlling a door lock without exploiting the default key of \textit{Z-Wave} devices, by exploiting a missing validation in the key exchange protocol handler. They were able to reset the shared network key, which was ultimately an implementation error of the door lock manufacturer. Since the implementer is able to fit a Z-Wave system to his needs, the protocol security measures can be circumvented if implemented the wrong way. Another class of attacks targets the Z-Wave routing protocol, e.g. so-called Black Hole attacks \cite{BADENHOP2017112}, utilizing bad design in the routing protocol. 
While most of the other publications focus on decrypting messages and/or controlling the Z-Wave components themselves, we focused our evaluation on possible denial of service attacks like the ones shown in \cite{BADENHOP2017112}. The novel aspect of our attack lies in targeting the Z-Wave gateway itself, and therefore the main communication hub, instead of the routing devices of the Z-Wave network. \mbox{\\} The paper is structured as follows: in Section 3 the packets used for the attacks are explained together with their intended use. Section 4 explains the methods and the procedure of the attacks, including the structure of the used packets. In Section 5 the results of the attacks are depicted, which are then discussed in Section 6. \section{The Z-Wave Protocol} The information about the Z-Wave protocol was gathered through the various specifications of Silicon Labs which are available on the internet, and through testing. Z-Wave is still a closed protocol, which added to the difficulty of the research \cite{spec}. \subsection{Nonce-Get} The \emph{Nonce-Get} command is used to request a unique random number, which must never be used again. Such a number is called a \emph{Nonce}, which is an abbreviation for \emph{Number used once}. These nonces are requested as part of the encryption of \textit{S0 Encapsulated} messages (see figure \ref{fig:Nonce}). The initialization vector (IV) used for this encryption is split into two parts, namely the nonces of A and B, which are concatenated to create the IV (IV = (nonce sender $||$ nonce receiver)) \cite{ZWaveApplication}. The sender, here \emph{Node A}, initially transmits a \textit{Nonce Get} command, which will be answered by the receiver, Node B, with a \textit{Nonce Report} packet containing the newly created nonce from \emph{Node B}. \emph{Node A} is now able to combine both nonces as IV for the encryption of the \textit{S0 Encapsulated Message} payload. 
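The handshake described above can be summarized in a few lines of Python. This is a hedged sketch: the 8-byte nonce length and the resulting 16-byte AES IV are the commonly reported S0 parameters (an assumption here, not taken from the specification excerpts above), and the real protocol additionally involves key derivation and an authentication tag, none of which is shown.

```python
import os

def make_nonce() -> bytes:
    # A "Number used once": 8 random bytes in S0 (assumed length).
    return os.urandom(8)

def build_iv(sender_nonce: bytes, receiver_nonce: bytes) -> bytes:
    # IV = (nonce sender || nonce receiver), as described in the text.
    return sender_nonce + receiver_nonce

# Node A creates its own nonce; Node B's nonce arrives in the Nonce Report.
nonce_a = make_nonce()
nonce_b = make_nonce()
iv = build_iv(nonce_a, nonce_b)
assert len(iv) == 16  # 16-byte IV for AES encryption of the payload
```

The point of the handshake is that neither side can force IV reuse on its own, since half of the IV is always freshly chosen by the other party.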
\begin{figure}[ht] \includegraphics[width=\linewidth]{images/Fugure_3_Sending_secure_messages.PNG} \caption{Z-Wave-Transport-Encapsulation \cite{Z-Wave-Transport-Encapsulation}} \label{fig:Nonce} \end{figure} \newline This algorithm stays basically the same for the \textit{S2 Encapsulated Message}, except that the \emph{Nonce Get} command becomes \textit{S2 Nonce Get}. A connected node has to answer this request whenever another node or the gateway sends this command, but must not answer if the command is sent via multicast. The node itself is not allowed to send it via multicast either. It should also be mentioned that, if no acknowledgement (ACK) follows in response, an attempt is made to route the packet to the receiver. However, there is no rule in place stopping nodes from attempting to send a) Nonce Report packets to addresses which have not been assigned to a node or b) to their own node address. \subsection{Finding Z-Wave nodes in range} The \textit{Find Nodes In Range} command is used by the gateway in the inclusion phase of a new device. The device which is being included into the Z-Wave network will send \textit{NOP Power} packets to every address given in the command. Afterwards, the device waits a moment after each packet sent for an ACK message. Using this method, the requesting device is able to discover all other devices within its range (see figure \ref{fig:finde}). When the command is completed, the device sends a \textit{Command Complete} packet to notify the gateway that it has finished. The command \textit{Find Nodes In Range} will generally be sent by the gateway to a device which is being included. Devices which are not in the inclusion phase should not accept this command. \begin{figure}[ht] \includegraphics[width=\linewidth]{images/FindNodesInRange.PNG} \caption{Procedure of the Find Nodes in Range command \cite{findNodes}} \label{fig:finde} \end{figure} \section{Methodology} In general, Python scripts have been used to create and process packets with Scapy-Radio. 
The Scapy-Radio version of BastilleResearch \cite{Scapy-Radio} was used for our purpose. These packets are then processed through software defined radio with GNU Radio. For this, the open-source Software Defined Radio HackRF One \cite{HackRF} was used as a transmitter and receiver. Two of them were needed, because they are not full-duplex. We had to change the GNU Radio flow graph to get at least the receiving paths for 100 Kbit/s and 40 Kbit/s running. With the used flow graph we were able to send packets at 40 Kbit/s. \subsection{Used tools: Scapy-Radio} \textit{Scapy-Radio}, a modified version of the Python program \textit{Scapy} \cite{Scapy} which has been altered to process wireless protocols, is used to send, sniff and filter the \textit{Z-Wave} packets. The specific version used has been modified by BastilleResearch \cite{Scapy-Radio} as a testing tool for \textit{IoT} radio networks and includes some Z-Wave protocol capabilities. \subsection{Used tools: GNU Radio} GNU Radio \cite{GNU} is a free tool for implementing software defined radios (SDR). GNU Radio uses two HackRF One devices in this project, one as receiver and one as transmitter. GNU Radio has a graphical user interface which can be used to create one or multiple flow graphs from blocks via drag and drop (much like Matlab Simulink). For the HackRF One, an Osmocom \cite{Osmocom} source block is used. The created flow graph defines the decoding, demodulation and the general processing of the signals. The used flow graph is shown in figure \ref{fig:zwave-flowgraph}. At the end of the flow graph, the data stream is sent via the system's internal loopback interface to the written Python script for further processing. \begin{figure*}[ht] \includegraphics[width=\linewidth]{images/ZWave_40_100_send.png} \caption{Z-Wave Flow Graph} \label{fig:zwave-flowgraph} \end{figure*} \subsection{Used tools: HackRF One} The HackRF One \cite{HackRF} is an open-source SDR solution. 
This device can transmit and receive radio signals from 1 MHz to 6 GHz. It can be connected via USB and used, for instance, with GNU Radio, or be programmed as a stand-alone solution. \subsection{Packet Manipulation} In the search for vulnerabilities within the Z-Wave protocol or specific implementations, malicious data packets were generated using Scapy-Radio. Primarily, unencrypted commands were tested, since these can be used independently of the encryption and security level. The packets were manipulated in different ways: first the addresses were changed and combinations were tested, including single- and multicast traffic. After these tests, the payload was also altered, especially for packets that require the receiver to send an answer. During these tests, the commands \emph{Nonce Get / S2 Nonce Get} and \emph{Find Nodes In Range} showed unexpected effects. Through these effects, both commands could be used for a denial-of-service attack against the Z-Wave gateway. \subsection{Nonce-Get manipulation} The only manipulation needed for the \emph{Nonce Get} frames is to change the source and destination addresses as in figure \ref{fig:Nonce_S0} or \ref{fig:Nonce_S2}. Both are set to decimal 1, which is the address of the gateway itself. For the \emph{S2 Nonce Get} frames, the sequence number is incremented as well. The gateway thus appears to send itself Nonce Get commands. Naturally, the HomeID of the crafted frames has to be changed to the ID of the attacked network.
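The frame manipulation described above can be sketched in Python. The byte layout below follows the general Z-Wave (ITU-T G.9959) frame structure of HomeID, source, frame control, length, destination, payload and checksum; however, the frame-control bytes, the command-class values (0x98/0x40 for S0 Nonce Get) and the XOR checksum seed are illustrative assumptions, not values taken from this paper's tooling.

```python
def xor_checksum(frame: bytes, seed: int = 0xFF) -> int:
    # Simple XOR frame checksum; the 0xFF seed is an assumption here.
    c = seed
    for b in frame:
        c ^= b
    return c

def build_nonce_get(home_id: bytes, src: int = 0x01, dst: int = 0x01,
                    frame_ctrl: bytes = b"\x41\x01") -> bytes:
    """Sketch of a spoofed S0 Nonce Get frame:
    HomeID | src | frame control | length | dst | payload | checksum.
    src = dst = 0x01 spoofs the gateway as both sender and receiver."""
    payload = bytes([0x98, 0x40])  # illustrative: Security class / Nonce Get
    # length field covers the whole frame including the checksum byte
    length = len(home_id) + 1 + len(frame_ctrl) + 1 + 1 + len(payload) + 1
    body = home_id + bytes([src]) + frame_ctrl + bytes([length, dst]) + payload
    return body + bytes([xor_checksum(body)])
```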
\begin{figure}[h] \includegraphics[width=\linewidth]{images/Routed_Noncense_Angreifer_gesendetes_Nonce_Get_Paket.PNG} \caption{Structure of the manipulated S0 Nonce Get packet} \label{fig:Nonce_S0} \end{figure} \begin{figure}[h] \includegraphics[width=\linewidth]{images/Routed_Noncense_Angreifer_gesendetes_Nonce_Get_S2_Paket.PNG} \caption{Structure of the manipulated S2 Nonce Get packet} \label{fig:Nonce_S2} \end{figure} As long as the attacker keeps sending these manipulated packets, the gateway tries to send Nonce Report packets to itself. This is not a problem in itself. However, if a node capable of routing is available in the network, it tries to route the Nonce Report packets via other routing nodes back to the gateway (see figure \ref{fig:Nonce_S0_react}), which does not recognize itself as the destination of the packet and repeats the routing process several times. \begin{figure}[!h] \includegraphics[width=\linewidth]{images/Routed_Noncense_Basisstation_Nachrichtenverkehr_Behandlung_Nonce_Get.PNG} \caption{Nonce Get S0 packet reaction} \label{fig:Nonce_S0_react} \end{figure} This process now takes considerably longer, and in the meantime the gateway stops processing received packets and commands sent via the smartphone app. As the gateway is the central managing entity within the network, responsible for both the logic and the control of the connected nodes (i.e. the user's home automation programming), the whole Z-Wave network is blocked for as long as the gateway itself is blocked. Depending on the specific gateway and its implementation, it might also be possible to use a non-existent source address to get the gateway into the same state. During testing, one such gateway was encountered among the test candidates (see figure \ref{fig:Nonce_S2_react}).
\begin{figure}[!h] \includegraphics[width=\linewidth]{images/Routed_Noncense_Test_ZWave_DoS_S2.PNG} \caption{Nonce Get S2 packet reaction} \label{fig:Nonce_S2_react} \end{figure} \subsection{Routed Nonce(nse)} Through the effects described above, a denial-of-service (DoS) attack is created if the attacker keeps sending manipulated Nonce Get frames, as the network is constantly routing nonsense. Because this denial of service blocks only the gateway, device-internal automations, such as disassembly (tamper) alarms, are not affected. It is advantageous for the attacker that these messages are sent unencrypted, which enables the attack for both security levels S0 and S2. The efficiency of the attack varies, though, depending on the specific manufacturer of the gateway. The most efficient result was a twenty-minute DoS logjam against the targeted gateway using only 256 sent packets. \subsection{Automated Routed Noncense} To automate this attack, a Python script was written that reads the HomeID of the target network in reach and creates the manipulated messages automatically. The created messages are then sent continuously to the targeted network. The intervals at which the packets are sent vary depending on the manufacturer of the gateway. If S2 Nonce Get packets are sent, a counter is used for the sequence number. \subsection{Manipulating "Find nodes in range"} The second DoS attack is realized through misuse of the \emph{Find Nodes In Range} command. Here, the addresses of the packet are changed again: both source and destination addresses are set to decimal 1. Afterwards the payload is changed, filling it with 32 bytes of 0xFF (see figure \ref{fig:PowerofNOP_Package}). The payload normally depends on the nodes known in the network (an assumption based on our testing). Because of this alteration, the gateway appears to send itself a \emph{Find Nodes In Range} packet. The HomeID of the attacked network has to be inserted as well. The attack works with both security levels S0 and S2 without level-specific changes.
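The manipulated packet described above can be sketched as follows. The 32 bytes of 0xFF and the spoofed addresses are as described in the text; the command bytes, frame-control bytes and checksum seed are hypothetical placeholders for illustration.

```python
def build_find_nodes_in_range(home_id: bytes, src: int = 0x01,
                              dst: int = 0x01) -> bytes:
    """Sketch of the manipulated Find Nodes In Range frame."""
    # 32 bytes of 0xFF: a node mask claiming every possible node ID exists,
    # which makes the gateway probe all addresses with NOP Power packets.
    node_mask = bytes([0xFF] * 32)
    command = bytes([0x01, 0x04])  # hypothetical command class / command id
    frame_ctrl = b"\x41\x01"       # hypothetical frame-control bytes
    payload = command + node_mask
    # length field covers the whole frame including the checksum byte
    length = len(home_id) + 1 + len(frame_ctrl) + 1 + 1 + len(payload) + 1
    body = home_id + bytes([src]) + frame_ctrl + bytes([length, dst]) + payload
    checksum = 0xFF  # XOR checksum; the seed value is an assumption
    for b in body:
        checksum ^= b
    return body + bytes([checksum])
```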
\begin{figure}[h] \includegraphics[width=\linewidth]{images/Power_of_NOPe_Angreifer_gesendetes_Paket.PNG} \caption{Manipulated Find Nodes In Range packet} \label{fig:PowerofNOP_Package} \end{figure} \subsection{Severity of the manipulation} When this manipulated packet is sent, the gateway receives the command to look for nodes in range. It therefore sends \textit{NOP Power} packets to all possible addresses, cycling through every address several times. After each sent packet, it waits briefly for an ACK and then proceeds with sending a packet to the next address. While the gateway executes this command, it neither answers nor processes any incoming messages, such as Nonce Get packets or commands from the smartphone app, as depicted in figure \ref{fig:PowerofNOP_Serv}. Therefore this can be used for a DoS attack as well. The execution of the manipulated packet takes the gateway a little under two minutes, and the duration of the jamming is consistent across all manufacturers. \begin{figure}[h] \includegraphics[width=\linewidth]{images/Power_of_NOPe_Nachrichtenverkehr_Basisstation.PNG} \caption{Find Nodes In Range severity} \label{fig:PowerofNOP_Serv} \end{figure} These effects of a manipulated \emph{Find Nodes In Range} packet provide the opportunity for a very efficient DoS attack. Since the gateway executes no other commands during the two-minute time frame, the consecutively sent messages need precise timing to guarantee a continuous denial of service on the gateway. Some old versions of the Z-Wave protocol cause the gateway to send a \emph{Command Complete} packet after the execution, which gives the attacker the exact moment to send another manipulated packet. This was the case with one of the tested gateways. The latest version of the Z-Wave protocol does not cause the gateway to send the \emph{Command Complete} packet.
This version is mandatory for all manufacturers that require S2 certification for their devices. \subsection{Automated "Power of NOP(e)"} The first part of the automation is the automatic insertion of the HomeID of the attacked network, which can be read out of any regular packet sent within the Z-Wave network. The second part is the continuous sending of the Find Nodes In Range packets at given time intervals to jam the gateway and therefore the entire Z-Wave network. With the old implementations of the protocol, there is the possibility to wait for a \emph{Command Complete} packet to determine the perfect moment for the attacker to send the next packet. \section{Results} There are two main differences between the tested smart home gateways. The first difference is the efficiency of the Routed Noncense attack; how fast a gateway executes the routing attempts is probably a matter of performance. The second difference is the version of the Z-Wave protocol used. There are still devices for sale that implement older protocol releases from Silicon Labs supporting only S0, in contrast to newer ones that also support S2. A gateway with an older version supporting only S0 was easier to exploit by sending a fake \emph{Find Nodes In Range} packet, because it returned a \emph{Command Complete} packet after finishing the command; this can be used to time the transmission of the next \emph{Find Nodes In Range} packet perfectly in an ongoing attack. The newer versions of Z-Wave supporting S2 do not send this packet, which makes them slightly harder to exploit. The jamming of a Z-Wave network can be useful for an intruder whose goal is to break into a Z-Wave-secured house without, e.g., raising the alarm. In such an attack scenario, the newly found DoS attacks can be used instead of a conventional hardware frequency jammer, with the difference that a jamming attack usually also affects neighbouring systems unintentionally.
This attack can also be remote-controlled if the attacker places a suitable SDR in a waterproof case, powered by a battery pack and connected via GPRS or similar mobile protocols. The DoS attacks can still be detected via heartbeat detection to an extent. If heartbeat detection is executed by the gateway only, it might not be executed during the attack, just like the commands coming from the smartphone application; the gateway would therefore have to detect that it is itself no longer sending heartbeats. On the other hand, a connected device would be able to detect an inactive gateway through missing heartbeats and would have to indicate the lost connection somehow. This would be difficult for window contacts or other devices that have no way to indicate a lost connection other than a blinking LED, but not for alarm sirens or other security-related devices that offer proper visual and audible feedback; the siren, for example, could sound an alarm. The user would have to be able to deactivate the heartbeat alarm in case the gateway needs to be shut down or is down because of an update. \section{Discussion} In general, these security issues were caused by mistakes made during the implementation of the protocol. The \emph{Find Nodes In Range} command, for example, is not designed to be executed by the gateway; it is only supposed to be executed by other nodes during inclusion. The gateway also lacks a rule preventing it from sending Nonce Report packets to non-existent addresses or to itself. This shows how important it is to define the preconditions under which even less important or rarely used commands are allowed to be executed. \section{Conclusion} Z-Wave is a radio protocol, so it is always possible to execute a DoS attack with the use of a jammer. With this in mind, the DoS attacks found here are less severe, but they offer a more concealed form of DoS attack that is not detectable through jamming detection based on a Received Signal Strength Indicator.
Beyond that, they are a very efficient way to block the Z-Wave network. The main weakness exploited by the attacks is that the whole logic of the Z-Wave network is handled by the gateway. Created automations, push messages and the processing of commands from the smartphone app are tasks of the gateway alone. Because of that, the Z-Wave network effectively has a star topology with some mesh capabilities when it comes to routing, so there is no need to block all nodes in the Z-Wave network if there is a way to block the central node. As an example, the alarm siren would not go off when the window contact senses the opening of the window, since the gateway would not execute the automation for it. Both attacks take advantage of this by blocking only the gateway and thereby jamming the whole network. The attacks are both much more efficient than normal jamming and simple to pull off. The Power of NOPe attack requires the attacker to send one \emph{Find Nodes In Range} packet approximately every two minutes. The most efficient case of the Routed Noncense attack was able to block the gateway for about 20 minutes with just 256 packets. Both attacks can be repeated to block the gateway constantly over an intended time frame. Both attacks are only traceable if the attacked person has a Z-Wave sniffer active at all times; the person would then have to identify the unusual messages and check whether they are legitimate within Z-Wave. The source of the attack cannot be determined, because the messages look as if they had been sent from the gateway. A jammer would be much easier to detect, because it blocks communication outright. There are no permanent effects after the attacks, and the gateway returns to normal operation once an attack ends.
The tool we wrote and the modified Scapy-Radio parts can be found in our GitHub repository: \url{https://github.com/A-Siemer/Dirtywave} \section{Updates} Silicon Labs acknowledged the vulnerabilities discovered in this paper and has informed their customers via a public announcement\cite{PSIRT-27}. Along with the announcement, an updated Z-Wave implementation and protocol specification was made available, fixing the vulnerabilities that led to the attacks described above. The updated version of the Z-Wave protocol can be downloaded from the company's website\cite{Download_Z-Wave_SDK}. \section{Acknowledgement} We would like to thank Silicon Labs for their cooperation during the evaluation of the Z-Wave vulnerabilities found, especially Jakob Buron. This project received funding from the Institute for Project Oriented Teaching (IPro-L) at the University of Applied Sciences Emden/Leer. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{ieeetr}
\section{Introduction} The discovery of the Higgs boson with a mass $m_h = 125.09 \pm 0.21$ (stat.) $\pm 0.11$ (syst.) GeV~\cite{Aad:2012tfa,Chatrchyan:2012xdj} confirms the particle content of the Standard Model. Although the measured value of the Higgs boson mass squarely fits in the region 75-135 GeV predicted by the minimal supersymmetric standard model (MSSM)~\cite{Carena:2002es}, the lack of signals for sparticles raises questions about the naturalness of supersymmetric (SUSY) models. The SUSY spectrum was predicted to lie not too far from the weak scale $\sim 100$ GeV, based on {\it naturalness} calculations often expressed via the BG measure~\cite{Barbieri:1987fn}: \begin{equation} \Delta_{\rm BG}\equiv \max \left[ c_i\right]\ \ {\rm where}\ \ c_i=\left|\frac{\partial\ln m_Z^2}{\partial\ln p_i}\right| =\left|\frac{p_i}{m_Z^2}\frac{\partial m_Z^2}{\partial p_i}\right| \label{eq:DBG} \end{equation} where the $p_i$ are the various parameters of particular effective theories. $\Delta_{\rm BG}$ measures how sensitive the $Z$-boson mass is to variations of parameters at some high defining scale. In such a measure, the gluino mass, whose lower bound is set to $\simeq1.9$ TeV by the ATLAS group~\cite{ATLAS:2016uzr}, was expected to be less than 350 GeV. Since LHC searches have pushed sparticle (SUSY particle) masses to the multi-TeV scale, the remaining supersymmetric models are considered to be in crisis. It has been argued that $\Delta_{\rm BG}$ overestimates fine-tuning when applied to effective theories with multiple independent soft terms that are correlated~\cite{Baer:2013gva,Mustafayev:2014lqa}.
The electroweak fine-tuning~\cite{Baer:2012up}, $\Delta_{\rm EW}$, is a model-independent fine-tuning measure which compares the largest contribution on the right-hand side of Eq.~(\ref{eq:mzs}) to the value of $m_Z^2/2$: \begin{equation} \frac{m_Z^2}{2} = \frac{(m^2_{H_d}+\Sigma_d^d)-(m^2_{H_u}+\Sigma_u^u)\tan^2\beta}{(\tan^2\beta -1)} -\mu^2 \label{eq:mzs} \end{equation} Eq.~(\ref{eq:mzs}) is the well-known condition from minimization of the Higgs potential for electroweak symmetry breaking to occur. The electroweak fine-tuning is defined as: \begin{equation} \Delta_{\rm EW} \equiv \max_i \left(|C_i|\right)/(m_Z^2/2) \label{eq:deltaew} \end{equation} where $C_{H_u}=-m_{H_u}^2\tan^2\beta /(\tan^2\beta -1)$, $C_{H_d}=m_{H_d}^2/(\tan^2\beta -1)$ and $C_\mu =-\mu^2$, along with definitions for the radiative corrections $C_{\Sigma_u^u(k)}$ and $C_{\Sigma_d^d(k)}$~\cite{Baer:2012cf}. Low $\Delta_{\rm EW}$ ensures that there are no large cancellations on the right-hand side of Eq.~(\ref{eq:mzs}). Models with $\Delta_{\rm EW}<30$, which corresponds to 3\% or less fine-tuning, are considered {\it natural}. This should not be seen as an attempt to save SUSY, but rather as an appropriate definition of naturalness that avoids overestimation for models with multiple uncorrelated soft terms. From Eq.~(\ref{eq:mzs}), using the naturalness bound $\Delta_{\rm EW}<30$, the $\mu$ term is restricted to be less than 355 GeV. This is not a problem in models with non-unified Higgs masses, since a weak scale value of $\mu$ can easily be chosen, or $m_{H_u}^2$(GUT) can be adjusted so that it barely runs negative after RGE running. The constrained MSSM (cMSSM) model with only 4 parameters has been severely constrained by dark matter and sparticle searches. cMSSM models with a 1 TeV higgsino are still viable~\cite{Roszkowski:2014wqa,Bagnaschi:2015eha} by giving up the naturalness constraint.
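As a quick cross-check of the $\mu$ bound quoted above, keeping only the $C_\mu$ contribution in Eq.~(\ref{eq:deltaew}) gives
\begin{equation}
\Delta_{\rm EW} \ge \frac{|C_\mu|}{m_Z^2/2} = \frac{2\mu^2}{m_Z^2} < 30
\quad\Longrightarrow\quad
\mu < \sqrt{15}\, m_Z \simeq 353\ {\rm GeV},
\end{equation}
consistent with the $\mu \lesssim 355$ GeV bound stated above.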
Even before the LHC was turned on, the LEP-II working group~\cite{Barate:2003sz} had reported a lower limit on the Higgs mass of $m_h \gtrsim 114.4$ GeV in 2003, which already put the natural cMSSM in conflict with naturalness expectations~\cite{Barbieri:1998uv}. The 'WIMP miracle' picture with a bino-like neutralino had already been disfavored in the MSSM~\cite{Baer:2010wm} before the discovery of the Higgs boson. In radiatively-driven natural SUSY models, neutralinos are underproduced because the neutralino is higgsino-like, so a second dark matter component ({\it e.g.}, axions) is needed. Introducing the axion also addresses another fine-tuning problem, namely the strong CP problem. \section{Neutralino LSP in Natural SUSY} In natural SUSY models with $\Delta_{\rm EW}<30$, the lightest supersymmetric particle (LSP) is higgsino-like with a mass $m_{\widetilde{Z}_1} \simeq \mu$, so the neutralino mass is bounded above by the value of the $\mu$ parameter. In such a model, a pure higgsino-like neutralino can only make up about one fourth of the total dark matter abundance if only thermal production is considered. The thermally produced WIMP abundance can be higher with a considerable amount of bino mixing at low $m_{1/2}$, but such a parameter set has already been ruled out by the LUX dark matter and LHC gluino searches unless $\tan\beta$ is small~\cite{Badziak:2017the}. Spin-independent (SI) and spin-dependent (SD) WIMP-proton cross sections for the NUHM2 model with $\Delta_{\rm EW}<30$, calculated using ISAJET v7.86~\cite{Paige:2003mg}, are shown in Fig.\ref{det}. \begin{figure}[h] \centering \begin{tabular}[b]{c} \includegraphics[width=.46\linewidth]{si} \\ \small (a) \end{tabular} \qquad \begin{tabular}[b]{c} \includegraphics[width=.45\linewidth]{sd} \\ \small (b) \end{tabular} \caption{Plot of rescaled (a) spin-independent $\xi \sigma^{\rm SI}(\widetilde{Z}_1, p)$ and (b) spin-dependent $\xi \sigma^{\rm SD}(\widetilde{Z}_1, p)$ direct detection rate vs.
neutralino LSP mass $m_{\widetilde{Z}_1}$. All points satisfy the naturalness constraint $\Delta_{\rm EW}<30$. Red points show the additional region that appears when $m_{A/H} \simeq 2 \times m_{\widetilde{Z}_1}$. Published DM search results are shown by solid lines; projected future reaches are shown by dashed lines.} \label{det} \end{figure} Scattering cross section rates are scaled down by a factor $\xi=\Omega_{\widetilde{Z}_1}^{th}h^2/0.12$~\cite{Bottino:2000jx}, since the WIMPs comprise only a portion of the local dark matter abundance. The upper bound is set by the LUX limit on the spin-independent scattering cross section~\cite{Akerib:2016vxi}, whereas the lower bound is determined by the naturalness condition. Unlike in the cMSSM, the $A/H$ funnel can occur for any value of $\tan\beta$, since $m_A$ is an input parameter in the NUHM2 model, provided the $m_A$ vs. $\tan\beta$ bounds~\cite{Aaboud:2016cre} are respected; these points are shown by red dots in Fig.\ref{det}. The upper bound on BR$(b \to s \gamma)$ removes the parameter region where $m_{A/H} \lesssim 300$ GeV~\cite{Bae:2015nva}. Green points represent the region where no additional annihilation mechanism is required~\cite{Baer:2016ucr}. It is questionable to expect the $A/H$ resonance, i.e. why nature would choose $m_{A/H} \simeq 2 \times m_{\widetilde{Z}_1}$, in SUSY scenarios with non-unified scalar masses. The neutralino abundance becomes lower with extra annihilations, so $\xi$ gets even smaller, which results in a lower detection rate due to fewer target particles. The stau co-annihilation region does not appear as an additional region, since $m_{\widetilde{Z}_1} \lesssim 360$ GeV and the lower bound on $m_{1/2}$ from LHC searches is rather high for a light stau mass. Recent results from the PICO-60 collaboration~\cite{Amole:2017dex} are shown in Fig.\ref{det}(b).
Projected spin-dependent dark matter searches such as DARWIN~\cite{Aalbers:2016jon} are not able to probe the whole parameter space predicted by the natural NUHM2 model. The gray region shows points already excluded by the SI LUX searches. On the SI plane, the XENON1T dark matter search~\cite{Aprile:2015uzo} will be able to cover the bulk of the remaining parameter space. Projected future reaches of multi-tonne detectors such as LZ~\cite{Akerib:2015cja}, XENONnT~\cite{Aprile:2015uzo}, DarkSide-20K~\cite{Agnes:2015lwe}, DEAP-50T~\cite{Amaudruz:2014nsa} and DARWIN~\cite{Aalbers:2016jon} will cover the remaining parameter space of the NUHM2 model. In the natural generalized mirage mediation model (nGMM)~\cite{Baer:2016hfa}, the wino and bino masses may be elevated, which results in a lower rescaled scattering rate. Although XENON1T will not be able to cover most of the parameter space, multi-tonne detectors will probe the whole parameter space. \section{Supersymmetrized DFSZ axion model} The QCD Lagrangian contains a CP-violating term: \begin{equation} {\cal L}\ni \bar{\theta}\frac{g_s^2}{32\pi^2}G_{A\mu\nu}\tilde{G}^{\mu\nu}_A \end{equation} where $\bar{\theta}\equiv\theta+\arg\det (M)$ and $M$ is the quark mass matrix. Measurements of the neutron electric dipole moment (EDM) imply that $\bar{\theta}\lesssim 10^{-10}$, thus requiring a huge fine-tuning in $\bar{\theta}$. The smallness of $\bar{\theta}$ is known as the strong CP problem. A solution to the problem is to introduce the Peccei-Quinn (PQ) symmetry~\cite{Peccei:1977hh}, which causes the $G\tilde{G}$ term to settle dynamically to zero when $U(1)_{\rm PQ}$ is broken. The pseudo Nambu-Goldstone boson associated with the PQ symmetry breaking is called the {\it axion}, $a$~\cite{Weinberg:1977ma,Wilczek:1977pj}. In the supersymmetrized DFSZ scenario, the MSSM superpotential is augmented by: \begin{equation} W_{\rm DFSZ}\ni \lambda\frac{S^2}{M_{Pl}}H_uH_d \label{eq:KN} \end{equation} where $S$ is a singlet superfield charged under the PQ symmetry.
The Higgs doublet {\it superfields} $H_u$ and $H_d$ carry PQ charges, so the SUSY $\mu$ term is in fact forbidden before the PQ symmetry breaking~\cite{Kim:1983dt}. An effective $\mu$ term is generated with: \begin{equation} \mu\sim \lambda \mbox{ } v_{\rm PQ}^2/M_{Pl}. \end{equation} A weak scale $\mu$ term can easily be generated by breaking the PQ symmetry radiatively~\cite{Murayama:1992dj}. A little hierarchy, characterized by $\mu\sim m_Z\ll m_{3/2}\sim {\rm multi-TeV}$, then emerges quite naturally from the mismatch between the PQ-breaking scale and the hidden-sector mass scale, $f_a\ll m_{\rm hidden}$. In a Peccei-Quinn augmented MSSM (PQMSSM) scenario, the axion superfield is given by~\cite{Bae:2011jb}: \begin{equation} A = \frac{1}{\sqrt{2}}\left(s+ia\right) + \sqrt{2}\theta \tilde a + \theta^2 F_a \end{equation} where $a$ is the axion field, $s$ is the spin-0 {\it saxion} field and $\tilde a$ is the spin-$\frac{1}{2}$ fermionic partner of the axion, called the {\it axino}. In addition to thermal production, in PQ-augmented SUSY scenarios WIMPs are produced by the subsequent decays of both the axino ($\tilde{a}\to \widetilde{Z}_1+...$) and the saxion ($s\to \widetilde{Z}_1\widetilde{Z}_i$) when kinematically allowed. The $s \to aa / \tilde{a}\tilde{a}$ branching ratio is controlled by the axion-saxion effective coupling~\cite{Chun:1995hc}: \begin{equation} \mathcal{L} \ni \frac{\xi_s}{f_a} s \left[ \left(\partial_\mu a \right)^2 + i \bar{\tilde{a}} \slashed\partial \tilde{a} \right] \label{eq:xis} \end{equation} where $\xi_s$ can take any value between 0 and 1. Although saxion decays to axion pairs at large axion decay constant $f_a$ can produce a significant amount of dark radiation, the dark matter density constraint is always the most restrictive one for a saxion mass at the TeV scale. The total neutralino abundance is computed by adding the thermally produced neutralinos and the neutralinos produced from decays of axinos and saxions.
The amount of axions needed to satisfy the measured DM abundance~\cite{Calabrese:2017ypx}, $\Omega_{a}^{\rm co} h^2=0.12-\Omega_{\widetilde{Z}_1}^{th}h^2$, is produced from the coherent oscillations of the axion field. The desired amount of axions can be obtained by adjusting the misalignment angle $\theta_i$~\cite{Visinelli:2009zm}: \begin{equation} \Omega_a^{\rm co} h^2 \simeq 0.23 f(\theta_i)\theta_i^2\left(\frac{f_a}{10^{12} \mbox{ GeV}}\right)^{7/6} \label{eq:axco} \end{equation} where $f(\theta_i)$ is the anharmonicity factor, parametrized as $f(\theta_i)= \left[\ln\left(e/(1-\theta_i^2/\pi^2)\right)\right]^{7/6}$. The axion misalignment angle $\theta_i$ can take any value between $-\pi$ and $\pi$. However, $f(\theta_i)$ is very sensitive to small changes in $\theta_i$ for $\theta_i \gtrsim 3$; hence a parameter set that satisfies $\Omega_a^{\rm co} h^2 + \Omega_{\widetilde{Z}_1}h^2=0.12$ only for $\theta_i \gtrsim 3$ is considered {\it unnatural}. In the SUSY DFSZ model, this region occurs when the axion decay constant $f_a \lesssim 10^{11}$ GeV~\cite{Bae:2014rfa,Bae:2015rra}. PQ breaking is assumed to have occurred before or during inflation and not to have been restored afterwards, to avoid any domain wall problem. Mixed axion-neutralino dark matter can be calculated by solving eight coupled Boltzmann equations~\cite{Bae:2014rfa}. Thermal production of axions, axinos and saxions is independent of the reheat temperature, $T_R$, in the SUSY DFSZ scenario, so the gravitino problem is avoided for values of $T_R$ less than $\sim 10^{10}$ GeV~\cite{Bae:2015efa}. The evolution of the number densities of gravitinos, neutralinos, axinos and saxions (both thermally and coherently produced) is tracked from the end of inflation, from $T_R$, until today. The lifetimes of the axino, gravitino and saxion are tracked along with their abundances in order to check whether big bang nucleosynthesis (BBN) is disturbed.
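To make the use of Eq.~(\ref{eq:axco}) concrete, the misalignment angle for a given $f_a$ and target abundance can be obtained by inverting the relation numerically. The following sketch is our own illustration (not the ISAJET/Boltzmann machinery used for the actual results); it uses bisection, exploiting that the abundance grows monotonically with $\theta_i$ on $(0,\pi)$.

```python
import math

def anharmonicity(theta):
    # f(theta) = [ln(e / (1 - theta^2/pi^2))]^(7/6)
    return math.log(math.e / (1.0 - theta**2 / math.pi**2)) ** (7.0 / 6.0)

def omega_a_h2(theta, f_a):
    # Coherent-oscillation axion abundance, Eq. (axco); f_a in GeV
    return 0.23 * anharmonicity(theta) * theta**2 * (f_a / 1e12) ** (7.0 / 6.0)

def solve_theta(target, f_a, lo=1e-6, hi=math.pi - 1e-6):
    """Bisect for the theta_i that yields the target abundance."""
    if omega_a_h2(hi, f_a) < target:
        return None  # even theta_i ~ pi cannot supply the requested abundance
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if omega_a_h2(mid, f_a) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, demanding the full abundance $\Omega_a^{\rm co} h^2 = 0.12$ at $f_a = 10^{12}$ GeV yields a comfortably small $\theta_i$, whereas at $f_a = 10^{9}$ GeV no $\theta_i < \pi$ suffices, illustrating why the low-$f_a$ region pushes $\theta_i$ toward unnatural values.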
Although saxions mainly decay into axion pairs, in cases where the saxion is light, $m_s \ll m_0$, long-lived saxions might impose a stronger BBN constraint on the maximum value of $f_a$ than the DM abundance constraint does. Nevertheless, in models with gravity-mediated SUSY breaking the saxion mass is expected to be of the order of the gravitino mass, $m_s \simeq \alpha m_{\tilde{G}}$. For each parameter set which yields $\Omega_{\widetilde{Z}_1}h^2<0.12$, the axion misalignment angle $\theta_i$ is adjusted using Eq.~(\ref{eq:axco}) so that $\Omega_{\widetilde{Z}_1}h^2+\Omega_a^{\rm co} h^2=0.12$. \begin{figure}[h] \centerline{\includegraphics[width=350pt]{evol}} \caption{Evolution of the energy densities of axions, axinos, gravitinos, neutralinos and saxions vs. temperature $T$, from the reheat temperature $T_R=10^7$ GeV until today. Two scenarios with different axion decay constants, $f_a=10^{10}$ GeV (solid) and $f_a=10^{15}$ GeV (dashed), are illustrated. Vertical green lines bound the temperature range where BBN takes place.} \label{evols} \end{figure} An example of the evolution of the energy densities obtained from the solution of the coupled Boltzmann equations is illustrated in Fig.\ref{evols} for the same NUHM2 benchmark point (parameters given in the next section) with different $f_a$ values. The gravitino and axino masses are set to 10 TeV, whereas the saxion mass is set to 5 (10) TeV for the large (small) $f_a$ case in order to show a scenario where BBN violation occurs. The reheat temperature is chosen as $10^7$ GeV. Solid lines show the evolution for $f_a=10^{10}$ GeV. This point is safe from the BBN constraint and $\Omega_{\widetilde{Z}_1}h^2 = \Omega_{\widetilde{Z}_1}^{th}h^2 \simeq 0.006$, since axinos and saxions decay well before the neutralino freeze-out. As a result, the cold dark matter density is mainly from coherent production of axions with $\theta_i \simeq \pi$. Even though the gravitino decays during BBN, its abundance is not large enough to interfere with nucleosynthesis.
Dashed lines show the evolution for $f_a=10^{15}$ GeV; this point is not allowed by the BBN and dark matter density constraints. Coherently produced saxions are still decaying when nucleosynthesis starts at $\sim1$ MeV, so the point is not BBN-safe. For a larger $f_a$, the thermal yields of axinos, axions and saxions at $T=T_R$ are lower, but their couplings to matter are weaker, hence they decay much later. The axions from $s\to a +a$ decays can be seen as a rise in the (relativistic) axion component. In both cases, coherent axion production starts at $T\sim 1$ GeV. For the $f_a=10^{15}$ GeV case, dark matter is already overproduced, so the axion misalignment angle is set to 1 for simplicity. A scan for the same benchmark point with $\xi_s=0$ gives an upper bound of $f_a=2 \times 10^{12}$ GeV. \section{Admixture of neutralino-axion in SUSY DFSZ} In the SUSY DFSZ scenario, the thermal production of axinos and saxions is proportional to $1/f_a^2$, whereas the coherent production of axions and saxions increases with increasing $f_a$. In the lower $f_a$ region, $10^9 \leq f_a/$GeV $\leq 10^{13}$, where the lower bound is from astrophysical observations, axino decays are the main contribution to the neutralino abundance. In the region $10^{13} \leq f_a/$GeV $\leq 10^{16}$, direct or indirect decays of the saxion dominantly augment the neutralino abundance. Although the yields of thermally produced axinos and saxions decrease with increasing $f_a$, their lifetimes increase since their couplings become weaker. The amount of neutralino density allowed from these decays is constrained by DM searches. \begin{figure}[h] \centerline{\includegraphics[width=350pt]{perce}} \caption{Percentage composition of higgsino-like neutralino allowed in the neutralino-axion admixture in the NUHM2 model. The red region is viable due to enhanced annihilations. Black and gray shaded regions are excluded by LEP-II searches and by indirect (and direct, darker gray) DM searches, respectively. The green region is not allowed by the fine-tuning constraint.
The brown shaded region is not viable due to the very low thermal production of the neutralino LSP.} \label{compos} \end{figure} In Fig.\ref{compos}, the amount of neutralino dark matter allowed in a two-component DM scenario is shown quantitatively. For each point shown in Fig.\ref{det}, the maximum allowed ratio $\xi$ is computed without violating the LUX bound on the SI scattering cross section rate and the combined Fermi-LAT/MAGIC limit on gamma rays from the $\widetilde{Z}_1 \widetilde{Z}_1 \to W^+W^-$ channel~\cite{Ahnen:2016qkx}. Although indirect dark matter searches have not yet started probing the region expected from the NUHM2 model, as seen in Fig.3 of Ref.\cite{Baer:2016ucr}, the results from the Fermi-LAT/MAGIC collaborations put a stronger constraint on the allowed neutralino abundance, since the annihilation rate is rescaled by $\xi^2$. The blue and pink regions are the allowed regions for the amount of neutralinos present in the admixture. The lower edge of the blue region can be read as the maximum amount of neutralinos produced thermally in the natural NUHM2 model. Because the neutralino is higgsino-like, thermally produced neutralinos can only make up $\simeq 25$\% of the dark matter abundance. Their contribution can be augmented up to any allowed point in the blue region; the additional neutralinos are assumed to be produced from axino and saxion decays. For $m_{\widetilde{Z}_1} \gtrsim 330$ GeV, the neutralino can make up to 100\% of the DM without violating published limits on the dark matter annihilation cross section rate~\cite{Ahnen:2016qkx}. The thermally produced neutralino abundance can be as low as $\sim$0.005 with $A/H$ resonance annihilations (pink shaded area). The black region is excluded by LEP-II searches, $m_{\widetilde{W}_1^{+/-}}>103.5$ GeV, whereas in the gray shaded region the Fermi-LAT/MAGIC exclusion applies. In the green area the electroweak fine-tuning is large, $\Delta_{\rm EW}>30$, and is thus considered unnatural.
In the brown shaded region, the thermally produced neutralino abundance is too low to be reached even with enhanced annihilations. \begin{figure}[h] \centering \begin{tabular}[b]{c} \includegraphics[width=.47\linewidth]{nuhm2a} \\ \small (a) \end{tabular} \qquad \begin{tabular}[b]{c} \includegraphics[width=.47\linewidth]{nuhm2b} \\ \small (b) \end{tabular} \caption{Neutralino density vs. $f_a$ for the NUHM2 benchmark point. In frame (a), points that violate the BBN and $\Delta_{N_{\rm eff}}$ constraints are colored red and brown, respectively. Frame (b) zooms in on the region with $\Omega_{\widetilde{Z}_1}h^2 < 0.12$; purple shaded points are excluded by the indirect DM searches, and green points are generated with both $m_{\tilde{a}}$ and $m_s$ greater than 30 TeV.} \label{bolt} \end{figure} The neutralino cold dark matter density, $\Omega_{\widetilde{Z}_1}h^2=\Omega_{\widetilde{Z}_1}^{th}h^2+\Omega_{\widetilde{Z}_1}^{dec}h^2$, vs. the axion decay constant, $f_a$, from a scan over $f_a:10^{9-16}$ GeV, $m_{\tilde{a}/s}:0.5-40$ TeV and $m_{\tilde{G}}=10$ TeV for a NUHM2 benchmark point with parameters \begin{equation} (m_0,\ m_{1/2},\ A_0,\ \tan\beta,\ \mu,\ m_A) = \mbox{(5300 GeV, 2030 GeV, -9850 GeV, 9, 150 GeV, 3000 GeV)} \end{equation} is shown in Fig.~\ref{bolt}. The point has $\Delta_{\rm EW}=29.2$, $m_{\tilde{g}}=4481$ GeV (within HE-LHC33 reach~\cite{Baer:2017yqq}), $\Omega_{\widetilde{Z}_1}^{th}h^2 \simeq 0.006$ and $\langle\sigma v\rangle\simeq4\times10^{-26}$ cm$^3$ s$^{-1}$. It is an example of an NUHM2 model restricted by indirect WIMP searches, with $m_{\widetilde{Z}_1}\simeq 150$ GeV. General results of the scan with the BBN and $\Delta_{N_{\rm eff}}$ constraints are shown in Fig.~\ref{bolt}(a); points with $\Delta_{N_{\rm eff}}>1$ are excluded at greater than 99\% confidence~\cite{Ade:2015xua} and are colored brown. Fig.~\ref{bolt}(b) zooms in on the $f_a$ region that predicts $\Omega_{\widetilde{Z}_1}h^2 \leq 0.12$.
Thermally produced neutralinos make up only 5\% of the total dark matter density; their contribution to the admixture can be augmented up to 36\% without violating the Fermi-LAT/MAGIC combined limit in the $W^+W^-$ channel. Here $\xi_s=1$, so the $s \to aa$ and $s \to \tilde{a} \tilde{a}$ decay channels are open. In the low $f_a$ region, $f_a \lesssim 2 \times 10^{10}$ GeV, axinos and saxions decay before neutralino freeze-out, so $\Omega_{\widetilde{Z}_1}h^2$ takes its standard thermal value $\Omega_{\widetilde{Z}_1}^{th}h^2$, which is independent of the PQ parameters $f_a$, $m_{\tilde{a}}$, $m_s$, $\theta_{i/s}$ and $\xi_s$. Axinos and saxions decay more slowly with increasing $f_a$ since their couplings to particles and sparticles are proportional to $1/f_a$; only long-lived axinos and saxions enhance the neutralino DM density. The neutralino relic density strictly increases with increasing $f_a$ for $f_a \lesssim 10^{13}$ GeV. For $f_a \gtrsim 10^{13}$ GeV, the neutralino density actually decreases for points with $m_s \lesssim 2m_{\tilde{a}}$: the saxion mainly decays to axion pairs, the chain $s\to \tilde{a} \tilde{a} \to \widetilde{Z}_1 + X$ is kinematically not allowed, and, moreover, $s \to SM$ decays inject entropy into the universe that dilutes the relics. In the high $f_a$ region, saxions are coherently produced in large amounts, hence their decays increase the neutralino density even though BR$(s \to \widetilde{Z}_1 + \widetilde{Z}_j)$ is suppressed. For the benchmark point, BBN violation starts at $f_a \simeq 2 \times 10^{14}$ GeV and $\Delta_{N_{\rm eff}} >1$ for $f_a \simeq 8 \times 10^{13}$ GeV. Nonetheless, such points are already excluded by the dark matter density constraint. The most constraining upper bound on $f_a$ comes from the Fermi-LAT/MAGIC exclusion, $\Omega_{\widetilde{Z}_1}h^2 \leq 0.043$. The region excluded by the indirect DM search is shaded purple in Fig.~\ref{bolt}(b); $f_a$ values greater than $2 \times 10^{14}$ GeV are not allowed.
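The rescaling argument can be made concrete: the indirect-detection flux scales as $\xi^2\langle\sigma v\rangle$, so demanding that it stay below the published $\xi=1$ limit bounds the neutralino fraction. A minimal Python sketch follows; the value of `sigma_v_limit` is an illustrative assumption (not a published number), chosen to reproduce the quoted bound $\Omega_{\widetilde{Z}_1}h^2 \leq 0.043$:

```python
import math

# Benchmark values taken from the text.
sigma_v = 4e-26          # thermal <sigma v> of the benchmark point, cm^3/s
omega_cdm = 0.12         # total cold dark matter density, Omega h^2

# Hypothetical Fermi-LAT/MAGIC limit on <sigma v> for a xi = 1 WIMP in the
# W+W- channel -- an illustrative assumption, not a published number.
sigma_v_limit = 5.2e-27  # cm^3/s

# The annihilation signal scales as xi^2 <sigma v>, where
# xi = Omega_Z1 h^2 / 0.12, so the bound on the neutralino fraction is:
xi_max = math.sqrt(sigma_v_limit / sigma_v)
omega_z1_max = xi_max * omega_cdm

print(f"xi_max ~ {xi_max:.2f}, Omega_Z1 h^2 <= {omega_z1_max:.3f}")
```

With these inputs $\xi_{\rm max}\simeq 0.36$ and $\Omega_{\widetilde{Z}_1}h^2 \lesssim 0.043$, i.e. the $\simeq36\%$ maximal composition quoted above.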
In the $f_a$ range between $6\times 10^{12}$ GeV and $3\times 10^{13}$ GeV, WIMPs are overproduced. For some parameter choices with a softer constraint from the indirect DM searches, the diluted region can lie within the allowed range, and a continuous range of $f_a$ up to $2 \times 10^{14}$ GeV can be allowed. For saxion and axino masses below $\sim$30 TeV (black dots), the upper bound on $f_a$ is $6 \times 10^{12}$ GeV, since entropy injection from saxion decays cannot lower $\Omega_{\widetilde{Z}_1}h^2$ into the allowed range. The yellow points show the results with $\theta_i \gtrsim 3$. Considering naturalness in the PQ sector, values of $\theta_i \sim \pi$ are fine-tuned, so the lower bound on $f_a$ in this natural DFSZ scenario is $10^{11}$ GeV. In the scenario shown by the black dots only, the axion is expected to have a mass $2$ $\mu$eV $\lesssim m_a \lesssim$ $60$ $\mu$eV, which lies mostly within the projected sensitivity of the ADMX Gen2 experiment~\cite{Stern:2016bbw}. The large $f_a$ region, which is accessible with heavy $m_{\tilde{a}/s}$, is not within the range of axion search experiments. In a simple scenario where the axion is the only dark matter candidate, setting $\theta_i=1$, the axion is overproduced if its mass is lighter than $\sim 2$ $\mu$eV. ADMX Gen2 can search for axion masses from 2 to 40 $\mu$eV, with high DFSZ sensitivity up to $m_a \simeq 25$ $\mu$eV. For the $\xi_s=0$ case, in which the decays $s\to \tilde{a} \tilde{a}$ and $s\to a a$ are turned off, the upper bound on $f_a$ is more severe: $\Omega_{\widetilde{Z}_1}h^2$ strictly increases with increasing $f_a$, since BR$(s \to \tilde{a} + \tilde{a}) =0$ and there is no dilution mechanism from $s\to SM$ entropy injection into the thermal bath. \section{Summary} The natural SUSY scenario is being probed by both LHC and WIMP searches.
The projected reaches of multi-tonne direct WIMP detection experiments (DarkSide, DEAP, LZ, XENONnT, DARWIN) cover almost all of the well-motivated SUSY dark matter models. Moreover, DARWIN is projected to probe very low scattering cross sections, almost down to the neutrino background, for WIMP masses between 0.1 and 1 TeV. WIMPs in natural SUSY are expected to have a mass less than $\sim$350 GeV, which is a unique signature of underabundant SUSY DM scenarios. In such models, detection of the WIMP (a higgsino-like neutralino) is ultimately expected. As the $g_{a\gamma \gamma}$ coupling in the SUSY DFSZ model is lower than that in the non-SUSY DFSZ axion model, it is not clear whether ADMX Gen2 searches will be able to reach the $g_{a\gamma \gamma}$ coupling predicted by SUSY DFSZ. An axion detection within the projected reaches would indicate a mainly-axion admixture in which the neutralino contribution to the total DM density is low. \section{Acknowledgements} I would like to thank K.J. Bae, H. Baer, V. Barger and A. Lessa for earlier collaborations on this topic and Caroline Serce for proof-reading. The computing for this project was performed at the OU Supercomputing Center for Education \& Research (OSCER) at the University of Oklahoma (OU). \nocite{*} \bibliographystyle{aipnum-cp}%
\section{INTRODUCTION} Integrated pulse profiles are obtained by integrating tens of thousands of individual pulses. Features of pulse profiles have been investigated to understand the geometry and physical processes within the pulsar magnetosphere \citep[e.g.][]{ran83, lm88, kwj+94, nsk+15}. Integrated pulse profiles generally comprise several components, and are characterized by diverse polarization features, including prominent linear polarization, `S'-shaped polarization angle curves, and single signs or sign reversals of circular polarization. Some integrated pulse profiles are highly polarized over the whole pulse, even 100\% polarized, such as those of PSRs B1259-63 and B1823-13. These pulsars are generally young and have very high spin-down luminosities $\dot{E}$ and flat spectra \citep{qml+95, vkk98, cmk01, wj08}. Some pulsars have highly linearly polarized leading or trailing components, for example, the leading components of PSRs B0355+54 and B0450+55 \citep{lm88, vx97, gl98}, and the trailing components of PSRs B1650-38 and B1931+24 \citep{kjm05, hdv+09}. \citet{vkk98} noticed that the highly polarized leading component of PSR B0355+54 has a flat spectrum and becomes increasingly prominent at higher frequencies. The highly polarized trailing component of PSR B2224+65 has a flat polarization angle curve \citep{mr11}. \begin{figure*} \centering \begin{tabular}{c} \includegraphics[angle=0, width=0.8\textwidth] {J1048-5832_prof.ps} \\ \includegraphics[angle=0, width=0.8\textwidth] {J2225+6535_prof.ps} \\ \end{tabular} \caption{Highly polarized leading component of PSR J1048-5832 and trailing component of PSR J2225+6535 at three frequencies, showing their frequency evolution. The solid lines stand for the total intensity, and the dashed and dash-dotted lines represent the linear and circular polarization, respectively. The position angle curves are shown by dotted lines in the upper part of each panel.
The polarization data are collected from the literature listed in Table \ref{high_L}. } \label{fig:profiles} \end{figure*} Theoretical efforts have been made to understand the various polarization features. In general, pulsar polarization is closely related to the emission processes of relativistic particles streaming along the curved magnetic field lines \citep[e.g.][]{bcw91,wwh12}, to propagation effects within the pulsar magnetosphere \citep[e.g.][]{ba86,wlh10,bp12}, and to scattering within the interstellar medium \citep{lh03}. However, these investigations have mostly been conducted separately on each aspect, and rarely jointly. Curvature radiation, which serves as one of the most probable mechanisms for pulsar emission, can produce highly polarized emission \citep{gan10,wwh12}. Studies of propagation effects have succeeded in demonstrating the interaction of the ordinary (O) and extraordinary (X) modes within the pulsar magnetosphere, which can lead to diverse depolarization features \citep{cr79,ba86,wlh10,bp12}, though the initial intensity ratio of the two modes is uncertain. Propagation effects within the interstellar medium need further investigation. Recently, we investigated the emission processes jointly with the propagation effects \citep{wwh14, wwh15}, which provides a new opportunity to understand the highly polarized components, because the distributions of the X-mode and O-mode within the pulsar magnetosphere are related to the depolarization across the pulsar emission beam once refraction and corotation effects are considered. Emission can be highly depolarized in beam regions where both modes have comparable intensities, but dominated by one mode in other regions, and hence the resulting profile can be highly polarized. In this paper, we summarize observations of highly polarized components of integrated pulse profiles in the literature and then explain them theoretically by modeling the emission and propagation processes.
In Section 2, we analyze various features for highly polarized components of observed pulsar profiles. In Section 3, we simulate polarized pulsar beams and pulse profiles by considering the emission processes and propagation effects. Discussions and conclusions are given in Section 4. \begin{table*} \caption{Highly polarized components of 78 pulsars in literature. } \label{high_L} \tabcolsep 1mm \begin{tabular}{llrrll} \hline \hline PSR Jname & Bname & Period & DM & Polarization Features & References \\ & & (s) & ($\rm cm^{-3} pc$)& & \\ \hline J0014+4746 & B0011+47 & 1.24069 & 30.8 & Leading & 31, 70, 86 \\ J0358+5413 & B0355+54 & 0.15638 & 57.1 & Leading, Flat Spec., Orth. Modes & 8, 9, 16, 20, 24, 26, 30, 31, 34, 77, 81 \\ J0454+5543 & B0450+55 & 0.34072 & 14.5 & Leading, Flat Spec., Flat PA & 16, 20, 30, 31, 81 \\ J0814+7429 & B0809+74 & 1.29224 & 5.7 & Leading, Orth. Modes & 1, 9, 30, 31, 44, 58, 78, 88\\ J0942-5657 & B0941-56 & 0.80812 & 159.7 & Leading & 23, 33, 69, 90 \\ J0954-5430 & & 0.47283 & 200.3 & Leading & 69, 90 \\ J1048-5832 & B1046-58 & 0.12367 & 129.1 & Leading, Flat Spec. & 23, 56, 59, 69, 76, 89, 90 \\ J1057-5226IP& B1055-52& 0.19710 & 30.1 & Leading, Flat Spec. & 4, 6, 12, 16, 29, 69, 72, 77, 89, 90 \\ J1112-6103 & & 0.06496 & 599.1 & Leading, Scattering & 69, 89, 90 \\ J1341-6220 & B1338-62 & 0.19333 & 717.3 & Leading, Scattering & 23, 39, 59, 60, 69, 90 \\ J1410-6132 & & 0.05005 & 960.0 & Leading, Scattering & 68, 69, 89, 90 \\ J1453-6413 & B1449-64 & 0.17948 & 71.0 & Leading, Flat PA, Orth. Modes & 5, 6, 7, 29, 54, 59, 69, 71, 81, 90 \\ J1730-3350 & B1727-33 & 0.13946 & 259.0 & Leading, Scattering & 31, 39, 59, 69, 89, 90 \\ J1805+0306 & B1802+03 & 0.21871 & 80.8 & Leading, Flat PA & 31, 38 \\ J1823-3106 & B1820-31 & 0.28405 & 50.2 & Leading & 21, 31 \\ J1825-0935MP& B1822-09& 0.76900 & 19.3 & Leading, Flat Spec., Orth. 
Modes& 7, 9, 24, 29, 30, 31, 49, 61, 63, 65, 73, 77 \\ J1844+1454 & B1842+14 & 0.37546 & 41.4 & Leading, Flat Spec., Flat PA & 17, 31, 38, 54, 65, 74, 77, 81, 90 \\ J1849+2423 & & 0.27564 & 62.2 & Leading & 70 \\ J1937+2544 & B1935+25 & 0.20098 & 53.2 & Leading, Flat PA & 31, 38, 63, 70, 81, 90 \\ J2008+2513 & & 0.58919 & 60.5 & Leading, Orth. Modes & 70 \\ \hline J0601-0527 & B0559-05& 0.39596 & 80.5 & Trailing, Flat Spec., Flat PA & 20, 23, 31, 34, 54, 69, 90 \\ J0922+0638 &B0919+06 & 0.43062 & 27.2 & Trailing, Flat Spec., Orth. Modes & 13, 19, 24, 29--31, 38, 49, 54, 59 \\ & & & & & 62, 65, 74, 81, 90 \\ J1401-6357 & B1358-63& 0.84278 & 98.0 & Trailing & 21, 23, 29 \\ J1539-5626 & B1535-56& 0.24339 & 175.8 & Trailing, Flat Spec., Flat PA & 23, 56, 59, 69, 90 \\ J1548-5607 & & 0.17093 & 315.5 & Trailing & 69, 90 \\ J1653-3838 &B1650-38 & 0.30503 & 207.2 & Trailing & 56, 69, 90 \\ J1739-1313 & & 1.21569 & 58.2 & Trailing & 69, 90 \\ J1808-3249 & & 0.36491 & 147.3 & Trailing & 56, 69, 90 \\ J1933+2421 &B1931+24 & 0.81369 & 106.0 & Trailing & 31, 70 \\ J2013+3845 & B2011+38& 0.23019 & 238.2 & Trailing, Flat PA & 31, 81 \\ J2225+6535 &B2224+65 & 0.68254 & 36.0 & Trailing, Flat Spec., Flat PA & 9, 16, 31, 77, 86, 88 \\ \hline J0737-3039A& & 0.02269 & 48.9 & MSP, Leading\&Trailing & 47, 55, 57, 82 \\ & & & & Flat PA, Orth. Mod & \\ J1012+5307 & & 0.00525 & 9.0 & MSP, Trailing, Flat PA & 35, 37, 70, 88 \\ J1022+1001 & & 0.01645 & 10.2 & MSP, Trailing & 35--37, 46, 48, 52, 80, 84, 85, 87, 88 \\ J1300+1240 &B1257+12 & 0.00621 & 10.1 & MSP, Leading & 35, 70 \\ \hline J0108-1431 & & 0.80756 & 2.3 & Whole & 33, 69, 90 \\ J0134-2937 & & 0.13696 & 21.8 & Whole & 33, 65, 69, 71, 90 \\ J0139+5814 & B0136+57 & 0.27245 & 73.7 & Whole & 16, 20, 30, 31, 81, 86, 88 \\ J0538+2817 & & 0.14315 & 39.5 & Whole & 30, 81 \\ J0543+2329 & B0540+23 & 0.24597 & 77.7 & Whole, Pol. dec. with freq. 
& 9, 10, 17, 19, 24, 30, 31, 38, 49 \\ & & & & & 53, 63, 65, 69, 74, 77, 81, 90 \\ J0614+2229 & B0611+22 & 0.33495 & 96.9 & Whole, Strong CP & 10, 16, 19, 31, 34, 38, 53, 63, 65, 69, 74 \\ J0630-2834 & B0628-28 & 1.24441 & 34.4 & Whole & 5, 6, 7, 9, 16, 29, 31, 34, 54, 59, 65, 69, 81, 90 \\ J0631+1036 & & 0.28780 & 125.3 & Whole & 27, 69, 75, 89, 90 \\ J0659+1414 & B0656+14 & 0.38489 & 13.9 & Whole & 17, 31, 38, 40, 53, 59, 63, 64, 69, 74 \\ & & & & & 75, 81, 89, 90 \\ J0742-2822 & B0740-28 & 0.16676 & 73.7 & Whole & 6, 7, 9, 16, 24, 29--31, 33, 49, 54, 59 \\ & & & & & 61, 69, 71, 75, 77, 81, 83, 89, 90 \\ J0835-4510 & B0833-45 & 0.08932 & 67.9 & Whole, Strong CP & 2, 3, 5--7, 11, 29, 43, 54, 59, 61 \\ & & & & & 69, 71, 76, 81, 89, 90 \\ J0901-4624 & & 0.44199 & 198.8 & Whole, Strong CP & 69, 90 \\ J0905-5127 & & 0.34628 & 196.4 & Whole & 69, 90 \\ J0908-4913 & B0906-49 & 0.10675 & 180.3 & Whole, Inter Pulse & 21, 23, 32, 59, 66, 69, 76, 89, 90 \\ J1015-5719 & & 0.13988 & 278.7 & Whole & 60, 69, 90 \\ J1028-5819 & & 0.09140 & 96.5 & Whole & 67, 69, 89 \\ J1057-5226MP& B1055-52& 0.19710 & 30.1 & Whole & 4, 6, 12, 16, 29, 69, 72,77, 89, 90 \\ J1105-6107 & & 0.06319 & 271.0 & Whole & 39, 60, 69, 89, 90 \\ \hline \end{tabular} \end{table*} \begin{table*} \addtocounter{table}{-1} \caption{ -- continued. 
} \tabcolsep 1mm \begin{tabular}{lllrll} \hline J1119-6127 & & 0.40796 & 707.4 & Whole & 45, 60, 69, 79, 89, 90 \\ J1302-6350 & B1259-63 & 0.04776 & 146.7 & Whole, Strong CP & 22, 25, 42, 56, 59, 69, 90 \\ J1321+8323 & B1322+83 & 0.67003 & 13.3 & Whole & 31, 70, 86 \\ J1359-6038 & B1356-60 & 0.12750 & 293.7 & Whole, Strong CP & 21, 29, 33, 59, 61, 69, 90 \\ J1420-6048 & & 0.06817 & 358.8 & Whole, Strong CP & 41, 60, 69, 75, 89, 90 \\ J1614-5048 & B1610-50 & 0.23169 & 582.8 & Whole, Scattering, Strong CP & 23, 56, 69, 90 \\ J1637-4553 & B1634-45 & 0.11877 & 193.2 & Whole & 56, 69, 90 \\ J1702-4128 & & 0.18213 & 367.1 & Whole & 69, 89, 90 \\ J1705-1906IP& B1702-19& 0.29898 & 22.9 & Whole, Strong CP, Inter-pulse & 15, 16, 29, 31, 34, 49, 65, 69, 90 \\ J1705-3950 & & 0.31894 & 207.1 & Whole, Strong CP & 69, 90 \\ J1709-4429 & B1706-44 & 0.10245 & 75.6 & Whole, Strong CP & 23, 54, 56, 59, 69, 89, 90 \\ J1718-3825 & & 0.07466 & 247.4 & Whole & 69, 75, 89, 90 \\ J1733-3716 & B1730-37 & 0.33758 & 153.5 & Whole & 56, 69, 90 \\ J1740-3015 & B1737-30 & 0.60688 & 152.1 & Whole, Strong CP & 21, 23, 31, 34, 59, 69, 76, 90 \\ J1801-2451 & B1757-24 & 0.12491 & 289.0 & Whole, Strong CP & 31, 63, 69, 81, 86, 89, 90 \\ J1803-2137 & B1800-21 & 0.13366 & 233.9 & Whole, Strong CP & 21, 31, 34, 69, 86, 90 \\ J1809-1917 & & 0.08274 & 197.1 & Whole, Strong CP & 69, 90 \\ J1826-1334 & B1823-13 & 0.10148 & 231.0 & Whole, Strong CP & 31, 34, 69, 90 \\ J1830-1059 & B1828-11 & 0.40504 & 161.5 & Whole, Strong CP & 31, 69, 90 \\ J1841-0345 & & 0.20406 & 194.3 & Whole & 69, 90 \\ J1841-0425 & B1838-04 & 0.18614 & 325.4 & Whole & 31, 63, 69, 90 \\ J1850+1335 & B1848+13 & 0.34558 & 60.1 & Whole & 31, 38, 63, 65, 81, 90 \\ J1915+1009 & B1913+10 & 0.40454 & 241.6 & Whole, Strong CP & 17, 31, 34, 38, 63, 81, 90 \\ J1926+1648 & B1924+16 & 0.57982 & 176.8 & Whole & 10, 17, 19, 31, 38, 77 \\ J1932+1059 & B1929+10 & 0.22651 & 3.1 & Whole & 2, 9, 10, 13, 14, 16--20, 24, 26, 28--31, 37, 38 \\ & & & & & 40, 
49--51, 53, 54, 70, 74, 81, 86, 88, 90 \\ \hline \end{tabular} \parbox{180mm} { Notes. Leading: Highly polarized (larger than 70\%) leading components; Trailing: highly polarized trailing components; MSP: millisecond pulsars with highly polarized leading and/or trailing components; Whole: highly polarized for the whole pulse profiles; Flat Spec.: flat spectrum; Flat PA: flat polarization angle curves; Orth. Modes: orthogonal modes; Strong CP: strong circular polarization. References: (1) \citet{lsg71} at 0.151, 0.24, 0.408 GHz; (2) \citet{man71} at 0.392, 1.665 GHz; (3) \citet{kmr74} at 4.83 GHz; (4) \citet{mha+76} at 1.4 GHz; (5) \citet{hma+77} at 0.338, 0.4 GHz; (6) \citet{mhm+78} at 0.631, 0.649 GHz; (7) \citet{mhm80} at 1.612 GHz; (8) \citet{msf+80} at 2.65 GHz; (9) \citet{mgs+81} at 1.72, 2.65, 4.85, 8.7 GHz; (10) \citet{rb81} at 0.43 GHz; (11) \citet{kd83} at 2.295 GHz; (12) \citet{ran83} at 0.17, 0.631 GHz; (13) \citet{scr+84} at 1.404 GHz; (14) \citet{scw+84} at 0.8 GHz; (15) \citet{blh+88} at 0.408 GHz; (16) \citet{lm88} at 0.408, 0.415, 0.43, 0.611, 0.64, 1.42 GHz; (17) \citet{rsw89} at 1.4 GHz; (18) \citet{ph90} at 0.43, 1.665 GHz; (19) \citet{bcw91} at 0.43, 1.418 GHz; (20) \citet{xrs+91} at 1.72 GHz; (21) \citet{wml+93} at 1.56 GHz; (22) \citet{mj95} at 1.52, 4.68 GHz; (23) \citet{qml+95} at 0.66, 1.411, 1.44 GHz; (24) \citet{xsg+95} at 10.55 GHz; (25) \citet{jml+96} at 4.8 GHz; (26) \citet{xkj+96} at 32.0 GHz; (27) \citet{zcw+96} at 1.418, 1.665, 2.38 GHz; (28) \citet{rr97} at 0.43, 1.414 GHz; (29) \citet{vdh+97} at 0.8, 0.95 GHz; (30) \citet{vx97} at 1.41, 1.71, 4.85, 10.55 GHz; (31) \citet{gl98} at 0.23, 0.4, 0.6, 0.92, 1.4, 1.6 GHz; (32) \citet{gsf+98} at 1.3 GHz; (33) \citet{mhq98} at 0.435, 0.66 GHz; (34) \citet{vkk98} at 4.85, 10.55 GHz; (35) \citet{xkj+98} at 1.41 GHz; (36) \citet{kxc+99} at 1.41 GHz; (37) \citet{stc99} at 0.41, 0.61, 1.414 GHz; (38) \citet{wcl+99} at 1.418 GHz; (39) \citet{cmk01} at 0.661, 1.351 GHz; (40) \citet{ew01} at 1.418 
GHz; (41) \citet{rrj01} at 1.517 GHz; (42) \citet{cjm+02} at 1.4 GHz; (43) \citet{kjv02} at 2.3 GHz; (44) \citet{rrs+02} at 0.328, 1.365 GHz; (45) \citet{ck03} at 1.366, 2.496 GHz; (46) \citet{rk03} at 1.42 GHz; (47) \citet{drb+04} at 0.82 GHz; (48) \citet{hbo04} at 1.341 GHz; (49) \citet{kj04} at 1.4, 4.85 GHz; (50) \citet{mc04} at 1.404 GHz; (51) \citet{mr04} at 0.43, 1.17 GHz; (52) \citet{ovh+04} at 1.373 GHz; (53) \citet{wck+04} at 0.43 GHz; (54) \citet{jhv+05} at 1.4 GHz; (55) \citet{hbo05} at 0.685, 1.373 GHz; (56) \citet{kjm05} at 3.1 GHz; (57) \citet{rdk+05} at 0.82 GHz; (58) \citet{rrs05} at 0.112, 0.328 GHz; (59) \citet{jkw06} at 8.4 GHz; (60) \citet{jw06} at 1.369, 3.1 GHz; (61) \citet{kj06} at 1.375, 3.1 GHz; (62) \citet{rrw06} at 0.327, 1.425 GHz; (63) \citet{jkk+07} at 0.691, 1.374, 3.1 GHz; (64) \citet{ran07} at 1.525 GHz; (65) \citet{jkm+08} at 0.243, 0.322, 0.69, 1.4, 3.1 GHz; (66) \citet{kj08} at 1.4, 3.1, 8.6 GHz; (67) \citet{kjk+08} at 1.37, 3.087 GHz; (68) \citet{ojk+08} at 3.1, 6.2 GHz; (69) \citet{wj08} at 1.5, 3.0 GHz; (70) \citet{hdv+09} at 0.774 GHz; (71) \citet{nkk+09} at 1.369, 1.375 GHz; (72) \citet{ww09} at 1.369 GHz; (73) \citet{bmr10} at 0.325 GHz; (74) \citet{hr10} at 0.0492, 0.132, 0.43, 1.404 GHz; (75) \citet{waa+10} at 1.369 GHz; (76) \citet{kjl+11} at 17,24 GHz; (77) \citet{mr11} at 0.325 GHz; (78) \citet{rd11} at 0.82 GHz; (79) \citet{wje11} at 1.5 GHz; (80) \citet{ymv+11} at 1.369 GHz; (81) \citet{nkc+12} at 1.4, 2.7, 3.1, 4.85 GHz; (82) \citet{gkj+13} at 1.4 GHz; (83) \citet{ksj13} at 1.369, 3.1 GHz; (84) \citet{van13} at 1.341 GHz; (85) \citet{dhm+15} at 0.6, 1.5, 3.0 GHz; (86) \citet{fdr15} at 1.5 GHz; (87) \citet{lkl+15} at 1.3 GHz; (88) \citet{nsk+15} at 0.15 GHz; (89) \citet{rwj15} at 1.5, 3.1, 6.0 GHz; (90) Johnston et.al. http://www.atnf.csiro.au/people/joh414/ppdata/. 
} \end{table*} \section{Observational features of highly polarized pulse components} The highly polarized components of integrated pulse profiles exhibit diverse polarization features. To demonstrate these properties, a sample of 78 pulsars is collected from the literature, as listed in Table~\ref{high_L}. Among them, 20 pulsars have highly polarized leading components, 11 pulsars have highly polarized trailing components, four millisecond pulsars have highly polarized leading and/or trailing components, and 43 pulsars are highly polarized over the whole pulse profile. For these pulsars, the fractional linear polarization is larger than 70\% for the highly polarized components or the whole profile at more than one frequency. \subsection{Flat spectra of highly polarized components} Multi-frequency observations demonstrate that pulsar flux density generally decreases with frequency, following a power-law spectrum \citep[e.g.][]{sie73}. Different components of a given pulsar can evolve differently with frequency. For example, the relative spectra of the leading and trailing components are diverse for the conal double pulsars \citep{whq01}. The highly polarized components also show frequency evolution. Fig.~\ref{fig:profiles} shows the polarized pulse profiles at three frequencies for two pulsars, J1048-5832 and J2225+6535. PSR J1048-5832 exhibits a highly polarized leading component with a polarization degree approaching 100\%. At 692 MHz, the peak intensity of the highly polarized leading component is weaker than that of the low polarized trailing component. As the observing frequency increases, the highly polarized leading component gradually dominates, as shown by the profiles at 1369 and 3068 MHz. Similar features are seen for PSRs J0358+5413, J0454+5543, J1057-5226IP, J1825-0935MP and J1844+1454. In contrast, PSR J2225+6535 is an example of a highly polarized trailing component, which becomes dominant as the observing frequency increases.
Similar cases can be found for PSRs J0601-0527, J0922+0638 and J1539-5626. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[angle=0, width=0.222\textwidth] {J0358+5413.ps} & \includegraphics[angle=0, width=0.222\textwidth] {J0454+5543.ps} \\ \includegraphics[angle=0, width=0.222\textwidth] {J1048-5832.ps} & \includegraphics[angle=0, width=0.222\textwidth] {J1057-5226IP.ps} \\ \includegraphics[angle=0, width=0.222\textwidth] {J1825-0935MP.ps} & \includegraphics[angle=0, width=0.222\textwidth] {J1844+1454.ps} \\ \end{tabular} \caption{The frequency evolution of the peak intensity ratio of the highly polarized leading components of six pulsars with respect to the low polarized trailing ones. The intensity ratios are listed in Table~\ref{table:leading_ratio} and can be described by a power-law, $\rm I_{HiP}/I_{LowP} \sim a\nu^k$. } \label{fig:leading_ratio} \end{figure} \begin{figure} \centering \begin{tabular}{cc} \includegraphics[angle=0, width=0.222\textwidth] {J0601-0527.ps} & \includegraphics[angle=0, width=0.222\textwidth] {J0922+0638.ps} \\ \includegraphics[angle=0, width=0.222\textwidth] {J1539-5626.ps} & \includegraphics[angle=0, width=0.222\textwidth] {J2225+6535.ps} \\ \end{tabular} \caption{Same as Fig.~\ref{fig:leading_ratio} but for the highly polarized trailing components of four pulsars. Data are listed in Table~\ref{table:trailing_ratio}. } \label{fig:trailing_ratio} \end{figure} Figs.~\ref{fig:leading_ratio} and \ref{fig:trailing_ratio} quantitatively demonstrate the frequency evolution of the peak intensity ratios, $\rm I_{HiP}/I_{LowP}$, of the highly polarized components with respect to the low polarized ones at a series of frequencies; the data are listed in Tables \ref{table:leading_ratio} and \ref{table:trailing_ratio} in the Appendix.
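The power-law description $\rm I_{HiP}/I_{LowP} \sim a\nu^k$ used in these figures amounts to a linear least-squares fit in log-log space. A minimal sketch with made-up ratio data (the measured values are those tabulated in the Appendix):

```python
import numpy as np

# Illustrative (made-up) frequencies in GHz and peak-intensity ratios;
# the measured ratios are tabulated in the Appendix tables.
freq = np.array([0.4, 0.7, 1.4, 3.1])
ratio = np.array([0.5, 0.8, 1.6, 3.5])

# Fit I_HiP/I_LowP = a * nu^k, i.e. log(ratio) = log(a) + k * log(nu).
k, log_a = np.polyfit(np.log(freq), np.log(ratio), 1)
a = np.exp(log_a)

print(f"power-law index k = {k:.2f}, amplitude a = {a:.2f}")
```

For these illustrative data the fitted index is $k\simeq1$, within the observed range of 0.35 to 1.79.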
Clearly, $\rm I_{HiP}/I_{LowP}$ generally increases with frequency for highly polarized leading or trailing components, and can be described by a power-law, though the power-law indices vary from 0.35 to 1.79 for different pulsars. We conclude that the highly polarized components exhibit a flatter spectrum than the low polarized components, regardless of their location at the leading or trailing phase. \subsection{Polarization angle curves of highly polarized components} \begin{table} \begin{footnotesize} \caption{Gradients of polarization angle curves for the highly polarized and low polarized components. References are numbered in Table A1.} \label{table:PA_gradient} \tabcolsep 1.5mm \scriptsize \begin{tabular}{lrlrrl} \hline \hline PSR & Hi. Pol. Comp. & Freq. & $\Delta PA/\Delta\phi$ & $\Delta PA/\Delta \phi$ & Ref. \\ & & (GHz) & High Pol. Comp. & Low Pol. Comp. & \\ \hline J0454+5543 & Leading & 1.41 & $-0.6\pm0.2$ & $-7.8\pm0.3$ & 30 \\ J1453-6413 & Leading & 1.4 & $0.0\pm0.2$ & $8.0\pm0.3$ & 81 \\ J1805+0306 & Leading & 1.4 & $-1.2\pm0.5$ & $13.1\pm0.6$ & 38 \\ J1844+1454 & Leading & 1.4 & $2.4\pm1.1$ & $17.0\pm2.1$ & 54 \\ J1937+2544 & Leading & 0.774 & $-1.5\pm0.3$ & $-7.6\pm0.6$ & 70 \\ J0601-0527 & Trailing & 0.692 & $2.6\pm0.6$ & $5.6\pm1.0$ & 90 \\ J2225+6535 & Trailing & 0.325 & $-0.1\pm0.8$ & $-3.7\pm0.7$ & 77 \\ \hline J0358+5413 & Leading & 1.408 & $-1.4\pm0.2$ & Orth. Modes & 31 \\ J0814+7429 & Leading & 1.41 & $-0.6\pm0.2$ & Orth. Modes & 30 \\ J1825-0935 & Leading & 0.691 & $3.7\pm0.2$ & Orth. Modes & 63 \\ J2008+2513 & Leading & 0.774 & $1.4\pm0.7$ & Orth. Modes & 70 \\ J0922+0638 & Trailing & 0.692 & $5.2\pm0.4$ & Orth. Modes & 90 \\ \hline J1112-6103 & Leading & 1.5 & $-0.6\pm0.1$ & Scattering & 69 \\ & & 3.0 & $-5.0\pm0.1$ & - & 69 \\ J1341-6220 & Leading & 1.5 & $0.6\pm0.1$ & Scattering & 60 \\ & & 3.0 & $7.1\pm0.2$ & - & 60 \\ J1410-6132 & Leading & 1.5 & $0.0\pm0.1$ & Scattering & 69 \\ & & 3.1 & $4.0\pm0.2$ & - & 68 \\ J1730-3350 & Leading & 1.5 & $-1.5\pm0.1$ & Scattering & 69 \\ & & 3.0 & $-5.4\pm0.3$ & - & 69 \\ \hline J0014+4746 & Leading & 0.774 & $-1.0\pm0.1$ & $-1.4\pm0.1$ & 70 \\ J0954-5430 & Leading & 1.4 & $12.1\pm2.1$ & $10.8\pm1.7$ & 90 \\ J1057-5226IP& Leading & 1.377 & $0.6\pm0.5$ & $1.2\pm1.3$ & 90 \\ J0942-5657 & Leading & 1.5 & $14.2\pm0.4$ & Mixed & 69 \\ J1048-5832 & Leading & 1.369 & $4.3\pm0.2$ & Mixed & 90 \\ J1823-3106 & Leading & 1.4 & $-4.5\pm0.3$ & Mixed & 31 \\ J1401-6357 & Trailing & 0.955 & $8.1\pm1.4$ & Mixed & 29 \\ J1739-1313 & Trailing & 1.377 & $8.9\pm1.5$ & Mixed & 90 \\ J2013+3845 & Trailing & 1.408 & $-1.0\pm0.2$ & Mixed & 31 \\ J1849+2423 & Leading & 0.774 & $-1.2\pm0.2$ & Weak Pol. & 70 \\ J1539-5626 & Trailing & 1.5 & $0.0\pm0.1$ & Weak Pol. & 69 \\ J1548-5607 & Trailing & 1.4 & $2.4\pm0.3$ & Weak Pol. & 90 \\ J1653-3838 & Trailing & 1.377 & $3.4\pm1.2$ & Weak Pol. & 90 \\ J1808-3249 & Trailing & 1.377 & $-8.2\pm1.6$ & Weak Pol. & 90 \\ J1933+2421 & Trailing & 0.774 & $9.4\pm0.3$ & Weak Pol. & 70 \\ \hline J0737-3039A& MSP-Leading & 1.4 & $0.0\pm0.2$ & Orth. Modes & 82 \\ & MSP-Trailing & 1.4 & $0.0\pm0.3$ & - & 82 \\ J1012+5307 & MSP-Trailing & 0.774 & $-0.4\pm0.1$ & Mixed & 70 \\ J1022+1001 & MSP-Trailing & 1.3 & $3.3\pm0.2$ & $4.4\pm0.1$ & 87 \\ J1300+1240 & MSP-Leading & 0.774 & $0.4\pm0.3$ & Weak Pol. & 70 \\ \hline \end{tabular} \parbox{85mm}{ Note: Gradients of polarization angle curves for the low polarized components are hard to determine for various reasons, as listed in the fifth column. } \end{footnotesize} \end{table} Highly polarized components also differ from low polarized components in their polarization angle curves.
Table~\ref{table:PA_gradient} summarizes the gradients of the polarization angle curves for 35 pulsars extracted from Table~\ref{high_L}. The highly polarized components generally have flat polarization angle curves. For example, the gradient of the polarization angle curve for the highly polarized trailing component of PSR J2225+6535 in Fig.~\ref{fig:profiles} is approximately $-0.1$ at 325 MHz, but it is $-3.7$ for the low polarized leading component \citep{mr11}. The gradient is 2.4 for the highly polarized leading component of PSR J1844+1454 at 1.4 GHz, but 17.0 for the low polarized trailing component \citep{jhv+05}. Similarly large differences between the gradients can be found for PSRs J0454+5543, J1453-6413, J1805+0306, J1937+2544 and J0601-0527, as listed in Table~\ref{table:PA_gradient}. This implies that the highly polarized emission of these pulsars might be generated from beam regions well away from the magnetic meridional plane. However, the highly polarized emission of some pulsars might also be produced near the meridional plane, e.g. J0942-5657 and J1933+2421; both have very steep polarization angle curves, with gradients of 14.2 and 9.4 for the highly polarized components. The gradients for the low polarized components of many pulsars are hard to determine for various reasons, as noted in the fifth column of Table~\ref{table:PA_gradient}. \begin{figure} \centering \includegraphics[angle=0, width=0.4\textwidth] {PA-Gradient.ps} \caption{Histograms of the absolute values of the gradients of the polarization angle curves for the highly polarized and low polarized components. The gradient values are listed in Table~\ref{table:PA_gradient}. } \label{fig:PA_gradient} \end{figure} As shown in Fig.~\ref{fig:PA_gradient}, the gradients of the polarization angle curves for the highly polarized components are concentrated near 0.0. The gradients for the low polarized components have fewer data points but are widely distributed.
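The inference that flat polarization angle curves correspond to emission away from the magnetic meridional plane can be illustrated with the standard rotating vector model (not part of this paper's own modeling), in which the PA gradient peaks at $\sin\alpha/\sin(\zeta-\alpha)$ at the meridian and flattens with increasing phase offset. A minimal sketch with the illustrative geometry $\alpha=30^\circ$, $\zeta=34^\circ$:

```python
import math

def rvm_pa(phi, alpha, zeta):
    """Rotating-vector-model position angle (rad) at pulse phase phi (rad)."""
    num = math.sin(alpha) * math.sin(phi)
    den = (math.sin(zeta) * math.cos(alpha)
           - math.cos(zeta) * math.sin(alpha) * math.cos(phi))
    return math.atan2(num, den)

def pa_gradient(phi, alpha, zeta, h=1e-6):
    """Numerical gradient d(PA)/d(phi)."""
    return (rvm_pa(phi + h, alpha, zeta) - rvm_pa(phi - h, alpha, zeta)) / (2 * h)

alpha = math.radians(30.0)   # magnetic inclination (illustrative)
zeta = math.radians(34.0)    # sight-line angle from rotation axis (illustrative)

g_meridian = pa_gradient(0.0, alpha, zeta)   # analytically sin(alpha)/sin(zeta-alpha)
g_offset = pa_gradient(math.radians(60.0), alpha, zeta)

print(f"PA gradient at meridian: {g_meridian:.2f}, 60 deg away: {g_offset:.2f}")
```

For this geometry the gradient is $\simeq7.2$ at the meridian but falls well below unity $60^\circ$ away, consistent with flat PA curves arising off-meridian.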
We therefore conclude that the highly polarized emission tends to have a flat polarization angle curve. \subsection{Depolarization and other properties} There are two mechanisms for the depolarization of pulsar profiles: orthogonally polarized radiation and scattering within the interstellar medium. Single pulse observations \citep[e.g.][]{scr+84,scw+84} show the orthogonal modes of pulsar emission, and the highly polarized components of integrated profiles are generally of one mode. The orthogonal modes often depolarize the integrated profiles and lead to low polarized components, as shown for PSRs J0814+7429 and J0922+0638 by \citet{scr+84} and \citet{rrs+02}. PSRs J0358+5413, J1825-0935 and J2008+2513 also show orthogonal modes and have depolarized trailing components, as listed in Table~\ref{table:PA_gradient}. Scattering of the pulsed emission during its propagation through the interstellar medium can also cause depolarization in the trailing parts of profiles and result in a flat polarization angle curve \citep{lh03}. For example, PSR J1112-6103 has a dispersion measure of 599.1 $\rm cm^{-3}$ pc and has two highly polarized components at 3.1 GHz \citep{wj08}. But at frequencies below 1.5 GHz, the effect of scattering becomes very significant and causes depolarization in the trailing part; the polarization angle curves are also flattened, as indicated by the gradient values in Table~\ref{table:PA_gradient}. Three other pulsars, PSRs J1341-6220, J1410-6132 and J1730-3350, show similar polarization profiles due to scattering. Millisecond pulsars exhibit highly polarized components just as normal pulsars do. PSR J0737-3039A is an orthogonal rotator and has an interpulse. The leading part of the main pulse and the trailing part of the interpulse are highly linearly polarized with a nearly constant position angle; the gradient of the polarization angle curve is near 0.0, as listed in Table~\ref{table:PA_gradient}.
Orthogonal modes may occur at the trailing part of the main pulse and the leading part of the interpulse \citep{gkj+13}. PSR J1012+5307 is an aligned rotator and has emission at almost all rotation phases. The trailing part of the brightest component and all the other components are highly linearly polarized \citep{stc99,hdv+09}. The swing of the polarization angle is nearly flat at all these phases. PSRs J1022+1001 and J1300+1240 show similar polarization features. \section{Theoretical explanations of highly polarized components} The observations can be summarized as follows: the highly polarized components preferentially have a flat spectrum and a flat polarization angle curve; orthogonal modes and scattering can cause depolarization; and millisecond pulsars exhibit highly polarized components similar to those of normal pulsars. Having analyzed literature data to uncover these features of highly polarized components, we now carry out numerical simulations of emission processes and propagation effects to understand the polarization. \subsection{A theoretical model for emission processes and propagation effects} \begin{figure} \centering \includegraphics[angle=0, width=0.4\textwidth]{Patch_distribution.ps} \caption{The distributions of wave modes and fractional linear polarization within a simulated pulsar emission beam. The upper panels are plotted for the X-mode and O-mode intensities, $I_X$ and $I_O$. The bottom panel shows the degree of linear polarization. Seven density patches labeled $\it a$, $\it b$, $\it c$, $\it d$, $\it e$, $\it f$ and $\it g$ are shown in the figure, and their locations are listed in Table~\ref{table:patch_loc}. Example sight lines at $\zeta=31^\circ$ and $34^\circ$ from the rotation axis of the neutron star are indicated by the dashed lines. Other pulsar parameters used for the simulations are the inclination angle of the magnetic axis from the rotation axis, $\alpha=30^\circ$, and the pulsar period, $P=1$~s. 
Relativistic particles are assumed to have a Lorentz factor of $\gamma=500$ and emit at 1.4GHz.} \label{fig:patch_dis} \end{figure} In general, the magnetic field of a pulsar magnetosphere is assumed to be dipolar, \begin{equation} \bmath B=B_{\star}(\frac{R_{\star}}{r})^3[3\hat{\bmath r}(\hat{\bmath r}\cdot \hat{\bmath m})-\hat{\bmath m}], \label{eq:staticb} \end{equation} where $R_{\star}$ and $B_{\star}$ represent the neutron star radius and the magnetic field strength on its surface, and $\hat{\bmath r}$ and $\hat{\bmath m}$ are the unit vectors along $\bmath r$ and the magnetic dipole moment. The magnetic axis is inclined to the rotation axis by an inclination angle $\alpha$, and the star rotates freely in space. Relativistic particles with a Lorentz factor $\gamma$ are produced by the sparking processes above the polar cap. They stream out along the curved magnetic field lines and co-rotate with the pulsar magnetosphere. Owing to the acceleration perpendicular to their motion, the relativistic particles produce curvature radiation. The radiation field $\bmath E(t)$ and its Fourier components $\bmath E(\omega)$ can be calculated by using the circular-path approximation \citep{wwh12}. Curvature radiation at a given position in the pulsar magnetosphere actually contains contributions from all the nearby field lines within a $1/\gamma$ cone around the tangential direction. The polarization patterns of the emission cones are further distorted by rotation effects, as demonstrated by \citet{wwh12}. In general, there are four wave modes (two transverse and two longitudinal) in the plasma of a pulsar magnetosphere \citep{bp12}. Two of these modes are damped at large distances from the neutron star in the magnetosphere. Only the X-mode and the superluminous O-mode, hereafter the O-mode, can escape from the magnetosphere to be observed. Immediately after the waves are generated in the emission region, they are coupled to the local X-mode and O-mode to propagate outwards. 
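The dipole field of Eq.~(\ref{eq:staticb}) can be evaluated directly; the following is a minimal sketch in dimensionless units ($B_{\star}=R_{\star}=1$), not part of the simulation code itself:

```python
import numpy as np

def dipole_field(r_vec, m_hat, B_star=1.0, R_star=1.0):
    # B = B_star (R_star/r)^3 [3 r_hat (r_hat . m_hat) - m_hat]
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return B_star * (R_star / r) ** 3 * (3.0 * r_hat * np.dot(r_hat, m_hat) - m_hat)

m_hat = np.array([0.0, 0.0, 1.0])                        # dipole moment direction
B_pole = dipole_field(np.array([0.0, 0.0, 2.0]), m_hat)  # on the magnetic axis, r = 2 R_star
B_eq = dipole_field(np.array([2.0, 0.0, 0.0]), m_hat)    # in the magnetic equator, r = 2 R_star
# On the axis the field is 2 B_star (R_star/r)^3 along m_hat; in the equator it
# is half as strong and antiparallel to m_hat.
```

Field-line tangents computed from this expression give the emission direction of the curvature radiation at each point.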
Within the $1/\gamma$ emission cone, both components have comparable intensities and propagate separately. The X-mode component propagates in a straight line, while the O-mode component suffers refraction \citep{ba86}. Hence the two mode components become spatially separated \citep{wwh14}. The detectable emission at a given position consists of the incoherent superposition of X-mode and O-mode components coming from discrete emission regions. Both mode components experience `adiabatic walking', wave mode coupling, and cyclotron absorption \citep{wlh10, bp12}. These emission processes and propagation effects have been considered jointly by \citet{wwh14} for four particle density models: uniform, cone, core and patchy. We demonstrated that refraction and co-rotation significantly affect pulsar polarization. Refraction bends the O-mode emission towards the outer part of the pulsar emission beam and causes the separation of the two modes. Co-rotation leads to different intensity ratios of the two modes at different parts of the pulsar emission beam. Investigations of the influence of these two effects have been extended to a wide range of frequencies and have succeeded in demonstrating the frequency dependence of pulsar linear polarization \citep{wwh15}. \begin{table} \begin{center} \caption{Assumed seven density patches within a pulsar emission beam. Here, $\theta_i$ and $\phi_i$ represent the peak positions of the Gaussian density patches in the magnetic colatitude $\theta$ and azimuth $\phi$ directions, within the ranges $0<\theta_i<1$ and $-180^\circ<\phi_i<180^\circ$. $\sigma_\theta$ and $\sigma_\phi$ represent the widths of the Gaussian density distributions of the particles. 
} \label{table:patch_loc} \begin{tabular}{clcrr} \hline \hline Index &$\theta_i$&$\phi_i(^\circ)$&$\sigma_{\theta}$&$\sigma_{\phi}(^\circ)$ \\ \hline $\it a$ & 0.8 & 85 & 0.06 & 5 \\ $\it b$ & 0.5 & 40 & 0.08 & 12 \\ $\it c$ & 0.5 & -55 & 0.09 & 12 \\ $\it d$ & 0.85 & -85 & 0.06 & 5 \\ $\it e$ & 0.8 & 40 & 0.06 & 5 \\ $\it f$ & 0.8 & -10 & 0.06 & 5 \\ $\it g$ & 0.85 & -55 & 0.06 & 5 \\ \hline \end{tabular} \end{center} \end{table} Based on our previous studies \citep{wwh12, wwh14, wwh15}, we here simulate the curvature radiation processes and propagation effects, focusing mainly on the distribution of highly polarized emission regions within the pulsar emission beam. Fig.~\ref{fig:patch_dis} represents a very typical case of the distributions of wave modes and fractional linear polarization, based on the uniform density model demonstrated in \citet{wwh14}. It shows that the intensity distributions of the two modes are quite different. The X-mode components, $I_X$, are stronger at the two sides of the pulsar beam in the $\zeta$ direction, as shown in the top left panel of Fig.~\ref{fig:patch_dis}, while the O-mode components, $I_O$, are stronger at the two sides of the beam in the $\varphi$ direction, as shown in the top right panel. Here, $\zeta$ is the sight line angle, i.e., the angle between the sight line and the rotation axis, and $\varphi$ represents the rotation phase. Depolarization is caused by the superposition of the two modes, while regions of the emission beam dominated by a single mode can be highly polarized. The depolarization leads the distribution of the fractional linear polarization, $|I_X-I_O|/(I_X+I_O)$, to show a quadruple pattern. This implies that the highly polarized emission could be produced at four parts of the pulsar emission beam, i.e., the leading (O-mode), trailing (O-mode), top (X-mode) and bottom (X-mode) parts of the beam. 
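The mode-superposition argument can be made concrete with a two-line sketch of the fractional linear polarization used above (the intensity values are illustrative only):

```python
import numpy as np

def frac_linear(I_X, I_O):
    # Fractional linear polarization of an incoherent superposition of
    # the X-mode and O-mode intensities: |I_X - I_O| / (I_X + I_O).
    return np.abs(I_X - I_O) / (I_X + I_O)

L_dominated = frac_linear(1.0, 0.05)  # one mode dominates: high polarization
L_mixed = frac_linear(1.0, 0.9)       # comparable modes: strong depolarization
```

A region dominated by a single mode thus yields a fractional polarization near unity, while comparable mode intensities depolarize the emission almost completely.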
In order to demonstrate the formation of highly polarized components, seven density patches ($\it a$, $\it b$, $\it c$, $\it d$, $\it e$, $\it f$ and $\it g$) are simulated, as listed in Table~\ref{table:patch_loc}. As shown in the bottom panel of Fig.~\ref{fig:patch_dis}, patches $\it a$ and $\it d$ are dominated by the O-mode emission, while patch $\it f$ is dominated by the X-mode. The emission from these regions should have a large fraction of linear polarization. However, the emission from density patches $\it b$, $\it c$ and $\it e$ contains both the X and O modes with comparable intensities, hence the observed emission from these regions is depolarized. \subsection{Polarized pulse profiles obtained with a small impact angle} \begin{figure} \centering \includegraphics[angle=0, width=0.235\textwidth] {Profile_stack_ab.ps} \includegraphics[angle=0, width=0.235\textwidth] {Profile_stack_cd.ps} \caption{Pulse profiles resulting from sight-line cuts of the density patches ($\it a$, $\it b$) and ($\it c$, $\it d$), which explain the highly polarized leading and trailing components, depending on the density patches available in the emission region. The solid lines represent the total intensity; the dashed and dotted lines are the linear polarization and polarization angle curves. The wave modes are marked near the polarization angle curves.} \label{fig:patch_prof_s} \end{figure} When a sight line with a small impact angle, $\beta=\zeta-\alpha$, cuts across the pulsar emission beam, it detects emission from the density patches $\it a$, $\it b$, $\it c$ and $\it d$ in Fig.~\ref{fig:patch_dis}. The resulting pulse profiles are shown in Fig.~\ref{fig:patch_prof_s}, depending on the available density patch combinations, for example ($\it a$, $\it b$) or ($\it c$, $\it d$). 
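The impact-angle geometry can be quantified with a short sketch. It uses the values of the figure caption ($\alpha=30^\circ$, $\zeta=31^\circ$ and $34^\circ$) and the standard rotating-vector-model estimate of the maximum polarization angle slope, $\sin\alpha/\sin\beta$:

```python
import numpy as np

alpha = np.radians(30.0)                 # magnetic inclination angle
slopes = {}
for zeta_deg in (31.0, 34.0):            # sight-line angles from the figure
    beta = np.radians(zeta_deg) - alpha  # impact angle, beta = zeta - alpha
    slopes[zeta_deg] = np.sin(alpha) / np.sin(beta)  # max |dPA/dphi|
# beta = 1 deg gives a very steep maximum slope (~29); beta = 4 deg gives ~7.
```

The smaller the impact angle, the steeper the polarization angle swing near the meridional plane.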
We can conclude from the simulations that: 1) Highly polarized components can be generated from the leading (patch $\it a$) and trailing (patch $\it d$) parts of the pulsar emission beam, both of which are dominated by the O-mode. 2) The highly polarized components have a flat polarization angle curve, because density patches $\it a$ and $\it d$ are away from the meridional plane of $\varphi=0^\circ$. 3) The low polarized components exhibit orthogonal modes, and the X-mode and O-mode emission have comparable intensities. An orthogonal mode jump occurs when the dominant mode switches from one to the other, as shown by the polarization angle curves. In addition, the simulations predict that highly polarized components are more likely to be generated at the leading parts of pulse profiles, because the highly polarized leading part of the pulsar emission beam is broader than the trailing one (see Fig.~\ref{fig:patch_dis}) due to rotation-induced asymmetry. Highly polarized components would have a flatter spectrum than the low polarized components, because beam regions further away from the magnetic axis tend to have a flat spectrum according to \citet{lm88}, although the detailed spectrum behavior is not modeled in our simulations. \subsection{Polarized pulse profiles obtained with a large impact angle} \begin{figure} \centering \includegraphics[angle=0, width=0.235\textwidth] {Profile_stack_ef.ps} \includegraphics[angle=0, width=0.235\textwidth] {Profile_stack_fg.ps} \includegraphics[angle=0, width=0.235\textwidth] {Profile_stack_efg.ps} \caption{Same as Fig.~\ref{fig:patch_prof_s}, but for the density patch combinations ($\it e$, $\it f$), ($\it f$, $\it g$) and ($\it e$, $\it f$, $\it g$).} \label{fig:patch_prof_l} \end{figure} When a sight line with a large impact angle cuts across the pulsar emission beam, it detects emission from density patches $\it e$, $\it f$ and $\it g$ in Fig.~\ref{fig:patch_dis}. 
The resulting pulse profiles are shown in Fig.~\ref{fig:patch_prof_l} for the different patch combinations ($\it e$, $\it f$), ($\it f$, $\it g$) or ($\it e$, $\it f$, $\it g$) of the available density distributions of particles. Highly polarized components can appear at the leading, central or trailing part of the pulse profiles. These profiles have similar features to those in Fig.~\ref{fig:patch_prof_s}, but differ as follows. 1) The highly polarized component from the bottom part of the pulsar emission beam, i.e., density patch $\it f$, is dominated by the X-mode rather than the O-mode. 2) The highly polarized component has a steeper polarization angle curve, because the component is generated near the meridional plane, where the polarization angle has its maximum rate of change, approximately $(d {\rm PA}/d \varphi)_{\rm max}=\sin \alpha/\sin \beta$. The gradient is thus inversely proportional to $\sin\beta$. 3) The highly polarized component may have a spectrum similar to that of the low polarized components, since all components are generated at comparable distances from the magnetic axis. In summary, joint simulations of emission processes and propagation effects demonstrate that highly polarized components can be produced at the leading, central and trailing parts of pulse profiles. The properties of the emission components (polarization angle curve, mode character and spectrum) depend on the pulsar geometry and the density patches of the radiating particles. \section{Discussions and Conclusions} In this paper, we have investigated the highly polarized components of integrated pulse profiles observationally and theoretically. 
We found from the observational data that: (i) Highly polarized components of pulsar profiles have a flatter spectrum than the low polarized components, regardless of whether they are located at the leading or trailing phase; (ii) Highly polarized components tend to have a flat polarization angle curve, though a small fraction of pulsars have very steep polarization angle curves; (iii) Highly polarized components generally have one mode, while the low polarized components often show orthogonal modes; (iv) Significant scattering causes depolarization at the trailing parts of pulse profiles and results in flat polarization angle curves; (v) Millisecond pulsars can have highly polarized components just as normal pulsars do. We simulated the emission processes and propagation effects within the pulsar magnetosphere, and found that highly polarized emission can be produced at the leading (O-mode), trailing (O-mode), top (X-mode) and bottom (X-mode) parts of the pulsar emission beam. When a sight line cuts across the beam with different impact angles, the detected highly polarized components have different properties, depending on the specific geometry and the available density patches of the radiating particles: (i) A highly polarized component generated from the leading or trailing part of the pulsar emission beam is of the O-mode and has a flat polarization angle curve; (ii) A highly polarized component generated from the top or bottom part of the pulsar emission beam is of the X-mode and has a steep polarization angle curve. On the observational side, polarization observations at multiple frequencies are important for revealing the frequency dependence of the intensities and polarization degrees of the components. Such polarization observations should have high signal-to-noise ratios and time resolution. 
For example, PSR J1048-5832 appeared to have one component at 1.44GHz due to limited time resolution \citep{qml+95}, but it is clearly resolved into two components by the recent polarization observations at 1.5GHz \citep{wj08}, which clearly show the gradient differences of the polarization angle curves between the highly polarized and low polarized components. In addition, single pulse observations can help to identify the wave modes and depolarization processes \citep{scr+84, scw+84}. On the theoretical side, our simulations represent a further development of the joint studies of emission processes and propagation effects \citep{wwh14}, focusing mainly on the properties of the highly polarized components within the wave-mode-separated magnetosphere. Note, however, that in our current calculations the magnetic field is assumed to be a rotating dipole in an empty magnetosphere. Radiation corrections are neglected, and the effect of the loaded plasma on the magnetic field is not yet incorporated. Furthermore, the energy and density distributions of the relativistic particles are assumed to follow a simple model. Therefore, the conclusions and predictions under these assumptions may be altered if a more complicated pulsar magnetosphere is considered. \section*{Acknowledgements} This work has been supported by the National Natural Science Foundation of China (11403043, 11473034 and 11273029), and the Strategic Priority Research Programme ``The Emergence of Cosmological Structures'' of the Chinese Academy of Sciences (Grant No. XDB09010200). \bibliographystyle{mnras}
\section{Introduction} The mean-eddy interaction in fluid flows is an important problem in fluid mechanics. Related to it are hydrodynamic stability, turbulence production, laminarization, atmospheric cyclogenesis, hurricane generation, ocean eddy shedding, to name but a few. Central to the problem is the transfer of energy between the mean and eddy processes as decomposed (cf.~Fig.~\ref{fig:schematic}). The purpose of this paper is to quantify this transfer within the traditional Reynolds decomposition framework, and use it to investigate a new strategy of fluid control. In a forthcoming paper, this formalism will be extended to a more generic framework for real-time problems (Liang et al., manuscript submitted to SIAM J. Multiscale Model. Simul.) \begin{figure} [h] \begin{center} \includegraphics[angle=0,width=0.5\textwidth] {schematic.eps} \caption{A schematic of the mean-eddy interaction, which is characterized by the energy transfer ${\Gamma}$ between the mean and eddy processes. \protect{\label{fig:schematic}}} \end{center} \end{figure} The classical formalism of energy transfer can be best illustrated with the Reynolds decomposed equations for the advection of a scalar field $T = \mean T + T'$ in an incompressible flow $\ve v$, where the overbar stands for an ensemble mean, and the prime for the departure from the mean. In the absence of diffusion, $T$ evolves as \begin{eqnarray} \Dt T + \nabla\cdot(\ve v T) = 0, \end{eqnarray} whose decomposed equations are \begin{subequations} \begin{eqnarray} &&\Dt {\mean T} + \nabla\cdot(\mean{\ve v} \mean T + \mmean{\dev{\ve v} \dev T}) = 0, \label{eq:Tmean} \\ &&\Dt {\dev T} + \nabla\cdot(\dev {\ve v} \mean T + \mean {\ve v} \dev T + \dev {\ve v}\dev T - \mmean{\dev {\ve v} \dev T}) = 0. 
\label{eq:Teddy} \end{eqnarray} \end{subequations} Multiplying (\ref{eq:Tmean}) by $\mean T$ and (\ref{eq:Teddy}) by $\dev T$, and taking the mean, one arrives at the evolution equations of the mean energy and eddy energy (variance)\cite{Lesieur,McComb} \begin{subequations} \label{eq:TE} \begin{eqnarray} && \Dt {{\mean T}^2/2} + \nabla\cdot (\mean{\ve v} {\mean T}^2/2) = - \mean T \nabla \cdot (\mmean{\dev{\ve v} \dev T}) \label{eq:TEmean} \\ && \Dt {\mmean{{\dev T}^2/2}} + \nabla\cdot (\mmean{\ve v {\dev T}^2/2}) = - \mmean {\dev{\ve v} \dev T} \cdot \nabla \mean T. \label{eq:TEeddy} \end{eqnarray} \end{subequations} The terms in divergence form are generally understood as the transports of the mean and eddy energies, and those on the right hand side as the respective energy transfers. The latter are usually used to explain the mean-eddy interaction. In particular, when $T$ is a velocity component, the right hand side of (\ref{eq:TEeddy}) has been interpreted as the rate of energy extracted by the Reynolds stress, or ``Reynolds stress extraction'' for short, against the mean field to fuel the eddy growth; in the context of turbulence research, it is also referred to as the ``rate of turbulence production''. An observation about the two ``transfer terms'' on the right hand sides of (\ref{eq:TE}) is that they are not symmetric; in other words, they do not cancel each other out. In fact, they sum to $-\nabla\cdot (\mean T\, \mmean{\dev{\ve v}\dev T})$, which in general does not vanish. This is not what one expects, as physically a transfer process should be a mere redistribution of energy between the mean and eddy processes, without destroying or generating energy as a whole. These two quantities therefore are not real transfers, and cannot be used to measure the mean-eddy interaction. 
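Indeed, by the product rule the two right hand sides combine into a pure divergence, \begin{eqnarray} - \mean T \nabla \cdot (\mmean{\dev{\ve v} \dev T}) - \mmean {\dev{\ve v} \dev T} \cdot \nabla \mean T = - \nabla\cdot \parenth{\mean T\, \mmean{\dev{\ve v} \dev T}}, \nonumber \end{eqnarray} which vanishes upon integration over a closed domain but not pointwise. 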
The reason for the asymmetry between the terms on the right hand sides of (\ref{eq:TE}) is that they are intertwined with transport processes; alternatively put, the divergence terms on the left hand side do not account for all the fluxes. Some authors, such as Pope\cite{Pope}, add an extra term to the flux in (\ref{eq:TEmean}) to maintain the balance, but it is not clear on physical grounds how that term should be chosen. Pedlosky\cite{Pedlosky} pointed out that a partial solution of the problem is to average these terms over a sufficiently large domain. This way the transport contributions may be reduced and hence the transfer stands out. Liang and Robinson\cite{LR1} argued that spatial averages should be avoided in order to retain the information of spatial intermittency in the energetics. They believed that a precise separation between the transport and the transfer can be made so as to satisfy the above symmetry requirement. They even named the transfer thus obtained {\it perfect transfer}, in distinction from other quantities that have been called transfers in the literature. But in their paper a rigorous formalization was postponed to future work; how the separation can be achieved is still open. This study intends to solve this problem within the traditional framework. A complete answer to the issue of separation raised in \cite{LR1}, which is based on a new mathematical apparatus, the {\it multiscale window transform} developed by Liang and Anderson (manuscript submitted to SIAM J. Multiscale Model. Simul.), is deferred to the sequel to this paper. The following two sections are devoted to the establishment of a rigorous formalism for the transfer ${\Gamma}$. We first consider the case of a scalar field $T$ (section~\ref{sect:scalar}), and then extend it to the momentum equations (section~\ref{sect:mom}). 
The formalism is validated with a well-studied instability model of an atmospheric jet stream (section~\ref{sect:kuo}), and applied to harness the Karman vortex street behind a circular cylinder (section~\ref{sect:wake}). A brief summary is presented in section~\ref{sect:summary}. \section{Formalism with a passive scalar} \label{sect:scalar} \subsection{Reynolds decomposition} \label{sect:mathframework} The transfer is sought within the Reynolds decomposition framework. The key to the Reynolds decomposition is Reynolds averaging. It decomposes a field, for example a scalar field $T$, into a mean $\mean T$ plus a departure from the mean, $T'$. Simple as it is, Reynolds averaging actually introduces an important geometric structure which, as we will see shortly, helps to make the transfer problem easier. A Reynolds average may be understood either as an ensemble mean, or as an expectation with respect to a probability measure. Practically it may also be understood as an average in time or in some dimension of space. Its basic properties include: (1) $\mmean {T'} = 0$; (2) $\mmean{\alpha T} = \alpha \mean T$, for $\alpha=\rm const$; and, as a corollary of the above two, (3) $\mmean{\mean T T'} = \mean T\, \mmean{T'} = 0$. To put all these understandings together, the decomposition can be recast in the framework of a Hilbert space $\cal H$, with an inner product defined as \begin{eqnarray} \inner f g = \mmean {f g}, \end{eqnarray} for any $f$ and $g$ in the space $\cal H$, that is, the ensemble, the probability space, or the space of functions over the time or spatial domain under consideration, according to whether the overbar is an ensemble mean, a probability expectation, or a time/spatial average. (It is interesting to note that the meaning of Reynolds averaging is two-fold: one is the mean state reconstruction, the other the summation or integration operator forming the inner product.) 
A Reynolds decomposition thus splits $\cal H$ into two subspaces, which contain the mean process and the eddy process, respectively. We will refer to these subspaces as {\it windows}, so we have a mean window and an eddy window, distinguished respectively by the subscripts $0$ and $1$. Correspondingly, the decomposed components of a field $T$, $\mean T$ and $T'$, will alternatively be written as $\win T 0$ and $\win T 1$ for convenience. Using these notations, the energy of $T$ on window $k$ as defined in (\ref{eq:TE}) is $\frac 1 2 \inner {\win T k} {\win T k}$; the property $\mmean{\mean T T'} = 0$ becomes $\inner {\win T 0} {\win T 1} = 0$, implying that the two windows are orthogonal. The concept of orthogonal windows puts the mean and eddy fields on the same footing, and will help to greatly simplify the derivation. \subsection{Multiscale flux} An important step toward the solution of the transfer problem is finding the fluxes, and hence the transports, on the two scale windows. The terms $\mean{\ve v} \mean T^2/2$ and $\mmean{\ve v T'^2/2}$ in (\ref{eq:TEmean}) and (\ref{eq:TEeddy}), though seemingly in flux form, are not the true fluxes in a rigorous physical sense. One may see this through a simple energy conservation argument, which requires that the mean and eddy fluxes sum to $\mmean{\ve v T^2/2}$; clearly these two quantities do not meet this requirement. On the other hand, the concept of multiscale flux can be introduced naturally within the formalized Reynolds decomposition framework. Given a flow $\ve v$, the flux of an inner product $\inner f g$ over $\cal H$ is $\inner {\ve v f} g = \inner f {\ve v g}$ ($\ve v$ is self-adjoint with respect to $\inner\cdot\cdot$). (Note the flux is uniquely represented this way. The only other choice one might propose for the representation is $\inner {\ve v} {fg}$. This, however, does not make sense physically, as the flow $\ve v$ acts here as an operator, not a function of the same kind as $f$ and $g$ in the function space.) 
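The orthogonality of the two windows, and the resulting additivity of the window energies, can be checked with a discrete sketch that treats the overbar as an ensemble mean over a finite sample (the data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(loc=1.0, scale=0.5, size=100_000)  # an ensemble of realizations of T

inner = lambda f, g: np.mean(f * g)   # discrete inner product <f, g> = mean(f g)

T0 = np.full_like(T, T.mean())        # mean-window component T~0 (a constant)
T1 = T - T0                           # eddy-window component T~1

orthogonality = inner(T0, T1)         # ~0: the two windows are orthogonal
total_energy = 0.5 * inner(T, T)
window_energy = 0.5 * inner(T0, T0) + 0.5 * inner(T1, T1)
# total_energy equals window_energy up to round-off: the energies are additive.
```

The same two properties, orthogonality and additivity, are what the formalism exploits below in constructing the multiscale fluxes.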
Letting $f=g= \frac T {\sqrt 2}$, one obtains the flux of energy \begin{eqnarray} \label{eq:flux} \ve Q = \frac 1 2 \inner{\ve v T} T. \end{eqnarray} Geometrically, the right hand side of (\ref{eq:flux}) is a projection of $(\ve v T)$ onto $T$. The flux on window~$k$ should then be the projection of $\ve v T$ onto $\win T k$: \begin{eqnarray} \label{eq:Qk} \ve Q_k = \frac 1 2 \inner {\ve v T} {\win T k} = \frac 1 2 \inner {\win {\ve v T} k} {\win T k}. \end{eqnarray} In arriving at the last result we have used the fact that the two windows are orthogonal. The multiscale fluxes $\ve Q_k$ thus obtained are additive, i.e., $\ve Q = \ve Q_0 + \ve Q_1$. In fact, by the orthogonality between the mean and eddy windows, we immediately have $$\sum_{k=0}^1 \frac 1 2 \inner{\win {\ve v T} k} {\win T k} = \frac 1 2 \inner{\ve v T} {T}.$$ This is the very conservation requirement mentioned above. \subsection{Perfect transfer} \label{sect:perfect} We continue to examine the evolution of $T$ in an incompressible flow $\ve v$. In the language introduced in subsection~\ref{sect:mathframework}, Eqs.~(\ref{eq:Tmean}) and (\ref{eq:Teddy}) can be written in a unified form: \begin{eqnarray} \label{eq:Tk} \Dt {\win T k} + \nabla\cdot \win {\ve v T} k = 0, \end{eqnarray} for windows $k=0, 1$. Here the decomposition is performed only in a statistical sense; that is to say, the mean is an ensemble mean or a probability expectation. But as we will see toward the end of this section, the formalism of energy transfer is essentially the same with respect to other methods of averaging. Application of $\inner {\win T k} \cdot $ to (\ref{eq:Tk}) gives the energy evolution equation on window~$k$: \begin{eqnarray} \label{eq:TEk} \Dt {\frac 1 2 \inner {\win T k} {\win T k}} + \inner {\win T k} {\nabla\cdot \win {\ve v T} k} = 0. \end{eqnarray} The nonlinear term (the second term on the l.h.s.) involves two interwoven processes: transport and transfer. 
The former integrates to zero over a closed spatial domain; the latter sums to zero over $k$, $k=\{0,1\}$. That is to say, (\ref{eq:TEk}) can be written symbolically as \begin{eqnarray} \label{eq:TEk2} \Dt {\frac 1 2 \inner{\win T k} {\win T k} } = - \nabla\cdot \ve Q_k + {\Gamma}_k, \end{eqnarray} where $\ve Q_k$ is the flux on window $k$, and ${\Gamma}_k$ the transfer to window $k$ from its complementary subspace. We already know the multiscale flux $\ve Q_k$ from (\ref{eq:Qk}). The transfer ${\Gamma}_k$ is now easy to derive. Comparing (\ref{eq:TEk2}) with (\ref{eq:TEk}), one obtains \begin{eqnarray} {\Gamma}_k - \nabla\cdot\ve Q_k = - \inner {\win T k} {\nabla\cdot \win{\ve v T} k}. \end{eqnarray} Substitution of (\ref{eq:Qk}) immediately gives \begin{eqnarray} \label{eq:trans1} {\Gamma}_k = \frac 1 2 \nabla\cdot \inner {\win {\ve v T} k} {\win T k} - \inner {\win T k} {\nabla\cdot\win {\ve v T} k}. \end{eqnarray} The mean-eddy interaction is seen more clearly if (\ref{eq:trans1}) is rewritten in the traditional overbar/prime notation. For the eddy process ($k=1$), the transfer from the mean flow is \begin{eqnarray} {\Gamma}_1 = \frac 1 2 \nabla\cdot \parenth{\mmean{\dev {(\ve v T)} \dev T}} - \mmean{\dev T \nabla\cdot \dev {(\ve v T)}}, \end{eqnarray} which reduces to \begin{center} \framebox[0.50\textwidth]{ \begin{minipage}[c]{1\textwidth} \begin{eqnarray} {\Gamma}_1 \equiv {\Gamma} = \frac 1 2 \cbrace{ \mean T \nabla\cdot (\mmean{\dev{\ve v} \dev T}) - (\mmean {\dev{\ve v} \dev T}) \cdot \nabla \mean T }. \label{eq:transfer}\\ \nonumber \end{eqnarray} \end{minipage} } \end{center} In the derivation, the incompressibility assumption $\nabla\cdot\ve v=0$, and hence $\nabla\cdot\mean{\ve v} = 0$ and $\nabla\cdot\ve v' = 0$, has been used. 
Likewise, \begin{eqnarray} \label{eq:transfer0} {\Gamma}_0 = \frac 1 2 \nabla\cdot \mmean{(\mmean{(\ve v T)} \mean T)} - \mmean {\mean T \nabla\cdot {(\ve v T)} } = - {\Gamma}, \end{eqnarray} and \begin{eqnarray} \ve Q_0 &=& \frac 1 2 \cbrace{\mean {\ve v} {\mean T}^2 + \mean T \mmean{\ve v'T'}}, \label{eq:Q0} \\ \ve Q_1 &=& \frac 1 2 \cbrace{\mmean{\ve v T'^2} + \mean T \mmean{\ve v'T'} }. \end{eqnarray} The mean-eddy energetics corresponding to (\ref{eq:TE}) are, therefore, \begin{subequations} \label{eq:TE2} \begin{eqnarray} && \Dt {{\mean T}^2/2} + \nabla\cdot \parenth{ \frac 1 2 \mean {\ve v} {\mean T}^2 + \frac 1 2 \mean T \mmean{\ve v'T'}} = - {\Gamma}, \label{eq:TE2mean} \\ && \Dt {\mmean{{\dev T}^2/2}} + \nabla\cdot \parenth{ \frac 1 2 \mmean{\ve v T'^2} + \frac 1 2 \mean T \mmean{\ve v'T'}} = {\Gamma}, \label{eq:TE2eddy} \end{eqnarray} \end{subequations} with ${\Gamma}$ as shown in (\ref{eq:transfer}). Equations (\ref{eq:transfer}) and (\ref{eq:transfer0}) imply an important property of the transfer derived above, \begin{center} \framebox[0.25\textwidth]{ \begin{minipage}[c]{1\textwidth} \begin{eqnarray} \sum_k {\Gamma}_k = 0. \label{eq:perfect}\\ \nonumber \end{eqnarray} \end{minipage} } \end{center} That is to say, the transfer thus obtained is a process of energy redistribution between the windows; no energy is generated or destroyed as a whole, just as one would expect. To distinguish it from other energy transfers one may have encountered in the literature, we will refer to ${\Gamma}$ as the {\it perfect transfer}, a term adopted from \cite{LR1}, when confusion might arise. Note the distinct difference between ${\Gamma}$ and the Reynolds stress extraction as it appears in (\ref{eq:TEeddy}), $\Reynolds = - \mmean {\dev{\ve v} \dev T} \cdot \nabla \mean T$, which has traditionally been used to interpret the generation of eddy events and, in the context of turbulence research, has been interpreted as the turbulence production rate. 
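The relation between ${\Gamma}$ and $\Reynolds$ can be verified symbolically in one dimension, treating the mean field and the eddy flux as given smooth functions (a sketch for illustration, not part of the formalism):

```python
import sympy as sp

x = sp.symbols('x')
Tbar = sp.Function('Tbar')(x)   # the mean field, mean(T)
F = sp.Function('F')(x)         # the eddy flux, mean(v'T'), in one dimension

# The perfect transfer in one dimension, (1/2)(Tbar dF/dx - F dTbar/dx)
Gamma = sp.Rational(1, 2) * (Tbar * sp.diff(F, x) - F * sp.diff(Tbar, x))
# The Reynolds stress extraction, -F dTbar/dx
R = -F * sp.diff(Tbar, x)

# Gamma and R differ by a pure divergence, (1/2) d(Tbar F)/dx, so they agree
# when integrated over a closed domain but are distributed differently in space.
residual = sp.simplify(Gamma - R - sp.Rational(1, 2) * sp.diff(Tbar * F, x))
```

The vanishing residual confirms that the two quantities share the same domain integral while differing pointwise.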
In sections~\ref{sect:kuo} and \ref{sect:wake}, we will see that these two quantities are in general distributed differently in space and time. The expression for the transfer ${\Gamma}$ may be further simplified. If $\mean T \ne 0$, (\ref{eq:transfer}) may alternatively be written as \begin{eqnarray} \label{eq:trans2} {\Gamma} = \frac 1 2 {\mean T}^2\ \nabla\cdot \parenth{\frac {\mmean{\ve v'T'}} {\mean T}}. \end{eqnarray} Observe that the quantity in the parentheses has the dimension of velocity. It represents a flow coupled with the mean and eddy processes of $T$. For convenience, introduce a ``$T$-coupled eddy velocity'' \begin{eqnarray} \label{eq:Tvelo} \ve v_T = {\frac {\mmean{\ve v'T'}} {\mean T}}, \end{eqnarray} then \begin{eqnarray} \label{eq:transfer2} {\Gamma} = \frac 1 2 \mean T^2 \nabla\cdot \ve v_T. \end{eqnarray} Notice that $\frac 1 2 \mean T^2$ is the mean energy of $T$ and hence is always positive, so whether eddies are produced is determined entirely by the divergence of the $T$-coupled eddy flow. The $T$-coupled eddy flow $\ve v_T$ is introduced for notational simplicity and for physical understanding. One should be aware that ${\Gamma}$ is well defined even though $\ve v_T$ does not exist where $\mean T=0$; in that case, the original expression (\ref{eq:transfer}) should be used. \subsection{Other methods of averaging} In practice, Reynolds averaging is often performed with respect to time if the process is stationary, or over some dimension of space if the process is homogeneous in that dimension. If the averaging is in time, the above derivations still apply, except that the time derivatives in (\ref{eq:Tk}), (\ref{eq:TEk}), (\ref{eq:TE2mean}), and (\ref{eq:TE2eddy}) drop out. The transfer ${\Gamma}$ retains the same form as (\ref{eq:transfer}). If the averaging is performed over a dimension of space, say $x$, then the above derivation needs modification, as the averaging does not commute with $\frac \partial {\partial x}$. 
But we have the following extra properties: $\mmean {\frac {\partial \psi}{\partial x}} = 0$ and $\frac {\partial \mmean {\psi }} {\partial x} = 0$, for any field $\psi$. Substituted into the continuity equation $\nabla\cdot\ve v = 0$, these yield $\Dx {\mean u} = 0$ and hence $\Dy {\mean v} + \Dz {\mean w} = 0$. With these identities, we repeat the procedures in the above subsection, and obtain: \begin{eqnarray} \label{eq:tmp1} {\Gamma} = \frac 1 2 \cbrace{ \mean T \nabla_{yz} \cdot (\mmean{\dev{\ve v} \dev T}) - (\mmean {\dev{\ve v} \dev T}) \cdot \nabla_{yz} \mean T}. \end{eqnarray} Here $\nabla_{yz} = \ve j \Dy {\ } + \ve k \Dz {\ }$ is the $\nabla$ operator with the $x$ component removed. Notice that $\mmean{u' T'}$ and $\mean T$ are independent of $x$, so that \begin{eqnarray} \label{eq:tmp2} \frac 1 2 \parenth{ \mean T \Dx {\mmean{u'T'}} - \mmean{(u'T')} \Dx {\mean T}} = 0. \end{eqnarray} We may therefore add the left hand side of (\ref{eq:tmp2}) to (\ref{eq:tmp1}) to get \begin{eqnarray} {\Gamma} = \frac 1 2 \cbrace{ \mean T \nabla \cdot (\mmean{\dev{\ve v} \dev T}) - (\mmean {\dev{\ve v} \dev T}) \cdot \nabla \mean T}, \end{eqnarray} which is precisely the same as (\ref{eq:transfer}) in form. In brief summary, we have derived the mean-eddy energy transfer for a passive scalar in an incompressible flow, which is ``perfect'' in the sense that the energy extracted from the mean window is equal in amount to the energy gained by the eddy window. The perfect transfer is invariant in form across averaging schemes. \section{Formalism with momentum equations} \label{sect:mom} To obtain the mean-eddy kinetic energy transfer, we need to deal with the momentum equations. Consider an incompressible ideal flow $\ve v$.
It is governed by \begin{subequations} \label{eq:gov} \begin{eqnarray} && \Dt{\ve v} + \nabla\cdot (\ve v \ve v) = - \nabla P, \label{eq:mom} \\ && \nabla \cdot \ve v = 0. \label{eq:cont} \end{eqnarray} \end{subequations} In forming the multiscale energy equations, the pressure term only contributes to the transport, i.e., the resulting pressure work is in a divergence form, thanks to the incompressibility assumption. So only the nonlinear terms require some thought in deriving the perfect transfer. The formalism with the momentum equations is then essentially the same as that with the evolution of a passive scalar, except that we now have three ``scalars'' for the three velocity components $u$, $v$, and $w$. If $\mean u \ne 0$, $\mean v \ne 0$, and $\mean w\ne 0$, for each component there is a ``coupled eddy velocity'' defined by (\ref{eq:Tvelo}), so we have $\ve v_u$, $\ve v_v$, and $\ve v_w$, which are ${\frac {\mmean{\ve v'u'}} {\mean u}}$, ${\frac {\mmean{\ve v'v'}} {\mean v}}$, and ${\frac {\mmean{\ve v'w'}} {\mean w}}$, respectively. According to the preceding section, each such velocity corresponds to a transfer as expressed by (\ref{eq:transfer2}). The total kinetic energy transfer is hence the sum of the three component transfers: \begin{eqnarray} {\Gamma} = \frac 1 2 \cbrace{ \mean u^2 \nabla\cdot {\ve v}_u + \mean v^2 \nabla\cdot {\ve v}_v + \mean w^2 \nabla\cdot {\ve v}_w }. \end{eqnarray} As noted in subsection~\ref{sect:perfect}, the above formula is well defined even when the mean velocity vanishes. In that case, one just needs to expand it to obtain: \begin{eqnarray} \label{eq:transfervec} {\Gamma} = \frac 1 2 \cbrace{ \nabla\cdot (\mmean{\ve v' \ve v'}) \cdot \mean{\ve v} - (\mmean{\ve v' \ve v'}) : \nabla\mean{\ve v} }, \end{eqnarray} where the second term in the curly braces is precisely the Reynolds stress extraction: \begin{eqnarray} \label{eq:Reynoldsvec} \Reynolds = - (\mmean{\ve v' \ve v'}) : \nabla\mean{\ve v}.
\end{eqnarray} Like those with a scalar field, these formulas stay invariant in form, no matter what averaging scheme is adopted. \section{Validation with an instability model} \label{sect:kuo} In this section, the above formalism of perfect transfer ${\Gamma}$ is validated with an idealized instability model. We will also see through this concrete example how ${\Gamma}$ differs from the classical Reynolds stress extraction against the basic profile. Consider a well-studied barotropic instability model, the Kuo model, for the instability of the zonal atmospheric jet stream\cite{Kuo,Kuo2}. Liang and Robinson\cite{LR2} have constructed a particular solution with a highly localized structure, which is ideal for our purpose here. In the following we briefly present this solution, and then calculate the transfer (\ref{eq:transfervec}). Choose a coordinate frame with $x$ pointing eastward, $y$ northward. The governing equations for the Kuo model are the 2D version of (\ref{eq:gov}), but with a Coriolis force term $f\ve k \wedge \ve v$ ($f$ constant) on the left hand side. The domain is periodic in $x$, and limited within latitudes $y = \pm L$, where a slip boundary condition $v = 0$ is applied. As rotation makes no contribution to the energy evolution, the formulas established in section~\ref{sect:mom} equally apply here, i.e., the Kuo model can be used for the validation. Assume a basic velocity profile (cf.~Fig.~\ref{fig:Kuo1}a) \begin{eqnarray} \bar u(y) = {\bar u}_{\max} \cos^2 \parenth{\frac\pi 2 \frac y L}, \qquad {\bar u}_{\max} > 0. \end{eqnarray} The background potential vorticity $q$ has a meridional gradient (cf.~Fig.~\ref{fig:Kuo1}b) \begin{eqnarray} {\bar q}_y = - {\bar u}_{yy} = - \frac {\pi^2} {2L^2} {\bar u}_{\max} \cos\frac {\pi y} L, \end{eqnarray} which changes sign at $y = \pm \frac L 2$, meeting the necessary condition for instability by Rayleigh's theorem ({\it ibid}).
\begin{figure} [h] \begin{center} \includegraphics[angle=0,width=0.75\textwidth] {Kuo_config.eps} \caption{Configuration of the Kuo model. (a) The basic flow profile $\bar u = \bar u(y)$. (b) The background potential vorticity. Marked are the two inflection points on the profile curve. \protect{\label{fig:Kuo1}}} \end{center} \end{figure} Decompose the flow as \begin{eqnarray} (u,v) = (\mean u(y), 0) + (u', v'), \end{eqnarray} and substitute back into the governing equations. Kuo considered only the initial stage of instability, when the perturbation field $(u',v')$ is very small, so the resulting equations can be linearized. Assuming a solution of the form \begin{eqnarray} \label{eq:kuosolform} (u',v') = (\tilde u(y), \tilde v(y)) e^{ik(x - ct)}, \end{eqnarray} one obtains an eigenvalue problem \begin{eqnarray} \label{eq:eigen} \frac {d^2 \tilde v} {d y^2} + \parenth{\frac {{\mean u}_{yy}} {c - \mean u} - k^2} \tilde v = 0, \end{eqnarray} with boundary conditions \begin{eqnarray} \tilde v = 0, \qquad\qquad {\rm at}\ y=\pm L. \end{eqnarray} The solution of (\ref{eq:eigen}) is not repeated here; the reader may refer to Kuo's original papers for details. Kuo showed that, in addition to the sign-change requirement on $\mean q_y$, the difference $(\mean u - c_r)$ ($c_r = {\rm Re} \{c\}$ being the mode phase speed) must be positively correlated with $\mean q_y$ over $[-L,L]$ in order for the perturbation to destabilize the jet. In other words, for an instability to occur, it requires that \begin{itemize} \item[(1)] $\mean q_y$ change sign through $y\in[-L,L]$ (Rayleigh's theorem); \item[(2)] $(\mean u - c_r)$ and $\mean q_y$ be positively correlated over $[-L,L]$ (Kuo's theorem). \end{itemize} Hence the zero points of $(\mean u - c_r)$ and $\mean q_y$ are critical. We will validate our transfer formalism by examining the instability structures near these critical points. We choose a particular unstable mode (and hence a particular $c_r$) to fulfill this objective.
As shown in \cite{LR2}, the wavenumber $k = \frac 3 4 \frac \pi L$ gives such a mode; it lies within the unstable regime as computed by Kuo\cite{Kuo2}. In fact, substituting it back into the eigenvalue problem and using the shooting method\cite{Verterling}, one obtains \begin{eqnarray} c = c_r + i c_i = (0.4504 + 0.0476i) {\mean u}_{\max}, \end{eqnarray} yielding a positive growth rate $kc_i>0$. Solved at the same time is the corresponding eigenfunction $\tilde v$, which, substituted into (\ref{eq:kuosolform}) and the governing equations, gives a solution for all the fields. The resulting phase speed $c_r = 0.4504 \mean u_{\max}$ and the gradient of the basic potential vorticity $\mean q_y$ give four critical values of $y$: \begin{eqnarray} \begin{array} {ll} \mean u(y) - c_r = 0 &\Longrightarrow y=\pm 0.53L,\\ \mean q_y = - \mean u_{yy} = 0 &\Longrightarrow y=\pm 0.50L. \end{array} \end{eqnarray} The four critical latitudes, as marked in Fig.~\ref{fig:Kuo1}b, partition the $y$ dimension into five distinct regimes characterized by different values of $K \equiv \mean q_y (\mean u - c_r)$. For most of $y\in[-L,L]$, $K>0$, but the positivity is interrupted by two narrow strips near $y = \pm L/2$, where $K<0$. By Kuo's theorem, this scenario has profound implications. Although Kuo's theorem is stated in a global form, it should hold locally within the correlation scale. In the present example, that means one of the necessary conditions for barotropic instability is not met around the strips, and so there should be no instability occurring there.
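The eigenvalue computation just described can be sketched in a few dozen lines. The following is a reconstruction of ours, not the authors' code: the RK4 resolution and the secant seeds, taken near the reported eigenvalue, are illustrative choices.

```python
import numpy as np

# Reconstruction (ours) of the shooting calculation for the Kuo eigenvalue
# problem v'' + (ubar_yy/(c - ubar) - k^2) v = 0, v(-L) = v(L) = 0, with
# ubar(y) = cos^2(pi*y/2L) and ubar_max = 1.  Step count and secant seeds
# are illustrative; the paper reports c ~ (0.4504 + 0.0476i) ubar_max.
L = 1.0
k = 0.75 * np.pi / L

def shoot(c, n=4000):
    """Integrate with RK4 from y = -L (v = 0, v' = 1) and return v(L)."""
    h = 2 * L / n
    y, v, w = -L, 0.0 + 0.0j, 1.0 + 0.0j

    def f(y, v, w):
        ubar = np.cos(0.5 * np.pi * y / L) ** 2
        ubar_yy = -(np.pi ** 2 / (2 * L ** 2)) * np.cos(np.pi * y / L)
        return w, (k ** 2 - ubar_yy / (c - ubar)) * v

    for _ in range(n):
        k1v, k1w = f(y, v, w)
        k2v, k2w = f(y + h / 2, v + h / 2 * k1v, w + h / 2 * k1w)
        k3v, k3w = f(y + h / 2, v + h / 2 * k2v, w + h / 2 * k2w)
        k4v, k4w = f(y + h, v + h * k3v, w + h * k3w)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        w += h / 6 * (k1w + 2 * k2w + 2 * k3w + k4w)
        y += h
    return v

# secant iteration in the complex c-plane for a root of v(L; c) = 0
c0, c1 = 0.45 + 0.05j, 0.46 + 0.04j
f0, f1 = shoot(c0), shoot(c1)
for _ in range(50):
    if abs(f1) < 1e-10 or f1 == f0:
        break
    c0, c1 = c1, c1 - f1 * (c1 - c0) / (f1 - f0)
    f0, f1 = f1, shoot(c1)
print("c =", c1)
```

Because the unstable mode has $c_i \ne 0$, the denominator $c - \bar u$ never vanishes on the real $y$-axis, so no special treatment of the critical layers is needed in the integration.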
\begin{figure} [h] \begin{center} \includegraphics[angle=0,width=0.5\textwidth] {Kuo_transfer.eps} \caption{The barotropic energy transfer (scaled by ${\mean u}_{\max}^3 / L$) for the Kuo model: (a) the perfect transfer ${\Gamma}$, which is $\frac 1 2\bracket {\mean u \Dy {\mmean{u'v'}} - \mmean{u'v'} \Dy {\mean u}}$ here by (\ref{eq:transfervec}); (b) the Reynolds stress extraction $\Reynolds$, which is equal to $-\mmean{u'v'} \Dy {\mean u}$ here. The averaging is taken with respect to $x$. \protect{\label{fig:Kuo2}}} \end{center} \end{figure} Instability means a transfer of energy from the background to the perturbation field, namely, a positive ${\Gamma}$. Using the particular solution obtained above, we compute the transfer from (\ref{eq:transfervec}). We adopt a zonal averaging, i.e., averaging in $x$, to perform the decomposition, because (1) $\mean u$ itself has no $x$-dependence and hence can be understood as an $x$-average, and (2) the solution is homogeneous in $x$ due to the cyclic boundary condition. The computation is straightforward. The result is plotted in Fig.~\ref{fig:Kuo2}a. Sure enough, ${\Gamma}$ is not positive around the two narrow strips; in fact, there is a strong negative transfer there, i.e., an upscale or inverse transfer from the eddy window to the background. Moreover, the negative transfer is confined to two narrow regimes, just as one may expect by Kuo's theorem. In contrast, a different scenario is seen in the profile of the conventional Reynolds stress extraction $\Reynolds$, which we plot in Fig.~\ref{fig:Kuo2}b. $\Reynolds$ is nonnegative throughout $[-L,L]$; in particular, it is maximally positive over the narrow strip regimes, contrary to our foregoing argument. In this example, then, the perfect transfer ${\Gamma}$ yields a scenario agreeing well with the analytical result for the Kuo model, while the conventional Reynolds stress extraction $\Reynolds$ does not.
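Behind both panels of Fig.~\ref{fig:Kuo2} lies the zonal average $\mmean{u'v'}$ of the modal solution (\ref{eq:kuosolform}). A small numerical sketch (our own, with an illustrative profile standing in for the actual eigenfunction) confirms the identity used to evaluate it:

```python
import numpy as np

# Sketch: zonal averaging of the modal solution (u', v') =
# Re[(utilde, vtilde) e^{ik(x - ct)}] at a fixed time.  The profile
# vtilde below is an illustrative stand-in for the actual eigenfunction;
# utilde follows from continuity, ik*utilde + d(vtilde)/dy = 0.
L = 1.0
k = 0.75 * np.pi / L
Ny, Nx = 201, 512
y = np.linspace(-L, L, Ny)
x = np.linspace(0.0, 2 * np.pi / k, Nx, endpoint=False)  # one wavelength

vtilde = (1.0 - (y / L) ** 2) * np.exp(0.3j * y)  # illustrative; vanishes at +-L
utilde = 1j * np.gradient(vtilde, y) / k          # from continuity

# brute-force zonal (x-) average of u'v' over one wavelength ...
up = np.real(utilde[:, None] * np.exp(1j * k * x[None, :]))
vp = np.real(vtilde[:, None] * np.exp(1j * k * x[None, :]))
uv_bar = (up * vp).mean(axis=1)

# ... agrees with the analytical identity mean(u'v') = (1/2) Re(utilde conj(vtilde)),
# the quantity entering both Gamma and R
assert np.allclose(uv_bar, 0.5 * np.real(utilde * np.conj(vtilde)))
print("zonal-average identity verified")
```

With $\mmean{u'v'}(y)$ in hand, the two curves of Fig.~\ref{fig:Kuo2} follow from one further $y$-differentiation each.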
\section{Application to the suppression of eddy shedding} \label{sect:wake} A practical application of the above research is turbulence control, a technique that manipulates turbulence growth to achieve drag reduction (cf.~\cite{Farrell} and particularly the celebrated paper by Kim\cite{Kim}, and the references therein). The current research may help by providing a better object to manipulate, namely the perfect transfer, in place of the growth of turbulence energy or eddy energy. The proposal stems from the concern of how to take maximal advantage of the self-laminarization or relaminarization processes that may occur within a turbulent flow. In the interest of energy saving, suppression of the positive transfer ${\Gamma}$ [cf.~(\ref{eq:transfervec})] is preferred to suppression of the eddy energy growth to inhibit the production of turbulence. To see why, observe that {\it eddy energy increase does not necessarily occur in accordance with a positive transfer}, and hence {\it a place where turbulence grows does not necessarily correspond to turbulence production}. Actually, the correspondence is an exception rather than the rule. (Later in this section we will see an example.) The energy needed to fuel the growth could be transported from the neighborhood, rather than released {\it in situ}. One possibility is that, while disturbances grow rapidly, a laminarization process is under way at that very position. As shown in the two-point system in Fig.~\ref{fig:ctr_schem}, while disturbances grow at both $A$ and $B$ (both $K_A^{eddy}$ and $K_B^{eddy}$ increase), the eddy energy is produced at $A$ only. \begin{figure} \begin{center} \includegraphics[angle=0,width=0.6\textwidth] {ctr_schem.eps} \caption { Schematic of the eddy energy transport and transfer for a two-point turbulent system. $K$ stands for kinetic energy, and ${\Gamma}$ and $\nabla\cdot\ve Q$ for transfer and transport, respectively.
An arrow indicates the direction of an energy flow, with its thickness standing for strength. In this case, the transfer is toward the mean at position $B$, but $K_B^{eddy}$ still grows because of the transport (advection) of $K_A^{eddy}$ from position $A$. For optimal results, control should be placed at $A$ only. \protect{\label{fig:ctr_schem}}} \end{center} \end{figure} At $B$, not only is there no eddy energy production, but the transfer is from the eddy window to the mean window. That is to say, the system is undergoing a laminarization at $B$, even though the eddy energy $K_B^{eddy}$ grows, because of a surplus of the influx of eddy energy over the inverse transfer. Control of the eddy energy growth at both $A$ and $B$ indeed helps to suppress the onset of turbulence, but it is not optimal in terms of energy saving. Suppression of $K_B^{eddy}$ meanwhile defeats the intrinsic trend toward laminarization, and therefore reduces the control performance. To take advantage of this laminarization, the control should be applied at position $A$, i.e., the source region, only, and the optimal objective functional should be designed with respect to ${\Gamma}_A$, rather than $K_A^{eddy}+K_B^{eddy}$. In this spirit, we demonstrate the application by showing how one may efficiently suppress the vortex shedding behind a circular cylinder. Vortex suppression is important in that it can result in significant drag reduction and hence energy saving; it may also be used to reduce noise. Presented in the following are just some diagnostic results for a saturated wake, to which the afore-established formalism is applicable; the same example will be studied in more detail in the sequel to this paper, with nonstationarity considered. We will deal with a laminar case only, but the idea equally applies to turbulent wakes. There are many sophisticated techniques to suppress the shedding of vortices in a wake [cf.~\cite{Huerre} and \cite{Oertel} and the references therein].
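The budget sketched in Fig.~\ref{fig:ctr_schem} can be distilled into a toy two-point model (our own illustrative construction; all numbers are arbitrary):

```python
# Toy two-box eddy-energy budget mimicking Fig. ctr_schem; all numbers
# are arbitrary illustrative choices.  dK_A/dt = Gamma_A - q (production
# at A minus export), dK_B/dt = Gamma_B + q (inverse transfer at B plus
# import of eddy energy transported from A).
gamma_A, gamma_B = 2.0, -0.5  # transfer: production at A, laminarization at B
q = 1.0                       # transport of eddy energy from A to B

dKA = gamma_A - q   # K_A^eddy grows
dKB = gamma_B + q   # K_B^eddy also grows, despite Gamma_B < 0
assert dKA > 0 and dKB > 0 and gamma_B < 0
print(dKA, dKB)
```

Both eddy energies grow, yet production occurs at $A$ only; an energy-based objective functional would target both boxes, whereas a transfer-based one correctly targets $A$.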
Surface-based suction is one of them. To our knowledge, however, the research along this line has thus far focused on the technique per se; no report has been found on the issue of where to place the suction to optimize the performance. In the following, we will show that our formalism of mean-eddy interaction and perfect transfer can answer this question. Consider a planar flow passing around a circular cylinder. The governing equations are the same as Eqs.~(\ref{eq:gov}), except that dissipation is included. The computational domain is plotted in Fig.~\ref{fig:ctr_domain}, with $x$ and $y$ nondimensionalized by the cylinder diameter $d$. A uniform inflow $(U,0)$ is specified at $x=-2.5$, and on the right open boundary ($x=37.5$) a radiative condition\cite{Orlanski} is applied. At $y=\pm 4$ are two solid boundaries, where no-slip conditions are imposed. \begin{figure} \begin{center} \includegraphics[angle=0,width=0.6\textwidth] {ctr_domain.eps} \caption {Model configuration. The coordinates $x$ and $y$ are scaled by the cylinder diameter $d$. \protect{\label{fig:ctr_domain}}} \end{center} \end{figure} We examine a flow with Reynolds number $ Re = \frac {Ud} {\nu} = 200$. The choice of grid spacings $\Delta x$ and $\Delta y$ is found not to be a stringent constraint: in our experiments, a mesh with $\Delta x = \Delta y = 0.05$ and a mesh with $\Delta x = \Delta y = 0.025$ produce little difference in the final result for our problem. We thus choose $\Delta x = \Delta y = 0.05$ for economy. The governing equations are integrated forward on a staggered grid\cite{Arakawa} using a semi-implicit (implicit for pressure) finite difference scheme (e.g., \cite{Kreiss}) of second order in both time and space. The model is run until a statistical equilibrium is reached. After that, it is integrated further for 100 time units, and the outputs are used to calculate the transfer ${\Gamma}$.
The stationarity in time makes it a natural choice to perform time averaging in computing the transfer (\ref{eq:transfervec}). The computation is straightforward. We plot the result in Fig.~\ref{fig:ctr_transfer}. Note that the averaging interval is large enough that enlarging it produces virtually no difference in the computed result. \begin{figure} [h] \begin{center} \includegraphics[angle=0,width=0.5\textwidth] {ctr_transfer.eps} \caption{Perfect transfer ${\Gamma}$ in the wake behind a cylinder (units in $U^3/d$). \protect{\label{fig:ctr_transfer}}} \end{center} \end{figure} By (\ref{eq:transfervec}), a positive ${\Gamma}$ means eddy energy generation, or turbulence production in turbulence research, while a negative ${\Gamma}$ indicates a transfer in the opposite direction. In Fig.~\ref{fig:ctr_transfer}, two triangular lobes of strong positive ${\Gamma}$ sit on either side of the axis $y=0$, with a weak negative center lying in the near wake. That is to say, eddy energy is generated within the two lobes, while in between is a laminarization process. By the foregoing arguments, an efficient control strategy should be one that inhibits the positive ${\Gamma}$ in these two lobes. Since we are considering only the technique of surface-based suction, the ${\Gamma}$ distribution suggests that applying suction near the two lobes should be effective. In doing this, one can simultaneously take advantage of the laminarization process occurring in the near wake. Indeed, our control experiments show that the areas between 50 and 80 degrees and between $-50$ and $-80$ degrees from the $x$-axis are the effective suction locations to suppress the vortex street. The effectiveness, according to \cite{Oertel}, may be measured by a suction rate $c_q = \frac m {U d}$, where $m$ is the mass flow rate, $U$ the free stream velocity, and $d$ the cylinder diameter.
By experiments, the most effective control is that placed at $\pm 70^\circ$, in line with the orientation of the two positive ${\Gamma}$ lobes. A rate of only $c_q = 0.18$ suffices to have the vortices completely suppressed (e.g., Fig.~\ref{fig:ctr_vort_suct}). In contrast, controls in the area between $-50$ and $50$ degrees are counterproductive, as the near wake laminarization process is defeated. \begin{figure} [h] \begin{center} \includegraphics[angle=0,width=1\textwidth] {ctr_vort_suct.eps} \caption{ Snapshots of the vorticity in the optimal control experiment. The control is applied from $t=50$ onward. \protect{\label{fig:ctr_vort_suct}}} \end{center} \end{figure} It is of interest to see how other diagnostic fields, such as the eddy energy $\frac 1 2 \mmean{(u'^2 + v'^2)}$ and $\Reynolds$, are distributed. Shown in Figs.~\ref{fig:ctr_transfer2}a and b are these fields. Clearly, they both attain their maxima along the axis $y=0$, a scenario completely different from that of ${\Gamma}$ in Fig.~\ref{fig:ctr_transfer}. If the control were based on these fields, a suction would be placed at $(x,y) = (\frac 1 2, 0)$, i.e., $0^\circ$ from the axis. But, as noted above, the control experiments show that this does not result in an effective vortex suppression. \begin{figure} [h] \begin{center} \includegraphics[angle=0,width=1\textwidth] {ctr_transfer2.eps} \caption{(a) Perturbation energy (in $U^2$); (b) $\Reynolds$ (in $U^3/d$). \protect{\label{fig:ctr_transfer2}}} \end{center} \end{figure} The success of the control experiment serves to validate our formalism of the mean-eddy interaction and the equations of ${\Gamma}$, (\ref{eq:transfer}) and (\ref{eq:transfervec}). Meanwhile, a variety of fluid control problems, both active and passive, may benefit from this formalism.
\section{Conclusions and discussion} \label{sect:summary} In the Reynolds decomposition framework, the mean-eddy interaction has been rigorously formulated in terms of energy transfer, which can be singled out from the intertwined nonlinear processes by eliminating the transport effect. The resulting transfer sums to zero over the two decomposed subspaces, or {\it windows} as they are called in the text. In other words, the transfer represents a redistribution process between the mean and eddy windows, without generating or destroying energy as a whole. Because of this property, it is sometimes referred to as {\it perfect transfer}, in distinction from other transfers that one may have encountered in the literature. The perfect transfer from the mean process to the eddy process can be explicitly written out. In the case of a scalar $T$ advected by an incompressible flow $\ve v$, traditionally there is a quantity \begin{eqnarray*} \Reynolds = - (\mmean{\ve v'T'}) \cdot \nabla \mean T, \end{eqnarray*} which, when $T$ is a velocity component, has been explained as the rate of energy extracted by the Reynolds stress against the basic profile. We showed that this is not the eddy energy transferred from the mean to the eddy window. The real transfer should be \begin{eqnarray*} {\Gamma} = \frac 1 2 \bracket{ \mean T \nabla\cdot (\mmean{\ve v' T'}) + \Reynolds}, \end{eqnarray*} which may also be written as \begin{eqnarray*} {\Gamma} = \frac 1 2 {\mean T}^2\ \nabla\cdot \ve v_T, \end{eqnarray*} if $\mean T \ne 0$, in terms of a ``$T$-coupled eddy flow'' \begin{eqnarray*} \ve v_T = \frac {\mmean{T'\ve v'}} {\mean T}. \end{eqnarray*} Since $\frac 1 2 {\mean T}^2$ is the mean energy and is hence always positive, the eddy generation is completely controlled by the divergence of this flow. This simple formalism can be easily generalized to the momentum equations. In that case, the perfect transfer is a redistribution of kinetic energy between the mean and the eddy windows.
The resulting transfer is given in (\ref{eq:transfervec}); it has the same form for all the averaging schemes. The formalism has been validated with a well-known barotropic instability model, the Kuo model for the instability of a zonal atmospheric jet stream. Instability implies energy transfer from the background to the perturbation, or mean to eddy in this context. We have seen a scenario of perfect transfer consistent with that inferred from Kuo's theorem, while the traditional Reynolds stress extraction does not agree with the inference. An intuitive argument regarding the perfect transfer ${\Gamma}$ is that the distribution of ${\Gamma}$ is generally not in accordance with that of eddy energy or eddy energy growth, due to the presence of transport processes. This has been borne out in the wake control experiment. In the context of turbulence, that is to say, the rapid growth of turbulent energy does not necessarily correspond to turbulence production. It is not uncommon that, at a location where the perturbation is growing, the underlying process could be a transfer in the inverse direction, i.e., a laminarization. This argument has profound implications for real applications. Turbulence control is such an example. It suggests that the optimal location to place a control be that of positive ${\Gamma}$, rather than that of turbulence growth, in order to take advantage of the self-laminarization within a turbulent flow. This conjecture has been tested in an exercise of vortex shedding suppression in a cylinder wake. By computation there are two lobes of strong positive ${\Gamma}$ attached to the cylinder, one on either side. We tried surface-based suction at many places on the cylinder; the most effective locations are those where the transfer processes within these two lobes are easiest to defeat. Other locations are not as effective, in terms of energy saving.
The success of the wake suppression experiment implies that the physically robust quantity ${\Gamma}$ may be useful for a variety of fluid control problems. Specifically, it may help in selecting the location(s) to place a passive control, or in designing the performance functional for an active control. The above experiment is an example of the former; for the latter, we should be able to design some transfer-oriented functional for the optimization. As we argued before, this should be advantageous over functionals based on turbulence growth in terms of energy saving. It should be pointed out that, in realistic flows, the signals are generally neither stationary nor homogeneous, and as a result, the Reynolds averaging cannot be replaced with an averaging over time or a spatial dimension. In such cases, the mean and eddy fields are not so simply reconstructed; the mean itself can be time varying. Besides, interactions may not be limited to just two windows. A common process, the mean-eddy-turbulence interaction, for example, requires three distinct windows for a faithful representation. All these difficulties will be overcome, and a new real-problem-oriented formalism will be realized, in a forthcoming paper after the introduction of a new analysis apparatus, the multiscale window transform, to replace the Reynolds averaging technique for a realistic mean-eddy-turbulence decomposition. \begin{acknowledgments} This work has benefited from important discussions with Allan Robinson, Howard Stone, Brian Farrell, and Glenn Flierl. Joseph Pedlosky inspired the formalization of multiscale flux. Part of the wake control experiments were run when the author visited the Center for Turbulence Research at Stanford University and NASA Ames Research Laboratory. Thanks are due to Parviz Moin and Nagi Mansour for their kind invitation, and Alan Wray for his generous help with the computing.
The author is particularly indebted to Meng Wang, who hosted the visit and spent a lot of time discussing the issues raised in this work. \end{acknowledgments}
\section{Introduction} The mathematical setting of this chapter is $(1+1)D$ field theories. That is to say, we consider a generic spatiotemporal field, $\phi=\phi(x,t)$ (although, in later sections, `$\phi$' may be replaced by `$u$' depending on the context), and its concordant governing evolution equation. Within the context of classical neutral scalar field theories, the evolution of $\phi$ is determined by a partial differential equation (PDE) that extremizes the \emph{action} functional (in some appropriate ``natural'' dimensionless units): \begin{equation}\label{1.1a} \mathcal{S}[\phi] = \int dt\int dx \, \left[\frac{1}{2}\phi_t^2 - \frac{1}{2}\phi_x^2 - V(\phi) \right], \end{equation} where $x$ and $t$ subscripts henceforth denote partial differentiation. The three terms on the right-hand side of Eq.~\eqref{1.1a} denote, respectively, the kinetic energy of the field, the (negative of the) potential energy within the field, and the (negative of the) self-interaction energy, where $V$ is some to-be-specified function quantifying the field's \emph{self-interaction}. Specifically, we call $V$ ``the potential'' of the field theory, and it sets the dynamics of the field. As is convention, we term a field theory with a polynomial potential $V$ of degree $n$ a ``$\phi^n$ field theory.'' The potential is constructed, derived, modeled or conjectured on the basis of the physical behavior of the system under consideration. We refer the reader to \cite{rr,MantonSut,vach} for more detailed textbook introductions to the subject, including physical examples of various field theories in high-energy theoretical physics. \jcmindex{\myidxeffect{F}!Field theory} The Euler--Lagrange equation extremizing the action $\mathcal{S}$ from Eq.~\eqref{1.1a} is easily found to be the nonlinear wave (often referred to as Klein--Gordon-type) equation \begin{equation}\label{eq:nl_wave_eq} \phi_{tt} - \phi_{xx} + V'(\phi) = 0\,, \end{equation} where a prime denotes differentiation with respect to the argument of the function, here $\phi$.
Note that, on the basis of the Lorentz invariance of Eq.~\eqref{eq:nl_wave_eq}, in this chapter we are only concerned with its static solutions, i.e., $\phi_t=0$ and $\phi=\phi(x)$, with traveling solutions obtainable from the latter by a boost transformation. Hence, we are interested in how the solutions of the ordinary differential equation \begin{equation}\label{1.1b} \phi_{xx} = V'(\phi) \end{equation} are affected by the choice of $V$. Specifically, we will (mostly) discuss ``simple'' polynomials of even degree that possess $\phi \mapsto -\phi$ symmetry (thus endowing the field theory with reflectional, or $Z_2$, symmetry) as choices for the potential. \jcmindex{\myidxeffect{K}!Klein--Gordon equation} In this chapter, we discuss the kink (i.e., domain wall) solutions of Eq.~\eqref{1.1b} and their context in the hierarchy of various higher-order field theories, where by ``higher-order'' we mean that the potential $V$ is either a polynomial of degree greater than four or non-polynomial. For brevity and clarity, we will often refer the reader to the encyclopedic study of the eighth-, tenth-, and twelfth-degree field theories (and their kink solutions in the presence of degenerate minima) provided in \cite{kcs}. \section{First- and second-order phase transitions: The need for higher order field theory} \label{sec:need_hoft} \jcmindex{\myidxeffect{G}!Ginzburg--Landau theory} The quartic, $\phi^4$, potential is the ``workhorse'' of the Ginzburg--Landau (phenomenological) theory of superconductivity \cite{Landau,GinzburgLandau,Tinkham}, taking $\phi$ as the \emph{order parameter} of the theory (i.e., the macroscopic wave function of the condensed phase).
In this context, the third term (i.e., $V(\phi)$) in Eq.~\eqref{1.1a} is interpreted as the Landau free energy density, while the combination of the second and third terms (i.e., $\frac{1}{2}\phi_x^2 + V(\phi)$) is the full Ginzburg--Landau free energy density, which allows for domain walls of non-vanishing width and energy to exist between various phases (corresponding to equilibria, i.e., minima of $V$) in the system. Specifically, a prototypical example of the \emph{continuous} (or, second-order) phase transition can be modeled by the classical $\phi^4$ (double well) field theory. \jcmindex{\myidxeffect{D}!Double well potential} \jcmindex{\myidxeffect{S}!Second-order phase transition} To illustrate the second-order phase transition, consider the symmetric quartic (double well) potential \begin{equation}\label{eq:V4_pt} V(\phi) = \frac{1}{4}\phi^4-\frac{\alpha_2}{2}\phi^2+\frac{1}{4}\,, \end{equation} where $\alpha_2$ is a parameter that might depend on, e.g., the temperature or pressure of the system in condensed matter physics or the mass of a meson in high-energy theoretical physics. As the temperature or pressure of the system changes, so does $\alpha_2$, leading to structural changes of the potential in Eq.~\eqref{eq:V4_pt}, as shown in Fig.~\ref{fig:46pt_combo}(a). Note that at $\alpha_2=1$, Eq.~\eqref{eq:V4_pt} can be rewritten as $V(\phi) = \frac{1}{4}(\phi^2-1)^2$. Specifically, the \emph{global} minima of this potential, i.e., $\phi_0$ such that $V'(\phi_0)=0$ and $V''(\phi_0)>0$, are \begin{equation} \phi_0 = \left\{ \pm \sqrt{\alpha_2} \,\right\} \quad (\alpha_2>0)\,, \end{equation} while $\phi_0=0$ is a local maximum. As $\alpha_2\to0^+$, these two minima smoothly coalesce into a single \emph{global} minimum at $\phi_0=0$ ($\alpha_2\le0$), as shown in Fig.~\ref{fig:46pt_combo}(c).
This smooth process is characteristic of the continuous, i.e., second-order, phase transitions, and is the only type of bifurcation of equilibria that a symmetric double well $\phi^4$ potential can exhibit. For $\alpha_2=1$ both degenerate minima of the potential satisfy $V(\phi_0) = V'(\phi_0)=0$, and a domain wall, which solves Eqs.~\eqref{1.1b} and \eqref{eq:V4_pt}, exists connecting the two phases ($\phi_0=-1$ and $\phi_0=+1$): \begin{equation}\label{eq:V4_kink} \phi_K(x) = \tanh\left(x/\sqrt{2}\right). \end{equation} This well-known domain wall, or \emph{kink}, solution \cite{rr,MantonSut,vach} is illustrated in Fig.~\ref{fig:46pt_combo}(e). \jcmindex{\myidxeffect{K}!Kink solution} \jcmindex{\myidxeffect{D}!Domain wall} However, in materials science and condensed matter physics, one also observes \emph{discontinuous} (or, first-order) phase transitions, or even \emph{successive} series of first- and/or second-order phase transitions. How can those be modeled? One approach is to add degrees of freedom by increasing the degree of the potential $V$ to greater-than-fourth order \cite{gl}. For example, first-order transitions can be modeled by sextic, $\phi^6$, field theory \cite{Behera,Falk,Falk2}. A triple well potential characteristic of a $\phi^6$ field theory naturally arises as a one-dimensional cross-section across a path of strain space passing through the austenite well and two of the martensite wells of the free energy of a two-phase martensitic material with cubic and tetragonal phases \cite[Sec.~5.5]{Abeyarat}. Although another possibility to model a first-order transition is by way of an \emph{asymmetric} double well $\phi^4$ potential (e.g., a double well potential in an external field) \cite{SanatiSaxenaAJP}, here we restrict ourselves to symmetric potentials only. Then, in order to capture two or more successive transitions, we must go beyond the $\phi^4$ and $\phi^6$ field theories to even higher orders \cite{gl,GufanBook}. 
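As a quick sanity check, the kink in Eq.~\eqref{eq:V4_kink} can be verified against Eq.~\eqref{1.1b} by finite differences; a minimal sketch, assuming NumPy is available (the grid parameters are our own illustrative choices):

```python
import numpy as np

def Vp(phi):
    # V'(phi) for the quartic potential, Eq. (eq:V4_pt), with alpha_2 = 1
    return phi**3 - phi

x = np.linspace(-8.0, 8.0, 2001)
h = x[1] - x[0]
phi = np.tanh(x / np.sqrt(2))  # the kink, Eq. (eq:V4_kink)

# central-difference second derivative vs. the right-hand side of Eq. (1.1b)
phi_xx = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / h**2
residual = np.max(np.abs(phi_xx - Vp(phi[1:-1])))
```

The residual is at the level of the $\mathcal{O}(h^2)$ discretization error, and the tails saturate at the two degenerate minima $\phi_0=\pm1$.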
Similarly, higher-order field theories arise in the context of high-energy physics, wherein the availability of more than two equilibria leads to more types of mesons \cite{Lohe79,CL}, which is indeed necessary for certain nuclear and particle physics models. \begin{figure} \center \includegraphics[width=0.5\textwidth]{ch11_figs/V4ptplot.eps}\hfill \includegraphics[width=0.5\textwidth]{ch11_figs/V6ptplot.eps}\\[1mm] \includegraphics[width=0.5\textwidth]{ch11_figs/V4eqplot.eps}\hfill \includegraphics[width=0.5\textwidth]{ch11_figs/V6eqplot.eps}\\[1mm] \includegraphics[width=0.5\textwidth]{ch11_figs/V4kinkplot.eps}\hfill \includegraphics[width=0.5\textwidth]{ch11_figs/V6kinkplot.eps} \caption{Illustrations of (a) a continuous (second-order) phase transition under the $\phi^4$ potential, Eq.~\eqref{eq:V4_pt} and (b) a discontinuous (first-order) phase transition under the $\phi^6$ potential, Eq.~\eqref{eq:V6_pt}. In the literature it is common to shift $V(\phi)\mapsto V(\phi) - V_0(\alpha_2)$ to ensure that the kink solution in Eq.~\eqref{eq:V4_kink} exists for each $\alpha_2$ at which there are two degenerate minima, but we have chosen not to do that here in order to not overcomplicate the picture. Panels (c) and (d) show the respective bifurcations in the minima $\phi_0$ such that $V'(\phi_0)=0$ and $V''(\phi_0)>0$ as a function of the parameter $\alpha_2$; dashed and solid curves denote the local and global minima, respectively. 
Panels (e) and (f) show the kink and half-kink solutions, Eqs.~\eqref{eq:V4_kink} and \eqref{eq:V6_hkink}, respectively, at the critical value $\alpha_2=1$ when the potentials in (a) and (b) have two or three degenerate minima.} \label{fig:46pt_combo} \end{figure} \jcmindex{\myidxeffect{T}!Triple well potential} \jcmindex{\myidxeffect{F}!First-order phase transition} To illustrate the first-order phase transition, consider the sextic (triple well) potential \begin{equation}\label{eq:V6_pt} V(\phi) = \frac{1}{2}\phi^6 - \phi^4 + \frac{\alpha_2}{2} \phi^2\,, \end{equation} where $\alpha_2$ is again a parameter that might depend on, e.g., the temperature or pressure of the system. Varying $\alpha_2$ leads to structural changes of the potential in Eq.~\eqref{eq:V6_pt}, as shown in Fig.~\ref{fig:46pt_combo}(b). Note that at $\alpha_2=1$, Eq.~\eqref{eq:V6_pt} can be rewritten as $V(\phi) = \frac{1}{2}\phi^2(\phi^2-1)^2$. Specifically, the minima of this potential are \begin{equation}\label{eq:V6_equil} \phi_0 = 0,\quad \phi_0 = \left\{\pm \frac{\sqrt{\sqrt{4-3 \alpha_2}+2}}{\sqrt{3}} \,\right\} \quad (\alpha_2 < 4/3) \,. \end{equation} For $\alpha_2>1$, the two non-zero minima are \emph{local}, while $\phi_0=0$ is the \emph{global} minimum; vice versa for $\alpha_2<1$. At $\alpha_2=1$, the exchange of global minima is \emph{sudden}, i.e., the global minima at $|\phi_0|\ne0$ do not coalesce with the one at $\phi_0=0$ as in the $\phi^4$ example above. This non-smooth process is characteristic of discontinuous, i.e., first-order, phase transitions. (In Fig.~\ref{fig:46pt_combo}(d), dashed and solid curves denote the local and global minima values of $\phi_0$, respectively.) 
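This bifurcation structure is easy to confirm numerically; a minimal sketch (assuming NumPy; the sampled $\alpha_2$ values are our own) checks that Eq.~\eqref{eq:V6_equil} gives critical points of Eq.~\eqref{eq:V6_pt} and that the global minimum switches discontinuously across $\alpha_2=1$:

```python
import numpy as np

def V(phi, a2):
    # Eq. (eq:V6_pt)
    return 0.5 * phi**6 - phi**4 + 0.5 * a2 * phi**2

def Vp(phi, a2):
    return 3 * phi**5 - 4 * phi**3 + a2 * phi

def nonzero_minimum(a2):
    # positive member of Eq. (eq:V6_equil); exists for a2 < 4/3
    return np.sqrt((2 + np.sqrt(4 - 3 * a2)) / 3)

# the nonzero equilibria are critical points of V ...
crit = max(abs(Vp(nonzero_minimum(a2), a2)) for a2 in (0.9, 1.0, 1.1))
# ... and the global minimum switches discontinuously across alpha_2 = 1
below = V(nonzero_minimum(0.9), 0.9)  # negative: nonzero minima beat V(0) = 0
above = V(nonzero_minimum(1.1), 1.1)  # positive: phi_0 = 0 is now global
```

At $\alpha_2=1$ all three minima are degenerate with $V(\phi_0)=0$, in contrast with the smooth coalescence of the quartic case.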
For $\alpha_2=1$ all three minima of the potential become degenerate and satisfy $V(\phi_0) = V'(\phi_0)=0$, thus domain wall (kink) solutions, which satisfy Eqs.~\eqref{1.1b} and \eqref{eq:V6_pt}, exist connecting a pair of equilibria (either $\phi_0=-1$ and $\phi_0=0$ or $\phi_0=0$ and $\phi_0=+1$): \begin{equation}\label{eq:V6_hkink} \phi_K(x) = \begin{cases} \displaystyle-\frac{1}{\sqrt{\mathrm{e}^{2 x}+1}} = - \sqrt{ \frac{1 - \tanh x}{2} }\,, &\quad \text{half-kink from $-1$ to $0$},\\ \displaystyle\frac{1}{\sqrt{\mathrm{e}^{-2 x}+1}} = \sqrt{ \frac{1 + \tanh x}{2} }\,, &\quad \text{half-kink from $0$ to $+1$}, \end{cases} \end{equation} as illustrated in Fig.~\ref{fig:46pt_combo}(f). Equation~\eqref{eq:V6_hkink} represents the well-known $\phi^6$ \emph{half-kink} \cite{Khare79}. Finally, as $\alpha_2\to(4/3)^-$, the two non-zero minima disappear entirely (once again suddenly) leaving a single, global minimum at $\phi_0=0$, as shown in Fig.~\ref{fig:46pt_combo}(d). \jcmindex{\myidxeffect{H}!Half-kink solution} Beyond these two introductory examples of a second- and a first-order phase transition, similar reasoning can be applied to show that an octic, $\phi^8$, field theory can model a second-order transition followed by a first-order transition \cite{kcs,R5a,R5b,R6}. Meanwhile, two successive first-order transitions can be modeled by a $\phi^{10}$ field theory \cite{kcs,R7}. But, to describe three successive (first- and/or second-order) transitions one must resort to a $\phi^{12}$ field theory \cite{kcs,R8,R9}. Continuing in the same vein, for four or more successive transitions, a $\phi^{14}$ or higher-order (e.g., $\phi^{4n}$ or $\phi^{4n+2}$ with $n > 3$) field theory must be employed. So far we have implicitly assumed that, as stated at the beginning of the chapter, we deal with neutral scalar (single-component) field theories. 
Beyond the scope of this chapter but also relevant is the fact that {\it multi-component} $\phi^4$ or $\phi^6$ field theories can also describe successive phase transitions \cite{R10}. \jcmindex{\myidxeffect{S}!Successive phase transitions} \jcmindex{\myidxeffect{H}!Higher-order field theory} Higher-order (specifically, higher than sextic) field theories are also needed to capture \emph{all} symmetry-allowed phases in a transition \cite{R5a,R5b}. Certain crystals undergo two successive ferroelastic (i.e., strain as the order parameter) or ferroelectric (i.e., electric polarization as the order parameter) first-order transitions \cite{R7}. In particle physics, massless mesons interacting via long-range forces are modeled with the $\phi^8$ field theory \cite{Lohe79}. Additionally, there are examples of isostructural transitions (i.e., the crystal symmetry does not change but the lattice constant changes), which can be described by the $\phi^8$ field theory \cite{R11}. In biophysics, chiral protein crystallization is modeled via a $\phi^{10}$ field theory \cite{R12}. Similarly, the transitions in certain piezoelectric (i.e., stress-induced polarization) materials with perovskite structure are modeled by the $\phi^{12}$ field theory \cite{R8,R9}. \section{$\phi^6$ field theory} As we have just discussed in Sec.~\ref{sec:need_hoft}, the location of the global minimum of a triple well $\phi^6$ potential abruptly (discontinuously) jumps from $\phi_0=0$ to a pair of finite values $|\phi_0|\ne0$ through the phase transition (as $\alpha_2$ goes through $1$ in the example of Eq.~\eqref{eq:V6_pt} above). At the phase transition point, the potential has three global minima. This type of phase transition is ubiquitous in nature: from cosmological transitions in the early Universe \cite{Vilen} to solid-solid transformations from one crystal structure to another \cite{R5a,R5b}.
Here, it is relevant to mention the significance of the latter from the thermodynamics point of view (see also \cite[\S5]{Bishop80}): i.e., when the field $\phi(x,t)$ possesses a large number of kinks driven by white noise and balanced by dissipation. At such a discontinuous (first-order) phase transition, described by the $\phi^6$ field theory, we expect that the field's self-correlation function will yield finite correlation lengths at the transition temperature, which is associated with latent heat in classical thermodynamics \cite{Reichl}. \subsection{Exact kink and periodic solutions, asymptotic kink interaction} \label{sec:phi6_kink_periodic_etc} In the case of three degenerate minima, a $\phi^6$ potential can always be factored into the form $\phi^2(\phi^2-a^2)^2$, up to scaling factors, and then the exact domain-wall solutions are the half-kinks in Eq.~\eqref{eq:V6_hkink}. Whether at, above or below the critical temperature ($\alpha_2=1$ for Eq.~\eqref{eq:V6_pt}) at which the system exhibits three stable equilibria, further exact domain-wall solutions exist near the ``wells'' of a $\phi^6$ potential \cite{SanatiSaxenaJPA}. To illustrate these ideas, let us return to Eq.~\eqref{1.1b}. Multiplying by $\phi_x$ and forming a complete differential, we may integrate both sides to get the \emph{first integral of motion}: \begin{equation}\label{1.1c} (\phi_x)^2 = 2[V(\phi) - \mathfrak{C}]\,, \end{equation} where $\mathfrak{C}$ is a constant of integration. Assuming $\phi \to \phi_0$ smoothly as $|x|\to\infty$, where $\phi_0$ is a degenerate minimum of $V$ such that $V(\phi_0)=V'(\phi_0)=0$ and $V''(\phi_0)>0$, fixes the integration constant as $\mathfrak{C}=0$. Then, a second integration, taking $V$ to be as in Eq.~\eqref{eq:V6_pt} with $\alpha_2=1$, leads to the solution $\phi_K(x)$ in Eq.~\eqref{eq:V6_hkink}. \jcmindex{\myidxeffect{K}!Kink lattice solution} But what if we do not impose the approach to equilibrium as a boundary condition?
Then, what happens when $\mathfrak{C}\ne 0$? To understand this case, note that we may still separate variables in the first-order ODE in Eq.~\eqref{1.1c} to get the implicit relation: \begin{equation}\label{eq:V6_int} x = \int \frac{d\phi}{\sqrt{2[V(\phi) - \mathfrak{C}]}}\,, \end{equation} where we have yet to specify the limits of integration (hence, no need for a second constant of integration). For clarity, we restrict ourselves to the positive root in Eq.~\eqref{eq:V6_int}. As discussed in \cite{Falk2,SanatiSaxenaJPA}, by picking the integration limits to be consecutive zeros of $V-\mathfrak{C}$, with a maximum of $V$ in between, the right-hand side of Eq.~\eqref{eq:V6_int} becomes an \emph{elliptic integral} \cite{BF54} in the variable $\varphi = \phi^2$, and \emph{Jacobi's elliptic functions} \cite{as} (see also Sec.~\ref{sec:PT_solutions}) can be used to solve for $\phi$. This sets a range of physically admissible choices for $\mathfrak{C}$, namely those lying between the relevant local maximum and local minimum values of $V(\phi)$.
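To make the admissible-$\mathfrak{C}$ discussion concrete, one can integrate Eq.~\eqref{1.1b} starting with $\phi_x=0$ at a zero of $V-\mathfrak{C}$ and verify that Eq.~\eqref{1.1c} is conserved while the solution oscillates between consecutive turning points; a minimal sketch assuming SciPy, with the illustrative choices (ours, not from \cite{SanatiSaxenaJPA}) of Eq.~\eqref{eq:V6_pt} at $\alpha_2=1$ and $\mathfrak{C}=0.03$:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def V(phi):
    # Eq. (eq:V6_pt) at alpha_2 = 1
    return 0.5 * phi**2 * (phi**2 - 1) ** 2

def Vp(phi):
    return 3 * phi**5 - 4 * phi**3 + phi

C = 0.03  # admissible: between V = 0 at the minima and V = 2/27 at the barrier

# left turning point: zero of V - C below the barrier at phi = 1/sqrt(3)
phi1 = brentq(lambda p: V(p) - C, 1e-9, 1 / np.sqrt(3))

# integrate phi_xx = V'(phi), starting from the turning point with phi_x = 0
sol = solve_ivp(lambda x, y: [y[1], Vp(y[0])], (0.0, 40.0), [phi1, 0.0],
                rtol=1e-10, atol=1e-12, max_step=0.05)
phi, phi_x = sol.y
# Eq. (1.1c): (phi_x)^2/2 - V(phi) = -C must hold along the whole orbit
drift = np.max(np.abs(0.5 * phi_x**2 - V(phi) + C))
```

The resulting $\phi(x)$ oscillates periodically between the two consecutive zeros of $V-\mathfrak{C}$ that straddle the maximum of $V$, which is precisely the kink-lattice scenario discussed next.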
\jcmindex{\myidxeffect{E}!Elliptic integral} Figure~\ref{fig:V6_kink_lattice}(b) visually summarizes these so-called \emph{kink lattice} solutions obtained in \cite{SanatiSaxenaJPA} by performing the integration in Eq.~\eqref{eq:V6_int} with $V(\phi) = \frac{1}{6}\phi^6 -\frac{1}{4}\phi^4 + \frac{\alpha_2}{2}\phi^2$ and inverting the expression in terms of the Jacobi elliptic functions $\sn$ and $\dn$: \begin{subequations}\begin{align} \phi_{KL,1}(x) &= \frac{\phi_1}{\sqrt{1-A^2\sn^2(\beta x \,|\, m)}}\,,\\ \phi_{KL,2}(x) &= \frac{\phi_2\dn(\beta x \,|\, m)}{\sqrt{1-B^2\sn^2(\beta x \,|\, m)}}\,,\\ \phi_{KL,3}(x) &= \frac{\phi_1 C \sn(\bar{\beta} x \,|\, \bar{m})}{\sqrt{1-C^2\sn^2(\bar{\beta} x \,|\, \bar{m})}}\,, \end{align}\label{eq:V6_lattice_kinks}\end{subequations} where the elliptic moduli $m,\bar{m}\in[0,1]$, and the constants $A$, $B$, $C$, $\beta$ and $\bar{\beta}$ are related to the roots $\phi_{1,2,3}$, satisfying $V(\phi_{1,2,3}) = \mathfrak{C}$ (see Fig.~\ref{fig:V6_kink_lattice}(a)), as \begin{multline} A^2 = \frac{\phi_2^2-\phi_1^2}{\phi_2^2},\quad B^2 = \frac{\phi_2^2-\phi_1^2}{\phi_3^2-\phi_1^2},\quad C^2 = \frac{\phi_2^2}{\phi_1^2+\phi_2^2}, \quad \beta^2 = {\tfrac{1}{3}{\phi_2^2(\phi_3^2-\phi_1^2)}}, \\ \bar{\beta}^2 = {\tfrac{1}{3}{\phi_3^2(\phi_2^2+\phi_1^2)}},\quad m = \frac{\phi_3^2(\phi_2^2-\phi_1^2)}{\phi_2^2(\phi_3^2-\phi_1^2)},\quad \bar{m} = \frac{\phi_2^2(\phi_3^2+\phi_1^2)}{\phi_3^2(\phi_2^2+\phi_1^2)}\,. \end{multline} The solutions in Eqs.~\eqref{eq:V6_lattice_kinks} are exact and \emph{periodic} with period $4K(m)/\beta$, where $K(m)$ is the complete elliptic integral of the first kind \cite{as}. In cases 2, 4 and 5 (Fig.~\ref{fig:V6_kink_lattice}), these periodic solutions reduce to distinct kink solutions in the limit $m\to1$ (or $\bar{m}\to 1$, as the case might be).
\jcmindex{\myidxeffect{J}!Jacobi elliptic function} \begin{figure}[t] \center \includegraphics[width=\textwidth]{ch11_figs/phi6_lattice_kinks.pdf} \caption{(a) A $\phi^6$ potential (i.e., the Landau free energy density of the field theory) at different temperatures (panels 1--5); specifically $V(\phi) = \frac{1}{6}\phi^6 -\frac{1}{4}\phi^4 + \frac{\alpha_2}{2}\phi^2$ for different $\alpha_2$. $\mathfrak{C}$ is the constant of integration in Eq.~\eqref{1.1c}, and $\pm\phi_{1,2,3}$ are the solutions to $V(\phi) = \mathfrak{C}$. (b) Illustrations of the corresponding kink lattices: case 1 [$\frac{3}{16}<\alpha_2<\frac{1}{4}$, $\phi_{KL,1}(x)$], case 2 [$\alpha_2=\frac{3}{16}$, $\phi_{KL,1}(x)$ as dashed and $\phi_{KL,2}(x)$ as solid], case 3 [$0<\alpha_2<\frac{3}{16}$, $\phi_{KL,2}(x)$], cases 4--5 [$\alpha_2<\frac{3}{16}$ or $\alpha_2<0$ \emph{but} $\pm\mathrm{i}\phi_1$ is now a pair of imaginary solutions, $\pm\phi_{KL,3}(x)$ as dashed and solid]. From [M.\ Sanati and A.\ Saxena, ``Half-kink lattice solution of the $\phi^6$ model,'' J.\ Phys.\ A: Math.\ Gen.\ 32, 4311--4320 (1999)] $\copyright$ IOP Publishing. Reproduced with permission. All rights reserved.} \label{fig:V6_kink_lattice} \end{figure} Finally, the asymptotic force of interaction between $\phi^6$ kinks can be obtained in the usual way via Manton's method \cite{MantonSut,manton_npb} from the exponential tail asymptotics as shown in, e.g., \cite{SanatiSaxenaJPA} (the result is also mentioned in \cite{dorey}). 
\jcmindex{\myidxeffect{M}!Manton's method} \subsection{Linearization about a kink (internal modes) and linearization about an equilibrium (phonon modes)} \label{sec:V6_linearize} \jcmindex{\myidxeffect{I}!Internal mode} Linearizing the field theory about a kink solution, i.e., substituting $\phi(x,t) = \phi_K(x) + \delta \mathrm{e}^{\mathrm{i}\omega_i t}\chi_i(x)+\mathrm{c.c.}$ into Eq.~\eqref{eq:nl_wave_eq} and keeping terms up to $\mathcal{O}(\delta)$, yields a standard Schr\"odinger-type eigenvalue problem \cite{Bishop80}: \begin{equation}\label{eq:EVP_linearized} \left[-\frac{d^2}{dx^2} + V''\big(\phi_K(x)\big)\right] \chi_i = \omega_i^2 \chi_i, \end{equation} where $\omega_i$ is the temporal oscillation frequency of the $i$th linearization mode, and $\chi_i(x)$ is the eigenfunction giving this mode's spatial structure. The traditional $\phi^4$ symmetric double well's kink, e.g., as in Eq.~\eqref{eq:V4_kink}, possesses a translational mode, $\omega_0=0$, and an internal mode at the isolated eigenvalue $\omega_1 = \sqrt{3/2}$ (the continuous spectrum begins at $\omega = \sqrt{2}$). In fact, it is even possible to write down $\chi_1(x)$ analytically \cite{Bishop80}. Meanwhile, the standard $\phi^6$ symmetric triple well's half-kink, e.g., as in Eq.~\eqref{eq:V6_hkink}, \emph{does not} possess such an internal mode \cite{Khare79,dorey}. \jcmindex{\myidxeffect{S}!Schr\"odinger-type eigenvalue problem} However, the issue of whether a single internal mode exists or not is hardly the whole story in higher-order field theories. As we discuss below, there are $\phi^6$ models with \emph{controllably} many internal modes.
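The quoted $\phi^4$ spectrum is straightforward to reproduce numerically; a minimal sketch (assuming NumPy; the grid parameters are our own illustrative choices) diagonalizes a finite-difference discretization of Eq.~\eqref{eq:EVP_linearized} for the kink in Eq.~\eqref{eq:V4_kink}:

```python
import numpy as np

# V''(phi) = 3 phi^2 - 1 for Eq. (eq:V4_pt) with alpha_2 = 1,
# evaluated along the kink phi_K(x) = tanh(x / sqrt(2))
L, N = 20.0, 2000
x = np.linspace(-L, L, N)
h = x[1] - x[0]
Vpp = 3.0 * np.tanh(x / np.sqrt(2)) ** 2 - 1.0

# -d^2/dx^2 (central differences, Dirichlet ends) + V''(phi_K(x))
H = (np.diag(2.0 / h**2 + Vpp)
     + np.diag(np.full(N - 1, -1.0 / h**2), 1)
     + np.diag(np.full(N - 1, -1.0 / h**2), -1))
omega_sq = np.linalg.eigvalsh(H)[:3]
# expect: ~0 (translational), ~3/2 (internal), then the continuum edge ~2
```

The lowest three eigenvalues come out near $\omega^2 \approx 0$, $3/2$ and $2$, in line with the discussion above; the modes above $\omega^2=2$ are box-discretized continuum states.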
Meanwhile, much like the ``classical'' $\phi^4$ and $\phi^6$ pictures, for a $\phi^8$ field theory with four degenerate minima, specifically $V(\phi) = (\phi^2-a^2)^2(\phi^2-b^2)^2$ with $a,b=(\sqrt{3}\mp1)/2$, which has both full- and half-kink solutions \cite{kcs}, the full-kink has an internal mode ($\omega_1 \approx 1.645$) \cite{GaLeLiconf} while the half-kink does not \cite{GaLeLi}. The possibility of power-law (as opposed to exponential) tails in higher-order field theories adds further complications. The kink of the $\phi^8$ model with two degenerate minima, specifically $V(\phi) = (\phi^2+a^2)^2(\phi^2-b^2)^2$ with $a=4/5$ and $b=1$, is reported to possess three internal modes ($\omega_1\approx2.068$, $\omega_2\approx3.192$ and $\omega_3\approx3.689$) \cite{GaLeLi}, while kink solutions with power-law tails of other sextic and non-polynomial models possess only the zero mode \cite{Bazeia18}. \jcmindex{\myidxeffect{P}!Phonon modes} Meanwhile, phonon modes, i.e., linear excitations about an equilibrium state $\phi_0$, are a simpler matter. Linearizing the field theory about a minimum of $V$, i.e., substituting $\phi(x,t) = \phi_0 + \delta \mathrm{e}^{\mathrm{i}(qx-\omega_q t)}+\mathrm{c.c.}$ into Eq.~\eqref{eq:nl_wave_eq} and keeping terms up to $\mathcal{O}(\delta)$, yields \begin{equation}\label{eq:phonon_dr} \omega_q^2 - q^2 = V''(\phi_0) \,, \end{equation} for a phonon mode with temporal frequency $\omega_q$ and spatial wave number $q$. For the example triple well $\phi^6$ potential in Eq.~\eqref{eq:V6_pt}, we have $V''(\phi_0) = 15 \phi_0^4 - 12 \phi_0^2 + \alpha_2$.
Substituting the equilibria $\phi_0$ from Eq.~\eqref{eq:V6_equil} into the latter gives us \begin{equation} V''(\phi_0) = \left\{ \alpha_2,~~ \frac{4}{3} \left(2 \sqrt{4-3 \alpha_2}+4-3 \alpha_2\right),~~ \frac{4}{3} \left(2 \sqrt{4-3 \alpha_2}+4-3 \alpha_2\right) \right\}, \end{equation} where the second and third values obviously hold only for $\alpha_2\le4/3$ (i.e., as long as those minima exist). In particular, at $\alpha_2=1$ for the case of three degenerate minima, we have $V''(\phi_0) = \{ 1,4,4 \}$. Since in all cases we have $V''(\phi_0)\ne0$, our model $\phi^6$ field theory has well-defined phonon modes along an optical branch (i.e., $\omega_q \not\to 0$ as $|q|\to0$). On the other hand, in certain special cases of higher-than-sixth-order field theories (e.g., $\phi^8$), a degeneracy occurs and $V''(\phi_0)=0$, leading to the possibility of \emph{nonlinear phonons}. Nonlinear (or anharmonic) phonons represent large-amplitude field oscillations around the minima of the potential (which do not, however, cross the adjacent barriers). In such a case, more terms must be kept in the linearization beyond the vanishing $V''(\phi_0)$ term. Finally, we note that, by Weyl's theorem, the dispersion relation given by Eq.~\eqref{eq:phonon_dr} describes the continuous spectrum for \emph{both} linearization about a uniform equilibrium and linearization about a coherent structure such as a kink. \subsection{Collisional dynamics of $\phi^6$ kinks and multikinks} {Chapters 2 and 3} discuss the ``classical picture'' of kink--antikink collisions in the $\phi^4$ model as developed and described in the large body of work emanating from \cite{csw,anninos,belova}. In particular, {Chapter 3} discusses some of the recently uncovered twists in this classical picture, as far as the collective-coordinate approach is concerned, and how to resolve them.
{Chapter 12} further delves into the notions of fractal structures in the resonance windows and the finer details of their study under the collective-coordinates (variational) approximation. Thus, in this subsection we simply mention one of the more salient aspects of studying kink collisions in higher-order field theories. Specifically, the availability of multiple stable equilibria in the system, which allows for the existence of half-kinks (recall Fig.~\ref{fig:46pt_combo}(f)), opens the possibility of studying collisions between kinks each connecting a \emph{different} pair of equilibria (also called ``topological sectors''). Whereas in the prototypical $\phi^4$ field theory under the potential in Eq.~\eqref{eq:V4_pt} (with $\alpha_2=1$) we only have a kink (given in Eq.~\eqref{eq:V4_kink}) connecting $-1$ to $+1$ or an antikink connecting $+1$ to $-1$, in the example $\phi^6$ field theory under the potential in Eq.~\eqref{eq:V6_pt} (with $\alpha_2=1$) we instead have \emph{two} half-kinks (given in Eq.~\eqref{eq:V6_hkink}) and their corresponding antikinks. Clearly, this key difference between the $\phi^4$ and $\phi^6$ models gives rise to a potentially far richer phenomenology of kink--kink and kink--antikink collisions. For example, the collisional dynamics of a ``staircase'' half-kink+half-kink ansatz, which is formed by superimposing the half-kink from $-1$ to $0$ onto the half-kink from $0$ to $+1$, suitably well separated as shown in Fig.~\ref{fig:CL_G6_combo}(a), were studied via the classical collective-coordinate approach in \cite{gani1}, with an updated treatment (resolving certain quantitative discrepancies) given in \cite{weigel1,weigel2}. These types of kink+kink collisions are obviously not possible in the $\phi^4$ model, where one typically studies kink--antikink collisions only. The $\phi^6$ collision phenomenology is, thus, more subtle.
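To make the ``staircase'' construction concrete, the following minimal sketch (assuming NumPy; the half-separation is an illustrative choice of ours, and this checks only the static ansatz, not the collision dynamics) superimposes the two half-kinks of Eq.~\eqref{eq:V6_hkink} and confirms that, when well separated, the configuration satisfies Eq.~\eqref{1.1b} up to exponentially small and discretization errors:

```python
import numpy as np

def Vp(phi):
    # V'(phi) for Eq. (eq:V6_pt) at alpha_2 = 1
    return 3 * phi**5 - 4 * phi**3 + phi

def half_kink(x):
    # Eq. (eq:V6_hkink), branch from 0 to +1
    return np.sqrt(0.5 * (1 + np.tanh(x)))

X = 8.0  # half-separation of the two half-kink cores (illustrative)
x = np.linspace(-30.0, 30.0, 6001)
h = x[1] - x[0]
# staircase: half-kink from -1 to 0 centered at -X, plus 0 to +1 at +X
phi = -half_kink(-(x + X)) + half_kink(x - X)

phi_xx = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / h**2
residual = np.max(np.abs(phi_xx - Vp(phi[1:-1])))
```

The ansatz interpolates $-1 \to 0 \to +1$, and the residual of Eq.~\eqref{1.1b} shrinks exponentially as the separation $2X$ grows.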
Further explorations of multikink configurations, meaning various superpositions of half-kinks in some prescribed arrangements, were presented in \cite{MGSDJ}. \jcmindex{\myidxeffect{C}!Collective-coordinate approach} \jcmindex{\myidxeffect{C}!Collisions of kinks} \jcmindex{\myidxeffect{P}!Parametrically deformed $\phi^6$} \begin{figure}[b] \center \includegraphics[width=0.5\textwidth]{ch11_figs/G6combo.eps}\hfill \includegraphics[width=0.5\textwidth]{ch11_figs/CLcombo.eps} \caption{Two types of ``staircase'' multikink-type ans\"atze studied in the literature. (a) The example $\phi^6$ field theory, Eq.~\eqref{eq:V6_pt} at $\alpha_2=1$ exhibiting three degenerate minima (see inset), allowing for the superposition of two well-separated half-kinks from Eq.~\eqref{eq:V6_hkink}. (b) The Christ--Lee model, Eq.~\eqref{eq:V_CL} at $\epsilon=0$ exhibiting two degenerate minima (see inset and note the middle, non-degenerate minimum ``lifting off'' from the origin) and a ``bound pair'' exact kink solution given in Eq.~\eqref{eq:CL_bp_kink}.} \label{fig:CL_G6_combo} \end{figure} \jcmindex{\myidxeffect{B}!Bound pair kink} A related possibility in $\phi^6$ field theories is exact kink solutions that look like a ``bound pair'' of kinks (see Fig.~\ref{fig:CL_G6_combo}(b)), similar to the ``staircase'' kink in Fig.~\ref{fig:CL_G6_combo}(a) discussed above. Such kinks can be found in the parametric $\phi^6$ model introduced by Christ and Lee \cite{CL}, specifically an example potential (fixing some of the extra parameters from \cite{CL}) of this form is \begin{equation}\label{eq:V_CL} V(\phi) = \frac{1}{8(1+\epsilon^2)} (\phi^2+\epsilon^2)(\phi^2-1)^2\,. \end{equation} The corresponding exact kink solution to Eqs.~\eqref{1.1b} and \eqref{eq:V_CL} (see also \cite{DDKCS}) is \begin{equation}\label{eq:CL_bp_kink} \phi_K(x) = \frac{\epsilon\sinh(x/2)}{\sqrt{1 + \epsilon^2 + [\epsilon\sinh(x/2)]^2}}\,. 
\end{equation} Notice that as $\epsilon\to0^+$ or $\epsilon\to\infty$, the potential in Eq.~\eqref{eq:V_CL} takes the form of the prototypical triple well $\phi^6$ potential (i.e., Eq.~\eqref{eq:V6_pt} with $\alpha_2=1$, suitably normalized) or the prototypical double well $\phi^4$ potential (i.e., Eq.~\eqref{eq:V4_pt} with $\alpha_2=1$, suitably normalized), respectively, discussed above. The context of the Christ--Lee model is not condensed matter physics or phase transitions; rather, it was introduced in high-energy theoretical physics as a ``bag'' model in which the role of quarks within a hadron is played by the domain wall solutions of the field theory. For the Christ--Lee model, with potential given by Eq.~\eqref{eq:V_CL}, studying the collisional dynamics of the kink solutions from Eq.~\eqref{eq:CL_bp_kink} yields highly nontrivial results (as compared to the ``classical picture'' of $\phi^4$ kink--antikink collisions). Specifically, as the parameter $\epsilon$ is tuned in the Christ--Lee model, one can \emph{control} the number of internal modes (i.e., non-zero isolated eigenvalues of Eq.~\eqref{eq:EVP_linearized}) of the staircase-like kink. Although it has long been posited \cite{csw} that the internal mode of the kink's linearization (recall Sec.~\ref{sec:V6_linearize}) to a large extent sets the collisional dynamics, recent results using the $\phi^6$ model \cite{dorey} have proposed an additional mechanism unrelated to the internal mode. After the work in \cite{dorey}, it was further shown in \cite{DDKCS} that the resonance window structure exhibits quite counterintuitive behaviors as the number of internal modes in the Christ--Lee model under Eq.~\eqref{eq:V_CL} is tuned. Specifically, increasing the number of internal modes does \emph{not} lead to more complex resonance structures with ever more multi-bounce windows. Instead, for a wider range of collision velocities, the staircase-like kinks simply scatter elastically off to infinity.
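Like the half-kinks before, the Christ--Lee kink can be checked against the first integral, Eq.~\eqref{1.1c} with $\mathfrak{C}=0$; a minimal numerical sketch assuming NumPy, at the illustrative value $\epsilon=0.5$:

```python
import numpy as np

eps = 0.5  # illustrative value of the Christ--Lee parameter

def V_CL(phi):
    # Eq. (eq:V_CL)
    return (phi**2 + eps**2) * (phi**2 - 1) ** 2 / (8 * (1 + eps**2))

def kink_CL(x):
    # Eq. (eq:CL_bp_kink)
    s = eps * np.sinh(x / 2)
    return s / np.sqrt(1 + eps**2 + s**2)

x = np.linspace(-12.0, 12.0, 4001)
phi = kink_CL(x)
phi_x = np.gradient(phi, x)
# first integral with C = 0: (phi_x)^2 should equal 2 V(phi)
max_err = np.max(np.abs(phi_x**2 - 2 * V_CL(phi)))
```

For small $\epsilon$ the profile develops the long flat ``shelf'' near $\phi=0$ visible in Fig.~\ref{fig:CL_G6_combo}(b), i.e., the bound pair of half-kinks.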
\jcmindex{\myidxeffect{M}!Multi-bounce windows} \jcmindex{\myidxeffect{R}!Resonance windows} \begin{figure} \center \includegraphics[width=0.6\textwidth]{ch11_figs/dorey_bounce.pdf}\hfill \includegraphics[width=0.4\textwidth]{ch11_figs/vin_vs_vout_eps05.eps}\\ \hspace{4cm}(a)\hfill (b)\hspace{2cm} \caption{Resonance window maps, based on direct numerical simulation of Eq.~\eqref{eq:nl_wave_eq}, of the final kink velocity ($v_f$ or $v_{out}$) upon a collision of prescribed initial kink velocities ($v_i$ or $v_{in}$). (a) The traditional $\phi^6$ model (i.e., Eq.~\eqref{eq:V6_pt} with $\alpha_2=1$) and no internal modes for the half-kink (i.e., Eq.~\eqref{eq:V6_hkink}). (b) The parametric $\phi^6$ model (i.e., Eq.~\eqref{eq:V_CL} with $\epsilon=0.5$) and four internal modes for the staircase-like kink (i.e., Eq.~\eqref{eq:CL_bp_kink}); colors indicate how many bounces it took to escape (black for one, blue for two, green for three, red for four). Panel (a) is reprinted with permission from [Patrick Dorey, Kieran Mersh, Tomasz Romanczukiewicz, and Yasha Shnir, Physical Review Letters, 107, 091602, 2011] \cite{dorey} $\copyright$ The American Physical Society. Panel (b) is reprinted (without modification) from \cite{DDKCS}, $\copyright~2017$ The Authors of \cite{DDKCS}, under the CC BY 4.0 license.} \label{fig:res_wind_4_vs_CL} \end{figure} \jcmindex{\myidxeffect{I}!Internal mode} Figure~\ref{fig:res_wind_4_vs_CL} shows a comparison between (a) the ``classical'' $\phi^6$ resonance window (kink with one internal mode) structure of kink collisions and (b) the parametric $\phi^6$ theory under Eq.~\eqref{eq:V_CL} with $\epsilon=0.5$ (i.e., four internal modes of the staircase-like kink). The study of $\phi^4$ kink interactions and resonances is a time-honored subject that has led to elegant demonstrations of Hamiltonian dynamics and even mechanical demonstrations of the two-bounce windows \cite{goodman_chaos}. 
Following \cite{goodman}, given initially symmetrically located kinks with equal and opposite velocities $v_{in}$, a direct numerical simulation of Eq.~\eqref{eq:nl_wave_eq} is performed, colliding the kinks. If they ``escape'' the collision going off to infinity, the escape velocity $v_{out}$ is recorded and plotted. Clearly, only for some ranges of $v_{in}$ is there a computable $v_{out}$. In the ranges of $v_{in}$ for which $v_{out}$ does not exist, the kinks continue to bounce back and forth, forming a bound pair of sorts; interspersed among these capture ranges are the \emph{resonance} (multi-bounce) windows, in which the kinks ultimately escape after two or more bounces. Counterintuitively, the structure of these resonance windows in the absence of an internal mode, Fig.~\ref{fig:res_wind_4_vs_CL}(a), is far more complex than in the presence of four internal modes, Fig.~\ref{fig:res_wind_4_vs_CL}(b). We will not delve into this mystery further here because there remain many open problems about kink interactions in $\phi^6$ (and even higher-order) field theories. \subsection{Statistical mechanics of the $\phi^6$ theory, including QES results} \label{sec:f6_PDF} Equation~\eqref{1.1b} subject to the $\phi^6$ potential, e.g., as given in Eq.~\eqref{eq:V6_pt}, represents a highly anharmonic system. Therefore, the number of nonlinear (e.g., soliton and breather) and linear (e.g., phonon) elementary excitations is thermally controlled. In order to determine the thermal density of these excitations, and their individual contributions to correlation functions (and other thermodynamic quantities such as the specific heat and entropy), one must investigate their statistical mechanics. In one dimension, entropy considerations dictate the presence of kinks. Thus, the interactions between kinks and phonons and possibly other excitations play a crucial role in the overall thermodynamics of the process. This question has been of significant interest in condensed matter physics for the past 40 years \cite[\S5]{Bishop80}.
The latter can be studied using a probability density function (PDF), which can be calculated either analytically via the path-integral approximation scheme \cite{SSF72,KS75} or numerically by way of Langevin dynamics \cite{MG90}. In these ways, one can obtain equilibrium properties: not just the PDF, but also the presence of heterophase fluctuations in the vicinity of a phase transition, the field configuration(s), the average total kink-number density, correlation functions, structure factors, the specific heat, the internal energy and the entropy. \jcmindex{\myidxeffect{L}!Langevin dynamics} \jcmindex{\myidxeffect{K}!Kink field thermodynamics} \jcmindex{\myidxeffect{T}!Transfer-operator approach} \jcmindex{\myidxeffect{S}!Statistical mechanics of kinks} The $\phi^4$ model and its attendant kink field have been extensively studied in the literature using techniques such as the path integral formalism \cite{SSF72,KS75}. As discussed in {Chapter 4}, Langevin dynamics were also developed for computing the thermodynamic quantities of a $\phi^4$ field theory \cite{AH93,Kovner,BHL99,HL00}. For higher-order field theories, on the other hand, not much is known beyond the very preliminary results regarding $\phi^6$ in \cite{Habib}. In general, we expect a much richer phenomenology, in terms of the possible kink structures and their interactions, under higher-order field theories. \jcmindex{\myidxeffect{Q}!Quasi-exactly solvable model} An important departure of the $\phi^6$ model (and, indeed, all higher-order field theories of the form $\phi^{4n+2}$) from the $\phi^4$ model is that it leads to a \emph{quasi-exactly solvable} (QES) problem \cite{Leach} for the PDF of the kink field. This result was shown in \cite{Behera,Bruce1980} for $\phi^6$; some further exact PDFs were subsequently obtained for $\phi^{10}$ in \cite{kcs}. Let us illustrate the basic idea of this approach.
Via the path-integral (transfer operator) formalism \cite{SSF72,KS75,AH93,Kovner} (see also \cite[Sec.~10.5]{PeyDauBook}), one can reduce the statistical mechanics problem of finding the PDF to solving, once again, a Schr\"odinger-type eigenvalue problem: \begin{equation}\label{eq:schro_evp} \left[-\frac{1}{2\beta^2}\frac{d^2}{d\phi^2} + V(\phi)\right]\Psi_k = E_k\Psi_k, \end{equation} where $\beta$ is an inverse temperature, $(\Psi_k, E_k)$ is the sought after eigenpair and $V$ is the model potential. For $V(\phi)$ given in Eq.~\eqref{eq:V6_pt}, Eq.~\eqref{eq:schro_evp} is a well-known QES eigenvalue problem \cite{Ushve}. Specifically, one posits one solution to Eq.~\eqref{eq:schro_evp} (out of the infinite number of possible ones) in the form \begin{equation} \Psi_0(\phi) = \exp\left\{-\frac{1}{2}\phi^2\left(\phi^2 - K\right)\right\} , \label{eq:ground_state_phi6} \end{equation} where $E_0$ and $K$ are still to be determined. Upon substituting Eq.~\eqref{eq:ground_state_phi6} for the wavefunction $\Psi_0(\phi)$ and Eq.~\eqref{eq:V6_pt} for $V(\phi)$ into Eq.~\eqref{eq:schro_evp} and requiring that equality hold, one obtains the consistency conditions: \begin{equation} \alpha_2 = -\frac{1}{2},\quad K = 2,\quad E_0 = - \frac{1}{4},\quad \beta = 2\,. \end{equation} Thus, for the specific $\phi^6$ potential in Eq.~\eqref{eq:V6_pt} with $\alpha_2=-1/2$ and at the precise (inverse) temperature $\beta = 2$, Eq.~\eqref{eq:ground_state_phi6} represents an \emph{exact} ground state PDF (i.e., the wavefunction has no nodes) for the $\phi^6$ field theory, as long as $E_0=-1/4$ and $K=2$. Finally, the PDF for the field is just the normalized squared ground state wave function $\Psi_0^2$ from Eq.~\eqref{eq:ground_state_phi6}. 
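The consistency conditions above are readily verified symbolically; a minimal sketch assuming SymPy (the variable names are ours):

```python
import sympy as sp

phi = sp.symbols('phi', real=True)
# the consistency conditions quoted in the text
a2, K, beta, E0 = -sp.Rational(1, 2), 2, 2, -sp.Rational(1, 4)

V = phi**6 / 2 - phi**4 + a2 * phi**2 / 2   # Eq. (eq:V6_pt)
Psi0 = sp.exp(-phi**2 * (phi**2 - K) / 2)   # Eq. (eq:ground_state_phi6)

# left-hand side of Eq. (eq:schro_evp) applied to Psi0, divided by Psi0
ratio = (-sp.diff(Psi0, phi, 2) / (2 * beta**2) + V * Psi0) / Psi0
residual = sp.simplify(ratio - E0)  # vanishes identically
```

The residual simplifies to zero, confirming that $\Psi_0$ is an exact eigenfunction with $E_0=-1/4$ precisely when $\alpha_2=-1/2$, $K=2$ and $\beta=2$.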
\jcmindex{\myidxeffect{S}!Schr\"odinger-type eigenvalue problem} This is but one example for an exact solution, many other ans\"atze that would conceivably lead to further exact PDF solutions are provided in \cite{Ushve}, potentially including excited states. The exactness of these solutions (and the accuracy of the path-integral formalism) can subsequently be verified by Langevin simulations \cite{MG90,Kovner}. Other examples of QES \emph{non-polynomial} field theories that have both exact kink solutions and quasi-exactly solvable thermodynamics are discussed in \cite{SH,KHS,HKS}. Finally, we emphasize once more that the PDF for the $\phi^4$ model \emph{can only be obtained numerically} (or approximately using certain Gaussian fits) \cite{AH93,Kovner}. \jcmindex{\myidxeffect{L}!Langevin dynamics} \section{$\phi^8$ field theory} \subsection{Successive phase transitions} \label{sec:phi8_spt} \jcmindex{\myidxeffect{S}!Successive phase transitions} \jcmindex{\myidxeffect{F}!First-order phase transition} \jcmindex{\myidxeffect{S}!Second-order phase transition} A $\phi^8$ field theory can be used to describe a first-order transition followed by a second-order phase transition. That is to say, as the coefficients of the potential are varied, it is possible to observe coalescence (continuously) and global/local exchange (discontinuously) of minima. A comprehensive discussion is given in \cite[Sec.~II-A]{kcs}. Let us now illustrate, through Fig.~\ref{fig:phi8_fig1} and its discussion, how a succession of a first-order and a second-order phase transition can be described using the octic potential \begin{equation}\label{eq:V8_pt} V(\phi) = \phi^{8} - 4\phi^6 + \frac{9}{2} \phi^4 - \alpha_2 \phi^2 + \frac{1}{16}, \end{equation} where $\alpha_2$ is a free parameter that can be varied to observe the successive phase transitions (e.g., it can be considered a function of the system's temperature). 
The coefficient of $\phi^8$ in $V$ can be taken to be unity, without loss of generality, by an appropriate rescaling of the $x$-coordinate. \begin{figure}[t] \center \includegraphics[width=0.5\textwidth]{ch11_figs/V8ptplot.eps}\hfill \includegraphics[width=0.5\textwidth]{ch11_figs/V8eqplot.eps} \caption{(a) Structure of the $\phi^8$ potential in Eq.~\eqref{eq:V8_pt} for different $\alpha_2$ (i.e., values of the coefficient of the quadratic term), showing the various phases and phase transitions in this field theory; inset shows zoom near the origin. (b) Bifurcations in the minima $\phi_0$ such that $V'(\phi_0)=0$ and $V''(\phi_0)>0$ as a function of the parameter $\alpha_2$; dashed and solid curves denote the local and global minima values, respectively.} \label{fig:phi8_fig1} \end{figure} First, note that, as $\alpha_2\to0^+$, the potential in Eq.~\eqref{eq:V8_pt} has an absolute minimum at $\phi_0=0$ into which two global minima at \begin{equation}\label{eq:V8_inner_min} \phi_{0,\mathrm{inner}} = \pm \frac{1}{2} \sqrt{\mathrm{i} \sqrt{3} \left(r - \frac{1}{r} \right) - r - \frac{1}{r} + 4}\,,\quad r = \sqrt[3]{\sqrt{(\alpha_2-2) \alpha_2}+\alpha_2-1}\,, \end{equation} have coalesced. Note that $\phi_{0,\mathrm{inner}}$ are actually real numbers (thus, exist) only for $0\le \alpha_2 \le 2$. Meanwhile the two local minima, \begin{equation}\label{eq:V8_outer_min} \phi_{0,\mathrm{outer}} = \pm \sqrt{1 + \frac{r}{2} + \frac{1}{2r}}\qquad (\alpha_2\ge0)\,, \end{equation} where $r$ is as given in Eq.~\eqref{eq:V8_inner_min}, have become the inflection points $\phi_{0}=\pm\sqrt{3/2}$ at $\alpha_2=0$. This behavior is analogous to the $\phi^4$ scenario illustrated in Fig.~\ref{fig:46pt_combo}(a). Hence, $\alpha_2=0$ corresponds to a second-order phase transition. 
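Although the closed forms in Eqs.~\eqref{eq:V8_inner_min}--\eqref{eq:V8_outer_min} pass through complex intermediates (the cube root $r$ is complex for $0<\alpha_2<2$), the end results are real; a quick numerical sanity check, using the principal complex roots, confirms that they are indeed real local minima of Eq.~\eqref{eq:V8_pt}:

```python
import cmath

def minima(alpha2):
    # Eqs. (V8_inner_min)-(V8_outer_min); principal complex roots throughout.
    r = (cmath.sqrt((alpha2 - 2.0) * alpha2) + alpha2 - 1.0) ** (1.0 / 3.0)
    inner = 0.5 * cmath.sqrt(1j * cmath.sqrt(3) * (r - 1/r) - r - 1/r + 4)
    outer = cmath.sqrt(1 + r/2 + 1/(2*r))
    return inner, outer

def dV(p, alpha2):
    # V'(phi) for the octic potential of Eq. (V8_pt)
    return 8*p**7 - 24*p**5 + 18*p**3 - 2*alpha2*p

def d2V(p, alpha2):
    return 56*p**6 - 120*p**4 + 54*p**2 - 2*alpha2

for alpha2 in (0.5, 1.0, 1.5):
    for p in minima(alpha2):
        assert abs(p.imag) < 1e-12              # real for 0 <= alpha2 <= 2 ...
        assert abs(dV(p.real, alpha2)) < 1e-8   # ... critical points of V ...
        assert d2V(p.real, alpha2) > 0.0        # ... and indeed minima
```

Writing $r=e^{\mathrm{i}\theta}$ (valid for $0\le\alpha_2\le2$, since $|r|=1$ there), both expressions reduce to roots of the cubic $4u^3-12u^2+9u-\alpha_2=0$ in $u=\phi^2$, which is why the check succeeds to machine precision.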
\jcmindex{\myidxeffect{S}!Second-order phase transition} Second, note that for $\alpha_2=1$, the potential in Eq.~\eqref{eq:V8_pt} has four (the maximum number of) degenerate global minima, and can be factored into the form $V(\phi) = (\phi^2-a^2)^2(\phi^2-b^2)^2$ with $a=\sqrt{\frac{1}{2} \left(2-\sqrt{3}\right)}$ and $b=\sqrt{\frac{1}{2} \left(2+\sqrt{3}\right)}$. As $\alpha_2$ passes through $1$, the inner pair of minima and the outer pair of minima suddenly exchange their local/global nature. Hence, $\alpha_2=1$ corresponds to a first-order phase transition of the system. This behavior is analogous to the $\phi^6$ scenario illustrated in Fig.~\ref{fig:46pt_combo}(b). \jcmindex{\myidxeffect{F}!First-order phase transition} Going further, for $\alpha_2=2$, the potential in Eq.~\eqref{eq:V8_pt} has absolute minima at $\phi_{0,\mathrm{outer}}=\pm\sqrt{2}$, a maximum at $\phi_0=0$ and inflection points at $\phi_{0,\mathrm{inner}}=\pm\sqrt{2}/2$. Meanwhile, for $1 < \alpha_2 < 2$, the potential in Eq.~\eqref{eq:V8_pt} has global minima at $\phi_{0,\mathrm{outer}}$, as given by Eq.~\eqref{eq:V8_outer_min}, local minima at $\phi_{0,\mathrm{inner}}$, as given by Eq.~\eqref{eq:V8_inner_min}, and three maxima, including one at $\phi_0=0$. Then, for $0 < \alpha_2 < 1$, the situation is reversed and the global minima are at $\phi_{0,\mathrm{inner}}$, while the local minima are at $\phi_{0,\mathrm{outer}}$; there are still three maxima, including one at $\phi_0 = 0$. For $\alpha_2 < 0$, the potential in Eq.~\eqref{eq:V8_pt} has only a single minimum at $\phi_0=0$ and no other extrema. \subsection{Exact kink solutions: ``The rise of the power-law tails''} \label{sec:power-law} \jcmindex{\myidxeffect{P}!Power-law tails of kinks} A classification and enumeration of kink solutions to $\phi^8$ field theories with degenerate minima can be found in \cite[Sec.~II]{kcs}.
First, we note that, given the extra degrees of freedom, an octic potential can have up to four simultaneous degenerate minima, which occur at the first-order phase transition. In this case, a kink \emph{and} a half-kink are possible, each with a different energy \cite{kcs}. Let us now summarize the most salient feature of these kink solutions: the possibility of \emph{algebraic} (``slow'') decay of the kinks' shapes $\phi_K(x)$ towards the equilibria $\phi_0$ as $|x|\to\infty$, i.e., \emph{power-law} tail asymptotics. \jcmindex{\myidxeffect{D}!Double well potential} Consider an octic potential with two degenerate minima (equilibria) at $\phi_0=\pm a$ (i.e., a double well potential), specifically $V(\phi) = \frac{\lambda^2}{2} (\phi^2-a^2)^4$, which has an exact, \emph{implicit} kink solution of Eq.~\eqref{1.1b} \cite[Eq.~(32)]{kcs}: \begin{equation}\label{7.8} x(\phi) = \frac{2a\phi}{\gamma_1 (a^2-\phi^2)}+ \frac{1}{\gamma_1}\ln \left(\frac{a+\phi}{a-\phi}\right), \end{equation} where $\gamma_1 = 4\lambda a^3$. The implicit relation for $x(\phi)$ in Eq.~\eqref{7.8} can be easily inverted to give $\phi_K(x)$ using, e.g., {\sc Mathematica}. From Eq.~(\ref{7.8}), the asymptotics of the tails of this kink are found to be algebraic (and symmetric) \cite[Eq.~(33)]{kcs}: \begin{equation}\label{eq:V8_kink_tail_2dm} \phi_K(x) \simeq \mp a\left(1 \pm \displaystyle \frac{1}{\gamma_1 x}\right),\quad x\to \mp \infty\,. \end{equation} \jcmindex{\myidxeffect{T}!Triple well potential} Next, consider an octic potential with three degenerate minima (equilibria) at $\phi_0=0,\pm a$ (i.e., a triple well potential), specifically $V(\phi) = \frac{\lambda^2}{2}\phi^4 (\phi^2-a^2)^2$, which has an exact half-kink solution \cite[Eq.~(23)]{kcs} of Eq.~\eqref{1.1b} given {implicitly} by \begin{equation}\label{7.2} x(\phi) = -\frac{2a}{\gamma_2\phi} + \frac{1}{\gamma_2}\ln \left (\frac{a+\phi}{a-\phi} \right ) , \end{equation} where $\gamma_2 = 2\lambda a^3$.
A similar expression appears in \cite[Eq.\ (67)]{Lohe79}, which, however, contains a typographical error that becomes evident upon comparison with Eq.~\eqref{7.2}. Expanding Eq.~(\ref{7.2}) perturbatively as $|x|\to\infty$ shows that the ``tails'' of the half-kink are of mixed algebraic/exponential (asymmetric) type \cite[Eq.~(24)]{kcs}: \begin{equation}\label{eq:V8_kink_tail_3dm} \phi_K(x) \simeq a \times \begin{cases} -\displaystyle \frac{2}{\gamma_2 x},\quad &x\to -\infty\,,\\[3mm] 1 - \displaystyle {2} e^{- \gamma_2 x -2} ,\quad &x\to +\infty\,. \end{cases} \end{equation} The tail asymptotics highlighted by Eqs.~\eqref{eq:V8_kink_tail_2dm} and \eqref{eq:V8_kink_tail_3dm} (illustrated in Fig.~\ref{fig:phi8_kinks}) are in \emph{stark} contrast to the exponentially decaying kinks and half-kinks of the $\phi^4$ and $\phi^6$ models, respectively. Of course, these are not the only examples of double and triple well $\phi^8$ potentials. Other cases are discussed in \cite[Sec.~II]{kcs}, including kink solutions with the ``usual'' exponential tail asymptotics. Furthermore, the slow (algebraic) decay of the tails is indicative of \emph{long-range} interactions of kinks \cite{gani17}. It is noteworthy that kinks with algebraic tail asymptotics can also be obtained in certain sextic potentials \cite{Bazeia18,Gomes.PRD.2012}. Some initial forays into the excitation spectra of kinks with power-law tails (i.e., linearization about a kink, along the lines of Sec.~\ref{sec:V6_linearize}) were presented in \cite{GaLeLiconf,GaLeLi}. \begin{figure}[t] \center \includegraphics[width=0.5\textwidth]{ch11_figs/V82combo.eps}\hfill \includegraphics[width=0.5\textwidth]{ch11_figs/V81combo.eps} \caption{Kink solutions of octic field theories with power-law tail asymptotics. (a) The kink from Eq.~\eqref{7.8} ($a=1$ and $\lambda=1/\sqrt{2}$) and the corresponding double well $\phi^8$ potential as an inset.
To illustrate the slow tail decay, the $\phi^4$ kink from Eq.~\eqref{eq:V4_kink} is superimposed as a dashed curve. (b) The half-kink from Eq.~\eqref{7.2} ($a=1$ and $\lambda=1$) and the corresponding triple well $\phi^8$ potential as an inset. To illustrate the slow tail decay, the $\phi^6$ half-kinks from Eq.~\eqref{eq:V6_hkink} are superimposed as dashed curves.} \label{fig:phi8_kinks} \end{figure} \subsection{Collisional dynamics and interactions of $\phi^8$ kinks} \jcmindex{\myidxeffect{C}!Collisions of kinks} \jcmindex{\myidxeffect{P}!Power-law tails of kinks} Very little is known about kink collisions under the $\phi^8$ (or any higher-order) polynomial field theory, beyond some preliminary results \cite{Belendryasova.JPCS,Belendryasova.arXiv.08.2017}. The main challenge in studying such collisions is that an ansatz of superimposed single-kink solutions must be used as initial conditions. Thus, while cases of kinks with exponential decay may be studied along the same lines as $\phi^4$ and $\phi^6$ theories (see {Chapter 2} and also recall the discussion and references above), the case of power-law tails is not so simple. In particular, due to the slow algebraic decay of power-law tails, it is neither clear how to quantify the condition of initially ``well separated'' kinks, nor how to decide the truncation length of the finite computational domain. For example, even though a $\phi^8$ kink--antikink pair appears to show a weakly \emph{repulsive} character under certain discretizations, resonance windows typical of \emph{attractive} interactions are observed \cite{Belendryasova.JPCS,Belendryasova.arXiv.08.2017}. At this time, this counterintuitive result remains poorly understood, and it is not known how the numerical discretization of the slowly decaying tails affects it. 
Further mysteries (specifically, unexplained quantitative discrepancies) arise when comparing Manton's \cite{MantonSut,manton_npb} method for estimating the kink--antikink force of interaction to results from the collective-coordinate approach (see \cite{gani17} wherein the kink--antikink force of interaction was estimated to decay as the fourth power of their separation). The issue of how to numerically discretize kinks with power-law tails, and how to quantify whether they are ``well separated'' initially, is equally thorny \cite{longrange} under the collective-coordinate approach. \jcmindex{\myidxeffect{C}!Collective-coordinate approach} \jcmindex{\myidxeffect{M}!Manton's method} Our current understanding of this subject is evolving. Recent developments suggest that direct numerical simulation approaches that prepare an initial condition for kink--antikink collisions via ``standard'' superpositions (summation or product) of kinks and antikinks do not accurately account for ``long'' (algebraically) decaying tails. As a result, a number of unexpected and, to some degree, unwarranted results arise from collision simulations based on such ans\"atze. To uncover the key physics of kink--antikink collisions in the presence of long-range interactions (power-law tails), the first step is, thus, to determine the proper superposition to be employed in constructing initial conditions. This topic is the subject of ongoing research. \subsection{Statistical mechanics of the $\phi^8$ field theory and phonons} \jcmindex{\myidxeffect{P}!Phonon modes} \jcmindex{\myidxeffect{S}!Statistical mechanics of kinks} Field theories of the $\phi^8$ type are \emph{not} QES so their statistical mechanics can only be studied by Langevin simulations \cite{AH93,Kovner,BHL99,HL00} or the ``double-Gaussian'' variational approximation \cite{AH93,Kovner}. 
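Even so, the transfer-operator eigenvalue problem, Eq.~\eqref{eq:schro_evp}, can always be attacked by standard numerics. The sketch below (an illustration, not a production solver) discretizes the operator with three-point finite differences and finds the lowest eigenpair by inverse power iteration with tridiagonal (Thomas) solves. As a correctness check, it is applied to the QES sextic case, where we assume the normalization $V(\phi)=\tfrac{1}{2}\phi^6-\phi^4-\tfrac{1}{4}\phi^2$ at $\beta=2$, for which $E_0=-1/4$ exactly:

```python
import math

def ground_state_energy(V, beta, L=2.6, N=600, shift=-0.5, iters=40):
    """Lowest eigenvalue of -psi''/(2 beta^2) + V psi = E psi on [-L, L]
    (Dirichlet), via inverse power iteration on the tridiagonal matrix."""
    h = 2.0 * L / (N + 1)
    grid = [-L + (i + 1) * h for i in range(N)]
    c = 1.0 / (2.0 * beta**2 * h**2)
    diag = [2.0 * c + V(x) - shift for x in grid]
    off = -c
    psi = [math.exp(-x * x) for x in grid]   # even starting vector
    for _ in range(iters):
        # Thomas algorithm: solve (H - shift I) y = psi.
        cp, dp = [0.0] * N, [0.0] * N
        cp[0], dp[0] = off / diag[0], psi[0] / diag[0]
        for i in range(1, N):
            denom = diag[i] - off * cp[i - 1]
            cp[i] = off / denom
            dp[i] = (psi[i] - off * dp[i - 1]) / denom
        y = [0.0] * N
        y[-1] = dp[-1]
        for i in range(N - 2, -1, -1):
            y[i] = dp[i] - cp[i] * y[i + 1]
        norm = math.sqrt(sum(v * v for v in y))
        psi = [v / norm for v in y]
    # Rayleigh quotient E = <psi|H|psi> for the discretized operator H.
    E = 0.0
    for i in range(N):
        hpsi = (diag[i] + shift) * psi[i]
        if i > 0:
            hpsi += off * psi[i - 1]
        if i < N - 1:
            hpsi += off * psi[i + 1]
        E += psi[i] * hpsi
    return E

E0 = ground_state_energy(lambda p: 0.5 * p**6 - p**4 - 0.25 * p**2, beta=2.0)
assert abs(E0 + 0.25) < 5e-3
```

The same routine applies verbatim to any $\phi^8$ potential; only the discretization parameters and the convergence of the Rayleigh quotient (here checked against the known $E_0$) need monitoring.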
In principle, one can obtain the lowest energy state numerically, e.g., by Langevin dynamics, and use it to calculate the PDF and the concordant thermodynamic quantities. Likewise, the eigenvalues of Eq.~\eqref{eq:schro_evp} can be computed numerically and used in the transfer operator approach. Finally, there exist special cases of $\phi^8$ field theories with two and three degenerate minima that have $V''(\phi_0) = 0$ \cite[Table~I]{kcs}, again leading to the possibility of nonlinear phonon modes. The impact of the latter on the field thermodynamics is, as of now, unexplored. \jcmindex{\myidxeffect{L}!Langevin dynamics} \section{Beyond} There is a veritable zoology of (kink and other) exact solutions in higher-order field theories, depending on the potential specified. Here, we make no attempt to systematically classify or organize these solutions, as such an endeavor would be a book on its own. Instead, we highlight some interesting and novel aspects of kinks in higher-order field theories ``beyond'' $\phi^8$. \subsection{Brief overview of the $\phi^{10}$ field theory} \subsubsection{Successive phase transitions and kink solutions} \jcmindex{\myidxeffect{S}!Successive phase transitions} As in Sec.~\ref{sec:phi8_spt}, one can design a specific $\phi^{10}$ potential, in which varying the coefficient of the $\phi^2$ term leads to a succession of \emph{two first-order} phase transitions \cite[Sec.~III]{kcs}; for brevity, we do not include the latter discussion here. From amongst the many features that $\phi^{10}$ kinks can exhibit, we summarize the following from \cite{kcs}: (a) in the case of five degenerate minima, four quarter-kinks of different energy, e.g., a pair connecting $0$ to $+a$ (or $-a$ to $0$) and a pair connecting $+a$ to $+b$ (or $-b$ to $-a$), for some $a$ and $b$, exist; (b) kinks are generally asymmetric; (c) kinks with power-law tails exist, with a variety of decays possible in the case of three degenerate minima.
\subsubsection{Statistical mechanics of the $\phi^{10}$ theory, including QES results} \jcmindex{\myidxeffect{S}!Statistical mechanics of kinks} As mentioned above, $\phi^{10}$ is the next example of a QES field theory after $\phi^6$. Following the approach in Sec.~\ref{sec:f6_PDF}, we posit the following generalization of the ansatz in Eq.~\eqref{eq:ground_state_phi6}: \begin{equation} \Psi_0(\phi) = \exp\left\{-\frac{\sqrt{2}}{6} \phi^2 \left(\phi^2 - K\right)^2\right\}, \label{eq:ground_state_phi10} \end{equation} which clearly has three maxima at $\phi = 0$ and $\phi = \pm \sqrt{K}$, and at these $\phi$ values $\Psi_0(\phi) = 1$, while at all other $\phi$ values $0<\Psi_0(\phi)< 1$. Upon substituting Eq.~\eqref{eq:ground_state_phi10} for the wavefunction $\Psi_0(\phi)$ and a generic tenth-order potential, namely $V(\phi) = \phi^{10} - \alpha_8\phi^8 + \alpha_6 \phi^6 - \alpha_4 \phi^4 + \alpha_2 \phi^2$ as in \cite{kcs}, into Eq.~\eqref{eq:schro_evp} and requiring that equality hold, one obtains two sets of consistency conditions: \begin{multline}\label{eq:phi10_PDF_cond} \alpha_8 = \frac{8 K}{3},\quad \alpha_6 = \frac{22 K^2}{9},\quad \alpha_4 = \frac{16 K^3+45 \sqrt{2}}{18},\quad \alpha_2 = \frac{K^4+18 \sqrt{2} K}{9}, \\ E_0 = \frac{K^2}{3 \sqrt{2}}, \quad \beta = 1\,. \end{multline} Hence, as long as the potential is of the generic form above, with coefficients $\alpha_{8,6,4,2}$ depending on $K$ as in Eq.~\eqref{eq:phi10_PDF_cond}, then Eq.~\eqref{eq:ground_state_phi10} is an exact ground-state wavefunction (no nodes) of the $\phi^{10}$ field theory at the (inverse) temperature of $\beta=1$ with eigenvalue $E_0 = K^2/(3\sqrt{2})$. The PDF is the normalized squared wavefunction, $\Psi_0^2$. Figure~\ref{fig:compare_6_10_PDF} shows a visual comparison between the exact PDFs obtained herein for the $\phi^6$ and $\phi^{10}$ field theories. 
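As in the $\phi^6$ case, the conditions in Eq.~\eqref{eq:phi10_PDF_cond} can be verified numerically; a sketch with $K=1$ (so that $E_0=1/(3\sqrt{2})$), again checking that the finite-difference ``local energy'' is constant:

```python
import math

# phi^10 QES check (K = 1, beta = 1): Psi_0 = exp(-sqrt(2)/6 phi^2 (phi^2-K)^2)
# should satisfy -Psi''/2 + V Psi = E0 Psi with E0 = K^2/(3 sqrt(2)).
K = 1.0
s2 = math.sqrt(2.0)
a8, a6 = 8 * K / 3, 22 * K**2 / 9
a4, a2 = (16 * K**3 + 45 * s2) / 18, (K**4 + 18 * s2 * K) / 9
E0 = K**2 / (3 * s2)

def psi(p):
    return math.exp(-(s2 / 6) * p**2 * (p**2 - K) ** 2)

def V(p):
    # Generic decic potential with the coefficients of Eq. (phi10_PDF_cond)
    return p**10 - a8 * p**8 + a6 * p**6 - a4 * p**4 + a2 * p**2

def local_energy(p, h=1e-4):
    d2 = (psi(p + h) - 2 * psi(p) + psi(p - h)) / h**2
    return -d2 / (2 * psi(p)) + V(p)

for p in (0.0, 0.2, 0.5, 0.8, 1.1):
    assert abs(local_energy(p) - E0) < 1e-5
```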
Once again, employing different ans\"atze from, e.g., \cite{Ushve} can yield exact excited-state PDFs, as also shown in \cite{kcs}. As discussed above (see also {Chapter 2}), the exactness of these PDFs can be verified via Langevin simulations. \jcmindex{\myidxeffect{L}!Langevin dynamics} \begin{figure}[t] \center \includegraphics[width=0.5\textwidth]{ch11_figs/PDF6_combo.eps}\hfill \includegraphics[width=0.5\textwidth]{ch11_figs/PDF10_combo.eps} \caption{(a) Exact PDF for a specific $\phi^6$ potential (i.e., Eq.~\eqref{eq:V6_pt} with $\alpha_2=-1/2$, shown as an inset), based on the wavefunction in Eq.~\eqref{eq:ground_state_phi6} with $K=2$. (b) Exact PDF for a specific $\phi^{10}$ potential (i.e., the generic potential defined by the coefficients $\alpha_{8,6,4,2}$ in Eq.~\eqref{eq:phi10_PDF_cond}, shown as an inset), based on the wavefunction in Eq.~\eqref{eq:ground_state_phi10} with $K=1$.} \label{fig:compare_6_10_PDF} \end{figure} \subsection{$\phi^{4n+2}$ field theories with three degenerate minima} \label{sec:phi4n2} \jcmindex{\myidxeffect{T}!Triple well potential} Generalizing the result in \cite{CooperPhi2n} on $\phi^{2n+2}$ field theories, let us consider a special family of $\phi^{4n+2}$ field theories with three degenerate minima under a potential of the form \begin{equation} V(\phi) = \frac{\lambda^2}{2}\phi^2 (\phi^{2n} - a^2)^2,\quad n=1,2,3,\hdots\,. \end{equation} By standard methods, it can be shown that these field theories have \emph{explicit} exact half-kink solutions (connecting $-a^{1/n}$ and $0$ or $0$ and $+a^{1/n}$) given by \begin{equation} \phi_K(x) = \mp \left\{A[1 \mp \tanh(\beta x)]\right\}^{1/(2n)} \,, \label{eq:phi4np2_kink} \end{equation} provided that $A=a^2/2$ and $\beta = \lambda n a^2$. For $a=n=1$, Eq.~\eqref{eq:phi4np2_kink} reduces to Eq.~\eqref{eq:V6_hkink}. 
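It is readily checked that Eq.~\eqref{eq:phi4np2_kink} satisfies the first-order (Bogomolny/BPS) reduction $\phi' = \sqrt{2V} = \lambda\phi\,(a^2-\phi^{2n})$ on the half-kink branch $0<\phi<a^{1/n}$; a sketch with illustrative parameter values:

```python
import math

def phi_K(x, n, a, lam):
    # Half-kink of V = (lam^2/2) phi^2 (phi^{2n} - a^2)^2, connecting 0 to a^(1/n)
    A = a**2 / 2.0
    beta = lam * n * a**2
    return (A * (1.0 + math.tanh(beta * x))) ** (1.0 / (2 * n))

n, a, lam = 2, 1.3, 0.7   # illustrative values
for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    h = 1e-6
    dphi = (phi_K(x + h, n, a, lam) - phi_K(x - h, n, a, lam)) / (2 * h)
    p = phi_K(x, n, a, lam)
    # BPS reduction: phi' = lam * phi * (a^2 - phi^(2n)) on this branch.
    assert abs(dphi - lam * p * (a**2 - p**(2 * n))) < 1e-7
# Boundary behavior: 0 as x -> -infinity and a^(1/n) as x -> +infinity.
assert phi_K(-40.0, n, a, lam) < 1e-6
assert abs(phi_K(40.0, n, a, lam) - a**(1.0 / n)) < 1e-9
```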
\subsection{Complex, $\mathcal{PT}$-invariant solutions of the $\phi^4$ field theory} \label{sec:PT_solutions} \jcmindex{\myidxeffect{P}!$\mathcal{PT}$-symmetry} Since the introduction of the concept of $\mathcal{PT}$-symmetry in the late 1990s, a host of new physical insights (see, e.g., \cite{BenderRPP} and the references therein) have emerged, resulting in the rapid growth of research on open systems with balanced loss and gain. Here, $\mathcal{P}$ stands for parity symmetry $\{x,t\} \mapsto \{-x, t\}$, while $\mathcal{T}$ stands for time-reversal symmetry $\{x,t,\mathrm{i}\} \mapsto \{x, -t, -\mathrm{i}\}$. Then, the combined $\mathcal{PT}$-symmetry stands for $\{ x,t,\mathrm{i}\} \mapsto \{-x, -t, -\mathrm{i}\}$. As before, $\mathrm{i}=\sqrt{-1}$. Recently, novel complex periodic as well as hyperbolic kink solutions with $\mathcal{PT}$-eigenvalue $-1$ have been derived for a number of real nonlinear equations, including the $\phi^4$ and $\phi^{4n+2}$ models, as well as several non-polynomial field theories such as sine-Gordon (sG), double-sine-Gordon (DSG) and double-sine-hyperbolic-Gordon (DSHG) \cite{ks3a,ks3b}. However, while kinks of $\mathcal{PT}$-symmetric nonlinear field theories are not affected by loss/gain, their stability critically depends on the loss/gain profile \cite{Dem13}. In this section, let us consider a model $\phi^4$ theory: Eq.~\eqref{1.1b} with $V(\phi) = -\frac{a}{2}\phi^2 + \frac{b}{4}\phi^4$. For $a,b>0$, the equation $V(\phi)=\mathfrak{C}$ has real solutions $\phi_{1,2,3}$. Then, similarly to how the periodic solutions for $\phi^6$ were constructed in Sec.~\ref{sec:phi6_kink_periodic_etc} (recall Fig.~\ref{fig:V6_kink_lattice}), it can be shown that this $\phi^4$ theory has a kink lattice solution \begin{equation}\label{4.22} \phi_{KL}(x) = A\sqrt{m} \sn(\beta x \,|\, m)\,, \end{equation} provided that $A = \sqrt{2 \beta^2/b}$ and $\beta = \sqrt{a/(1+m)}$.
As before, $\sn$, $\cn$ and $\dn$ are Jacobi's elliptic functions with modulus $m\in[0,1]$ \cite{as}. Equation~\eqref{4.22} reduces to Eq.~\eqref{eq:V4_kink} for $a=b=m=1$. Remarkably, this same field theory also admits two complex, $\mathcal{PT}$-invariant periodic kink lattice solutions with $\mathcal{PT}$-eigenvalue $-1$: \begin{subequations}\begin{align} \phi_{cKL,1}(x) &= A \sqrt{m} \,[\sn(\beta x \,|\, m) \pm \mathrm{i} \cn(\beta x \,|\, m)]\,,\quad \beta = \sqrt{2a/(2-m)}\,,\label{4.24}\\ \phi_{cKL,2}(x) &= A \left[\sqrt{m} \sn(\beta x \,|\, m) \pm \mathrm{i} \dn(\beta x \,|\, m)\right],\quad \beta = \sqrt{2a/(2m-1)}\,,\label{4.26} \end{align}\end{subequations} provided that $A = \sqrt{\beta^2/(2b)}$. Notice that, unlike in the solution of Eq.~\eqref{4.22}, $a<0$ is allowed in Eq.~\eqref{4.26} provided that $m<1/2$. \jcmindex{\myidxeffect{K}!Kink lattice solution} \jcmindex{\myidxeffect{J}!Jacobi elliptic function} In the limit $m \to 1^-$, Eqs.~\eqref{4.24} and \eqref{4.26} both reduce to the complex, $\mathcal{PT}$-invariant kink solution \begin{equation}\label{4.28} \phi_{cK}(x) = A[\tanh(\beta x) \pm \mathrm{i} \sech(\beta x)]\,, \end{equation} with $A = \sqrt{\beta^2/(2b)}$ and $\beta = \sqrt{2a}$. While the width, $1/\beta$, of the complex, $\mathcal{PT}$-invariant kink in Eq.~\eqref{4.28} is half of the width of the real kink (i.e., Eq.~\eqref{4.22} with $m = 1$), their amplitudes are the same. As described in \cite{cf}, the existence of a complex, $\mathcal{PT}$-invariant kink solution can be traced back to translational invariance: if $\tanh(\beta x)$ is a solution, then so is $\tanh(\beta x + x_0)$. Now, take $x_0 = \mathrm{i}\pi /4$ and observe that $\tanh(\beta x \pm \mathrm{i}\pi/4) = \tanh(2\beta x) \pm \mathrm{i}\sech(2\beta x)$, which immediately substantiates the existence of a complex, $\mathcal{PT}$-invariant kink solution with half the width. Clearly, this argument applies to any model that admits a kink solution of the form $\tanh x$.
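Despite being complex, Eq.~\eqref{4.28} solves the (real) static field equation $\phi'' = dV/d\phi = -a\phi + b\phi^3$; this can be confirmed directly, e.g., with the following sketch (taking $a=b=1$, so $\beta=\sqrt{2}$ and $A=1$):

```python
import cmath, math

a, b = 1.0, 1.0
beta = math.sqrt(2.0 * a)
A = math.sqrt(beta**2 / (2.0 * b))

def phi(x):
    # Complex PT-invariant kink, Eq. (4.28), upper sign; |phi(x)| = A for all x,
    # since tanh^2 + sech^2 = 1 on the real line.
    return A * (cmath.tanh(beta * x) + 1j / cmath.cosh(beta * x))

def residual(x, h=1e-4):
    # phi'' - dV/dphi, with V = -(a/2) phi^2 + (b/4) phi^4
    d2 = (phi(x + h) - 2 * phi(x) + phi(x - h)) / h**2
    return d2 - (-a * phi(x) + b * phi(x)**3)

for x in (-1.5, -0.4, 0.0, 0.7, 2.0):
    assert abs(residual(x)) < 1e-6
```

Note the constant modulus $|\phi_{cK}(x)| = A$: only the phase of the field winds as $x$ runs over the real line.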
\jcmindex{\myidxeffect{C}!Complex kink solution} Unfortunately, however, a similar argument for the existence of complex, $\mathcal{PT}$-invariant periodic solutions such as those in Eqs.~\eqref{4.24} and \eqref{4.26} is lacking. The obvious generalization would be to argue that if $\sn(\beta x \,|\, m)$ is a solution, then so is $\sn(\beta x+x_0 \,|\, m)$ due to translational invariance. To this end, take $x_0 = \mathrm{i}K'(m)/2$, and, on using the addition theorem for $\sn$, one finds that \begin{equation}\label{4.31} \sn\left(\xi \pm \tfrac{1}{2} \mathrm{i} K'(m) \,|\, m\right) = \frac{(1+\sqrt{m})\sn(\xi \,|\, m) \pm \mathrm{i} \cn(\xi \,|\, m) \dn(\xi \,|\, m)}{m^{1/4} [1+\sqrt{m} \sn^2(\xi \,|\, m)]}\,, \end{equation} where, recalling that $K(m)=K'(1-m)$, one has used the fact that (see \cite{as}) \begin{multline}\label{4.32} \sn\left(\tfrac{1}{2}\mathrm{i}K'(m) \,|\, m\right) = \frac{\mathrm{i}}{m^{1/4}}\,,\quad \cn\left(\tfrac{1}{2}\mathrm{i}K'(m) \,|\, m\right) = \frac{\left(1+\sqrt{m}\right)^{1/2}}{m^{1/4}}\,,\\ \dn\left(\tfrac{1}{2}\mathrm{i}K'(m) \,|\, m\right) = \left(1+\sqrt{m}\right)^{1/2}\,. \end{multline} Inspired by the identity in Eq.~\eqref{4.31}, recently two of us asked \cite{ks4} if there is a more general complex, $\mathcal{PT}$-invariant periodic solution. To this end, consider the ansatz \begin{equation}\label{4.33} \phi_{cKL,3}(x) = \frac{A\sn(\beta x \,|\, m) \pm \mathrm{i} B \cn(\beta x \,|\, m)\dn(\beta x \,|\, m)}{1+ D \sn^2(\beta x \,|\, m)}\,, \end{equation} where $A$, $B$, $D$ and $\beta$ have to be determined in terms of $a$, $b$ and $m$. After a lengthy calculation, we find that Eq.~\eqref{4.33} is a complex, $\mathcal{PT}$-invariant periodic solution, if \begin{equation}\label{4.34} A = \sqrt{(2/b)(D+1)(D+m)\beta^2}\,,\quad \beta = \sqrt{a/(m+1)}\,,\quad bB^2 = 2D\beta^2\,. 
\end{equation} Unlike the real or the complex periodic kink solutions discussed above, the periodic kink solution in Eq.~\eqref{4.33} exists even if $b < 0$. In particular, if $-1 < D < -m$, then $b < 0$, which shows that this is a distinct periodic kink solution. In the special case \begin{equation}\label{4.35} A = \frac{(1+\sqrt{m})}{m^{1/4}} F\,,\quad B = \frac{F}{m^{1/4}}\,, \quad D = \sqrt{m}\,, \end{equation} the solution in Eq.~\eqref{4.33} takes the form \begin{equation}\label{4.36} \phi_{cKL,3}(x) = \frac{F [(1+\sqrt{m})\sn(\beta x \,|\, m) \pm \mathrm{i}\cn(\beta x \,|\, m) \dn(\beta x \,|\, m)]}{m^{1/4} [1+\sqrt{m}\sn^2(\beta x \,|\, m)]}\,, \end{equation} while the conditions in Eq.~\eqref{4.34} become $F = \sqrt{2m\beta^2/b}$ and $\beta = \sqrt{a/(m+1)}$, which coincide \emph{exactly} with the conditions under which the solution in Eq.~\eqref{4.22} exists. \jcmindex{\myidxeffect{C}!Complex kink lattice solution} In the limit $m \to 1$, Eq.~\eqref{4.33} leads to a more general, complex kink solution: \begin{equation}\label{4.38} \phi_{cK,3}(x) = \frac{A\tanh(\beta x) \pm \mathrm{i}B \sech^2(\beta x)}{1+D\tanh^2(\beta x)}\,, \end{equation} provided that $A = \sqrt{(2/b)(D+1)^2 \beta^2}$, $\beta = \sqrt{a/2}$, and $bB^2 = 2D\beta^2$. However, Eq.~\eqref{4.38} does not represent a new kink solution. Specifically, as argued above, translational invariance means that given the ``standard'' kink solution $\hat{A} \tanh(\beta x)$, another kink solution is $\hat{A}\tanh(\beta x+ \mathrm{i}x_0)$. Then, it is easily shown that Eq.~\eqref{4.38} and the standard kink solution $\hat{A} \tanh(\beta x)$ are related via \begin{equation}\label{4.39a} A = (1+D)\hat{A}\,,\quad B = \sqrt{D} \hat{A}\,,\quad D = \frac{1-\cos(2x_0)}{1+\cos(2x_0)}\,. \end{equation} Summarizing, while the complex $\mathcal{PT}$-invariant periodic kink is a new solution, the complex $\mathcal{PT}$-invariant hyperbolic kink is not.
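The relation in Eq.~\eqref{4.39a} is readily confirmed numerically; a sketch with illustrative values $\hat{A}=\beta=1$ and $x_0=0.3$, comparing Eq.~\eqref{4.38} against the imaginarily translated standard kink:

```python
import cmath, math

Ahat, beta, x0 = 1.0, 1.0, 0.3   # illustrative values
D = (1 - math.cos(2 * x0)) / (1 + math.cos(2 * x0))   # = tan^2(x0), Eq. (4.39a)
A, B = (1 + D) * Ahat, math.sqrt(D) * Ahat

def phi_438(x):
    # Eq. (4.38), upper sign, with (A, B, D) taken from Eq. (4.39a)
    t, s = math.tanh(beta * x), 1.0 / math.cosh(beta * x)
    return (A * t + 1j * B * s**2) / (1 + D * t**2)

def phi_shifted(x):
    # The standard kink, translated by the imaginary amount i x0
    return Ahat * cmath.tanh(beta * x + 1j * x0)

for x in (-2.0, -0.3, 0.0, 0.8, 3.0):
    assert abs(phi_438(x) - phi_shifted(x)) < 1e-12
```

The agreement is to machine precision, since the underlying identity is the addition theorem for $\tanh$ with a purely imaginary shift.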
Using addition theorems for $\cn$ and $\dn$, more general complex, $\mathcal{PT}$-invariant pulse solutions (similar in structure to Eq.~\eqref{4.33} but with $\mathcal{PT}$-eigenvalue $+1$) have also been recently obtained \cite{ks4}. Determining the stability of these new solutions is an open problem. \jcmindex{\myidxeffect{C}!Complex kink lattice solution} Another observation in the recent work \cite{ks4} is that there exist remarkable connections between the complex solutions of various real scalar field theories. For example, consider a general field under Eq.~\eqref{1.1b} with $V(\phi) = \frac{a}{2}\phi^2 + \frac{b}{2n+2}\phi^{2n+2} + \frac{c}{4n+2}\phi^{4n+2}$, where $n$ is a positive integer. It is amusing to note that, for given $a$, $b$ and $c$, if $\phi=\phi_1(x)$ is some complex solution, then $\phi=\pm \mathrm{i}\phi_1(x)$ is also a solution for the same values of the parameters $a$, $b$, $c$ if $n$ is an even integer (i.e., $n = 2, 4, 6, \hdots$), or with the same $a$ and $c$ \emph{but} with $-b$ if $n$ happens to be an odd integer (i.e., $n = 1, 3, 5, \hdots$). As special cases, $c=0$ and $n=1$ yields a $\phi^4$ field, while $c\ne0$ and $n=1$ yields a $\phi^6$ field. \section{Conclusion} In this chapter, we confined our attention to one-dimensional, higher-than-fourth-order field theories. Just as the $\phi^4$ model has served as a prototype for describing second-order phase transitions and their attendant kinks (or domain walls) as well as breathers, the $\phi^6$ model is a prototype for exploring first-order transitions with a richer phenomenology and different types of kinks. In particular, we discussed exact kink solutions of the $\phi^6$ model, the collisional dynamics of various kinks, and the statistical mechanics of this field theory. However, the $\phi^6$ model is incapable of describing two or more (first- or second-order) successive phase transitions, and we must resort to $\phi^8$ or even higher-order field theories.
In this context, we discussed exact kink solutions and their interactions in the $\phi^8$ model and, interestingly, elucidated the possibility of kinks with power-law tail asymptotics, quite different from the exponential tails found in the $\phi^4$ and $\phi^6$ field theories. We also briefly considered $\phi^{10}$ as well as a general $\phi^{4n+2}$ field theory with degenerate minima and discussed their kink solutions. Finally, we explored complex, $\mathcal{PT}$-invariant kink solutions of polynomial field theories, and in particular $\phi^4$. \subsection{Open problems} \jcmindex{\myidxeffect{O}!Open problems} Beyond the $\phi^6$ model, we have merely scratched the surface of the open questions concerning higher-order field theories, their kink-solution collisional dynamics, their statistical mechanics, their connections to other nonlinear science models, and so on. The thermodynamic limiting case of infinite-order (continuous) phase transitions is an exciting area in this vein. The behavior of topological excitations in two- (and possibly three-) dimensional higher-order field theories is an entirely open issue as well. Nonlocal higher-order field theories, coupled higher-order models and the kink solutions that they harbor remain topics for future investigation. One of the major open problems in higher-order field theories of the type discussed here is the kink collisional dynamics. Not only is there a far richer phenomenology of kinks in higher-order field theories (including kinks with power-law tails, the difficulties associated with studying their collisions having been mentioned in Sec.~\ref{sec:power-law}), but there are also many more possibilities for pair-wise interaction.
The coexistence of kinks with pure power-law (or pure exponential tail) asymptotics with kinks with mixed tail asymptotics (i.e., power-law as $x\to-\infty$ but exponential as $x\to+\infty$) is possible in $\phi^{12}$ field theories with five and four degenerate minima \cite[Secs.~IV-B.2, IV-C.2]{kcs}. What is the nature of these distinct kink-kink interactions? Can we generalize Manton's approach \cite{MantonSut,manton_npb} for calculating the kink-(anti)kink effective force of interaction to power-law tails? \jcmindex{\myidxeffect{C}!Collisions of kinks} \jcmindex{\myidxeffect{M}!Manton's method} Furthermore, kinks with different energies can co-exist, as is the case described in \cite[Eqs.~(9)--(17)]{kcs}. To illustrate, consider the octic potential with four degenerate minima: $V(\phi) = (\phi^2-a^2)^2(\phi^2-b^2)^2$. It possesses an exact kink solution connecting $-a$ to $+a$ (also found in \cite{Lohe79}) with energy (rest mass) $E_{K,1} = \frac{4\sqrt{2}}{15} a^3(5b^2-a^2)$. There is also an exact half-kink solution connecting $a$ to $b$ (or $-b$ to $-a$) with energy (rest mass) $E_{K,2} = \frac{2\sqrt{2} }{15} (b-a)^3 (b^2+3ab+a^2)$. In \cite{kcs}, it was shown that $E_{K,1} \gtreqqless E_{K,2}$ if $b/a \lesseqqgtr 2/(3-\sqrt{5})$. In particular, for $b/a = 2/(3-\sqrt{5})$, the kinks and half-kinks have equal energies. So far, in lower-order field theories, the kinks (and anti-kinks) being scattered necessarily have the same energy because the field theory can have only two ($\phi^4$ and $\phi^6$) or three ($\phi^6$) degenerate minima. Here, for the first time, two kinks of the same type as well as two kinks of different types can exist with equal or unequal energies. Thus, the question to be addressed is: how does the ratio $b/a$ affect kink scattering dynamics? A similar situation occurs in the $\phi^{12}$ field theory with six degenerate minima \cite[Eq.~(113)]{kcs}.
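Assuming the standard Bogomolny (BPS) relation $E = \int \sqrt{2V}\,d\phi$ between adjacent minima, the quoted rest energies and the crossover at $b/a = 2/(3-\sqrt{5})$ are easily confirmed by quadrature; a sketch using Simpson's rule with $a=1$:

```python
import math

def bps_energy(lo, hi, a, b, N=4000):
    # Simpson quadrature of E = integral of sqrt(2 V) dphi,
    # with V = (phi^2 - a^2)^2 (phi^2 - b^2)^2.
    f = lambda p: math.sqrt(2.0) * abs((p*p - a*a) * (p*p - b*b))
    h = (hi - lo) / N
    s = f(lo) + f(hi)
    s += 4.0 * sum(f(lo + (2*i - 1) * h) for i in range(1, N//2 + 1))
    s += 2.0 * sum(f(lo + 2*i * h) for i in range(1, N//2))
    return s * h / 3.0

a = 1.0
for b in (2.0, 2.618, 3.2):
    E1 = bps_energy(-a, a, a, b)   # kink, -a -> +a
    E2 = bps_energy(a, b, a, b)    # half-kink, a -> b
    assert abs(E1 - (4*math.sqrt(2)/15) * a**3 * (5*b*b - a*a)) < 1e-6
    assert abs(E2 - (2*math.sqrt(2)/15) * (b - a)**3 * (b*b + 3*a*b + a*a)) < 1e-6
# Equal energies at the crossover ratio b/a = 2/(3 - sqrt(5)):
bc = 2.0 / (3.0 - math.sqrt(5.0))
assert abs(bps_energy(-a, a, a, bc) - bps_energy(a, bc, a, bc)) < 1e-6
```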
\jcmindex{\myidxeffect{I}!Internal mode} Another branch of open questions relates to the fact that the $\phi^{4n+2}$ field theories with three degenerate minima mentioned in Sec.~\ref{sec:phi4n2} offer a \emph{parametrized} way to ``turn up'' the order of the field theory while maintaining the basic exact half-kink structure in Eq.~\eqref{eq:phi4np2_kink}. Thus, we ask: how does $n$ affect kink scattering dynamics, starting from the known case of the $\phi^6$ field theory \cite{dorey} for $n=1$? Furthermore, how many internal modes does the half-kink structure in Eq.~\eqref{eq:phi4np2_kink} possess, and does this number depend on $n$? It is also worth investigating the stability of the complex $\mathcal{PT}$-symmetric periodic kink solutions of the $\phi^4$ field theory discussed in Sec.~\ref{sec:PT_solutions}. Finally, we inquire: have we understood all the connections between the solutions of non-polynomial field theories such as sine-Gordon (sG), double-sine-Gordon (DSG) and double-sine-hyperbolic-Gordon (DSHG) and the solutions of higher-order polynomial field theories? The former involve potentials of infinite order (both periodic and non-periodic), and some of the early motivation for studying higher-order field theories originated from truncating infinite-order periodic potentials to obtain polynomial field theories \cite{Lohe79} (see also \cite{bazeia06,bazeia11,bazeia13} and \cite[Sec.~V]{kcs}). Specifically, it would be of interest to find out how the nature of kink interactions in non-polynomial theories differs from that in the corresponding truncated higher-order field theory. \section*{Acknowledgments} I.C.C.\ acknowledges the hospitality of the Center for Nonlinear Studies and the Theoretical Division at Los Alamos National Laboratory (LANL), where the authors' collaboration on higher-order field theory was initiated.
We acknowledge the support of the U.S.\ Department of Energy (DOE): LANL is operated by Los Alamos National Security, L.L.C.\ for the National Nuclear Security Administration of the U.S.\ DOE under Contract No.\ DE-AC52-06NA25396. I.C.C.\ also thanks V.A.\ Gani and P.G.\ Kevrekidis for many insightful discussions on kinks, collisions, collective coordinates, Manton's method and $\phi^8$ field theory. A.K.\ is grateful to INSA (Indian National Science Academy) for the award of INSA Senior Scientist position.
\section{Introduction} Semantic segmentation of MRI scans is an essential but highly challenging task. State-of-the-art methods for semantic segmentation rely on DNNs, which usually have millions of tunable parameters and hence demand a large amount of labelled training samples. Manual labelling of MRI scans with tumours is time-consuming and expensive, so in most cases only tiny datasets are available for training. To improve model performance, we can exploit knowledge from existing labelled datasets. Nevertheless, these images may be quite different in terms of diseases, modality, protocols and preprocessing methods, which leads to extra difficulties. In this work, we address the problem of knowledge transfer between medical datasets when the source dataset potentially contains relevant information for the given problem (e.g.\ it depicts scans of the same organ) but still comes from a different domain, complicating the work of conventional transfer learning techniques. The main contributions of the paper are the following: \begin{itemize} \item We highlight that fine-tuning (starting training from a pre-trained model) can be ineffective for medical images \item We propose using a Bayesian approach with an implicit generative prior in the space of the convolution filters instead of simple fine-tuning \item Results are validated using the BRATS18 \citep{menze_multimodal_2015, bakas2017advancing} and Multiple Sclerosis Human Brain MR Imaging (MS) \citep{MsDataset} datasets \end{itemize} \section{Method} Transfer learning is a set of techniques from machine learning used to store knowledge from a source dataset and apply it to a related target dataset. During our experiments with MRI semantic segmentation, we noticed that, when appropriately trained, kernels from different segmentation networks share a similar structure, in contrast to the noisy kernels of models trained on small datasets.
Therefore, a prior distribution that encourages kernels to be more structured should, presumably, improve segmentation quality on modest training sets. We propose to apply the Deep Weight Prior \citep{atanov_deep_2018} to enforce precisely this property. The Deep Weight Prior (DWP) is an expressive prior distribution, which helps to incorporate information about the structure of previously learned convolutional filters during the training of a new model. We will consider an implicit prior distribution in the form of a Variational Autoencoder (VAE) \citep{kingma_vae} with encoder $r_{\psi^{(i)}}(z| w)$ and decoder $p_{\phi^{(i)}}(w|z)$, modeled by neural networks. \begingroup \setlength{\intextsep}{0pt}% \setlength{\columnsep}{10pt}% \begin{wrapfigure}{r}{0.4\textwidth} \begin{center} \includegraphics[width=0.4\textwidth]{pics/ms_dwp} \end{center} \vspace{-15pt} \caption*{\small{Left: Kernels from U-Net, trained on the source MS dataset. Right: Samples from the trained DWP}} \label{fig:dwp_ker} \end{wrapfigure} The approach has the following steps: \begin{enumerate} \item Given the source dataset $D_1$, train the DNN model and collect a dataset of convolution filters during training. \item Train a VAE on the collected dataset of convolution filters. \item Perform variational inference for the target dataset $D_2$, using the VAE as the prior over the model filters. \end{enumerate} U-Net \citep{ronneberger2015u} was chosen due to its popularity and experimentally proven efficiency for MRI semantic segmentation tasks \citep{deniz_segmentation_2018, livne_u-net_2019, guerrero_white_2017, milletari_v-net:_2016}. The chosen architecture has 726,480 parameters. We denote by $w^{(i)}$, $i = 1,\ldots,L$, the kernels of the $i$-th convolutional layer and by $w = (w^{(1)}, \ldots, w^{(L)})$ the vector of all model parameters.
If the kernel filters at layer $i$ are of size $3\times 3\times 3$, with $C_{inp}^{(i)}$ input channels and $ C_{out}^{(i)}$ output channels, then the weight matrix has dimensions $ C_{inp}^{(i)} \times C_{out}^{(i)} \times 3\times 3\times 3$. We assume that both the variational approximation $q_{\theta}(w)$ and the prior distribution $p(w)$ factorize over layers, input and output channels; the prior over the filters of layer $i$ is given implicitly by the VAE with encoder $r_{\psi^{(i)}}(z| w)$ and decoder $p_{\phi^{(i)}}(w|z)$. Finally, we need to optimize over $\theta, \psi$ to learn the model with the DWP prior using the following loss: $$\max_{\theta, \psi}\mathcal{L}^{\text{approx}} = \max_{\theta, \psi}\mathcal{L_{D}} + \sum_{p, k, i} \left[ -\log q_{\theta_{i p k}}(\widehat{w}^{(i)}_{p, k}) - \log r_{\psi^{(i)}}(\widehat{z}| \widehat{w}^{(i)}_{p, k}) + \log p(\widehat{z}) + \log p_{\phi^{(i)}} (\widehat{w}^{(i)}_{p, k} | \widehat{z}) \right],$$ where $\mathcal{L_{D}}$ is the likelihood of the selected model. The methodology is discussed in more detail in \citet{kuzina2019bayesian}. \newpage \section{Experiments and Results} The experiments aim at comparing the proposed method (UNet-DWP) with conventional transfer learning approaches: training the whole model on the small target dataset starting from weights pretrained on the source dataset (UNet-PR), or freezing the layers in the middle of the network (UNet-PRf) while fine-tuning only the first and the last block of the model to reduce overfitting on a small dataset. As a baseline, we also consider random initialization (UNet-RI), where the model is trained only on the small target dataset. To compare the proposed methods, we use the MS dataset \citep{MsDataset} as the source and small subsets of the BRATS18 dataset \citep{menze_multimodal_2015, bakas2017advancing} as targets. Both datasets contain MRI scans of the brain, albeit with different diseases. The purpose of this setup is to show the ability of the method to generalize between diseases.
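To make the structure of the bracketed term in the objective concrete, here is a toy numpy sketch. It is not the authors' implementation: the Gaussian stand-ins for $q_\theta$, the fixed linear "encoder" and "decoder" (replacing the neural networks $r_\psi$ and $p_\phi$), and all dimensions and names are invented for illustration. It shows a single-kernel, single-sample-per-draw Monte-Carlo estimate of the prior/entropy term.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_gauss(x, mu, sigma):
    # Log-density of a diagonal Gaussian, summed over dimensions.
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2))

# Toy stand-ins (the paper uses neural encoder/decoder networks):
mu_q, sig_q = np.zeros(27), 0.1 * np.ones(27)   # q_theta over one flattened 3x3x3 kernel

def encode(w):          # "r_psi(z|w)": returns (mu_z, sig_z); latent dimension 4
    return w[:4], 0.2 * np.ones(4)

def decode(z):          # "p_phi(w|z)": returns (mu_w, sig_w)
    return np.tile(z, 7)[:27], 0.3 * np.ones(27)

def dwp_term(n_samples=64):
    # Monte-Carlo estimate of the bracketed term of the objective for one kernel
    # (so the sum over p, k, i has a single summand).
    total = 0.0
    for _ in range(n_samples):
        w = mu_q + sig_q * rng.standard_normal(27)          # w_hat ~ q_theta
        mu_z, sig_z = encode(w)
        z = mu_z + sig_z * rng.standard_normal(4)           # z_hat ~ r_psi(z|w_hat)
        total += (- log_gauss(w, mu_q, sig_q)               # -log q_theta(w_hat)
                  - log_gauss(z, mu_z, sig_z)               # -log r_psi(z_hat|w_hat)
                  + log_gauss(z, np.zeros(4), np.ones(4))   # +log p(z_hat)
                  + log_gauss(w, *decode(z)))               # +log p_phi(w_hat|z_hat)
    return total / n_samples

estimate = dwp_term()
assert np.isfinite(estimate)
```

In the real method this term is added to the segmentation likelihood $\mathcal{L_D}$ and summed over all layers and channels, with the decoder weights $\phi$ held fixed after VAE training on the source filters.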
Model performance was compared on whole-tumour segmentation using subsets of BRATS18 volumes containing 5, 10, 15 or 20 randomly selected images, with a fixed test sample size of 50 images. To train U-Net in the non-Bayesian setting, we use a combination of binary cross-entropy and Dice losses. Each model was evaluated on three different random train/test splits. Table \ref{table:results_iou} summarizes the obtained results\footnote{An implementation of the methods can be found at \url{https://github.com/AKuzina/DWP}}. \subsection{Results} We can see that the models trained with DWP noticeably outperform both the randomly initialized and the pre-trained U-Net for all training sizes. We observe higher variability in prediction accuracy for the problems with smaller sample sizes; this variability shrinks as the training dataset grows, and the superiority of UNet-DWP becomes clearer. It is also worth mentioning that the pre-trained model with part of the weights frozen fails. We believe this means that information from other diseases is not relevant for the new task by default, and that without fine-tuning the whole network we are not able to achieve consistent results. \begin{table}[h] \begin{center} \begin{tabular}{@{}lllllllll@{}} \toprule Train size & UNet-DWP (ours) & UNet-PR & UNet-PRf & UNet-RI \\ \midrule 5 &\textbf{0.52} (0.05) & 0.49 (0.02)& 0.45 (0.03) & 0.50 (0.02) \\ 10 & \textbf{0.58} (0.05) & 0.52 (0.01)& 0.47 (0.03) & 0.53 (0.01) \\ 15 & \textbf{0.60} (0.02) & 0.56 (0.02)& 0.50 (0.02) & 0.58 (0.02) \\ 20 & \textbf{0.63}(0.01) & 0.58 (0.01)& 0.53 (0.02) & 0.60 (0.01) \\ \bottomrule \end{tabular} \caption{Intersection over Union metrics for the experiments with small available target dataset.} \label{table:results_iou} \end{center} \end{table} It is worth mentioning that the transfer learning model on average performs even worse than the model without any prior knowledge about the data.
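For reference, the Intersection over Union metric reported in Table~\ref{table:results_iou} can be sketched as follows for binary masks (an illustration only; the convention of returning 1.0 for two empty masks is our assumption, not stated in the paper):

```python
import numpy as np

def iou(pred, target):
    # Intersection-over-Union (Jaccard index) for binary segmentation masks.
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # assumed convention: two empty masks match perfectly
    return np.logical_and(pred, target).sum() / union

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(iou(pred, target))  # intersection 2, union 4 -> 0.5
```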
This result is quite surprising, but it can be explained by the strong disease specificity of the data. The datasets differ not only in the shapes of the target segmentation (multiple sclerosis plaques are much smaller and harder to detect than a brain tumour) but also in resolution, contrast and preprocessing method. As a result, after such initialization, fine-tuning may converge to a worse solution. \newpage
\section{Introduction} We will be interested in subgroups of $S_\infty$, the group of all permutations of the natural numbers, which have the additional property that all of their non-identity elements have only finitely many fixed points. Such groups are referred to as \emph{cofinitary groups}, while permutations with only finitely many fixed points are referred to as \emph{cofinitary permutations}. A cofinitary group which is not properly contained in another cofinitary group is called a \emph{maximal} cofinitary group, abbreviated \emph{MCG}. The existence of maximal cofinitary groups follows from the axiom of choice, which leaves open many questions regarding their possible cardinalities and their descriptive set-theoretic definability. The study of the possible sizes of maximal cofinitary groups, i.e., of the set $$\hbox{spec}(\mathrm{MCG}):=\{|\mathcal{G}|: \mathcal{G}\hbox{ is a maximal cofinitary group}\}$$ has been of interest since the early development of the subject. Adeleke~\cite{Adeleke} proved that every maximal cofinitary group is uncountable, Neumann showed that there is always a maximal cofinitary group of size $\mathfrak{c}$, while Zhang~\cite{Zhang} showed that whenever $\omega<\kappa\leq\mathfrak{c}$, consistently there is a maximal cofinitary group of size $\kappa$. A systematic study of $\hbox{spec}(\mathrm{MCG})$ is found in~\cite{JBOSYZ}, a study which was later generalized to analyze also the spectrum of $\kappa$-maximal cofinitary groups (see~\cite{VF}), where $\kappa$ is an arbitrary regular uncountable cardinal. In~\cite{VFAT} it was shown that the minimum of $\hbox{spec}(\mathrm{MCG})$, denoted $\mathfrak{a}_g$, can consistently be of countable cofinality. Note that any two distinct elements of a cofinitary group are eventually different reals, and so cofinitary groups can be viewed as particular instances of almost disjoint families.
Exactly this similarity was one of the major driving forces in the early studies of the definability properties of maximal cofinitary groups. While, by a well-known result of A.\ R.\ D.\ Mathias~\cite{Mathias}, there are no analytic maximal almost disjoint families, in the constructible universe $L$ there is a co-analytic maximal almost disjoint family (see~\cite{Mille}). Regarding the definability properties of maximal cofinitary groups, Gao and Zhang (see~\cite{SGYZ}) constructed in $L$ a maximal cofinitary group with a co-analytic set of generators, a result which was later improved by Kastermans~\cite{BK}, who showed that in $L$ there is a co-analytic maximal cofinitary group. The existence of analytic maximal cofinitary groups was one of the most interesting open questions in the area, a question which was answered in 2016 by Horowitz and Shelah~\cite{HHSS}, who showed that there is a Borel maximal cofinitary group.\footnote{Another interesting dissimilarity between MAD families and MCGs is the fact that consistently $\mathfrak{d}=\omega_1<\mathfrak{a}_g=\omega_2$ (see~\cite{MHJSYZ}), while the consistency of $\mathfrak{d}=\omega_1<\mathfrak{a}=\omega_2$ is a well-known open problem.} Further studies of the definability properties of maximal almost disjoint families can be found in~\cite{JBYK, VFSFYK, SFLZ, AT}. The present paper is motivated by the following question: what can we say about the definability properties of maximal cofinitary groups $\mathcal{G}$ such that $|\mathcal{G}|<\mathfrak{c}$? Clearly a Borel maximal cofinitary group must be of size continuum, and a $\mathbf{\Sigma}^1_2$ maximal cofinitary group must be either of size $\aleph_1$ or continuum, since a $\mathbf{\Sigma}^1_2$ set is the union of $\aleph_1$ many Borel sets. We show: \begin{theorem} Let $2\leq M < N < \aleph_0$ be given.
There is a cardinal preserving generic extension of the constructible universe $L$ in which $$\mathfrak{a}_g=\mathfrak{b}=\mathfrak{d}=\aleph_M<\mathfrak{c}=\aleph_N$$ and in which there is a $\Pi^1_2$ definable maximal cofinitary group of size $\mathfrak a_g$. \end{theorem} The cardinal characteristics $\mathfrak b$ and $\mathfrak d$ referred to in the above theorem are the \emph{bounding number} and the \emph{dominating number}. For readers unfamiliar with them, we review the definitions of all cardinal characteristics mentioned in this paper in the next section. Our techniques also allow $M=1$, i.e., we can construct a model in which $\mathfrak{a}_g=\mathfrak{d}=\aleph_1<\mathfrak{c}=\aleph_N$ and in which $\mathfrak{a}_g$ is witnessed by a $\Pi^1_2$-maximal cofinitary group. The projective definition of the witness to $\mathfrak{a}_g$ in this model, though, is perhaps not optimal. The consistency of $\mathfrak{a}_g=\mathfrak{d}=\aleph_1<\mathfrak{c}$ with a $\Pi^1_1$ witness to $\mathfrak{a}_g$ is work in progress of the first and third authors (see~\cite{VFDS}). The main result of the paper should also be compared to~\cite{VFDSAT}, where the authors construct a co-analytic, Cohen indestructible maximal cofinitary group in $L$. Thus, consistently $\mathfrak{a}_g=\omega_1<\mathfrak{d}=\mathfrak{c}$ with a $\Pi^1_1$-witness to $\mathfrak{a}_g$. The methods of~\cite{VFDSAT} and the current paper differ significantly. While the result of~\cite{VFDSAT} is rooted in the preservation properties of a specially constructed cofinitary group in $L$, necessarily of cardinality $\aleph_1$, the techniques of the current paper allow us to control the value of $\mathfrak{a}_g$ beyond $\aleph_1$. There are many remaining open questions, some of which will be discussed in our final section.
\section{Some Notation and Terminology} Given an index set $A$, we will call a mapping $\rho: A\to S_\infty$ such that $\im(\rho)$ generates a cofinitary group a \emph{cofinitary representation}. In particular, given a freely generated cofinitary group with generating set $\{g_a:a\in A\}$, the mapping $\rho:A\to S_\infty$ sending each $a$ to $g_a$ is a cofinitary representation. Given such a cofinitary representation $\rho$ and an index $a$ which does not occur in $\dom(\rho)$, we denote by $W_{\rho,\{a\}}$ the set of all words $w$ of the form $w=a_n^{j_n}\cdots a_1^{j_1}$ where for each $l$ such that $1\leq l\leq n$ we have $a_l\in \dom(\rho)\cup\{a\}$, $j_l\in\{1,-1\}$ and no cancellations are allowed; or $n=0$ and $w=\emptyset$.\footnote{Such words are referred to as {\emph{reduced words}}.} An injective partial function $s:{\mathbb N}\rightharpoonup{\mathbb N}$ will be referred to as a partial permutation. Given a word $w\in W_{\rho,\{a\}}$ and a (possibly partial) injective mapping $s$, we denote by $w[s]$ the (possibly partial) injective mapping obtained by substituting each occurrence of $b^j$, where $b\in\dom(\rho)$ and $j\in\{-1,1\}$, with $\rho(b)^j$, and each occurrence of $a^j$, where $j\in\{-1,1\}$, with $s^j$. Now, given a word $w\in W_{\rho,\{a\}}$, $w=a_n^{j_n}\cdots a_1^{j_1}$, where $j_l\in\{-1,1\}$ and a (possibly partial) injective mapping $s$, \emph{the evaluation path} of a given integer $m$ under $w[s]$ is the sequence $\langle m_k:k\in\omega^\prime\rangle$, where $m_0=m$, for each $k$ if $k=nl+i$, then $$m_k=(a_i^{j_i}[s]\circ\cdots\circ a_1^{j_1}[s]\circ w^{nl}[s])(m),$$ where $\omega^\prime$ is either $\omega$, or denotes the least natural number for which $m_{\omega^\prime}$ is not defined. Following the notation of~\cite{VFDSAT}, we denote by $\hbox{use}(w,s,m)$ the set of natural numbers appearing in the evaluation path of $m$ under $w[s]$.
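To make the word-evaluation machinery concrete, the following Python sketch is purely illustrative: the finite dictionary standing in for $\rho(b)$ (in reality an element of $S_\infty$), the partial permutation $s$, and the word are all invented for the example. It evaluates a reduced word cyclically on a partial permutation, producing the evaluation path and the set $\hbox{use}(w,s,m)$.

```python
def apply_letter(letter, j, s, rho, m):
    # Apply one letter a_i^{j_i}: the distinguished index "a" is interpreted
    # by the partial permutation s, every other index by rho.
    f = s if letter == "a" else rho[letter]
    if j == -1:
        f = {v: k for k, v in f.items()}   # inverse of a (partial) injection
    return f.get(m)                        # None when undefined

def evaluation_path(word, s, rho, m, max_steps=100):
    # word lists w = a_n^{j_n} ... a_1^{j_1} left to right; a_1^{j_1} acts
    # first, and the word repeats cyclically, tracking m under w[s], w^2[s], ...
    path, letters, k = [m], list(reversed(word)), 0
    while k < max_steps:
        m = apply_letter(*letters[k % len(letters)], s, rho, m)
        if m is None:                      # w[s] is partial: the path terminates
            break
        path.append(m)
        k += 1
    return path

def use(word, s, rho, m):
    # use(w, s, m): all numbers appearing in the evaluation path of m under w[s]
    return set(evaluation_path(word, s, rho, m))

rho = {"b": {0: 1, 1: 2, 2: 0, 3: 4, 4: 3}}   # finite toy stand-in for rho(b)
s = {1: 3}                                     # finite partial permutation for a
w = [("b", 1), ("a", 1), ("b", 1)]             # the reduced word b a b
print(evaluation_path(w, s, rho, 0))           # [0, 1, 3, 4, 3]
print(use(w, s, rho, 0))                       # {0, 1, 3, 4}
```

Here the path terminates because $s$ is undefined at $3$ on its second pass through the word, matching the role the finiteness of $s^p$ plays in the forcing conditions below.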
Another notion naturally appearing in the analysis of the fixed points and evaluation paths associated to a given word $w$ is the notion of a \emph{circular shift} of a word (see~\cite{VFDSAT}). More precisely, given a word $w=w_n\cdots w_1$, where $w_i=a_i^{j_i}$, $j_i\in\{-1,1\}$ for each $i$, and a permutation $\sigma:\{1,\cdots, n\}\to\{1,\cdots,n\}$ such that $\sigma(i)=i + k\hbox{ mod }n$ for some $k\in {\mathbb N}$, we will refer to $w_{\sigma(n)}\cdots w_{\sigma(1)}$ as a circular shift of $w$. Thus, in particular, a word of length $n$ has only finitely many circular shifts. Finally, for $w_0, w_1 \in W_{\rho,\{a\}}$ we say $w_1$ is a \emph{proper conjugate subword of} $w_0$ if $w_0 = w^{-1} w_1 w$ for some word $w \in W_{\rho,\{a\}}\setminus\{\emptyset\}$ and $w_1 \neq \emptyset$. \medskip We review definitions of the well-known cardinal characteristics $\mathfrak a$, $\mathfrak a_g$, $\mathfrak b$, and $\mathfrak d$ (for an introduction to cardinal characteristics, see \cite{BLASS}). An \emph{almost disjoint family} is a collection of infinite subsets of $\omega$ any two of which have finite intersection. A \emph{maximal almost disjoint} (MAD for short) \emph{family} is an almost disjoint family which is not properly contained in any other almost disjoint family. Write $\omega^\omega$ for the set of functions from $\omega$ to $\omega$. Given $f,g\in \omega^\omega$ write $f \mathbin{\leq^*} g$ to mean that $\{n : f(n) > g(n)\}$ is finite.
Now \begin{align*} \mathfrak a &= \min\{\lvert \mathcal A\rvert : \mathcal A \subseteq \powerset(\omega), \text{ $\mathcal A$ is an infinite MAD family}\},\\ \mathfrak a_g &= \min\{\lvert \mathcal G\rvert : \mathcal G \subseteq \omega^\omega, \text{ $\mathcal G$ is a MCG}\},\\ \mathfrak b &= \min\{ \lvert \mathcal F\rvert : \mathcal F \subseteq \omega^\omega, (\forall g \in \omega^\omega) (\exists f \in \mathcal F)\; f \mathbin{\not\leq^*} g\},\\ \mathfrak d &= \min\{ \lvert \mathcal F\rvert : \mathcal F \subseteq \omega^\omega, (\forall g \in \omega^\omega) (\exists f \in \mathcal F)\; g \mathbin{\leq^*} f\} \end{align*} where of course $\lvert x\rvert$ denotes the cardinality of $x$. \section{Adding cofinitary groups of coding permutations}\label{section.group.forcing} Fix a recursive bijection \[ \psi:\omega\times\omega\to\omega. \] Suppose that $\rho:A\to S_\infty$ is a cofinitary representation and let $a$ be an index not included in $A$ (i.e., we ask $a, a^{-1} \notin A$). Write $\mathcal G$ for the group generated by $\im(\rho)$, $W=W_{\rho,\{a\}}$ for the set of reduced words in the alphabet $\dom(\rho)\cup\{a,a^{-1}\}$, $\mathrm{WD}$ for the set of words from $W$ in which $a$ or $a^{-1}$ occurs at least once, and $\mathrm{WS}$ for the set of words $w \in \mathrm{WD}$ without a proper conjugate subword. Further, suppose that we are given \begin{itemize} \item $\mathcal{F}=\{f_{m,\xi}:m\in\omega,\xi\in\omega_1\}$, a family of almost disjoint permutations (i.e., the graphs are pairwise almost disjoint subsets of $\omega\times\omega$) so that $ f_{m,\xi} \notin \im(\rho)$ and $\langle \im(\rho), f_{m,\xi}\rangle$ is cofinitary for each $m\in\omega,\xi\in\omega_1$. \item For each $w \in \mathrm{WS}$, a family $\mathcal{Y}^w=\{Y^w_m:m\in\omega\}$ of subsets of $\omega_1$, \item For each $w \in \mathrm{WS}$ a subset $z^w$ of $\omega$.
\end{itemize} Write $\mathcal Y$ for $\langle \mathcal{Y}^w : w \in \mathrm{WS}\rangle$ and $\bar z$ for $\langle z^w : w \in \mathrm{WS}\rangle$. We will define a $\sigma$-centered poset, denoted ${\mathbb Q}^{\mathcal{F},\mathcal{Y},\bar z}_{\rho,\{a\}}$, which adjoins a generic permutation $g$ such that the mapping $\hat{\rho}:A\cup\{a\}\to S_\infty$, which extends $\rho$ and sends $a$ to $g$ is a cofinitary representation; moreover, for each $w \in \mathrm{WS}$ \begin{itemize} \item the permutation $w[g]$ codes (in a sense about to be defined) the real $z^w$, \item for each $m\in \psi[g]$, $w[g]$ codes $Y^w_m$ almost disjointly via the family $\mathcal{F}^m=\{f_{m,\xi}: \xi\in\omega_1\}$. \end{itemize} In order to define the poset we must discuss how each $z^w$ will be coded and introduce some related terminology. To this end, let $S_0$ be the unique function from $\mathrm{WS}$ into the set of words in the alphabet $\{a,a^{-1},y,y^{-1}\}$ which in each word replaces each letter from $A$ with $y$ (and inverses of letters from $A$ with $y^{-1}$). Moreover, fix a function $S\colon \mathrm{WS} \to \omega$ such that for all $w,w'\in \mathrm{WS}$ : \begin{itemize} \item $S(w)=S(w')\iff S_0(w)= S_0(w')$, \item $\lh(w) < \lh(w') \Rightarrow S(w) < S(w')$, and \item $S(w) > 1$. \end{itemize} \begin{definition}[Coding]\label{d.coding} Let a sequence $\chi \in 2^{\leq\omega}$ be given. Suppose $\sigma$ is a partial function from $\omega$ to $\omega$ and $w\in \mathrm{WS}$. \begin{enumerate} \item We say $(w,\sigma)$ \emph{codes} $\chi$ \emph{with parameter} $m$ if and only if \begin{equation}\label{e.code} (\forall k < \lh(\chi)) \; \sigma^{S(w)\cdot(k+1)}(m) \equiv \chi(k) \pmod{2}. \end{equation} \item Suppose now that $\lh(\chi) <\omega$. Write $w=w_1 w_0$ where $w_0$ is shortest so that its leftmost letter is $a$ or $a^{-1}$.
We say that $(w,\sigma)$ \emph{exactly codes} $\chi$ \emph{with parameter} $m$ if $(w,\sigma)$ codes $\chi$ and in addition \[ w_0 w^{S(w)\cdot \lh(\chi)}[\sigma](m)\text{ is undefined,} \] that is, if the path of $m$ under $w[\sigma]$ terminates as soon as possible. \item We say that $m'$ is \emph{a critical point in the path of $m$ under $(w,\sigma)$} if this path terminates with $m'$ and has length $S(w)(k+1)-1$ for some $k$. \end{enumerate} \end{definition} Note that clearly $(w,\sigma)$ can only \emph{exactly} code $\chi$ if the latter is finite and $\sigma$ is not a bijection (i.e., $\sigma$ or $\sigma^{-1}$ is only partially defined). \medskip Finally given $\mathcal{F},\mathcal{Y},\bar z,\rho,\{a\}$ as above we define ${\mathbb Q}={\mathbb Q}^{\mathcal{F},\mathcal{Y}, \bar z}_{\rho,\{a\}}$. First we define an auxiliary forcing ${\mathbb Q}_0$; it consists of all tuples $p=\langle s^p, F^p, \bar m^p, s^{p,*}\rangle$ where: \begin{enumerate} \item\label{Q.first} $s^p$ is an injective finite partial function from $\omega$ to $\omega$; \item $F^p$ is a finite subset of $\mathrm{WS}$ which is closed with respect to taking subwords; \item $\bar m^p = \langle m^p_w:w\in \dom(\bar m^p)\rangle$ with $\dom(\bar m^p) \subseteq F^p$ and each $m^p_w\in \omega$; \item\label{Q.last} $s^{p,*} = \langle s^{p,*}_w:w\in \dom(s^{p,*})\rangle$ is a finite partial function from $F^p$ to \[ \big\{f_{m,\xi}: m\in\psi\big[w[s^p]\big],\xi\in Y^w_m\big\}; \] \end{enumerate} The extension relation for ${\mathbb Q}_0$ is defined as follows: $q=\langle s^q,F^q,\bar{m}^q, s^{q,*}\rangle \leq_0 p=\langle s^p,F^p,\bar{m}^p, s^{p,*}\rangle$ if and only if \begin{enumerate}[(A)] \item $s^q$ end-extends $s^p$, $F^q\supseteq F^p$; \item for every $w\in F^p$ if $m\in\hbox{fix}(w[s^q])$, then there is a non-empty subword $w'$ of $w$ such that letting $w=w_1w' w_0$ and letting $\langle \hdots m_1, m_0 \rangle$ be the $(w,s^q)$-path of $m$, $m_k \in \fix(w'[s^p])$ where $k$ is the length of $w_0$; i.e.,
the path has the following form: \[ m \xleftarrow{\ w_1} m_{k} \xleftarrow{\ w'} m_{k}\stackrel{\ w_0}{\longleftarrow} m \] \item $s^{q,*}\supseteq s^{p,*}$ and for all $f\in \ran(s^{p,*})$, $(s^q\setminus s^p)\cap f=\emptyset$. \item $\bar{m}^q\upharpoonright (\hbox{dom}(\bar{m}^p)\cap\hbox{dom}(\bar{m}^q)) = \bar{m}^p\upharpoonright (\hbox{dom}(\bar{m}^p)\cap\hbox{dom}(\bar{m}^q))$ \end{enumerate} Finally, ${\mathbb Q}$ is defined to be the set of $p\in{\mathbb Q}_0$ which in addition to items \eqref{Q.first}--\eqref{Q.last} above also satisfy \begin{enumerate}[start=5] \item\label{Q.exact} for each $w \in \dom(\bar m^p)$ there exists a (unique) $l$ which we denote by $l^p_w$ such that $(w,s^p)$ exactly codes $\chi_{z^w}{\upharpoonright} l$ with parameter $m^p_w$; \end{enumerate} The ordering on ${\mathbb Q}$, which we denote by $\leq$ is just ${\leq_0}\mathbin{\cap}({\mathbb Q}\times{\mathbb Q})$. \medskip \begin{prop}\label{prop.group.forcing} Let $G$ be a ${\mathbb Q}$-generic filter and let $$\sigma^G=\bigcup\{s:\exists F,\bar{m}, s^*\hbox{ s.t. }\langle s,F,\bar{m}, s^*\rangle\in G\}.$$ The permutation $\sigma^G$ has the following properties: \begin{enumerate}[label=(\Alph*)] \item The group $\langle \im(\rho )\cup\{\sigma^G \}\rangle$ is cofinitary. \item If $f$ is a ground model permutation, $f\notin \langle\im(\rho)\rangle$, $\langle \{f\}\cup\im(\rho)\rangle$ is cofinitary and $f$ is not covered by finitely many permutations in $\mathcal{F}$, then there are infinitely many $n$ such that $f(n)=\sigma^G(n)$ and so $\langle\im(\rho)\cup\{\sigma^G\}\cup\{f\}\rangle$ is not cofinitary; \item For each $w\in \mathrm{WS}$ there is $m_w\in \omega$ such that $w[\sigma^G]$ codes the characteristic function of $z^w$ with parameter $m_w$.
\item For each $w\in \mathrm{WS}$, for all $m\in\psi[w[\sigma^G ]]$, for all $\xi\in\omega_1$ $$|w[\sigma^G ]\cap f_{m,\xi}|<\omega\hbox{ iff }\xi\in Y^w_{m}.$$ \end{enumerate} \end{prop} We shall now show these properties to hold, in a series of lemmas. It is most convenient to start with the most involved of the series; it has a precursor in~\cite[Lemma 3.12]{VFDSAT} and in conjunction with the following lemmas, it proves Property $(C)$. \begin{lemma}[Generic Coding]\label{lemma.generic.coding} For any $w \in \mathrm{WS}$ and any $l \in {\mathbb N}$, let $D^{\textup{code}}_{w,l}$ denote the set of $q\in{\mathbb Q}$ such that $w \in \dom(\bar m^q)$ and for some $l' \geq l$, $(w,s^q)$ exactly codes $\chi_{z^w}\restriction l'$ with parameter $\bar m^q_w$. Then $D^{\textup{code}}_{w,l}$ is dense in ${\mathbb Q}$. \end{lemma} \begin{proof} Suppose $p\in{\mathbb Q}$ and $w\in \mathrm{WS}$ are given. If $w \notin \dom(\bar m^p)$ it is clear that we can choose $m$ large enough so that letting \[ q = \langle s^p, F^p, \bar m^p \cup\{(w,m)\},s^{p,*}\rangle \] we obtain a condition $q \in {\mathbb Q}$ with $l^q_w = 0$ (i.e., we can choose $m$ so that $(w,s^p)$ codes the trivial string $\emptyset$ with parameter $m$). So suppose $w \in \dom(\bar m^p)$.
Write $m$ for $\bar m^p_w$ and $l$ for $l^p_w$. It suffices to find $s \supseteq s^p$ such that letting \[ q = \langle s, F^p, \bar m^p,s^{p,*}\rangle \] we obtain a condition $q \in {\mathbb Q}$ with $l^q_w = l+1$. Let $m_0$ be the terminating value in the path of $m$ under $(w,s^p)$ and suppose the next letter in $w$ that should be applied is $a^i$ for $i\in\{-1,1\}$. Let $W_0$ denote the set of words $w'$ in $\dom(\bar m^p)$ whose path from $m^p_{w'}$ also terminates with $m_0$ and with next letter also $a^i$ (we cannot avoid extending coding paths of words in $W_0$ and have to ensure exact coding for all of them). Note that this path has length $l^p_{w'} \cdot S(w')$ if the right-most letter of $w'$ is $a$ or $a^{-1}$ and $l^p_{w'} \cdot S(w') +1$ otherwise. For each $w' \in W_0$ let $g(w') \in \im(\rho)\cup\{\emptyset\}$ be the rightmost letter if this letter is neither $a$ nor $a^{-1}$, and $g(w')=\emptyset$ otherwise. Then \[ m_0 = g(w') {w'}^{S(w')\cdot l^p_{w'}}[s^p] (m). \] The next point in the path at which we must meet a coding requirement for a word $w' \in W_0$ will be reached after applying $(w')^{S(w')}$ to $g(w')^{-1}(m_0)$. Write $W(w')$ for the set of initial segments of $(w')^{S(w')}$ and consider the tree \[ T=\bigcup_{w' \in W_0}W(w') \] ordered by end-extension. We make finitely many extensions of $s^p$, each time extending a coding path starting with $m_0$ by one step, working along all words in $T$ by induction on their length. So suppose $w' \in T$ and we have already extended $s^p$ to $s'$ so that \[ w'[s'](m_0) = m', \] and that for no extension $w''$ of $w'$ in $T$ is $w''[s'](m_0)$ defined, and fix a word $a^jw' \in T$ where $j\in\{-1,1\}$. For each $w^* \in W_0$ denote by $l(w^*)$ the length of the path of $m^p_{w^*}$ under $(w^*,s')$. We shall now find $s''$ extending $s'$. Let \[ E= \dom(s') \cup \ran(s')\cup \ran(\bar m^p) \] and let $F$ consist of all subwords of circular shifts of words in $F^p$.
Find $m''$ satisfying the following requirements: \begin{align} \label{r.avoid.fixedpoints} m'' &\notin \bigcup\{\fix(u[s']) : u \in F\setminus \{\emptyset\}\},\\ \label{r.avoid.morefixedpoints} m'' &\notin \bigcup\{\fix(g^{-1}_0 g_1[s']) : g_0,g_1 \in F\cap\langle\im(\rho)\rangle\setminus\{\emptyset\}, g_0 \neq g_1\},\\ \label{r.avoid.paths} m'' &\notin \bigcup\{ g^iu^j[s'][E] : i,j\in\{-1,1\}, u \in F, g \in F \cap \langle\im(\rho)\rangle\},\\ \intertext{and if $m'$ is a critical point in the path under $(w^*,s')$ of $\bar m^p_{w^*}$,} \label{r.code.z} m'' &\equiv z^{w^*}\left(\frac{l(w^*)+1}{S(w^*)\lh(w^*)}\right) \pmod 2. \end{align} Note that all but the last requirement exclude only finitely many values for $m''$. To see that $m''$ as above can be found, we show that $m'$ is a critical point in the path under $(w^*,s')$ of $\bar m^p_{w^*}$ for at most a single $w^*$. Therefore we can choose $m''$ to be any large enough number with the parity prescribed by \eqref{r.code.z}. \begin{claim}\label{claim.one.word} There is at most one word $w^* \in W_0$ such that the path of $m^p_{w^*}$ under $(w^*,s')$ terminates at $m'$ and $l(w^*)+1=(l^p_{w^*} +1)\cdot S(w^*)\cdot \lh(w^*)$, i.e., so that we must respect the coding requirement \eqref{r.code.z} for $w^*$. \end{claim} \begin{proof} Suppose there are $w^*_0 \neq w^*_1$ with the above property. Depending on whether $g(w^*_i)=\emptyset$ or $g(w^*_i)\in\im(\rho)$ we have $l(w^*_i)= k\cdot \lh(w^*_i)$ or $l(w^*_i)= k\cdot \lh(w^*_i)-1$ for each $i\in\{0,1\}$. First assume the words are not of equal length, w.l.o.g. $\lh(w^*_0) < \lh(w^*_1)$. But then \[ S(w^*_0) \cdot \lh(w^*_0) < S(w^*_1) \cdot \lh(w^*_1) - 1 \] so for at most one $i\in\{0,1\}$ can the length of the path from $m_0$ to $m'$ under $(w^*_i,s')$ be of length $S(w^*_0) \cdot \lh(w^*_0)$ or $S(w^*_0) \cdot \lh(w^*_0)-1$.
If on the other hand $\lh(w^*_0) = \lh(w^*_1)$ then since $w^*_0 \neq w^*_1$ the path of $m_0$ under $(w^*_0,s')$ must diverge from its path under $(w^*_1,s')$ before reaching $m'$: these paths diverge at some $m_k$ at which $w^*_0$ and $w^*_1$ disagree on the next letter since, by induction, $s'$ was chosen to satisfy Requirements~\eqref{r.avoid.morefixedpoints} and \eqref{r.avoid.paths} each time we made an extension; and these paths are long enough to witness a disagreement between $w^*_0$ and $w^*_1$ because $S(w^*_i)>1$ (this is necessary and sufficient to deal with words whose only difference is in the first letter, when this letter is from $\im(\rho)$). \renewcommand{\qedsymbol}{{\tiny Claim \ref{claim.one.word}.} $\Box$} \end{proof} Let $s'' = s' \cup\{(m',m'')\}$; the next two claims will show that $p' = \langle s'', F^p, \bar m^p,s^{p,*}\rangle$ is a condition in ${\mathbb Q}_0$ below $p$ (that is, a condition in ${\mathbb Q}$ except possibly for the requirement of exact coding). \begin{claim}\label{claim.no.new.paths} For any $w \in \dom(\bar m^p) \setminus W_0$, the path of $m^p_w$ under $(w,s^p)$ is the same as under $(w,s'')$. \end{claim} \begin{proof} This is obvious by Requirement~\eqref{r.avoid.paths} above. \renewcommand{\qedsymbol}{{\tiny Claim \ref{claim.no.new.paths}.} $\Box$} \end{proof} The next claim shows that $p' \leq_0 p$. \begin{claim}\label{claim.no.new.fixed.points} For every $w\in F^p$ and $m\in\fix(w[s''])$ there is a non-empty subword $w_0$ of $w$ such that letting $w=w'w_0 w''$ and letting $\langle \hdots m_1, m_0 \rangle$ be the $(w,s'')$-path of $m$, $m_k \in \fix(w_0[s'])$ where $k$ is the length of $w''$; i.e., the path has the following form: \[ m \xleftarrow{\ w'} m_{k} \xleftarrow{\ w_0} m_{k} \xleftarrow{\ w''} m. \] \end{claim} \begin{proof} Fix $w\in F^p$. Assume that $m_0 \in \fix (w[s'']) \setminus \fix(w[s'])$.
As the $(w,s')$-path of $m_0$ differs from the $(w,s'')$-path, the latter must contain an application of $a$ to $m'$ or of $a^{-1}$ to $m''$. Write this latter path as \begin{equation}\label{e.path} \hdots m_{k(3)}\stackrel{\;\;w''}{\longleftarrow} m_{k(2)} \stackrel{\;\;a^{j}}{\longleftarrow} m_{k(1)} \stackrel{\;\;w'}{\longleftarrow} m_{k(0)}=m_0 \end{equation} where $j\in\{-1,1\}$ and $m_{k(1)}=m'$ when $j=1$, $m_{k(1)}=m''$ when $j=-1$; moreover we ask that $w',w''$ are the maximal subwords of $w$ such that from $m_{k(0)}$ to $m_{k(1)}$ and $m_{k(2)}$ to $m_{k(3)}$, the path contains no application of $a$ to $m'$ or of $a^{-1}$ to $m''$ (allowing either of $w'$, $w''$ to be empty). Thus, $w'$ and $w''$ correspond to path segments where $s'$ and $s''$ agree: \begin{align*} w'[s''] (m_{k(0)})=w'[s'] (m_{k(0)}) = m_{k(1)},\\ w''[s''] (m_{k(2)})=w''[s'] (m_{k(2)}) = m_{k(3)}. \end{align*} It is impossible that $w=w''a^jw'$ and $m_0 = m_{k(3)}$ (for then \[ m''= (w' w'')^{-j}[s''](m'), \] again contradicting the choice of $m''$). Therefore, at step $k(3)$ again $a$ is applied to $m'$ or $a^{-1}$ to $m''$ by maximality of $w''$. Write the path as \begin{equation*} \hdots \stackrel{\;\;\;a^{j'}}{\longleftarrow} m_{k(3)} \stackrel{\;\;w''}{\longleftarrow} m_{k(2)} \stackrel{\;\;a^{j}}{\longleftarrow} m_{k(1)} \stackrel{\;\;w'}{\longleftarrow} m_{k(0)}= m_0 \end{equation*} with $j'\in\{-1,1\}$ and observe: \begin{enumerate}[label=\arabic*.] \item $m_{k(2)} = m_{k(3)}$; for otherwise, $m'' = (w'')^i[s'](m')$ for some $i\in\{-1,1\}$, contradicting the choice of $m''$. \item Thus, $w'' \neq \emptyset$, since on one side of $w''$ we have $a$ and on the other $a^{-1}$ and $w$ is in reduced form. \item As $m'' \notin \fix(w''[s'])$, we have that $m_{k(2)}=m_{k(3)}=m'$. \end{enumerate} So $m' \in \fix(w''[s'])$, proving the claim.
\renewcommand{\qedsymbol}{{\tiny Claim \ref{claim.no.new.fixed.points}.} $\Box$} \end{proof} Repeating the above argument for each relevant word in $T$ we obtain a condition $q \leq p$ also satisfying the exact coding condition \eqref{Q.exact} and such that for each $w^* \in W_0$, $l^q_{w^*}=l^p_{w^*} +1$ as promised. \renewcommand{\qedsymbol}{{\tiny Lemma \ref{lemma.generic.coding}.} $\Box$} \end{proof} The next lemma shows that $g$ is a permutation of $\omega$. \begin{lemma} For each $n\in\omega$ the sets $D_n=\{q\in{\mathbb Q}: n\in\dom(s^q)\}$ and $D^n=\{q\in{\mathbb Q}:n\in\ran(s^q)\}$ are dense in ${\mathbb Q}$. \end{lemma} \begin{proof} To see $D_n$ is dense, let $p\in {\mathbb Q}$ be given and find $q\in D_n$, $q \leq p$. If $n$ occurs as the last value in a coding path, the previous lemma applies. Otherwise let $W^*$ be the set of subwords of circular shifts of words in $F^p$ and pick an arbitrary $n'$ such that \begin{equation*} \begin{split}\label{e.domain.ext.cont} n' \notin &\bigcup \big\{\fix(w'[s^p]) \colon w' \in W^*\setminus\{ \emptyset\}\big\},\\ n' \notin &\bigcup \big\{w'[s^p]^i (n) \colon i \in \{-1,1\}, w' \in W^* \big\},\text{ and}\\ n' \notin &\ran(s^p). \end{split} \end{equation*} Let $s'=s^p\cup\{(n,n')\}$ and $q=\langle s', F^p, \bar m^p,s^{p,*}\rangle$. Then $q \in {\mathbb Q}$ and $q\leq p$ by exactly the same argument as in Claims \ref{claim.no.new.paths} and \ref{claim.no.new.fixed.points} above. The case of $D^n$ is symmetric and left to the reader. \end{proof} Property $(A)$ above is established by the previous lemma and the following one. \begin{lemma} For each $w\in W_{\rho,\{a\}}$, the set \[ D_w=\{q\in{\mathbb Q}: q\Vdash\left|\fix(w[\sigma_G])\right|<\infty\} \] is dense in ${\mathbb Q}$.
\end{lemma} \begin{proof} First note that $q\Vdash\lvert\fix(w[\sigma_G])\rvert<\infty$ if $w \in F^q$: This is because such $q$ forces---by the definition of the ordering on ${\mathbb Q}$---that any fixed point of $w[\sigma_G]$ must arise from a fixed point of $w'[s^q]$ where $w'$ is a subword of $w$, and there are only finitely many such points. Therefore clearly $D_w$ is dense, since we may always add the shortest conjugated subword of any word $w$ to $F^q$ to form a new condition, and of course $w[\sigma_G]$ has the same number of fixed points as its shortest conjugated subword. \end{proof} The next lemma shows Property $(B)$ above. Moreover, Property $(D)$ is a direct corollary to this lemma and the almost disjoint requirement in the extension relation of our poset. \begin{lemma} Suppose we are given $m\in\omega$, $w\in \mathrm{WS}$ and $\tau \in S_\infty$. \begin{enumerate} \item If $\tau \notin \langle \im(\rho)\rangle$, $\langle \im(\rho), \tau\rangle$ is cofinitary, and $\tau$ is not covered by finitely many elements of $\mathcal{F}$, the set $D^{\textup{hit}}_{\tau,m} = \{q\in {\mathbb Q}\;\colon (\exists n\geq m) \; w[s^q](n)=\tau(n)\}$ is dense. \item If $\tau \in \mathcal{F}$, $\tau = f^w_{n,\xi}$, and $\xi \notin Y^w_m$, then the set $D^{\textup{hit}}_{\tau,m}$ is dense as well. \item If $\tau \in \mathcal{F}$, $\tau = f^w_{n,\xi}$, and $\xi \in Y^w_m$, the set $D^{\textup{hit}}_{\tau,m}\cup\{ p \in {\mathbb Q} : n \in \psi[w[s^p]]\}$ is dense in ${\mathbb Q}$. \end{enumerate} \end{lemma} \begin{proof} Let $\tau$ and $m$ be given as in the lemma. Note that in all three cases $\tau \notin \langle \im(\rho)\rangle$ and $\langle \im(\rho), \tau\rangle$ is cofinitary, and we can assume $\tau \notin s^{p,*}$ (in the third case: otherwise $n\in\psi[w[s^p]]$), and therefore that \begin{equation}\label{e.tau.infinite} \lvert \tau \setminus \bigcup s^{p,*} \rvert = \omega.
\end{equation} Let $E'=\dom(s^p)\cup\ran(s^p)\cup\ran(\bar m^p)$, let $F^*$ consist of all subwords of circular shifts of words in $F^p$, and find $n\in\omega\setminus m$ such that \begin{equation*}\label{e.generic.hitting} \begin{split} n &\notin \tau^{-1}\Big[ \bigcup \big\{ \fix(w'[s^p]) \colon w' \in F^*\setminus\{ \emptyset\}\big\}\Big],\\ n &\notin \tau^{-1} \Big[ \bigcup \big\{ g^{-1}w'[s^p]^i [E'] \colon i \in \{-1,1\}, w' \in F^*, g\in F^*\cap \langle\im(\rho)\rangle \big\}\Big],\\ n &\notin \bigcup \big\{ \fix(\tau^{-1} g^{-1}w'[s^p]^i) \colon i \in \{-1,1\}, w' \in F^*, g\in F^*\cap \langle\im(\rho)\rangle \big\}, \text{ and}\\ \tau(n) &\neq f(n) \text{ for each $f\in s^{p,*}$.} \end{split} \end{equation*} The first two requirements obviously exclude only finitely many $n$; the same holds for the third requirement since $\tau \notin \langle \im(\rho)\rangle$ and $\langle \im(\rho), \tau\rangle$ is cofinitary. Since the last requirement holds for infinitely many $n$ by \eqref{e.tau.infinite}, we can pick $n$ satisfying all the requirements. It follows that letting $n' = \tau(n)$ and $E=\{n\}\cup \dom(s^p)\cup\ran(s^p)\cup\ran(\bar m^p)$, $n'$ satisfies the requirements from \eqref{r.avoid.fixedpoints}--\eqref{r.code.z}. Therefore as in Lemma~\ref{lemma.generic.coding} we can let $s=s^p\cup\{(n,n')\}$, and $q = \langle s, F^p, \bar m^p, s^{p,*}\rangle$ is a condition below $p$ satisfying $q\in D^{\textup{hit}}_{\tau,m}$. \end{proof} Finally we show the following. \begin{lemma} The forcing ${\mathbb Q}$ is Knaster. \end{lemma} \begin{proof} It is straightforward to check that if $p,q\in{\mathbb Q}$ are such that $s^p = s^q$ and $\bar m^p$ agrees with $\bar m^q$ on $\dom(\bar m^p) \cap \dom(\bar m^q)$ then \[ r= \langle s^p, F^p\cup F^q, \bar m^p \cup \bar m^q, s^{p,*}\cup s^{q,*}\rangle \] is a condition in ${\mathbb Q}$ and $r \leq p, q$. Therefore ${\mathbb Q}$ is Knaster by a standard $\Delta$-systems argument.
\end{proof} \section{The forcing iteration} Since the proof is long and involved, we present a short road-map which may also be used as a reference for notation. We proceed in several steps: \begin{enumerate} \item\label{roadmap.clubs}\label{roadmap.Y} We start with the constructible universe $L$ as the ground model. We choose a sequence $\langle S_\delta : \delta < \omega_M \rangle$ of stationary subsets of $\omega_{M-1}$ and force to add a sequence $\langle C_\delta : \delta < \omega_M \rangle$ such that $C_\delta$ is a club in $\omega_{M-1}$ which is disjoint from $S_\delta$, ``killing'' the stationarity of $S_\delta$. Then we force to add a sequence $\langle Y_\delta : \delta < \omega_M\rangle$ such that $Y_\delta \subseteq \omega_1$ and $Y_\delta$ ``locally codes'' $C_\delta$. By ``locally coding'' we mean the property $(***)_{\gamma,m}$ below. For this purpose we also have to add a sequence $\mathcal{W}=\langle W^0_\gamma:\gamma \in \limord(\omega_M) \rangle$ of auxiliary subsets of $\omega_1$ where $W^0_\gamma$ will serve as a code for the ordinal $\gamma$. The forcing that adds $\langle C_\delta : \delta < \omega_M\rangle$, the auxiliary sets $\mathcal W$, as well as $\langle Y_\delta : \delta < \omega_M\rangle$ is denoted by ${\mathbb P}^*_0$, and the $(L,{\mathbb P}^*_0)$-generic extension is denoted by $V_1$. It will be the case that $\powerset(\omega)^{V_1}= \powerset(\omega)^L$. \item\label{roadmap.coding.reals} We force over $V_1$ to add a sequence \[ \mathcal{C}= \langle c^W_\gamma : \gamma \in \limord(\omega_M)\rangle \] of reals such that $c^W_\gamma$ codes $W^0_\gamma$. We denote the forcing that adds $\mathcal{C}$ by ${\mathbb P}(\mathcal{C})$ and the $(V_1,{\mathbb P}(\mathcal{C}))$-generic extension by $V_2$. \item\label{roadmap.continuum} We increase $2^\omega$ by adding $\omega_N$-many reals, forcing with $\operatorname{Add}(\omega,\omega_N)$. Write $V_3$ for the $(V_2,\operatorname{Add}(\omega,\omega_N))$-generic extension.
\item\label{roadmap.group} We now force to add the definable MCG. This is done in an iteration ${\mathbb P}(\mathcal{G}):=\langle{\mathbb P}^{\mathcal G}_\alpha,\dot{{\mathbb Q}}^{\mathcal G}_\alpha:\alpha\in\omega_M\rangle$ of length $\omega_M$ over $V_3$. The final $(V_3,{\mathbb P}(\mathcal G))$-generic extension is denoted by $L[G]$. We denote the $(V_3, {\mathbb P}^{\mathcal G}_\alpha)$-generic extension by $V_3[G^{\mathcal G}_\alpha]$. At step $\alpha < \omega_M$ in the iteration we force over $V_3[G^{\mathcal G}_\alpha]$ with ${\mathbb Q}_\alpha = {\mathbb P}_{\mathcal{F}_\alpha} * {\mathbb P}^{\text{cd}}_{\mathcal{F}_\alpha}* {\mathbb Q}^{\mathcal G}_\alpha$ where: \begin{enumerate} \item\label{roadmap.group.first} The first forcing ${\mathbb P}_{\mathcal{F}_\alpha}$ adds a family $\mathcal F_\alpha$ of size $\omega_1$ consisting of cofinitary permutations of $\omega$. We do this so that in the final model $L[G]$ the graphs of any two elements of $\bigcup_{\alpha < \omega_M} \mathcal F_\alpha$ will be almost disjoint. \item\label{roadmap.coding.reals2} The next forcing ${\mathbb P}^{\text{cd}}_{\mathcal{F}_\alpha}$ adds a real $c^{\mathcal F}_\alpha$ which almost disjointly codes $\mathcal F_\alpha$ via a definable almost disjoint family $\mathcal F^* \in L$ which remains fixed throughout the iteration. \item\label{roadmap.group.last} Finally ${\mathbb Q}^{\mathcal G}_\alpha$ is the forcing discussed in the previous section adding a single generator of our MCG, using all the machinery added in the previous steps to ensure definability of the resulting group. \end{enumerate} \end{enumerate} Step \eqref{roadmap.clubs} is described in Section~\ref{subsection.preparing} below. In this part we do not add countable sequences. Steps \eqref{roadmap.coding.reals} and \eqref{roadmap.continuum} are described in Section~\ref{subsection.adding.reals}.
Finally Steps \eqref{roadmap.group.first}--\eqref{roadmap.group.last}, in which we force to add an MCG of size less than $2^\omega$, are described in Section~\ref{subsection.group}. \subsection{Preparing the Universe}\label{subsection.preparing} We will work over the constructible universe $L$. Fix $2\leq M<N<\omega$ arbitrary. We will show that consistently $\mathfrak{a}_g=\omega_M<\mathfrak{c}=\omega_N$ with a $\Pi^1_2$-definable witness to $\mathfrak{a}_g$. Let $\bar{S}=\langle S_\delta:\delta < \omega_M\rangle$ be a sequence of stationary, costationary subsets of $\omega_{M-1}$ consisting of ordinals of cofinality $\omega_{M-2}$ and such that for $\delta\neq\delta'$, $S_\delta \cap S_{\delta'}$ is non-stationary. We also ask that $\bar S$ be definable in $L_{\omega_M}$. Every element of the intended $\Pi^1_2$-definable maximal cofinitary group will witness itself by encoding a pattern of stationarity and non-stationarity on a segment (a block of the form $[\gamma, \gamma+\omega)$ for $\gamma \in \limord(\omega_M)$) of $\bar{S}$. To achieve this, the following terminology will be useful. \begin{definition} A {\emph{suitable model}} is a transitive model $\mathcal M$ such that $\mathcal M \vDash\mathrm{ZF}^-$, $(\omega_M)^{\mathcal M}$ exists and $(\omega_M)^{\mathcal M}=(\omega_M)^{L^{\mathcal M}}$ (by $\mathrm{ZF}^-$ we mean an appropriate axiomatization of set theory without the Power Set Axiom). \end{definition} For each ordinal $\gamma \in \limord(\omega_M)$ write $W_\gamma$ for the $L$-least subset of $\omega_{M-1}$ such that \[ \langle \gamma, < \rangle \cong \langle W_\gamma, \in\rangle. \] For each $m=1,\dots,M-2$, let $\bar{S}^m=\langle S^m_\xi:\xi<\omega_{M-m}\rangle$ be a sequence of almost disjoint subsets of $\omega_{M-m-1}$ which is definable in $L_{\omega_{M-m-1}}$ (without parameters).
Successively using almost disjoint coding with respect to the sequences $\bar{S}^m$ (see~\cite{VFSFLZ11}), we can code each $W_\gamma$ into a set $W^0_\gamma \subseteq \omega_1$ such that the following holds: \bigskip \noindent If $\omega_1<\beta\leq\omega_2$ and ${\mathcal M}$ is a suitable model with $\omega_2^{\mathcal M}=\beta$, $\{W^0_\gamma\}\cup\omega_1\subseteq{\mathcal M}$, then $\mathcal{M}\vDash$ ``Using the sequences $\{\bar{S}^m\}_{m=1}^{m=M-2}$, the set $W^0_\gamma$ almost disjointly codes a set $W$ such that for some $\gamma<\omega_M$, $\langle \gamma,<\rangle \cong \langle W, \in\rangle$''. \bigskip Write ${\mathbb P}^{\mathcal{W}}$ for the forcing which adds $\mathcal{W}=\langle W^0_\gamma : \gamma \in \limord(\omega_M)\rangle$. It is easy to see that this forcing preserves the stationarity of each $S_\delta$ for $\delta<\omega_M$, preserves cofinalities, and does not add countable sequences (see again~\cite{VFSFLZ11}). Fix (until the last paragraph of this section) some $\delta < \omega_M$. Using bounded approximations adjoin a closed unbounded subset $C_\delta$ of $\omega_{M-1}$ such that $C_\delta\cap S_\delta=\emptyset$. The forcing ${\mathbb P}^{\text{cl}}_\delta$ which adds $C_\delta$ preserves stationarity of $S_\eta$ for each $\eta \in \omega_M\setminus\{\delta\}$, has size $\omega_{M-1}$, preserves cardinals and cofinalities, and does not add any countable sequences. Following the notation of~\cite{VFSFLZ11}, for a set of ordinals $X$, $\mathrm{Even}(X)$ denotes the subset of all even ordinals in $X$.
Furthermore, reproducing the ideas of~\cite{VFSFLZ11}, in $L[C_\delta]$ we can find a set $Z_\delta\subseteq\omega_{M-1}$ such that \bigskip \noindent $(*)_\delta$: If $\beta<\omega_{M-1}$ and $\mathcal{M}$ is a suitable model such that $\omega_{M-2}\subseteq \mathcal{M}$, $(\omega_{M-1})^{{\mathcal M}}=\beta$, and $Z_\delta\cap\beta\in\mathcal{M}$, then $\mathcal{M}\vDash \theta(\omega_{M-1}, Z_\delta\cap\beta)$, where $\theta(\omega_{M-1},X)$ is the formula ``$\mathrm{Even}(X)$ codes a triple $(\bar{C},\bar{W},\bar{X})$ where $\bar{W}$, $\bar{X}$ are the $L$-least codes of ordinals $\gamma$, $\delta<\omega_M$ respectively such that $\gamma$ is the largest limit ordinal not exceeding $\delta$, and $\bar{C}$ is a club in $\omega_{M-1}$ disjoint from $S_{\delta}$". \bigskip Using the same sequences $\bar{S}^m$ as when coding $W_\gamma$ into $W^0_\gamma$, we code the sets $Z_\delta$ into subsets $X_\delta$ of $\omega_1$ with the following property (again using the construction from~\cite{VFSFLZ11}): \bigskip \noindent $(**)_\delta$: Suppose that $\omega_1<\beta\leq\omega_2$, ${\mathcal M}$ is a suitable model with $\omega_2^{\mathcal M}=\beta$, and letting $\gamma$ be the largest limit ordinal below $\delta$, it holds that $\{W^0_\gamma, X_\delta\}\cup\omega_1\subseteq{\mathcal M}$. Then $\mathcal{M}\vDash \varphi(W^0_\gamma,X_\delta,m)$, where $m$ is such that $\delta=\gamma+m$ and $\varphi(W,X,m)$ is the formula: ``Using the sequences $\{\bar{S}^m\}_{m=1}^{m=M-2}$, the set $W$ almost disjointly codes $\bar W^0 \subseteq \omega_{M-1}$ and $X$ almost disjointly codes a subset $Z$ of $\omega_{M-1}$ whose even part codes the triple $(\bar{C},\bar{W},\bar{X})$ with $\bar{W}=\bar W^0$ and where $\bar{W}$, $\bar{X}$ are the $L$-least codes of ordinals $\gamma$, $\delta<\omega_M$ such that $\delta = \gamma+m$ and $\bar{C}$ is a club in $\omega_{M-1}$ disjoint from $S_{\delta}$''.
\bigskip Note that $\varphi$ is a statement about $(\omega_{M-1})^{\mathcal M}$ and $(\{\bar{S}^m\}_{m=1}^{m=M-2})^{\mathcal M}$, i.e., about the interpretation of their \emph{definition} in $\mathcal M$ (indeed, these objects are in general too large to be parameters in $\varphi$). The forcing ${\mathbb P}^{\text{cd}}_\delta$ over $L[\mathcal{W}][C_\delta]$ described above which codes $Z_\delta$ into $X_\delta$ preserves stationarity of $S_\eta$ for each $\eta \in \omega_M\setminus\{\delta\}$, has size $\omega_{M-1}$, preserves cardinals and cofinalities, and does not add countable sequences. Next, suppose $\delta = \gamma +m$ for $\gamma \in \limord(\omega_M)$. We will force over $L[\mathcal{W}][X_{\delta}]$ (which is the same as $L[\mathcal{W}][C_\delta][X_{\delta}]$) to achieve localization of the pair of sets $W^0_{\gamma}$, $X_\delta$ (see~\cite[Definition 1]{VFSFLZ11}). Let $\varphi$ be as above. \begin{definition} Let $W$, $X$ be subsets of $\omega_1$ such that $\varphi(W,X,m)$ holds in any suitable model $\mathcal{M}$ with $(\omega_1)^{\mathcal{M}}=(\omega_1)^L$ containing both $W$ and $X$ as elements. Denote by $\mathcal{L}(W,X,m)$ the poset of all functions $r:|r|\to 2$, where the domain $|r|$ of $r$ is a countable limit ordinal such that \begin{enumerate} \item if $\xi<|r|$ then $\xi\in W$ iff $r(3\cdot \xi)=1$, \item if $\xi<|r|$ then $\xi\in X$ iff $r(3\cdot \xi+1)=1$, \item if $\xi\leq|r|$, $\mathcal{M}$ is a countable suitable model containing $r{\upharpoonright} \xi$ as an element and $\xi=\omega_1^{\mathcal{M}}$, then $$\mathcal{M}\vDash\varphi(W\cap \xi,X\cap\xi,m).$$ \end{enumerate} The extension relation is end-extension.
\end{definition} For each $\gamma\in \limord(\omega_M)$ and $m\in\omega$ we use the poset $\mathcal{L}(W^0_\gamma,X_{\gamma+m},m)$ to add the characteristic function of a subset $Y_{\gamma+m}$ of $\omega_1$ such that: \bigskip \noindent $(***)_{\gamma,m}$: If $\beta<\omega_1$, $\mathcal{M}$ is suitable with $\omega_1^{\mathcal{M}}=\beta$, $W^0_\gamma\cap\beta\in{\mathcal M}$, and $Y_{\gamma+m}\cap\beta\in{\mathcal M}$, then $\mathcal{M}\vDash \varphi(W^0_\gamma\cap\beta,X_{\gamma+m}\cap\beta,m)$. \bigskip \noindent With this the preliminary stage of the construction is complete. We let ${\mathbb P}^*_0$ denote the forcing \[ {\mathbb P}^{\mathcal{W}} * \prod_{\delta\in \omega_M} {\mathbb P}^{\text{cl}}_{\delta} * {\mathbb P}^{\text{cd}}_{\delta} * \mathcal{L}\big(W^0_{\gamma(\delta)},X_{\delta},m(\delta)\big), \] where $\gamma(\delta)$ is the greatest limit ordinal below $\delta$ and $m(\delta)$ is the unique $m$ such that $\delta= \gamma(\delta) +m$, and where the product is with the appropriate support as in \cite{VFSFLZ11}. Denote by $V_1$ the resulting model. Note that $V_1\cap[\omega]^\omega=L\cap [\omega]^\omega$. \subsection{Adding reals}\label{subsection.adding.reals} Fix (for the rest of the proof) a constructible almost disjoint family $$\mathcal{F}^*:=\{a_{i,j,\xi}: i\in\omega, j\in 2, \xi\in\omega_1 \cdot 2\}$$ which is $\Sigma_1$ (without parameters) in $L_{\omega_2}$ and such that $a_{i,j,\xi} \in L_\mu$ whenever $L_\mu\vDash |\xi| = \omega$. Next force with the finite support iteration \[ {\mathbb P}(\mathcal{C}):=\langle{\mathbb P}^{\mathcal C}_\delta,\dot{{\mathbb Q}}^{\mathcal C}_\delta:\delta\in\limord(\omega_M)\rangle \] where for each $\delta$, $\dot {\mathbb Q}^{\mathcal C}_\delta$ adds the real $c_\delta^W$ which almost disjointly via the family $\mathcal F^*$ codes $W^0_\delta$. Let $V_2$ be the $(V_1,{\mathbb P}(\mathcal{C}))$-generic extension.
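For orientation, the mechanism of almost disjoint coding used throughout this section can be sketched as follows (a schematic only: the single index $\xi$ stands in for the actual indexing of $\mathcal F^*$, and the precise bookkeeping is as in~\cite{VFSFLZ11}):

```latex
% Schematic of almost disjoint coding: a real c \subseteq \omega codes a
% set A of ordinals relative to a constructible almost disjoint family
% \mathcal{F}^* = \{ a_\xi : \xi \in \omega_1 \cdot 2 \} via
\[
  \xi \in A
  \quad\Longleftrightarrow\quad
  \lvert c \cap a_\xi \rvert < \omega ,
\]
% so membership of \xi in A is decided inside any suitable model
% containing c together with (enough of) \mathcal{F}^*,
% which is what makes the coding ``local''.
```

(Whether a finite or an infinite intersection signals membership is a matter of convention; the displayed finite-intersection convention is the one most convenient here, and nothing below depends on the choice.)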
Using the standard forcing $\operatorname{Add}(\omega,\omega_N)$ (finite partial functions from $\omega_N \times \omega$ into $2$) adjoin $\omega_N$-many reals to $V_2$, increasing the size of the continuum to $\omega_N$, and denote the resulting model by $V_3$. \subsection{Adding the MCG}\label{subsection.group} \medskip We shall now define a finitely supported iteration ${\mathbb P}(\mathcal{G}):=\langle{\mathbb P}^{\mathcal G}_\alpha,\dot{{\mathbb Q}}^{\mathcal G}_\alpha:\alpha\in\omega_M\rangle$ which adds a self-coding MCG to the model $V_3$. Along the iteration, for each $\alpha\in\omega_M$ we will define a ${\mathbb P}^{\mathcal G}_\alpha$-name $\dot{I}_\alpha\subseteq [\beta_\alpha,\beta_{\alpha+1})$ for a set of ordinals, such that at stage $\alpha$ of the construction we adjoin reals encoding a stationary kill of $S_\delta$ (that is, a real locally coding $C_\delta$) for $\delta\in I_\alpha$. We then show that there is ``no accidental coding of a stationary kill'' in Lemma~\ref{no_accidental_real}. In order to define ${\mathbb P}(\mathcal{G})$, first fix primitive set recursive bijections \[ \psi:\omega\times\omega\to\omega \] and $\psi':\omega_1\times\omega\times\omega\to\omega_1$. The function $\psi'$ will be used to identify the family $\mathcal{F}_\alpha$ which we add at stage $\alpha$ with a subset of $\omega_1$. Suppose now by induction that we are in the $(V_3, {\mathbb P}^{\mathcal G}_\alpha)$-generic extension $V_3[G^{\mathcal G}_\alpha]$. We presently define ${\mathbb Q}_\alpha = {\mathbb P}_{\mathcal{F}_\alpha} * {\mathbb P}^{\text{cd}}_{\mathcal{F}_\alpha}* {\mathbb Q}^{\mathcal G}_\alpha$. For the definition of ${\mathbb P}_{\mathcal{F}_\alpha}$ assume by induction that at previous stages we have added families $\mathcal F_\beta$ for $\beta < \alpha$ consisting of cofinitary permutations. We now adjoin a family \[ \mathcal{F}_\alpha=\langle f^\alpha_{m,\xi} : m\in\omega,\xi\in\omega_1\rangle \] of permutations such that $\lvert f^\alpha_{m,\xi} \cap f^{\beta}_{m',\xi'}\rvert < \omega$ whenever $\beta < \alpha$, or $\beta=\alpha$ and $(m,\xi) \neq (m',\xi')$. For this we can use a finite support iteration of the $\sigma$-centered posets with finite conditions defined in~\cite{VFAT}. Denote this forcing by ${\mathbb P}_{\mathcal{F}_\alpha}$ and by $V_{\alpha,1}$ the resulting model.
Next let ${\mathbb P}^{\text{cd}}_{\mathcal{F}_\alpha}$ be the forcing to add a real $c^{\mathcal F}_\alpha$ which almost disjointly via the family $\mathcal F^*$ (see Section~\ref{subsection.adding.reals}) codes \[ \psi'\left[\bigcup_{\xi<\omega_1}\bigcup_{m\in\omega} \{\omega \cdot \xi + m\}\times f^\alpha_{m,\xi}\right], \] a subset of $\omega_1$ which via $\psi'$ codes $\mathcal F_\alpha$. Let $V_{\alpha,2}$ be the extension of $V_{\alpha,1}$ which contains $c^{\mathcal F}_\alpha$. \medskip Finally, working in $V_{\alpha,2}$ we define ${\mathbb Q}^{\mathcal G}_\alpha$, the forcing which adds a new group generator. \medskip Suppose by induction that ${\mathbb P}^{\mathcal G}_\alpha$ has added a cofinitary representation $\rho_\alpha$. Its image generates a cofinitary group $\mathcal{G}_\alpha$. Suppose by induction that $\dom(\rho_\alpha) = \{\beta_\xi\}_{\xi<\alpha}$ and write $\hbox{CD}_\alpha=\{\beta_\gamma\}_{\gamma<\alpha}$ for the set of generator indices used at stages before $\alpha$. Moreover suppose $\rho_\alpha(\beta_\xi)=g_\xi$ for each $\xi<\alpha$. Our next forcing will add the generic permutation $g_\alpha$, thus enlarging our group to $\mathcal G_{\alpha+1}$, the group generated by $\mathcal G_\alpha\cup\{g_\alpha\}$. If $\alpha$ is a limit, let \[ \beta_\alpha = \sup\{\beta_\xi : \xi<\alpha\} \] and otherwise, let \[ \beta_\alpha=\beta_{\alpha-1} + |\alpha \cdot \omega| \] (we mean ordinal addition of course). This is the ordinal generator index to which we associate the generic generator $g_\alpha$, so that \[ \rho_{\alpha+1}=\rho_{\alpha}\cup\{(\beta_\alpha,g_\alpha)\} \] is a cofinitary representation. Every element of the group freely generated by $\hbox{CD}_\alpha\cup\{a\}$ corresponds to a reduced word in the alphabet $\hbox{CD}_\alpha\cup\{a\}$, where $a=\beta_\alpha$. Let $\hbox{WD}_\alpha$ be the set of such words \emph{in which $a$ occurs}. Note that the set $\hbox{WD}_\alpha$ corresponds to the new permutations in the group $\mathcal{G}_{\alpha+1}$.
More precisely, every permutation in $\mathcal{G}_{\alpha+1}\setminus\mathcal G_\alpha$ is of the form $w[g_\alpha]$ (which is the same as $\rho_{\alpha+1}(w)$ by definition) for some $w\in \hbox{WD}_\alpha$. As before, write $\mathrm{WS}_\alpha$ for the set of words in $\hbox{WD}_\alpha$ which do not have a proper conjugated subword. Let $i_\alpha: \mathrm{WS}_\alpha\to \limord(\lvert \alpha \rvert)$ be a bijection sending $a$ to $0$; we shall use $i_\alpha$ to associate the ordinal $\beta_\alpha + i_\alpha(w)$ to each $w \in \mathrm{WS}_\alpha$. We note that those elements of $\mathcal G_{\alpha+1}\setminus \mathcal G_\alpha$ which correspond via $\rho_{\alpha+1}^{-1}$ to words in $\mathrm{WS}_\alpha$ will be associated to ordinals in $[\beta_\alpha,\beta_{\alpha+1})$, and in fact $g_\alpha$ is associated to $\beta_\alpha$ (elements of $\mathcal G_{\alpha+1}$ corresponding to words in $\mathrm{WD}_\alpha\setminus\mathrm{WS}_\alpha$ we can ignore for now). \medskip For each $w\in \mathrm{WS}_\alpha$ it is the pattern of stationarity on the block of $\bar S$ consisting of the next $\omega$ ordinals after $\beta_\alpha + i_{\alpha}(w)$ that will code $w$. For such $w\in \mathrm{WS}_\alpha$ let \[ z^w = \{ 2^m : m \in c^{\mathcal F}_{\alpha}\}\cup\{3^m : m \in c^W_{\beta_\alpha + i_{\alpha}(w)}\} \] and define \[ \bar z = \langle z^w : w\in \mathrm{WS}_\alpha\rangle. \] Further, define \[ Y^w_m = Y_{\beta_\alpha + i_\alpha(w) + m} \] for each $w \in \mathrm{WS}_\alpha$ and let \[ \mathcal Y = \langle Y^w_m : w \in \mathrm{WS}_\alpha, m\in\omega \rangle. \] With the notation from Section~\ref{section.group.forcing} we now define \[ {\mathbb Q}^{\mathcal G}_\alpha = {\mathbb Q}^{\mathcal{F}_\alpha,\mathcal{Y},\bar z}_{\rho_\alpha,\{\beta_\alpha\}}.
\] In Proposition~\ref{prop.group.forcing} we have seen that ${\mathbb Q}^{\mathcal G}_\alpha$ adjoins a new generator $g_{\alpha}$ such that the following properties hold: \begin{enumerate}[label=(\Alph*$_\alpha$)] \item The group $\langle \im(\rho_\alpha)\cup\{g_\alpha\}\rangle$ is cofinitary. \item If $f\in V^{{\mathbb P}_\alpha}\backslash\mathcal{G}_\alpha$ is a permutation which is not covered by finitely many members of $\mathcal{F}_\alpha$ and $\langle\mathcal{G}_\alpha\cup\{f\}\rangle$ is cofinitary, then for infinitely many $k$, $f(k)=g_\alpha(k)$. This property will eventually provide maximality of $\mathcal{G}_{\omega_M}$. \item For each $w\in \mathrm{WS}_\alpha$ there is $m_w\in \omega$ such that for all $k\in\omega$, $w^{2k}[g_\alpha](m_w)\equiv\chi_{z^w}(k) \pmod 2$. That is, every new permutation $w[g_{\alpha}]$ encodes $\mathcal{F}_\alpha$ via the real $c^{\mathcal F}_\alpha$ as well as $W^0_{\beta_\alpha+i_\alpha(w)}$ via the real $c^W_{\beta_\alpha+i_\alpha(w)}$. \item\label{i.D.alpha} For each $w\in \mathrm{WS}_\alpha$, for all $m\in\psi\big[w[g_\alpha]\big]$, for all $\xi\in\omega_1$ $$|w[g_\alpha]\cap f^{\alpha}_{m,\xi}|<\omega\hbox{ iff }\xi\in Y^w_{m}.$$ That is, $w[g_\alpha]$ encodes $Y^w_m$ for each $m\in\psi\big[w[g_\alpha]\big]$. \end{enumerate} As we are going to see in the next section, property \ref{i.D.alpha} implies that the new permutation $w[g_\alpha]$ encodes itself via a stationary kill on the segment $\langle S_\delta:\beta_\alpha+i_\alpha(w)\leq \delta<\beta_\alpha+i_\alpha(w)+\omega\rangle$. Furthermore, this stationary kill is accessible to countable suitable models containing $w[g_\alpha]$. Let $\dot I_\alpha$ be a ${\mathbb P}^{\mathcal G}_{\alpha+1}$-name for \[ I_\alpha=\big\{\beta_\alpha+i_\alpha(w)+m: w\in\mathrm{WS}_\alpha, m\in\psi\big[w[g_\alpha]\big]\big\}. \] Thus $I_\alpha$ denotes the set of indices of the stationary sets for which we explicitly adjoin reals encoding a stationary kill at stage $\alpha$ of the iteration.
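For concreteness, here is a toy instance of the coding property $(C_\alpha)$ (the displayed membership values are hypothetical; which values the generic actually produces depends on the iteration):

```latex
% Hypothetical instance: suppose 0, 2 \in c^{\mathcal F}_\alpha and
% 1 \in c^W_{\beta_\alpha + i_\alpha(w)}. Then by the definition of z^w,
\[
  \{\, 2^0,\, 2^2,\, 3^1 \,\} = \{ 1, 4, 3 \} \subseteq z^w ,
\]
% and property (C_\alpha) says that the orbit of m_w under w^2[g_\alpha]
% spells out the characteristic function of z^w modulo 2:
\[
  w^{2k}[g_\alpha](m_w) \equiv \chi_{z^w}(k) \pmod 2
  \qquad\text{for all } k \in \omega .
\]
```

Thus any model that can compute the orbit of $m_w$ under $w[g_\alpha]$ recovers $z^w$, and hence both $c^{\mathcal F}_\alpha$ and $c^W_{\beta_\alpha+i_\alpha(w)}$.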
Note that $\beta_\alpha=\sup I_\alpha$. With this the inductive construction is complete. \section{Definability and maximality of the group} Forcing with ${\mathbb P}(\mathcal G)$ over $V_3$ we obtain a generic $G$ over $L$ for the entire forcing \[ {\mathbb P}:={\mathbb P}^*_0 * {\mathbb P}(\mathcal{C}) * \operatorname{Add}(\omega,\omega_N) *{\mathbb P}(\mathcal G) \] recalling that ${\mathbb P}^*_0$ was the product which added the sets $W^0_\alpha$ and $Y_{\alpha+m}$, and ${\mathbb P}(\mathcal{C})$ added a real $c^W_\alpha$ ``locally coding'' the ordinal $\alpha$ for each $\alpha \in \limord(\omega_M)$; $\operatorname{Add}(\omega,\omega_N)$ made $2^\omega = \omega_N$; and finally ${\mathbb P}(\mathcal G)$ added a generic self-coding subgroup of $S_\infty$. Also recall that all the forcings after ${\mathbb P}^*_0$ are Knaster, and ${\mathbb P}^*_0$ did not add any countable sequences. Work in $L[G]$ from now on. We shall now show that in this model there is a MCG of size $\aleph_M$. First we must show that no real codes an ``accidental'' stationary kill. \begin{lemma}\label{no_accidental_real} For each $\delta$ which is not in $$I= \bigcup\{I_\gamma:\gamma\in\limord(\omega_M)\}$$ there is no real in $L[G]$ coding a stationary kill of $S_\delta$, i.e., there is no $r\in \powerset(\omega) \cap L[G]$ such that $L[r]\vDash S_\delta \in \mathsf{NS}$. \end{lemma} \begin{proof} The argument closely follows~\cite[Lemma 3]{VFSFLZ11}; for the reader's convenience we give a brief sketch. Let $\dot{I}$ be a name for $I$ and suppose that $p \Vdash \check \delta \notin \dot I$.
In the $(L,{\mathbb P}^{\mathcal{W}})$-generic extension, write \begin{align*} {\mathbb P}^{\neq \delta}_0&= \prod_{\xi\in \omega_M\setminus\{\delta\}} {\mathbb P}^{\text{cl}}_{\xi} * {\mathbb P}^{\text{cd}}_{\xi} * \mathcal{L}(W^0_{\sup \xi \cap \lim},X_{\xi})\\ \intertext{and} {\mathbb P}^\delta_0&={\mathbb P}^{\text{cl}}_{\delta} * {\mathbb P}^{\text{cd}}_{\delta} * \mathcal{L}(W^0_{\gamma},X_{\gamma},m), \end{align*} where $\gamma$ is the greatest limit ordinal below $\delta$ and $m$ is the unique natural number such that $\delta= \gamma +m$. Use that ${\mathbb P}^*_0 = {\mathbb P}^{\mathcal{W}} * ({\mathbb P}^{\neq \delta}_0 \times {\mathbb P}^\delta_0)$ to decompose the ${\mathbb P}^*_0$-generic $G_0$ as follows: \[ G_0 = G^{\mathcal{W}} * (G^{\neq \delta}_0 \times G^\delta_0). \] Working in $L[G_0]=L[G^{\mathcal{W}}][G^{\neq \delta}_0][G^\delta_0]$ let \[ {\mathbb P}' = \big(\operatorname{Add}(\omega,\omega_N) *{\mathbb P}(\mathcal G) \big)\upharpoonright p \] be the quotient ${\mathbb P} / {\mathbb P}^*_0$ below $p$; it is easy to verify that ${\mathbb P}' \in L[G^{\mathcal{W}}][G^{\neq \delta}_0]$ since the iteration never uses $Y_\delta$. Thus, letting $G'$ be shorthand for the ${\mathbb P}'$-generic filter, we may decompose $G = (G^{\mathcal{W}}*G^{\neq \delta}_0 * G' ) \times G^\delta_0$. Let $r$ be any real in $L[G] = L[G^{\mathcal{W}}][G^{\neq \delta}_0][G'][G^\delta_0]$ and write \[ V_* = L[G^{\mathcal{W}}][G^{\neq \delta}_0]. \] Then in fact $ r \in V_*[G']= L[G^{\mathcal{W}}][G^{\neq \delta}_0][G'] $ since ${\mathbb P}^\delta_0$ adds no countable sequences over $V_*$ and, since ${\mathbb P}'$ is Knaster, ${\mathbb P}^\delta_0$ also adds no countable sequences over $V_*[G']$. But since ${\mathbb P}^{\mathcal{W}} *{\mathbb P}^{\neq \delta}_0 * {\mathbb P}'$ preserves stationarity of $S_\delta$, the latter is still stationary in $ V_*[G']=L[G^{\mathcal{W}}][G^{\neq \delta}_0][G']$ and hence in $L[r]$.
\end{proof} Let $\mathcal{G}$ be the group generated by $\{g_\alpha:\alpha<\omega_M\}=\bigcup_{\alpha<\omega_M} \im(\rho_\alpha)$. Given $w \in \mathrm{WD}_\alpha$, we write $w^G$ for $\rho_\alpha(w)$, i.e., for the interpretation of $w$ that replaces every generator index $\beta_\gamma$ by the corresponding generic permutation $g_\gamma$. \begin{lemma} The group $\mathcal{G}$ is a maximal cofinitary group. \end{lemma} \begin{proof} By property $(A_\alpha)$ of the iterands $\dot{{\mathbb Q}}_\alpha$ the group $\mathcal{G}$ is cofinitary. It remains to show maximality. Suppose towards a contradiction that $\mathcal{G}$ is not maximal. Then there is a cofinitary permutation $h\notin \mathcal{G}$ such that the group generated by $\mathcal{G}\cup \{h\}$ is cofinitary. Find $\beta$ such that $h\in V_3[G_\beta]$. Then there is $\beta'\in\{\beta, \beta+1\}$ such that $h$ is not a subset of the union of finitely many members of $\mathcal{F}_{\beta'}$: otherwise, by the pigeonhole principle, we would find $f \in \mathcal F_\beta$ and $f' \in \mathcal F_{\beta+1}$ such that $\lvert f \cap f' \rvert = \omega$, contradicting the choice of $\mathcal F_\beta$ and $\mathcal F_{\beta+1}$. Letting $\alpha = \beta'+1$, by property $(B_\alpha)$ of the poset ${\mathbb Q}_\alpha$ in $V_3[G_\alpha]$, the generic permutation $g_\alpha$ infinitely often takes the same value as $h$, and so $g_\alpha\circ h^{-1}$ has infinitely many fixed points and hence is not cofinitary, which is a contradiction. \end{proof} It remains to show that $\mathcal G$ is $\Pi^1_2$. \begin{lemma}\label{lemma1} Let $g \in S_\infty \cap L[G]$.
Then $g = w^G$ for some $w \in \bigcup_{\alpha<\omega_M}\mathrm{WS}_\alpha$ if and only if there is $\gamma\in\limord(\omega_M)$ and $k \in \omega$ such that \begin{multline}\label{e.lemma1} \psi[g] = \{m \in \omega : L[g]\vDash S_{\gamma+m} \in \mathsf{NS}\} =\\ \{m \in \omega : (\exists r \in\powerset(\omega)) \; L[r]\vDash S_{\gamma+m} \in \mathsf{NS}\}. \end{multline} \end{lemma} \begin{proof} Suppose $g = w^G$ for $w \in \mathrm{WS}_\alpha$, where $w$ has no proper conjugated subword. We prove the lemma for $\gamma=\beta_\alpha+i_\alpha(w)$. By property $(C_\alpha)$ of the poset ${\mathbb Q}_\alpha$ the real $g$ codes $z^w$ and therefore \[ \mathcal F_{\beta_\alpha+i_\alpha(w)} \in L[g]. \] By property $(D_\alpha)$ of the poset ${\mathbb Q}_\alpha$ the real $g$ almost disjointly codes, via the family $\mathcal{F}_\alpha$, the set $Y_{\beta_\alpha+i_\alpha(w)+m}$ for each $m\in \psi[g]$. However, $Y_{\beta_\alpha+i_\alpha(w)+m}$ codes $X_{\beta_\alpha+i_\alpha(w)+m}$, which implies that for every $m\in\psi[g]$, the real $g$ codes the closed unbounded subset $C_{\beta_\alpha+i_\alpha(w)+m}$, which is disjoint from $S_{\beta_\alpha+i_\alpha(w)+m}$. If $m\notin\psi[g]$, then $\beta_\alpha+i_\alpha(w)+m\notin I_\alpha$ and so by Lemma~\ref{no_accidental_real}, there is no real $r$ in $L[G]$ coding the stationary kill of $S_{\beta_\alpha+i_\alpha(w)+m}$ (i.e., such that in $L[r]$, $S_{\beta_\alpha+i_\alpha(w)+m}$ is no longer stationary). Now, suppose there is $\gamma\in\limord(\omega_M)$ and $k\in\omega$ such that the following holds for all $m\in\omega$: $L[g]\vDash S_{\gamma+m} \in \mathsf{NS}$ if and only if $m\in\psi[g]$. Then by Lemma~\ref{no_accidental_real}, $\psi[g] = \{n \in \omega : \gamma + n \in I_\alpha\}=\psi[w^G]$ where $w$ is such that $\beta_\alpha + i_{\alpha}(w)=\gamma$ for some $\alpha<\omega_M$. So $g=w[g_\alpha]=w^G$. \end{proof} \begin{lemma} Let $g=w^G$ for some $w \in \mathrm{WS}_\alpha$ with $\alpha<\omega_M$.
Then for every countable suitable model ${\mathcal{M}}$ such that $g\in\mathcal{M}$ there is a limit ordinal $\gamma<(\omega_M)^{\mathcal{M}}$ such that $$(L[w^G])^{\mathcal{M}}\vDash \psi[g]= \{m \in\omega : L[w^G]\vDash S_{\gamma+m} \in \mathsf{NS} \}.$$ \end{lemma} \begin{proof} Let ${\mathcal{M}}$ be a countable suitable model with $g\in\mathcal{M}$. Let $\gamma = i_{\alpha}(w)$. Since $w^G$ encodes $z^w$ (by property $(C_{\alpha})$ of ${\mathbb Q}_\alpha$) we have that \[ \{f^\alpha_{m,\xi} : m\in\omega, \xi < (\omega_1)^{\mathcal{M}}\} \in\mathcal{M} \] and $W^0_{\gamma} \cap (\omega_1)^{\mathcal{M}}\in \mathcal{M}$. By property $(D_\alpha)$, $w^G$ almost disjointly codes $Y_{\gamma+m}\cap(\omega_1)^{\mathcal{M}}$ for each $m\in\psi[g]$ and hence $Y_{\gamma+m}\cap (\omega_1)^{\mathcal{M}}\in\mathcal{M}$ and also $X_{\gamma+m}\cap (\omega_1)^{\mathcal{M}}\in\mathcal{M}$. These sets also belong to $L[g]^{\mathcal{M}}$. Then for each $m\in\psi[g]$, by $(***)_{\gamma,m}$ we have that $L[g]^{\mathcal{M}}\vDash \varphi(W^0_{\gamma}\cap\beta,X_{\gamma+m}\cap\beta)$ where $\beta = (\omega_1)^{\mathcal{M}}$. This means the following: using the sequences $\{\bar{S}^k\}_{k=1}^{k=M-2}$, the set $W^0_{\gamma}\cap\beta$ almost disjointly codes $\bar W^0 \subseteq \omega_{M-1}$ and $X_{\gamma+m}\cap\beta$ almost disjointly codes a subset $Z$ of $\omega_{M-1}$ whose even part codes the triple $(\bar{C},\bar W,\bar{X})$ with $\bar W=\bar W^0$ and where $\bar W$, $\bar{X}$ are the $L$-least codes of ordinals $\bar\gamma$, $\bar\delta<\omega_M$ such that $\bar\delta = \bar\gamma+m$ and $\bar{C}$ is a club in $\omega_{M-1}$ disjoint from $S_{\bar\delta}$. In particular, in the above $\bar\gamma=\gamma$, $\bar\delta = \gamma + m$ and $\bar{C}$ is a club disjoint from $S_{\gamma+m}$. As $m\in \psi[g]$ was arbitrary, $\gamma$ indeed witnesses that the lemma holds.
\end{proof} \begin{lemma} Let $g$ be a real such that for every countable suitable model $\mathcal{M}$ containing $g$ as an element there is $\gamma<(\omega_{M})^{\mathcal{M}}$ such that $$(L[g])^{\mathcal{M}}\vDash \psi[g]= \{m \in\omega : L[g]\vDash S_{\gamma+m} \in \mathsf{NS}\}.$$ Then for some $\alpha < \omega_M$, $g = w^G$ where $w \in \mathrm{WS}_\alpha$. \end{lemma} \begin{proof} By L\"owenheim--Skolem take a countable elementary submodel $\mathcal{M}_0$ of $L_{\omega_{M+1}}$ such that $g \in \mathcal{M}_0$ and let $\mathcal{M}$ be the transitive collapse of $\mathcal{M}_0$. Then by assumption $$(L[g])^{\mathcal{M}}\vDash \big(\exists \gamma\in\limord(\omega_{M})\big)\;\psi[g]= \{m \in\omega :S_{\gamma+m}\;\hbox{is non-stationary}\}$$ so by elementarity the same holds with $(L[g])^{\mathcal{M}}$ replaced by $L_{\omega_{M+1}}[g]$, and hence for some $\gamma \in\limord(\omega_M)$ $$L[g]\vDash \psi[g]= \{m \in\omega : L[g]\vDash S_{\gamma+m} \in \mathsf{NS} \}.$$ But at some stage $\alpha < \omega_M$ we adjoined a generic permutation $w^G$ such that $\beta_\alpha+i_{\alpha}(w)=\gamma$ and by \eqref{e.lemma1} we have $$\psi[w^G]= \{m \in\omega : (\exists r\in \powerset(\omega))\; L[r]\vDash S_{\gamma+m} \in \mathsf{NS} \}.$$ Since there is no accidental coding of a stationary kill (Lemma~\ref{no_accidental_real}), $\psi[g] \subseteq \psi[w^G]$, and so $g=w^G$. \end{proof} \begin{lemma} The MCG $\mathcal G$ is $\Pi^1_2$ in $L[G]$. \end{lemma} \begin{proof} Recall that we denote by $g_0$ the first generator added by ${\mathbb P}^{\mathcal G}_1={\mathbb Q}^{\mathcal G}_0$ over $V_3$. Note first that $g \in \mathcal G$ if and only if there is $k\in \omega$, $\alpha < \omega_M$, and $w \in \mathrm{WS}_\alpha$ (i.e., $w$ has no proper conjugated subwords) such that $(g_0)^k g = w^G$.
By the previous lemmas, $g \in \mathcal G$ if and only if $g \in S_\infty$ and the following statement $\Phi(g)$ holds: For every suitable countable model $\mathcal{M}$ \emph{if} for some $g_* \in \mathcal M \cap S_\infty$ $$L[g_*]^{\mathcal{M}}\vDash \psi[g_*]=\{m\in\omega: S_{m}\text{ is non-stationary}\}$$ \emph{then} for some $k\in\omega$ $$L[(g_*)^k g]^{\mathcal{M}}\vDash \big(\exists \gamma \in \limord(\omega_M)\big)\;\psi\big[(g_*)^k g\big]=\big\{m\in\omega : S_{\gamma+m}\text{ is non-stationary}\big\}.$$ It is standard to see that $\Phi(g)$ can be expressed by a $\Pi^1_2$ formula. \end{proof} Thus we obtain our main result: \begin{theorem} Let $2\leq M < N < \aleph_0$ be given. There is a cardinal-preserving generic extension of the constructible universe $L$ in which $$\mathfrak{a}_g=\mathfrak{b}=\mathfrak{d}=\aleph_M<\mathfrak{c}=\aleph_N$$ and in which there is a $\Pi^1_2$ definable maximal cofinitary group of size $\mathfrak a_g$. \end{theorem} \begin{proof} The construction outlined in steps $(1)$--$(4)$ and developed in detail in Sections 4 and 5 provides a generic extension in which there is a $\Pi^1_2$-definable maximal cofinitary group of cardinality $\aleph_M$, while $\mathfrak{c}=\aleph_N$. To guarantee that in the same model there are no maximal cofinitary groups of cardinality strictly smaller than $\aleph_M$, we slightly modify the definition of ${\mathbb Q}_\alpha$ from step $(4)$ to ${\mathbb P}_{\mathcal{F}_\alpha} * {\mathbb P}^{\text{cd}}_{\mathcal{F}_\alpha}* {\mathbb P}^{\mathcal G}_\alpha*\dot{\mathbb{D}}$, where $\dot{\mathbb{D}}$ is a name for Hechler's forcing for adding a dominating real. Thus in the final model there is a scale of length $\omega_M$ and so $\mathfrak{b}=\mathfrak{d}=\aleph_M$. Since $\mathfrak{b}\leq\mathfrak{a}_g$ we obtain $\mathfrak{a}_g=\aleph_M$. \end{proof} \section{Questions} In this section, we state some of the remaining open questions.
\begin{enumerate} \item Can one construct in $\mathsf{ZFC}$ a countable cofinitary group which cannot be enlarged to a Borel MCG? Note that in $L$, every countable group can be enlarged to a $\mathbf{\Pi}^1_1$ MCG. \item Can one, by forcing, add a countable cofinitary group which cannot be enlarged to a $\mathbf{\Pi}^1_1$ MCG? \item Is there a model where $2^\omega > \omega_1$ and every cofinitary group $\mathcal G_0$ of size $<2^\omega$ is a subgroup of a definable MCG \emph{of the same size} as $\mathcal G_0$? \item Suppose that $\alpha< 2^\omega$ is a cardinal and there is a $\Sigma^1_2$ MCG of size $\alpha$. Is there a $\Pi^1_1$ MCG of size $\alpha$? \item Is there a model where there is a projective MCG of size $\alpha$ with $\omega_1<\alpha<2^\omega$ but there is no maximal eventually different (MED) family of size $\alpha$? \end{enumerate}
\section{Introduction} Let $(M, {\omega})$ be a closed symplectic manifold and ${\Sigma}_g$ the real oriented compact surface with boundary obtained by removing an open disc from the real compact surface without boundary of genus $g$. So, in other terms, ${\Sigma}_g$ is obtained by attaching $g$ handles to the closed unit disk. Now let $(M, {\omega}) \hookrightarrow (P, {\Omega}) \stackrel{\pi}{\to} {\Sigma}_g$ be a Hamiltonian fibration with fiber $(M, {\omega})$ and base ${\Sigma}_g$. This means that $P$ is a smooth fibration with fiber $M$ over ${\Sigma}_g$ and structure group the group of Hamiltonian diffeomorphisms ${\rm Ham}(M,\omega)$ of $M$, and the connection defined by $\Omega$ is compatible with this structure group (see \cite{LM}). This is equivalent to asking that the symplectic form ${\Omega}$ on $P$ be ruled (which means that ${\Omega}$ restricts on each fiber to a symplectic form isotopic to ${\omega}$) with monodromy around any loop of the base belonging to ${\rm Ham}(M, {\omega})$. Recall that ${\rm Ham}(M, {\omega})$ is simple, so the subgroup generated by all products of commutators in ${\rm Ham}(M, {\omega})$ must be the whole group, which means that every Hamiltonian diffeomorphism of $M$ can be written as a product of commutators. The {\it commutator length} of $f \in {\rm Ham}(M,{\omega})$ is by definition the smallest $k$ such that $f$ can be written as a product $[\phi_1, \psi_1] \ldots [\phi_k, \psi_k] $ of $k$ commutators. Now let $(P, {\Omega}) \stackrel{\pi}{\to} {\Sigma}_g$ be a ruled symplectic manifold with Hamiltonian monodromies. Because ${\Omega}$ is non-degenerate on the fibers, the kernel $K$ of the restriction ${\Omega} |_{\partial {\Sigma}_g}$ of ${\Omega}$ to the boundary $\partial {\Sigma}_g \simeq S^1$ is transversal to the fibers of the projection $ \pi^{-1}(\partial {\Sigma}_g) \to \partial {\Sigma}_g$. 
If $b_0$ is a base point belonging to $\partial {\Sigma}_g$, the monodromy of $K$ round $\partial {\Sigma}_g$ defines a symplectic diffeomorphism of $(M_{b_0}, {\omega}_{b_0})$ which is Hamiltonian by hypothesis. Conversely, any Hamiltonian diffeomorphism $f$ of $(M, {\omega}) \simeq (M_{b_0}, {\omega}_{b_0})$ can be obtained as monodromy of a ruled symplectic fibration $(M, {\omega}) \hookrightarrow (P, {\Omega}) \stackrel{\pi}{\to} {\Sigma}_g$. This can easily be seen by first representing $f$ as monodromy of a ruled $(M, {\omega})$-fibration $(P, {\Omega})$ over ${\Sigma}_0$, then by trivialising $(P, {\Omega})$ over a small neighbourhood $U$ of an interior point $b$ of ${\Sigma}_0$, and finally by replacing $P |_U$ by $(M, {\omega}) \times {\Sigma}_g$. We define the $g_+$-{\em area} $\| f \|_g^+$ as the infimum of the quotient $$ \frac{vol (P, {\Omega})}{vol (M, {\omega})} $$ when $(P, {\Omega})$ runs over all Hamiltonian fibrations $$ (M, {\omega}) \hookrightarrow (P, {\Omega}) \stackrel{\pi}{\to} {\Sigma}_g $$ whose monodromies round $\partial {\Sigma}_g$ equal $f$. Similarly, we define $\| f \|_g^-$ as $\| f^{-1} \|_g^+$, which means filling $P |_{\partial {\Sigma}_g}$ from the other side with a copy of $\Sigma_g$. Finally, we set: $$ \| f \|_g = \| f \|_g^+ + \| f \|_g^-. $$ It is easy to see that $ \| f \|_0 $ is less than or equal to the positive Hofer norm of $f$ (see \cite{H, LM1} for details on the Hofer norm). The first result of this article is the following: \begin{theorem} \label{thm:theorem1} If $f$ is a product of $k$ commutators $ f = [\phi_1, \psi_1] \ldots [\phi_k, \psi_k]$, then $$ \| f \|_g^+ = 0 $$ for all $g \geq k$. \end{theorem} Actually, we will prove more. Let us denote by $\mathcal{C}_k$ the subset of ${\rm Ham}(M, {\omega})$ of all products of $k$ commutators of Hamiltonian diffeomorphisms. By convention, $\mathcal{C}_0$ contains only the identity.
Then obviously $\mathcal{C}_0 \subset \mathcal{C}_1 \subset \ldots$ , $\cup_{k \geq 0} \mathcal{C}_k = {\rm Ham}(M,\omega)$ and the following theorem holds: \begin{theorem} \label{thm:theorem2} For all $f \in {\rm Ham}(M,\omega)$, $$ \|f \|_k^+ \leq d_0^+ (f, \mathcal{C}_k) $$ where $d_0^+ (f, \mathcal{C}_k)$ is the infimum over $h \in \mathcal{C}_k$ of $d_0^+(f,h)$ and where $d_0^+(f,h)$ is $\|fh^{-1} \|_0^+$. \end{theorem} The first theorem is clearly a corollary of the second one. It will therefore be enough to establish the latter. Note that the positive $0$-area appearing in the right-hand side of the above theorem is smaller than or equal to the positive Hofer norm $\| \cdot \|_H^+$. Thus we get the following corollary that relates our $g$-area to the positive Hofer norm: \begin{corollary} For all $f \in {\rm Ham}(M,\omega)$, $$ \|f \|_k^+ \leq d_H^+ (f, \mathcal{C}_k) $$ where $d_H^+ (f, \mathcal{C}_k)$ is the infimum over $h \in \mathcal{C}_k$ of $d_H^+(f,h)$ and where $d_H^+(f,h)$ is $\|fh^{-1} \|_H^+$. \end{corollary} It is clear that the positive $g$-area is far from being a norm on the group of Hamiltonian diffeomorphisms. However, the natural object to consider is the following polynomial, which we call the {\em positive total Hofer norm}, defined by: $$ \|f \|_T^+ = \sum_{k \geq 0} \|f \|_k^+ t^k. $$ We define in a similar way the total negative norm. Let us denote by $\star$ the operation on polynomials obtained from the ordinary product by replacing in each coefficient the sum by the minimum and the product by the sum. Explicitly, set $$ (\sum a_k t ^k) \star (\sum b_k t ^k) = \sum c_k t ^k $$ where $$ c_k = \min_{k_1 + k_2 =k} (a_{k_1} + b_{k_2}). $$ \begin{prop} For any $f, h \in {\rm Ham}(M,{\omega})$, $$ \| f \circ h \|_T^+ \leq \|f \|_T^+ \star \|h \|_T^+. $$ \end{prop} Let $f \in {\rm Ham}(M,\omega)$, and consider by abuse of notation $\| f \|_T^+ : {\bb{N}} \to {\bb{R}}$ the map assigning to each integer $n$ the coefficient $\|f\|_n^+$.
Then $\| f \|_T^+$ is evidently a non-negative and non-increasing map. The following question is the principal conjecture concerning the behaviour of $g$-areas. \begin{conjecture} For all $f \in {\rm Ham}(M,\omega)$, the map $\| f \|_T^+ : {\bb{N}} \to {\bb{R}}$ is convex, that is to say: for all $g > 0$, $$ \| f \|_{g+1}^+ - \| f \|_{g}^+ \le \| f \|_g^+ - \| f \|_{g-1}^+. $$ \end{conjecture} Before giving the proofs of our theorems below, let us examine the infinitesimal version of these results. We have seen that, since the group of Hamiltonian diffeomorphisms of a closed symplectic manifold $(M,\omega)$ is simple, any Hamiltonian diffeomorphism of $M$ can be written as a product of a finite number of commutators. Moreover, we show in the above theorem that the $g_+$-area of a Hamiltonian diffeomorphism $f$ can be regarded as the obstruction to writing $f$ as a product of $g$ commutators. It is natural to ask whether there exist {\it infinitesimal versions} of these results and constructions. First of all note that, by a well-known theorem due to Lichnerowicz (and also to Calabi and Rosenfeld) (see Theorem 1.4.3 in \cite{Ba}), the Lie algebra of Hamiltonian vector fields is perfect, so any Hamiltonian vector field can be written as a linear combination of Lie brackets of Hamiltonian vector fields. Therefore it is natural to ask \\ \\ {\bf Question:} {\it Can one define for every $k\in{\mathbb N}$ an invariant for Hamiltonian vector fields which can be interpreted as the obstruction to writing the given vector field as a linear combination of $k$ Lie brackets of Hamiltonian vector fields?} \\ It is convenient to replace Hamiltonian vector fields by smooth functions (modulo constants) using the correspondence $f\mapsto X_f$, and the Lie bracket by the Poisson bracket. The two questions above have obvious reformulations in terms of Hamiltonians.
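As an aside, the operation $\star$ introduced above is a min-plus (``tropical'') convolution of the coefficient sequences. A minimal illustrative sketch (the numeric coefficients below are hypothetical and not computed from any Hamiltonian):

```python
def star(a, b):
    """Min-plus (tropical) convolution: c_k = min over k1+k2=k of a_{k1} + b_{k2}."""
    return [min(a[i] + b[k - i]
                for i in range(len(a)) if 0 <= k - i < len(b))
            for k in range(len(a) + len(b) - 1)]

# Hypothetical non-increasing coefficient sequences, mimicking (‖f‖_k^+)_k:
assert star([3, 1, 0], [2, 0]) == [5, 3, 1, 0]
```

The min-plus structure mirrors the geometry behind the proposition: a fibration over $\Sigma_{k_1+k_2}$ with monodromy $f\circ h$ can be assembled from fibrations over $\Sigma_{k_1}$ and $\Sigma_{k_2}$, so areas add while one minimises over the splittings $k=k_1+k_2$.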
We believe that the first step in studying these questions is to describe explicitly the cone of {\it infinitesimal commutators} $\{\{f,g\}|\ f,\ g\in {\mathcal C}^\infty(M,{\mathbb R})\}$ in ${\mathcal C}^\infty(M,{\mathbb R})$, but we realized that this question is already quite difficult. In Sect. 3 we will make progress in this direction which, although modest, shows that the problem is interesting and difficult: we will fix a Morse function $f$ (satisfying a weak genericity condition) on a closed symplectic surface $(M,\omega)$ and we will describe explicitly the space of functions $u\in {\mathcal C}^\infty(M,{\mathbb R})$ which can be written in the form $u=\{f,g\}$ (or, equivalently, $u=X_f(g)$, where $X_f$ is the Hamiltonian vector field of $f$) with $g\in {\mathcal C}^\infty(M,{\mathbb R})$. The referee kindly informed us that our problem is often called in the literature ``the cohomological equation for flows'' and there exists an ample literature dedicated to this problem for an interesting class of Hamiltonian vector fields on higher dimensional manifolds, for instance for Hamiltonian fields defining Anosov flows. The so-called Livsic theory (see \cite{Li}) deals with this problem. A special example of such a flow is the geodesic flow on a hyperbolic surface, which has been investigated by many authors (see \cite{CEG}, \cite{FF}, \cite{LL}, \cite{LLMM}). Further important results dedicated to Livsic's cohomological equation concern the ``locally Hamiltonian flows'', or area-preserving flows (on surfaces of higher genus) with canonical saddle-like singularities and their return maps (interval exchange transformations) \cite{Fo}, \cite{MMY}. \medskip \noindent {\bf Acknowledgements.} We are very grateful to Michael Entov for pointing out the relation between his notion of size and the invariants introduced in this paper. We are also grateful to the referee for his pertinent comments and interesting bibliographic references.
\section{Proof of Theorem~\ref{thm:theorem2}} We begin by constructing, for $f = [\phi, \psi] \in {\rm Ham}(M,\omega)$ and $\epsilon >0$ given, a ruled symplectic fibration $(M, {\omega}) \hookrightarrow (P_f, {\Omega}) \to {\Sigma}_1$ with fiber $(M, {\omega})$ and base the punctured torus, that satisfies: 1) the monodromy of $P_f$ round the boundary of $\Sigma_1$ is equal to $f$, and 2) $A(P_f, {\Omega}) \leq \epsilon$, where $A(P_f, {\Omega})$ is the area of $P_f$ defined as the quotient of the volume of $ (P_f, {\Omega})$ by the volume of $(M, {\omega})$. To achieve this, let us take two copies $A_1, A_2$ of the annulus ${\bb{R}}/3{\bb{Z}} \times [0,1]$. Let us glue $A_1$ to $A_2$ by the rotation $$ R: [0,1] \times [0,1] ( \subset A_1) \to [0,1] \times [0,1] ( \subset A_2) $$ of angle equal to $\pi/2$. The space $A = A_1 \cup_R A_2$ is a thickening of the $1$-skeleton, homeomorphic to the punctured torus ${\Sigma}_1$. Consider now the direct product $(A, \sigma) \times (M, {\omega})$ where $\sigma$ is the obvious induced area form from the two copies of ${\bb{R}}/3{\bb{Z}} \times [0,1]$. Cut $A \times M$ over the segment $B_1 = \{2\} \times [0,1] \subset A_1$ and glue $B_1^- = \{2^-\} \times [0,1] \times M$ to $B_1^+ = \{2^+\} \times [0,1] \times M$ using $\phi$. Similarly, cut $A \times M$ over the segment $B_2 = \{2\} \times [0,1] \subset A_2$ and glue $B_2^- = \{2^-\} \times [0,1] \times M$ to $B_2^+ = \{2^+\} \times [0,1] \times M$ using $\psi$. \begin{figure} \centering \scalebox{0.6} {\includegraphics{base-new.pdf}} \label{base-new} \end{figure} It is clear that the monodromy of the resulting fibration $(M, {\omega}) \hookrightarrow (P_f, {\Omega}) \to A \simeq {\Sigma}_1$ over the boundary $\partial A$ is $\psi^{-1} \circ \phi^{-1} \circ \psi \circ \phi = [\phi, \psi]$. On the other hand, the area $A(P_f, {\Omega})$ can be chosen as small as one wishes. Now, let a product of $k$ commutators $f = [\phi_1, \psi_1] \ldots [\phi_k, \psi_k]$ be given.
Let $p_0$ be the base point on the boundary of the base and for each $1 \leq i \leq k$ let $P_{f_i = [\phi_i, \psi_i]} \to {\Sigma}_{1,i}$ be the symplectic fibration constructed above with arbitrarily small area and monodromy equal to $f_i = [\phi_i, \psi_i]$. We glue together the $k$ fibrations by first gluing the bases near the base point and then by identifying the corresponding fibers over a small neighbourhood of $p_0$ where the fibrations are trivial. This construction shows that each handle can be used to produce, and therefore to kill, a commutator. To prove Theorem~\ref{thm:theorem2}, we must find for each $f \in {\rm Ham}(M,\omega)$, $h \in \mathcal{C}_k$ and $\epsilon > 0$, a fibration $(M, {\omega}) \hookrightarrow (P, {\Omega}) \to {\Sigma}_k$ whose monodromy is $f$ and whose area is bounded above by $\|f \circ h^{-1}\|_0^+ + \epsilon$. Let $(M, {\omega}) \hookrightarrow (P', {\Omega}) \to {\Sigma}_0$ be a fibration whose area is bounded above by $\|f \circ h^{-1}\|_0^+ + \epsilon/2$ and whose monodromy is equal to $f \circ h^{-1}$. This exists by the definition of the $0$-distance between $f$ and $h$. Since $h$ belongs to $\mathcal{C}_k$, there is also a fibration $(M, {\omega}) \hookrightarrow (P'', {\Omega}) \to {\Sigma}_k$ of area bounded above by $\epsilon/2$ and monodromy $h$. The connected sum of the two fibrations $P'$ and $P''$ near the base point $p_0$ of the boundary gives a fibration $P$ over ${\Sigma}_k$ of area bounded above by $\|f \circ h^{-1}\|_0^+ + \epsilon$ with monodromy $f \circ h^{-1} \circ h = f$. This completes the proofs of Theorems~\ref{thm:theorem1} and \ref{thm:theorem2}. \subsection{The relation with Entov's article} \label{ss:entov} Let us fix an area form $\sigma$ of total area 1 on the surface $\Sigma_g$. Since a surface with boundary is homotopy equivalent to its 1-skeleton and the structure group ${\rm Ham} (M,\omega)$ is connected, all our Hamiltonian fibrations are topologically trivial.
This means that the (relative, modulo the boundaries) cohomology classes of the forms are related as follows: $$ [\Omega] = [\pi_1^* \omega] + \tau [\pi_2^* \sigma] $$ for some positive real $\tau$, where $\pi_1$ and $\pi_2$ are the projections of $P$ to $M$ and $\Sigma_g$ respectively under a trivialization of the fibration. Thus $$ vol(P,\Omega) = \int_P \frac{[\Omega]^{n+1}}{(n+1)!}= \tau \int_P \frac{[\omega]^n [\sigma]}{n!}, $$ $$ vol(M,\omega) = \int_M \frac{[\omega]^n}{n!} $$ and, since $\sigma$ has total area 1, the ratio $vol(P,\Omega)/vol(M,\omega)$ is $\tau$. In other words, our $g_+$-area is the infimum of all positive $\tau$ for which one can find a ruled symplectic form $\Omega$ on $P$ representing the class $[\omega]+\tau [\sigma]$ and having the prescribed holonomy over the boundary. Now the inverse of the $g_+$-area is more or less the notion of $\mathrm{size}_g$ appearing in Section 5 of Entov's paper \cite{E} (its analog for Hamiltonian fibrations over $S^2$ had appeared before in Polterovich's papers cited there). One of the main differences with our setup is that we work with the group ${\rm Ham}$ while in Entov's paper, $\mathrm{size}_g$ is defined for elements of the universal cover of ${\rm Ham}$. Then Proposition 5.0.9 in Entov's paper (with $\ell=1$), translated to our setup and proved by the same methods as in Polterovich's previous papers for the case of fibrations over $S^2$, would say that: \\ \\ $g_+\hbox{-area}(f) = 1/\mathrm{size}_g (f) \leq $ {\it the Hofer distance from $f $ to the set of elements of ${\rm Ham}(M,\omega)$ which are products of at most $g$ commutators.}\\ This is weaker than our Theorem 1.2: indeed, in our theorem we distinguish between the positive and negative Hofer norms, yielding a finer result. In fact, our proof is completely different from that of Entov and Polterovich. \section{The space of infinitesimal commutators on a closed symplectic surface} Let $(M,\omega)$ be a closed symplectic manifold.
Our goal is to determine explicitly the space of infinitesimal commutators $${\mathcal C}_\omega:=\big\{\{f,g\}\ \vline \ f,\ g\in {\mathcal C}^\infty(M,{\mathbb R})\big\}\ . $$ Recall that $\{f,g\}:=\iota_{X_f}dg$, where $X_f$ is the Hamiltonian vector field associated with $f$. The first step is to compute, for a fixed smooth function $f\in {\mathcal C}^\infty(M,{\mathbb R})$, the space $${\mathcal C}_{\omega,f}:=\big\{\{f,g\}\ \vline \ g\in {\mathcal C}^\infty(M,{\mathbb R})\big\}\ . $$ A pair $s=(\sigma,\xi)$ consisting of a connected 1-dimensional manifold and a nowhere vanishing vector field $\xi\in{\mathcal X}(\sigma)$ of $\sigma$ will be called a {\it framed 1-manifold}. A framed 1-manifold $s=(\sigma,\xi)$ will be called a {\it framed circle} if $\sigma$ is a circle; in this case $\xi$ is induced by a periodic map $\lambda:{\mathbb R}\to \sigma$, which is well-defined up to reparametrisation given by a translation, so the period $\tau(s)\in(0,\infty)$ of $\lambda$ is well-defined (it depends only on the pair $s$). Such a parametrisation $\lambda$ will be called compatible. For a compact framed 1-manifold with non-empty boundary, a compatible parametrisation will be a diffeomorphism $\lambda:[t, t+\tau(s)]\to\sigma$ inducing $\xi$. Let $s=(\sigma,\xi)$ be a compact (closed or compact with boundary) framed 1-manifold, and denote by $\mu_\xi$ the 1-form on $\sigma$ defined by $\langle \mu_\xi,\xi\rangle\equiv 1$. For a continuous function $f$ on $\sigma$ we put $$\int _s f:=\int_\sigma f \mu_\xi=\int_t^{t+\tau(s)} (f\circ\lambda)\lambda^*(\mu_\xi)=\int_t^{t+\tau(s)} f(\lambda(u))\, d u\ , $$ where $\lambda$ is a compatible parametrisation of $s$. Note that any integral circle (non-constant periodic orbit) of a vector field $X$ on a manifold can be regarded naturally as a (closed) framed circle. \begin{re} Let $(M,\omega)$ be a symplectic manifold, $f$, $g\in{\mathcal C}^\infty(M,{\mathbb R})$ smooth functions on $M$.
Then \begin{enumerate} \item $\{f,g\}(p)=0$ for any critical point $p$ of $f$. \item $\int_s \{f,g\}=0$ for every integral circle $s$ of the Hamiltonian vector field $X_f$. \end{enumerate} \end{re} Therefore \begin{equation}\label{incl}{\mathcal C}_{\omega,f}\subset \left\{u\in{\mathcal C}^\infty(M,{\mathbb R})\vline\begin{array}{c} \resto{u}{{\rm Crit}(f)}\equiv 0\\ \int_s u=0\ \forall s \hbox{ integral circle of }X_f\end{array}\right\}. \end{equation} \begin{thry} \label{Th} The inclusion (\ref{incl}) is an equality when $M$ is a closed surface and $f$ is a Morse function with the property that any connected component of a level set of $f$ contains at most one index-1 critical point. \end{thry} \begin{proof} Let $u\in{\mathcal C}^\infty(M,{\mathbb R})$ be such that \begin{equation}\label{cycle}\int_s u=0\ \forall s \hbox{ integral circle of }X_f\ . \end{equation} We will show that the differential equation $\{f,g\}=u$ has a smooth solution $g\in{\mathcal C}^\infty(M,{\mathbb R})$. \vspace{2mm}\\ {\bf Step 1}: Local solvability. \\ The interesting part of the proof is to show that condition (\ref{cycle}), which has a global character (it concerns the behavior of $u$ on certain curves contained in $M$), implies the local conditions which are necessary to ensure the local solvability of our equation around the critical points. Whereas this is not surprising for critical points of index 0 and 2 (because there exist integral circles arbitrarily close to such a point), for critical points of index 1, passing from (\ref{cycle}) to the needed local solvability conditions is not obvious at all. The argument is based on the solvability Theorem \ref{many} stated at the end of the section and on a geometric remark.\\ Suppose that $p_0$ is a critical point of $f$, and let $p_0\in U\textmap{h} V\subset{\mathbb R}^2$ be a Morse chart around $p_0$ for $f$, i.e.
a chart such that $h(p_0)=0$, $\resto{f}{U}(x,y)=\frac{1}{2}(\pm x^2\pm y^2)$, and write $\resto{\omega}{U}= a dx\wedge dy$. We may suppose that our chart is compatible with the symplectic orientation, so $a$ is positive on $U$. Therefore $$X_f= \frac{1}{a}\left[\frac{\partial f}{\partial y}\frac{\partial }{\partial x}-\frac{\partial f}{\partial x}\frac{\partial}{\partial y}\right]\ ,\ \iota_{X_f} dg=\frac{1}{a}\left[\frac{\partial f}{\partial y}\frac{\partial g }{\partial x}-\frac{\partial f}{\partial x}\frac{\partial g}{\partial y} \right]\ . $$ {\ }\\ {\bf A}. Suppose $p_0$ has index 0, i.e. it is a local minimum and $f(x,y)=\frac{1}{2}(x^2+y^2)$, $\resto{\omega}{U}= a dx\wedge dy$. In this case $\{f,g\}=\frac{1}{a}(y\frac{\partial g }{\partial x}-x\frac{\partial g}{\partial y})$ and the condition $\{f,g\}=u$ becomes $P_Xg =-au$, where $X=-y\frac{\partial}{\partial x}+x\frac{\partial}{\partial y}$ is the vector field whose associated first order differential equation is dealt with in Theorem \ref{newtheorem} below. The same argument applies around index-2 critical points. \\ \\ {\bf B}. Suppose now that $p_0$ has index 1. Using a linear change of coordinates we may suppose that $\resto{f}{U}(x,y)=xy$, so $X_f=\frac{1}{a}(x\frac{\partial}{\partial x}-y\frac{\partial}{\partial y})$, which, up to the factor $\frac{1}{a}$, is just the vector field whose associated differential equation is studied in Theorem \ref{many} below. We will prove that the equation $P_{X_f} g=u$ is locally solvable around $p_0$. Let $Z$ be a vector field on $M\setminus\mathrm{Crit}(f)$ such that $df(Z)\equiv 1$. Let $(\Psi_t)_{t\in{\mathbb R}}$ be the corresponding 1-parameter group, each $\Psi_t$ being considered, as usual, on its maximal domain. Let $c_0$ be the connected component of the level set $f^{-1}(f(p_0))$ containing $p_0$. Taking into account our assumption about $f$, $p_0$ is the only critical point belonging to $c_0$.
Let $\gamma_0\subset c_0$ be a singular circle containing $p_0$, i.e. the closure of a non-compact integral line of $X_f$ converging in both directions to $p_0$. Let $p_0'$, $p_0''\in \gamma_0$ be two points close to $p_0$ belonging to the two branches which intersect transversally in $p_0$. These points cut $\gamma_0$ into two segments $\mu_0$, $\nu_0$, where the notation is chosen such that $p_0\in\nu_0$. \begin{figure} \centering \scalebox{0.8} {\includegraphics{critical.pdf}} \label{critical} \end{figure} Consider the image $\Psi_t(\gamma_0\setminus\{p_0\})$ for $t\in(-\epsilon,\epsilon)$. The point is that, either for positive or for negative $t$, the image $\Psi_t(\gamma_0\setminus\{p_0\})$ can be written as $\gamma_t\setminus\{p_t\}$, where $\gamma_t$ is a smooth circle (containing no critical points) converging to $\gamma_0$, and $(p_t)$ is a smooth path converging to $p_0$ as $t\to 0$. Suppose that this occurs for positive $t$. We put $p'_t:=\Psi_t(p'_0)$, $p''_t:=\Psi_t(p_0'')$, $\mu_t:=\Psi_t(\mu_0)$ for $t\in(-\epsilon,\epsilon)$, and $\nu_t:=\overline{\gamma_t\setminus\mu_t} $ for $t\in(0,\epsilon)$. All the segments $\mu_t$, $\nu_t$ are naturally framed by the field $X_f$. The argument is now very simple: since $\gamma_t$ is an integral circle of $X_f$ our hypothesis implies $0=\int_{\gamma_t} u=\int_{\mu_t} u+\int_{\nu_t} u$ for any $t\in(0,\epsilon)$, so for positive $t$ one has $\int_{\nu_t} u=-\int_{\mu_t} u$, which extends smoothly for $t\leq 0$, because the segment $\mu_t$ does. This shows that one of the functions $\phi_u^{++}$, $\phi_u^{--}$, $\phi_u^{+-}$, $\phi_u^{-+}$ appearing in Theorem \ref{many} below extends smoothly at 0, which, by this theorem, implies that the equation $P_{X_f}g=u$ is locally solvable around $p_0$. \vspace{1mm} {\ }\\ {\bf Step 2}: Global solvability.
Let ${\mathcal K}$ be the sheaf of germs of (locally defined) smooth functions $\kappa$ satisfying the equation $P_{X_f} \kappa=0$ associated with the vector field $X_f$. According to the first step, there exists an open cover $(U_i)_{i\in I}$ of $M$ and local solutions $g_i:U_i\to {\mathbb R}$ of the equation $P_{X_f} g=u$. The system $\kappa_{ij}=g_j- g_i\in{\mathcal K}(U_i\cap U_j)$ defines a 1-cohomology class ${\scriptstyle{\cal O}}(u)\in H^1(M,{\mathcal K})$, which is the obstruction to the construction of a global solution. According to Lemma \ref{coho} below, the space $H^1(M,{\mathcal K})$ can be embedded in the space of smooth functions ${\mathcal H}_1\to{\mathbb R}$ which are fibrewise linear, where ${\mathcal H}_1$ is the union of the 1-homology groups of the connected components of the level sets of $f$, endowed with a natural topology and differentiable structure (see the construction below). For an integral circle $s$ of $X_f$ it is easy to prove that ${\scriptstyle{\cal O}}(u)([s])=\int_s u$. Therefore our assumption implies that ${\scriptstyle{\cal O}}(u)$ vanishes on the 1-homology of every smooth 1-dimensional component of a level set. If $c_0$ is a singular 1-dimensional component, it will contain only one index-1 critical point, and $H_1(c_0)\simeq{\mathbb Z}^2$ is generated by the fundamental classes of two singular circles. But these circles are limits (in ${\mathcal H}_1$) of smooth integral circles, so our assumption implies ${\scriptstyle{\cal O}}(u)=0$. \end{proof}{\ }\vspace{-10mm}\\ {\it The cohomology of ${\mathcal K}$.} \vspace{-2mm}\\ Let ${\mathcal K}$ be the sheaf of germs of (locally defined) smooth functions $\kappa$ satisfying the equation $P_{X_f}\kappa=0$ on $M$ and let ${\mathcal C}$ be the space of connected components of level sets of $f$, endowed with the quotient topology.
This space comes with two obvious continuous surjections $$M\textmap{\pi}{\mathcal C}\textmap{{\mathfrak f}} [a,b]:=f(M)\ .$$ ${\mathcal C}$ has naturally the structure of a connected 1-dimensional CW complex, whose 0-cells correspond to the critical points of $f$ and whose 1-cells are mapped homeomorphically onto sub-intervals of $[a,b]$ connecting two critical values. We will compute the cohomology of the sheaf ${\mathcal K}$ on $M$ using the Leray spectral sequence associated with the projection $\pi:M\to {\mathcal C}$ (see \cite{G} Theorem 4.17.1, p. 201). \\ Every point $t\in[a,b]$ has a neighborhood $J_t\subset[a,b]$ such that the inclusion $f^{-1}(t)\hookrightarrow f^{-1}(J_t)$ is a homotopy equivalence. The disjoint union ${\mathcal H}_1:= \cup_{c\in{\mathcal C}} H_1(c) $ comes with a natural surjection $\chi: {\mathcal H}_1\to {\mathcal C}$ and has a natural structure of 1-manifold with boundary such that ${\mathfrak f}\circ\chi:{\mathcal H}_1\to[a,b]$ becomes an immersion. A section $[a,b]\supset J\ni t\mapsto h(t)$ is smooth with respect to this structure if for every $t\in J$ and $t'\in J_t\cap J$ the classes $h(t')$, $h(t)$ coincide in $H_1(f^{-1}(J_t))$. Let $U\subset {\mathcal C}$ be open. A continuous function $\beta:U\to{\mathbb R}$ will be called {\it smooth} if for every $c\in U$ and every section $\gamma:V\to U\subset{\mathcal C}$ of ${\mathfrak f}$ defined on an open neighborhood $V$ of ${\mathfrak f}(c)$ in $[a,b]$, the composition $\beta\circ\gamma$ is smooth on $V$. \begin{lm} \label{coho} With the assumptions and the notations above: \begin{enumerate} \item For an open set $U\subset {\mathcal C}$ the space $R^0(\pi_*)({\mathcal K})(U)$ can be embedded in the space of smooth functions on $U$. This sheaf is fine. \item For an open set $U\subset {\mathcal C}$ the space $R^1(\pi_*)({\mathcal K})(U)$ can be embedded in the space of smooth functions on $\chi^{-1}(U)\subset{\mathcal H}_1$ which are fibrewise group-morphisms.
\item The sheaves $R^i(\pi_*)({\mathcal K})$ vanish for $i>1$. \item The cohomology group $H^1(M,{\mathcal K})$ can be embedded in the space of smooth functions ${\mathcal H}_1\to{\mathbb R}$ which are fibrewise linear. \end{enumerate} \end{lm} \begin{proof} The proof is straightforward, so the details will be omitted. The only difficulty is to describe explicitly the stalk $R^i(\pi_*)({\mathcal K})_{c_0}$ at a point $c_0\in{\mathcal C}$ which contains an index-1 critical point $p_0$. Using the general theory of Leray spectral sequences (see \cite{G} p. 201) we see that $R^i(\pi_*)({\mathcal K})_{c_0}=H^i(c_0, {\mathcal K}^0)$, where ${\mathcal K}^0:=\resto{\mathcal K}{c_0}$ is the restriction of the sheaf ${\mathcal K}$ to $c_0$, i.e. the sheaf whose stalk at a point $p\in c_0$ coincides with the stalk ${\mathcal K}_{p}$. The point is that $c_0$ is a CW complex, and ${\mathcal K}^0$ a {\it cellular sheaf} in the sense of \cite{T}, i.e. a sheaf whose restriction to every cell is constant. The obvious CW structure of $c_0$ has one 0-cell, namely $\{p_0\}$, and two 1-cells, denoted $\mu_0$, $\nu_0$ (the connected components of $c_0\setminus\{p_0\}$). According to the main result of \cite{T}, the cohomology of a cellular sheaf can be computed using a cochain complex constructed in a similar way as the usual cellular cochain complex which computes the singular cohomology. Using this result we obtain (2) and (3). The last statement is obtained using the Leray spectral sequence associated with the projection $\pi:M\to{\mathcal C}$. \end{proof} {\ }\vspace{-3mm}\\ {\it Local solvability theorems:}\vspace{-3mm}\\ We could not find in the literature the local solvability results used above, so we state them here for completeness, indicating only briefly the method of proof: \begin{thry} \label{newtheorem} Let $X=-y\frac{\partial}{\partial x}+x\frac{\partial}{\partial y}$ and $r\in(0,\infty]$.
A function $u\in {\mathcal C}^\infty(B_r,{\mathbb R})$ belongs to $\mathrm{im}(P_X:{\mathcal C}^\infty(B_r,{\mathbb R})\to {\mathcal C}^\infty(B_r,{\mathbb R}))$ if and only if \begin{equation}\label{int}\int_{0}^{2\pi} u(\rho\cos(t),\rho\sin(t))dt=0\ \forall \rho\in [0,r)\ . \end{equation} \end{thry} The condition is obviously necessary. In order to prove that it is also sufficient, we use polar coordinates and solve the equation $\frac{\partial}{\partial \theta} g=u$ on the circles $C(0,\rho)$ ($0\leq\rho < r$) with the condition $$\int_0^{2\pi} g(\rho (\cos(t), \sin(t)))dt = 0\ \ \forall \rho\in[0,r) \ . $$ Explicit computations show that the function $g:B_r\to {\mathbb R}$ obtained in this way is smooth and solves the equation $Xg=u$. \\ For the vector field $X=x\frac\partial{\partial x}-y\frac\partial{\partial y}$ the solvability problem has a completely different answer. The germ of this vector field at 0 is hyperbolic, so the corresponding {\it flat problem} (see \cite{Rou}, sections I.2.2, II.2.1) is always solvable. Using the method explained in the introduction of \cite{Rou} (page 15), we see that the equation $X(g)=u$ has a ${\mathcal C}^\infty$ solution around 0 if and only if it has a formal solution. The first order operator $P_X$ associated with $X$ commutes with the hyperbolic second order operator $Q:=\frac{\partial^2}{\partial x\partial y}$, and we see easily that a formal solution exists if and only if \begin{equation}\label{solv} (Q^nu)(0)=0\ ,\ \forall n\in{\mathbb N}\ . \end{equation} Therefore (\ref{solv}) is the ${\mathcal C}^\infty$ solvability condition around 0 for the equation $X(g)=u$. In the proof of Theorem \ref{Th} we need a simple consequence of this criterion. Let $u\in {\mathcal C}^\infty({\mathbb R}^2,{\mathbb R})$.
We define the functions $\phi_u^{++}\ ,\ \phi_u^{--}:(0,1)\to{\mathbb R}\ ,\ \phi_u^{+-}\ ,\ \phi^{-+}_u:(-1,0)\to{\mathbb R}$ by $$\phi_u^{++}(\rho)=\int_{\ln(\rho)}^0 u(e^t,\rho e^{-t}) dt\ ,\ \phi_u^{--}(\rho)=\int_{\ln(\rho)}^0 u(-e^t,-\rho e^{-t}) dt\ , $$ $$\phi_u^{+-}(\rho)=\int_{\ln(-\rho)}^0 u(e^t,\rho e^{-t}) dt\ ,\ \phi_u^{-+}(\rho)=\int_{\ln(-\rho)}^0 u(-e^t,-\rho e^{-t}) dt\ . $$ Note that, for $\rho\in(0,1)$ (respectively $\rho\in(-1,0)$) $\phi_u^{++}(\rho)$ and $\phi_u^{--}(\rho)$ (respectively $\phi_u^{+-}(\rho)$ and $\phi_u^{-+}(\rho)$) are defined by integrating $u$ along the two arcs obtained by intersecting the hyperbola $H_\rho:=\{(x,y)\in{\mathbb R}^2|\ xy=\rho\}$ (parameterized as integral curves of $X$) with the square $[-1,1]\times[-1,1]$. Since these parameterized arcs do not have a limit as $\rho\to 0$, we cannot expect the four functions to extend continuously at 0. But the geometric interpretation of the four functions (as integrals along integral curves of $X$) shows that, if $u=P_Xg$ for a smooth function $g$ defined around this square, we will have \begin{equation}\label{diff}\phi_u^{++}(\rho)=g(1,\rho)-g(\rho,1)\ ,\ \phi_u^{--}(\rho)=g(-1,-\rho)-g(-\rho,-1)\ ,\ \rho\in(0,1)\ , \end{equation} $$\phi_u^{+-}(\rho)=g(1,\rho)-g(-\rho,-1)\ ,\ \phi^{-+}_u(\rho)=g(-1,-\rho)-g(\rho,1)\ , \ \rho\in(-1,0)\ . $$ Therefore, if $u=P_Xg$ for a smooth function $g$ defined around $[-1,1]\times[-1,1]$, then the functions $\phi_u^{++}$, $\phi_u^{--}$, $\phi_u^{+-}$, $\phi^{-+}_u$ extend ${\mathcal C}^\infty$ at 0. The same is true when $u$ can be written as $P_Xg$ on a smaller square $[-r,r]\times[-r,r]$ with $r>0$, i.e. when the equation $P_X g=u$ is locally ${\mathcal C}^\infty$-solvable around the origin. The following theorem states that the converse is also true. \begin{thry}\label{many} Let $u\in{\mathcal C}^\infty({\mathbb R}^2,{\mathbb R})$.
The following conditions are equivalent: \begin{enumerate} \item The functions $\phi_u^{++}$, $\phi_u^{--}$, $\phi_u^{+-}$, $\phi^{-+}_u$ extend ${\mathcal C}^\infty$ at 0. \item One of the four functions $\phi_u^{++}$, $\phi_u^{--}$, $\phi_u^{+-}$, $\phi^{-+}_u$ extends ${\mathcal C}^\infty$ at 0. \item The derivatives of $u\in{\mathcal C}^\infty({\mathbb R}^2,{\mathbb R})$ at 0 satisfy (\ref{solv}). \item The equation $P_X g=u$ has a ${\mathcal C}^\infty$-solution around 0. \end{enumerate} \end{thry}
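The necessity of condition (\ref{int}) is easy to check numerically. The following sketch (our own illustration, not from the text; the polynomial $g$ is an arbitrary choice) verifies that for $u=P_Xg$, with $X=-y\frac{\partial}{\partial x}+x\frac{\partial}{\partial y}$, the averages of $u$ over circles centered at the origin vanish:

```python
import math

# Illustrative test function g and its derivative u = P_X g along
# the rotation field X = -y d/dx + x d/dy of Theorem "newtheorem".
def g(x, y):
    return x**2 * y

def u(x, y):
    # P_X g = -y * dg/dx + x * dg/dy, computed analytically for g above
    return -y * (2 * x * y) + x * (x**2)

def circle_average(f, rho, n=2000):
    # Average of f over the circle of radius rho (equally spaced nodes)
    return sum(f(rho * math.cos(2 * math.pi * j / n),
                 rho * math.sin(2 * math.pi * j / n)) for j in range(n)) / n

for rho in (0.5, 1.0, 2.0):
    # Condition (int): the circle averages of u vanish for every radius
    assert abs(circle_average(u, rho)) < 1e-10
```

For trigonometric polynomials such as this $u$ the equally spaced quadrature is exact up to rounding, so the averages vanish to machine precision.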
\section{Introduction} The human brain contains up to 86 billion neurons connected by close to a million kilometers of axons and dendrites. Most of these connections ($\sim 80\%$) are short range, on the order of a few hundred microns, the rest ($\sim 20\%$) being long-range myelinated fibers on the order of several centimeters. The insulating myelin sheath increases the conduction velocity of the action potentials, but at the cost of taking up more volume in the brain as well as rendering axons unable to synapse onto nearby neurons. That evolution has found it profitable to accept this additional hardware cost highlights the importance of long-range connections. From a network point of view the brain has a modular and hierarchical structure \cite{Park} \cite{Wig}. Each module is associated with a specialized function mediated by short-range connections, whereas global integration, for higher cognition functions, relies on the long-range connections between modules. The existence and importance of long-range connections in the brain has been much studied in recent years \cite{Knosche} \cite{Betzel} \cite{Padula} \cite{Markov} \cite{Modha} \cite{Fluo}, with diminished long-range functional connectivity being associated with cognitive disorders \cite{Barttfeld}. Of course, by itself, the existence of long-range connections between the specialized nodes does not guarantee global integration of the cognitive functions. It is also necessary that the flow of information be sufficiently fast for the stimulus integration to be performed in a timely manner. This seems of particular relevance for the forward and backwards loops in the predictive coding mode \cite{Clark1} \cite{Friston2} \cite{Friston3} \cite{Clark2} \cite{Spratling} \cite{Hogendoorn} of brain operation. One may therefore ask what type of communication short- and long-range connections establish and whether or not it depends on the structure and density of the long-range connections.
The network modules in the brain are in fact repertoires of many neurons and, when dealing with the interactions of these intrinsic connectivity networks (ICN's), a continuous diffusion approximation might be a good modelling hypothesis. In another paper \cite{Vilela-FNL} the nature of the diffusion processes associated to short- and long-range connections has been analyzed. In particular it was concluded that whereas for short-range connections information propagates as a normal diffusion, for long-range connections of a certain type one has anomalous diffusion, sub- or super-diffusion depending on the power-law distance dependence of the connections. Networks with long-range connections leading to superdiffusion display properties so very different from scale-free and hub-dominated networks that in \cite{Vilela-FNL} it was proposed to characterize them as a new class of networks, \textbf{the fractional networks}. Notice that long-range connections are also important in social networks \cite{Hogan} \cite{Romantic} \cite{Carvalho} \cite{Gustafson}. In a network the Laplacian matrix is \begin{equation} L=G-A \label{1.1} \end{equation} $G$ being the degree matrix ($G_{ij}=\delta _{ij}\times $ number of connections of node $i$) and $A$ the adjacency matrix ($A_{ij}=1$ if $i$ and $j$ are connected, $A_{ij}=0$ otherwise). Let $\psi \left( i\right) $ for each node $i$ be the intensity of some function $\psi $ across the network. For a node $i$ connected along some coordinate to two other nearest-neighbor nodes $i+1$ and $i-1$ the action of the Laplacian matrix on a vector leads to $-\psi \left( i-1\right) +2\psi \left( i\right) -\psi \left( i+1\right)$, which is a discrete version of $-\frac{d^{2}}{dx^{2}}$ (minus the second derivative). It is reasonable to think that $\psi $ diffuses from $i$ to $j$ at a rate proportional to $\psi \left( i\right) -\psi \left( j\right) $ if $i$ and $j$ are connected.
Then \begin{equation} \frac{d\psi \left( i\right) }{dt}=-k\sum_{j}A_{ij}\left( \psi \left( i\right) -\psi \left( j\right) \right) =-k\left( \psi \left( i\right) \sum_{j}A_{ij}-\sum_{j}A_{ij}\psi \left( j\right) \right) \label{1.2} \end{equation} which in matrix form is \begin{equation} \frac{d\psi }{dt}+kL\psi =0 \label{1.3} \end{equation} a heat-like equation. Therefore the Laplacian matrix controls the diffusion of quantities in the network; in the continuous approximation, and for short-range connections, the propagation of signals in the network may be represented by a normal diffusion equation \begin{equation} \frac{d\psi }{dt}=k\Delta \psi \label{1.4} \end{equation} $\Delta $ being the Laplacian in the dimension of the space where the network is embedded. However, for long-range connections the situation is different and from the symmetrized Gr\"{u}nwald-Letnikov representation of the fractional derivative it was found \cite{Vilela-FNL} (see also the Appendix) that for networks where the probability of establishment of a link at distance $d$ is proportional to a power of the distance \begin{equation} P_{ij}=cd_{ij}^{-\gamma } \label{1.5} \end{equation} diffusion would be fractional diffusion of exponent $\beta =\gamma -1$, $\beta =2$ being normal diffusion and all $\beta <2$ corresponding to superdiffusion: \begin{equation} \frac{d\psi }{dt}=-k\left( -\Delta \right) ^{\frac{\beta }{2}}\psi \label{1.6} \end{equation} Anomalous diffusion and other phenomena \cite{Vilela-FNL} emerge naturally as a structural property in long-range connection networks with distance dependence as in (\ref{1.5}). Here, in Section 2, the case of networks characterized by a modular hierarchical structure with both short- and long-range connections will be studied. This is the structure that occurs in brain networks and also in some social networks.
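The identification of $L=G-A$ with the discrete second derivative can be checked directly. The following minimal sketch (our own illustration, for a nearest-neighbor ring network) builds the Laplacian of Eq. (\ref{1.1}) and verifies its action entering Eq. (\ref{1.3}):

```python
import numpy as np

N = 8                          # small ring network, nodes 0..N-1
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = 1.0    # nearest-neighbour links (periodic)
    A[i, (i - 1) % N] = 1.0

G = np.diag(A.sum(axis=1))     # degree matrix
L = G - A                      # Laplacian matrix, Eq. (1.1)

psi = np.random.rand(N)
Lpsi = L @ psi
# On the ring, L acts as minus the discrete second derivative:
for i in range(N):
    expected = -psi[i - 1] + 2 * psi[i] - psi[(i + 1) % N]
    assert np.isclose(Lpsi[i], expected)
```

As expected for a Laplacian, constant vectors lie in its kernel, so a uniform $\psi$ does not diffuse.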
Whereas in the networks studied in \cite{Vilela-FNL} the uniform scaling law of the connections leads to pure anomalous diffusion, here one faces a mixture of both normal and anomalous diffusion. This is the central phenomenon studied in this paper, with emphasis on the nature of the time scales of propagation of information. This is discussed in the framework of the continuous approximation to the network, leading to a fractional differential equation. The continuous approximation is a reasonable approximation for very large networks. However, it is also found that qualitatively similar results are obtained even for small discrete networks. This is illustrated in Section 3 by numerical simulation in a relatively small network (400 nodes). \section{Mixed diffusion} In the mixed case the diffusion equation will be \begin{equation} \frac{d\psi \left( x,t\right) }{dt}=\left( a\Delta -b\left( -\Delta \right) ^{\frac{\beta }{2}}\right) \psi \left( x,t\right) \label{2.1} \end{equation} with $x\in \mathbb{R}^{n}$, $n$ being the dimension of the embedding Euclidean space. For the Fourier transform \begin{equation} \widetilde{\psi }\left( k,t\right) =\int d^{n}x\psi \left( x,t\right) e^{-ik\cdot x} \label{2.2} \end{equation} one has the equation \begin{equation} \frac{d\widetilde{\psi }\left( k,t\right) }{dt}=\left( -a\left\vert k\right\vert ^{2}-b\left\vert k\right\vert ^{\beta }\right) \widetilde{\psi }\left( k,t\right) \label{2.3} \end{equation} with solution \begin{equation} \widetilde{\psi }\left( k,t\right) =\widetilde{\psi }\left( k,0\right) e^{-t\left( a\left\vert k\right\vert ^{2}+b\left\vert k\right\vert ^{\beta }\right) } \label{2.4} \end{equation} $\widetilde{\psi }\left( k,0\right) =1$ corresponds to $\psi \left( x,0\right) =\delta ^{\left( n\right) }\left( x\right) $, that is, an initial localized disturbance at the origin. This is the situation of interest to study the propagation of information in the network.
Computing the inverse Fourier transform one has \begin{equation} \psi \left( x,t\right) =\frac{2A_{n}}{\left( 2\pi \right) ^{n-1}}\int_{0}^{\infty }d\left\vert k\right\vert \left\vert k\right\vert ^{n-1}e^{-t\left( a\left\vert k\right\vert ^{2}+b\left\vert k\right\vert ^{\beta }\right) }\frac{\sin \left( \left\vert k\right\vert \left\vert x\right\vert \right) }{\left\vert k\right\vert \left\vert x\right\vert } \label{2.5} \end{equation} with \begin{equation} A_{n}=\left\{ \begin{array}{l} \frac{\pi ^{m-1}2^{m-2}}{\left( 2m-2\right) !!}\;\;n=2m \\ \frac{\pi ^{m-1}2^{m-1}}{\left( 2m-1\right) !!}\;\;n=2m+1 \end{array}\right. \label{2.6} \end{equation} As in the purely fractional multidimensional solution \cite{Hanyga} one notices the strong dependence on the dimension $n$. Numerical evaluation of (\ref{2.5}) shows the remarkable difference in the speed of propagation of information between normal and mixed diffusion. For $n=3$, Figures \ref{d_10} and \ref{d_100} compare the propagation of a delta signal at $\left( x=0,t=0\right) $ to distances $x=10$ and $100$ for normal and mixed diffusion. One sees that whereas for normal diffusion it takes a long time for the signal to be detected at a distance, for mixed diffusion the behavior is qualitatively very different.
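A direct numerical evaluation of Eq. (\ref{2.5}) illustrates this. The sketch below (our own illustration; $A_3=1$ from Eq. (\ref{2.6}), mixed-diffusion parameters as in Fig. \ref{d_10}, and $a=1,b=0$ taken as the purely normal reference) shows that at a fixed time the mixed kernel at $x=10$ is far larger than the normal one:

```python
import numpy as np

def psi3(x, t, a, b, beta, kmax=50.0, nk=1_000_000):
    # Radial integral of Eq. (2.5) for n = 3, where A_3 = 1
    k = np.linspace(1e-9, kmax, nk)
    integrand = (k**2 * np.exp(-t * (a * k**2 + b * k**beta))
                 * np.sin(k * x) / (k * x))
    dk = k[1] - k[0]
    return 2.0 / (2.0 * np.pi)**2 * integrand.sum() * dk

x, t = 10.0, 2.0
normal = psi3(x, t, a=1.0, b=0.0, beta=1.1)   # purely normal diffusion
mixed = psi3(x, t, a=0.7, b=0.3, beta=1.1)    # mixed, as in Fig. d_10
# The heavy tail of the fractional component reaches x = 10 much earlier:
assert mixed > normal
```

The Gaussian factor makes the integrand negligible well before the cutoff $k_{\max}$, so a simple Riemann sum on a fine grid is adequate here.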
\begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{d_10.eps} \caption{Comparison of the propagation time of a delta signal at $\left( x=0,t=0\right) $ to a distance $x=10$ for normal and mixed diffusion ($\protect\beta =1.1,a=0.7,b=0.3$)} \label{d_10} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{d_100.eps} \caption{Comparison of the propagation time of a delta signal at $\left( x=0,t=0\right) $ to a distance $x=100$ for normal and mixed diffusion ($\protect\beta =1.1,a=0.7,b=0.3$)} \label{d_100} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{d_10_2.eps} \caption{Comparison of the propagation time of a delta signal at $\left( x=0,t=0\right) $ to a distance $x=10$ for normal and mixed diffusion ($\protect\beta =1.1,a=0.9,b=0.1$)} \label{d_10_2} \end{figure} Figures \ref{d_10_2} and \ref{d_100_2} show that this effect is obtained even with a very small amount of fractional diffusion. \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{d_100_2.eps} \caption{Comparison of the propagation time of a delta signal at $\left( x=0,t=0\right) $ to a distance $x=100$ for normal and mixed diffusion ($\protect\beta =1.1,a=0.9,b=0.1$)} \label{d_100_2} \end{figure} Of course the effect exists only if $\beta <2$. For $\beta \geq 2$ the behavior would be practically indistinguishable from normal diffusion. This highlights the fact that the mere existence of long-range connections does not guarantee the existence of fractional superdiffusion. That is, a density of long-range connections at least consistent with the one in (\ref{1.5}) is required. This is an important hint to be taken into account on the relation of functional connectivity to brain cognitive disorders.
\clearpage \section{Diffusion in a small fractional network: Numerical results} So far we have discussed the diffusion behavior of fractional networks in the framework of the continuous approximation to the network. Here, by numerically simulating the propagation of a pulse of information in a discrete network, we show that the results are consistent with those obtained from the continuous approximation modeled by the fractional differential equations. We consider 400 agents (nodes) placed in a two-dimensional $20\times 20$ grid and establish connections among the nodes with a distance-dependent power-law distribution \begin{equation*} P_{ij}\sim d^{-\gamma } \end{equation*} Namely, we pick a node at random and establish a connection to another node at a distance $d$ \begin{equation*} d=\exp \left\{ \frac{\log \left( d_{\min }^{1-\gamma }-C\gamma y\right) }{1-\gamma }\right\} \end{equation*} $y$ being a random number in the interval $\left[ 0,1\right] $ and $C$ a constant \begin{equation*} C=\frac{\left( d_{\min }^{1-\gamma }-d_{\max }^{1-\gamma }\right) }{\gamma } \end{equation*} In Fig.~\ref{Adjs} we show the pattern of connections for the fractional network with $\gamma =2$ and a sparsity index of $0.1$. In the same figure are also shown the patterns of connections for a random network with the same sparsity and for a nearest-neighbor network with the maximal number of connections. \begin{figure}[htb] \centering \includegraphics[width=0.7\textwidth]{Adjs.eps} \caption{Connection patterns for a fractional network with $\gamma =2$ and sparsity $0.1$, for a random network with the same sparsity and a nearest-neighbor network} \label{Adjs} \end{figure} To study the diffusion in the fractional network, we consider, at time zero, a unit pulse at one node and study how it propagates throughout the network. At each time step the pulse is transmitted to the neighbors of each activated node, with a no-backflow condition being imposed.
That is, after a node transmits the pulse to its neighbors it no longer transmits the same pulse even if it receives it back through some cycle in the network. In Fig.~\ref{signals} we show the results of two typical simulations. In each case we have chosen, among the nodes that have a long-distance connection, those that are farthest apart. In the same figure we compare with the results of the same experiment for a nearest-neighbor network (the single pulses at times 16 and 20). Not only is the signal transmitted much faster in the fractional network, but also its coherent nature is preserved, instead of being spread over a very large number of distinct times as may occur in a sparse random network. Very similar behavior is also obtained for the propagation between nodes that are not directly connected at time one. \begin{figure}[htb] \centering \includegraphics[width=0.7\textwidth]{signals.eps} \caption{Propagation of a unit pulse between two distant nodes for a fractional network $\left( \protect\gamma =2\right) $, compared with the same phenomenon in a nearest-neighbor network (the single pulses at t=16 and t=20)} \label{signals} \end{figure} \clearpage \section{Remarks and conclusions} 1. As has been experimentally confirmed, the existence of long-range connections between the brain ICN's is critical for the integration and interpretation of sensory stimuli and higher cognitive functions. One view of brain integration and consciousness \cite{Zhou} \cite{Enzo} is based on a percolation model. For percolation, that is, for the formation of a global cluster, it suffices that connections exist between the local clusters. However for the establishment of higher cognitive functions, and in particular in the predictive coding mode, it is necessary that the interaction between the ICN's be established at a sufficiently fast rate.
Therefore the mere existence of long-range connections is not sufficient; it is also necessary that they have, for example, a power-law dependence with $\gamma <3$. 2. The additional hardware cost of myelinated long-range connections in the brain is compensated by the integration of information and higher cognitive functions. Another puzzling additional energetic cost is that, when tested with fMRI, the resting brain is in fact turbulent and restless \cite{Raichle}. There is a good reason for that, probably related to speed of reaction. With the operating time scales of individual neurons and their low average firing rate, pattern recognition by evolution towards an equilibrium fixed point or minimizing an energy function would be much too slow for practical living purposes. As has been conjectured, for example from the studies of the olfactory bulb \cite{Freeman}, a much faster recognition is achieved by replacing the low-level chaos that exists in the absence of an external stimulus with, in the presence of a signal, a pattern of bursts of different intensities in different regions. A network of Bernoulli units \cite{Dente} is a model confirmation of this conjecture. 3. Finally, as already discussed in \cite{Vilela-FNL}, the robustness and controllability properties of the fractional networks are so very different from those of scale-free networks that they deserve a detailed study. This is relevant not only for brain functions but also concerns the uses and misuses of information flow in social networks. \section*{Appendix: Power-law long-range connections and fractional diffusion} For completeness we include here a short derivation of the relation between power-law long-range connections and fractional diffusion equations, already discussed in Ref.~\cite{Vilela-FNL}.
Let the probability of a link at distance $d$ be proportional to a power of the distance \begin{equation*} P_{ij}=cd_{ij}^{-\gamma }\hspace{2cm}\text{with }\gamma \leq 3 \end{equation*} Consider now a block-renormalized network $N^{\ast }$ where each set of $q$ nearby nodes in the original network $N$ is mapped to a node of the $N^{\ast }$ network. With the block renormalization, the power-law connection probability leads to actual connection strengths in the renormalized network. In the $N^{\ast }$ network the connections are \begin{equation*} A_{ij}^{\ast }\simeq cqd_{ij}^{-\gamma } \end{equation*} with the Laplacian $L^{\ast }$ and degree $G^{\ast }$ matrices of the $N^{\ast }$ network being \begin{equation*} L^{\ast }\psi \left( i\right) =G_{ii}^{\ast }\psi \left( i\right) -cq\sum_{j\neq i}d_{ij}^{-\gamma }\psi \left( j\right) \end{equation*} Compare the distance dependence of the elements of the Laplacian matrix $L^{\ast }$ along one of the coordinate axes with a discrete one-dimensional representation of a fractional derivative. The symmetrized Gr\"{u}nwald-Letnikov representation of the fractional derivative $\left( a<x<b\right) $ (see \cite{Mainardi}) is \begin{eqnarray} D^{\beta }\psi \left( x\right) &=&\frac{1}{2}\lim_{h\rightarrow 0}\frac{1}{h^{\beta }}\left\{ \sum_{n=0}^{\left[ \frac{x-a}{h}\right] }\left( -1\right) ^{n}\binom{\beta }{n}\psi \left( x-nh\right) \right. \notag \\ &&\left.
+\sum_{n=0}^{\left[ \frac{b-x}{h}\right] }\left( -1\right) ^{n}\binom{\beta }{n}\psi \left( x+nh\right) \right\} \label{A1} \end{eqnarray} with coefficients \begin{equation} \left\vert \binom{\beta }{n}\right\vert =\frac{\Gamma \left( \beta +1\right) \left\vert \sin \left( \pi \beta \right) \right\vert }{\pi }\frac{\Gamma \left( n-\beta \right) }{\Gamma \left( n+1\right) }\underset{n\text{ large}}{\sim }\frac{\Gamma \left( \beta +1\right) \left\vert \sin \left( \pi \beta \right) \right\vert }{\pi }n^{-\left( \beta +1\right) } \label{A2} \end{equation} and $\mathrm{sign}\binom{\beta }{n}=\left( -1\right) ^{n+1}$. Comparing (\ref{A1})-(\ref{A2}) with the expression for $L^{\ast }\psi \left( i\right) $, the conclusion is that diffusion in the $N^{\ast }$ network is fractional diffusion of exponent $\beta =\gamma -1$. $\beta =2$ would be normal diffusion, all $\beta <2$ corresponding to superdiffusion.
\section{\label{}} \input{introduction} \input{SED-and-el_spec} \input{crosscal} \input{summary} \bigskip \begin{acknowledgements} The participation in the Fermi Symposium was made possible with the support of the German federal ministry for education and research (Bundesministerium f\"ur Bildung und Forschung). It was also supported by the collaborative research center (SFB) 676 ``Particles, Strings and the Early Universe'' at the University of Hamburg. \end{acknowledgements} \bigskip \bibliographystyle{authordate3} \section{Model of the Spectral Energy Distribution of the Crab Nebula} \label{sec:SED} A population of relativistic electrons is assumed to be distributed in a spherical volume following a Gaussian density distribution with its maximum at the nebula's center. The width of the Gaussian is parameterized in order to reproduce the shrinking size of the nebula with increasing frequency. The volume of the nebula is assumed to be filled with an entangled magnetic field of constant field strength. Within the volume occupied by the electrons, various seed photons are upscattered. The effective density of seed photons $n_\mathrm{seed}$ is simply found by convolving the electron density with the photon density \citep[see][for further details]{1998ApJ...503..744H}.
The total emission from the nebula is then found by integration: \begin{eqnarray} L_\nu &=& \int~\mathrm{d}\gamma~ n(\gamma) \left (\mathcal{L}_\nu^\mathrm{Sy} + \mathcal{L}_\nu^\mathrm{IC}\right), \end{eqnarray} with $\mathcal{L}_\nu$ the single particle emission functions for synchrotron (Sy) and inverse Compton processes (IC) \citep[see e.g.][]{1970RvMP...42..237B}: \begin{eqnarray} \mathcal{L}_\nu^\mathrm{Sy} &=& \frac{\sqrt 2 e^3 B }{m c^2}~\frac{\nu}{\nu_\mathrm{c}}\int_{\nu\slash\nu_\mathrm{c}}^{~\infty} K_{5\slash 3} (x)~ \mathrm d x\, , \\ \mathcal{L}_\nu^\mathrm{IC} &=& \frac 3 4 ~\frac{\sigma_\mathrm{T}c}{\gamma^2}h\nu\int_{h\nu\slash(4\gamma^2)}^{~h\nu}~\mathrm d \epsilon ~\frac{n_\mathrm{seed}(\epsilon)}{\epsilon}f_\mathrm{IC}(\epsilon,\nu,\gamma),\label{eqn:ic} \end{eqnarray} where we have averaged the pitch angle to give $\sqrt{2\slash3}$ and $\sigma_\mathrm{T}$ denotes the Thomson cross-section. The critical frequency $\nu_\mathrm{c}$ is defined as \begin{eqnarray} \nu_\mathrm{c} = \frac{3 e}{4\pi m c}~B~\gamma^2, \end{eqnarray} and $K_{5\slash3}(x)$ stands for the modified Bessel function of fractional order $5\slash3$. Introducing the kinematic variable $q$, \begin{eqnarray} q = \frac{h \nu}{4\epsilon\gamma^2[1-h\nu\slash(\gamma m c^2)]}, \end{eqnarray} the IC distribution function $f_\mathrm{IC}$ can be written as \begin{eqnarray} f_\mathrm{IC}(\epsilon,\nu,\gamma) & = & \nonumber\\ 2q\ln q & + & (1+2q)(1-q) \nonumber\\ &{}&+ \frac 1 2~\frac{\left[{4\epsilon\gamma q}/({mc^2})\right]^2}{1+{4\epsilon\gamma q^2}/{(mc^2)}}(1-q). \end{eqnarray} For the inverse Compton channel photons from several seed photon fields are taken into account: (1) synchrotron radiation, (2) emission from thermal dust, (3) the cosmic microwave background (CMB), and (4) optical line emission from the nebula's filaments. The seed photon density $n_\mathrm{seed}$ in Eqn. \ref{eqn:ic} is the sum of all these components. 
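For reference, the inverse Compton kinematics above can be put into code. The sketch below is our own illustration (not part of the original model code), with all energies expressed in units of the electron rest energy $mc^2$; it evaluates $q$ and $f_\mathrm{IC}$ and checks that the emissivity vanishes at the kinematic limit $q=1$:

```python
import math

def f_ic(eps, hnu, gamma):
    # IC distribution function; eps (seed photon) and hnu (scattered
    # photon) are energies in units of the electron rest energy m c^2.
    q = hnu / (4.0 * eps * gamma**2 * (1.0 - hnu / gamma))
    lam = 4.0 * eps * gamma * q          # recoil term 4*eps*gamma*q/(m c^2)
    return (2.0 * q * math.log(q) + (1.0 + 2.0 * q) * (1.0 - q)
            + 0.5 * lam**2 / (1.0 + lam * q) * (1.0 - q))

eps, gamma = 1e-9, 1e6                   # illustrative Thomson-regime values
hnu_max = 4.0 * eps * gamma**2 / (1.0 + 4.0 * eps * gamma)  # q = 1 here
assert abs(f_ic(eps, hnu_max, gamma)) < 1e-6   # spectrum ends at q = 1
assert f_ic(eps, 0.5 * hnu_max, gamma) > 0.0   # positive below the limit
```

In the Thomson limit ($\epsilon\gamma \ll mc^2$) the recoil term is negligible and $f_\mathrm{IC}$ reduces to $2q\ln q + (1+2q)(1-q)$.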
Like the electron population, the spatial photon densities are approximated by Gaussian distributions, whereas the photon density of the CMB is taken to be constant throughout the nebula. The spatial variance of these distributions is also energy dependent. The resulting broadband energy distribution is shown in Fig. \ref{fig:SED}. The electron spectrum is varied until the resulting synchrotron spectrum reproduces the observational data. \\ The compilation of data used here is summarized in \citet{2004ApJ...614..897A} and references therein. Additionally, new data are added which are listed in Table \ref{tbl:data}. The solid black curve in Fig. \ref{fig:SED} is the sum of all contributions including synchrotron and IC emission as well as thermal emission from dust in the nebula and optical line emission from the filaments. For the thermal dust emission a graybody spectrum is used; a temperature of $T = 93 \unit{K}$ was derived by fitting the combined spectrum (thermal and non-thermal emission) to the data (solid gray line in Fig. \ref{fig:SED}). The flux of the line emissions (orange solid line) is taken from \citet{1985ARA&A..23..119D,1987AJ.....94..964D} and \citet{1990ApJ...357..539H}. The optical line emission of the filaments in the nebula is estimated in the following way: the high resolution spectral observations of individual filaments have been corrected for extinction \citep{1990ApJ...357..539H} and scaled to match the global emission from the filaments \citep[see e.g. the discussion in][]{1985ARA&A..23..119D}. \\ The FIR observations from Spitzer, ISO, and SCUBA (orange and magenta circles in Fig.~\ref{fig:SED}, respectively) deviate from the power-law extrapolation of the radio spectrum. In the framework of two distinct electron populations, the shape of the continuum is naturally explained by the transition between the two synchrotron emission components.
\\ The dashed blue curve shows the total synchrotron and IC emission of the wind electrons, whereas the dashed red curve shows the contribution of the radio electrons. The dashed cyan curves show the total synchrotron and IC fluxes from both populations combined. \begin{figure*}[t!] \centering \includegraphics[width = 0.9\textwidth]{SED_talk} \caption{Broadband SED of the Crab nebula. See section \ref{sec:SED} for details. The two black curves correspond to the models described in \citet{Meyer}.} \label{fig:SED} \end{figure*} \begin{table}[hbt] \centering \begin{small} \begin{tabular}{lll} \textbf{Energy Band} & \textbf{Instrument} & \textbf{Reference}\\ \hline \hline Sub-millimeter & ISO \& SCUBA & {\citet{2004MNRAS.355.1315G}} \\ to far infrared & SPITZER & {\citet{2006AJ....132.1610T}} \\ \hline X-ray to & XMM-Newton & {\citet{2005SPIE.5898...22K}}\\ $\gamma$-rays & SPI & {\citet{2009ApJ...704...17J}}\\ {} & IBIS$\slash$ISGRI & {\citet{2008int..workE.144J}}\\ {} & Fermi / LAT & {\citet{2009arXiv0911.2412T}}\\ \hline VHE & H.E.S.S. & {\citet{2006A&A...457..899A}}\\ {} & MAGIC & {\citet{2008ApJ...674.1037A}} \end{tabular} \end{small} \caption[Observations for SED of the Crab]{References for the observations used for the SED. All other data are taken from \citet{2004ApJ...614..897A} and references therein.} \label{tbl:data} \end{table} The electron spectrum was adapted such that the resulting synchrotron emission reproduces the power law measured with XMM-Newton. Both the XMM-Newton and INTEGRAL (with the instruments SPI and IBIS/ISGRI) observatories are calibrated on the basis of detailed simulations and laboratory measurements. This approach differs from the widely used corrections of the response function that reproduce a specific spectral shape and flux of the Crab nebula. Such corrected measurements are model-dependent and have therefore not been included here.
Furthermore, XMM-Newton is able to spatially resolve the Crab, whereas the measurements by SPI and IBIS/ISGRI may include contributions from the pulsar, possibly leading to higher fluxes in comparison to the XMM-Newton observations. Note that the difference in flux normalization between XMM-Newton and SPI is beyond the quoted systematic errors. The model calculations shown here are fixed to the XMM-Newton spectra, which then naturally underpredict the SPI measurements. The shape of the spectrum measured by SPI has been taken into account, since the power laws below and above 100~keV smoothly connect to the spectra measured by XMM-Newton at the low-energy end and by Comptel at higher energies. The predicted IC emission and the Fermi observations are used to determine the average magnetic field. This is shown in Fig. \ref{fig:IC}. The IC fluxes due to the different photon fields and electron populations add up to give the total black solid line. A standard $\chi^2$-minimization is used to determine the best value for the average $B$-field. Since a varying magnetic field also changes the synchrotron flux, the underlying electron spectrum is varied accordingly to compensate for the change, i.e. the synchrotron flux remains constant. Taking into account the systematic uncertainty on the global energy scale of the Fermi data, $\Delta E/E = {}^{+5\%}_{-10\%} \,\mathrm{(sys.)}$ \citep[see e.g.][]{2009PhRvL.102r1101A}, the average $B$-field is found to be \begin{equation} B = \left(124\pm6\,\mathrm{(stat.)}\,{}^{+15}_{-6}\,\mathrm{(sys.)}\right)\,\mu\mathrm{G}.\label{eqn:Bfit} \end{equation} \begin{figure*}[t!bh] \centering \includegraphics[width = 0.9\textwidth]{IC_fix_Fermi_talk} \caption{The total IC flux due to different seed photon fields and the Fermi data points are shown.
For the numbering see the text.} \label{fig:IC} \end{figure*} The discussion of this result in the light of MHD calculations \citep{1984ApJ...283..694K} can be found in \citet{Meyer}. It is worth noting that the magnetic field derived here is less than half the commonly used value of $300\,\mu\mathrm{G}$. \subsection{The electron spectrum of the nebula} \label{sec:el_spec} The underlying electron spectrum $\Difft{N_\text{el}}{\gamma}$ is the crucial quantity that determines the shape of the SED \citep[see][for a detailed discussion]{Meyer}. It consists of the two aforementioned electron populations, where the radio electron spectrum is given by \begin{eqnarray} \Diff{N_\text{el}^r}{\gamma} &=& \left\{\begin{array}{ll} N_0^r \gamma^{-S_r} & \mathrm{for}\quad \gamma^r_1\le\gamma\le\gamma^r_2,\\ \\ 0 & \mathrm{otherwise}.\end{array}\right.\label{eqn:el_pop} \end{eqnarray} The radio electrons were probably injected in the phase of rapid spin-down during the initial stages of the pulsar-wind evolution \citep{1999A&A...346L..49A}. The values for $N_0^r$, $\gamma_1^r$, and $\gamma_2^r$ are summarized in Table \ref{tbl:el_spec}. Above $\gamma^r_2$ and below $\gamma_1^r$ a sharp cut-off of the spectrum is chosen, i.e. $\Difft{N_\text{el}}{\gamma} = 0$ for $\gamma<\gamma^r_1$ and $\gamma>\gamma^r_2$. \begin{table*}[t!hb] \centering \begin{tabular}{lccccc} \textbf{Population} &$ N_0$& $\ln\gamma_0$ & $\ln\gamma_1$ & $\ln\gamma_2$ & $ S$\\ \hline \hline Radio &$120.0(1)$ & -- & $3.1$ & $12.1(7)$ & $1.60(1)$ \\ \hline Wind &$78.6(3)$& $19.5(1)$ & $12.96(3)$ & $22.51(3)$ & $3.23(1)$ \end{tabular} \caption{Cut-off and normalization energies together with the spectral indices $S$ of the electron spectrum. See text and Eqn. \ref{eqn:wind} for further details.} \label{tbl:el_spec} \end{table*} Via synchrotron emission, the wind electrons produce the bulk of the observed SED above sub-mm/FIR wavelengths.
The wind electrons are constantly injected downstream of the wind shock (hence the name). The radiatively cooled spectrum of the wind electrons has a spectral index of $S_w = 3.23 = 2.23 + 1$, which can naturally be explained by ultra-relativistic 1${}^\mathrm{st}$ order Fermi acceleration with synchrotron cooling \citep[see e.g.][]{1986MPARp.239.....K}. \\ An additional feature present in the hard X-ray spectrum, which follows a broken power law with a break at $\sim 70\unit{keV}$, requires a break of $\Delta S=0.43$ in the electron spectrum. We tentatively relate this break to the injection mechanism, given that it can hardly be related to energy-dependent escape (the X-ray emitting electrons suffer cooling well before escaping the nebula). The value of $\Delta S$ could hint at an energy-dependent effect similar to diffusion in a Kolmogorov-type turbulence power spectrum. \\ At high and low energies the wind electron spectrum cuts off super-exponentially, \begin{eqnarray} \Diff{N_\text{el}^w}{\gamma} &=& N_0^w\left\{\begin{array}{ll} \left(\frac{\gamma}{\gamma_0^w}\right)^{-S_w}, &\mathrm{for}\quad\gamma<\gamma_0^w, \\ \\ \left(\frac{\gamma}{\gamma_0^w}\right)^{-(S_w+\Delta S)}, & \mathrm{for}\quad\gamma^w_0\le\gamma\le\gamma_2^w ,\\ \\ 0, & \mathrm{for}\quad \gamma > \gamma^w_2, \\ \end{array}\right\}\nonumber\\ &{}&\quad\times\exp\left(-\left[\frac{\gamma_1^w}{\gamma}\right]^{2.8(4)}\right),\label{eqn:wind} \end{eqnarray} where the values for $\gamma_0^w$, $\gamma_1^w$ and $\gamma_2^w$ are listed in Table \ref{tbl:el_spec}.
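A literal transcription of the two populations, using the central values of Table \ref{tbl:el_spec} (the $\ln\gamma$ entries are exponentiated), together with a helper for total-energy integrals of such spectra; a sketch for orientation only, with function names of our own:

```python
import math

def dn_radio(gamma, N0=120.0, g1=math.exp(3.1), g2=math.exp(12.1), S=1.60):
    """Radio population: power law with sharp cut-offs."""
    return N0 * gamma ** -S if g1 <= gamma <= g2 else 0.0

def dn_wind(gamma, N0=78.6, g0=math.exp(19.5), g1=math.exp(12.96),
            g2=math.exp(22.51), S=3.23, dS=0.43, beta=2.8):
    """Wind population: broken power law with a super-exponential
    low-energy cut-off and a sharp high-energy cut-off."""
    if gamma > g2:
        return 0.0
    index = S if gamma < g0 else S + dS
    return N0 * (gamma / g0) ** -index * math.exp(-((g1 / gamma) ** beta))

MC2_ERG = 8.187e-7  # electron rest energy m c^2 [erg]

def total_energy(dn_dgamma, g_lo, g_hi, n=20000):
    """E = m c^2 * integral of gamma dN/dgamma, trapezoid on a log grid."""
    lg_lo, lg_hi = math.log(g_lo), math.log(g_hi)
    step = (lg_hi - lg_lo) / n
    total, prev = 0.0, None
    for i in range(n + 1):
        g = math.exp(lg_lo + i * step)
        val = g * g * dn_dgamma(g)  # extra g: d(gamma) = gamma d(ln gamma)
        if prev is not None:
            total += 0.5 * (prev + val) * step
        prev = val
    return MC2_ERG * total
```

Note how the super-exponential factor suppresses the wind population below $\gamma_1^w$ while leaving the region around the break $\gamma_0^w$ essentially untouched.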
The total energy of the radio and wind electrons, respectively, is found to be \begin{eqnarray} E_r & = & mc^2 \int_{1}^{\infty} \,\gamma \Diff{N_\text{el}^r}{\gamma}\,\mathrm{d}\gamma = 3.10\times10^{48}\unit{ergs},\\ E_w & = & mc^2 \int_{1}^{\infty} \,\gamma \Diff{N_\text{el}^w}{\gamma}\,\mathrm{d}\gamma =2.28\times10^{48}\unit{ergs}, \end{eqnarray} indicating that the total energy is much smaller than the integrated energy released through the spin-down of the pulsar. The fact that the relic electrons and the wind electrons share roughly equal energies is probably coincidental. \section{Cross Calibration of IACTs \& Fermi} \label{sec:crosscal} The updated model of the SED of the Crab Nebula provides an opportunity for the cross calibration between ground-based air shower experiments and the Fermi Large Area Telescope. The method is demonstrated here with the imaging air Cherenkov telescopes (IACTs) HEGRA, H.E.S.S. and MAGIC, but is applicable to any other ground-based air shower experiment. The energy calibration of IACTs is done indirectly with the help of detailed simulations of air showers and the detector response. However, the remaining systematic uncertainty on the absolute energy scale, typically 15~\%, leads to substantial differences in the observed flux and in the position of cut-offs in the energy spectra between different IACTs, and also between Fermi/LAT and IACTs.\\ Since the observed energy spectra are usually quite broad in energy, the positions of features in the spectra are not useful for cross calibration (and may be time-dependent for some objects). On the other hand, cross calibration between Fermi/LAT and IACTs indirectly provides a means of benefiting from the careful beam-line calibration of the Fermi/LAT \citep[see e.g.][]{2009ApJ...697.1071A}. For this reason, the average magnetic field used in the model was fixed to the Fermi observations.
\\ The cross calibration is now accomplished in the following way: for each IACT an energy scaling factor $s_\mathrm{IACT}$ is introduced such that \begin{equation} E^\prime = E \cdot s_\mathrm{IACT}.\label{eqn:scale} \end{equation} The scaling factor $s_\mathrm{IACT}$ is determined via a $\chi^2$-minimization in which the energy scale is changed according to the formula above until the data points best reproduce the model. The scaling factors for the different instruments are listed in Table \ref{tbl:scale} together with the statistical errors and the reduced $\chi^2$ values of the fit. The statistical uncertainties were obtained by summing the errors of the $\chi^2$-fit and the statistical errors of the model in quadrature. The latter are mainly due to the uncertainties on the $B$-field of Eqn. \ref{eqn:Bfit}. To illustrate the result, Figures \ref{fig:unscale} and \ref{fig:scale} compare the unscaled data points of the Crab nebula with the scaled ones. It is evident that the data points fit the model better after scaling. All scaling factors lie within the aforementioned 15~\% energy uncertainty of the IACTs. The application of the cross calibration eliminates the systematic uncertainty of the energy scale of the IACTs and adjusts the energy scale to that of the Fermi/LAT. Since the model relies on the Fermi/LAT measurements, the Fermi/LAT's absolute energy uncertainty remains. This implies an improvement of the systematic uncertainty on the energy scale of IACT measurements from $\pm 15 ~\%$ to ${}^{+5~\%}_{-10~\%}$. However, the model itself also contributes to the systematic uncertainties. The main uncertainty stems from the fact that a constant magnetic field is assumed, which is very likely not the case.
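The $\chi^2$-minimization over the energy-scale factor of Eqn. \ref{eqn:scale} can be sketched as a simple grid scan (illustrative only; the flux Jacobian of the energy rescaling is neglected here, and all names are ours):

```python
def chi2_scaled(s, energies, flux, err, model):
    """Chi^2 of the data against the model after the shift E' = s * E."""
    return sum(((model(s * e) - f) / de) ** 2
               for e, f, de in zip(energies, flux, err))

def fit_scale(energies, flux, err, model, s_lo=0.8, s_hi=1.2, n=4000):
    """Grid scan for the energy-scale factor s_IACT minimizing chi^2."""
    best_s, best_c = s_lo, float("inf")
    for i in range(n + 1):
        s = s_lo + (s_hi - s_lo) * i / n
        c = chi2_scaled(s, energies, flux, err, model)
        if c < best_c:
            best_s, best_c = s, c
    return best_s, best_c
```

With a toy $E^{-2}$ model and data generated at a true scale factor of $1.05$, the scan recovers the input factor.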
It should also be noted that the cross calibration factors hinge mainly on the high-statistics measurements of the IACTs at low TeV energies, which are produced by electrons that co-exist in the same volume of the nebula as the electrons emitting at lower energies (i.e. in the Fermi/LAT energy range). \begin{figure*}[tbh] \centering \subfigure[]{ \includegraphics[width = 0.45\textwidth]{TeV_IC_unscaled_talk} \label{fig:unscale}} \subfigure[]{ \includegraphics[width = 0.45\textwidth]{TeV_IC_scale_talk} \label{fig:scale}} \caption[]{\subref{fig:unscale} The IC model (solid black line) together with measurements from IACTs and Fermi. No energy scaling is applied. \subref{fig:scale} The same situation as in \subref{fig:unscale} but with the scaling factors of Eqn. \ref{eqn:scale} and Table \ref{tbl:scale} applied. The open squares denote H.E.S.S. points that were not included in the fit due to the high systematic uncertainty of the highest energy measurements.} \end{figure*} \begin{table*} \centering \caption{Energy scaling factors of the IACTs for the cross calibration.} \label{tbl:scale} \begin{tabular}{ccccc} \textbf{Instrument} & \textbf{Scaling factor $s_\mathrm{IACT}$} & \textbf{Stat. error $\Delta s$} & $\chi^2_\mathrm{before}\slash\mathrm{d.o.f.}$& $\chi^2_\mathrm{after}\slash\mathrm{d.o.f.}$\\ \hline \hline Fermi/LAT & $1$ &${}^{+0.05}_{-0.03}$ & -- &$0.49$ \\ HEGRA & $1.042$ &$\pm 0.005$ & $7.652$ &$1.046$ \\ H.E.S.S. & $0.961$ &$\pm0.004 $ & $11.84 $ &$6.476$ \\ MAGIC & $1.03$ &$\pm0.01 $ & $1.671$ &$0.656$ \end{tabular} \end{table*} As a first application of the cross calibration, we derive upper limits on the diffuse $\gamma$-ray background. Both Fermi \citep{2009PhRvL.102r1101A} and H.E.S.S. \citep{2008PhRvL.101z1104A,2009arXiv0905.0105H} have measured the cosmic-ray $e^- + e^+$ spectrum. Unlike Fermi, the telescopes from H.E.S.S.
cannot accurately distinguish between showers induced by electrons (or positrons) and photons, such that up to $\approx 50$~\% of the observed electromagnetic air showers could be induced by photons. Hence, H.E.S.S. actually measures electrons plus diffuse background photons. Taking the difference of the two measurements, we can derive an upper limit on the flux of the $\gamma$-ray background. The scaling factors derived above are now used to convert the IACT data to the energy scale of Fermi, which substantially reduces the systematic uncertainty on the observed flux, given that the electron spectrum follows a soft power law $\propto E^{-3}$. The upper limits were derived by subtracting the two fluxes in the overlapping region of the measurements. This corresponds to the first six H.E.S.S. points of the low-energy analysis in Figure \ref{fig:atic}. The remaining systematic energy uncertainties, denoted by the green and yellow bowties, were taken into account for the derivation of the upper limits: the flux points of the H.E.S.S. measurements were shifted to the maximum value allowed by the systematic uncertainties, while the Fermi points were shifted to the minimum value. Hence, the result represents a conservative approximation of the upper limits. An important result of the cross calibration is that the peak in the spectrum observed by ATIC \citep{2008Natur.456..362C} appears even more unlikely after applying the scaling factors. \begin{figure}[tbh] \centering \includegraphics[width = 0.4\textwidth]{el_egrb} \caption{$e^- + e^+$ spectrum reported by H.E.S.S. and Fermi. The cross calibration is applied; hence the uncertainty on the global energy scale is eliminated.} \label{fig:atic} \end{figure} \section{Introduction} The Crab Nebula is probably the best-studied object in astrophysics (for a recent review see e.g. \citet{2008ARA&A..46..127H}).
It is the remnant of a core-collapse supernova which occurred in 1054 AD at a distance of $d \approx 2 \unit{kpc}$ \citep{1968AJ.....73..535T}. Near its geometric center resides a pulsar which continuously injects a wind of ultra-relativistic particles into the nebula. The wind terminates at a shock front where the electrons are pitch-angle isotropized, forming a broad power law in energy. Observations of the synchrotron and inverse Compton nebula have been carried out in every accessible wavelength band, resulting in a remarkably well known spectral energy distribution (SED). The Large Area Telescope (LAT) onboard the Fermi satellite \citep{2009arXiv0911.2412T} has recently measured the $\gamma$-ray flux of the Crab between $100\unit{MeV}$ and $300\unit{GeV}$ with unprecedented accuracy. These observations can be used for a cross calibration between the Fermi/LAT and ground-based air shower experiments. This eliminates the systematic uncertainty on the absolute energy scale, typically 15~\% for imaging air Cherenkov telescopes (IACTs). For the cross calibration we use the approach presented in \citet{Meyer}: based on the work of \citet{1998ApJ...503..744H}, we extract the distribution of electrons from the synchrotron spectrum under the assumption of a constant magnetic field strength throughout the nebula. Using this electron distribution in conjunction with the seed photon fields as extracted from observations, we obtain a detailed prediction for the inverse Compton emission. The average magnetic field inside the nebula is determined by fixing the model to the Fermi/LAT observations. The model can directly be used as the basis for the cross calibration. For this purpose, energy scaling factors are derived that correct the measurements of the ground-based instruments to the model. The article is organized as follows.
In section \ref{sec:SED} the model for the SED is briefly reviewed; the details and the discussion of the underlying electron spectrum are omitted, and we refer the reader to \citet{Meyer}. The results of the cross calibration are presented in section \ref{sec:crosscal}, together with an application to extract limits on the diffuse $\gamma$-ray background at TeV energies. Section \ref{sec:summary} summarizes the article and gives a short outlook on possible future work. \section{Summary and Outlook} \label{sec:summary} An updated model for the SED of the Crab nebula has been introduced. It incorporates a new electron spectrum that consists of two electron populations that have been studied extensively in the past \citep[see e.g.][]{1996MNRAS.278..525A, 1999A&A...346L..49A}. The average $B$-field was derived by fixing the IC flux to the Fermi/LAT measurements. The model makes it possible to derive energy scaling factors that eliminate the systematic energy uncertainties of ground-based air shower experiments. An application of the cross calibration to the diffuse $\gamma$-ray background has been presented and upper limits have been derived. Moreover, the excess measured by the ATIC collaboration seems unlikely in the light of the scaled H.E.S.S. observations. Nevertheless, an improved model for the emission of the Crab is conceivable. Such a model could comprise a spatially varying $B$-field, accounting for the fact that electrons emitting at different energies are exposed to different field strengths. To conclude, a way to establish the Crab as a \textit{true} standard candle in $\gamma$-ray astronomy has been presented, and versatile applications to other observations of bright steady or pulsed sources (e.g. the Crab Pulsar, the Galactic Center, other pulsar wind nebulae, etc.) are imaginable. Moreover, an application of the cross calibration will help to improve dark matter searches and constraints on the extragalactic background light.
\section{Introduction} The time-dependent mean-field Gross-Pitaevskii (GP) equation can accurately describe many static and dynamic properties of a harmonically trapped Bose-Einstein condensate (BEC) \cite{rev,rev-2,lpl-1,lpl-2,lpl-3,lpl-4,ref-a01,ref-a02,ref-a03,ref-a04,ref-b01,ref-b02,ref-b03,ref-b04,ref-b05,ref-b06,balaz2011}. However, the numerical solution of the three-dimensional (3D) GP equation can often be a difficult task due to a large nonlinear term \cite{CPC,CPC-1}. Fortunately, in many experimental situations the 3D axially symmetric harmonic trap is highly anisotropic, so that the BEC has either a cigar or a disk shape \cite{geom}. In these cases the essential statics and dynamics of the BEC take place in reduced dimensions. By integrating out the unimportant dimensional variable(s), reduced GP equations have been derived in lower dimensions \cite{luca,other,other-1}, which give a faithful description of the BEC in disk and cigar shapes. For disk and cigar shapes the reduced GP equation is written in two (2D) and one (1D) dimensions, respectively. The numerical solution of such a 2D or 1D equation, although simpler than that of the original 3D equation, remains complex due to the nonlinear nature of the GP equation. Hence, for small values of the nonlinearity parameter, a Gaussian variational approximation is very useful for the solution of these equations \cite{var}. The alkali-metal atoms used in early BEC experiments have negligible dipole moments. However, most bosonic atoms and molecules have large dipole moments, and $^{52}$Cr \cite{lahaye,pfau,pfau-1,pfau-4} and $^{164}$Dy \cite{dy,dy-2} BECs, with a larger long-range dipolar interaction superposed on the short-range atomic interaction, have been realized. Other atoms, like $^{166}$Er \cite{otherdi,otherdi-1}, and molecules, such as $^7$Li-$^{133}$Cs \cite{becmol}, with much larger dipole moments are being considered for BEC experiments.
A 3D GP equation for a dipolar BEC with a nonlocal nonlinear interaction has been suggested \cite{lahaye} and successfully used to describe many properties of these condensates \cite{dipsol,jb,YY,Yi2000,Dutta2007-1,Dutta2007-2,Dutta2007-3,Dutta2007-5,Dutta2007-6}. The applicability of the nonlocal GP equation to the case of {dipolar BEC} has been a subject of intensive study \cite{lahaye}. After a detailed analysis, You and Yi \cite{YY,Yi2000} concluded that the GP equation is valid for the {dipolar BEC}. Further support on the validity of this equation came from the study of Bortolotti {\it et al.} \cite{BB,BB-1}. They compared the solution of the dipolar GP equation with the results of diffusive Monte Carlo calculations and found good agreement between the two. However, the 3D GP equation for a dipolar BEC with the nonlocal dipolar interaction has a complex structure and its numerical solution, involving the Fourier transformation of the dipolar nonlinear term to momentum space \cite{jb,YY,Yi2000}, is even more challenging than that of the GP equation of a non-{dipolar BEC}. Here we reconsider the dimensional reduction \cite{SS,deu} of the GP equation to 1D form for cigar-shaped dipolar BEC and obtain the precise 1D potential with a dipolar contact-interaction term. Previous derivations \cite{SS,deu} of the 1D reduced equation for {dipolar BEC} did not include the proper contact-interaction term, lacking which the 1D model will not provide a correct description of the full 3D system. We also consider the reduced 2D GP equation \cite{fisch,PS} for a disk-shaped dipolar BEC. Though these reduced GP equations for dipolar BEC are computationally less expensive than their 3D counterparts, the numerical solution procedure remains complicated due to repeated forward and backward Fourier transformations of the non-local dipolar term. As an alternative, here we suggest time-dependent Gaussian Lagrangian variational approximation of the 1D and 2D reduced equations. 
A direct derivation of the variational Lagrangian density of the reduced equations is not straightforward due to nonlocal integrals involving error functions. We present an indirect evaluation of the Lagrangian density that avoids this complex procedure. Thus, the present variational approximation involves only algebraic quantities, without requiring any Fourier transformation to momentum space. In the case of dipolar BECs of Cr and Dy atoms we consider the numerical solution of the 3D and the reduced 1D and 2D GP equations for cigar and disk shapes to demonstrate the appropriateness of the solution of the reduced equations. The variational approximation of the reduced equations provides results for the density, root-mean-square (rms) size, chemical potential, and breathing oscillation dynamics in good agreement with the numerical solution of the reduced and full 3D GP equations. \section{Analytical formulation} \subsection{3D GP Equation} We study a {dipolar BEC} of $N$ atoms, each of mass $m$, using the dimensionless GP equation \cite{pfau,pfau-1,pfau-4} \begin{align} \label{gp3d} i \frac{\partial \phi({\bf r},t)}{\partial t} &\, = \biggl[ -\frac{1}{2}\nabla^2 +V({\bf r}) + 4\pi a N\vert \phi({\bf r},t)\vert^2 \notag \\ &\, + N \int U_{dd}({\bf r -r'})\vert\phi({\bf r'},t)\vert^2d^3{ r'} \biggr] \phi({\bf r},t), \end{align} with dipolar interaction $U_{dd}({\bf R}) = 3 a_{dd}(1-3\cos^2\theta)/R^3$, where ${\bf R=r-r'}$. Here $V({\bf r})$ is the confining axially symmetric harmonic potential, $\phi({\bf r},t)$ the wave function at time $t$ with normalization $\int \vert\phi({\bf r},t)\vert^2 d {\bf r}=1$, $a$ the atomic scattering length, and $\theta$ the angle between $\bf R$ and the polarization direction $z$.
The constant $a_{dd} =\mu_0\bar \mu^2 m /(12\pi \hbar^2)$ is a length characterizing the strength of dipolar interaction and its experimental value for $^{52}$Cr is $15a_0$ \cite{pfau,pfau-1,pfau-4}, with $a_0$ the Bohr radius, $\bar \mu$ the (magnetic) dipole moment of a single atom, and $\mu_0$ the permeability of free space. In equation (\ref{gp3d}) length is measured in units of characteristic harmonic oscillator length $l\equiv \sqrt{\hbar/m\omega}$, angular frequency of trap in units of $\omega$, time $t$ in units of $\omega^{-1}$, and energy in units of $\hbar\omega$. The axial and radial angular frequencies of the trap are $\Omega_z\omega$ and $\Omega_\rho\omega$, respectively. The dimensionless 3D harmonic trap is \begin{equation} V({\bf r}) = \frac{1}{2}\Omega_\rho^2\rho^2 + \frac{1}{2} \Omega_z^2 z^2, \end{equation} where ${\bf r}\equiv (\vec\rho,z)$, with $\vec\rho$ the radial coordinate and $z$ the axial coordinate. The Lagrangian density of equation (\ref{gp3d}) is given by \begin{align} {\mathcal L} &\, = \frac{i}{2}( \phi \phi^{\star}_t - \phi^{\star}\phi_t)+\frac{\vert\nabla\phi\vert^2}{2} + V({\bf r})\vert\phi\vert^2 \notag \\ &\, + 2\pi aN\vert\phi\vert^4 + \frac{N}{2}\vert \phi\vert^2\int U_{dd}({\mathbf r}- {\mathbf r'})\vert\phi({\mathbf r'})\vert^2 d^3{ r}' .\label{eqn:vari} \end{align} We use the Gaussian ansatz \cite{var,jb,YY,Yi2000} \begin{equation}\label{anz3d} \phi({\bf r},t)= \frac{\pi^{-3/4}}{w_\rho \sqrt {w_z}} \exp\left(-\frac{\rho^2}{2w_\rho^2} - \frac{z^2}{2w_z^2} +i\alpha\rho^2 +i\beta z^2 \right) \end{equation} for a variational calculation, where $w_\rho$ and $w_z$ are time-dependent radial and axial widths, and $\alpha$ and $\beta$ time-dependent phases. 
The effective Lagrangian $L\equiv \int {\mathcal L}d^3{ r}$ (per particle) becomes \begin{eqnarray} L & = &\, \left(w_\rho^2\dot{\alpha} + \frac{w_z^2\dot{\beta}}{2}\right) +\frac{\Omega_\rho^2 w_\rho^2}{2} +\frac{\Omega_z^2 w_z^2}{4} + \frac{1}{2{w_\rho^2}} + \frac{1}{4w_z^2} \nonumber \\ &&\, + 2w_\rho^2 \alpha^2 + w_z^2\beta^2 + \frac{N }{\sqrt{2 \pi} w_\rho^2w_z} \left[ {a}-{a_{dd}} f(\kappa)\right], \label{lag:eff} \end{eqnarray} with \begin{align} & f(\kappa)= \frac{1+2\kappa^2-3\kappa^2d(\kappa)}{(1-\kappa^2)}, \\ & d(\kappa)= \frac{\mbox{atanh}\sqrt{1-\kappa^2}}{\sqrt{1-\kappa^2}}, \;\; \kappa=\frac{w_\rho}{w_z}. \end{align} The Euler-Lagrange equations for the variational parameters $w_\rho, w_z, \alpha$ and $\beta$ yield the following equations for the widths $w_\rho$ and $w_z$: \begin{eqnarray} && \ddot{w}_{\rho} +\Omega_\rho^2 w_\rho = \frac{1}{w_\rho^3} +\frac{ N}{\sqrt{2\pi}} \frac{ \left[2{a} - a_{dd} {g(\kappa) }\right] }{w_\rho^3w_{z}} , \label{f1} \\ && \ddot{w}_{z} +\Omega_z^2 w_z = \frac{1}{w_z^3}+ \frac{ 2N}{\sqrt{2\pi}} \frac{ \left[{a}-a_{dd} c(\kappa)\right] }{w_\rho^2w_z^2} , \label{f2} \end{eqnarray} with \begin{align} & g(\kappa)=\frac{2-7\kappa^2-4\kappa^4+9\kappa^4 d(\kappa)}{(1-\kappa^2)^2}, \\ & c(\kappa) =\frac{1+10\kappa^2 -2\kappa^4 -9\kappa^2 d(\kappa)}{(1-\kappa^2)^2}. \end{align} The chemical potential $\mu$ for a stationary state is \begin{align} \mu = &\, \frac{1}{2w_\rho^2}+ \frac{1}{4w_z^2}+\frac{2N[a-a_{dd}f(\kappa)]}{\sqrt{2\pi} w_zw_\rho^2} +\frac{\Omega_\rho^2 w_\rho^2}{2} +\frac{\Omega_z^2 w_z^2}{4}.
\end{align} \subsection{1D reduction} For a cigar-shaped {dipolar BEC} with a strong radial trap $(\Omega_\rho >\Omega_z) $ one can write the following effective 1D equation (details given in Appendix) \begin{align}\label{gp1d} & i\frac{\partial \phi_{1D}(z,t)}{\partial t}=\biggr[-\frac{\partial_z^2}{2}+\frac{\Omega_z^2 z^2}{2}+ \frac{2 aN} { d_\rho^2}\vert\phi_{1D}\vert^2+ \frac{2a_{dd}N}{d_\rho^2} \nonumber \\ &\times \int_{-\infty}^{\infty}\frac{dk_z}{2\pi}e^{ik_z z}\tilde n(k_z)s_{1D}\left(\frac{k_z d_\rho}{\sqrt 2}\right)\biggr]\phi_{1D}(z,t) , \end{align} where $s_{1D}$ is defined by equation (\ref{zeta}) and $d_\rho\equiv 1/\sqrt{\Omega_\rho}$ is the radial harmonic oscillator length. To solve equation (\ref{gp1d}), we use the Gaussian variational ansatz \begin{equation}\label{anzv1d} \phi_{1D}(z)= \frac{\pi^{-1/4}}{\sqrt{w_z}}\exp\left[-\frac{z^2}{2w_z^2}+i\beta z^2\right] . \end{equation} From equation (\ref{anz1d}) we see that the variational 1D ansatz (\ref{anzv1d}) corresponds to the following 3D wave function \begin{equation}\label{1d3d} \phi({\bf r},t)= \frac{\pi^{-3/4}}{d_\rho \sqrt {w_z}} \exp\left(-\frac{\rho^2}{2d_\rho^2} - \frac{z^2}{2w_z^2} +i\beta z^2 \right) . \end{equation} The present variational wave function (\ref{1d3d}) is a special case of the 3D variational wave function (\ref{anz3d}) with $w_\rho=d_\rho$ and $\alpha=0$. Hence, the 1D variational Lagrangian can be written from the 3D Lagrangian (\ref{lag:eff}), (using $w_\rho=d_\rho$ and $\alpha=0$,) as \begin{align} L_{1D} & = \frac{w_z^2\dot{\beta}}{2} + \frac{1}{4w_z^2} + w_z^2\beta^2+\frac{\Omega_z^2 w_z^2}{4} \notag\\ & + \frac{N }{\sqrt{2 \pi} d_\rho^2w_z} \left[ {a}-{a_{dd}} f(\kappa_0)\right] ; \quad \kappa_0=\frac{d_\rho}{w_z}, \label{1dlag} \end{align} where we have removed the constant terms. 
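The anisotropy functions $d(\kappa)$, $f(\kappa)$, $g(\kappa)$, and $c(\kappa)$ above are elementary to evaluate numerically. The sketch below (assuming the standard analytic continuation of $d(\kappa)$ to $\kappa>1$ via $\arctan$) also makes the limits $f(\kappa\to 0)\to 1$ and $f(\kappa\to\infty)\to -2$, used below for the quasi-1D and quasi-2D regimes, easy to check:

```python
import math

def d_k(k):
    """d(kappa); for kappa > 1 we assume the standard continuation
    atan(sqrt(k^2 - 1)) / sqrt(k^2 - 1)."""
    if k == 1.0:
        return 1.0
    if k < 1.0:
        s = math.sqrt(1.0 - k * k)
        return math.atanh(s) / s
    s = math.sqrt(k * k - 1.0)
    return math.atan(s) / s

# f, g, c have removable singularities at kappa = 1, not handled here
def f_k(k):
    return (1.0 + 2.0 * k ** 2 - 3.0 * k ** 2 * d_k(k)) / (1.0 - k ** 2)

def g_k(k):
    return (2.0 - 7.0 * k ** 2 - 4.0 * k ** 4
            + 9.0 * k ** 4 * d_k(k)) / (1.0 - k ** 2) ** 2

def c_k(k):
    return (1.0 + 10.0 * k ** 2 - 2.0 * k ** 4
            - 9.0 * k ** 2 * d_k(k)) / (1.0 - k ** 2) ** 2
```

Such helpers can be plugged directly into the width equations (\ref{f1}) and (\ref{f2}) for a numerical solution of the stationary widths.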
This inductive derivation of the 1D Lagrangian (\ref{1dlag}) avoids the construction of a Lagrangian density involving error functions in the 1D potential (\ref{1dpotx}) and its subsequent integration to obtain the Lagrangian. The Euler-Lagrange equation for the variational parameter $w_z$ of the Lagrangian (\ref{1dlag}) is \begin{eqnarray}\ddot w_z + \Omega_z ^2 w_z = \frac{1}{w_z^{3}} +\frac{2 N[a-a_{dd}c(\kappa_0)]} {\sqrt {2\pi}w_z^2 d_\rho^2}. \label{v1d} \end{eqnarray} The variational chemical potential is given by \begin{eqnarray}\mu= \frac{1}{4w_z^2}+\frac{2N[a-a_{dd}f(\kappa_0)]}{\sqrt{2\pi} w_z d_\rho^2} +\frac{\Omega_z^2 w_z^2}{4 }. \end{eqnarray} Not only are the above variational results simple and a good approximation to the 1D GP equation; much physical insight about the system can also be obtained from the variational Lagrangian (\ref{1dlag}). In a quasi-1D system, the axial width is much larger than the transverse oscillator length: $w_z\gg d_\rho$. Consequently, $\kappa_0\to 0$ and $f(\kappa_0)\to 1$. From equation (\ref{1dlag}), we see that the interaction term becomes in this limit $N(a-a_{dd})/(\sqrt{2 \pi} d_\rho^2w_z)$. In equation (\ref{gp1d}), the dipolar term involves a nonlocal integral. However, the variational approximation suggests that the effect of the dipolar interaction integral is to reduce the contact-interaction term in equation (\ref{gp1d}), replacing the scattering length $a$ by $(a-a_{dd})$. Immediately, one can conclude that the system effectively becomes attractive for $a_{dd}>a$. Hence one can have the formation of a bright soliton even for a positive (repulsive) scattering length $a$, provided that $a_{dd}>a$.
\subsection{2D reduction} For a disk-shaped {dipolar BEC} with a strong axial trap ($\Omega_z>\Omega_\rho$), the condensate is assumed to be in the ground state $\phi(z)= \exp(-z^2/2d_z^2)/{(\pi d_z^2)}^{1/4}$ of the axial trap and the wave function $\phi({\bf r}) $ can be written as \cite{fisch,PS} \begin{equation}\label{anz2d} \phi({\bf r})= \frac{1}{{(\pi d_z^2)}^{1/4}}\exp \left(-\frac{z^2}{2d_z^2}\right) \phi_{2D}(x,y), \end{equation} where $ \phi_{2D}(x,y)$ is the 2D wave function and $d_z=\sqrt{1/\Omega_z}$. Using ansatz (\ref{anz2d}) in equation (\ref{gp3d}), the $z$ dependence can be integrated out to obtain the following effective 2D equation \cite{fisch,PS} \begin{align} \label{gp2d} &\, i\frac{\partial \phi_{2D}(\vec \rho,t)}{\partial t} = \biggl[-\frac{\nabla_\rho^2}{2} +\frac{\Omega_\rho^2\rho^2}{2} +\frac{4\pi a N}{\sqrt{2\pi}d_z} \vert\phi_{2D}\vert^2 +\frac{4\pi a_{dd} N}{\sqrt{2\pi}d_z} \nonumber \\ &\, \quad \times \int \frac{d^2 k_\rho}{(2\pi)^2} \exp ({i{\bf k}_\rho \cdot {\vec \rho}}) {\tilde n}({\bf k_\rho}) h_{2D}\left(\frac{k_\rho d_z} {\sqrt 2}\right) \biggr] \phi_{2D}(\vec \rho,t), \\ &\, \tilde n ({\bf k}_\rho) = \int \exp \left(i {\bf k}_\rho \cdot \vec \rho \right) |\phi_{2D}(\vec\rho) |^2 d \vec\rho, \end{align} where $h_{2D}(\xi) = 2-3\sqrt\pi \xi e^{\xi^2}{\mbox{erfc}}(\xi)$ \cite{PS}, ${\bf k}_\rho \equiv (k_x, k_y)$, and the dipolar term is written in Fourier space. To solve equation (\ref{gp2d}), we use the Gaussian ansatz \begin{equation}\label{anzv2d} \phi_{2D}(\rho)=\frac{1}{w_\rho\sqrt \pi}\exp\left(-\frac{\rho^2}{2w_\rho^2}+i\alpha \rho^2\right). \end{equation} From equation (\ref{anz2d}) we see that the 2D wave function (\ref{anzv2d}) corresponds to the following 3D wave function \begin{equation}\label{2d3d} \phi({\bf r},t)= \frac{\pi^{-3/4}}{w_\rho \sqrt {d_z}} \exp\left(-\frac{\rho^2}{2w_\rho^2} - \frac{z^2}{2d_z^2} +i\alpha\rho^2 \right) .
\end{equation} The present variational wave function (\ref{2d3d}) is a special case of the 3D variational wave function (\ref{anz3d}) with $w_z=d_z$ and $\beta=0$. Hence, the 2D variational Lagrangian can be written from the 3D Lagrangian (\ref{lag:eff}) as \begin{align}\label{2dlag} L_{2D} &\, = {w_\rho^2\dot{\alpha}} +\frac{ w_\rho^2\Omega_\rho^2}{2} + \frac{1}{2w_\rho^2} + 2w_\rho^2\alpha^2 \notag \\ &\,+ \frac{N }{\sqrt{2 \pi} w_\rho^2d_z} \left[ {a}-{a_{dd}} f(\bar \kappa)\right]; \quad \bar \kappa=\frac{w_\rho}{d_z}, \end{align} where we have removed the constant terms. The Euler-Lagrange variational equation for the width $w_\rho$ becomes \begin{eqnarray} \ddot{w}_{\rho}+ {w_\rho\Omega_\rho^2} = \frac{1}{w_\rho^3} +\frac{ N}{\sqrt{2\pi}} \frac{ \left[2{a} - a_{dd} {g(\bar \kappa) }\right] }{w_\rho^3d_{z}}. \label{v2d} \end{eqnarray} The chemical potential $\mu$ for a stationary state is \begin{eqnarray} \mu& = & \frac{1}{2w_\rho^2}+ \frac{2N[a-a_{dd}f(\bar \kappa)]}{\sqrt{2\pi} d_zw_\rho^2} +\frac{ w_\rho^2\Omega_\rho^2}{2}. \end{eqnarray} In a quasi-2D system, the radial width is much larger than the axial oscillator length: $w_\rho\gg d_z$. Consequently, $\bar \kappa\to \infty$ and $f(\bar \kappa)\to -2$. From equation (\ref{2dlag}), we see that the interaction term becomes in this limit $N(a+2a_{dd})/(\sqrt{2 \pi} w_\rho^2d_z)$. The variational approximation suggests that the effect of the dipolar interaction in equation (\ref{gp2d}) is to increase the contact interaction term, replacing $a$ by $(a+2a_{dd})$. Hence, for positive $a$, there cannot be any bright soliton in 2D, a result found earlier from a solution of the 2D GP equation (\ref{gp2d}) and from Bogoliubov theory \cite{tick}. However, the sign of the dipolar term in the GP equation can effectively be changed by rotating the external field that orients the dipoles much faster than any other relevant time scale of the system \cite{nath}.
In this fashion Nath {\it et al.} \cite{nath} suggest changing the dipolar interaction term by a factor of $-1/2$, which changes the effective scattering length in the Lagrange variational approximation to $(a-a_{dd})$, as discussed in the 1D case above, leading to the formation of bright 2D solitons for $a_{dd}>a$. These solitons were obtained by Nath {\it et al.} from a solution of the 2D GP equation (\ref{gp2d}). \section{Numerical results} We solve the 1D, 2D, and 3D GP equations employing imaginary- and real-time propagation with the Crank-Nicolson method \cite{CPC,CPC-1}. The dipolar interaction is evaluated by fast Fourier transform \cite{jb,YY}. We present results for $^{52}$Cr and $^{164}$Dy atoms. The $^{52}$Cr atom has a moderate dipole moment with $a_{dd}=15a_0$ \cite{pfau,pfau-1,pfau-4}, while the $^{164}$Dy atom has a large dipole moment with $a_{dd}=130a_0$ \cite{dy,dy-2}. In both cases we present results for {dipolar BEC}s of up to 10,000 atoms for $0 < a < 10$ nm and choose the frequency $\omega$ such that the oscillator length is $l=1~\mu$m. First, we present the results for density profiles obtained from a solution of the reduced 1D and 2D equations and compare them with the full 3D results. It is known that the densities obtained from the reduced equations agree well with the full 3D density as the nonlinearity tends to zero and/or the trap asymmetry becomes extreme \cite{luca}. Hence in this study we consider a moderately small trap asymmetry and a relatively large nonlinearity of experimental interest. In the cigar (1D) case we consider $^{52}$Cr atoms with $a=6$ nm, and in the disk (2D) case we consider $^{164}$Dy atoms with $a=6$ nm.
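To illustrate how the 2D variational energy can be explored without evaluating the nonlocal dipolar integral, the sketch below performs a grid search on the energy obtained from Lagrangian (\ref{2dlag}) with the time-derivative terms dropped, for a disk-shaped Dy condensate with parameters as in the disk case above ($N=1000$, $a=6$ nm, $a_{dd}=130a_0$, $\Omega_\rho=1$, $\Omega_z=4$, oscillator units of $l=1~\mu$m). The standard dipolar anisotropy form of $f(\kappa)$, with $f(0)=1$ and $f(\infty)=-2$, is assumed here.

```python
import math

def f_dd(k):
    # standard dipolar anisotropy function: f(0) = 1, f(inf) = -2 (assumed form)
    if abs(k - 1.0) < 1e-9:
        return 0.0
    if k < 1.0:
        d = math.atanh(math.sqrt(1.0 - k * k)) / math.sqrt(1.0 - k * k)
    else:
        d = math.atan(math.sqrt(k * k - 1.0)) / math.sqrt(k * k - 1.0)
    return (1.0 + 2.0 * k * k - 3.0 * k * k * d) / (1.0 - k * k)

def energy_2d(w, N=1000, a=0.006, add=0.006877, om_rho=1.0, om_z=4.0):
    # variational energy from Lagrangian (2dlag) with time-derivative terms dropped
    d_z = 1.0 / math.sqrt(om_z)
    return (0.5 / w**2 + 0.5 * om_rho**2 * w**2
            + N * (a - add * f_dd(w / d_z))
            / (math.sqrt(2.0 * math.pi) * w**2 * d_z))

# grid search for the minimum of the energy over the radial width w_rho
ws = [0.5 + 0.01 * i for i in range(951)]      # w_rho in [0.5, 10] oscillator units
Es = [energy_2d(w) for w in ws]
i_min = Es.index(min(Es))
w_star = ws[i_min]
```

The minimum found this way is only metastable: at very small widths the attractive part of the dipolar interaction ($f\to 1$) makes the energy unbounded from below, the variational signature of the collapse discussed for attractive dipolar systems.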
\begin{figure}[!ht] \begin{center} \includegraphics[width=.99\linewidth]{fig1.pdf} \end{center} \caption{ Linear density of a cigar-shaped $^{52}$Cr {dipolar BEC} of 1,000 atoms with $a=6$ nm, with trap parameters $\Omega_z=1$ and (a) $\Omega_\rho =4$ and (b) $\Omega_\rho =9$, from a numerical (N) solution of the 3D equation (\ref{gp3d}) and 1D equation (\ref{gp1d}), and its variational (V) result. Radial density of a disk-shaped $^{164}$Dy {dipolar BEC} of 1,000 atoms with $a=6$ nm, with trap parameters $\Omega_\rho=1$ and (c) $\Omega_z =4$ and (d) $\Omega_z =9$, from a numerical solution of the 3D equation (\ref{gp3d}) and 2D equation (\ref{gp2d}), and its variational result. } \label{fig1} \end{figure} In Figs. \ref{fig1} (a) and (b), we plot results for the linear density of a cigar-shaped $^{52}$Cr {dipolar BEC} of 1,000 atoms as calculated from the numerical solution of the 3D equation (\ref{gp3d}) and the 1D equation (\ref{gp1d}) and its variational result (\ref{v1d}) for $\Omega_z=1$ and $\Omega_\rho = 4$ and 9. We find that, as the trap asymmetry increases upon changing $\Omega_\rho$ from 4 to 9, the agreement between the 3D and 1D models improves. In Figs. \ref{fig1} (c) and (d), we plot results for the radial density of a disk-shaped $^{164}$Dy {dipolar BEC} of 1,000 atoms as calculated from the numerical solution of the 3D equation (\ref{gp3d}) and the 2D equation (\ref{gp2d}) and its variational approximation (\ref{v2d}) for $\Omega_\rho=1$ and $\Omega_z = 4$ and 9. We find that, with the increase of the trap asymmetry from $\Omega_z =4$ to 9, the agreement between the 3D and 2D models also improves. In all cases the variational results of the reduced 1D and 2D equations are in good agreement with those of the full 3D model.
Having established the appropriateness of the reduced 1D and 2D equations in the cigar and disk shapes, we note that, although the numerical solution of these reduced GP equations is simpler than that of the full 3D GP equation, it is still complicated due to the presence of the nonlocal dipolar interaction. The variational approximation of these equations presented here is relatively simple and can be used for an approximate solution of these equations. Now we test the variational results of the reduced 1D and 2D equations by comparing them with the numerical solution of these equations. \begin{figure}[!ht] \begin{center} \includegraphics[width=.99\linewidth]{fig2.pdf} \end{center} \caption{The numerical (N) and variational (V) rms size $\langle \rho \rangle $ versus scattering length $a$ of a disk-shaped {dipolar BEC} of 10,000 (a) $^{52}$Cr and (b) $^{164}$Dy atoms for trap parameters $\Omega_\rho =1$ and $\Omega_z=4$ and 9 from a solution of the reduced 2D GP equation (\ref{gp2d}). The corresponding chemical potential $\mu$ in these cases for (c) $^{52}$Cr and (d) $^{164}$Dy atoms. } \label{fig2} \end{figure} In Fig. \ref{fig2} we present the results for the rms size $\langle \rho \rangle$ and chemical potential $\mu$ of a disk-shaped $^{52}$Cr and $^{164}$Dy {dipolar BEC} of 10,000 atoms with the trap parameters $\Omega_\rho=1$ and $\Omega_z=4$ and 9 for $0 < a < 10$ nm as calculated from the numerical and variational approaches to the reduced 2D equation (\ref{gp2d}). In Fig. \ref{fig3} we exhibit the results for the rms size $\langle z \rangle$ and chemical potential $\mu$ of a cigar-shaped $^{52}$Cr and $^{164}$Dy {dipolar BEC} of 10,000 atoms with the trap parameters $\Omega_z=1$ and $\Omega_\rho=4$ and 9 for $0 < a < 20$ nm as calculated from the numerical and variational approaches to the reduced 1D equation (\ref{gp1d}).
\begin{figure}[!ht] \begin{center} \includegraphics[width=.99\linewidth]{fig3.pdf} \end{center} \caption{The numerical (N) and variational (V) rms length $\langle z \rangle $ versus scattering length $a$ of a cigar-shaped {dipolar BEC} of 10,000 (a) $^{52}$Cr and (b) $^{164}$Dy atoms for trap parameters $\Omega_z =1$ and $\Omega_\rho=4$ and 9 from a solution of the reduced 1D GP equation (\ref{gp1d}). The corresponding chemical potential $\mu$ in these cases for (c) $^{52}$Cr and (d) $^{164}$Dy atoms. } \label{fig3} \end{figure} The dipolar interaction changes from strongly attractive in the extreme cigar shape ($\Omega _\rho \gg \Omega_z$) to strongly repulsive in the extreme disk shape ($\Omega _\rho \ll \Omega_z$), and its effect is minimal (nearly zero) for $\Omega _\rho$ slightly less than $\Omega _z$. In Fig. \ref{fig2} the dipolar interaction is slightly attractive for $\Omega_z=4$ and $\Omega_\rho=1$. Hence, in the absence of any short-range interaction ($a=0$), the system will collapse and no stable solution of the GP equation can be obtained. For Cr atoms the dipolar interaction is weak, and for $a\ge 1$ nm the short-range repulsion for 10,000 atoms surpasses the dipolar attraction, so that a stable state can be obtained for $\Omega_z=4$. For Dy atoms the dipolar interaction is stronger, and a stable state can be obtained only for $a\ge 2$ nm for $\Omega_z=4$. For $\Omega_z=9$, the dipolar interaction for both Cr and Dy atoms is repulsive and a stable state is obtained in this case for $a>0$. In Fig. \ref{fig3} the dipolar interaction is attractive for both $\Omega_\rho=4$ and 9. Hence the {dipolar BEC} can be stable only for a scattering length $a$ greater than a critical value. This is why the curves in this figure start above this critical value. This critical value is larger for Dy atoms with $\Omega_\rho=9$ than for Cr atoms with $\Omega_\rho=4$, as can be seen in Fig. \ref{fig3}.
As there is no real collapse in 1D models with a cubic nonlinearity, the collapse must be confirmed by solving the full 3D GP equation. \begin{figure}[!ht] \begin{center} \includegraphics[width=.99\linewidth]{fig5.pdf} \end{center} \caption{ The rms size $\langle z \rangle $ of a cigar-shaped Cr {dipolar BEC} of 1,000 atoms versus time $t$ for (a) $\Omega_\rho =4$ and (b) 9 from numerical (N) and variational (V) results of the reduced 1D equation. The rms size $\langle \rho \rangle $ of a disk-shaped Dy {dipolar BEC} of 1,000 atoms versus time $t$ for (c) $\Omega_z =4$ and (d) 9 from numerical and variational results of the reduced 2D equation. The oscillation was initiated by jumping the scattering length $a$ from 6 nm to $6.15$ nm for Cr ($6.3$ nm for Dy) at $t=0$. } \label{fig5} \end{figure} Next we study, by numerical and variational solutions of the reduced 1D and 2D equations, the breathing-oscillation dynamics of the four cigar- and disk-shaped Cr and Dy {dipolar BEC}s shown in Fig. \ref{fig1}, initiated by a small change of the scattering length. This can be implemented experimentally by a Feshbach resonance \cite{fesh}. In Fig. \ref{fig5} this dynamics is shown for a cigar-shaped Cr {dipolar BEC} of 1,000 atoms for (a) $\Omega_\rho =4$ and (b) $\Omega_\rho =9$ from a solution of the reduced 1D equation, and for a disk-shaped Dy {dipolar BEC} of 1,000 atoms for (c) $\Omega_z =4$ and (d) $\Omega_z =9$ from a solution of the reduced 2D equation. In these figures we also show the results from a numerical solution of the 3D Eq. (\ref{gp3d}). The agreement between the numerical and variational results is good in all cases. We also calculated the angular frequencies of these oscillations. In the case of Cr in Figs. \ref{fig5} (a) and (b), the axial frequencies are 1.75 (variational, 1D), 1.76 (numerical, 1D) and 1.63 (numerical, 3D), and in the case of Dy in Figs.
\ref{fig5} (c) and (d), the radial frequencies are 1.93 (variational, 2D), 1.89 (numerical, 2D) and 1.76 (numerical, 3D). In the weakly interacting (quasi-linear) limit, these angular frequencies are expected to be 2 \cite{stringari}. The deviation from this value is due to the large nonlinearity of the {dipolar BEC}s considered here. \section{Conclusion} The usual GP equation provides a good description of the statics and dynamics of a normal non{dipolar BEC}. For a {dipolar BEC} the numerical solution of the GP equation is a difficult task due to the nonlocal dipolar interaction. For a cigar- or disk-shaped {dipolar BEC}, the reduced 1D and 2D equations provide an alternative to the full 3D equation. Nevertheless, the solution of these reduced equations is also challenging, as it involves forward and inverse Fourier transformations. As an alternative, we suggest a time-dependent variational scheme for these reduced equations, which does not require any Fourier transformation. The variational approximation of these reduced equations provides results for stationary cigar- and disk-shaped {dipolar BEC}s, as well as for their breathing oscillations, in good agreement with the numerical solution of the respective GP equations. This is illustrated for large Cr and Dy {dipolar BEC}s of 10,000 atoms and large atomic scattering lengths $a$ up to 20 nm. We also study the breathing oscillation of a bright soliton of 1,000 Cr atoms using the numerical solution of the 3D equation as well as the numerical and variational approaches to the 1D equation. A typical {dipolar BEC} considered here corresponds to a large short-range cubic nonlinearity of about $4\pi a N \approx 1250$ for $a=10$ nm and $N=10,000$ and a large dipolar nonlinearity of $4\pi a_{dd}N\approx 865$ for Dy atoms with $a_{dd}=130a_0$ and $N=10,000$.
The variational approximations considered here provided good results for such large nonlinearities and should be useful for analyzing the statics and dynamics of realistic {dipolar BEC}s under appropriate experimental conditions. \acknowledgements We thank FAPESP (Brazil), CNPq (Brazil), DST (India), and CSIR (India) for partial support.
\section{The degrees of a system of parameters} Let $R$ be a graded ${\bf C}$-algebra. A {\em homogeneous system of parameters} (hsop) of $R$ is an algebraically independent set $S$ of homogeneous elements of $R$ such that $R$ is module-finite over the subalgebra generated by $S$. By the Noether normalization lemma, a hsop always exists. The size $|S|$ of $S$ equals the Krull dimension of $R$. In this note we consider the special case where $R$ is the ring $I$ of invariants of binary forms of degree $n$ under the action of $\mathrm{SL}(2,{\bf C})$. This ring is Cohen-Macaulay, that is, $I$ is free over the subring generated by any hsop $S$. Its Krull dimension is $n-2$. One cannot expect to classify all hsops of $I$. Indeed, any generic subset with the right degrees will be a hsop (cf.~Dixmier's criterion below). But one can expect to classify the sets of degrees of hsops. In this note we give a divisibility restriction on the set of degrees for the elements of a hsop, and conjecture that when all degrees are large this restriction also suffices for the existence of a hsop with these given degrees. For small degrees there are further restrictions. We give a complete classification for $n \le 8$. \section{Hilbert's criterion} Hilbert's criterion gives a characterization of homogeneous systems of parameters as sets that define the nullcone. Denote by $V_n$ the set of binary forms of degree $n$. The {\em nullcone} of $V_n$, denoted ${\mathcal N}(V_n)$, is the set of binary forms of degree $n$ on which all invariants vanish. By the Hilbert-Mumford numerical criterion (see \cite{Hi2} and \cite[Chapter 2]{MFK}) this is precisely the set of binary forms of degree $n$ with a root of multiplicity $>\frac{n}{2}$. Moreover, the binary forms with no root of multiplicity $\geq \frac{n}{2}$ have closed $\mathrm{SL}(2,{\bf C})$-orbits. The elements of ${\mathcal N}(V_n)$ are called {\em nullforms}. Another result from \cite{Hi2} that we will use is the following. 
\begin{Proposition} \label{hilbert} For $n \ge 3$, consider homogeneous invariants $i_1,\ldots ,i_{n-2}$ of $V_n$. The following two conditions are equivalent: \begin{itemize} \item[(i)] ${\mathcal N}(V_n)={\mathcal V}(i_1,\ldots ,i_{n-2})$, \item[(ii)] $\{i_1,\ldots ,i_{n-2}\}$ is a hsop of the invariant ring of $V_n$. \end{itemize} \end{Proposition} \section{A divisibility condition} Assume $n \ge 3$. \begin{Lemma} \label{lm:divisible} Fix integers $j$, $t$ with $t > 0$. If an invariant of degree $d$ is nonzero on a form $\sum a_i x^{n-i} y^i$ with the property that all nonzero $a_i$ have $i \equiv j$ (mod $t$), then $d(n-2j)/2 \equiv 0$ (mod $t$). \end{Lemma} {\bf Proof}\quad For an invariant of degree $d$ with nonzero term $\prod a_i^{m_i}$ we have $\sum m_i = d$ and $\sum i m_i = nd/2$. If $i \equiv j$ (mod $t$) when $a_i \ne 0$, then $nd/2 = \sum i m_i \equiv j \sum m_i = jd$ (mod $t$). \hfill$\Box$\medskip \medskip\noindent For odd $n$ we recover the well-known fact that all degrees are even (take $t=1$). \begin{Lemma} Fix integers $j$, $t$ with $t > 1$ and $0 \le j \le n$. Among the degrees $d$ of a hsop, at least $\lfloor (n-j)/t \rfloor$ satisfy $d(n-2j)/2 \equiv 0$ (mod $t$). \end{Lemma} {\bf Proof}\quad Subtracting a multiple of $t$ from $j$ results in a stronger statement, so it suffices to prove the lemma for $0 \le j < t$. There are $1 + \lfloor (n-j)/t \rfloor=:1+N$ coefficients $a_i$ with $i \equiv j$ (mod $t$), so the subspace $U$ of $V_n$ defined by $a_i = 0$ for $i \not\equiv j$ (mod $t$) has dimension $1+N$. If $N=0$ there is nothing to prove, so we assume that $N>0$. We claim that a general form $f \in U$ has only zeroes of multiplicity strictly less than $n/2$. Indeed, write \[ f=a_j x^{n-j} y^j + a_{j+t} x^{n-j-t} y^{j+t} + \ldots + a_{j+mt} x^{n-j-mt} y^{j+mt} \] where $j+(m+1)t>n$ and $m>0$. So $f$ has a factor $y$ of multiplicity $j$ and a factor $x$ of multiplicity $n-j-mt$.
If $j$ were at least $n/2$, then $j+mt\geq j+t>2j\geq n$, a contradiction. If $n-j-mt$ were at least $n/2$, then $j+mt \leq n/2$ and hence $t \leq n/2$ and hence $j+(m+1)t \leq n$, a contradiction. The remaining roots of $f$ are roots of \[ a_j x^{mt} + a_{j+t} x^{(m-1)t}y^t + \ldots + a_{j+mt}y^{mt}, \] which is a general binary form of degree $m$ in $x^t,y^t$ and hence has $mt$ distinct roots. Let $\pi:V_n \to V_n/\kern-1.5pt/\mathrm{SL}(2,{\bf C})$ be the quotient map; so the right-hand side is the spectrum of the invariant ring $I$. Set $X:=\overline{\pi(U)}$. We claim that $X$ has dimension $N$. It certainly cannot have dimension larger than $N$, since acting with the one-dimensional torus of diagonal matrices on an element of $U$ gives another element of $U$. To show that $\dim X=N$ we need to show that for general $f \in U$ the fibre $\pi^{-1}(\pi(f))$ intersects $U$ in a one-dimensional variety. By the above and the Hilbert-Mumford criterion, the $\mathrm{SL}(2,{\bf C})$-orbit of $f$ is closed. Moreover, its stabiliser is zero-dimensional. So by properties of the quotient map we have $\pi^{-1}(\pi(f))=\mathrm{SL}(2,{\bf C}) \cdot f$. Hence it suffices that the intersection of this orbit with $U$ is one-dimensional. For this a Lie algebra argument suffices, in which we may ignore the Lie algebra of the torus: if $(b x \frac{\partial}{\partial y} + c y \frac{\partial}{\partial x})f$ lies in $U$, then we find that $b=c=0$ if $t>2$ (so that the contribution of one term from $f$ cannot cancel the contribution from the next term); and $b=0$ if $j>0$ (look at the first term), and then also $c=0$; and $c=0$ if $j+mt<n$ (look at the last term), and then also $b=0$. Hence the only case that remains is $t=2,j=0,$ and $n \geq 4$ even. Then the equations $c a_0 n + b a_2 2=0$ and $c a_2 (n-2) + b a_4 4=0$ are independent and force $b=c=0$. This concludes the proof that $\dim X=N$. 
Intersecting $X$ with the hypersurfaces corresponding to elements of an hsop reduces $X$ to the single point in $X$ representing the nullcone. In the process, $\dim X$ drops by $N$. But the only invariants that contribute to this dimension drop, i.e., the only invariants that do not vanish identically on $X$ (hence on $U$) are those considered in Lemma~\ref{lm:divisible}. Hence there must be at least $N$ of these among the hsop. \hfill$\Box$\medskip \begin{Lemma}\label{lemma3} Let $t$ be an integer with $t > 1$. (i) If $n$ is odd, and $j$ is minimal such that $0 \le j \le n$ and $(n-2j,t) = 1$, then among the degrees of any hsop at least $\lfloor (n-j)/t \rfloor$ are divisible by $2t$. (ii) If $n$ is even, and $j$ is minimal with $0 \le j \le \frac{1}{2}n$ and $(\frac{1}{2}n-j,t) = 1$, then among the degrees of any hsop at least $\lfloor (n-j)/t \rfloor$ are divisible by $t$. \hfill$\Box$\medskip \end{Lemma} \begin{Theorem}\label{thm1} Let $t$ be an integer with $t > 1$. (i) If $n$ is odd, then among the degrees of any hsop at least $\lfloor (n-1)/t \rfloor$ are divisible by $2t$ (and all degrees are even). (ii) If $n$ is even, then among the degrees of any hsop at least $\lfloor (n-1)/t \rfloor$ are divisible by $t$, and if $n \equiv 2$ $({\rm mod\,} 4)$ then at least $n/2$ by $2$. \end{Theorem} {\bf Proof}\quad (i) By part (i) of Lemma \ref{lemma3} we find a lower bound $\lfloor (n-j)/t \rfloor$ for a $j$ as described there. If that is smaller than $\lfloor (n-1)/t \rfloor$, then there is some multiple $at$ of $t$ with $n-j+1 \le at \le n-1$. Put $n=at+b$, where $1 \le b \le j-1$. By definition of $j$ we have $(b-2i,t) > 1$ for $i=0,1,...,j-1$. If $b$ is odd, say $b=2i+1$, we find a contradiction. If $b$ is even, say $b=2i+2$, then $t$ is even and $n$ is even, contradiction. (ii) By part (ii) of Lemma \ref{lemma3} we find a lower bound $\lfloor (n-j)/t \rfloor$ for a $j$ as described there. For $t = 2$ our claim follows. Now let $t > 2$.
If $\lfloor (n-j)/t \rfloor$ is smaller than $\lfloor (n-1)/t \rfloor$, then there is some multiple $at$ of $t$ with $n-j+1 \le at \le n-1$. Put $n=at+b$, where $1 \le b \le j-1$. By definition of $j$ we have $(b-2i,2t) > 2$ for $i=0,1,...,j-1$, which is impossible. \hfill$\Box$\medskip \medskip For example, it is known that there exist homogeneous systems of parameters with degree sequences 4 $(n=3)$; 2, 3 $(n=4)$; 4, 8, 12 $(n=5)$; 2, 4, 6, 10 $(n=6)$; 4, 8, 12, 12, 20 and 4, 8, 8, 12, 30 $(n=7)$ \cite{Di0}; 2, 3, 4, 5, 6, 7 $(n=8)$ \cite{Sh}; 4, 8, 10, 12, 12, 14, 16 and 4, 4, 10, 12, 14, 16, 24 and 4, 4, 8, 12, 14, 16, 30 and 4, 4, 8, 10, 12, 16, 42 and 4, 4, 8, 10, 12, 14, 48 $(n=9)$ \cite{nonic}; 2, 4, 6, 6, 8, 9, 10, 14 $(n=10)$ \cite{decimic}. \begin{Conjecture} Any sequence $d_1,...,d_{n-2}$ of sufficiently large integers satisfying the divisibility conditions of Theorem \ref{thm1} is the sequence of degrees of a hsop. \end{Conjecture} This can be compared to the conjecture \medskip \begin{Conjecture} {\rm (Dixmier \cite{Di1})} (i) If $n$ is odd, $n \ge 15$, then $4,6,8,...,2n-2$ is the sequence of degrees of a hsop. (ii) If $n \equiv 2$ $({\rm mod\,} 4)$, $n \ge 18$, then $2,4,5,6,6,7,8,9,...,n-1$ is the sequence of degrees of a hsop. (iii) If $n \equiv 0$ $({\rm mod\,} 4)$, then $2,3,4,...,n-1$ is the sequence of degrees of a hsop. \end{Conjecture} \section{Poincar\'e series} If there exists a hsop with degrees $d_1,\ldots,d_{n-2}$, then the Poincar\'e series can be written as a quotient $P(t) = a(t) / \prod (t^{d_i}-1)$ for some polynomial $a(t)$ with nonnegative coefficients. If one does not have a hsop, but only a sequence of degrees, the conditions of Theorem \ref{thm1} above are strong enough to guarantee that $P(t)$ can be written in this way, but without the condition that the numerator has nonnegative coefficients. \begin{Proposition} Let $d_1,\ldots,d_{n-2}$ be a sequence of positive integers satisfying the conditions of Theorem \ref{thm1}.
Then $P(t) \prod (t^{d_i}-1)$ is a polynomial. \end{Proposition} {\bf Proof}\quad Dixmier \cite{Di1} proves that $P(t) B(t)$ is a polynomial, where $B(t)$ is defined by \[ B(t) = \left\{ \begin{array}{ll} \prod_{i=2}^{n-1} (1-t^{2i}) & \mbox{if $n$ is odd} \\[2pt] \prod_{i=2}^{n-1} (1-t^i).(1+t) & \mbox{if $n \equiv 2$ (mod 4)} \\[2pt] \prod_{i=2}^{n-3} (1-t^i).(1+t)(1-t^{(n-2)/2})(1-t^{n-1}) & \mbox{if $n \equiv 0$ (mod 4)} \\ \end{array} \right. \] Consider a primitive $t$-th root of unity $\zeta$. We have to show that if $B(t)$ has root $\zeta$ with multiplicity $m$, then at least $m$ of the $d_i$ are divisible by $t$, but this follows immediately from Theorem \ref{thm1}. Note that in case $n \equiv 0$ (mod 4) the factor $(1+t)(1-t^{(n-2)/2})$ divides $(1-t^{n-2})$. \hfill$\Box$\medskip \medskip\noindent We see that if $n \equiv 0$ (mod 4), $n > 4$, then $P(t)$ can be written with a smaller denominator than corresponds to the degrees of a hsop. \begin{table}[ht] \medskip\noindent\begin{center}{\small \begin{tabular}{@{}r|rrrrrrrrrrr@{~}r@{~}r@{~}r@{~}r@{}} $h^n_m$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline 1 & . & . & . & . & . & . & . & . & . & . & . & . & . & . & . \\ 2 & . & 1 & . & 1 & . & 1 & . & 1 & . & 1 & . & 1 & . & 1 & .\\ 3 & . & . & . & 1 & . & . & . & 1 & . & . & . & 1 & . & . & .\\ 4 & . & 1 & 1 & 1 & 1 & 2 & 1 & 2 & 2 & 2 & 2 & 3 & 2 & 3 & 3\\ 5 & . & . & . & 1 & . & . & . & 2 & . & . & . & 3 & . & . & .\\ 6 & . & 1 & . & 2 & . & 3 & . & 4 & . & 6 & . & 8 & . & 10 & 1\\ 7 & . & . & . & 1 & . & . & . & 4 & . & . & . & 10 & . & 4 & .\\ 8 & . & 1 & 1 & 2 & 2 & 4 & 4 & 7 & 8 & 12 & 13 & 20 & 22 & 31 & 36\\ 9 & . & . & . & 2 & . & . & . & 8 & . & 5 & . & 28 & . & 27 & .\\ 10 & . & 1 & . & 2 & . & 6 & . & 12 & 5 & 24 & 13 & 52 & 33 & 97 & 80\\ 11 & . & . & . & 2 & . & . & . & 13 & . & 13 & . & 73 & . & 110 & .\\ 12 & . & 1 & 1 & 3 & 3 & 8 & 10 & 20 & 28 & 52 & 73 & 127 & 181 & 291 & 418\\ 13 & . & . & . & 2 & . & . & . 
& 22 & . & 33 & . & 181 & . & 375 & .\\ 14 & . & 1 & . & 3 & . & 10 & 4 & 31 & 27 & 97 & 110 & 291 & 375 & 802 & 1111\\ 15 & . & . & . & 3 & . & 1 & . & 36 & . & 80 & . & 418 & . & 1111 & .\\ 16 & . & 1 & 1 & 3 & 4 & 13 & 18 & 47 & 84 & 177 & 320 & 639 & 1120 & 2077 & 3581 \\ 17 & . & . & . & 3 & . & 1 & . & 54 & . & 160 & . & 902 & . & 2930 & . \\ 18 & . & 1 & . & 4 & 1 & 16 & 13 & 71 & 99 & 319 & 529 & 1330 & 2342 & 5034 & 8899 \\ \end{tabular}}\end{center} \caption{Values of $h^n_m = \dim_{\bf C} I_m$ with $I$ the ring of invariants of a binary form of degree $n$. Here . denotes 0. One has $h^n_m = h^m_n$ and $P(t) = \sum_m h^n_m t^m$. \label{tab1}} \end{table} \medskip We shall need the first few coefficients of $P(t)$. Messy details arise for small $n$ because there are too few invariants of certain small degrees. Let $I$ be the ring of invariants of a binary form of degree (order) $n$, let $I_m$ be the graded part of $I$ of degree $m$, and put $h_m = h^n_m = \dim_{\bf C} I_m$, so that $P(t) = \sum_m h_m t^m$. The coefficients $h^n_m$ can be computed by the Cayley-Sylvester formula: The dimension of the space of covariants of degree $m$ and order $a$ is zero when $mn-a$ is odd, and equals $N(n,m,t)-N(n,m,t-1)$ if $nm-a = 2t$, where $N(n,m,t)$ is the number of ways $t$ can be written as sum of $m$ integers in the range $0..n$, that is, the number of Ferrers diagrams of size $t$ that fit into a $m \times n$ rectangle. We have Hermite reciprocity $h^n_m = h^m_n$, as follows immediately since reflection in the main diagonal shows $N(n,m,t) = N(m,n,t)$. That means that Table \ref{tab1} is symmetric. \medskip Dixmier \cite{Di1} gives the cases in which $h_m = 0$. Since his statement is not precisely accurate, we repeat his proof. \begin{Proposition}\label{zeroh} Let $m,n \geq 1$. 
One has $h_m = h^n_m = 0$ precisely in the following cases: \begin{itemize} \item[(i)] if $mn$ is odd, \item[(ii)] if $m=1$; if $n = 1$, \item[(iii)] if $m=2$ and $n$ is odd; if $n=2$ and $m$ is odd, \item[(iv)] if $m = 3$ and $n \equiv 2$ $({\rm mod\,}~4)$; if $n=3$ and $m \equiv 2$ $({\rm mod\,}~4)$, \item[(v)] if $m = 5$ and $n = 6,10,14$; if $n = 5$ and $m = 6,10,14$, \item[(vi)] if $m = 6$ and $n = 7,9,11,13$; if $n = 6$ and $m = 7,9,11,13$, \item[(vii)] if $m = 7$ and $n = 10$; if $n = 7$ and $m = 10$. \end{itemize} \end{Proposition} {\bf Proof}\quad (i) If $n$ is odd, then all degrees are even. (ii) For $n=1$ we have $P(t) = 1$. (iii) For $n=2$ we have $P(t) = 1/(1-t^2)$. (iv) For $n=3$ we have $P(t) = 1/(1-t^4)$. Now let $m,n \ge 4$. For $n = 4$ we have invariants of degrees 2, 3 and hence of all degrees $m \ne 1$. That means that $h^n_4 \ne 0$. For $n = 6$ we have invariants of degrees 2, 15 and hence of all degrees $m \ge 14$. That means that $h^n_6 \ne 0$ for $n \ge 14$. If $n$ is odd this shows the presence of invariants of degrees 4, 6 and hence of all even degrees $m > 2$, provided $n \ge 15$. For $n = 5$ we have invariants of degrees 4, 18 and hence of all even degrees $m \ge 16$. That means that $h^n_5 \ne 0$ for even $n \ge 16$. If $n$ is even this shows the presence of invariants of degrees 2, 5 and hence of all degrees $m \ge 4$, provided $n \ge 16$. It remains only to inspect the table for $4 \le m,n \le 14$. \hfill$\Box$\medskip \section{Dixmier's criterion} Dividing out the ideal spanned by $p$ elements of a hsop diminishes the dimension by precisely (and hence at least) $p$. This means that the following criterion gives a necessary and sufficient condition for a sequence of degrees to be the degree sequence of a hsop. \begin{Proposition} {\rm (Dixmier \cite{Di1})} Let $G$ be a reductive group over ${\bf C}$, with a rational representation in a vector space $R$ of finite dimension over ${\bf C}$.
Let ${\bf C}[R]$ be the algebra of complex polynomials on $R$, ${\bf C}[R]^G$ the subalgebra of $G$-invariants, and ${\bf C}[R]^G_d$ the subset of homogeneous polynomials of degree $d$ in ${\bf C}[R]^G$. Let $V$ be the affine variety such that ${\bf C}[V] = {\bf C}[R]^G$. Let $r = \dim V$. Let $(d_1,\ldots,d_r)$ be a sequence of positive integers. Assume that for each subsequence $(j_1,\ldots,j_p)$ of $(d_1,\ldots,d_r)$ the subset of points of $V$ where all elements of all ${\bf C}[R]^G_j$ with $j \in \{j_1,\ldots,j_p\}$ vanish has codimension not less than $p$ in $V$. Then ${\bf C}[R]^G$ has a system of parameters of degrees $d_1,\ldots,d_r$. \hfill$\Box$\medskip \end{Proposition} This criterion is very convenient: it means that one can work with degrees only, without worrying about individual elements of a hsop. \section{Minimal degree sequences} If $y_1,...,y_r$ is a hsop, then so is $y_1^{e_1},...,y_r^{e_r}$ for any sequence of positive integers $e_1,...,e_r$. This means that if the degree sequence $d_1,...,d_r$ occurs, then the sequence $d_1e_1,...,d_re_r$ also occurs. We would like to describe the minimal sequences, where such multiples are discarded. There are further reasons for non-minimality. \begin{Lemma}\label{mainlemma} If there exist hsops with degree sequences $d_1,...,d_{r-1},d'$ and $d_1,...$, $d_{r-1}, d''$, then there also exists a hsop with degree sequence $d_1,...,d_{r-1},d'+d''$. \end{Lemma} {\bf Proof}\quad We verify Dixmier's criterion. Consider a finite basis $f_1,...,f_s$ for the space of invariants of degree $d'$. Split the variety $V$ into the $s$ pieces defined by $f_i \ne 0$ $(1 \le i \le s)$ together with the single piece defined by $f_1 = ... = f_s = 0$. Given $p$ elements of the sequence $d_1,...,d_{r-1},d'+d''$ we have to show that the codimension in $V$ obtained by requiring all invariants of such degrees to vanish is at least $p$, that is, that the dimension is at most $r-p$.
This is true by assumption if $d'+d''$ is not among these $p$ elements. Otherwise, consider the $s+1$ pieces separately. We wish to show that each has dimension at most $r-p$, then the same will hold for their union. For the last piece, where all invariants of degree $d'$ vanish, this is true by assumption. But if some invariant of degree $d'$ does not vanish, and all invariants of degree $d'+d''$ vanish, then all invariants of degree $d''$ vanish, and we are done. \hfill$\Box$\medskip \medskip Note that taking multiples is a special case of (repeated application of) this lemma, used with $d' = d''$. Let us call a sequence {\em minimal} if it occurs (as the degree sequence of the elements of a hsop), and its occurrence is not a consequence, via the above lemma or via taking multiples, of the occurrence of smaller sequences. We might try to classify all minimal sequences, at least in small cases. \medskip Is it perhaps true that a hsop exists for any degree sequence that satisfies the conditions of Theorem \ref{thm1} when there are sufficiently many invariants? E.g. when the coefficients of $P(t) \prod (1-t^{d_i})$ are nonnegative? \medskip\noindent {\bf Example}~ Some caution is required. For example, look at $n=6$. The conditions of Theorem \ref{thm1} are: at least three factors 2, at least one factor of each of 3, 4, 5. The sequence 6, 6, 6, 20 satisfies this restriction. Moreover, $P(t) (1-t^6)^3 (1-t^{20}) = 1 + t^2 + 2t^4 + t^8 + 2t^{12} + t^{14} + t^{15} + t^{16} + t^{17} + 2t^{19} + t^{23} + 2t^{27} + t^{29} + t^{31}$ has only nonnegative coefficients. But no hsop with these degrees exists: since $h_2 = 1$, $h_4 = 2$, $h_6 = 3$ it follows that there are invariants $i_2, i_4, i_6$ of degrees 2, 4, 6, and we have $I_4 = \langle i_2^2, i_4 \rangle$ and $I_6 = \langle i_2^3, i_2i_4, i_6 \rangle$. Requiring all invariants of degree 6 to vanish is equivalent to the two conditions $i_2 = i_6 = 0$, and a hsop cannot contain three elements of degree 6. 
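Such numerator computations can be checked by machine. The sketch below (an illustration of ours, not the authors' code) computes $h_m$ for the sextic via the Cayley-Sylvester formula — for the binary $n$-ic, $h_m$ is the coefficient of $q^{mn/2}$ minus the coefficient of $q^{mn/2-1}$ in the Gaussian binomial $\binom{n+m}{m}_q$ — and confirms that the numerator $P(t)(1-t^6)^3(1-t^{20})$ has no negative coefficient up to degree $31$:

```python
def h(n, m):
    """Dimension of the space of degree-m invariants of the binary n-ic,
    via Cayley-Sylvester: coefficient of q^(mn/2) minus that of q^(mn/2-1)
    in the Gaussian binomial [n+m, m]_q."""
    if (n * m) % 2:
        return 0
    K = n * m // 2
    c = [0] * (K + 1)
    c[0] = 1
    for i in range(1, m + 1):
        for k in range(K, n + i - 1, -1):   # multiply by (1 - q^(n+i))
            c[k] -= c[k - (n + i)]
        for k in range(i, K + 1):           # divide by (1 - q^i)
            c[k] += c[k - i]
    return c[K] - (c[K - 1] if K >= 1 else 0)

N = 31
P = [h(6, m) for m in range(N + 1)]   # Poincare series of the sextic, truncated
assert (P[2], P[4], P[6]) == (1, 2, 3)

# numerator P(t) (1-t^6)^3 (1-t^20), truncated at degree 31
numer = P[:]
for d in (6, 6, 6, 20):
    numer = [numer[k] - (numer[k - d] if k >= d else 0) for k in range(N + 1)]
print(min(numer) >= 0)   # True: all listed coefficients are nonnegative
```

So the obstruction found above is indeed invisible to the numerator test: it comes from the algebraic dependence $i_2 = i_6 = 0$, not from a shortage of invariants.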
\medskip Still, the above conditions almost suffice. And for $n < 6$ they actually do suffice. \subsection{$n=3$} For $n=3$ we only have simple multiples of the minimal degree. \begin{Proposition} A positive integer $d$ is the degree of a hsop in case $n = 3$ if and only if it is divisible by $4$. \hfill$\Box$\medskip \end{Proposition} \noindent If $i_4$ is an invariant of degree 4, then $\{i_4\}$ is a hsop. \subsection{$n=4$} For $n=4$ one has the sequence 2, 3, but for example also 5, 6. \begin{Proposition} A sequence $d_1,d_2$ of two positive integers is the sequence of degrees of a hsop for the quartic if and only if neither of them equals $1$, at least one is divisible by $2$, and at least one is divisible by $3$. \end{Proposition} {\bf Proof}\quad Clearly the conditions are necessary. In order to show that they suffice apply induction and the known existence of a hsop with degrees 2, 3. If $d_2 > 7$, then apply Lemma \ref{mainlemma} to the two sequences $d_1,6$ and $d_1,d_2-6$ to conclude the existence of a hsop with degrees $d_1,d_2$. If $2 \le d_1,d_2 \le 7$ and one is divisible by 2, the other by 3, then we have a multiple of the sequence 2, 3. Otherwise, one equals 6 and the other is 5 or 7. But 5, 6 is obtained from 2, 6 and 3, 6, and 7, 6 is obtained from 2, 6 and 5, 6. \hfill$\Box$\medskip \medskip\noindent If $i_2$ and $i_3$ are invariants of degrees 2 and 3, then $\{i_2,i_3\}$ is a hsop. \begin{Proposition} There is precisely one minimal degree sequence of hsops in case $n = 4$, namely $2$, $3$. \hfill$\Box$\medskip \end{Proposition} \subsection{$n=5$} \begin{Proposition} A sequence $d_1,d_2,d_3$ of three positive integers is the sequence of degrees of a hsop for the quintic if and only if all $d_i$ are even, and distinct from $2$, $6$, $10$, $14$, and no two are $4$, $4$ or $4$, $22$ and at least two are divisible by $4$, at least one is divisible by $6$, and at least one is divisible by $8$. 
\end{Proposition} {\bf Proof}\quad For $n=5$ the Poincar\'e series is $P(t) = 1 + t^4 + 2t^8 + 3t^{12} + 4t^{16} + t^{18} + 5t^{20} + t^{22} + 7t^{24} + 2t^{26} + 8t^{28} + 3t^{30} + ...$. The stated conditions are necessary: the divisibility conditions are seen from Theorem \ref{thm1}, and there are no invariants of degrees 2, 6, 10, 14. Finally, we have $h_4 = 1$ and $h_{18} = h_{22} = 1$, so that there are unique invariants $i_4$ and $i_{18}$ of degrees 4 and 18, respectively, and $I_{22} = \langle i_4i_{18} \rangle$, so that all invariants of degree 22 will vanish as soon as $i_4$ vanishes. The stated conditions suffice: We use (and verify below) that there are hsops with degrees 4, 8, 12 and with degrees 4, 8, 18. If all $d_i$ are divisible by 4, and we do not have a multiple of 4, 8, 12, then we have $4a$, $4b$, $24c$ where $a$ and $b$ have no factor 2 or 3, and not both are 1. It suffices to find 4, $4b$, 24. Since 4, 8, 24 exists, we can decrease $b$ by 2, and it suffices to find 4, 12, 24, which exists. So some $d_i$ is not divisible by 4. We have one of the three cases $24a,4b,2c$ and $8a,12b,2c$ and $8a,4b,6c$, where $c$ is odd. In the middle case we have $c \ge 9$ and it suffices to make $8,12,2c$. Since 8, 12, 4 exists, we can reduce $c$ by 2, and it suffices to make 8, 12, 18, which exists since 4, 8, 18 exists. In the first case we have $c \ge 9$ and it suffices to make $24,4,2c$. Since 12, 4, 8 exists, we can reduce $c$ by 4, and it suffices to make 24, 4, 18 and 24, 4, 30. The former is a multiple of 4, 8, 18 and the latter follows from 24, 4, 18 and 24, 4, 12. Since 24, 4, 22 does not exist, we still have to consider $24a,4b,22$. Since 8, 12, 22 exists we can reduce $b$ by 2, and it suffices to make 24, 12, 22. But that is a multiple of 8, 12, 22. Finally, in the last case we have $c \ge 3$, and since 8, 4, 12 exists we can reduce $c$ by 2. So it suffices to do 4, 8, 18, and that exists.
\hfill$\Box$\medskip \begin{Proposition} There are precisely two minimal degree sequences of hsops in case $n = 5$, namely $4,8,12$ and $4,8,18$. \end{Proposition} {\bf Proof}\quad By the proof of the previous proposition, all we have to do is show the existence of hsops with the indicated degree sequences. It is well-known (see, e.g., Schur \cite{Schur}, p.86) that the quintic has four invariants $i_4$, $i_8$, $i_{12}$, $i_{18}$ (with degrees as indicated by the index) that generate the ring of invariants, and every invariant of degree divisible by 4 (in particular $i_{18}^2$) is a polynomial in the first three. Thus, when $i_4$, $i_8$, $i_{12}$ vanish, all invariants vanish, and $\{i_4,i_8,i_{12}\}$ is a hsop. Knowing this, it is easy to see that also $\{i_4,i_8,i_{18}\}$ is a hsop: a simple Groebner computation shows that $i_{12}^3 \in (i_4,i_8,i_{18})$, hence ${\mathcal N}(V_5)={\mathcal V} (i_4,i_8,i_{18})$. \hfill$\Box$\medskip \subsection{$n=6$} Similarly, we find for $n = 6$: \begin{Proposition} A sequence $d_1,d_2,d_3,d_4$ of four positive integers is the sequence of degrees of a hsop for the sextic if and only if all $d_i$ are distinct from $1$, $3$, $5$, $7$, $9$, $11$, $13$, and no two are in $\{2,17\}$, and no three are in $\{2,4,8,14,17,19,23,29\}$, and no three are in $\{2,6,17,21\}$, and at least three are divisible by $2$, at least one is divisible by $3$, at least one by $4$, and at least one by $5$. \end{Proposition} {\bf Proof}\quad For $n=6$ the Poincar\'e series is \begin{eqnarray*} P(t) & = & 1 + t^2 + 2t^4 + 3t^6 + 4t^8 + 6t^{10} + 8t^{12} + 10t^{14} + t^{15} + 13t^{16} + t^{17} + \\ && 16t^{18} + 2t^{19} + 20t^{20} + 3t^{21} + 24t^{22} + 4t^{23} + 29t^{24} + 6t^{25} + 34t^{26} + \\ && 8t^{27} + 40t^{28} + 10t^{29} + 47t^{30} + \cdots . 
\end{eqnarray*} We have \[ I_2 = \langle i_2 \rangle,~~ I_4 = \langle i_2^2, i_4 \rangle,~~ I_6 = \langle i_2^3, i_2i_4, i_6 \rangle,~~ I_8 = \langle i_2^4, i_2^2i_4, i_2i_6, i_4^2 \rangle, \] \[ I_{10} = \langle i_2^5, i_2^3i_4, i_2^2i_6, i_2i_4^2, i_4i_6, i_{10} \rangle,~~ I_{12} = \langle i_2^6, i_2^4i_4, i_2^3i_6, i_2^2i_4^2, i_2i_4i_6, i_2i_{10}, i_4^3, i_6^2 \rangle, \] \[ I_{14} = \langle i_2^7, i_2^5i_4, i_2^4i_6, i_2^3i_4^2, i_2^2i_4i_6, i_2^2i_{10}, i_2i_4^3, i_2i_6^2, i_4^2i_6, i_4i_{10} \rangle,~~ I_{15} = \langle i_{15} \rangle, \] and the invariants in degrees 17, 19, 23, 29 are $i_{15}$ times the invariants in degrees 2, 4, 8, 14, respectively. Let us denote by $[i_1,...,i_t]$ the condition that all invariants of degrees $i_1, ..., i_t$ vanish. Then $[2] = [2,17]$ and hence a hsop cannot have two element degrees among 2, 17. Also $[4] = [2,4,8,14,17,19,23,29]$ and hence a hsop cannot have three element degrees among 2, 4, 8, 14, 17, 19, 23, 29. And $[6] = [2,6,17,21]$ is the condition $i_2 = i_6 = 0$ so that a hsop cannot have three element degrees among 2, 6, 17, 21. It follows that the stated conditions are necessary. \medskip The stated conditions suffice: We use (and verify below) that there are hsops with each of the degree sequences 2, 4, 6, 10 and 2, 4, 6, 15 and 2, 4, 10, 15. Prove by induction that any 4-tuple of degrees that satisfies the given conditions occurs as the degree sequence of a hsop. Given $d_1,d_2,d_3,d_4$, if $d_i \geq 90$ then by induction we already have the 4-tuples obtained by replacing $d_i$ by 60 and by $d_i-60$. It remains to check the finitely many cases where all $d_i$ are less than 90. A small computer check settles this. \hfill$\Box$\medskip \begin{Proposition} There are precisely three minimal degree sequences of hsops in case $n = 6$, namely $2,4,6,10$ and $2,4,6,15$ and $2,4,10,15$. 
\end{Proposition} {\bf Proof}\quad By the proof of the previous proposition, all we have to do is show the existence of hsops with the indicated degree sequences. It is well-known (see, e.g., Schur \cite{Schur}, p.90) that the sextic has five invariants $i_2$, $i_4$, $i_6$, $i_{10}$, $i_{15}$ (with degrees as indicated by the index) that generate the ring of invariants, where $i_{15}^2$ is a polynomial in the first four. This implies that ${\mathcal N}(V_6)={\mathcal V}(i_2,i_4,i_6,i_{10})$, so that $\{i_2,i_4,i_6,i_{10}\}$ is a hsop. Now $\{i_2,i_4,i_6,i_{15}\}$ and $\{i_2,i_4,i_{10},i_{15}\}$ are also hsops: we verified by computer that $i_{10}^3 \in (i_2,i_4,i_6,i_{15})$ and $i_6^5\in (i_2,i_4,i_{10},i_{15})$, so that ${\mathcal N}(V_6)={\mathcal V}(i_2,i_4,i_6,i_{15})={\mathcal V}(i_2,i_4,i_{10},i_{15})$. \hfill$\Box$\medskip \subsection{$n=7$} For $n = 7$ we have to consider the invariants a bit more closely in order to decide which degree sequences are admissible for hsops. Let $f$ be our septimic and let $\psi$ be the covariant $\psi = (f,f)_6$. There are thirty basic invariants, of degrees 4, 8 (3$\times$), 12 (6$\times$), 14 (4$\times$), 16 (2$\times$), 18 (9$\times$), 20, 22 (2$\times$), 26, 30. These can all be taken to be transvectants with a power of $\psi$ except for three basic invariants of degrees 12, 20 and 30 (that von Gall \cite{vG} calls $R$, $A$, $B$ and Dixmier \cite{Di0} $q_{12}$, $p_{20}$, $p_{30}$). This means that all invariants of degrees not of the form $12a+20b+30c$ vanish on the set defined by $\psi = 0$. But $\psi$ is a covariant of order 2, i.e., $\psi = Ax^2 + Bxy + Cy^2$ for certain $A$, $B$, $C$. It follows that no hsop degree sequence can have four elements in the set $\{4,8,14,16,18,22,26,28,34,38,46,58\}$.
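This excluded set can be reproduced numerically: apart from $2$, $6$, $10$ (degrees in which the septimic has no invariants at all), it consists exactly of the even numbers that are not of the form $12a+20b+30c$. A short sketch of ours, for illustration (the cutoff $300$ is ample, since every even number $\ge 60$ is representable):

```python
def representable_upto(bound, gens=(12, 20, 30)):
    """ok[k] is True iff k is a nonnegative integer combination of gens."""
    ok = [False] * (bound + 1)
    ok[0] = True
    for k in range(1, bound + 1):
        ok[k] = any(k >= g and ok[k - g] for g in gens)
    return ok

ok = representable_upto(300)
gaps = [k for k in range(2, 301, 2) if not ok[k]]
# degrees 2, 6, 10 carry no invariants of the septimic at all
print(sorted(set(gaps) - {2, 6, 10}))
# -> [4, 8, 14, 16, 18, 22, 26, 28, 34, 38, 46, 58]
```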
\begin{Proposition} A sequence of five positive even integers is the sequence of degrees of a hsop for the septimic if and only if all are distinct from $2$, $6$, $10$, no two equal $4$, no four are in $\{4,8,14,16,18,22,26,28,34,38,46,58\}$ and at least three are divisible by $4$, at least two by $6$, at least one by $8$, at least one by $10$ and at least one by $12$. \end{Proposition} {\bf Proof}\quad We already saw that these conditions are necessary. For sufficiency, use induction. The divisibility conditions concern moduli with l.c.m. 120, and the restrictions concern numbers smaller than 60, so if one of the degrees is not less than 180, we are done by induction. A small computer program checks all degree sequences with degrees at most 180, and finds that all can be reduced to the 23 sequences given in the following proposition. \hfill$\Box$\medskip \begin{Proposition} There are precisely $23$ minimal degree sequences of hsops in case $n = 7$, namely \[ \begin{array}{llll} 4,8,8,12,30 & 4,12,12,12,40 & 4,12,18,18,40 & 8,12,12,14,20 \\ 4,8,12,12,20 & 4,12,12,14,40 & 4,14,14,24,60 & 8,12,14,14,60 \\ 4,8,12,12,30 & 4,12,12,18,40 & 4,14,18,20,24 & 8,12,14,18,20 \\ 4,8,12,14,30 & 4,12,14,14,120 & 4,14,18,32,60 & 12,12,14,14,40 \\ 4,8,12,18,20 & 4,12,14,18,40 & 4,18,18,20,24 & 12,14,14,20,24 \\ 4,8,12,18,30 & 4,12,14,20,24 & 4,18,18,32,60 \\ \end{array} \] \end{Proposition} {\bf Proof}\quad We only have to show existence. Apply Dixmier's criterion. Denote by $[d_1,...,d_p]$ the codimension in $V$ of the subset of points of $V$ where all elements of all ${\bf C}[R]^G_{d_j}$ vanish $(1 \le j \le p)$. We have to show that for all $p$ and each of these 23 sequences $(d_i)$ the inequality $[d_1,...,d_p] \ge p$ holds. For $p=1$ that means that we need $[m] \ge 1$ for $m = 4$, 8, 12, 14, 18, 20, 24, 30, 32, 40, 60, 120, and that is true, for example by inspection of Table \ref{tab1}. 
We can save some work by observing that Dixmier \cite{Di0} already showed the existence of hsops with degree sequences 4, 8, 8, 12, 30 and 4, 8, 12, 12, 20. It follows that $[8] \ge 3$ and $[12] \ge 3$ and $[24] \ge [8,12] \ge 4$ and $[20] \ge 2$ and $[60] \ge [12,20] \ge 4$ and $[4,30] \ge 2$ and $[8,30] \ge 4$. Since there are several basic invariants of degree 14 or 18, no two of which can have a common factor, it follows that $[14] \ge 2$ and $[18] \ge 2$. This suffices to settle $p=2$. For $p=3$ we must look at triples $[d,d',d'']$ without element 8 or 12 or multiple. First check that $[4,14] \ge 3$ and $[4,18] \ge 3$. We'll do this below. Now all the rest needed for $p=3$ follows. Below we shall show that $[12] \ge 4$. For $p=4$ we must look at quadruples $[d,d',d'',d''']$ without element 12 or 8, 30 or multiple. The minimal of these are (omitting implied elements) $[18,20]$ and $[18,32]$. However, $[18,32] \ge \min([18,12],[18,20])$ and $[18,20] \ge \min([18,20,8],[18,20,12])$. Finally for $p=5$ we have to show that each of these 23 sets determines the nullcone. But that follows immediately, since it is known already that $[8,12,20] = [8,12,30] = 5$. Altogether, our obligations are: show that $[4,14] \ge 3$, $[4,18] \ge 3$, $[12] \ge 4$ and $[8,18,20] \ge 4$. \medskip Consider the part of $V$ defined by $\psi=0$. Dixmier shows that if $\psi=q_{12}=p_{20}=0$ (for certain invariants $q_{12}$ and $p_{20}$ of degrees 12 and 20, respectively), then $f$ is a nullform. It follows that the subsets of $V$ defined by $\psi=q_{12}=0$ or by $\psi=p_{20}=0$ have codimension at least 4 in $V$. Now we have to do some actual computations. With $f = ax^7+\binom{7}{1}bx^6y+\cdots+\binom{7}{1}gxy^6+hy^7$ (the two meanings of $f$, as form and as coefficient will not cause confusion), we find $\psi = (ag-6bf+15ce-10d^2)x^2 + (ah-5bg+9cf-5de)xy + (bh-6cg+15df-10e^2)y^2$. Assume that the invariant of degree 4 vanishes, as it does in all cases we still have to consider. 
Then $\psi$ has zero discriminant. If $\psi \ne 0$, then w.l.o.g. $\psi \sim x^2$, and $ah-5bg+9cf-5de = bh-6cg+15df-10e^2 = 0$, $ag-6bf+15ce-10d^2 \ne 0$. Distinguish the four cases (i) $h \ne 0$, (ii) $h = 0$, $g \ne 0$, (iii) $h = g = 0$, $f \ne 0$, (iv) $h = g = f = 0$, $e \ne 0$. W.l.o.g. these become (i) $h = 1$, $g = 0$, $a+9cf-5de = 0$, $b+15df-10e^2 = 0$, (ii) $h = 0$, $g = 1$, $f = 0$, $b+de = 0$, $3c+5e^2 = 0$, (iii) $h = g = 0$, $f = 1$, $e = 0$, $c = 0$, $d = 0$, $b \ne 0$, (iv) $h = g = f = 0$, $e = 1$, $d = 0$, contradiction. \medskip Let us first show that $[12] \ge 4$. We may suppose $\psi \ne 0$. One of the invariants of degree 12 is $(\psi_1,\psi^5)_{10} \sim (\psi_1,x^{10})_{10} = fh-g^2$, where $\psi_1 = (f,f)_2$. If all invariants of degree 12 vanish, then in case (i) $f=0$, and in case (ii) contradiction. Look at case (iii). The only invariant of degree 12 that does not vanish identically is $a^2b^2f^8$, and we find $a = 0$, a 1-dimensional set. Finally, in case (i), if all invariants of degree 12 vanish, but $ag-6bf+15ce-10d^2 \ne 0$, then the remaining conditions define an ideal $(18e^3-cd,12de^2-c^2,2cd^2-3c^2e)$ in the three variables $c,d,e$ and the quotient is 1-dimensional. This shows that $[12] \ge 4$. \medskip Let us show next that $[8,18] \ge 4$. We may suppose $\psi \ne 0$. One of the invariants of degree 8 is $(\psi_2,\psi^3)_6 \sim (\psi_2,x^6)_6 = dh-4eg+3f^2$ where $\psi_2 = (f,f)_4$. This gives a contradiction in case (iii). In case (ii) it gives $e=b=c=0$, leaving only variables $a,d$. In case (i) it gives $d+3f^2=0$, leaving only variables $c,e,f$. An invariant of degree 18 is $((\psi_1,\psi_2)_1,\psi^7)_{14} \sim ((\psi_1,\psi_2)_1,x^{14})_{14} = -cfh^2+cg^2h+deh^2+2dfgh-3dg^3-4e^2gh+ef^2h+6efg^2-3f^3g$. In case (ii) this says $d = 0$, leaving only variable $a$. In case (i) this says $f(2ef+c) = 0$. This gives us two subcases: (ia) with $f=0$ and variables $c,e$, and (ib) with $c+2ef=0$ and variables $e,f$. 
Another invariant of degree 8 is $(\psi_3,\psi^2)_4 \sim (\psi_3,x^4)_4$, where $\psi_3 = (\psi_2,\psi_2)_4$, which vanishes in case (ii) and says $c^2f+4cef^2+76e^2f^3+9e^4+144f^6 = 0$ in case (i). In case (ia) this means $e = 0$ leaving only variable $c$. In case (ib) this means $(4f^3+e^2)^2 = 0$, leaving the dimension 1. This proves $[8,18] \ge 4$. \medskip Let us show next that $[4,14] \ge 3$. First consider the case $\psi=0$. Now all invariants of degrees 4 or 14 (or 18) vanish, but the condition $\psi=0$ itself yields the three equations $A=B=C=0$ where $\psi=Ax^2+Bxy+Cy^2$. Earlier, the choice $\psi \sim x^2$ used up some of the freedom given by the group, but here we are free to choose a zero for the form, and assume $h=0$. Again consider the four cases, this time with $ag-6bf+15ce-10d^2$ zero instead of nonzero. We have (iii) $f=1$, $h=g=e=d=c=b=0$, only variables $a,f$ left. And (ii) $g=1$, $h=f=0$, $b+de = 0$, $3c+5e^2 = 0$, $a+15ce-10d^2 = 0$, only variables $d,e$ left. And by assumption $h = 0$ we are not in case (i). That settles the case $\psi=0$. Now assume $\psi \ne 0$ and take $\psi \sim x^2$. In case (iii) only variables $a,b$ are left, and we are done. In case (ii) only variables $a,d,e$ are left. In case (i) only variables $c,d,e,f$ are left. An invariant of degree 14 is $(f.(f,\psi_2)_5,\psi^5)_{10} \sim (f.(f,\psi_2)_5,x^{10})_{10} = -2afh^2+2ag^2h+7beh^2-7bfgh-5cdh^2-22cegh+27cf^2h+25d^2gh-45defh+20e^3h$. In case (ii) this vanishes. In case (i) this becomes (up to a constant) $18e^3-32def+9cf^2-cd$. Another invariant of degree 14 is $((\psi_2,\psi_3)_1,\psi^4)_8 \sim ((\psi_2,\psi_3)_1,x^8)_8$. In case (ii) this becomes $de(26e^3-35d^2-10a)$ and we are reduced to three pieces, each with only two variables. In case (i) this becomes (up to a constant) $70e^3f^4-120def^5+27cf^6+36e^5f-60de^3f^2+6ce^2f^3+3cdf^4+6d^2e^3+ 18ce^4-8d^3ef-54cde^2f+33cd^2f^2+3c^2ef^2+cd^3-3c^2de+2c^3f$. 
Both polynomials found are irreducible and hence have no common factor, and we are reduced to a 2-dimensional situation. This proves $[4,14] \ge 3$. \medskip Finally, let us show that $[4,18] \ge 3$. The subcase $\psi=0$ was handled already, so we can assume that $\psi \ne 0$ and take $\psi \sim x^2$. Again only cases (i) and (ii) need to be considered. Above we already considered the invariant $((\psi_1,\psi_2)_1,\psi^7)_{14}$ of degree 18. In case (ii) this yields $d=0$, leaving only the two variables $a,e$. In case (i) we find $ef^2+de-cf = 0$. Another invariant of degree 18 is $(f.((f,\psi_2)_5,\psi_2)_2,\psi^6)_{12}$. In case (i) this yields $70e^3f^3-120def^4+27cf^5-54e^5+210de^3f-200d^2ef^2-15ce^2f^2+ 30cdf^3+15cde^2-25cd^2f-c^3 = 0$. Both polynomials found are irreducible and hence have no common factor, and we are reduced to a 2-dimensional situation. This proves $[4,18] \ge 3$. \hfill$\Box$\medskip \subsection{$n=8$} For the octavic there are nine basic invariants $i_d$ $(2 \le d \le 10)$. There is a hsop with degrees 2, 3, 4, 5, 6, 7. The Poincar\'e series is \begin{eqnarray*} P(t) & = & 1 + t^2 + t^3 + 2t^4 + 2t^5 + 4t^6 + 4t^7 + 7t^8 + 8t^9 + \\ && 12t^{10} + 13t^{11} + 20t^{12} + 22t^{13} + 31t^{14} + \cdots \\ & = & (1+t^8+t^9+t^{10}+t^{18}) / \prod_{d=2}^7 (1-t^d). \\ \end{eqnarray*} \vskip -0.5cm Given a finite sequence $(d_i)$, the {\em numerator} of $P(t)$ corresponding to this sequence is by definition $P(t) \prod (1-t^{d_i})$. If $(d_i)$ is a subsequence of the sequence of degrees of a hsop, then the corresponding numerator has nonnegative coefficients. This rules out, e.g., the following sequences $(d_i)$. \medskip \begin{tabular}{llll} 2, 2 & 2, 4, 4 & 3, 5, 5 & 5, 5, 5 \\ 3, 3 & 2, 5, 5 & 4, 4, 4 & 2, 3, 7, 7 \\ \end{tabular} \medskip What is wrong with these sequences is that there just aren't enough invariants of these degrees.
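The nonnegativity test behind this table is mechanical. The following sketch (ours, not the program used for the paper) expands the closed form of $P(t)$ as a truncated power series and exhibits a negative numerator coefficient for each listed sequence:

```python
N = 40  # truncation degree; every violation below already shows up by degree 7

def mul(a, b):
    """Product of two power series, truncated at degree N."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(0, N + 1 - i):
                c[i + j] += ai * b[j]
    return c

def geom(d):
    """Series of 1/(1 - t^d), truncated at degree N."""
    return [1 if k % d == 0 else 0 for k in range(N + 1)]

# P(t) = (1 + t^8 + t^9 + t^10 + t^18) / prod_{d=2}^{7} (1 - t^d)
P = [1 if k in (0, 8, 9, 10, 18) else 0 for k in range(N + 1)]
for d in range(2, 8):
    P = mul(P, geom(d))
assert P[:15] == [1, 0, 1, 1, 2, 2, 4, 4, 7, 8, 12, 13, 20, 22, 31]

def numerator(seq):
    c = P
    for d in seq:  # multiply by (1 - t^d)
        c = [c[k] - (c[k - d] if k >= d else 0) for k in range(N + 1)]
    return c

ruled_out = [(2, 2), (3, 3), (2, 4, 4), (2, 5, 5),
             (3, 5, 5), (4, 4, 4), (5, 5, 5), (2, 3, 7, 7)]
for seq in ruled_out:
    assert min(numerator(seq)) < 0  # a negative coefficient appears
print("all eight sequences ruled out")
```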
More interesting are the cases where there are enough invariants, but they cannot be chosen algebraically independent. \begin{Proposition} A sequence of six integers larger than $1$ is the sequence of degrees of a hsop for the octavic if and only if (i) (`divisibility') at least three of them are even, at least two are divisible by $3$, at least one has a factor $4$, at least one a factor $5$, at least one a factor $6$, and at least one a factor $7$, and moreover (ii) (`nonnegativity') none of the eight sequences in the above table occur as a subsequence, and moreover (iii) (`algebraic independence') there are no four elements in any of $\{2,3,6\}$, $\{2,4,5\}$, $\{2,4,7\}$, and no five elements in any of $\{2,3,4,5,11\}$, $\{2,3,4,6,11\}$, $\{2,3,4,7\}$, $\{2,3,4,8\}$, $\{2,3,4,9\}$, $\{2,3,5,6\}$, $\{2,3,6,7,11\}$. \end{Proposition} {\bf Proof}\quad We have \[ I_2 = \langle i_2 \rangle,~~ I_3 = \langle i_3 \rangle,~~ I_4 = \langle i_2^2, i_4 \rangle,~~ I_5 = \langle i_2i_3, i_5 \rangle,~~ I_6 = \langle i_2^3, i_2i_4, i_3^2, i_6 \rangle, \] \[ I_7 = \langle i_2^2i_3, i_2i_5, i_3i_4, i_7 \rangle,~~ I_8 = \langle i_2^4, i_2^2i_4, i_2i_3^2, i_2i_6, i_3i_5, i_4^2, i_8 \rangle , \] \[ I_9 = \langle i_2^3i_3, i_2^2i_5, i_2i_3i_4, i_2i_7, i_3^3, i_3i_6, i_4i_5, i_9 \rangle , \] \[ I_{11} = \langle i_2^4i_3, i_2^3i_5, i_2^2i_3i_4, i_2^2i_7, i_2i_3^3, i_2i_3i_6, i_2i_4i_5, i_2i_9, i_3^2i_5, i_3i_4^2, i_3i_8, i_4i_7, i_5i_6 \rangle . \] \medskip We see that $V(\cup_{a\in A} I_a) = V(\{i_b \mid b \in B\})$ for $A$ and $B$ as in the table below. \medskip \begin{tabular}{cc|cc|cc} $A$ & $B$ & $A$ & $B$ & $A$ & $B$ \\ \hline 2,3,6 & 2,3,6 & 2,3,4,6,11 & 2,3,4,6 & 2,3,5,6 & 2,3,5,6 \\ 2,4,5 & 2,4,5 & 2,3,4,7 & 2,3,4,7 & 2,3,6,7,11 & 2,3,6,7 \\ 2,4,7 & 2,4,7 & 2,3,4,8 & 2,3,4,8 & \\ 2,3,4,5,11 & 2,3,4,5 & 2,3,4,9 & 2,3,4,9 & \\ \end{tabular} \medskip This shows that the given conditions are necessary. For sufficiency, use induction. 
The basis of the induction is provided by the 13 hsops constructed in the next proposition. Given a sequence of six numbers satisfying the conditions, order the numbers in such a way that the last is divisible by 7 and at least one of the last two is divisible by 5. All restrictions concern numbers at most 11, so if we split a number from the sequence into two parts each at least 12, such that the divisibility conditions remain true for the two resulting sequences, then by Lemma \ref{mainlemma} and induction there exists a hsop with the given sequence as degree sequence. This means that one can reduce the first four numbers modulo 12, the fifth modulo 60, and the last modulo 420. It remains to check a $24 \times 24 \times 24 \times 24 \times 72 \times 432$ box, and this is done by a small computer program. \hfill$\Box$\medskip \begin{Proposition} There are precisely $13$ minimal degree sequences of hsops in case $n = 8$, namely \[ \begin{array}{llll} 2,3,4,5,6,7 & 2,3,4,6,9,35 & 2,3,5,6,10,28 \\ 2,3,4,5,8,42 & 2,3,4,7,8,30 & 2,3,5,9,12,14 \\ 2,3,4,5,9,42 & 2,3,4,7,9,30 & 2,4,5,6,8,21 \\ 2,3,4,5,10,42 & 2,3,4,8,9,210 & \\ 2,3,4,6,8,35 & 2,3,5,6,9,28 & \\ \end{array} \] \end{Proposition} {\bf Proof}\quad Minimality is immediately clear, so we only have to show existence. Apply Dixmier's criterion. As before we have to show that for all $p$ and each subsequence $d_1,...,d_p$ of one of these 13 sequences the inequality $[d_1,...,d_p] \ge p$ holds. We can save some work by observing that Shioda \cite{Sh} already showed the existence of a hsop with degree sequence 2, 3, 4, 5, 6, 7. It follows that $[d_1,...,d_p] \ge p$ when (at least) $p$ of the numbers 2, 3, 4, 5, 6, 7 divide some of the $d_i$. For $p=1$, nothing remains to check. For $p=2$, there only remains to show $[9] \ge 2$, and this follows since there are two invariants of degree 9 without common factor, for example $i_3i_6$ and $i_4i_5$. 
For $p=3$, we have to show $[8] \ge 3$, $[2,9] \ge 3$, $[5,9] \ge 3$, $[7,9] \ge 3$, $[10] \ge 3$. For $p=4$, we have to show $[3,8] \ge 4$, $[5,8] \ge 4$, $[7,8] \ge 4$, $[4,9] \ge 4$, $[2,5,9] \ge 4$, $[6,9] \ge 4$, $[2,7,9] \ge 4$, $[8,9] \ge 4$, $[3,10] \ge 4$, $[4,10] \ge 4$, $[9,14] \ge 4$. For $p=5$, we have to show $[3,5,8] \ge 5$, $[6,8] \ge 5$, $[3,7,8] \ge 5$, $[4,5,9] \ge 5$, $[4,6,9] \ge 5$, $[5,6,9] \ge 5$, $[4,7,9] \ge 5$, $[8,9] \ge 5$, $[3,4,10] \ge 5$, $[6,10] \ge 5$, $[5,9,14] \ge 5$. There are no conditions left to check for $p=6$. There remain 27 conditions to check. Let $V[d_1,...,d_p]$ denote the variety defined by all invariants of degrees $d_i$. Split $V[9]$ into two parts depending on whether $i_2$ vanishes or not. Where it does not vanish, all invariants of degrees 3, 5, 7 must vanish. Hence $[5,9], [7,9] \ge [9] \ge \min([2,9], [3,5,7,9])$. Split $[2,9]$ into two parts depending on whether $i_4$ vanishes or not. The first part has $[2,3,4,9] \ge 3$, the second $[2,3,5,9] \ge 3$. Hence $[9] \ge 3$. Similarly, $[8] = [2,4,8] \ge \min([2,3,4,8],[2,4,5,8]) \ge 3$. Finally, $[10] = [2,5,10] \ge \min([2,3,5,10],[2,3,7,10]) \ge 3$. This settles $p=3$. The same argument shows that $[7,8], [2,7,9], [6,9], [3,10], [4,10], [9,14] \ge 4$ and $[5,9,14] \ge 5$. Since adding a single condition diminishes the dimension by at most one, $[3,8] \ge 4$ follows from $[3,5,8] \ge 5$. (Given that $i_2$ vanishes since $i_2^4$ has degree 8, the condition that all invariants of degree 5 vanish is equivalent to the requirement that $i_5$ vanishes.) Similarly $[5,8] \ge 4$ and $[4,9] \ge 4$ and $[2,5,9] \ge 4$ follow from $[3,5,8] \ge 5$ and $[4,5,9] \ge 5$. Trivially, $[8,9] \ge 4$ follows from $[8,9] \ge 5$. This settles $p=4$, assuming the inequalities for $p=5$. \bigskip There remain 10 conditions to check: $[3,5,8] \ge 5$, $[6,8] \ge 5$, $[3,7,8] \ge 5$, $[4,5,9] \ge 5$, $[4,6,9] \ge 5$, $[5,6,9] \ge 5$, $[4,7,9] \ge 5$, $[8,9] \ge 5$, $[3,4,10] \ge 5$, $[6,10] \ge 5$.
Equivalently, for each of the sets $A$, where $A$ is one of \[\begin{array}{@{}lllll@{}} \{2,3,4,5,8\},& \{2,3,4,6,8\},& \{2,3,4,7,8\},& \{2,3,4,5,9\},& \{2,3,4,6,9\},\\ \{2,3,5,6,9\},& \{2,3,4,7,9\},& \{2,3,4,8,9\},& \{2,3,4,5,10\},& \{2,3,5,6,10\}, \end{array}\] \noindent we must have $\dim V(\{i_a\mid a \in A\}) = 1$. For example, we want $\dim V(i_2,i_3,i_4,i_5,i_8) = 1$. Now $i_2,i_3,i_4,i_5$ form part of a hsop, so $V(i_2,i_3,i_4,i_5)$ is irreducible and has dimension 2. Moreover $i_8$ does not vanish identically on $V(i_2,i_3,i_4,i_5)$ as we shall see, and it follows that $\dim V(i_2,i_3,i_4,i_5,i_8) = 1$. This argument works in all cases except that of $V(i_2,i_3,i_4,i_8,i_9)$ and shows that each of the claimed sequences of degrees, with the possible exception of 2, 3, 4, 8, 9, 210, is that of a hsop. In particular, e.g., 2, 3, 4, 5, 8, 42 is the sequence of degrees of a hsop. But now this argument also applies to $V(i_2,i_3,i_4,i_8,i_9)$: $V(i_2,i_3,i_4,i_8)$ is irreducible of dimension 2 and $i_9$ does not vanish identically on it, and it follows that $V(i_2,i_3,i_4,i_8,i_9)$ has dimension 1. \medskip It remains to check the ten conditions that say that $i_8$ does not vanish on any of $V(i_2,i_3,i_4,i_5)$, $V(i_2,i_3,i_4,i_6)$, $V(i_2,i_3,i_4,i_7)$, that $i_9$ does not vanish on any of $V(i_2,i_3,i_4,i_5)$, $V(i_2,i_3,i_4,i_6)$, $V(i_2,i_3,i_5,i_6)$, $V(i_2,i_3,i_4,i_7)$, $V(i_2,i_3,i_4,i_8)$, and that $i_{10}$ does not vanish on $V(i_2,i_3,i_4,i_5)$ or $V(i_2,i_3,i_5,i_6)$. Using Singular we computed the radical of the ideals $(i_2,i_3,i_4,i_5)$, $(i_2,i_3,i_4,i_6)$, $(i_2,i_3,i_4,i_7)$, $(i_2,i_3,i_5,i_6)$ and $(i_2,i_3,i_4,i_8)$ and checked the required facts. (This shows that $i_8$, $i_9$ and $i_{10}$ do not vanish on the 2-dimensional pieces mentioned. Note that these invariants do vanish on various 1-dimensional pieces.
For example, $i_8^2 \in (i_2,i_3,i_4,i_6,i_7)$, so that $i_8$ vanishes on $V(i_2,i_3,i_4,i_6,i_7)$, and $i_8^5 \in (i_2,i_3,i_4,i_5,i_6)$, and $i_{10}^2 \in (i_2,i_3,i_4,i_5,i_6)$ and $i_9^3 \in (i_2,i_3,i_4,i_5,i_6) \cap (i_2,i_3,i_4,i_6,i_7) \cap (i_2,i_3,i_5,i_6,i_7)$.) \hfill$\Box$\medskip
\section{Introduction} ${}^{}$\indent We study the Davenport constant, a central combinatorial invariant which has been investigated since Davenport popularized it in the 1960s; see \cite{GGZ}, \cite{GHK}, \cite{RG} for surveys. We derive a new explicit upper bound for the Davenport constant for groups of rank three. The exact value of the Davenport constant for groups of rank three is still unknown, and this is an open and well-studied problem; see \cite{BGAA}, \cite{BGWS1}, \cite{BGWS2}. \section{Basic notations} ${}^{}$\indent Let $\mathbb{N}$ denote the set of the positive integers (natural numbers). We set \\ $[a,b]=\{x:a\le x\le b,\,\,x\in\mathbb{Z}\},$ where $a,b\in\mathbb{Z}.$ Let $G$ be a non-trivial additive finite abelian group. $G$ can be uniquely decomposed as a direct sum of cyclic groups $C_{n_1}\oplus C_{n_2}\oplus \ldots \oplus C_{n_r}$ with the integers satisfying $1<n_1|\ldots|n_r.$ The number of summands in the above decomposition of $G$ is denoted by $r = r(G)$ and called the rank of $G$. The integer $n_r$ is the exponent $\exp(G).$ In addition, we define $D^*(G)$ as $D^*(G) =1+ \sum\limits_{i=1}^r (n_i- 1)$.\\ We denote by $\mathcal{F}(G)$ the free, abelian, multiplicatively written monoid with basis $G.$ An element $S\in \mathcal{F}(G)$ is called a sequence over $G$. We write any finite sequence $S$ of $l$ elements of $G$ in the form $\prod_{g\in G}g^{\nu_g(S)}=g_1\cdot\ldots\cdot g_l,$ where $l$ is the length of $S$, denoted by $|S|$, and $\nu_g(S)$ is the multiplicity of $g$ in $S.$ The sum of $S$ is defined as $\sigma (S)=\sum_{g\in G}\nu_g(S)g.$\\ Our notation and terminology are consistent with \cite{RG} and \cite{EEGKR}.\\ The Davenport constant $D(G)$ is defined as the smallest natural number $t$ such that each sequence over $G$ of length at least $t$ has a non-empty zero-sum subsequence. Equivalently, $D(G)$ is the maximal length of a zero-sum sequence over $G$ with no proper non-empty zero-sum subsequence.
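For very small groups the definition can be checked directly by exhaustive search. The sketch below (ours, purely illustrative) computes $D(G)$ for $G=C_2^3$ as one plus the maximal length of a zero-sum-free sequence, recovering $D(C_2^3)=4=D^*(C_2^3)$:

```python
from itertools import combinations, combinations_with_replacement

# G = C_2 x C_2 x C_2, written additively as bit-triples
G = [(a, b, c) for a in range(2) for b in range(2) for c in range(2)]
ZERO = (0, 0, 0)

def add(x, y):
    return tuple((u + v) % 2 for u, v in zip(x, y))

def zero_sum_free(seq):
    """True iff no non-empty subsequence of seq sums to zero."""
    for r in range(1, len(seq) + 1):
        for sub in combinations(seq, r):
            total = ZERO
            for g in sub:
                total = add(total, g)
            if total == ZERO:
                return False
    return True

def davenport(bound=5):
    """D(G) = 1 + (maximal length of a zero-sum-free sequence over G)."""
    best = 0
    for length in range(1, bound + 1):
        if any(zero_sum_free(s) for s in combinations_with_replacement(G, length)):
            best = length
    return best + 1

print(davenport())  # 4, which equals D*(C_2^3) = 1 + 3*(2-1)
```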
The best bounds for $D(G)$ known so far are: \begin{equation}\label{xeq1} D^*(G)\le D(G)\le \exp(G)\left(1+\log{\tfrac{|G|}{\exp(G)}}\right). \end{equation} See \cite[Theorem 7.1]{EBKAIII} and \cite[Theorem 1.1]{AGP}. \section{Theorems and definitions} \begin{defin}\label{th8} Let $G$ be an additive finite abelian group and let $m\in\mathbb{N}.$\\ We denote by: \begin{itemize} \item[1.] $D_m(G)$ the smallest natural number $t$ such that every sequence $S$ over $G$ of length $|S|\ge t$ contains at least $m$ disjoint and non-empty subsequences $S_1',S_2',\ldots,S_m'|S$ such that $\sigma(S_i')=0$ for $i\in [1,m].$ \item[2.] $\eta (G)$ the smallest natural number $t$ such that every sequence $S$ over $G$ of length $|S|\ge t$ contains a non-empty subsequence $S'|S$ such that $\sigma(S')=0$ and $|S'|\in [1,\exp(G)].$ \item[3.] $s(G)$ the smallest natural number $t$ such that every sequence $S$ over $G$ of length $|S|\ge t$ contains a non-empty subsequence $S'|S$ such that $\sigma(S')=0$ and $|S'|=\exp(G).$ \end{itemize} \end{defin} \begin{rem}\label{th4} $D_m(G)$ is called the $m$-th Davenport constant and $s(G)$ the Erd\H{o}s-Ginzburg-Ziv constant. In this notation $D(G)=D_1(G).$ See also \cite{FH}, \cite{YGQZ}, \cite{FS10}. \end{rem} \begin{lem}\label{tomerge} Let $G$ be a finite abelian group, $H$ a subgroup of $G$, and $k$~a~natural number. Then \begin{equation}\label{xeq2} D(G)\le D_{D(H)}(G/H), \end{equation} \begin{equation}\label{xeq3} D_{k}(G)\le \exp(G)(k-1)+\eta(G). \end{equation} \end{lem} \begin{proof} See \cite[Remark 3.3.3, Theorem 3.6]{FS10} and \cite[Lemma 6.1.3]{GHK}. \end{proof} \begin{lem}\label{lem01} Let $G$ be a finite abelian group. \begin{enumerate} \item[1.] If $G=C_{n_1}\oplus C_{n_2}$ with $1\le n_1|n_2,$ then $$s(G)=2n_1+2n_2-3,\, \eta(G)=2n_1+n_2-2,\, D(G)=n_1+n_2-1=D^*(G).$$ \item[2.] $D(G)\le \eta(G)\le s(G)-\exp(G)+1.$ \end{enumerate} \end{lem} \begin{proof} See \cite[Theorem 5.8.3, Lemma 5.7.2]{GHK}.
\end{proof} \begin{rem}\label{AlDu} Alon and Dubiner proved that for every natural $r$ and every prime $p$ we have \begin{equation}\label{AD} s(C_p^r)\le c(r) p, \end{equation} where $c(r)$ is recursively defined as follows: \begin{equation}\label{AD1} c(r)=256 r(\log_2 r+5)c(r-1)+(r+1) \,\,\,\mathrm{for}\,\,\, r\ge 2,\,\,c(1)=2. \end{equation} There is a misprint in the corresponding formulas \cite[(6)]{AlD}, \cite[(1.4)]{ChMG}.\\ It should be $c(r)=256 r(\log_2 r+5)c(r-1)+(r+1)$ instead of\\ $c(r)=256 (r\log_2 r+5)c(r-1)+(r+1);$ for more details, see \cite[Remark 3.7]{EEGKR}. Note that $s(C_p^2)=4p-3\le 4p$ (see Lemma \ref{lem01}), thus we can start the recurrence with initial term $c(2)=4$ and get $c(3)<20233.005.$ \end{rem} \begin{rem}\label{AlDu2} The method used in \cite{AlD} yields that for every natural number $r\ge 1,$ there exists $a_r>0$ such that for every natural number $n$ we have \begin{equation}\label{xeq5g} \eta (C_n^r)\le a_r(n-1)+1. \end{equation} We identify $a_r$ with its smallest possible value. It is known that \begin{equation}\label{xeq5c} 2^r-1\le a_r \le (cr\log{r})^r, \end{equation} where $c>0$ is an absolute constant. We know also that $a_1=1,\,a_2=3.$\\ See \cite{BGAA} and Lemma \ref{lem01}. \end{rem} \begin{thm}\label{thE14} \textnormal{(Edel, Elsholtz, Geroldinger, Kubertin, Rackham \cite[Theorem 1.4]{EEGKR})} Let $G=C_{n_1}\oplus\ldots\oplus C_{n_r}$ with $r=r(G)$ and $1<n_1|\ldots|n_r.$\\ Let $b_1,\ldots, b_r\in\mathbb{N}$ be such that for all primes $p$ with $p|n_r$ and all $i\in[1,r],$\\ we have $s(C_p^i)\le b_i(p-1)+1.$ Then \begin{equation}\label{sssG} s(G)\le \sum\limits_{i=1}^r(b_{r+1-i}-b_{r-i})n_i-b_r+1, \end{equation} where $b_0=0.$ In particular, if $n_1=\ldots=n_r=n,$ then $s(G)\le b_r(n-1)+1.$ \end{thm} \begin{lem}\label{th13a} Let $n\ge 2$ be a natural number. Then \begin{equation}\label{xeq5b} \eta (C_n^3)\le 20369(n-1)+1 \mathrm{\,\,\,\,and\,\,\,\,} s(C_n^3)\le 20370(n-1)+1.
\end{equation} Therefore $a_3\le 20369.$ \end{lem} \begin{proof} For every finite abelian group $G$, by \cite[Theorem 5.7.4]{GHK} we have \begin{equation}\label{GaoYang} s(G)\le |G|+\exp(G)-1. \end{equation} Thus, if $p$ is a prime number such that $2\le p\le p_{34}=139$, then \begin{equation}\label{fff1} s(C_p^3)\le p^3+p-1< 20370(p-1)+1. \end{equation} Assume now that $p$ is a prime number such that $p\ge p_{35}=149.$\\ By Remark \ref{AlDu} we have $s(C_p^3)< 20233.005 \,p<20370(p-1)+1,$ since $p\ge 149.$ Therefore for all primes $p$ we have $s(C_p^3)< 20370(p-1)+1.$\\ By Lemma \ref{lem01} we also have $s(C_p)=2(p-1)+1$ and $s(C_p^2)=4(p-1)+1$\\ for all primes $p.$\\ Hence by Theorem \ref{thE14} we obtain the upper bound $$s(C_n^3)\le 20370(n-1)+1,$$ for all natural $n\ge 2.$ Thus, by Lemma \ref{lem01} we obtain $$\eta(C_n^3)\le 20369(n-1)+1,$$ for all natural $n\ge 2.$ \end{proof} \begin{thm}\label{th14a} Let $G=C_{n_1}\oplus C_{n_2}\oplus C_{n_3}$ be an abelian group with\\ $1<n_1|n_2|n_3$ and $n_1,n_2,n_3\in\mathbb{N}.$ Then, with the absolute constant $a_3\le 20369$ of Remark \ref{AlDu2}, we have: \begin{equation}\label{xeq5} D(C_{n_1}\oplus C_{n_2}\oplus C_{n_3})\le (n_1-1)+(n_2-1)+(n_3-1)+1+ (a_3-3)(n_1-1). \end{equation} \end{thm} \begin{proof} This proof is built on a well-known strategy. Let $G$ be a non-trivial finite abelian group $C_{n_1}\oplus C_{n_2}\oplus C_{n_3}$ such that $1<n_1|n_2|n_3\in\mathbb{N}$.
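The numerical ingredients of the proof of Lemma \ref{th13a} are easy to verify by machine (an illustrative check, not part of the proof): the corrected Alon--Dubiner recurrence with seed $c(2)=4$ gives $c(3)<20233.005$; the Gao--Yang bound $p^3+p-1$ stays below $20370(p-1)+1$ for all primes $p\le 139$ but already fails at the next prime, $149$, which is exactly where the Alon--Dubiner bound $20233.005\,p$ takes over.

```python
import math

def c(r):
    """Alon-Dubiner constant via the corrected recurrence
    c(r) = 256 r (log2 r + 5) c(r-1) + (r+1), seeded with c(2) = 4
    (which uses s(C_p^2) = 4p - 3 <= 4p)."""
    if r == 2:
        return 4.0
    return 256 * r * (math.log2(r) + 5) * c(r - 1) + (r + 1)

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

target = lambda p: 20370 * (p - 1) + 1

assert 20233.004 < c(3) < 20233.005
# 139 and 149 are indeed the 34th and 35th primes:
assert primes_up_to(150)[33:35] == [139, 149]
# The Gao-Yang bound handles every prime up to 139 ...
assert all(p**3 + p - 1 < target(p) for p in primes_up_to(139))
# ... but not 149, where the Alon-Dubiner bound takes over:
assert 149**3 + 149 - 1 >= target(149)
assert c(3) * 149 < target(149)
```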
We have $\exp(G)=n_3.$ Denote by $H$ a~subgroup of $G$ such that \begin{equation}\label{xeq6} H\cong C_{\tfrac{n_2}{n_1}}\oplus C_{\tfrac{n_3}{n_1}}, \end{equation} where $\tfrac{n_2}{n_1},\tfrac{n_3}{n_1}\in\mathbb{N}.$ Then the quotient group satisfies $G/H\cong C_{n_1}^3.$\\ By Lemma \ref{tomerge} we get \begin{equation}\label{xeq7} D(G)\le D_{D(H)}(G/H)=D_{\tfrac{n_2}{n_1}+\tfrac{n_3}{n_1}-1}(C_{n_1}^3), \end{equation} since $D(H)=\tfrac{n_2}{n_1}+\tfrac{n_3}{n_1}-1$ (see Lemma \ref{lem01}).\\ By Lemma \ref{tomerge} and (\ref{xeq5g}), \begin{equation}\label{xeq8} \begin{split} D(G)&\le \exp(C_{n_1}^3)(\tfrac{n_2}{n_1}+\tfrac{n_3}{n_1}-2)+\eta(C_{n_1}^3)\le \\ &\le n_1(\tfrac{n_2}{n_1}+\tfrac{n_3}{n_1}-2)+a_3(n_1-1)+1=\\ &= (n_1-1)+(n_2-1)+(n_3-1)+1+ (a_3-3)(n_1-1), \end{split} \end{equation} where $a_3$ is the constant of Remark \ref{AlDu2}. By (\ref{xeq5c}) and (\ref{xeq5b}) we obtain $a_3\le 20369.$ \end{proof} \begin{rem}\label{th16} Let $1<n_1|n_2|n_3\in\mathbb{N}.$ By Theorem \ref{th14a} we have \begin{equation}\label{xeq9b} D(C_{n_1}\oplus C_{n_2}\oplus C_{n_3})\le 20367(n_1-1)+(n_2-1)+(n_3-1)+1. \end{equation} If $n_3>\tfrac{20367(n_1-1)+n_2-1}{\log{n_1}+\log{n_2}}$, then the upper bound in (\ref{xeq9b}) is smaller than the upper bound from (\ref{xeq1}). See also \cite{ChMG}. \end{rem} \begin{cor}\label{nowyap} Let $n\ge 2$ be a natural number, and let $\omega(n)$ denote the number of distinct prime factors of $n.$ Then \begin{equation} 3(n-1)+1\le D(C_n^3)\le \min\{20369,3^{\omega(n)}\}(n-1)+1.
\end{equation} \end{cor} \begin{proof} Taking into account the inequality (\ref{xeq1}) and using Theorem \ref{th14a}, we obtain $$3(n-1)+1\le D(C_n^3)\le 20369(n-1)+1.$$ By \cite[Theorem 1.2]{ChMG}, we get $$D(C_n^3)\le 3^{\omega(n)}(n-1)+1.$$ \end{proof} Under the assumption that the conjecture of Gao and Thangadurai \cite[Conjecture 0]{GT} is valid, we can surmise that $a_3=8.$ Thus, it seems reasonable to propose the following conjecture: \begin{hip}\label{th15} Let $G$ be an abelian group $C_{n_1}\oplus C_{n_2}\oplus C_{n_3}$ such that $1<n_1|n_2|n_3\in\mathbb{N}.$ Then \begin{equation}\label{xeq9} D^*(G)\le D(G)\le D^*(G)+5(n_1-1), \end{equation} where $D^*(G)=n_1+n_2+n_3-2.$ \end{hip} We conclude with an application of Theorem \ref{th14a}. If $F$ is a~set of prime numbers, then we shall refer to a positive integer each of whose prime factors belongs to $F$ as smooth over $F$. Smooth numbers are related to the quadratic sieve and are important in cryptography, where they appear in the fastest known integer factorization algorithms. Let $|F|=r.$ We denote by $c(n,r)$ the least positive integer $t$ such that any sequence $S$ of length $t$ of smooth integers over $F$ has a non-empty subsequence $S'$ such that the product of all the terms of $S'$ is an $n$th power of an integer. It is known that $c(n,r)=D(C_n^r)$; see \cite[Theorem 1.6]{ChMG}. Thus by Corollary \ref{nowyap} we obtain: \begin{thm} If $n\ge 2$ is an integer, then \begin{equation} c(n,3)\le \min\{20369,3^{\omega(n)}\}(n-1)+1. \end{equation} \end{thm} \normalsize \baselineskip=17pt
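The translation behind $c(n,r)=D(C_n^r)$ maps a smooth number to its vector of prime exponents reduced modulo $n$: a subsequence whose product is an $n$th power corresponds exactly to a zero-sum subsequence in $C_n^r$. An illustrative toy example with $F=\{2,3,5\}$ and $n=2$ (the particular numbers below are arbitrary choices):

```python
from itertools import combinations
from math import isqrt, prod

F, n = [2, 3, 5], 2          # prime set and target power

def exponent_vector(m):
    """Exponent vector of m over F, reduced mod n; m must be F-smooth."""
    v = []
    for p in F:
        e = 0
        while m % p == 0:
            m //= p
            e += 1
        v.append(e % n)
    assert m == 1, "not smooth over F"
    return tuple(v)

seq = [6, 10, 15, 2, 3]      # smooth over {2, 3, 5}
found = None
for L in range(1, len(seq) + 1):
    for idx in combinations(range(len(seq)), L):
        vec = tuple(sum(exponent_vector(seq[i])[j] for i in idx) % n
                    for j in range(len(F)))
        if vec == (0,) * len(F):
            found = [seq[i] for i in idx]
            break
    if found:
        break

# The subsequence 6, 10, 15 has product 900 = 30^2, a perfect square.
square = prod(found)
assert isqrt(square) ** 2 == square
```

Since $D(C_2^3)=4$, any four smooth terms over a three-element prime set already guarantee such a subsequence.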
\section{Introduction} The search for the possibility of realizing the simplest non-Abelian anyon, the Majorana zero mode\cite{Read2000, Kitaev2001}, in the vortex state of topological superconductors has been one of the hottest topics in condensed matter physics. One way to achieve such a topological superconductor is to search for superconductors with broken time reversal symmetry (TRS), implying the possible coexistence of ferromagnetism and superconductivity. Such coexistence has been reported in several materials, including the heavy fermion compounds UGe$_2$\cite{Saxena2000}, URhGe\cite{Aoki2001} and UCoGe\cite{Huy2007}, as well as Sr$_2$RuO$_4$. Among them, Sr$_2$RuO$_4$ is perhaps the most extensively studied candidate for a chiral p-wave superconductor, in which anomalous responses to external magnetic fields and the appearance of half flux quanta have been reported. However, the lack of clear evidence for a chiral edge current, a key signature of chiral superconductivity, has made the claim of Sr$_2$RuO$_4$ being a p-wave superconductor less conclusive.\\ An alternative approach is to make artificial structures to realize such TRS-broken superconductivity. Early proposals focused on the proximity effect between an s-wave superconductor and a ferromagnetic metal\cite{Efetov2001, Jonson2001}, achieved by artificially fabricating superconductor/ferromagnet (S/F) heterostructures. Numerous peculiar behaviors have been found in such systems, such as spatial oscillations of the electronic density of states, an oscillatory superconducting transition temperature, and the $\pi$ phase in Josephson junctions, to name a few. Recently, with the emergence of topological insulators, it has been proposed to use the spin orbit coupled surface state of magnetic-impurity-doped topological insulators, or semiconductors with strong spin orbit interactions, placed in close proximity to a conventional superconductor and combined with an external magnetic field, to achieve topological superconductivity.
A zero bias anomaly\cite{Ng2009} in the tunneling differential conductance obtained from transport measurements is viewed as one of the important indications of Majorana zero modes. This zero bias anomaly has been experimentally observed in several candidate systems, including the point contact Andreev reflection spectra on the bismuth surface of epitaxially grown bismuth/nickel (Bi/Ni) bilayer thin films observed by X. X. Gong et al.\cite{Jin2015} in 2015.\\ The Bi/Ni bilayer film is viewed as one of the S/F heterostructures\cite{Moodera1990,Heiman2005}, but it is not a conventional S/F heterostructure, as neither crystalline Bi nor Ni becomes superconducting above 1 K at ambient pressure. Bulk crystalline Bi at ambient pressure enters a superconducting phase\cite{Prakash2017} at temperatures below $0.53$ mK due to its low carrier density. Making Bi amorphous or polycrystalline enhances its carrier density of states and the electron phonon interactions, bringing the superconducting transition temperature $T_c$ up to $5\sim6$ K at ambient pressure. A similar enhancement is also achieved by placing a single crystal under high pressure. Ni is a weak ferromagnetic metal, which shows no trace of superconductivity down to any measurable temperature. In 1990, J. S. Moodera and R. Meservey\cite{Moodera1990} found that growing a thin film of Bi on top of a Ni thin film makes the Bi side superconduct with optimal $T_c\simeq 4$ K. In 2015, X. X. Gong et al.\cite{Jin2015} found a zero bias anomaly, which persists even under high magnetic field, in their transport measurements on the Bi/Ni thin film. \\ In our model, we assume that the observed superconductivity in Bi/Ni is of the conventional phonon-mediated type and exists in the bulk of the Bi thin film.
This is supported by previous experimental reports\cite{Pratap2015,Chao2017,Liu2018} showing traces of the alloy Bi$_3$Ni formed throughout the Bi layer, even though this randomly distributed amount of alloy may not be significant enough to distort the X-ray diffraction images. That is, those diffusively formed random impurities do not significantly blur the rhombohedral structure in the Bi layer observed using transmission electron microscopy or X-ray diffraction. This Bi$_3$Ni alloy has a superconducting transition temperature around 4 K and is a type II superconductor\cite{Awana2011}. These properties explain why this Bi/Ni superconductivity can persist with Ni thickness increasing up to around $1/5$ of the Bi thickness, and why the maximal transition temperature in this bilayer is around 4 K. The remaining puzzle is then the zero bias anomaly seen in the Andreev reflection signals\cite{Jin2015} observed on the Bi surface away from the Bi/Ni interface. We propose a simple theoretical model to explore the physical parameter regime that can realize this effective p-wave superconductivity on the Bi surface in this bilayer system. The mechanism is very similar to the effective p-wave superconductivity on a semiconductor surface with strong Rashba spin orbit coupling in close proximity to a conventional superconductor\cite{Sau2010,Jason2010,Kane2008}. We also summarize other possible mechanisms or explanations for the zero bias anomaly seen in this Bi/Ni bilayer system.\\ The rest of the paper is organized as follows. In section \ref{sect2} we briefly survey the literature relevant to this Bi/Ni bilayer and summarize its claims. In section \ref{sect3} we propose a simple model to search for the physical parameters which can realize the possible time-reversal-broken $p\pm ip$ superconductivity in this Bi/Ni bilayer. Alternative explanations for the zero bias anomaly are also provided at the end of that section.
In section \ref{sect4} we summarize our results and suggest further experiments to explore this interesting Bi/Ni bilayer. \section{Literature survey}\label{sect2} Back in 1990, tunneling experiments on the Bi/Ni bilayer done by J. S. Moodera and R. Meservey\cite{Moodera1990} showed that only Bi grown on top of Ni exhibits superconductivity, with $T_c\simeq 2\sim 4$ K, while the reverse growth order does not enter the superconducting phase. Simultaneous growth of Bi and Ni does not give superconductivity, and thus superconductivity of the Bi$_3$Ni alloy was ruled out. Based on this result they suspected that the superconductivity is caused by a novel fcc structure, judged from the X-ray diffraction (XRD) patterns, grown on top of Ni.\\ However, their interpretation of the XRD data is controversial. The XRD data could also be explained by the common rhombohedral phase with the surface oriented along the (110) direction instead of the novel fcc structure, as pointed out by J. A. van Hulst et al.\cite{Jaeger93} Recent experiments\cite{Jin2015,Zhou2017,Chao2017} with similar sample growth conditions show that the order of growth does not change the superconducting properties. The Bi surface orientation away from the Bi/Ni interface for thinner Bi is (110), while it changes to (111) for Bi thicker than 20 nm, regardless of the order of growth. The reason for this discrepancy is not known, but it is suspected to be the better control (better vacuum conditions or lower substrate temperatures) over sample growth in present-day setups. \\ The tunneling transport and magnetic susceptibility measurements in Ref.~\onlinecite{Moodera1990} indicate that the superconductivity in Bi/Ni is a strong- to intermediate-coupling ($2\Delta/k_BT_c\simeq 4$) type II s-wave superconductivity with upper critical fields up to a few tesla. Anisotropy in the critical field and the tunneling measurements indicate that this thin-film superconductivity is not limited to the Bi/Ni interface but spreads out within the Bi layer.
The normal state resistance of Bi/Ni is shown to be metallic (the resistance drops with lowering temperature), which is different from the insulating behavior seen in pure Bi thin films\cite{Jin2015}. Ferromagnetism in the Ni layer is reduced in Bi/Ni compared with standalone Ni\cite{Moodera1990,Heiman2005,Chao2017}. In 2015, the superconductivity was claimed to be p-wave-like rather than s-wave, based on the Andreev reflection signal shown in Ref.~\onlinecite{Jin2015}.\\ Artificially synthesized Bi$_3$Ni is shown to be a type II superconductor\cite{Awana2011,Zhao2018} with $T_c\simeq 4$ K. The measured (bulk) upper critical field is of the order of $10^{-1}$ T. Making Bi$_3$Ni in the form of a thin film is expected to enhance its critical field. Since spectroscopy data\cite{Pratap2015,Chao2017} show the formation of Bi$_3$Ni alloys, we suggest that the superconductivity seen in the Bi/Ni bilayer comes from the diffusively formed Bi$_3$Ni alloys. This viewpoint is also supported by the interesting experimental work done by L. Y. Liu et al.\cite{Liu2018} In their work, not only Bi$_3$Ni but also another alloy, BiNi (with superconducting transition temperature $4.25$ K), contributes to the superconductivity seen in the Bi/Ni bilayer. Due to the different growth methods (mainly pulsed laser deposition (PLD) in their work), the Ni ions have different spatial distributions in their samples compared with those of other groups. Albeit with very similar transition temperatures, those alloys have very different magneto-responses\cite{Liu2018}; nevertheless, the superconductivity in their samples arises from the formation of superconducting alloys.
The remaining question then is whether we could also see the zero bias conductance peak in the point contact measurement, suggesting unconventional superconductivity in this Bi/Ni bilayer.\\ \underline{\textit{Update after posting this paper on the arXiv:}}\\ Soon after this paper was posted on the arXiv, an experimental report on the superconductivity in the Bi/Ni bilayer was published by N. P. Armitage et al.\cite{Armitage2019} They use time-domain THz spectroscopy to measure the low energy electrodynamic response of a Bi/Ni thin film. From their analysis, the superconductivity is found to be fully gapped and to develop over the entire bilayer. Their experimental results are consistent with s-wave bulk superconductivity in this bilayer system. \\ \section{Theoretical proposal for unconventional superconductivity}\label{sect3} If the superconductivity observed in the Bi/Ni bilayer is due to alloy formation, it should be conventional phonon-mediated s-wave superconductivity. We claim that, based on the theoretical model presented here, it is still possible to observe the two-dimensional p-wave-like superconductivity, as seen in Ref.~\onlinecite{Jin2015}, on the Bi surface of these samples.\\ The basic idea is very similar to proximity-induced topological superconductivity using a conventional s-wave superconductor in contact with a strong spin orbit interaction material under an external magnetic field (or coupled with a ferromagnetic insulator)\cite{Sau2010,Jason2010}. Bi thin films are known to have a robust metallic surface state\cite{Jin2012} and strong Rashba spin orbit interaction\cite{Hofmann2004,Hasegawa2006} on the surface. The Bi$_3$Ni alloy provides the platform for conventional type II s-wave superconductivity. The required magnetic field\cite{Sau2010} is provided by the nickel thin film. This scenario is illustrated in Fig.~\ref{plot1}.
Thus we have all the ingredients needed for realizing topological superconductivity in this Bi/Ni bilayer.\\ Below we present our model Hamiltonian and the details of our theoretical results. The assumptions made in this model are that the spin orbit coupled surface state is not destroyed by the formation of a few randomly distributed alloys within the Bi layer, and that the chemical potential of the sample is shifted to the region where topological superconductivity can be realized. The assumption of a spin orbit coupled surface state (not protected by band topology) on the Bi surface away from the interface is backed by the nice crystalline structure seen in the XRD and TEM\cite{Jin2015,Chao2017} of the Bi layer of the Bi/Ni bilayer, and by the fact that the edge state property is not influenced by the local matrix properties away from the top surface in the tight binding model\cite{Aono2016}. However, the existence of the spin orbit coupled surface state in the normal state should be checked by other surface probes such as angle-resolved photoemission spectroscopy (ARPES) or spin-resolved scanning tunneling microscopy (STM). \subsection{Model Hamiltonian for $p\pm ip$ superconductivity on the surface of Bi} Bi thin films have been shown to have a robust metallic surface state, and the bulk state changes from semi-metallic to insulating as the thickness decreases\cite{Jin2012}. In forming the Ni/Bi interface, the smaller size of Ni allows Ni atoms to flow into the Bi layer, forming the superconducting alloy Bi$_3$Ni, which has an optimal critical temperature around 4 K. This alloy formation also serves as effective doping, leading to a shift of the chemical potential. This is reflected in the normal state resistance seen in Fig.~1 of Ref.~\onlinecite{Jin2015} and similarly in Ref.~\onlinecite{Aono2016}. Changes in the normal state charge carriers in the Bi/Ni bilayer compared with the Bi thin film, observed using Hall bar measurements, also support this change in the chemical potential.
At higher temperatures the resistance goes up, rather than coming down as in the pure Bi thin film\cite{Jin2012}. This effective doping makes the Bi/Ni bilayer metallic rather than insulator-like, the latter being the case for the pure Bi thin film\cite{Jin2012}.\\ \begin{figure} \centering \includegraphics[width=0.5\textwidth]{illustration1.eps} \caption{\label{plot1} Sketch of the Bi/Ni bilayer. The magnetization of Ni, shown as a red arrow, is mostly in plane, and the magnetic field provided by this Ni layer on the Bi surface away from the interface is depicted by the black arrows. The field orientation at the Bi surface is also mostly in-plane. Alloys (mostly Bi$_3$Ni and a few BiNi) are formed with higher concentration near the interface in the layer-by-layer epitaxial growth\cite{Pratap2015,Chao2017}, but the formation can vary with different growth techniques\cite{Liu2018}. At the top (Bi) surface of the figure, the spin orbit coupled surface state of Bi is assumed to be intact. The effective Hamiltonian for the Bi surface away from the interface is described by Eq.(\ref{m1}) to Eq.(\ref{m4}).} \end{figure} Supposing that the alloys form mostly near the Bi-Ni interface, the crystalline structure of bismuth away from the interface region will not be significantly modified. This claim is backed by the nice XRD data and the traces of alloys seen in the experiments\cite{Pratap2015}, although the actual distribution of the alloys may depend on the details of the growth procedure\cite{Liu2018}.
Under this assumption, we can treat the alloys as a few random impurities in the region of bismuth away from the interface, and the effect of these random impurities only modifies the chemical potential without hampering the surface states with strong Rashba spin orbit coupling.\\ The effective Hamiltonian for the surface state of Bi with surface oriented in the (111) direction\cite{Aono2016}, in proximity with the bulk superconducting alloy in the Bi/Ni thin film, can be written as\cite{Jason2010} \begin{eqnarray}\label{m1} H&=&H_{Bi(s)}+H_{Z}+H_{Sc}\\\nonumber H_{Bi(s)}&=&\int d^2 r \psi^{\dagger}\Big[-\left(\frac{\partial_x^2}{2m_x}+\frac{\partial_y^2}{2m_y}\right)-\mu-i(\alpha_x\sigma^x\partial_y\\\nonumber &&\quad \quad \quad-\alpha_y\sigma^y\partial_x)-i\alpha_z\sigma^z\Big((\partial_x+i\tilde{\beta}\partial_y)^3\\\label{m2} &&\quad \quad \quad +(\partial_x-i\tilde{\beta}\partial_y)^3\Big)\Big]\psi\\\label{m3} H_{Z}&=&\int d^2 r \psi^{\dagger}\left(\vec{h}\cdot\vec{\sigma}\right)\psi\\\label{m4} H_{Sc}&=&\int d^2 r\left(\Delta\psi^{\dagger}_{\uparrow}\psi^{\dagger}_{\downarrow}+\Delta^{\ast}\psi_{\downarrow}\psi_{\uparrow}\right) \end{eqnarray} The Hamiltonian for the Bi surface, $H_{Bi(s)}$ in Eq.(\ref{m2}), describes the low energy dispersion of bismuth up to cubic order near the $\Gamma$ point. This includes the usual quadratic kinetic energy, the linear-in-momentum Rashba spin orbit couplings due to broken inversion symmetry on the surface, and the cubic warping terms which result from the hexagonal lattice of bismuth. The $2\times 2$ Pauli matrices $\sigma_i$ act on the spin degree of freedom in $\psi(r)=\begin{pmatrix}\psi_{\uparrow}(r)\\ \psi_{\downarrow}(r)\end{pmatrix}$. This low energy effective Hamiltonian around the $\Gamma$ point for the (111) orientation is similar to that for the (110) orientation\cite{Koroteev2008,Hirahara2007,Pascual2004}, which is seen for thinner samples as in Ref.~\onlinecite{Jin2015}, with some parameter changes.
Eq.(\ref{m3}) stands for the Zeeman field generated by the nickel thin film on the bismuth surface, and Eq.(\ref{m4}) is the proximity-induced superconductivity on the surface state of the bismuth thin film. The orbital term from the magnetic field is not included because the magnetic field generated by the nickel film is oriented in the in-plane direction of the Bi/Ni bilayer. The magnitude of this in-plane field decreases with increasing thickness of the bismuth film (roughly proportional to the inverse third power of the thickness, if we treat the nickel film as a bar magnet).\\ The proximity-induced superconducting pairing amplitude $\Delta$ also decreases with increasing thickness of Bi (with the Ni thickness fixed), as the parent superconductor is formed by the Bi$_3$Ni alloy, whose formation is limited by the diffusive motion of nickel in bismuth. Both of these factors contribute to the disappearance of $p\pm ip$ superconductivity on the bismuth surface as we increase the thickness of the bismuth film. If we choose a thinner bismuth film, the magnetic field generated by the nickel could be large enough to kill the superconductivity of the Bi$_3$Ni alloy. Following these arguments, we see that for a given thickness of the nickel film there could be only a limited range of thicknesses of the bismuth film giving rise to superconductivity within the bulk of the bismuth layer, which is consistent with the observations in Ref.~\onlinecite{Jin2015}.\\ We further simplify Eq.(\ref{m2}) by rescaling $\partial_x\rightarrow(m_x/m_y)^{1/4}\partial_x$ and $\partial_y\rightarrow(m_y/m_x)^{1/4}\partial_y$.
After this rescaling $H_{Bi(s)}$ becomes: \begin{eqnarray}\nonumber H_{Bi(s)}&=&\int d^2 r \psi^{\dagger}\Big[-\frac{\nabla^2}{2m^*}-\mu-i\lambda_R(\sigma_x\partial_y-\gamma\sigma_y\partial_x)\\\label{bis}&+&i\lambda_D\sigma_z(\partial_x^3-3\beta\partial_x\partial_y^2)\Big]\psi \end{eqnarray} Here $m^*=\sqrt{m_xm_y}$, and the spin orbit coupling related parameters are $\lambda_D=2\alpha_z (m_x/m_y)^{3/4}$, $\beta=\tilde{\beta}^2 (m_x/m_y)$, $\gamma=(\alpha_y/\alpha_x)\sqrt{m_x/m_y}$, and $\lambda_R=\alpha_x(m_y/m_x)^{1/4}$. Eq.(\ref{bis}) is very similar to the low energy Hamiltonian describing the (110) quantum well mentioned in Ref.~\onlinecite{Jason2010}, with the linear-momentum-dependent Dresselhaus term of Ref.~\onlinecite{Jason2010} replaced by the cubic warping terms. Thus the physics leading to topological superconductivity, with the in-plane magnetic field provided by the nickel film, is basically the same as that of the topological superconductivity formed in a (110) quantum well with Rashba and Dresselhaus interactions under an in-plane magnetic field and in contact with an s-wave superconductor\cite{Jason2010}.\\ We rewrite the full Hamiltonian $H$ in momentum space and use the diagonalized bases of $H_{Bi(s)}+H_{Z}$ by setting $\psi(\vec{k})=\begin{pmatrix}\psi_{\uparrow}(\vec{k}) \\\psi_{\downarrow}(\vec{k})\end{pmatrix}=\phi_-(\vec{k})\psi_-(\vec{k})+\phi_+(\vec{k})\psi_+(\vec{k})$. Here $\phi_{\pm}(\vec{k})$ are $2\times 1$ matrices, and $\psi_{\pm}(\vec{k})$ are the fermion annihilation operators for the upper/lower bands. This is done in the same way as in the Sau-Lutchyn-Tewari-Das Sarma proposal for realizing topological superconductivity\cite{Sau2010}, whose results we summarize in Appendix A. Here the explicit forms of $\phi_{\pm}(\vec{k})$ are not as illuminating as in the case shown in Appendix A, so we do not display them.
Following this basis change, we get: \begin{eqnarray}\nonumber H&=&\int d^2 \vec{k} \Big[\left(\bar{\epsilon}_+(\vec{k})\psi^{\dagger}_+(\vec{k})\psi_+(\vec{k})+\bar{\epsilon}_-(\vec{k})\psi^{\dagger}_-(\vec{k})\psi_-(\vec{k})\right)\\\nonumber&+&\Big(\Delta_{+-}(\vec{k})\psi^{\dagger}_+(\vec{k})\psi^{\dagger}_-(-\vec{k}) +\Delta_{++}(\vec{k})\psi^{\dagger}_+(\vec{k})\psi^{\dagger}_+(-\vec{k})\\\label{fullH}&+&\Delta_{--}(\vec{k})\psi^{\dagger}_-(\vec{k})\psi^{\dagger}_-(-\vec{k})+h.c.\Big)\Big] \end{eqnarray} The upper/lower band energies $\bar{\epsilon}_{\pm}(\vec{k})$ are given by \begin{widetext}\begin{eqnarray}\nonumber &&\bar{\epsilon}_{\pm}(\vec{k})=\frac{k^2}{2m^*}-\mu\pm\delta\epsilon(\vec{k})\quad,\\\label{eppm} &&\delta\epsilon(\vec{k})=\sqrt{(\gamma\lambda_Rk_x-h_y)^2+(\lambda_D(k_x^3-3\beta k_xk_y^2)+h_z)^2+(\lambda_Rk_y+h_x)^2}. \end{eqnarray}\end{widetext} In this band basis, the $s$-wave-like interband pairing strength $|\Delta_{+-}(\vec{k})|$ and the $p\pm ip$-wave-like intraband pairings ($|\Delta_{++}(\vec{k})|$ or $|\Delta_{--}(\vec{k})|$) are expressed as: \begin{widetext} \begin{eqnarray} &&|\Delta_{+-}(\vec{k})|^2=\frac{\Delta^2}{2}\left[1-\frac{\lambda_D^2(k_x^3-3\beta k_xk_y^2)^2+\gamma^2\lambda_R^2k_x^2+\lambda_R^2k_y^2-(h_x^2+h_y^2+h_z^2)}{\delta\epsilon(\vec{k})\delta\epsilon(-\vec{k})}\right],\\\nonumber &&|\Delta_{++}(\vec{k})|^2=|\Delta_{--}(\vec{k})|^2=\frac{\Delta^2}{8}\left[1+\frac{\lambda_D^2(k_x^3-3\beta k_xk_y^2)^2+\gamma^2\lambda_R^2k_x^2+\lambda_R^2k_y^2-(h_x^2+h_y^2+h_z^2)}{\delta\epsilon(\vec{k})\delta\epsilon(-\vec{k})}\right].
\end{eqnarray} \end{widetext} Solving the full Bogoliubov-de Gennes Hamiltonian obtained from Eq.(\ref{fullH}) with uniform $\Delta$, we get \begin{eqnarray}\nonumber &&E_{\pm}(\vec{k})^2=4|\Delta_{++}(\vec{k})|^2+|\Delta_{+-}(\vec{k})|^2+\frac{\bar{\epsilon}_+(\vec{k})^2+\bar{\epsilon}_-(\vec{k})^2}{2}\\ &&\pm |\bar{\epsilon}_+(\vec{k})-\bar{\epsilon}_-(\vec{k})|\sqrt{|\Delta_{+-}(\vec{k})|^2+\Big(\frac{\bar{\epsilon}_+(\vec{k}) +\bar{\epsilon}_-(\vec{k})}{2}\Big)^2} \end{eqnarray} We concentrate on the lower branch $E_-(\vec{k})$, assuming the chemical potential of the bismuth surface state is lowered to be around $\frac{k^2}{2m^\ast}-\delta\epsilon(\vec{k})$, with $\vec{k}$ around the $\Gamma$ point in momentum space. The lowering of the chemical potential could come from the effective doping due to the formation of random Bi$_3$Ni alloy impurities within the bulk of bismuth. This is suggested by the different temperature dependences of the normal state resistance of pure bismuth\cite{Jin2012} and Bi/Ni\cite{Jin2015} thin films. The minimum of $E_-(\vec{k})$ around the $\Gamma$ point determines the superconducting gap of the surface state, denoted as $\mathcal{E}_g$ in Fig.\ref{ph1} and Fig.\ref{ph2}. The change in $\mathcal{E}_g$, computed numerically, is used to explore the stability conditions of the various topological and non-topological phases evaluated at zero temperature, as shown in Fig.\ref{ph1} and Fig.\ref{ph2}. The finite temperature phase diagram can be obtained by constructing the Helmholtz free energy.
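As a sanity check on the expressions above, note that the pairing amplitudes obey the exact sum rule $|\Delta_{+-}(\vec{k})|^2+2|\Delta_{++}(\vec{k})|^2+2|\Delta_{--}(\vec{k})|^2=\Delta^2$ for every $\vec{k}$. The short numerical sketch below (illustrative parameter values only, loosely following the magnitudes of Sec.~\ref{mpr} in units where $a=1$; it is not the code used to produce Fig.\ref{ph1} and Fig.\ref{ph2}) implements $\delta\epsilon(\vec{k})$, the pairing amplitudes, and the lower Bogoliubov branch $E_-(\vec{k})$, and verifies the sum rule at random momenta.

```python
import math, random

# Illustrative parameters (eV-based units, a = 1); the values are assumptions
m_star, mu = 1.0, 0.0
lam_R, lam_D, gamma, beta = 0.05, 0.8, 1.0, 1.0
h = (0.0, 0.002, 0.0)   # Zeeman field along the in-plane y direction
Delta = 0.0009          # proximity-induced s-wave pair amplitude

def warp(kx, ky):
    """Cubic warping term lambda_D (k_x^3 - 3 beta k_x k_y^2)."""
    return lam_D * (kx**3 - 3.0 * beta * kx * ky**2)

def d_eps(kx, ky):
    """Band splitting delta-epsilon(k) of Eq. (eppm)."""
    return math.sqrt((gamma * lam_R * kx - h[1])**2
                     + (warp(kx, ky) + h[2])**2
                     + (lam_R * ky + h[0])**2)

def bands(kx, ky):
    """Upper/lower normal-state bands epsilon_+/-(k)."""
    e0 = (kx**2 + ky**2) / (2.0 * m_star) - mu
    return e0 + d_eps(kx, ky), e0 - d_eps(kx, ky)

def pairings(kx, ky):
    """Return (|Delta_{+-}|^2, |Delta_{++}|^2 = |Delta_{--}|^2)."""
    X = (warp(kx, ky)**2 + (gamma * lam_R * kx)**2 + (lam_R * ky)**2
         - (h[0]**2 + h[1]**2 + h[2]**2)) / (d_eps(kx, ky) * d_eps(-kx, -ky))
    return Delta**2 / 2.0 * (1.0 - X), Delta**2 / 8.0 * (1.0 + X)

def E_minus(kx, ky):
    """Lower Bogoliubov branch E_-(k); clamped at zero in gapless regions."""
    ep, em = bands(kx, ky)
    inter2, intra2 = pairings(kx, ky)
    E2 = (4.0 * intra2 + inter2 + (ep**2 + em**2) / 2.0
          - abs(ep - em) * math.sqrt(inter2 + ((ep + em) / 2.0)**2))
    return math.sqrt(max(E2, 0.0))

random.seed(1)
for _ in range(1000):
    kx, ky = random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)
    inter2, intra2 = pairings(kx, ky)
    # sum rule: |D_{+-}|^2 + 2|D_{++}|^2 + 2|D_{--}|^2 = Delta^2
    assert abs(inter2 + 4.0 * intra2 - Delta**2) < 1e-12 * Delta**2
```

Scanning $E_-(\vec{k})$ over a momentum grid around the $\Gamma$ point then yields the gap $\mathcal{E}_g$ for a given parameter set.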
As the goal here is to find the maximal proximity-induced topological superconducting gap in the model Hamiltonian, we adhere to the zero temperature formulation throughout this paper.\\ \begin{figure} \centering \includegraphics[width=0.5\textwidth]{ph1.eps} \caption{\label{ph1} Phase diagram for the gap magnitude $\mathcal{E}_g/\Delta$ as a function of the anisotropy parameter $\gamma$ (with $\gamma=\beta$) and the ratio between the Rashba interaction $\lambda_R$ and the warping-induced Dresselhaus-like interaction $\lambda_D$. Other fixed parameters are: $1/2m^{\ast}a^2=0.6$ eV, lattice constant $a=4.53$~\AA, chemical potential $\mu=0.9$ eV, and an effective Zeeman field from the Ni layer around 2 meV. The dark blue dot corresponds to $\lambda_R/a=0.05$ eV and $\lambda_D/a^3=0.8$ eV, or $8\lambda_R a^2/\lambda_D=0.5$. The vertical axis corresponds to the p-wave phase, similar to the phase diagram in Ref.~\onlinecite{Jason2010}.} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{ph2.eps} \caption{\label{ph2} Phase diagram for the gap magnitude $\mathcal{E}_g/\Delta$ as a function of the ratio between the Rashba interaction $\lambda_R$ and the warping-induced Dresselhaus-like interaction $\lambda_D$, and of the chemical potential $\mu$ normalized by the Zeeman field strength $h$. Other fixed parameters are: $\gamma=\beta=1$, $1/2m^{\ast}a^2=0.6$ eV, lattice constant $a=4.53$~\AA, $8\lambda_R a^2/\lambda_D=0.5$, and effective Zeeman field $h=2$ meV. The yellow arrow points to the maximal $p\pm ip$ order parameter magnitude, which is around 0.4 meV in this calculation.
NSC stands for the normal superconducting state and TSC for the topological superconducting state.} \end{figure} The phase diagram shown in Fig.\ref{ph1} is $\mathcal{E}_g$ evaluated as a function of the anisotropy parameter $\gamma$ (setting $\gamma=\beta$ to simplify the phase diagram) and the dimensionless ratio between the Rashba interaction strength $\lambda_R$ and the warping-induced Dresselhaus-like interaction strength $\lambda_D$, at fixed chemical potential $\mu$ and Zeeman energy $|\vec{h}|\equiv h$ (with the Zeeman field chosen along the in-plane y-axis in both figures), evaluated at the experimentally relevant values discussed in section \ref{mpr}. In Fig.\ref{ph1} the chemical potential $\mu$ is chosen such that the Fermi level crosses only the lower band, with energy dispersion $\bar{\epsilon}_-(\vec{k})$. The separation in energy from the upper band to the lower one is mainly determined by $h$, which is chosen to be larger than the proximity-induced superconducting gap magnitude $\Delta$.\\ For $\vec{h}=h\hat{y}$ considered here, a nonzero $\gamma$ lifts the $k_x\rightarrow -k_x$ symmetry of the $\Delta=0$ bands, as can be seen from Eq.(\ref{eppm}). This suppresses the superconductivity, since the pairing state involves $\vec{k}$ and $-\vec{k}$. A smaller $\lambda_R$ partially offsets this effect; thus the smaller superconducting pairing gap (with larger $\gamma$ and $\lambda_R$) and the region termed ``gapless superconductor'' are located in the upper right corner of Fig.\ref{ph1}. We emphasize that the proximity effect generates not only interband s-wave-like pairing ($\Delta_{+-}$), but also intraband $p\pm ip$-like pairing ($\Delta_{++}$/$\Delta_{--}$) in the upper/lower band. The gapless superconducting region (where $\mathcal{E}_g=0$) does not mean that $\Delta_{--}$ or $\Delta_{+-}$ is zero; rather, it is a region of transition from a topologically nontrivial superconductor to a topologically trivial superconductor, as in the cases of Sau et.
al.'s model\cite{Sau2010} and Jason Alicea's model\cite{Jason2010}. For $\lambda_R\rightarrow 0$ but finite $\lambda_D$ (region close to y-axis in the Fig.\ref{ph1}) the dominant spin orbit coupling for $\Delta_{--}$ is the warping induced Dresselhaus like spin orbit coupling. Right at the y-axis, the induced topological superconductivity pairing symmetry is given by $k_x(k_x^2-3\beta k_y^2)\simeq -k_x|\vec{k}|^2$ (for $\beta\simeq 1$, $|k_x|\simeq|k_y|$ in an isotropic sample; $|\vec{k}|^2=k_x^2+k_y^2$) behaving like $p_x$ superconductor nearby $\Gamma$ point. Thus we mark that region as a "$p_x$ superconductor.\\ The topological phase transition is also present when we try to move the chemical potential away from the lower band. An naive guess for the phase boundary would be $|\mu|\le |h|$ as the "topological gap" is protected by the Zeeman field here (the actual topological region would be smaller as $\Delta$ is finite). Thus the phase boundary, or region named gapless superconductor, exists between the "normal superconductor (NSC)" and "topological superconductor (TSC)" as we change the chemical potential while fixing other parameters as shown in the Fig.\ref{ph2}. \subsection{Model parameters relevant to the known experimental results}\label{mpr} We use the model parameters $1/2m^{\ast}a^2=0.6$eV , $\lambda_R/a=0.05$eV, $\lambda_D/a^3=0.8$eV, lattice constant $a=4.53\AA$, and chemical potential $\mu=0.9$eV to fit the hexagonal Fermi surface of the pristine bismunth thin film\cite{Aono2016} around the $\Gamma$ point. With the addition of nickel layer, we lower the chemical potential to zero and add effective Zeeman field of magnitude around $2$meV. The upper bound of Zeeman field is estimated from the in plane upper critical field\cite{Pratap2015} of Bi$_3$Ni (with thickness around one tenth of magnetic penetration depth) and large gyromagnetic ratio ($g\simeq 33$) of Bi thin film, which gives $10$ meV. 
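As a rough consistency check of this upper bound (a back-of-envelope estimate; the in-plane critical field scale $B_{c2}^{\parallel}\sim 10$~T is our illustrative assumption, not a quoted measurement),
\begin{equation*}
h_{\max}\simeq \tfrac{1}{2}\,g\,\mu_B\,B_{c2}^{\parallel}\approx \tfrac{1}{2}\times 33\times(0.058~\mathrm{meV/T})\times 10~\mathrm{T}\approx 10~\mathrm{meV},
\end{equation*}
where the factor $1/2$ reflects the convention $h=g\mu_B B/2$ for the Zeeman term.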
To keep the alloy superconductivity from Bi$_3$Ni as intact as possible, we choose the Zeeman field $h$ generated by the nickel layer to be $2$~meV. The superconducting gap magnitude $\Delta$ from the bulk Bi is estimated to be $0.9$~meV, using $2\Delta/k_BT_c=4.5$ and $T_c=4$~K measured by a tunneling experiment in a similar setup\cite{Moodera1990}.\\ With the aforementioned parameters and assuming the film is uniform (dimensionless anisotropy parameters $\gamma=1$, $\beta=1$), the surface state of Bi/Ni is then described by $p+ip$ topological superconductivity with a superconducting gap magnitude around $0.08$~meV, or $0.09\Delta$ (dark blue dot in Fig.~\ref{ph1}). Further raising the chemical potential (say, by around $1$~meV) with other parameters fixed enhances the gap magnitude up to $0.4$~meV, as shown in Fig.~\ref{ph2}. This enhancement is attributed to the increase of the density of states as the chemical potential rises. Increasing the chemical potential even further drives a change from topological superconductivity to a topologically trivial one. Making the film anisotropic also leads to a larger gap, although the effect is less significant compared with shifting the chemical potential.\\ Fitting the observed zero bias peak\cite{Jin2015} with the generalized Blonder--Tinkham--Klapwijk (BTK) formula\cite{Tanaka2007} gives a superconducting gap around $0.6$ to $1.1$~meV (depending on the choice of the fitting range of bias voltage). This estimated gap magnitude is almost the same as that of the bulk superconductivity ($\sim 0.9$~meV) with critical temperature around $4$~K. Our numerical result for the largest proximity-induced gap magnitude is around $0.4$~meV. This factor-of-two discrepancy could come from thermal broadening or multichannel tunneling due to the finite size of the point contact. Further reducing the measurement temperature and choosing a better contact could possibly resolve this issue.
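For orientation, in the closely related single-band models of Refs.~\onlinecite{Sau2010,Jason2010}, the boundary between the topological and trivial superconducting phases is set by the gap closing at $\vec{k}=0$,
\begin{equation*}
h^2=\Delta^2+\mu^2 ,
\end{equation*}
so the naive bound $|\mu|\le h$ tightens to $|\mu|\le\sqrt{h^2-\Delta^2}$ once $\Delta$ is finite; we expect the same qualitative trend, though not necessarily this exact formula, for the present model.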
\subsection{Anisotropic point contact Andreev reflection} \begin{widetext} \begin{figure} \centering \includegraphics[width=1\textwidth]{illustration.eps} \parbox{\textwidth}{\caption{\label{ph3} Schematic plots of the anisotropic conductance from the point contact Andreev reflection measurement. The formula used in the evaluation of the normalized conductance shown in the three sub-figures is taken from Ref.~\onlinecite{Tanaka2007}, with $Z=\frac{2mU}{\hbar^2 k_F}=10$ ($U$ being the normal-superconductor interface potential) as in Fig.~1 of Ref.~\onlinecite{Tanaka2007}. Different colors in the sub-figures denote the normalized conductance (vertical axis: $G/G_N$, where $G_N$ stands for the normal state conductance) vs.\ bias voltage (horizontal axis: $eV/\Delta$, where $V$ is the bias voltage) under different local magnetic fields.}} \end{figure} \end{widetext} Another interesting aspect of this Bi/Ni bilayer is that the differential conductance signal from the point contact shows different results on different sides\cite{Jintalk2016,Chientalk2018}. Effective equal-spin $p\pm ip$ pairing is expected to be strongest only on the surface away from the Bi/Ni interface. For the side surfaces the conductance shape could show $s$-wave-like or $p$-wave-like structures. For a rectangular Bi/Ni bilayer, the longer side is expected to experience a magnetic field strength similar to that of the top surface. The shorter side experiences a stronger magnetic field than the top surface, but with more anisotropy in the field distribution, as illustrated in the cartoon picture of Fig.~\ref{ph3}. This anisotropy leads to a smaller effective in-plane field compared with the other two side surfaces. Since the side-surface Rashba terms are much weaker than those of the surface parallel to the Bi/Ni interface, the side with a large magnetic field is likely to show $p$-wave behavior, as suggested by Fig.~\ref{ph1} with $\lambda_R=0$.
It could also show opposite-spin pairing\cite{Tanaka2007} in the triplet state, which would be sensitive to external magnetic field probes. For the side surface with a weaker magnetic field, the chemical potential $\mu$ could be larger than the Zeeman field $h$, leading to $s$-wave-like, topologically trivial superconductivity, as shown on the right-hand side of the phase diagram in Fig.~\ref{ph2}.\\ \subsection{Competing models and other possibilities} Recent optical measurements of the polar Kerr effect support spontaneous time reversal symmetry breaking on the Bi surface, concurrent with the onset of superconductivity in this Bi/Ni bilayer\cite{Xia2017}. These experimental results are consistent with the time-reversal-broken $p\pm ip$ pairing gap presented in our theoretical model. An alternative theoretical explanation for the same Kerr effect results is presented in Ref.~\onlinecite{Xia2017}, where the superconductivity is thought to occur only on the Bi surface away from the Bi/Ni interface. Based on symmetry requirements for two-dimensional noncentrosymmetric crystalline superconductors\cite{Samokhin2015}, the authors of Ref.~\onlinecite{Xia2017} concluded that the time-reversal-broken superconducting state should be of the $d\pm id$ type instead of the $p\pm ip$ type suggested in our scenario. The mechanism behind this $d\pm id$ superconductivity is the magnetic fluctuations induced by the Ni layer.\\ The key difference between this $d\pm id$ proposal and ours is that the superconductivity considered in our model is rooted in the bulk of Bi, not just on the surface away from the Bi/Ni interface. It is true that with decreasing Bi thickness the bulk of Bi tends to become a normal insulator with a metallic surface state\cite{Jin2012} (with the exception of a few bilayers of Bi, which could be a topological insulator\cite{Matsuda2016}, or a single layer of Bi as a two-dimensional topological insulator\cite{Wu2011}).
However, by placing Bi on top of a Ni thin film, we see that the whole Bi/Ni normal state behaves like a usual metal rather than an insulator. This leads us to believe that, in all the Bi/Ni samples we have seen, there exists effective charge doping which increases the electronic density of states near the Fermi level. Moreover, all the observed transport and magnetic properties in the superconducting state, other than surface probes such as point contact Andreev reflection, are consistent with a usual type-II $s$-wave superconductor. Another experimental support is that we do not see any sign of superconductivity in the Bi/Fe or Bi/Co samples\cite{Chao2017}, which should show similar superconducting behavior if the surface $d\pm id$ superconductivity were induced by magnetic fluctuations.\\ It is also possible that the observed $p$-wave-like signal in the point contact measurement comes from the bulk of the Bi/Ni bilayer\cite{Chientalk2018} instead of from the surface. It was found by T. Herrmannsd\"{o}rfer et al.\cite{Ruck2011} that nano-structures of Bi$_3$Ni (submicrometer-sized particles and quasi-one-dimensional nanoscaled strains) also show the coexistence of superconductivity and ferromagnetism, with an onset superconducting transition temperature around $5.2$~K. This kind of nano-structured Bi$_3$Ni could also be formed during the epitaxial growth of the Bi/Ni bilayer and become the source of bulk $p$-wave superconductivity, although the mechanism behind it remains elusive. Whether these nanostructured Bi$_3$Ni could form well-oriented domains during the epitaxial growth, as suggested by the anisotropy measurement\cite{Jintalk2016,Chientalk2018}, is yet another puzzle to be clarified.\\ Another possibility for seeing a magnetic-field-independent zero bias conductance peak in the point contact Andreev reflection measurement is that the point contact is not in the Sharvin ballistic limit\cite{Greene2010}.
This has been seen in some of the multiband iron-pnictide superconductors with s$\pm$ pairing symmetry. In polycrystalline iron-pnictides, the coexistence of randomly distributed ferromagnetic and superconducting domains has also been found\cite{Greene2010}. Both the field-independent zero bias conductance peak and the existence of ferromagnetic domains could possibly explain the experimental results from the point contact and magneto-Kerr effect measurements on the Bi/Ni samples. This less exciting possibility can be ruled out if the superconducting site found in the point contact measurement were the same as the ferromagnetic region found in the Kerr effect measurement. Multiple Andreev reflections\cite{Bose2017} are yet another possibility, although the experimental results from Ref.~\onlinecite{Jin2015} suggest this is less likely to be the case. \section{Summary and further experimental suggestions}\label{sect4} We propose a simple model, utilizing the strong spin-orbit coupling of Bi and the effective doping coming from alloy formation in the Bi/Ni bilayer, to suggest the possible existence of proximity-induced, time-reversal-broken $p\pm ip$ superconductivity on the Bi surface away from the Bi/Ni interface. The physics behind it is the same as that of the effective $p\pm ip$ superconductivity realized by a conventional superconductor combined with a semiconductor with strong spin-orbit coupling under an external magnetic field\cite{Sau2010,Jason2010}. The key differences here are that the Rashba spin-orbit term is supplemented by a cubic spin-orbit coupling, and that the external magnetic field is provided by the ferromagnetic Ni layer. By mapping out the phase diagram with experimentally relevant parameters, we also explain the anisotropic Andreev reflection signals probed on different Bi surfaces\cite{Jintalk2016,Chientalk2018}.
This $p\pm ip$ scenario is also consistent with the recent magneto-optical Kerr effect and magnetic measurements\cite{Xia2017,Zhou2017}, although other possibilities, such as bulk $p$-wave superconductivity induced by nanostructured Bi$_3$Ni\cite{Ruck2011}, multiple Andreev reflections\cite{Bose2017}, or a point contact in the diffusive regime\cite{Greene2010}, should also be considered. Since the alloy formation varies with growth methods, the phase diagram of the actual bilayer system is surely more complicated than in our simple model. However, we think the main physics, namely that topologically nontrivial superconductivity is induced through the proximity effect on the surface of Bi, should remain the same, as long as the surface state of Bi away from the interface is not destroyed after the formation of those alloys by interface diffusion.\\ To truly confirm whether our proposed scheme is correct, further surface probes such as angle-resolved photoemission spectroscopy or low temperature scanning tunneling microscopy are needed to check the normal and superconducting state electronic structures of the Bi layer after forming the Bi/Ni bilayer. For a sufficiently thick Bi layer, the nickel from interface diffusion should not reach the Bi surface away from the interface. The size of the superconducting gap from the point contact Andreev reflection measurement should then become smaller with increasing bismuth thickness.\\ Another possible mechanism for inducing time-reversal-broken superconductivity is through the magnetic fluctuations from the Ni layer, as mentioned in Ref.~\onlinecite{Xia2017}. We tend to exclude this scenario based on the lack of superconductivity in the Bi/Fe and Bi/Co bilayers.
If effective $p\pm ip$ superconductivity can indeed be achieved on the Bi surface, we may adjust the sample fabrication processes to maximize the superconducting gap magnitude, using the aforementioned phase diagrams, and create Majorana zero modes in its vortex state. Even if this were not the case (say, with a zero bias anomaly similar to that seen in some of the iron-pnictide superconductors), a further look at the Fulde--Ferrell--Larkin--Ovchinnikov (FFLO) physics on the Ni side\cite{Heiman2005} is an interesting topic in its own right. A systematic study of bilayers formed by a metallic/semiconducting thin film with strong spin-orbit coupling and a ferromagnetic metal/insulator layer could also possibly lead to a platform for effective $p\pm ip$ or even more exotic superconductors yet to be explored.\\ \acknowledgements I acknowledge useful discussions with Piers Coleman in the initial stage of this work, and various discussions with friends from the Institute of Physics, Academia Sinica during the write-up of this work. I also thank the financial support from Taiwan's MOST (No.~106-2112-M-017-002-MY3), the NSF funding from the U.S.A. for the Aspen Center for Physics 2017 winter conference, and Academia Sinica during my winter and summer visits in 2016--2017.\\
\section{Introduction} \label{intro} In recent years, deep neural networks on discrete sequential data have achieved remarkable success in various domains, including natural language processing and protein structure prediction, with the advent of large-scale sequence models such as BERT and XLNet \cite{BERT, XLNet}. However, these networks have exhibited vulnerability to adversarial examples, which are artificially crafted to cause network malfunction by adding perturbations imperceptible to humans \cite{papernot2016crafting, TextFooler}. Recent works have focused on developing adversarial attacks in the \emph{black-box} setting, where the adversary can only observe the predicted class probabilities on inputs with a limited number of queries to the network \cite{GA, PWWS}. This is a more realistic scenario since, for many commercial systems \cite{google, amazon}, the adversary can only query input sequences and receive their prediction scores with restricted resources such as time and cost. While a large body of work has proposed successful black-box attacks in the image domain with continuous attack spaces \cite{ilyas2018black, andriushchenko2020square}, developing a query-efficient black-box attack on discrete sequential data is quite challenging due to the discrete nature of the attack space. Some prior works employ evolutionary algorithms for the attack, but these methods require a large number of queries in practice \cite{GA, PSO}. Most of the recent works are based on greedy algorithms, which first rank the elements in an input sequence by their importance score and then greedily perturb the elements according to the pre-computed ranking for query efficiency \cite{PWWS,TextFooler,LSH}. However, these algorithms have an inherent limitation in that each location is modified at most once, so the search space is severely restricted \cite{yoo2020searching}.
To this end, we propose a \emph{Blockwise Bayesian Attack} (BBA) framework, a query-efficient black-box attack based on \emph{Bayesian Optimization}. We first introduce a categorical kernel with automatic relevance determination (ARD), suited for dynamically learning the importance score of each categorical variable in an input sequence from the query history. To make our algorithm scalable to a high-dimensional search space, which arises when an input sequence is long, we devise block decomposition and history subsampling techniques that improve the query and computation efficiency without compromising the attack success rate. Moreover, we propose a post-optimization algorithm that reduces the perturbation size. We validate the effectiveness of BBA on a variety of datasets from different domains, including text classification, textual entailment, and protein classification. Our extensive experiments on various victim models, ranging from classical LSTMs to more recent Transformer-based models \cite{LSTM, BERT}, demonstrate state-of-the-art attack performance in comparison to recent baseline methods. Notably, BBA achieves a higher attack success rate with a considerably lower modification rate and fewer required queries in all experiments we consider. \section{Related Works} \label{rel} \subsection{Black-Box Attacks on Discrete Sequential Data} Black-box adversarial attacks on discrete sequential data have been primarily studied in the natural language processing (NLP) domain, where an input text is manipulated at the word level by substitution \citep{GA}. A line of research exploits greedy algorithms for finding adversarial examples, which define the word replacement order at the initial stage and greedily replace each word under this order with a synonym chosen by a word substitution method \cite{PWWS, TextFooler, LSH, BAE, BERTAttack}.
\citet{PWWS} determine the priority of words based on word saliency and construct the synonym sets using WordNet \cite{WordNet}. \citet{TextFooler} construct the word importance ranking by measuring the prediction change after deleting each word and utilize the word embedding space from \citet{mrkvsic2016counter} to identify the synonym sets. The follow-up work of \citet{LSH} proposes a query-efficient word ranking algorithm that leverages attention mechanism and locality-sensitive hashing. Another research direction is to employ combinatorial optimizations for crafting adversarial examples \cite{GA, PSO}. \citet{GA} generate adversarial examples via genetic algorithms. \citet{PSO} propose a particle swarm optimization-based attack (PSO) with a word substitution method based on sememes using HowNet \cite{HowNet}. \subsection{Bayesian Optimization} While Bayesian optimization has been proven to be remarkably successful for optimizing black-box functions, its application to high-dimensional spaces is known to be notoriously challenging due to its high \emph{query complexity}. There has been a large body of research that improves the query efficiency of high-dimensional Bayesian optimization. One major approach is to reduce the effective dimensionality of the objective function using a sparsity-inducing prior for the scale parameters in the kernel \citep{COMBO, SAAS}. Several methods address the problem by assuming an additive structure of the objective function and decomposing it into a sum of functions in lower-dimensional disjoint subspaces \citep{kandasamy2015high, wang2018batched}. Additionally, a line of works proposes methods that perform multiple evaluation queries in parallel, also referred to as batched Bayesian optimization, to further accelerate the optimization \citep{azimi2010batch, HDBBO}. 
Another challenge in Bayesian optimization with Gaussian processes (GPs) is its high \emph{computational complexity} of fitting surrogate models on the evaluation history. A common approach to this problem is to use a subset of the history to train GP models \citep{seeger2003fast, SOD}. \citet{seeger2003fast} greedily select a training point from the history that maximizes the information gain. \citet{SOD} choose a subset of the history using Farthest Point Clustering heuristic \citep{gonzalez1985clustering}. Many Bayesian optimization methods have focused on problem domains with continuous variables. Recently, Bayesian optimization on categorical variables has attained growing attention due to its broad potential applications to machine learning. \citet{baptista2018bayesian} use Bayesian linear regression as surrogate models for black-box functions over combinatorial structures. \citet{COMBO} propose a Bayesian optimization method for combinatorial search spaces using GPs with a discrete diffusion kernel. \subsection{Adversarial Attacks via Bayesian Optimization} Several works have proposed query-efficient adversarial attacks using Bayesian optimization in image and graph domains, but its applicability to discrete sequential data has not yet been explored. \citet{Kolter, BayesOpt} leverage Bayesian optimization to attack image classifiers in a low query regime. \citet{Kolter} introduce a noise upsampling technique to reduce the input dimensions of image spaces for the scalability of Bayesian optimization. A concurrent work of \citet{BayesOpt} proposes a new upsampling method, whose resize factor is automatically determined by the Bayesian model selection technique, and adopts an additive GP as a surrogate model to further reduce the dimensionality. Recently, \citet{GRABNEL} propose a query-efficient attack algorithm against graph classification models using Bayesian optimization with a sparse Bayesian linear regression surrogate. 
While these Bayesian optimization-based methods find adversarial examples with any perturbation size below a pre-defined threshold, we further consider minimizing the perturbation size, following the practice in the prior works in NLP \citep{PWWS,TextFooler,PSO,LSH}. \section{Preliminaries} \subsection{Problem Formulation} To start, we introduce the definition of adversarial attacks on discrete sequential data. Suppose we are given a target classifier $f_\theta: \mathcal{X}^l \to \mathbb{R}^{|\mathcal{Y}|}$, which takes an input sequence of $l$ elements $s = [w_0, \ldots, w_{l-1}] \in \mathcal{X}^l$ and outputs a logit vector used to predict its ground-truth label $y \in \mathcal{Y}$. For NLP tasks, $s$ is a text consisting of words $w_i$ from a dictionary $\mathcal{X}$. Our objective is to craft an adversarial sequence $s_\text{adv}$ that misleads $f_\theta$ to produce an incorrect prediction by replacing as few elements in the input sequence $s$ as possible. Formally, this can be written as the following optimization problem: \begin{align} \label{maineq} &\minimize_{s'\in \mathcal{X}^l} ~ d(s,s')\nonumber\\ &~ \mathrm{subject~to}~~\mathcal{L}(f_\theta(s'), y) \ge 0, \end{align} where $d$ is a distance metric that quantifies the amount of perturbation between two sequences (\eg, Hamming distance) and $\mathcal{L}(f_\theta(s), y) \triangleq \max_{y'\in\mathcal{Y},y'\neq y} f_\theta(s)_{y'} - f_\theta(s)_{y}$ denotes the attack criterion. In this paper, we consider the score-based black-box attack setting, where an adversary has access to the model prediction logits with a limited query budget, but not the model configurations such as network architectures and parameters. To make the adversarial perturbation imperceptible to humans, the modified sequence should be semantically similar to the original sequence and the perturbation size should be sufficiently small \cite{PWWS}. 
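The attack criterion $\mathcal{L}$ above can be sketched in a few lines (a minimal NumPy version; the logits stand in for the output of $f_\theta$ and are illustrative placeholders):

```python
import numpy as np

def attack_criterion(logits, y):
    """L(f(s), y) = max_{y' != y} f(s)_{y'} - f(s)_y; non-negative
    exactly when the classifier no longer predicts the label y."""
    logits = np.asarray(logits, dtype=float)
    rival = np.max(np.delete(logits, y))  # best competing class score
    return rival - logits[y]

margin_clean = attack_criterion([2.0, 0.5, -1.0], y=0)  # still classified as y
margin_adv = attack_criterion([0.5, 2.0, -1.0], y=0)    # misclassified
```

A candidate sequence $s'$ is accepted as adversarial as soon as this criterion is non-negative.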
However, minimizing only the perturbation size does not always ensure the semantic similarity between the two sequences. For example, in the NLP domain, even a single word replacement can completely change the meaning of the original text due to the characteristics of natural languages. To address this, we replace elements with ones that are semantically similar to generate an adversarial example, which is a standard practice in prior NLP works. Concretely, we first define a set of semantically similar candidates $\mathcal{C}(w_i) \subseteq \mathcal{X}$ for the $i$-th element $w_i$ in the original sequence. In the NLP domain, this can be found by existing word substitution methods \citep{PWWS, TextFooler, PSO}. Then, we find an adversarial sequence in their product space $\prod_{i=0}^{l-1} \mathcal{C}(w_i)\subseteq \mathcal{X}^l$. We emphasize that the greedy-based attack methods have restricted search spaces of size $\sum_{i=0}^{l-1} |\mathcal{C}(w_i)|-l+1$. In contrast, our search space is of cardinality $| \prod_{i=0}^{l-1} \mathcal{C}(w_i) |$, which is always larger than that of the greedy methods. \subsection{Bayesian Optimization} \label{main:bo} Bayesian optimization is one of the most powerful approaches for maximizing a black-box function $g: A \to \mathbb{R}$ \cite{snoek2012practical, frazier2018tutorial}. It constructs a probabilistic model that approximates the true function $g$, also referred to as a surrogate model, which can be evaluated relatively cheaply. The surrogate model assigns a prior distribution to $g$ and updates the prior with the evaluation history to get a posterior distribution that better approximates $g$. Gaussian processes (GPs) are common choices for the surrogate model due to their flexibility and theoretical properties \cite{osborne2009gaussian}.
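For concreteness, the standard GP predictive equations (stated next) can be computed directly from the kernel blocks; the sketch below assumes a zero prior mean and a toy squared-exponential kernel chosen purely for illustration:

```python
import numpy as np

def gp_posterior(K_xX, K_XX, k_xx, Y, noise_var=1e-8):
    # Zero-mean GP predictive mean/variance from kernel blocks:
    #   mean = K(X, X_hat) (K(X_hat, X_hat) + s^2 I)^{-1} Y_hat
    #   var  = K(X, X) - K(X, X_hat) (K(X_hat, X_hat) + s^2 I)^{-1} K(X_hat, X)
    Kn = K_XX + noise_var * np.eye(K_XX.shape[0])
    mean = K_xX @ np.linalg.solve(Kn, Y)
    var = k_xx - np.sum(K_xX * np.linalg.solve(Kn, K_xX.T).T, axis=1)
    return mean, var

# Toy 1-D example: three evaluations, queries at a seen and an unseen point.
kern = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2)
X_hat, Y_hat = np.array([0.0, 1.0, 2.0]), np.array([1.0, 2.0, 3.0])
X_query = np.array([0.0, 1.5])
mean, var = gp_posterior(kern(X_query, X_hat), kern(X_hat, X_hat),
                         np.ones(len(X_query)), Y_hat)
```

With near-zero noise the posterior interpolates the evaluated points and its variance grows away from the data, which is what the acquisition function exploits.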
A GP prior assumes that the values of $g$ on any finite collection of points $X\subseteq A$ are normally distributed, \ie, $g(X)\sim \mathcal{N}(\mu(X), K(X,X)+\sigma_n^2 I)$, where $\mu: A \to \mathbb{R}$ and $K: A \times A \to \mathbb{R}$ are the mean and kernel functions, respectively, and $\sigma_n^2$ is the noise variance. Given the evaluation history $\mathcal{D} = \{(\hat{x}_j, \hat{y}_j=g(\hat{x}_j))\}_{j=0}^{n-1}$, the posterior distribution of $g$ on a finite set of candidate points $X$ can also be expressed as a Gaussian distribution with the predictive mean and variance as follows: \begin{align*} &\mathrm{E}[g(X) \mid X,\mathcal{D}] \\ &~~~~= K(X,\hat{X}) [ K(\hat{X},\hat{X})+\sigma_n^2 I ]^{-1} (\hat{Y}-\mu(\hat{X}))+\mu(X) \\ &\mathrm{Var}[g(X) \mid X, \mathcal{D}] \\ &~~~~= K(X,X) - K(X,\hat{X}) [ K(\hat{X},\hat{X})+\sigma_n^2 I ]^{-1}K(\hat{X},X), \end{align*} where $\hat{X}$ and $\hat{Y}$ are the concatenations of $\hat{x}_j$'s and $\hat{y}_j$'s, respectively. Based on the current posterior distribution, an acquisition function quantifies the utility of querying $g$ at each point for the purpose of finding the maximizer. Bayesian optimization proceeds by maximizing the acquisition function to determine the next point $\hat{x}_n$ to evaluate and updating the posterior distribution with the new evaluation history $\mathcal{D} \cup \{ (\hat{x}_n, g(\hat{x}_n)) \}$. After a fixed number of function evaluations, the point with the largest evaluated value of $g$ is returned as the solution. \begin{figure*}[h] \centering \includegraphics[width=0.9\textwidth]{main_fig_v8.pdf} \caption{The overall process of BBA. A green arrow with a dataset $\mathcal{D}_k^\text{sub}$ denotes the Bayesian optimization step for the block $M_k$ using $D_k^\text{sub}$ as the initial dataset.} \label{fig:proc} \end{figure*} \section{Methods} \label{method} In this section, we introduce the proposed \emph{Blockwise Bayesian Attack} (BBA) framework.
Instead of optimizing \cref{maineq} directly, we divide the optimization into two steps. First, we conduct Bayesian optimization to maximize the black-box function $\mathcal{L}(f_\theta(\cdot), y)$ on the attack space $\mathcal{S} \triangleq \prod_{i=0}^{l-1} \mathcal{C}(w_i)$ until finding an adversarial sequence $s_\text{adv}$, which is a feasible solution of \cref{maineq}. This step can be formulated as \begin{align} \label{subeq} \mathop{\mathrm{maximize}}_{s' \in \mathcal{S}} ~ \mathcal{L}(f_\theta(s'), y). \end{align} Second, after finding a valid adversarial sequence $s_\text{adv}$ that satisfies the attack criterion $\mathcal{L}(f_\theta(s_\text{adv}), y) \ge 0$, we seek to reduce the Hamming distance of the perturbed sequence from the original input while maintaining the constraint feasibility. Note that \cref{subeq} is a high-dimensional Bayesian optimization problem on a combinatorial search space, especially for datasets consisting of long sequences. However, the number of queries required to obtain good coverage of the input space, which is necessary to find the optimal solution, increases exponentially with respect to the input dimensions due to the curse of dimensionality \cite{shahriari2015taking}. This high \emph{query complexity} is prohibitive for query-efficient adversarial attacks. Furthermore, even in a low-dimensional space, the high \emph{computational complexity} of training GP models in Bayesian optimization can drastically slow down the runtime of the algorithm as the evaluation history becomes larger. Fitting the GP model requires the matrix inversion of the covariance matrix $K(\hat{X},\hat{X})$, whose computational complexity is $\mathcal{O}(n^3)$, where $n$ is the number of evaluations so far. To this end, we first introduce the surrogate model and the parameter fitting method which are suitable for our high-dimensional combinatorial search space.
Next, we propose two techniques to deal with the scalability issues that arise from the high query and computational complexity of Bayesian optimization. Lastly, we introduce a post-optimization technique that effectively minimizes the perturbation size of an adversarial sequence. \subsection{Surrogate Model and GP Parameter Fitting} Choosing an appropriate kernel that captures the structure of the high-dimensional combinatorial search space is the key to the success of our GP-based surrogate model. We use a categorical kernel\footnote{\url{https://botorch.org/api/_modules/botorch/models/kernels/categorical.html}} with automatic relevance determination (ARD) to automatically determine the degree to which each input dimension is important \citep{mackay1992bayesian}. The kernel has the following form: \begin{align*} K^\text{cate}(s^{(1)}, s^{(2)}) = \sigma_f^2 \prod_{i=0}^{l-1} \exp \left( - \frac{\mathbf{1}[w_i^{(1)} \neq w_i^{(2)}]}{\beta_i} \right), \end{align*} where $\sigma_f^2$ is the signal variance and $\beta_i$ is a length-scale parameter corresponding to the relevance of the $i$-th element position. This implies that the kernel regards a sequence pair sharing a larger number of elements as a more similar pair. The GP parameter $\beta_i$ is estimated by maximizing the posterior probability of the evaluation history under a prior using the gradient descent with Adam optimizer \cite{Adam}. More details can be found in \cref{app:GP}. \subsection{Techniques for Scalability} \label{scalability} To achieve a scalable Bayesian optimization algorithm, we decompose an input sequence into disjoint blocks of element positions and optimize each block in a sequential fashion for several iterations using data subsampled from the evaluation history corresponding to the block. \subsubsection{Block Decomposition} We divide an input sequence of length $l$ into $\lceil l/m \rceil$ disjoint blocks of length $m$.
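A minimal sketch of the categorical ARD kernel above (token sequences as lists; $\sigma_f^2$ and the length-scales $\beta_i$ passed explicitly) may help fix the notation:

```python
import numpy as np

def categorical_ard_kernel(s1, s2, beta, sigma_f2=1.0):
    # K(s1, s2) = sigma_f^2 * prod_i exp(-1[w_i^(1) != w_i^(2)] / beta_i)
    mismatch = np.array([a != b for a, b in zip(s1, s2)], dtype=float)
    return sigma_f2 * np.exp(-np.sum(mismatch / np.asarray(beta)))

beta = np.array([1.0, 1.0, 0.5])  # a small beta_i makes position i more relevant
k_same = categorical_ard_kernel(["the", "big", "cat"], ["the", "big", "cat"], beta)
k_mid = categorical_ard_kernel(["the", "big", "cat"], ["the", "old", "cat"], beta)
k_last = categorical_ard_kernel(["the", "big", "cat"], ["the", "big", "dog"], beta)
```

Identical sequences give $\sigma_f^2$; a mismatch at a position with a small length-scale lowers the similarity more strongly.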
Each $k$-th block $M_k$ consists of consecutive indices $[km, \ldots, (k+1)m-1]$. We sequentially optimize each block for $R$ iterations, rather than updating all element positions concurrently. For each iteration, we set the maximum query budget to $N_k$ when optimizing the block $M_k$. While the size of the attack space $\prod_{i=0}^{l-1} \mathcal{C}(w_i)$ grows exponentially as $l$ increases, the block decomposition makes the size of the search space of each Bayesian optimization step independent of $l$ and upper bounded by $(C_\text{max})^m$, where $C_\text{max}\triangleq\max_{i\in[l]}|\mathcal{C}(w_i)|$ is the size of the largest synonym set. At the start of each iteration, we assign an importance score to each block, which measures how much each block contributes to the objective function value. Then, we sequentially optimize blocks in order of highest importance score for query efficiency. For the first iteration, we set the importance score of each block to the change in the objective function value after deleting the block. For the remaining iterations, we reassign the importance score to each block $M_k$ by summing the inverses of the length-scale parameters that correspond to the element positions in $M_k$, \ie, $\sum_{i\in M_k}1/\beta_i$. \begin{algorithm}[t] \caption{SoD$(\mathcal{D},N)$, Subset of Data method} \label{alg:sod} \begin{algorithmic}[1] \STATE {\bfseries Input:} The evaluation history $\mathcal{D}$, the size of subsamples $N$. \IF{$|\mathcal{D}| < N$} \STATE {\bfseries Return} $\mathcal{D}$. \ENDIF \STATE Initialize the dataset $\mathcal{D}_\text{sub} \leftarrow \{s_0\}$ where $s_0$ is randomly sampled from $\mathcal{D}$. \WHILE {$|\mathcal{D}_\text{sub}| < N$} \STATE Select the farthest sequence. \\ $s_\text{far} \leftarrow \argmax_{s\in \mathcal{D}\setminus\mathcal{D}_\text{sub}} [\min_{s'\in\mathcal{D}_\text{sub}}d(s,s')]$ \STATE Update the dataset $\mathcal{D}_\text{sub}\leftarrow \mathcal{D}_\text{sub} \cup \{s_\text{far}\}$.
\ENDWHILE \STATE {\bfseries Return} $\mathcal{D}_\text{sub}$. \end{algorithmic} \end{algorithm} \begin{algorithm}[t] \caption{PostOpt$(s, s_\text{adv},\mathcal{D}_\text{sub},N_\text{post},N_b)$} \label{alg:postopt} \begin{algorithmic}[1] \STATE {\bfseries Input:} The original sequence $s$, an adversarial sequence $s_\text{adv}$, the evaluation dataset $\mathcal{D}_\text{sub}$ subsampled from the evaluation history, the query budget $N_\text{post}$, and the batch size $N_b$. \STATE Initialize $N_r \leftarrow N_\text{post}$. \WHILE {$N_r > 0$} \STATE Fit GP parameters to maximize the posterior probability distribution on $\mathcal{D}_\text{sub}$. \STATE Select a batch $B$ of size $\min(N_b, N_r)$ from $\mathcal{B}_H(s,d_H(s,s_\text{adv})-1) \cap \mathcal{B}_H(s_\text{adv},r)$ according to the acquisition function and the DPP. \STATE Evaluate the batch $\mathcal{D}_\text{batch} = \{(s',\mathcal{L}(f_\theta(s'), y))\}_{s'\in B}$. \STATE Update the dataset $\mathcal{D}_\text{sub}\leftarrow \mathcal{D}_\text{sub}\cup \mathcal{D}_\text{batch}$. \STATE $N_r \leftarrow N_r - |\mathcal{D}_\text{batch}|$. \IF {$B$ has an adversarial sequence} \STATE Update $s_\text{adv}$ to the best adversarial sequence in $B$. \STATE $N_r \leftarrow N_\text{post}$. \ENDIF \ENDWHILE \STATE {\bfseries Return} $s_\text{adv}$. \end{algorithmic} \end{algorithm} \subsubsection{History Subsampling} Here, we propose a data subsampling strategy suitable for our block decomposition method. When we optimize a block $M_k$, only the elements in $M_k$ are updated while the remaining elements are unchanged. Thus, all sequences evaluated during the optimization steps for blocks other than $M_k$ share the same elements within $M_k$ and therefore provide no information on how much $M_k$ affects the objective function value.
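In Python, the farthest-point selection of \cref{alg:sod} can be sketched as follows, assuming sequences of equal length compared under the Hamming distance:

```python
import random

def hamming(s1, s2):
    """Hamming distance between two equal-length sequences."""
    return sum(a != b for a, b in zip(s1, s2))

def sod(D, N, seed=0):
    """Subset of Data via farthest point clustering: greedily keep the
    sequence farthest (in Hamming distance) from those already selected."""
    if len(D) < N:
        return list(D)
    rng = random.Random(seed)
    rest = list(D)
    sub = [rest.pop(rng.randrange(len(rest)))]  # random initial sequence
    while len(sub) < N:
        s_far = max(rest, key=lambda s: min(hamming(s, t) for t in sub))
        sub.append(s_far)
        rest.remove(s_far)
    return sub
```

Each round costs one distance evaluation per remaining candidate, so the subsampling itself is cheap compared with fitting the GP on the full history.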
To avoid this redundancy, we utilize only the sequences collected from the previous optimization steps for $M_k$ as the evaluation history, denoted by $\mathcal{D}_k$, when optimizing $M_k$. On top of the strategy above, we further reduce the computational complexity of Bayesian optimization by subsampling a dataset from the evaluation history and training the GP surrogate model with the reduced dataset. We adopt the Subset of Data (SoD) method with Farthest Point Clustering (FPC) \cite{SOD}, a simple and efficient subsampling method widely used in the GP literature. Concretely, we randomly sample an initial sequence from the evaluation history and then sequentially add the sequence that maximizes the Hamming distance to its nearest neighbour among the sequences picked so far. The overall procedure is shown in \cref{alg:sod}. When optimizing a block $M_k$ at each iteration, we select a subset $\mathcal{D}_k^\text{sub}$ from the evaluation history $\mathcal{D}_k$ via the subsampling algorithm above and proceed with the Bayesian optimization step for $M_k$ using $\mathcal{D}_k^\text{sub}$ as the initial dataset for the GP model training. \label{main:ra} Here, we simply set the initial subset size to $N_k$, which is the same as the maximum query budget when optimizing the block $M_k$. Thus, the size of the dataset $\mathcal{D}_k^\text{sub}$ during a single block optimization step is upper bounded by $\mathcal{O}(N_k)$. Therefore, we can write the complexity of the GP model fitting step when optimizing a block as $\mathcal{O}((\max_k{N_k})^3)$, which is independent of the total number of evaluations, $n$. More details, including a runtime analysis of the overall process, can be found in \cref{app:exp,app:RA}. \begin{table*}[hbt!]
\centering \caption{Attack results for XLNet-base, BERT-base, and LSTM models on sentence-level classification datasets.} \begin{subtable}[h!]{0.655 \columnwidth} \label{tab:main1} \caption{WordNet} \centering \begin{adjustbox}{max width=\columnwidth} \begin{tabular}{cccccc} \toprule Dataset&Model&Method & ASR (\%)& MR (\%)& Qrs \\ \midrule AG&BERT-base& PWWS& 57.1& 18.3& 367\\ & & BBA& \textbf{77.4}& \textbf{17.8}& \textbf{217}\\ \cmidrule{2-6} &LSTM& PWWS& 78.3& 16.4& 336\\ & & BBA& \textbf{83.2}& \textbf{15.4}& \textbf{190}\\ \midrule MR&XLNet-base& PWWS& 83.9& \textbf{14.4}& 143\\ & & BBA& \textbf{87.8}& \textbf{14.4}& \textbf{77}\\ \cmidrule{2-6} &BERT-base& PWWS& 82.0& 15.0& 143\\ & & BBA& \textbf{88.3}& \textbf{14.6}& \textbf{94}\\ \cmidrule{2-6} &LSTM& PWWS& \textbf{94.2}& 13.3& 132\\ & & BBA& \textbf{94.2}& \textbf{13.0}& \textbf{67}\\ \bottomrule \end{tabular} \end{adjustbox} \end{subtable} \begin{subtable}[h!]{0.655 \columnwidth} \caption{Embedding} \centering \begin{adjustbox}{max width=\columnwidth} \begin{tabular}{cccccc} \toprule Dataset&Model&Method & ASR (\%)& MR (\%)& Qrs \\ \midrule AG&BERT-base& TF& 84.7& 24.9& 346\\ & & BBA& \textbf{96.0}& \textbf{18.9}& \textbf{154}\\ \cmidrule{2-6} &LSTM& TF& 94.9& 17.3& 228\\ & & BBA& \textbf{98.5}& \textbf{16.6}& \textbf{142}\\ \midrule MR&XLNet-base& TF& 95.0& 18.0& 101\\ & & BBA& \textbf{96.3}& \textbf{16.2}& \textbf{68}\\ \cmidrule{2-6} &BERT-base& TF& 89.2& 20.0& 115\\ & & BBA& \textbf{95.7}& \textbf{16.9}& \textbf{67}\\ \cmidrule{2-6} &LSTM& TF& \textbf{98.2}& 13.6& 72\\ & & BBA& \textbf{98.2}& \textbf{13.1}& \textbf{54}\\ \bottomrule \end{tabular} \end{adjustbox} \end{subtable} \begin{subtable}[h!]{0.682 \columnwidth} \caption{HowNet} \centering \begin{adjustbox}{max width=\columnwidth} \begin{tabular}{cccccc} \toprule Dataset&Model&Method & ASR (\%)& MR (\%)& Qrs \\ \midrule AG&BERT-base& PSO& 67.2& 21.2& 65860\\ & &BBA& \textbf{70.8}& \textbf{15.5}& \textbf{5176}\\ \cmidrule{2-6} &LSTM& PSO& 71.0& 
19.7& 44956\\ & & BBA& \textbf{71.9}& \textbf{13.7}& \textbf{3278}\\ \midrule MR&XLNet-base& PSO& \textbf{91.3}& 18.6& 4504\\ & & BBA& \textbf{91.3}& \textbf{11.7}& \textbf{321}\\ \cmidrule{2-6} &BERT-base& PSO& \textbf{90.9}& 17.3& 6299\\ & & BBA& \textbf{90.9}& \textbf{12.4}& \textbf{403}\\ \cmidrule{2-6} &LSTM& PSO& \textbf{94.4}& 15.3& 2030\\ & & BBA& \textbf{94.4}& \textbf{11.2}& \textbf{138}\\ \bottomrule \end{tabular} \end{adjustbox} \end{subtable} \end{table*} \begin{table*}[hbt!] \centering \caption{Attack results for BERT-base models on document-level classification datasets.} \begin{subtable}[h!]{0.665\columnwidth} \label{tab:main2} \caption{WordNet} \centering \begin{adjustbox}{max width=\columnwidth} \begin{tabular}{cccccc} \toprule Dataset&Method & ASR (\%)& MR (\%)& Qrs \\ \midrule IMDB& PWWS& 97.6& 4.5& 1672\\ & BBA& \textbf{99.6}& \textbf{4.1}& \textbf{449}\\ \cmidrule{2-5} & LSH& 96.3& 5.3& 557\\ & BBA& \textbf{98.9}& \textbf{4.8}& \textbf{372}\\ \cmidrule{1-5} Yelp& PWWS& 94.3& 7.6& 1036\\ & BBA& \textbf{99.2}& \textbf{7.4}& \textbf{486}\\ \cmidrule{2-5} & LSH& 92.6& 9.5& 389\\ & BBA& \textbf{98.8}& \textbf{8.8}& \textbf{271}\\ \bottomrule \end{tabular} \end{adjustbox} \end{subtable} \begin{subtable}[h!]{0.65\columnwidth} \caption{Embedding} \centering \begin{adjustbox}{max width=\columnwidth} \begin{tabular}{cccccc} \toprule Dataset&Method & ASR (\%)& MR (\%)& Qrs \\ \midrule IMDB& TF& 99.1& 8.6& 712\\ & BBA& \textbf{99.6}& \textbf{6.1}& \textbf{339}\\ \cmidrule{2-5} & LSH& 98.5& 5.0& 770\\ & BBA& \textbf{99.8}& \textbf{4.9}& \textbf{413}\\ \cmidrule{1-5} Yelp& TF& 93.5& 11.1& 461\\ & BBA& \textbf{99.8}& \textbf{9.6}& \textbf{319}\\ \cmidrule{2-5} & LSH& 94.7& 8.9& 550\\ & BBA& \textbf{99.8}& \textbf{8.6}& \textbf{403}\\ \bottomrule \end{tabular} \end{adjustbox} \end{subtable} \begin{subtable}[h!]{0.695\columnwidth} \caption{HowNet} \centering \begin{adjustbox}{max width=\columnwidth} \begin{tabular}{cccccc} \toprule Dataset&Method & ASR (\%)& 
MR (\%)& Qrs \\ \midrule IMDB& PSO& \textbf{100.0}& 3.8& 113343\\ & BBA& \textbf{100.0}& \textbf{3.3}& \textbf{352}\\ \cmidrule{2-5} & LSH& 98.7& 3.2& 640\\ & BBA& \textbf{99.8}& \textbf{3.0}& \textbf{411}\\ \cmidrule{1-5} Yelp& PSO& \textbf{98.8}& 10.6& 86611\\ & BBA& \textbf{98.8}& \textbf{8.2}& \textbf{283}\\ \cmidrule{2-5} & LSH& 93.9& 8.0& 533\\ & BBA& \textbf{98.2}& \textbf{7.4}& \textbf{353}\\ \bottomrule \end{tabular} \end{adjustbox} \end{subtable} \end{table*} \subsubsection{Acquisition Maximization Considering Batch Diversity via Determinantal Point Process} We utilize expected improvement as the acquisition function, which is defined as $\mathrm{EI}(s) = \mathrm{E}[\max(g(s)-g^*_\mathcal{D},0)]$, where $g^*_\mathcal{D} = \max_{\hat{y}\in \hat{Y}} \hat{y}$ is the largest value evaluated so far. To further reduce the runtime of the Bayesian optimization algorithm, we evaluate a batch of sequences in parallel in a single round, following the practice in \citet{HDBBO}. We sample an evaluation batch $B$ via a Determinantal Point Process (DPP), which promotes batch diversity by maximizing the determinant of its posterior variance matrix $\mathrm{Var}(g(B) \mid \mathcal{D})$ \citep{kulesza2012determinantal}. Concretely, we first select the sequences with the top-$T$ acquisition values in the 1-Hamming distance ball $\mathcal{B}_H(s^*_\mathcal{D},1)$ of the best sequence $s^*_\mathcal{D}$ evaluated so far. Then, we greedily choose $N_b$ sequences among the top-$T$ sequences that maximize the determinant. More details can be found in \cref{app:exp}. \subsection{Post-Optimization for Perturbation Reduction} \label{postopt} Since we do not consider the perturbation size during the first step of BBA, we conduct a post-optimization step to reduce the perturbation size. To this end, we search near the current adversarial sequence $s_\text{adv}$ for a sequence that stays adversarial and has a smaller perturbation than $s_\text{adv}$.
To achieve this, we search for an adversarial sequence in the reduced search space $\mathcal{B}_H(s,d_H(s,s_\text{adv})-1) \cap \mathcal{B}_H(s_\text{adv},r)$, where $r$ controls the exploration size. We use Bayesian optimization for the post-optimization step as well. We leverage the evaluation history collected during the first step of BBA and subsample from it an initial dataset for the GP model training. If we find a new adversarial sequence during this step, we replace the current adversarial sequence with the new sequence and repeat the step above until we cannot find a new adversarial sequence using the query budget $N_\text{post}$ after the most recent update. The overall post-optimization procedure is summarized in \cref{alg:postopt}. \cref{fig:proc} illustrates the overall process of BBA. Please refer to \cref{alg:main} in \cref{app:mainalg} for a more detailed description of the overall BBA algorithm. \section{Experiments} We evaluate the performance of BBA on text classification, textual entailment, and protein classification tasks. We first provide a brief description of the datasets, victim models, and baseline methods used in the experiments. Then, we report the performance of BBA compared to the baselines. Our implementation is available at \url{https://github.com/snu-mllab/DiscreteBlockBayesAttack}. \input{fig2.tex} \subsection{Datasets and Victim Models} To demonstrate the wide applicability and effectiveness of BBA, we conduct experiments on various datasets in the NLP and protein domains. In the NLP domain, we use sentence-level text classification datasets (AG's News, Movie Review), document-level classification datasets (IMDB, Yelp), and textual entailment datasets (MNLI, QNLI) \citep{Yelp_and_AG,MR,IMDB,MNLI,QNLI}. In the protein domain, we use an enzyme classification dataset (EC) with 3-level hierarchical multi-labels \citep{EC50}. Note that a protein is a sequence of amino acids, each of which is a discrete categorical variable.
We consider multiple types of victim models to attack, including bi-directional word LSTM, ASGD Weight-Dropped LSTM, fine-tuned BERT-base and BERT-large, and fine-tuned XLNet-base and XLNet-large \cite{LSTM, AWD-LSTM, BERT, XLNet}. More details on datasets and victim models can be found in \cref{app:datasets,app:models}, respectively. \subsection{Baseline Methods} In the NLP domain, we compare the performance of BBA against state-of-the-art methods such as PWWS, TextFooler, LSH, BAE, and PSO, the first four of which are greedy-based algorithms \cite{PWWS,TextFooler, LSH, BAE, PSO}. Note that PWWS, TextFooler, BAE, and PSO have different attack search spaces since they utilize different word substitution methods (WordNet, Embedding, BERT masked language model, and HowNet, respectively) \cite{WordNet,mrkvsic2016counter,HowNet}. For a fair comparison, we follow the practice in \citet{LSH} and compare BBA against each baseline individually under the same attack setting (\eg, word substitution method, query budget) as used in the baseline. We also note that LSH leverages additional attention models, each of which is pre-trained on a different classification dataset. Please refer to \cref{app:spaces} for more details. For the protein classification task, we compare BBA with TextFooler. To define its attack space, we exploit the experimental exchangeability of amino acids \citep{ExEx}, which quantifies the mean effect of exchanging one amino acid for another on protein activity, as the measure of semantic similarity. Then, we define a synonym set for each amino acid by thresholding the experimental exchangeability and set the attack space to the product of the synonym sets. As in the NLP domain, we compare BBA with the baseline under the same experimental setting as used in the baseline.
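The protein attack-space construction above can be sketched as follows; the exchangeability values here are placeholders for illustration only, not the experimentally measured values of \citet{ExEx}:

```python
# Hypothetical exchangeability values for illustration; the real measure is
# the experimentally determined exchangeability of amino acids (ExEx).
EXCHANGEABILITY = {
    ("A", "G"): 0.62, ("A", "S"): 0.71, ("G", "S"): 0.58,
    ("A", "V"): 0.40, ("G", "V"): 0.22, ("S", "V"): 0.30,
}

def exch(a, b):
    # The measure is treated as symmetric in the two residues.
    return EXCHANGEABILITY.get((a, b), EXCHANGEABILITY.get((b, a), 0.0))

def synonym_sets(amino_acids, threshold):
    """Synonym set of each residue: all other residues whose
    exchangeability with it is at least the threshold."""
    return {a: {b for b in amino_acids if b != a and exch(a, b) >= threshold}
            for a in amino_acids}
```

The attack space is then the Cartesian product of the synonym sets of the residues in the input protein.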
\subsection{Attack Performance} We quantify the attack performance in terms of three main metrics: attack success rate (ASR), modification rate (MR), and the average number of queries (Qrs). The attack success rate is defined as the rate of successfully finding misclassified sequences from the original sequences that are correctly classified, which directly measures the effectiveness of the attack method. The modification rate is defined as the percentage of modified elements after the attack, averaged over successfully fooled sequences. This rate is formally written as $\mathrm{E}[d_H(s,s_\text{adv}) / \mathrm{len}(s)]$, which quantifies the distortion of the perturbed sequences from the original ones. The average number of queries, computed over all sequences being attacked, represents the query efficiency of the attack methods. The main attack results on text classification tasks are summarized in \cref{tab:main1,tab:main2}. The results show that BBA significantly outperforms all the baseline methods in all the evaluation metrics for all datasets and victim models we consider. \cref{fig:main} shows the cumulative distribution of the number of queries required for the attack methods against a BERT-base model on the Yelp dataset. The results show that BBA finds successful adversarial texts using fewer queries than the baseline methods. More experimental results on other target models (BERT-large, XLNet-large), another baseline method (BAE), and other datasets (MNLI, QNLI) can be found in \cref{app:add}. Moreover, \cref{tab:protein} shows that BBA outperforms the baseline method by a large margin for the protein classification task, demonstrating the general applicability and effectiveness of BBA across multiple domains.
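The three metrics can be computed from per-example attack outcomes as in the following sketch (one record per correctly classified input that was attacked):

```python
def hamming(s1, s2):
    return sum(a != b for a, b in zip(s1, s2))

def attack_metrics(results):
    """results: list of (original, adversarial_or_None, n_queries) triples,
    one per correctly classified input sequence that was attacked."""
    successes = [(s, adv) for s, adv, _ in results if adv is not None]
    asr = 100.0 * len(successes) / len(results)
    # MR is averaged over successfully fooled sequences only.
    mr = 100.0 * sum(hamming(s, adv) / len(s)
                     for s, adv in successes) / len(successes)
    # Qrs is averaged over all attacked sequences.
    qrs = sum(q for _, _, q in results) / len(results)
    return asr, mr, qrs
```

Note the different averaging populations: MR over successful attacks, Qrs over all attacked sequences, matching the definitions above.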
\begin{table}[ht] \centering \caption{Attack results against AWD-LSTM models on the protein classification dataset EC50, levels 0, 1, and 2.} \begin{adjustbox}{max width=\columnwidth} \begin{tabular}{cccccccccc} \toprule &&Level 0 &&&Level 1 &&& Level 2&\\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} Method &ASR &MR &Qrs &ASR &MR &Qrs &ASR &MR &Qrs\\ \midrule TF & 83.8 & 3.2 & 619 & 85.8 & 3.0 & 584 & 89.6 & 2.5 & 538 \\ BBA & \textbf{99.8} & \textbf{2.9} & \textbf{285} & \textbf{99.8} & \textbf{2.3} & \textbf{293} & \textbf{100.0} & \textbf{2.0} & \textbf{231}\\ \bottomrule \end{tabular} \end{adjustbox} \label{tab:protein} \end{table} For a direct comparison with a baseline, one can compute the MR and Qrs over the texts that both BBA and the baseline method are successful on. \Cref{tab:main1anal} shows that BBA outperforms PWWS by an even larger margin in MR and Qrs on the samples that both methods successfully fooled.\footnote{For BERT-base on AG, PWWS fools 267 texts, BBA fools 363 texts, and both commonly fool 262 texts among 500 texts. For LSTM on AG, PWWS fools 354 texts, BBA fools 376 texts, and both commonly fool 349 texts among 500 texts.} \begin{table}[ht] \caption{Attack results on the AG's News.
MR and Qrs of `both success' are averaged over the texts that both PWWS and BBA successfully fooled.} \centering \begin{adjustbox}{max width=\columnwidth} \begin{tabular}{ccccccc} \toprule &&&&&\multicolumn{2}{c}{Both success}\\ \cmidrule(lr){6-7} Model&Method & ASR (\%)& MR (\%)& Qrs &MR (\%) & Qrs \\ \midrule BERT-base& PWWS& 57.1& 18.3& 367 & 17.8 & 311 \\ & BBA& \textbf{77.4}& \textbf{17.8}& \textbf{217} & \textbf{14.0} & \textbf{154}\\ \cmidrule{1-7} LSTM& PWWS& 78.3& 16.4& 336 & 16.1 & 311\\ & BBA& \textbf{83.2}& \textbf{15.4}& \textbf{190} & \textbf{14.4} & \textbf{163}\\ \bottomrule \end{tabular} \end{adjustbox} \label{tab:main1anal} \end{table} \subsection{Ablation Studies} \subsubsection{The Effect of DPP in Batch Update} To validate the effectiveness of the DPP-based batch update technique, we compare BBA with the greedy-style batch update, which chooses the sequences with the top-$N_b$ acquisition values for the next evaluations. We do not utilize the post-optimization process, to isolate the effect of the batch update. \cref{tab:abl-batch} shows that the batch update with DPP consistently achieves a higher attack success rate with fewer queries than the greedy-style batch update. Surprisingly, the DPP-based batch update even achieves a higher attack success rate with fewer queries than `without batch update' on the AG's News dataset. \begin{table}[ht] \caption{Attack results of BBA with and without batch update using the WordNet-based word substitution against BERT-base and LSTM models on the sentence-level classification datasets.
`Top-$N_b$' denotes the greedy-style batch update that chooses sequences of top-$N_b$ acquisition values.} \centering \begin{adjustbox}{max width=\columnwidth} \begin{tabular}{cccccc} \toprule &&\multicolumn{2}{c}{BERT-base}&\multicolumn{2}{c}{LSTM}\\ \cmidrule(lr){3-4}\cmidrule(lr){5-6} Dataset& Method &ASR (\%) &Qrs & ASR (\%) & Qrs\\ \midrule AG& w/o batch& 76.1& 126 & 73.5& 127\\ \cmidrule{2-6} & w/ batch, Top-$N_b$& 75.9& 133 & 74.3& 127\\ & w/ batch, DPP& \textbf{77.4}& \textbf{124}& \textbf{83.2}& \textbf{86}\\ \midrule MR& w/o batch& 88.5& 26& 93.9& 18\\ \cmidrule{2-6} & w/ batch, Top-$N_b$& 87.1& 28& 93.6& 20\\ & w/ batch, DPP& \textbf{88.3}& \textbf{25}& \textbf{94.2}& \textbf{17}\\ \bottomrule \end{tabular} \end{adjustbox} \label{tab:abl-batch} \end{table} \begin{table*}[hbt!] \caption{Examples of the original and their adversarial sequences from Yelp and EC50 against BERT-base models.} \label{tab:qualitative} \centering \begin{adjustbox}{max width=2\columnwidth} \begin{tabular}{llccc} \toprule \multicolumn{2}{l}{Document-Level Text Classification (Yelp)}&Label\\ \midrule Orig & Food is fantastic and exceptionally clean! My only complaint is I went there with my 2 small children and they were showing a very&\multirow{2}{*}{Positive}\\ & inappropriate R rated movie! \\ \cdashlinelr{2-2} BBA & Food is \emph{\red{gorgeous}} and exceptionally \emph{\red{unpolluted}}! My only complaint is I went there with my 2 small children and they were showing a very& \multirow{2}{*}{Negative}\\ & inappropriate R rated movie! \\ \cdashlinelr{2-2} TF & Food is fantastic and \emph{\red{awfully}} clean! My only \emph{\red{grievances}} is I \emph{\red{turned}} there with my 2 small children and they were showing a very& \multirow{2}{*}{Negative}\\ & inappropriate R rated \emph{\red{footage}}! 
\\ \midrule \multicolumn{2}{l}{Protein Classification (EC50 level 0)}& Label\\ \midrule Orig&\texttt{MATPWRRALLMILASQVVTLVKCLEDDDVPEEWLLLHVVQGQIGAGNYSYLRLNHEGKIILRMQSLRGDADLYVSDSTPHPSFDDYELQSVT}&\multirow{3}{*}{Non-Enzyme}\\ &\texttt{CGQDVVSIPAHFQRPVGIGIYGHPSHHESDFEMRVYYDRTVDQYPFGEAAYFTDPTGASQQQAYAPEEAAQEEESVLWTILISILKLVLEILF} \\ \cdashlinelr{2-2} BBA &\texttt{MATPWRRALLM\red{R}LASQVVTLVKCLEDDDVPEEWLLLHVVQGQIGAGNYSYLRLNHEGKIILRMQSLRGDADLYVSDSTPHPSFDDYELQSVT}& \multirow{3}{*}{Enzyme}\\ &\texttt{CGQDVVSIPAHFQRPVGIGIYGHPSHHESDFEMRVYYD\red{W}TVD\red{W}YPFGEAAYFTDPTGASQQQAYAPEEAAQEEESVLWTILISILKLVLEILF}\\ \cdashlinelr{2-2} TF & \texttt{MATPWRRALLMILASQVVTLVKCLEDDDVPEEWLLLHVVQGQIGAGNYSYLRLNHEGKIILRMQSLRGDADLYVSDSTPHPSFDDYELQSVT}&\multirow{3}{*}{Enzyme}\\ &\texttt{CGQDVVSIPAHFQRPVGIGIYGHPSHHESDFEMRVYYDRTVDQYPFGE\red{W}AYF\red{C}\red{C}\red{G}\red{W}GASQQQAYAPEE\red{W}\red{W}\red{W}\red{F}EESVL\red{D}TILIS\red{G}LKLVLEILF} \\ \bottomrule \end{tabular} \end{adjustbox} \end{table*} \subsubsection{The Effect of Post-Optimization Process} We analyze how the modification rate changes during the post-optimization process. The post-optimization process reduces the distortion between the adversarial sequence and the original sequence using additional queries, which results in a trade-off between the distortion and the number of queries. \cref{fig:trav} shows the trajectory of the modification rate while traversing the query budget $N_\text{post}$ for post-optimization from $0$ to $200$. We find that the post-optimization process reduces the distortion and reaches the same distortion as PWWS using fewer queries. \begin{figure}[hbt!]
\centering \begin{adjustbox}{max width=0.99\columnwidth} \begin{tikzpicture} \begin{axis}[ width=4.5cm, height=4.2cm, no marks, every axis plot/.append style={thick}, grid=major, scaled ticks = false, ylabel near ticks, tick pos=left, tick label style={font=\small}, xtick={0, 500,1000,1500,2000}, xticklabels={0, 0.5k, 1k, 1.5k ,2k}, ytick={0, 10,20,30}, yticklabels={0, 10,20,30}, label style={font=\small}, xlabel={Number of queries}, ylabel={Modification rate (\%)}, ylabel style={at={(-0.2,0.5)}}, xmin=0, xmax=2100, ymin=0, ymax=31, legend cell align={left}, legend style={legend columns=1, at={(1.8, 0.62)}, mark size=1pt, font=\footnotesize}, ] \addplot[red, only marks, mark size=0.7pt] table [x=Qrs, y=modif, col sep=comma]{CSV_final/mp_trav_ours_wordnet.csv}; \addlegendentry{BBA} \addplot[blue, only marks, mark size=0.7pt] table [x=PWWSQrs, y=PWWSmodif, col sep=comma]{CSV_final/mp_trav_baselines.csv}; \addlegendentry{PWWS} \end{axis} \end{tikzpicture} \end{adjustbox} \caption{Modification rate versus number of queries for adversarial texts generated by traversing $N_\text{post}$ from $0$ to $200$ on the IMDB dataset against a BERT-base model. We use the WordNet substitution method for the attack.} \label{fig:trav} \end{figure} \subsubsection{The Actual Runtime Analysis} \begin{figure}[hbt!]
\centering \begin{subfigure}[t]{0.65\columnwidth} \begin{tikzpicture} \begin{axis}[ width=3.7cm, height=3.5cm, grid=major, no marks, scaled ticks = false, tick pos = left, tick label style={font=\tiny}, ytick={0,20000,40000,60000,80000,100000}, yticklabels={0,20k,40k,60k,80k,100k}, xtick={0,2000,4000,6000,8000,10000}, xticklabels={0, 2k, 4k, 6k, 8k, 10k}, label style={font=\tiny}, xlabel={Number of queries}, ylabel={Actual runtime (sec)}, ylabel style={at={(0.25,0.5)}}, xlabel style={at={(0.5,0.15)}}, xmin=0, xmax=10000, ymin=0, ymax=105000, legend style={legend columns=1, font=\tiny}, legend cell align={left}, legend style={at={(-1.04,0.53)},anchor=center}] \addplot+[black] table [x=qrs, y=runtime, col sep=comma]{EXP2/bayesattack-hownet_categorical_inf_anal2_v3_straight_dpp_posterior_1000_pso_0-123.csv}; \addlegendentry{w/o both} \addplot+[blue] table [x=qrs, y=runtime, col sep=comma]{EXP2/bayesattack-hownet_categorical_wide_anal2_v3_sod_straight_dpp_posterior_1000_pso_0-123.csv}; \addlegendentry{w/ HS, $m\!=\!40$} \addplot+[red] table [x=qrs, y=runtime, col sep=comma]{EXP2/bayesattack-hownet_categorical_narrow_anal2_v3_sod_straight_dpp_posterior_1000_pso_0-123.csv}; \addlegendentry{w/ HS, $m\!=\!5$} \end{axis} \end{tikzpicture} {\captionsetup{justification=raggedleft,singlelinecheck=false} \caption{$123$rd text~~~}} \label{fig:runtimea} \end{subfigure} \begin{subfigure}[t]{0.3\columnwidth} \begin{tikzpicture} \begin{axis}[width=3.7cm, height=3.5cm, grid=major, no marks, scaled ticks = false, tick pos = left, tick label style={font=\tiny}, ytick={0,20000}, yticklabels={0,20k}, xtick={0,2000,4000,6000,8000,10000}, xticklabels={0, 2k, 4k, 6k, 8k, 10k}, label style={font=\tiny}, xlabel={Number of queries}, xlabel style={at={(0.5,0.15)}}, xmin=0, xmax=10000, ymin=0, ymax=21000, legend style={legend columns=1, font=\tiny}, legend cell align={left}, legend style={at={(0.5,1.5)},anchor=center}] \addplot+[black] table [x=qrs, y=runtime, col
sep=comma]{EXP2/bayesattack-hownet_categorical_inf_anal2_v3_straight_dpp_posterior_1000_pso_0-348.csv}; \addplot+[blue] table [x=qrs, y=runtime, col sep=comma]{EXP2/bayesattack-hownet_categorical_wide_anal2_v3_sod_straight_dpp_posterior_1000_pso_0-348.csv}; \addplot+[red] table [x=qrs, y=runtime, col sep=comma]{EXP2/bayesattack-hownet_categorical_narrow_anal2_v3_sod_straight_dpp_posterior_1000_pso_0-348.csv}; \end{axis} \end{tikzpicture} {\captionsetup{justification=raggedleft,singlelinecheck=false} \caption{$348$th text}} \label{fig:runtimeb} \end{subfigure} \caption{The cumulative runtime versus the number of queries plot. HS in the legend denotes history subsampling, and $m=k$ in the legend denotes block decomposition with the block size $k$.} \label{fig:runtime} \end{figure} To study the effectiveness of the block decomposition and history subsampling techniques on runtime, we choose two texts (the $123$rd text of length $641$ and the $348$th text of length $40$) from the texts for which BBA iterates more than once until attack success when attacking the Yelp dataset against the BERT-base model. \Cref{fig:runtime} shows that block decomposition and history subsampling significantly reduce the actual runtime as the number of queries increases. Note that the comparison between the black and the blue curve of the $348$th text shows only the effect of history subsampling, since the block size is equal to the text length (a single block). In practice, attacking long documents against a robust model may require a large number of queries, and our techniques can effectively reduce the actual runtime in that situation. \Cref{fig:runtime2} shows the cumulative distribution of the actual runtime required for the attack methods. The result shows that BBA consistently finds successful adversarial texts faster than PWWS against the XLNet-large model on the Yelp dataset.
Note that one could further accelerate the kernel computations of Bayesian optimization using a better computation resource such as a multi-GPU cluster. \begin{figure} \centering \begin{subfigure}[b]{\columnwidth} \centering \begin{adjustbox}{max width=\columnwidth} \begin{tikzpicture} \begin{axis}[at={(0cm,-0.5cm)},width=4.5cm, height=4.2cm, grid=major, no marks, scaled ticks = false, tick pos = left, tick label style={font=\small}, ytick={0,0.2,0.4,0.6,0.8,1.0}, yticklabels={0,20,40,60,80,100}, xtick={0,100,200,400}, label style={font=\small}, xlabel={Actual runtime (sec)}, ylabel={Success rate}, ylabel style={at={(0.1,0.5)}}, xlabel style={at={(0.5,0)}}, xmin=0, xmax=250, ymin=0, ymax=1.05, legend style={legend columns=1, font=\footnotesize}, legend cell align={left}, legend style={at={(1.7,0.5)},anchor=center}] \addplot+[red] table [x=time, y=asr, col sep=comma]{TIME/xlnet-large-cased-yelp-ours-pwws.csv}; \addlegendentry{BBA} \addplot+[blue] table [x=time, y=asr, col sep=comma]{TIME/xlnet-large-cased-yelp-pwws_0_product.csv}; \addlegendentry{PWWS} \end{axis} \end{tikzpicture} \end{adjustbox} \end{subfigure} \caption{The cumulative distribution of the actual runtime required for the attack methods against the XLNet-large model on the Yelp dataset. Refer to \cref{tab:large} in \cref{app:add} for the detailed attack results.} \label{fig:runtime2} \end{figure} \subsection{Qualitative Results} Attack examples of Yelp and EC50 datasets in \cref{tab:qualitative} show that our method successfully generates semantically consistent adversarial texts while baseline methods generate adversarial sequences with high modification rates. Please refer to \cref{app:qual} for more qualitative results. \section{Conclusion} We propose a query-efficient and scalable black-box attack method on discrete sequential data using Bayesian optimization. 
In contrast to greedy-based state-of-the-art methods, our method can dynamically compute important positions using an ARD categorical kernel during Bayesian optimization. Furthermore, we propose block decomposition and history subsampling techniques to scale our method to long sequences and large query budgets. Lastly, we develop a post-optimization algorithm that minimizes the perturbation size. Our extensive experiments on various victim models and datasets from different domains demonstrate state-of-the-art attack performance compared to the baseline methods. Our method achieves a higher attack success rate with a significantly lower modification rate and fewer queries throughout all our experiments. \section*{Broader Ethical Impact} Our research focuses on the important problem of adversarial vulnerabilities of classification models on discrete sequential data. Even though there is the possibility of a malicious adversary misusing BBA to attack public text classification APIs, we believe our research can be a basis for the improvement in defenses against adversarial attacks on discrete sequential data. \section*{Acknowledgements} This work was supported by Samsung Research Funding \& Incubation Center of Samsung Electronics under Project Number SRFC-IT2101-01, Institute of Information \& communications Technology Planning \& Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00882, (SW STAR LAB) Development of deployable learning intelligence via self-sustainable and trustworthy machine learning), and Basic Science Research Program through the National Research Foundation of Korea (NRF) (2020R1A2B5B03095585). This material is based upon work supported by the Air Force Office of Scientific Research under award number FA2386-20-1-4043. Hyun Oh Song is the corresponding author.
\section{Introduction} In many physical systems, the mean state and the typical fluctuations about this state, usually studied in statistical physics, are not the only quantities of interest. Indeed, fluctuations far away from the mean state, although they are usually very rare, can play a crucial part in the macroscopic behaviour of the system. For instance, they can drive the system to a new metastable state, possibly with radically different properties~\cite{Kramers1940}. Such transitions arise in a wide variety of situations, such as Josephson junctions~\cite{Kurkijarvi1972}, quantum oscillators~\cite{Dykman2012}, turbulent flows~\cite{Bouchet_Simonnet_2008}, magneto-hydrodynamics dynamos~\cite{Berhanu2007}, diffusion-controlled chemical reactions~\cite{Calef1983}, protein folding~\cite{Noe2009}, climate dynamics~\cite{Paillard1998}. Even if the system returns to its original state after undergoing the large fluctuation, the impact of this event may be so large that it is worth being studied on its own. One may think for instance about heat waves~\cite{Robine2008} and tropical cyclones, rogue waves in the ocean~\cite{Dysthe2008}, strong dissipative events in turbulent flows~\cite{Yeung2015}, shocks in financial markets~\cite{EmbrechtsBook}. Here, we are concerned with the study of such atypical fluctuations starting from the equations (deterministic or stochastic) which govern the dynamics of the system. This approach is different from and complementary to the purely statistical methods which try to extract the best possible information about the distribution of rare events from an existing timeseries, such as, for instance, extreme value statistics~\cite{Ghil2011,Fortin2015,LucariniExtremesBook}. The theoretical framework which has been developed over the last decades in statistical physics to tackle this problem is that of \emph{large deviation theory}~\cite{FreidlinWentzellBook,EllisBook,DenHollanderBook,Touchette2009,VulpianiBook}. 
Numerical methods have also been developed to efficiently sample rare events, which are not amenable to classical Monte-Carlo methods~\cite{AsmussenGlynn2007,LandauBinder2015,Liu2008}; see~\cite{Bucklew2004,RubinoTuffin2009} for general references on rare event simulation. These algorithms can be roughly divided into two main classes: those which work in state space, and evolve a population of \emph{clones} of the system according to selection rules biased to favour the appearance of the desired rare event~\cite{Grassberger2002,DelMoral2005,Cerou2007,Giardina2011,Rolland2016}, and those which try to sample directly in path space the histories of the system which exhibit the phenomenon of interest~\cite{Dellago2002,E2002,E2004,laurie2015computation,Grafke2015b,grigorio2017instantons}. They can be used either for stochastic processes or for deterministic chaotic dynamical systems~\cite{wouters2016rare}. Most of these algorithms ultimately compute either one-time statistics (typically, the stationary probability distribution of the system, for which they efficiently sample the tails, or alternatively, large deviation rate functions or scaled cumulant generating functions), or reactive trajectories corresponding to the transition between two metastable states. From a modelling perspective, it is natural to assume that successive occurrences of a rare event are independent of one another~\cite{EmbrechtsBook}. Then, the average number of events occurring in a time interval is proportional to the length of that interval. This is the definition of a Poisson process. In this case, all the statistics are encoded in a single parameter, the rate of the Poisson process. In the following, we will assume that we are dealing with the simple case of a well-identified process that can be described by a single return time or rate. 
This is often a sufficient framework; indeed, the long-time behaviour of many systems can be described phenomenologically, or exactly in some limits, as Markov processes characterised by a set of transition rates for independent processes (see for instance~\cite{FreidlinWentzellBook} for systems driven by a weak noise). We note however that many other physical systems are not amenable to such simple effective Markov processes, for instance structural glasses or amorphous media. For many practical applications, the most useful information about a rare event is the \emph{return time}: the typical time between two occurrences of the same event. This is how hydrologists measure the amplitude of floods for instance~\cite{Sveinsson2002}. As a matter of fact, one of the motivations of Gumbel, a founding father of extreme value theory, was exactly this problem~\cite{Gumbel1941}. Other natural hazards, such as earthquakes~\cite{Corral2005} and landslides~\cite{Peres2016}, are also ranked according to their return time. Similarly, climatologists seek to determine how the frequency of given heat waves~\cite{Meehl2004b,Rahmstorf2011} or cold spells~\cite{Cattiaux2010} evolves in a changing climate~\cite{Shepherd2016}. Public policies rely heavily on a correct estimate of return times: for instance, in the United States, floodplains were defined in the National Flood Insurance Program in 1968 as areas vulnerable to events with a 100-year return time. Such definitions are then used to determine insurance policies for home owners. In industry as well, return times are the metric used by engineers to design systems withstanding a given class of events. Just like the extreme values of any observable, the return time of a rare event is very difficult to estimate directly from observational or numerical data, because extremely long timeseries are necessary. 
Heuristically, the return time (return period) $r(a)$ for an event of amplitude $a$ (return level), interpreted as a first-passage time, may at first sight be related to the inverse of the stationary probability $p_s$: $r(a)=\tau_{c}(a)/{p_{s}(a)}$, where the correlation time $\tau_{c}(a)$ usually depends on $a$ but remains bounded when $p_{s}(a)$ goes to zero. This is true for instance for a system perturbed by a weak noise of amplitude $\epsilon$, at the level of large deviations: $r(a) \underset{\epsilon \rightarrow 0}{\asymp} e^{U(a)/\epsilon}$, where the quasi-potential $U$ is defined by $p_s(a) \underset{\epsilon \rightarrow 0}{\asymp} e^{-U(a)/\epsilon}$~\cite{FreidlinWentzellBook}. However, the return time is only roughly proportional to the inverse of the stationary probability. In order to compute $\tau_{c}(a)$ one has to go beyond large deviation theory. For instance, for gradient dynamics and for first exit time problems, exact formulas exist~\cite{Langer1969,GardinerBook,RiskenBook}, valid at leading order in $\epsilon$ (we stress that different formulas are obtained depending on the hypotheses made on the domain that the particle exits). Generalisations to irreversible non-gradient dynamics also exist (see~\cite{bouchet2016generalisation} and references therein). From these computations, it appears clearly that $\tau_{c}(a)$ is not simply related to ${p_{s}(a)}$ and that the return time $r(a)$ is a trajectory property, not amenable to one-point statistics like ${p_{s}(a)}$. There is thus a need to develop rare event algorithms specifically designed for computing return times, valid also when large deviation estimates are not relevant. This is the aim of this paper. The approach developed in this work relies on the combination of two observations. 
First, if one assumes that rare events are described by a Poisson process, then return times can be related to the probability of observing extrema over pieces of trajectories, which are of duration much larger than the correlation time of the system, but typically much smaller than the computed return times. Second, several classes of rare event algorithms can be easily generalised to compute the probability of extrema over pieces of trajectories, rather than single-point statistics. We show that combining these two remarks allows us to build a powerful tool to compute return times in an elementary way with simple and robust algorithms. As a side remark, we also discuss a new way to construct return time plots from a timeseries, which provides an important improvement for return times moderately larger than the sampling time, even when we are not using a rare event algorithm. We illustrate the method by computing return times, first for an instantaneous observable (one-point statistics) using the \acf{ams} algorithm~\cite{Cerou2007,Cerou2011}, and second for a time-averaged observable, using both the \ac{ams} algorithm and the Del Moral-Garnier algorithm~\cite{DelMoral2005} (or equivalently the Giardina-Kurchan algorithm~\cite{Giardina2006} in a non-stationary context). The computation of return times with the \ac{ams} algorithm leads us to define a generalisation called the \acf{tams} algorithm. This generalisation has several practical advantages: it directly computes return times $r(a)$ for a full range of return levels $a$ rather than for a single one, and it avoids the tricky estimation of a time scale on an auxiliary ensemble, as well as the sampling from this auxiliary ensemble. As a test, we first carry out these computations for a simple stochastic process, the \acf{ou} process, for which analytical results are available and the accuracy and efficiency of the algorithm can be tested thoroughly. 
Then, to demonstrate the usefulness of the method in realistic applications, we briefly showcase a problem involving a complex dynamical system: extreme values of the drag on an object immersed in a turbulent flow. The structure of this paper is as follows: in section~\ref{sec:Return_Time_Plots}, we introduce the method to compute return times from a timeseries and from rare event algorithms. We define the \acf{tams} algorithm in section~\ref{sec:AMS}. We apply the method to compute return times for instantaneous and time-averaged observables for an Ornstein--Uhlenbeck process, respectively, in section~\ref{sec:AMS} (using the \ac{tams} algorithm) and~\ref{sec:gktl} (using both the \ac{tams} and the \acf{gktl} algorithms). Finally, we introduce the application to complex dynamical systems in section~\ref{sec:applications}, before presenting our conclusions in section~\ref{sec:cl}, where we also discuss the range of applicability of these algorithms. \section{Return Times: Definition and Sampling Methods} \label{sec:Return_Time_Plots} \subsection{Computing return times from a timeseries}\label{sec:return_time_timeseries} \subsubsection{Definition of return times} We consider a statistically time-homogeneous ergodic process (a stationary timeseries) $\left\{ \ensuremath{A}(t)\right\} _{ t\geq t_0}$. Typically, $\ensuremath{A}:\mathbb{R}^d\to \mathbb{R}$ is an observable on a system of interest, considered here as an $\mathbb{R}^d$-valued stochastic process $\bigl(X_t\bigr)_{t\geq t_0}$, and we shall denote $\ensuremath{A}(t)=\ensuremath{A}(X_t)$. We are interested in the statistical distribution of events where the observable reaches a prescribed threshold $a$. The occurrence of such events is illustrated for a sample \acf{ou} process, defined by $dX_t=-\alpha X_t dt+\sqrt{2\epsilon}dW_t$, on Fig.~\ref{fig:courbe_signal_avec_seuil}. 
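For readers who wish to reproduce the sample paths qualitatively, a minimal Euler--Maruyama discretisation of the \ac{ou} process can be written as follows (a sketch only: the Python function name and the random seed are ours; the parameters match the figures, $\alpha=1$, $\epsilon=1/2$):

```python
import numpy as np

def simulate_ou(alpha, eps, dt, nsteps, x0=0.0, seed=0):
    """Euler--Maruyama discretisation of dX_t = -alpha X_t dt + sqrt(2 eps) dW_t."""
    rng = np.random.default_rng(seed)
    x = np.empty(nsteps + 1)
    x[0] = x0
    for i in range(nsteps):
        x[i + 1] = x[i] - alpha * x[i] * dt + np.sqrt(2.0 * eps * dt) * rng.standard_normal()
    return x

# same parameters as in the figures: alpha = 1, eps = 1/2
X = simulate_ou(alpha=1.0, eps=0.5, dt=0.01, nsteps=100_000)
# the stationary distribution is Gaussian with variance eps / alpha = 1/2
```

The stationary variance $\epsilon/\alpha=1/2$ provides a quick consistency check of the discretisation.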
\begin{figure} \centering \begin{subfigure}{1.0\textwidth} \centering \includegraphics[scale=0.6]{courbe_signal_avec_seuil} \caption{Sample timeseries (black curve), generated from an Ornstein--Uhlenbeck process~\eqref{eq:OUprocess} ($\alpha=1$, $\epsilon=1/2$; $\sigma=1/\sqrt{2}$ is the standard deviation). We are interested in fluctuations which reach a prescribed threshold $a$ (red curve). These events are identified by the red dots.} \label{fig:courbe_signal_avec_seuil} \end{subfigure} \begin{subfigure}{1.0\textwidth} \centering \includegraphics[scale=0.6]{illustration_tau} \caption{Time evolution of the waiting time $\tau(a,t)$ (see~\eqref{eq:Waiting_Time_Def}) associated to the above timeseries: it is a succession of affine parts with slope $-1$. Note that in principle, there should be small time intervals such that $\tau(a,t)=0$, corresponding to the duration of the event with $\ensuremath{A}(t)>a$, separating the triangles. Here, the duration of the events is too small for such intervals to be visible.} \label{fig:illustration_tau} \end{subfigure} \caption{An example of a random process (a) and the waiting time (b) associated to events reaching a given threshold.} \end{figure} We define the return time for a given threshold $a$ as the average time one has to wait before observing the next event with $\ensuremath{A}(t)>a$. More precisely, we define the waiting time \begin{align} \tau(a,t)&=\min\left\{ \tau\geq t\left|\ensuremath{A}\left(\tau\right)>a\right.\right\} -t. \label{eq:Waiting_Time_Def}\\ \intertext{As an illustration, the waiting time $\tau(a,t)$ is shown for our sample Ornstein--Uhlenbeck process on Fig.~\ref{fig:illustration_tau}. 
Then the return time $r(a)$ for the threshold $a$ is defined as} r(a) &=\mathbb{E}\left[\tau(a,t)\right]\label{eq:Return_Times_Definition}, \end{align} where $\mathbb{E}$ is the average with respect to realisations of the process $X$ with initial condition $X_{t_0}=x_0$ (hence the notation $\mathbb{E}=\mathbb{E}_{x_0,t_0}$ in that case), or is a time average for an ergodic process. From now on, we shall omit the indices when there is no ambiguity. The return time $r(a)$ is independent of time because the process is homogeneous. The problem we consider in this section is that of estimating $r(a)$ from a sample timeseries of duration $T_{d}$: $\left\{ \ensuremath{A}(t)\right\} _{0\leq t\leq T_{d}}$. The definition leads to an obvious estimator for $r(a)$, the \emph{direct estimator} $\hat{r}_D$ defined by \begin{equation} \label{eq:Return_Times_Direct_Estimate} \hat{r}_D(a)=\frac{1}{T_{d}}\int_{0}^{T_{d}}\tau(a,t)\,\mathrm{d}t=\frac{1}{T_{d}}\sum_{n=1}^{N_{d}}\frac{\tau_{n}^{2}}{2}, \end{equation} where $\tau_{n}$ is the duration of the successive intervals over which $\ensuremath{A}(t)\leq a$, and $N_{d}$ is the number of such intervals. The last identity in~\eqref{eq:Return_Times_Direct_Estimate} is illustrated graphically in~Fig.~\ref{fig:illustration_tau}: the integral of $\tau(a,t)$ is obtained by computing the total area beneath the triangles. In the limit of rare events, the return time will also be the average time between two successive independent events. However, the definition~\eqref{eq:Return_Times_Definition} for the return time has the great advantage of not requiring a definition of independent events, which is cumbersome when time correlations are not negligible. We explain this further in the following section. \subsubsection{Return times and the distribution of successive events} Estimating return times using~\eqref{eq:Return_Times_Direct_Estimate} implies computing the time intervals $\tau_n$ between successive events with $\ensuremath{A}(t)>a$. 
When $a$ is large enough, most of the time $\ensuremath{A}(t)<a$, and only very rarely $\ensuremath{A}(t)>a$. Then we can distinguish two kinds of contributions to the time intervals $\tau_n$. On the one hand, we have correlated events corresponding to fluctuations around the threshold value $a$, on a timescale of the order of the correlation time. From our point of view, these correspond to the same event, with a finite duration. On the other hand, there are successive events such as those depicted in Fig.~\ref{fig:courbe_signal_avec_seuil}, which can be considered as statistically independent events. Therefore, we expect those events to form a Poisson point process, and the corresponding time intervals $\tau_n$ should be distributed according to the distribution of time intervals of a Poisson process: $P\left(\tau\right)=\lambda\exp\left(-\lambda\tau\right)$. \begin{figure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{histo_nonpoiss_log} \caption{Taking all intervals into account, including those corresponding to oscillations around the threshold.} \label{fig:histo_nonpoiss} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{histo_poiss_log} \caption{Discarding small intervals ($\tau<\tau_c$) linked to oscillations around the threshold.} \label{fig:histo_poiss} \end{subfigure} \caption{\ac{pdf} of waiting times between two consecutive fluctuations of amplitude $a = 2.5$, estimated from a timeseries of length $T_d = 10^6$ of the Ornstein--Uhlenbeck process~\eqref{eq:OUprocess} with $\alpha=1$ and $\epsilon=1/2$ (blue triangles), and assuming the events follow a Poisson process with rate $1/r(a)$, $P(\tau) = e^{-\tau/r(a)}/r(a)$ (black solid line), where $r(a)$ is computed from the timeseries. The correlation time of the Ornstein--Uhlenbeck process is $\tau_c=1/\alpha=1$. 
\label{fig:test}} \end{figure} Figure~\ref{fig:histo_nonpoiss} shows the \acf{pdf} of the time interval between two occurrences of an event $\ensuremath{A}(t)>a$, drawn from a sample timeseries generated with an Ornstein--Uhlenbeck process. One can see that most of the contributions are indeed small intervals of the order of the correlation time. Discarding all the time intervals below the correlation time, one obtains the \ac{pdf} displayed in Fig.~\ref{fig:histo_poiss}, which coincides with the exponential distribution corresponding to a Poisson point process. When $a$ is large, $r(a)\gg\tau_{c}$ where $\tau_{c}$ is the correlation time of the process. Then the contribution of intervals $\tau_n$ of duration comparable to $\tau_{c}$ in the formula~\eqref{eq:Return_Times_Direct_Estimate} becomes asymptotically negligible compared to the contribution of the time intervals $\tau_n \gg \tau_c$. Graphically, this may be seen as the fact that the sum in~\eqref{eq:Return_Times_Direct_Estimate} is dominated by the contribution of very big triangles, while for small $a$ all the triangles have roughly the same area. Then, the return time $r(a)$ coincides with the average time between two statistically independent events exceeding the value $a$. In other words, rare fluctuations can be considered as independent from one another, their duration can be neglected compared to their return time, and the distribution of such events is well approximated by a Poisson process of rate $\lambda = 1/r(a)$. 
Neglecting the duration of the extreme events yields $\sum_{n=1}^{N_{d}}\tau_{n}\approx T_{d}$ and then one can check that \begin{equation} \frac{1}{T_{d}}\sum_{n=1}^{N_{d}}\frac{\tau_{n}^{2}}{2} \approx \frac{N_{d}}{\sum_{n=1}^{N_{d}}\tau_{n}}\frac{1}{N_{d}}\sum_{n=1}^{N_{d}}\frac{\tau_{n}^{2}}{2}\underset{N_{d}\rightarrow\infty}{\rightarrow}\frac{1}{2}\frac{\mathbb{E}\left\lbrack\tau^{2}\right\rbrack}{\mathbb{E}\left\lbrack\tau\right\rbrack}=\frac{1}{\lambda(a)}=r(a), \label{eq:rate_poisson} \end{equation} where the average in this computation is taken with respect to the Poisson process interval \ac{pdf} $P\left(\tau\right)$ given above. One may be tempted to use the estimator $\hat{r}_D'(a)=\frac{1}{N_{d}}\sum_{n=1}^{N_{d}}\tau_{n}$ instead of the estimator $\hat{r}_D$ defined by~\eqref{eq:Return_Times_Direct_Estimate}. For an actual Poisson process, both would give the same result. However, this estimator would be more sensitive to the effect of a finite correlation time, since the contributions from time intervals $\tau_n \approx \tau_c$ between successive events will only become negligible linearly in $\tau_c/r(a)$, as opposed to quadratically in formula~\eqref{eq:Return_Times_Direct_Estimate}. From now on, we shall assume that the statistics of rare events is Poissonian. This is a reasonable approximation for many dynamical systems, as long as there is a well-defined mixing time after which the initial conditions are forgotten. Of course, it would not hold for systems with long-term memory. In the next paragraph, we use this assumption to derive new expressions that allow accurate and efficient sampling of the return times. 
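The direct estimator $\hat{r}_D$ of equation~\eqref{eq:Return_Times_Direct_Estimate} amounts to summing the areas of the triangles of Fig.~\ref{fig:illustration_tau}. A minimal Python sketch could read (the function name and the toy data are ours, for illustration only):

```python
import numpy as np

def direct_estimator(A, dt, a):
    """Direct return-time estimator: r_D(a) = (1/T_d) * sum_n tau_n^2 / 2,
    where the tau_n are the durations of the maximal intervals with A(t) <= a."""
    below = (np.asarray(A) <= a).astype(np.int8)
    # transitions mark where runs of A <= a start and end
    edges = np.flatnonzero(np.diff(np.concatenate(([0], below, [0]))))
    taus = (edges[1::2] - edges[::2]) * dt
    T_d = len(A) * dt
    return np.sum(taus**2 / 2.0) / T_d

# small deterministic check: the intervals below a = 1 have durations 2, 3 and 1,
# so r_D = (4 + 9 + 1) / (2 * 8) = 0.875 for dt = 1
print(direct_estimator([0, 0, 2, 0, 0, 0, 2, 0], dt=1.0, a=1.0))
```

The run-length decomposition above is exactly the sum over triangles of Fig.~\ref{fig:illustration_tau}, up to the time discretisation of the series.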
\subsubsection{Sampling return times for rare events} \label{sec:return_time_rare_events} In this section we present an alternative way to compute return times, which provides an easier and more efficient way to draw return time plots for rare events than the direct estimator~\eqref{eq:Return_Times_Direct_Estimate}. Let us divide the timeseries $\left\{ \ensuremath{A}(t)\right\} _{0\leq t\leq T_d}$ into $M$ blocks of duration $\Delta T\gg\tau_{c}$, so that $T_d=M\Delta T$, and let us define the block maximum \begin{equation} a_{m}=\max\left\{ \ensuremath{A}(t)\left|(m-1)\Delta T\leq t\leq m\Delta T\right.\right\}, \end{equation} and $s_{m}(a)=1$ if $a_{m}>a$ and $0$ otherwise, for $1 \leq m \leq M$. For rare events, \textit{i.e.} $r(a) \gg \tau_c$, the number of events $N(t)=\sum_{m \leq \left\lceil t/\Delta T\right\rceil } s_m(a)$ is well approximated by a Poisson process with rate $\lambda(a)=1/r(a)$. Then, assuming $\tau_{c}\ll\Delta T\ll r(a)$, the probability $q_{m}(a)$ that $a_{m}$ is larger than $a$ is well approximated by $q_{m}(a)\simeq\Delta T/r(a)$. As $q_{m}(a)$ can be estimated by $\frac{1}{M}\sum_{m=1}^{M}s_{m}(a)$, an estimator of $r(a)$ is the \emph{block maximum estimator}: \begin{equation} \hat{r}_B(a) = \frac{T_{d}}{\sum_{m=1}^{M}s_{m}(a)}.\label{eq:Return_Times_Rare} \end{equation} This is the classical method for computing the return time of rare events, valid when $\Delta T \ll r(a)$. We now introduce a new, more precise estimator, also valid when $\Delta T/r(a)$ is of order one. It is obtained by using $q_{m}(a)=1-e^{-\Delta T/r(a)}$. 
Then, a better estimator of $r(a)$ is the \emph{modified block maximum estimator}: \begin{equation} \hat{r}_B'(a) = -\frac{\Delta T}{\ln\left(1-\frac{1}{M}\sum_{m=1}^{M}s_{m}(a)\right)}.\label{eq:Return_Times_Rare-1} \end{equation} To compute these estimators in practice, we sort the sequence $\left\{ a_{m}\right\} _{1\leq m\leq M}$ in decreasing order and denote the sorted sequence $\left\{ \tilde{a}_{m}\right\} _{1\leq m\leq M}$, such that $\tilde{a}_{1}\geq\tilde{a}_{2}\geq\ldots\geq\tilde{a}_{M}$. Based on~\eqref{eq:Return_Times_Rare}, we then associate to the threshold $\tilde{a}_{m}$ the return time $r(\tilde{a}_{m})=M\Delta T/m$. Indeed, $\sum_{\ell=1}^{M}s_\ell(\tilde{a}_m)=m$, which means that $m$ events with amplitude larger than $\tilde{a}_{m}$ have been observed over a duration $M\Delta T$. Alternatively, using the more precise estimator $\hat{r}_B'$~\eqref{eq:Return_Times_Rare-1}, we associate to the threshold $\tilde{a}_{m}$ the return time $r\left(\tilde{a}_{m}\right)=-\frac{\Delta T}{\ln\left(1-\frac{m}{M}\right)}$. The return time plot represents $\tilde{a}_{m}$ as a function of $r\left(\tilde{a}_{m}\right)$, as illustrated for instance on Fig.~\ref{fig:temps_retour_OU_direct}. Let us stress again that formulas~\eqref{eq:Return_Times_Rare} and~\eqref{eq:Return_Times_Rare-1}, and this method of plotting the return time, are meaningful only when computing block maxima, and for ranges of parameters such that $\tau_{c}\ll\Delta T\ll r(a)$ for~\eqref{eq:Return_Times_Rare} or $\tau_c \ll \Delta T$ for~\eqref{eq:Return_Times_Rare-1}. 
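Both estimators are straightforward to implement. The following Python sketch (function and variable names are ours) computes the return time curve with the modified block maximum estimator~\eqref{eq:Return_Times_Rare-1}, assigning to the $m$-th largest block maximum the return time $-\Delta T/\ln(1-m/M)$:

```python
import numpy as np

def block_maximum_return_times(A, dt, block_steps):
    """Return-time curve from block maxima, using the modified estimator
    r(a_m) = -Delta T / ln(1 - m/M) applied to the sorted maxima."""
    M = len(A) // block_steps
    DT = block_steps * dt
    maxima = np.asarray(A)[: M * block_steps].reshape(M, block_steps).max(axis=1)
    a_sorted = np.sort(maxima)[::-1]   # decreasing block maxima
    m = np.arange(1, M)                # m = M is dropped (it would give ln 0)
    return a_sorted[:-1], -DT / np.log(1.0 - m / M)

# illustration on a synthetic signal; in practice A would be e.g. an OU timeseries
rng = np.random.default_rng(1)
A_demo = rng.standard_normal(10_000)
a_levels, r_levels = block_maximum_return_times(A_demo, dt=0.01, block_steps=500)
```

Plotting `a_levels` against `r_levels` then gives the return time plot directly, in the spirit of Fig.~\ref{fig:temps_retour_OU_direct}.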
\begin{figure} \centering \includegraphics[width=0.75\linewidth]{rt_instant} \caption{Return time plots for the Ornstein--Uhlenbeck process~\eqref{eq:OUprocess} with $\epsilon = 1/2$, $\alpha = 1$, estimated from a timeseries of length $T_d=10^6$ using the direct estimator $\hat{r}_D$~\eqref{eq:Return_Times_Direct_Estimate} (pentagrams), the block maximum estimator $\hat{r}_B$~\eqref{eq:Return_Times_Rare} ($\Delta T = 100$, solid blue line), and the modified block maximum estimator $\hat{r}_B'$~\eqref{eq:Return_Times_Rare-1} ($\Delta T = 100$, solid red line and white triangles). These estimates are compared to the analytical solution~\eqref{eq:MeanFirstPassage_S} (dashed black line).} \label{fig:temps_retour_OU_direct} \end{figure} Figure~\ref{fig:temps_retour_OU_direct} illustrates the three methods for computing return times from a timeseries: the definition~\eqref{eq:Return_Times_Direct_Estimate} and the two formulas~\eqref{eq:Return_Times_Rare} and~\eqref{eq:Return_Times_Rare-1}. The sample timeseries used in this figure is extracted from an Ornstein--Uhlenbeck process, for which the return time curve can also be computed analytically. One can see that both formulas~\eqref{eq:Return_Times_Rare} and~\eqref{eq:Return_Times_Rare-1} lead to the same estimate for events with $r(a) \gg \Delta T$. However, formula~\eqref{eq:Return_Times_Rare} fails to yield a correct estimate as soon as $r(a) \simeq \Delta T$. For rare events, plotting return times using~\eqref{eq:Return_Times_Rare}, as is classically done, proves much more convenient and efficient than the naive sampling using~\eqref{eq:Return_Times_Direct_Estimate}. It is important to note, however, that the use of~\eqref{eq:Return_Times_Rare} is valid only after computing maxima over an interval of duration $\Delta T$ much larger than $\tau_{c}$, a point that has not been taken into account in many previous publications. 
Moreover, the generalisation~\eqref{eq:Return_Times_Rare-1} we propose in this paper is much more accurate for events with a return time of the order of $\Delta T$. This procedure to compute return time plots can also be generalised in combination with the use of rare event algorithms, as we shall see in the next section. \subsection{Computing return times from a rare event algorithm} \label{sec:return_time_algorithm} In section~\ref{sec:return_time_timeseries}, we defined the return time for a time-homogeneous stochastic process and explained how to efficiently compute it for rare events from a timeseries. However, a major difficulty remains: we still have to generate numerically the rare events in the timeseries, which comes at a large computational cost. In the present section, we explain how to apply the above method to the data produced by algorithms designed to sample rare events efficiently, instead of direct simulations. Rare event algorithms provide an effective ensemble of $M$ trajectories $\{X_{m}(t)\}_{0\leq t \leq T_a}$ ($1 \leq m \leq M$). Note that the length $T_a$ of the trajectories generated by the algorithm does not necessarily coincide with the length $T_d$ of the trajectory generated by direct sampling: in practice, as we shall see, $T_a \ll T_d$. For each of these trajectories, we compute the maximum of the observable over the time evolution $a_{m}=\max_{0\leq t\leq T_{a}}\left( A(X_{m}(t))\right)$. This is similar to the block maximum method described in section~\ref{sec:return_time_rare_events}, with each trajectory playing the role of a block. There is however a major difference: unlike in the block maximum method, the different trajectories sampled by the rare event algorithm do not have identical statistical weights. To each trajectory $X_m(t)$, and thus to each maximum $a_m$, is associated a probability $p_{m}$ computed by the algorithm. 
Hence, rather than just a sequence $\left\{ a_{m}\right\} _{1\leq m\leq M}$, rare event algorithms yield a sequence $\left\{ a_{m},p_{m}\right\} _{1\leq m\leq M}$. The generalisation of the block maximum formula~\eqref{eq:Return_Times_Rare-1} to non-equiprobable blocks is straightforward and leads to the estimator \begin{equation} \hat{r}_A(a)=-\frac{T_{a}}{\ln\left(1-\sum_{m=1}^{M}p_{m}s_{m}(a)\right)}.\label{eq:Return_Time_Large_Deviation_Algorithm-1} \end{equation} Of course, we could construct similarly an estimator generalising~\eqref{eq:Return_Times_Rare}, but as we have seen in the previous section, the estimator~\eqref{eq:Return_Times_Rare-1} yields better performance. In practice, to plot the return time curve, we sort the sequence $\left\{ a_{m},p_{m}\right\} _{1\leq m\leq M}$ in decreasing order with respect to the $a_m$, and denote the sorted sequence $\left\{ \tilde{a}_{m},\tilde{p}_{m}\right\} _{1\leq m\leq M}$ such that $\tilde{a}_{1}\geq\tilde{a}_{2}\geq\ldots\geq\tilde{a}_{M}$. We then associate to the threshold $\tilde{a}_{m}$ the return time \begin{equation} \hat{r}_A(\tilde{a}_{m})=-\frac{T_{a}}{\ln\left(1-\sum_{\ell=1}^{m}\tilde{p}_{\ell}\right)}.\label{eq:Return_Time_Large_Deviation_Algorithm} \end{equation} Indeed, the sum of the weights of the events with amplitude larger than $\tilde{a}_m$ is $\sum_{\ell=1}^{m}\tilde{p}_\ell$. Again, the return time plot represents $a$ as a function of $r\left(a\right)$. We stress that the method described here does not depend on the observable of interest, or on the details of the algorithm itself. In the remainder of the paper, we provide a \emph{proof-of-concept} for this method, by considering two kinds of observables, sampled by two different algorithms: first, we study the return times for instantaneous observables using the \acf{ams} algorithm (section~\ref{sec:AMS}), then we turn to time-averaged observables using both the \ac{ams} and the \acf{gktl} algorithm (section~\ref{sec:gktl}). 
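The construction above is easily implemented. The sketch below (names are ours) turns a sequence $\{(a_m,p_m)\}$ produced by a rare event algorithm into a return time curve following~\eqref{eq:Return_Time_Large_Deviation_Algorithm}; the illustrative weights are arbitrary, as if only the four largest maxima of a bigger ensemble were kept:

```python
import numpy as np

def return_times_from_weighted_maxima(a, p, T_a):
    """Return-time curve r(a_m) = -T_a / ln(1 - sum_{l<=m} p_l) from the
    output {(a_m, p_m)} of a rare event algorithm, sorted by decreasing a_m."""
    a, p = np.asarray(a, dtype=float), np.asarray(p, dtype=float)
    order = np.argsort(a)[::-1]               # decreasing maxima
    csum = np.cumsum(p[order])                # cumulative weight above each level
    return a[order], -T_a / np.log1p(-csum)   # log1p(-x) = ln(1 - x)

# illustrative weights for the four largest maxima of a larger ensemble
a_levels, r_levels = return_times_from_weighted_maxima(
    a=[1.2, 3.4, 0.7, 2.1], p=[0.2, 0.2, 0.2, 0.2], T_a=100.0)
```

With equiprobable weights $p_m=1/M$ this reduces exactly to the modified block maximum estimator of the previous section, each trajectory playing the role of a block.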
We show that the method allows one to compute return times accurately, at a much smaller computational cost than direct simulation. In both cases, we apply the technique to the simple case of an Ornstein--Uhlenbeck process, for which the results are easily compared with direct simulation and theoretical predictions, before illustrating the potential of the method for applications in complex systems (section~\ref{sec:applications}). \section{Return times sampled with the Adaptive Multilevel Splitting algorithm} \label{sec:AMS} In this section, we present the computation of return times by applying the method presented in section~\ref{sec:return_time_algorithm} to a rare event algorithm known as \acf{ams}. This algorithm follows the strategy of \emph{splitting methods} for the estimation of rare event probabilities, which dates back to the 1950s~\cite{KahnHarris1951}. Many variants have been proposed since then. The \ac{ams} algorithm can be interpreted as simulating a system $\{x_i(t)\}$ of interacting replicas (instead of independent replicas as in a crude Monte Carlo simulation), with some \emph{selection and mutation} mechanism. We describe this mechanism in section~\ref{sec:amsalgo_rettimes} as a method to sample trajectory space. This contains all the necessary details for practical use of the algorithm. Then, in section~\ref{sec:amsalgo}, we connect the procedure to the general framework of the \ac{ams} algorithm, which allows us to directly benefit from the available mathematical results. In section~\ref{sec:committor}, we explain what the optimal choice of score function is for our problem and we analyse its behaviour. In section~\ref{sec:ams_returntimes}, we show how the algorithm allows us to estimate return times, under the Poisson statistics assumption made above. Finally, we illustrate the method in section~\ref{sec:return_time_AMS} by computing the return times for an Ornstein--Uhlenbeck process. 
\subsection{The \acf{tams} algorithm} \label{sec:amsalgo_rettimes} The classical \ac{ams} algorithm evolves an ensemble of trajectories through selection-mutation rules, in order to compute rare event probabilities, and more generally committor functions. Return times cannot be estimated directly from a committor function and require the estimation of trajectory statistics. The method we propose to compute return times involves the estimation of probabilities of trajectories with a fixed duration $T_a$. In order to deal with this, we propose a specific modification of the classical \ac{ams} algorithm, called Trajectory Adaptive Multilevel Splitting. While the classical \ac{ams} algorithm requires only the specification of a real-valued \emph{score function} $\xi$ -- also called a \emph{reaction coordinate} in many works, due to connections with molecular dynamics simulations, see~\cite{Cerou2011}, and also~\cite[Section~4.3]{Brehier2016a} -- the \ac{tams} algorithm requires in general a time-dependent score function, see Section~\ref{sec:committor} for the optimal choice. We consider a continuous-time Markov model able to generate trajectories. It can be either a stochastic process, for instance a diffusion, or a chaotic deterministic dynamical system. Let us now describe the algorithmic procedure. We start by simulating $\ensuremath{N}$ independent trajectories, denoted $\{x_n^{(0)}(t)\}_{1\leq n \leq \ensuremath{N}}$, for a fixed duration $T_a$. To each of these trajectories, we associate a weight $w_0=1$. Then, at iteration $j\geq 1$, we evaluate the performance of all replicas $\{x_n^{(j-1)}(t)\}_{1 \leq n \leq \ensuremath{N}}$ at iteration $j-1$, measured by the maximum of the score function $\xi$ over the whole trajectory: \begin{equation} \mathcal{Q}_n^{(j)} = \sup_{0 \leq t \leq T_a} \xi(t,x_n^{(j-1)}(t)). 
\end{equation} We select the trajectories corresponding to the lowest value of $\mathcal{Q}_n^{(j)}$: let us denote $\mathcal{Q}_j^\star= \min_{1\leq n \leq \ensuremath{N}} \mathcal{Q}_n^{(j)}$ and $n_{j,1}^\star,\ldots,n_{j,\ell_j}^\star$ the indices such that: \begin{equation} \mathcal{Q}_{n_{j,1}^\star}^{(j)} = \cdots = \mathcal{Q}_{n_{j,\ell_j}^\star}^{(j)} = \mathcal{Q}_j^\star. \end{equation} One might expect intuitively that $\ell_j=1$. This is not necessarily the case, as explained in~\cite{Brehier2016a}: because of the discretization of the dynamical equations in the numerical model, two or more trajectories may yield the same level $\mathcal{Q}_n^{(j)}$. We then proceed to the mutation step. For each trajectory $x_{n_{j,\ell}^\star}^{(j-1)}$ ($1 \leq \ell \leq \ell_j$), we choose a trajectory $x_{n_\ell}^{(j-1)}$ ($n_{\ell} \neq n_{j,1}^\star,\ldots,n_{j,\ell_j}^\star$) randomly among the $\ensuremath{N}-\ell_j$ remaining trajectories, and define $t_{j,\ell}$ as the smallest time $t$ such that $\xi(t,x_{n_\ell}^{(j-1)}(t))>\mathcal{Q}_j^\star$. Finally, we define the new replica $x_{n_{j,\ell}^\star}^{(j)}$ by copying the trajectory $x_{n_\ell}^{(j-1)}$ from $t_0$ to $t_{j,\ell}$, and simulating the rest of the trajectory, from $t_{j,\ell}$ to $T_a$. For a Markov process, for instance a diffusion, a new realisation of the noise is used in order to simulate the new trajectory from $t_{j,\ell}$ to $T_a$. For a chaotic deterministic system, a small amplitude noise is added to the initial condition at time $t_{j,\ell}$. The other trajectories are not modified: $x_n^{(j)}=x_n^{(j-1)}$ for $n \neq n_{j,1}^\star,\ldots,n_{j,\ell_j}^\star$. The selection-mutation process is illustrated on Fig.~\ref{fig:AMS_schema}. 
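The selection-mutation loop described above can be condensed into a short Python sketch. This is a minimal illustration for the \ac{ou} process with the instantaneous observable as score function, $\xi(t,x)=x$; all names and numerical parameter values are ours, and possible ties ($\ell_j>1$) are handled as in the text:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, eps, dt = 1.0, 0.5, 0.01
steps, N, J = 1000, 20, 50   # trajectory length, ensemble size, iterations

def simulate(x0, nsteps):
    """Euler--Maruyama for the OU process dX = -alpha X dt + sqrt(2 eps) dW."""
    x = np.empty(nsteps + 1)
    x[0] = x0
    for i in range(nsteps):
        x[i + 1] = x[i] - alpha * x[i] * dt + np.sqrt(2.0 * eps * dt) * rng.standard_normal()
    return x

# initial ensemble of N independent trajectories of duration T_a = steps * dt
traj = [simulate(0.0, steps) for _ in range(N)]
weight, levels = 1.0, []
for j in range(J):
    Q = np.array([t.max() for t in traj])   # Q_n = sup_t xi(t, x_n(t))
    Qstar = Q.min()
    levels.append(Qstar)
    killed = np.flatnonzero(Q == Qstar)     # the l_j >= 1 trajectories at the minimum level
    survivors = np.setdiff1d(np.arange(N), killed)
    for n in killed:
        m = rng.choice(survivors)           # clone a randomly chosen survivor ...
        t_b = int(np.argmax(traj[m] > Qstar))  # ... up to its first crossing of Q*
        new = traj[m].copy()
        new[t_b:] = simulate(traj[m][t_b], steps - t_b)
        traj[n] = new
    weight *= 1.0 - len(killed) / N         # running product for w_j = prod_i (1 - l_i / N)
```

After the loop, `levels` records the (non-decreasing) sequence of killed levels $\mathcal{Q}_j^\star$ and `weight` the final ensemble weight $w_J$.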
\begin{figure}[!h] \centering \includegraphics[scale=0.50]{figure_AMS} \caption{Illustration of one selection-mutation step in the \ac{ams} algorithm for the computation of the probability that an observable $\ensuremath{A}:\mathbb{R}^d\to \mathbb{R}$ reaches values larger than $Q$ over a trajectory of duration $T_a$.} \label{fig:AMS_schema} \end{figure} With the trajectories $x_n^{(j)}$ forming the ensemble at step $j$, we associate the weight $w_j$ given by~\cite{Cerou2007,Cerou2011,Brehier2016a}: \begin{equation} w_j = \prod_{i=1}^j \left( 1 - \frac{\ell_i}{\ensuremath{N}}\right)=\left( 1 - \frac{\ell_j}{\ensuremath{N}}\right)w_{j-1}. \end{equation} Note that we could mutate more replicas at each step, by selecting an arbitrary number of levels $\mathcal{Q}_n^{(j)}$ instead of just the minimum $\mathcal{Q}_j^\star$ as described above. This particular case is sometimes referred to as the \emph{last particle method}~\cite{Simonnet2016}. The selection-mutation process is iterated $\ensuremath{J}$ times (two possible definitions of $\ensuremath{J}$ are given below). The number of resampled trajectories is given by $\tilde{\ensuremath{J}} = \sum_{j=1}^\ensuremath{J} \ell_j$; note that $\tilde{\ensuremath{J}} \geq \ensuremath{J}$, and the two need not coincide. In the end, the algorithm generates $M=\ensuremath{N}+\tilde{\ensuremath{J}}$ trajectories, given explicitly by the set $\{x_n^{(0)}\}_{1 \leq n \leq \ensuremath{N}} \cup \{ x_{n_{j,\ell}^\star}^{(j)}\}_{1 \leq \ell \leq \ell_j, 1 \leq j \leq \ensuremath{J}}$, or equivalently, the set $\{x_n^{(\ensuremath{J})}\}_{1 \leq n \leq \ensuremath{N}} \cup \{ x_{n_{j,\ell}^\star}^{(j-1)}\}_{1 \leq \ell \leq \ell_j, 1 \leq j \leq \ensuremath{J}}$.
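The weight bookkeeping can be sketched as follows; the numbers of killed replicas $\ell_j$ are made up for the example.

```python
import numpy as np

def tams_weights(ells, N):
    """Weights w_0, ..., w_J from the numbers ell_j of replicas killed at each
    iteration: w_j = (1 - ell_j / N) * w_{j-1}, with w_0 = 1."""
    w = np.ones(len(ells) + 1)
    for j, ell in enumerate(ells, start=1):
        w[j] = w[j - 1] * (1.0 - ell / N)
    return w

# Toy example: N = 10 replicas, three iterations killing 1, 2 and 1 replicas.
N, ells = 10, [1, 2, 1]
w = tams_weights(ells, N)
# Each killed trajectory keeps the weight of the iteration at which it was
# removed; the N final trajectories keep w_J.  Normalising gives the p_m.
weights = [w[j - 1] for j, ell in enumerate(ells, start=1) for _ in range(ell)] + [w[-1]] * N
p = np.array(weights) / np.sum(weights)
```

One can check from the update rule that, within a single realisation, the unnormalised weights always sum to $\ensuremath{N}$: the recursion gives $\ell_j w_{j-1} = \ensuremath{N}(w_{j-1}-w_j)$, so the sum telescopes.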
Each trajectory has an associated weight, determined by the last iteration at which it was a member of the ensemble: $w_J$ for the final trajectories $\{x_n^{(\ensuremath{J})}\}_{1 \leq n \leq \ensuremath{N}}$, and $w_{j-1}$ for the trajectories $\{ x_{n_{j,\ell}^\star}^{(j-1)}\}_{1 \leq \ell \leq \ell_j, 1 \leq j \leq \ensuremath{J}}$ mutated at iteration $1 \leq j \leq \ensuremath{J}$. Let us relabel these trajectories and their associated weights as $\{ (x_m,w_m)\}_{1 \leq m \leq M}$. Normalising the weights by $W=\sum_{m=1}^M w_m$, we obtain the probabilities $p_m=w_m/W$ associated with the trajectories. Note that instead of just one realisation of the algorithm, one may carry out $\ensuremath{K}$ independent realisations, thus yielding $M=\sum_{k=1}^\ensuremath{K} (\ensuremath{N}_k+\tilde{\ensuremath{J}}_k)$ trajectories with the associated weights, where $\ensuremath{N}_k$ and $\tilde{\ensuremath{J}}_k$ denote the number of initial trajectories and resampled trajectories for realisation $k$, respectively. The probabilities for the trajectories are computed as above. For any observable $O[x(t)]$, we can define an estimator based on our sampling of trajectory space: \begin{equation} \hat{O}_M = \sum_{m=1}^M p_m O[x_m(t)].\label{eq:amsestimator} \end{equation} For practical applications, we shall be interested in two particular cases: \begin{itemize} \item Instantaneous observable: $O[X,t]=\ensuremath{A}(X(t))$, for some time-independent observable $\ensuremath{A}: \mathbb{R}^d \to \mathbb{R}$. \item Time-averaged observable: $O[X,t]=\frac{1}{\tau}\int_t^{t+\tau}\ensuremath{A}(X(s))ds$, for some time-independent observable $\ensuremath{A}: \mathbb{R}^d \to \mathbb{R}$ and prescribed width $\tau$ of the averaging window. Note that in this case the time-dependent observable $O$ is defined on a different interval than the original process $X$, here $[0,T_a-\tau]$.
\end{itemize} The number of iterations $\ensuremath{J}$ can either be a prescribed integer (in that case the stopping criterion for the algorithm is simply $j=\ensuremath{J}$), or a random number such that all the trajectories in the ensemble reach a threshold level $\mathcal{Q}$ (the stopping criterion is then $\mathcal{Q}_n^{(j)}>\mathcal{Q}$ for all $1 \leq n \leq \ensuremath{N}$). The latter case is more common in existing \ac{ams} implementations; however, both cases are covered by the general framework developed in~\cite{Brehier2016a}, and give consistent results. We further discuss these two possible choices in section~\ref{sec:ams_returntimes}. Let us now estimate the computational cost of an \ac{ams} run. The number of trajectories generated by an \ac{ams} run is $M=\ensuremath{N}+\tilde{\ensuremath{J}}$, as pointed out above. Each resampled trajectory is not simulated over the whole duration $T_a$, but over a duration $T<T_a$, with $T$ a random number depending on the branching point. We thus define $\gamma \in [0,1]$ so that $\mathbb{E}[T]=\gamma T_a$ is the average duration of the resampled part of a mutated trajectory. Performing $\ensuremath{K}$ independent realisations of the \ac{ams} algorithm, the average computational cost associated with a given experiment is then approximately \begin{equation} \label{eq:computationalcost_ams} \mathcal{C} = \ensuremath{K} \times (\ensuremath{N} + \gamma \tilde{\ensuremath{J}})T_a. \end{equation} \subsection{Connection with the Adaptive Multilevel Splitting (AMS) algorithm for time-dependent observables} \label{sec:amsalgo} In this section, we describe the connection between the \acf{tams} algorithm and the classical \ac{ams} algorithm. The aim is to deduce the mathematical properties of the \ac{tams} algorithm from the known properties of the \ac{ams} algorithm. For instance, we will conclude that the optimal score function is the committor function (\ref{eq:time_dependent_committor}).
This section can be skipped by readers interested only in the algorithm itself, and not in its mathematical underpinnings. The \acf{ams} algorithm was originally designed~\cite{Cerou2007} to efficiently and accurately estimate probabilities of rare events of the type $\mathbb{P}_{x_0,t_0}(\tau_{\set{B}}<\tau_{\set{A}})\in(0,1)$: the probability that a Markov process $\bigl(X_t\bigr)_{t\ge t_0}$, initialised with $X_{t_0}=x_0$, hits a set $\set{B}$ before hitting a set $\set{A}$ (with $\set{A}\cap \set{B}=\emptyset$), where $\tau_{\set{C}}=\inf\left\{t> t_0; X_t\in \set{C}\right\}$ is the hitting time of a set $\set{C}$. In this section, we show how the problem of estimating the maximum value of a time-dependent observable over a trajectory (which will later be used to estimate return times) falls within the scope of the \ac{ams} algorithm. This allows us to benefit directly from the theoretical properties of the \ac{ams} algorithm. Some recent mathematical results about the algorithm are reviewed in appendix~\ref{sec:AMSproperties}. This review is not exhaustive; see for instance~\cite{Brehier2016a} and references therein. We consider an $\mathbb{R}^d$-valued Markov process $\bigl(X_t\bigr)_{t\in[0,T_a]}$, with continuous trajectories, for some fixed final time $T_a$, and a time-dependent observable $O[X,t]$: a (time-dependent) functional of the process $X$, taking values in $\mathbb{R}$. It may be defined only for times belonging to a subset of $[0,T_a]$, but for simplicity we shall still denote by $T_a$ the final time. The aim is to estimate the probability that the observable reaches a threshold $a$ at some point of the trajectory, i.e. \begin{equation} q(a) = \mathbb{P}_{x_0,0} \left \lbrack \underset{0\le t\le T_a}\max O[X,t] > a \right\rbrack; \end{equation} (the notation $\mathbb{P}_{x_0,t_0}$ means the probability over realisations of the Markov process with initial condition $X_{t_0}=x_0$).
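For thresholds that are not too rare, $q(a)$ can be estimated by brute-force Monte Carlo, which provides a useful baseline before turning to the \ac{ams} estimator. A minimal sketch for the Ornstein--Uhlenbeck case (all parameter values here are illustrative):

```python
import numpy as np

def q_direct(a, n_traj, n_steps, dt, rng, alpha=1.0, eps=0.5):
    """Direct Monte Carlo estimate of q(a) = P_{x0,0}[ max_{0<=t<=Ta} X_t > a ]
    for the Ornstein-Uhlenbeck process dX = -alpha X dt + sqrt(2 eps) dW, with
    X_0 = 0.  Feasible only when the event is not too rare (q(a) >> 1/n_traj)."""
    x = np.zeros(n_traj)
    xmax = np.zeros(n_traj)
    for _ in range(n_steps):
        # Euler-Maruyama step for all trajectories at once
        x += -alpha * x * dt + np.sqrt(2 * eps * dt) * rng.standard_normal(n_traj)
        np.maximum(xmax, x, out=xmax)           # running maximum of the observable
    return float(np.mean(xmax > a))

rng = np.random.default_rng(1)
q_hat = q_direct(a=1.0, n_traj=400, n_steps=500, dt=0.01, rng=rng)   # Ta = 5 tau_c
```

For rare events ($q(a)$ of order $10^{-\beta}$ with large $\beta$), this estimator requires far more than $10^{\beta}$ trajectories, which is precisely what the \ac{ams} algorithm avoids.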
The \ac{ams} algorithm provides an estimator $\hat{q}(a)$ for this quantity. Indeed, the event $\left\{\max_{0\le t\le T_a} O[X,t] > a\right\}$ can be identified with the event $\left\{\tau_\set{B}<\tau_\set{A}\right\}$ for an auxiliary Markov process $Y_t$, with an appropriate definition of the sets $\set{A}$ and $\set{B}$, as follows: \begin{equation} Y_t=(t,O[X,t])\in[0,T_a]\times \mathbb{R},\qquad \set{A}=\left\{(T_a,z);~z\le a\right\},\qquad \set{B}=\left\{(t,z);~t\in[0,T_a], z>a\right\}. \end{equation} Note that $Y$ is not necessarily a time-homogeneous process. In section~\ref{sec:amsalgo_rettimes}, we described the \ac{tams} algorithm, which samples the process $Y$ so as to provide a good estimate of $q(a)$, based on a score function $\xi$ measuring the distance between $\set{A}$ and $\set{B}$ (in many implementations of the \ac{ams} algorithm, $\xi(\partial \set{A})=0$ and $\xi(\partial\set{B})=1$). We describe the corresponding estimator $\hat{q}(a)$, and the related estimator for return times, in section~\ref{sec:ams_returntimes}. \subsection{The optimal score function} \label{sec:committor} As explained in appendix~\ref{sec:AMSproperties}, the statistical properties, and in particular the variance, of the \ac{ams} estimator $\hat{q}(a)$ depend on the choice of the score function $\xi$. The variance is minimal for a particular choice of the score function, sometimes referred to as the \emph{committor}. In a very generic manner, for the \ac{ams} algorithm, it is given by $\bar{\xi}=\mathbb{P}[\tau_\set{B}<\tau_\set{A}]$.
In the specific case of the \ac{tams} algorithm, the optimal score function takes the form: \begin{equation} \bar{\xi}(t,x;T_a,a)=\mathbb{P}_{x,t}\left\lbrack\underset{t\le s\le T_a}\max O[X,s] > a\right\rbrack, \label{eq:time_dependent_committor} \end{equation} for all $(t,x)\in[0,T_a]\times \mathbb{R}^d$, where we denote by $\mathbb{P}_{x,t}$ the probability over the process initialised at position $x$ at time $t$, and the threshold $a$ and trajectory duration $T_a$ are fixed parameters. Note that the optimal score function depends both on time and space. Of course, we cannot use this score function in practice, because it is exactly the quantity we are trying to compute: as mentioned above, the algorithm ultimately provides an estimate of the probability $q(a) =\bar{\xi}(0,x_0;T_a,a)$. A crucial point when implementing the \ac{ams} algorithm is therefore to choose a score function that provides a good approximation of the committor. We now explain the qualitative properties of the time-dependent committor~\eqref{eq:time_dependent_committor}. For simplicity, we shall only discuss the case of an instantaneous observable: $O[X,t]=\ensuremath{A}(X_t)$. Moreover, to make the discussion precise, we assume that the stochastic process $X$ solves the stochastic differential equation $dX_t = b(X_t)dt+\sqrt{2\epsilon}dW_t$, where $b$ is a vector field with a single fixed point $x_\star$. We further assume that the basin of attraction of $x_\star$ is the full phase space. Under this hypothesis, the invariant measure of the diffusion is concentrated close to the attractor $x_\star$ when $\epsilon \ll 1$. Let us assume that the set $\mathcal{C} = \{ x \mid \ensuremath{A}(x) \leq 0\}$ is a neighbourhood of $x_\star$ on which most of the mass of the invariant measure is concentrated. We call $\mathcal{C}$ the attractor. The target set $\mathcal{D} = \{ x \mid \ensuremath{A}(x) \geq a \}$ is defined similarly.
The hitting times for the sets $\mathcal{C}$ and $\mathcal{D}$ are the random variables given by $\tau_\star = \inf \{ t>0 \mid \ensuremath{A}(X_t) \leq 0 \}$ and $\tau_a = \inf \{ t>0 \mid \ensuremath{A}(X_t) \geq a \}$, respectively, where the process is started from a point $x$ at time $t=0$, such that $0\leq A(x)\leq a$. We finally define the static committor $\xi_0(x,a) \equiv \mathbb{P}_{x,0}[\tau_a < \tau_\star]$. The aim of the following discussion is to explain the relation between the time-dependent committor~\eqref{eq:time_dependent_committor} and the static committor $\xi_0(x,a)$. On the one hand, the time-dependent committor $\bar{\xi}$ satisfies a backward Fokker-Planck equation \begin{equation} \frac{\partial \bar{\xi}}{\partial t} = - L[\bar{\xi}], \quad \text{ with } L=b_i\frac{\partial}{\partial x_i}+\epsilon \frac{\partial^2}{\partial x_i^2},\label{eq:committor-fp} \end{equation} in the domain $\ensuremath{A}^{-1}([0,a]) \subset \mathbb{R}^d$ with boundary condition $\bar{\xi}(t,x;T_a,a)=1$ for $x \in \partial \mathcal{D}$, and final condition $\bar{\xi}(T_a,x;T_a,a)=0$. This follows directly from the backward Fokker-Planck equation for the transition probability $P(y,s|x,t)$, and the fact that, with an absorbing boundary condition on $\partial \mathcal{D}$, $\bar{\xi}(t,x;T_a,a)=1-\int dy P(y,T_a|x,t)$. Note that when $T_a-t \gg r(a)$, $\bar{\xi}(t,x;T_a,a)\approx 1$ everywhere ($\bar{\xi}$ converges to $1$). On the other hand, $\xi_0(x,a)$ satisfies $L[\xi_0]=0$, but with different boundary conditions: $\xi_0(x,a)=1$ if $x \in \partial \mathcal{D}$ and $\xi_0(x,a)=0$ if $x\in \partial \mathcal{C}$. 
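The backward equation can be integrated numerically with a simple explicit finite-difference scheme. The following sketch does so for the one-dimensional Ornstein--Uhlenbeck drift $b(x)=-\alpha x$; the grid, the domain truncation and the reflecting lower boundary are illustrative choices made here, not taken from the text.

```python
import numpy as np

# Solve d(xi)/dt = -L[xi], L = b(x) d/dx + eps d2/dx2, backwards in time, with
# xi = 1 on the target boundary x = a and final condition xi(Ta, x) = 0.
alpha, eps, a, Ta = 1.0, 0.5, 2.0, 2.0
x = np.linspace(-2.0, a, 201)                   # truncated domain (illustrative)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / eps                          # explicit scheme: dt < dx^2 / (2 eps)
n_steps = int(Ta / dt)
b = -alpha * x                                  # OU drift

xi = np.zeros_like(x)                           # final condition at t = Ta
for _ in range(n_steps):                        # march backwards in time
    dxi = np.gradient(xi, dx)
    d2xi = np.zeros_like(xi)
    d2xi[1:-1] = (xi[2:] - 2 * xi[1:-1] + xi[:-2]) / dx**2
    xi = xi + dt * (b * dxi + eps * d2xi)
    xi[-1] = 1.0                                # absorbing target boundary: xi(a) = 1
    xi[0] = xi[1]                               # reflecting lower boundary (approximation)
xi0 = xi                                        # xi(t = 0, x; Ta, a)
```

Under the stability constraint on $dt$, the scheme is monotone, so the computed committor stays in $[0,1]$ and increases from the attractor towards the target boundary.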
In the next paragraph, we argue that when $T_a-t$ is much smaller than $r(a)$, the time-dependent committor $\bar{\xi}(t,x;T_a,a)$ given by~\eqref{eq:time_dependent_committor} is well approximated by the static committor $\xi_0(x,a)$, except in two boundary layers: a spatial one of size $\epsilon$ for $x$ close to the attractor, and a temporal one of size $\tau_c$ for $t$ close to $T_a$. Using the notation of section~\ref{sec:amsalgo}, the event $\{ \tau_\set{B} < \tau_\set{A}\}$ can be decomposed into the disjoint union of the events for which the observable reaches the threshold $a$ before or after hitting $0$. The typical time for $X$ to reach $\mathcal{C}$ is the correlation time $\tau_c$. If we assume that $T_a-t \gg \tau_c$, we have the approximation $\bar{\xi}(t,x;T_a,a) \simeq \xi_0(x,a) + \lbrack 1-\xi_0(x,a) \rbrack \bar{\xi}(t,x_\star;T_a,a)$ (we have used here the approximations $\bar{\xi}(\tau_\star,y;T_a,a) \simeq \bar{\xi}(\tau_\star,x_\star;T_a,a)$ for any $y\in \partial \mathcal{C}$, and $\bar{\xi}(\tau_\star,x_\star;T_a,a) \simeq \bar{\xi}(t,x_\star;T_a,a)$). Moreover, when $T_a-t \ll r(a)$, the Poisson approximation $\bar{\xi}(t,x_\star;T_a,a)\simeq (T_a-t)/r(a)$ holds. To sum up, in the limit $\tau_c \ll T_a-t \ll r(a)$, \begin{equation} \bar{\xi}(t,x;T_a,a) \simeq \xi_0(x,a) + \frac{T_a-t}{r(a)}\lbrack 1-\xi_0(x,a) \rbrack. \end{equation} Let us now introduce the quasipotential $V$. We note that $\xi_0(x,a) \underset{\epsilon \rightarrow 0}{\asymp} \exp(-(\inf_{y \in A^{-1}(\{a\})} V(y)-V(x))/\epsilon)$, while $r(a) \underset{\epsilon \rightarrow 0}{\asymp} \exp((\inf_{y \in A^{-1}(\{a\})} V(y))/\epsilon)$. We can thus conclude that the term $\xi_0(x,a)$ dominates this expression for all $x$ except in a region of size $\epsilon$ around the attractor $x_\star$.
\begin{figure}[ht] \centering \includegraphics[width=0.5\linewidth]{committor-ou} \caption{\label{fig:committor} Contour lines of the time-dependent committor $\bar{\xi}(t,x;T_a,a)$ for the Ornstein--Uhlenbeck process (with $\alpha=1,\epsilon=1/2$; in particular $\tau_c=1$), obtained by solving numerically the backward Fokker-Planck equation~\eqref{eq:committor-fp}, with $a=4,T_a=5$.} \end{figure} In conclusion, when $T_a-t$ is much smaller than $r(a)$, the time-dependent committor $\bar{\xi}(t,x;T_a,a)$~\eqref{eq:time_dependent_committor} is well approximated by the static committor $\xi_0(x,a)$, except in two boundary layers: a spatial one of size $\epsilon$ for $x$ close to the attractor, and a temporal one of size $\tau_c$ for $t$ close to $T_a$. This is illustrated in Fig.~\ref{fig:committor}, which shows the contour lines of $\bar{\xi}(t,x;T_a,a)$ for the Ornstein--Uhlenbeck process, obtained by solving numerically the backward Fokker-Planck equation~\eqref{eq:committor-fp}. \subsection{Computing return times and the associated error bars}\label{sec:ams_returntimes} As explained in section~\ref{sec:amsalgo_rettimes}, the algorithm generates an ensemble of $M$ trajectories $x_m(t)$ with associated probability $p_m$. It follows directly from~\eqref{eq:amsestimator} that an estimator of $q(a)$ is: \begin{equation} \hat{q}_M(a) = \sum_{m=1}^M p_m s_m(a),\label{eq:ams_estimator_q} \end{equation} where $a_m=\max_{0\leq t \leq T_a} \ensuremath{A}(x_m(t))$ is the maximum value for the observable over the trajectory $m$, $p_m$ the associated probability (see~\ref{sec:amsalgo_rettimes}), and $s_m(a)=1$ if $a_m>a$, $0$ otherwise (\ref{sec:return_time_rare_events}).
As explained in section~\ref{sec:return_time_rare_events}, the return time is related to $q(a)$ under the hypothesis that the events are Poissonian, and we obtain the estimator for the return time $\hat{r}_M(a)=-\frac{T_a}{\ln(1-\hat{q}_M(a))}$ given by~\eqref{eq:Return_Time_Large_Deviation_Algorithm-1} (alternatively, we could use $\hat{r}_M(a)=\frac{T_a}{\hat{q}_M(a)}$). In essence, to draw return time plots, it suffices to sort the set $\left\{ (a_{m},p_{m})\right\} _{1\leq m\leq M}$ according to the $a_m$ and use~\eqref{eq:Return_Time_Large_Deviation_Algorithm}, as described in section~\ref{sec:return_time_algorithm}. Note that in practice, with the particular choice of score function $\xi(t,x)=\ensuremath{A}(x)$, storing the levels $\mathcal{Q}_n^{(j)}$ of the killed trajectories directly provides the corresponding values $a_m$. By definition, the estimators $\hat{q}_M(a)$ and $\hat{r}_M(a)$ are random variables. In appendix~\ref{sec:AMSproperties}, we describe their statistical properties, and how to interpret them in terms of consistency and efficiency of the \ac{ams} algorithm. In particular, we show that $\hat{q}_M(a)$ is an unbiased estimator of $q(a)$, study its variance, and show the existence of a Central Limit Theorem. In section~\ref{sec:amsalgo_rettimes}, we proposed two choices for the number of iterations in the algorithm. First, we described the algorithm with a fixed number of iterations $\ensuremath{J}$. Alternatively, as is often done in the \ac{ams} literature, one may iterate the algorithm until all trajectories reach the set $\set{B}$; $\ensuremath{J}$ is then a random number. In that case, the threshold $a$ which defines the set $\set{B}$ becomes the control parameter for the stopping criterion.
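The sorting recipe for drawing return time plots translates directly into code; a sketch, where the toy values of $(a_m,p_m)$ are made up:

```python
import numpy as np

def return_time_curve(a_m, p_m, Ta):
    """Return-time curve from the weighted ensemble {(a_m, p_m)}: sort by
    decreasing maximum a_m, accumulate the probabilities to get q_hat(a), then
    apply the Poisson-statistics relation r(a) = -Ta / ln(1 - q(a))."""
    order = np.argsort(a_m)[::-1]                   # decreasing a_m
    a_sorted = np.asarray(a_m)[order]
    q_hat = np.cumsum(np.asarray(p_m)[order])       # q_hat(a) = sum of p_m with a_m > a
    q_hat = np.minimum(q_hat, 1.0 - 1e-12)          # guard against q_hat == 1
    r_hat = -Ta / np.log1p(-q_hat)                  # log1p(-q) = ln(1 - q)
    return a_sorted, r_hat

a_m, p_m = [1.0, 2.0, 3.0], [0.7, 0.2, 0.1]         # toy weighted ensemble
a_sorted, r_hat = return_time_curve(a_m, p_m, Ta=5.0)
```

The largest amplitudes carry the smallest accumulated probability and hence the largest return times, which is how rare events populate the right-hand side of a return time plot.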
Under those circumstances, the estimator $\hat{q}_M$ can be expressed as \begin{equation} \hat{q}_M(a) = \prod_{j=1}^\ensuremath{J} \left( 1-\frac{\ell_j}{\ensuremath{N}}\right).\label{eq:ams_estimator_q_rand} \end{equation} This formula remains valid in the case where the number of iterations $\ensuremath{J}$ is prescribed: it suffices to define the set $\set{B}$ \emph{a posteriori}, by choosing $a=\min_{1 \leq n \leq \ensuremath{N}} a_n^{(J)}$, the minimum value of the $a_m$ among the final trajectories. The formula could also be used to compute $\hat{q}_M(b)$ with $b<a$, simply by changing the number of iterations required to meet the stopping criterion. In practice, the easiest approach is to use the expression given in~\eqref{eq:ams_estimator_q}. In the above, we have defined the \ac{ams} estimators $\hat{q}_M$ and $\hat{r}_M$ based only on the total number $M$ of trajectories generated by the algorithm. In fact, the $\ensuremath{N}$ initial trajectories and the $\tilde{\ensuremath{J}}$ resampled trajectories (generated during the $\ensuremath{J}$ iterations) play qualitatively different roles. In practice, the user does not choose the parameter $M$ directly, but rather the number of ensemble members $\ensuremath{N}$ on the one hand, and either the threshold $a$ or the number of iterations $\ensuremath{J}$ on the other hand. As explained in appendix~\ref{sec:AMSproperties}, the number of initial trajectories $\ensuremath{N}$ governs the convergence of the estimators. Another practical constraint on the choice of $\ensuremath{N}$ is the problem of \emph{extinction}: for some systems, if $\ensuremath{N}$ is too small, all the members of the ensemble become identical after a number of iterations. The other parameter (the threshold $a$ or the number of iterations $\ensuremath{J}$) selects the type of events we are interested in.
Indeed, from~\eqref{eq:ams_estimator_q_rand}, we obtain an approximate relation between the number of resampled trajectories $\tilde{\ensuremath{J}}$ and the target return times: writing $\ln \hat{q}_M(a) = \sum_{j=1}^{\ensuremath{J}}\ln \left(1-\frac{\ell_j}{\ensuremath{N}}\right)$, for large $\ensuremath{N}$ this leads to $\ln \hat{q}_M(a) \approx - \sum_{j=1}^{\ensuremath{J}} \ell_{j}/\ensuremath{N} = - \tilde{\ensuremath{J}}/\ensuremath{N}$. When targeting rare events with probability $10^{-\beta}$, i.e. return times of order $10^{\beta}T_a$, $\tilde{\ensuremath{J}}$ is then $\mathcal{O}(\ensuremath{N} \beta)$. This indicates how to choose the number of iterations $\ensuremath{J}$ in practice: for rare events, we should typically be in the regime $\ensuremath{J}\approx\ensuremath{N}\beta$. To sum up, to compute return time plots $r(a)$, one may either fix the target amplitude $a$, and run the algorithm for a random number of iterations, until the observable reaches $a$ for all the trajectories (i.e. until all the trajectories reach the set $\set{B}$), or fix the target return time $r(a)$, and iterate the algorithm a fixed number of times by choosing $\ensuremath{J}=\ensuremath{N} \ln(r(a)/T_a)$. In the former case, the prescribed amplitude $a$ need not correspond to the largest event for which we can estimate the return time, but this will approximately be the case as soon as $\ensuremath{N} \ll \ensuremath{J}$, i.e. if $a$ is large enough for fixed $\ensuremath{N}$. Similarly, in the latter case, the largest return time computed by the algorithm will approximately be equal to the prescribed target return time when $\ensuremath{N} \ll \ensuremath{J}$.\\ \textbf{Error bars.} Using $\ensuremath{K}$ independent realisations of the algorithm, one may also compute error bars through a sample standard deviation.
In order to compute the error bar associated with the threshold $a$, one can compute, for each realisation $k$ of the algorithm, the estimated probability $\hat{q}_\ensuremath{N}^{(k)}(a)$ that the observable exceeds the threshold $a$ (using the set of trajectories in this realisation of the algorithm). One can then compute the sample average and the sample standard deviation for $q(a)$: $\bar{q}(a)=\frac{1}{\ensuremath{K}}\sum_{k=1}^\ensuremath{K} \hat{q}_\ensuremath{N}^{(k)}(a)$ and $\sigma_q(a)=\sqrt{\frac{1}{\ensuremath{K}-1}\sum_{k=1}^\ensuremath{K} \left(\hat{q}_\ensuremath{N}^{(k)}(a)-\bar{q}(a)\right)^2}$. This gives us the relative error for $q(a)$: $err(a)=\sigma_q(a)/\bar{q}(a)$. Now, using the fact that for rare events $r(a)$ is proportional to $1/q(a)$ (see section~\ref{sec:return_time_rare_events}), the relative errors for $q(a)$ and $r(a)$ are equal. Hence we define the error bars for $r(a)$ as $\sigma_r(a)=err(a)r(a)=\sigma_q(a)r(a)/\bar{q}(a)$. Note that this method computes the probability to exceed a threshold $a$ by averaging the sampled values of $q(a)$ over trajectories or over algorithm realisations. This gives an unbiased estimator of $q(a)$, as explained in appendix~\ref{sec:AMSproperties}; the standard deviation of this estimator is of order $1/\sqrt{\ensuremath{K} \ensuremath{N}}$. When computing $r(a)$ through the nonlinear relation $\hat{r}(a)=-T_a/\ln\left(1-\hat{q}(a)\right)$, we thus obtain an estimator of $r(a)$ with a bias of order $1/(\ensuremath{K} \ensuremath{N})$ and a standard deviation of order $1/\sqrt{\ensuremath{K} \ensuremath{N}}$. If, instead, we had averaged return times over algorithm realisations, the estimator for each realisation would have been biased, with a bias of order $1/\ensuremath{N}$ (see appendix~\ref{sec:AMSproperties}), and the final estimator after $\ensuremath{K}$ realisations would still carry a bias of order $1/\ensuremath{N}$.
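The error-bar computation can be sketched as follows; the sample of estimated probabilities is made up for the illustration.

```python
import numpy as np

def return_time_error_bars(q_hats, Ta):
    """Average the K estimated probabilities q_hat^(k)(a), not the return times
    (which would give a 1/N-biased estimator), then propagate the relative
    error to r(a) using the fact that the relative errors of q and r are equal."""
    q_hats = np.asarray(q_hats)                  # shape (K,): one estimate per run
    q_bar = q_hats.mean()
    sigma_q = q_hats.std(ddof=1)                 # sample standard deviation
    r_bar = -Ta / np.log1p(-q_bar)               # r(a) = -Ta / ln(1 - q(a))
    sigma_r = (sigma_q / q_bar) * r_bar          # sigma_r = err(a) * r(a)
    return r_bar, sigma_r

# Toy sample of K = 3 realisations of q_hat(a)
r_bar, sigma_r = return_time_error_bars([1.0e-3, 1.2e-3, 0.8e-3], Ta=5.0)
```

Averaging the probabilities rather than the return times keeps the final bias of order $1/(\ensuremath{K}\ensuremath{N})$ instead of $1/\ensuremath{N}$, as discussed above.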
\subsection{Return times for the Ornstein--Uhlenbeck process from the \acl{tams} algorithm} \label{sec:return_time_AMS} We now illustrate the above method by computing return times for the Ornstein--Uhlenbeck process, with the instantaneous observable, using the \ac{ams} algorithm. For this simple problem, $d=1$, $\ensuremath{A}=\text{Id}$, and the mutation step simply amounts to simulating the stochastic process with a different initial condition. Three parameters remain: the length of the generated trajectories $T_a$, the threshold value $a$ and the number of replicas $\ensuremath{N}$. In practice, the latter is usually chosen independently of the other two. Indeed, $\ensuremath{N}$ determines the order of magnitude of the statistical error on the estimate of the probability $q(a)$. In addition, $\ensuremath{N}$ is often constrained by the computational resources available: for instance, for parallel simulations, one may want to use as many replicas as the number of cores on the machine. One then has to choose both $T_a$ and $a$. On the one hand, we require $T_a \gg \tau_c$ in order to apply the method described in section~\ref{sec:return_time_rare_events}. On the other hand, the computational cost increases with $T_a$. Even though one could expect a lower number of iterations with a larger $T_a$, when dealing with rare fluctuations this gain is negligible compared to the cost of simulating long trajectories. In practice, we obtained satisfactory results using trajectories of length $T_a$ equal to a few correlation times. \begin{figure} \centering \includegraphics[width=.75\linewidth]{return_time_plot_AMS} \caption{Return time plot for a random variable following an Ornstein--Uhlenbeck process~\eqref{eq:OUprocess} with $\alpha = 1$ and $\epsilon = 1/2$ ($\sigma=1/\sqrt{2}$ is the standard deviation). The solid red line represents the estimate obtained using the \ac{ams} algorithm with $\ensuremath{N}=100$ replicas, $T_a=5\tau_c$ and $a=5$.
The present curve results from the empirical average of $100$ independent runs, so that the total computational cost is $\mathcal{O}(10^6 \tau_c)$. It is compared to the modified block maximum estimator $\hat{r}_B'$ applied to a sample timeseries of length $T_d = 10^6\tau_c$ (blue stars) and to the analytical result~\eqref{eq:MeanFirstPassage_S}.} \label{fig:return_time_AMS} \end{figure} Figure~\ref{fig:return_time_AMS} shows the return time plot computed using $\ensuremath{N}=100$ replicas, $T_a = 5 \tau_c$ and $a = 5$, using the method described in section~\ref{sec:amsalgo}. For comparison, figure~\ref{fig:return_time_AMS} also features the theoretical value, estimated by computing the mean first-passage time (see appendix~\ref{sec:MeanFirstPassageOU}), and the estimate obtained from direct sampling with the same computational cost as the \ac{ams} run. For events with $r(a) \simeq \tau_c$, the error in the estimation of return times stems from the fact that such events do not form a Poisson process, contrary to the assumption made to derive formula~\eqref{eq:Return_Times_Rare-1}. For events with return time $r(a) > 10^{12}$, the statistical error could not be computed for lack of data. Between these two regimes, return times are very well recovered by the \ac{ams} algorithm. Furthermore, figure~\ref{fig:return_time_AMS} clearly illustrates the computational gain provided by the \ac{ams} algorithm: for the same computational cost as direct sampling, the use of the \ac{ams} algorithm in conjunction with estimate~\eqref{eq:Return_Time_Large_Deviation_Algorithm} gives access to return times for much rarer events. We can now accurately compute return times on the order of $10^{13}$, about seven orders of magnitude larger than with direct sampling. Finally, we checked that the relative error on the estimate of the return times scales like $\ln(r(a))/\ensuremath{N}$, as predicted in appendix~\ref{sec:AMSproperties}.
\section{Return times sampled with the Giardina-Kurchan-Tailleur-Lecomte algorithm} \label{sec:gktl} In this section, we illustrate the computation of return times using the method described in section~\ref{sec:return_time_algorithm} for a \textit{time-averaged} observable. While this could be done using the \ac{tams} algorithm presented in section~\ref{sec:amsalgo}, we instead illustrate the use of a different rare-event algorithm, specifically designed to compute large deviations of time-averaged dynamical observables: the \acf{gktl} algorithm~\cite{Giardina2006,Tailleur2007,Giardina2011}. \subsection{The algorithm} The underlying idea of the \acf{gktl} algorithm is to perform a biased sampling of trajectory space. It relies on the simulation of a population of trajectories which, unlike in direct Monte Carlo methods, interact dynamically: at regular time intervals, some members of the ensemble are killed and some are cloned, according to a weight which depends on the history of the replica. The weights are chosen such that, after several iterations of the algorithm, the generated trajectories are distributed according to a probability distribution that is tilted so as to favour trajectories with large values of a chosen time-averaged observable. This sort of algorithm was first proposed in~\cite{Giardina2006} and has been used to study rare events in both stochastic~\cite{Giardina2006,Lecomte2007b,Garrahan2007,Hurtado2009b} and deterministic systems~\cite{Giardina2006,Tailleur2007}. More precisely, we perform simulations of an ensemble of $\ensuremath{N}$ trajectories $\left\{ X_{n}(t)\right\}$ (with $n=1,2,...,\ensuremath{N}$) starting from random initial conditions. As in section~\ref{sec:AMS}, the total integration time of the trajectories is denoted $T_{a}$. We consider an observable of interest $\ensuremath{A}(X(t))$ and a resampling time $\tau$.
At times $t_{i}=i\tau$ (with $i=1,2,...,T_{a}/\tau$), we assign to each trajectory $n$ a weight $W_{n}^{i}$ defined as \begin{equation} W_{n}^{i}=\frac{e^{k\intop_{t_{i-1}}^{t_{i}}\ensuremath{A}(X_{n}(t))dt}}{R_{i}}\,\,\,\mbox{with}\,\,\,R_{i}=\frac{1}{\ensuremath{N}}\sum_{n=1}^{\ensuremath{N}}e^{k\int_{t_{i-1}}^{t_{i}}\ensuremath{A}(X_{n}(t))dt}.\label{eq:Weight} \end{equation} For each trajectory $X_{n}$, a random number of copies of the trajectory are generated, on average proportional to the weight $W_{n}^{i}$, and such that the total number of trajectories produced at each event is equal to $\ensuremath{N}$. The parameter $k$ is chosen by the user in order to control the strength of the selection, and thus to target a class of extreme events of interest. The larger the value of $k$, the more trajectories with large values of the time-averaged observable will survive the selection. As mentioned above, the \ac{gktl} algorithm performs importance sampling in the space of trajectories, which is relevant for out-of-equilibrium systems. Let us denote formally by $\mathbb{P}_{0}\left(\left\{ X(t)\right\} _{0\leq t\leq T_{a}} = \left\{ x(t)\right\} _{0\leq t\leq T_{a}}\right)$ the probability to observe a trajectory $\left\{ x(t)\right\} _{0\leq t\leq T_{a}}$ in the model, and by $\mathbb{P}_{k}\left(\left\{ X(t)\right\} _{0\leq t\leq T_{a}} = \left\{ x(t)\right\} _{0\leq t\leq T_{a}} \right)$ the probability to observe the same trajectory with the algorithm.
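One selection step can be sketched as follows. The scheme used here to draw integer numbers of copies, and to keep the ensemble size exactly $\ensuremath{N}$, is one standard choice among several, and not necessarily the one used in the references.

```python
import numpy as np

def gktl_resample(increments, k, rng):
    """One GKTL selection step (sketch).  `increments` holds the time integrals
    int_{t_{i-1}}^{t_i} A(X_n(t)) dt for the N trajectories over the last
    resampling interval.  Returns the number of copies of each trajectory and
    the normalisation factor R_i."""
    N = len(increments)
    raw = np.exp(k * np.asarray(increments))
    R_i = raw.mean()                              # R_i = (1/N) sum_n exp(k * integral)
    W = raw / R_i                                 # weights W_n^i, with mean 1
    # Copy numbers with mean W_n^i: floor(W + U) with U uniform in [0, 1).
    copies = np.floor(W + rng.uniform(size=N)).astype(int)
    # Adjust at random so that exactly N trajectories survive the step.
    while copies.sum() > N:
        copies[rng.choice(np.flatnonzero(copies > 0))] -= 1
    while copies.sum() < N:
        copies[rng.integers(N)] += 1
    return copies, R_i

rng = np.random.default_rng(2)
copies, R_2 = gktl_resample(np.array([0.1, 0.5, -0.2, 0.3]), k=2.0, rng=rng)
```

The factors $R_i$ must be stored at each step, since their product estimates $Z(k,T_a)$ and is needed to invert the tilt when computing statistics with respect to $\mathbb{P}_0$.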
By construction of the algorithm through the weights~\eqref{eq:Weight}, we have \begin{align} \mathbb{P}_{k}\left(\left\{ X(t)\right\} _{0\leq t\leq T_{a}}=\left\{ x(t)\right\} _{0\leq t\leq T_{a}}\right) &\underset{\ensuremath{N}\rightarrow\infty}{\sim} \frac{e^{k\int_{0}^{T_{a}}\ensuremath{A}(x(t))dt}}{Z(k,T_a)}\mathbb{\mathbb{P}}_{0}\left(\left\{ X(t)\right\} _{0\leq t\leq T_{a}}=\left\{ x(t)\right\} _{0\leq t\leq T_{a}}\right),\label{eq:Biased_Path_Approximation} \end{align} where the normalisation factor is given by $Z(k,T_a)=\mathbb{E}_{0}\left[e^{k\int_{0}^{T_{a}}\ensuremath{A}(X(t))dt}\right]$, denoting by $\mathbb{E}_{0}$ the expectation value with respect to $\mathbb{P}_{0}$, and $\underset{\ensuremath{N}\rightarrow\infty}{\sim}$ means that the relation holds only asymptotically for large $\ensuremath{N}$. The typical error is of order $1/\sqrt{\ensuremath{N}}$ when evaluating averages of observables. Equation~\eqref{eq:Biased_Path_Approximation} is obtained by assuming the mean field approximation \begin{equation} R_{1}=\frac{1}{\ensuremath{N}}\sum_{n=1}^{\ensuremath{N}}e^{k\int_{0}^{t_{_{1}}}\ensuremath{A}(X_{n}(t))dt}\underset{\ensuremath{N}\rightarrow\infty}{\sim} Z(k,t_1)= \mathbb{E}_{0}\left[e^{k\int_{0}^{t_{1}}\ensuremath{A}(X(t))dt}\right],\label{eq:Mean_Field_Approximation} \end{equation} which, by induction, using a formula similar to~\eqref{eq:Mean_Field_Approximation} at each step, leads to~\cite{Giardina2006,Giardina2011}: \begin{equation} \prod_{i=1}^{T_{a}/\tau}R_{i}\underset{\ensuremath{N}\rightarrow\infty}{\sim} Z(k,T_a) =\mathbb{E}_{0}\left[e^{k\int_{0}^{T_a}\ensuremath{A}(X(t))dt}\right].\label{eq:Estimate_Lambda} \end{equation} The validity of the mean field approximation, and the fact that the typical relative error it induces is of order $1/\sqrt{\ensuremath{N}}$, have been proven~\cite{DelMoralBook,DelMoral2013} for a family of rare event algorithms including the one adopted in this paper.
Formula~\eqref{eq:Biased_Path_Approximation} is valid only for times $T_{a}$ that are integer multiples of the resampling time $\tau$. The killed trajectories have to be discarded from the statistics. Starting from the final $\ensuremath{N}$ trajectories at time $T_{a}$, one goes backwards in time through the selection events, attaching each piece of trajectory to its ancestor. In this way one obtains an effective ensemble of $\ensuremath{N}$ trajectories from time $0$ to time $T_{a}$, distributed according to $\mathbb{P}_{k}$. All trajectories reconstructed in this way are genuine solutions of the model: we have not modified the dynamics, but only sampled trajectories according to the distribution $\mathbb{P}_{k}$ rather than according to the distribution $\mathbb{P}_{0}$. The \ac{gktl} algorithm was initially designed to compute large deviation rate functions~\cite{Giardina2006}. Indeed, using $\lambda(k,T_a)=\frac 1 {T_a} \ln Z(k,T_a)$, the \emph{scaled cumulant generating function}~\cite{Touchette2009} $\lambda(k)=\lim_{T_a \to +\infty} \lambda(k,T_a)$ can easily be estimated from the algorithm. From there, the large deviation rate function $I(a)$, such that $\mathbb{P}_0\left\lbrack\int_0^{T_a} \ensuremath{A}(X(t))dt = T_a a\right\rbrack \asymp e^{-T_a I(a)}$, is recovered by the Legendre--Fenchel transform $I(a) = \sup_k (ka-\lambda(k))$~\cite{Touchette2009}. In fact, the algorithm can be used to compute, from the distribution $\mathbb{P}_{k}$, the statistical properties of any observable with respect to the distribution $\mathbb{P}_{0}$. This is done using the backward reconstructed trajectories and inverting formula~\eqref{eq:Biased_Path_Approximation}.
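The last step of this chain, the Legendre--Fenchel transform, is straightforward to evaluate numerically once $\lambda(k)$ has been estimated on a grid of $k$ values. The sketch below is our own minimal illustration; the quadratic $\lambda(k)=k^2/2$ used to exercise it is just the Gaussian toy case, not an output of the algorithm.

```python
import numpy as np

def legendre_fenchel(ks, lam, a_grid):
    """Numerical Legendre-Fenchel transform I(a) = sup_k (k*a - lambda(k)),
    with the supremum restricted to the sampled grid of k values."""
    # For each a in a_grid, maximise k*a - lambda(k) over the k grid.
    return np.max(a_grid[:, None] * ks[None, :] - lam[None, :], axis=1)
```

For $\lambda(k)=k^2/2$ this returns $I(a)=a^2/2$ (up to the grid resolution), as expected for Gaussian fluctuations.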
If, for example, one wants to estimate the expectation value of an observable $O\left(\left\{ X(t)\right\} _{0\leq t\leq T_{a}}\right)$, an estimator is given by \begin{equation} \mathbb{E}_{0}\left[O\left(\left\{ X(t)\right\} _{0\leq t\leq T_{a}}\right)\right]\underset{\ensuremath{N}\rightarrow\infty}{\sim}\frac{1}{\ensuremath{N}}\sum_{n=1}^{\ensuremath{N}}O\left(\left\{ X_{n}(t)\right\} _{0\leq t\leq T_{a}}\right)\mbox{e}^{-k\int_{0}^{T_{a}}\ensuremath{A}(X_{n}(t))dt}\mbox{e}^{T_{a}\lambda(k,T_{a})},\label{eq:GK_O_estimator} \end{equation} where the $X_{n}$ are the $\ensuremath{N}$ backward reconstructed trajectories. Empirical estimators of the form~\eqref{eq:GK_O_estimator} for quantities related to events that are rare under $\mathbb{P}_{0}$ (thus using data distributed according to $\mathbb{P}_{k}$) have a dramatically lower statistical error, due to the larger number of relevant rare events present in the effective ensemble. In particular, one can use the reconstructed trajectories to compute return times using the method described in section~\ref{sec:return_time_algorithm}. \subsection{Return times for the time-averaged Ornstein--Uhlenbeck process from the \ac{gktl} algorithm} It follows from~\eqref{eq:GK_O_estimator} that an estimator of $q(a)$ can be computed from the $N$ generated trajectories as: \begin{equation} \hat{q}_{N}(a) = \frac{1}{N}\sum_{n=1}^{N}s_{n}(a)\mbox{e}^{-k\int_{0}^{T_{a}}\ensuremath{A}(X_{n}(t))dt}\mbox{e}^{T_{a}\hat{\lambda}(k,T_{a})},\label{eq:GK_q_estimator} \end{equation} where $\hat{\lambda}(k,T_a) = \frac{1}{T_a}\ln \prod_{i=1}^{T_{a}/\tau}R_{i}$ according to~\eqref{eq:Estimate_Lambda}. In the following we consider the time-averaged position \begin{equation} \label{eq:time_averaged} \overline{X}_{T}(t) = \frac{1}{T}\int_{t}^{t+T}\, X(s) ds, \end{equation} where the position $X$ follows an Ornstein--Uhlenbeck process~\eqref{eq:OUprocess}. We denote by $\sigma^{2}_{T}$ the variance of $\overline{X}_{T}$.
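The reweighting in~\eqref{eq:GK_O_estimator} amounts to a per-trajectory weight $e^{-k\int_0^{T_a}A\,dt}\,e^{T_a\hat\lambda}$. A minimal sketch (function name and interface are ours; it assumes the observable values and the integrals of $A$ have already been computed along the backward-reconstructed trajectories):

```python
import numpy as np

def weighted_expectation(O_values, A_integrals, k, lam_hat, Ta):
    """Estimator of E_0[O] from N trajectories distributed under P_k:
    each trajectory n contributes O_n * exp(-k * int_0^Ta A) * exp(Ta * lam_hat),
    where lam_hat is the estimate of lambda(k, Ta) from the product of R_i."""
    w = np.exp(-k * np.asarray(A_integrals, dtype=float) + Ta * lam_hat)
    return np.mean(np.asarray(O_values, dtype=float) * w)
```

As a sanity check, for $k=0$ (no bias, $\hat\lambda=0$) the estimator reduces to the plain empirical mean.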
Similarly to the case of the \ac{ams} algorithm (see section~\ref{sec:return_time_AMS}), the application of the \ac{gktl} algorithm depends on three parameters: the number of trajectories $\ensuremath{N}$, the length of the trajectories $T_a$ and the strength of the selection $k$. As before, the number of trajectories $\ensuremath{N}$ is often fixed \emph{a priori}. The choice of $T_a$ is guided by the reasoning described in section~\ref{sec:return_time_AMS}, and we thus set the trajectory length to a few times the correlation time of $\overline{X}_{T}$. As for the strength of the selection $k$, its relation to the amplitude of the generated fluctuations is not known beforehand, and one has to set its value empirically~\footnote{When the duration of the average is long enough for a \emph{large deviation regime} to be attained, the relation between the value of $k$ and the typical amplitude of the fluctuations generated by the algorithm is known from the G\"artner--Ellis theorem. See Ref.~\cite{Touchette2009} for further details.}. \begin{figure} \centering \includegraphics[width=.7\linewidth]{return_time_AMS_GKTL_AVG} \caption{Return time plot for the time-averaged Ornstein--Uhlenbeck process $\overline{X}_T$~\eqref{eq:time_averaged} with $\alpha = 1$ and $\epsilon = 1/2$ ($\sigma=1/\sqrt{2}$ is the standard deviation), estimated from the \ac{gktl} algorithm (solid red line) and the \ac{ams} algorithm (solid blue line). The \ac{gktl} algorithm was used with $\ensuremath{N}=500$ replicas, $T_a=20\tau_c$ and $k=0.9$. The \ac{ams} algorithm was used with $\ensuremath{N} = 100$ replicas, $T_a = 50$ and $a = 2.0$. Finally, the dashed black line represents the result of a direct sampling over a timeseries of length $T_d = 10^9$. Parameters of both the \ac{gktl} and \ac{ams} algorithms were chosen so that the related computational cost is $\mathcal{O}(10^6 \tau_c)$.
The cost of the direct sampling is $10^9 \tau_c$.} \label{fig:return_time_GKTL} \end{figure} In Fig.~\ref{fig:return_time_GKTL}, we show the return times $r(a)$ for $\overline{X}_{T}$, with $T=10\tau_c$, computed from the \ac{gktl} algorithm using the method described in section~\ref{sec:return_time_algorithm}. In order to validate the computation, the estimate obtained from the algorithm is compared to the direct sampling method~\eqref{eq:Return_Times_Rare-1}. For rare events ($r(a) \gg \tau_c$), the results from the \ac{gktl} algorithm agree quite well with direct sampling. Again, the comparison of the computational costs of the two methods shows the efficiency of the algorithm. For direct sampling, the length of the sample trajectory, $10^9 \tau_c$ in the case of Fig.~\ref{fig:return_time_GKTL}, naturally sets an upper bound on the return times one is able to compute. By contrast, the total cost of the \ac{gktl} estimate is $10^6 \tau_c$, and one can see in Fig.~\ref{fig:return_time_GKTL} that it allows one to reach return times larger by many orders of magnitude. \begin{figure} \centering \includegraphics[width=.7\linewidth]{pdf_GKTL} \caption{\ac{pdf} of the time-averaged observable $\bar{X}_T$, with $T = 10\tau_c$, for the Ornstein--Uhlenbeck process with $\alpha=1$ and $\epsilon=1/2$ ($\sigma=1/\sqrt{2}$ is the standard deviation): computed from a direct simulation of length $T_d = 10^6$ (black curve), and based on the trajectories generated by the \ac{gktl} algorithm with $500$ replicas, $T_a=20\tau_c$ and $k=0.9$ (blue curve).} \label{fig:pdf_GKTL} \end{figure} Figure~\ref{fig:pdf_GKTL} shows an estimate of the \ac{pdf} of $\bar{X}_T$ along the trajectories generated using the \ac{gktl} algorithm.
Even though importance sampling is performed for the observable $\bar{X}_{T_a}$, i.e.~the observable averaged over the whole trajectory of length $T_a$, the algorithm also samples the tail of the \ac{pdf} of $\bar{X}_T$ much better than direct simulation, resulting in a better estimate of the corresponding return times. Lastly, Fig.~\ref{fig:return_time_GKTL} also shows the return times estimated for $\bar{X}_T$ using the \ac{ams} algorithm, for a comparable computational cost. The agreement between the two estimates illustrates that the method proposed in section~\ref{sec:return_time_algorithm} may be applied with any rare event algorithm suitable for the type of observable under study. Here, while the \ac{ams} algorithm allows for computing return times for both instantaneous and time-averaged observables, the \ac{gktl} algorithm is not suited for instantaneous observables. \section{Application: Extreme drag force on an object immersed in a turbulent flow} \label{sec:applications} A key issue with rare event algorithms is to understand whether they are actually useful for computing rare events and their probabilities in actual complex dynamical systems. The \ac{ams} algorithm has been shown to be very efficient for partial differential equations with noise~\cite{Rolland2016}. In this section, we give a brief illustration showing that more complex dynamics can be studied. We illustrate the computation of return times using rare event algorithms for a turbulent flow. The possible limitations of rare event algorithms are further discussed in the conclusion. Unlike simple low-dimensional models, such as the Ornstein--Uhlenbeck process, numerical simulations of turbulent flows of interest to physicists and engineers require tremendous computational efforts. As a consequence, direct sampling of rare events based on a long time series is simply unthinkable for such systems.
A common practice in the engineering community is to generate synthetic turbulent flows, without explicitly resolving the small scales, in order to study numerically the physical phenomena of interest~\cite{Spalart2000,Moin2002}. However, the main difficulty is to capture synthetically the correct long-range (spatio-temporal) correlations of turbulence, and such approaches cannot capture the essential effects of coherent structures. We show here that rare event methods such as the \ac{gktl} and \ac{ams} algorithms can be used to study extremes in turbulent flows without having to rely on such modelling. The example we consider is the sampling of extreme fluctuations of the mechanical stresses exerted by a turbulent flow on an immersed object. Being able to compute flow trajectories associated with such extremes is of great interest both for fundamental issues and for applied problems, such as reliability assessment for industrial structures. More specifically, we focus here on the averaged drag $F_{T}(t) = \frac{1}{T}\int_{t}^{t+T}f_d(t')\mathrm{d}t'$, where $f_d$ is the total force exerted by the flow on the object, projected along the flow direction. The length of the averaging window depends on the nature of the application. For instance, it could be related to the typical response time of a material, in order to average out high frequency excitation that has a minor impact on the deformation of the structure. Note that the choice of the observable is arbitrary, and one could choose to study other related physical quantities, such as the lift or the torque. In order to provide a \textit{proof-of-concept} for such rare event approaches for turbulent flows, we compute the return time for extreme values of the drag in a simple academic flow. The setup we consider, illustrated in Fig.~\ref{fig:snapshots}, is that of a two-dimensional channel flow, with a square obstacle immersed in the middle of the domain. Turbulence is generated upstream by means of a grid.
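For a drag time series sampled at a uniform time step, the averaged drag $F_T$ reduces to a sliding average; the sketch below is our own illustrative discretisation (function name and interface are assumptions), with the integral replaced by a uniform-kernel convolution.

```python
import numpy as np

def averaged_drag(f_d, T, dt):
    """Sliding average F_T(t) = (1/T) * int_{t}^{t+T} f_d(t') dt', for a drag
    time series f_d sampled every dt.  T is assumed to be a multiple of dt."""
    w = int(round(T / dt))              # number of samples in the window
    kernel = np.ones(w) / w             # uniform averaging kernel
    return np.convolve(f_d, kernel, mode="valid")
```

A constant drag signal is left unchanged by the averaging, which provides a quick sanity check.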
\begin{figure}[h] \centering \includegraphics[width=.8\textwidth]{vorticity} \caption{Snapshot of a typical vorticity field of the flow under study. A steady parabolic velocity profile is imposed at the inlet. Turbulence is then generated by a grid. We used the \ac{gktl} algorithm to compute the return times of the drag averaged over the square marked here by the grey area.}\label{fig:snapshots} \end{figure} This flow is simple enough that long time series can be obtained in a reasonable amount of computational time, allowing for the computation of reference return times. In practice, we carry out a direct numerical simulation using the Lattice Boltzmann Method~\cite{Chen1998c}, which offers low implementation effort with performance comparable to other methods for such simple geometries and boundary conditions. The application of the \ac{gktl} and \ac{ams} algorithms to deterministic dynamics requires that some randomness be artificially introduced into the dynamics, so that copies originating from the same parent follow different paths. This can be achieved by randomly perturbing the restart state at branching points. \begin{figure}[thb] \centering \includegraphics[width=.6\textwidth]{return_time_prod} \caption{Illustration of the computation of return times for the averaged drag over the square obstacle pictured in Fig.~\ref{fig:snapshots}. The averaging window is $5$ correlation times. The dashed black line represents the reference return times computed from a timeseries spanning $10^6$ correlation times, using~\eqref{eq:Return_Times_Rare-1}. The solid blue line represents the return times obtained using the \ac{gktl} algorithm.} \label{fig:return_time_drag} \end{figure} Figure~\ref{fig:return_time_drag} illustrates the computation of the return times for the drag averaged over $5$ correlation times using the \ac{gktl} algorithm.
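The random perturbation of the restart state at branching points can be sketched as follows. This is a hedged illustration, not the scheme used in the simulations of this paper: isotropic Gaussian noise on the state vector is one possible choice, and the amplitude `eps` must be chosen small enough not to alter the statistics of the resolved flow.

```python
import numpy as np

def perturbed_clone(state, eps, rng):
    """Clone the state of a deterministic system at a branching point,
    adding a small random perturbation so that parent and offspring
    trajectories eventually separate.  `eps` sets the (assumed isotropic
    Gaussian) perturbation amplitude."""
    state = np.asarray(state, dtype=float)
    return state + eps * rng.standard_normal(state.shape)
```

Since copies only diverge at the rate set by the flow's Lyapunov exponents, some delay between branching and actual separation is unavoidable, consistent with the plateau effect discussed below.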
It shows that the use of the algorithm makes the computation of rare events accessible at a much lower computational cost than direct sampling. More precisely, the algorithm was applied using $\ensuremath{N}=128$ replicas simulated over $10$ correlation times. To build the return time curve presented in Fig.~\ref{fig:return_time_drag}, we use $\ensuremath{K}=10$ repetitions of the algorithm, leading to an overall computational cost of about $10^4$ correlation times. With a direct sampling of similar computational cost, the rarest accessible event has a return time close to the computational cost itself, that is $10^4$ correlation times. The reference curve was computed from a time series spanning $10^6$ correlation times. For events with a return time close to $5\cdot 10^5$ correlation times, the computational cost of estimating the return times using the \ac{gktl} algorithm is $50$ times lower than that of direct sampling. The occurrence of plateaus in Fig.~\ref{fig:return_time_drag} is due to the increasing multiplicity of trajectories as the amplitude $a$ increases. Indeed, because of the selection procedure involved in the \ac{gktl} algorithm, a subset of trajectories can share the same ancestor. Hence, they are likely to differ only over a small time interval at the end of their duration. In such cases, it is common that the maximum over the trajectory is attained at earlier times. As a consequence, this subset of trajectories contributes the same value to the set of maxima from which return times are computed. This effect is accentuated in the present case of a deterministic system, as it takes some time for copies to separate after being perturbed at a branching point. A straightforward way of mitigating the occurrence of such plateaus is to increase the number of trajectories and/or the number of repetitions of the algorithm.
As an illustration, Fig.~\ref{fig:return_time_drag_50reps} shows the return time plot obtained using $50$ repetitions instead of the $10$ used in Fig.~\ref{fig:return_time_drag}. \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{return_time_prod_50reps} \caption{Illustration of the computation of return times for the averaged drag over the square obstacle pictured in Fig.~\ref{fig:snapshots}, using $50$ repetitions of the \ac{gktl} algorithm. The parameters are the same as in Fig.~\ref{fig:return_time_drag}. This figure illustrates the reduction in the occurrence of plateaus for the return time curve obtained using the \ac{gktl} algorithm. The dashed black line represents the reference return times. The solid blue line represents the return times obtained using the \ac{gktl} algorithm.} \label{fig:return_time_drag_50reps} \end{figure} \section{Conclusion} \label{sec:cl} In this paper, we have considered the question of estimating the return time of rare events in dynamical systems. We have compared several estimators, using both standard timeseries (generated with direct numerical simulations) and rare event algorithms, by generalising the approach relating return times to the extrema over trajectory blocks. This approach relies on the fact that rare events behave, to a good approximation, like a Poisson process: this allows for the derivation of a simple formula (see~\eqref{eq:Return_Times_Rare}) for estimating the return times based on block maxima. We slightly improved this formula (see~\eqref{eq:Return_Times_Rare-1}), and further showed that it is possible, with only minor modifications, to evaluate it with data produced by rare event algorithms.
Indeed, while the traditional block maximum method consists in dividing a given trajectory into blocks of arbitrary length (larger than the correlation time of the system, and smaller than the return time one seeks to estimate), there is a class of rare event algorithms which yields precisely an ensemble of trajectories exhibiting the rare event more often than direct simulation, together with the probability of observing each member of the ensemble. Hence, we have generalised the block maximum formula to non-equiprobable trajectory blocks; this allowed us to use rare event algorithms directly, such as the \ac{ams} and \ac{gktl} algorithms, to estimate return times for rare events. Using the Ornstein--Uhlenbeck process as an illustration, we showed that the method is easy to use and accurately computes return times in a computationally efficient manner. Indeed, compared to direct sampling, combining the generalised block maximum approach with rare event algorithms allowed for computing return times many orders of magnitude larger, at fixed computational cost. This method does not depend on the dynamics of the system or on the type of observable, as long as a suitable rare event algorithm is selected. As an illustration, we computed return time plots for both instantaneous and time-averaged observables for the Ornstein--Uhlenbeck process, using the \ac{ams} and \ac{gktl} algorithms. This approach paves the way to the numerical computation of return times in complex dynamical systems. To showcase the potential of the method, we briefly discussed an application of practical interest: extreme values of the drag force on an object immersed in a turbulent flow. A key issue with rare event algorithms is to understand whether they are actually useful for computing rare events and their probabilities in actual complex dynamical systems.
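The equiprobable-block version of this construction can be sketched in a few lines. This is our own minimal illustration, assuming only the Poisson relation $q(a)=1-e^{-\Delta t/r(a)}$ between the return time $r(a)$ and the fraction $q(a)$ of blocks of duration $\Delta t$ whose maximum exceeds $a$; the estimator~\eqref{eq:Return_Times_Rare-1} and its generalisation to weighted, non-equiprobable blocks refine this.

```python
import numpy as np

def return_times_from_block_maxima(maxima, block_length):
    """Return-time curve from equiprobable block maxima, assuming rare
    exceedances are Poissonian: q(a) = 1 - exp(-block_length / r(a)),
    hence r(a) = -block_length / log(1 - q(a)).
    Returns the thresholds (descending) and the corresponding return times."""
    a = np.sort(np.asarray(maxima, dtype=float))[::-1]   # descending maxima
    M = len(a)
    q = np.arange(1, M + 1) / M       # q(a_m): fraction of blocks with max >= a_m
    with np.errstate(divide="ignore"):
        r = -block_length / np.log1p(-q)   # log1p(-q) = log(1 - q), stable for small q
    return a, r
```

Larger thresholds are exceeded by fewer blocks and therefore receive larger return times, which is the monotonicity one expects from a return time plot.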
Many of the proposed approaches fail to pass such a test, either because the algorithm is too complex to be used for complex dynamical systems, because it is restricted to specific classes of systems (equilibrium or reversible dynamics, diffusions with small noise), or because it simply fails. A key issue with many potentially successful rare event algorithms, for instance the \ac{ams} and \ac{gktl} algorithms among others, is that their success depends strongly on the quality of the rule used for selecting trajectories. For instance, the \ac{ams} and \ac{tams} algorithms rely on a score function, and the \ac{gktl} algorithm uses as a selection rule the increment of the time average one aims to compute. Whenever one uses a good score function, these algorithms are extremely useful and show tremendous sampling improvements~\cite{Rolland2016}. For the \ac{ams} algorithm, the choice of a good score function often relies on a rough qualitative understanding, by the user, of the effective dynamics that leads to the rare events. The \ac{ams} algorithm then leads to excellent quantitative results, even for complex dynamical systems (see for instance~\cite{Rolland2016}). Several examples have illustrated that these algorithms may fail to bring any improvement in other cases, see for instance~\cite{nemoto2016population}. Faced with such difficulties, one may either use an empirical approach, or try to improve the algorithms in order to cure potential problems, as we explain now. The empirical approach consists in identifying a priori the conditions for success of the algorithms, and identifying the relevant dynamical phenomena that fulfil these conditions. For the \ac{ams} algorithm, this amounts to understanding the dynamics well enough to define a macroscopic variable that describes the dynamics leading to the extremes, and to propose a related score function.
The algorithm may also be used to test hypotheses on such macroscopic variables, and thereby to learn about the dynamics. The \ac{gktl} algorithm is usually successful in conditions where the sampling of time averages is dominated by a persistent macroscopic state. Several authors have proposed new algorithms to cure some of these problems. One class of algorithms seeks to change the dynamics so that the computation becomes more efficient (see for instance~\cite{vanden2012rare} for diffusions with small noise, or~\cite{nemoto2016population} in relation to the \ac{gktl} algorithm, and references therein). These methods are limited to diffusions, as they require relating the statistics of paths under different dynamics, for instance through the Girsanov formula. They can involve recursive learning of an optimal dynamics and be very successful for dynamics with a few degrees of freedom~\cite{nemoto2016population}. Another class of algorithms, milestoning (see~\cite{faradjian2004}), aims at computing a reduced description of the original dynamics, which afterwards permits the efficient computation of dynamical quantities, for instance first passage times (see~\cite{schutte2011} and references therein). \begin{acknowledgments} The research leading to these results has received funding from the European Research Council under the European Union's seventh Framework Program (FP7/2007-2013 Grant Agreement No. 616811). Simulations have been performed on the local HPC facilities at Ecole Normale Sup\'erieure de Lyon (PSMN). These HPC facilities are supported by the Auvergne-Rh\^one-Alpes region (GRANT CPRT07-13 CIRA) and the national Equip@Meso grant (ANR-10-EQPX-29-01). \end{acknowledgments}
\section{Introduction} Numerical studies of lattice field theories have developed significantly in parallel with the development of computers during the past decade. Of particular importance in this regard has been the construction of dedicated QCD computers (see Table 1 and for earlier reviews see Ref.\cite{review}) and the move of commercial vendors toward parallel computers in recent years. Due to these developments we now have access to parallel computers which are capable of 5--10 Gflops of sustained speed. However, a fully convincing numerical solution of many lattice field theory problems, in particular those of lattice QCD, requires much more speed. In fact the typical number of floating point operations required in these problems, such as full QCD hadron mass spectrum calculations, often exceeds $10^{18}$, which translates into 115 days of computing time at a sustained speed of 100 Gflops. Under this circumstance we really need computers with a sustained speed exceeding 100 Gflops. \input machines In this talk I review the present status of efforts toward the construction of dedicated parallel computers with a peak speed of 100--1000 Gflops. Of the six projects in this category (see Table 1), APE100\cite{ape100} is near completion and ACPMAPS upgraded\cite{acpmaps_upgraded} is running now. Because they have already been reviewed previously\cite{review}, we shall only describe their most recent status. The three recent projects, the Teraflops project\cite{teraflops_old,negele} in the United States, the CP-PACS project\cite{oyanagi} in Japan and the 0.5-Teraflops project\cite{christ} in the United States, are at varying stages of development. I shall describe them in detail. Finally the APE1000\cite{rapuano} is the future plan of the APE Collaboration, of which details are not yet available.
\begin{figure}[t] \begin{center} \leavevmode \epsfxsize=6.3cm \epsfbox{speed3.ps} \vspace{-0.8cm} \end{center} \caption{Progress of theoretical peak speed.} \label{fig:ComputerSpeed} \end{figure} A key ingredient in the fast progress of parallel computers in recent years is the development of semiconductor technologies. Understanding this aspect is important when one considers possible approaches toward a Teraflops of speed. I shall therefore start this review with a brief reminder of the development of vector and parallel computers and the technological reasons why parallel computers have recently exceeded vector computers in computing capability (Sec. 2). The status of APE100 and ACPMAPS upgraded is summarized in Sec. 3. The US Teraflops, CP-PACS and 0.5-Teraflops projects are described in Sec. 4. Powerful parallel computers are also available from commercial vendors. In Sec. 5 I shall discuss two new computers, the Fujitsu VPP500 and the CRAY T3D. After these reviews I discuss several architectural issues for computers toward Teraflops in Sec. 6. A brief conclusion is given in Sec. 7. \section{Recent development of computers and semiconductor technology} In the upper part of Fig.~\ref{fig:ComputerSpeed} we show the progress of the peak speed of vector and parallel computers over the years. Small symbols correspond to the first shipping date of computers made by commercial vendors, with open ones for vector and filled ones for parallel type. Parallel computers dedicated to lattice QCD are plotted with large symbols. We clearly observe that the rate of progress for parallel computers is roughly double that of vector computers and that a crossover in peak speed took place from vector to parallel computers around 1991. The ``linear fit'' drawn in Fig.~\ref{fig:ComputerSpeed} for parallel computers can be extrapolated to the period prior to 1985.
QCDPAX is the fifth generation computer in the PAX series\cite{hoshino} and there are four earlier computers starting in 1978. In the lower part of Fig.~\ref{fig:ComputerSpeed} the peak speeds of these computers are plotted in units of Mflops together with that of the Caltech computer described, for example, by Norman Christ at Fermilab in 1988\cite{review}. It is amusing to observe that the rapid increase in the speed of parallel computers has been continuing for over a decade since the early days. \begin{figure}[t] \epsfxsize=7.5cm \epsfbox{clock.ps} \vspace{-0.5cm} \caption{Machine clock of ECL and CMOS semiconductors.} \label{fig:Clock} \end{figure} It is important to note that the first three PAX computers were limited to 8 bit arithmetic and the fourth one to 16 bit. We also recall that the first Columbia computer used 22 bit arithmetic. Thus not only the peak speed but also the precision of floating point numbers has increased significantly for parallel computers. Now 64 bit arithmetic is becoming standard. To see more closely why the crossover happened, let us look at the development of semiconductor technology. In Fig.~\ref{fig:Clock} we show how machine clocks have become faster in the case of ECL, which is utilized in vector-type supercomputers, as well as in the case of CMOS, which is used in personal computers and workstations. As we can see, the speed of CMOS is about ten times lower than that of ECL. However, the power consumption and the heat output are much lower than those of ECL. Furthermore the speed of CMOS has become comparable to that of ECL of the late 1980's.
\begin{figure}[t] \epsfxsize=7.5cm \epsfbox{dram.ps} \vspace{-0.5cm} \caption{Development of minimum spacing of LSI and capacity of DRAM.} \label{fig:Dram} \end{figure} \newbox\B \epsfysize=7.0cm \setbox\B=\vbox{\epsfbox[96 64 500 673]{1evol.ps}} \begin{figure}[t] \begin{center} \leavevmode \makeatletter \@rotr\B \makeatother \vspace{-0.5cm} \end{center} \caption{Evolution of the number of pins. (From ``Nikkei Electronics'' August 2, 1993.)} \label{fig:Evol} \end{figure} \input 1table A machine cycle of one nanosecond is a kind of limit to reach. This is understandable because one nanosecond is the time in which light travels 30cm. In this time interval one has to load data from memory to a floating point operation unit, perform a calculation and store the result back to memory. Even in the ideal case of pipelined operations, a one nanosecond cycle corresponds to only one Gflops. Usually a vector computer has multiple operation units consisting of, for example, 8 floating point operation units (FPUs). Because of this, the theoretical peak speed becomes 8 Gflops. Further, it has multiple sets of this kind of multiple FPUs; in the case of 4 sets the peak speed becomes 32 Gflops. This is how a vector computer reaches a peak speed of the order of 10 Gflops. That is, recent vector computers are already parallel computers. However, it is rather difficult to proceed further in this approach because of the power consumption and the heat output. On the other hand, the development of CMOS semiconductor technology, with its small size, high speed and low power consumption, has made it possible to construct a massively parallel computer composed of about 1,000 nodes with a peak speed which exceeds that of vector-type supercomputers. This is the reason why the crossover occurred. The speedup of CMOS has become possible due to the development of LSI technology. Figure~\ref{fig:Dram} shows this development in terms of the minimum feature size or minimum spacing.
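The peak-speed bookkeeping above (a 1 ns clock yielding 1 Gflops per pipelined FPU, multiplied by the number of FPUs per set and the number of sets) can be captured in a one-line helper; the function below is our own illustrative summary, not a tool from the text.

```python
def peak_gflops(clock_ns, fpus_per_set, n_sets):
    """Peak speed of a pipelined vector unit: one result per FPU per clock
    cycle, so (1/clock_ns) Gflops per FPU, times FPUs per set, times sets.
    With a 1 ns clock, 8 FPUs per set and 4 sets this reproduces the
    32 Gflops quoted in the text."""
    return (1.0 / clock_ns) * fpus_per_set * n_sets
```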
Now the spacing has been reduced to 0.5 micron. This development has also led to a substantial increase of DRAM bit capacity, which has recently reached the level of 16Mbit. The speed of transistors has also increased with the decrease of the minimum spacing, because electrons can traverse the minimum spacing in a shorter time. This is the reason why the machine clock has become faster. Packaging techniques have also developed: Figure~\ref{fig:Evol} shows the evolution of the number of pins of LSI. Thanks to these developments, it is now not a dream to construct a 1 Tflops computer with 64 bit arithmetic of reasonable size and reasonable power consumption. \section{Past and present of dedicated computers} The computers of the first group in Table~\ref{tab:machines}, the three computers of Columbia\cite{columbia}, two versions of APE\cite{ape}, QCDPAX\cite{qcdpax}, GF11\cite{gf11} and ACPMAPS\cite{acpmaps}, were constructed some years ago and have been producing physics results. The characteristics of these computers are given in Table~\ref{tab:QCD1}. These computers are already familiar to the lattice community. Therefore I refer to earlier reviews~\cite{review} for details and just emphasize that a number of interesting physics results have been produced. This fact shows that there is real benefit in constructing dedicated computers. The computers of the second group in Table~\ref{tab:machines}, the 6 Gflops version of APE100 and ACPMAPS upgraded, have recently been completed. Both are now producing physics results, some of which have been reported at this conference. I list their characteristics in Table~\ref{tab:QCD2}. \subsection{APE100} The architecture of APE100\cite{ape100} is a combination of SIMD and MIMD. The full machine consists of 2048 nodes with a peak speed of 100 Gflops. The network is a 3-dimensional torus. Each node has a custom-designed floating point chip called MAD. The chip contains a 32-bit adder and a multiplier with a 128-word register file.
The memory size is 4 Mbytes/node, using 1M$\times$4 DRAM with 80 ns access time. The bandwidth between MAD and the memory is 50 Mbytes/sec, which corresponds to one word per 4 floating point operations. One board consists of $2 \times 2 \times 2$ = 8 nodes with a commuter for data transfer. The communication rates on-node and inter-node are 50 Mbytes/sec and 12.5 Mbytes/sec, respectively. Each board has a controller which takes care of program flow control, address generation and memory control. The 6 Gflops version of APE100, which is called TUBE, is running and producing physics results. A TUBE is composed of 128 nodes making a $32 \times 2 \times 2$ torus with periodic boundary conditions. The naming originates from its topological shape. The memory size is 512 Mbytes. Four TUBEs have been completed. The sustained speed of a TUBE for the link update is about 1.5 microseconds/link with the Metropolis algorithm with 5 hits. The time for multiplication by the Wilson operator is 0.8 microseconds per site. These rates roughly correspond to 2.5 Gflops to 3 Gflops, which represents 40--50\% of the peak speed. These figures show good efficiency. The physics subjects being studied on TUBE are the hadron spectrum and heavy quark physics, the results of which have been reported at this conference. A Tower, which consists of 4 TUBEs with a peak speed of 25 Gflops, is being assembled now and should be working in the late fall of 1993. The full machine, which is composed of 4 Towers with a peak speed of 100 Gflops, is expected to be completed by the first quarter of 1994. \input 2a_table \subsection{ACPMAPS Upgraded} This is an upgrade of the ACPMAPS, replacing the processor boards without changing the communication backbone\cite{acpmaps_upgraded}. The ACPMAPS is a MIMD machine with distributed memory. On each node there are two Intel i860 microprocessors with a peak speed of 80 Mflops. The memory size is 32 Mbytes of DRAM for each node.
The full machine consists of 612 i860 processors with a peak speed of 50 Gflops and has 20 Gbytes of memory. The network has a cluster structure: one crate consists of 16 boards with a 16-way crossbar. A board can be either a processor node or a Bus Switch Interface board. The 16-way crossbars are connected in a complicated way, forming a hypercube with extra connections. The throughput between nodes is 20 Mbytes/sec. ACPMAPS has a strong distributed I/O system: there are 32 Exabyte tape drives and 20 Gbytes of disk space. This mass I/O subsystem is one of the characteristics of ACPMAPS. The software package CANOPY, which has been well described several times\cite{acpmaps,acpmaps_upgraded}, makes it possible to distribute physical variables to nodes without knowing the details of the hardware. The ACPMAPS is running and doing calculations of the quenched hadron spectrum and heavy quark physics, the results of which have been reported at this conference. The sustained speeds measured on a $32^3 \times 48$ lattice are as follows. One link update by a heat-bath method takes 0.64 microseconds per link. One cycle of conjugate gradient inversion of the Wilson operator by the red-black method takes about 0.64 microseconds per site. The L inversion together with the U back-inversion in the ILUMR method takes 2.23 microseconds per site. These figures for the sustained speeds are about 10-20\% of the peak speed. Therefore the efficiency is not as good as that of TUBE. However, the machine has several good characteristics: it supports both 64 and 32 bit arithmetic operations, the network is very flexible, and the distributed I/O system is convenient for users. \input 2b_table \section{Project under way and proposed} The three projects of the third group in Table~\ref{tab:machines}, the Teraflops project, the CP-PACS project and the 0.5-Teraflops project, are well under way. The basic design targets are listed in Table~\ref{tab:QCD3}.
\subsection{Teraflops project} The Teraflops project\cite{teraflops_old} has changed significantly since last year. The new plan (Multidisciplinary Teraflops Project)\cite{negele} utilizes Thinking Machines' next-generation platform instead of the CM5 as originally planned. A floating point processing unit(FPU) called an arithmetic accelerator is to be constructed with a peak speed in the range of 200 -- 300 Mflops. One node consists of 16 such FPUs plus one general processor, with a peak speed of more than 3.2 Gflops and 512 Mbytes of memory. The full machine consists of 512 nodes with a peak speed of at least 1.6 Tflops with 64 bit arithmetic. The sustained speed is expected to be more than 1 Tflops. A preliminary estimate for the cost of the full machine is \$20 -- 25M. This project is a collaboration of the QCD Teraflops Collaboration\cite{teraflops_mem}, the MIT Laboratory for Computer Science, Lincoln Laboratory and TMC. Funding for the project began in the fall of 1992 with start-up funds provided by MIT. The proposal for the whole project will be submitted to NSF, DOE and ARPA this fall. The tentative schedule is to build a prototype node in 1994, a prototype system in 1995 and have the full system in operation in 1996. \subsection{CP-PACS project} We started the CP-PACS (Computational Physics by Parallel Array Computer Systems) project last year\cite{oyanagi}. The CP-PACS collaboration currently consists of 22 members\cite{cppacs-mem}, half of them physicists and the other half computer scientists. The architecture is MIMD with a 3-dimensional hyper crossbar, which will be explained later. The target for the peak speed is currently at least 300 Gflops with 64 bit arithmetic. We are making a proposal for additional funds to increase this peak speed. The memory size is planned to be more than 48 Gbytes. The processor is based on a Hewlett-Packard PA-RISC processor. This is a super-scalar processor which can perform two operations concurrently.
We enhance the processor to support efficient vector calculations. The peak speed of one processor is 200 -- 300 Mflops. The enhancement will be described in detail later. For memory we use synchronous DRAM, pipelined by multi-interleaved banks and a storage controller. The memory bandwidth is one word per machine cycle. \begin{figure}[t] \epsfxsize=7.5cm \epsfbox{performance.ps} \vspace{-0.5cm} \caption{Performance of a RISC processor in a large scale scientific calculation.} \label{fig:Performance} \end{figure} Now let me explain the vector enhancement of the processor. As is well known, the high performance of usual RISC processors like those of Intel, IBM, HP and DEC depends heavily on the cache. However, when the data size exceeds the cache size, the effectiveness of the cache decreases. Figure~\ref{fig:Performance} shows a typical example of the performance of a RISC processor. When the data size exceeds the size of the on-chip level-1 cache, the performance drops by about 50\%. Furthermore, when it exceeds the size of the level-2 cache, the performance is of order 15\% of the theoretical peak speed. This behavior is very common among cache-based RISC processors. To overcome this difficulty, our strategy is to increase the number of floating-point registers without serious changes in the instruction set architecture. This preserves upward compatibility. However, this is not straightforward because the register fields in the instructions are limited; the number of registers is usually limited to 32. To resolve this problem we introduce slide windows as well as preload and poststore instructions\cite{nakamura}. We also pipeline the memory. Because of these features we are able to hide long memory access latency and perform vector calculations efficiently.
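The benefit of overlapping preloads with computation can be seen in a toy timing model. This is our own illustrative sketch; the cycle counts are invented numbers, not actual CP-PACS hardware parameters.

```python
# Toy timing model of latency hiding (illustrative only; the cycle
# counts below are invented, not actual CP-PACS hardware parameters).

def naive_cycles(n_windows, latency, compute):
    # Without preload: each window waits the full memory latency,
    # then computes.
    return n_windows * (latency + compute)

def preload_cycles(n_windows, latency, compute):
    # With preload: while one window is being processed, the load for
    # the next window is already in flight, so only the first latency
    # is exposed; thereafter each window costs max(latency, compute).
    return latency + n_windows * max(latency, compute)

if __name__ == "__main__":
    n, lat, comp = 1000, 30, 32   # windows, cycles, cycles
    print(naive_cycles(n, lat, comp))    # 62000 cycles
    print(preload_cycles(n, lat, comp))  # 32030 cycles
```

In this model, whenever the compute time per window is at least as long as the memory latency, the loads are completely hidden and the loop runs at essentially the arithmetic-limited rate.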
\begin{figure}[t] \epsfxsize=7.5cm \epsfbox{slide-window.ps} \vspace{-0.5cm} \caption{Schematic graph of slide-windowed registers.} \label{fig:Slide-Window} \end{figure} Figure~\ref{fig:Slide-Window} is a schematic illustration of how slide-windowed registers work. Arithmetic instructions use the registers in the active window, which has 32 registers. The preload instruction can load data into registers of the next (or next-to-next) window and the poststore instruction stores data from registers of the previous window. The pitch of the window slide can be chosen by software. Due to the preload and poststore instructions we can use all $m$ ($m > 32$) physical registers. \begin{figure}[t] \epsfxsize=7.5cm \epsfbox{Livermore.ps} \caption{Comparison of performance with and without slide windows for Livermore loops.} \label{fig:Livermore} \end{figure} Figure~\ref{fig:Livermore} is a comparison of the performance with and without slide windows for the Livermore Fortran Kernels: $<$Original$>$ means the performance without slide windows, and $<$Perfect-Cache$>$ represents a hypothetical case for comparison where the cache size is infinite and the data are all in cache. In the case of $<$Slide-Window$>$, the number of slide-windowed floating-point registers is assumed to be 64. Except for \#14 of the Livermore Fortran Kernels, the performance with slide windows is almost equal to that of the perfect-cache case and is about 6 times higher than the original one. \newbox\B \epsfysize=7.0cm \setbox\B=\vbox{\epsfbox[96 64 500 673]{mult.ps}} \begin{figure}[bt] \begin{center} \leavevmode \makeatletter \@rotr\B \makeatother \end{center} \caption{Performance for multiplication of Wilson matrix.} \label{fig:Mult} \end{figure} Figure~\ref{fig:Mult} shows the performance efficiency for multiplication of the Wilson matrix. The dashed line corresponds to the efficiency of code optimized by hand without considering memory bank conflicts.
The solid line is the result of a simulation for the realistic case where the effects of memory bank conflicts and the buffer size are taken into account. This shows that if the number of registers is larger than 100, the efficiency is more than 75\%. We will develop a compiler for the enhanced RISC processor, which will produce optimized code for the slide-window architecture. \begin{figure}[t] \epsfxsize=7.5cm \epsfbox{board-hard.ps} \caption{Schematic configuration of Processing Unit(PU) board and IO Unit(IOU) board of CP-PACS.} \label{fig:BoardHard} \end{figure} On each processing unit(PU), we place one enhanced PA-RISC processor, local storage(DRAM) and a storage controller(see Fig.~\ref{fig:BoardHard}). NIA stands for Network Interface Adapter and EX for exchanger. On an IO unit(IOU), in addition to the components on a PU, we place an IO bus to which disks are connected through IO adapters. \begin{figure}[t] \epsfxsize=7.5cm \epsfbox{HXBD.ps} \caption{System configuration of CP-PACS.} \label{fig:Hxb} \end{figure} The network is a 3-dimensional hyper crossbar as shown in Fig.~\ref{fig:Hxb}. It consists of x-direction crossbars as well as y- and z-direction crossbars. This hyper crossbar network is very flexible: data can be transferred from any node to any other node through at most three switches. The data transfer is made by message passing with wormhole routing. The latency is expected to be of the order of a few microseconds. A block-strided transfer is supported. We also have a global synchronization mechanism in addition to the hyper crossbar network. The system configuration of the CP-PACS with distributed disks is depicted in Fig.~\ref{fig:Hxb}. The disk space is more than 500 Gbytes in total. We use RAID5, which has extra parity bits. In general, when the number of disks is as large as in this case, the MTBF (mean time between failures) becomes of the order of one month. With RAID, however, there is no such problem. The number of nodes, not fixed yet, is from 1000 to 1500.
The host is a mainframe computer with modifications for massive data transfer between the CP-PACS and the external disk storage. A prototype with the PA-RISC without enhancement, which will be used mainly for tests of the network hardware, will be completed in early 1994, and the full scale machine with the newly developed processor is scheduled to be completed by spring 1996. The project is being carried out in collaboration with Hitachi Ltd. A new center called ``Center for Computational Physics'' was established at the University of Tsukuba for the development of CP-PACS. A new building for the center, where the new machine will be installed, was completed in the summer of 1993. The funding for the development of CP-PACS is about \$14M. \subsection{0.5-Teraflops project} \begin{figure}[t] \begin{center} \leavevmode \epsfysize=6.0cm \epsfbox{1norman.ps} \end{center} \caption{Schematic diagram of one node of the 0.5-Teraflops machine.} \label{fig:1norman} \end{figure} \begin{figure}[t] \begin{center} \leavevmode \epsfxsize=5.0cm \epsfbox{3norman.ps} \end{center} \caption{Schematic diagram of NGA(node gate array) for the 0.5-Teraflops machine.} \label{fig:3norman} \end{figure} \begin{figure}[t] \begin{center} \leavevmode \epsfxsize=7.0cm \epsfbox{2norman.ps} \end{center} \caption{Mechanical design of a mother board of the 0.5-Tflops project.} \label{fig:2norman} \end{figure} This project started quite recently\cite{christ}. The project is a collaboration of theoretical physicists and experimental physicists\cite{0.5teraflops_mem}. The machine consists of 16K nodes making a 4-dimensional torus $16 \times 16 \times 16 \times 4$ with a peak speed of 0.8 Tflops with 32 bit arithmetic. It is expected that the sustained speed for QCD will be about 0.4 Tflops. The node architecture is depicted in Fig.~\ref{fig:1norman}. The processor is a DSP(Digital Signal Processor) from Texas Instruments. A 32 bit addition and multiplication can be performed concurrently with a 40 ns machine cycle.
This leads to 50 Mflops for each node. It executes one word read per machine cycle and one word write per two machine cycles. The DSP has 2K words of memory on chip. The size is small ($3.0$ cm$^2$), the power consumption very low (less than 1 Watt) and the price less than \$50. Each node has 2 Mbytes of DRAM. The maximum bandwidth between the processor and the memory is 25 Mwords/sec. The memory size is 32 Gbytes in total. The node gate array(NGA), which is shown in Fig.~\ref{fig:3norman}, is to be newly developed. The design has been partly finished. It plays the roles of memory manager, network switch and specialized cache as a buffer. The buffer size is chosen in such a way that multiplications of $3 \times 3$ matrices on 3-vectors can be done efficiently. The 4-dimensional network is connected by eight bi-directional lines of the NGA. Because the data transfer is made by handshaking, the latency is not small. To hide this latency, there is a mode called ``store and pass through''. In the calculation of the inner product of two vectors, which appears in the conjugate gradient method, the fraction of total time spent on data transfer is reduced from 70\% without this mode to 28\% with it. It supports a block-strided transfer. The mechanical design of a mother board is shown in Fig.~\ref{fig:2norman}. On the mother board there are $2 \times 2 \times 4 \times 4 = 64$ daughter boards, with the last 4 making a loop. Each node has a SCSI port to which peripheral tape and disk drives are connected. One of the 256 boards of the full machine is connected to the host. The disk space is 48 Gbytes in total. The data transfer from disk to tape or vice versa can be done concurrently with physics calculations. \input 3table The power consumption is expected to be about 50 KW, which is very low compared with other projects. The test board will be completed by summer 1994 and the full machine by summer 1995. The funds for a 128-node machine with a peak speed of 6.4 Gflops are provided by DOE.
The proposal for the full machine will be submitted in spring 1994. \subsection{APE1000} This is a successor of APE100 with a peak speed of 1 Tflops with 64 bit arithmetic\cite{rapuano}. The project will start by the end of 1994. \section{Commercial computers} I list the characteristics of the most powerful commercial computers in Table~\ref{tab:commercial} and describe the two new ones in some detail below. For other computers I refer to the earlier reviews\cite{review}. \subsection{VPP500} This is the latest machine from Fujitsu. Each node is a vector processor with the same architecture as the VP400, with a peak speed of 1.6 Gflops. Because of this, it is called a vector-parallel machine by Fujitsu. One node is a multi-chip module which consists of 121 LSIs, a part of which is composed of GaAs. Each node has 128 Kbytes of vector registers and 2 Kbytes of mask registers. The memory size is 256 Mbytes/node. The network is a complete crossbar connecting all nodes, which is very powerful for any application. The bandwidth for data transfer is 400 Mbytes/sec in each direction. The OS is UNIX and the language is Fortran plus directives for parallel procedures. The maximum number of nodes is 222, with a peak speed of 355 Gflops. The power consumption is 6 KW/node. The power needed for the full machine is more than 1 MW. A small VPP500 with 4 processors is scheduled to be installed at Aachen this December. Another one with 7 processors will be installed at the Institute of Space and Astronomical Laboratory of Japan next January. \subsection{T3D} This is the machine just announced by CRAY. The node processor is the DEC Alpha chip, which is one of the most powerful RISC chips on the market. The clock cycle is 6.7 ns and the peak speed of the chip is 150 Mflops. The memory size is 16 Mbytes per node with 4Mbit DRAM at present. It will be upgraded soon to 64 Mbytes with 16Mbit DRAM. The memory is globally shared and physically distributed. The network is a 3-dimensional torus.
The bandwidth for data transfer is 300 MB/sec in each direction. The latency of the communication is very low, less than 1 microsecond of hardware overhead. It is a MIMD machine with a maximum peak speed of 300 Gflops when it is composed of 2048 nodes: the maximum number of nodes, which is 1024 at present, will be increased to 2048 soon. The OS is Mach and the language is Cray Research Adaptive Fortran. A machine with 32 nodes has already been installed at the Pittsburgh Supercomputing Center. It will be upgraded to 512 nodes next spring. \begin{figure}[t] \begin{center} \leavevmode \epsfxsize=7.0cm \epsfbox{flops.ps} \end{center} \vspace{-0.5cm} \caption{Sustained speed in terms of flops/node of commercial parallel computers for the conjugate gradient matrix inversion with staggered quarks\cite{sugar}. The results for Paragon and CM5 are preliminary.} \label{fig:commercial1} \end{figure} \begin{figure}[t] \begin{center} \leavevmode \epsfxsize=7.0cm \epsfbox{eff.ps} \end{center} \vspace{-0.5cm} \caption{Efficiency in terms of the ratio of the sustained speed to the theoretical peak speed\cite{sugar}.} \label{fig:commercial2} \end{figure} \subsection{Sustained speed of commercial parallel computers} The MILC collaboration has been running QCD codes on a number of commercial computers including the nCUBE2, the Intel iPSC/860, the Intel Paragon and the TMC CM5. They have benchmark results for the conjugate gradient matrix inversion with staggered quarks on these parallel computers\cite{sugar}. The performances in the benchmarks are plotted in Figs.~\ref{fig:commercial1} and \ref{fig:commercial2}, respectively, in terms of Mflops/node and the ratio of the sustained speed to the theoretical peak speed. It should be noted that the benchmarks quoted for the CM5 and the Paragon are preliminary. In particular, the communication speed of the Paragon is expected to improve significantly as the operating system is upgraded. The nCUBE2 is very stable and has nice software.
Because the nCUBE2 is slow, it is not suitable for large QCD simulations, but it is convenient for software development. When the codes are written in C, the efficiency is very low for the iPSC/860 and the CM5, as is seen in the figures. Only when they are written in assembly language does the efficiency reach around 30\%. A similar efficiency has also been reported at this conference by Rajan Gupta\cite{gupta} for Wilson quarks in the case of the CM5. \section{Toward Teraflops computers} \subsection{Three strategies} Roughly speaking, there are three strategies for reaching a 1 Tflops machine, as shown in Table~\ref{tab:teraflops}. \input 5table The first is the vector-parallel approach taken by the VPP500: 2 Gflops $\times$ 500 nodes = 1 Tflops. The second is the approach taken by the T3D and CP-PACS, that is, to use the most advanced RISC processors with an enhanced mechanism for high throughput between memory and processor: 200-400 Mflops $\times$ 2500-5000 nodes = 1 Tflops. The approach taken by the Teraflops project is in between the first and the second, in the sense that the peak speed of one FPU is 200--300 Mflops and that of one node is more than 1.6 Gflops. The third approach, taken by the CM5, Paragon, nCUBE and the 0.5-Tflops project, is to use well-established technology: 50-100 Mflops $\times$ 10,000-20,000 nodes = 1 Tflops. In the first approach, the power consumption and the size will become problematic, although the number of nodes is small. In the second approach, the sustained speed of each node for arithmetic operations and that of the data transfer between nodes will be the key issues. In the third approach, the packaging of the whole system and its reliability will be crucial. In spite of these potential obstacles, I believe that the rapid progress of technologies will enable all three approaches to reach 1 Tflops of theoretical peak speed in a few years.
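The node-count arithmetic behind these three routes is easy to verify; a minimal sketch using only the per-node peak speeds quoted above:

```python
# Check the three routes to a 1 Tflops peak quoted in the text,
# using ceiling division so a fractional node count rounds up.

def nodes_needed(target_flops, node_flops):
    return -(-target_flops // node_flops)  # ceiling division

TARGET = 10**12  # 1 Tflops

print(nodes_needed(TARGET, 2 * 10**9))    # vector-parallel (VPP500-like): 500
print(nodes_needed(TARGET, 200 * 10**6))  # advanced RISC node: 5000
print(nodes_needed(TARGET, 50 * 10**6))   # established technology: 20000
```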
We should note, however, that achieving a high sustained speed with massively parallel computers, and having flexibility for applications, require additional consideration of the balance of speeds among the various components and of other architectural issues. Let us make brief comments on these points. \subsection{Balance of speed} \begin{figure}[t] \epsfxsize=7.5cm \epsfbox{balance.ps} \caption{Balance of bandwidth and memory size against processor speed. The normalizations are 1 floating point operation/sec : 0.5 words/sec : 0.1 words/sec : 0.025 words, which is roughly the balance for lattice QCD.} \label{fig:Balance} \end{figure} In Fig.~\ref{fig:Balance} the memory-processor bandwidth, the inter-node communication bandwidth, and the memory size are compared against the processor speed for the computers we reviewed in some detail. The processor speed is normalized to unity, and the other normalizations are chosen for the following reasons. For QCD calculations it is probably appropriate that the bandwidth between the CPU and the memory is one word per two floating point operations. It will also be enough that the bandwidth for inter-node communication is 0.1 words per floating point operation. For the memory size, the normalization is arbitrary, and I chose 0.025 words of memory for 1 flops of processor speed. We see that each machine has its own characteristics. Securing bandwidths between memory and processor and between nodes that are high enough to keep up with the processor speed is one of the crucial factors for a high sustained speed. In dedicated computer projects these parameters can be tuned to specific applications (this in fact underlies the cost effectiveness of dedicated computers). For CP-PACS we have chosen the balance in such a way that it is optimized for lattice QCD. We should note, however, that the requirements on the bandwidths in lattice QCD are modest compared to many other applications.
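The normalization used in Fig.~\ref{fig:Balance} amounts to dividing each quantity by the processor peak speed; a small sketch makes this concrete. The node parameters below are illustrative assumptions chosen to sit near the QCD reference ratios, not measured values for any machine.

```python
# Normalize bandwidths and memory size by processor peak speed, as in
# the balance figure. Reference ratios for lattice QCD from the text:
# 1 flop/s : 0.5 words/s (memory) : 0.1 words/s (network) : 0.025 words.

QCD_REFERENCE = (0.5, 0.1, 0.025)

def balance(peak_flops, mem_bw_words, net_bw_words, mem_words):
    """Return (memory bandwidth, network bandwidth, memory size),
    each divided by the processor peak speed."""
    return (mem_bw_words / peak_flops,
            net_bw_words / peak_flops,
            mem_words / peak_flops)

# Hypothetical 300 Mflops node with 150 Mwords/s memory bandwidth,
# 30 Mwords/s inter-node bandwidth and 4 Mwords of local memory:
print(balance(300e6, 150e6, 30e6, 4e6))  # roughly (0.5, 0.1, 0.013)
```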
Higher bandwidths are probably preferred for general purpose computers, as realized in the case of the T3D. There are other points which do not appear in the figure, such as the number of floating point registers on each processor, the structure of the memory (pipelined or not) and the latency of the communication. These features are also important for the performance of a massively parallel machine. For example, the memory-processor bandwidth relative to the speed of one node is small for the VPP500, but it has 8 Kbytes of registers, which probably compensates for it. \subsection{Other issues of architecture} \vspace{0.2cm} \subsubsection{SIMD or MIMD} SIMD is simple and generally sufficient for QCD calculations. However, MIMD is more flexible and can accommodate a greater variety of algorithms. An interesting question is whether there are efficient algorithms for the inversion of quark matrices which require a MIMD architecture. Another point is that MIMD hardware is probably simpler than SIMD for a machine with a large number of processors, since the clock skew problem will become serious for SIMD. \subsubsection{Topology of network} The 3d torus and 4d torus networks are simple and natural for lattice QCD. However, precision measurements of observables require finite-size analyses, for which we need simulations on a number of lattice sizes. On this point a more flexible network is preferable. \subsubsection{32bit or 64bit} In many cases of lattice QCD calculations it seems that 32 bit arithmetic is sufficient. However, for example, at the global reject/accept step of the Hybrid Monte Carlo algorithm on a large lattice, 32 bit precision is not sufficient. In general, 64 bit precision is needed when the algorithm involves global variables. \section{Conclusions} In this review I have surveyed the development of parallel computers and the present status of dedicated computer projects toward Teraflops speeds.
In the 1980's parallel computers were in their infancy and TMC was virtually the only company in the field. At that time there was no doubt that the construction of dedicated parallel computers by physicists was a beneficial project. In fact, the dedicated computers which resulted from these projects have produced a number of interesting and important physics results on lattice field theories. The situation has become less clear-cut in recent years, due to the higher technology needed to achieve faster speeds on the one hand, and the emergence of powerful general-purpose parallel computers from commercial vendors on the other. Historically, projects for dedicated computers have been carried out by a small group of lattice physicists, in some cases in collaboration with experimental physicists and computer scientists, but without the involvement of large commercial companies. The 0.5-Teraflops project follows this spirit. Fully utilizing well-established micro-processor technologies and design aids which have become commercially available, the project aims to complete a computer precisely tuned to lattice QCD within a short period of time and at a low cost. It is very impressive to learn that this strategy is actually possible for computers approaching a Teraflops of speed. I believe that a vital factor in realizing this approach is the experience gained with the construction of the three previous computers at Columbia. Another possible approach is to depart from the traditional style and to seek close collaboration with large companies from the start of the project. This is the strategy taken by the US Teraflops project and the Japanese CP-PACS project. In the computers planned in these projects the most advanced processors are to be networked together with a large bandwidth. The 0.5-micron semiconductor technology, soon to become 0.3 micron, and the packaging techniques needed for this type of architecture cannot be handled by physicists and computer scientists alone.
The cost is necessarily higher and the construction period longer. There are, however, advantages: a more flexible architecture, reliable hardware, and a generally better software environment, which is very important for the development of application programs and data analysis. Regardless of the approach, I think dedicated computer projects still represent an important avenue we should pursue for acquiring the computing power needed for the advancement of lattice field theory studies. Hopefully all three computers will be completed in a few years' time and will produce a variety of fruitful results with some unexpected surprises. \section*{Acknowledgments} I am grateful to many colleagues for useful correspondence and discussions. I would particularly like to thank N. Christ, M. Fischler, F. Karsch, R. Kenway, J. Negele, F. Rapuano, R. Sugar, A. Ukawa and D. Weingarten. I am also indebted to the members of the CP-PACS project, in particular K. Nakazawa. Valuable suggestions by A. Ukawa on the manuscript are gratefully acknowledged. Finally I would like to thank K. Kanaya, S. Aoki, H. Nakamura and H. Hirose for their help in the preparation of the manuscript. This work is supported in part by the Grant-in-Aid of the Ministry of Education(No. 05NP0601).
\section{Introduction} \hspace{1 pc} Social contacts influence people's behavior, and that behavior in turn affects the connections they form. For instance, a good study partner might lead a student to exert more effort in school, and that student's increased effort may incentivize the partner to maintain the collaboration. Understanding how networks and actions mutually influence one another is crucial for policy design. Nevertheless, a general theoretical framework for such situations is missing. While many studies analyze peer effects assuming an exogenously fixed social network, an important paper by \citet{Carrelletal2013} shows this can lead to mistaken predictions and interventions that backfire. The authors first estimated academic peer effects among cadets at the U.S. Air Force Academy and subsequently designed peer groups to improve the performance of academically weaker cadets. Extrapolating from peer effects estimates based on randomly assigned first-year peer groups, the authors expected that low-skilled freshmen would benefit from being placed in groups with higher proportions of high-skilled freshmen. Instead, the intervention produced a comparably sized \emph{negative} effect. The authors interpret this as a consequence of endogenous friendship and collaboration networks \emph{within} administratively assigned groups. Friendships between low- and high-skilled freshmen were much less likely to form in the designed groups than in the randomly assigned ones. Therefore, the positive spillovers that the design sought to maximize failed to happen. To account for such effects, researchers need models that permit a simultaneous analysis of network formation and peer effects. We introduce a framework to study games with network spillovers together with strategic link formation. Our theoretical contribution is twofold. First, we propose a model that nests standard analyses of each type of interaction on its own. 
Players choose actions (e.g., effort levels) from an ordered set, and they also have preferences over links given actions. We adapt the definitions of Nash equilibrium (for actions) and pairwise stability (for network formation) to define our solution concepts. Intuitively, in a solution to a network game with network formation, players should have an incentive to change neither their actions\footnote{We use this term from now on to mean the strategic action \emph{other} than the link choice.} nor their links. More precisely, our notion of a stable outcome requires that no player benefits from changing her action, holding the network fixed, nor from unilaterally removing links, and no pair of players can jointly benefit from creating a link between them.\footnote{Mirroring standard definitions in network formation, we consider both pairwise stability, in which players may drop only one link at a time, and pairwise Nash stability, in which players may drop many links simultaneously.} Second, we identify payoff properties under which stable networks have simple structures. Here, we first focus on a large class of separable network games that nests essentially all prior models of consensual network formation and action choice. We obtain sharp characterizations of outcomes using two kinds of order conditions. The first concerns the nature of spillovers. We say a game has \emph{positive spillovers} if players taking higher actions are more attractive neighbors; correspondingly, a game has \emph{negative spillovers} if players taking higher actions are less attractive neighbors.\footnote{Note these spillover properties are distinct from strategic complements or substitutes \emph{in actions}. Our properties concern levels of utility rather than how others' actions affect incentives to take higher actions.} The second type of condition concerns the relationship between action incentives and links. 
The game exhibits \emph{action--link complements} if the returns from taking higher actions increase with one's degree\footnote{Number of neighbors.} in the network. The game exhibits \emph{action--link substitutes} if these returns decrease with one's degree. Our main result characterizes the structure of both actions and links for any combination of order conditions (one of each type). Table \ref{tab:2by2} summarizes our findings. The result demonstrates that these conditions provide a useful way to organize our understanding of games on endogenous networks. {\footnotesize \renewcommand{\arraystretch}{1.5} \begin{table}[t] \begin{tabular}{p{2.6cm} p{3cm} P{4.2cm} P{4.2cm} } & & \multicolumn{2}{c}{\footnotesize \emph{Interaction between neighbors' action incentives}} \\ & & \textbf{Positive spillovers} & \textbf{Negative spillovers} \\ \hline \multirow{2}{2.7cm}{\footnotesize{\emph{Interaction between links and actions}}} & \textbf{Complements} & Nested split graph, higher degree implies higher action & Ordered overlapping cliques, neighbors take similar actions \\ & \textbf{Substitutes} & Ordered overlapping cliques, neighbors take similar actions & Nested split graph, higher degree implies lower action \\ \hline \end{tabular} \caption{Summary of main results; each cell indicates which network and action configurations are stable under the corresponding pair of assumptions on payoffs.}\label{tab:2by2} \end{table} \renewcommand{\arraystretch}{1} } \begin{figure} \centering \subfloat[Nested split graph]{{\includegraphics[width=0.4\textwidth]{illustrations/NSG.pdf} }}\quad \subfloat[Overlapping cliques]{{\includegraphics[width=0.4\textwidth]{illustrations/OverlappingCliques.pdf} }}\caption{Examples of the two types of networks that are characterized by our main result. }\label{fig:graphs} \end{figure} Figure \ref{fig:graphs} illustrates the two types of graphs our model predicts. 
Nested split graphs are strongly hierarchical networks: nodes are partitioned into classes according to their degree, and a higher class's neighborhood is a strict superset of any lower class's neighborhood. Ordered overlapping cliques means that we can order nodes such that every neighborhood is an interval, and the endpoints of a node's neighborhood are increasing in the node's own position in the order---imagine a ranking of nodes, and each is connected to all others that are close enough in rank.\footnote{For recent econometric work concerning the estimation of related models, see \citet{chandrasekhar2021network}.} In each case, equilibrium action levels are ordered in a way that corresponds to the network structure. Going beyond the separable games on which we focus, we also identify ordinal properties of linking incentives---\emph{consistency} and \emph{alignment}---that yield this dichotomy in a much larger class of games. We subsequently specialize our framework to interpret the counterintuitive findings of \citet{Carrelletal2013}. We represent their environment as one of positive spillovers combined with action--link substitutes---we assume that students who study together create benefits for their peers, but more time studying makes link formation and maintenance more costly. We identify conditions under which a complete graph is part of a stable outcome and further conditions under which it is uniquely so. The overarching message of these characterizations is that the complete graph becomes harder to sustain as types get more spread out---types in the middle are necessary to induce types at the extremes to link with one another.
We then look at a simple example in which students have one of three exogenous ability levels---low, medium, or high.\footnote{Interpreted as a mix of aptitude and preparation before their further investment in skills at the academy.} Although replacing just one medium-ability student with a high-ability student benefits the low-ability students in the group, additional replacements---representing the larger changes induced by the deliberate design of \cite{Carrelletal2013}---can reverse this effect. The reason is that small changes in group composition do not affect the set of stable networks---in our example, the complete network is uniquely stable at first, so all students benefit from each other's efforts---but larger changes cause the group to fragment. When the group divides into two cliques, one with high-ability students and one with low-ability students, low-ability students no longer experience peer effects from high-ability students, so the benefits of group design disappear. We discuss how the mechanism in our model closely tracks the interpretation that \citet{Carrelletal2013} give for their results. To further demonstrate the usefulness of our framework, we briefly study two additional applications. First, we introduce a model of ``status games'' based on \citet{Immorlicaetal2017}. Competitions for status entail a combination of action--link complements and negative spillovers. Imagine, for example, people who compete for social status through investments in conspicuous consumption and also choose their network connections. Those with more friends have a greater incentive to flaunt their wealth (action--link complements). On the other hand, those who do so are less attractive friends since linking with them creates negative comparisons (negative spillovers). 
In this setting, our model predicts that individuals will sort into cliques with members that invest similar amounts in status signaling---a finding consistent with stylized facts from sociological studies. Moreover, those in larger friend groups---popular individuals---engage in more conspicuous consumption due to heightened competition, and an increase in status concerns causes the social graph to fragment into smaller cliques. Our final application provides a microfoundation for ``club'' or ``group matching'' models. Theories of endogenous matching for public goods or team production often \emph{assume} that spillovers occur in disjoint cliques, which is critical for tractability. The question is then: which cliques form? We show that even if agents are free to arrange their interactions into more complex structures, there are natural conditions under which cliques are still the predicted outcome. Following our applications, we address several important questions that are tangential to our main analysis. First, we report conditions under which pairwise stable outcomes are guaranteed to exist---while existence is assured in all of our examples, it is not immediate in general because the presence of a link is a discrete outcome, and our solution concept considers joint deviations. Second, as our predicted structures are more rigid than what we observe in real networks, we discuss two ways to accommodate more complex structures. Finally, we provide a microfoundation for our solution concepts based on dynamic adjustment with forward-looking players. Our analysis generates new insights on strategic network formation while simultaneously unifying and organizing existing work. In our applications, we emphasize predictions not familiar from existing equilibrium models of joint link formation and action choice---in particular, ordered overlapping cliques.
Identifying this type of outcome as a robust prediction under certain qualitative assumptions on the strategic interaction constitutes one of our main contributions. Nested split graphs, on the other hand, have received considerable attention in earlier equilibrium models of network formation under specific payoff functions, and are thought to capture key features of inter-bank lending networks and other trading networks \citep{Konigetal2014}. By identifying ordinal payoff properties that produce these structures, we help clarify and generalize the conditions under which nested split graphs are likely to emerge, unifying several earlier results. \subsection{Related Work} \hspace{1 pc} Our analysis sits at the intersection of two strands of work in network theory: games on fixed networks and strategic network formation. Within the network games literature, some of the most widely used and tractable models feature real-valued actions and best replies that are linear in opponents' strategies \citep{Ballesteretal2006, BramoulleKranton2007,Bramoulleetal2014}; many of our examples are based on these models. \citet{Sadler2020a} explores the robustness of equilibrium characterizations based on centrality measures when payoffs are more general. In the same spirit, our analysis derives predictions from order properties of the payoff functions (jointly with the action space) rather than particular functional forms. The literature thus far on strategic interactions in endogenous networks is small. \cite{jackson2002formation} observed that combining network formation and action choice could produce different predictions from those arising from either process separately.\footnote{Similar concerns motivate \cite{Cabralesetal2011}, who study network formation with a single global search investment. 
We focus on settings in which linking efforts are more precisely directed.} They demonstrated this in a two-action coordination game with a simple linear linking cost, focusing on stochastically stable play. In many subsequent papers, the decision to form a link is made unilaterally. \citet{GaleottiGoyal2009} study a game in which players invest in information gathering and simultaneously choose links to form. Linked players share the information that they gather. Though link formation is unilateral, and the proposer of a link incurs the cost, information flows in both directions. Equilibrium networks involve a core-periphery structure. Similarly, the model of \citet{Baetz2015} entails unilateral link formation together with a linear-quadratic game of strategic complements, leading to strongly hierarchical network structures. One key difference, however, is that decreasing marginal returns to linking cause those at the top of the hierarchy to refrain from linking with one another. In \citet{HerskovicRamos2020}, agents receive exogenous signals and form links to observe others' signals, and they subsequently play a beauty contest game. In this game, a player whose signal is observed by many others exerts greater influence on the average action, which in turn makes this signal more valuable to observe. The equilibrium networks again have a hierarchical structure closely related to nested split graphs. Unilateral link formation in these papers contrasts with our model, in which stability is based on mutual consent. \citet{JoshiMahmud2016}, \citet{Hiller2017}, and \citet{Badev2021} are closer to our approach. \citet{Badev2021} studies a binary action coordination game with endogenous link formation, proposing a solution concept that interpolates between pairwise stability and pairwise Nash stability. 
This is a parametric model for estimation and simulation procedures in a high-dimensional environment; the goal is to study empirical counterfactuals rather than to derive theoretical results. \citet{Hiller2017} studies a game in which each player chooses a real-valued effort level and simultaneously proposes a set of links---a link forms if and only if both players propose it. The author then refines the set of Nash equilibria by ruling out pairwise deviations in which two players create a link between them and simultaneously adjust their actions. In the underlying game, players have symmetric payoff functions that exhibit strategic complements and positive spillovers. The setting therefore falls into the first cell in our table, and even though the solution concept differs slightly from ours, the resulting outcomes are indeed nested split graphs.\footnote{Within the pure network formation literature, without action choice, \citet{Hellmann2020} studies a network formation game in which all players are ex-ante identical and uses order properties of the payoff functions to characterize the architecture of stable networks. A key result shows that if more central players are more attractive linking partners, then stable networks are nested split graphs. By specifying an appropriate network game, one can view this finding as a special case of the action--link complements and positive spillovers cell in our table.} \citet{JoshiMahmud2016} take a slightly different approach, modeling link proposals followed by action choices in a canonical linear-quadratic game.
Their analysis includes both local and global interaction terms, but it still produces nested split graphs in the relevant cells.\footnote{\citet{bolletta2021model} offers a more recent example, studying a myopic-adjustment network formation dynamic under a particular linear peer effects specification.} Relative to this work, we significantly relax parametric and symmetry assumptions on players' payoffs, highlighting more fundamental properties that lead to this structure. In a related but distinct effort, \citet{Konigetal2014} study a dynamic network formation model in which agents choose strategic actions and myopically add and delete links. Motivated by observed patterns in interbank lending and trade networks, the authors seek to explain the prevalence of hierarchical, nested structures. The underlying incentives satisfy positive spillovers and action--link complements, and accordingly the stochastically stable outcomes are nested split graphs. Several other literatures connect to our applications. Most obviously, we highlight how our results can explain counterintuitive findings from studies on peer effects \citep{Carrelletal2013} and derive new insights on the effects of status competitions \citep{Immorlicaetal2017}. For two of the cells in Table \ref{tab:2by2}, our results state that stable structures consist of ordered cliques, and the members of a clique share similar attributes. In some cases, these cliques are disjoint. One can view this result as providing a microfoundation for group matching models. In these models, players choose what group to join rather than what links to form, so it is assumed ex ante that the graph is a collection of disjoint cliques. 
For instance, \citet{baccara2013homophily} study a model in which players choose to join clubs (i.e., cliques) before investing in club goods, finding that stable clubs exhibit homophily.\footnote{In other related work, \citet{bandyopadhyay2020pricing} study pricing for group membership in a similar setting, and \citet{chade2018matching} study the allocation of experts to disjoint teams.} Our analysis extends this finding, and one can use our results to find conditions under which the group matching assumption is without loss of generality. \section{Framework} \label{sec:framework} \hspace{1 pc} A \textbf{network game with network formation} is a tuple $\langle N, (S_i)_{i \in N}, (u_i)_{i \in N}\rangle$ consisting of the following data: \begin{itemize} \item There is a finite set $N$ of players; we write $\mathcal{G}$ for the set of all simple, undirected graphs on $N$.\footnote{The set $N$ is fixed, so we identify a graph with its set $E$ of \emph{edges} or \emph{links}---an edge is an unordered pair of players. We write $ij$ for the edge $\{i,j\}$.} \item For each player $i \in N$, there is a set $S_i$ of actions; we write $\mathcal{S} = \prod_{i \in N} S_i$ for the set of all action profiles. \item For each player $i \in N$, there is a payoff function $u_i \, : \, \mathcal{G} \times \mathcal{S} \to \mathbb{R}$. This gives player $i$'s payoff as a function of a graph $G \in \mathcal{G}$ and a profile of players' actions. \end{itemize} A pair $(G,\mathbf{s}) \in \mathcal{G} \times \mathcal{S}$ is an \textbf{outcome} of the game. Given a graph $G$, we write $G_i$ for the neighbors of player $i$; we write $G + E$ for the graph $G$ with the links $E$ added and $G-E$ for the graph $G$ with the links $E$ removed. \subsection{Solution concepts} \hspace{1 pc} Intuitively, in a solution to a network game with network formation, players should have an incentive to change neither their actions nor their links. We propose two nested solution concepts reflecting this idea. 
These parallel existing concepts in the network formation literature and extend them to our setting with action choices.\footnote{See \citet{bloch2006definitions} and \citet[Chapter 6]{jackson2008}.} \begin{definition} An outcome $(G,\mathbf{s})$ is \textbf{strictly pairwise stable}\footnote{We say the outcome is \textbf{pairwise stable} if in the second bullet we replace the weak inequality with a strict inequality (there can exist a link $ij$ such that $i$ is indifferent about deleting it), and in the third bullet we require one of the two inequalities to be strict (two players may both be indifferent about adding the missing link between them).} if the following conditions hold. \begin{itemize} \item The action profile $\mathbf{s}$ is a Nash equilibrium of the game $\langle N,(S_i)_{i \in N}, (u_i(G, \cdot))_{i \in N}\rangle$ in which $G$ is fixed and players only choose actions $s_i$. \item There is no link $ij \in G$ such that $u_i(G -ij, \mathbf{s}) \geq u_i(G, \mathbf{s})$. \item There is no link $ij \notin G$ such that both $u_i(G + ij, \mathbf{s}) \geq u_i(G, \mathbf{s})$ and $u_j(G + ij, \mathbf{s}) \geq u_j(G, \mathbf{s})$. \end{itemize} \noindent The outcome $(G,\mathbf{s})$ is \textbf{strictly pairwise Nash stable} if additionally there is no pair $(s'_i,H_i)$, consisting of an action $s'_i \in S_i$ and a subset of $i$'s neighbors, $H_i \subseteq G_i$, such that $u_i\left(G - \{ik \, : \, k \in H_i\}, (s'_i, s_{-i})\right) > u_i(G, \mathbf{s})$. \end{definition} Note that our primary definitions require strict preferences over links. This is technically convenient because it facilitates the cleanest characterization of stable networks, but one could obtain weaker results using analogous solution concepts that permit indifference. More substantively, both of these solution concepts reflect that link formation requires mutual consent. 
An outcome is strictly pairwise stable if $\mathbf{s}$ is a Nash equilibrium given the graph, no player wants to unilaterally delete a link, and no pair of players jointly wish to form a link. Pairwise Nash stability adds the stronger requirement that no player benefits from simultaneously changing her action and deleting some subset of her links. Whenever two players consider adding a link between them, they take the action profile $\mathbf{s}$ as given. Section \ref{sec:foundations} discusses this assumption and provides foundations for it. Note that standard models of network games and strategic network formation are nested in our framework. To represent a network game on a fixed graph $G$, one can take the utility functions from the network game and add terms so it is strictly optimal for all players to include exactly the links in $G$. Pairwise stable outcomes in the corresponding network game with network formation correspond to Nash equilibria in the original network game. To represent a model of network formation, simply make each $S_i$ a singleton (and let the payoffs correspond to those in the network formation model). \subsection{Separable network games} \label{sec:separable} \hspace{1 pc} We now introduce some structure on payoffs; the environment we study nests canonical models of network games and permits a simple statement of sufficient conditions for our characterization of network structures. Suppose all players have the same action set $S_i = S \subseteq \mathbb{R}$, and payoffs take the form \begin{equation}\label{eq:parametricpayoff} u_{i}(G, \mathbf{s}) = v_i(\mathbf{s}) + \sum_{j \in G_i} g(s_i, s_j) \end{equation} \noindent for each player $i$. Here, the function $v_i : \mathbb{R}^n \to \mathbb{R}$ is idiosyncratic to player $i$ and captures strategic incentives that are independent of the network. 
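Because $v_i$ does not depend on the graph, the marginal value to player $i$ of any link $ij$ under \eqref{eq:parametricpayoff} is simply $g(s_i, s_j)$. As a minimal computational sketch (our illustration, not part of the model; the functions passed in are hypothetical inputs), one can verify the three conditions of strict pairwise stability for a separable game on a finite action grid:

```python
from itertools import combinations

def payoff(i, G, s, v, g):
    """Separable payoff u_i(G, s) = v_i(s) + sum of g(s_i, s_j) over i's neighbors."""
    return v[i](s) + sum(g(s[i], s[j]) for j in G[i])

def strictly_pairwise_stable(G, s, v, g, actions):
    """Check the three bullets of strict pairwise stability.

    G maps each player to a set of neighbors; s maps players to actions;
    actions is a finite grid of feasible actions.
    """
    players = list(G)
    # 1. The action profile is a Nash equilibrium given the graph.
    for i in players:
        best = max(payoff(i, G, {**s, i: a}, v, g) for a in actions)
        if payoff(i, G, s, v, g) < best:
            return False
    # 2. No player weakly gains from deleting a link; in the separable
    #    form the marginal value of link ij to i is just g(s_i, s_j).
    for i in players:
        for j in G[i]:
            if g(s[i], s[j]) <= 0:
                return False
    # 3. No absent link that both endpoints weakly want to add.
    for i, j in combinations(players, 2):
        if j not in G[i] and g(s[i], s[j]) >= 0 and g(s[j], s[i]) >= 0:
            return False
    return True
```

For instance, with $v_i \equiv 0$ and $g(s, r) = r - \tfrac{1}{2}$, the complete graph on two players with both actions equal to $1$ passes all three checks, while the same graph with both actions equal to $0$ fails the link-deletion check.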
These payoffs embed two substantive assumptions about the incremental value of any link to its participants, namely that this value: (i) is invariant to permutations of players' labels (linking incentives are \emph{anonymous}); and (ii) depends only on the actions of the two players involved (linking incentives are \emph{separable}). These properties imply that the direct effects of links on agents' payoffs are determined bilaterally, which is a canonical feature of network games.\footnote{This refers only to the part of the payoff that affects linking incentives, since $v_i(\mathbf{s})$ is completely arbitrary. Importantly, there may be \emph{indirect} effects of other players' actions on a player's payoffs, but these are mediated through the action choices of neighbors.} For formalizations of conditions (i) and (ii) and a statement that these imply the payoff form (\ref{eq:parametricpayoff}), see Appendix \ref{sec:formalization_payoff}. These qualitative assumptions nest all models of which we are aware that study consensual link formation and action choice together. For instance, this nests the popular linear-quadratic form introduced by \cite{Ballesteretal2006}, as well as elaborations that include linking costs, such as \citet{JoshiMahmud2016}. Note that $g$ can readily incorporate a cost of linking that depends arbitrarily on player $i$'s own action $s_i$. Nevertheless, the payoff form (\ref{eq:parametricpayoff}) is considerably more restrictive than what our results require, and thus the next subsection introduces weaker, purely ordinal assumptions. Within the class of separable network games, we can state simple order conditions that determine the structure of stable networks, and these cover all of our applications. In a network game with network formation, we can view higher actions and additional links as two different activities in which a player can invest. 
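As a concrete illustration of the form \eqref{eq:parametricpayoff} (a standard specialization, with coefficients chosen purely for exposition), consider a linear-quadratic payoff with a fixed linking cost:
\[
u_i(G, \mathbf{s}) \;=\; \underbrace{a_i s_i - \tfrac{1}{2} s_i^2}_{v_i(\mathbf{s})} \;+\; \sum_{j \in G_i} \underbrace{\left( \delta\, s_i s_j - c \right)}_{g(s_i, s_j)}, \qquad \delta, c > 0,
\]
with $S \subseteq \mathbb{R}_+$. Since $g(s,r) = \delta s r - c$ is increasing in each argument on nonnegative actions, this specification exhibits both action-link complements and positive spillovers in the sense defined next.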
Given payoffs of the form \eqref{eq:parametricpayoff}, we say the game exhibits \textbf{action-link complements} if $g$ satisfies a single-crossing condition in its first argument: $$g(s,r) > (\geq) \; 0 \quad \implies \quad g(s',r) > (\geq) \; 0$$ whenever $s' > s$. Actions and links are complements if taking higher actions makes forming additional links more attractive. The game exhibits \textbf{action-link substitutes} if the above implication holds whenever $s' < s$.\footnote{This definition readily extends to arbitrary network games with network formation---a game exhibits action-link complements if $u_i(G +ij, \mathbf{s}) - u_i(G - ij, \mathbf{s}) \geq (>) \; 0$ implies $ u_i(G+ij, s'_i, s_{-i}) - u_i(G - ij, s'_i, s_{-i}) \geq (>) \; 0$ whenever $s'_i > s_i$ and action-link substitutes if the implication holds for $s'_i < s_i$.} Actions and links are substitutes if taking higher actions makes forming additional links less attractive. Analogously, we say the game exhibits \textbf{positive spillovers} if $g$ satisfies a single-crossing condition in its second argument: $$g(s,r) > (\geq) \; 0 \quad \implies \quad g(s,r') > (\geq) \; 0$$ whenever $r' > r$, and the game exhibits \textbf{negative spillovers} if the above implication holds whenever $r' < r$.\footnote{Again, these definitions naturally extend to arbitrary games---a game exhibits positive spillovers if $u_i(G+ij, \mathbf{s}) - u_i(G - ij, \mathbf{s}) \geq (>) \; 0$ implies $u_i(G + ij, s'_j, s_{-j}) - u_i(G - ij, s'_j, s_{-j}) \geq (>) \; 0$ whenever $s'_j > s_j$ and negative spillovers if the implication holds for $s'_j < s_j$.} If the game exhibits positive spillovers, players who take higher actions, all else equal, are more attractive neighbors. This assumption naturally captures situations in which the action $s_i$ represents effort that benefits neighbors (e.g., studying or gathering information).
If the game exhibits negative spillovers, then players who take higher actions are less attractive neighbors. \begin{remark} We emphasize that whether a network game with network formation exhibits action-link complements/substitutes, or positive/negative spillovers, says nothing about whether there are strategic complements or substitutes in \emph{actions}: the actions $s_i$ and $s_j$ of two different players $i$ and $j$ could be (strategic) complements, substitutes, or neither. This should be clear as the term $v_i(\mathbf{s})$ in \eqref{eq:parametricpayoff}, which is independent of the graph, can depend on the entire profile of actions $\mathbf{s}$ in a completely arbitrary way. Moreover, there is no complementarity or substitutability assumed within the function $g$. Action-link complements/substitutes tells us how taking higher actions affects a given player $i$'s \emph{incentive to form links}. Similarly, positive/negative spillovers tells us how other players' actions affect player $i$'s linking incentives. \end{remark} \subsection{Ordinal assumptions} \hspace{1 pc} The single-crossing properties introduced in the previous subsection yield orderings on players capturing their preferences for linking and their desirability as partners. These orderings are ultimately the key inputs allowing us to characterize stable outcomes. Separable payoffs are helpful for interpretation, but they are not necessary to obtain these orderings. This subsection introduces weaker, more primitive order conditions that distill precisely what is needed for our results to hold. Taking one single-crossing condition from each category (action--link complements or substitutes, positive or negative spillovers) implies the ordinal assumptions sufficient for our main characterizations of stable outcomes. 
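These single-crossing conditions are straightforward to verify numerically for a given $g$. A small sketch (ours; the grid and the example payoffs in the usage note are illustrative assumptions) that tests each of the four properties on a finite action grid:

```python
def single_crossing_first(g, grid, increasing=True):
    """Check the single-crossing condition on g's first argument over a finite grid.

    increasing=True tests action-link complements: for s' > s,
    g(s, r) >= 0 implies g(s', r) >= 0, and g(s, r) > 0 implies g(s', r) > 0.
    increasing=False tests action-link substitutes (same implications for s' < s).
    """
    pairs = [(s, t) for s in grid for t in grid if (t > s if increasing else t < s)]
    return all(
        (g(s, r) < 0 or g(t, r) >= 0) and (g(s, r) <= 0 or g(t, r) > 0)
        for (s, t) in pairs for r in grid
    )

def single_crossing_second(g, grid, increasing=True):
    """Same test on g's second argument: positive (True) or negative spillovers."""
    return single_crossing_first(lambda s, r: g(r, s), grid, increasing)
```

With $g(s,r) = sr - 1$ on the grid $\{0,1,2\}$, both action-link complements and positive spillovers hold; with $g(s,r) = -r$, spillovers are negative rather than positive.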
In the following definitions, we write $$\Delta_{ij} u_i(G, \mathbf{s}) = u_i(G + ij, \mathbf{s}) - u_i(G - ij, \mathbf{s})$$ for the marginal value of link $ij$ to player $i$ at outcome $(G,\mathbf{s})$, and we write $$S_i^+(G,\mathbf{s}) = \{j \in N \, : \, \Delta_{ij} u_j(G,\mathbf{s}) \geq 0\}$$ for the set of players with a weak incentive to link with $i$. \begin{definition}\label{def:consistent} Given a network game with network formation, linking incentives are \textbf{consistent} if there is no outcome $(G,\mathbf{s})$, and a corresponding collection of players $i,j,k,\ell$, such that $$\Delta_{ik} u_i(G,\mathbf{s}) \geq 0, \quad \Delta_{i\ell} u_i(G,\mathbf{s}) < 0, \quad \text{and}$$ $$\Delta_{jk} u_j(G,\mathbf{s}) < 0, \quad \Delta_{j\ell} u_j(G,\mathbf{s}) \geq 0.$$ \end{definition} In words, linking incentives are consistent if, given any outcome, the players agree on who is a more desirable linking partner. There are no players $i,j,k,\ell$ such that $i$ wishes to link with $k$ but not $\ell$, and $j$ wishes to link with $\ell$ but not $k$. Note this does not preclude heterogeneity in preferences over links---far from it---but if some player $i$ wishes to link with $k$ but not $\ell$, then any player who wishes to link with $\ell$ necessarily also wishes to link with $k$. 
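Consistency is likewise mechanical to check at a fixed outcome. In the sketch below (our illustration; the marginal-value table is a hypothetical input), `marginal[i][k]` plays the role of $\Delta_{ik} u_i(G, \mathbf{s})$:

```python
from itertools import permutations

def consistent(marginal, players):
    """Check Definition 2 (consistency) at a fixed outcome.

    marginal[i][k] stands for Delta_ik u_i(G, s), the marginal value to i of
    the link ik. Incentives are consistent if no players i, j, k, l exist
    with i wanting to link with k but not l while j wants l but not k.
    """
    for i, j, k, l in permutations(players, 4):
        if (marginal[i][k] >= 0 and marginal[i][l] < 0
                and marginal[j][k] < 0 and marginal[j][l] >= 0):
            return False
    return True
```

If all players rank partners by a common desirability score (player $i$ wanting $k$ whenever $k$'s score exceeds an idiosyncratic threshold for $i$), incentives are consistent; a single pair of players with crossed preferences over two partners violates the definition.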
\begin{definition} \label{def:aligned} Linking incentives are \textbf{aligned} if for any outcome $(G,\mathbf{s})$, any three players $i,j,k$ such that $$S_i^+(G,\mathbf{s}) \subseteq S_j^+(G,\mathbf{s}) \subseteq S_k^+(G,\mathbf{s}),$$ and any fourth player $\ell$, we have $$\Delta_{i \ell} u_i(G,\mathbf{s}) > (\geq) \; 0 \quad \text{and} \quad \Delta_{k \ell} u_k(G,\mathbf{s}) > (\geq) \; 0 \quad \implies \quad \Delta_{j \ell} u_j(G,\mathbf{s}) > (\geq) \; 0, \quad \text{and}$$ $$\Delta_{i \ell} u_i(G,\mathbf{s}) < 0, \quad \text{and} \quad \Delta_{k \ell} u_k(G,\mathbf{s}) < 0 \quad \implies \quad \Delta_{j \ell} u_j(G,\mathbf{s}) < 0.$$ \end{definition} Alignment relates a player's desirability as a linking partner to her own linking incentives. Suppose $k$ is a more desirable neighbor than $j$, and $j$ is a more desirable neighbor than $i$, so that $j$'s desirability is between that of $i$ and $k$. Alignment says that then $j$'s desire for links is also between that of $i$ and $k$. That is, if both $k$ and $i$ wish to link with $\ell$, then $j$ also wishes to link with $\ell$, and if both $k$ and $i$ do not wish to link with $\ell$, then $j$ also does not wish to link with $\ell$. Together, the two properties of consistency and alignment allow us to measure players' desirability as partners, and their inclination to form links, on the same one-dimensional scale. \begin{lemma}\label{lem:order} Suppose a network game with network formation has consistent and aligned linking incentives. At any outcome $(G,\mathbf{s})$, there exist weak orders $\succeq_{\text{in}}$ and $\succeq_{\text{out}}$ on the players such that $$\Delta_{ij}u_i(G,\mathbf{s}) > (\geq) \; 0 \implies \Delta_{ik} u_i(G, \mathbf{s}) > (\geq) \; 0$$ whenever $k \succeq_{\text{in}} j$, and $$\Delta_{ij}u_j(G,\mathbf{s}) > (\geq) \; 0 \implies \Delta_{ik} u_k(G, \mathbf{s}) > (\geq) \; 0$$ whenever $k \succeq_{\text{out}} j$. 
That is, if $i \succeq_{\text{in}} j$, then every player who wants to link with $j$ wants to link with $i$ as well, and if $i \succeq_{\text{out}} j$, then $i$ wants to link with every player with whom $j$ wants to link. Moreover, the two orders are either identical or directly opposed. \end{lemma} \begin{proof} See Appendix. \end{proof} Consistency and alignment may seem like they introduce a lot of structure, but they are implied by the natural single-crossing conditions introduced in the last subsection. If a game with payoffs of the form \eqref{eq:parametricpayoff} exhibits action-link complements or substitutes, and positive or negative spillovers, then linking incentives are necessarily consistent and aligned. Moreover, the orders $\succeq_{\text{in}}$ and $\succeq_{\text{out}}$, expressing players' desirability as neighbors and desire for neighbors, follow the order of the action set $S$. This result does not require $S \subseteq \mathbb{R}$---the same conclusion holds for any linearly ordered action set. \begin{lemma}\label{lem:consistentregular} Suppose a network game with network formation has payoffs of the form \eqref{eq:parametricpayoff}. If the game exhibits action-link complements or substitutes, and positive or negative spillovers, then linking incentives are consistent and aligned at any outcome. Moreover: \begin{enumerate} \item If there are action-link complements and positive spillovers, then $i \succeq_{\text{in}} j$ and $i \succeq_{\text{out}} j$ in any outcome with $s_i \geq s_j$. \item If there are action-link complements and negative spillovers, then $i \preceq_{\text{in}} j$ and $i \succeq_{\text{out}} j$ in any outcome with $s_i \geq s_j$. \item If there are action-link substitutes and positive spillovers, then $i \succeq_{\text{in}} j$ and $i \preceq_{\text{out}} j$ in any outcome with $s_i \geq s_j$. 
\item If there are action-link substitutes and negative spillovers, then $i \preceq_{\text{in}} j$ and $i \preceq_{\text{out}} j$ in any outcome with $s_i \geq s_j$. \end{enumerate} \end{lemma} \begin{proof} By definition, if the game exhibits action-link complements and positive spillovers, then any player $i$ desires more links when $s_i$ is higher and is a more attractive neighbor when $s_i$ is higher. We immediately see from \eqref{eq:parametricpayoff} that $i \succeq_{\text{in}} j$ and $i \succeq_{\text{out}} j$ whenever $s_i \geq s_j$, and the result follows. The other three cases are analogous. \end{proof} \section{The structure of stable graphs} \hspace{1 pc} How do properties of the payoff functions $(u_i)_{i \in N}$ affect stable network structures? This section derives our main results on the taxonomy of stable graphs. To state the results, we first define two classes of graphs---recall the illustration in Figure \ref{fig:graphs}. \begin{definition} A graph $G$ is a \textbf{nested split graph} if $d_j \geq d_i$ implies that $G_i \subseteq G_j \cup \{j\}$. \vspace{1 pc} \noindent A graph $G$ consists of \textbf{ordered overlapping cliques} if we can order the players $\{1,2,\ldots,n\}$ such that $G_i \cup \{i\}$ is an interval $I_i \subseteq \{1,2,\ldots,n\}$, for each $i$, and the endpoints of this interval $I_i$ are weakly increasing in $i$. \end{definition} In a nested split graph, neighborhoods are ordered through set inclusion, resulting in a strong hierarchical structure.\footnote{See \cite{Konigetal2014} and \cite{belhaj2016efficient} for more on the structure and other economic properties of these networks. A common alternative characterization is the following. Given a graph $G$, let $\mathcal{D} = (D_0, D_1,\ldots,D_k)$ denote its degree partition---players are grouped according to their degrees, and those in the (possibly empty) element $D_0$ have degree $0$. 
The graph $G$ with degree partition $\mathcal{D}$ is a nested split graph if and only if for each $\ell$ and each $i\in D_{\ell}$, we have \[G_i = \left[\,\bigcup_{j=1}^\ell D_{k + 1 - j}\,\right] \cap N_{-i}, \] in which $N_{-i}$ denotes players other than $i$---taking the intersection with this set is necessary because for $\ell>k/2$, the union includes $i$'s own partition element.} In a graph with ordered overlapping cliques, the player order induces an order on the set of maximal cliques. Each maximal clique consists of an interval of players, and both endpoints of these cliques are strictly increasing. Any graph in which every component is a clique is a special case of this structure. \subsection{General characterization} \hspace{1 pc} Our first theorem shows that, if linking incentives are consistent and aligned, then stable graphs have one of the two structures we have defined. \begin{Theorem}\label{theo:structure} Suppose a network game with network formation has consistent and aligned linking incentives, and $(G,\mathbf{s})$ is a strictly pairwise stable outcome. Then: \begin{enumerate} \item If the orders $\succeq_{\text{in}}$ and $\succeq_{\text{out}}$ are identical, the graph $G$ is a nested split graph in which players higher in the two orders have higher degrees. \item If the orders $\succeq_{\text{in}}$ and $\succeq_{\text{out}}$ are opposed, the graph $G$ consists of ordered overlapping cliques with respect to either order. \end{enumerate} \end{Theorem} \begin{proof} We begin with part (a). Fixing the outcome $(G, \mathbf{s})$, suppose $j \succeq_{\text{in}} i$ and $j \succeq_{\text{out}} i$. This means that every $k$ that wants to link with $i$ also wants to link with $j$, and $j$ wants to link with every $k$ with whom $i$ wants to link. Since $(G, \mathbf{s})$ is strictly pairwise stable, there can be no indifference about links, so any $k \neq j$ that is a neighbor of $i$ must be a neighbor of $j$.
For part (b), we show that if $i \succeq_{\text{in}} j \succeq_{\text{in}} k$ and $ik \in G$, then also $ij \in G$ and $jk \in G$---note this implies $i \preceq_{\text{out}} j \preceq_{\text{out}} k$. Since $ik \in G$, we know $i$ wants to link with $k$, and since $j \succeq_{\text{in}} k$, this means $i$ wants to link with $j$. Since $i$ wants to link with $j$, and $k \succeq_{\text{out}} i$, we know $k$ wants to link with $j$. Similarly, since $i$ wants to link with $k$ and $j \succeq_{\text{out}} i$, we know $j$ wants to link with $k$. Since $i \succeq_{\text{in}} k$, this implies $j$ wants to link with $i$. Since $(G, \mathbf{s})$ is strictly pairwise stable, there can be no indifference about links, so we conclude that $ij \in G$ and $jk \in G$. \end{proof} The characterization in Theorem \ref{theo:structure} is stark. There are essentially two network structures that can arise in stable outcomes: either neighborhoods are nested, or the network is organized into overlapping cliques of players. In case (a), if one player is ranked higher than another in the two orders, then the two neighborhoods are ordered by set inclusion. In case (b), a link between two players implies that the set of players ranked in between the two forms a clique. Strict comparisons play an important role here because a link $ij$ need not be in $G$ if both $i$ and $j$ are indifferent about adding it. \begin{remark} Implicit in this result is a novel characterization of structures that arise in pure network formation games---the definitions of consistent and aligned link preferences do not change if action sets are singletons. In this case, it is as if each player has a one-dimensional type, and linking incentives depend on these types---higher types are more attractive neighbors, and a higher ranked player either always has a stronger incentive to form links or always has a weaker incentive to form links.
Work on strategic network formation has thus far faced challenges in obtaining general results on the structure of pairwise stable graphs, and Theorem \ref{theo:structure} highlights non-trivial conditions that yield sharp predictions. \end{remark} \subsection{Separable network games}\label{sec:class} \hspace{1 pc} Recall the separable games of Section \ref{sec:separable}. Lemma \ref{lem:consistentregular} showed that single-crossing conditions in separable games imply consistent and aligned linking incentives. Combining this with Theorem \ref{theo:structure}, we obtain the following characterization of stable network structures in separable games. \begin{Cor}\label{cor:structure} Suppose a network game with network formation has payoff functions of the form \eqref{eq:parametricpayoff}. If $(G,\mathbf{s})$ is a strictly pairwise stable outcome, then: \begin{enumerate} \item If the game exhibits action-link complements and positive spillovers, then $G$ is a nested split graph in which players with higher degrees take higher actions. \item If the game exhibits action-link substitutes and negative spillovers, then $G$ is a nested split graph in which players with higher degrees take lower actions. \item If the game exhibits action-link complements and negative spillovers, or action-link substitutes and positive spillovers, then $G$ consists of ordered overlapping cliques with respect to the order of players' actions. \end{enumerate} \end{Cor} \begin{proof} The result is immediate from Theorem \ref{theo:structure} and Lemma \ref{lem:consistentregular}. \end{proof} \begin{remark} While payoffs of the form \eqref{eq:parametricpayoff} encompass all of our applications in the next section, we want to highlight that Theorem \ref{theo:structure} applies much more broadly. There are at least two natural extensions of this class of games for which an analogous result is immediate. 
First, one could replace the function $g(s_i, s_j)$ in the sum with a term of the form $g(s_i, s_j) - c_i(s_i)$---having added an idiosyncratic cost of linking, the conclusions of Corollary \ref{cor:structure} continue to hold for $g(s_i, s_j)$ increasing/decreasing in each argument. Second, one could replace $g(s_i, s_j)$ with a term of the form $g\left(h_i(s_i), h_j(s_j)\right)$, in which $\{h_i\}_{i \in N}$ are arbitrary idiosyncratic functions of the players' actions. The orders $\succeq_{\text{in}}$ and $\succeq_{\text{out}}$ would then follow the order of the function values $\{h_i\}_{i \in N}$ rather than the order of players' actions. \end{remark} \section{Perverse consequences of group design}\label{sec:carrell} \hspace{1 pc} We now turn to our first application, using the tractability our characterizations afford to incorporate a network formation analysis into the peer effects model of \citet{Carrelletal2013}. \citet{Carrelletal2013} estimated academic peer effects among first-year cadets at the US Air Force Academy (using a standard model that took networks as exogenous) and then used these estimates to inform the assignment of new cadets to squadrons---administratively designed peer groups of about 30 cadets. Based on a first cohort of randomly assigned squadrons, the authors concluded that being in a squadron with higher-performing peers\footnote{Specifically, those entering with relatively high scores on the verbal section of the SAT exam.} led to better academic performance among less prepared cadets. In the treatment group of a later cohort, incoming cadets with less preparation were systematically placed in squadrons with larger numbers of high-ability peers. While the researchers' goal was to improve the performance of the less prepared cadets,\footnote{The precise objective they were maximizing was the performance of the bottom third of cadets.} the intervention ultimately backfired: these students performed significantly worse. 
In this section, we present a model showing that our theory can simultaneously explain two distinctive features of the Air Force study: \begin{enumerate} \item When peer group composition changes slightly, low-ability cadets are better off when they have more high-ability peers, and \item Larger changes in peer group composition eliminate or even reverse this effect. \end{enumerate} \noindent Broadly, our results show that stable graphs become more fragmented if private incentives or abilities are more heterogeneous. Thus, placing cadets of high and low abilities together, without cadets of middle ability to bridge the gap, can result in isolated cliques that eliminate the desired spillovers. Methodologically, this application illustrates that our main theorems can apply to a natural specification of link formation and action choice suited to a practical setting. Our characterization implies an overlapping cliques structure that makes it tractable to analyze these outcomes and their welfare implications both numerically and analytically. \subsection{A model} Consider a network game with network formation in which $S_i = \mathbb{R}_+$ for each player $i$, and payoffs take the form $$u_i(G, \mathbf{s}) = b_i s_i + \alpha s_i \sum_{j \in G_i} s_j - \frac{1}{2}(1 + d_i) s_i^2,$$ in which $d_i = |G_i|$ is player $i$'s degree, and $\alpha \in [0,1]$. Holding the graph fixed, this is a standard linear-quadratic network game of strategic complements. Taking $v_i(s_i) = b_i s_i - \frac{1}{2}s_i^2$ and $g(s_i, s_j) = \alpha s_i s_j - \frac{1}{2}s_i^2$, we see it also falls into the class \eqref{eq:parametricpayoff}. There are positive spillovers, as an increase in $s_j$ makes a link to player $j$ more valuable.
Moreover, links and actions are substitutes as $g(s_i, s_j)$ satisfies the requisite single crossing property---as $s_i$ increases, the benefit to $i$ of linking to $j$ decreases and eventually turns negative, implying those who invest a lot of effort find linking too costly.\footnote{A natural interpretation is that studying and socializing each take time away from the other activity. While studying together can also strengthen social ties, the substantive assumption here is that a marginal hour studying together is less conducive to friendship formation than that same hour spent together on a leisure activity.}$^{, }$\footnote{One might alternatively use a payoff function with a hard resource constraint split between studying and socializing---our structural results would still apply---but we believe a flexible allocation is more realistic. Cadets spend time on other activities, such as sleep and solitary leisure, that can also be reallocated.} One can readily check that in a pairwise stable outcome, players $i$ and $j$ are neighbors only if $\frac{s_j}{2 \alpha} \leq s_i \leq 2 \alpha s_j$. A pairwise stable outcome satisfies the first-order condition $$s_i = \frac{1}{1 + d_i}\left(b_i + \alpha \sum_{j \in G_i} s_j\right)$$ for each $i \in N$. Writing $\tilde{G}$ for a matrix with entries $\tilde{g}_{ij} = \frac{1}{d_i + 1}$ if $ij \in G$ and $0$ otherwise, and $\tilde{\mathbf{b}}$ for a column vector with entries $\frac{b_i}{d_i + 1}$, we can express this in matrix notation as $$\mathbf{s} = \tilde{\mathbf{b}} + \alpha \tilde{G} \mathbf{s} \quad \implies \quad \mathbf{s} = (I - \alpha \tilde{G})^{-1}\tilde{\mathbf{b}}.$$ For $\alpha \in [0,1]$, the solution for $\mathbf{s}$ is unique and well-defined in any graph $G$, and it is an equilibrium of the game holding $G$ fixed. 
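As a numerical sanity check on the matrix formula above, the following sketch (our own illustration; the helper name and example values are assumptions, not part of the model) solves $\mathbf{s} = (I - \alpha \tilde{G})^{-1}\tilde{\mathbf{b}}$ on a small complete graph and verifies the first-order conditions.

```python
import numpy as np

def equilibrium_actions(adj, b, alpha):
    """Solve s = (I - alpha*G_tilde)^{-1} b_tilde for the unique
    equilibrium on a fixed graph (adj is a 0/1 adjacency matrix)."""
    adj = np.asarray(adj, dtype=float)
    b = np.asarray(b, dtype=float)
    d = adj.sum(axis=1)                       # degrees d_i
    G_tilde = adj / (d + 1)[:, None]          # entries 1/(d_i + 1) where ij in G
    b_tilde = b / (d + 1)
    n = len(b)
    return np.linalg.solve(np.eye(n) - alpha * G_tilde, b_tilde)

# Complete graph on five players with heterogeneous private incentives
n = 5
adj = np.ones((n, n)) - np.eye(n)
b = np.array([4.0, 4.0, 6.0, 6.0, 9.0])
s = equilibrium_actions(adj, b, alpha=1.0)

# Verify the first-order condition s_i = (b_i + alpha*sum_{j in G_i} s_j)/(1 + d_i)
d = adj.sum(axis=1)
assert np.allclose(s, (b + adj @ s) / (1 + d))
```

Since $\alpha \tilde{G}$ has row sums $\alpha d_i/(d_i+1) < 1$, the linear system is always invertible for $\alpha \in [0,1]$, matching the uniqueness claim in the text.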
Notice that when $i$ plays her best response $s_i$, her payoff in the graph $G$ is $$u_i = \frac{1}{2}(1 + d_i) s_i^2.$$ Hence, if $(G, \mathbf{s})$ and $(G', \mathbf{s}')$ are two pairwise stable outcomes, player $i$ is better off under $(G', \mathbf{s}')$ if and only if $(1+d'_i)(s'_i)^2 > (1 + d_i) s_i^2$. \subsection{The structure of stable outcomes} \hspace{1 pc} How do private incentives $b_i$, and the strength of spillovers $\alpha$, affect the set of stable outcomes? As a general rule, stable graphs become more fragmented if spillovers $\alpha$ are small, if private incentives $b_i$ are more spaced out, and if the population size $n$ is large. As a first step towards our results for this model, the following lemma characterizes equilibrium actions for players connected in a clique. \begin{lemma}\label{lem:cliqueaction} Suppose $\alpha \leq 1$, and the players in a set $C$ form a clique---meaning $i$ and $j$ are linked for every $i,j \in C$---with no other links. Writing $\overline{b}_C = \frac{1}{|C|} \sum_{j \in C} b_j$ for the average private incentive, each player $i \in C$ has a unique equilibrium action \begin{equation}\label{eq:cliqueaction} s_i = \frac{1}{\alpha + |C|} \left(b_i + \frac{\alpha |C| \overline{b}_C}{\alpha + (1-\alpha)|C|}\right). \end{equation} Consequently, the payoff to player $i$ in clique $C$ is \begin{equation}\label{eq:welfare} u_i = \frac{1}{2}\frac{|C|}{(\alpha + |C|)^2}\left(b_i + \frac{\alpha |C| \overline{b}_C}{\alpha + (1-\alpha)|C|}\right)^2.
\end{equation} \end{lemma} \begin{proof} From the first-order condition, we have $$s_i = \frac{1}{|C|}\left(b_i - \alpha s_i + \alpha \sum_{j \in C} s_j\right) \quad \implies \quad b_i = (|C| + \alpha)s_i - \alpha \sum_{j \in C} s_j.$$ Summing over all $i \in C$, we get $$|C| \overline{b}_C = (|C| + \alpha - \alpha |C|)\sum_{j \in C} s_j \quad \implies \quad \sum_{j \in C} s_j = \frac{|C| \overline{b}_C}{\alpha + (1-\alpha)|C|}.$$ Substituting into the first-order condition and solving yields the result. \end{proof} One consequence of Lemma \ref{lem:cliqueaction} is that welfare is higher in larger cliques only if spillovers are sufficiently strong. For any $\alpha < 1$, the denominator in \eqref{eq:welfare} contains a higher power of $|C|$ than the numerator. Holding private incentives fixed, this means that utility declines if the clique becomes large enough. In the case with $\alpha = 1$, equation \eqref{eq:welfare} gives $$u_i = \frac{|C|}{2}\frac{\left(b_i + |C| \overline{b}_C\right)^2}{(1 + |C|)^2}.$$ If $|C|$ gets larger without significantly affecting the average private incentive $\overline{b}_C$, this payoff increases: the first factor is proportional to $|C|$, and the second factor approaches $\overline{b}_C^2$. Our first proposition characterizes conditions under which the empty graph is part of a pairwise stable outcome. Note that if $G$ is empty, the unique equilibrium actions are $s_i = b_i$ for each player $i$. In the following results, we always assume players are ordered so that $b_1 \leq b_2 \leq \cdots \leq b_n$, and we write $\overline{b} = \frac{1}{n} \sum_{i=1}^n b_i$ for the average private incentive. \begin{Prop}\label{prop:empty} The empty graph is part of a pairwise stable outcome if and only if $\frac{b_{i+1}}{b_i} \geq 2\alpha$ for each $i = 1,2,\ldots,n-1$. 
Moreover, there exists a threshold $\underline{\alpha} \geq \frac{1}{2}$ such that, whenever $\alpha < \underline{\alpha}$, no nonempty graph is possible in a pairwise stable outcome. If the $b_i$ are all distinct, then the threshold satisfies $\underline{\alpha} > \frac{1}{2}$. Otherwise, we have $\underline{\alpha} = \frac{1}{2}$. \end{Prop} \begin{proof} See Appendix. \end{proof} The first part of Proposition \ref{prop:empty} tells us that the empty graph is stable whenever the private incentives are sufficiently spaced out. How spaced out they need to be is increasing in the strength of spillovers. Moreover, if the spillover parameter $\alpha$ is small enough, then the empty graph is the only graph that can appear in a pairwise stable outcome. Our next result provides an analogous characterization of conditions under which the complete graph is part of a pairwise stable outcome. \begin{Prop}\label{prop:complete} The following claims hold: \begin{enumerate} \item There exists a pairwise stable outcome $(G, \mathbf{s})$ in which $G$ is complete if \begin{equation}\label{eq:completeexist} \frac{b_n}{b_1} \leq \frac{\alpha(1 + n)}{(1-\alpha)(2\alpha + n)}. \end{equation} \item Moreover, if \begin{equation}\label{eq:completeunique}\frac{b_n}{b_1} < \frac{2\alpha}{\alpha + (1-\alpha)(n-1)}, \end{equation} then there is a unique pairwise stable outcome, and in it, $G$ is complete. \item Conversely, if $$b_n\left(2 \alpha^2 + n(1 - 2\alpha^2)\right) > b_1 \alpha \left(4 \alpha - 1 + 2n(1-\alpha)\right),$$ then the complete graph is not part of any pairwise stable outcome. \end{enumerate} \end{Prop} \begin{proof} See Appendix.\end{proof} The main message of Proposition \ref{prop:complete} is that the complete graph becomes harder to sustain as private incentives $b_i$ get more spread out: if the ratio $\frac{b_n}{b_1}$ is too high, there are pairwise stable outcomes with disconnected graphs, and the complete graph may not be part of any stable outcome. 
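The stability of the complete graph can also be probed directly. The sketch below is our own illustration (the function name and example values are assumptions): it solves the first-order conditions on the complete graph and checks whether every link survives, using the fact that, holding actions fixed, a link $ij$ is worth keeping only if $\frac{s_j}{2\alpha} \leq s_i \leq 2\alpha s_j$.

```python
import numpy as np

def complete_graph_stable(b, alpha):
    """Solve the first-order conditions on the complete graph, then check
    that no player gains by deleting a link (holding actions fixed):
    this requires s_i <= 2*alpha*s_j for every ordered pair, i.e.
    max(s) <= 2*alpha*min(s)."""
    b = np.asarray(b, dtype=float)
    n = len(b)
    G_tilde = (np.ones((n, n)) - np.eye(n)) / n   # d_i = n - 1, entries 1/n
    s = np.linalg.solve(np.eye(n) - alpha * G_tilde, b / n)
    return bool(s.max() <= 2 * alpha * s.min())

# Here the sufficient condition (b_n/b_1 = 2.25 below the bound in the
# first claim) holds, and the direct check agrees:
print(complete_graph_stable([4, 6, 9], alpha=0.8))    # True
# With more spread-out incentives, the extreme link breaks down:
print(complete_graph_stable([4, 6, 16], alpha=0.8))   # False
```

The binding pair is always the highest action against the lowest, which is why a single ratio comparison suffices on the complete graph.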
An important consequence of the first claim is that the complete graph is part of a pairwise stable outcome whenever $\alpha$ is sufficiently close to $1$. If $\alpha = 1$, the complete graph is always part of a pairwise stable outcome. More generally, stronger spillovers encourage more connected graphs. The proof implies that claims (a) and (b) are tight whenever $b_1 = b_2 = \cdots = b_{n-1}$---if \eqref{eq:completeunique} fails, there is a pairwise stable outcome in which the first $n-1$ players form a clique, and player $n$ is isolated. As $n$ gets larger, or $\alpha$ gets smaller, the inequalities \eqref{eq:completeexist} and \eqref{eq:completeunique} become harder to satisfy, and outcomes with disconnected graphs become more likely. Note that even though these results are specific to complete graphs, the analysis readily extends to subsets of players, giving both necessary and sufficient conditions for cliques to form. \subsection{The importance of intermediate-ability types} \hspace{1 pc} Specializing the model to include just three productivity types allows us to transparently relate the model to the findings of \citet{Carrelletal2013}. Suppose $\alpha = 1$, and private incentives take one of three values $b_i \in \{b_\ell, b_m, b_h\}$, satisfying the following inequalities: $2 < b_h / b_\ell < 4$, $b_h / b_m < 2$, and $b_m / b_\ell < 2$. We can interpret type $b_\ell$ as having low ability, type $b_h$ as having high ability, and type $b_m$ as having intermediate ability. Given an outcome $(G, \mathbf{s})$, we interpret the action $s_i$ as the academic performance of cadet $i$, and links are friendships through which peer effects operate. Since $\alpha = 1$, the complete graph is always part of a pairwise stable outcome. However, it should also be clear that without any middle types, an outcome with two isolated cliques---one of low types taking action $s = b_\ell$ and one of high types taking action $s = b_h$---is also pairwise stable. 
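Both features of this segregated outcome can be checked directly against Lemma \ref{lem:cliqueaction}. The sketch below is our own illustration (the function name and sample values are assumptions): with $\alpha = 1$ and identical incentives each clique member simply plays $b$, and a cross-clique link fails because $b_h > 2 b_\ell$.

```python
import numpy as np

def clique_action(b, alpha):
    """Equilibrium actions in an isolated clique, eq. (cliqueaction):
    s_i = (b_i + alpha*|C|*mean(b)/(alpha + (1-alpha)*|C|)) / (alpha + |C|)."""
    b = np.asarray(b, dtype=float)
    C = len(b)
    return (b + alpha * C * b.mean() / (alpha + (1 - alpha) * C)) / (alpha + C)

# Cross-check the closed form against the first-order conditions:
# |C|*s_i = b_i + alpha*(sum_j s_j - s_i) on an isolated clique.
b, alpha = np.array([4.0, 6.0, 9.0]), 0.8
s = clique_action(b, alpha)
assert np.allclose(len(b) * s, b + alpha * (s.sum() - s))

# With alpha = 1 and identical incentives, every member plays b ...
b_low, b_high = 4.0, 9.0
assert np.allclose(clique_action([b_low, b_low], 1.0), b_low)
assert np.allclose(clique_action([b_high] * 3, 1.0), b_high)

# ... and a cross-clique link is unprofitable because the linking benefit
# g(s_i, s_j) = s_i*s_j - s_i^2/2 requires s_i <= 2*s_j; here the high
# type refuses, since b_high > 2*b_low.
assert b_high > 2 * b_low
```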
This outcome is clearly worse for low-type players as they both take lower actions and have fewer connections. Moreover, this outcome is the only one that survives a natural refinement. We call a pairwise stable outcome $(G,\mathbf{s})$ \textbf{uncoordinated} if there exists a sequence of graphs and action profiles $(G^{(0)}, \mathbf{s}^{(0)}, G^{(1)}, \mathbf{s}^{(1)},\ldots)$, ending at $(G, \mathbf{s})$, in which \begin{itemize} \item $G^{(0)}$ is empty, \item $\mathbf{s}^{(k)}$ is a Nash equilibrium holding $G^{(k)}$ fixed, and \item we have $G^{(k+1)} = G^{(k)} + ij$ for some pair $ij$, and both $u_i(G^{(k+1)}, \mathbf{s}^{(k)}) \geq u_i(G^{(k)}, \mathbf{s}^{(k)})$ and $u_j(G^{(k+1)}, \mathbf{s}^{(k)}) \geq u_j(G^{(k)}, \mathbf{s}^{(k)})$ with at least one strict inequality. \end{itemize} \noindent In words, a pairwise stable outcome is uncoordinated if it is reachable through myopically beneficial link additions, starting from an empty graph and assuming that players reach a Nash equilibrium action profile following each new link.\footnote{This selection criterion implicitly assumes that players have no prior relationships at the start of the adjustment process. Given information on prior relationships, one could adapt this criterion to select an outcome reachable from the initial state.} When both the complete graph and segregated cliques are pairwise stable, there is good reason to expect the latter outcome in practice---uncoordinated stable outcomes formalize this idea. However, if we include enough middle types, we can always eliminate the bad pairwise stable outcome. \paragraph{Three illustrative squadrons} Suppose there are $5$ cadets, and we take $\{b_\ell, b_m, b_h\} = \{4,6,9\}$.
We now assess stable outcomes for three different squadron compositions: \begin{itemize} \item Squadron 1: $\mathbf{b} = (4,4,6,6,9)$ \item Squadron 2: $\mathbf{b} = (4,4,6,9,9)$ \item Squadron 3: $\mathbf{b} = (4,4,9,9,9)$ \end{itemize} \begin{figure} \centering \subfloat[Squadron 1]{{\includegraphics[width=0.3\textwidth]{illustrations/squadron1.png} }}\quad \subfloat[Squadron 2]{{\includegraphics[width=0.3\textwidth]{illustrations/squadron2.png} }}\quad \subfloat[Squadron 3]{{\includegraphics[width=0.3\textwidth]{illustrations/squadron3.png} }} \caption{An illustration of the stable outcomes for the three squadrons. Ability levels $b_i$ appear inside each node, while equilibrium actions $s_i$ are next to the node.}\label{fig:squadrons} \end{figure} \noindent In each successive squadron, we replace a cadet of intermediate ability with one of high ability, and we are interested in how the actions and welfare of the low-ability cadets change. In the context of \cite{Carrelletal2013}, the first two squadrons represent combinations that should occur frequently in the chance assignments of the first cohort, while the last one represents the designed groups.\footnote{This is consistent with the facts reported in \cite{Carrelletal2013}, who note that the protocol for designing squadrons, in order to group low-ability cadets with many high-ability peers, tended to exclude cadets of intermediate ability and place them into more homogeneous squadrons, which we do not discuss here.} In the first two squadrons, the unique pairwise stable outcome involves a complete graph. In squadron 1, the action vector is $\mathbf{s} = \left(5 \frac{1}{2}, 5 \frac{1}{2}, 5\frac{5}{6},5\frac{5}{6},6 \frac{1}{3}\right)$, and in squadron 2, the action vector is $\mathbf{s} = \left(6,6,6\frac{1}{3}, 6 \frac{5}{6}, 6 \frac{5}{6}\right)$.
From this we see that adding a second high-ability cadet to the squadron increases the performance of low-ability cadets from $5 \frac{1}{2}$ to $6$, and low-ability cadets benefit from this small change in group composition. What happens if we add another high-ability cadet? In squadron 3, the unique uncoordinated pairwise stable outcome involves two separate cliques: the two low-ability cadets form one clique, the three high-ability cadets form the other, and the action vector is $\mathbf{s} = (4,4,9,9,9)$. A larger change in the group composition results in a marked decline in performance for the low-ability cadets. \subsection{Discussion} \hspace{1 pc} The predicted outcome in the designed squadrons is fragmentation: a critical mass of high-ability cadets forms their own clique, which is consistent with the explanation that \citet{Carrelletal2013} give for the unintended consequences they observed. Surveying their participants subsequently, they found that treatment squadrons had significantly higher rates of ability homophily than control squadrons.\footnote{These calculations controlled for opportunities to link.} Indeed, even though the designed squadrons had more high-ability cadets (compared to a random squadron), the low-ability cadets in those squadrons were actually less likely to have high-ability cadets as study partners. This is strong evidence of a new force preventing the peer effects from materializing. Our theory explains this via the strategic forces arising from endogenous network formation. Although the uncoordinated refinement is particularly compelling in this setting---cadets generally do not know one another beforehand---note that the complete graph is still part of a pairwise stable outcome in the ``designed'' squadron $3$, with corresponding action vector $\mathbf{s} = \left(6\frac{1}{2}, 6\frac{1}{2}, 7\frac{1}{3}, 7\frac{1}{3}, 7\frac{1}{3}\right)$.
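The action vectors above all follow from the $\alpha = 1$ clique formula $s_i = (b_i + n\overline{b})/(1+n)$ applied to the full squadron. A short sketch (our own illustration; names and the welfare comparison are assumptions drawn from the formulas in the text) reproduces them:

```python
import numpy as np

def complete_graph_actions(b):
    """alpha = 1 clique formula on the whole squadron: s_i = (b_i + n*mean(b))/(1+n)."""
    b = np.asarray(b, dtype=float)
    n = len(b)
    return (b + n * b.mean()) / (1 + n)

squadrons = {1: [4, 4, 6, 6, 9], 2: [4, 4, 6, 9, 9], 3: [4, 4, 9, 9, 9]}
actions = {k: complete_graph_actions(b) for k, b in squadrons.items()}

assert np.isclose(actions[1][0], 5.5)   # squadron 1 low types
assert np.isclose(actions[2][0], 6.0)   # squadron 2: the small change helps
assert np.isclose(actions[3][0], 6.5)   # squadron 3, IF the complete graph formed

# In the uncoordinated stable outcome for squadron 3, the low types instead
# split into a two-person clique with s = 4 and payoff (1/2)*2*4^2, far below
# their complete-graph payoff (1/2)*5*6.5^2.
u_segregated = 0.5 * 2 * 4.0**2
u_complete = 0.5 * 5 * actions[3][0]**2
assert u_segregated < u_complete
```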
This suggests that if a policymaker undertakes a more coordinated effort to facilitate collaboration, it may be possible for stable friendship networks to mediate peer effects between low- and high-ability students. \section{Other applications} \hspace{1 pc} We briefly address two additional applications. We first study \emph{status games}, in which payoffs incorporate social comparisons---this allows us to interpret stylized facts about clique formation. We then discuss how our analysis can provide a foundation for group matching models that assume a coarse notion of network formation in which agents choose cliques in which to participate. Prior work studying network games together with network formation focuses predominantly on the case of action-link complements and positive spillovers, and such models predict nested split graphs. In contrast, our main applications fall into cells in which stable graphs consist of ordered overlapping cliques. \subsection{Status games and ordered cliques} \label{sec:statusgames} \hspace{1 pc} A natural model of competitions for status features action-link complements and negative spillovers. For instance, when people care about their relative position in their social neighborhood, (i) having more friends who engage in conspicuous consumption creates stronger incentives to consume more, but at the same time (ii) those who consume conspicuously are less attractive as friends. \cite{jackson2019friendship} argues that many social behaviors (e.g., binge drinking) have the same properties: those with more friends find these behaviors more rewarding, but they exert negative externalities across neighbors (e.g., due to health consequences or crowding out more productive behaviors). More generally, this pattern applies to any domain in which friends' achievement drives one to excel, but there is disutility from negative comparisons among friends.
Our theory entails that such situations drive the formation of social cliques ordered according to both their popularity and their effort at the activity in question. This prediction agrees with anthropological and sociological studies documenting the pervasiveness of ranked cliques. For instance, \citet{davis1967structure} formalize the theory of \citet{homans1950human}, asserting that small or medium-sized groups (e.g., departments in workplaces, grades in a school) are often organized into cliques with a ranking among them in terms of their sociability and status-conferring behaviors.\footnote{\citet{davis1967structure} discuss purely graph-theoretic principles that guarantee some features of a ranked-cliques graph, but do not have a model of choices.} \citet{adler1995dynamics} conduct an ethnographic study of older elementary-school children that highlights the prevalence of cliques. The authors argue that status differentiation is clear across cliques, and indeed that there are unambiguous orderings, with one clique occupying the ``upper status rung of a grade'' and ``identified by members and nonmembers alike as the `popular clique.'\,'' This study also emphasizes the salience of status comparisons with more popular individuals, consistent with our negative spillovers assumption. Building on this ethnographic work, \cite{gest2007features} carry out a detailed quantitative examination of the social structures in a middle school, with a particular focus on gender differences.
The authors' summary confirms the ethnographic narrative: ``girls and boys were similar in their tendency to form same-sex peer groups that were distinct, tightly knit, and characterized by status hierarchies.'' Within the economics literature, \citet{Immorlicaetal2017} introduce a framework in which players exert inefficient effort in a status-seeking activity and earn disutility from network neighbors who exert higher effort---we can view this as a model of conspicuous consumption with upward-looking comparisons. The authors assume an exogenous network and explore how the network structure influences individual behavior. Formally, the authors take $S_i = \mathbb{R}_+$ for each player $i$, and payoffs are $$u_i(\mathbf{s}) = b_i s_i - \frac{s_i^2}{2} - \sum_{j \in G_i} g_{ij}\max\{s_j - s_i, 0\},$$ in which $g_{ij} \geq 0$ for each $ij \in G$. The paper shows that an equilibrium partitions the players into classes making the same level of effort, and the highest class consists of the subset of players that maximizes a measure of group cohesion. Our framework makes it possible to endogenize the network in this model. Under a natural extension of the payoff function, the classes that emerge in equilibrium form distinct cliques in the social graph. Consider a network game with network formation in which $S_i = \mathbb{R}_+$ for each $i$, and payoffs take the form $$u_i(G, \mathbf{s}) = b s_i - \frac{s_i^2}{2} + \sum_{j \in G_i} \left(1 - \delta \max\{s_j - s_i, 0\} \right).$$ In this game, player $i$ earns a unit of utility for each neighbor,\footnote{Note the graph here is unweighted.} but suffers a loss $\delta(s_j - s_i)$ if neighbor $j$ invests more effort. There are no other linking costs. To highlight the role of network formation, rather than individual incentives, we also specialize the model so that all players have the same private benefit $b$ for effort.
The game clearly falls into the class \eqref{eq:parametricpayoff} of separable network games, with negative spillovers and weak action-link complements. Hence, stable outcomes consist of ordered overlapping cliques, and we can only have $ij \in G$ if $|s_i - s_j| \leq \frac{1}{\delta}$. For the purposes of this example, we restrict attention to outcomes in which the cliques partition the players. Moreover, following \citet{Immorlicaetal2017}, we focus on maximal equilibria of the status game, with players taking the highest actions they can sustain given the graph. Since all players have the same private benefit $b$, all players in a clique play the same action, and the maximum equilibrium action in a clique of size $k$ is $b + (k-1) \delta$. Two features of stable outcomes stand out. First, those in large groups take higher actions---popular individuals invest more in status signaling. Second, as status concerns increase, the graph can fragment. Let $c^*$ denote the smallest integer such that $c^* \delta \geq \frac{1}{\delta}$---this is the unique integer satisfying $\delta \in [1/\sqrt{c^*}, 1/\sqrt{c^*-1})$. If $i$ and $j$ are in different cliques, we must have $|s_i - s_j| \geq \frac{1}{\delta}$, which implies the cliques differ in size by at least $c^*$. The larger $c^*$ is, the more cohesive stable networks are. If there are $n$ players in total, and $\delta < \frac{1}{\sqrt{n-2}}$, then the complete graph is the only stable outcome. As $\delta$ increases, meaning status concerns grow stronger, stable outcomes can involve more fragmented graphs. If $\delta \geq 1$, then separate cliques need only differ in size by one player, and the maximal number of cliques is the largest integer $k$ such that $\frac{k(k+1)}{2} \leq n$ (which is approximately $\sqrt{2n}$).
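The arithmetic behind this fragmentation bound is easy to sketch. The code below is our own illustration (function names are assumptions): $c^* = \lceil 1/\delta^2 \rceil$, and a greedy count of cliques of sizes $1, 1+c^*, 1+2c^*, \ldots$ bounds how many cliques can coexist among $n$ players.

```python
import math

def min_size_gap(delta):
    """c* = smallest integer with c*delta >= 1/delta, i.e. ceil(1/delta^2):
    two separate cliques in a stable outcome must differ in size by >= c*."""
    return math.ceil(1 / delta**2)

def max_num_cliques(n, delta):
    """Greedily pack cliques of sizes 1, 1+c*, 1+2c*, ... among n players --
    an upper bound on fragmentation in a stable clique partition."""
    c, size, used, count = min_size_gap(delta), 1, 0, 0
    while used + size <= n:
        used += size
        count += 1
        size += c
    return count

assert min_size_gap(1.0) == 1          # cliques may differ by a single player
assert max_num_cliques(10, 1.0) == 4   # sizes 1, 2, 3, 4 sum to exactly 10
assert max_num_cliques(10, 0.3) == 1   # delta < 1/sqrt(n-2): a single clique
```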
\subsection{Foundations for group-matching models} \label{sec:groupmatching} \hspace{1 pc} Models of endogenous matching that go beyond standard pair matching frameworks often posit that individuals belong to a \emph{group} of others. Externalities and strategic interactions then occur within or across groups---with the crucial feature that payoffs are invariant to permutations of agents within groups. In essence, these models constrain the network that can form, assuming disjoint cliques. For example, \citet{baccara2013homophily} study a setting in which individuals join groups (e.g., social clubs) and then choose how much to contribute to an activity within the group. These contributions affect the payoffs of other group members symmetrically. Similarly, \citet{chade2018matching} model the allocation of experts to teams. These experts share information within their teams, benefiting all team members equally, but not across teams. The interactions motivating these models are not so constrained in reality---there is no reason why pairs cannot meet outside the groups, and in many cases a person could choose to join multiple groups. However, assuming that interactions happen in groups allows simplifications that are essential to the tractability of these models. To what extent are these restrictions without loss of generality? Our results allow us to provide simple sufficient conditions. For this section, we assume the common action set $S$ is a closed interval in $\mathbb{R}$, and each player has one of finitely many types---write $t_i \in T$ for player $i$'s type. Payoffs take the form \begin{equation}\label{eq:groupmatch} u_i(G, \mathbf{s}) = v(s_i, t_i) + \sum_{j \in G_i} g(s_i, s_j), \end{equation} in which $v$ and $g$ are continuous. We further assume that players have unique best responses, holding the graph and other players' actions fixed.
Write $s^*_t = \argmax_{s \in S} v(s,t)$ for the action that a type $t$ player would take if isolated with no neighbors---this is the \emph{privately optimal action}. Payoffs exhibit a \emph{weak preference for conformity} if player $i$'s optimal action always lies somewhere in between her privately optimal benchmark and the actions of her neighbors. That is, for $\hat{s} = \argmax_{s_i \in S} u_i(G, s_i, s_{-i})$, we have $$\min\{s^*_{t_i}, \min_{j \in G_i}\{ s_j \} \} \leq \hat{s} \leq \max\{s^*_{t_i}, \max_{j \in G_i}\{ s_j \} \}$$ for all $i$ and $G$. We say that types form \emph{natural cliques} if there exists a partition $\{T_1, T_2,\ldots,T_K\}$ of $T$ such that \begin{itemize} \item $g\left(s^*_t, s^*_{t'}\right) \geq 0$ for any $t,t' \in T_k$ and any $k$. \item Either $g\left(s^*_t, s^*_{t'}\right) \leq 0$ or $g\left(s^*_{t'}, s^*_t\right) \leq 0$ with at least one strict inequality for any $t \in T_k$ and $t' \in T_\ell$ with $k \neq \ell$. \end{itemize} \noindent In words, this means that if all players were to choose their privately optimal actions, and form the network taking those actions as given, then disjoint cliques based on the partition of types would be pairwise stable. If payoffs exhibit a weak preference for conformity, these same cliques remain pairwise stable when players can change their actions. \begin{Prop}\label{prop:disjoint} Suppose a network game with network formation has payoffs of the form \eqref{eq:groupmatch}, exhibits a weak preference for conformity, and types form natural cliques. If the game exhibits either positive spillovers and action--link substitutes or negative spillovers and action--link complements, then there exists a pairwise stable outcome in which the network is exactly the partition into natural cliques. \end{Prop} \begin{proof} We carry out the proof assuming positive spillovers and action--link substitutes---the other case is analogous. 
Since types form natural cliques, there is a partition $\{T_1,T_2,\ldots,T_K\}$ of types such that, when playing the privately optimal actions, players have an incentive to link if and only if their types are in the same element of the partition. Suppose this graph forms. We show it is part of a pairwise stable outcome. For each $T_k$ let $\underline{s}_k$ and $\overline{s}_k$ denote, respectively, the lowest and highest values of $s^*_t$ over types $t \in T_k$. Continuity, together with the weak preference for conformity, implies that there exists an equilibrium in actions with $s_i \in [\underline{s}_k, \overline{s}_k]$ for every player $i$ with type $t_i \in T_k$. Given two such players $i$ and $j$, we have $$g(s_i, s_j) \geq g(s_i, \underline{s}_k) \geq g(\overline{s}_k, \underline{s}_k) \geq 0,$$ in which the first inequality follows from positive spillovers, and the second follows from action--link substitutes. Hence, these two players have an incentive to link. For two partition elements $T_k$ and $T_\ell$, with $k \neq \ell$, assume without loss of generality that $\underline{s}_\ell \geq \overline{s}_k$. For player $i$ with type $t_i \in T_k$ and $j$ with type $t_j \in T_\ell$ we have $$g(s_i, s_j) \leq g(\underline{s}_k, s_j) \leq g(\underline{s}_k, \overline{s}_\ell) < 0,$$ so the players have no incentive to link. \end{proof} Under mild assumptions, stable networks preserve natural cleavages between identifiable types of individuals, and players endogenously organize themselves into disjoint cliques as assumed in group matching models. Even if the natural cleavages are not so stark, our results show that much of the simplifying structure remains: individuals can be part of multiple groups, but each group is a clique, and there is a clear ordering among the cliques. Imposing this slightly weaker assumption in models of group matching may allow for richer analysis while preserving the tractability that comes from group matching assumptions.
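Conclusions of this kind can be checked by brute force in small examples. The sketch below verifies pairwise stability of a natural-clique outcome directly from the definition (actions are best responses, no profitable link deletion, no jointly profitable link addition), using hypothetical primitives of our own choosing; it enumerates deviations rather than invoking the proposition's spillover and substitute hypotheses.

```python
# Toy check: with types clustered in two groups, the disjoint-clique
# graph plus privately optimal actions is pairwise stable.
# (Hypothetical primitives, for illustration only.)
def v(s, t):
    return -2 * (s - t) ** 2

def g(si, sj):
    return 3 - abs(si - sj)

types = [0, 1, 10, 11]                  # natural cliques: {0,1} and {2,3}
S = range(12)
G = {0: {1}, 1: {0}, 2: {3}, 3: {2}}    # candidate clique graph
s = list(types)                         # privately optimal actions s*_t = t

def u(i, graph, prof):
    return v(prof[i], types[i]) + sum(g(prof[i], prof[j]) for j in graph[i])

# (1) each action is a best response given the graph
for i in range(4):
    assert max(S, key=lambda a: u(i, G, s[:i] + [a] + s[i + 1:])) == s[i]

# (2) no player gains by deleting one of her links
for i in range(4):
    for j in set(G[i]):
        G_minus = {k: (G[k] - {i, j} if k in (i, j) else G[k]) for k in G}
        assert u(i, G_minus, s) <= u(i, G, s)

# (3) no unlinked pair jointly gains by adding their link
for i in range(4):
    for j in range(i + 1, 4):
        if j not in G[i]:
            G_plus = {k: ((G[k] | ({j} if k == i else {i}))
                          if k in (i, j) else G[k]) for k in G}
            gain_i = u(i, G_plus, s) - u(i, G, s)
            gain_j = u(j, G_plus, s) - u(j, G, s)
            assert not (gain_i >= 0 and gain_j >= 0
                        and (gain_i > 0 or gain_j > 0))
```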
\section{Existence} \hspace{1 pc} While pairwise stable outcomes exist in all of our applications, we have not yet addressed the general question of the existence of pairwise stable outcomes. There are two reasons why existence is non-trivial in our setting. First, the presence or absence of a link is a discrete event, so we cannot use standard arguments that rely on continuity. Second, pairwise stability requires the absence of profitable joint deviations to form new links. Nevertheless, there are natural sufficient conditions that ensure existence of pairwise stable outcomes. In what follows, we assume that players' action sets are complete lattices with order $\geq$. \begin{definition}\label{def:convexcomp} A network game with network formation exhibits \textbf{strategic complements} if for any graph $G$, any $s'_i > s_i$, and any $s'_{-i} > s_{-i}$, we have $$u_i(G, s'_i, s_{-i}) \geq (>) \; u_i(G, s_i, s_{-i}) \quad \implies \quad u_i(G, s'_i, s'_{-i}) \geq (>) \; u_i(G, s_i, s'_{-i}).$$ The game exhibits \textbf{convexity in links} if for any profile $\mathbf{s}$, any graph $G$, any pair $ij$, and any collection of edges $E$, we have $$\Delta_{ij} u_i(G, \mathbf{s}) \geq (>) \; 0 \quad \implies \quad \Delta_{ij} u_i(G+E, \mathbf{s}) \geq (>) \; 0.$$ \end{definition} A network game with network formation exhibits strategic complements if, holding the graph fixed, the underlying normal form game exhibits strategic complements. The definition imposes a single-crossing condition on players' strategies, which implies that best responses are weakly increasing in others' actions. The game exhibits convexity in links if, holding actions fixed, adding links to the network weakly increases players' incentives to form links. Note that in all of our examples, linking incentives are independent of $G$ holding others' actions fixed, so this condition trivially holds. 
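As a concrete check of these definitions, consider a hypothetical linear-quadratic peer-effects game, $u_i = s_i - \tfrac{1}{2}s_i^2 + \alpha s_i \sum_{j \in G_i} s_j - c\,|G_i|$ (our own illustrative specification, not one from the text). Its link marginal $\Delta_{ij} u_i = \alpha s_i s_j - c$ does not depend on the rest of the graph, so convexity in links holds trivially, and the positive cross effect gives strategic complements; both properties can be verified numerically:

```python
import itertools

ALPHA, C = 0.5, 0.1  # hypothetical peer-effect strength and link cost

def u(i, graph, s):
    # linear-quadratic peer effects: private term + complementarity
    # with neighbors - a per-link cost (illustrative specification)
    return (s[i] - 0.5 * s[i] ** 2
            + ALPHA * s[i] * sum(s[j] for j in graph[i])
            - C * len(graph[i]))

def delta_link(i, j, graph, s):
    # marginal value to player i of adding the link ij, holding s fixed
    with_ij = {k: graph[k] | ({j} if k == i else {i} if k == j else set())
               for k in graph}
    return u(i, with_ij, s) - u(i, graph, s)

grid = [0.0, 0.5, 1.0]
empty = {0: set(), 1: set(), 2: set()}
linked = {0: {2}, 1: set(), 2: {0}}

# strategic complements: the gain from raising s_0 is nondecreasing in s_2
for graph in (empty, linked):
    for s1 in grid:
        def gain(s2):
            return u(0, graph, [1.0, s1, s2]) - u(0, graph, [0.0, s1, s2])
        assert gain(1.0) >= gain(0.0)

# convexity in links: here Delta_ij u_i is independent of the rest of G
for s in itertools.product(grid, repeat=3):
    assert abs(delta_link(0, 1, empty, list(s))
               - delta_link(0, 1, linked, list(s))) < 1e-12
```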
To state our result, we also need to extend the notions of link-action complements/substitutes and positive/negative spillovers to arbitrary games. \begin{definition}\label{def:linkcompsub} A network game with network formation exhibits \textbf{link-action complements} if $$\Delta_{ij} u_i(G, \mathbf{s}) \geq (>)\; 0 \quad \implies \quad \Delta_{ij} u_i(G, s'_i, s_{-i}) \geq (>)\; 0$$ whenever $s'_i > s_i$. The game exhibits \textbf{link-action substitutes} if the above inequality holds whenever $s'_i < s_i$. \end{definition} \begin{definition}\label{def:spillovers} A network game with network formation exhibits \textbf{positive spillovers} if $$\Delta_{ij} u_i(G, \mathbf{s}) \geq (>)\; 0 \quad \implies \quad \Delta_{ij} u_i(G, s'_j, s_{-j}) \geq (>)\; 0$$ whenever $s'_j > s_j$. The game exhibits \textbf{negative spillovers} if the above inequality holds whenever $s'_j < s_j$. \end{definition} \begin{Prop}\label{prop:exist1} Suppose a network game with network formation exhibits strategic complements and convexity in links. If either \begin{enumerate} \item the game exhibits link-action complements and positive spillovers, or \item the game exhibits link-action substitutes and negative spillovers, \end{enumerate} then there exists a pairwise stable outcome. Moreover, the set of graphs that occur in pairwise stable outcomes contains a maximal and minimal element.\footnote{In fact, one can also show using convexity in links that the minimal graph is part of a pairwise Nash stable outcome.} \end{Prop} \begin{proof} Since the game exhibits strategic complements, for any fixed $G$ there exist minimal and maximal Nash equilibria of the induced normal form game---this follows from standard arguments using Tarski's fixed point theorem. Likewise, since the game exhibits convexity in links, for any fixed profile $\mathbf{s}$ there exist minimal and maximal pairwise stable graphs.
To get the minimal graph, start from an empty graph and iteratively add links that pairs of players jointly wish to form. Convexity in links implies that no player will later wish to remove a link that was added earlier, so we must eventually terminate at a stable graph. Similarly, to get the maximal graph, start from a complete graph and iteratively delete links that one of the players wishes to remove. We now define two maps $\overline{B}(G,\mathbf{s})$ and $\underline{B}(G, \mathbf{s})$ mapping outcomes to outcomes. Let $\overline{B}(G, \mathbf{s})$ return an outcome $(\overline{G}, \overline{\mathbf{s}})$ in which $\overline{G}$ is the maximal pairwise stable graph given $\mathbf{s}$, and $\overline{\mathbf{s}}$ is the maximal Nash equilibrium given $G$. Similarly, let $\underline{B}(G, \mathbf{s})$ return an outcome $(\underline{G}, \underline{\mathbf{s}})$ in which $\underline{G}$ is the minimal pairwise stable graph given $\mathbf{s}$, and $\underline{\mathbf{s}}$ is the minimal Nash equilibrium given $G$. In case 1, positive spillovers and link-action complements imply that the graphs $\overline{G}$ in $\overline{B}(G,\mathbf{s})$ and $\underline{G}$ in $\underline{B}(G, \mathbf{s})$ are weakly increasing in $\mathbf{s}$---holding the rest of the graph fixed, higher $\mathbf{s}$ makes link $ij$ more desirable to both player $i$ and player $j$. Similarly, positive spillovers and link-action complements imply that the profiles $\overline{\mathbf{s}}$ in $\overline{B}(G, \mathbf{s})$ and $\underline{\mathbf{s}}$ in $\underline{B}(G,\mathbf{s})$ are weakly increasing in $G$.
This means that both $\overline{B}$ and $\underline{B}$ are monotone maps with respect to the natural product order on $\mathcal{G} \times \mathcal{S}$, so Tarski's theorem implies minimal and maximal fixed points exist for both---the maximal fixed point of $\overline{B}$ is the maximal pairwise stable outcome, and the minimal fixed point of $\underline{B}$ is the minimal pairwise stable outcome. Case 2 follows from similar reasoning after reversing the order on action profiles. Negative spillovers and link-action substitutes imply that the graphs in $\overline{B}(G, \mathbf{s})$ and $\underline{B}(G, \mathbf{s})$ are weakly decreasing in $\mathbf{s}$, and the profiles are weakly decreasing in $G$, so we can again apply Tarski's theorem.\end{proof} Proposition \ref{prop:exist1} applies in only two of the four cells for the class of games in Section \ref{sec:class}. In general, we cannot ensure existence for the other two cells, as the following example illustrates. Suppose there are two players with a common action set $S = \{0,1\}$ and payoffs \[ u_i(G, \mathbf{s}) = \begin{cases} s_i & \text{if } G \text{ is empty} \\ 2s_{-i} + \frac{2 s_i s_{-i} - 1}{4} - s_i & \text{if } G \text{ is complete.} \end{cases} \] For player $i$, the marginal value of a link to player $-i$ is $$2(s_{-i} - s_i) + \frac{2 s_i s_{-i} - 1}{4}.$$ This is increasing in $s_{-i}$, and for the relevant range of values it is decreasing in $s_i$---the game exhibits positive spillovers and link-action substitutes. Moreover, it should be clear that, if the two players are linked, the game exhibits strategic complements. Nevertheless, there is no pairwise stable outcome. In the empty graph, each player optimally takes action $1$, and the marginal value of adding the link between them is $\frac{1}{4} > 0$, so they should form the link.
In the complete graph, each player optimally takes action $0$, and the marginal value of the link is then $-\frac{1}{4} < 0$, so they should each drop the link. Even though existence is not always assured, the structural result in Corollary \ref{cor:structure} greatly simplifies the process of searching for a stable outcome. As we have already seen in three applications, starting with a clique structure and checking whether it is stable often provides a simple way to establish existence. We note that our existence result extends the main finding in \citet{Hellmann2013}, obtained in a setting with network formation only. That paper shows that pairwise stable graphs exist if payoffs are convex in own links, and others' links are complements to own links. These conditions are jointly equivalent to convexity in links in Definition \ref{def:convexcomp}. \section{Discussion} \subsection{Complex network structure} \hspace{1 pc} Our predictions about the structure of stable networks are stark. Real networks are typically not organized precisely into ordered cliques, nor are neighborhoods perfectly ordered via set inclusion. Nevertheless, our results provide a starting point to better understand how incentives affect the complex structures we observe in real networks. There are at least two natural directions to extend our analysis. One is to layer different relationships on top of one another in a ``multiplex'' network---rigid patterns across different layers can combine to form more realistic arrangements. Consider a simple example with two activities: work on the weekdays---in which the activity is production---and religious services on the weekends---in which the activity is attendance and engagement. Both entail positive spillovers, but work exhibits action--link substitutes---forming friendships takes time that could be devoted to production---while church exhibits action--link complements---attendance makes it easier to form ties.
Assuming suitable heterogeneity in ability or preferences, a non-trivial network will form through each activity. In the work network, we get ordered cliques. In the church network, we get a nested split graph, with the more committed members more broadly connected. Layering these networks on top of each other can produce a complex network with aspects of both ``centralization,'' mediated by the weekend ties, and homophily, driven by the work ties. This description ties into Simmel's account of cross-cutting cleavages, subsequently developed by many scholars. A second approach is to introduce noise. \citet{Konigetal2014} provide an example, describing a dynamic process in which agents either add or delete one link at a time, and the underlying incentives exhibit positive spillovers and action--link complements. If agents always make the myopically optimal link change, the graph is a nested split graph at every step of the process. However, if agents sometimes make sub-optimal changes, then all graphs appear with positive probability, but the distribution is still heavily skewed towards those with a nested structure. This allows the authors to fit the model to real-world data. Based on our analysis, one could adapt this model to study peer effects or status games, and under suitable assumptions obtain a noisy version of our ordered cliques prediction. \subsection{Foundations for pairwise (Nash) stability}\label{sec:foundations} \hspace{1 pc} As presented, pairwise (Nash) stability is a static solution concept that entails the absence of particular individual and pairwise deviations. A key feature is that deviations in links and actions are treated separately: Players consider link deviations holding actions fixed and action deviations holding links fixed. Why not require robustness to simultaneous deviations in both actions and links? There are two reasons.
One is a pragmatic view of how link formation and action choice actually work: In practice, decisions over these dimensions often \emph{are} considered separately. Occasionally, people must invest to form and maintain relationships (e.g., doing a costly favor, attending a social event)---these are the times at which linking costs are actually paid and people are prone to reconsider relationships. Opportunities to revise productive actions (e.g., investing in a certain kind of expertise at work) occur at other times. We therefore consider it plausible that individuals consider these revisions separately, taking the current state of play as otherwise fixed. The second reason is methodological. Simultaneous deviations along multiple dimensions make it delicate to define how a counterparty should respond to a link offer. Should the recipient of an offer condition on other deviations by the sender? (E.g., ``If Bob is offering me a link, he would also change other actions and links.'') Should she contemplate subsequent changes in her own behavior? (E.g., ``In the network that is likely given Bob's link offer to me, I will want to drop certain other links.'') Such considerations open the important but very complicated Pandora's box of farsighted stability. Following this logic to its natural conclusion requires that players are not only very sophisticated, but also omniscient (or at least in possession of a full theory) about how others respond to deviations. Allowing just one deviation at a time avoids this issue. As this argument only applies to deviations that require another player's consent, one might still ask: Why not allow a broader set of unilateral deviations? This is precisely what pairwise Nash stability does.
While we still consider pairwise stability a more appropriate solution concept for the first, practical, reason above, refining the solution concept to pairwise Nash stability has no impact on our structural results---since a pairwise Nash stable outcome is pairwise stable, any statement true of all pairwise stable outcomes is also true of all pairwise Nash stable outcomes. Requiring robustness to other unilateral deviations can only further refine the outcome set, and our main results continue to apply. In the rest of this subsection, we present two dynamic models that provide foundations for our solution concepts, one for pairwise stability and one for pairwise Nash stability. These make explicit certain adjustment processes that imply our main stability conditions at equilibrium. Throughout the section, we restrict attention to finite action spaces and generic payoffs, so player $i$ has a unique myopically optimal action $s_i$ given $\mathbf{s}_{-i}$ and $G$. \subsubsection{A revision game} \hspace{1 pc} We first study a dynamic game that makes explicit the ``occasional revisions'' foundation for pairwise stability.\footnote{See \cite{jackson2002formation} for an early antecedent of stochastic revisions in a game of network formation and action choice.} Players have revision opportunities arriving at random times, and myopic best response is optimal as long as discount rates are sufficiently high, or the time between revisions is sufficiently long. Time $t$ is continuous, all players observe the current state $(G^{(t)}, \mathbf{s}^{(t)})$, and players can change their actions and links only at random arrival times. Each player $i$ has an independent Poisson clock with rate $\lambda$, which rings at times $\{\tau^i_k\}_{k=0}^\infty$. At each time $\tau^i_k$, player $i$ has an opportunity to revise her strategic action $s_i$. 
Additionally, each ordered pair of players $ij$ has an independent Poisson clock with rate $\lambda$, which rings at times $\{\tau_k^{ij}\}_{k=0}^\infty$. At each time $\tau^{ij}_k$, if $j \in G_i$, player $i$ has the option to delete link $ij$, and if $j \notin G_i$, player $i$ has the option to propose a link to player $j$. If $i$ proposes a link to $j$, player $j$ can either accept or reject at that instant. If player $j$ accepts, we add $ij$ to the graph, and otherwise the graph is unchanged. Players receive a constant flow payoff according to the current state (actions and links), and there is a common discount factor $\delta \in (0,1)$. To complete the model, we specify an arbitrary initial condition $(G^{(0)},\mathbf{s}^{(0)})$. \begin{Prop}\label{prop:foundation1} Fix any $\delta < 1$. If $\lambda > 0$ is sufficiently small, the following statements hold: \begin{enumerate} \item If there exists a subgame perfect Nash equilibrium of the revision game and an almost surely finite stopping time $\tau$ such that, in that equilibrium, $(G,\mathbf{s})$ is played on path at all $t \geq \tau$, then $(G,\mathbf{s})$ is pairwise stable. \item Conversely, if $(G,\mathbf{s})$ is pairwise stable, there exists a subgame perfect Nash equilibrium of the revision game, with initial condition $(G^{(0)},\mathbf{s}^{(0)})=(G,\mathbf{s})$, in which $(G, \mathbf{s})$ is played at all times $t \geq 0$. \end{enumerate} \end{Prop} \begin{proof} See Appendix. \end{proof} The main idea in the proof is that a sufficiently long delay until the next revision opportunity makes the immediate payoff implications of revising an action or link the dominant ones. \subsubsection{A two-stage game} \hspace{1 pc} We next present an explicit non-cooperative protocol for making joint deviations. This two-stage game provides a foundation for pairwise Nash stability. Beginning from some outcome $(G, \mathbf{s})$, the deviation game proceeds in two stages.
In the first stage, a player $i$ is selected uniformly at random, and $i$ is allowed to make any unilateral deviations she wishes---she can change her strategic action $s_i$ and delete any subset $D \subseteq G_i$ of her links. With probability $1- \epsilon$, the game ends here. With probability $\epsilon$, we move to the second stage in which $i$ is allowed to propose a link to a single other player $j$. If $i$ makes a proposal to $j$, player $j$ chooses whether to accept or reject, and the game ends with payoffs determined by the final outcome $(G', \mathbf{s}')$. An outcome $(G,\mathbf{s})$ is an \emph{equilibrium of the $\epsilon$-PNS-game} if, starting at this outcome, we remain at the outcome $(G, \mathbf{s})$ in any subgame perfect Nash equilibrium of the deviation game. \begin{Prop}\label{prop:foundation2} The outcome $(G,\mathbf{s})$ is an equilibrium of the $\epsilon$-PNS-game for all sufficiently small $\epsilon$ if and only if it is pairwise Nash stable. \end{Prop} \begin{proof} See Appendix. \end{proof} The deviation game captures an intuition that players can always change their own actions, or delete links, but an opportunity must arise in order to form a new link. If such opportunities are sufficiently infrequent, then it does not make sense to plan one's behavior in anticipation of such an opportunity.\footnote{The same result holds if we reverse the order of the stages---player $i$ makes a link offer in the first stage, and with probability $\epsilon$ she can change her action or delete links in the second. While the order is unimportant from a technical perspective, we believe the order we use has a more natural interpretation.} Formally, when $\epsilon$ is small, options available in the second stage have no bearing on optimal decisions in the first. \section{Final remarks} \hspace{1 pc} From academic peer effects to social status to trading networks, the connections people and firms choose to form affect the strategic actions they take and vice versa.
Sound behavioral predictions and policy recommendations depend on taking these interactions into account, rather than studying each aspect separately. We offer a flexible formal framework that unites two types of models and solution concepts, pertaining to strategic actions and link choice. This unification enriches what we can capture, allowing us to make new and sharper predictions in important cases. We identify simple conditions that allow a stark characterization of equilibrium network structures and the behavior they support. Several widely studied applications fit within this framework, and we highlight new insights that emerge from applying our results. One key point---for which our applications serve as a proof-of-concept---is that the structural predictions the theory offers greatly reduce the space of possible networks, as well as the actions they can support. This is crucial for the tractability both of numerical calculations and theoretical analyses of how equilibria and welfare depend on the environment. Given the highly structured networks our theory predicts, the framework requires further elaboration to fit realistic network structures. There are two directions we have argued are promising. One introduces noise in linking decisions or incentives. We expect that qualitative insights about linking patterns are robust, but noise raises theoretical questions about how much predictions change relative to our benchmark, and econometric questions about what one can identify from an observed network. A second direction examines models that combine different types of relationships. This would allow ``overlaying'' the simple structures arising in our characterizations, and exploring the economic implications of interactions between different kinds of relationships. \bibliographystyle{ecta}
\section{Introduction} The shapes and orientations of galaxies have an intrinsic correlation with respect to those of nearby galaxies and the overall matter distribution; this effect is known as galaxy intrinsic alignments \citep[IA; see][and references therein for review]{troxel2015intrinsic,joachimi2015galaxy,kiessling2015galaxy,kirk2015galaxy}. The importance of IA is twofold: 1) IA emerges as a natural outcome of the current paradigm of galaxy formation in the $\Lambda$CDM cosmological model, as also emphasized in state-of-the-art cosmological hydrodynamic simulations that include direct modeling of galaxy formation~\citep[e.g.,][]{tenneti2014galaxy,2015MNRAS.454.3328V,2015MNRAS.454.2736C,2017MNRAS.468..790H}. IA is therefore a promising probe of galaxy formation physics. 2) If not properly modeled and removed, IA is a significant source of systematic bias in inferring cosmological parameters in weak lensing studies~\citep{2016MNRAS.456..207K}. Many upcoming surveys, such as the Large Synoptic Survey Telescope \citep[LSST;][]{2008arXiv0805.2366I,abell2009lsst}, Euclid \citep{laureijs2011euclid}, and the Wide-Field Infrared Survey Telescope (WFIRST; \citealt{spergel2015wide}), aim to determine the dark energy equation of state to very high precision using weak lensing, and IA is one of the major sources of astrophysical systematic uncertainty for such studies \citep{2018ARA&A..56..393M}. The existence of IA in galaxies with correlations out to 100~$h^{-1}$Mpc scales has been firmly established in observational data \citep[e.g.,][]{2006MNRAS.367..611M, 2007MNRAS.381.1197H,2011A&A...527A..26J,singh2015intrinsic}. An understanding of intrinsic alignments and their scaling with galaxy mass and redshift is therefore crucial to mitigating this effect in weak lensing studies, and is also a good diagnostic for galaxy formation physics.
Intrinsic alignments have been studied using analytical methods such as the linear alignment model~\citep{2001MNRAS.320L...7C}, the nonlinear alignment model \citep{2007NJPh....9..444B}, and the full tidal alignment model \citep{blazek2015tidal}. While these methods are easy to implement and require few computational resources, they inevitably rely on assumptions about the alignment of galaxies and the underlying tidal field. This limitation can be overcome by state-of-the-art cosmological hydrodynamic simulations~\citep[e.g.,][]{2014MNRAS.444.1453D,2015MNRAS.446..521S,khandai2015massiveblack,vogelsberger2014introducing}, which can directly probe the impact of galaxy formation physics on the shapes and alignments of galaxies and the relation to their dark matter counterparts~(halos/subhalos) and the tidal fields themselves. Therefore, in recent years galaxy shapes and alignments have been extensively studied using hydrodynamic simulations \citep[e.g.,][]{2015MNRAS.454.2736C,tenneti2016intrinsic,2017MNRAS.472.1163C,2017MNRAS.468..790H}. An important step towards understanding galaxy intrinsic alignments is to study their redshift evolution. This has been initiated by a series of works~\citep{tenneti2015intrinsic} using the \texttt{MassiveBlackII} (MBII) hydrodynamic simulation \citep{khandai2015massiveblack}, including a detailed study of the redshift evolution of galaxy shapes, alignment with respect to host halo/subhalo, and associated shape-density correlation functions. A noteworthy feature of these works was that the sampling of galaxies was based on a fixed subhalo mass cut~($\gtrsim 10^{11},~10^{12},~10^{13}~M_{\odot}/h$) at each redshift~(from $z\sim0.06-1$); this is somewhat representative of cuts in observed galaxy samples in properties such as stellar mass or magnitude, which are known to correlate with the host subhalo mass. However, with such an approach, the resulting redshift evolution may be dominated by the effects of sample selection.
In order to study the \textit{intrinsic} redshift evolution~(i.e.\ separated from the effects of sample selection), we must select samples of galaxies at a given redshift and trace their progenitors to higher redshifts. In this work, we study the redshift evolution of IA properties of MBII galaxies by making subhalo mass cuts at a single fixed redshift~($z\sim0.6$) and then tracing the properties of their progenitors along a merger tree. In Section~\ref{S:methods}, we outline the basic methodology and definitions. In Section~\ref{S:results}, we study the redshift evolution of galaxy properties~(axis ratios, galaxy-subhalo misalignment angle, and density-shape correlation functions) on the merger tree. We summarize our key results in Section~\ref{S:conclusions}. \section{Methods} \label{S:methods} \subsection{MassiveBlack-II simulation} \begin{figure} \includegraphics[width=80mm]{eff.png} \caption{$\eta_{\mathrm{matching}}$ is the \textit{matching efficiency}, i.e.\ the ratio of the number of \texttt{SUBFIND} trees to the original number of \texttt{ROCKSTAR} trees~(before matching \texttt{ROCKSTAR} and \texttt{SUBFIND} trees). $1-\eta_{\mathrm{matching}}$ is therefore the fraction of \texttt{ROCKSTAR} trees lost because we could not find a corresponding \texttt{SUBFIND} tree to match with. ``$\geq\log(M^H_{z=0.6})$'' is the threshold subhalo mass of galaxies selected at $z=0.6$; $z_f$ is the maximum redshift up to which their progenitors are traced~(starting from $z_i=0.6$). } \label{matching_efficiency} \end{figure} \begin{figure} \includegraphics[width=7.5cm]{conv.png} \caption{\textbf{Shape convergence test:} Normalized histograms of $q=\frac{b}{a}$ of the dark matter component of \texttt{SAMPLE-TREE} galaxies at $z = 0.6$.
We show a comparison of shapes determined using all particles in the subhalo with those obtained using a random subsample of $N_{\mathrm{part}}=50, 100, 300, 1000$ particles in the subhalo.} \label{shape_convergence} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{heatmap.png} \caption{The 2D histograms show the dark matter mass~($M_h$) versus stellar mass~($M_*$) relation of galaxies~(and dark matter subhaloes) on 27942 trees corresponding to $M_{h}>10^{11}M_{\odot}/h$ galaxies at $z=0.6$~(leftmost panel) and their main progenitors at $z=1.5$~(middle panel) and $z=3$~(rightmost panel). } \label{SM_HM_fig} \end{figure*} We briefly describe \texttt{MassiveBlack-II} (MB-II), which is a state-of-the-art cosmological hydrodynamic simulation of structure formation \citep{khandai2015massiveblack}. MB-II is evolved from $z=159$ to $z=0.06$ in a cubic periodic box of comoving volume $V_{\mathrm{box}}=(100~h^{-1}\mathrm{Mpc})^3$ and a gravitational smoothing length of $\epsilon = 1.85 h^{-1}~\mathrm{kpc}$. The box contains $2\times 1792^3$ particles (dark matter+gas). The mass of a single dark matter particle and a single gas particle is $m_{\mathrm{DM}}=1.1\times 10^7 h^{-1} M_{\odot} $ and $m_{\mathrm{gas}}=2.2\times 10^6 h^{-1} M_{\odot}$ respectively. The cosmological parameters used in the simulation are based on WMAP7 \citep{komatsu2011astrophys} with amplitude of matter fluctuations $\sigma_8 = 0.816$, spectral index $n_s = 0.96$, mass density parameter $\Omega_m = 0.275$, cosmological constant density parameter $\Omega_{\Lambda} = 0.725$, baryon density parameter $\Omega_{b} = 0.046$, and Hubble parameter $h = 0.702$. Halos are identified using a friends-of-friends (FOF) halo finder \citep{davis1985evolution} with a linking length of 0.2 times the mean particle separation. \subsection{Galaxy identification} Here we describe how galaxies are identified in MBII.
Galaxies are defined to be the stellar component of \textit{subhalos}, which are locally overdense, self-bound particle groups within a larger parent group~(FOF halo). The subhalo catalogs are generated using the substructure finder \texttt{SUBFIND} on the halo catalogs. In \texttt{SUBFIND}, for each particle in the parent group, a local density is estimated using the positions of a prescribed number of nearest neighbours. After identifying the local peaks in the density field, the algorithm rebuilds the parent group by adding particles in the order of decreasing density. In doing so, a saddle point is eventually reached that connects two disjoint overdense regions. The smaller structure is then identified as a candidate substructure. For further implementation details, see the original paper \citep{springel2001populating}. \subsection{Constructing the galaxy merger tree} \label{merger_tree_sec} In this section, we describe the key steps involved in the construction of the galaxy merger tree. To begin with, halo/subhalo merger trees were identified by running the \texttt{ROCKSTAR}~\citep{behroozi2012rockstar} halo/subhalo finder along with \texttt{CONSISTENT-TREES}~\citep{behroozi2012gravitationally}, both of which are described in the following two subsections. \subsubsection{\texttt{ROCKSTAR}} \label{rockstar_sec} \texttt{ROCKSTAR} (or `Robust Overdensity Calculation using K-Space Topologically Adaptive Refinement') is an algorithm based on adaptive hierarchical refinement of FOF groups. Primary FOF groups are first identified using a FOF finder. Within each FOF group, a hierarchy of FOF subgroups~(in phase space) is identified using an adaptive refinement of the linking length. The FOF subgroups at the lowest~(deepest) level of the hierarchy are then converted into seed haloes.
Starting with the lowest level of the hierarchy, the FOF subgroup particles are assigned to the seed haloes based on phase space distances; this process is repeated for the higher levels of the hierarchy until all particles of the parent FOF group have been assigned to the halo. After assigning all the particles, the host-subhalo relationship is calculated by assigning a seed halo to be a \textit{subhalo} of the closest seed halo~(within the same FOF group) with a larger number of assigned particles. This process is performed until all the seed haloes are either \textit{host haloes} or \textit{subhaloes}. For further implementation details, see the original paper \citep{behroozi2012rockstar}. \subsubsection{\texttt{CONSISTENT-TREES}} \label{trees_sec} We build a merger tree for our \texttt{ROCKSTAR} haloes/subhaloes using the \texttt{CONSISTENT-TREES} algorithm \citep{behroozi2012gravitationally}. \texttt{CONSISTENT-TREES} is an extension of traditional \textit{particle-based} tree-building algorithms~(which construct trees by tracing trajectories of halo/subhalo particles across different time steps), which can potentially compromise the \textit{continuity} of halo/subhalo properties across simulation time-steps, due to the issues listed in Section 2.2 of \cite{behroozi2012gravitationally}. \texttt{CONSISTENT-TREES} resolves the foregoing problem by tracing~(in addition to particles) a subset of halo/subhalo properties which include halo mass, maximum circular velocity, halo position, and bulk velocity. A major component of the algorithm is to ensure continuity in these halo properties by construction. This is achieved by running a particle-based tree finder and establishing preliminary links between progenitor haloes~(at time step $t_{\mathrm{n-1}}$) and descendant haloes~(at time step $t_{\mathrm{n}}$).
The subsequent steps consist of the following actions: \begin{enumerate} \item Gravitationally tracing the positions of descendant haloes from $t_{\mathrm{n}}$ to $t_{\mathrm{n-1}}$ to obtain their most likely progenitors at $t_{\mathrm{n-1}}$; removing progenitors whose properties do not resemble the most likely progenitors of the corresponding descendants. \item For each descendant halo at $t_{\mathrm{n}}$ that lacks a progenitor at $t_{\mathrm{n-1}}$ after step (i), a \textit{phantom} progenitor is assigned with halo properties identical to its most likely progenitor at $t_{\mathrm{n-1}}$; however, those descendant haloes that do not have progenitors for a sufficiently large sequence of time steps are removed. \item Finally, if a halo at $t_{\mathrm{n-1}}$ has no descendant at $t_{\mathrm{n}}$ after step (ii), it is \textit{merged} with the halo~(at $t_{\mathrm{n}}$) in its vicinity that exerts the strongest tidal field; additionally, the halo is removed as a statistical fluctuation if it is too far away from other haloes to experience any significant tidal field. \item Steps (i) to (iii) are iterated over the range of time steps (where each iteration corresponds to a pair of time slices $t_{\mathrm{n-1}}$ and $t_{\mathrm{n}}$) from the final time $t_f$ to the initial time $t_i$. This establishes a lineage of haloes over the time range $t_i$ to $t_f$. \end{enumerate} Readers interested in more details are encouraged to refer to Section 5 of \cite{behroozi2012gravitationally}. \subsubsection{Constructing the galaxy merger tree: Matching \texttt{ROCKSTAR} and \texttt{SUBFIND}} \label{matching_sec} The subhalo merger trees obtained using \texttt{ROCKSTAR} and \texttt{CONSISTENT-TREES} are dark matter only. In order to construct the galaxy merger tree for our \texttt{SUBFIND} galaxies, we must match the subhaloes on the \texttt{ROCKSTAR} merger tree to our \texttt{SUBFIND} galaxies.
We perform the following steps for the matching: \begin{enumerate} \item For a given \texttt{ROCKSTAR} subhalo (mass $M_h^{\mathrm{RS}}$) denoted by \texttt{SUBHALO-RS}, we select all \texttt{SUBFIND} subhalos~(with mass $M_h^{\mathrm{sub}}$) which satisfy $0.5\times M_h^{\mathrm{RS}}<M_h^{\mathrm{sub}}<2\times M_h^{\mathrm{RS}}$ and lie within a maximum distance of $5\times R^{\mathrm{RS}}_{\mathrm{vir}}$, where $R^{\mathrm{RS}}_{\mathrm{vir}}$ is the virial radius of the \texttt{ROCKSTAR} subhalo. We then choose the \texttt{SUBFIND} subhalo that is closest to the \texttt{ROCKSTAR} subhalo, denoted by \texttt{SUBHALO-RS-SUB}. \item For the \texttt{SUBFIND} subhalo \texttt{SUBHALO-RS-SUB}, we select all \texttt{ROCKSTAR} subhalos~(with mass $M_h^{\mathrm{RS}}$) which satisfy $0.5\times M_h^{\mathrm{sub}}<M_h^{\mathrm{RS}}<2\times M_h^{\mathrm{sub}}$ and lie within a maximum distance of $5\times R^{\mathrm{sub}}_{\mathrm{vir}}$, where $R^{\mathrm{sub}}_{\mathrm{vir}}$ is the virial radius of the \texttt{SUBFIND} subhalo. We then choose the \texttt{ROCKSTAR} subhalo that is closest to the \texttt{SUBFIND} subhalo, denoted by \texttt{SUBHALO-RS-SUB-RS}. \item If~(and only if) we retrieve the original \texttt{ROCKSTAR} subhalo at the end of step (ii), i.e., \texttt{SUBHALO-RS-SUB-RS} is identical to \texttt{SUBHALO-RS}, we say that \texttt{SUBHALO-RS}~(from the \texttt{ROCKSTAR} merger tree) and \texttt{SUBHALO-RS-SUB}~(from the \texttt{SUBFIND} catalog) have been \textit{matched}. \end{enumerate} In order to generate a corresponding \texttt{SUBFIND} galaxy merger tree from a \texttt{ROCKSTAR} merger tree, every \texttt{ROCKSTAR} subhalo on the tree must be matched with a \texttt{SUBFIND} galaxy for the redshift range of our interest~($z_i\leq z\leq z_f$). If the matching fails at any redshift within this range, the entire tree is discarded.
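The two-way matching in steps (i)-(iii) can be sketched as follows. This is a minimal illustration, not the MBII pipeline: the catalogs are lists of dicts with assumed field names (\texttt{pos}, \texttt{mass}, \texttt{rvir}), and a brute-force candidate search is used.

```python
import numpy as np

def closest_candidate(halo, catalog, max_rvir=5.0):
    """Closest entry of `catalog` within a factor of 2 in mass and within
    5 virial radii of `halo` (the selection window of steps i and ii)."""
    best, best_d = None, np.inf
    for j, other in enumerate(catalog):
        d = np.linalg.norm(halo["pos"] - other["pos"])
        mass_ok = 0.5 * halo["mass"] < other["mass"] < 2.0 * halo["mass"]
        if mass_ok and d < max_rvir * halo["rvir"] and d < best_d:
            best, best_d = j, d
    return best

def match(rs_idx, rockstar, subfind):
    """Steps (i)-(iii): ROCKSTAR -> SUBFIND -> ROCKSTAR; accept the match
    only if the round trip returns to the original subhalo."""
    sub_idx = closest_candidate(rockstar[rs_idx], subfind)
    if sub_idx is None:
        return None
    back = closest_candidate(subfind[sub_idx], rockstar)
    return sub_idx if back == rs_idx else None

# toy catalogs: one matching pair, plus a SUBFIND catalog that is too distant
rockstar = [{"pos": np.array([0.0, 0.0, 0.0]), "mass": 1.0, "rvir": 0.1}]
subfind = [{"pos": np.array([0.05, 0.0, 0.0]), "mass": 1.2, "rvir": 0.1}]
far = [{"pos": np.array([10.0, 0.0, 0.0]), "mass": 1.0, "rvir": 0.1}]
```

In the toy example, `match(0, rockstar, subfind)` succeeds because the round trip is consistent, while `match(0, rockstar, far)` fails the distance cut and returns `None`, mimicking a subhalo whose tree would be discarded.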
We quantify the matching success rate by defining a \textit{matching efficiency} $\eta_{\mathrm{matching}}$ as the ratio of the number of matched \texttt{SUBFIND} trees to the number of original \texttt{ROCKSTAR} trees~(present before matching). Figure \ref{matching_efficiency} shows $\eta_{\mathrm{matching}}$ as a function of $M_h$ at various values of $z_f$~($z_i=0.6$). For $z_f=1.5$~(red line), the efficiency is $86\%$ for all masses. At higher $z_f$, we lose more trees~(as expected) and the efficiency decreases to $75-82\%$ for $z_f=3$. This translates to a total of 27942 \texttt{SUBFIND} galaxy merger trees with progenitors up to redshift 3. This sample is sufficient for a statistical analysis; to avoid a further decrease in efficiency, we choose not to trace progenitors beyond redshift 3, and hereafter define the redshift range of our study to be $0.6\leq z\leq3$. We chose $z\geq 0.6$ since this is the period when galaxy formation and merger processes are most active. \subsection{Shapes of galaxies and dark matter halos} \begin{figure*} \begin{center} \includegraphics[width=1.17\textwidth]{halo_vis.png} \end{center} \vspace{-1cm} \caption{A 2-d illustrative example of the evolution of a MBII galaxy on the merger tree. The red histograms show the distribution of stars and the grey histograms show the distribution of the underlying dark matter. The yellow ellipse represents the shape identified using dark matter particles, while the green ellipse represents the shape identified using stellar matter particles; the yellow and green dashed lines show their corresponding major-axis directions. We can see that the subhalo shape becomes more spherical from $z=3$ to $z=0.6$.
Furthermore, the alignment between the stellar matter and dark matter shapes becomes stronger as we go from $z=3$ to $z=0.6$.} \label{illustration} \end{figure*} \begin{figure*} \includegraphics[width=130mm]{hist.png} \caption{Distribution of galaxy shapes: $P(q|z,M_h)$ (top) and $P(s|z,M_h)$ (bottom) show the normalized probability distributions of the axis ratios $q=\frac{b}{a}$ and $s=\frac{c}{a}$ of the dark/stellar matter components of galaxies~(subhaloes). Solid lines and dashed lines correspond to the galaxy samples \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} respectively~(see Section \ref{sample_definitions} for the definition of the galaxy samples). $\delta_q$ and $\delta_s$ correspond to the ratios of $P(q|z,M_h)$ and $P(s|z,M_h)$ respectively between \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} galaxies. The errorbars are $1\sigma$ Poisson errors.} \label{fig:q} \end{figure*} We now describe how galaxy shapes are quantified. We model the shapes of the dark matter and stellar matter components of subhalos as ellipsoids in three dimensions by using the eigenvalues and eigenvectors of the \textit{reduced} inertia tensor~\citep{2005ApJ...627..647B,tenneti2014galaxy} given by \begin{equation} I_{ij}=\frac{\sum_{n}m_{n}\frac{x_{ni}x_{nj}}{r_n^2}}{\sum_{n}m_{n}} \label{inertia_tensor} \end{equation} where $m_n$ is the mass of the $n^{\mathrm{th}}$ particle and $x_{ni}$ and $x_{nj}$ represent the $i$ and $j$ components of the position of the $n^{\mathrm{th}}$ particle ($0\le i,j\le 2$). $r_n$ is the distance of the $n^{\mathrm{th}}$ particle from the subhalo center, given by $r_n^2=\sum_i x_{ni}^2$. We denote the principal axis directions or eigenvectors~(unit vectors) of $I_{ij}$ by $(\hat{e}_a, \hat{e}_b, \hat{e}_c)$, with corresponding eigenvalues $(\lambda_a, \lambda_b, \lambda_c)$. The lengths of the principal axes $(a,b,c)$ are given by $(\sqrt{\lambda_a}, \sqrt{\lambda_b}, \sqrt{\lambda_c})$.
The ellipticities can then be measured via the axis ratios \begin{equation} q=\frac{b}{a},\quad s=\frac{c}{a}, \label{axes_ratio} \end{equation} where $a$ is the length of the primary~(largest) axis. A perfectly spherical subhalo corresponds to $q=s=1$, while a triaxial subhalo has $s<q<1$. For a more robust measure of the shape, we adopt an iterative approach wherein we first determine the principal axes and axis ratios using all the particles in the subhalo, thereby determining the ellipsoidal volume. In each successive iteration, we then recalculate the inertia tensor and axis ratios ignoring particles outside the ellipsoidal volume. We repeat this until each iteration leads to a $\lesssim1\%$ change in $a$, $b$ and $c$. \subsubsection{Shape convergence test} \label{S:appendixa} We require a sufficiently large number of particles to reliably measure galaxy~(subhalo) shapes. Here, we determine the minimum number of particles required. Figure~\ref{shape_convergence} shows the distribution of $q$~(denoted by $P(q|M_h)$) for $z=0.6$ and $M_h>10^{11}~M_{\odot}/h$ galaxies. We show $P(q|M_h)$ for different numbers~($N_{\mathrm{part}}$) of subsampled dark matter particles within each subhalo. We find that the distributions converge for $N_{\mathrm{part}}=300,500,1000$, whereas for $N_{\mathrm{part}}=50$, $q$ is significantly underestimated. Therefore, we require $N_{\mathrm{part}}\geq300$ in this work to ensure shape convergence; this choice is also sufficient for the convergence of $s$. This sets the minimum subhalo mass of our galaxies to $M_h\sim 3\times10^{9}~M_{\odot}/h$, which limits the subhalo mass and redshift range over which we can construct merger trees. We find that for galaxies with $M_h>10^{11}~M_{\odot}/h$ at $z=0.6$, the progenitors have $M_h\gtrsim3\times10^{9}~M_{\odot}/h$ up to $z=3$. Therefore, our final choices for the subhalo mass range and redshift range in this work are $M_h>10^{11}~M_{\odot}/h$ and $0.6<z<3$.
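The iterative shape measurement described above can be sketched as follows. This is an illustrative implementation under stated assumptions: the synthetic particle set, the $1\%$ tolerance, and the simplified trimming criterion (an ellipsoidal radius bounded by the initial spherical extent) are our choices, not details taken from the MBII analysis code.

```python
import numpy as np

def iterative_shape(x, m, tol=0.01, max_iter=50):
    """Axis lengths (a >= b >= c), principal axes, and ratios (q, s) from the
    reduced inertia tensor, iteratively discarding particles outside the
    current ellipsoidal volume."""
    r_max = np.linalg.norm(x, axis=1).max()      # initial (spherical) extent
    keep = np.ones(len(x), dtype=bool)
    axes_old = None
    for _ in range(max_iter):
        xs, ms = x[keep], m[keep]
        r2 = np.maximum((xs**2).sum(axis=1), 1e-12)   # guard the centre particle
        # reduced inertia tensor: sum_n (m_n / r_n^2) x_ni x_nj / sum_n m_n
        I = np.einsum("n,ni,nj->ij", ms / r2, xs, xs) / ms.sum()
        lam, vec = np.linalg.eigh(I)                  # ascending eigenvalues
        axes, vec = np.sqrt(lam[::-1]), vec[:, ::-1]  # reorder so a >= b >= c
        if axes_old is not None and np.all(np.abs(axes / axes_old - 1.0) < tol):
            break                                     # < 1% change in a, b, c
        axes_old = axes
        # simplified trimming: keep particles whose ellipsoidal radius,
        # normalized by the current axis ratios, is within the initial extent
        y = x @ vec
        r_ell = np.sqrt(y[:, 0]**2 + (y[:, 1] / (axes[1] / axes[0]))**2
                        + (y[:, 2] / (axes[2] / axes[0]))**2)
        keep = r_ell <= r_max
    q, s = axes[1] / axes[0], axes[2] / axes[0]
    return axes, vec, q, s

# synthetic triaxial "subhalo": Gaussian blob with axis ratios 1 : 0.6 : 0.3
rng = np.random.default_rng(1)
x = rng.normal(size=(2000, 3)) * np.array([1.0, 0.6, 0.3])
m = np.ones(2000)
axes, vec, q, s = iterative_shape(x, m)
```

For this synthetic blob, the recovered major axis aligns with the $x$-direction and the ratios satisfy $s < q < 1$, as expected for a triaxial distribution.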
\subsection{Misalignment angle} To quantify the misalignment between the galaxy~(stellar matter component) and its host dark matter subhalo, we calculate the principal axes corresponding to the dark matter and star particles, i.e., $(\hat{e}^{\mathrm{DM}}_a, \hat{e}^{\mathrm{DM}}_b, \hat{e}^{\mathrm{DM}}_c)$ and $(\hat{e}^{*}_a, \hat{e}^{*}_b, \hat{e}^{*}_c)$ respectively. The misalignment angle is then defined as the angle between the eigenvectors corresponding to the primary~(longest) axes: \begin{equation} \theta_{m}=\arccos\left(\left| \hat{e}^{\mathrm{DM}}_{a} \cdot \hat{e}^{*}_{a}\right| \right). \end{equation} \subsection{Correlation function} The ellipticity-direction~(ED) correlation function \citep{lee2008quantifying} cross-correlates the orientation of the major axis of a subhalo with the large-scale density field. For a subhalo centered at position $\vec{x}$ with major axis direction $\hat{e}_{a}$, the ED cross-correlation function is given by \begin{equation} \omega \left(r\right) = \left \langle \left| \hat{e}_{a}(\vec{x}) \cdot \hat{r}(\vec{x}+\vec{r}) \right|^2 \right \rangle -\frac{1}{3}, \end{equation} where $\hat{r}=\frac{\vec{r}}{r}$ and $\vec{r}$ is the position vector from the subhalo position~($\vec{x}$) to a tracer~(a galaxy or dark matter particle position) of the large-scale matter distribution around the halo. In this work, we have used the dark matter particle positions as tracers of the matter density field. \begin{figure} \includegraphics[width=0.5\textwidth]{random.png} \caption{Comparison of the shapes of \textit{progenitor} galaxies and \textit{randomly-selected} galaxies of similar mass at $z=z_f=3$. The solid and dashed lines show $P(q|z,M_h)$~(dark matter component) for \texttt{SAMPLE-TREE}: $M_h>10^{11}~M_{\odot}/h$: $z=3$ and \texttt{SAMPLE-RANDOM}: $M_h>10^{11}~M_{\odot}/h$: $z=3$~(see Section~\ref{sample_definitions} for the sample definitions).
$\delta_q$ is the ratio between the solid and dashed lines. \texttt{SAMPLE-RANDOM} is constructed to have a mass function identical to that of the \texttt{SAMPLE-TREE} progenitors. } \label{fig:random} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{theta_hist.png} \caption{$P(\theta|z,M_h)$ is the distribution of the misalignment angle $\theta$ between the stellar and dark matter components of subhalos. Solid and dashed lines correspond to \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} galaxies, respectively. $\delta_\theta$ is the ratio between the solid and dashed lines. The errorbars are $1\sigma$ Poisson errors. The black dotted lines represent the misalignment angle distribution expected if the two eigenvectors were uniformly distributed in 3D space.} \label{fig:theta_hist} \end{figure*} \section{Results} \label{S:results} \subsection{Stellar mass-subhalo mass relation} Figure \ref{SM_HM_fig} shows the subhalo total~(dark matter+gas+stars+black hole) mass~($M_h$) versus stellar mass~($M_*$) relation of \texttt{SAMPLE-TREE} galaxies at $z=0.6$, $1.5$, and $3.0$ with $M_h>10^{11}~M_{\odot}/h$ at $z=0.6$. As expected, $M_h$ and $M_*$ are strongly correlated, and both decrease with increasing redshift. We also note that as redshift increases, the $M_h$-$M_*$ relation does not change significantly in either slope or intercept, broadly consistent with predictions from semi-analytical models~\citep{2016MNRAS.456.1459M} as well as observations \citep{2012ApJ...744..159L}. This implies that galaxies grow in stellar mass and dark matter mass at roughly the same rate as they evolve along the merger tree. As the subhalo mass strongly correlates with stellar mass, and therefore also with other observable properties such as luminosity and star formation rate, we shall hereafter use subhalo mass cuts to construct the various galaxy samples defined in the next section for the rest of this work.
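Before moving to the sample definitions, the misalignment angle $\theta_m$ and the ED correlation $\omega(r)$ defined earlier can be sketched in a few lines. The inputs below (unit major-axis vectors, subhalo and tracer positions) are synthetic placeholders, not simulation data, and a simple pair loop stands in for the optimized pair counting used on the full catalogs.

```python
import numpy as np

def misalignment_angle(e_dm, e_star):
    """theta_m = arccos(|e_dm . e_star|) in degrees, for unit vectors."""
    c = np.abs(np.clip(np.dot(e_dm, e_star), -1.0, 1.0))
    return np.degrees(np.arccos(c))

def ed_correlation(pos, e_a, tracers, r_lo, r_hi):
    """omega = <|e_a . r_hat|^2> - 1/3 over subhalo-tracer pairs whose
    separation lies in the bin [r_lo, r_hi)."""
    acc, n = 0.0, 0
    for x, e in zip(pos, e_a):
        sep = tracers - x
        r = np.linalg.norm(sep, axis=1)
        sel = (r >= r_lo) & (r < r_hi)
        if sel.any():
            rhat = sep[sel] / r[sel, None]
            acc += ((rhat @ e) ** 2).sum()
            n += sel.sum()
    return acc / n - 1.0 / 3.0 if n else np.nan

# one subhalo oriented along x, with tracers placed along its major axis,
# so |e_a . r_hat|^2 = 1 for every pair and omega = 1 - 1/3 = 2/3
pos = np.array([[0.0, 0.0, 0.0]])
e_a = np.array([[1.0, 0.0, 0.0]])
tracers = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
w = ed_correlation(pos, e_a, tracers, 0.5, 3.0)
```

For randomly oriented axes, $\langle|\hat{e}_a\cdot\hat{r}|^2\rangle = 1/3$, so the $-1/3$ offset makes $\omega(r)$ vanish in the absence of alignment.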
\subsection{List of galaxy samples: Definitions and notations} \label{sample_definitions} Before we discuss the rest of the results, we describe the types of galaxy samples that we consider in this work. \begin{itemize} \item \texttt{SAMPLE-TREE}: The primary sample of interest consists of galaxies on the merger tree. We select galaxies with different subhalo mass cuts~($M_h$) at $z=0.6$ and trace their progenitors to $z=3$ using the methods described in Section \ref{merger_tree_sec}. Hereafter, we shall refer to this sample as \texttt{SAMPLE-TREE}. For example, the sample name ``\texttt{SAMPLE-TREE}: $M_h>10^{11}~M_{\odot}/h$: $z=2$" refers to galaxies at $z=2$ that are progenitors of the $M_h>10^{11}~M_{\odot}/h$ galaxies as selected at $z=0.6$. Using this sample, we study the redshift evolution of the IA properties of galaxies without having to consider the impact of evolution due to sample selection. \item \texttt{SAMPLE-MCUT}: The secondary sample of interest is obtained using the selection criterion of \cite{tenneti2015intrinsic}. Here we select galaxy samples with a fixed subhalo mass cut applied at all redshifts. Hereafter, we shall refer to this sample as \texttt{SAMPLE-MCUT}. For example, the sample name ``\texttt{SAMPLE-MCUT}: $M_h>10^{11}~M_{\odot}/h$: $z=2$" refers to all galaxies at $z=2$ with $M_h>10^{11}~M_{\odot}/h$. With this sample, the observed redshift evolution of IA properties is a combination of \textit{intrinsic} redshift evolution effects and the evolution due to sample selection. \item \texttt{SAMPLE-RANDOM}: To interpret the impact of requiring galaxies to be a part of a merger tree, it will be necessary to look at differences in IA properties between a progenitor~(merger tree) galaxy and a randomly chosen galaxy of similar mass.
To do this, we construct a galaxy sample by randomly drawing galaxies from the full sample at a given redshift~(all galaxies in the simulation snapshot), such that the total~(dark matter+gas+stars+black hole) mass function is constructed to be identical to that of the \texttt{SAMPLE-TREE}~(progenitor) galaxies at the same redshift. We shall refer to this sample as \texttt{SAMPLE-RANDOM}. For example, the sample name ``\texttt{SAMPLE-RANDOM}: $M_h>10^{11}~M_{\odot}/h$: $z=2$" refers to a random sample of galaxies at $z=2$ whose mass function is identical~(by construction) to that of ``\texttt{SAMPLE-TREE}: $M_h>10^{11}~M_{\odot}/h$: $z=2$". \end{itemize} \subsection{Evolution of galaxy shapes and misalignment angles} In this subsection, we investigate how the shapes of galaxies~(and dark matter subhaloes), described by the axis ratios $q=\frac{b}{a}$ and $s=\frac{c}{a}$, and the misalignments between the stellar and dark matter components evolve with redshift along the merger tree. Figure~\ref{illustration} shows an illustration of the evolution of a single simulated galaxy along the merger tree from $z=3$ to $z=0.6$. We can see that the shape of the dark matter component~(yellow ellipse) becomes more spherical with decreasing redshift. Furthermore, at $z=3$, the stellar matter is significantly misaligned with respect to the dark matter, but the alignment becomes stronger as redshift decreases. In the following subsections, we shall show that these trends persist for the overall distribution of shapes and misalignment angles for the entire set of \texttt{SAMPLE-TREE} galaxies. \subsubsection{Shape} \begin{figure*} \includegraphics[width=\textwidth]{corr.png} \caption{$\omega(r)$ is the ellipticity-direction~(ED) correlation function of \texttt{SAMPLE-TREE} galaxies at different redshifts. Here we use the major axes of the stellar matter components, and galaxy positions as tracers of the matter distribution.
The bottom panels show the ratio of $\omega(r,z)$ to $\omega(r,z=0.6)$. Errorbars are jackknife errors obtained by dividing the simulation volume into eight octants.} \label{fig:corr} \includegraphics[width=\textwidth]{dm_bm.png} \caption{\textbf{Comparing ED correlation functions for \texttt{SAMPLE-TREE} galaxies and their dark matter subhaloes:} In the top panels, solid and dashed lines show the ED correlation functions of galaxies and their dark matter subhaloes, respectively. The ratio between the dashed and solid lines is shown in the bottom panels. Errorbars in the correlation function are jackknife errors obtained by dividing the simulation volume into eight octants.} \label{fig:dm_bm} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{comparison.png} \caption{$\omega_{\texttt{MCUT}}/\omega_{\texttt{TREE}}$ is the ratio of $\omega(r)$ of \texttt{SAMPLE-MCUT} to that of \texttt{SAMPLE-TREE} galaxies. Errorbars are jackknife errors obtained by dividing the simulation volume into eight octants.} \label{fig:comparison} \end{figure*} Figure~\ref{fig:q} shows the distributions $P(q|z,M_h)$ and $P(s|z,M_h)$ of the axis ratios $q$ and $s$ respectively. In Section \ref{S:appendixa}, we established that $\gtrsim$300 particles are required to reliably measure the shape; this dictates our choice of the minimum subhalo mass threshold of $M_h>10^{11}~M_{\odot}/h$ at $z=0.6$. The solid and dashed lines correspond to \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} respectively. The bottom panels show the ratio between the axis ratio distributions of \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} galaxies. \textbf{Subhalo mass dependence on the merger tree}: We first focus on the shapes of the dark matter subhaloes. For \texttt{SAMPLE-TREE} galaxies~(solid lines), we see that as subhalo mass increases, $P(q|z,M_h)$ and $P(s|z,M_h)$~(for dark matter) are increasingly skewed towards lower values of $q$ and $s$.
This is seen more clearly in the mean values of $q$ and $s$ in Figure B1. This implies that as subhalo mass increases, galaxies on the merger tree become less spherical at fixed redshift. This is also true for \texttt{SAMPLE-MCUT} galaxies~(dashed lines) and has been well established in previous studies~\citep{2005ApJ...618....1H,2006MNRAS.367.1781A,tenneti2015intrinsic}; therefore, it is not surprising that it persists for galaxies on the merger tree. For the shapes of the stellar matter component, the dependence on subhalo mass at $z\lesssim1.5$ is the same as that of the dark matter component for both $P(q|M_h)$ and $P(s|M_h)$, as also seen in \cite{tenneti2015intrinsic}. In other words, at $z\lesssim1.5$ more massive galaxies have less spherical stellar matter components~(the mass dependence is seen much more clearly in Figure~B2). However, this result does not persist all the way up to $z\sim3$. In fact, we see that the mass dependence of $P(q|M_h)$ is reversed~(i.e.\ $P(q|M_h)$ skews towards higher values with increasing subhalo mass) at $z\sim3$, while $P(s|M_h)$ has no significant mass dependence at $z\sim3$. Therefore, we find that at $z\sim3$, the sphericity of the stellar matter component of galaxies increases with increasing subhalo mass. To summarize the above trends, we find that: \begin{itemize} \item The shapes of the dark matter components of galaxies become less spherical with increasing subhalo mass. \item For the stellar matter components, the shapes become less spherical with increasing subhalo mass at $z\lesssim1.5$. The trend starts to reverse at $z\gtrsim1.5$, and by $z\sim3$ the shapes become more spherical with increasing subhalo mass. \end{itemize} \textbf{Redshift evolution on the merger tree}: We first focus on the shapes of the dark matter subhaloes.
For \texttt{SAMPLE-TREE} galaxies~(solid lines), we see in all three panels that as redshift decreases, the peaks of $P(q|z,M_h)$ and $P(s|z,M_h)$~(for dark matter) shift towards higher values of $q$ and $s$. This implies that as redshift decreases, galaxies on the merger tree evolve to become more spherical. This is also true for \texttt{SAMPLE-MCUT} galaxies~(dashed lines), as was previously reported in \cite{tenneti2015intrinsic}. It is also noteworthy that our results are consistent with \cite{2005ApJ...618....1H}, which investigated the evolution of the shapes of cluster-sized haloes~($M_h>2\times10^{13}~M_{\odot}/h$) in N-body simulations over roughly the same range of redshifts. The shape evolution of the stellar matter component shows significant differences compared to that of the dark matter~(as already hinted at in the discussion of the subhalo mass dependence). For instance, $P(s|z,M_h)$ tends towards being less spherical as redshift decreases. This trend is opposite to that of the dark matter. However, note also that the overall evolution of $P(s|z,M_h)$ is significantly weaker for stellar matter than for dark matter. For $P(q|z,M_h)$, the evolution is more complicated and depends on the subhalo mass threshold. For $M_h>10^{11}~M_{\odot}/h$, there is no significant evolution. On the other hand, for $M_h>10^{12}~M_{\odot}/h$ and $M_h>10^{13}~M_{\odot}/h$, the evolution is significant: $P(q|z,M_h)$ is less spherical at $z=0.6$ compared to $z=3$. To summarize the above trends, we find that: \begin{itemize} \item The shapes of the dark matter components of galaxies tend to become more spherical with time. \item The shapes of the stellar matter components of galaxies tend to become less spherical with time, especially for higher mass thresholds. \end{itemize} \textbf{Comparing \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT}}: We now compare the axis ratio distributions of \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT}~(see the ratio plots in Figure \ref{fig:q}).
For the dark matter shapes, we find that the axis ratio distributions of \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} are broadly consistent, i.e.\ there is no statistically significant difference in their shapes given the errorbars. The fact that this persists all the way up to $z=3$ is noteworthy, because at $z=3$ \texttt{SAMPLE-MCUT} galaxies are significantly more massive than \texttt{SAMPLE-TREE} galaxies. This suggests that at fixed redshift, the subhalo mass is not the sole parameter that determines the shapes of the dark matter components of galaxies. In particular, galaxies that are progenitors of lower-redshift galaxies above some mass threshold may be less spherical compared to a randomly chosen set of galaxies of similar subhalo mass. In order to show this explicitly, in Figure~\ref{fig:random} we compare the axis ratio distributions~(at $z=3$) of the dark matter components of \texttt{SAMPLE-TREE} galaxies to those of a random sample~(\texttt{SAMPLE-RANDOM}) whose mass function is constructed to be identical to that of \texttt{SAMPLE-TREE}. We see that the axis ratios of \texttt{SAMPLE-TREE} galaxies are smaller than those of \texttt{SAMPLE-RANDOM} galaxies. This is also true in general for $z\gtrsim1.5$. This supports the conclusion that early galaxies that are progenitors of present-day massive galaxies~($M_h>10^{11}~M_{\odot}/h$ at $z=0.6$) are more elliptical~(on average) than a randomly selected galaxy at similar subhalo mass and redshift. For the stellar matter shapes, the ratio plots show that at $z=3$, $P(q|M_h)$ for samples with mass thresholds of $M_h>10^{11}~M_{\odot}/h$ and $M_h>10^{12}~M_{\odot}/h$ is less spherical for \texttt{SAMPLE-TREE} galaxies compared to \texttt{SAMPLE-MCUT} galaxies. This is because \texttt{SAMPLE-MCUT} galaxies are more massive than \texttt{SAMPLE-TREE} galaxies at $z=3$~(we have already shown that stellar matter shapes are more spherical at higher subhalo masses at $z=3$).
$P(s|M_h)$, however, shows no significant difference between \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} at $z=3$ despite the difference in subhalo masses. This is simply because there is no significant mass dependence in $P(s|M_h)$ for stellar matter at $z=3$. The comparison of shapes between \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} galaxies at $z=3$ can now be summarized as follows: \begin{itemize} \item For the dark matter components, no difference is found between the shapes of \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} galaxies at $z=3$ despite the difference in masses. This is because at $z=3$, galaxies that are progenitors of $z\sim0.6$: $M_h\gtrsim10^{11}~M_{\odot}/h$ galaxies are significantly less spherical~(on average) than a \textit{randomly selected} galaxy of similar subhalo mass and redshift. \item For the stellar matter component, \texttt{SAMPLE-TREE} galaxies are less spherical compared to \texttt{SAMPLE-MCUT} galaxies at $z=3$. This is because \texttt{SAMPLE-MCUT} galaxies are more massive~(and we have shown that more massive galaxies have more spherical stellar matter components at $z=3$) than \texttt{SAMPLE-TREE} galaxies at $z=3$. \end{itemize} \subsubsection{Misalignment angle} In this section, we investigate how the misalignment angle of galaxies on the tree evolves with redshift. The solid lines in Figure~\ref{fig:theta_hist}~(top panels) show the distribution~$P(\theta|z,M_h)$ of misalignment angles~($\theta$) at different redshifts and subhalo mass cuts for \texttt{SAMPLE-TREE} galaxies. The distributions are skewed, with a maximum at $\theta_m\sim5-10~\mathrm{deg}$ accompanied by a long tail at $\theta_m>10~\mathrm{deg}$ and a sharp fall-off at $\theta_m<5~\mathrm{deg}$. At fixed redshift, as the subhalo mass increases, $P(\theta|M_h)$ skews towards smaller values of $\theta$~(seen more clearly in Figure B3). This implies that more massive galaxies are more aligned with their subhaloes.
$P(\theta|M_h)$ skews towards smaller $\theta$ as redshift decreases, implying that galaxies evolve over time to become increasingly aligned with their subhaloes, although the evolution is mild. The evolution of the misalignment angle can be put in the context of existing IA models. The fact that the evolution is mild suggests that it may be mediated by the evolution of the instantaneous tidal field. This is hinted at by the fact that the contribution of the instantaneous tidal field is small~(compared to observations), as predicted by the analytical model presented in \cite{2015A&A...575A.113C}. In such a scenario, the redshift evolution, contributed by the instantaneous tidal field, can be thought of as a perturbation to the pre-existing alignment~($\theta_m\sim10~\mathrm{deg}$). Given its strength, the pre-existing alignment is likely set by the primordial~(at the formation epoch of these galaxies) tidal field, as assumed in linear alignment models~\citep{2001MNRAS.320L...7C,hirata2004intrinsic}. We also compare $P(\theta|z,M_h)$ for \texttt{SAMPLE-TREE} galaxies to the predictions for \texttt{SAMPLE-MCUT} galaxies~(solid vs.\ dashed lines in Figure \ref{fig:theta_hist}~top panels); Figure \ref{fig:theta_hist}~(bottom panels) shows the ratio $\delta_{\theta}$. For $M_h>10^{11,12}~M_{\odot}/h$, we find that $\delta_{\theta}<1$ for $\theta<25~\mathrm{deg}$ and $\delta_{\theta}>1$ for $\theta>25~\mathrm{deg}$ at all redshifts. This implies that \texttt{SAMPLE-TREE} galaxies are less aligned with their subhaloes compared to \texttt{SAMPLE-MCUT} galaxies. At $z=1.5$ and $z=3$, one would expect this to be the case, as \texttt{SAMPLE-MCUT} galaxies are more massive, and therefore more aligned, than \texttt{SAMPLE-TREE} galaxies. However, we also see the same effect at $z=0.6$, where both \texttt{SAMPLE-MCUT} and \texttt{SAMPLE-TREE} galaxies have the same subhalo mass thresholds. This implies that galaxies which formed between $0.6\lesssim z\lesssim 3$~(i.e.
those that do not have progenitors up to $z=3$) are more aligned with their subhaloes than those that formed at $z>3$. We have so far discussed the evolution of the distributions of galaxy shapes and misalignment angles. In Appendix~\ref{S:appendixb}, we present the evolution of the average values of the axis ratios and misalignment angles, and provide simple fitting functions to quantify them. \subsection{Ellipticity-direction~(ED) Correlation function} In this section, we investigate how the ellipticity-direction~(ED) correlation function $\omega(r)$ of galaxies on the merger tree evolves with redshift. The top panels in Figure~\ref{fig:corr} show $\omega(r)$ for \texttt{SAMPLE-TREE} galaxies and its redshift evolution along the merger tree. The bottom panels show the ratio $\omega(r,z)/\omega(r,z=0.6)$. They reveal the evolution of the ED correlation over a wide range of scales that will be probed by LSST weak lensing~\citep{2018arXiv180901669T}. These include scales $\gtrsim5~\mathrm{Mpc}/h$ where the NLA model and its extensions, such as \cite{2016IAUS..308..452B}, already work well. Additionally, our simulations also reveal ED correlations at smaller scales which are not well probed by these analytical models. Accordingly, we choose $\sim 1~\mathrm{Mpc}/h$ as an interesting scale around which we shall now describe the evolution of the ED correlation. At $r>1~\mathrm{Mpc}/h$, we see that the correlation function is a power law in $r$. The slope of the power law does not vary significantly with redshift or subhalo mass. The power-law amplitude increases with subhalo mass at fixed redshift, as also reported in \cite{tenneti2015intrinsic}. The ED correlation amplitude increases with decreasing redshift along the merger tree~(by up to factors of $\sim4$ from $z=3$ to $z=0.6$).
At sufficiently small scales~($r\lesssim1~\mathrm{Mpc}/h$), $\omega(r)$ deviates from a power law and is suppressed~(compared to the power-law extrapolation from large scales). The extent of the suppression increases with decreasing redshift. As we approach even smaller scales, $\sim 0.1~\mathrm{Mpc}/h$, the redshift evolution is reversed compared to large scales, i.e., $\omega(r)$ decreases with decreasing redshift along the merger tree~(by up to factors of $\sim2$ from $z=3$ to $z=0.6$). We compare the $\omega(r)$ predictions of \texttt{SAMPLE-TREE} to those of \texttt{SAMPLE-MCUT}; Figure~\ref{fig:comparison} shows the ratio between the two as a function of $r$. We find that as redshift increases, $\omega(r)$ for \texttt{SAMPLE-TREE} becomes increasingly suppressed at scales $r\gtrsim1~\mathrm{Mpc}/h$ as compared to that of \texttt{SAMPLE-MCUT}; at $z=3$ the suppression is by factors of $3-4$. At $r\lesssim1~\mathrm{Mpc}/h$, the differences are relatively small~(factors of $\lesssim2$). These differences arise largely because \texttt{SAMPLE-TREE} galaxies are less massive than \texttt{SAMPLE-MCUT} galaxies at higher redshifts. In the following subsections, we dig deeper into these results by first putting them in the context of galaxy-subhalo misalignments, and then revealing the factors that drive the evolution of the ED correlations at different scales. \subsubsection{Implications of galaxy-subhalo misalignment for the ED correlation} We now study the implications of the galaxy-subhalo misalignment and its evolution for the ED correlation function. To do this, we compare the ED correlations of galaxies~(also shown in Figure~\ref{fig:corr}) to those of their underlying dark matter subhaloes. The top panel of Figure~\ref{fig:dm_bm} shows the ED correlation functions of \texttt{SAMPLE-TREE} galaxies, where the solid and dashed lines correspond to galaxies and dark matter subhaloes, respectively.
As a consequence of the misalignment between stellar matter and dark matter, the solid lines showing the galaxy ED correlation functions are significantly suppressed compared to the subhalo ED correlation functions~(by factors of $\sim2-4$) at all scales. This implies that the alignment of galaxies with respect to the surrounding density field is suppressed compared to that of their dark matter subhaloes. This has been established in previous works~\citep{2015MNRAS.453..469T}, and is also supported observationally by the alignments of luminous red galaxies \citep{2009ApJ...694..214O}. We now discuss how this suppression evolves with redshift on the merger tree. In the bottom panel of Figure~\ref{fig:dm_bm}, we see that the ratio $\omega_{\mathrm{DM subhalo}}/\omega_{\mathrm{galaxy}}$ decreases with decreasing redshift; this is because the galaxy-subhalo misalignment decreases with decreasing redshift. Furthermore, the evolution is stronger for $M_h>10^{11}~M_{\odot}/h$ haloes than for $M_h>10^{13}~M_{\odot}/h$ haloes. This is because at $z=3$, $M_h>10^{13}~M_{\odot}/h$ galaxies are more aligned with their subhaloes than $M_h>10^{11}~M_{\odot}/h$ galaxies~(compare the leftmost and rightmost panels of Figure \ref{fig:theta_hist}). \subsubsection{What drives the evolution of ED correlation at different scales?} Here, we discuss the factors driving the evolution of the galaxy ED correlation at different scales, as inferred from Figure~\ref{fig:dm_bm}. At scales $\gtrsim1~\mathrm{Mpc}/h$, note that the ED correlations for dark matter subhaloes~(dashed lines) undergo a significantly weaker redshift evolution than those of galaxies~(solid lines). In fact, there is no significant evolution for $M_h>10^{11}~M_{\odot}/h$ and $M_h>10^{12}~M_{\odot}/h$ subhaloes.
Therefore, the fact that we find a significant evolution for the galaxy ED correlation implies that its evolution at scales $>1~\mathrm{Mpc}/h$ is primarily driven by the evolution of the galaxy-subhalo misalignment, as opposed to being driven by the ED correlation for dark matter haloes. At scales $\lesssim1~\mathrm{Mpc}/h$, a suppression~(compared to a power law) is seen in the ED correlations for both galaxies and their dark matter subhaloes. Furthermore, the suppression in the galaxy ED correlation simply traces that of the dark matter subhalo, but at a lower normalization. Overall, this tells us that the evolution of the ED correlation profile for galaxies at scales $\lesssim1~\mathrm{Mpc}/h$ is governed by the evolution of both 1) the ED correlation for dark matter haloes, and 2) the misalignment between galaxies and subhaloes. The former leads to a decrease in the ED correlation for galaxies with time, whereas the latter drives an increase in the ED correlation for galaxies. Due to the complex interplay between these two competing effects, no straightforward trend is seen in the evolution of the ED correlation at scales $\sim 1~\mathrm{Mpc}/h$~(to be targeted by LSST). At very small scales~($\sim 0.1~\mathrm{Mpc/h}$), the suppression in the ED correlation of DM subhaloes is strong enough that its evolution dominates over that of the galaxy-subhalo misalignment angle. This causes the reversal in the redshift evolution of $\omega(r)$ for galaxies at these scales, compared to that at scales $>1~\mathrm{Mpc}/h$. \section{Conclusions} \label{S:conclusions} This work is part of an ongoing series of papers dedicated to studying the intrinsic alignments~(IA) of galaxies using the \texttt{MassiveBlackII} cosmological hydrodynamic simulation.
In this work, we study the redshift evolution of IA over $0.6\lesssim z \lesssim 3$ by selecting galaxy samples (\texttt{SAMPLE-TREE}) based on subhalo mass cuts~($M_h>10^{11,12,13}~M_{\odot}/h$) at $z=0.6$ and tracing their progenitors to $z=3$ along a merger tree. We study the redshift evolution of galaxy shapes, the misalignment with respect to the host subhalo, and the ED correlation functions along the merger tree. Our key findings are as follows: \begin{itemize} \item The sphericity of the dark matter component of galaxies increases with time, whereas that of the stellar matter component decreases with time. \item The distribution of the galaxy-subhalo misalignment angle peaks at $\sim$10~deg. With decreasing redshift, the distribution becomes narrower and more skewed towards smaller misalignment angles. \item The evolution of the ellipticity-direction~(ED) correlation~$\omega(r)$ of galaxies is driven by the evolution of their alignment with respect to their host DM subhaloes, as well as the alignment between DM subhaloes and the surrounding matter overdensity. \begin{itemize} \item At scales $\sim1~\mathrm{cMpc}/h$, the alignment between DM subhaloes and the matter overdensity gets suppressed with time. On the other hand, the alignment between galaxies and DM subhaloes is enhanced. Due to these competing tendencies, the redshift evolution of $\omega(r)$ for galaxies at $\sim1~\mathrm{cMpc}/h$ is not straightforward. \item At scales $>1~\mathrm{cMpc}/h$, there is no significant evolution in the alignment between DM subhaloes and the matter overdensity. As a result, the evolution of the galaxy-subhalo misalignment leads to an increase in $\omega(r)$ for galaxies by a factor of $\sim$4 from $z=3$ to $0.6$. \item At scales $\sim0.1~\mathrm{cMpc}/h$, the evolution in $\omega(r)$ for galaxies is completely reversed compared to that at scales $\gtrsim1~\mathrm{cMpc}/h$, i.e., it decreases by factors of $\sim 2$ from $z=3$ to $0.6$.
This is because at these scales, the alignment between DM subhaloes and the matter overdensity is strongly suppressed with time, and this effect dominates over the evolution of the galaxy-subhalo misalignment. \end{itemize} \end{itemize} We also compare our results with the sample selection applied in the previous work of this series \citep{tenneti2015intrinsic}. In particular, we also considered galaxy samples~(\texttt{SAMPLE-MCUT}) with fixed subhalo mass cuts~($M_h>10^{11,12,13}~M_{\odot}/h$), applied at all redshifts between 0.6 and 3. Interestingly, upon comparing the sphericities of the dark matter components of \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} galaxies, we find that they do not differ significantly~($\lesssim 10\%$); this is true even at the highest redshift~($z=3$), where \texttt{SAMPLE-TREE} galaxies are significantly less massive than \texttt{SAMPLE-MCUT} galaxies. This is explained by our finding that at $z\gtrsim1.5$, progenitors of $z\sim0.6$ galaxies with $M_h\gtrsim10^{11}~M_{\odot}/h$ have significantly less spherical~(on average) dark matter shapes than a \textit{randomly selected} galaxy of similar subhalo mass and redshift. For the stellar matter component, we find that \texttt{SAMPLE-TREE} progenitors at $z=3$ are less spherical than \texttt{SAMPLE-MCUT} galaxies. This is because \texttt{SAMPLE-MCUT} galaxies are more massive~(and we show that more massive galaxies have more spherical stellar matter components) than \texttt{SAMPLE-TREE} galaxies at $z=3$. We find that \texttt{SAMPLE-TREE} galaxies are less aligned with their subhaloes than \texttt{SAMPLE-MCUT} galaxies. At $z=1.5$ and $z=3$, this can be attributed to the differences between their subhalo masses. But the fact that we also see this at $z=0.6$ further implies that galaxies which formed earlier than $z=3$~(i.e. those that do not have progenitors up to $z=3$) are more aligned than those that formed at $z<3$.
The effect of differences in subhalo masses~(at $z>0.6$) of \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} galaxies is also seen in their ED correlation function $\omega(r)$. Compared to \texttt{SAMPLE-MCUT}, $\omega(r)$ for \texttt{SAMPLE-TREE} galaxies is suppressed at increasing redshift~(by factors up to $\sim3-4$ at $z=3$); this is due to the decreasing subhalo masses of progenitors in \texttt{SAMPLE-TREE} at increasing redshift. This work demonstrates that hydrodynamic simulations such as MBII are indispensable tools for studying the redshift evolution of galaxy properties such as IA, primarily because of the ability to directly trace progenitors of present-day galaxies by constructing merger trees. This enables us to disentangle true IA evolution from apparent evolution due to sample selection effects, which are inevitable in observations. Future work will involve the use of the results from this study, as well as previous works \citep{tenneti2014galaxy,tenneti2015intrinsic,tenneti2016intrinsic}, to construct halo models for the IA of galaxies. These models can then be used to construct mock catalogs by populating N-body simulation volumes, and thereby analyse possible systematic biases caused by IA in weak lensing analyses. \section*{Acknowledgements} We thank Yu Feng for providing the MB-II simulation snapshots and raw data. This research is supported by the US National Science Foundation under Grant No.\ 1716131. TDM acknowledges funding from NSF ACI-1614853, NSF AST-1517593, NSF AST-1616168, NASA ATP 80NSSC18K1015 and NASA ATP 17-0123. \addcontentsline{toc}{section}{Acknowledgements} \bibliographystyle{mnras} \section{Introduction} The shapes and orientations of galaxies are intrinsically correlated with those of nearby galaxies and with the overall matter distribution; this effect is known as galaxy intrinsic alignments \citep[IA; see][and references therein for review]{troxel2015intrinsic,joachimi2015galaxy,kiessling2015galaxy,kirk2015galaxy}.
The importance of IA is twofold: 1) IA emerges as a natural outcome of the current paradigm of galaxy formation in the $\Lambda$CDM cosmological model, as demonstrated in state-of-the-art cosmological hydrodynamic simulations that include direct modeling of galaxy formation~\citep[e.g.,][]{tenneti2014galaxy,2015MNRAS.454.3328V,2015MNRAS.454.2736C,2017MNRAS.468..790H}. IA is therefore a promising probe of galaxy formation physics. 2) If not properly modeled and removed, IA is a significant source of systematic bias in inferring cosmological parameters in weak lensing studies~\citep{2016MNRAS.456..207K}. Many upcoming surveys, such as the Large Synoptic Survey Telescope \citep[LSST;][]{2008arXiv0805.2366I,abell2009lsst}, Euclid \citep{laureijs2011euclid}, and the Wide-Field Infrared Survey Telescope (WFIRST; \citealt{spergel2015wide}), aim to determine the dark energy equation of state to very high precision using weak lensing, and IA is one of the major sources of astrophysical systematic uncertainty for such studies \citep{2018ARA&A..56..393M}. The existence of IA in galaxies, with correlations out to scales of 100~$h^{-1}$Mpc, has been firmly established in observational data \citep[e.g.,][]{2006MNRAS.367..611M, 2007MNRAS.381.1197H,2011A&A...527A..26J,singh2015intrinsic}. An understanding of intrinsic alignments and their scaling with galaxy mass and redshift is therefore crucial to mitigating this effect in weak lensing studies, and is also a good diagnostic for galaxy formation physics. Intrinsic alignments have been studied using analytical methods such as the linear alignment model~\citep{2001MNRAS.320L...7C}, the nonlinear alignment model \citep{2007NJPh....9..444B}, and the full tidal alignment model \citep{blazek2015tidal}. While these methods are easy to implement and computationally inexpensive, they inevitably rely on assumptions about the alignment between galaxies and the underlying tidal field.
This limitation can be overcome by state-of-the-art cosmological hydrodynamic simulations~\citep[e.g.,][]{2014MNRAS.444.1453D,2015MNRAS.446..521S,khandai2015massiveblack,vogelsberger2014introducing}, which can directly probe the impact of galaxy formation physics on the shapes and alignments of galaxies and their relation to their dark matter counterparts~(halos/subhalos) and the tidal fields themselves. Therefore, in recent years galaxy shapes and alignments have been extensively studied using hydrodynamic simulations \citep[e.g.,][]{2015MNRAS.454.2736C,tenneti2016intrinsic,2017MNRAS.472.1163C,2017MNRAS.468..790H}. An important step towards understanding galaxy intrinsic alignments is to study their redshift evolution. This has been initiated by a series of works~\citep{tenneti2015intrinsic} using the \texttt{MassiveBlackII} (MBII) hydrodynamic simulation \citep{khandai2015massiveblack}, including a detailed study of the redshift evolution of galaxy shapes, alignment with respect to the host halo/subhalo, and the associated shape-density correlation functions. A noteworthy feature of these works was that the sampling of galaxies was based on a fixed subhalo mass cut~($\gtrsim 10^{11},~10^{12},~10^{13}~M_{\odot}/h$) at each redshift~(from $z\sim0.06-1$); this is somewhat representative of cuts applied to observed galaxy samples in properties such as stellar mass or magnitude, which are known to correlate with the host subhalo mass. However, with such an approach, the resulting redshift evolution may be dominated by the effects of sample selection. In order to study the \textit{intrinsic} redshift evolution~(i.e. separated from the effects of sample selection), we must select samples of galaxies at a given redshift and trace their progenitors to higher redshifts. In this work, we study the redshift evolution of the IA properties of MBII galaxies by making subhalo mass cuts at a single fixed redshift~($z\sim0.6$) and then tracing the properties of their progenitors along a merger tree.
In Section~\ref{S:methods}, we outline the basic methodology and definitions. In Section~\ref{S:results}, we study the redshift evolution of galaxy properties~(axis ratios, galaxy-subhalo misalignment angle and density-shape correlation functions) on the merger tree. We summarize our key results in Section~\ref{S:conclusions}. \section{Methods} \label{S:methods} \subsection{MassiveBlack-II simulation} \begin{figure} \includegraphics[width=80mm]{eff.png} \caption{$\eta_{\mathrm{matching}}$ is the \textit{matching efficiency}, i.e.\ the ratio of the number of matched \texttt{SUBFIND} trees to the original number of \texttt{ROCKSTAR} trees~(before matching \texttt{ROCKSTAR} and \texttt{SUBFIND} trees). $1-\eta_{\mathrm{matching}}$ is therefore the fraction of \texttt{ROCKSTAR} trees lost because we could not find a corresponding \texttt{SUBFIND} tree to match with. ``$\geq\log(M^H_{z=0.6})$'' is the threshold subhalo mass of galaxies selected at $z=0.6$; $z_f$ is the maximum redshift up to which their progenitors are traced~(starting from $z_i=0.6$). } \label{matching_efficiency} \end{figure} \begin{figure} \includegraphics[width=7.5cm]{conv.png} \caption{\textbf{Shape convergence test:} Normalized histograms of $q=\frac{b}{a}$ of the dark matter component of \texttt{SAMPLE-TREE} galaxies at $z = 0.6$. We compare shapes determined using all particles in the subhalo with those obtained using a random subsample of $N_{\mathrm{part}}=50, 100, 300, 1000$ particles in the subhalo.} \label{shape_convergence} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{heatmap.png} \caption{The 2D histograms show the dark matter mass~($M_h$) versus stellar mass~($M_*$) relation of galaxies~(and dark matter subhaloes) on 27942 trees corresponding to $M_{h}>10^{11}M_{\odot}/h$ galaxies at $z=0.6$~(leftmost panel) and their main progenitors at $z=1.5$~(middle panel) and $z=3$~(rightmost panel).
} \label{SM_HM_fig} \end{figure*} We briefly describe \texttt{MassiveBlack-II} (MB-II), a state-of-the-art cosmological hydrodynamic simulation of structure formation \citep{khandai2015massiveblack}. MB-II is evolved from $z=159$ to $z=0.06$ in a cubic periodic box of comoving volume $V_{\mathrm{box}}=(100~h^{-1}\mathrm{Mpc})^3$ with a gravitational smoothing length of $\epsilon = 1.85 h^{-1}~\mathrm{kpc}$. The box contains $2\times 1792^3$ particles (dark matter+gas). The mass of a single dark matter particle is $m_{\mathrm{DM}}=1.1\times 10^7 h^{-1} M_{\odot}$ and that of a single gas particle is $m_{\mathrm{gas}}=2.2\times 10^6 h^{-1} M_{\odot}$. The cosmological parameters used in the simulation are based on WMAP7 \citep{komatsu2011astrophys} with amplitude of matter fluctuations $\sigma_8 = 0.816$, spectral index $n_s = 0.96$, mass density parameter $\Omega_m = 0.275$, cosmological constant density parameter $\Omega_{\Lambda} = 0.725$, baryon density parameter $\Omega_{b} = 0.046$, and Hubble parameter $h = 0.702$. Halos are identified using a friends-of-friends (FOF) halo finder \citep{davis1985evolution} with a linking length of 0.2 times the mean particle separation. \subsection{Galaxy identification} Here we describe how galaxies are identified in MBII. Galaxies are defined to be the stellar component of \textit{subhalos}, which are locally overdense, self-bound particle groups within a larger parent group~(FOF halo). The subhalo catalogs are generated by running the substructure finder \texttt{SUBFIND} on the halo catalogs. In \texttt{SUBFIND}, for each particle in the parent group, a local density is estimated using the positions of a prescribed number of nearest neighbours. After identifying the local peaks in the density field, it rebuilds the parent group by adding particles in order of decreasing density. In doing so, a saddle point is eventually reached which connects two disjoint overdense regions.
The smaller structure is then identified as a candidate substructure. For further implementation details, see the original paper \citep{springel2001populating}. \subsection{Constructing the galaxy merger tree} \label{merger_tree_sec} In this section, we describe the key steps involved in the construction of the galaxy merger tree. To begin with, halo/subhalo merger trees were identified by running the \texttt{ROCKSTAR}~\citep{behroozi2012rockstar} halo/subhalo finder along with \texttt{CONSISTENT-TREES}~\citep{behroozi2012gravitationally}, both of which are described in the following two subsections. \subsubsection{\texttt{ROCKSTAR}} \label{rockstar_sec} \texttt{ROCKSTAR} (or `Robust Overdensity Calculation using K-Space Topologically Adaptive Refinement') is an algorithm based on adaptive hierarchical refinement of FOF groups. Primary FOF groups are first identified using a FOF finder. Within each FOF group, a hierarchy of FOF subgroups~(in phase space) is identified using an adaptive refinement of the linking length. The FOF subgroups at the lowest~(deepest) level of the hierarchy are then converted into seed haloes. Starting with the lowest level of the hierarchy, the FOF subgroup particles are assigned to the seed haloes based on phase-space distances; this process is repeated for the higher levels of the hierarchy until all particles of the parent FOF group have been assigned to a halo. After assigning all the particles, the host-subhalo relationship is calculated by assigning a seed halo to be a \textit{subhalo} of the closest seed halo~(within the same FOF group) with a larger number of assigned particles. This process is performed until all the seed haloes are either \textit{host haloes} or \textit{subhaloes}. For further implementation details, see the original paper \citep{behroozi2012rockstar}.
\subsubsection{\texttt{CONSISTENT-TREES}} \label{trees_sec} We build a merger tree for our \texttt{ROCKSTAR} haloes/subhaloes using the \texttt{CONSISTENT-TREES} algorithm \citep{behroozi2012gravitationally}. \texttt{CONSISTENT-TREES} is an extension of traditional \textit{particle-based} tree-building algorithms~(which construct trees by tracing trajectories of halo/subhalo particles across different time steps); such algorithms can potentially compromise the \textit{continuity} of halo/subhalo properties across simulation time steps, due to the issues listed in Section 2.2 of \cite{behroozi2012gravitationally}. \texttt{CONSISTENT-TREES} resolves this problem by tracing~(in addition to particles) a subset of halo/subhalo properties, including halo mass, maximum circular velocity, halo position, and bulk velocity. A major component of the algorithm is to ensure continuity in these halo properties by construction. This is achieved by running a particle-based tree finder and establishing preliminary links between progenitor haloes~(at time step $t_{\mathrm{n-1}}$) and descendant haloes~(at time step $t_{\mathrm{n}}$). The subsequent steps consist of the following actions: \begin{enumerate} \item Gravitationally tracing the positions of descendant haloes from $t_{\mathrm{n}}$ to $t_{\mathrm{n-1}}$ to obtain their most likely progenitors at $t_{\mathrm{n-1}}$; removing progenitors whose properties do not resemble the most likely progenitors of the corresponding descendants. \item For each descendant halo at $t_{\mathrm{n}}$ that lacks a progenitor at $t_{\mathrm{n-1}}$ after step (i), a \textit{phantom} progenitor is assigned with halo properties identical to its most likely progenitor at $t_{\mathrm{n-1}}$; however, those descendant haloes that do not have progenitors for a sufficiently large sequence of time steps are removed.
\item Finally, if a halo at $t_{\mathrm{n-1}}$ has no descendant at $t_{\mathrm{n}}$ after step (ii), it is \textit{merged} with a halo~(at $t_{\mathrm{n}}$) in its vicinity that has the strongest tidal field; additionally, the halo is removed as a statistical fluctuation if it is too far away from other haloes to experience any significant tidal field. \item Steps (i) to (iii) are iterated over the range of time steps (where each iteration corresponds to a pair of time slices $t_{n-1}$ and $t_n$) from the final time $t_f$ to the initial time $t_i$. This establishes a lineage of haloes over the time range $t_i$ to $t_f$. \end{enumerate} Readers interested in more details are referred to Section 5 of \cite{behroozi2012gravitationally}. \subsubsection{Constructing galaxy merger tree: Matching \texttt{ROCKSTAR} and \texttt{SUBFIND}} \label{matching_sec} The subhalo merger trees obtained using \texttt{ROCKSTAR-CONSISTENT TREES} are dark matter only. In order to construct the galaxy merger tree for our \texttt{SUBFIND} galaxies, we must match the subhaloes on the \texttt{ROCKSTAR} merger tree to our \texttt{SUBFIND} galaxies. We perform the following steps for the matching: \begin{enumerate} \item For a given \texttt{ROCKSTAR} subhalo (mass $M_h^{\mathrm{RS}}$) denoted by \texttt{SUBHALO-RS}, we select all \texttt{SUBFIND} subhalos~(with mass $M_h^{\mathrm{sub}}$) which satisfy $0.5\times M_h^{\mathrm{RS}}<M_h^{\mathrm{sub}}<2\times M_h^{\mathrm{RS}}$ and lie within a maximum distance of $5\times R^{\mathrm{RS}}_{\mathrm{vir}}$, where $R^{\mathrm{RS}}_{\mathrm{vir}}$ is the virial radius of the \texttt{ROCKSTAR} subhalo. We then choose the \texttt{SUBFIND} subhalo that is closest to the \texttt{ROCKSTAR} subhalo, denoted by \texttt{SUBHALO-RS-SUB}.
\item For the \texttt{SUBFIND} subhalo \texttt{SUBHALO-RS-SUB}, we select all \texttt{ROCKSTAR} subhalos~(with mass $M_h^{\mathrm{RS}}$) which satisfy $0.5\times M_h^{\mathrm{sub}}<M_h^{\mathrm{RS}}<2\times M_h^{\mathrm{sub}}$ and lie within a maximum distance of $5\times R^{\mathrm{sub}}_{\mathrm{vir}}$, where $R^{\mathrm{sub}}_{\mathrm{vir}}$ is the virial radius of the \texttt{SUBFIND} subhalo. We then choose the \texttt{ROCKSTAR} subhalo that is closest to the \texttt{SUBFIND} subhalo, denoted by \texttt{SUBHALO-RS-SUB-RS}. \item If~(and only if) we retrieve the original \texttt{ROCKSTAR} subhalo at the end of step (ii), i.e., \texttt{SUBHALO-RS-SUB-RS} is identical to \texttt{SUBHALO-RS}, we say that \texttt{SUBHALO-RS}~(from the \texttt{ROCKSTAR} merger tree) and \texttt{SUBHALO-RS-SUB}~(from the \texttt{SUBFIND} catalog) have been \textit{matched}. \end{enumerate} In order to generate a corresponding \texttt{SUBFIND} galaxy merger tree from a \texttt{ROCKSTAR} merger tree, every \texttt{ROCKSTAR} subhalo on the tree must be matched with a \texttt{SUBFIND} galaxy for the redshift range of interest~($z_i\leq z\leq z_f$). If the matching fails at any redshift within $z_i\leq z\leq z_f$, the entire tree is discarded. We quantify the matching success rate by defining a \textit{matching efficiency} $\eta_{\mathrm{matching}}$ as the ratio of the number of matched \texttt{SUBFIND} trees to the number of original \texttt{ROCKSTAR} trees~(present before matching). Figure \ref{matching_efficiency} shows $\eta_{\mathrm{matching}}$ as a function of $M_h$ at various values of $z_f$~($z_i=0.6$). For $z_f=1.5$~(red line), the efficiency is $86\%$ for all masses. At higher $z_f$, we lose more trees~(as expected) and the efficiency decreases to $75-82\%$ for $z_f=3$. This translates to a total of 27942 \texttt{SUBFIND} galaxy merger trees with progenitors up to redshift 3.
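The bidirectional matching criterion of steps (i)--(iii) can be summarized in a short sketch. The Python code below is our own schematic illustration under simplifying assumptions~(plain arrays, no periodic box, the factor-of-two mass window and $5R_{\mathrm{vir}}$ search radius quoted in the text); the function names are ours, not those of any actual pipeline:

```python
import numpy as np

def closest_partner(i, pos_a, m_a, rvir_a, pos_b, m_b):
    """Index of the catalogue-B subhalo closest to A[i], among those whose mass
    lies within a factor of two of m_a[i] and which lie within 5 * rvir_a[i];
    returns -1 if no subhalo qualifies."""
    d = np.linalg.norm(pos_b - pos_a[i], axis=1)
    ok = (m_b > 0.5 * m_a[i]) & (m_b < 2.0 * m_a[i]) & (d < 5.0 * rvir_a[i])
    if not ok.any():
        return -1
    cand = np.flatnonzero(ok)
    return cand[np.argmin(d[cand])]

def match_catalogues(pos_rs, m_rs, rvir_rs, pos_sub, m_sub, rvir_sub):
    """ROCKSTAR subhalo i is matched to SUBFIND subhalo j only when the search
    is symmetric: j is the closest partner of i AND i the closest partner of j."""
    pairs = {}
    for i in range(len(m_rs)):
        j = closest_partner(i, pos_rs, m_rs, rvir_rs, pos_sub, m_sub)
        if j >= 0 and closest_partner(j, pos_sub, m_sub, rvir_sub, pos_rs, m_rs) == i:
            pairs[i] = j
    return pairs
```

Requiring the closest-partner search to succeed in both directions makes the match one-to-one and discards ambiguous cases; such failed matches are what drive $\eta_{\mathrm{matching}}$ below unity.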
This sample is sufficient for a statistical analysis and, to avoid a further decrease in efficiency, we choose not to trace progenitors beyond redshift 3, hereafter defining the redshift range of our study to be $0.6\leq z\leq3$. We chose $z\geq 0.6$ since this is the period when galaxy formation and merger processes are most active. \subsection{Shapes of galaxies and dark matter halos} \begin{figure*} \begin{center} \includegraphics[width=1.17\textwidth]{halo_vis.png} \end{center} \vspace{-1cm} \caption{A 2-d illustrative example of the evolution of a MBII galaxy on the merger tree. The red histograms show the distribution of stars and the grey histograms show the distribution of the underlying dark matter. The yellow ellipse represents the shape identified using dark matter particles, while the green ellipse represents the shape identified using stellar matter particles; the yellow and green dashed lines show the corresponding major-axis directions. We can see that the subhalo shape becomes more spherical from $z=3$ to $z=0.6$. Furthermore, the alignment between the stellar matter and dark matter shapes becomes stronger from $z=3$ to $z=0.6$.} \label{illustration} \end{figure*} \begin{figure*} \includegraphics[width=130mm]{hist.png} \caption{Distribution of galaxy shapes: $P(q|z,M_h)$ (top) and $P(s|z,M_h)$ (bottom) show the normalized probability distributions of the axis ratios $q=\frac{b}{a}$ and $s=\frac{c}{a}$ of the dark/stellar matter components of galaxies~(subhaloes). Solid lines and dashed lines correspond to the galaxy samples \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} respectively~(see Section~\ref{sample_definitions} for the definition of the galaxy samples). $\delta_q$ and $\delta_s$ correspond to the ratios of $P(q|z,M_h)$ and $P(s|z,M_h)$ respectively between \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} galaxies. The errorbars are $1\sigma$ Poisson errors.} \label{fig:q} \end{figure*} We now describe how galaxy shapes are quantified.
We model the shapes of the dark matter and stellar matter components of subhalos as ellipsoids in three dimensions by using the eigenvalues and eigenvectors of the \textit{reduced} inertia tensor~\citep{2005ApJ...627..647B,tenneti2014galaxy} given by \begin{equation} I_{ij}=\frac{\Sigma_{n}m_{n}\frac{x_{ni}x_{nj}}{r_n^2}}{\Sigma_{n}m_{n}} \label{inertia_tensor} \end{equation} where $m_n$ is the mass of the $n^{\rm th}$ particle and $x_{ni}$ and $x_{nj}$ represent the $i$ and $j$ components of the position of the $n^{\rm th}$ particle ($0\le i,j\le 2$). $r_n$ is the distance of the $n^{\rm th}$ particle from the subhalo center and is given by $r_n^2=\sum x_{ni}^2$. We denote the principal axis directions or eigenvectors~(unit vectors) of $I_{ij}$ by $(\hat{e}_a, \hat{e}_b, \hat{e}_c)$ with corresponding eigenvalues $(\lambda_a, \lambda_b, \lambda_c)$. The lengths of the principal axes $(a,b,c)$ are given by $(\sqrt{\lambda_a}, \sqrt{\lambda_b}, \sqrt{\lambda_c})$. The ellipticities can then be measured by the axis ratios \begin{equation} q=\frac{b}{a},\quad s=\frac{c}{a}, \label{axes_ratio} \end{equation} where $a$ is the length of the primary~(largest) axis. A perfectly spherical subhalo corresponds to $q=s=1$, while a triaxial subhalo corresponds to $s<q<1$. For a more robust measure of the shape, we adopt an iterative approach wherein we first determine the principal axes and axis ratios using all the particles in the subhalo, thereby determining the ellipsoidal volume. For each successive iteration, we then recalculate the inertia tensor and axis ratios ignoring particles outside the ellipsoidal volume. We repeat this until each iteration leads to a $\lesssim1\%$ change in $a$, $b$ and $c$. \subsubsection{Shape convergence test} \label{S:appendixa} We require a sufficiently large number of particles to reliably measure galaxy~(subhalo) shapes. Here, we determine this minimum number of particles.
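The shape measurement of Eqs.~(\ref{inertia_tensor}) and~(\ref{axes_ratio}) can be sketched in a few lines of numpy. This is our own single-pass illustration~(positions assumed already centred on the subhalo), not the MBII pipeline; the iterative refinement described above would wrap \texttt{shape} in a loop that discards particles outside the current ellipsoid until $a$, $b$, $c$ change by $\lesssim1\%$. The same eigenvectors supply the major axis $\hat{e}_a$ used for the misalignment angle defined below:

```python
import numpy as np

def reduced_inertia_tensor(pos, mass):
    """Reduced inertia tensor of Eq. (1):
    I_ij = [sum_n m_n x_ni x_nj / r_n^2] / [sum_n m_n],
    with pos given relative to the subhalo centre."""
    r2 = np.einsum('ni,ni->n', pos, pos)
    I = np.einsum('n,ni,nj->ij', mass / r2, pos, pos)
    return I / mass.sum()

def shape(pos, mass):
    """Axis ratios q = b/a, s = c/a (Eq. 2) and the unit major-axis vector e_a.
    The axis lengths (a, b, c) are the square roots of the eigenvalues of I_ij."""
    lam, vec = np.linalg.eigh(reduced_inertia_tensor(pos, mass))
    order = np.argsort(lam)[::-1]          # lambda_a >= lambda_b >= lambda_c
    a, b, c = np.sqrt(lam[order])
    return b / a, c / a, vec[:, order[0]]

def misalignment_angle(e_dm, e_star):
    """Misalignment angle theta_m = arccos(|e_a^DM . e_a^*|) in degrees;
    the modulus removes the arbitrary sign of the eigenvectors."""
    c = abs(float(np.dot(e_dm, e_star)))
    return np.degrees(np.arccos(min(c, 1.0)))
```

Convergence tests like the one described in this subsection then amount to calling \texttt{shape} on random particle subsamples of varying size $N_{\mathrm{part}}$.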
Figure~\ref{shape_convergence} shows the distribution of $q$~(denoted by $P(q|M_h)$) for $z=0.6$ and $M_h>10^{11}~M_{\odot}/h$ galaxies. We show $P(q|M_h)$ for different numbers~($N_{\mathrm{part}}$) of subsampled dark matter particles within each subhalo. We find that the distributions converge for $N_{\mathrm{part}}\gtrsim300$, whereas for $N_{\mathrm{part}}=50$, $q$ is significantly underestimated. Therefore, we require $N_{\mathrm{part}}\geq300$ in this work to ensure shape convergence; this choice is also sufficient for the convergence of $s$. This sets the minimum subhalo mass of our galaxies to $M_h\sim 3\times10^{9}~M_{\odot}/h$, which limits the subhalo mass and redshift range over which we can construct merger trees. We find that for galaxies with $M_h>10^{11}~M_{\odot}/h$ at $z=0.6$, their progenitors have $M_h\gtrsim3\times10^{9}~M_{\odot}/h$ up to $z=3$. Therefore, our final choices for the subhalo mass and redshift ranges in this work are $M_h>10^{11}~M_{\odot}/h$ and $0.6<z<3$. \subsection{Misalignment angle} To quantify the misalignment between the galaxy~(stellar matter component) and its host dark matter subhalo, we calculate the principal axes corresponding to the dark matter and star particles, i.e., $(\hat{e}^{\mathrm{DM}}_a, \hat{e}^{\mathrm{DM}}_b, \hat{e}^{\mathrm{DM}}_c)$ and $(\hat{e}^{*}_a, \hat{e}^{*}_b, \hat{e}^{*}_c)$ respectively. The misalignment angle is then defined as the angle between the eigenvectors corresponding to the primary~(longest) axes: \begin{equation} \theta_{m}=\arccos\left(\left| \hat{e}^{\mathrm{DM}}_{a} \cdot \hat{e}^{*}_{a}\right| \right) \end{equation} \subsection{Correlation function} The ellipticity-direction~(ED) correlation function \citep{lee2008quantifying} cross-correlates the orientation of the major axis of a subhalo with the large-scale density field.
For a subhalo centered at position $\vec{x}$ with major axis direction $\hat{e}_{a}$, the ED cross-correlation function is given by \begin{equation} \omega \left(r\right) = \left \langle \left| \hat{e}_{a}(\vec{x}) \cdot \hat{r}(\vec{x}+\vec{r}) \right|^2 \right \rangle -\frac{1}{3} \end{equation} where $\hat{r}=\frac{\vec{r}}{r}$ and $\vec{r}$ is the position vector pointing from the subhalo position~($\vec{x}$) to a tracer~(galaxy positions or dark matter particle positions) of the large-scale matter distribution around the halo. In this work, we use the dark matter particle positions as tracers of the matter density field. \begin{figure} \includegraphics[width=0.5\textwidth]{random.png} \caption{Comparison of the shapes of \textit{progenitor} galaxies and \textit{randomly-selected} galaxies of similar mass at $z=z_f=3$. The solid and dashed lines show $P(q|z,M_h)$~(dark matter component) for \texttt{SAMPLE-TREE}: $M_h>10^{11}~M_{\odot}/h$: $z=3$ and \texttt{SAMPLE-RANDOM}: $M_h>10^{11}~M_{\odot}/h$: $z=3$~(see Section~\ref{sample_definitions} for the sample definitions). $\delta_q$ is the ratio between the solid and dashed lines. \texttt{SAMPLE-RANDOM} is constructed to have a mass function identical to that of the \texttt{SAMPLE-TREE} progenitors. } \label{fig:random} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{theta_hist.png} \caption{$P(\theta|z,M_h)$ is the distribution of the misalignment angle $\theta$ between the stellar and dark matter components of subhalos. Solid and dashed lines correspond to \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} galaxies, respectively. $\delta_\theta$ is the ratio between the solid and dashed lines. The errorbars are $1\sigma$ Poisson errors.
The black dotted lines represent the misalignment angle distribution expected if the two eigenvectors were uniformly distributed in 3D space.} \label{fig:theta_hist} \end{figure*} \section{Results} \label{S:results} \subsection{Stellar mass-subhalo mass relation} Figure \ref{SM_HM_fig} shows the subhalo total~(dark matter+gas+stars+black hole) mass~($M_h$) versus stellar mass~($M_*$) relation of \texttt{SAMPLE-TREE} galaxies at $z=0.6$, $1.5$, and $3.0$, selected with $M_h>10^{11}M_{\odot}/h$ at $z=0.6$. As expected, $M_h$ and $M_*$ are strongly correlated, and both decrease with increasing redshift. We also note that as redshift increases, the $M_h$-$M_*$ relation does not change significantly in either slope or intercept, broadly consistent with predictions from semi-analytical models~\citep{2016MNRAS.456.1459M} as well as observations \citep{2012ApJ...744..159L}. This implies that galaxies grow in stellar mass and dark matter mass at roughly the same rate as they evolve along the merger tree. As the subhalo mass strongly correlates with stellar mass, and therefore also with other observable properties such as luminosity and star formation rate, we shall hereafter use subhalo mass cuts to construct the various galaxy samples defined in the next section. \subsection{List of galaxy samples: Definitions and notations} \label{sample_definitions} Before we discuss the rest of the results, we describe the types of galaxy samples that we consider in this work. \begin{itemize} \item \texttt{SAMPLE-TREE}: The primary sample of interest consists of galaxies on the merger tree. We select galaxies with different subhalo mass cuts~($M_h$) at $z=0.6$ and trace their progenitors to $z=3$ using the methods described in Section \ref{merger_tree_sec}. Hereafter, we shall refer to this sample as \texttt{SAMPLE-TREE}.
For example, the sample name ``\texttt{SAMPLE-TREE}: $M_h>10^{11}~M_{\odot}/h$: $z=2$" refers to galaxies at $z=2$ that are progenitors of the $M_h>10^{11}~M_{\odot}/h$ galaxies as selected at $z=0.6$. Using this sample, we study the redshift evolution of the IA properties of galaxies without having to consider the impact of evolution due to sample selection. \item \texttt{SAMPLE-MCUT}: The secondary sample of interest is obtained using the selection criterion of \cite{tenneti2015intrinsic}. Here we select galaxy samples with a fixed subhalo mass cut applied at all redshifts. Hereafter, we shall refer to this sample as \texttt{SAMPLE-MCUT}. For example, the sample name ``\texttt{SAMPLE-MCUT}: $M_h>10^{11}~M_{\odot}/h$: $z=2$" refers to all galaxies at $z=2$ with $M_h>10^{11}~M_{\odot}/h$. With this sample, the observed redshift evolution of IA properties is a combination of \textit{intrinsic} redshift evolution effects and the evolution due to sample selection. \item \texttt{SAMPLE-RANDOM}: To interpret the impact of requiring galaxies to be part of a merger tree, it is necessary to look at differences in IA properties between a progenitor~(merger tree) galaxy and a randomly chosen galaxy of similar mass. To do this, we construct a galaxy sample by randomly drawing galaxies from the full sample at a given redshift~(all galaxies in the simulation snapshot), such that the total~(dark matter+gas+stars+black hole) mass function is matched to that of the \texttt{SAMPLE-TREE}~(progenitor) galaxies at the same redshift. We shall refer to this sample as \texttt{SAMPLE-RANDOM}. For example, the sample name ``\texttt{SAMPLE-RANDOM}: $M_h>10^{11}~M_{\odot}/h$: $z=2$" refers to a random sample of galaxies at $z=2$ whose mass function is identical~(by construction) to that of ``\texttt{SAMPLE-TREE}: $M_h>10^{11}~M_{\odot}/h$: $z=2$".
\end{itemize} \subsection{Evolution of galaxy shapes and misalignment angles} In this subsection, we investigate how the shapes of galaxies~(and dark matter subhaloes), described by axis ratios $q=\frac{b}{a}$ and $s=\frac{c}{a}$, and the misalignments between stellar and dark matter components, evolve with redshift along the merger tree. Figure~\ref{illustration} shows an illustration of the evolution of a single simulated galaxy along the merger tree from $z=3$ to $z=0.6$. We can see that the shape of the dark matter component~(yellow ellipse) becomes more spherical with decreasing redshift. Furthermore, at $z=3$, the stellar matter is significantly misaligned with respect to the dark matter, but the alignment becomes stronger as redshift decreases. In the following subsections, we shall show that the foregoing trends persist for the overall distribution of shapes and misalignment angles for the entire set of \texttt{SAMPLE-TREE} galaxies. \subsubsection{Shape} \begin{figure*} \includegraphics[width=\textwidth]{corr.png} \caption{$\omega(r)$ is the ellipticity-direction~(ED) correlation function of \texttt{SAMPLE-TREE} galaxies at different redshifts. Here we use the major axes of the stellar matter components and galaxy positions as tracers of the matter distribution. The bottom panels show the ratio of $\omega(r,z)$ to $\omega(r,z=0.6)$. Errorbars are jackknife errors obtained by dividing the simulation volume into eight octants.} \label{fig:corr} \includegraphics[width=\textwidth]{dm_bm.png} \caption{\textbf{Comparing ED correlation functions for \texttt{SAMPLE-TREE} galaxies and their dark matter subhaloes:} In the top panels, solid and dashed lines show the ED correlation functions of galaxies and their dark matter subhaloes, respectively. The ratios between the dashed and solid lines are shown in the bottom panels.
Errorbars in the correlation function are jackknife errors obtained by dividing the simulation volume into eight octants.} \label{fig:dm_bm} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{comparison.png} \caption{$\omega_{\texttt{MCUT}}/\omega_{\texttt{TREE}}$ is the ratio of $\omega(r)$ of \texttt{SAMPLE-MCUT} to that of \texttt{SAMPLE-TREE} galaxies. Errorbars are jackknife errors obtained by dividing the simulation volume into eight octants.} \label{fig:comparison} \end{figure*} Figure~\ref{fig:q} shows the distributions $P(q|z,M_h)$ and $P(s|z,M_h)$ of axis ratios $q$ and $s$ respectively. In Section \ref{S:appendixa}, we established that $\gtrsim$300 particles are required to reliably measure the shape; this dictates our choice of the minimum subhalo mass threshold of $M_h>10^{11}~M_{\odot}/h$ at $z=0.6$. The solid and dashed lines correspond to \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} respectively. The bottom panels show the ratio between the axis ratio distributions of \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} galaxies. \textbf{Subhalo mass dependence on the merger tree}: We first focus on the shapes of dark matter subhaloes. For \texttt{SAMPLE-TREE} galaxies~(solid lines), we see that as subhalo mass increases, $P(q|z,M_h)$ and $P(s|z,M_h)$~(for dark matter) are increasingly skewed towards lower values of $q$ and $s$. This is seen more clearly in the mean values of $q$ and $s$ in Figure B1. This implies that as subhalo mass increases, galaxies on the merger tree become less spherical at fixed redshift. This is also true for \texttt{SAMPLE-MCUT} galaxies~(dashed lines) and has been well established in previous studies~\citep{2005ApJ...618....1H,2006MNRAS.367.1781A,tenneti2015intrinsic}; therefore it is not surprising that it persists for galaxies on the merger tree.
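For readers who wish to reproduce such axis-ratio measurements, a minimal sketch follows, assuming the simple unweighted inertia tensor of the member particles; the measurement in the paper may instead use a reduced or iteratively weighted tensor, so treat this only as an illustration:

```python
import numpy as np

def axis_ratios(pos, masses):
    """Axis ratios q = b/a and s = c/a from the square roots of the
    eigenvalues of the simple (unweighted) mass inertia tensor.
    pos: (N, 3) particle positions; masses: (N,) particle masses.
    NOTE: a sketch -- a reduced (r^-2 weighted) tensor would differ."""
    pos = pos - np.average(pos, weights=masses, axis=0)   # centre-of-mass frame
    tensor = (masses[:, None, None] * pos[:, :, None] * pos[:, None, :]).sum(axis=0)
    lc, lb, la = np.linalg.eigvalsh(tensor)               # ascending: c, b, a
    return np.sqrt(lb / la), np.sqrt(lc / la)
```

Because `eigvalsh` returns eigenvalues in ascending order, the smallest corresponds to the minor axis $c$ and the largest to the major axis $a$.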
For the shapes of the stellar matter component, the dependence on subhalo mass at $z\lesssim1.5$ is the same as that of the dark matter component for both $P(q|M_h)$ and $P(s|M_h)$, as also seen in \cite{tenneti2015intrinsic}. In other words, at $z\lesssim1.5$ more massive galaxies have less spherical stellar matter components~(the mass dependence is seen much more clearly in Figure~B2). However, this result does not persist all the way up to $z\sim3$. In fact, we see that the mass dependence of $P(q|M_h)$ is reversed~(i.e.\ $P(q|M_h)$ skews towards higher values with increasing subhalo mass) at $z\sim3$, while $P(s|M_h)$ has no significant mass dependence at $z\sim3$. Therefore, we find that at $z\sim3$, the sphericity of the stellar matter component of galaxies increases with increasing subhalo mass. To summarize the above trends, we find that: \begin{itemize} \item The shapes of the dark matter components of galaxies become less spherical with increasing subhalo mass. \item For the stellar matter components, the shapes become less spherical with increasing subhalo mass at $z\lesssim1.5$. The trend starts to reverse at $z\gtrsim1.5$ and by $z\sim3$, the shapes become more spherical with increasing subhalo mass. \end{itemize} \textbf{Redshift evolution on the merger tree}: We first focus on the shapes of dark matter subhaloes. For \texttt{SAMPLE-TREE} galaxies~(solid lines), we see that in all three panels, as redshift decreases, the peaks of $P(q|z,M_h)$ and $P(s|z,M_h)$~(for dark matter) shift towards higher values of $q$ and $s$. This implies that as redshift decreases, galaxies on the merger tree evolve to become more spherical. This is also true for \texttt{SAMPLE-MCUT} galaxies~(dashed lines), as was previously reported in \cite{tenneti2015intrinsic}.
It is also noteworthy that our results are consistent with \cite{2005ApJ...618....1H}, which investigated the evolution of the shapes of cluster-sized haloes~($M_h>2\times10^{13}~M_{\odot}/h$) in N-body simulations over roughly the same range of redshifts. The shape evolution of the stellar matter component shows significant differences compared to that of the dark matter (as already hinted at in the discussion of the subhalo mass dependence). For instance, $P(s|z,M_h)$ shifts towards less spherical values as redshift decreases. This trend is opposite to that of the dark matter. However, note also that the overall evolution of $P(s|z,M_h)$ is significantly weaker for stellar matter than for dark matter. For $P(q|z,M_h)$, the evolution is more complicated and depends on the subhalo mass threshold. For $M_h>10^{11}~M_{\odot}/h$, there is no significant evolution. On the other hand, for $M_h>10^{12}~M_{\odot}/h$ and $M_h>10^{13}~M_{\odot}/h$, the evolution is significant: $P(q|z,M_h)$ is less spherical at $z=0.6$ compared to $z=3$. To summarize the above trends, we find that: \begin{itemize} \item The shapes of the dark matter components of galaxies tend to become more spherical with time. \item The shapes of the stellar matter components of galaxies tend to become less spherical with time, especially for higher mass thresholds. \end{itemize} \textbf{Comparing \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT}}: We now compare the axis ratio distributions between \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT}~(see the ratio plots in Figure \ref{fig:q}). For the dark matter shapes, we find that the axis ratio distributions of \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} are broadly consistent, i.e.\ there is no statistically significant difference in their shapes given the errorbars. The fact that this persists all the way up to $z=3$ is noteworthy because at $z=3$, \texttt{SAMPLE-MCUT} galaxies are significantly more massive than \texttt{SAMPLE-TREE} galaxies.
This suggests that at fixed redshift, the subhalo mass is not the sole parameter that determines the shapes of the dark matter components of galaxies. In particular, galaxies that are progenitors of lower redshift galaxies above some mass threshold may be less spherical than a randomly chosen set of galaxies of similar subhalo mass. To show this explicitly, in Figure~\ref{fig:random} we compare the axis ratio distributions~(at $z=3$) of the dark matter components of \texttt{SAMPLE-TREE} galaxies to those of a random sample~(\texttt{SAMPLE-RANDOM}) whose mass function is matched to that of \texttt{SAMPLE-TREE}. We see that the axis ratios of \texttt{SAMPLE-TREE} galaxies are smaller than those of \texttt{SAMPLE-RANDOM} galaxies. This is also true in general for $z\gtrsim1.5$. This reinforces the conclusion that early galaxies that are progenitors of present-day massive galaxies~($M_h>10^{11}~M_{\odot}/h$ at $z=0.6$) are more elliptical~(on average) than a randomly selected galaxy of similar subhalo mass and redshift. For the stellar matter shapes, the ratio plots show that at $z=3$, $P(q|M_h)$ for samples with mass thresholds of $M_h>10^{11}~M_{\odot}/h$ and $M_h>10^{12}~M_{\odot}/h$ is less spherical for \texttt{SAMPLE-TREE} galaxies than for \texttt{SAMPLE-MCUT} galaxies. This is because \texttt{SAMPLE-MCUT} galaxies are more massive than \texttt{SAMPLE-TREE} galaxies at $z=3$ (we have already shown that stellar matter shapes are more spherical at higher subhalo masses at $z=3$). $P(s|M_h)$, however, shows no significant difference between \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} at $z=3$ despite the difference in subhalo masses. This is simply because there is no significant mass dependence in $P(s|M_h)$ for stellar matter at $z=3$.
The comparison of shapes between \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} galaxies at $z=3$ can now be summarized as follows: \begin{itemize} \item For the dark matter components, no difference is found between the shapes of \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} galaxies at $z=3$ despite the difference in masses. This is because at $z=3$, galaxies that are progenitors of $z\sim0.6$: $M_h\gtrsim10^{11}~M_{\odot}/h$ galaxies are significantly less spherical~(on average) than a \textit{randomly selected} galaxy of similar subhalo mass and redshift. \item For the stellar matter components, \texttt{SAMPLE-TREE} galaxies are less spherical than \texttt{SAMPLE-MCUT} galaxies at $z=3$. This is because \texttt{SAMPLE-MCUT} galaxies are more massive~(and we have shown more massive galaxies to have more spherical stellar matter components) than \texttt{SAMPLE-TREE} galaxies at $z=3$. \end{itemize} \subsubsection{Misalignment angle} In this section, we investigate how the misalignment angle of galaxies on the tree evolves with redshift. The solid lines in Figure~\ref{fig:theta_hist}~(top panels) show the distribution~$P(\theta|z,M_h)$ of misalignment angles~($\theta$) at different redshifts and subhalo mass cuts for \texttt{SAMPLE-TREE} galaxies. The distributions are skewed, with a maximum at $\theta_m\sim5-10~\mathrm{deg}$ accompanied by a long tail at $\theta_m>10~\mathrm{deg}$ and a sharp fall-off at $\theta_m<5~\mathrm{deg}$. At fixed redshift, as the subhalo mass increases, $P(\theta|M_h)$ skews towards smaller values of $\theta$~(seen more clearly in Figure B3). This implies that more massive galaxies are more aligned with their subhaloes. $P(\theta|M_h)$ skews towards smaller $\theta$ as redshift decreases, implying that galaxies evolve over time to become increasingly aligned with their subhaloes, although the evolution is mild. The evolution of the misalignment angle can be put in the context of existing IA models.
The fact that the evolution is mild suggests that it may be mediated by the evolution of the instantaneous tidal field. This is hinted at by the fact that the contribution of the instantaneous tidal field is small~(compared to observations), as predicted by the analytical model presented in \cite{2015A&A...575A.113C}. In such a scenario, the redshift evolution, contributed by the instantaneous tidal field, can be thought of as a perturbation to the pre-existing alignment~($\theta_m\sim10~\mathrm{deg}$). Given its strength, the pre-existing alignment is likely set by the primordial~(at the formation epoch of these galaxies) tidal field, as assumed in linear alignment models~\citep{2001MNRAS.320L...7C,hirata2004intrinsic}. We also compare $P(\theta|z,M_h)$ for \texttt{SAMPLE-TREE} galaxies to the predictions for \texttt{SAMPLE-MCUT} galaxies~(solid vs.\ dashed lines in Figure \ref{fig:theta_hist}~top panels); Figure \ref{fig:theta_hist}~(bottom panels) shows the ratio $\delta_{\theta}$. For $M_h>10^{11,12}~M_{\odot}/h$, we find that $\delta_{\theta}<1$ for $\theta<25~\mathrm{deg}$ and $\delta_{\theta}>1$ for $\theta>25~\mathrm{deg}$ at all redshifts. This implies that \texttt{SAMPLE-TREE} galaxies are less aligned with their subhaloes than \texttt{SAMPLE-MCUT} galaxies. At $z=1.5$ and $z=3$, one would expect this to be the case, as \texttt{SAMPLE-MCUT} galaxies are more massive, and therefore more aligned, than \texttt{SAMPLE-TREE} galaxies. However, we also see the same effect at $z=0.6$, where both \texttt{SAMPLE-MCUT} and \texttt{SAMPLE-TREE} galaxies have the same subhalo mass thresholds. This implies that galaxies which formed between $0.6\lesssim z\lesssim 3$~(i.e.\ those that do not have progenitors up to $z=3$) are more aligned with their subhaloes than those that formed at $z>3$. We have so far discussed the evolution of the distributions of galaxy shapes and misalignment angles.
In Appendix~\ref{S:appendixb}, we present the evolution of the average values of the axis ratios and misalignment angles, and provide simple fitting functions to quantify them. \subsection{Ellipticity-direction~(ED) Correlation function} In this section, we investigate how the ellipticity-direction correlation function $\omega(r)$ of galaxies on the merger tree evolves with redshift. The top panels in Figure~\ref{fig:corr} show $\omega(r)$ for \texttt{SAMPLE-TREE} galaxies and its redshift evolution along the merger tree. The bottom panels show the ratio $\omega(r,z)/\omega(r,z=0.6)$. They reveal the evolution of the ED correlation over a wide range of scales that will be probed by LSST weak lensing~\citep{2018arXiv180901669T}. These include scales $\gtrsim5~\mathrm{Mpc}/h$ where the NLA model and its extensions, such as \cite{2016IAUS..308..452B}, already work well. Additionally, our simulations also reveal ED correlations at smaller scales which are not well probed by these analytical models. Accordingly, we choose $\sim 1~\mathrm{Mpc}/h$ as an interesting scale around which we shall now describe the evolution of the ED correlation. At $r>1~\mathrm{Mpc}/h$, we see that the correlation function follows a power law in $r$. The slope of the power law does not vary significantly with redshift or subhalo mass. The power-law amplitude increases with subhalo mass at fixed redshift, as also reported in \cite{tenneti2015intrinsic}. The ED correlation amplitude increases with decreasing redshift along the merger tree~(by factors of up to $\sim4$ from $z=3$ to $z=0.6$). At sufficiently small scales~($r\lesssim1~\mathrm{Mpc}/h$), $\omega(r)$ deviates from a power law and is suppressed~(compared to the power-law extrapolation from large scales). The extent of the suppression increases with decreasing redshift.
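The ED correlation $\omega(r)$ defined earlier can be estimated by averaging $|\hat{e}_a\cdot\hat{r}|^2$ over subhalo--tracer pairs in separation bins. The following brute-force sketch is illustrative only (a production measurement would use a pair-counting tree and periodic boundary conditions, which we omit here):

```python
import numpy as np

def ed_correlation(halo_pos, halo_axis, tracer_pos, r_bins):
    """Brute-force estimator of omega(r) = <|e_a . r_hat|^2> - 1/3,
    binned in subhalo--tracer pair separation.
    halo_pos:  (Nh, 3) subhalo positions
    halo_axis: (Nh, 3) unit major-axis vectors
    tracer_pos: (Nt, 3) tracer positions; r_bins: bin edges."""
    n_bins = len(r_bins) - 1
    sums, counts = np.zeros(n_bins), np.zeros(n_bins)
    for pos, axis in zip(halo_pos, halo_axis):
        sep = tracer_pos - pos
        r = np.linalg.norm(sep, axis=1)
        good = r > 0                                  # skip zero separations
        mu2 = (sep[good] @ axis / r[good]) ** 2       # |e_a . r_hat|^2
        idx = np.digitize(r[good], r_bins) - 1
        inside = (idx >= 0) & (idx < n_bins)
        np.add.at(sums, idx[inside], mu2[inside])     # unbuffered accumulation
        np.add.at(counts, idx[inside], 1)
    return sums / np.maximum(counts, 1) - 1.0 / 3.0
```

For isotropically distributed tracers, $\langle|\hat{e}_a\cdot\hat{r}|^2\rangle=1/3$, so the estimator correctly returns zero in the absence of alignment.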
As we approach even smaller scales $\sim 0.1~\mathrm{Mpc}/h$, the redshift evolution is reversed compared to large scales, i.e., $\omega(r)$ decreases with decreasing redshift along the merger tree~(up to factors of $\sim2$ from $z=3$ to $z=0.6$). We compare $\omega(r)$ predictions of \texttt{SAMPLE-TREE} to that of \texttt{SAMPLE-MCUT}; Figure~\ref{fig:comparison} shows the ratio between the two as a function of $r$. We find that as redshift increases, $\omega(r)$ for \texttt{SAMPLE-TREE} becomes increasingly suppressed at scales $r\gtrsim1~\mathrm{Mpc}/h$ as compared to that of \texttt{SAMPLE-MCUT}; at $z=3$ the suppression is by factors $3-4$. At $r\lesssim1~\mathrm{Mpc}/h$, the differences are relatively small~(by factors $\lesssim2$). These differences are largely because \texttt{SAMPLE-TREE} galaxies are less massive compared to \texttt{SAMPLE-MCUT} galaxies at higher redshifts. In the following subsections, we shall dig deeper into the foregoing results by first putting them in the context of the galaxy-subhalo misalignments, and then finally revealing the factors that drive the evolution of ED correlations at different scales. \subsubsection{Implications of galaxy-subhalo misalignment on the ED correlation} We now study the implications of galaxy-subhalo misalignment and its evolution on the ED correlation function. To do this, we compare the ED correlations of galaxies~(also shown in Figure~\ref{fig:corr}) to their underlying dark matter subhaloes. The top panel of Figure~\ref{fig:dm_bm} shows the ED correlation functions of \texttt{SAMPLE-TREE} galaxies, where the solid and dashed lines correspond to galaxies and dark matter subhaloes, respectively. As a consequence of the misalignment between stellar matter and dark matter, the solid lines showing the galaxy ED correlation functions are significantly suppressed compared to the subhalo ED correlation functions~(by factors $\sim2-4$) at all scales. 
This implies that the alignment of galaxies with respect to the surrounding density field is suppressed compared to that of their dark matter subhaloes. This has been established in previous works~\citep{2015MNRAS.453..469T}, and is also supported observationally by the alignments of luminous red galaxies \citep{2009ApJ...694..214O}. We now discuss how this suppression evolves with redshift on the merger tree. In the bottom panel of Figure~\ref{fig:dm_bm}, we see that the ratio $\omega_{\mathrm{DM\ subhalo}}/\omega_{\mathrm{galaxy}}$ decreases with decreasing redshift; this is because the galaxy-subhalo misalignment decreases with decreasing redshift. Furthermore, the evolution is stronger for $M_h>10^{11}~M_{\odot}/h$ haloes than for $M_h>10^{13}~M_{\odot}/h$ haloes. This is because at $z=3$, $M_h>10^{13}~M_{\odot}/h$ galaxies are more aligned with their subhaloes than $M_h>10^{11}~M_{\odot}/h$ galaxies~(compare the leftmost and rightmost panels of Figure \ref{fig:theta_hist}). \subsubsection{What drives the evolution of ED correlation at different scales?} Here, we discuss the factors driving the evolution of the galaxy ED correlation at different scales, as inferred from Figure~\ref{fig:dm_bm}. At scales $\gtrsim1~\mathrm{Mpc}/h$, note that the ED correlations of dark matter subhaloes~(dashed lines) undergo a significantly weaker redshift evolution than those of galaxies~(solid lines). In fact, there is no significant evolution for $M_h>10^{11}~M_{\odot}/h$ and $M_h>10^{12}~M_{\odot}/h$ subhaloes. Therefore, the fact that we find a significant evolution for the galaxy ED correlation implies that its evolution at scales $>1~\mathrm{Mpc}/h$ is primarily driven by the evolution of the galaxy-subhalo misalignment, rather than by the ED correlation of dark matter haloes. At scales $\lesssim1~\mathrm{Mpc}/h$, a suppression~(compared to a power law) is seen in the ED correlations of both galaxies and their dark matter subhaloes.
Furthermore, the suppression in the galaxy ED correlation simply traces that of the dark matter subhalo, but at a lower normalization. Overall, this tells us that the evolution of the ED correlation profile of galaxies at scales $\lesssim1~\mathrm{Mpc}/h$ is governed by the evolution of both 1) the ED correlation of dark matter haloes, and 2) the misalignment between galaxies and subhaloes. The former leads to a decrease in the ED correlation of galaxies with time, whereas the latter drives an increase. Due to the complex interplay between these two competing effects, no straightforward trend is seen in the evolution of the ED correlation at scales $\sim 1~\mathrm{Mpc}/h$~(to be targeted by LSST). At very small scales~($\sim 0.1~\mathrm{Mpc}/h$), the suppression of the ED correlation of DM subhaloes is so strong that it dominates over the evolution of the galaxy-subhalo misalignment angle. This causes the reversal in the redshift evolution of $\omega(r)$ for galaxies at these scales, compared to that at scales $>1~\mathrm{Mpc}/h$. \section{Conclusions} \label{S:conclusions} This work is part of a continuing series of papers dedicated to studying the intrinsic alignments~(IA) of galaxies using the \texttt{MassiveBlackII} cosmological hydrodynamic simulation. In this work, we study redshift evolution~($0.6\lesssim z \lesssim 3$) by selecting galaxy samples (\texttt{SAMPLE-TREE}) based on subhalo mass cuts~($M_h>10^{11,12,13}~M_{\odot}/h$) at $z=0.6$ and tracing their progenitors to $z=3$ along a merger tree. We study the redshift evolution of galaxy shapes, the misalignment with respect to the host subhalo, and the ED correlation functions along the merger tree. Our key findings are as follows: \begin{itemize} \item The sphericity of the dark matter component of galaxies increases with time, whereas that of the stellar matter component decreases with time.
\item The distribution of galaxy-subhalo misalignment angle peaks at $\sim$10~deg. With decreasing redshift, the distribution becomes narrower and more skewed towards smaller misalignment angles. \item The evolution of the ellipticity-direction~(ED) correlation~$\omega(r)$ of galaxies is driven by the evolution of their alignment with respect to their host DM subhaloes, as well as the alignment between DM subhaloes and the surrounding matter overdensity. \begin{itemize} \item At scales $\sim1~\mathrm{cMpc}/h$, the alignment between DM subhaloes and the matter overdensity gets suppressed with time. On the other hand, the alignment between galaxies and DM subhaloes is enhanced. Due to these competing tendencies, the redshift evolution of $\omega(r)$ for galaxies at $\sim1~\mathrm{cMpc}/h$ is not straightforward. \item At scales $>1~\mathrm{cMpc}/h$, there is no significant evolution in the alignment between DM subhaloes and the matter overdensity. As a result, the evolution of the galaxy-subhalo misalignment leads to an increase in $\omega(r)$ for galaxies by a factor of $\sim$4 from $z=3$ to $0.6$. \item At $\sim0.1~\mathrm{cMpc}/h$ scales, evolution in $\omega(r)$ for galaxies is completely reversed compared to that at scales $\gtrsim1~\mathrm{cMpc}/h$, i.e., it decreases by factors $\sim 2$ from $z=3$ to $0.6$. This is because at these scales, the alignment between DM subhaloes and the matter overdensity is strongly suppressed with time, and this effect dominates over evolution of galaxy-subhalo misalignment. \end{itemize} \end{itemize} We also compare our results with the sample selection applied in the previous work of this series \citep{tenneti2015intrinsic}. In particular, we also considered galaxy samples~(\texttt{SAMPLE-MCUT}) with fixed subhalo mass cuts~($M_h>10^{11,12,13}~M_{\odot}/h$), applied at all redshifts between 0.6 and 3. 
Interestingly, upon comparing the sphericities of the dark matter components of \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} galaxies, we find that they do not significantly differ~($\lesssim 10\%$); this is true even at the highest redshift~($z=3$), where \texttt{SAMPLE-TREE} galaxies are significantly less massive than \texttt{SAMPLE-MCUT}. This is explained by our finding that at $z\gtrsim1.5$, progenitors of $z\sim0.6$: $M_h\gtrsim10^{11}~M_{\odot}/h$ galaxies have significantly less spherical~(on average) dark matter shapes than a \textit{randomly selected} galaxy of similar subhalo mass and redshift. For the stellar matter component, we find that \texttt{SAMPLE-TREE} progenitors at $z=3$ are less spherical than \texttt{SAMPLE-MCUT} galaxies. This is because \texttt{SAMPLE-MCUT} galaxies are more massive~(and we have shown more massive galaxies to have more spherical stellar matter components) than \texttt{SAMPLE-TREE} galaxies at $z=3$. We find that \texttt{SAMPLE-TREE} galaxies are less aligned with their subhaloes than \texttt{SAMPLE-MCUT} galaxies. At $z=1.5$ and $z=3$, this can be attributed to the differences between their subhalo masses. But the fact that we also see this at $z=0.6$ further implies that galaxies which formed after $z=3$~(i.e.\ those that do not have progenitors up to $z=3$) are more aligned than those that formed at $z>3$. The effect of the differences in subhalo masses~(at $z>0.6$) between \texttt{SAMPLE-TREE} and \texttt{SAMPLE-MCUT} galaxies is also seen in their ED correlation function $\omega(r)$. Compared to \texttt{SAMPLE-MCUT}, $\omega(r)$ for \texttt{SAMPLE-TREE} galaxies is increasingly suppressed with increasing redshift (by factors of up to $\sim3-4$ at $z=3$); this is due to the decreasing subhalo masses of \texttt{SAMPLE-TREE} progenitors with increasing redshift.
This work demonstrates that hydrodynamic simulations such as MBII are indispensable tools for studying the redshift evolution of galaxy properties such as IA, primarily because of the ability to directly trace the progenitors of present-day galaxies by constructing merger trees. This enables us to disentangle true IA evolution from apparent evolution due to sample selection effects, which are inevitable in observations. Future work will involve using the results of this study, as well as previous works \citep{tenneti2014galaxy,tenneti2015intrinsic,tenneti2016intrinsic}, to construct halo models for the IA of galaxies. These models can then be used to construct mock catalogs by populating N-body simulation volumes, and thereby analyse possible systematic biases caused by IA in weak lensing analyses. \section*{Acknowledgements} We thank Yu Feng for providing the MB-II simulation snapshots and raw data. This research is supported by the US National Science Foundation under Grant No.\ 1716131. TDM acknowledges funding from NSF ACI-1614853, NSF AST-1517593, NSF AST-1616168, NASA ATP 80NSSC18K1015 and NASA ATP 17-0123. \addcontentsline{toc}{section}{Acknowledgements} \bibliographystyle{mnras}
\section{Introduction}\label{sec:introduction}} \IEEEPARstart{M}{ax-margin} learning has been effective on learning discriminative models, with many examples such as univariate-output support vector machines (SVMs)~\cite{Cortes:1995} and multivariate-output max-margin Markov networks (or structured SVMs)~\cite{Taskar:03,Altun:03,Tsochantaridis:04}. However, the ever-increasing size of complex data makes it hard to construct such a fully discriminative model, which has only a single layer of adjustable weights, due to the facts that: (1) the manually constructed features may not well capture the underlying high-order statistics; and (2) a fully discriminative approach cannot reconstruct the input data when noise or missing values are present. To address the first challenge, previous work has considered incorporating latent variables into a max-margin model, including partially observed maximum entropy discrimination Markov networks~\cite{Zhu:08b}, structured latent SVMs~\cite{Yu:2009} and max-margin min-entropy models~\cite{Miller:2012}. All this work has primarily focused on a shallow structure of latent variables. To improve the flexibility, learning SVMs with a deep latent structure has been presented in~\cite{Tang:2013}. However, these methods do not address the second challenge, which requires a generative model to describe the inputs. The recent work on learning max-margin generative models includes max-margin topic models~\cite{zhu12jmlr,zhu14jmlr-lda}, max-margin Harmoniums~\cite{Chen:2012pami}, and nonparametric Bayesian latent SVMs~\cite{zhu14jmlr} which can infer the dimension of latent features from data. However, these methods only consider the shallow structure of latent variables, which may not be flexible enough to describe complex data. 
Much work has been done on learning generative models with a deep structure of nonlinear hidden variables, including deep belief networks~\cite{Salakhutdinov:09,Lee:09,Ranzato:11}, autoregressive models~\cite{Larochelle:11,Gregor:14}, stochastic variants of autoencoders~\cite{vincent2010stacked,bengio2013generalized,Bengio:14} and Generative Adversarial Nets (GANs)~\cite{goodfellow:14,radford2015unsupervised}. For such models, inference is a challenging problem, which has motivated much recent progress on stochastic variational inference algorithms~\cite{kingma14iclr,danilo14icml,bornschein2014reweighted,burda2015importance}. However, the primary focus of deep generative models (DGMs) has been on unsupervised learning, with the goals of learning latent representations and generating input samples. Though the latent representations can be used with a downstream classifier to make predictions, it is often beneficial to learn a joint model that considers both input and response variables. The recent work on semi-supervised deep generative models~\cite{kingma14nips,springenberg16,maaloe16,salimans2016improved} demonstrates the effectiveness of DGMs in modeling the density of unlabeled data to benefit the prediction task (see Sec.~\ref{sec:related_work} for a detailed discussion). However, it remains open whether discriminative max-margin learning is suitable for this task. In this paper, we revisit the max-margin principle and present max-margin deep generative models (mmDGMs), which learn multilayered representations that are good for both classification and input inference. Our mmDGMs conjoin the flexibility of DGMs in describing input data with the strong discriminative ability of max-margin learning in making accurate predictions. Given fully labeled data, we formulate mmDGMs as solving a variational inference problem of a DGM regularized by a set of max-margin posterior constraints, which bias the model towards learning representations that are good for prediction.
We define the max-margin posterior constraints as a linear functional of the target variational distribution of the latent representations. To optimize the joint learning problem, we develop a doubly stochastic subgradient descent algorithm, which generalizes the Pegasos algorithm~\cite{shai11pegasos} to consider nontrivial latent variables. For the variational distribution, we build a recognition model to capture the nonlinearity, similar to~\cite{kingma14iclr,danilo14icml}. To reduce the dependency on fully labeled data, we further propose a class-conditional variant of mmDGMs (mmDCGMs) to deal with partially labeled data for semi-supervised learning, where the amount of unlabeled data is typically much larger than that of labeled data. Specifically, mmDCGMs employ a deep max-margin classifier to infer the missing labels for unlabeled data and a class-conditional deep generative model~\cite{kingma14nips} to capture the joint distribution of the data, labels and latent variables. Unlike~\cite{rasmus15,springenberg16,salimans2016improved}, our mmDCGMs separate the pathways of inferring labels and latent variables completely and can generate images given a specific class. Instead of inferring the full posterior of labels as in~\cite{kingma14nips,maaloe16}, which is computationally expensive for large datasets, we use the prediction of the classifier as a point estimate of the label to speed up the training procedure. We further design additional max-margin and label-balance regularization terms on unlabeled data to enhance the classifier and significantly boost the classification performance. We consider two types of networks in our mmDGMs and mmDCGMs---multilayer perceptrons (MLPs) as in~\cite{kingma14iclr,danilo14icml} and convolutional neural networks (CNNs)~\cite{Lecun:98}.
In the CNN case, following~\cite{Dosovitskiy:2014}, we apply unpooling, convolution and rectification sequentially to form a highly nontrivial deep generative network that generates images from the latent variables, which are learned automatically by a recognition model using a standard CNN. We present the detailed network structures in the experiment section. Empirical results on the widely used MNIST~\cite{Lecun:98}, SVHN~\cite{Netzer:11} and small NORB~\cite{lecun2004learning} datasets demonstrate that: (1) mmDGMs can significantly improve the prediction performance in supervised learning, which is competitive with the best feedforward neural networks, while retaining the capability of generating input samples and completing their missing values; and (2) mmDCGMs can achieve state-of-the-art classification results with efficient inference and disentangle styles and classes based on raw images in semi-supervised learning. In summary, our main contributions are: \begin{itemize} \item We present max-margin DGMs for both supervised and semi-supervised settings to significantly enhance the discriminative power of DGMs while retaining their generative ability; \item We develop efficient algorithms to solve the joint learning problems, which involve intractable expectations and non-smooth piecewise linear operations; \item We achieve state-of-the-art results on several benchmarks in semi-supervised learning and prediction accuracy competitive with fully discriminative CNNs in supervised learning. \end{itemize} The rest of the paper is structured as follows. Section 2 surveys the related work. Section 3 presents max-margin deep generative models for both supervised and semi-supervised learning. Section 4 presents experimental results. Finally, Section 5 concludes.
\section{Related Work} \label{sec:related_work} Deep generative models (DGMs) are good at discovering the underlying structures in the input data, but training the model parameters and inferring the posterior distribution are highly nontrivial tasks. Recently, significant progress has been made on enriching the representative power of variational inference and Markov chain Monte Carlo methods for posterior inference, such as variational autoencoders (VAEs)~\cite{kingma14iclr,danilo14icml} and neural adaptive MCMC~\cite{du2015learning}. VAEs~\cite{kingma14iclr,danilo14icml} build a recognition model to infer the posterior of latent variables, and the parameters are trained to optimize a variational bound of the data likelihood. Neural adaptive MCMC~\cite{du2015learning} employs a similar recognition model as the proposal distribution for importance sampling to estimate the gradient of the log-posterior and hence can perform approximate Bayesian inference of DGMs. To learn the parameters, besides the commonly used MLE estimator as adopted by VAEs, recent work has proposed various objectives. For example, Generative Adversarial Nets (GANs)~\cite{goodfellow:14} construct a discriminator to distinguish the generated samples from the training data, and the parameters are trained based on a minimax two-player game framework. Generative Moment Matching Networks (GMMNs)~\cite{li2015generative,dziugaite2015training} generate samples from a directed deep generative model, which is trained to match all orders of statistics between training data and samples from the model. The very recent work~\cite{yong2016conditional} extends these ideas to learn conditional GMMNs with much broader applicability. Extensive work has also focused on realistic image generation in the unsupervised setting.
For example, DRAW~\cite{gregor2015draw} employs recurrent neural networks as the generative model and recognition model and introduces a 2-D attention mechanism to generate sequences of real digits step by step. MEM-VAE~\cite{li2016learning} leverages an external memory and an attention mechanism to encode and retrieve the detailed information lost in the recognition model to enhance DGMs. LAP-GAN~\cite{denton2015deep} proposes a cascade of GANs to generate high-quality natural images through a Laplacian pyramid framework~\cite{burt1983laplacian}. DCGAN~\cite{radford2015unsupervised} adopts fractionally strided convolution networks in the generator to learn the spatial upsampling and refines the generated samples. Some recent advances~\cite{kingma14nips,maaloe16,springenberg16,salimans2016improved,rasmus15} have been made on extending DGMs to deal with partially observed data. For example, the conditional VAEs~\cite{kingma14nips} treat labels as conditions of DGMs to describe input data; they perform posterior inference of labels given unlabeled data and can generate a specific class of images. ADGM~\cite{maaloe16} introduces auxiliary latent variables to DGMs to make the variational distribution more expressive and performs well in semi-supervised learning. Cat-GAN~\cite{springenberg16} generalizes GANs with a categorical discriminative network and an objective function that includes the mutual information between the input data and the prediction of the discriminative network. The work in~\cite{salimans2016improved} proposes feature matching, virtual batch normalization and other techniques to improve the performance of GANs on semi-supervised learning and image generation. The Ladder Network~\cite{rasmus15} achieves excellent classification results in semi-supervised learning by employing lateral connections between autoencoders to reduce the competition between invariant feature extraction and the reconstruction of object details.
Our work is complementary to the above progress in the sense that we investigate a new criterion (i.e., max-margin learning) for DGMs in both supervised and semi-supervised settings. Some preliminary results on the fully supervised mmDGMs were published in~\cite{li2015max}, while the semi-supervised extensions are novel. \section{Max-margin Deep Generative Models} We now present the max-margin deep generative models for supervised learning and their class-conditional variants for semi-supervised learning. For both methods, we present efficient algorithms. \subsection{Basics of Deep Generative Models} We start from a general setting, where we have $N$ i.i.d. data $\mathbf{X} = \{ \mathbf{x}_n \}^{N}_{n = 1}$. A deep generative model (DGM) assumes that each $\mathbf{x}_n \in \mathbb{R}^D$ is generated from a vector of latent variables $\mathbf{z}_n \in \mathbb{R}^K$, which itself follows some distribution. The joint probability of a DGM is as follows: \begin{equation}\label{eq:DGM-joint-dist} p(\mathbf{X}, \mathbf{Z}| \boldsymbol{\alpha}, \boldsymbol{\beta}) = \prod^{N}_{n = 1} p(\mathbf{z}_n | \boldsymbol{\alpha}) p(\mathbf{x}_n | \mathbf{z}_n, \boldsymbol{\beta}), \end{equation} where $p(\mathbf{z}_n | \boldsymbol{\alpha})$ is the prior of the latent variables and $p(\mathbf{x}_n | \mathbf{z}_n, \boldsymbol{\beta})$ is the likelihood model for generating observations. For notational simplicity, we define $\boldsymbol{\theta} = (\boldsymbol{\alpha}, \boldsymbol{\beta})$. Depending on the structure of $\mathbf{z}$, various DGMs have been developed, such as deep belief networks~\cite{Salakhutdinov:09,Lee:09}, deep sigmoid networks~\cite{Mnih:icml2014}, deep latent Gaussian models~\cite{danilo14icml}, and deep autoregressive models~\cite{Gregor:14}. In this paper, we focus on directed DGMs, which can easily be sampled from via an ancestral sampler. However, in most cases learning DGMs is challenging due to the intractability of posterior inference.
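As a concrete illustration of the joint distribution in Eq.~(\ref{eq:DGM-joint-dist}) and its ancestral sampler, the following minimal sketch assumes a standard-normal prior $p(\mathbf{z}_n | \boldsymbol{\alpha}) = \mathcal{N}(\mathbf{0}, \mathbf{I})$ and a single-layer Bernoulli likelihood; the function name \texttt{ancestral\_sample} and the parameters \texttt{W}, \texttt{b} (standing in for $\boldsymbol{\beta}$) are illustrative assumptions, not the multilayer networks used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ancestral_sample(W, b, n_samples):
    """Draw (z, x) pairs from the joint p(z)p(x|z) of Eq. (1).

    Hypothetical minimal DGM: standard-normal prior on z and a one-layer
    Bernoulli likelihood parameterized by (W, b); real models stack
    several nonlinear layers between z and x."""
    K = W.shape[1]
    z = rng.standard_normal((n_samples, K))              # z_n ~ p(z | alpha)
    logits = z @ W.T + b                                 # one generative layer
    probs = 1.0 / (1.0 + np.exp(-logits))
    x = (rng.random(probs.shape) < probs).astype(float)  # x_n ~ p(x | z, beta)
    return z, x

W = rng.standard_normal((5, 2))   # D = 5 observed dims, K = 2 latent dims
b = np.zeros(5)
z, x = ancestral_sample(W, b, n_samples=3)
```

Because the graph is directed, each $(\mathbf{z}_n, \mathbf{x}_n)$ pair is drawn in topological order with no iterative inference, which is the property the paper relies on.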
The state-of-the-art methods resort to stochastic variational methods under the maximum likelihood estimation (MLE) framework, $\hat \boldsymbol{\theta} = \operatornamewithlimits{argmax}_{\boldsymbol{\theta}} \log p(\mathbf{X} | \boldsymbol{\theta})$ (see Sec.~\ref{sec:related_work} for alternative learning methods). Specifically, let $q(\mathbf{Z})$ be the variational distribution that approximates the true posterior $p(\mathbf{Z} | \mathbf{X}, \boldsymbol{\theta})$. A variational upper bound of the per-sample negative log-likelihood (NLL) $-\log p(\mathbf{x}_n | \boldsymbol{\alpha}, \boldsymbol{\beta})$ is: \setlength{\arraycolsep}{0.0em} \begin{eqnarray} \mathcal{L} (\boldsymbol{\theta}, q(\mathbf{z}_n ); \mathbf{x}_n) & \triangleq & \mathbf{KL} (q(\mathbf{z}_n) || p(\mathbf{z}_n | \boldsymbol{\alpha})) \nonumber \\ && - \mathbb{E}_{q(\mathbf{z}_n)}[\log p(\mathbf{x}_n | \mathbf{z}_n, \boldsymbol{\beta})], \nonumber \end{eqnarray} where $\mathbf{KL} (q || p )$ is the Kullback-Leibler (KL) divergence between distributions $q$ and $p$. Then, $\mathcal{L} (\boldsymbol{\theta}, q(\mathbf{Z}); \mathbf{X}) \! \triangleq \! \sum_{n} \! \mathcal{L} (\boldsymbol{\theta}, q(\mathbf{z}_n) ; \mathbf{x}_n)$ upper bounds the full negative log-likelihood $-\log p(\mathbf{X} | \boldsymbol{\theta})$. It is important to notice that if we make no restricting assumptions on the variational distribution $q$, the bound is tight by simply setting $q(\mathbf{Z}) = p(\mathbf{Z} | \mathbf{X}, \boldsymbol{\theta})$. That is, the MLE is equivalent to solving the variational problem: $\min_{\boldsymbol{\theta}, q(\mathbf{Z})} \mathcal{L} (\boldsymbol{\theta}, q(\mathbf{Z}); \mathbf{X})$. However, since the true posterior is intractable except in a handful of special cases, we must resort to approximation methods.
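The per-sample bound $\mathcal{L} (\boldsymbol{\theta}, q(\mathbf{z}_n); \mathbf{x}_n)$ is easy to evaluate when $q$ is a diagonal Gaussian and the prior is standard normal, since the KL term is then analytic. A minimal sketch, assuming a diagonal-Gaussian $q$; the helper names and the externally supplied reconstruction term (which would come from a Monte Carlo estimate under the chosen likelihood model) are illustrative:

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ): the first term of the bound."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def neg_elbo(mu, log_var, expected_log_lik):
    """Per-sample bound L(theta, q; x_n) = KL - E_q[log p(x_n | z_n, beta)].

    `expected_log_lik` stands in for a (Monte Carlo) estimate of the
    reconstruction term, which depends on the likelihood model."""
    return gaussian_kl(mu, log_var) - expected_log_lik
```

Setting $q$ equal to the prior makes the KL term vanish, illustrating why the bound tightens as $q$ approaches the true posterior.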
One common assumption is that the variational distribution is of some parametric form, $q_{\boldsymbol{\phi}}(\mathbf{Z})$, and then we optimize the variational bound w.r.t.\ the variational parameters $\boldsymbol{\phi}$. For DGMs, a further challenge is that the variational bound is often intractable to compute analytically. To address this challenge, early work further bounds the intractable parts with tractable ones by introducing more variational parameters~\cite{saul1996}. However, this technique increases the gap between the bound being optimized and the log-likelihood, potentially resulting in poorer estimates. Much recent progress~\cite{kingma14iclr,danilo14icml,Mnih:icml2014} has been made on hybrid Monte Carlo and variational methods, which approximate the intractable expectations and their gradients over the parameters $(\boldsymbol{\theta}, \boldsymbol{\phi})$ via some unbiased Monte Carlo estimates. Furthermore, to handle large-scale datasets, stochastic optimization of the variational objective can be used with a suitable learning rate annealing scheme. It is important to notice that variance reduction is a key part of these methods in order to have fast and stable convergence. Most work on directed DGMs has focused on the generative capability of inferring the observations, such as filling in missing values~\cite{kingma14iclr,danilo14icml,Mnih:icml2014}, while relatively little work has investigated the predictive power, except the recent advances~\cite{kingma14nips,springenberg16} for semi-supervised learning. Below, we present max-margin deep generative models, which explore the discriminative max-margin principle to improve the predictive ability of the latent representations, while retaining the generative capability.
\subsection{Max-margin Deep Generative Models} We first consider the fully supervised setting, where each training datum is a pair $(\mathbf{x}, y)$ with input features $\mathbf{x} \in \mathbb{R}^D$ and ground-truth label $y$. Without loss of generality, we consider multiclass classification, where $y \in \mathcal{C} = \{1, \dots, M\}$. As illustrated in Fig.~\ref{fig:PGM}, a max-margin deep generative model (mmDGM) consists of two components: (1) a deep generative model to describe input features; and (2) a max-margin classifier to consider supervision. For the generative model, we can in theory adopt any DGM that defines a joint distribution over $(\mathbf{X}, \mathbf{Z})$ as in~Eq.~(\ref{eq:DGM-joint-dist}). For the max-margin classifier, instead of fitting a conventional SVM on the input features, we define a linear classifier on the latent representations, whose learning will be regularized by the supervision signal as we shall see. Specifically, if the latent representation $\mathbf{z}$ is given, we define the latent discriminant function $ F(y, \mathbf{z}, \boldsymbol{\eta}; \mathbf{x}) = \boldsymbol{\eta}^{\top} \mathbf{f}(y, \mathbf{z}), $ where $\mathbf{f}(y, \mathbf{z})$ is an $MK$-dimensional vector that concatenates $M$ subvectors, with the $y$th being $\mathbf{z}$ and all others being zero, and $\boldsymbol{\eta}$ is the corresponding weight vector. We consider the case where $\boldsymbol{\eta}$ is a random vector, following some prior distribution $p_0(\boldsymbol{\eta})$. Then our goal is to infer the posterior distribution $p(\boldsymbol{\eta}, \mathbf{Z} | \mathbf{X}, \mathbf{Y})$, which is typically approximated by a variational distribution $q(\boldsymbol{\eta}, \mathbf{Z})$ for computational tractability. Notice that this posterior is different from the one in the vanilla DGM. We expect that the supervision information will bias the learned representations to be more powerful for predicting the labels at test time.
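The feature map $\mathbf{f}(y, \mathbf{z})$ and the latent discriminant function can be written down directly. A small sketch with hypothetical helper names (\texttt{feature\_vec}, \texttt{discriminant}), purely for illustration:

```python
import numpy as np

def feature_vec(y, z, M):
    """f(y, z): an MK-dim vector whose y-th K-block equals z, all others zero."""
    K = z.shape[0]
    f = np.zeros(M * K)
    f[y * K:(y + 1) * K] = z
    return f

def discriminant(eta, y, z, M):
    """Latent discriminant function F(y, z, eta; x) = eta^T f(y, z)."""
    return float(eta @ feature_vec(y, z, M))
```

Because only the $y$th block of $\mathbf{f}(y, \mathbf{z})$ is nonzero, $F$ reduces to the dot product between $\mathbf{z}$ and the $y$th $K$-dimensional block of $\boldsymbol{\eta}$, i.e., a per-class linear score.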
To account for the uncertainty of $(\boldsymbol{\eta}, \mathbf{Z})$, we take the expectation and define the discriminant function $ F(y; \mathbf{x}) = \mathbb{E}_{q}\left[ \boldsymbol{\eta}^{\top} \mathbf{f}(y, \mathbf{z}) \right], $ and the final prediction rule that maps inputs to outputs is: \begin{eqnarray}\label{eq:mm-classifier} \hat y = \operatornamewithlimits{argmax}_{y \in \mathcal{C}} F(y; \mathbf{x} ). \end{eqnarray} Note that, unlike the conditional DGM~\cite{kingma14nips}, which places the class labels upstream and generates the latent representations as well as the input data $\mathbf{x}$ by conditioning on $y$, the above classifier is a downstream model in the sense that the supervision signal is determined by conditioning on the latent representations. \begin{figure}[!t] \centering \includegraphics[width=.9\columnwidth]{incorporate_y.pdf} \caption{a) and b): Graphical models of mmDGMs when labels are given or missing. c) and d): Graphical models of mmDCGMs when labels are given or missing. Solid lines and dash-dotted lines represent the generative model and the recognition model, respectively. Dotted lines stand for the max-margin classifier. Compared with mmDGMs, mmDCGMs disentangle the label information from the latent variables and separate the pathways of inferring labels and latent variables.} \label{fig:PGM} \end{figure} \subsubsection{The Learning Problem} We want to jointly learn the parameters $\boldsymbol{\theta}$ and infer the posterior distribution $q(\boldsymbol{\eta}, \mathbf{Z})$. Based on the equivalent variational formulation of MLE, we define the joint learning problem as solving: \begin{eqnarray} \min_{\boldsymbol{\theta}, q(\boldsymbol{\eta}, \mathbf{Z}), \boldsymbol{\xi}} && \mathcal{L} (\boldsymbol{\theta}, q(\boldsymbol{\eta}, \mathbf{Z}); \mathbf{X}) + C\sum_{n = 1}^N \xi_n \\ \forall n, y \in \mathcal{C}, \textrm{s.t.
:} && \left\{ \begin{array}{ll} \mathbb{E}_q[\boldsymbol{\eta}^{\top} \Delta \mathbf{f}_n(y)] \ge \Delta l_n(y) - \xi_n\\ \xi_n \ge 0, \end{array} \right. \nonumber \end{eqnarray} where $\Delta \mathbf{f}_n(y) = \mathbf{f}(y_n, \mathbf{z}_n) - \mathbf{f}(y, \mathbf{z}_n)$ is the difference of the feature vectors; $\Delta l_n(y)$ is the loss function that measures the cost to predict $y$ if the true label is $y_n$; and $C$ is a nonnegative regularization parameter balancing the two components. In the objective, the variational bound is defined as $\mathcal{L} (\boldsymbol{\theta}, q(\boldsymbol{\eta}, \mathbf{Z}); \mathbf{X}) = \mathbf{KL} (q(\boldsymbol{\eta}, \mathbf{Z}) || p_0(\boldsymbol{\eta}, \mathbf{Z} | \boldsymbol{\alpha}) ) - \mathbb{E}_q \left[ \log p(\mathbf{X} | \mathbf{Z}, \boldsymbol{\beta}) \right] $, and the margin constraints are from the classifier~(\ref{eq:mm-classifier}). If we ignore the constraints (e.g., by setting $C = 0$), the solution of $q(\boldsymbol{\eta}, \mathbf{Z})$ will be exactly the Bayesian posterior, and the problem is equivalent to performing MLE for $\boldsymbol{\theta}$. By absorbing the slack variables, we can rewrite the problem in an unconstrained form: \begin{equation}\label{eq:joint-problem} \min_{\boldsymbol{\theta}, q(\boldsymbol{\eta}, \mathbf{Z})} \mathcal{L} (\boldsymbol{\theta}, q(\boldsymbol{\eta}, \mathbf{Z}); \mathbf{X}) + C \mathcal{R}(q(\boldsymbol{\eta}, \mathbf{Z}); \mathbf{X}), \end{equation} where the hinge loss is: $ \mathcal{R}(q(\boldsymbol{\eta}, \mathbf{Z}); \mathbf{X}) = \sum_{n = 1}^N\max_{y \in \mathcal{C}} (\Delta l_n(y) - \mathbb{E}_q[\boldsymbol{\eta}^{\top} \Delta \mathbf{f}_n(y)]). $ Due to the convexity of the $\max$ function, it is easy to verify that the hinge loss is an upper bound of the training error of classifier~(\ref{eq:mm-classifier}), that is, $\mathcal{R}(q(\boldsymbol{\eta}, \mathbf{Z}); \mathbf{X}) \geq \sum_n \Delta l_n( \hat{y}_n )$.
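Because the expectation operator is linear, the per-sample term of the hinge loss $\mathcal{R}$ can be evaluated from the mean of $\boldsymbol{\eta}$ and the mean of $\mathbf{z}_n$ under a factorized $q$. A sketch with the common 0/1 cost $\Delta l_n(y) = \mathbb{I}[y \neq y_n]$; the function name and the factorization are illustrative assumptions:

```python
import numpy as np

def hinge_loss(lam, z_mean, y_true, M):
    """max_y ( Delta l_n(y) - lam^T Delta f_n(y) ) for one sample, using the
    0/1 cost Delta l_n(y) = I[y != y_n].  Since the expectation is linear,
    E_q[eta^T Delta f] factorizes into the mean weights lam applied to the
    posterior mean of z (assuming q(eta, z) factorizes)."""
    K = z_mean.shape[0]
    scores = lam.reshape(M, K) @ z_mean            # F(y; x) for every class y
    cost = (np.arange(M) != y_true).astype(float)  # Delta l_n(y)
    margins = cost - (scores[y_true] - scores)     # y = y_true contributes 0
    return float(margins.max())
```

The candidate $y = y_n$ always contributes $0$, so the loss is nonnegative and vanishes only when the true class beats every other class by its cost, which is the margin condition enforced by the constraints above.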
Furthermore, the hinge loss is a convex functional over the variational distribution because of the linearity of the expectation operator. These properties render the hinge loss a good surrogate to optimize. Previous work has explored this idea to learn discriminative topic models~\cite{zhu12jmlr}, but restricted to a shallow structure of hidden variables. Our work presents a significant extension to learn deep generative models, which poses new challenges for learning and inference. \subsubsection{The Doubly Stochastic Subgradient Algorithm}\label{sec:algorithm} The variational formulation of problem~(\ref{eq:joint-problem}) naturally suggests that we can develop a variational algorithm to address the intractability of the true posterior. We now present a new algorithm to solve problem~(\ref{eq:joint-problem}). Our method is a doubly stochastic generalization of the Pegasos (i.e., Primal Estimated sub-GrAdient SOlver for SVM) algorithm~\cite{shai11pegasos} for the classic SVMs with fully observed input features, with the new extension of dealing with a highly nontrivial structure of latent variables. First, we make the structured mean-field (SMF) assumption that $q(\boldsymbol{\eta}, \mathbf{Z}) = q(\boldsymbol{\eta}) q_{\boldsymbol{\phi}}(\mathbf{Z})$. Under this assumption, we have the discriminant function as $\mathbb{E}_q[\boldsymbol{\eta}^{\top} \Delta \mathbf{f}_n(y)] = \mathbb{E}_{q(\boldsymbol{\eta})}[\boldsymbol{\eta}^{\top}] \mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z}_n)}[\Delta \mathbf{f}_n(y)].$ Moreover, we can solve for the optimal solution of $q(\boldsymbol{\eta})$ in some analytical form.
In fact, by the calculus of variations, we can show that, given the other parts, the solution is $ q(\boldsymbol{\eta}) \propto p_0(\boldsymbol{\eta}) \exp \Big( \boldsymbol{\eta}^{\top} \sum_{n,y} \omega_n^y \mathbb{E}_{q_{\boldsymbol{\phi}} }[ \Delta \mathbf{f}_n (y)] \Big), $ where $\boldsymbol{\omega}$ are the Lagrange multipliers (see~\cite{zhu12jmlr} for details). If the prior is normal, $p_0(\boldsymbol{\eta}) = \mathcal{N}(\boldsymbol{0}, \sigma^2 \mathbf{I})$, we have the normal posterior: $ q(\boldsymbol{\eta}) = \mathcal{N}(\boldsymbol{\lambda}, \sigma^2 \mathbf{I}),~\textrm{where}~\boldsymbol{\lambda} = \sigma^2 \sum_{n,y} \omega_n^y \mathbb{E}_{q_{\boldsymbol{\phi}} }[ \Delta \mathbf{f}_n(y) ]. $ Therefore, even though we did not assume a parametric form for $q(\boldsymbol{\eta})$, the above results show that the optimal posterior distribution of $\boldsymbol{\eta}$ is Gaussian. Since we only use the expectation in the optimization problem and in prediction, we can directly solve for the mean parameter $\boldsymbol{\lambda}$ instead of $q(\boldsymbol{\eta})$.
Further, in this case we can verify that $\mathbf{KL}(q(\boldsymbol{\eta}) ||p_0(\boldsymbol{\eta})) = \frac{||\boldsymbol{\lambda}||^2}{2\sigma^2}$ and then the equivalent objective function in terms of $\boldsymbol{\lambda}$ can be written as: \begin{equation}\label{eq:learning_problem} \min_{\boldsymbol{\theta}, \boldsymbol{\phi}, \boldsymbol{\lambda}} \mathcal{L} (\boldsymbol{\theta}, \boldsymbol{\phi}; \mathbf{X}) + \frac{||\boldsymbol{\lambda}||^2}{2\sigma^2} + C \mathcal{R}( \boldsymbol{\lambda}, \boldsymbol{\phi}; \mathbf{X}), \end{equation} where $\mathcal{R}( \boldsymbol{\lambda}, \boldsymbol{\phi}; \mathbf{X}) = \sum_{n = 1}^N \ell(\boldsymbol{\lambda}, \boldsymbol{\phi}; \mathbf{x}_n) $ is the total hinge loss, and the per-sample hinge loss is $\ell(\boldsymbol{\lambda}, \boldsymbol{\phi}; \mathbf{x}_n) = \max_{y \in \mathcal{C}} (\Delta l_n(y) - \boldsymbol{\lambda}^{\top}\mathbb{E}_{q_{\boldsymbol{\phi}}}[ \Delta \mathbf{f}_n(y)])$. Below, we present a doubly stochastic subgradient descent algorithm to solve this problem. The {\it first stochasticity} arises from a stochastic estimate of the objective by random mini-batches. Specifically, batch learning needs to scan the full dataset to compute subgradients, which is often too expensive for large-scale datasets. One effective technique is to do stochastic subgradient descent~\cite{shai11pegasos}, where at each iteration we randomly draw a mini-batch of the training data and then do the variational updates over the small mini-batch. Formally, given a mini-batch of size $m$, we get an unbiased estimate of the objective: \begin{equation} \tilde{\mathcal{L}}_m := \frac{N}{m}\sum_{n = 1}^m \mathcal{L} (\boldsymbol{\theta}, \boldsymbol{\phi}; \mathbf{x}_n) + \frac{||\boldsymbol{\lambda}||^2}{2\sigma^2} + \frac{NC}{m} \sum_{n = 1}^m \ell(\boldsymbol{\lambda}, \boldsymbol{\phi}; \mathbf{x}_n).
\nonumber \end{equation} The {\it second stochasticity} arises from a stochastic estimate of the per-sample variational bound and its subgradient, whose intractability calls for another Monte Carlo estimator. Formally, let $\{\mathbf{z}_n^l\}_{l=1}^{L}$, with $\mathbf{z}_n^l \sim q_{\boldsymbol{\phi}}(\mathbf{z} | \mathbf{x}_n, y_n)$, be a set of samples from the variational distribution, where we write the conditioning variables explicitly. Then, the estimates of the per-sample variational bound and the per-sample hinge loss are \begin{equation} \tilde{\mathcal{L}}(\boldsymbol{\theta}, \boldsymbol{\phi}; \mathbf{x}_n) = \frac{1}{L} \sum_l \Big( \log p(\mathbf{x}_n, \mathbf{z}_n^l | \boldsymbol{\beta}) - \log q_{\boldsymbol{\phi}}(\mathbf{z}_n^l) \Big) \nonumber \end{equation} and \begin{equation} \tilde{\ell}(\boldsymbol{\lambda}, \boldsymbol{\phi}; \mathbf{x}_n) = \max_y \Big( \Delta l_n(y) - \frac{1}{L} \sum_l \boldsymbol{\lambda}^\top \Delta \mathbf{f}_n(y, \mathbf{z}_n^l) \Big) \nonumber \end{equation} respectively, where $\Delta \mathbf{f}_n(y, \mathbf{z}_n^l) = \mathbf{f}(y_n, \mathbf{z}_n^l) - \mathbf{f}(y, \mathbf{z}_n^l)$. Note that $\tilde{\mathcal{L}}$ is an unbiased estimate of $\mathcal{L}$, while $\tilde{\ell}$ is a biased estimate of $\ell$. Nevertheless, since the expectation of a maximum is at least the maximum of expectations, we can still show that $\tilde{\ell}$ is an upper-bound estimate of $\ell$ in expectation. Furthermore, this bias does not affect our estimate of the gradient.
In fact, by using the equality $\nabla_{\boldsymbol{\phi}}q_{\boldsymbol{\phi}}(\mathbf{z}) = q_{\boldsymbol{\phi}}(\mathbf{z}) \nabla_{\boldsymbol{\phi}} \log q_{\boldsymbol{\phi}}(\mathbf{z}) $, we can construct an unbiased Monte Carlo estimate of $\nabla_{\boldsymbol{\phi}} (\mathcal{L}(\boldsymbol{\theta}, \boldsymbol{\phi}; \mathbf{x}_n) + \ell(\boldsymbol{\lambda}, \boldsymbol{\phi}; \mathbf{x}_n))$ as: \setlength{\arraycolsep}{0.0em}\begin{eqnarray}\label{eq:var-grad} \mathbf{g}_{\boldsymbol{\phi}} &=& \frac{1}{L} \sum_{l=1}^L \Big( \log p(\mathbf{x}_n, \mathbf{z}_n^l | \boldsymbol{\beta}) - \log q_{\boldsymbol{\phi}}(\mathbf{z}_n^l) \nonumber + C \boldsymbol{\lambda}^\top \Delta \mathbf{f}_n( \tilde{y}_n, \mathbf{z}_n^l ) \Big) \\ &&\nabla_{\boldsymbol{\phi}} \log q_{\boldsymbol{\phi}}(\mathbf{z}_n^l) , \end{eqnarray} where the last term stems from the hinge loss, with the loss-augmented prediction $\tilde{y}_n = \operatornamewithlimits{argmax}_y (\Delta l_n(y) + \frac{1}{L} \sum_l \boldsymbol{\lambda}^\top \mathbf{f}(y, \mathbf{z}_n^l) )$. For $\boldsymbol{\theta}$ and $\boldsymbol{\lambda}$, the estimates of the gradient $\nabla_{\boldsymbol{\theta}} \mathcal{L}(\boldsymbol{\theta}, \boldsymbol{\phi}; \mathbf{x}_n)$ and the subgradient $\nabla_{\boldsymbol{\lambda}} \ell(\boldsymbol{\lambda}, \boldsymbol{\phi}; \mathbf{x}_n)$ are simpler: \begin{equation} \mathbf{g}_{\boldsymbol{\theta}} = \frac{1}{L} \sum_l \nabla_{\boldsymbol{\theta}} \log p(\mathbf{x}_n, \mathbf{z}_n^l | \boldsymbol{\theta}), \nonumber \end{equation} and \begin{equation} \mathbf{g}_{\boldsymbol{\lambda}} = \frac{1}{L} \sum_l \left( \mathbf{f}(\tilde{y}_n, \mathbf{z}_n^l) - \mathbf{f}(y_n, \mathbf{z}_n^l) \right). \nonumber \end{equation} Notice that the sampling and the gradient $\nabla_{\boldsymbol{\phi}} \log q_{\boldsymbol{\phi}}(\mathbf{z}_n^l)$ depend only on the variational distribution, not the underlying model.
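The log-derivative identity underlying the estimator~(\ref{eq:var-grad}) can be sanity-checked on a toy one-dimensional problem. Here $q_{\boldsymbol{\phi}} = \mathcal{N}(\mu, 1)$ and $f(z) = z^2$ stand in for the bracketed term, purely for illustration; the function name is ours:

```python
import numpy as np

rng = np.random.default_rng(1)

def score_function_grad(f, mu, n_samples=200000):
    """Unbiased estimate of d/d mu  E_{z ~ N(mu, 1)}[f(z)] via the identity
    grad q = q * grad log q: average f(z) * d log q / d mu over samples.

    Toy 1-D illustration of the estimator used in the paper, where f
    bundles the bound and hinge terms."""
    z = mu + rng.standard_normal(n_samples)
    score = z - mu                  # d/d mu of log N(z; mu, 1)
    return np.mean(f(z) * score)

# For f(z) = z**2 we have E[z^2] = mu^2 + 1, so the true gradient is 2*mu.
g = score_function_grad(lambda z: z**2, mu=1.5)
```

As the paper notes, the score $\nabla_{\boldsymbol{\phi}} \log q_{\boldsymbol{\phi}}$ involves only the variational distribution, so the same estimator applies regardless of the underlying generative model.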
\begin{algorithm}[t] \caption{Doubly Stochastic Subgradient Algorithm}\label{alg:double-stochastic} \begin{algorithmic} \STATE Initialize $\boldsymbol{\theta}$, $\boldsymbol{\lambda}$, and $\boldsymbol{\phi}$ \REPEAT \STATE draw a random mini-batch of $m$ data points \STATE draw random samples from noise distribution $p(\boldsymbol{\epsilon})$ \STATE compute subgradient $\mathbf{g} = \nabla_{\boldsymbol{\theta}, \boldsymbol{\lambda}, \boldsymbol{\phi}} \tilde{\mathcal{L}}(\boldsymbol{\theta}, \boldsymbol{\lambda}, \boldsymbol{\phi}; \mathbf{X}^m, \boldsymbol{\epsilon})$ \STATE update parameters $(\boldsymbol{\theta}, \boldsymbol{\lambda}, \boldsymbol{\phi})$ using subgradient $\mathbf{g}$. \UNTIL{convergence} \STATE {\bf return} $\boldsymbol{\theta}$, $\boldsymbol{\lambda}$, and $\boldsymbol{\phi}$ \end{algorithmic} \end{algorithm} The above estimates consider the general case where the variational bound is intractable. In some cases, we can compute the KL-divergence term analytically, e.g., when the prior and the variational distribution are both Gaussian. In such cases, we only need to estimate the remaining intractable part by sampling, which often reduces the variance~\cite{kingma14iclr}. Similarly, if the expectation of the features can be computed analytically, we could use it directly in the computation of subgradients (e.g., $\mathbf{g}_{\boldsymbol{\theta}}$ and $\mathbf{g}_{\boldsymbol{\lambda}}$) instead of sampling, which again can lead to variance reduction. With the above estimates of subgradients, we can use stochastic optimization methods such as SGD~\cite{shai11pegasos} and Adam~\cite{kingma:15} to update the parameters, as outlined in Alg.~\ref{alg:double-stochastic}. Overall, our algorithm is a doubly stochastic generalization of Pegasos to deal with the highly nontrivial latent variables.
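In the degenerate case where the latent code is fully observed (so the Monte Carlo estimates are exact and only $\boldsymbol{\lambda}$ is updated), one iteration of Alg.~\ref{alg:double-stochastic} reduces to a Pegasos-style update. A sketch for a single data point; the function name and toy setting are illustrative, not the paper's full algorithm:

```python
import numpy as np

def pegasos_step(lam, x, y_true, M, C, sigma2, lr):
    """One subgradient step on ||lam||^2/(2 sigma^2) + C * hinge for a single
    sample, in the degenerate case z = x (latent code observed): compute the
    loss-augmented prediction, then descend along the subgradient.

    With nontrivial z the same step would use samples from q_phi and
    also update (theta, phi)."""
    K = x.shape[0]
    scores = lam.reshape(M, K) @ x
    cost = (np.arange(M) != y_true).astype(float)
    y_tilde = int(np.argmax(cost + scores))        # loss-augmented prediction
    g = lam / sigma2                               # gradient of the regularizer
    if y_tilde != y_true:                          # hinge-loss subgradient
        g = g.reshape(M, K).copy()
        g[y_tilde] += C * x
        g[y_true] -= C * x
        g = g.reshape(-1)
    return lam - lr * g
```

Starting from $\boldsymbol{\lambda} = \mathbf{0}$, a single step already pushes the true class's score above the loss-augmented competitor's, which is the margin-enforcing behavior the full algorithm inherits.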
Now, the remaining question is how to define an appropriate variational distribution $q_{\boldsymbol{\phi}}(\mathbf{z})$ to obtain a robust estimate of the subgradients as well as the objective. Two types of methods have been developed for unsupervised DGMs, namely, variance reduction~\cite{Mnih:icml2014} and auto-encoding variational Bayes (AVB)~\cite{kingma14iclr}. Though both methods can be used for our models, we focus on the AVB approach. For continuous variables $\mathbf{Z}$, under certain mild conditions we can reparameterize the variational distribution $q_{\boldsymbol{\phi}}(\mathbf{z})$ using some simple variables $\boldsymbol{\epsilon}$. Specifically, we can draw samples $\boldsymbol{\epsilon}$ from some simple distribution $p(\boldsymbol{\epsilon})$ and apply the transformation $\mathbf{z} = \mathbf{g}_{\boldsymbol{\phi}}(\boldsymbol{\epsilon}, \mathbf{x}, y)$ to obtain a sample from the distribution $q(\mathbf{z} | \mathbf{x}, y)$. We refer the readers to~\cite{kingma14iclr} for more details. In our experiments, we consider the special Gaussian case, where we assume that the variational distribution is a multivariate Gaussian with a diagonal covariance matrix: \begin{eqnarray}\label{eq:recognition-model} q_{\boldsymbol{\phi}}(\mathbf{z} | \mathbf{x}, y) = \mathcal{N}( \boldsymbol{\mu}(\mathbf{x},y; \boldsymbol{\phi}), \boldsymbol{\sigma}^2(\mathbf{x},y; \boldsymbol{\phi}) ), \end{eqnarray} whose mean and variance are functions of the input data. This defines our recognition model. Then, the reparameterization trick is as follows: we first draw standard normal variables $\boldsymbol{\epsilon}^l \sim \mathcal{N}(0, \mathbf{I})$ and then apply the transformation $\mathbf{z}_n^l = \boldsymbol{\mu}(\mathbf{x}_n,y_n;\boldsymbol{\phi}) + \boldsymbol{\sigma}(\mathbf{x}_n,y_n; \boldsymbol{\phi}) \odot \boldsymbol{\epsilon}^l$ to get a sample. For simplicity, we assume that both the mean and the variance are functions of $\mathbf{x}$ only.
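The Gaussian reparameterization step of Eq.~(\ref{eq:recognition-model}) can be sketched in a few lines; the helper name is illustrative, and in practice $\boldsymbol{\mu}$ and $\boldsymbol{\sigma}$ would be the outputs of the recognition network:

```python
import numpy as np

rng = np.random.default_rng(3)

def reparameterize(mu, sigma, L):
    """Draw L samples z = mu + sigma * eps with eps ~ N(0, I), i.e., from
    q_phi(z|x) = N(mu, diag(sigma^2)).  The samples are a deterministic
    function of (mu, sigma), so gradients flow through the transformation
    rather than the sampling step."""
    eps = rng.standard_normal((L,) + mu.shape)
    return mu + sigma * eps

samples = reparameterize(np.array([0.0, 5.0]), np.array([1.0, 0.1]), L=20000)
```

Writing the sample as a deterministic transformation of $(\boldsymbol{\mu}, \boldsymbol{\sigma})$ is what makes the subgradients w.r.t.\ $\boldsymbol{\phi}$ low-variance compared to the score-function estimator.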
However, it is worth emphasizing that although the recognition model is unsupervised, the parameters $\boldsymbol{\phi}$ are learned in a supervised manner because the subgradient~(\ref{eq:var-grad}) depends on the hinge loss. Further details of the experimental settings are presented in Sec.~\ref{sec:experiment_setting}. \subsection{Conditional Variants for Semi-supervised Learning} As collecting labeled data is often costly and time-consuming, semi-supervised learning (SSL)~\cite{zhu:book09} is an important setting, where easy-to-obtain unlabeled data are leveraged to improve the classifier. We now present an extension of mmDGMs to the semi-supervised learning scenario. Given a labeled dataset $\mathcal{D}_L = \{(\mathbf{x}_n, y_n )\}_{n=1}^{N_L}$ and an unlabeled dataset $\mathcal{D}_U = \{\mathbf{x}_n\}_{n=1}^{N_U}$, where the size $N_U$ is typically much larger than $N_L$, the goal of SSL is to explore the intrinsic structures underlying the unlabeled data to help learn a classifier. As the learning objective of mmDGMs consists of two parts (a data likelihood and a classification loss), a naive approach to handling unlabeled data is to simply ignore the loss term when the class label is missing. However, doing so leads to a weak coupling between the likelihood model and the classifier. Below, we present a conditional variant of mmDGMs, namely max-margin deep conditional generative models (mmDCGMs), to strongly couple the classifier and the data likelihood. As in mmDGMs, an mmDCGM consists of two components: (1) a deep max-margin classifier to infer labels given data and (2) a class-conditional deep generative model to describe the joint distribution of the data, labels and latent variables. Fig.~\ref{fig:PGM} compares the graphical models of the mmDGM and mmDCGM. Below, we present the learning objective of mmDCGM formally, which consists of several key components.
For notational simplicity, we will omit the parameters $\boldsymbol{\theta}$, $\boldsymbol{\phi}$ and $\boldsymbol{\lambda}$ in the following formulae if no confusion arises. {\bf Generative loss:} The first part of our learning objective is a generative loss to describe the observed data. For the labeled data $\mathbf{x}_n$ whose $y_n$ is visible, the mmDCGM maximizes the joint likelihood of the pair $(\mathbf{x}_n, y_n)$, $\log p(\mathbf{x}_n, y_n)$, which is lower bounded by: \begin{equation} \mathcal{L}(\mathbf{x}_n,y_n) = \mathbb{E}_{q(\mathbf{z}_n|\mathbf{x}_n,y_n)}\left[ \log \frac{ p(\mathbf{x}_n| \mathbf{z}_n, y_n) p (\mathbf{z}_n) p(y_n)}{q(\mathbf{z}_n|\mathbf{x}_n,y_n)}\right]. \end{equation} For the unlabeled data $\mathbf{x}_n$ whose $y_n$ is hidden, we can maximize the marginal likelihood $\log p(\mathbf{x}_n)$ by integrating out the hidden labels, whose variational lower bound is: \setlength{\arraycolsep}{0.0em}\begin{eqnarray} \log p(\mathbf{x}_n) & \ge & \mathbb{E}_{q(y | \mathbf{x}_n )} \mathbb{E}_{q(\mathbf{z}_n | \mathbf{x}_n, y)}\left[ \log \frac{ p(\mathbf{x}_n| \mathbf{z}_n, y) p (\mathbf{z}_n) p (y)}{ q (y | \mathbf{x}_n) q(\mathbf{z}_n|\mathbf{x}_n, y)} \right] \nonumber \\ & = & \mathbb{E}_{q(y | \mathbf{x}_n )} \left[ \mathcal{L}(\mathbf{x}_n, y) \right] + \mathcal{H}(q(y | \mathbf{x}_n)). \end{eqnarray} These lower bounds were adopted in the previous method~\cite{kingma14nips}. However, one issue with this method is its computational inefficiency when dealing with a large set of unlabeled data and a large number of classes, because we need to compute the lower bounds of the joint likelihood for all possible $y \in \mathcal{C}$ for each unlabeled data point.
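To make this per-class enumeration cost concrete, here is a minimal sketch of the unlabeled bound $\mathbb{E}_{q(y|\mathbf{x})}[\mathcal{L}(\mathbf{x}, y)] + \mathcal{H}(q(y|\mathbf{x}))$, where the hypothetical function `elbo(x, y)` stands in for the full per-pair variational computation:

```python
import numpy as np

def unlabeled_bound(x, q_y, elbo):
    """Variational bound for an unlabeled x: E_{q(y|x)}[L(x, y)] + H(q(y|x)).

    `elbo(x, y)` is a stand-in for the per-pair lower bound L(x, y); the
    cost is one bound evaluation per class, i.e. O(|C|) per data point.
    """
    probs = np.asarray(q_y, dtype=float)
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return sum(p * elbo(x, y) for y, p in enumerate(probs)) + entropy
```

With thousands of classes and a large unlabeled set, this enumeration dominates training time, which motivates the point-estimate approximation introduced next.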
To make this computationally efficient, we propose to use the prediction of a classifier, $\hat y_n = \arg \max \tilde q(y | \mathbf{x}_n)$, as a point estimate that approximates the full posterior $q(y | \mathbf{x}_n)$, thereby speeding up the inference procedure. We denote the classifier by $\tilde q$ because it is not restricted to a specific form with a proper distribution over labels but is an unnormalized one trained under the max-margin principle. Indeed, the outputs of the classifier are real values produced by linear operations, denoting the signed distances from the data to the hyperplanes defined by the weights. Consequently, the entropy term vanishes and the lower bound turns out to be: \begin{equation}\label{eqn:point_y} \log p(\mathbf{x}_n) \ge \mathbb{E}_{q(\mathbf{z}_n | \mathbf{x}_n, \hat y_n)}\left[ \log \frac{ p(\mathbf{x}_n | \mathbf{z}_n, \hat y_n) p (\mathbf{z}_n) p (\hat y_n)}{ q(\mathbf{z}_n | \mathbf{x}_n, \hat y_n)} \right]. \end{equation} Note that the lower bound is valid because we can view $\hat y_n = \arg \max \tilde q(y_n | \mathbf{x}_n)$ as a delta distribution. With the above derivations, we define the overall generative loss as the summation of the negative variational bounds over $\mathcal{D}_L$ and $\mathcal{D}_U$: \begin{eqnarray}\label{eqn:gene_loss} -\mathcal{L_G}=&\sum_{(\mathbf{x}_n, y_n) \in \mathcal{D}_L}& \mathbb{E}_{q(\mathbf{z}_n | \mathbf{x}_n, y_n)}\left[ \log \frac{ p(\mathbf{x}_n, y_n, \mathbf{z}_n)}{q(\mathbf{z}_n | \mathbf{x}_n, y_n)} \right] \nonumber \\ & + \sum_{\mathbf{x}_n \in \mathcal{D}_U} & \mathbb{E}_{q(\mathbf{z}_n | \mathbf{x}_n, \hat y_n)}\left[ \log \frac{ p(\mathbf{x}_n, \hat y_n, \mathbf{z}_n)}{ q(\mathbf{z}_n|\mathbf{x}_n, \hat y_n)}\right]. \end{eqnarray} {\bf Hinge loss:} The second part of our learning objective is a hinge loss on the labeled data.
Specifically, though the labeled data can contribute to the training of the classifier $\tilde q(y|\mathbf{x})$ implicitly through the objective function in Eqn.~(\ref{eqn:gene_loss}), it has been shown that adding a predictive loss for the labeled data can speed up convergence and achieve better results~\cite{kingma14nips,maaloe16}. Here, we adopt a similar idea by introducing a hinge loss as the discriminative regularization for the labeled data: \begin{equation} \mathcal{L_L} = \sum_{(\mathbf{x}_n, y_n) \in \mathcal{D}_L} \max_{ y \in \mathcal{C}} (\Delta l_n (y) + \boldsymbol{\lambda}^{\top}\mathbb{E}_{q_{\boldsymbol{\phi}}}[ \Delta \mathbf{f}_n(y)]), \end{equation} which is the same as in the fully supervised case. {\bf Hat loss:} The third part of our learning objective is a hat loss on the unlabeled data. Specifically, as $N_U$ is typically much larger than $N_L$ in semi-supervised learning, it is desirable that the unlabeled data can regularize the behaviour of the classifier explicitly. To this end, we further propose a max-margin ``hat loss''~\cite{zhu:book09} for the unlabeled data as follows: \begin{equation} \mathcal{L_U} = \sum_{\mathbf{x}_n \in \mathcal{D}_U} \max_{ y \in \mathcal{C}} (\Delta l_{\hat y_n} (y) + \boldsymbol{\lambda}^{\top}\mathbb{E}_{q_{\boldsymbol{\phi}}}[ \Delta \mathbf{f}_{\hat y_n }(y)]), \end{equation} where $\Delta \mathbf{f}_{\hat y_n }(y) = \mathbf{f}(\hat y_n, \mathbf{z}_n) - \mathbf{f}(y, \mathbf{z}_n)$ and $\Delta l_{\hat y_n }(y)$ is an indicator function of whether $y$ equals the prediction $\hat y_n$ or not. Namely, we treat the prediction $\hat y_n$ as a putative label and apply the hinge loss function on the unlabeled data. This function is called the hat loss due to its shape in a binary classification example~\cite{zhu:book09}.
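The two losses above share the same functional form; a minimal NumPy sketch, with the expected discriminant values $F(y) = \boldsymbol{\lambda}^{\top}\mathbb{E}[\mathbf{f}(y, \mathbf{z})]$ abstracted into a plain score vector (all names are ours, for illustration only):

```python
import numpy as np

def hinge_loss(scores, y_ref, delta=1.0):
    """max over classes y of (delta * 1[y != y_ref] + F(y) - F(y_ref))."""
    margins = scores - scores[y_ref] + delta
    margins[y_ref] = 0.0  # no margin cost against the reference class itself
    return float(max(margins.max(), 0.0))

def hat_loss(scores, delta=1.0):
    """Same form, but with the classifier's own prediction as a putative
    label: it penalizes unconfident decisions on unlabeled data."""
    return hinge_loss(scores, int(np.argmax(scores)), delta)

# a confident, correct prediction incurs no loss
print(hinge_loss(np.array([3.0, 1.0, 0.0]), y_ref=0))  # 0.0
```

Note that the hat loss is zero exactly when the top score exceeds the runner-up by at least `delta`, which is the "confident decision" requirement discussed below.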
Intuitively, the hinge loss encourages the predictor to classify the labeled data correctly and confidently with a large margin, while the hat loss only requires the predictor to make confident decisions on the unlabeled data. The hat loss, which was originally proposed in S3VMs~\cite{vapnik:book}, assumes that the decision boundary tends to lie in low-density areas of the feature space. In such shallow models, the correctness of this assumption heavily depends on the true data distribution, which is fixed but unknown. However, the assumption is much weaker when built upon the latent feature space learned by a deep model, as in our method. In practice, the predictive performance of mmDCGMs is improved substantially by adding this regularization, as shown in Sec.~\ref{sec:ssl-results}. {\bf Label-balance regularization:} The last part of our learning objective is a regularization term to balance the possible label predictions on the unlabeled data. Specifically, one practical problem of semi-supervised learning is the imbalance of the predictions~\cite{zhu:book09}, that is, a classifier may classify most of the unlabeled points into the same class. To address this problem, we introduce a balance constraint for multiclass semi-supervised learning: \begin{equation} \forall y \in \mathcal{C}, \frac{1}{N_U} \sum_{\mathbf{x}_n \in \mathcal{D}_U} \Delta l_{\hat y_n} (y) = \frac{1}{N_L} \sum_{(\mathbf{x}_n, y_n) \in \mathcal{D}_L} \Delta l_{n} (y), \label{eqn:constraint_hard} \end{equation} which assumes that the distribution of the predictions on the unlabeled data should match that of the groundtruth labels in the labeled set. However, both sides of Eqn.~(\ref{eqn:constraint_hard}) are summations of indicator functions, which are non-differentiable with respect to $\boldsymbol{\lambda}$. Therefore, we cannot directly optimize $\boldsymbol{\lambda}$ with gradient methods to satisfy this constraint.
Here, we relax the constraint (\ref{eqn:constraint_hard}) as: $\forall y \in \mathcal{C},$ \begin{equation} \label{eqn:constraint_relax} \frac{1}{N_U} \sum_{\mathcal{D}_U} \Delta l_{\hat y_n} (y) F(y; \mathbf{x}_n) = \frac{1}{N_L} \sum_{\mathcal{D}_L} \Delta l_{n} (y) F(y; \mathbf{x}_n), \end{equation} where $F(y; \mathbf{x}_n) = \boldsymbol{\lambda}^{\top}\mathbb{E}_{\tilde q}[ \mathbf{f}(y, \mathbf{z})]$ and we simplify the summation notation. Given a certain class $y$, the left-hand side selects the unlabeled data whose predictions equal $y$ according to the indicator functions, and adds the corresponding activations (discriminant functions of $y$ divided by a factor $N_U$) together. The right-hand side computes the corresponding normalized activations for the labeled data in the same class. Note that $F(y; \mathbf{x}_n)$ is no smaller than $F(y'; \mathbf{x}_n)$ for any other $y'$ due to the definitions of the prediction $\hat y_n$ and the indicator function $\Delta l_{\hat y_n}(y)$. The gradients in the relaxed version are still not well-defined due to the indicator functions. However, assuming that the predictions $\hat y_n$ are given, both sides of Eqn.~(\ref{eqn:constraint_relax}) are summations without indicator functions, which are differentiable with respect to $\boldsymbol{\lambda}$. In our experiments, we indeed ignore the dependency of the indicator functions on $\boldsymbol{\lambda}$ and approximate the total gradients by the gradients of the cumulative activations. This approximation does not work for the constraint in Eqn.~(\ref{eqn:constraint_hard}) because both sides there turn out to be scalars given $\hat y_n$, and the gradient with respect to $\boldsymbol{\lambda}$ is zero almost everywhere, which cannot be used to optimize the parameters.
In fact, the relaxed constraint implicitly balances the predictions of the unlabeled data according to the groundtruth, under the further assumption that the cumulative activation is proportional to the number of predictions for any $y$. Intuitively, if the cumulative activation of the selected unlabeled data in a certain class $y$ is larger than that of the labeled data, then the predictor probably classifies some unlabeled data as $y$ incorrectly. Consequently, $\boldsymbol{\lambda}$ is updated to reduce the activations, and the number of predictions in this class will then decrease because $F(y; \mathbf{x}_n) $ may become smaller than $ F(y'; \mathbf{x}_n)$ for some other $y'$. Moreover, as hard constraints are unlikely to be satisfied in practice, we further relax them by using a regularization penalty in the common $L_2$-norm: \begin{equation} \mathcal{L_B} = \sqrt{\sum_{y \in \mathcal{C}}\left( \frac{\sum_{\mathcal{D}_U} \Delta l_{\hat y_n} (y) F(y; \mathbf{x})}{N_U} - \frac{\sum_{\mathcal{D}_L} \Delta l_{n} (y) F(y; \mathbf{x})}{N_L} \right)^2}. \nonumber \end{equation} With the above sub-objectives, our final objective function is a weighted sum: \begin{equation}\label{eqn:final_obj} \mathcal{L} = \mathcal{L_G} + \alpha(\mathcal{L_L} + \alpha_{\mathcal{U}}\mathcal{L_U} + \alpha_{\mathcal{B}}\mathcal{L_B}), \end{equation} where $\alpha$, $\alpha_{\mathcal{U}}$ and $\alpha_{\mathcal{B}}$ are hyper-parameters that control the relative weights of the corresponding terms. We will discuss the choice of each value in Sec.~\ref{sec:experiment_setting}.
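As an illustration of the relaxed balance penalty $\mathcal{L_B}$, the following is a hypothetical sketch where `F_u`/`F_l` hold the activations $F(\cdot; \mathbf{x}_n)$ at the predicted/groundtruth labels and `yhat_u`/`y_l` hold the labels themselves (the function and argument names are ours, not from any library):

```python
import numpy as np

def balance_penalty(F_u, yhat_u, F_l, y_l, num_classes):
    """L2 distance between normalized per-class cumulative activations of
    the unlabeled predictions and of the labeled groundtruth."""
    F_u, yhat_u = np.asarray(F_u, float), np.asarray(yhat_u)
    F_l, y_l = np.asarray(F_l, float), np.asarray(y_l)
    diffs = [F_u[yhat_u == y].sum() / len(F_u) - F_l[y_l == y].sum() / len(F_l)
             for y in range(num_classes)]
    return float(np.sqrt(np.sum(np.square(diffs))))
```

As discussed above, the label assignments inside the boolean masks are treated as fixed when differentiating, so only the activations carry gradients with respect to $\boldsymbol{\lambda}$.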
\begin{figure*} \centering \includegraphics{architecture-cmmva.pdf} \caption{Network architecture of Conv-MMVA with a conv-net in the recognition model and an unconv-net in the generative model (best viewed in color).} \label{architecture-cmmva} \end{figure*} To optimize the overall learning objective, we still use the doubly stochastic algorithm described in Sec.~\ref{sec:algorithm} to compute unbiased subgradient estimates for all of the parameters and perform updates. Specifically, given a mini-batch of data consisting of labeled data $\mathcal{M_L} = \{(\mathbf{x}_n, y_n)\}_{n=1}^{m_L}$ and unlabeled data $\mathcal{M_U} = \{\mathbf{x}_n\}_{n=1}^{m_U}$, we sequentially \begin{enumerate} \item predict $\hat y_n$ using the classifier for each $\mathbf{x}_n \in \mathcal{M_U}$; \item plug the predictions $\hat y_n$ of the unlabeled data and the groundtruth $y_n$ of the labeled data into the indicator functions in the label-balance regularization; \item take (sub-)gradients with respect to all parameters in the generative model, recognition model and classifier to optimize the final objective~(\ref{eqn:final_obj}); \item approximate the (sub-)gradients, whose expectations are intractable, using the techniques described in Sec.~\ref{sec:algorithm}, and update the parameters. \end{enumerate} Though the objective in semi-supervised learning is complex, our method works well in practice. \section{Experiments} We now present experimental results in both the supervised and semi-supervised learning settings. Our results on several benchmark datasets demonstrate that both mmDGMs and mmDCGMs are highly competitive in classification while retaining their generative ability, in comparison with various strong competitors.
\subsection{Experiment Settings} \label{sec:experiment_setting} Though mmDGMs and mmDCGMs are applicable to any DGMs that define a joint distribution of $(\mathbf{X}, \mathbf{Z})$ and $(\mathbf{X}, \mathbf{Z}, \mathbf{Y})$ respectively, we concentrate on the Variational Auto-encoder (VA)~\cite{kingma14iclr} and the Conditional VA~\cite{kingma14nips} in our experiments. We consider two types of recognition models: multilayer perceptrons (MLPs) and convolutional neural networks (CNNs). We denote our mmDGM with MLPs by {\bf MMVA}. To perform classification using the VA, which is unsupervised, we first learn the feature representations by the VA, and then build a linear SVM classifier on these features using the Pegasos stochastic subgradient algorithm~\cite{shai11pegasos}. This baseline is denoted by {\bf VA+Pegasos}. The corresponding models with CNNs are denoted by {\bf Conv-MMVA} and {\bf Conv-VA+Pegasos} respectively. We denote our mmDCGM with CNNs by {\bf Conv-MMCVA}. We implement all experiments based on Theano~\cite{Bastien-Theano-2012}.\footnote{Source code and more detailed settings can be found at https://github.com/thu-ml/mmdcgm-ssl.} \subsubsection{Datasets and Preprocessing} We evaluate our models on the widely adopted MNIST~\cite{Lecun:98}, SVHN~\cite{Netzer:11} and small NORB~\cite{lecun2004learning} datasets. MNIST consists of handwritten digits of 10 different classes (0 to 9). There are 50,000 training samples, 10,000 validation samples and 10,000 testing samples, and each one is of size $28 \times 28$. SVHN is a large dataset consisting of color images of size $32 \times 32$. The task is to recognize the center digits in natural scene images. We follow~\cite{Sermanet:12,goodfellow:13} to split the dataset into 598,388 training data, 6,000 validation data and 26,032 testing data. The small NORB dataset consists of gray-scale images distributed across 5 general classes: animal, human, airplane, truck and car.
Both the training set and testing set of NORB contain 24,300 samples with different lighting conditions and azimuths. We down-sample the images to size $32 \times 32$ as in~\cite{maaloe16} and split 1,000 samples from the training set as validation data if required. For a fair comparison in supervised learning on SVHN, we perform Local Contrast Normalization (LCN) in the Conv-MMVA experiment following~\cite{Sermanet:12,goodfellow:13} and set the distribution of $\mathbf{x}$ given $\mathbf{z}$ as Gaussian. In other cases, we simply normalize the data by a factor of 256 and choose Bernoulli as the data distribution. \subsubsection{Supervised Learning} In mmDGMs, the recognition network and the classifier share layers in computation. The mean and variance of the latent variable $\mathbf{z}$ are transformed from the last layer of the recognition model through an affine transformation. Note that we could use not only the expectation of $\mathbf{z}$ but also the activation of any layer in the recognition model as features. The only theoretical difference is where we add the hinge loss regularization to the gradient and back-propagate it to previous layers. In all of the experiments, the mean of $\mathbf{z}$ has the same nonlinearity but typically a much lower dimension than the activation of the last layer in the recognition model, and hence often leads to worse performance. We use different features in MMVA and Conv-MMVA, as explained below. We use AdaM~\cite{kingma:15} to optimize the parameters in all of the models. Although it is an adaptive gradient-based optimization method, we decay the global learning rate by a factor after a sufficient number of epochs to ensure stable convergence. In MMVA, we follow the settings in~\cite{kingma14nips} to compare both the generative and discriminative capacity of VA and MMVA.
Both the recognition and generative models employ a two-layer MLP with 500 hidden units in each layer, and the dimension of the latent variables is 50. We choose $C = 15$ as the default in MMVA. We concatenate the activations of the 2 layers as the features used in the supervised tasks. We illustrate the network architecture of MMVA in Appendix A. In Conv-MMVA, we use standard CNNs~\cite{Lecun:98} with convolution and max-pooling operations as the recognition model to obtain more competitive classification results. For the generative model, we use unconvnets~\cite{Dosovitskiy:2014} with a structure ``symmetric'' to that of the recognition model, to reconstruct the input images approximately. More specifically, the top-down generative model has the same structure as the bottom-up recognition model but replaces max-pooling with the unpooling operation~\cite{Dosovitskiy:2014}, applying unpooling, convolution and rectification in order. Typically, there are 5 or 6 convolutional layers in both the generative model and the recognition model, and the kernel size is either 5 or 3, depending on the data. The total number of parameters is comparable with previous work~\cite{goodfellow:13,Lin:14,Lee:15} and the split of the training sets is the same. For simplicity, we do not involve mlpconv layers~\cite{Lin:14,Lee:15} or contrast normalization layers in our recognition model, but they are compatible with our model. We set $C=10^3$ on MNIST and $C=10^4$ on SVHN as defaults. We use the activations of the last deterministic layer as the features. We illustrate the network architecture of Conv-MMVA with Gaussian hidden variables and Bernoulli visible variables in Fig.~\ref{architecture-cmmva}. \subsubsection{Semi-supervised Learning} The mmDCGM separates the classifier and the recognition model of the latent variables completely, which allows us to simply combine a state-of-the-art classifier and deep generative model together without competition.
We only consider convolutional neural networks here and adopt advanced techniques including global average pooling~\cite{lin2014network} and batch normalization~\cite{ioffe2015batch} to boost the performance of our Conv-MMCVA. The architecture of the max-margin classifier follows that of the discriminator in~\cite{springenberg16}, and the generative model is similar to that of the Conv-MMVA but concatenates the feature maps with additional label maps in one-hot encoding format at each layer, as in~\cite{radford2015unsupervised}. As in Conv-MMVA, the depth of each convolutional network is 5 or 6. We set $\alpha = 0.1$ following the conditional VAE~\cite{kingma14nips}. We optimize $\alpha_{\mathcal{U}}$ and $\alpha_{\mathcal{B}}$ with a search grid $\{..., 0.01, 0.03, 0.1, 0.3, 1, 3 ...\}$ in terms of the validation classification error of a shallow S3VM on MNIST given 100 labels. The best values are $\alpha_{\mathcal{U}} = 3$ and $\alpha_{\mathcal{B}} = 0.001$, and we fix them in our Conv-MMCVA across all of the datasets. Other hyper-parameters, including the annealing strategy and batch size, are chosen according to the validation generative loss. Once the hyper-parameters are fixed, we run our model 10 times with different random splits of the labeled and unlabeled data, and we report the mean and standard deviation of the error rates.
\begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Error rates (\%) on the MNIST dataset given full labeled data.} \centering \begin{tabular}{lc} \hline Model & Error Rate \\ \hline {\it VA+Pegasos} & 1.04 \\ {\it VA+Class-conditionVA}~\cite{kingma14nips} & 0.96 \\ {\it MMVA} & 0.90 \\ \hline {\it Conv-VA+Pegasos} & 1.35\\ {\it Conv-MMVA} & 0.45\\ \hline {\it Stochastic Pooling}~\cite{Zeiler:13} & 0.47\\ {\it Network in Network}~\cite{Lin:14} & 0.47\\ {\it Maxout Network}~\cite{goodfellow:13} & 0.45\\ {\it DSN}~\cite{Lee:15} & 0.39\\ \hline \label{mnist-basic-table} \end{tabular} \end{table} \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Error rates (\%) on the SVHN dataset given full labeled data.} \centering \begin{tabular}{lc} \hline Model & Error Rate \\ \hline {\it Conv-VA+Pegasos} & 25.3 \\ {\it Conv-MMVA} & 3.09\\ \hline {\it CNN}~\cite{Sermanet:12} & 4.9 \\ {\it Stochastic Pooling}~\cite{Zeiler:13} & 2.80\\ {\it Maxout Network}~\cite{goodfellow:13} & 2.47\\ {\it Network in Network}~\cite{Lin:14} & 2.35\\ {\it DSN}~\cite{Lee:15} & 1.92\\ \hline \label{svhn-basic-table} \end{tabular} \end{table} \subsection{Results with Supervised Learning} \label{sec:results} We first present the results in the supervised learning setting. Specifically, we evaluate the predictive and generative performance of our MMVA and Conv-MMVA on the MNIST and SVHN datasets in various tasks, including classification, sample generation, and missing data imputation. \begin{figure*}[!t] \centering \subfigure[VA ]{\includegraphics[width=0.49\columnwidth]{va_sample.png}} \subfigure[MMVA]{\includegraphics[width=0.49\columnwidth]{mmva_sample.png}} \subfigure[Conv-VA]{\includegraphics[width=0.49\columnwidth]{cva_sample.png}} \subfigure[Conv-MMVA]{\includegraphics[width=0.49\columnwidth]{cmmva_sample.png}} \caption{Generation on MNIST. (a-b): images randomly generated by VA and MMVA respectively; (c-d): images randomly generated by Conv-VA and Conv-MMVA respectively. 
Our mmDGMs retain a similar ability to generate images as the baselines.} \label{va_sample} \end{figure*} \begin{figure*}[!t] \centering \subfigure[Training data]{\includegraphics[width=0.49\columnwidth]{train_data_svhn.png}} \subfigure[Conv-VA]{\includegraphics[width=0.49\columnwidth]{mean-sample_svhn_va.png}} \subfigure[Conv-MMVA ($C=10^3$)]{\includegraphics[width=0.49\columnwidth]{mean-sample_svhn_mmva_1e3.png}} \subfigure[Conv-MMVA ($C=10^4$)]{\includegraphics[width=0.49\columnwidth]{mean-sample_svhn_mmva_1e4.png}} \caption{Generation on SVHN. (a): training data preprocessed by LCN; (b): samples randomly generated by Conv-VA; (c-d): samples randomly generated by Conv-MMVA when $C=10^3$ and $C=10^4$ respectively.} \label{svhn_sample} \end{figure*} \subsubsection{Predictive Performance} \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Effects of $C$ on the MNIST dataset in Conv-MMVA.} \centering \begin{tabular}{lcc} \hline C & Error Rate (\%) & Lower Bound \\ \hline 0 & 1.35 & -93.17 \\ 1 & 1.86 & -95.86 \\ $10$ & 0.88 & -95.90 \\ $10^{2}$ & 0.54 & -96.35 \\ $ 10^{3}$ & 0.45 & -99.62 \\ $ 10^{4}$ & 0.43 & -112.12 \\ \hline \label{effect-c} \end{tabular} \end{table} We test both MMVA and Conv-MMVA on the MNIST dataset. In the MLP case, the first three rows in Table~\ref{mnist-basic-table} compare {VA+Pegasos}, {VA+Class-conditionVA} and {MMVA}, where {VA+Class-conditionVA} refers to the best fully supervised model in~\cite{kingma14nips}. Our model outperforms the baselines significantly. We further use the t-SNE algorithm~\cite{Maaten:08} to embed the features learned by VA and MMVA into a 2D plane, which again demonstrates the stronger discriminative ability of MMVA (see Appendix B for details). In the CNN case, Table~\ref{effect-c} shows the effect of $C$ on the classification error rate and the variational lower bound. Typically, as $C$ gets larger, Conv-MMVA learns more discriminative features but gives a worse estimate of the data likelihood.
However, if $C$ is too small, the supervision is not enough to yield predictive features. Nevertheless, $C = 10^3$ achieves quite a good trade-off between classification performance and generative performance. In this setting, the classification performance of our Conv-MMVA model is comparable to that of state-of-the-art fully discriminative networks with comparable architectures and numbers of parameters, as shown in the last four rows of Table~\ref{mnist-basic-table}. We focus on Conv-MMVA on the SVHN dataset as it is more challenging. Table~\ref{svhn-basic-table} shows the predictive performance on SVHN. In this harder problem, we observe a larger improvement by Conv-MMVA as compared to Conv-VA+Pegasos, suggesting that DGMs benefit greatly from max-margin learning on image classification. We also compare Conv-MMVA with state-of-the-art results. To the best of our knowledge, there are no competitive generative models for classifying digits on the SVHN dataset with full labels. \subsubsection{Generative Performance} We investigate the generative capability of MMVA and Conv-MMVA on generating samples. Fig.~\ref{va_sample} and Fig.~\ref{svhn_sample} illustrate the images randomly sampled from the VA and MMVA models on MNIST and SVHN respectively, where we output the expectation of the value at each pixel to get a smooth visualization. Fig.~\ref{svhn_sample} demonstrates the benefits of the joint training of DGMs and max-margin classifiers. Though Conv-VA gives a tighter lower bound of the data likelihood and reconstructs data more elaborately, it fails to learn the pattern of digits in a complex scenario and cannot generate meaningful images. In this scenario, the hinge loss regularization on the recognition model is useful for generating the main objects to be classified in the images.
\begin{table}[!t] \renewcommand{\arraystretch}{1.2} \caption{MSE on MNIST data with missing values in the testing procedure.} \centering \begin{tabular}{lcccc} \hline Noise Type & VA & MMVA & Conv-VA & Conv-MMVA \\ \hline Rand-Drop (0.2) & \textbf{0.0109} & 0.0110 & 0.0111 & 0.0147 \\ Rand-Drop (0.4) & \textbf{0.0127} & \textbf{0.0127} & \textbf{0.0127} & 0.0161 \\ Rand-Drop (0.6) & 0.0168 & \textbf{0.0165} & 0.0175 & 0.0203 \\ Rand-Drop (0.8) & 0.0379 & \textbf{0.0358} & 0.0453 & 0.0449 \\ \hline Rect (6 $\times$ 6) & 0.0637 & 0.0645 & \textbf{0.0585} & 0.0597 \\ Rect (8 $\times$ 8) & 0.0850 & 0.0841 & 0.0754 & \textbf{0.0724} \\ Rect (10 $\times$ 10) & 0.1100 & 0.1079 & 0.0978 & \textbf{0.0884} \\ Rect (12 $\times$ 12) & 0.1450 & 0.1342 & 0.1299 & \textbf{0.1090} \\ \hline \label{deniose-mse} \end{tabular} \end{table} \subsubsection{Missing Data Imputation and Classification} \begin{figure*}[!t] \centering \subfigure[Rand-Drop (0.6)]{\includegraphics[width=.95\columnwidth]{denoise-random.pdf}} \subfigure[Rect ($12 \times 12$)]{\includegraphics[width=.95\columnwidth]{denoise-rectangle.pdf}} \caption{Imputation results of MMVA in two noising conditions: column 1 shows the true data; column 2 shows the perturbed data; and the remaining columns show the imputations for 20 iterations.} \label{denoise} \end{figure*} We further test MMVA and Conv-MMVA on the task of missing data imputation. For MNIST, we consider two types of missing values~\cite{little87missing}: (1) {\bf Rand-Drop}: each pixel is missing randomly with a pre-fixed probability; and (2) {\bf Rect}: a rectangle located at the center of the image is missing. Given the perturbed images, we uniformly initialize the missing values between 0 and 1, and then iteratively do the following steps: (1) using the recognition model to sample the hidden variables; (2) predicting the missing values to generate images; and (3) using the refined images as the input of the next round. 
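The iterative imputation procedure above can be sketched as follows (a minimal sketch with hypothetical `encode`/`decode` functions standing in for the recognition and generative networks; the names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def impute(x, observed, encode, decode, num_iters=100):
    """Iteratively fill in missing pixels; `observed` is a boolean mask."""
    x = np.where(observed, x, rng.uniform(size=x.shape))  # uniform init in [0, 1]
    for _ in range(num_iters):
        z = encode(x)                     # (1) sample latents via the recognition model
        x_gen = decode(z)                 # (2) predict pixel values via the generative model
        x = np.where(observed, x, x_gen)  # (3) keep observed pixels, refine missing ones
    return x
```

Only the missing entries are updated at each round, so the observed pixels anchor the latent inference while the generated image gradually fills in the gaps.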
For SVHN, we follow the same procedure as on MNIST but initialize the missing values with Gaussian random variables, as the input distribution changes. Intuitively, generative models with CNNs could be more powerful at learning patterns and high-level structures, while generative models with MLPs learn more to reconstruct the pixels in detail. This is consistent with the MSE results shown in Table~\ref{deniose-mse}: Conv-VA and Conv-MMVA outperform VA and MMVA with a missing rectangle, while VA and MMVA outperform Conv-VA and Conv-MMVA with random missing values. Compared with the baselines, mmDGMs also make more accurate completions when large patches are missing. All of the models infer missing values for 100 iterations. We visualize the inference procedure of MMVA in Fig.~\ref{denoise}. Considering both types of missing values, MMVA can infer the unknown values and refine the images in several iterations, even with a large ratio of missing pixels. More visualization results on MNIST and SVHN are presented in Appendix C. \begin{table}[!t] \renewcommand{\arraystretch}{1.2} \caption{Error rates (\%) with missing values on MNIST.} \centering \begin{tabular}{lccc} \hline Noise Level & CNN & Conv-VA & Conv-MMVA\\ \hline Rect (6 $\times$ 6) & 7.5 & 2.5 & \textbf{1.9} \\ Rect (8 $\times$ 8) & 18.8 & 4.2 & \textbf{3.7} \\ Rect (10 $\times$ 10) & 30.3 & 8.4 & \textbf{7.7} \\ Rect (12 $\times$ 12) & 47.2 & 18.3 & \textbf{15.9} \\ \hline \label{errorwithmissing} \end{tabular} \end{table} We further present classification results with missing values on MNIST in Table~\ref{errorwithmissing}. The CNN makes predictions on the incomplete data directly. Conv-VA and Conv-MMVA first infer the missing data for 100 iterations and then make predictions on the refined data. In this scenario, Conv-MMVA outperforms both Conv-VA and the CNN, demonstrating the advantage of mmDGMs in having both strong discriminative and generative capabilities.
Overall, mmDGMs have a comparable capability of inferring missing values and prefer to learn high-level patterns instead of local details. \subsection{Results with Semi-supervised Learning} \label{sec:ssl-results} We now present the predictive and generative results on the MNIST, SVHN and small NORB datasets given partially labeled data. \subsubsection{Predictive Performance} \begin{table}[!t] \renewcommand{\arraystretch}{1.2} \caption{Error rates (\%) on (partially) labeled MNIST dataset.} \centering \begin{tabular}{lccc} \hline Algorithm & $n=100$ & $n=1000$ & ALL\\ \hline {\it M1+M2}~\cite{kingma14nips} & 3.33 ($\pm0.14$) & 2.4 ($\pm0.02$) & 0.96\\ {\it VAT}~\cite{miyato2015distributional} & 2.33 & 1.36 & 0.64 \\ {\it Ladder}~\cite{rasmus15} & 1.06 ($\pm 0.37$) & \textbf{0.84} ($\pm 0.08$) & 0.57 \\ {\it CatGAN}~\cite{springenberg16} & 1.91 ($\pm 0.10$) & 1.73 ($\pm 0.18$)& 0.91 \\ {\it ADGM}~\cite{maaloe16} & 0.96 ($\pm 0.02$) & - & - \\ {\it SDGM}~\cite{maaloe16} & 1.32 ($\pm 0.07$) & - & - \\ \hline {\it Conv-CatGAN}~\cite{springenberg16} & 1.39 ($\pm 0.28$) & -& \textbf{0.48}\\ {\it Improved-GAN}~\cite{salimans2016improved} & 0.96 ($\pm 0.07$) & - & - \\ {\it Conv-Ladder}~\cite{rasmus15} & \textbf{0.89} ($\pm 0.50$)& - & -\\ \hline {\it Conv-MMCVA} & 1.24 ($\pm0.54$) & \textbf{0.54} ($\pm0.04$) & \textbf{0.31}\\ \hline \label{ssl-mnist-table} \end{tabular} \end{table} \begin{figure}[!t] \centering \includegraphics[width=.8\columnwidth]{subset_plot.png} \caption{Effect of the size of the labeled set on MNIST. Within each curve, the smaller labeled set is a subset of the larger one, to reduce variance. Generally, the error rates decrease as the number of labels increases, and the peaks may be caused by the poor quality of newly added labeled data.
Nevertheless, 800 labels are sufficient to achieve an error rate that is comparable to the supervised learning results of other DGMs.} \label{fig:effect_labels} \end{figure} \begin{figure*}[!t] \centering \subfigure[MNIST data]{\includegraphics[width=0.49\columnwidth]{ssl-mnist-data.png}} \subfigure[MNIST samples]{\includegraphics[width=0.49\columnwidth]{ssl-mnist-sample.png}} \subfigure[SVHN data]{\includegraphics[width=0.49\columnwidth]{ssl-svhn-data.png}} \subfigure[SVHN samples]{\includegraphics[width=0.49\columnwidth]{ssl-svhn-sample.png}} \caption{Class-conditional generation on MNIST (100 labels) and SVHN (1000 labels) datasets. (a) and (c) present 100 labeled training data sorted by class on the MNIST and SVHN datasets respectively. (b) and (d) show samples on the corresponding datasets, where each row shares the same class $y$ and each column shares the same latent variables $\mathbf{z}$.} \label{fig:disentangle} \end{figure*} We compare our Conv-MMCVA with a large body of previous methods on the MNIST dataset under different settings in Table~\ref{ssl-mnist-table}. Our method is competitive with the state-of-the-art results given 100 labels. As the number of labels increases, the max-margin principle significantly boosts the performance of Conv-MMCVA relative to the other models, including the Ladder Network~\cite{rasmus15}. Indeed, given 1,000 labels, Conv-MMCVA not only beats existing methods in the same setting, but is also comparable to the best supervised results of DGMs. The supervised learning results of Conv-MMCVA again confirm that by leveraging the max-margin principle, DGMs can achieve the same discriminative ability as state-of-the-art CNNs with comparable architectures. We analyze the effect of the number of labels on Conv-MMCVA in Fig.~\ref{fig:effect_labels}, where the four curves share the same settings but use different random seeds to split the data and initialize the networks.
\begin{table}[!t] \renewcommand{\arraystretch}{1.2} \caption{Error rates (\%) on the SVHN and NORB datasets given 1000 labels.} \centering \begin{tabular}{lcc} \hline Algorithm & SVHN $n=1000$ & NORB $n=1000$\\ \hline {\it M1+M2}~\cite{kingma14nips} & 36.02 ($\pm0.10$) & 18.79 ($\pm0.05$)\\ {\it VAT}~\cite{miyato2015distributional} & 24.63 & \textbf{9.88} \\ {\it ADGM}~\cite{maaloe16} & 22.86 & 10.06 ($\pm0.05$) \\ {\it SDGM}~\cite{maaloe16} & 16.61 ($\pm0.24$) & \textbf{9.40} ($\pm0.04$) \\ \hline {\it Improved-GAN}~\cite{salimans2016improved} & \textbf{8.11} ($\pm1.3$) & - \\ {\it Ensemble-10-GANs}~\cite{salimans2016improved} & \textbf{5.88} ($\pm1.0$) & - \\ \hline {\it Conv-MMCVA} & \textbf{4.95} ($\pm0.18$) & \textbf{6.11} ($\pm0.58$) \\ \hline \label{ssl-real-table} \end{tabular} \end{table} Table~\ref{ssl-real-table} shows the classification results on the more challenging SVHN and NORB datasets. Following previous methods~\cite{kingma14nips,miyato2015distributional,maaloe16}, we use 1,000 labels on both datasets. We can see that our methods outperform the previous state-of-the-art substantially. {\it Ensemble-10-GANs} refers to an ensemble of 10 Improved-GANs~\cite{salimans2016improved} with 9-layer classifiers, while we employ a single model with a shallower 6-layer classifier. Note that it is easy to further improve our model by using more advanced networks, e.g. ResNet~\cite{he2015deep}, thanks to the separate classifier and generator architectures. In this paper, we focus on comparable architectures for fairness. We further analyze the effect of the regularization terms to investigate the possible reasons for the outstanding performance. If we omit the hat loss regularization, Conv-MMCVA suffers from overfitting and only achieves a 6.4\% error rate on the MNIST dataset given 100 labels. The underlying reason is that we approximate the full posterior inference by a greedy point estimation. 
If the prediction of the classifier is wrong, the generative model tends to interpret the unlabeled data with the incorrect label instead of forcing the classifier to find the true label, as in the previous conditional DGM~\cite{kingma14nips}. However, the hat loss provides an effective way for the classifier to achieve a sufficiently good classification result, which can be fine-tuned according to the generative loss. In fact, trained to optimize the max-margin losses for both the labeled and unlabeled data, the classifier itself without the DGM can achieve a 2.1\% error rate on MNIST given 100 labels. These results demonstrate the effectiveness of our proposed max-margin loss for the unlabeled data. We further reduce the error rate by 0.2\% in this setting by using the label-balance regularization. Besides the excellent performance, our Conv-MMCVA provides a potential way to apply class-conditional DGMs to large-scale datasets with many more categories due to the efficient inference. \subsubsection{Generative Performance} \begin{figure}[!t] \centering \subfigure[NORB data]{\includegraphics[width=0.49\columnwidth]{ssl-norb-data.png}} \subfigure[NORB samples]{\includegraphics[width=0.49\columnwidth]{ssl-norb-sample.png}} \caption{Class-conditional generation on the NORB dataset (1000 labels). (a) and (b) show the labeled training data and generated samples, respectively.} \label{fig:ssl-norb-sample} \end{figure} We demonstrate that our Conv-MMCVA has the ability to disentangle classes and styles given a small amount of labels on the MNIST, SVHN and NORB datasets, as shown in Fig.~\ref{fig:disentangle} and Fig.~\ref{fig:ssl-norb-sample}. The images are generated by conditioning on a label $y$ and a style vector $\mathbf{z}$. On the MNIST and SVHN datasets, Conv-MMCVA is able to generate high-quality images, and $\mathbf{z}$ can capture the intensities, scales and colors of the images. 
Note that previous generation on SVHN in the semi-supervised learning setting is either unconditional~\cite{salimans2016improved} or based on some preprocessed data~\cite{kingma14nips}. Our samples are a little blurry on the NORB dataset, which contains elaborate images of 3D toys with different lighting conditions and points of view. Nevertheless, Conv-MMCVA can still separate these physical semantics from the general categories beyond digits. To the best of our knowledge, there are no competitive generative models that generate NORB data class-conditionally given partially labeled data. \section{Conclusions} In this paper, we propose max-margin deep generative models (mmDGMs) and the class-conditional variants (mmDCGMs), which conjoin the predictive power of the max-margin principle and the generative ability of deep generative models. We develop a doubly stochastic subgradient algorithm to learn all parameters jointly and consider two types of recognition models with MLPs and CNNs respectively. We evaluate our mmDGMs and mmDCGMs in supervised learning and semi-supervised learning settings, respectively. Given partially labeled data, we approximate the full posterior of the labels by a delta distribution for efficiency and propose additional max-margin and label-balance losses for unlabeled data for effectiveness. We present extensive results to demonstrate that our methods can significantly improve the prediction performance of deep generative models, while retaining the strong generative ability to generate input samples as well as to complete missing values. In fact, by employing CNNs in our mmDGMs and mmDCGMs, we achieve low error rates on several datasets including MNIST, SVHN and NORB, which are competitive with the best fully discriminative networks in supervised learning and improve the previous state-of-the-art semi-supervised results significantly. 
\ifCLASSOPTIONcompsoc \section*{Acknowledgments} \else \section*{Acknowledgment} \fi The work was supported by the National Basic Research Program (973 Program) of China (No. 2013CB329403), National NSF of China (Nos. 61620106010, 61322308, 61332007), the Youth Top-notch Talent Support Program, and Tsinghua TNList Lab Big Data Initiative. \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{Introduction} The recent LHC data has revealed that the Higgs boson is ``light'' with the mass of ${\cal O}(M_{W})$ \cite{ATLAS, CMS}. This implies that the Higgs self-coupling $\lambda$ is of ${\cal O}(g^{2}) \ (g: \ {\rm gauge \ coupling})$ and therefore governed by the gauge principle. Among various scenarios of physics beyond the standard model (BSM), the minimal supersymmetric standard model (MSSM) and gauge-Higgs unification (GHU) (formulated on multi-dimensional extra space) have such a desirable property and predict a light Higgs boson with definite mass ratios of the Higgs boson to weak gauge bosons; i.e. $M_{H} \leq M_{Z} \cos 2\beta$ ($\beta$: an angle to denote the relative weight of the vacuum expectation values of the two Higgs doublets) for MSSM and $M_{H} = 2M_{W}$ for the 6-dimensional (6D) SU(3) GHU model with one Higgs doublet at the classical level \cite{SSSW, LMM}. Since these definite mass ratios are inevitable consequences of symmetries, i.e., SUSY and (higher-dimensional) local gauge symmetry, respectively, we naturally expect that even under quantum corrections, deviations from the relations mentioned above are UV-finite and definitely predictable. In fact, in MSSM, it is well known that, under the quantum correction by the SUSY multiplet of the top quark $(t, \ \tilde{t})$, $M_{H}$ deviates from $M_{Z} \cos 2\beta $ due to the SUSY breaking, $m_{\tilde{t}}^{2} \gg m_{t}^{2}$, and the deviation is UV-finite and calculable. In this paper, we focus on another scenario of BSM, i.e., GHU, where the Higgs boson originates from the extra space component of a higher dimensional gauge field \cite{1979Manton, 1983Hosotani} and therefore the quantum correction to the Higgs mass-squared is finite due to the higher-dimensional local gauge symmetry, thus providing an alternative solution to the well-known gauge hierarchy problem \cite{1998HIL}. 
There have also been studies on the finiteness of the Higgs boson mass in the context of gauge-Higgs unification in various models \cite{ABQ, finiteness}. Interestingly, similarly to the case of MSSM, even under the quantum corrections, the Higgs mass itself and the deviation from the relation mentioned above, $M_{H} = 2M_{W}$, have been demonstrated to be both UV-finite in the 6D SU(3) GHU model with one Higgs doublet \cite{LMM}. In this case, although the local gauge symmetry still exists even after the compactification of the extra space, the compactification to the non-simply connected extra space makes the Aharonov--Bohm (AB) phases along two different cycles of the torus physically meaningful, and such nonlocal effects contribute to the finite deviation from the tree-level relations. Such predictability of the Higgs mass is a desirable feature of MSSM and GHU. However, the differences of the tree-level predictions of the Higgs mass from the observed value are too large in both scenarios to be explained by quantum corrections: \be \label{-1.1} 125 - M_{Z} \simeq 2M_{W} - 125 \simeq 35 \ {\rm GeV}, \ee where $M_{Z}$ is the maximum value of the tree level prediction of MSSM, while $2M_{W}$ is the prediction of the 6D SU(3) GHU model with one Higgs doublet. Thus, to realize the observed Higgs mass, i.e., $M_{H} = 125$ GeV, a considerably large SUSY breaking $M_{SUSY}$ or a considerably small bulk mass of matter field \cite{LMM} is required, respectively. Hence, in this article, we address the question of whether there exist GHU models that provide more realistic tree level predictions of the Higgs mass. 
Namely, we investigate in the scheme of 6D GHU whether the tree level prediction of the Higgs mass becomes closer to or coincides with the observed value of $125$ GeV, by suitable choices of the gauge group and the compactification, especially the manner of orbifolding, which determines how many Higgs doublets of SU(2)$_L$ remain in the low energy effective theory as the KK zero modes. In the models of GHU, the gauge group of the standard model (SM) is forced to be enlarged, since the Higgs boson inevitably belongs to an adjoint representation (``repr.'' for short) of the gauge group in GHU, while in the SM, the Higgs boson belongs to the fundamental repr. of SU(2)$_L$. Thus, the minimal unified electro-weak model incorporating the SM is the SU(3) GHU model \cite{KLY, SSS}. In such models with the simple gauge group, the weak mixing angle, i.e., the mass ratio of weak gauge bosons, can also be predicted, in addition to the mass ratio of the Higgs to weak gauge bosons. Unfortunately, the predicted weak mixing angle in the minimal SU(3) model is far from the observed value: $\sin^{2}\theta_{W} = \frac{3}{4}$. Interestingly, however, it has been pointed out that a slightly larger gauge group $G_{2}$ leads to a successful prediction of the weak mixing angle: $\sin^{2}\theta_{W} = \frac{1}{4}$ \cite{1979Manton, CGM}. Thus, basically, we are in a position to predict mutual relations among all massive bosonic particles in the SM. The purpose of this article is to explore the possibilities of realizing realistic predictions for the Higgs mass and the weak mixing angle in the framework of the 6D GHU model with one or two Higgs doublets. \section{Weak Mixing Angle and Representations under SU(3)} We first discuss the prediction of the weak mixing angle. We can demonstrate that the predicted weak mixing angle can be easily calculated without explicit calculations of the weak gauge boson masses $M_{W, Z}$, once we know the gauge group. 
More precisely, we will argue that, by knowing which repr. of the minimal group SU(3) the Higgs doublet belongs to, the weak mixing angle is immediately fixed. One reasonable assumption here is that the gauge group of the GHU model includes SU(3) as its subgroup and the electro-weak gauge symmetry of the SM, SU(2)$_L \times$ U(1)$_Y$, is embedded into the simple group SU(3). A key formula in this argument is \be \label{0.1} \sin^{2}\theta_{W} = \frac{{\rm Tr} \ I_{3}^{2}}{{\rm Tr} \ Q^{2}}, \ee where ${\rm Tr} \ I_{3}^{2}$ and ${\rm Tr} \ Q^{2}$ are the summations of the squared eigenvalues of the operators $I_{3}$ and $Q$ (the charge operator in the unit of $e$) for an arbitrary repr. of SU(3). The proof of this useful relation (\ref{0.1}) is as follows. The essential ingredient is the orthogonality of the generators associated with the photon and $Z$ boson, ${\rm Tr} \{ Q(I_{3} - \sin^{2}\theta_{W}Q)\} = {\rm Tr}(QI_{3}) - \sin^{2}\theta_{W} {\rm Tr}Q^{2} = 0$, which holds generically for simple groups, since the gauge coupling is unique and the photon and $Z$ boson are two orthogonal states. We also note that ${\rm Tr}(QI_{3}) = {\rm Tr}\{ (I_{3} + \frac{Y}{2})I_{3}\} = {\rm Tr}I_{3}^{2}$, where $Y$ denotes the generator of the weak hypercharge and the orthogonality ${\rm Tr}(I_{3}Y) = 0$ has been used. We thus obtain ${\rm Tr}I_{3}^{2} - \sin^{2}\theta_{W}{\rm Tr}Q^{2} = 0$, leading to $\sin^{2}\theta_{W} = \frac{{\rm Tr} \ I_{3}^{2}}{{\rm Tr} \ Q^{2}}$. As the repr. of SU(3), we choose the simplest triplet. Since the triplet is decomposed under the subgroup SU(2)$_L$ as $3 \to 2 + 1$, the upper two components of the triplet can be regarded as the SU(2)$_L$ doublet, and therefore the electric charges of these upper two components differ by one unit. We also note that ${\rm Tr} \ Q = 0$, since the charge operator should be one of the generators of SU(3). 
Then, the charge assignment for the components of the triplet can be written generally in the form of \be \label{0.2} \begin{pmatrix} q \\ q-1 \\ 1-2q \end{pmatrix}, \ee with a parameter $q$. Using Eq.\,(\ref{0.1}), the weak mixing angle is then written as \be \label{0.3} \sin^{2}\theta_{W} = \frac{(\frac{1}{2})^{2} + (-\frac{1}{2})^{2} + 0}{q^{2}+(q-1)^{2}+(1-2q)^{2}} = \frac{1}{4(3q^{2}-3q+1)}. \ee For instance, in the minimal SU(3) GHU model, the Higgs doublet inevitably belongs to the octet of SU(3). Since the octet is constructed by the product of the triplet and anti-triplet repr.s and the triplet is decomposed under the subgroup SU(2)$_L$ as $2 + 1$, the neutral component of the Higgs doublet [\,SU(2) doublet\,] comes from the product of the second component [\,SU(2) doublet\,] and the complex conjugate of the third component [\,SU(2) singlet\,] of the triplet. [\,We may choose the second component, not the first one, without any loss of generality by invoking the SU(2) symmetry\,]. Thus, the condition that the electric charge of the neutral Higgs boson vanishes is written as $q-1+ [-(1-2q)] = 3q-2 = 0$, leading to $q = \frac{2}{3}$. We thus get $\sin^{2}\theta_{W} = \frac{3}{4}$ from Eq.\,(\ref{0.3}) \cite{KLY, SSS}. When the adopted gauge group $G$ is larger than SU(3), its adjoint repr. generally contains various repr.s of the subgroup SU(3). For instance, in the case of $G = G_{2}$, the adjoint 14 repr. is decomposed under the subgroup SU(3) as $14 \to 8 + 3 + \bar{3}$. Thus, the Higgs doublet no longer has to belong to the octet, but may belong to other repr.s of SU(3). Namely, there appears a possibility of obtaining a realistic prediction of the weak mixing angle, $\sin^{2}\theta_{W} = \frac{1}{4}$. Note that (\ref{0.3}) implies that $\sin^{2}\theta_{W} = \frac{1}{4}$ is obtained if and only if $q = 1$ or 0. 
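The closed form (\ref{0.3}) is easy to verify. The following standalone Python sketch (our own cross-check, not part of the derivation) evaluates $\sin^{2}\theta_{W}$ both from the trace formula (\ref{0.1}), using the triplet charges (\ref{0.2}) and isospins $(\frac{1}{2}, -\frac{1}{2}, 0)$, and from the closed form, in exact rational arithmetic:

```python
from fractions import Fraction

def sin2_from_traces(q):
    """Eq. (0.1) on the triplet: charges (q, q-1, 1-2q) from Eq. (0.2),
    weak isospins (1/2, -1/2, 0) for the doublet + singlet decomposition."""
    charges = [q, q - 1, 1 - 2 * q]
    isospins = [Fraction(1, 2), Fraction(-1, 2), Fraction(0)]
    return sum(t * t for t in isospins) / sum(c * c for c in charges)

def sin2_closed_form(q):
    """Eq. (0.3): 1 / (4 (3 q^2 - 3 q + 1))."""
    return 1 / (4 * (3 * q ** 2 - 3 * q + 1))

for q in (Fraction(2, 3), Fraction(1), Fraction(0)):
    assert sin2_from_traces(q) == sin2_closed_form(q)

print(sin2_from_traces(Fraction(2, 3)))  # 3/4 (octet case, q = 2/3)
print(sin2_from_traces(Fraction(1)))     # 1/4 (q = 1)
print(sin2_from_traces(Fraction(0)))     # 1/4 (q = 0)
```

Both routes agree, giving $3/4$ for the octet value $q = \frac{2}{3}$ and $1/4$ for $q = 1$ or $q = 0$.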
We find that the first possibility $q = 1$ to obtain $\sin^{2}\theta_{W} = \frac{1}{4}$ is realized if the Higgs doublet belongs to the triplet of SU(3). In fact, in this case, the second component of (\ref{0.2}) itself should be the neutral component, and $q-1 = 0 \ \to \ q = 1$. This is why the gauge group $G_{2}$ leads to $\sin^{2}\theta_{W} = \frac{1}{4}$ \cite{1979Manton, CGM}. To be more precise, in the case where the triplet component among $8 + 3 + \bar{3}$ develops the VEV, we obtain the desirable result, while if the octet develops the VEV, we again obtain $\sin^{2}\theta_{W} = \frac{3}{4}$, just as in the minimal SU(3) model. We point out that another new possibility $q = 0$ is realized if the Higgs doublet belongs to the 2nd-rank symmetric tensor repr., i.e., sextet repr. 6 of SU(3). Since the sextet is constructed by the symmetric product of two triplet repr.s, the neutral component of the Higgs doublet comes from the product of the second and third components of the triplet. Thus, the parameter $q$ is fixed as $q-1+ (1-2q) = -q = 0 \ \to \ q = 0$. In the next section, we discuss the Sp(6) GHU model, whose adjoint repr. is known to incorporate the sextet of SU(3), as the prototype model for realizing this new possibility. By discussing the repr.s 3, 6, and 8 of SU(3), we have exhausted all repr.s up to the 2nd-rank tensor. The argument is easily generalized. Suppose that the Higgs doublet belongs to a generic tensor repr. $R^{i_{1}, \cdots, i_{m}}_{\bar{i}_{1}, \cdots, \bar{i}_{\bar{m}}}$, where the indices $i$ and $\bar{i}$ denote the components of 3 and $\bar{3}$ of SU(3), respectively. The indices $i_{1}, \cdots, i_{m}$ and $\bar{i}_{1}, \cdots, \bar{i}_{\bar{m}}$ are supposed to be totally symmetrized, respectively. 
Then, it is easy to see that $q = 1$ is realized for $|m-\bar{m}| = 1$, whose simplest case is the triplet ($m = 1, \ \bar{m} = 0$) and $q = 0$ is realized for $|m-\bar{m}| = 2$, whose simplest case is the sextet ($m = 2, \ \bar{m} = 0$). Now, we know what repr.s of SU(3) lead to the realistic weak mixing angle, and this knowledge is useful for choosing the gauge group: we can focus on the gauge group whose adjoint repr. contains such desirable repr.s of SU(3). Once we have a model with the realistic weak mixing angle, the next step will be to investigate whether the model predicts a realistic Higgs mass at the same time. In the following sections, we address this question by taking several concrete models with one or two Higgs doublets in 6D space-time. Since the weak mixing angle crucially depends on the choice of the gauge group, it may be natural to expect that the mass ratio of the Higgs boson to the weak gauge boson also depends on the choice of the gauge group. What we discuss in the following two sections are models with gauge groups of rank 3. We do not discuss the $G_{2}$ model with the simpler group of rank 2, since it has already been shown that the model predicts $M_{H} = M_{Z}$ at the classical level, although the prediction of the weak mixing angle is realistic: $\sin^{2} \theta_{W} = 1/4$ \cite{CGM}. \section{Sp(6) model} What we first discuss is the 6D Sp(6) GHU model with one Higgs doublet in its low energy effective theory. The reason for this choice is that the decomposition of the adjoint repr. of Sp(6) under its subgroup SU(3), \be \label{1.1} 21 \to 8 + 6 + \bar{6} + 1, \ee contains 6 (or $\bar{6}$) repr. Thus, this is a prototype model to realize the realistic weak mixing angle $\sin^{2}\theta_{W} = 1/4$ by assigning the Higgs doublet to the sextet repr. of SU(3), the new possibility proposed in the previous section. 
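The tensor-repr. rule stated above can be packaged into a single formula: with $d = m - \bar{m}$, setting the charge of the candidate neutral component to zero, $(q-1) + (m-1-\bar{m})(1-2q) = 0$, gives $q = \frac{2-d}{3-2d}$, which reproduces $q = \frac{2}{3}, 1, 0$ for the octet, triplet and sextet, respectively. The short Python sketch below is our own check of this reading of the rule (assuming $m \geq \bar{m}$; the function names are ours):

```python
from fractions import Fraction

def q_of_tensor(m, m_bar):
    """Charge parameter q fixed by the neutral-component condition for the
    totally symmetric tensor repr. with m upper and m_bar lower SU(3) indices
    (assuming m >= m_bar): the neutral doublet component carries one upper
    index '2' (charge q - 1), m - 1 upper indices '3' (charge 1 - 2q each)
    and m_bar lower indices '3' (charge -(1 - 2q) each), so
    (q - 1) + (m - 1 - m_bar)(1 - 2q) = 0."""
    d = m - m_bar
    return Fraction(2 - d, 3 - 2 * d)

def sin2_theta_w(q):
    """Eq. (0.3)."""
    return 1 / (4 * (3 * q ** 2 - 3 * q + 1))

assert q_of_tensor(1, 1) == Fraction(2, 3)  # octet (m = m_bar = 1): q = 2/3
assert q_of_tensor(1, 0) == 1               # triplet, |m - m_bar| = 1: q = 1
assert q_of_tensor(2, 0) == 0               # sextet,  |m - m_bar| = 2: q = 0
print(sin2_theta_w(q_of_tensor(1, 1)))  # 3/4
print(sin2_theta_w(q_of_tensor(2, 0)))  # 1/4
```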
Knowing that the prediction of the weak mixing angle is successful, the main purpose in this section is to study the ratio of the Higgs boson mass $M_{H}$ to the weak scale $M_{W}$, in addition to the concrete confirmation of the weak mixing angle. The ratio $M_{H}/M_{W}$ has been known to be 2 at the classical level in the 6D SU(3) GHU model with one Higgs doublet \cite{SSSW, LMM}. If the prediction of this ratio changes depending on the gauge group, as we have seen in the case of the weak mixing angle, there may be a chance of realizing a more realistic mass ratio $M_{H}/M_{W}$ in this Sp(6) model. \subsection{Gauge kinetic term} The 21 generators of Sp(6) are given for the fundamental repr. 6 [\,$3 + \bar{3}$ under SU(3)\,] as follows: \bea &&T^{a} = \frac{1}{2\sqrt{2}} \begin{pmatrix} \lambda^{a} & 0 \\ 0 & - (\lambda^{a})^{\ast} \end{pmatrix} \ \ ({\rm for} \ a = 1 - 8), \label{2.1a} \\ &&T^{9} = \frac{1}{2\sqrt{2}} \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix}, \label{2.1b} \\ &&T^{a} = \frac{1}{2\sqrt{2}} \begin{pmatrix} 0 & M_{j} \\ M_{j} & 0 \end{pmatrix} \ \ ({\rm for} \ a = 9+j, \ j =1 - 6), \label{2.1c} \\ &&T^{a} = \frac{1}{2\sqrt{2}} \begin{pmatrix} 0 & -iM_{j} \\ iM_{j} & 0 \end{pmatrix} \ \ ({\rm for} \ a = 15+j, \ j =1 - 6), \label{2.1d} \eea where $\lambda^{a}$ are Gell--Mann matrices and the six symmetric matrices $M_{j}$ are \bea && M_{1} = \begin{pmatrix} \sqrt{2} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \ M_{2} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & \sqrt{2} & 0 \\ 0 & 0 & 0 \end{pmatrix}, \ M_{3} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \sqrt{2} \end{pmatrix}, \nonumber \\ && M_{4} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \ M_{5} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \ M_{6} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}. 
\label{2.2} \eea These generators satisfy the orthonormality condition: \be \label{2.3} {\rm Tr} (T^{a}T^{b}) = \frac{1}{2} \delta_{ab}. \ee We introduce 21 gauge fields $A^{a}_{M}$: \be \label{2.4} A_{M} \equiv \sum_{a = 1}^{21} A^{a}_{M} T^{a} = (A_{\mu}, \ A_{z}, \ A_{\bar{z}}), \ee where \be \label{2.5} A_{z} \equiv \frac{A_{5} + i A_{6}}{\sqrt{2}}, \ \ A_{\bar{z}} \equiv (A_{z})^{\dagger} = \frac{A_{5} - i A_{6}}{\sqrt{2}} \ \ \left(z \equiv \frac{x^{5} - i x^{6}}{\sqrt{2}} \right). \ee By using the field strength tensor, \be \label{2.6} F_{MN} \equiv \partial_{M}A_{N} - \partial_{N}A_{M} - ig [A_{M}, A_{N}], \ee the gauge kinetic term is constructed as usual: \be \label{2.7} -\frac{1}{2} {\rm Tr} (F^{MN}F_{MN}) = -\frac{1}{2} {\rm Tr} (F^{\mu \nu}F_{\mu \nu}) + 2 {\rm Tr} (F^{\mu} \ _{z}F_{\mu \bar{z}}) + {\rm Tr} \{(F_{z \bar{z}})^{2}\}, \ee where \bea &&F_{\mu z} = \partial_{\mu}A_{z} - \partial_{z}A_{\mu} - ig [A_{\mu}, A_{z}], \ \ F_{\mu \bar{z}} = (F_{\mu z})^{\dagger} = \partial_{\mu}A_{\bar{z}} - \partial_{\bar{z}}A_{\mu} - ig [A_{\mu}, A_{\bar{z}}], \label{2.8a} \\ &&F_{z \bar{z}} = \partial_{z}A_{\bar{z}} - \partial_{\bar{z}}A_{z} - ig [A_{z}, A_{\bar{z}}], \label{2.8b} \eea with \be \label{2.9} \partial_{z} \equiv \frac{\partial_{5} + i \partial_{6}}{\sqrt{2}}, \ \ \partial_{\bar{z}} \equiv \frac{\partial_{5} - i \partial_{6}}{\sqrt{2}}. \ee \subsection{Orbifolding and KK zero modes} In order to have one Higgs doublet as a KK zero-mode, we adopt an orbifold $T^{2}/Z_{6}$ as our extra space, imposing the invariance of the theory under the $Z_{6}$ transformation: \be \label{2.10} z \ \to \ \omega z \ \ \ (\omega^{6} = 1). \ee The ``$Z_{6}$-parity'' assignment for the fundamental 6 repr. is given by the matrix \be \label{2.11} P = {\rm diag} (\omega, \omega, \omega^{4}, \bar{\omega}, \bar{\omega}, \bar{\omega}^{4}). 
\ee Then, the corresponding $Z_{6}$-parities for 4D gauge and scalar fields are fixed as \bea &&A_{\mu}(x^{\mu}, \omega z) = P A_{\mu}(x^{\mu}, z) P^{\dagger}, \label{2.12a} \\ &&A_{z}(x^{\mu}, \omega z) = \omega P A_{z}(x^{\mu}, z) P^{\dagger}, \label{2.12b} \\ &&A_{\bar{z}}(x^{\mu}, \omega z) = \bar{\omega} P A_{\bar{z}}(x^{\mu}, z) P^{\dagger}. \label{2.12c} \eea We thus realize that the KK zero-modes of 4D gauge bosons are those of SU(2)$_L \times$ U(1)$_Y$, together with an additional U(1) gauge boson, and the KK zero-modes of 4D scalars just correspond to our Higgs doublet, $H = (\phi^{+}, \phi^{0})^{t}$: \bea &&A_{\mu} = \begin{pmatrix} a_{\mu} & 0 \\ 0 & -a_{\mu}^{\ast} \end{pmatrix} + A^{9}_{\mu}T^{9}, \label{2.13a} \\ &&A_{z} = \begin{pmatrix} 0 & a_{z} \\ 0 & 0 \end{pmatrix}, \label{2.13b} \eea where the $3 \times 3$ matrices $a_{\mu}$ and $a_{z}$ are given as \bea &&a_{\mu} = \begin{pmatrix} \frac{\sqrt{6}}{6}Z_{\mu} & \frac{1}{2} W^{+}_{\mu} & 0 \\ \frac{1}{2} W^{-}_{\mu} & -\frac{\sqrt{2}}{4}\gamma_{\mu} - \frac{\sqrt{6}}{12}Z_{\mu} & 0 \\ 0 & 0 & \frac{\sqrt{2}}{4}\gamma_{\mu} - \frac{\sqrt{6}}{12}Z_{\mu} \end{pmatrix} \label{2.14a} \\ &&a_{z} = \frac{1}{2} \begin{pmatrix} 0 & 0 & \phi^{+} \\ 0 & 0 & \phi^{0} \\ \phi^{+} & \phi^{0} & 0 \end{pmatrix}. \label{2.14b} \eea $\gamma_{\mu}$ and $A^{9}_{\mu}$ stand for the photon and the extra U(1) gauge boson, respectively. Note that the photon field appears in (\ref{2.14a}) so that the coupled charge operator is written as \be \label{2.15} Q = {\rm diag} (0, \ -1, \ 1) = \frac{1}{2}\lambda^{3} + \frac{\sqrt{3}}{2}(-\lambda^{8}), \ee whose form is fixed by the condition that $\phi^{0}$ in (\ref{2.14b}) is electrically neutral. (\ref{2.15}) in turn implies that $\sin \theta_{W} = \frac{1}{2}$ and $\cos \theta_{W} = \frac{\sqrt{3}}{2}$, and therefore \be \label{2.17} \sin^{2} \theta_{W} = \frac{1}{4}, \ee as we expected. 
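As a quick cross-check of (\ref{2.15}) and (\ref{2.17}) (ours, not part of the derivation), note that the $\sqrt{3}$ factors in $Q = \frac{1}{2}\lambda^{3} - \frac{\sqrt{3}}{2}\lambda^{8}$ cancel against the normalization of $\lambda^{8}$, so the whole check can be done in exact arithmetic:

```python
from fractions import Fraction

# Diagonals of the Cartan generators of SU(3):
# lambda^3 = diag(1, -1, 0), sqrt(3) * lambda^8 = diag(1, 1, -2).
lam3 = [Fraction(1), Fraction(-1), Fraction(0)]
sqrt3_lam8 = [Fraction(1), Fraction(1), Fraction(-2)]

# Eq. (2.15): Q = lambda^3/2 + (sqrt(3)/2) * (-lambda^8); the sqrt(3)
# factors cancel, leaving the exact combination below.
Q = [l3 / 2 - s8 / 2 for l3, s8 in zip(lam3, sqrt3_lam8)]
assert Q == [0, -1, 1]  # the charge operator of Eq. (2.15)

# Trace formula (0.1) with I3 = lambda^3 / 2.
I3 = [l3 / 2 for l3 in lam3]
sin2 = sum(t * t for t in I3) / sum(q * q for q in Q)
print(sin2)  # 1/4
```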
\subsection{Mass ratios of weak gauge bosons and Higgs boson} In this subsection, we calculate the masses of the weak gauge bosons $W^{\pm}_{\mu}$ and $Z_{\mu}$, and the Higgs boson $h \ [\,\phi^{0} = \frac{v + h +iG^{0}}{\sqrt{2}}$ with $v$ being the vacuum expectation value (VEV) of the Higgs field\,]. For that purpose, we need $\kappa, \ \kappa'$, and $\lambda$ defined as the coefficients of the relevant part of the Lagrangian \cite{LMM}, \be \label{2.19} \kappa |\phi^{0}|^{2}W^{+ \mu}W^{-}_{\mu} + \kappa' |\phi^{0}|^{2}Z^{\mu}Z_{\mu} - \lambda |\phi^{0}|^{4}. \ee Note that, in GHU, the quadratic term of the Higgs field does not exist at the tree level and is induced at the quantum level with a UV-finite coefficient \cite{LMM}. Once the VEV $v$ is generated by the radiatively induced negative mass-squared term, the masses of $W^{\pm}_{\mu}, \ Z_{\mu}$, and $h$ can be written in terms of these coefficients as \be \label{2.20} M_{W}^{2} = \frac{\kappa}{2}v^{2}, \ M_{Z}^{2} = \kappa' v^{2}, \ M_{H}^{2} = 2\lambda v^{2}. \ee The coefficients $\kappa, \ \kappa'$, and $\lambda$ can be read off from the commutator squared in ${\rm Tr} (F^{\mu} \ _{z}F_{\mu \bar{z}})$ and ${\rm Tr} \{(F_{z \bar{z}})^{2}\}$: \bea && 2 {\rm Tr} (F^{\mu} \ _{z}F_{\mu \bar{z}}) \ \ \to \ \ -2g^{2}{\rm Tr} \{[A^{\mu}, A_{z}][A_{\mu}, A_{\bar{z}}]\} = g^{2}|\phi^{0}|^{2} \left( \frac{1}{4}W^{+\mu}W^{-}_{\mu} + \frac{1}{6}Z^{\mu}Z_{\mu} \right), \label{2.22a} \\ && {\rm Tr} \{(F_{z \bar{z}})^{2}\} \ \ \to \ \ -g^{2}{\rm Tr} \{ [A_{z}, A_{\bar{z}}]^{2} \} = - \frac{g^{2}}{4}|\phi^{0}|^{4}. \label{2.22b} \eea We find \be \label{2.23} \kappa = \frac{g^{2}}{4}, \ \kappa' = \frac{g^{2}}{6}, \ \lambda = \frac{g^{2}}{4}, \ee which in turn mean, from (\ref{2.20}), \be \label{2.24} M_{W}^{2} = \frac{g^{2}}{8}v^{2}, \ M_{Z}^{2} = \frac{g^{2}}{6} v^{2}, \ M_{H}^{2} = \frac{g^{2}}{2}v^{2}. \ee We thus conclude that \be \label{2.25} M_{W} = \frac{\sqrt{3}}{2}M_{Z}, \ \ M_{H} = 2 M_{W}. 
\ee The former relation is consistent with $\sin^{2} \theta_{W} = 1/4 \ \left( \rho = \frac{M_{W}^{2}}{M_{Z}^{2}\cos^{2} \theta_{W}} = 1 \right)$. The latter relation, however, is exactly the same as that predicted in the SU(3) model with one Higgs doublet ($Z_{3}$ orbifolding) \cite{SSSW, LMM}, and unfortunately, we cannot obtain a Higgs mass closer to the observed value by adopting Sp(6). \section{SU(4) model} We have already mentioned that the exceptional group $G_{2}$ leads to the realistic weak mixing angle $\sin^{2} \theta_{W} = 1/4$, if the Higgs doublet belongs to the triplet component of the subgroup SU(3). There is another familiar gauge group, whose adjoint repr. contains the triplet, i.e., SU(4): the adjoint repr. 15 is decomposed under SU(3) as $15 \to 8 + 3 + \bar{3} + 1$. In this section, we address the question of whether or not this alternative choice of the gauge group predicts a desirable Higgs mass. We just follow the argument made in the previous section and will skip the details. The orbifold we adopt is $T^{2}/Z_{6}$. The ``$Z_{6}$-parity'' assignment for the fundamental repr. is given by a $4 \times 4$ matrix \be \label{2'.1} P = {\rm diag} (1, 1, \omega^{3}, \omega) \ \ (\omega^{6} = 1). \ee Accordingly, the KK zero-modes for 4D gauge and scalar fields are written as \bea &&A_{\mu} = \begin{pmatrix} \frac{1}{2}\gamma_{\mu} - \frac{\sqrt{3}}{6}Z_{\mu} & \frac{1}{\sqrt{2}} W^{+}_{\mu} & 0 & 0 \\ \frac{1}{\sqrt{2}} W^{-}_{\mu} & \frac{\sqrt{3}}{3}Z_{\mu} & 0 & 0 \\ 0 & 0 & - \frac{1}{2}\gamma_{\mu} - \frac{\sqrt{3}}{6}Z_{\mu} & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} + {\rm the \ extra \ U(1) \ gauge \ boson}, \label{2'.2a} \\ &&A_{z} = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 & 0 & 0 & \phi^{+} \\ 0 & 0 & 0 & \phi^{0} \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \label{2'.2b} \eea (\ref{2'.2b}) means that the Higgs doublet behaves as a triplet repr. of the subgroup SU(3). 
Again, the coefficients $\kappa, \ \kappa'$, and $\lambda$ of (\ref{2.19}) can be read off from the commutator-squared in ${\rm Tr} (F^{\mu} \ _{z}F_{\mu \bar{z}})$ and ${\rm Tr} \{(F_{z \bar{z}})^{2}\}$: \bea && 2 {\rm Tr} (F^{\mu} \ _{z}F_{\mu \bar{z}}) \ \ \to \ \ 2g^{2}|\phi^{0}|^{2} \left( \frac{1}{4}W^{+\mu}W^{-}_{\mu} + \frac{1}{6}Z^{\mu}Z_{\mu} \right), \label{2'.3a} \\ && {\rm Tr} \{(F_{z \bar{z}})^{2}\} \ \ \to \ \ - \frac{g^{2}}{2}|\phi^{0}|^{4}. \label{2'.3b} \eea Thus, we conclude that \be \label{2'.4} \kappa = \frac{g^{2}}{2}, \ \kappa' = \frac{g^{2}}{3}, \ \lambda = \frac{g^{2}}{2}, \ee which in turn mean that \be \label{2'.5} M_{W}^{2} = \frac{g^{2}}{4}v^{2}, \ M_{Z}^{2} = \frac{g^{2}}{3} v^{2}, \ M_{H}^{2} = g^{2}v^{2}. \ee We realize that, although the weak mixing angle is realistic, $M_{W} = \frac{\sqrt{3}}{2}M_{Z} \ (\sin^{2} \theta_{W} = 1/4)$, the predicted Higgs mass $M_{H} = 2M_{W}$ is again the same as in the cases of the SU(3) and Sp(6) models. \section{SU(3) Model with Two Higgs Doublets} So far, we have studied 6D GHU models with only one Higgs doublet in their low energy effective theory and have seen that all the models predict $M_{H} = 2M_{W}$, which is too far from the observed value of $M_{H}$ at the LHC experiments for the quantum correction to make up the difference. Now, we consider the possibility of realizing a prediction of $M_{H}$, which is closer to or even coincides with the observed value, in the framework of the 6D GHU model with two Higgs doublets. It is interesting to note that the MSSM has some similarity to such a GHU model, having two Higgs doublets and the Higgs self-coupling being governed by the gauge principle; $\lambda \sim g^{2}$. In MSSM, however, the tree level prediction is $M_{H} \leq M_{Z} \cos 2\beta$ and there is no chance for the tree level prediction to coincide with the observed value. 
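The common prediction of the one-doublet models above follows from (\ref{2.20}) by elementary arithmetic; as a summary, the following exact-arithmetic Python sketch (our own verification) reproduces $M_{W} = \frac{\sqrt{3}}{2}M_{Z}$ and $M_{H} = 2M_{W}$ from the Sp(6) coefficients (\ref{2.23}) and the SU(4) coefficients (\ref{2'.4}):

```python
from fractions import Fraction

def mass_ratios(kappa, kappa_p, lam):
    """Squared mass ratios from Eq. (2.20):
    M_W^2 = (kappa/2) v^2,  M_Z^2 = kappa' v^2,  M_H^2 = 2 lambda v^2."""
    mw2, mz2, mh2 = kappa / 2, kappa_p, 2 * lam
    return mw2 / mz2, mh2 / mw2  # (M_W/M_Z)^2 and (M_H/M_W)^2

# Coefficients in units of g^2: Sp(6), Eq. (2.23), and SU(4), Eq. (2'.4).
sp6 = (Fraction(1, 4), Fraction(1, 6), Fraction(1, 4))
su4 = (Fraction(1, 2), Fraction(1, 3), Fraction(1, 2))

for kappa, kappa_p, lam in (sp6, su4):
    wz2, hw2 = mass_ratios(kappa, kappa_p, lam)
    assert wz2 == Fraction(3, 4)  # M_W = (sqrt(3)/2) M_Z, i.e. sin^2 = 1/4
    assert hw2 == 4               # M_H = 2 M_W in both models
```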
One remark here is that, in our model, the quadratic terms of the Higgs doublets do not exist at the tree level, while the quartic self-coupling is provided by $g^{2}[A_{5}, A_{6}]^{2}$ of the gauge kinetic term at the tree level, as we have seen in the previous sections. Thus, we are going to calculate the 1-loop induced quadratic terms. Our attitude here is to consider only the leading contribution of the perturbative expansion to each term of the Higgs potential. \subsection{The model} The model of interest is a 6D SU(3) GHU model with an orbifold $T^{2}/Z_{2}$ as its extra space. The $Z_{2}$ orbifolding is needed to obtain a chiral theory and the necessary breaking SU(3) $\to$ SU(2)$_L \times$ U(1)$_Y$, but we still have two Higgs doublets coming from the two extra space components $A_{5}$ and $A_{6}$ [\,For instance, the $Z_{3}$ orbifolding leaves only one Higgs doublet in the KK zero-mode sector \cite{SSSW, LMM}\,]. The torus $T^{2}$ is described by the extra space coordinates $(x^{5}, x^{6})$. For simplicity, lattice vectors along the two independent cycles of the torus $\vec{l}_{1,2}$ are assumed to satisfy \be \label{3.1} |\vec{l}_{1}| = |\vec{l}_{2}| = 2\pi R, \ \ \vec{l}_{1} \perp \vec{l}_{2}. \ee The $Z_2$-parity assignment for the triplet of SU(3) is given by \be \label{3.3} P = {\rm diag} (1, 1, -1). \ee Accordingly, the $Z_2$-parities for the gauge-Higgs sector are fixed as \bea A_{\mu}(-x^{5}, -x^{6}) &=& PA_{\mu}(x^{5}, x^{6})P^{-1}, \nonumber \\ A_{5, 6}(-x^{5}, -x^{6}) &=& - PA_{5, 6}(x^{5}, x^{6})P^{-1}. \label{3.4} \eea We thus realize that the KK zero-modes of 4D gauge bosons $A_{\mu}$ are just those of SU(2)$_L \times$ U(1)$_Y$, although the predicted weak mixing angle is unrealistic: $\sin^{2}\theta_{W} = 3/4$ [\,We will consider the SU(4) model in the next section to evade this problem\,]. 
As the KK zero-modes of the 4D scalars $A_{5, 6}$, we obtain two Higgs doublets $H_{1, 2}$: \be \label{3.5} A_{5, 6}^{(0, 0)} = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 & 0 & \phi_{1,2}^{+} \\ 0 & 0 & \phi_{1,2}^{0} \\ \phi_{1,2}^{-} & \phi_{1,2}^{0\ast} & 0 \end{pmatrix}, \ee with \be \label{3.5'} H_{1,2} = \begin{pmatrix} \phi_{1,2}^{+} \\ \phi_{1,2}^{0} \end{pmatrix}. \ee \subsection{A general analysis of the Higgs potential and Higgs mass} Here, we study the Higgs mass in the two Higgs doublet model in a general framework of the effective Higgs potential, where the quadratic term of the Higgs fields is assumed to take a general form allowed by gauge invariance, while the quartic self-coupling term is given by its tree level contribution: \be \label{3.6} {\rm Tr} \{(F_{5 6})^{2}\} \ \ \to \ \ - g^{2} {\rm Tr} \{ [A_{5}, A_{6}]^{2} \}. \ee Let us note that there should be no local operators responsible for the quadratic terms of the Higgs fields, since gauge invariance in the bulk implies that gauge-invariant and Lorentz-invariant local operators are written in terms of the field strength, and even the operator with minimum mass dimension, $F_{MN}F^{MN}$, already has mass dimension 4 (from the 4D viewpoint) and therefore does not contain the quadratic terms. Thus, the only possible operators relevant to the quadratic terms are either global operators due to the Wilson-loops along the two independent cycles of the torus, \be \label{3.7} P \{ e^{ig \oint A_{5, 6} \ dy_{1,2}} \}, \ee or the ``tadpole" term, the linear term of $F_{56}$ corresponding to U(1)$_{Y}$ localized at the fixed points of the orbifold, which leads to a quadratic term ${\rm Tr} (Y [A_{5}, A_{6}])$ [\,$Y = {\rm diag} (\frac{1}{3}, \frac{1}{3}, - \frac{2}{3})$ is the U(1)$_Y$ generator\,]. The tadpole term is consistent with the remaining gauge symmetry at the fixed points, SU(2)$_{L}\times$ U(1)$_{Y}$, although it contradicts the bulk gauge symmetry.
This possible tadpole term, being a local operator, may be induced together with a UV-divergent coefficient. Thus, a general form of the effective potential with respect to the two Higgs doublets up to the quartic term is written as \be \label{3.8} V(H_{1}, H_{2}) = - \lambda \ {\rm Tr} \{ [A_{5}^{(0, 0)}, A_{6}^{(0, 0)}]^{2} \} + a \ {\rm Tr} [(A_{5}^{(0, 0)})^{2} + (A_{6}^{(0, 0)})^{2}] + i b \ {\rm Tr}\{ Y [A_{5}^{(0, 0)}, A_{6}^{(0, 0)}]\}. \ee Among the quadratic terms, the term with the coefficient $a$ is expected to come from the Wilson-loops (\ref{3.7}), while the term with the coefficient $b$ is expected to be the contribution of the tadpole. The potential (\ref{3.8}) is 4-dimensional and is written in terms of the KK zero-modes $A_{5, 6}^{(0, 0)}$, since the Wilson-loops (\ref{3.7}) obtain contributions only from the KK zero-modes. We have assumed that the coefficients of $(A_{5}^{(0, 0)})^{2}$ and $(A_{6}^{(0, 0)})^{2}$ are the same, since we have assumed $|\vec{l}_{1}| = |\vec{l}_{2}|$ for the torus [\,see (\ref{3.1})\,]. The concrete form of (\ref{3.8}) in terms of two Higgs doublets $H_{1,2}$ is calculated to be \bea V(H_{1}, H_{2}) &=& \frac{\lambda}{2} \{ (H_{1}^{\dagger}H_{1})(H_{2}^{\dagger}H_{2}) + (H_{1}^{\dagger}H_{2})(H_{2}^{\dagger}H_{1}) - (H_{2}^{\dagger}H_{1})^{2} - (H_{1}^{\dagger}H_{2})^{2} \} \nonumber \\ &&+ a (H_{1}^{\dagger}H_{1} + H_{2}^{\dagger}H_{2}) - \frac{i}{2}b (H_{1}^{\dagger}H_{2} - H_{2}^{\dagger}H_{1}). \label{3.8'} \eea At the tree level, \be \label{3.9} \lambda = g^{2}, \ \ a = b = 0. \ee Thus, the leading contributions to the quadratic terms appear at the 1-loop level, and we will calculate the quantum corrections later. \subsubsection{The minimization} Here, supposing $a$ and $b$ are radiatively induced, let us perform the minimization of the potential and calculate the mass eigenvalues of 4D scalar particles. 
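As a quick consistency check, the equivalence of the matrix form (\ref{3.8}) and the doublet form (\ref{3.8'}) can be verified numerically by building the zero-mode matrices (\ref{3.5}) from randomly chosen field values (a sketch on our part; the coupling values are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_doublet():
    # random complex doublet (phi^+, phi^0)
    return rng.normal(size=2) + 1j * rng.normal(size=2)

def zero_mode(H):
    # KK zero-mode matrix (3.5) built from a single doublet
    return np.array([[0, 0, H[0]],
                     [0, 0, H[1]],
                     [np.conj(H[0]), np.conj(H[1]), 0]]) / np.sqrt(2)

H1, H2 = rand_doublet(), rand_doublet()
A5, A6 = zero_mode(H1), zero_mode(H2)
Y = np.diag([1/3, 1/3, -2/3])          # U(1)_Y generator
comm = A5 @ A6 - A6 @ A5               # [A5, A6]

lam, a, b = 0.9, 0.3, 0.7              # arbitrary illustrative couplings

# matrix form (3.8)
V_matrix = (-lam * np.trace(comm @ comm)
            + a * np.trace(A5 @ A5 + A6 @ A6)
            + 1j * b * np.trace(Y @ comm))

# doublet form (3.8'); np.vdot conjugates its first argument, giving H_i^dag H_j
h11, h22 = np.vdot(H1, H1), np.vdot(H2, H2)
h12, h21 = np.vdot(H1, H2), np.vdot(H2, H1)
V_doublet = (lam/2 * (h11*h22 + h12*h21 - h21**2 - h12**2)
             + a * (h11 + h22) - 0.5j * b * (h12 - h21))

assert np.isclose(V_matrix, V_doublet)
```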
First, we find that \be \label{3.9'} a > 0 \ee is necessary, since otherwise the potential is unbounded from below along the ``flat direction" $H_{1} \propto H_2$, where the quartic term vanishes: $[A_{5}^{(0, 0)}, A_{6}^{(0, 0)}] = 0$. For the purpose of minimization, we first focus on the neutral components $\phi_{1,2}^{0}$ of the doublets. Then the potential (\ref{3.8'}) reduces to \be \label{3.10} V (\phi_{1}^{0}, \phi_{2}^{0}) = 2 \lambda \ \{{\rm Im} (\phi_{1}^{0\ast} \phi_{2}^{0})\}^{2} + a \ (|\phi_{1}^{0}|^{2} + |\phi_{2}^{0}|^{2}) + b \ {\rm Im} (\phi_{1}^{0\ast} \phi_{2}^{0}). \ee In terms of the VEVs of the neutral components, \be \label{3.11} |\phi_{1, 2}^{0}| = \frac{v_{1, 2}}{\sqrt{2}}, \ \ \phi_{1}^{0\ast}\phi_{2}^{0} = \frac{v_{1}v_{2}}{2}e^{-i\theta} \ \ (v_{1,2} \geq 0), \ee the potential reads \be \label{3.12} V(v_{1}, v_{2}, \theta ) = \frac{\lambda}{2} (v_{1}v_{2} \sin \theta)^{2} + \frac{a}{2}(v_{1}^{2} + v_{2}^{2}) - \frac{b}{2} v_{1}v_{2} \sin \theta. \ee The minimization goes as follows. Let $x \equiv v_{1}v_{2} \geq 0, \ y \equiv v_{1} - v_{2}$. Then, we can complete the square: \be \label{3.14} V = \frac{\lambda \sin^{2}\theta}{2} \Bigl(x - \frac{b \sin \theta - 2a}{2\lambda \sin^{2}\theta} \Bigr)^{2} + \frac{a}{2}y^{2} - \frac{(b - \frac{2a}{\sin \theta})^{2}}{8\lambda}. \ee We need to impose \be \label{3.15} b \sin \theta - 2a > 0 \ \ \to \ \ |b| > 2a, \ee since otherwise the minimum of the potential is at $x = y = 0$, i.e., at $v_{1, 2} = 0$, and there is no spontaneous symmetry breaking. Under the condition (\ref{3.15}), we easily realize that the minimum is at $b \sin \theta = |b|$. For instance, for $b > 0$, (\ref{3.15}) implies $\sin \theta > \frac{2a}{b} > 0$ and the vacuum energy $-(b - \frac{2a}{\sin \theta})^{2}/8\lambda$ takes its minimum at $\sin \theta = 1$.
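The outcome of this minimization can be checked numerically against the reduced potential (\ref{3.12}) (a sketch on our part; the coupling values are arbitrary and chosen to obey $|b| > 2a > 0$):

```python
import numpy as np

lam, a, b = 0.9, 0.1, 1.0   # arbitrary couplings obeying |b| > 2a > 0

def V(v1, v2, th):
    # the reduced potential (3.12)
    return (lam/2 * (v1*v2*np.sin(th))**2
            + a/2 * (v1**2 + v2**2) - b/2 * v1*v2*np.sin(th))

# analytic minimum: sin(theta) = 1, v1 = v2, v1*v2 = (|b| - 2a)/(2 lam),
# with vacuum energy -(|b| - 2a)^2 / (8 lam)
v_star = np.sqrt((abs(b) - 2*a) / (2*lam))
V_min = V(v_star, v_star, np.pi/2)
assert abs(V_min + (abs(b) - 2*a)**2 / (8*lam)) < 1e-12

# no randomly sampled point does better than the analytic minimum
rng = np.random.default_rng(1)
pts = rng.uniform(0, 3, size=(20000, 3))
assert all(V(v1, v2, np.pi*th) >= V_min - 1e-9 for v1, v2, th in pts)
```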
We thus replace $\sin^{2}\theta \to 1, \ b \sin \theta \to |b|$ in (\ref{3.14}) to obtain \be \label{3.16} V = \frac{\lambda}{2} \Bigl(x - \frac{|b| - 2a}{2\lambda} \Bigr)^{2} + \frac{a}{2}y^{2} - \frac{(|b| - 2a)^{2}}{8\lambda}. \ee Thus, we finally obtain the VEVs of the Higgs fields, \be \label{3.17} x = \frac{|b| - 2a}{2\lambda}, \ y = 0 \ \ \to \ \ v_{1} = v_{2} = \frac{v}{\sqrt{2}} = \sqrt{\frac{|b| - 2a}{2\lambda}} \ \ \left(\theta = \epsilon(b) \frac{\pi}{2} \right), \ee where $\epsilon (b)$ is the sign-function of $b$: $\epsilon (b) = \pm 1$ depending on the sign of $b$. In (\ref{3.17}), $v$ should be understood as the VEV corresponding to that in the SM, since the mass of the charged weak gauge boson is given in this model as \be \label{3.18} M_{W}^{2} = \frac{g^{2}}{4}(v_{1}^{2} + v_{2}^{2}) = \frac{g^{2}}{4}v^{2}. \ee \subsubsection{Mass eigenvalues of 4D scalars} We will have five physically remaining scalar particles, just as in the MSSM. We now derive the mass eigenvalues of these physical states. It will be useful to perform a unitary transformation between two Higgs doublets, so that only one doublet develops the VEV $v$: \bea && H \equiv \frac{1}{\sqrt{2}} [H_{1} + i\epsilon (b) H_{2}], \nonumber \\ && \tilde{H} \equiv \frac{1}{\sqrt{2}} [H_{1} - i\epsilon (b) H_{2}]. \label{3.19} \eea From (\ref{3.17}), we realize that only $H$ develops a nonvanishing VEV: \be \label{3.20} \langle H \rangle = \begin{pmatrix} 0 \\ \frac{v}{\sqrt{2}} \end{pmatrix}, \ \ \langle \tilde{H} \rangle = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \ee where we have assumed that the VEV of $H$ is real without loss of generality. Namely, $H$ is regarded as the doublet behaving as one of the SM. Then, the NG bosons $G^{\pm}$ and $G^{0}$ and the physical Higgs scalar fields are denoted as \be \label{3.21} H = \begin{pmatrix} G^{+} \\ \frac{v + h + iG^{0}}{\sqrt{2}} \end{pmatrix}, \ \ \tilde{H} = \begin{pmatrix} h^{+} \\ \frac{\tilde{h} + iP}{\sqrt{2}} \end{pmatrix}. 
\ee Note that the NG bosons should belong to the doublet developing the VEV, i.e., $H$, and do not mix with the physical scalar fields in the mass-squared matrix. That is why this basis is convenient for analyzing the mass eigenvalues. Our task is to calculate the mass eigenvalues for the physical states, $h^{+}$ and $P$, and especially the ``CP-even" neutral Higgses $h$ and $\tilde{h}$. For that purpose, we rewrite the Higgs potential (\ref{3.8'}) in terms of $H$ and $\tilde{H}$: \bea V(H, \tilde{H}) &=& \frac{\lambda}{2} [(H^{\dagger}H)^{2} + (\tilde{H}^{\dagger}\tilde{H})^{2} - (H^{\dagger}H)(\tilde{H}^{\dagger}\tilde{H}) - (H^{\dagger}\tilde{H})(\tilde{H}^{\dagger}H)] \nonumber \\ &&+ a (H^{\dagger}H + \tilde{H}^{\dagger}\tilde{H}) - \frac{|b|}{2} (H^{\dagger}H - \tilde{H}^{\dagger}\tilde{H}). \label{3.22} \eea Substituting (\ref{3.21}) into (\ref{3.22}) and extracting the quadratic terms of the fields, we get \bea V(H, \tilde{H})_{{\rm quadratic}} &=& 0 \times |G^{+}|^{2} + (\frac{3}{2}a + \frac{1}{4}|b|) \ |h^{+}|^{2} \nonumber \\ &&+ \ 0 \times (G^{0})^{2} + \frac{1}{2}(2a) \ P^{2} + \frac{1}{2}(|b| - 2a) \ h^{2} + \frac{1}{2}(2a) \ \tilde{h}^{2}, \label{3.23} \eea where the relation \be \label{3.24} \lambda v^{2} = |b| - 2a, \ee obtained from (\ref{3.17}), has been used to show that $G^{\pm}$ and $G^{0}$ have vanishing masses. Namely, in the basis of $H$ and $\tilde{H}$, the mass-squared matrix is automatically diagonal. In particular, there is no analogue of the MSSM mixing angle $\alpha$ in our model, owing to the simplified torus compactification we have adopted.
Thus, as we expected, $G^{\pm}$ and $G^{0}$ are NG bosons and the masses of the remaining five physical states are given as \bea {\rm charged \ sector}&:& \ \ M_{h^{+}}^{2} = \frac{3}{2}a + \frac{1}{4}|b| = 2a + \frac{1}{4}\lambda v^{2} = 2a + M_{W}^{2}, \nonumber \\ \mbox{CP-odd sector}&:& \ \ M_{P}^{2} = 2a, \nonumber \\ \mbox{CP-even sector}&:& \ \ M_{h}^{2} = |b| - 2a = \lambda v^{2} = (2M_{W})^{2}, \ \ M_{\tilde{h}}^{2} = 2a, \label{3.25} \eea where the relations (\ref{3.9}), (\ref{3.18}), and (\ref{3.24}) have been used. If we identify the lighter CP-even neutral scalar with our Higgs particle, its mass depends on the magnitude of $a$ relative to $2M_{W}^{2}$: \be \label{3.53} M_{H} = \begin{cases} 2M_{W} & \text{(for $a > 2M_{W}^{2}$)} \\ \sqrt{2a} & \text{(for $a < 2M_{W}^{2}$)}. \end{cases} \ee Interestingly, for $a < 2M_{W}^{2}$, the Higgs mass is predicted to be $M_{H} < 2M_{W}$ and even coincides with the observed value $M_{H} = 125$ GeV for a suitable choice of $a$. We should also note that, in this case, the mass of the CP-odd neutral scalar $P$ is degenerate with $M_{H}$, while the mass of the charged Higgs, $\sqrt{2a + M_{W}^{2}}$, is larger than $M_{H}$ but of the order of the weak scale, which may be potentially dangerous when confronted with the LHC data. Fortunately, however, the present experimental lower bounds on the masses of the exotic scalar particles are rather loose: even for the charged scalar, the lower bound is still around 80 GeV or so \cite{data}. Note that, in this case, the lighter Higgs $\tilde{h}$ does not belong to the doublet developing the VEV, although it should have Yukawa couplings with fermions through its higher-dimensional gauge interaction. In the opposite case of $a > 2M_{W}^{2}$, the Higgs mass is just $2M_{W}$ and we recover the prediction of the one Higgs doublet models. Now, the Higgs belongs to the doublet developing the VEV $v$, just as in the SM.
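The spectrum (\ref{3.25}) can also be obtained numerically from the potential (\ref{3.22}) via a finite-difference Hessian at the vacuum (a sketch on our part; the couplings are arbitrary values with $|b| > 2a > 0$, and in the chosen real-field basis the Hessian eigenvalues are directly the mass-squared values):

```python
import numpy as np

lam, a, b = 0.9, 0.1, 1.0                 # arbitrary couplings with |b| > 2a > 0
v = np.sqrt((abs(b) - 2*a) / lam)         # lam v^2 = |b| - 2a, eq. (3.24)

def V(f):
    # f = (Re G+, Im G+, h, G0, Re h+, Im h+, htilde, P), cf. eq. (3.21)
    H  = np.array([(f[0] + 1j*f[1]) / np.sqrt(2),
                   (v + f[2] + 1j*f[3]) / np.sqrt(2)])
    Ht = np.array([(f[4] + 1j*f[5]) / np.sqrt(2),
                   (f[6] + 1j*f[7]) / np.sqrt(2)])
    hh, tt = np.vdot(H, H).real, np.vdot(Ht, Ht).real
    ht = np.vdot(H, Ht)
    # the potential (3.22)
    return (lam/2 * (hh**2 + tt**2 - hh*tt - abs(ht)**2)
            + a * (hh + tt) - abs(b)/2 * (hh - tt))

# finite-difference Hessian at the vacuum f = 0
eps, n = 1e-3, 8
hess = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        ei, ej = np.eye(n)[i] * eps, np.eye(n)[j] * eps
        hess[i, j] = (V(ei + ej) - V(ei - ej) - V(-ei + ej) + V(-ei - ej)) / (4 * eps**2)

masses2 = np.sort(np.linalg.eigvalsh(hess))
expected = np.sort([0, 0, 0,                  # G+, G- (two real comps.), G0
                    abs(b) - 2*a,             # h
                    2*a, 2*a,                 # htilde, P
                    2*a + lam*v**2/4,         # h+ (two real components)
                    2*a + lam*v**2/4])
assert np.allclose(masses2, expected, atol=1e-4)
```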
In this case, as long as the coefficient $a$ is sufficiently large, the other physical scalars become heavy. In fact, in the limit of the compactification scale $M_{c} \equiv \frac{1}{R} \to \infty$, we expect that the theory reduces to the SM, and such a decoupling of the four additional scalars, $h^{\pm}, \ P$, and $\tilde{h}$, and therefore the recovery of the prediction of the one doublet model, are reasonable. We will see that $a$ is one order of magnitude smaller than ${\cal O} (\alpha M_{c}^{2}) \ [\,\alpha: {\rm the \ fine \ structure \ constant}$, see (\ref{4.20}) and (\ref{5.10})\,]. This situation mimics the case of the MSSM, where in the limit $M_{SUSY} \to \infty$ the SM is expected to be recovered. To summarize, the 6D GHU model with two Higgs doublets has the desirable feature that it predicts \be \label{3.54} M_{H} \leq 2M_{W}, \ee and the predicted Higgs mass may even coincide with the observed value already at the leading order of the perturbative expansion. This is in clear contrast to the case of the MSSM, where $M_{H} \leq M_{Z} \cos 2\beta \ (\leq M_{Z})$ at the leading order, which cannot agree with the observed value. In order to confirm that the quadratic terms really take the form shown in (\ref{3.8'}), derived from an argument relying on the gauge symmetry and the symmetry of the torus, we now perform concrete calculations of the quantum corrections to the quadratic terms in two types of models: a model with a matter scalar and a model with a matter fermion. \subsection{A model with matter scalar} We introduce SU(3) triplet complex scalar fields into the theory as the matter fields: \be \label{4.1} \Phi = \begin{pmatrix} \varphi_{1} \\ \varphi_{2} \\ \varphi_{3} \end{pmatrix}. \ee The $Z_{2}$-parity assignment for these matter fields is \be \label{4.2} \Phi (-x^{5}, -x^{6}) = P \Phi (x^{5}, x^{6}), \ee where $P$ is given in (\ref{3.3}).
Thus, its KK mode expansion is \be \label{4.3} \Phi (x^{\mu}, x^{5}, x^{6}) = \sum_{n_{1}=-\infty}^{\infty} \sum_{n_{2}=-\infty}^{\infty} \frac{1}{2 \pi R} \begin{pmatrix} \cos ( \frac{n_{1}x^{5} + n_{2}x^{6}}{R} ) \varphi_{1}^{(n_{1}, n_{2})}(x^{\mu}) \\ \cos ( \frac{n_{1}x^{5} + n_{2}x^{6}}{R} ) \varphi_{2}^{(n_{1}, n_{2})}(x^{\mu}) \\ i \sin ( \frac{n_{1}x^{5} + n_{2}x^{6}}{R} ) \varphi_{3}^{(n_{1}, n_{2})}(x^{\mu}) \end{pmatrix}, \ee where there are degeneracies among the 4D fields because of the $Z_{2}$ orbifolding: \bea &&\varphi_{1, 2}^{(-n_{1}, -n_{2})}(x^{\mu}) = \varphi_{1, 2}^{(n_{1}, n_{2})}(x^{\mu}), \nonumber \\ &&\varphi_{3}^{(-n_{1}, -n_{2})}(x^{\mu}) = - \varphi_{3}^{(n_{1}, n_{2})}(x^{\mu}). \label{4.4} \eea Instead of evaluating 2-point functions of $H_{1, 2}$ by directly calculating the relevant Feynman diagrams, let us calculate the radiatively induced effective potential of $H_{1, 2}$ using the background field method under the following background for the Higgs fields: \be \label{4.5} A^{(0, 0)}_{5, 6} = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 & H_{1, 2} \\ H_{1, 2}^{\dagger} & 0 \end{pmatrix}. \ee Under this background, gauge covariant derivatives along the extra space, when they act on the KK mode $(n_{1}, n_{2})$ of (\ref{4.3}), are equivalent to multiplication by the following matrices: \be \label{4.6} D_{5, 6} = i \begin{pmatrix} \frac{n_{1,2}}{R}I_{2} & \frac{g}{\sqrt{2}} H_{1,2} \\ \frac{g}{\sqrt{2}} H_{1,2}^{\dagger} & \frac{n_{1,2}}{R} \end{pmatrix}, \ee where $I_{2}$ is the $2 \times 2$ unit matrix.
This leads to the ``mass-squared" operator for the KK mode, \be \label{4.7} {\cal M}_{n_{1}, n_{2}}^{2} = - (D_{5}^{2} + D_{6}^{2}) = \frac{n_{1}^{2}+n_{2}^{2}}{R^{2}} I_{3} + \begin{pmatrix} \frac{g^{2}}{2} (H_{1}H_{1}^{\dagger} + H_{2}H_{2}^{\dagger}) & \sqrt{2}g \frac{n_{1}H_{1} + n_{2}H_{2}}{R} \\ \sqrt{2}g \frac{n_{1}H_{1}^{\dagger} + n_{2}H_{2}^{\dagger}}{R} & \frac{g^{2}}{2}(H_{1}^{\dagger}H_{1} + H_{2}^{\dagger}H_{2}) \end{pmatrix}. \ee We do not introduce a bulk mass for the scalar field, since the quadratic terms of the Higgs fields do not suffer from infrared divergences. The effective potential due to the bubble diagrams of the scalar matter fields is given as \be \label{4.8} V_{eff}^{(s)} = \frac{1}{2} \int \frac{d^{4}p_{E}}{(2\pi)^{4}} \sum_{n_{1}=-\infty}^{\infty} \sum_{n_{2}=-\infty}^{\infty} {\rm Tr} \ \log (p_{E}^{2}I_{3} + {\cal M}_{n_{1}, n_{2}}^{2}), \ee where $I_{3}$ is the 3$\times$3 unit matrix, the Tr is taken over the 3$\times$3 matrix, and $p_{E}$ is a Euclidean 4-momentum. The factor $\frac{1}{2}$ is to take care of the degeneracy (\ref{4.4}) [\,This prescription is also applicable to the KK zero-mode with $(n_{1},n_{2}) = (0, 0)$, since in (\ref{4.7}) the zero-mode contribution exists for the third component of the triplet, although actually the mode function for the third component, being an odd function, disappears for the KK zero-mode\,]. If we ignore the charged scalars of $H_{1, 2}$, the three eigenvalues of the matrix ${\cal M}_{n_{1}, n_{2}}^{2}$ are \be \label{4.9} \frac{n_{1}^{2}+n_{2}^{2}}{R^{2}}, \ \ \frac{n_{1}^{2}+n_{2}^{2}}{R^{2}} +\frac{g^{2}}{2}(|\phi_{1}^{0}|^{2} + |\phi_{2}^{0}|^{2}) \pm \sqrt{2}g \frac{|n_{1}\phi_{1}^{0} + n_{2}\phi_{2}^{0}|}{R}.
\ee The field-dependent eigenvalues cannot take a simple form, except in specific cases such as $\theta = 0$, where \be \label{4.10} \frac{n_{1}^{2}+n_{2}^{2}}{R^{2}} +\frac{g^{2}}{2}(|\phi_{1}^{0}|^{2} + |\phi_{2}^{0}|^{2}) \pm \sqrt{2}g \frac{|n_{1}\phi_{1}^{0} + n_{2}\phi_{2}^{0}|}{R} = \Bigl(\frac{n_{1}}{R} \pm \frac{g}{\sqrt{2}} |\phi_{1}^{0}| \Bigr)^{2} + \Bigl(\frac{n_{2}}{R} \pm \frac{g}{\sqrt{2}} |\phi_{2}^{0}| \Bigr)^{2}. \ee The lesson here is that the evaluation of the whole effective potential by Poisson resummation is difficult. Nevertheless, once we obtain the general formula (\ref{4.8}) for the effective potential, we easily get the quadratic terms of $H_{1,2}$ by Taylor expanding it with respect to the fields $H_{1,2}$: \be \label{4.11} V_{2}^{(s)} = \frac{1}{2} \int \frac{d^{4}p_{E}}{(2\pi)^{4}} \sum_{n_{1}, n_{2} =-\infty}^{\infty} \left[ \frac{g^{2}(H_{1}^{\dagger}H_{1} + H_{2}^{\dagger}H_{2})}{p_{E}^{2} + \frac{n_{1}^{2}+n_{2}^{2}}{R^{2}}} - 2g^{2} \frac{\frac{(n_{1}H_{1}^{\dagger} + n_{2}H_{2}^{\dagger})(n_{1}H_{1} + n_{2}H_{2})}{R^{2}}}{\bigl(p_{E}^{2} + \frac{n_{1}^{2}+n_{2}^{2}}{R^{2}}\bigr)^{2}} \right]. \ee In the second term of the r.h.s. of (\ref{4.11}), the coefficient of the operator ${\rm Re}(H_{1}^{\dagger}H_{2})$ vanishes, simply because it is proportional to $\sum_{n_{1}, n_{2} =-\infty}^{\infty} \frac{n_{1}n_{2}}{\bigl(p_{E}^{2} + \frac{n_{1}^{2}+n_{2}^{2}}{R^{2}}\bigr)^{2}} = 0$.
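The eigenvalues (\ref{4.9}) themselves can be checked by diagonalizing the matrix (\ref{4.7}) for random neutral backgrounds (a numerical sketch on our part; the values of $g$, $R$, and the sampled KK levels are arbitrary):

```python
import numpy as np

g, R = 0.65, 1.0                      # arbitrary placeholder values
rng = np.random.default_rng(2)
all_match = True

for _ in range(50):
    n1, n2 = rng.integers(-5, 6, size=2)
    p1, p2 = rng.normal(size=2) + 1j * rng.normal(size=2)  # neutral backgrounds phi_1^0, phi_2^0
    H1, H2 = np.array([0, p1]), np.array([0, p2])

    # the 3x3 mass-squared matrix (4.7)
    top = g**2/2 * (np.outer(H1, H1.conj()) + np.outer(H2, H2.conj()))
    off = np.sqrt(2) * g * (n1*H1 + n2*H2) / R
    corner = np.array([[g**2/2 * (abs(p1)**2 + abs(p2)**2)]])
    M2 = ((n1**2 + n2**2)/R**2 * np.eye(3)
          + np.block([[top, off[:, None]], [off.conj()[None, :], corner]]))

    # the three eigenvalues (4.9)
    base = (n1**2 + n2**2) / R**2
    shift = g**2/2 * (abs(p1)**2 + abs(p2)**2)
    c = np.sqrt(2) * g * abs(n1*p1 + n2*p2) / R
    expected = np.sort([base, base + shift + c, base + shift - c])
    all_match &= np.allclose(np.sort(np.linalg.eigvalsh(M2)), expected)

assert all_match
```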
Thus, by replacing $n_{1}^{2}H_{1}^{\dagger}H_{1}$ by $\frac{1}{2}(n_{1}^{2}+n_{2}^{2})H_{1}^{\dagger}H_{1}$ etc., invoking the symmetry between two extra spaces, (\ref{4.11}) is shown to yield only the operator $H_{1}^{\dagger}H_{1} + H_{2}^{\dagger}H_{2}$: \be \label{4.12} V_{2}^{(s)} = \frac{g^{2}}{2} \int \frac{d^{4}p_{E}}{(2\pi)^{4}} \sum_{n_{1}, n_{2} =-\infty}^{\infty} \left[ \frac{1}{p_{E}^{2} + \frac{n_{1}^{2}+n_{2}^{2}}{R^{2}}} - \frac{\frac{n_{1}^{2}+n_{2}^{2}}{R^{2}}}{\bigl(p_{E}^{2} + \frac{n_{1}^{2}+n_{2}^{2}}{R^{2}}\bigr)^{2}} \right] (H_{1}^{\dagger}H_{1} + H_{2}^{\dagger}H_{2}). \ee Thus in this model, there is no quadratic term other than the terms with coefficients $a$ and $b$ in (\ref{3.8'}), as we expected, and the contributions to these coefficients due to the matter scalars are given as \bea && a^{(s)} = \frac{g^{2}}{2} \int \frac{d^{4}p_{E}}{(2\pi)^{4}} \sum_{n_{1}, n_{2} =-\infty}^{\infty} \left[ \frac{1}{p_{E}^{2} + \frac{n_{1}^{2}+n_{2}^{2}}{R^{2}}} - \frac{\frac{n_{1}^{2}+n_{2}^{2}}{R^{2}}}{\bigl(p_{E}^{2} + \frac{n_{1}^{2}+n_{2}^{2}}{R^{2}}\bigr)^{2}} \right], \nonumber \\ && b^{(s)} = 0. \label{4.13} \eea Utilizing the formula \be \label{4.14} \frac{1}{\alpha} = \int_{0}^{\infty} \ e^{-\alpha t} dt, \ \ \frac{1}{\alpha^{2}} = \int_{0}^{\infty} \ te^{-\alpha t} dt, \ee we get \be \label{4.15} a^{(s)} = \frac{g^{2}}{2} \int_{0}^{\infty} \ dt \int \frac{d^{4}p_{E}}{(2\pi)^{4}} \sum_{n_{1}, n_{2} =-\infty}^{\infty} \Bigl(1 - t \frac{n_{1}^{2}+n_{2}^{2}}{R^{2}}\Bigr) e^{- \bigl(p_{E}^{2} + \frac{n_{1}^{2}+n_{2}^{2}}{R^{2}}\bigr) t}. 
\ee Then, a manipulation by the use of the Poisson resummations [\,$k_{1}$ and $k_{2}$ are winding numbers\,], \bea \sum_{n_{1}, n_{2}} e^{-t \frac{n_{1}^{2}+n_{2}^{2}}{R^{2}}} &=& \pi R^{2} \frac{1}{t} \sum_{k_{1}, k_{2}} e^{-\frac{(\pi R)^{2}(k_{1}^{2} + k_{2}^{2})}{t}}, \nonumber \\ \sum_{n_{1}, n_{2}} \frac{n_{1,2}^{2}}{R^{2}} e^{-t \frac{n_{1}^{2}+n_{2}^{2}}{R^{2}}} &=& \pi R^{2} \sum_{k_{1}, k_{2}} \Bigl(\frac{1}{2t^{2}} - \frac{(\pi R)^{2}}{t^{3}}k_{1,2}^{2}\Bigr) e^{-\frac{(\pi R)^{2}(k_{1}^{2} + k_{2}^{2})}{t}}, \label{4.16} \eea leads to \bea a^{(s)} &=& \frac{g^{2}}{2} \pi R^{2} \int_{0}^{\infty} \ dt \int \frac{d^{4}p_{E}}{(2\pi)^{4}} \sum_{k_{1}, k_{2} =-\infty}^{\infty} \left[\frac{1}{t} - \Bigl(\frac{1}{t} - \frac{(\pi R)^{2}}{t^{2}} (k_{1}^{2} + k_{2}^{2}) \Bigr) \right] e^{- t p_{E}^{2}} e^{-\frac{(\pi R)^{2}(k_{1}^{2} + k_{2}^{2})}{t}} \nonumber \\ &=& \frac{\pi^{3}g^{2}}{2} R^{4} \int_{0}^{\infty} \ \frac{dt}{t^{2}} \int \frac{d^{4}p_{E}}{(2\pi)^{4}} \sum_{k_{1}, k_{2} =-\infty}^{\infty} (k_{1}^{2} + k_{2}^{2}) e^{- t p_{E}^{2}} e^{-\frac{(\pi R)^{2}(k_{1}^{2} + k_{2}^{2})}{t}}. \label{4.17} \eea First, we comment on the zero-winding sector, $(k_{1}, k_{2}) = (0, 0)$, which would correspond to a local mass-squared operator of the Higgs fields and should be forbidden by the local gauge symmetry. In fact, the prefactor $k_{1}^{2} + k_{2}^{2}$ in (\ref{4.17}) clearly shows that the contribution of the zero-winding sector vanishes, as we expected, and in the remaining contribution the integrals over $t$ and the 4-momentum $p_{E}$ are convergent. Then, using the formula \be \label{4.18} \int \frac{d^{4}p_{E}}{(2\pi)^{4}} \ e^{- t p_{E}^{2}} = \frac{1}{16\pi^{2}}\frac{1}{t^{2}}, \ee we get \be \label{4.19} a^{(s)} = \frac{\pi g^{2}}{32} R^{4} \int_{0}^{\infty} \ \frac{dt}{t^{4}} \sum_{(k_{1}, k_{2}) \neq (0, 0)} (k_{1}^{2} + k_{2}^{2}) e^{-\frac{(\pi R)^{2}(k_{1}^{2} + k_{2}^{2})}{t}}.
\ee By changing the variable, $\frac{(\pi R)^{2}(k_{1}^{2} + k_{2}^{2})}{t} \to l$, and performing the integral over $l$, we finally get \bea &&a^{(s)} = \frac{g^{2}}{16\pi^{5}} \frac{1}{R^{2}} \sum_{(k_{1}, k_{2}) \neq (0, 0)} \frac{1}{(k_{1}^{2} + k_{2}^{2})^{2}} = 5.3 \times 10^{-4}\frac{1}{R^{2}} , \nonumber \\ &&b^{(s)} = 0. \label{4.20} \eea \subsection{A model with matter fermion} In the model with scalar matter fields, although the necessary condition (\ref{3.9'}) is satisfied, unfortunately the other condition (\ref{3.15}) is not, as we see in (\ref{4.20}). Thus, we now discuss a model with SU(3) triplet fermions as the matter fields: \be \label{5.1} \Psi = \begin{pmatrix} \psi_{1} \\ \psi_{2} \\ \psi_{3} \end{pmatrix}. \ee The quantum correction due to the matter fermions is known to yield the nonvanishing coefficient $b$ through the commutator of gauge covariant derivatives, $[D_{5}, D_{6}]$, as we will see below. The 6D gamma matrices are given in the space of the direct product of the 4D spinor space and the [\,SU(2)\,] internal space as \be \Gamma^{\mu} = \gamma^{\mu} \otimes I_2, \ \ \Gamma^5 = \gamma^5 \otimes i \sigma_1 , \ \ \Gamma^6 = \gamma^5 \otimes i \sigma_2 \ \ \ (\mu = 0, 1, 2, 3). \label{5.2} \ee Then, the 6D chirality operator is given as \be \label{5.3} \Gamma_{7} = \Gamma^{0} \Gamma^{1} \Gamma^{2} \Gamma^{3} \Gamma^{5} \Gamma^{6} = - \gamma^{5} \otimes \sigma_3. \ee Our matter fermion is assumed to be a 6D Weyl fermion: \be \label{5.4} \Gamma_{7} \Psi = - \Psi. \ee Let us note that, even if we adopt the 6D Weyl fermion with the eigenvalue $-1$ of $\Gamma_{7}$, there are two cases: the 4D right-handed fermion with the $+1$ eigenvalue of $\sigma_{3}$ and the 4D left-handed fermion with the $-1$ eigenvalue of $\sigma_{3}$. The $Z_2$-parity assignment for $\Psi$ is \be \label{5.5} \Psi (-x^{5}, -x^{6}) = P(-i\Gamma_{5} \Gamma_{6}) \Psi (x^{5}, x^{6}) = - P I_{4} \otimes \sigma_3 \Psi (x^{5}, x^{6}).
\ee Thus, its KK mode expansion is as follows: \be \label{5.6} \Psi (x^{\mu}, x^{5}, x^{6}) = \sum_{n_{1}=-\infty}^{\infty} \sum_{n_{2}=-\infty}^{\infty} \frac{1}{2 \pi R} \begin{pmatrix} \cos ( \frac{n_{1}x^{5} + n_{2}x^{6}}{R} ) \psi_{1L}^{(n_{1}, n_{2})}(x^{\mu}) + i\sin ( \frac{n_{1}x^{5} + n_{2}x^{6}}{R} ) \psi_{1R}^{(n_{1}, n_{2})}(x^{\mu}) \\ \cos ( \frac{n_{1}x^{5} + n_{2}x^{6}}{R} ) \psi_{2L}^{(n_{1}, n_{2})}(x^{\mu}) + i\sin ( \frac{n_{1}x^{5} + n_{2}x^{6}}{R} ) \psi_{2R}^{(n_{1}, n_{2})}(x^{\mu}) \\ i \sin ( \frac{n_{1}x^{5} + n_{2}x^{6}}{R} ) \psi_{3L}^{(n_{1}, n_{2})}(x^{\mu}) + \cos ( \frac{n_{1}x^{5} + n_{2}x^{6}}{R} ) \psi_{3R}^{(n_{1}, n_{2})}(x^{\mu}) \end{pmatrix}, \ee where there are degeneracies among the 4D fields because of the $Z_{2}$ orbifolding: \bea &&\psi_{1L, 2L}^{(-n_{1}, -n_{2})}(x^{\mu}) = \psi_{1L, 2L}^{(n_{1}, n_{2})}(x^{\mu}), \ \ \psi_{3R}^{(-n_{1}, -n_{2})}(x^{\mu}) = \psi_{3R}^{(n_{1}, n_{2})}(x^{\mu}) \nonumber \\ &&\psi_{1R, 2R}^{(-n_{1}, -n_{2})}(x^{\mu}) = - \psi_{1R, 2R}^{(n_{1}, n_{2})}(x^{\mu}), \ \ \psi_{3L}^{(-n_{1}, -n_{2})}(x^{\mu}) = - \psi_{3L}^{(n_{1}, n_{2})}(x^{\mu}). \label{5.7} \eea Similarly to the case of the model with a matter scalar, the mass-squared matrix, i.e., the squared Dirac operator, is calculated to be \bea &&\tilde{{\cal M}}_{n_{1}, n_{2}}^{2} = (D_{5}\Gamma_{5} + D_{6}\Gamma_{6})^{2} = \sum_{a, b = 5, 6} \Gamma_{a}\Gamma_{b}D_{a}D_{b} \nonumber \\ &&= \sum_{a, b = 5, 6} \Bigl\{ \frac{1}{2}\{\Gamma_{a}, \Gamma_{b}\} D_{a}D_{b} + \frac{1}{4}[\Gamma_{a}, \Gamma_{b}] [D_{a}, D_{b}]\Bigr\} \nonumber \\ &&= {\cal M}_{n_{1}, n_{2}}^{2} - g^{2} \Gamma_{5} \Gamma_{6} [A_{5}, A_{6}] \nonumber \\ &&= {\cal M}_{n_{1}, n_{2}}^{2} + \frac{i}{2}g^{2} (I_{4} \otimes \sigma_{3}) \cdot \begin{pmatrix} H_{1}H_{2}^{\dagger} - H_{2}H_{1}^{\dagger} & 0 \\ 0 & H_{1}^{\dagger}H_{2} - H_{2}^{\dagger}H_{1} \end{pmatrix}. 
\label{5.8} \eea We naively expect that the additional operator with the prefactor $I_{4} \otimes \sigma_{3}$, the contribution of the commutator $[D_{5}, D_{6}]$, i.e., of the ``tadpole" $F_{56}$, vanishes under the Tr in the evaluation of the effective potential. In fact, for nonzero KK modes, each component of $\Psi$ has both 4D chiralities, i.e., both $\pm 1$ eigenvalues of $\sigma_{3}$, and the sum of the eigenvalues of the additional operator just vanishes. For the KK zero-mode sector, however, $\psi_{1, 2}$ and $\psi_{3}$ are left-handed and right-handed 4D Weyl spinors, respectively. This means that $\psi_{1, 2}$ and $\psi_{3}$ have $-1$ and $+1$ eigenvalues of $\sigma_{3}$, respectively, and the sum of the eigenvalues is nonvanishing: \be \label{5.8'} {\rm Tr}\Bigl[ \begin{pmatrix} - I_{2} & 0 \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} H_{1}H_{2}^{\dagger} - H_{2}H_{1}^{\dagger} & 0 \\ 0 & H_{1}^{\dagger}H_{2} - H_{2}^{\dagger}H_{1} \end{pmatrix} \Bigr] = 2(H_{1}^{\dagger}H_{2} - H_{2}^{\dagger}H_{1}). \ee Thus, we take only the KK zero-mode into account when we evaluate the tadpole term. The effective potential due to the bubble diagram of the matter fermion is given, similarly to (\ref{4.8}), as \be \label{5.9} V_{eff}^{(f)} = - \frac{1}{2}\times 2 \int \frac{d^{4}p_{E}}{(2\pi)^{4}} \sum_{n_{1}=-\infty}^{\infty} \sum_{n_{2}=-\infty}^{\infty} {\rm Tr} \ \log (p_{E}^{2}I_{3} + \tilde{{\cal M}}_{n_{1}, n_{2}}^{2}). \ee Again, we easily get the quadratic terms of $H_{1, 2}$ by performing the Taylor expansion with respect to $H_{1, 2}$. The contribution of ${\cal M}_{n_{1}, n_{2}}^{2}$ in $\tilde{{\cal M}}_{n_{1}, n_{2}}^{2}$ is exactly the same as that in the case of the model with a scalar matter, except for the difference in the overall factor, and the additional tadpole contribution in $\tilde{{\cal M}}_{n_{1}, n_{2}}^{2}$ readily leads to the coefficient $b$.
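The trace identity (\ref{5.8'}) is easy to confirm numerically for random doublets (a minimal sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
H1 = rng.normal(size=2) + 1j * rng.normal(size=2)
H2 = rng.normal(size=2) + 1j * rng.normal(size=2)

# block-diagonal operator of (5.8), weighted by the zero-mode sigma_3
# eigenvalues diag(-1, -1, +1)
upper = np.outer(H1, H2.conj()) - np.outer(H2, H1.conj())   # H1 H2^dag - H2 H1^dag (2x2)
lower = np.vdot(H1, H2) - np.vdot(H2, H1)                   # H1^dag H2 - H2^dag H1 (scalar)
block = np.block([[upper, np.zeros((2, 1))],
                  [np.zeros((1, 2)), np.array([[lower]])]])
weighted_trace = np.trace(np.diag([-1, -1, 1]) @ block)

assert np.isclose(weighted_trace, 2 * lower)
```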
Namely, the fermionic contributions to the coefficients $a$ and $b$ in (\ref{3.8'}) are given as \bea && a^{(f)} = - \frac{g^{2}}{8\pi^{5}} \frac{1}{R^{2}} \sum_{(k_{1}, k_{2}) \neq (0, 0)} \frac{1}{(k_{1}^{2} + k_{2}^{2})^{2}} = - 1.1\times 10^{-3}\frac{1}{R^{2}}, \nonumber \\ && b^{(f)} = 2g^{2} \int \frac{d^{4}p_{E}}{(2\pi)^{4}} \frac{1}{p_{E}^{2}}. \label{5.10} \eea We realize that $b^{(f)}$ is quadratically UV-divergent, as we anticipated, and the condition (\ref{3.15}) is easily satisfied, although we need a renormalization procedure. We also note that the sign of $a^{(f)}$ is opposite to that of $a^{(s)}$, reflecting the difference in statistics. Thus, supposing we introduce $n_{s}$ matter scalars and $n_{f}$ matter fermions, the condition (\ref{3.9'}) requires [\,see (\ref{4.20}) and (\ref{5.10})\,] \be \label{5.11} n_{s} - 2n_{f} > 0. \ee We should note, however, that to make the analysis more realistic we also need to take into account the quantum corrections due to the gauge and Higgs fields, in addition to the contributions of the matter fields. In fact, the contributions to the coefficient $a$ from such bosonic states are expected to be positive, and the condition (\ref{3.9'}) may be satisfied without introducing any matter scalar fields. Now, one comment is in order. Similar discussions concerning the effective Higgs potential and the mass eigenvalues of physical scalars in the 6D U(3) GHU model with two Higgs doublets already exist in the literature \cite{ABQ, HNT}. In these works, however, the effective potential was evaluated only along the flat direction, $H_{1} \propto H_{2}$, and therefore the vacuum states obtained there differ from ours; the mass eigenvalues of the physical scalars were discussed by considering fluctuations of the scalar fields around the origin or around the vacuum state along the flat direction.
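The winding sum shared by (\ref{4.20}) and (\ref{5.10}) converges quickly and, for the square lattice assumed here, equals $4\zeta(2)\beta(2)$ with $\beta(2)$ Catalan's constant (a numerical sketch on our part; the overall factor $g^{2}$ is kept symbolic, so only the sum and the fixed ratio $a^{(f)}/a^{(s)} = -2$ are checked):

```python
import math

# truncated winding sum: sum over (k1, k2) != (0, 0) of 1/(k1^2 + k2^2)^2
K = 300
S = sum(1.0 / (k1*k1 + k2*k2)**2
        for k1 in range(-K, K + 1) for k2 in range(-K, K + 1)
        if (k1, k2) != (0, 0))

# closed form for the square lattice: 4 * zeta(2) * beta(2)
catalan = 0.915965594177219  # beta(2), Catalan's constant
S_exact = 4 * (math.pi**2 / 6) * catalan
assert abs(S - S_exact) < 1e-3

# a^(s) of (4.20) and a^(f) of (5.10) share this sum; their ratio is fixed
g2 = 1.0  # placeholder for g^2; it cancels in the ratio
a_s = g2 / (16 * math.pi**5) * S
a_f = -g2 / (8 * math.pi**5) * S
assert abs(a_f / a_s + 2) < 1e-12
```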
\section{SU(4) GHU Model with Two Higgs Doublets} Although the 6D SU(3) GHU model with two Higgs doublets has the attractive feature that the predicted Higgs mass satisfies $M_{H} \leq 2M_{W}$ at the leading order of the perturbative expansion, and even coincides with the observed value for a suitable choice of the parameter $a$, the predicted weak mixing angle is unrealistic: $\sin^{2} \theta_{W} = 3/4$. In this section, we very briefly discuss a model that can possibly account for both the observed Higgs mass and a realistic weak mixing angle, $\sin^{2} \theta_{W} = 1/4$, using a familiar unitary gauge group. The model is the 6D SU(4) GHU model with the $T^{2}/Z_2$ orbifold as the extra space. Because of the $Z_2$ orbifolding, the model now involves two Higgs doublets behaving as triplets of the SU(3) subgroup [\,Refer to the discussion in Section 4 for the SU(4) model with one Higgs doublet\,]. In fact, by a suitable assignment of $Z_2$-parities, we realize that the KK zero-modes of $A_{5, 6}$ behave as $3 + \bar{3}$, not $8$, of SU(3), as we will see below, thus leading to the successful prediction of the weak mixing angle. What we need to realize for the KK zero-modes of the gauge-Higgs sector are the following forms: \bea &&A_{\mu} = \begin{pmatrix} \frac{1}{2}\gamma_{\mu} - \frac{\sqrt{3}}{6}Z_{\mu} & \frac{1}{\sqrt{2}} W^{+}_{\mu} & 0 & 0 \\ \frac{1}{\sqrt{2}} W^{-}_{\mu} & \frac{\sqrt{3}}{3}Z_{\mu} & 0 & 0 \\ 0 & 0 & - \frac{1}{2}\gamma_{\mu} - \frac{\sqrt{3}}{6}Z_{\mu} & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} + X_{\mu}\frac{\sqrt{6}}{12} \ {\rm diag} \ (1, 1, 1, -3), \label{6.2a} \\ &&A_{5, 6} = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 & 0 & 0 & \phi_{1, 2}^{+} \\ 0 & 0 & 0 & \phi_{1, 2}^{0} \\ 0 & 0 & 0 & 0 \\ \phi_{1, 2}^{-} & \phi_{1, 2}^{0\ast} & 0 & 0 \end{pmatrix}, \label{6.2b} \eea where $X_{\mu}$ is the gauge boson of the extra U(1)$_X$.
The idea to realize the KK zero-modes shown above is to break SU(4) in two steps, SU(4) $\to$ SU(3)$\times$ U(1)$_X \to$ SU(2)$_{L} \times$ U(1)$_{Y} \times$ U(1)$_X$, by assigning different $Z_2$-parities $(+, +, +, -), \ (+, +, -, -)$, where $+$ and $-$ stand for $+1$ and $-1$, respectively, for the reflections at the two different fixed points, $(x^{5}, \ x^{6}) = (0, \ 0), \ (\pi R, \pi R)$, concerning the fundamental repr. of SU(4). Namely, we assign the $Z_2$ parities under the reflections about the two fixed points for the fundamental repr. as \be \label{6.3} \begin{pmatrix} (+, +) \\ (+, +) \\ (+, -) \\ (-, -) \end{pmatrix}. \ee Accordingly, the $Z_2$ parity assignments for the 4D gauge bosons and 4D scalars, i.e., the Higgs fields, are given as \bea &&A_{\mu} = \begin{pmatrix} (+, +) & (+, +) & (+, -) & (-, -) \\ (+, +) & (+, +) & (+, -) & (-, -) \\ (+, -) & (+, -) & (+, +) & (-, +) \\ (-, -) & (-, -) & (-, +) & (+, +) \end{pmatrix}, \label{6.4a} \\ &&A_{5, 6} = \begin{pmatrix} (-, -) & (-, -) & (-, +) & (+, +) \\ (-, -) & (-, -) & (-, +) & (+, +) \\ (-, +) & (-, +) & (-, -) & (+, -) \\ (+, +) & (+, +) & (+, -) & (-, -) \end{pmatrix}. \label{6.4b} \eea The calculation of the effective potential $V (H_{1}, H_{2})$ of the two Higgs doublets is exactly the same as in the case of the SU(3) model discussed in the previous section; therefore, the relation $M_{H} < 2M_{W}$ can be realized by introducing matter fermions belonging to the fundamental repr. of SU(4), \be \label{6.5} \Psi = \begin{pmatrix} \psi_{1} \\ \psi_{2} \\ \psi_{3} \\ \psi_{4} \end{pmatrix}, \ee whose components are given $Z_2$-parities in the same way as in (\ref{5.5}), with \be \label{6.6} P = {\rm diag} \ ((1, 1), \ (1, 1), \ (1, -1), \ (-1, -1)), \ee in accordance with (\ref{6.3}). Now the pair $(\psi_{2}, \psi_{4})$ couples with $\phi_{1, 2}^{0}$ and corresponds to the pair $(\psi_{2}, \psi_{3})$ in the SU(3) model.
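The parity patterns (\ref{6.4a}) and (\ref{6.4b}), and hence the zero-mode content of (\ref{6.2a}) and (\ref{6.2b}), follow mechanically from (\ref{6.3}); a numerical sketch (the entry-wise rule $p_{i}p_{j}$ for $A_{\mu}$ and $-p_{i}p_{j}$ for $A_{5, 6}$ is read off from the transformation laws analogous to (\ref{3.4})):

```python
import numpy as np

# Z2 parities of the SU(4) fundamental at the two fixed points, read off from (6.3)
p1 = np.array([1, 1, 1, -1])    # reflection about (x5, x6) = (0, 0)
p2 = np.array([1, 1, -1, -1])   # reflection about (x5, x6) = (pi R, pi R)

def signs(p, scalar=False):
    # adjoint fields transform as P A P^{-1} (4D gauge bosons) and
    # -P A P^{-1} (extra-space scalars), so entry (i, j) carries
    # parity p_i p_j, resp. -p_i p_j
    s = np.outer(p, p)
    return -s if scalar else s

# a KK zero mode survives iff both parities are +
gauge_zero  = (signs(p1) == 1) & (signs(p2) == 1)
scalar_zero = (signs(p1, True) == 1) & (signs(p2, True) == 1)

# gauge zero modes: the SU(2)_L block and the diagonal U(1)s, as in (6.2a)
expected_gauge = np.array([[1, 1, 0, 0],
                           [1, 1, 0, 0],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=bool)
# scalar zero modes: exactly the doublet entries of (6.2b)
expected_scalar = np.array([[0, 0, 0, 1],
                            [0, 0, 0, 1],
                            [0, 0, 0, 0],
                            [1, 1, 0, 0]], dtype=bool)
assert (gauge_zero == expected_gauge).all()
assert (scalar_zero == expected_scalar).all()
```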
\section{Summary} We discussed the scenario of gauge-Higgs unification (GHU), where the Higgs field is identified with the extra-space component of the higher-dimensional gauge field, as an interesting candidate for BSM physics. GHU models with a multi-dimensional extra space predict at the classical level the Higgs self-coupling $\lambda \sim g^{2}$ and a light Higgs with a mass of ${\cal O}(M_{W})$, similarly to the case of the MSSM. It has been known that the 6D SU(3) GHU model with one Higgs doublet in its low energy effective theory predicts an interesting relation $M_{H} = 2M_{W}$ at the leading order of the perturbative expansion \cite{SSSW, LMM}. In our previous paper \cite{LMM}, we demonstrated that the ratio of the Higgs mass to the weak scale is calculable as a UV-finite value even under the quantum correction and that it is possible to recover the observed Higgs mass, $M_{H} = 125$ GeV. We noticed, however, that a rather large quantum correction is needed to account for the difference between $2M_{W}$ and the observed mass. There is another interesting prediction of the GHU scenario, namely, the prediction of the weak mixing angle, i.e., the mass ratio of the weak gauge bosons. Unfortunately, in the minimal SU(3) GHU model with a simple gauge group \cite{SSSW, KLY}, the predicted weak mixing angle is unrealistic: $\sin^{2}\theta_{W} = 3/4$. We thus addressed the question of whether there exist GHU models that provide more realistic predictions of the Higgs mass and the weak mixing angle at the leading order of the perturbative expansion. Namely, we investigated in the scheme of 6D GHU whether the predictions of the Higgs mass and the weak mixing angle become closer to or coincide with the observed values for suitable choices of the gauge group and the orbifolding of the extra space. We first discussed the weak mixing angle. Using the formula (\ref{0.3}), we studied which repr.
of the minimal group SU(3), which is a subgroup of the adopted gauge group in general, the Higgs doublet should belong to in order to realize the realistic prediction, $\sin^{2}\theta_{W} = 1/4$. We showed that, among the repr.s up to the 2nd rank tensor, triplet and sextet repr.s of SU(3) lead to the realistic prediction. The decomposition of the adjoint repr. of $G_2$ under the subgroup SU(3) contains the triplet repr. and this is why the gauge group can predict $\sin^{2}\theta_{W} = 1/4$ \cite{1979Manton, CGM}. Next, we investigated the 6D Sp(6) GHU model with one Higgs doublet, as a prototype model whose adjoint repr. contains the sextet repr. of SU(3), the new possibility to get the realistic weak mixing angle. We have found that, although the weak mixing angle and the mass ratio of the weak gauge bosons are confirmed to be realistic as we expected, the predicted Higgs mass satisfies $M_{H} = 2M_{W}$, just as in the case of the 6D SU(3) GHU model with one Higgs doublet. We also briefly investigated the 6D SU(4) model with one Higgs doublet, as another possibility containing the triplet repr. of SU(3) with the familiar unitary gauge group. We again found that the predicted Higgs mass is $2M_{W}$, while keeping the successful weak mixing angle. In Sections 5 and 6, we discussed 6D GHU models with two Higgs doublets taking the choice of $Z_2$ orbifolding for the extra space, hoping that the prediction of the Higgs mass becomes more realistic than those in the models with one Higgs doublet. As the minimal model for such a purpose, in Section 5, we first investigated the 6D SU(3) model with two Higgs doublets in some detail. We first gave a general argument on the form of the effective potential for two Higgs doublets $H_{1, 2}$ relying on the higher dimensional gauge symmetry and the symmetry of the torus as the extra space. After the minimization of the potential, we calculated the mass eigenvalues for the five physically remaining scalar particles. 
Among other things, we have found that the prediction for the Higgs mass at the leading order of perturbation theory is \be \label{7.1} M_{H} \leq 2M_{W}; \ee therefore, it is possible to realize the observed 125 GeV for a suitable choice of the parameter $a$ in the potential (\ref{3.8'}), which is calculable as a function of $R$, the size of the extra space, and the gauge coupling $g$. This is in clear contrast to the case of the MSSM, where the Higgs mass satisfies $M_{H} \leq M_{Z} |\cos 2\beta| \leq M_{Z}$ at the classical level and has no chance of being in agreement with the observed value. Both parameters $a$ and $b$ in the potential (\ref{3.8'}) are radiatively induced, and we performed concrete calculations of the quantum corrections using the background field method. We realized that the parameter $b$, which corresponds to the contribution of the ``tadpole" term localized at the fixed points and plays an important role in making the model realistic, is induced only in the theory with fermions, through the commutator of covariant derivatives $[D_{5}, \ D_{6}]$. In Section 6, we very briefly investigated the 6D SU(4) model with two Higgs doublets, for the purpose of improving the prediction of the weak mixing angle while keeping the interesting feature $M_{H} \leq 2M_{W}$ obtained from the general argument on the Higgs potential in Section 5. A central issue in the GHU scenario is how to generate the necessary hierarchy between the weak scale $M_{W}$ and the compactification scale $M_{c} = \frac{1}{R}$ ($R$: the size of the extra dimension). In the simplest 5D GHU models (formulated on a flat space-time), the radiatively induced Higgs potential, being described by the Wilson-loop phase, is completely UV-finite and also periodic in the Higgs field with a period $\sim \frac{1}{gR}$.
Thus, writing the Higgs VEV as $v = \frac{\alpha}{gR}$ with a parameter $\alpha$, one finds that $\alpha$ is usually of ${\cal O}(1)$ and the weak scale $M_{W} \sim gv = \frac{\alpha}{R}$ is comparable to $\frac{1}{R}$. This means that $M_{c} \sim M_{W}$, unless $\alpha$ becomes small for some reason (e.g., by the introduction of many exotic matter fields belonging to the adjoint repr. of the gauge group \cite{KLY}), and leads to an immediate contradiction with the recent data from LHC experiments, which have not seen any evidence of BSM physics. In the GHU models with multiple extra dimensions discussed in this paper, however, the situation is different. Namely, although the ratio $\frac{M_{H}}{M_{W}}$ is a finite, calculable prediction, $M_{W}$ itself is not correlated with $M_{c}$. The essential difference from the case of 5D GHU is that the radiatively induced VEV $v$, and hence the weak scale $M_{W}$, is UV-divergent due to the divergent coefficient $b$, as seen in (\ref{3.17}) and (\ref{3.18}). Hence, the VEV, and therefore $M_{W}$, needs to be renormalized and is not calculable. Instead, $M_{c}$ is fixed so as to recover the observed Higgs mass: as seen in (\ref{3.53}), (\ref{4.20}), and (\ref{5.10}), the observed Higgs mass is realized for $M_{c} = \frac{1}{R}$ of a few TeV or so, rather than of order $M_{W}$. Note that the UV divergence in the quadratic term of the Higgs potential has its origin in the fact that our vacuum state is not along the flat direction: $[\langle H_{1} \rangle, \langle H_{2} \rangle] \neq 0$. This work is the first step toward a truly realistic GHU model with successful mass ratios of the Higgs boson and the weak gauge bosons, and there remain issues to be settled. Concerning the prediction of the weak mixing angle, since Sp(6) and SU(4) are groups of rank 3, the unnecessary additional U(1) gauge boson needs to be removed.
This may be realized by invoking the anomaly of the associated gauge symmetry or by putting an additional mass term for the gauge boson at the orbifold fixed points. We also note that $q = 1$ and $0$, necessary to realize the realistic prediction $\sin^{2}\theta_{W} = 1/4$, implies that all the components of the SU(3) triplet have integer charges, as seen from (\ref{0.2}). This means that all the components of the repr.s of the theory have integer charges, and quarks cannot be assigned to any repr. of the bulk gauge symmetry. We then encounter the problem of how the quark fields are incorporated into the theory. One possibility may be to put the quark fields on the fixed points of the orbifold, as was proposed in \cite{CGM}. Concerning the prediction of the Higgs mass, the values of the coefficients $a$ and $b$ in the potential (\ref{3.8'}), which play crucial roles in the prediction of the Higgs mass and in realizing the spontaneous symmetry breaking, are sensitive to the content of the fields contributing to the quantum corrections of $a$ and $b$. Thus, we need a full calculation of the quantum correction, including the contributions of the gauge-Higgs sector in addition to those of the matter fields, before we obtain a conclusive prediction of the Higgs mass as a function of the compactification scale. On the other hand, we expect that the desirable feature (\ref{7.1}) of the GHU model itself, derived from the argument based on the general form of the effective potential, will not change. Finally, in the attractive case of $M_{H} < 2M_{W}$, the field identified as the Higgs, $\tilde{h}$, is not the field developing the VEV $v$, although it has Yukawa couplings with matter fermions through the higher-dimensional gauge interaction. Thus, the Higgs boson does not behave like the one in the SM. Let us note that, also in the MSSM, the lighter CP-even state does not coincide with the field developing the VEV.
Thus, the situation mentioned above in our two-doublet models may change upon introducing degrees of freedom corresponding to the two angles $\alpha$ and $\beta$ of the MSSM (denoting the relative weights of the two doublets $H_{U}$ and $H_{D}$ in the CP-even mass eigenstates and in the VEV). Such a modification of the model will be possible if we adopt an ``asymmetric torus" with $|\vec{l}_{1}| \neq |\vec{l}_{2}|$ and/or an arbitrary relative angle between the two lattice vectors $\vec{l}_{1}$ and $\vec{l}_{2}$, as was discussed in the literature \cite{ABQ, HNT}. We leave these issues for a future publication. \subsection*{Acknowledgments} The work of C.S.L. was supported in part by the Grant-in-Aid for Scientific Research of the Ministry of Education, Science and Culture, Nos. 15K05062 and 23104009.
\section{Introduction} The characteristic modes of black holes and black branes are eigenmodes of these systems which convey important information about the background geometry. These systems are intrinsically dissipative (energy flows to the event horizon and/or towards the spatial infinity) and therefore their eigenmodes are not stationary. One calls the eigenmodes of these dissipative systems quasinormal modes (QNMs) \cite{Berti:2009kk,Ferrari:2007dd,Kokkotas:1999bd,Nollert:1999ji}. Within the anti-de Sitter/Conformal Field theory (AdS/CFT) duality \cite{Maldacena:1997re,Witten:1998qj, Gubser:1998bc}, these modes serve as an important tool for determining the near-equilibrium properties of strongly coupled quantum field theories, in particular their transport coefficients such as viscosity, conductivity and diffusion constants \cite{Berti:2009kk,Son:2007vk}. Recently, QNMs have also been used to study properties like the ``meson melting'' in D3/D7-brane models \cite{Hoyos:2006gb,Myers:2007we,Myers:2008cj} and the spectrum of collective excitations of holographic superconductors \cite{Amado:2009ts,Cubrovic:2009ye}. It is thus of interest to understand the spectrum of black holes and black branes in asymptotically anti-de Sitter (AdS) spacetimes. Recent studies by Festuccia and Liu (henceforth FL) have shed light on this issue, predicting the existence of long-lived modes in asymptotically AdS black hole (BH) geometries \cite{Festuccia:2008zx}. Their work shows that the eikonal limit in AdS depends sensitively on the relative size of the black hole. For small black holes, they find exponentially long-lived modes, which can be thought of as modes trapped inside the potential barrier. 
If we write $\omega=\omega_R-i\omega_I$ for the typical ``energy'' eigenvalue, these modes take the Bohr-Sommerfeld form \begin{equation} 2i\int_{r_b}^{\infty}Q\,dr=\pi\left (2n+\frac{5}{2}\right)\, ,\quad n=0,\,1,\,...\label{bohrsommerfeld} \end{equation} where \begin{equation} Q\equiv \frac{1}{rf}\sqrt{(l+1/2)^2f-r^2\omega_R^2}\,, \end{equation} $f$ is the black-hole horizon function (see Eq. (\ref{lineelementads}) below), and $l$ is the angular momentum of the perturbation. Accordingly, their lifetime is dictated by a ``tunneling probability'' of the form $\omega_I\propto \exp{(-2\int_{r_c}^{r_b}Q\,dr)}$, where $r_b$ and $r_c<r_{b}$ are two real zeros (turning points) of $Q$, and the proportionality coefficient is given in \cite{Berti:2009wx}. This prediction is supported by numerical studies for small AdS black holes \cite{Berti:2009wx}. On the other hand, the nature of the long-lived modes for large black holes is completely different, since no trapped modes are allowed in the large black-hole regime. A consequence of this fact is that the ``Breit-Wigner resonance method'' used to investigate small AdS black holes fails to give reliable results in the large black-hole regime \cite{Berti:2009wx}. These modes are also expected to be long-lived \cite{Festuccia:2008zx} (see Section \ref{eikonal} below). Thus these modes will presumably dominate the BH's response to arbitrary perturbations, and hence the thermalization timescale in the dual CFT. Since their existence may be very relevant for the AdS/CFT conjecture, we decided to investigate these long-lived modes numerically. We use two well-established methods to study the QNMs of black holes. One is the series solution expansion \cite{Horowitz:1999jd,Cardoso:2001bb,Cardoso:2003cj}, which outputs the characteristic frequencies directly and is especially well suited for large black holes.
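The turning-point structure behind Eq.~(\ref{bohrsommerfeld}) is easy to exhibit numerically. The sketch below locates the two real zeros $r_c<r_b$ of the radicand of $Q$ for a small $d=4$ black hole; the parameters $R=1$, $r_0=0.1$, $l=10$ and the trial value of $\omega_R$ are illustrative choices of ours, not taken from the paper.

```python
import math

# Small Schwarzschild-AdS4 black hole (R = 1, r0 = 0.1) with l = 10 and a
# trial omega_R chosen inside the trapping region, below the barrier peak.
R, r0, l = 1.0, 0.1, 10
L = l + 0.5
omega = math.sqrt(5.0) * L      # omega_R^2 / (l+1/2)^2 = 5

def f(r):
    return 1.0 + r*r/R**2 - r0/r

def h(r):
    # h(r) = (l+1/2)^2 f - r^2 omega_R^2: the radicand of Q; its zeros
    # are the classical turning points r_c and r_b
    return L*L*f(r) - r*r*omega*omega

# horizon radius: unique positive root of f, found by a crude outward march
rh = 0.05
while f(rh) < 0.0:
    rh += 1e-4

# scan outward from just above the horizon for sign changes of h
roots, r, step = [], rh + 1e-3, 1e-3
while r < 2.0:
    if h(r) * h(r + step) < 0.0:
        roots.append(r)
    r += step

assert len(roots) == 2          # exactly two real turning points
r_c, r_b = roots
assert r_c < r_b
```

For these parameters the oscillation region lies at $r>r_b$ (the AdS boundary acts as a confining box), with the potential barrier between $r_c$ and $r_b$.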
By using time-domain methods, in particular the scattering of Gaussian wavepackets \cite{Wang:2000dt,Wang:2004bv} (see also \cite{Chan:1999sc}), we confirm and expand the series solution results and furthermore show that these modes {\it are} excited in physically interesting situations. As a by-product, we confirm once more that there are no late-time tails in this geometry. \section{\label{formal}Formulation of the problem} \subsection{The background spacetime} BHs in asymptotically AdS spacetimes form a class of solutions which is interesting from a theoretical point of view and central to the study of strongly coupled field theories at finite temperature in the gauge/gravity duality framework. We are interested in a simple class of non-rotating, uncharged $d$-dimensional Schwarzschild-AdS black holes with line element \begin{equation} ds^{2}= -fdt^{2}+f^{-1}dr^{2}+r^{2}d\Omega_{d-2}^2\,, \label{lineelementads} \end{equation} where $f(r) = 1 + r^2/R^2 - r_0^{d-3}/r^{d-3}$ and $d\Omega_{d-2}^2$ is the metric of the unit $(d-2)$-sphere. The AdS curvature radius $R$ is related to the cosmological constant $\Lambda$ by $R^2 = -(d-2)(d-1)/2\Lambda$. The parameter $r_0$ is proportional to the mass $M$ of the black hole: $M=(d-2)A_{d-2}r_0^{d-3}/16\pi$, where $A_{d-2} = 2 \pi^{(d-1)/2}/\Gamma{\left[(d-1)/2\right]}$. The well-known Schwarzschild geometry corresponds to $R\to \infty$. The black-hole horizon radius $r=r_+$ is the (unique) positive real root of $f(r)=0$. In particular, we want to focus here on the large black hole regime, $r_+/R \rightarrow \infty$. In this case, the above geometry goes over to a $d$-dimensional plane-symmetric spacetime (black brane) \cite{Horowitz:1999jd}, which is also an exact solution of Einstein's equations \cite{Lemos:1994fn,Huang:1995zb, Lemos:1994xp,Cai:1996eg}.
The black brane has the line element \begin{equation}\label{fundo} ds^2=-f(r)dt^{2} +r^2\sum^{d-2}_{i=1}dx^{i}dx_{i}+\frac{1}{ f(r)}\;dr^{2}\,, \end{equation} where $f(r)=r^{2}/R^{2}-r_{+}^{d-1}/(R^{2}r^{d-3})$. The Hawking temperature of the black hole (\ref{lineelementads}) is \begin{equation}\label{hawkingTemp} T=\frac{(d-1)r_{+}^2+(d-3)R^2}{4 \pi r_+ R^{2}}\,, \end{equation} and it reduces to $T=(d-1)r_{+}/4 \pi R^{2}$ for large black holes, which is the Hawking temperature of the black brane (\ref{fundo}). \subsection{Gravitational perturbations} Gravitational perturbations in these backgrounds were considered in Refs. \cite{Cardoso:2001bb,Miranda:2005qx,Miranda:2007bv} for $d=4$ and in \cite{Kodama:2003jz,Ishibashi:2003ap} for higher dimensions. In a generic number of dimensions, the gravitational perturbations can be divided into three different types: the tensor-, vector- and scalar-type perturbations. These can all be reduced to a master wave equation of the form \begin{equation} f^2\frac{d^2\Psi}{dr^2}+ff'\frac{d\Psi}{dr}+ \left(\omega^2-V(r)\right )\Psi=0\,,\label{waveeq} \end{equation} where the potential $V(r)$ depends both on the type of perturbation and on the basis functions used to separate the coordinates of the ($d-2$)-dimensional hypersurface of constant $r$ and $t$. For instance, for tensor-type perturbations in the black-hole background (\ref{lineelementads}), \begin{equation} \frac{V}{f}=\frac{l(l+d-3)}{r^2}+\frac{(d-2)(d-4)}{4r^2}f+\frac{(d-2)f'}{2r} \,.\label{potSAdS} \end{equation} Here, the angular number $l$ is related to the eigenvalue of the hyper-spherical functions used to factor out the dependence on the $(d-2)$-spherical angles.
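As a quick numerical cross-check (ours, with illustrative parameter values), the temperature formula (\ref{hawkingTemp}) agrees with the surface-gravity expression $T=f'(r_{+})/4\pi$ and approaches the black-brane value for $r_{+}\gg R$:

```python
import math

# Check T = f'(r_+)/(4 pi) against Eq. (hawkingTemp) for several dimensions,
# with illustrative parameters R = 1, r_+ = 2.
R, r_plus, eps = 1.0, 2.0, 1e-6

for d in (4, 5, 6, 7):
    # fix r0 so that r_plus is the horizon: f(r_plus) = 0
    r0d = r_plus**(d - 3) * (1.0 + r_plus**2 / R**2)   # this is r0^(d-3)

    def f(r):
        return 1.0 + r*r/R**2 - r0d / r**(d - 3)

    T_formula = ((d - 1)*r_plus**2 + (d - 3)*R**2) / (4*math.pi*r_plus*R**2)
    T_numeric = (f(r_plus + eps) - f(r_plus - eps)) / (2*eps) / (4*math.pi)
    assert abs(T_formula - T_numeric) < 1e-5

# for r_+ >> R the formula approaches the brane value (d-1) r_+ / (4 pi R^2)
d, big = 5, 1.0e3
T_bh = ((d - 1)*big**2 + (d - 3)*R**2) / (4*math.pi*big*R**2)
T_brane = (d - 1)*big / (4*math.pi*R**2)
assert abs(T_bh/T_brane - 1.0) < 1e-5
```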
For the black-brane background (\ref{fundo}), the separation of the trivial dimensions is achieved through the Ansatz $e^{i\vec{q}.\vec{x}}$ and one ends up with the large $r_+/R$ limit of (\ref{potSAdS}), \begin{equation}\label{pott} \frac{V}{f}=\left[\frac{q^2}{r^2}+\frac{(d-2)(d-4)}{4r^2}f+\frac{(d-2)f'}{2r}\right]. \end{equation} The explicit form of $V(r)$ for the vector- and scalar-type perturbations can be found in Refs. \cite{Cardoso:2001bb,Kodama:2003jz,Ishibashi:2003ap}. The potential for tensor-type gravitational perturbations is equal to the potential for scalar-field perturbations, so our results are also valid for spin-0 fields. We will be interested in the large-$l,q$ limit of the QNM frequencies, i.e., characteristic $\omega$'s for which the solutions to (\ref{waveeq}) satisfy the appropriate boundary conditions. Note that in this limit one can formally identify $q$ with $l$ in the black hole/black brane spacetimes. Hereafter we will always refer to $q$, with the understanding that the replacement $q\to l$ describes large black holes. An important characteristic of classical field evolutions on asymptotically AdS spacetimes is the variety of choices for the boundary conditions at spatial infinity. In general, these can be Dirichlet, Neumann or Robin boundary conditions. We therefore need to establish an objective criterion before choosing a specific condition. In the AdS/CFT context, a natural criterion is that the QNM frequencies of a certain field correspond to poles of two-point correlation functions of the dual operator in the boundary field theory \cite{Berti:2009kk,Nunez:2003eq,Kovtun:2005ev,Miranda:2008vb}. When we consider a variable $\Psi$ such that the master equation for the gravitational perturbations is of the form (\ref{waveeq}), in general the Dirichlet condition at spatial infinity is the `correct' boundary condition.
There is only \textit{one} exception: for scalar-type perturbations in four spacetime dimensions, the boundary condition that leads to QNM frequencies corresponding to poles of retarded correlation functions is of Robin type (see Refs. \cite{Friess:2006kw,Michalogiorgakis:2006jc} for a discussion in $d=4,5$ and \cite{Morgan:2009} for an arbitrary number of dimensions). \subsection{\label{eikonal}Long-lived modes in the eikonal limit: analytical prediction} In asymptotically AdS spacetimes the eikonal limit is especially interesting, since large-$q$ modes can be very long-lived \cite{Horowitz:1999jd,Festuccia:2008zx}. A WKB analysis suggests that for tensor-type gravitational perturbations (and therefore also for scalar fields) and $r_+/R\gg 1$ \cite{Festuccia:2008zx}, the following asymptotic behavior holds \begin{eqnarray} R\,\omega^{\rm FL}&=&q+ \Pi_n \left (\frac{r_+}{R}\right )^ {\frac{2d-2}{d+1}}\,q^ {-\frac{d-3}{d+1}} \,, \label{eikonalads}\\ \Pi_n &\equiv & \left (\sqrt{\pi}\left [\frac{d+1}{2}+2n\right ]\, \frac{\Gamma\left(\frac{3d-1}{2d-2}\right)} {\Gamma\left(\frac{1}{d-1}\right)}\right ) ^{\frac{2d-2}{d+1}}e^{-\frac{2i\pi}{d+1}}\,,\nonumber \end{eqnarray} as $q\rightarrow \infty$. Thus large-$q$ modes are very long-lived, and they could play a prominent role in the BH's response to generic perturbations. This is at variance with the asymptotically flat case, where the damping timescale is roughly constant as $q$ varies. Notice also that the scaling with the BH size differs from that of the weakly- and highly-damped modes. \section{\label{sec:numerics}Quasinormal frequencies} \subsection{Methods} \begin{figure}[ht] \begin{tabular}{c} \epsfig{file=tentempo.eps,width=6cm,angle=270} \end{tabular} \caption{Typical time-domain evolution of a Gaussian wavepacket resulting in ringdown (top to bottom: $d=4,5,6$).
Here we take $\psi (u,-30) = 0$, $\psi (0,v) = \exp{\left[-(v -25)^2/18\right]}$ and $q=5$, where $u,v$ are standard null coordinates \cite{Wang:2000dt,Wang:2004bv}. The signal is measured at $r_{*}/R=-1$, where $dr/dr_{*}=f(r)$ and spatial infinity is at $r_*=0$. \label{fig:timeevolution}} \end{figure} We use two conceptually different methods to determine the gravitational QNM frequencies of the spacetimes \eqref{lineelementads} and \eqref{fundo}, and both methods yield consistent results, within the expected error bars. The first consists of a series expansion method \cite{Horowitz:1999jd,Berti:2009kk}, which reduces the problem to finding roots of a polynomial. This method is well suited to large black holes and black branes, though its convergence properties worsen for large wavenumber $q$. In fact, for larger dimensions ($d>6$) and higher overtones ($n>1$ or $2$) problems with the convergence of the series solution arise even for intermediate wavenumber values ($Rq/r_{+}\sim 10$). The second method employed in this work consists of a direct time evolution in these backgrounds \cite{Wang:2000dt,Wang:2004bv}. A particular example is shown in Figure \ref{fig:timeevolution}, which shows the time development of a Gaussian wavepacket in an $r_+/R=1$ black brane. The ringdown is characterized by the decay timescale and ringing frequency, which can be extracted directly from the slope and frequency of the signal above. Equivalently, we characterize the QNMs by a complex frequency $\omega=\omega_R-i\omega_I$. Time-evolution methods cannot determine very accurately which overtone is dominating the response, though experience has shown that the fundamental mode seems to be more excited than all others and, by definition, decays much slower \cite{Berti:2009kk}. Most importantly, the scattering of wavepackets shows that weakly-damped modes {\it are} excited in physically interesting situations.
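The FL prediction, Eq.~(\ref{eikonalads}), is straightforward to evaluate numerically. The sketch below (with illustrative values of $d$, $n$, $q$ and $r_{+}/R$, chosen by us) confirms the expected qualitative features: the predicted modes are damped, ${\rm Im}\,\omega<0$, and the damping vanishes as $q\to\infty$.

```python
import cmath
import math

def Pi_n(d, n):
    """The prefactor Pi_n of the FL eikonal formula (Eq. (eikonalads))."""
    mag = (math.sqrt(math.pi) * ((d + 1)/2 + 2*n)
           * math.gamma((3*d - 1)/(2*d - 2)) / math.gamma(1/(d - 1)))
    return mag**((2*d - 2)/(d + 1)) * cmath.exp(-2j*math.pi/(d + 1))

def omega_FL(d, n, q, x):
    """R*omega of the FL prediction; x = r_+/R."""
    return q + Pi_n(d, n) * x**((2*d - 2)/(d + 1)) * q**(-(d - 3)/(d + 1))

for d in (4, 5, 6):
    w = omega_FL(d, n=0, q=100, x=1.0)
    assert w.real > 100.0            # positive subleading correction to Re(omega)
    assert w.imag < 0.0              # damped mode: omega = omega_R - i*omega_I
    # the damping decreases with q: long-lived modes in the eikonal limit
    assert abs(omega_FL(d, 0, 1000, 1.0).imag) < abs(w.imag)
```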
Although not the direct focus of this work, our numerical results show no sign of a power-law tail at late stages, confirming earlier predictions \cite{Horowitz:1999jd,Ching:1995tj}. \subsection{Numerical Results} \begin{figure}[ht] \begin{tabular}{c} \epsfig{file=grafreal.eps,width=6cm,angle=270} \\ \epsfig{file=grafima.eps,width=6cm,angle=270} \end{tabular} \caption{Numerical results for the fundamental scalar-field (tensor-type) QNM frequencies of an $r_+/R=1$ black brane. Upper panel: real component $\omega_R$ (top to bottom are $d=6,5,4$). Lower panel: imaginary component $\omega_I$ (top to bottom for $q>10$ corresponds to $d=4,5,6$). Dotted lines are the analytical prediction (\ref{eikonalads}), corrected by a prefactor $a$ shown in Table \ref{tab:summary}. \label{fig:SAdSs}} \end{figure} In Figure \ref{fig:SAdSs} we show numerical results for scalar-field (tensor-type gravitational) perturbations of an $r_+/R=1$ black brane. Similar results hold for large black holes. Low-$q$ results for $d=4,5$ are well known in the literature (see Ref. \cite{Berti:2009kk} and references therein), while the higher-dimensional cases are discussed in detail in Ref. \cite{Morgan:2009}: both $\omega_R$ and $\omega_I$ are almost independent of $q$ (or $l$ for large black holes) in this regime. For wavenumbers $q \gg r_+/R$, the qualitative behavior changes: $\omega_R$ grows linearly while $\omega_I$ decreases with $q$. Furthermore, it is also clear from Fig. \ref{fig:SAdSs} that both the sub-leading term in $\omega_R$ and the leading term in $\omega_I$ scale as a power of $q$ (or $l$). This power can be read directly from the slope of the curves of Figure \ref{fig:SAdSs} at large $q$. To investigate this further, we parameterize the numerical results by \begin{equation} R\,\omega_{R}=q+\alpha_{R}\,q^{-\beta_{R}}\,, \qquad R\,\omega_{I}=\alpha_{I}\,q^{-\beta_{I}}\,, \end{equation} and we extract $\beta_{R,I}$ by a least-squares fit.
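The least-squares extraction of $\beta_{R,I}$ can be illustrated on synthetic data; the amplitude, exponent and noise level below are our own illustrative choices, mirroring the $d=4$ tensor-type value $\beta=(d-3)/(d+1)=0.2$.

```python
import math
import random

# Synthetic data omega_I = alpha * q^(-beta) with small multiplicative noise.
random.seed(1)
alpha_true, beta_true = 1.8, 0.2
qs = [float(q) for q in range(20, 200, 5)]
ws = [alpha_true * q**(-beta_true) * (1 + 0.001*random.gauss(0, 1)) for q in qs]

# linear least-squares fit of log(omega_I) = log(alpha) - beta*log(q)
xs = [math.log(q) for q in qs]
ys = [math.log(w) for w in ws]
n = len(xs)
xbar, ybar = sum(xs)/n, sum(ys)/n
slope = (sum((x - xbar)*(y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar)**2 for x in xs))
beta_fit = -slope

assert abs(beta_fit - beta_true) < 0.01   # exponent recovered from the slope
```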
We obtain the values listed in Table \ref{tab:exponent}, where we also show the prediction by FL, i.e., $\beta_{R,I}=(d-3)/(d+1)$. \begin{table}[h] \caption{\label{tab:exponent}Best fit to exponents $\beta_{R,I}$, and FL's prediction, $\frac{d-3}{d+1}$.} \begin{tabular}{ccccccccccc} \hline Type & $n$ $\;\;$ &\multicolumn{3}{c}{$4d$}&\multicolumn{3}{c}{$5d$} &\multicolumn{3}{c}{$6d$}\\ & &$\beta_R$&$\beta_I$&FL$\;\;\;$& $\beta_R$&$\beta_I$&FL$\;\;\;$& $\beta_R$&$\beta_I$&FL\\ \hline \hline\\ &0$\;\;$&0.20&0.20&0.20$\;\;\;$&0.33&0.33&0.33$\;\;\;$&0.43&0.43 &0.43 \\ &1$\;\;$&0.21&0.20&0.20$\;\;\;$&0.34&0.33&0.33$\;\;\;$&0.43&0.43 &0.43\\ tensor &2$\;\;$&0.21&0.20&0.20$\;\;\;$&0.34&0.33&0.33$\;\;\;$&0.43&0.43 &0.43\\ &3$\;\;$&0.22&0.19&0.20$\;\;\;$&0.34&0.33&0.33$\;\;\;$&0.43&0.42 &0.43\\ &4$\;\;$&0.24&0.20&0.20$\;\;\;$&0.34&0.32&0.33$\;\;\;$&0.43&0.41 &0.43\\\\\hline\\ &0$\;\;$&0.19&0.20&0.20$\;\;\;$&0.33&0.34&0.33$\;\;\;$&0.42&0.44 &0.43\\ vector &1$\;\;$&0.20&0.20&0.20$\;\;\;$&0.33&0.33&0.33$\;\;\;$&0.43&0.43 &0.43\\ &2$\;\;$&0.20&0.20&0.20$\;\;\;$&--&--&0.33$\;\;\;$&--&--&0.43 \\\\\hline\\ & 0$\;\;$ & 0.17&0.21 & 0.20 $\;\;\;$ & 0.30&0.31 & 0.33 $\;\;\;$& 0.39 & 0.41&0.43 \\ scalar & 1$\;\;$ & 0.16&0.22 & 0.20 $\;\;\;$ & 0.31&0.35 & 0.33 $\;\;\;$ & -- & --&0.43 \\ & 2$\;\;$ & 0.19&0.21 & 0.20 $\;\;\;$& 0.31&0.34 & 0.33 $\;\;\;$ & -- & --&0.43 \\\hline \end{tabular} \end{table} Our numerical results are consistent with a $q^{-(d-3)/(d+1)}$ dependence of the characteristic frequencies, not only for the dimensions shown in Table \ref{tab:exponent}, but also for $d=7$, $8$ and $9$. Furthermore, we computed the same modes for a $r_+/R=100$ black hole, and to numerical accuracy we get the same results after a rescaling by $\left(\frac{r_+}{R}\right )^{\frac{2d-2}{d+1}}$ is performed. Thus, our results are also highly consistent with the functional dependence on $r_+,q$ as given by equation (\ref{eikonalads}). This agreement is nicely illustrated in Fig. \ref{fig:SAdSs}. 
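The quoted FL exponents and the $r_{+}$ rescaling used above amount to simple arithmetic, which we spell out here for convenience (a trivial check, ours):

```python
# FL exponents beta = (d-3)/(d+1) as quoted in Table (tab:exponent).
fl_column = {4: 0.20, 5: 0.33, 6: 0.43}
for d, quoted in fl_column.items():
    assert abs((d - 3)/(d + 1) - quoted) < 0.005

# rescaling of the subleading term between r_+/R = 1 and r_+/R = 100:
# the correction scales as (r_+/R)^((2d-2)/(d+1))
for d in (4, 5, 6):
    factor = 100.0**((2*d - 2)/(d + 1))
    assert factor > 100.0   # exponent exceeds 1 for d > 3
```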
In a log-log plot, the analytical result predicts a line with slope $-\frac{d-3}{d+1}$, which overlaps very well with the numerical results for large $q$. We now {\it assume} the power-law behavior (\ref{eikonalads}) in $q$ and $r_+$, and fit the numerical results to the following function \begin{equation} R\,\omega^{\rm Num}=1+ \left(a_R\,{\rm Re}[\Pi_n]+i a_I\,{\rm Im}[\Pi_n] \right)\left (\frac{r_+}{R}\right )^{\frac{2d-2}{d+1}}\,q^{-\frac{d-3}{d+1}} \,, \end{equation} thereby testing the prefactor in (\ref{eikonalads}). If Eq.(\ref{eikonalads}) captures correctly all of the features of these modes, then $a_R\approx a_I \approx 1$. Table \ref{tab:summary} summarizes our main results, with the correction factors to (\ref{eikonalads}) for each dimension number $d$ and gravitational sector. The results in Table \ref{tab:summary} are strong indicators that Eq.(\ref{eikonalads}) does {\it not} account for the correct quantitative behavior of these weakly-damped modes. \begin{table}[h] \caption{\label{tab:summary}Correction factors to the analytical formula \eqref{eikonalads}.} \begin{tabular}{cccccccc} \hline Type & $n$ $\quad$ &\multicolumn{2}{c}{$4d$}& \multicolumn{2}{c}{$5d$} &\multicolumn{2}{c}{$6d$}\\ & & $a_R$ & $a_I$ $\quad$& $a_R$ & $a_I$ & $a_R$ & $a_I$\\ \hline \hline\\ & 0$\quad$ & 1.83 & 1.82 $\quad$& 3.00 & 2.99$\quad$ & 4.56 &4.55 \\ & 1$\quad$ & 1.86 & 1.85 $\quad$& 3.12 & 3.10 $\quad$& 4.82 &4.80\\ tensor& 2$\quad$ & 1.87 & 1.86$\quad$ & 3.15 & 3.13$\quad$ & 4.86 &4.90\\ & 3$\quad$ & 1.88 & 1.86$\quad$ & 3.17 & 3.14$\quad$ & 4.94 &4.89\\ & 4$\quad$ & 1.86 & 1.85$\quad$ & 3.18 & 3.14$\quad$ & 4.96 &4.89\\\hline\\ & 0$\quad$ & 1.00 & 1.02 $\quad$& 1.80 & 1.82$\quad$ & 2.92 &2.95\\ vector & 1$\quad$ & 1.38 & 1.38 $\quad$& 2.34 & 2.35$\quad$& 3.68 &3.69\\ & 2$\quad$ & 1.53 & 1.53$\quad$ & - & -$\quad$ & - & - \\\hline\\ & 0$\quad$ & 0.29 & 0.30 $\quad$& 0.77 & 0.80 $\quad$& 1.59 & 1.64 \\ scalar & 1$\quad$ & 0.92 & 0.93$\quad$ & 1.59 & 1.65 $\quad$& - & - 
\\ & 2$\quad$ & 1.20 & 1.20 $\quad$& 2.02 & 2.04$\quad$ & - & - \\\hline \end{tabular} \end{table} The results are consistent with {\it real}, overtone-independent, but dimension-dependent correction factors for scalar-field (tensor-type gravitational) perturbations. This correction factor grows with $d$ and might become dominant at large $d$. Equation (\ref{eikonalads}) is not supposed to hold for vector-type and scalar-type gravitational perturbations, but we find that it captures the essential qualitative behavior with $r_+$ and $q$. It can describe the numerical results quantitatively if multiplied by a real constant, which depends on the overtone $n$ and the spacetime dimension $d$. This clearly suggests a new form for $\Pi_n$. \section{Conclusions and outlook} Our numerical results lend strong support to FL's prediction of the existence of long-lived modes in the eikonal limit. Furthermore, the functional dependence of these modes on the horizon radius and momentum $q$ is consistent with FL, but we show that their prediction does not capture the correct quantitative behavior. In particular, if we correct their prediction by the (real) factors listed in Table \ref{tab:summary}, one can account extremely well for the numerical results. Taken together, our results suggest that large-$q$ tensor-type (or scalar-field) quasinormal frequencies of black holes and black branes are described by \begin{equation} R\,\omega=q+ a\,\Pi_n \left (\frac{r_+}{R}\right )^ {\frac{2d-2}{d+1}}\,q^{-\frac{d-3}{d+1}} \,,\label{finalguido} \end{equation} where the value of $a$ depends on $d$ but is independent of the overtone number $n$. We also provide correction factors that make prediction (\ref{eikonalads}) describe well the other types of gravitational quasinormal modes (vector and scalar). For such perturbations, the real constant $a$ depends not only on $d$ but also on $n$, suggesting a completely different form for $\Pi_n$.
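As a cross-check, the closed-form prefactor $a=(1/2)(d-1)^{(2d-2)/(d+1)}$ quoted in the note added in proof below can be compared directly with the fitted tensor-type $n=0$ values of Table \ref{tab:summary}:

```python
# Fitted tensor-type n = 0 correction factors from Table (tab:summary),
# compared with a = (1/2)(d-1)^((2d-2)/(d+1)).
fitted_n0 = {4: 1.83, 5: 3.00, 6: 4.56}
for d, a_fit in fitted_n0.items():
    a_closed = 0.5 * (d - 1)**((2*d - 2)/(d + 1))
    assert abs(a_closed - a_fit) / a_closed < 0.1   # agreement at the 10% level
```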
In any case, there is a simple and universal dependence on $r_+$ and $q,l$ in this eikonal regime. Perhaps a simple interpretation in terms of geodesics can be given, as is done in asymptotically flat spacetimes \cite{Berti:2009kk,Cardoso:2008bp}. Clearly, more analytical and numerical studies are necessary to obtain a clear picture of the eikonal, weakly-damped regime of quasinormal modes of large black holes and black branes. A particularly interesting direction is to assess the degree to which these modes can be excited, which is tantamount to computing the residue of the Green function at the QNM pole. This is an important research topic in asymptotically flat spacetime, where it allows one to predict how astrophysical black holes respond to external sources \cite{Berti:2006wq}, and it has also recently started to be explored in the gauge/gravity duality scenario \cite{Amado:2008ji,Amado:2007yr}. \noindent {\bf Note added in proof:} We have recently been informed \cite{guido} of a mistake in one integral in FL, which introduces the correction factor $a=(1/2)(d-1)^{(2d-2)/(d+1)}$ in Eq. (\ref{finalguido}). This correction factor is consistent with all our numerical results for scalar fields and tensor-type gravitational perturbations. \section*{Acknowledgements} We would like to thank Guido Festuccia for helpful correspondence. This work is partially supported by Funda\c c\~ao para a Ci\^encia e Tecnologia (FCT) - Portugal through project PTDC/FIS/64175/2006 and by Conselho Nacional de Desenvolvimento Cient\'\i fico e Tecnol\'ogico of Brazil (CNPq). JM thanks Funda\c c\~ao Universidade Federal do ABC (UFABC) for a grant.
\section{Introduction} This note is concerned with the stably finite real rank zero case of Elliott's program to classify nuclear $C^{*}$-algebras by $K$-theory data; see \cite{R1} for an introduction to this subject. There is a growing body of evidence that one can only expect $K$-theoretical classification results up to ${\mathcal Z}$-stability, where ${\mathcal Z}$ denotes the Jiang--Su algebra constructed in \cite{JS1} and a $C^{*}$-algebra $A$ is called ${\mathcal Z}$-stable if it absorbs ${\mathcal Z}$ tensorially. Salient results supporting this point of view can be found in \cite{GJS}, \cite{J}, \cite{PT}, \cite{R2}, \cite{T1}, \cite{T2}, \cite{TW1} and \cite{TW2}. \\ It is known that a ${\mathcal Z}$-stable $C^{*}$-algebra $A$ behaves very well in many respects. In particular, it is either stably finite or purely infinite and, when exact, has nice comparison properties (cf.\ \cite{R2}). Moreover, $A$ has real rank zero if and only if the positive part of the $K_{0}$-group, $K_{0}(A)_{+}$, has dense image in the positive continuous affine functions on the tracial state space, $\mathrm{Aff}(T(A))_{+}$ (recall that $A$ has real rank zero if positive elements with finite spectrum are norm-dense in the set of all positive elements). \\ In \cite{W5} we confirmed the Elliott conjecture for the class of simple, separable, unital $C^{*}$-algebras which are ${\mathcal Z}$-stable, have real rank zero and finite decomposition rank (to be explained below) and, additionally, satisfy the Universal Coefficients Theorem (UCT). In the present paper we generalize this result to $C^{*}$-algebras which only have locally finite decomposition rank as opposed to finite decomposition rank. The difference might seem subtle at first glance, but we think that the generalization is substantial. The main point is that we use ${\mathcal Z}$-stability instead of a condition like slow (or no) dimension growth -- in our opinion this lends credibility to the point of view outlined above. 
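In symbols (our paraphrase of the criterion just quoted, writing $\rho: K_{0}(A) \to \mathrm{Aff}(T(A))$ for the canonical pairing, $\rho(g)(\tau)=\tau_{*}(g)$), for such a ${\mathcal Z}$-stable algebra $A$ with nonempty tracial state space one has \[ A \mbox{ has real rank zero} \iff \overline{\rho(K_{0}(A)_{+})} = \mathrm{Aff}(T(A))_{+} \, . \]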
Decomposition rank is a notion of covering dimension for nuclear $C^{*}$-algebras; it was introduced by E.\ Kirchberg and the author in \cite{KW}. Below we study a modified version of this concept: we say a $C^{*}$-algebra $A$ has locally finite decomposition rank if it can be exhausted by $C^{*}$-subalgebras each of which has finite decomposition rank. Note that we do not ask the decomposition ranks of the exhausting algebras to be globally bounded. Locally finite decomposition rank passes to quotients, inductive limits and to hereditary subalgebras which are generated by projections; it implies nuclearity and quasidiagonality. Examples include all separable approximately homogeneous (AH) $C^{*}$-algebras (in particular, all separable commutative $C^{*}$-algebras). In \cite{NgW}, P.\ W.\ Ng and the author have shown that separable approximately subhomogeneous (ASH) $C^{*}$-algebras also have locally finite decomposition rank. Clearly, finite decomposition rank implies its local version.\\ We wish to emphasize that locally finite decomposition rank is a fairly mild condition on a stably finite nuclear $C^{*}$-algebra; it does not even exclude the known counterexamples to the Elliott conjecture in the stably finite case. In particular, it does not imply stable rank one, Blackadar's second fundamental comparability property or weak unperforation of the ordered $K_{0}$-group. These are all properties known to hold for nuclear stably finite ${\mathcal Z}$-stable $C^{*}$-algebras by the results of \cite{R2}. In the classification results of \cite{El3}, \cite{EGL2} and \cite{W4} (to mention but a few), they are guaranteed by conditions involving noncommutative covering dimension, such as slow dimension growth or finite decomposition rank. 
In \cite{W5}, said properties were entailed by ${\mathcal Z}$-stability, but, following the lines of \cite{W4}, we could as well have used our assumptions of finite decomposition rank and real rank zero to obtain them -- a redundancy which is removed in the present article. The main use of ${\mathcal Z}$-stability in \cite{W5} was to get rid of a condition on the tracial state space still present in \cite{W4}. In fact, in the case of a unique tracial state the other hypotheses (real rank zero, ${\mathcal Z}$-stability and finite decomposition rank) can be considerably weakened as shown by N.\ Brown in \cite{B} and, more recently, by H.\ Lin in \cite{Li6}. \\ Our main result generalizes Theorem 4.1 of \cite{W5}; it says that separable simple unital ${\mathcal Z}$-stable $C^{*}$-algebras with locally finite decomposition rank and real rank zero have tracial rank zero. Using results of H.\ Lin, this confirms the Elliott conjecture for the class of such algebras which, additionally, satisfy the UCT. In particular, this applies to simple unital ${\mathcal Z}$-stable ASH algebras with real rank zero. Thanks to earlier work of M.\ Dadarlat, G.\ Elliott, G.\ Gong and others, it then follows that such algebras are in fact AH of topological dimension at most 3 and that they have decomposition rank at most 2. The paper is organized as follows: In Section 1 we introduce the concept of locally finite decomposition rank, study some of its properties and consider a number of examples. In Section 2 we state our main result and derive its corollaries. In the following section we outline our strategy for the proof of Theorem \ref{lfdrtr0} and describe the technical difficulties. Section 4 recalls some facts about order zero maps and $C^{*}$-algebras with real rank zero. Section 5 contains the key technical steps (Corollary \ref{excisible-appr} and Lemma \ref{excision}) for the proof of Theorem \ref{lfdrtr0}, which is completed in Section 6. 
\\ To try to prove a classification result using locally finite decomposition rank as opposed to finite decomposition rank was already suggested to me by Nate Brown several years ago; however, at that time I did not know how to use ${\mathcal Z}$-stability to make such an attempt work. I would like to thank Nate as well as Ping Wong Ng, Mikael R{\o}rdam and Andrew Toms for many inspiring conversations on the classification program in general and on ${\mathcal Z}$-stability in particular. \section{Locally finite decomposition rank} Below we introduce the notion of locally finite decomposition rank, study some of its properties, compare it to the original decomposition rank and give a list of examples. \noindent \begin{nummer} \rm For convenience, we recall the following definition from \cite{KW}: \label{d-dr} \begin{ndefn} (cf.\ \cite{KW}, Definitions 2.2 and 3.1) Let $A$ be a separable $C^*$-algebra. \begin{itemize} \item[(i)] A completely positive map $\varphi : F \to A$ has order zero, $\mathrm{ord}\, \varphi = 0$, if it preserves orthogonality, i.e., $\varphi(e) \varphi(f) = \varphi(f) \varphi(e) = 0$ for all $e,f \in F$ with $ef = fe = 0$. \item[(ii)] A completely positive map $\varphi : F \to A$ ($F$ a finite-dimensional $C^{*}$-algebra) is $n$-decomposable, if there is a decomposition $F=F^{(0)} \oplus \ldots \oplus F^{(n)}$ such that the restriction of $\varphi$ to $F^{(i)}$ has order zero for each $i \in \{0, \ldots, n\}$; we say $\varphi$ is $n$-decomposable with respect to $F=F^{(0)} \oplus \ldots \oplus F^{(n)}$. 
\item[(iii)] $A$ has decomposition rank $n$, $\mathrm{dr}\, A = n$, if $n$ is the least integer such that the following holds: For any finite subset ${\mathcal G} \subset A$ and $\varepsilon > 0$, there is a completely positive approximation $(F, \psi, \varphi)$ for ${\mathcal G}$ within $\varepsilon$ (i.e., $\psi:A \to F$ and $\varphi:F \to A$ are completely positive contractive and $\|\varphi \psi (b) - b\| < \varepsilon \; \forall \, b \in {\mathcal G}$) such that $\varphi$ is $n$-decomposable. If no such $n$ exists, we write $\mathrm{dr}\, A = \infty$. \end{itemize} \end{ndefn} \end{nummer} \noindent \begin{nummer} \rm $C^{*}$-algebras with finite decomposition rank enjoy many nice properties (cf.\ \cite{KW}, \cite{W4}), but in some situations it would be desirable to have a condition which is fulfilled by a larger class of $C^{*}$-algebras, yet retains at least some of the nice structural properties implied by finite decomposition rank. There are several reasonable ways of weakening Definition \ref{d-dr}(iii). For example, one might ask the map $\varphi$ only to be completely positive contractive; this yields nothing but the completely positive approximation property, which is well-known to characterize nuclear $C^{*}$-algebras. An a priori less general version would be to ask the map $\varphi$ to have order zero on each of the summands of $F$; this definition does not rule out infinite $C^{*}$-algebras -- it might even be equivalent to the completely positive approximation property.\\ In these notes, we study a definition which does not entirely drop the decomposability condition of \ref{d-dr}(iii), but which also does not ask for a global bound on the decomposition constant: \begin{ndefn} We say $A$ has locally finite decomposition rank, if, for any finite subset ${\mathcal G} \subset A$ and $\varepsilon>0$, there is a $C^{*}$-subalgebra $B \subset A$ such that $\mathrm{dr}\, B$ is finite and $\mathrm{dist}(b,B)<\varepsilon$ for all $b \in {\mathcal G}$. 
\end{ndefn} Just like finite decomposition rank, this notion is a so-called \emph{local} property -- in fact, these two concepts may be thought of as local analogues of topologically finite-dimensional AH algebras and general AH algebras, respectively. We shall return to this point of view in \ref{lfdr-examples}. \end{nummer} \noindent \begin{nummer} \rm \label{lfdr-permanence} \begin{nprop} The property of having locally finite decomposition rank passes to inductive limits, quotients, tensor products and to hereditary $C^{*}$-subalgebras generated by projections. \end{nprop} \begin{nproof} The statements about limits, quotients and tensor products follow immediately from the respective statements for decomposition rank, cf.\ \cite{W1}, Section 3, and \cite{KW}, 3.2. \\ Suppose $p$ is a projection in a $C^{*}$-algebra $A$ with locally finite decomposition rank. Let ${\mathcal G} \subset pAp$ be a finite subset and $\varepsilon>0$. We may assume that the elements of ${\mathcal G}$ are positive and normalized and that $p\in {\mathcal G}$. By assumption, for any $0<\delta<\varepsilon/3$ there is a $C^{*}$-subalgebra $B \subset A$ such that $\mathrm{dr}\, B < \infty$ and $\mathrm{dist}(b,B)<\delta \; \forall \, b \in {\mathcal G}$. But then it is straightforward to show that, if $\delta$ is chosen small enough, there is a partial isometry $s \in A$ such that $s^{*}s=p$, $q:=ss^{*} \in B$ and $\|s-p\|< \varepsilon/3$. Now $C:=s^{*}Bs$ is a $C^{*}$-subalgebra of $pAp$; since $s$ is a partial isometry, we have $C\cong qBq$. For any $b \in {\mathcal G}$, we have \[ \mathrm{dist}(b,C) = \mathrm{dist}(sbs^{*},qBq) \le \mathrm{dist}(b,qBq) + 2 \varepsilon/3 \le \mathrm{dist}(b,B) + 2 \varepsilon/3 \le \varepsilon \, ; \] by \cite{KW}, Proposition 3.8, $\mathrm{dr}\, C = \mathrm{dr}\,(qBq) \le \mathrm{dr}\, B < \infty$. We have thus shown that $pAp$ has locally finite decomposition rank. 
\end{nproof} \end{nummer} \noindent \begin{nummer} \rm \label{nuclear-qd} \begin{nprop} A separable $C^{*}$-algebra $A$ with locally finite decomposition rank is nuclear and strongly quasidiagonal (i.e., every representation of $A$ is quasidiagonal); in particular, $A$ is stably finite. \end{nprop} \begin{nproof} Since $A$ is exhausted by $C^{*}$-algebras with the completely positive approximation property, $A$ also has this property and hence is nuclear. \\ By \cite{BK2}, Corollary 5.7, a separable nuclear $C^{*}$-algebra is strongly quasidiagonal iff every quotient is strong NF in the sense of \cite{BK1}. By Proposition \ref{lfdr-permanence}, locally finite decomposition rank passes to quotients, so it will suffice to show that locally finite decomposition rank implies being strong NF. From \cite{KW}, Theorem 5.3, we already know that finite decomposition rank implies strong NF, and since being strong NF is a local property (see \cite{BK2}, Proposition 4.1 and the remark thereafter), the assertion follows. \end{nproof} \end{nummer} \noindent \begin{nummer} \rm \label{lfdr-examples} \begin{nexamples} It is trivial that finite decomposition rank implies locally finite decomposition rank, so all the examples of \cite{KW}, Section 4, of \cite{W3}, Section 1, and of \cite{TW2} have this property; this list includes the examples covered by virtually all known classification results for simple stably finite nuclear $C^{*}$-algebras. For example, all AF algebras, irrational rotation algebras and the Jiang--Su algebra ${\mathcal Z}$ have (locally) finite decomposition rank.\\ There is a slight ambiguity in the literature about how to define approximately (sub-)homogeneous $C^{*}$-algebras (cf.\ \cite{Bl2}). We shall use the following set of definitions: A $C^{*}$-algebra $A$ is homogeneous, if all its irreducible representations have the same dimension. $A$ is approximately homogeneous (AH), if it is an inductive limit of direct sums of homogeneous $C^{*}$-algebras. 
$A$ is subhomogeneous, if the dimensions of its irreducible representations have some finite upper bound, and $A$ is approximately subhomogeneous (ASH), if it is an inductive limit of subhomogeneous $C^{*}$-algebras.\\ By \cite{Bl2}, Proposition 2.2, any separable AH algebra $A$ can be written as an inductive limit of direct sums of homogeneous algebras $A_{i}$ each of which has finite topological dimension (hence finite decomposition rank). Therefore, any AH algebra has locally finite decomposition rank, regardless of whether it has no, slow or fast dimension growth. In particular, this holds for Villadsen's examples and for Toms' counterexamples to the Elliott conjecture (cf.\ \cite{T1}). \\ In \cite{NgW}, Ping Wong Ng and the author showed the respective statements for ASH algebras, i.e., any separable ASH algebra is an inductive limit $A= \lim_{\to} A_{i}$ of ASH algebras with finite topological dimension -- in particular, it has locally finite decomposition rank. Note that, again, we do not require the numbers $\mathrm{dr}\, A_{i}$ to have a common upper bound or the inductive limit decomposition to have slow dimension growth. \end{nexamples} \end{nummer} \section{The main result and its consequences} \noindent \begin{nummer} \rm \label{lfdrtr0} The concept of tracial rank zero was introduced by Lin (cf.\ \cite{Li2}, \cite{Li3}) as a somewhat more axiomatic approach to the stably finite real rank zero case of the Elliott program. We shall not need the original definition here (cf.\ \cite{Li0}, Definition 3.6.2), but we will give an alternative characterization in the next section, where we also outline the proof of the theorem below (the actual proof will have to wait until Section 6). Our main result states that many simple real rank zero $C^{*}$-algebras indeed have tracial rank zero: \begin{ntheorem} Let $A$ be a separable simple and unital $C^{*}$-algebra which is ${\mathcal Z}$-stable and has real rank zero and locally finite decomposition rank. 
Then, $A$ has tracial rank zero. \end{ntheorem} \end{nummer} In \cite{Li2}, Lin confirmed the Elliott conjecture for the class of simple $C^{*}$-algebras with tracial rank zero which satisfy the UCT. We now explain how Lin's classification theorem for tracially AF algebras and results of Elliott (in the ASH case) and Dadarlat, Elliott and Gong (in the AH case) may be used to derive a number of corollaries of Theorem \ref{lfdrtr0}; this is done in essentially the same way as in \cite{W5}. Moreover, we partially answer two questions of \cite{TW2}.\\ \noindent \begin{nummer} \rm \begin{ncor} Let $A$ be a separable simple unital $C^{*}$-algebra such that $A \otimes {\mathcal Z}$ has locally finite decomposition rank. Then, the following are equivalent: \begin{itemize} \item[(i)] $A \otimes {\mathcal Z}$ has tracial rank zero \item[(ii)] $A\otimes {\mathcal Z}$ has real rank zero \item[(iii)] the canonical image of $K_{0}(A\otimes {\mathcal Z})$ in $\mathrm{Aff}(T(A\otimes {\mathcal Z}))$ is dense \item[(iv)] the canonical image of $K_{0}(A\otimes {\mathcal Z})_{+}$ in $\mathrm{Aff}(T(A\otimes {\mathcal Z}))_{+}$ is dense. \end{itemize} \end{ncor} \begin{nproof} (i) implies (ii) by \cite{Li0}, Theorem 3.6.11; the converse follows from Theorem \ref{lfdrtr0} above. (ii) and (iii) are equivalent by Proposition 7.1 of \cite{R2}. Since $A \otimes {\mathcal Z}$ is nuclear and stably finite by Proposition \ref{nuclear-qd}, $A \otimes {\mathcal Z}$ satisfies Blackadar's second fundamental comparability property by \cite{R2}, Corollary 4.10, whence (iii) and (iv) are equivalent. \end{nproof} \end{nummer} \noindent \begin{nummer} \rm \label{lfdr-classification} In \cite{Li2}, Lin has confirmed the Elliott conjecture for the class of simple unital tracially AF algebras which satisfy the UCT. 
As a consequence we have the following \begin{ncor} Let $A$ and $B$ be separable simple unital $C^{*}$-algebras with real rank zero and locally finite decomposition rank; suppose $A$ and $B$ satisfy the UCT and are ${\mathcal Z}$-stable. Then, $A$ and $B$ are isomorphic iff their Elliott invariants are. \end{ncor} \end{nummer} \noindent \begin{nummer} \rm \label{AH-cor} Thanks to the known results about the range of the Elliott invariant in the nuclear stably finite case, we can say more about the structure of algebras as in the preceding corollaries: \begin{ncor} Let $A$ be a separable simple unital $C^{*}$-algebra; suppose $A \otimes {\mathcal Z}$ has real rank zero and locally finite decomposition rank and satisfies the UCT. Then: \begin{itemize} \item[(i)] $A \otimes {\mathcal Z}$ is AH of topological dimension at most 3. \item[(ii)] $A \otimes {\mathcal Z}$ is ASH of topological dimension at most 2. \item[(iii)] $\mathrm{dr}\, (A \otimes {\mathcal Z})$ is at most 2. \item[(iv)] $A \otimes {\mathcal Z}$ is approximately divisible. \item[(v)] $A$ is ${\mathcal Z}$-stable iff $A$ is approximately divisible. \end{itemize} \end{ncor} \begin{nproof} (i), (ii) and (iii) follow from results of Dadarlat, Elliott and Gong as in \cite{W4}, Corollary 6.4. By \cite{EGL}, an AH algebra of bounded topological dimension is approximately divisible. Conversely, an approximately divisible $C^{*}$-algebra is ${\mathcal Z}$-stable by \cite{TW2}. \end{nproof} \end{nummer} \noindent \begin{nummer} \rm We mention the following special case of Corollary \ref{lfdr-classification} explicitly: \begin{ncor} The class of separable simple unital ${\mathcal Z}$-stable ASH $C^{*}$-algebras with real rank zero satisfies the Elliott conjecture. \end{ncor} \begin{nproof} ASH $C^{*}$-algebras clearly satisfy the UCT; they have locally finite decomposition rank by \cite{NgW}. The result follows from \ref{lfdrtr0} and \cite{Li2}. 
\end{nproof} \end{nummer} \noindent \begin{nummer} \rm \begin{nremarks} (i) Note that \ref{AH-cor}(i) and (v) partially answer Questions 3.2 and 3.3 of \cite{TW2}. \\ (ii) In the preceding corollaries, note that the assumptions ``$A\otimes{\mathcal Z}$ has real rank zero'' and ``$A\otimes {\mathcal Z}$ has locally finite decomposition rank'' in particular hold if $A$ has real rank zero or locally finite decomposition rank, respectively (cf.\ Theorem 7.2 of \cite{R2}, Theorem 2.3 of \cite{W5} and Proposition \ref{lfdr-permanence} above). \end{nremarks} \end{nummer} \section{The proof of the main result: an outline} Since we only have a rather complicated proof of Theorem \ref{lfdrtr0}, we outline our strategy below. \noindent \begin{nummer} \rm \label{wu-tr0} First, we recall the definition of simple tracial rank zero $C^{*}$-algebras in the presence of small projections and comparability. This characterization is an immediate consequence of \cite{Li0}, Definition 3.6.2 (cf.\ also \cite{Li3}, Corollary 6.15); it will be more useful for our purposes than the original definition. \begin{nprop} Let $A$ be a separable simple and unital $C^{*}$-algebra which satisfies Blackadar's second fundamental comparability property and every nonzero hereditary subalgebra of which contains a nonzero projection. Then, $A$ has tracial rank zero if and only if the following holds:\\ For any finite subset ${\mathcal F} \subset A$ and $\varepsilon>0$ there is a finite-dimensional $C^{*}$-subalgebra $D\subset A$ such that \begin{itemize} \item[(i)] $\|[\mathbf{1}_{D},b]\|<\varepsilon \; \forall \, b \in {\mathcal F}$ \item[(ii)] $\mathrm{dist}(\mathbf{1}_{D}b\mathbf{1}_{D},D)<\varepsilon \; \forall \, b \in {\mathcal F}$ \item[(iii)] $\tau(\mathbf{1}_{A} - \mathbf{1}_{D})< \varepsilon \; \forall \, \tau \in T(A)$. 
\end{itemize} \end{nprop} \end{nummer} \noindent \begin{nummer} \rm \label{outline} A $C^{*}$-algebra $A$ as in Theorem \ref{lfdrtr0} satisfies the hypotheses of the preceding proposition by results of R{\o}rdam (\cite{R2}). Therefore, given ${\mathcal F} \subset A$ and $\varepsilon>0$, we have to find a finite-dimensional $C^{*}$-subalgebra $D \subset A$ satisfying (i), (ii) and (iii) above. \\ Since $A$ has locally finite decomposition rank, we may assume the elements of ${\mathcal F}$ to lie in some (unital) $C^{*}$-subalgebra $B$ of $A$ such that $\mathrm{dr}\, B=n$ for some $n \in {\mathbb{N}}$. Now suppose $B \stackrel{\psi}{\to} F \stackrel{\varphi}{\to} B$ is an $n$-decomposable c.p.\ approximation of ${\mathcal F}$ within some $\alpha>0$. Since $A$ has real rank zero, we may replace $\varphi:F \to B$ by a so-called discretely $n$-decomposable map $\tilde{\varphi}:\tilde{F} \to A$ (cf.\ \ref{discrete-order-zero} and \ref{rr0dr} below); the point is that $\tilde{\varphi} \circ \psi$ is still a good approximation for ${\mathcal F}$, while the image of $\tilde{\varphi}$ consists of a sum of $n+1$ (not necessarily pairwise orthogonal) finite-dimensional $C^{*}$-algebras $\tilde{F}^{(0)},\ldots,\tilde{F}^{(n)}$. As in Section 4 of \cite{W5} (using \ref{tracial-division} below), one can then use ${\mathcal Z}$-stability of $A$ to find \emph{pairwise orthogonal} $C^{*}$-subalgebras $\bar{F}^{(0)}, \ldots, \bar{F}^{(n)}$ of $A$ such that $\bar{F}^{(i)} \cong \tilde{F}^{(i)}$ for all $i$ and such that $D_{1}:= \bar{F}^{(0)}\oplus \ldots \oplus \bar{F}^{(n)}$ satisfies (i) and (ii) above (with $D_{1}$ in place of $D$), if only $\alpha$ was chosen small enough. \\ This construction will not force $D_{1}$ to quite satisfy (iii) -- the method of \cite{W5}, Section 4, will only yield $\tau(\mathbf{1}_{D_{1}}) > \frac{1}{2 (n+1)} =:\mu \; \forall \, \tau \in T(A)$. 
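To see how such a per-step lower bound can be boosted to the bound $1-\varepsilon$ required in (iii), note the following elementary bookkeeping (our computation, using only the geometric series): if each step captures at least the fraction $\mu$ of the trace left over by the previous steps, then after $m$ steps \[ \tau(\mathbf{1}_{D_{m}}) > \mu \sum_{i=0}^{m-1} (1-\mu)^{i} = 1-(1-\mu)^{m} \; \forall \, \tau \in T(A) \, , \] and since $-\log(1-\mu)>\mu$, any $m \ge \mu^{-1}\log(1/\varepsilon) = 2(n+1)\log(1/\varepsilon)$ already gives $\tau(\mathbf{1}_{D_{m}})>1-\varepsilon$.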
However, we may try to repeat the above process with $C_{1}:=(\mathbf{1}_{A} - \mathbf{1}_{D_{1}})A (\mathbf{1}_{A} - \mathbf{1}_{D_{1}})$ in place of $A$ and ${\mathcal F}_{1}:= \{(\mathbf{1}_{A} - \mathbf{1}_{D_{1}}) a (\mathbf{1}_{A} - \mathbf{1}_{D_{1}}) \, | \, a \in {\mathcal F}\}$ in place of ${\mathcal F}$ to obtain a finite-dimensional $D_{2} \supset D_{1}$ which not only satisfies (i) and (ii), but also $\tau(\mathbf{1}_{D_{2}}) > \mu + \mu (1-\mu) \; \forall \, \tau \in T(A)$. Induction will then yield an increasing sequence $D_{1} \subset D_{2} \subset \ldots \subset A$ such that $\tau(\mathbf{1}_{D_{k}}) > \mu \sum_{i =0}^{k-1} (1-\mu)^{i}$; by the formula for the geometric series we have $\mu \sum_{i =0}^{\infty} (1-\mu)^{i} = 1$, whence $\tau(\mathbf{1}_{D_{K}}) > 1- \varepsilon$ for some large enough $K$. \\ If $A$ itself has finite decomposition rank $n$, then all this works and, in fact, was carried out in \cite{W5}. But in the case where $A$ only has locally finite decomposition rank, there is a major problem with the induction process: Although the algebras $C_{k}:=(\mathbf{1}_{A} - \mathbf{1}_{D_{k}})A (\mathbf{1}_{A} - \mathbf{1}_{D_{k}})$ again satisfy the same hypotheses as $A$, we can only be sure to be able to approximate the elements of ${\mathcal F}_{k}$ by $m$-decomposable c.p.\ approximations for some $m \in {\mathbb{N}}$, but it may well happen that $m$ is much larger than $n$ -- and this would destroy the final geometric series argument. The difficulty could be circumvented if the compression with $(\mathbf{1}_{A} - \mathbf{1}_{D_{k}})$ were multiplicative on $B$, for then the image of $B$ in $C_{k}$ would again have decomposition rank $n$ and we could proceed as before by approximating the elements of ${\mathcal F}_{k}$ with $n$-decomposable c.p.\ approximations. 
Of course, in general compression with $(\mathbf{1}_{A} - \mathbf{1}_{D_{k}})$ will not be multiplicative -- but with the help of (i) and (ii) above (with improved approximation constants) we can assume it to be \emph{almost} multiplicative with respect to some tolerance and some finite subset (which includes ${\mathcal F}_{k}$ and $\varphi(F)$). This will still be enough to obtain an $n$- (as opposed to $m$-) decomposable c.p.\ approximation of ${\mathcal F}_{k}$, and it will allow our induction process to work. The latter assertion is (roughly speaking) the content of our technical key results, \ref{excisible-appr} and \ref{excision}, the proof of which is the objective of Section 5. \\ What makes this procedure so complicated is the necessity to carefully keep track of the approximation constants chosen along the way. In fact, given ${\mathcal F}$ and $\varepsilon$, we first choose $B$ and, at the same time, obtain $n$. This $n$ determines how many induction steps will be needed ($\mu \sum_{i =0}^{K-1} (1-\mu)^{i}$ has to be larger than $1- \varepsilon$, and $\mu$ depends on $n$). Next we choose $\alpha$ and the c.p.\ approximation $(F,\psi,\varphi)$. The number $\alpha$ has to be so small that, even after $K$ induction steps, the algebra $D_{K}$ still satisfies (i) and (ii) of Proposition \ref{wu-tr0} (this is where Lemma \ref{excision} enters). Only now can we let the induction process start, i.e., carry out the actual construction of the $D_{k}$ for $k=1, \ldots,K$. These last steps will complete the proof of Theorem \ref{lfdrtr0} and are the content of Section 6. \end{nummer} \noindent \begin{nummer} \rm One might ask whether some of the technicalities outlined above could be avoided by using ultraproduct techniques. 
Such an approach could in fact help to replace the above-mentioned compression with $(\mathbf{1}_{A} - \mathbf{1}_{D_{k}})$ by an honestly multiplicative map into the ultraproduct $A_{\omega}$ ($\omega$ being some free ultrafilter on ${\mathbb{N}}$). However, it does not seem to be possible to carry out the whole induction procedure of \ref{outline} just in the ultraproduct -- one would rather have to lift the multiplicative map into $A_{\omega}$ to a sequence of almost multiplicative maps into $A$, and this would leave us in essentially the same situation as before, so the technical advantages of employing ultraproducts seem to be rather moderate. Nonetheless, such an approach will be used in \cite{NgW2} to prove a result related to our Theorem \ref{lfdrtr0}. \end{nummer} \section{Order zero maps} In this section we recall some facts about $n$-decomposable maps into $C^{*}$-algebras of real rank zero. \noindent \begin{nummer} \rm \label{discrete-order-zero} Recall from \cite{W4}, Definition 2.2(i), that a completely positive map \[ \varphi: F=M_{r_{1}} \oplus \ldots \oplus M_{r_{s}} \to A \] is a discrete order zero map, if $\mathrm{ord}\, \varphi = 0$ and each $\varphi(\mathbf{1}_{M_{r_{i}}})$, $i=1, \ldots, s$, is a multiple of a projection. \\ Let $\tilde{F}$ be another finite-dimensional $C^{*}$-algebra. We say an embedding $\iota:F \to \tilde{F}$ is centered, if there are $m_{1}, \ldots,m_{s} \in {\mathbb{N}}$ such that $\tilde{F}\cong \bigoplus_{i=1}^{s} {\mathbb{C}}^{m_{i}} \otimes M_{r_{i}}$ and, under this identification, \[ \iota = \bigoplus_{i=1}^{s} \mathbf{1}_{{\mathbb{C}}^{m_{i}}} \otimes \mathrm{id}_{M_{r_{i}}} \, . \] This is equivalent to saying that the commutant of $\iota(F)$ within $\tilde{F}$ coincides with the center of $\tilde{F}$. 
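As a concrete illustration of a centered embedding (our example): take $F=M_{2}\oplus M_{3}$ and $m_{1}=3$, $m_{2}=1$, so that \[ \tilde{F} = ({\mathbb{C}}^{3} \otimes M_{2}) \oplus M_{3} \cong M_{2}\oplus M_{2}\oplus M_{2}\oplus M_{3} \quad \mbox{and} \quad \iota(x\oplus y) = (x\oplus x\oplus x)\oplus y \, . \] An element of $\tilde{F}$ commuting with all $\iota(x\oplus y)$ must act as a scalar on each $M_{2}$ copy and on $M_{3}$, so the commutant of $\iota(F)$ in $\tilde{F}$ is ${\mathbb{C}}^{3}\oplus {\mathbb{C}}$, which is precisely the center of $\tilde{F}$.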
\end{nummer} \noindent \begin{nummer} \rm By \cite{W4}, Lemma 2.4 (and its proof), any order zero map into a real rank zero $C^{*}$-algebra $A$ can be approximated by a composition of a centered embedding with a discrete order zero map: \begin{nlemma} Let $A$ and $F$ be $C^{*}$-algebras, $A$ with real rank zero and $F$ finite-dimensional. Suppose $\varphi:F \to A$ is completely positive contractive with order zero and let $\delta>0$ be given. Then there are a centered unital embedding $\iota:F \to \tilde{F}$ of $F$ into some finite-dimensional $C^{*}$-algebra $\tilde{F}$ and a discrete order zero map $\tilde{\varphi}:\tilde{F} \to A$ such that $\tilde{\varphi}(\mathbf{1}_{\tilde{F}})\le \varphi(\mathbf{1}_{F})$ and $\|\varphi(x) - \tilde{\varphi} \circ \iota(x)\|< \delta \cdot \|x\|$ for all $0 \neq x \in F$. \end{nlemma} \end{nummer} \noindent \begin{nummer} \rm \label{rr0dr} The preceding Lemma carries over to $n$-decomposable maps, as the next proposition shows. First, we need some notation: Let $A$ and $F$ be $C^{*}$-algebras with $F$ finite-dimensional, and let $\varphi:F \to A$ be a c.p.\ map. Following \cite{W4}, Definition 2.2(ii), we say $\varphi$ is discretely $n$-decomposable, if $F$ can be written as $F = F^{(0)} \oplus \ldots \oplus F^{(n)}$ with $\varphi|_{F^{(j)}}$ being a discrete order zero map for $j=0, \ldots, n$. \begin{nprop} Let $A$ and $F$ be $C^{*}$-algebras, $F=M_{r_{1}} \oplus \ldots \oplus M_{r_{s}}$ finite-dimensional and $A$ with real rank zero. 
Let $\varphi:F \to A$ be an $n$-decomposable c.p.c.\ map.\\ Then, for any $\beta>0$, there are a centered unital embedding $\iota:F \to \tilde{F}$ into some finite-dimensional $C^{*}$-algebra $\tilde{F}$ and a discretely $n$-decomposable c.p.c.\ map $\tilde{\varphi}:\tilde{F} \to A$ such that $\tilde{\varphi}\circ \iota(\mathbf{1}_{M_{r_{i}}})\le \varphi(\mathbf{1}_{M_{r_{i}}})$ and $\|\varphi(x) - \tilde{\varphi} \circ \iota(x)\|< \beta \cdot \|x\|$ for all $i =1, \ldots, s$ and $0 \neq x \in F$.\\ If $\varphi$ is $n$-decomposable with respect to the decomposition $F=F^{(0)} \oplus \ldots \oplus F^{(n)}$, then $\tilde{\varphi}$ may be chosen to be $n$-decomposable with respect to the decomposition $\tilde{F}=\tilde{F}^{(0)} \oplus \ldots \oplus \tilde{F}^{(n)}$, where $\tilde{F}^{(j)}={\iota}(F^{(j)})$, $j=0, \ldots,n$. \end{nprop} \begin{nproof} Apply Lemma \ref{discrete-order-zero} with $\delta:=\frac{\beta}{n+1}$ to each of the maps $\varphi|_{M_{r_{i}}}$ to obtain discrete order zero maps $\tilde{\varphi}_{i}$, $i=1, \ldots, s$. The $\tilde{\varphi}_{i}$ will add up to a discretely $n$-decomposable map $\tilde{\varphi}$ with the desired properties; cf.\ also the proof of \cite{W4}, Proposition 2.5. \end{nproof} \end{nummer} \noindent \begin{nummer} \rm \label{multiplicative-domain} We shall have use for the following consequence of Stinespring's theorem, which is a standard tool to analyze completely positive approximations of nuclear $C^{*}$-algebras. See \cite{KW}, Lemma 3.5, for a proof. \begin{nlemma} Let $A$ and $F$ be $C^{*}$-algebras, $b \in A$ a normalized positive element and $\eta>0$. If $A \stackrel{\psi}{\longrightarrow} F \stackrel{\varphi}{\longrightarrow} A$ are completely positive contractive maps satisfying \[ \|\varphi \psi(b) - b\|, \, \|\varphi \psi(b^{2}) - b^{2}\| < \eta \, , \] then, for any $0 \neq x \in F_{+}$, \[ \|\varphi(\psi(b)x)- \varphi \psi(b) \varphi(x)\|< 2 \eta^{\frac{1}{2}} \|x\| \, . 
\] \end{nlemma} \end{nummer} \noindent \begin{nummer} \rm The proof of Theorem \ref{lfdrtr0} becomes considerably easier in the case of finitely (or countably) many tracial states. The following lemma (2.4 from \cite{W5}) will be used to avoid this assumption: \label{tracial-division} \begin{nlemma} For any $n \in {\mathbb{N}}$ and $0< \mu < 1/(2(n+1))$ there is a completely positive contractive order zero map $\varrho: {\mathbb{C}}^{n+1} \to {\mathcal Z}$ such that $\bar{\tau}(\varrho(e_{i})) > \mu$ for $i=1, \ldots, n+1$, where the $e_{i}$ denote the canonical generators of ${\mathbb{C}}^{n+1}$ and $\bar{\tau}$ is the unique tracial state on ${\mathcal Z}$. \end{nlemma} \end{nummer} \section{Excising almost central subalgebras} This section contains the technical key steps for the proof of Theorem \ref{lfdrtr0}, namely Corollary \ref{excisible-appr} and Lemma \ref{excision}. First, we need some preparation. \noindent \begin{nummer} \rm \label{polynomial-appr} \begin{nprop} For any $\delta>0$ and $f, g\in {\mathcal C}_{0}((0,1])$ there is $0 < \beta < \delta$ such that the following holds: If $0\le a,b \le \mathbf{1}$ are elements in some $C^{*}$-algebra which satisfy $\|a-b\|<\beta$ (or $\|[a,b]\|< \beta$, respectively), then $\|f(a)-f(b)\|< \delta$ (or $\|[g(a),f(b)]\|< \delta$, respectively). \end{nprop} \begin{nproof} The assertions are obvious if $f$ and $g$ are polynomials. By the Stone--Weierstrass Theorem any function in ${\mathcal C}_{0}((0,1])$ is a uniform limit of polynomials, from which the statements follow immediately. \end{nproof} \end{nummer} \noindent \begin{nummer} \rm \label{almost-hereditary} \begin{nprop} Let $0 \le a, b \le \mathbf{1}_{A}$ be positive elements of a unital $C^{*}$-algebra $A$ and let $\varepsilon>0$ be given. If $a \le b + \varepsilon \cdot \mathbf{1}_{A}$, then $\mathrm{dist} (a, \overline{bAb}) \le 3 \cdot \varepsilon^{\frac{1}{2}}$. 
\end{nprop} \begin{nproof} Let $(u_{n})_{n \in {\mathbb{N}}} \subset \overline{bAb}$ be an approximate unit of $\overline{bAb}$; assume that $0 \le u_{n} \le \mathbf{1}_{A}$. We then have \begin{eqnarray*} \|a - u_{n} a u_{n}\| & \le & \|(\mathbf{1}_{A}-u_{n}) a u_{n}\| + \|u_{n} a (\mathbf{1}_{A} - u_{n})\| + \|(\mathbf{1}_{A}-u_{n}) a (\mathbf{1}_{A}-u_{n})\| \\ & \le & 2 \|(\mathbf{1}_{A}-u_{n}) a u_{n}^{2}a (\mathbf{1}_{A}- u_{n})\|^{\frac{1}{2}} + \|(\mathbf{1}_{A} - u_{n}) a (\mathbf{1}_{A} - u_{n})\| \\ & \le & 3 \|(\mathbf{1}_{A} - u_{n}) a (\mathbf{1}_{A} - u_{n})\|^{\frac{1}{2}} \\ & \le & 3 (\|(\mathbf{1}_{A} - u_{n}) b (\mathbf{1}_{A} - u_{n})\| + \varepsilon)^{\frac{1}{2}} \, , \end{eqnarray*} from which it follows that, for any $\delta>0$, there is $n \in {\mathbb{N}}$ such that \[ \|a - u_{n} a u_{n}\|< 3 (\delta + \varepsilon)^{\frac{1}{2}} \, . \] Since $u_{n}a u_{n} \in \overline{bAb}$ and $\delta$ is arbitrary, the assertion follows. \end{nproof} \end{nummer} \noindent \begin{nummer} \rm \label{functions} \begin{nnotation} For $0 < \alpha <\beta < 1$ we define continuous functions \[ g_{\alpha,\beta},h_{\alpha,\beta}:[0,1] \to {\mathbb{R}} \] by \[ g_{\alpha,\beta}(t) := \left\{ \begin{array}{ll} 0, & 0\le t \le {\alpha} \\ 1, & \beta \le t \le 1 \\ \mbox{linear,} & \mbox{else} \end{array} \right. \] and \[ h_{\alpha,\beta}(t) := \left\{ \begin{array}{ll} 0, & 0\le t \le {\alpha} \\ t^{-1}, & {\beta} \le t \le 1 \\ \mbox{linear,} & \mbox{else} \, . \end{array} \right. \] The subset of positive elements of norm at most one in a $C^{*}$-algebra $B$ will be denoted by ${\mathcal B}_{1}(B_{+})$. \end{nnotation} \end{nummer} \noindent \begin{nummer} \rm \label{p-kappa} \begin{nlemma} Let $A$ be a unital $C^{*}$-algebra and $B \subset A$ a unital $C^{*}$-subalgebra. Furthermore, let ${\mathcal G} \subset {\mathcal B}_{1}(B_{+})$ be a compact subset containing $\mathbf{1}_{A}$ and let $n \in {\mathbb{N}}$ and $0 < \zeta < 1/19$ be given.
Then, there is $\zeta'>0$ such that the following holds:\\ If $(F, \psi, \varphi)$ is an $n$-decomposable c.p.\ approximation of $B$ such that \[ \|\varphi \psi(b) - b \| < \frac{\zeta^{6}}{(n+1)^{2}}\; \forall \, b \in \bar{{\mathcal G}}:={\mathcal G} \cup \{a^{2} \, | \, a \in {\mathcal G}\} \] and if $F_{1}, \ldots ,F_{s}$ are the matrix blocks of $F$ and $p_{1}, \ldots, p_{s} \in A$ are pairwise orthogonal projections satisfying \begin{equation} \label{34} \|[p_{i},\varphi( \mathbf{1}_{F_{i}}x)]\| < \zeta' \|x\| \; \forall \, 0 \neq x \in F_{+} \, , \end{equation} \begin{equation} \label{26} \|p_{i} g_{\frac{\zeta}{2},\zeta}(\varphi(\mathbf{1}_{F_{i}})) - p_{i}\| < \zeta' \end{equation} and \[ \mathrm{dist}(p_{i} , \overline{\varphi(\mathbf{1}_{F_{i}}) A \varphi(\mathbf{1}_{F_{i}})}) < \frac{\zeta'}{s} \] for $i=1, \ldots , s$, then $p:= \sum_{i=1}^{s} p_{i}$ satisfies \[ \|[p,b]\| < \zeta \] for all $b \in {\mathcal G}$. \end{nlemma} \begin{nproof} Consider $h_{\frac{\zeta}{4},\frac{\zeta}{2}} \in {\mathcal C}_{0}((0,1])$ and note that \begin{equation} \label{27} \mathrm{id}_{[0,1]} \cdot g_{\frac{\zeta}{2},\zeta} \cdot h_{\frac{\zeta}{4},\frac{\zeta}{2}} = g_{\frac{\zeta}{2},\zeta} \end{equation} and that \begin{equation} \label{31} \|h_{\frac{\zeta}{4},\frac{\zeta}{2}}\| = \frac{2}{\zeta} \, . \end{equation} By Proposition \ref{polynomial-appr}, there is $\zeta'>0$ such that the following holds: If $0 \le a,b \le \mathbf{1}_{A}$ are elements of $A$ with $\|[a,b]\|< \zeta'$, then \begin{equation} \label{35} \|[a, (g_{\frac{\zeta}{2},\zeta} \cdot h_{\frac{\zeta}{4},\frac{\zeta}{2}})(b)]\| < \frac{1}{19(n+1)} \zeta \, . \end{equation} We may assume that \begin{equation} \label{36} \zeta' < \frac{1}{38(n+1)} \zeta^{2} \, . \end{equation} Now suppose that $(F=F_{1}\oplus \ldots \oplus F_{s},\psi,\varphi)$ is a c.p.\ approximation and $p_{1}, \ldots, p_{s}\in A$ are projections as in the statement of the lemma.
Let $\varphi$ be $n$-decomposable with respect to the decomposition $F=(\bigoplus_{i \in I_{0}} F_{i}) \oplus \ldots \oplus (\bigoplus_{i \in I_{n}} F_{i})$, where $\{1, \ldots,s\} = \coprod_{j=0}^{n} I_{j}$; in particular, this means that, for all $j \in \{0, \ldots,n\}$, \begin{equation} \label{29} \varphi(\mathbf{1}_{F_{i}}) \perp \varphi(\mathbf{1}_{F_{i'}}) \mbox{ if } i \neq i' \in I_{j} \end{equation} and \begin{equation} \label{32} [\varphi(\mathbf{1}_{F_{i}}), \varphi(\mathbf{1}_{F_{i}} x)] = 0 \; \forall \, i \in \{1, \ldots, s\}, \, x \in F \, . \end{equation} By Lemma \ref{multiplicative-domain} and our assumption on $(F,\psi,\varphi)$ we have \begin{equation} \label{30} \| (\sum_{i \in I_{j}} \varphi(\mathbf{1}_{F_{i}}) ) \varphi\psi(b) - \varphi( \sum_{i \in I_{j}} \mathbf{1}_{F_{i}} \psi(b)) \| < 2 \cdot \frac{\zeta^{3}}{n+1} \end{equation} for each $j \in \{0, \ldots,n\}$ and $b \in {\mathcal G}$. Since $\mathrm{dist}(p_{i},\overline{\varphi(\mathbf{1}_{F_{i}})A\varphi(\mathbf{1}_{F_{i}})}) < \frac{\zeta'}{s}$ for each $i$, there are positive normalized elements \begin{equation} \label{33} d_{i} \in C^{*}(\varphi(\mathbf{1}_{F_{i}})) \end{equation} such that \begin{equation} \label{25} \|p_{i}-d_{i}p_{i}d_{i}\|< \frac{\zeta'}{s} \; \forall \, i \, .
\end{equation} For each $j \in \{0, \ldots , n\}$ we obtain \begin{eqnarray} \label{28} \lefteqn{ \|\sum_{i \in I_{j}} p_{i} - \sum_{i \in I_{j}} d_{i} p_{i} d_{i} (g_{\frac{\zeta}{2},\zeta} \cdot h_{\frac{\zeta}{4},\frac{\zeta}{2}})(\varphi(\mathbf{1}_{F_{i}}))\varphi(\mathbf{1}_{F_{i}})\| } \nonumber \\ & \stackrel{(\ref{25})}{\le} & \|\sum_{i \in I_{j}} d_{i}p_{i}d_{i} - \sum_{i \in I_{j}} d_{i} p_{i} d_{i} (g_{\frac{\zeta}{2},\zeta} \cdot h_{\frac{\zeta}{4},\frac{\zeta}{2}})(\varphi(\mathbf{1}_{F_{i}}))\varphi(\mathbf{1}_{F_{i}})\| + s \cdot \frac{\zeta'}{s} \nonumber \\ & \stackrel{(\ref{29},\ref{32},\ref{33})}{\le} & \max_{i \in I_{j}} \| d_{i} ( p_{i} - p_{i} (g_{\frac{\zeta}{2},\zeta} \cdot h_{\frac{\zeta}{4},\frac{\zeta}{2}})(\varphi(\mathbf{1}_{F_{i}}))\varphi(\mathbf{1}_{F_{i}})) d_{i} \| + \zeta' \nonumber \\ & \stackrel{(\ref{26},\ref{27})}{\le} & 2 \cdot \zeta' \, . \end{eqnarray} We now compute for any $b \in {\mathcal G}$ \begin{eqnarray*} \lefteqn{ \|[(\sum_{i \in I_{j}} p_{i}), \varphi \psi(b)]\| }\\ & \stackrel{(\ref{28})}{\le} & \|\sum_{i \in I_{j}} d_{i}p_{i}d_{i} (g_{\frac{\zeta}{2},\zeta} \cdot h_{\frac{\zeta}{4},\frac{\zeta}{2}})(\varphi(\mathbf{1}_{F_{i}})) \varphi(\mathbf{1}_{F_{i}}) \varphi \psi(b) \\ & & - \varphi \psi(b) \sum_{i \in I_{j}} \varphi(\mathbf{1}_{F_{i}}) (g_{\frac{\zeta}{2},\zeta} \cdot h_{\frac{\zeta}{4},\frac{\zeta}{2}})(\varphi(\mathbf{1}_{F_{i}})) d_{i}p_{i}d_{i}\| \\ & & + 4 \zeta' \\ & \stackrel{(\ref{29})}{=} & \|\sum_{i \in I_{j}} d_{i}p_{i}d_{i} (g_{\frac{\zeta}{2},\zeta} \cdot h_{\frac{\zeta}{4},\frac{\zeta}{2}})(\varphi(\mathbf{1}_{F_{i}})) \sum_{i \in I_{j}} \varphi(\mathbf{1}_{F_{i}}) \varphi \psi(b) \\ & & - \varphi \psi(b) \sum_{i \in I_{j}} \varphi(\mathbf{1}_{F_{i}}) \sum_{i \in I_{j}}(g_{\frac{\zeta}{2},\zeta} \cdot h_{\frac{\zeta}{4},\frac{\zeta}{2}})(\varphi(\mathbf{1}_{F_{i}})) d_{i}p_{i}d_{i}\| \\ & & + 4 \zeta' \\ & \stackrel{(\ref{30},\ref{31})}{\le} & \|(\sum_{i \in I_{j}} d_{i}p_{i}d_{i}
(g_{\frac{\zeta}{2},\zeta} \cdot h_{\frac{\zeta}{4},\frac{\zeta}{2}})(\varphi(\mathbf{1}_{F_{i}}))) \varphi(\sum_{i \in I_{j}} \mathbf{1}_{F_{i}} \psi(b)) \\ & & - \varphi (\sum_{i \in I_{j}} \psi(b) \mathbf{1}_{F_{i}}) \sum_{i \in I_{j}}(g_{\frac{\zeta}{2},\zeta} \cdot h_{\frac{\zeta}{4},\frac{\zeta}{2}})(\varphi(\mathbf{1}_{F_{i}})) d_{i}p_{i}d_{i}\| \\ & & + 4 \zeta' + 2 \cdot \frac{2}{\zeta} \cdot 2 \cdot \frac{\zeta^{3}}{n+1}\\ & \stackrel{(\ref{29})}{=} & \|\sum_{i \in I_{j}}[d_{i}p_{i}d_{i}, (g_{\frac{\zeta}{2},\zeta} \cdot h_{\frac{\zeta}{4},\frac{\zeta}{2}})(\varphi(\mathbf{1}_{F_{i}}))\varphi(\mathbf{1}_{F_{i}}\psi(b))]\| \\ && + 4 \zeta' + 8 \cdot \frac{\zeta^{2}}{n+1} \\ & \stackrel{(\ref{32},\ref{33})}{=} & \|\sum_{i\in I_{j}} d_{i} [p_{i}, (g_{\frac{\zeta}{2},\zeta} \cdot h_{\frac{\zeta}{4},\frac{\zeta}{2}})(\varphi(\mathbf{1}_{F_{i}}))\varphi(\mathbf{1}_{F_{i}}\psi(b))] d_{i}\| \\ && + 4 \zeta' + 8 \cdot \frac{\zeta^{2}}{n+1} \\ & \stackrel{(\ref{29},\ref{33})}{=} & \max_{i \in I_{j}} \| d_{i} [p_{i}, (g_{\frac{\zeta}{2},\zeta} \cdot h_{\frac{\zeta}{4},\frac{\zeta}{2}})(\varphi(\mathbf{1}_{F_{i}}))\varphi(\mathbf{1}_{F_{i}}\psi(b))] d_{i}\| \\ && + 4 \zeta' + 8 \cdot \frac{\zeta^{2}}{n+1} \\ & \le & \max_{i \in I_{j}} \| [p_{i}, (g_{\frac{\zeta}{2},\zeta} \cdot h_{\frac{\zeta}{4},\frac{\zeta}{2}})(\varphi(\mathbf{1}_{F_{i}}))\varphi(\mathbf{1}_{F_{i}}\psi(b))] \| \\ && + 4 \zeta' + 8 \cdot \frac{\zeta^{2}}{n+1} \\ & \stackrel{(\ref{34},\ref{35},\ref{31})}{\le} & \frac{\zeta}{19(n+1)} + \zeta' \cdot \frac{2}{\zeta} + 4 \zeta' + 8 \cdot \frac{\zeta^{2}}{n+1} \, . 
\end{eqnarray*} As a consequence, we obtain \begin{eqnarray*} \|[p,b]\| &\le & \|[p, \varphi \psi(b)]\| + 2 \cdot \frac{\zeta^{6}}{(n+1)^{2}}\\ & \le & (n+1) \left(\frac{\zeta}{19(n+1)} + \zeta' \cdot \frac{2}{\zeta} + 4 \zeta' + 8 \cdot \frac{\zeta^{2}}{n+1} \right) + 2 \cdot \frac{\zeta^{6}}{(n+1)^{2}}\\ & < & (n+1) \left(\frac{\zeta}{19(n+1)} + \zeta' \cdot \frac{2}{\zeta} + 4 \zeta' + 8 \cdot \frac{\zeta^{2}}{n+1} + 2 \cdot \frac{\zeta^{6}}{n+1} \right)\\ & \stackrel{(\ref{36})}{<} & \zeta \end{eqnarray*} for all $b \in {\mathcal G}$. \end{nproof} \end{nummer} \noindent \begin{nummer} \rm \label{excisible-appr} For convenience, we note the following corollary explicitly: \begin{ncor} Let $A$ be a unital $C^{*}$-algebra and $B \subset A$ a unital $C^{*}$-subalgebra with $\mathrm{dr}\, B = n < \infty$. \\ For any compact subset ${\mathcal G} \subset {\mathcal B}_{1}(B_{+})$ and $0 < \eta < 1/19$, there are an $n$-decomposable c.p.\ approximation $(F, \psi, \varphi)$ of $B$ and $\delta>0$ such that the following hold: \begin{itemize} \item[a)] $\|\varphi \psi(b) - b\| < \frac{\eta^{6}}{(n+1)^{2}} \; \forall \, b \in \bar{{\mathcal G}}:= {\mathcal G} \cup \{a^{2} \, | \, a \in {\mathcal G}\}$ \item[b)] If $F_{1}, \ldots, F_{s}$ are the matrix blocks of $F$ and $p_{1}, \ldots,p_{s} \in A$ are pairwise orthogonal projections satisfying \[ \|[p_{i}, \varphi(\mathbf{1}_{F_{i}} x)]\| < \delta \|x\| \; \forall \, 0 \neq x \in F_{+} \, , \] \[ \|p_{i} g_{\frac{\eta}{2},\eta}(\varphi(\mathbf{1}_{F_{i}})) - p_{i}\| < \delta \] and \[ \mathrm{dist}(p_{i} , \overline{\varphi(\mathbf{1}_{F_{i}})A \varphi(\mathbf{1}_{F_{i}})} ) < \frac{\delta}{s} \] for $i=1, \ldots, s$, then $p:= \sum_{i=1}^{s}p_{i}$ satisfies \[ \|[p,b]\| < \eta \; \forall \, b \in {\mathcal G} \, .
\] \end{itemize} \end{ncor} \end{nummer} \noindent \begin{nummer} \rm \label{excision} \begin{nlemma} Let $A$ be a separable simple and unital ${\mathcal Z}$-stable $C^{*}$-algebra with real rank zero and let $B \subset A$ be a unital $C^{*}$-subalgebra. Let $(F,\psi,\varphi)$ be a c.p.\ approximation of $B$ and suppose $\varphi$ is $n$-decomposable for some $n\in {\mathbb{N}}$. Let $\mu$ and $\eta$ be positive numbers such that \begin{equation} \label{62} 0<\mu<\frac{1}{2(n+1)}, \, \eta<\frac{1}{48} \mbox{ and } \eta < \frac{1}{10} \left(\frac{1}{2(n+1)}-\mu \right) \, . \end{equation} Furthermore, let ${\mathcal G} \subset {\mathcal B}_{1}(B_{+})$ be a compact subset containing $\mathbf{1}_{A}$ and satisfying \begin{equation} \label{51} \|\varphi \psi(b)-b\|<\frac{\eta^{6}}{(n+1)^{2}} \; \forall \, b \in \bar{{\mathcal G}}:= {\mathcal G} \cup \{a^{2} \, | \, a \in {\mathcal G}\} \, . \end{equation} Then, for any $0< \delta < \frac{1}{2}$ there is $\gamma>0$ such that the following holds:\\ If there is a projection $q \in A$ such that \[ \|[q,\varphi(x)]\| < \gamma \|x\| \; \forall \, 0 \neq x \in F_{+} \, , \] then there is a finite-dimensional $C^{*}$-subalgebra $C \subset (\mathbf{1}_{A}-q)A(\mathbf{1}_{A}-q) \subset A$ such that \begin{itemize} \item[(i)] $\mathrm{dist}(\mathbf{1}_{C} b \mathbf{1}_{C},C) < \eta \; \forall \, b \in {\mathcal G}$ \item[(ii)] $\tau(\mathbf{1}_{C}) \ge \mu \cdot \tau(\mathbf{1}_{A}-q) \; \forall \, \tau \in T(A)$ \item[(iii)] if $F_{1}, \ldots, F_{s}$ are the matrix blocks of $F$, then $\mathbf{1}_{C}$ can be written as a sum of $s$ pairwise orthogonal projections $p_{1}, \ldots,p_{s} \in C$ satisfying \[ \|[p_{i},\varphi(\mathbf{1}_{F_{i}}x)]\| < \delta \|x\| \; \forall \, 0 \neq x \in F_{+} \, , \] \[ \|p_{i} g_{\frac{\eta}{2},{\eta}}(\varphi(\mathbf{1}_{F_{i}})) - p_i\|<\delta \] and \[ \mathrm{dist} (p_{i}, \overline{\varphi(\mathbf{1}_{F_{i}})A\varphi(\mathbf{1}_{F_{i}})} ) < \frac{\delta}{s} \] for $i=1, \ldots,s$. 
\end{itemize} \end{nlemma} \begin{nproof} For convenience, we define $g,h\in {\mathcal C}_{0}((0,1])$ by \begin{equation} \label{44} g:=g_{\frac{\eta}{2},\eta} \mbox{ and } h:= h_{\frac{\eta}{4},\frac{\eta}{2}} \cdot g_{\frac{\eta}{2},\eta} \, . \end{equation} We clearly have \begin{equation} \label{41} t \cdot h(t) = g(t) \; \forall \, t \in [0,1] \mbox{ and } \|h\|=\frac{1}{\eta} \, . \end{equation} Given $\delta $, use Proposition \ref{polynomial-appr} to choose $\beta>0$ such that if $a$ and $b$ are positive elements in some $C^{*}$-algebra which have norm at most one and satisfy $\|a-b\|<\beta$, then \begin{equation} \label{38} \|g(a) - g(b)\| < \frac{\delta^{2}}{12} \, . \end{equation} We may also assume that \begin{equation} \label{48} \beta< \frac{\delta^{2}\eta^{6}}{12s^{2}(n+1)^{2}} \, . \end{equation} By \cite{KW}, Remark 2.4 and Proposition 2.5, the relations defining $n$-decomposability are weakly stable in the sense of \cite{Lo}, Definition 4.1.1. This implies that there is $\gamma>0$ such that the following holds:\\ If there is a projection $q \in A$ such that \begin{equation} \label{37} \|[q,\varphi(x)]\| < \gamma \|x\| \; \forall \, 0 \neq x \in F_{+} \, , \end{equation} then there are $n$-decomposable c.p.\ maps \begin{equation} \label{61} \varphi' : F \to (\mathbf{1}_{A} - q) A (\mathbf{1}_{A} - q) \end{equation} and \begin{equation} \label{55} \varphi^{\times}: F \to qAq \end{equation} such that \begin{equation} \label{13} \|\varphi'(x) - (\mathbf{1}_{A} - q)\varphi(x) (\mathbf{1}_{A} - q) \| < \beta \|x\| \; \forall \, 0 \neq x \in F_{+} \, , \end{equation} \[ \|\varphi^{\times}(x) - q \varphi(x) q\| < \beta \|x\| \; \forall \, 0 \neq x \in F_{+} \] and \begin{equation} \label{54} \|\varphi'(x) +\varphi^{\times}(x) - \varphi(x)\|<\beta \|x\| \; \forall \, 0 \neq x \in F_{+} \, . \end{equation} (In other words, the c.p.\ maps $q\varphi(\, . \,)q$ and $(\mathbf{1}_{A}-q)\varphi(\, . 
\,)(\mathbf{1}_{A}-q)$ are `almost' $n$-decomposable for whatever particular $q \in A$ we choose, provided only that (\ref{37}) is satisfied.) By making $\gamma$ smaller, if necessary, and using Proposition \ref{polynomial-appr} and the fact that \[ g(\varphi'(\mathbf{1}_{F_{i}})) \le g(\varphi'(\mathbf{1}_{F_{i}})) +g(\varphi^{\times}(\mathbf{1}_{F_{i}})) = g(\varphi'(\mathbf{1}_{F_{i}}) + \varphi^{\times}(\mathbf{1}_{F_{i}})) \, , \] we may even assume that \begin{equation} \label{71} g(\varphi'(\mathbf{1}_{F_{i}})) \le g(\varphi(\mathbf{1}_{F_{i}})) + \frac{\delta^{2}}{12} \end{equation} for all $i \in \{1, \ldots, s\}$. We may further assume that \begin{equation} \label{50} \gamma < \frac{\delta^{2} \eta^{2}}{16 s^{2} (n+1)^{2}} \, . \end{equation} So, let $q \in A$ be given as above and suppose we have chosen $\varphi'$ and $\varphi^{\times}$. From Proposition \ref{rr0dr} and the choice of $\beta$ we obtain a finite-dimensional $C^{*}$-algebra $\bar{F}$ with a unital centered embedding $\bar{\iota}: F \to \bar{F}$ and a discretely $n$-decomposable c.p.c.\ map \[ \varphi'': \bar{F} \to (\mathbf{1}_{A} - q) A (\mathbf{1}_{A} - q) \] such that \begin{equation} \label{4} \|\varphi'' \circ \bar{\iota}(x) - \varphi'(x) \| < \beta \|x\| \; \forall \, 0 \neq x \in F_{+} \end{equation} and \begin{equation} \label{64} \varphi'' \circ \bar{\iota}(\mathbf{1}_{F_{i}}) \le \varphi'(\mathbf{1}_{F_{i}}) \; \forall \, i =1, \ldots, s\, . \end{equation} By (\ref{38}) we have \begin{equation} \label{70} \|g(\varphi''\bar{\iota}(\mathbf{1}_{F_{i}})) - g(\varphi'(\mathbf{1}_{F_{i}}))\| < \frac{\delta^{2}}{12} \; \forall \, i= 1, \ldots, s \, . \end{equation} Moreover, if $\varphi'$ is $n$-decomposable with respect to the decomposition $F=F^{(0)} \oplus \ldots \oplus F^{(n)}$, then we may assume $\varphi''$ to be $n$-decomposable with respect to the decomposition $\bar{F}=\bar{F}^{(0)} \oplus \ldots \oplus \bar{F}^{(n)}$, where $\bar{F}^{(j)}=\bar{\iota}(F^{(j)})$, $j=0, \ldots,n$.
In particular, we have \begin{equation} \label{45} \mathrm{ord}\,(\varphi''|_{\bar{\iota}(F_{i})}) = 0 \; \forall \, i = 1, \ldots, s \, . \end{equation} We denote the matrix blocks of $\bar{F}$ by $\bar{F}_{i}$, $i=1, \ldots, \bar{s}$, and set $\varphi''_{i}:= \varphi''|_{\bar{F}_{i}}$. Each $\varphi''_{i}$ is a multiple of a $*$-homomorphism \[ \sigma''_{i}: \bar{F}_{i} \to (\mathbf{1}_{A} - q)A (\mathbf{1}_{A} - q) \, , \] that is, \begin{equation} \label{39} \varphi''_{i} = \lambda_{i} \cdot \sigma''_{i} \end{equation} for some $0 \le \lambda_{i} \le 1$, $i = 1, \ldots, \bar{s}$. Set \[ \bar{\mu}:= \frac{1}{2} \left(\mu + \frac{1}{2(n+1)} \right) \, . \] With $\varrho : {\mathbb{C}}^{n+1} \to {\mathcal Z}$ as in Lemma \ref{tracial-division} (using $\bar{\mu}$ in place of $\mu$), following \cite{W5}, Lemma 2.5, we may define a c.p.\ map \begin{equation} \label{60} \bar{\varphi}:\bar{F} \to (\mathbf{1}_{A}-q)A(\mathbf{1}_{A}-q) \otimes {\mathcal Z} \end{equation} by \begin{equation} \label{40} \bar{\varphi}(x):= \sum_{j=0}^{n} \varphi''(x \mathbf{1}_{\bar{F}^{(j)}}) \otimes \varrho(e_{j+1}), \end{equation} where $e_{1}, \ldots, e_{n+1}$ denote the canonical generators of ${\mathbb{C}}^{n+1}$. It is obvious that $\bar{\varphi}$ is in fact c.p.c.\ and has order zero, since the $\varrho(e_{j})$ are pairwise orthogonal. By Lemma \ref{tracial-division} we have \begin{equation} \label{12} \bar{\tau}(\varrho(e_{j})) > \bar{\mu} \end{equation} for $j= 1, \ldots,n+1$, where $\bar{\tau}$ denotes the unique tracial state on ${\mathcal Z}$.
For later use we also note that $\bar{\varphi}_{i}:=\bar{\varphi}|_{\bar{F}_{i}}$ satisfies \begin{eqnarray} \label{42} \bar{\varphi}_{i}(x) & \stackrel{(\ref{39},\ref{40})}{=} & \sum_{j=0}^{n} \lambda_{i} \cdot \sigma_{i}''(x \mathbf{1}_{\bar{F}^{(j)}}) \otimes \varrho(e_{j+1}) \nonumber \\ & = & \sigma_{i}''(x) \otimes (\lambda_{i} \cdot \varrho(e_{\bar{\jmath} (i)+1})) \end{eqnarray} for all $x \in \bar{F}_{i}$, $i=1, \ldots, \bar{s}$, where $\bar{\jmath}(i)$ denotes the (uniquely determined) $j \in \{0, \ldots ,n\}$ for which $\mathbf{1}_{\bar{F}_{i}}\mathbf{1}_{\bar{F}^{(j)}} \neq 0$. In particular, we have \begin{equation} \label{66} \bar{\varphi}_{i}(\mathbf{1}_{\bar{F}_{i}}) = \sigma_{i}'' (\mathbf{1}_{\bar{F}_{i}}) \otimes (\lambda_{i} \cdot \varrho(e_{\bar{\jmath}(i) +1})) \, ; \end{equation} since $\sigma_{i}''(\mathbf{1}_{\bar{F}_{i}})$ is a projection, it is straightforward to check that \begin{equation} \label{2} f(\bar{\varphi}_{i}(\mathbf{1}_{\bar{F}_{i}})) = \sigma_{i}''(\mathbf{1}_{\bar{F}_{i}}) \otimes f(\lambda_{i} \cdot \varrho(e_{\bar{\jmath}(i) + 1})) \end{equation} for any $f \in {\mathcal C}_{0}((0,1])$. For $g$ and $h$ defined as above we obtain \begin{eqnarray} \sigma_{i}''(x) \otimes g(\lambda_{i} \cdot \varrho(e_{\bar{\jmath}(i)+1})) & \stackrel{(\ref{41})}{=} & \sigma_{i}''(x) \otimes h(\lambda_{i} \cdot \varrho(e_{\bar{\jmath}(i)+1})) (\lambda_{i} \cdot \varrho(e_{\bar{\jmath}(i)+1})) \nonumber \\ & \stackrel{(\ref{2},\ref{42})}= & h(\bar{\varphi}_{i}(\mathbf{1}_{\bar{F}_{i}})) \bar{\varphi}_{i}(x) \nonumber \\ & = & \bar{\varphi}_{i}(x) h (\bar{\varphi}_{i}(\mathbf{1}_{\bar{F}_{i}})) \, . \label{3} \end{eqnarray} Choose $\beta'>0$ such that \begin{equation} \label{47a} \bar{s} (4 \beta'(1+\frac{1}{\eta})) + 2 \bar{s}^{2}(\beta')^{\frac{1}{2}} < \frac{\delta}{8} \mbox{ and } \beta' < \frac{\delta^{2}\eta^{2}}{32} \, . 
\end{equation} By Proposition \ref{rr0dr} in connection with Proposition \ref{polynomial-appr} (with $\beta'$ in place of $\delta$, $\beta''$ in place of $\beta$ and both $g$ and $h$ in place of $f$) there are a finite-dimensional $C^{*}$-algebra $\tilde{F}$ with a centered embedding $\tilde{\iota}:\bar{F} \to \tilde{F}$ and a c.p.c.\ discrete order zero map \[ \tilde{\varphi}: \tilde{F} \to (\mathbf{1}_{A} - q) A (\mathbf{1}_{A} - q) \otimes {\mathcal Z} \] such that \begin{equation} \label{9} \tilde{\varphi}\tilde{\iota}(\mathbf{1}_{\bar{F}_{i}}) \le \bar{\varphi}(\mathbf{1}_{\bar{F}_{i}}) \, , \end{equation} \begin{equation} \label{7'} \|\tilde{\varphi}\tilde{\iota}(x) - \bar{\varphi}(x)\| < \beta'' \|x\| < \beta' \|x\| \; \forall \, 0 \neq x \in \bar{F}_{+} \, , \end{equation} \begin{equation} \label{7} \|g(\tilde{\varphi}\tilde{\iota}(\mathbf{1}_{\bar{F}_{i}})) - g(\bar{\varphi}(\mathbf{1}_{\bar{F}_{i}}))\|< \beta' \end{equation} and \begin{equation} \label{8} \|h(\tilde{\varphi}\tilde{\iota}(\mathbf{1}_{\bar{F}_{i}})) - h(\bar{\varphi}(\mathbf{1}_{\bar{F}_{i}}))\|< \beta' \end{equation} for $i=1, \ldots, \bar{s}$. Let $\chi_{(\eta,1]}$ denote the characteristic function on the interval $(\eta,1]$ and set \begin{equation} \label{43} \bar{p}_{i} := \chi_{(\eta,1]}(\tilde{\varphi} \tilde{\iota}(\mathbf{1}_{\bar{F}_{i}})) \in (\mathbf{1}_{A} - q)A(\mathbf{1}_{A} - q) \otimes {\mathcal Z} \end{equation} for $i=1, \ldots, \bar{s}$; note that the $\bar{p}_{i}$ are well-defined projections in $(\mathbf{1}_{A}-q)A(\mathbf{1}_{A}-q)\otimes {\mathcal Z}$, since $\tilde{\varphi}$ is a discrete order zero map (whence $\chi_{(\eta,1]}$ is continuous on the spectrum of $\tilde{\varphi}\tilde{\iota}(\mathbf{1}_{\bar{F}_{i}})$ for each $i$). 
Moreover, the $\bar{p}_{i}$ are pairwise orthogonal (again since $\mathrm{ord}\, \tilde{\varphi} = 0$), so they add up to a projection \begin{equation} \label{74} p:= \sum_{i=1}^{\bar{s}} \bar{p}_{i} \, ; \end{equation} it is clear that \begin{equation} \label{11} p = \chi_{(\eta,1]}(\tilde{\varphi}(\mathbf{1}_{\tilde{F}})) \in C^{*}(\tilde{\varphi}(\mathbf{1}_{\tilde{F}})) \subset (\mathbf{1}_{A}-q)A(\mathbf{1}_{A}-q)\otimes {\mathcal Z} \end{equation} and that \begin{equation} \label{56} p \stackrel{(\ref{41})}{=} p h(\tilde{\varphi} \tilde{\iota} (\mathbf{1}_{\bar{F}})) \tilde{\varphi}\tilde{\iota}(\mathbf{1}_{\bar{F}}) \, . \end{equation} From \cite{W4}, 1.2, we see that $p$ commutes with $\tilde{\varphi}(\tilde{F})$ and that $p \tilde{\varphi}(\, . \, ) = p \tilde{\varphi}( \, . \, )p$ is an order zero map. Define a map $\tilde{\sigma}: \tilde{F} \to (\mathbf{1}_{A}-q)A(\mathbf{1}_{A}-q) \otimes {\mathcal Z}$ by \begin{equation} \label{10} \tilde{\sigma}( \, . \,):= (p \tilde{\varphi}(\mathbf{1}_{\tilde{F}})p)^{-1} \tilde{\varphi}(\,.\,) \, , \end{equation} where the inverse is well-defined if taken in $p C^{*}(\tilde{\varphi} (\mathbf{1}_{\tilde{F}}))p$. It is obvious that $\tilde{\sigma}$ is a supporting $*$-homomorphism (in the sense of \cite{W4}, 1.2) for the c.p.c.\ map $p\tilde{\varphi}(\,.\,)p$, i.e., \begin{equation} \label{58} p \tilde{\varphi}(\, . \,)p = p \tilde{\varphi}(\mathbf{1}_{\tilde{F}}) p \tilde{\sigma}( \, . \,) \, , \end{equation} and that \begin{equation} \label{59} \tilde{\sigma}(\, . \,) = p \tilde{\sigma}(\, . \, ) p \, . 
\end{equation} For $0 \le x \in {\mathcal B}_{1}(\bar{F}_{i})$, $i = 1, \ldots, \bar{s}$, we now compute \begin{eqnarray} \lefteqn{\|[\bar{p}_{i}, \varphi_{i}''(x) \otimes \mathbf{1}_{{\mathcal Z}}]\| }\nonumber \\ & \stackrel{(\ref{39})}{=} & |\lambda_{i}| \|[\bar{p}_{i},\sigma_{i}''(x) \otimes \mathbf{1}_{{\mathcal Z}}]\| \nonumber \\ & \stackrel{(\ref{43},\ref{44})}{=} & |\lambda_{i}| \|\bar{p}_{i} g(\tilde{\varphi} \tilde{\iota} (\mathbf{1}_{\bar{F}_{i}})) (\sigma_{i}''(x) \otimes \mathbf{1}_{{\mathcal Z}}) - (\sigma_{i}''(x) \otimes \mathbf{1}_{{\mathcal Z}}) g(\tilde{\varphi} \tilde{\iota} (\mathbf{1}_{\bar{F}_{i}})) \bar{p}_{i}\| \nonumber \\ & \stackrel{(\ref{7})}{\le} & |\lambda_{i}| \|\bar{p}_{i} g(\bar{\varphi}_{i}(\mathbf{1}_{\bar{F}_{i}}))(\sigma_{i}''(x) \otimes \mathbf{1}_{{\mathcal Z}}) - (\sigma_{i}''(x) \otimes \mathbf{1}_{{\mathcal Z}}) g(\bar{\varphi}_{i}(\mathbf{1}_{\bar{F}_{i}})) \bar{p}_{i} \| +2 \beta' \nonumber \\ & \stackrel{(\ref{2})}{=} & |\lambda_{i}| \| [\bar{p}_{i}, (\sigma_{i}''(x) \otimes g(\lambda_{i} \cdot \varrho(e_{\bar{\jmath}(i)+1})))] \| + 2 \beta' \nonumber \\ & \stackrel{(\ref{3})}{=} & |\lambda_{i}| \| [\bar{p}_{i}, h(\bar{\varphi}_{i}(\mathbf{1}_{\bar{F}_{i}})) \bar{\varphi}_{i}(x)] \| + 2 \beta' \nonumber \\ & \stackrel{(\ref{8})}{\le} & |\lambda_{i}| \| [\bar{p}_{i}, h(\tilde{\varphi} \tilde{\iota} (\mathbf{1}_{\bar{F}_{i}})) \bar{\varphi}_{i}(x)] \| + 2 \beta' + 2 \beta' \nonumber \\ & \stackrel{(\ref{7'},\ref{41})}{\le} & |\lambda_{i}| \|[\bar{p}_{i}, h(\tilde{\varphi} \tilde{\iota}(\mathbf{1}_{\bar{F}_{i}})) \tilde{\varphi} \tilde{\iota}(\mathbf{1}_{\bar{F}_{i}}x)]\| + 4 \beta' + \frac{\beta'}{\eta} \nonumber \\ & = & 4 \beta' + \frac{\beta'}{\eta} \, , \label{6} \end{eqnarray} where for the last equation we have used that $\tilde{\varphi} \tilde{\iota}|_{\bar{F}_{i}}$ is an order zero map, whence the elements of $C^{*}(\tilde{\varphi}\tilde{\iota}(\mathbf{1}_{\bar{F}_{i}}))$ commute with those of 
$\tilde{\varphi}\tilde{\iota}(\bar{F}_{i})$ for each $i$ (cf.\ \cite{W4}, 1.2). Next note that, for $i = 1, \ldots,\bar{s}$, \begin{eqnarray*} \bar{p}_{i} & \stackrel{(\ref{43},\ref{44})}{\le} & g(\tilde{\varphi} \tilde{\iota}(\mathbf{1}_{\bar{F}_{i}})) \\ & \stackrel{(\ref{7},\ref{41})}{\le} & g(\bar{\varphi}_{i} (\mathbf{1}_{\bar{F}_{i}})) + \beta' \cdot \mathbf{1}_{A} \otimes \mathbf{1}_{{\mathcal Z}} \\ & \stackrel{(\ref{2})}{\le} & \sigma_{i}''(\mathbf{1}_{\bar{F}_{i}}) \otimes \mathbf{1}_{{\mathcal Z}} + \beta' \cdot \mathbf{1}_{A} \otimes \mathbf{1}_{{\mathcal Z}} \, . \end{eqnarray*} Therefore, if $\varphi_{i}'' \perp \varphi_{j}''$ for some $i, j \in \{1, \ldots , \bar{s}\}$, we have \begin{eqnarray} \label{5} \lefteqn{\| \bar{p}_{i} (\varphi_{j}''(x) \otimes \mathbf{1}_{{\mathcal Z}})\| } \nonumber \\ & \le & \|(\varphi_{j}''(x) \otimes \mathbf{1}_{{\mathcal Z}}) \bar{p}_{i} (\varphi_{j}''(x) \otimes \mathbf{1}_{{\mathcal Z}})\|^{\frac{1}{2}} \nonumber \\ & \stackrel{(\ref{39})}{\le} & \|(\varphi_{j}''(x) \otimes \mathbf{1}_{{\mathcal Z}}) \sigma_{i}''(\mathbf{1}_{\bar{F}_{i}})(\varphi_{j}''(x) \otimes \mathbf{1}_{{\mathcal Z}}) + \beta' \cdot (\varphi_{j}''(x) \otimes \mathbf{1}_{{\mathcal Z}})^{2}\|^{\frac{1}{2}} \nonumber \\ & \le & (\beta')^{\frac{1}{2}} \|x \| \; \forall \, 0 \neq x \in (\bar{F}_{j})_{+} \, . \end{eqnarray} For $i=1, \ldots,s$, define \begin{equation} \label{63} I(i) := \{j \in \{1, \ldots, \bar{s} \} \, | \, \mathbf{1}_{\bar{F}_{j}} \le \bar{\iota}(\mathbf{1}_{F_{i}}) \} \end{equation} and \begin{equation} \label{47} p_{i}' := \sum_{j \in I(i)} \bar{p}_{j} \, ; \end{equation} we have \begin{equation} \label{76} \sum_{i=1}^{s} p_{i}' = p \, . \end{equation} Note that if $j \neq k \in I(i)$, then \begin{equation} \label{46} \varphi_{j}'' \perp \varphi_{k}'' \, , \end{equation} since $\varphi''|_{\bar{\iota}(F_{i})}$ has order zero for all $i=1, \ldots,s$ by (\ref{45}).
For any $0 \neq x \in (F_{i})_{+}$ and $i=1, \ldots,s$, from (\ref{4}) we obtain \[ \|\varphi_{i}'(x) - \sum_{j \in I(i)} \varphi_{j}'' \circ \bar{\iota}(x) \|< \beta \|x\| \, , \] whence \begin{eqnarray} \label{49} \lefteqn{\|[p_{i}' , \varphi_{i}' (x) \otimes \mathbf{1}_{{\mathcal Z}}] \|} \nonumber \\ & < & \| [\sum_{j \in I(i)} \bar{p}_{j}, \sum_{j \in I(i)} \varphi_{j}'' (\mathbf{1}_{\bar{F}_{j}} \bar{\iota}(x)) \otimes \mathbf{1}_{{\mathcal Z}}] \| + 2 \beta \|x\| \nonumber \\ & \stackrel{(\ref{46},\ref{5})}{\le} & \| \sum_{j \in I(i)} [\bar{p}_{j}, \varphi_{j}'' (\mathbf{1}_{\bar{F}_{j}} \bar{\iota}(x)) \otimes \mathbf{1}_{{\mathcal Z}}] \| +(2 \bar{s}^{2} (\beta')^{\frac{1}{2}} + 2 \beta) \|x\| \nonumber \\ & \stackrel{(\ref{6})}{\le} & (\bar{s}(4 \beta' (1+\frac{1}{\eta})) + 2 \bar{s}^{2} (\beta')^{\frac{1}{2}} + 2 \beta) \|x\| \nonumber \\ & \stackrel{(\ref{47a},\ref{48})}{<} & \frac{\delta}{4} \|x\| \, . \end{eqnarray} Furthermore, \begin{eqnarray} \label{17} \lefteqn{\|[p_{i}' , \varphi_{i}(x) \otimes \mathbf{1}_{{\mathcal Z}}]\| } \nonumber \\ & \stackrel{(\ref{43},\ref{47})}{=} & \| p_{i}' ((\mathbf{1}_{A}-q) \varphi_{i}(x)) \otimes \mathbf{1}_{{\mathcal Z}} - (\varphi_{i}(x) (\mathbf{1}_{A} - q)) \otimes \mathbf{1}_{{\mathcal Z}} p_{i}'\| \nonumber \\ & \stackrel{(\ref{37})}{<} & \|[p_{i}', ((\mathbf{1}_{A}-q) \varphi_{i}(x)(\mathbf{1}_{A}-q)) \otimes \mathbf{1}_{{\mathcal Z}}] \| + 2 \gamma \|x\| \nonumber \\ & \stackrel{(\ref{13})}{\le} & \|[p_{i}' , \varphi_{i}'(x) \otimes \mathbf{1}_{{\mathcal Z}}] \| + 2 \gamma \|x\| + 2 \beta \|x\| \nonumber \\ & \stackrel{(\ref{49},\ref{50},\ref{48})}{<} & \left( \frac{\delta}{4} + \frac{\delta}{8} + \frac{\delta}{8} \right) \|x\| \nonumber \\ & = & \frac{\delta}{2} \|x\| \; \forall \, 0 \neq x \in (F_{i})_{+}, \, i=1, \ldots, s \, . 
\end{eqnarray} Next we check that, for $b \in \bar{{\mathcal G}}$ ($={\mathcal G} \cup \{a^{2} \, | \, a \in {\mathcal G}\})$, \begin{eqnarray} \label{52} \lefteqn{\|\varphi''\bar{\iota}\psi (b) + \varphi^{\times}\psi(b) - b\| } \nonumber \\ & \stackrel{(\ref{4})}{<} & \|\varphi'\psi(b)+\varphi^{\times}\psi(b) - b \| + \beta \nonumber \\ & \stackrel{(\ref{13})}{<} & \| (\mathbf{1}_{A}-q)\varphi \psi(b) (\mathbf{1}_{A}-q) + q \varphi \psi(b) q - b\| + 3\beta \nonumber \\ & \stackrel{(\ref{37})}{<} & \|\varphi \psi(b) - b \| + 3\beta + 2 \gamma \nonumber \\ & \stackrel{(\ref{51})}{<} & \frac{\eta^{6}}{(n+1)^{2}} + 3\beta + 2 \gamma \nonumber \\ & \stackrel{(\ref{48},\ref{50})}{<} & 2 \frac{\eta^{6}}{(n+1)^{2}} \, . \end{eqnarray} From (\ref{52}) and Lemma \ref{multiplicative-domain} (with $(\bar{F} \oplus F, \bar{\iota} \psi \oplus \psi, \varphi'' + \varphi^{\times})$ in place of $(F,\psi,\varphi)$) we see that \begin{equation} \label{53} \|\varphi''(\mathbf{1}_{\bar{F}^{(j)}}) \varphi''\bar{\iota}\psi(b) - \varphi''(\mathbf{1}_{\bar{F}^{(j)}} \bar{\iota}\psi(b))\| < 2 \cdot 2^{\frac{1}{2}} \frac{\eta^{3}}{n+1} \; \forall \, b \in {\mathcal G}, \, j=0, \ldots, n \, . \end{equation} Since the $\varrho(e_{j})$ are pairwise orthogonal, we even have \begin{eqnarray} \label{57} \lefteqn{\|\bar{\varphi}(\mathbf{1}_{\bar{F}})(\varphi'' \bar{\iota}\psi(b) \otimes \mathbf{1}_{{\mathcal Z}}) - \bar{\varphi}\bar{\iota} \psi(b)\|} \nonumber \\ & \stackrel{(\ref{40})}{=} & \|\sum_{j=0}^{n} (\varphi''(\mathbf{1}_{\bar{F}^{(j)}}) \varphi''\bar{\iota} \psi(b) - \varphi''(\mathbf{1}_{\bar{F}^{(j)}} \bar{\iota} \psi(b))) \otimes \varrho(e_{j+1}) \| \nonumber \\ & \stackrel{(\ref{53})}{<} & 4 \frac{\eta^{3}}{n+1} \; \forall \, b \in {\mathcal G} \, . 
\end{eqnarray} We are now prepared to compute \begin{eqnarray} \lefteqn{\|p(b \otimes \mathbf{1}_{{\mathcal Z}})p - \tilde{\sigma} \tilde{\iota} \bar{\iota} \psi(b) \|} \nonumber \\ & \stackrel{(\ref{51},\ref{54})}{\le} & \|p((\varphi'\psi(b) + \varphi^{\times}\psi(b)) \otimes \mathbf{1}_{{\mathcal Z}})p - \tilde{\sigma}\tilde{\iota}\bar{\iota}\psi(b)\| + \frac{\eta^6}{(n+1)^2} + \beta \nonumber \\ & \stackrel{(\ref{55},\ref{11})}{=} & \|p (\varphi'\psi(b) \otimes \mathbf{1}_{{\mathcal Z}})p - \tilde{\sigma}\tilde{\iota}\bar{\iota}\psi(b)\| + \frac{\eta^6}{(n+1)^2} + \beta \nonumber\\ & \stackrel{(\ref{4})}{\le} & \|p(\varphi''\bar{\iota}\psi(b) \otimes \mathbf{1}_{{\mathcal Z}})p - \tilde{\sigma}\tilde{\iota}\bar{\iota}\psi(b)\| + \frac{\eta^6}{(n+1)^2} + 2 \beta \nonumber \\ & \stackrel{(\ref{56})}{=} & \|p h(\tilde{\varphi}\tilde{\iota}(\mathbf{1}_{\bar{F}})) \tilde{\varphi}\tilde{\iota}(\mathbf{1}_{\bar{F}}) (\varphi'' \bar{\iota}\psi(b) \otimes \mathbf{1}_{{\mathcal Z}})p - \tilde{\sigma}\tilde{\iota}\bar{\iota}\psi(b)\| + \frac{\eta^6}{(n+1)^2} + 2 \beta \nonumber \\ & \stackrel{(\ref{7'},\ref{41})}{\le} & \|p h(\bar{\varphi}(\mathbf{1}_{\bar{F}})) \bar{\varphi}(\mathbf{1}_{\bar{F}}) (\varphi'' \bar{\iota}\psi(b) \otimes \mathbf{1}_{{\mathcal Z}})p - \tilde{\sigma}\tilde{\iota}\bar{\iota}\psi(b)\| \nonumber \\ & & + \frac{\eta^6}{(n+1)^2} + 2 \beta + \beta' + 2 \frac{\beta'}{\eta} \nonumber \\ & \stackrel{(\ref{57},\ref{41})}{\le} & \|p h(\bar{\varphi}(\mathbf{1}_{\bar{F}})) \bar{\varphi} \bar{\iota}\psi(b)p - \tilde{\sigma}\tilde{\iota}\bar{\iota}\psi(b)\| \nonumber \\ & & + \frac{\eta^6}{(n+1)^2} + 2 \beta + \beta' + 2 \frac{\beta'}{\eta} + 12 \frac{\eta^{2}}{n+1} \nonumber \\ & \stackrel{(\ref{7'},\ref{41})}{\le} & \|p h(\tilde{\varphi}\tilde{\iota}(\mathbf{1}_{\bar{F}})) \tilde{\varphi}\tilde{\iota} \bar{\iota}\psi(b)p - \tilde{\sigma}\tilde{\iota}\bar{\iota}\psi(b)\| \nonumber \\ & & + \frac{\eta^6}{(n+1)^2} + 2 \beta + \beta' + 2 \frac{\beta'}{\eta} + 
12 \frac{\eta^{2}}{n+1} + \beta' + 2 \frac{\beta'}{\eta} \nonumber \\ & \stackrel{(\ref{58})}{=} & \|p h(\tilde{\varphi}\tilde{\iota}(\mathbf{1}_{\bar{F}})) \tilde{\varphi}\tilde{\iota} (\mathbf{1}_{\bar{F}}) \tilde{\sigma}\tilde{\iota} \bar{\iota}\psi(b)p - \tilde{\sigma}\tilde{\iota}\bar{\iota}\psi(b)\| \nonumber \\ & & + \frac{\eta^6}{(n+1)^2} + 2 \beta + \beta' + 2 \frac{\beta'}{\eta} + 12 \frac{\eta^{2}}{n+1} + \beta' + 2 \frac{\beta'}{\eta} \nonumber \\ & \stackrel{(\ref{56},\ref{59})}{=} & 0+\frac{\eta^6}{(n+1)^2} + 2 \beta + 2 \beta' + 4 \frac{\beta'}{\eta} + 12 \frac{\eta^{2}}{n+1} \nonumber \\ & \stackrel{(\ref{48},\ref{47a})}{<} & \frac{3}{4} \eta \label{15} \end{eqnarray} for all $b \in {\mathcal G}$. If $\tau \in T(A)$ is a tracial state, then \begin{eqnarray} \lefteqn{\tau \otimes \bar{\tau} (p)} \nonumber \\ & \stackrel{(\ref{11})}{\ge} & \tau \otimes \bar{\tau} (\tilde{\varphi}(\mathbf{1}_{\tilde{F}})) - \eta \cdot \tau(\mathbf{1}_{A}-q) \nonumber \\ & \stackrel{(\ref{7'},\ref{60})}{\ge} & \tau \otimes \bar{\tau} (\bar{\varphi}(\mathbf{1}_{\bar{F}})) - (\eta + \beta') \cdot \tau(\mathbf{1}_{A}-q) \nonumber \\ & \stackrel{(\ref{42})}{=} & \sum_{j=0}^{n} \tau(\varphi''(\mathbf{1}_{\bar{F}^{(j)}})) \bar{\tau}(\varrho(e_{j+1})) - (\eta + \beta') \cdot \tau(\mathbf{1}_{A}-q) \nonumber \\ & \stackrel{(\ref{12})}{\ge} & \bar{\mu} \cdot \sum_{j=0}^{n} \tau(\varphi''(\mathbf{1}_{\bar{F}^{(j)}})) - (\eta + \beta') \cdot \tau(\mathbf{1}_{A}-q) \nonumber \\ & \stackrel{(\ref{4},\ref{61})}{\ge} & \bar{\mu} \cdot \tau (\varphi'(\mathbf{1}_{F})) - (\beta + \eta + \beta') \cdot \tau(\mathbf{1}_{A}-q) \nonumber \\ & \stackrel{(\ref{13})}{\ge} & \bar{\mu} \cdot \tau((\mathbf{1}_{A}-q) \varphi(\mathbf{1}_{F})(\mathbf{1}_{A}-q)) - (2 \beta + \eta + \beta') \cdot \tau(\mathbf{1}_{A}-q) \nonumber \\ & \stackrel{(\ref{51})}{\ge} & \bar{\mu} \cdot \tau(\mathbf{1}_{A}-q) - (\eta + 2 \beta + \eta + \beta') \cdot \tau(\mathbf{1}_{A}-q) \nonumber \\ &
\stackrel{(\ref{48},\ref{47a})}{\ge} & (\bar{\mu} - 4 \eta) \cdot \tau(\mathbf{1}_{A}-q) \nonumber \\ & \stackrel{(\ref{62})}{>} & (\mu + \eta) \cdot \tau(\mathbf{1}_{A}-q) \, . \label{16} \end{eqnarray} For $i \in \{1, \ldots,s\}$ we have \begin{eqnarray} \label{65} \sum_{j \in I(i)} \tilde{\varphi}\tilde{\iota} (\mathbf{1}_{\bar{F}_{j}}) & \stackrel{(\ref{9})}{\le} & \sum_{j \in I(i)} \bar{\varphi}(\mathbf{1}_{\bar{F}_{j}}) \nonumber \\ & \stackrel{(\ref{40})}{\le} & \sum_{j \in I(i)} (\sum_{k=0}^{n} \varphi''(\mathbf{1}_{\bar{F}_{j}} \mathbf{1}_{\bar{F}^{(k)}}) \otimes \mathbf{1}_{{\mathcal Z}}) \nonumber \\ & \stackrel{(\ref{63})}{=} & \varphi''(\bar{\iota}(\mathbf{1}_{F_{i}})) \otimes \mathbf{1}_{{\mathcal Z}} \nonumber \\ & \stackrel{(\ref{64})}{\le} & \varphi'(\mathbf{1}_{F_{i}}) \otimes \mathbf{1}_{{\mathcal Z}} \, , \end{eqnarray} whence \begin{eqnarray*} p_{i}' & \stackrel{(\ref{47})}{=} & \sum_{j \in I(i)} \bar{p}_{j} \\ & \stackrel{(\ref{43})}{=} & \sum_{j \in I(i)} \chi_{(\eta,1]}(\tilde{\varphi}\tilde{\iota}(\mathbf{1}_{\bar{F}_{j}})) \in \overline{(\varphi'(\mathbf{1}_{F_{i}})\otimes \mathbf{1}_{{\mathcal Z}})(A \otimes {\mathcal Z}) (\varphi'(\mathbf{1}_{F_{i}}) \otimes \mathbf{1}_{{\mathcal Z}})} \, . 
\end{eqnarray*} Even more, \begin{eqnarray*} \eta \cdot p_{i}' & \le & \sum_{j \in I(i)} \tilde{\varphi}\tilde{\iota}(\mathbf{1}_{\bar{F}_{j}}) \\ & \stackrel{(\ref{65})}{\le} & \varphi'(\mathbf{1}_{F_{i}}) \otimes \mathbf{1}_{{\mathcal Z}} \\ & \stackrel{(\ref{13})}{\le} & ((\mathbf{1}_{A}-q)\varphi(\mathbf{1}_{F_{i}})(\mathbf{1}_{A}-q) \otimes \mathbf{1}_{{\mathcal Z}} + q \varphi(\mathbf{1}_{F_{i}})q \otimes \mathbf{1}_{{\mathcal Z}}) + \beta \cdot \mathbf{1}_{A} \otimes \mathbf{1}_{{\mathcal Z}} \\ & \stackrel{(\ref{37})}{\le} & (2 \gamma + \beta) \cdot \mathbf{1}_{A} \otimes \mathbf{1}_{{\mathcal Z}} + \varphi(\mathbf{1}_{F_{i}}) \otimes \mathbf{1}_{{\mathcal Z}} \, , \end{eqnarray*} and it follows from Proposition \ref{almost-hereditary} that \begin{equation} \label{14} \mathrm{dist}(p_{i}', \overline{(\varphi(\mathbf{1}_{F_{i}}) \otimes \mathbf{1}_{{\mathcal Z}})(A \otimes {\mathcal Z})(\varphi(\mathbf{1}_{F_{i}}) \otimes \mathbf{1}_{{\mathcal Z}})}) \le \frac{3}{\eta} (2\gamma + \beta)^{\frac{1}{2}} \stackrel{(\ref{48},\ref{50})}{<} \frac{\delta}{2s} \, . \end{equation} From (\ref{7}) and the fact that $\tilde{\varphi}\tilde{\iota}$ is subordinate (in the sense of \cite{W4}, Definition 1.4) to the order zero map $\bar{\varphi}$ we know that, for $i \in \{1, \ldots, s \}$, \begin{eqnarray} \label{68} \|g(\tilde{\varphi}\tilde{\iota}\bar{\iota}(\mathbf{1}_{F_{i}})) - g(\bar{\varphi}\bar{\iota}(\mathbf{1}_{F_{i}}))\| & \le & \max_{j \in I(i)} \|g(\tilde{\varphi}\tilde{\iota}(\mathbf{1}_{\bar{F}_{j}})) - g(\tilde{\varphi}(\mathbf{1}_{\bar{F}_{j}}))\| \nonumber \\ & \stackrel{(\ref{32})}{<} & \beta' \, .
\end{eqnarray} For $i \in \{1, \ldots, \bar{s}\}$ we have \begin{eqnarray} \label{67} g(\bar{\varphi}(\mathbf{1}_{\bar{F}_{i}})) & \stackrel{(\ref{66})}{=} & g(\sigma_{i}''(\mathbf{1}_{\bar{F}_{i}}) \otimes \lambda_{i} \cdot \varrho(e_{\bar{\jmath}(i)+1})) \nonumber \\ & \stackrel{(\ref{2})}{=} & \sigma_{i}'' (\mathbf{1}_{\bar{F}_{i}}) \otimes g(\lambda_{i} \cdot \varrho(e_{\bar{\jmath}(i)+1})) \nonumber \\ & \le & \sigma_{i}''(\mathbf{1}_{\bar{F}_{i}}) \otimes g(\lambda_{i}) \cdot \mathbf{1}_{{\mathcal Z}} \nonumber \\ & = & g(\sigma_{i}''(\mathbf{1}_{\bar{F}_{i}}) \otimes \lambda_{i} \cdot \mathbf{1}_{{\mathcal Z}}) \nonumber \\ & \stackrel{(\ref{2})}{=} & g(\varphi_{i}''(\mathbf{1}_{\bar{F}_{i}})) \otimes \mathbf{1}_{{\mathcal Z}} \, , \end{eqnarray} where the inequality follows from the fact that $g(\lambda \cdot t) \le g(\lambda)$ for all $0\le \lambda,t \le 1$: the latter implies that the constant function $g(\lambda) \cdot \mathbf{1}_{[0,1]}$ on $[0,1]$ dominates the function $(t \mapsto g(\lambda \cdot t)) \in {\mathcal C}([0,1])$; Gelfand's theorem now yields $g(\lambda \cdot a) \le g(\lambda) \cdot \mathbf{1}$ for any $0\le \lambda \le 1$ and any $0\le a \le \mathbf{1}$ in a unital $C^{*}$-algebra. \\ Since $\bar{\varphi}$ (by construction) and $\varphi''|_{\bar{\iota}(F_{i})}$ (by (\ref{45})) have order zero, we even have \begin{eqnarray} \label{69} g(\bar{\varphi}\bar{\iota}(\mathbf{1}_{F_{i}})) & = & \sum_{j \in I(i)} g(\bar{\varphi}(\mathbf{1}_{\bar{F}_{j}})) \nonumber \\ & \stackrel{(\ref{67})}{\le} & \sum_{j \in I(i)} g(\varphi''_{j}(\mathbf{1}_{\bar{F}_{j}})) \otimes \mathbf{1}_{{\mathcal Z}} \nonumber \\ & = & g(\varphi''\bar{\iota}(\mathbf{1}_{F_{i}})) \otimes \mathbf{1}_{{\mathcal Z}} \; \forall \, i \in \{1, \ldots, s\} \, .
\end{eqnarray} We conclude that \begin{eqnarray} \label{72} g(\tilde{\varphi}\tilde{\iota} \bar{\iota}(\mathbf{1}_{F_{i}})) & \stackrel{(\ref{68})}{\le} & g(\bar{\varphi}\bar{\iota}(\mathbf{1}_{F_{i}})) + \beta' \cdot \mathbf{1}_{A} \otimes \mathbf{1}_{{\mathcal Z}} \nonumber \\ & \stackrel{(\ref{69})}{\le} & g(\varphi''\bar{\iota}(\mathbf{1}_{F_{i}})) \otimes \mathbf{1}_{{\mathcal Z}} + \beta' \cdot \mathbf{1}_{A} \otimes \mathbf{1}_{{\mathcal Z}} \nonumber \\ & \stackrel{(\ref{70})}{\le} & g(\varphi'(\mathbf{1}_{F_{i}})) \otimes \mathbf{1}_{{\mathcal Z}} + (\beta' + \frac{\delta^{2}}{12}) \cdot \mathbf{1}_{A}\otimes \mathbf{1}_{{\mathcal Z}} \nonumber \\ & \stackrel{(\ref{71})}{\le} & g(\varphi(\mathbf{1}_{F_{i}})) \otimes \mathbf{1}_{{\mathcal Z}} + (\beta' + \frac{\delta^{2}}{12} + \beta) \cdot \mathbf{1}_{A}\otimes \mathbf{1}_{{\mathcal Z}} \end{eqnarray} for $i \in \{1, \ldots, s\}$. Now since \[ p_{i}' \stackrel{(\ref{47},\ref{43})}{=} g(\tilde{\varphi}\tilde{\iota}\bar{\iota}(\mathbf{1}_{F_{i}}))p_{i}' \] for $i \in \{1, \ldots, s\}$, we have \begin{eqnarray*} \|p_{i}' (g(\varphi(\mathbf{1}_{F_{i}})) \otimes \mathbf{1}_{{\mathcal Z}}) - p_{i}'\|^{2} & \le & \|p_{i}' (\mathbf{1}_{A \otimes {\mathcal Z}} - g(\varphi(\mathbf{1}_{F_{i}})) \otimes \mathbf{1}_{{\mathcal Z}}) p_{i}' \| \\ & \stackrel{(\ref{72})}{\le} & \beta' + \frac{\delta^{2}}{12} + \beta \\ & \stackrel{(\ref{47a})}{<} & \frac{\delta^{2}}{4} \, , \end{eqnarray*} hence \begin{equation} \label{73} \|p_{i}' (g(\varphi(\mathbf{1}_{F_{i}})) \otimes \mathbf{1}_{{\mathcal Z}}) - p_{i}'\| < \frac{\delta}{2} \end{equation} for each $i \in \{1, \ldots , s\}$. 
By \cite{TW1}, Remark 2.7, there is a unital $*$-homomorphism $\theta: A \otimes {\mathcal Z} \to A$ satisfying \begin{equation} \label{75} \|\theta(b \otimes \mathbf{1}_{{\mathcal Z}}) - b\| < \frac{\eta}{4} \; \forall \, b \in {\mathcal G} \, , \end{equation} \begin{equation} \label{19} \| \theta (\varphi(x) \otimes \mathbf{1}_{{\mathcal Z}}) - \varphi(x)\| < \frac{\delta}{4} \|x\| \; \forall \, 0 \neq x \in F_{+} \, \end{equation} \begin{equation} \label{20} \|\theta((\mathbf{1}_{A}-q) \otimes \mathbf{1}_{{\mathcal Z}}) - (\mathbf{1}_{A}-q)\| < \frac{\eta}{\mu + \eta} \cdot \min_{\tau \in T(A)} \{\tau(\mathbf{1}_{A}-q)\} \end{equation} and \begin{equation} \label{21} \|\theta(g(\varphi(\mathbf{1}_{F_{i}})) \otimes \mathbf{1}_{{\mathcal Z}}) - g(\varphi(\mathbf{1}_{F_{i}})) \| < \frac{\delta}{2} \; \forall \, i \in \{1, \ldots, s\} \end{equation} (note that $\min_{\tau \in T(A)} \{\tau(\mathbf{1}_{A} - q) \}$ exists and is nonzero since $A$ is unital and simple, whence $T(A)$ is compact and $\tau(\mathbf{1}_{A} - q)>0 \; \forall \, \tau \in T(A)$).\\ Using (\ref{14}), it is straightforward to check that we may even assume that \begin{equation} \label{18} \mathrm{dist}(\theta(p_{i}'),\overline{\varphi(\mathbf{1}_{F_{i}})A\varphi(\mathbf{1}_{F_{i}})}) < \frac{\delta}{s} \; \forall \, i \in \{1, \ldots, s\} \, . \end{equation} Define a finite-dimensional $C^{*}$-algebra $C \subset A$ by \[ C:= \theta \tilde{\sigma}(\tilde{F}) \] and projections $p_{1}, \ldots, p_{s} \in A$ by \begin{equation} \label{77} p_{i}:= \theta(p_{i}'), i=1, \ldots,s \, . \end{equation} It is clear from our construction that $p_{i} \in C \; \forall \, i$ and that \[ \sum_{i=1}^{s} p_{i} \stackrel{(\ref{76})}{=} \theta(p) \stackrel{(\ref{59})}{=} \mathbf{1}_{C} \, . 
\] We proceed to check assertions (i), (ii) and (iii) of the lemma: \begin{eqnarray*} \lefteqn{\mathrm{dist}(\mathbf{1}_{C}b\mathbf{1}_{C},C)}\\ & \stackrel{(\ref{75})}{\le} & \mathrm{dist}(\theta(p)\theta(b \otimes \mathbf{1}_{{\mathcal Z}}) \theta(p), \theta(\tilde{\sigma}(\tilde{F}))) + \frac{\eta}{4} \\ & \le & \|p (b \otimes \mathbf{1}_{{\mathcal Z}}) p - \tilde{\sigma} \tilde{\iota} \bar{\iota} \psi(b) \| + \frac{\eta}{4} \\ & \stackrel{(\ref{15})}{<} & \eta \; \forall \, b \in {\mathcal G} \, . \end{eqnarray*} If $\tau \in T(A)$, then $\tau \circ \theta \in T(A \otimes {\mathcal Z})$, whence there is $\tau' \in T(A)$ such that $\tau \circ \theta = \tau' \otimes \bar{\tau}$. Therefore, \begin{eqnarray*} \tau(\mathbf{1}_{C}) & \stackrel{(\ref{76})}{=} & \tau \circ \theta(p) \\ & = & (\tau' \otimes \bar{\tau})(p)\\ & \stackrel{(\ref{16})}{\ge} & (\mu + \eta) \tau'(\mathbf{1}_{A} - q) \\ & = & (\mu + \eta) (\tau' \otimes \bar{\tau})((\mathbf{1}_{A}-q) \otimes \mathbf{1}_{{\mathcal Z}}) \\ & = & (\mu + \eta) \tau \circ \theta ((\mathbf{1}_{A}-q) \otimes \mathbf{1}_{{\mathcal Z}}) \\ & \stackrel{(\ref{20})}{>} & (\mu + \eta)(\tau(\mathbf{1}_{A}-q) - \frac{\eta}{\mu + \eta} \tau(\mathbf{1}_{A}-q)) \\ & = & \mu \tau(\mathbf{1}_{A}-q) \, . 
\end{eqnarray*} We also have \begin{eqnarray*} \|[p_{i}, \varphi(\mathbf{1}_{F_{i}} x)]\| & \stackrel{(\ref{77},\ref{19})}{\le} & \|[\theta(p_{i}'), \theta(\varphi(\mathbf{1}_{F_{i}} x) \otimes \mathbf{1}_{{\mathcal Z}})]\| + \frac{\delta}{2} \|x\| \\ & \le & \|[p_{i}', \varphi(\mathbf{1}_{F_{i}} x) \otimes \mathbf{1}_{{\mathcal Z}}]\| + \frac{\delta}{2} \|x\| \\ & \stackrel{(\ref{17})}{<} & \delta \|x\| \end{eqnarray*} for all $0 \neq x \in F_{+}$, \[ \mathrm{dist}(p_{i}, \overline{\varphi(\mathbf{1}_{F_{i}})A \varphi(\mathbf{1}_{F_{i}})}) \stackrel{(\ref{18})}{<} \frac{\delta}{s} \] and \begin{eqnarray} \|p_{i} g(\varphi(\mathbf{1}_{F_{i}})) - p_{i}\| & \stackrel{(\ref{21})}{<} & \|\theta(p_{i}') \theta(g(\varphi(\mathbf{1}_{F_{i}}))) - \theta(p_{i}')\| + \frac{\delta}{2} \nonumber \\ & \le & \|p_{i}' g (\varphi(\mathbf{1}_{F_{i}})) - p_{i}' \| + \frac{\delta}{2} \nonumber \\ & \stackrel{(\ref{73})}{<} & \delta \end{eqnarray} for $i=1, \ldots,s$. We are done. \end{nproof} \end{nummer} \section{The proof of Theorem \ref{lfdrtr0}} This section is entirely devoted to the proof of Theorem \ref{lfdrtr0}, following the outline of Section 3. Let $A$ be separable, simple, unital and ${\mathcal Z}$-stable with real rank zero and locally finite decomposition rank. Since $A$ has real rank zero, every nonzero hereditary subalgebra contains a nontrivial projection; since $A$ is nuclear and ${\mathcal Z}$-stable, it satisfies Blackadar's second fundamental comparability property by \cite{R2}, Corollary 4.6. Therefore, it will suffice to show that $A$ satisfies the hypotheses of Proposition \ref{wu-tr0}. So let $\varepsilon>0$ and a finite subset ${\mathcal F} \subset A$ be given. Without loss of generality we may assume that $\mathbf{1}_{A} \in {\mathcal F}$ and that the elements of ${\mathcal F}$ are positive and normalized. 
Moreover, since $A$ has locally finite decomposition rank, we can assume that ${\mathcal F} \subset {\mathcal B}_{1}(B)_{+}$, where $B \subset A$ is a unital $C^{*}$-subalgebra with $\mathrm{dr}\, B=n$ for some $n \in {\mathbb{N}}$. \\ Fix some $0 < \mu < \frac{1}{2(n+1)}$. For $k \in {\mathbb{N}}$, define \begin{equation} \label{80} \zeta_{k}:=\mu \sum_{l=0}^{k} (1-\mu)^{l} \, , \end{equation} then \[ \zeta_{k} \stackrel{k \to \infty}{\longrightarrow} \mu \sum_{l=0}^{\infty} (1-\mu)^{l} = \mu \frac{1}{1-(1-\mu)} = 1 \, , \] whence there is $K \in {\mathbb{N}}$ such that \begin{equation} \label{81} \zeta_{K}>1-\varepsilon \, . \end{equation} Define ${\mathcal G}_{0}:={\mathcal F}$ and choose $\eta_{0}>0$ such that \[ \eta_{0}< \min \left\{\frac{\varepsilon}{8}, \, \frac{1}{10}\left( \frac{1}{2(n+1)} - \mu \right) , \, \frac{1}{48}\right\} \, . \] Apply Corollary \ref{excisible-appr} (with ${\mathcal G}_{0}$ in place of ${\mathcal G}$ and $\eta_{0}$ in place of $\eta$) to obtain an $n$-decomposable c.p.\ approximation $(F_{0},\psi_{0},\varphi_{0})$ and $0 < \delta_{0}< \frac{1}{2}$ such that a) and b) of Corollary \ref{excisible-appr} hold. Now the hypotheses of Lemma \ref{excision} are fulfilled (with $(F_{0},\psi_{0},\varphi_{0})$, $\eta_{0}$, ${\mathcal G}_{0}$ and $\delta_{0}$ in place of $(F,\psi,\varphi)$, $\eta$, ${\mathcal G}$ and $\delta$); note that (\ref{51}) is satisfied by Corollary \ref{excisible-appr}a). We obtain $\gamma_{0}>0$ such that the assertion of Lemma \ref{excision} holds. Next, suppose ${\mathcal G}_{k}$, $\eta_{k}$, $(F_{k},\psi_{k},\varphi_{k})$, $\delta_{k}$ and $\gamma_{k}$ have been constructed for some $k \in {\mathbb{N}}$. 
Define ${\mathcal G}_{k+1}:= {\mathcal G}_{k} \cup \varphi_{k}({\mathcal B}_{1}(F_{k})_{+})$ and choose $\eta_{k+1}>0$ such that \[ \eta_{k+1}<\frac{1}{2^{k+1}} \min\left\{\frac{\varepsilon}{8}, \, \gamma_{k}, \delta_{k}\right\} \] and \[ \eta_{k+1} < \min \left\{\frac{\varepsilon}{8}, \, \frac{1}{10} \left( \frac{1}{2(n+1)} - \mu \right) , \, \frac{1}{48} \right\} \, . \] From Corollary \ref{excisible-appr} (with ${\mathcal G}_{k+1}$ in place of ${\mathcal G}$ and $\eta_{k+1}$ in place of $\eta$) we obtain an $n$-decomposable c.p.\ approximation $(F_{k+1},\psi_{k+1},\varphi_{k+1})$ of $B$ and $0 < \delta_{k+1}< \frac{1}{2} $ such that a) and b) of Corollary \ref{excisible-appr} hold. \\ Again, the hypotheses of Lemma \ref{excision} are fulfilled (with $(F_{k+1},\psi_{k+1},\varphi_{k+1})$, $\eta_{k+1}$, ${\mathcal G}_{k+1}$ and $\delta_{k+1}$ in place of $(F,\psi,\varphi)$, $\eta$, ${\mathcal G}$ and $\delta$), so we obtain $\gamma_{k+1}>0$ such that the assertion of Lemma \ref{excision} holds; we may assume that $\gamma_{k+1}<\gamma_{k}$. Induction yields compact subsets ${\mathcal G}_{k} \subset B$, positive numbers $\eta_{k}$, $\delta_{k}$, $\gamma_{k}$ and $n$-decomposable c.p.\ approximations $(F_{k},\psi_{k},\varphi_{k})$ for each $k \in {\mathbb{N}}$. By construction, we have in particular that \begin{equation} \label{24} \sum_{l=0}^{\infty} \eta_{l}< \frac{\varepsilon}{2} \, , \; \sum_{l=k+1}^{K} \eta_{l} < \gamma_{k} \end{equation} and \[ {\mathcal G}_{k} = {\mathcal F} \cup \bigcup_{l=0}^{k-1} \varphi_{l}({\mathcal B}_{1}(F_{l})_{+}) \subset {\mathcal G}_{k+1} \, . \] For each $k$, we denote the summands of $F_{k}$ by $F_{k,i}$, $i=1, \ldots,s_{k}$; in other words, we write $F_{k}= \bigoplus_{i=1}^{s_{k}}F_{k,i}$ with matrix algebras $F_{k,i}$. 
\\ Let $q_{K} \in A$ be the zero projection; then \[ \|[q_{K},\varphi_{K}(x)]\|=0 < \gamma_{K} \|x\| \; \forall \, 0 \neq x \in (F_{K})_{+} \] and by Lemma \ref{excision} there is a finite-dimensional $C^{*}$-subalgebra \[ C_{K} \subset (\mathbf{1}_{A}-q_{K}) A (\mathbf{1}_{A}-q_{K}) = A \] satisfying \begin{itemize} \item[(i)] $\mathrm{dist}(\mathbf{1}_{C_{K}} b \mathbf{1}_{C_{K}},C_{K})< \eta_{K} \; \forall \, b \in {\mathcal G}_{K}$ \item[(ii)] $\tau(\mathbf{1}_{C_{K}}) \ge \mu \cdot \tau(\mathbf{1}_{A}-q_{K})=\mu \; \forall \, \tau \in T(A)$ \item[(iii)] the projection $\mathbf{1}_{C_{K}}$ can be written as a sum of $s_{K}$ pairwise orthogonal projections $p_{K,1}, \ldots,p_{K,s_{K}} \in C_{K}$, $\mathbf{1}_{C_{K}}= \sum_{i=1}^{s_{K}}p_{K,i}$, satisfying \[ \|[p_{K,i}, \varphi_{K}(\mathbf{1}_{F_{K,i}}x)]\| < \delta_{K}\|x\| \; \forall \, 0 \neq x \in (F_{K})_{+} \, , \] \[ \|p_{K,i} g_{\frac{\eta_{K}}{2},\eta_{K}}(\varphi_{K}(\mathbf{1}_{F_{K,i}})) - p_{K,i}\|<\delta_{K} \] and \[ \mathrm{dist}(p_{K,i} , \overline{\varphi_{K}(\mathbf{1}_{F_{K,i}})A\varphi_{K}(\mathbf{1}_{F_{K,i}})} ) < \frac{\delta_{K}}{s_{K}} \] for $i=1, \ldots,s_{K}$. 
\end{itemize} Suppose that, for some $k \in \{1, \ldots,K\}$, we have already constructed pairwise orthogonal finite-dimensional $C^{*}$-subalgebras $C_{l} \subset A$ and projections \begin{equation} \label{79} q_{l}=\sum_{m=l+1}^{K} \mathbf{1}_{C_{m}} \in A \end{equation} for $l=k, \ldots,K$, which satisfy \[ C_{l} \subset (\mathbf{1}_{A}-q_{l})A(\mathbf{1}_{A}-q_{l}) \, , \; \|[q_{l},\varphi_{l}(x)]\|< \gamma_{l} \|x\| \; \forall \, 0 \neq x \in (F_{l})_{+} \] and \begin{itemize} \item[(i')] $\mathrm{dist}(\mathbf{1}_{C_{l}} b \mathbf{1}_{C_{l}},C_{l})< \eta_{l} \; \forall \, b \in {\mathcal G}_{l}$ \item[(ii')] $\tau(\mathbf{1}_{C_{l}}) \ge \mu \cdot \tau(\mathbf{1}_{A}-q_{l}) \; \forall \, \tau \in T(A)$ \item[(iii')] the projection $\mathbf{1}_{C_{l}}$ can be written as a sum of $s_{l}$ pairwise orthogonal projections $p_{l,1}, \ldots,p_{l,s_{l}} \in C_{l}$, $\mathbf{1}_{C_{l}}= \sum_{i=1}^{s_{l}}p_{l,i}$, satisfying \[ \|[p_{l,i}, \varphi_{l}(\mathbf{1}_{F_{l,i}}x)]\| < \delta_{l}\|x\| \; \forall \, 0 \neq x \in (F_{l})_{+} \, , \] \[ \|p_{l,i} g_{\frac{\eta_{l}}{2},\eta_{l}}(\varphi_{l}(\mathbf{1}_{F_{l,i}})) - p_{l,i}\|<\delta_{l} \] and \[ \mathrm{dist}(p_{l,i} , \overline{\varphi_{l}(\mathbf{1}_{F_{l,i}})A\varphi_{l}(\mathbf{1}_{F_{l,i}})}) < \frac{\delta_{l}}{s_{l}} \] for $i=1, \ldots,s_{l}$. \end{itemize} Now (iii') and Corollary \ref{excisible-appr}b) imply that \begin{equation} \label{23} \|[\mathbf{1}_{C_{l}},b]\| < \eta_{l} \; \forall \, b \in {\mathcal G}_{l}, \, l=k, \ldots, K\, . 
\end{equation} Set \[ q_{k-1}:=q_{k}+\mathbf{1}_{C_{k}} = \sum_{l=k}^{K} \mathbf{1}_{C_{l}} \, , \] then $q_{k-1}$ is a projection since $q_{k} \perp \mathbf{1}_{C_{k}}$ and \begin{eqnarray*} \frac{1}{\|x\|} \cdot \|[q_{k-1},\varphi_{k-1}(x)]\| & \le & \sum_{l=k}^{K} \|[\mathbf{1}_{C_{l}},\frac{1}{\|x\|} \cdot \varphi_{k-1}(x)]\| \\ & \stackrel{(\ref{23})}{<} & \sum_{l=k}^{K} \eta_{l} \\ & \stackrel{(\ref{24})}{\le} & \gamma_{k-1} \; \forall \, 0\neq x \in (F_{k-1})_{+} \, , \end{eqnarray*} since \[ \varphi_{k-1}({\mathcal B}_{1}((F_{k-1})_{+})) \subset {\mathcal G}_{l} \; \forall \, l=k, \ldots,K\, . \] Now by Lemma \ref{excision} there is a finite-dimensional $C^{*}$-subalgebra \[ C_{k-1} \subset (\mathbf{1}_{A}-q_{k-1})A(\mathbf{1}_{A}-q_{k-1}) \] such that \begin{itemize} \item[(i'')] $\mathrm{dist}(\mathbf{1}_{C_{k-1}} b \mathbf{1}_{C_{k-1}},C_{k-1})< \eta_{k-1} \; \forall \, b \in {\mathcal G}_{k-1}$ \item[(ii'')] $\tau(\mathbf{1}_{C_{k-1}}) \ge \mu \cdot \tau(\mathbf{1}_{A}-q_{k-1}) \; \forall \, \tau \in T(A)$ \item[(iii'')] the projection $\mathbf{1}_{C_{k-1}}$ can be written as a sum of $s_{k-1}$ pairwise orthogonal projections $p_{k-1,1}, \ldots,p_{k-1,s_{k-1}} \in C_{k-1}$, $\mathbf{1}_{C_{k-1}}= \sum_{i=1}^{s_{k-1}}p_{k-1,i}$, satisfying \[ \|[p_{k-1,i}, \varphi_{k-1}(\mathbf{1}_{F_{k-1,i}}x)]\| < \delta_{k-1}\|x\| \; \forall \, 0 \neq x \in (F_{k-1})_{+} \, , \] \[ \|p_{k-1,i} g_{\frac{\eta_{k-1}}{2},\eta_{k-1}}(\varphi_{k-1}(\mathbf{1}_{F_{k-1,i}})) - p_{k-1,i}\|<\delta_{k-1} \] and \[ \mathrm{dist}(p_{k-1,i} , \overline{\varphi_{k-1}(\mathbf{1}_{F_{k-1,i}})A\varphi_{k-1}(\mathbf{1}_{F_{k-1,i}})})< \frac{\delta_{k-1}}{s_{k-1}} \] for $i=1, \ldots,s_{k-1}$. \end{itemize} Induction yields pairwise orthogonal finite-dimensional $C^{*}$-subalgebras $C_{k}\subset A$ and projections $q_{k} \in A$ satisfying $q_{k}= \sum_{m=k+1}^{K} \mathbf{1}_{C_{m}}$ and (i'), (ii') and (iii') above for $k=0, \ldots, K$ in place of $l$. 
Note that (iii') and Corollary \ref{excisible-appr} b) imply that (\ref{23}) holds for all $l=0,\ldots,K$.\\ Define a finite-dimensional $C^{*}$-subalgebra $D$ of $A$ by \[ D:= \bigoplus_{k=0}^{K} C_{k} \, . \] We proceed to check properties (i), (ii) and (iii) of Proposition \ref{wu-tr0}. First, we have for any $b \in {\mathcal F}$ \begin{eqnarray*} \|[\mathbf{1}_{D},b]\| & \le & \sum_{k=0}^{K} \|[\mathbf{1}_{C_{k}},b]\| \\ & \stackrel{(\ref{23})}{<} & \sum_{k=0}^{K} \eta_{k} \\ & \stackrel{(\ref{24})}{<} & \varepsilon \, , \end{eqnarray*} since ${\mathcal F} \subset {\mathcal G}_{k}$ for $k=0, \ldots, K$ and since (\ref{23}) holds for $l = 0, \ldots, K$. Similarly, we obtain \begin{eqnarray*} \lefteqn{\mathrm{dist}(\mathbf{1}_{D}b\mathbf{1}_{D},D)}\\ & = & \mathrm{dist}((\sum_{k=0}^{K}\mathbf{1}_{C_{k}})(\sum_{k=0}^{K} b \mathbf{1}_{C_{k}}),D) \\ & \le & \mathrm{dist}((\sum_{k=0}^{K}\mathbf{1}_{C_{k}})(\sum_{k=0}^{K} \mathbf{1}_{C_{k}} b \mathbf{1}_{C_{k}}),D) + \sum_{k=0}^{K} \|[b,\mathbf{1}_{C_{k}}]\| \\ & \stackrel{(\ref{23})}{<} & \mathrm{dist}(\sum_{k=0}^{K}(\mathbf{1}_{C_{k}}b\mathbf{1}_{C_{k}}),D) + \sum_{k=0}^{K} \eta_{k} \\ & = & \max_{k=0, \ldots,K} (\mathrm{dist}(\mathbf{1}_{C_{k}}b \mathbf{1}_{C_{k}},C_{k})) + \sum_{k=0}^{K} \eta_{k} \\ & \stackrel{({\small \mbox{i'}})}{<} & \max_{k=0, \ldots,K}(\eta_{k}) + \sum_{k=0}^{K} \eta_{k} \\ & \stackrel{(\ref{24})}{<} & \varepsilon \end{eqnarray*} for any $b \in {\mathcal F}$. Finally, we show by induction that \begin{equation} \label{78} \tau \left(\sum_{l=0}^{k} \mathbf{1}_{C_{K-l}} \right) \ge \zeta_{k} \end{equation} for $k=0, \ldots, K$ and any $\tau \in T(A)$. From Lemma \ref{excision}(ii) we see that \[ \tau(\mathbf{1}_{C_{K}}) \ge \mu \cdot \tau(\mathbf{1}_{A}-q_{K}) = \mu \cdot \tau(\mathbf{1}_{A}) = \mu = \zeta_{0} \; \forall \, \tau \in T(A) \, , \] so (\ref{78}) holds for $k=0$. Next, suppose we have shown (\ref{78}) for some $k \in \{0, \ldots, K-1\}$ and all $\tau \in T(A)$. 
Then, \begin{eqnarray*} \lefteqn{\tau \left(\sum_{l=0}^{k+1} \mathbf{1}_{C_{K-l}} \right)} \\ & = & \tau(\mathbf{1}_{C_{K-(k+1)}}) + \tau\left( \sum_{l=0}^{k} \mathbf{1}_{C_{K-l}}\right) \\ & \stackrel{({\small \mbox{ii'}})}{\ge} & \mu \cdot \tau(\mathbf{1}_{A} -q_{K-(k+1)}) + \tau\left( \sum_{l=0}^{k} \mathbf{1}_{C_{K-l}}\right) \\ & \stackrel{(\ref{79})}{=} & \mu \cdot \tau \left(\mathbf{1}_{A} - \sum_{l=K-(k+1)+1}^{K} \mathbf{1}_{C_{l}} \right) + \tau\left( \sum_{l=0}^{k} \mathbf{1}_{C_{K-l}}\right) \\ & = & \mu \cdot \tau(\mathbf{1}_{A}) + (1 - \mu) \cdot \tau\left( \sum_{l=0}^{k} \mathbf{1}_{C_{K-l}}\right) \\ & \stackrel{(\ref{78})}{\ge} & \mu + (1-\mu) \zeta_{k}\\ & \stackrel{(\ref{80})}{=} & \zeta_{k+1} \end{eqnarray*} for all $\tau \in T(A)$. Therefore, (\ref{78}) holds for all $k=0, \ldots, K$ and $\tau \in T(A)$. In particular, \[ \tau(\mathbf{1}_{D}) = \tau \left( \sum_{l=0}^{K} \mathbf{1}_{C_{K-l}} \right) \ge \zeta_{K} \stackrel{(\ref{81})}{>} 1 - \varepsilon \, . \] We have now shown that $D$ satisfies (i), (ii) and (iii) of Proposition \ref{wu-tr0}, whence $A$ has tracial rank zero. This completes the proof of Theorem \ref{lfdrtr0}.
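For reference, the partial sums defined in (\ref{80}) admit a simple closed form (a standard geometric-series identity, recorded here for the reader's convenience): \[ \zeta_{k} \, = \, \mu \sum_{l=0}^{k} (1-\mu)^{l} \, = \, \mu \cdot \frac{1-(1-\mu)^{k+1}}{1-(1-\mu)} \, = \, 1-(1-\mu)^{k+1} \, , \] so that $\zeta_{k+1} = \mu + (1-\mu)\zeta_{k}$, which is precisely the recursion appearing in the induction step above; moreover, (\ref{81}) holds for any $K$ with $(1-\mu)^{K+1} < \varepsilon$.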
\section{Introduction} The first stars are thought to form at $z \gtrsim 15$, with the first galaxies following at $z \sim 10$ \citep[e.g.,][]{bro99,abe02,bro11}. The chemical abundances of these first galaxies are unknown. If those abundances could be measured, then they would constrain the properties of metal-free Population III stars, the early chemical evolution of galaxies, and the reionization of the universe. Metal-poor stars in the Milky Way provide a local link to this high-redshift universe through the elemental abundances of their photospheres. As the number of known metal-poor stars with detailed chemical abundance measurements has grown, it has become possible to homogeneously analyze large samples to search for subtle trends \citep[e.g.,][]{cay04,bon09,nor13a,nor13b,yon13a,yon13b,roe14}. It is tempting to assert that these metal-poor stars in the halo are the direct descendants of the first stars. This is not necessarily the case though, as metal-poor stars form over a range of redshift in halos of varying mass and environment. Likewise, stars at a given redshift form with a range of metallicity. The examination of other properties beyond metallicity are therefore necessary to identify the stars in the Milky Way that formed at the highest redshifts. \citet{tum10} showed that because galaxies form from the inside-out, the oldest stars at a given metallicity are found near the center of a halo on the most tightly-bound orbits. Indeed, near the center of a Milky Way-analog a large fraction of stars with $-3 \lesssim \mathrm{[Fe/H]} \lesssim -2$ formed at $z \gtrsim 6$, while 20--40\% of stars with $-4 \lesssim \mathrm{[Fe/H]} \lesssim -3$ formed at $10 \lesssim z \lesssim 15$. Consequently, the metal-poor stellar population in the inner few kpc of the Galaxy---the bulge---is the best place to search for truly ancient stars, including low-mass Population III stars that may have survived to the present day. 
Large-scale spectroscopic surveys of the bulge have shown that while metal-poor stars in the bulge are quite rare, they do exist. The Abundances and Radial Velocity Galactic Origins (ARGOS) survey of \citet{fre13} and \citet{nes13} identified 16 stars with $\mathrm{[Fe/H]} \lesssim -2.0$ in a sample of 14,150 stars within 3.5 kpc of the Galactic center. The most metal-poor star in their sample has $\mathrm{[Fe/H]} \approx -2.6$. As part of the third phase of the Sloan Digital Sky Survey, the Apache Point Observatory Galactic Evolution Experiment (APOGEE) collected $H$-band spectra for 2,403 giant stars in outer bulge fields and identified two stars with $\mathrm{[Fe/H]} \approx -2.1$ \citep{gar13}. Ground-based objective prism surveys for metal-poor stars in the bulge are impractical due to crowding and strong absolute and differential reddening. For this reason, searches for metal-poor stars have historically avoided the inner regions of our own Galaxy. Recently though, the Extremely Metal-poor BuLge stars with AAOmega (EMBLA) survey has successfully used narrow-band SkyMapper $v$-band photometry \citep{bes11} in the \ion{Ca}{2} H \& K region to pre-select candidate metal-poor stars for follow-up spectroscopy. In a sample of more than 8,600 stars, \citet{how14} found in excess of 300 stars with $\mathrm{[Fe/H]} \lesssim -2.0$---including four stars with $-2.7 \lesssim \mathrm{[Fe/H]} \lesssim -2.5$. Still, strong absolute and significant differential reddening limits the efficiency of near-UV-based selections for metal-poor stars in the bulge and restricts their applicability to outer-bulge regions. In \citet{sch14}, we described a new technique to identify candidate metal-poor stars using only near-infrared 2MASS and mid-infrared \textit{WISE} photometry \citep{skr06,wri10,mai11}. Our infrared selection is well suited to a search for metal-poor stars in the bulge, as it is minimally affected by crowding or reddening. 
We found that more than 20\% of the candidates selected with our infrared selection are genuine very metal-poor (VMP) stars with $-3.0 \lesssim \mathrm{[Fe/H]} \lesssim -2.0$. Another 2\% of our candidates are genuine extremely metal-poor (EMP) stars with $-4.0 \lesssim \mathrm{[Fe/H]} \lesssim -3.0$. In a sample of 90 metal-poor candidates---selected with only an apparent magnitude cut and the requirement that they be high in the sky from Las Campanas in the first half of the year---we identified three stars with $-3.1 \lesssim \mathrm{[Fe/H]} \lesssim -2.7$ within 4 kpc of the Galactic center. Two of these stars are the most metal-poor stars in the bulge in the literature, while the third is comparable to the most metal-poor star from \citet{how14}. Because these stars are both tightly bound to the Galaxy and very metal-poor, they are likely to be among the most ancient stars identified to this point. For that reason, their detailed abundances provide clues to the chemistry of the first galaxies in the $z \gtrsim 10$ universe, beyond those already identified in more metal-poor halo stars. These stars all have apparent magnitudes $V \lesssim 13$, making them unusually bright for stars at the distance of the bulge. Their bright apparent magnitudes enable a very telescope-time efficient exploration of the $z \gtrsim 10$ universe. We describe the collection of the data we will subsequently analyze in Section \ref{sec:data}. We detail the determination of distances and orbital properties, stellar parameters, and chemical abundances of these three stars in Section \ref{sec:analysis}. We discuss our results and their implications in Section \ref{sec:discussion}, and we summarize our findings in Section \ref{sec:conclusions}. \section{Data Collection}\label{sec:data} We initially selected these stars as candidates according to criteria (1)--(4) from Section 2 of \citet{sch14}: $0.45 \leq J-H \leq 0.6$, $W3 > 8$, $-0.04 \leq W1-W2 \leq 0.04$, and $J-W2 > 0.5$. 
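The photometric cuts above are simple to apply in practice. The following sketch (our own illustration, not code from the survey; the function name and the example magnitudes are invented) evaluates criteria (1)--(4) for a single star, given its 2MASS $J$, $H$ and \textit{WISE} $W1$, $W2$, $W3$ magnitudes:

```python
# Illustrative implementation of the 2MASS+WISE color cuts quoted above.
# The thresholds come from the text; the function name is hypothetical.

def is_metal_poor_candidate(J, H, W1, W2, W3):
    """Return True if a star passes criteria (1)-(4) of the selection."""
    return (0.45 <= J - H <= 0.6 and      # criterion (1): J-H color range
            W3 > 8 and                    # criterion (2): W3 magnitude floor
            -0.04 <= W1 - W2 <= 0.04 and  # criterion (3): near-zero W1-W2
            J - W2 > 0.5)                 # criterion (4): red J-W2 color

# An invented star with J=11.0, H=10.5, W1=9.8, W2=9.8, W3=9.5
# satisfies all four cuts:
print(is_metal_poor_candidate(11.0, 10.5, 9.8, 9.8, 9.5))  # True
```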
We give astrometry and photometry for each star in Table~\ref{tbl-1}. We confirmed their metal-poor nature using low-resolution spectroscopy from Gemini South/GMOS-S \citep{hoo04}\footnote{Programs GS-2014A-A-8 and GS-2014A-Q-74.} in service mode during March and April of 2014. Our Gemini South/GMOS-S follow-up spectroscopy was not focused on candidates in the bulge, so the discovery of these stars in the bulge was not predetermined by our survey strategy. We used the Magellan Inamori Kyocera Echelle (MIKE) spectrograph \citep{ber03} on the Clay Telescope at Las Campanas Observatory on 2014 June 21--22 to obtain high-resolution, high signal-to-noise (S/N) spectra suitable for a detailed chemical abundance analysis. We observed all three stars in 0\farcs5~seeing at airmass $<\!\!1.01$ with exposure times in the range 390--590 seconds. The total exposure time for all three sources combined was less than 24 minutes. Including overheads, our Magellan/MIKE observations for all three stars were completed in about 30 minutes. We used the 0\farcs7~slit and the standard blue and red grating azimuths, yielding spectra between 332 nm and 915 nm with resolution $R \approx 41,\!000$ in the blue and $R \approx 35,\!000$ in the red. The resultant spectra have S/N $\gtrsim 50$ pixel$^{-1}$ at 400 nm and S/N $\gtrsim 100$ pixel$^{-1}$ at 600 nm. To obtain proper motions for each star, we cross-matched with both the UCAC4 and SPM4 proper motion catalogs using \texttt{TOPCAT}\footnote{\url{http://www.star.bris.ac.uk/~mbt/topcat/}}\citep{zac13,gir11,tay05}. We list both sets of proper motions for our sample in Table~\ref{tbl-2}. \section{Analysis}\label{sec:analysis} We reduced the spectra using the \texttt{CarPy}\footnote{\url{http://code.obs.carnegiescience.edu/mike}} software package \citep{kel03,kel14}. We continuum-normalized individual echelle orders using spline functions before joining them to form a single contiguous spectrum. 
We estimate line-of-sight radial velocities by cross-correlating each spectrum with a normalized rest-frame spectrum of the well-studied metal-poor giant star {HD 122563}. We use the measured radial velocities to place the spectra in the rest-frame of the star. \subsection{Distances \& Dynamics} To determine the distances between the sun and each star in our sample, we use the scaling relation \begin{eqnarray} L/L_{\odot} & = & (R/R_{\odot})^2 (T_{\mathrm{eff}}/T_{\mathrm{eff,\odot}})^4, \\ & = & (M/M_{\odot}) (g/g_{\odot})^{-1} (T_{\mathrm{eff}}/T_{\mathrm{eff,\odot}})^4. \end{eqnarray} Taking their characteristic mass as $0.8~M_{\odot}$, the bolometric luminosity $L$ of our stars can be approximated as \begin{eqnarray} \log{\left(L/L_{\odot}\right)} & = & \log{0.8} - (\log{g} - 4.44) + 4\log{\left(T_{\mathrm{eff}}/5777~\mathrm{K}\right)}.\nonumber\\\label{eq-lum} \end{eqnarray} We then use Equation \ref{eq-lum} and the stellar parameters from \citet{sch14} listed in Table~\ref{tbl-3} to determine $L$. We use a 10 Gyr, ${\mathrm{[Fe/H]} = -2.5}$, and ${[\alpha/{\rm Fe}] = +0.4}$ Dartmouth isochrone to convert $L$ into an absolute $W1$-band magnitude $M_{W1}$ \citep{dot08}. Given the available photometry, $W1$ is least affected by extinction. We de-redden the observed $W1$ magnitudes using the \cite{sch98} dust maps as updated in \citet{sch11} along with the \citet{ind05} infrared extinction law. The distance modulus $W1-M_{W1}$ then yields $d_{\odot}$, the approximate distance of each star from the sun. Assuming the distance to the Galactic center is $R_{0} = 8.2 \pm 0.4$ kpc \citep[e.g.,][]{bov09}, we can then compute $d_{\mathrm{gc}}$, the approximate distance of each star from the Galactic center. We perform a Monte Carlo simulation to account for the random observational uncertainties in $W1$, $A_{W1}$, $T_{\mathrm{eff}}$, $\log{g}$, and $R_{0}$. 
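As a numerical illustration of Equation \ref{eq-lum} and the distance-modulus step that follows it, consider the sketch below; the input stellar parameters, extinction, and isochrone magnitude $M_{W1}$ are placeholders, not our measured quantities:

```python
import math

def log_luminosity(logg, teff, mass=0.8):
    """The luminosity scaling relation: log(L/Lsun) from mass (Msun),
    log g, and Teff, with solar values log g = 4.44 and Teff = 5777 K."""
    return math.log10(mass) - (logg - 4.44) + 4.0 * math.log10(teff / 5777.0)

def distance_kpc(w1, a_w1, m_w1):
    """Heliocentric distance from the de-reddened distance modulus
    (W1 - A_W1) - M_W1, via d = 10**((mu + 5)/5) pc."""
    mu = (w1 - a_w1) - m_w1
    return 10.0 ** ((mu + 5.0) / 5.0) / 1000.0  # pc -> kpc

# Placeholder values for a cool metal-poor giant:
print(round(log_luminosity(logg=1.6, teff=4600.0), 2))       # 2.35
print(round(distance_kpc(w1=10.5, a_w1=0.1, m_w1=-4.3), 1))  # 8.7
```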
We sample 10,000 realizations from the uncertainty distributions for each quantity and compute $d_{\odot}$ and $d_{\mathrm{gc}}$ for each realization. We give both distance estimates and their random uncertainties in the first two columns of Table~\ref{tbl-4}. All three stars have $d_{\mathrm{gc}} \lesssim 4$ kpc. We compute the Galactic orbits of each star in our sample using the \texttt{galpy} code\footnote{\url{http://github.com/jobovy/galpy} and described in \citet{bov15}}, with initial conditions set by the observed heliocentric radial velocities and proper motions in Table~\ref{tbl-2} and estimated $d_{\odot}$ values from Table~\ref{tbl-4}. Following \citet{bov12}, we model the Milky Way's potential as the superposition of a Miyamoto-Nagai disk with a radial scale length of 4 kpc and a vertical scale height of 300 pc, a Hernquist bulge with a scale radius of 600 pc, and a Navarro-Frenk-White halo with a scale length of 36 kpc \citep{miy75,her90,nav96}. We assume that the Miyamoto-Nagai disk, the Hernquist bulge, and the Navarro-Frenk-White halo respectively contribute 60\%, 5\%, and 35\% of the rotational support at the solar circle. We integrate the orbits for 200 orbital periods and derive the pericenters $r_{\mathrm{peri}}$, apocenters $r_{\mathrm{ap}}$, and eccentricities $e$. We perform a Monte Carlo simulation to account for the random observational uncertainties in $d_{\odot}$, $v_{\mathrm{hel}}$, $\mu_{\alpha} \cos{\delta}$, and $\mu_{\delta}$. We sample 1,000 realizations from the uncertainty distributions for each quantity and use those data as input to an orbital integration. In an attempt to quantify the systematic uncertainties that result from the input proper motion measurements, we include in Table~\ref{tbl-4} orbital properties and uncertainties estimated using both UCAC4 and SPM4 proper motions. 
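The Monte Carlo distance propagation can be sketched as follows. The Galactocentric distance follows from the heliocentric distance and the star's Galactic coordinates by the law of cosines; all input values and uncertainties below are illustrative placeholders, not our measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

def d_gc(d_sun, l_deg, b_deg, r0):
    """Galactocentric distance (kpc) from heliocentric distance (kpc)
    and Galactic longitude/latitude (deg), for a Sun-center distance r0."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    x = d_sun * np.cos(b) * np.cos(l)  # component toward the Galactic center
    y = d_sun * np.cos(b) * np.sin(l)
    z = d_sun * np.sin(b)
    return np.sqrt((r0 - x) ** 2 + y ** 2 + z ** 2)

# Propagate Gaussian uncertainties with 10,000 realizations:
n = 10_000
d_sun_samples = rng.normal(7.0, 1.0, n)  # placeholder heliocentric distance
r0_samples = rng.normal(8.2, 0.4, n)     # Sun--Galactic center distance
samples = d_gc(d_sun_samples, l_deg=350.0, b_deg=-10.0, r0=r0_samples)
lo, med, hi = np.percentile(samples, [16, 50, 84])
print(f"d_gc = {med:.1f} -{med - lo:.1f} +{hi - med:.1f} kpc")
```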
\subsection{Stellar Parameters} We estimate stellar parameters by classical excitation and ionization balance using unblended \ion{Fe}{1} and \ion{Fe}{2} lines. Following the process described in \citet{cas14}, we measure equivalent widths of individual absorption lines from the rest-frame spectra by fitting Gaussian profiles. We visually inspect all lines for quality, and discard blended or low-significance measurements. For these analyses, we assume transitions are in local thermodynamic equilibrium (LTE) and employ the plane-parallel 1D $\alpha$-enhanced model atmospheres from \citet{cas04}. We use the atomic data compiled by \citet{roe10}\footnote{We used the correct transition probabilities for \ion{Sc}{2} from \citet{law89} that were misstated in \citet{roe10}.}, the \citet{asp09} solar chemical composition, and the February 2013 version of MOOG to calculate line abundances and synthesize spectra \citep{sne73,sob11}. We require four conditions to be simultaneously met for a converged set of stellar parameters: zero trend in \ion{Fe}{1} line abundances with excitation potential, zero trend in \ion{Fe}{1} line abundances with reduced equivalent width, equal mean \ion{Fe}{1} and \ion{Fe}{2} abundances, and a mean [\ion{Fe}{1}/H] abundance that matches the input model atmosphere abundance [M/H]. In practice, we accept solutions where the slopes have magnitudes less than $10^{-3}$ and the absolute abundance differences are less than $10^{-2}$ dex. Our estimated stellar parameters are provided in Table~\ref{tbl-3}. To verify our spectroscopically-derived effective temperatures, we calculate effective temperatures using color--temperature relations for 2MASS $J-K_{s}$ and APASS/2MASS $V-K_{s}$ colors. We use the \citet{sch98} dust maps as updated by \citet{sch11} to account for reddening in both colors. However, we find that our photometric temperatures are as much as $600$\,K hotter than our spectroscopically-derived quantities.
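The four convergence conditions and the quoted tolerances can be written as a simple acceptance test. This is a sketch, not our actual fitting code; it assumes the \citet{asp09} solar iron abundance of $\log{\epsilon}(\mathrm{Fe})_{\odot} = 7.50$:

```python
import numpy as np

def balanced(chi, rew, logeps_fe1, logeps_fe2, feh_model,
             slope_tol=1e-3, abund_tol=1e-2, solar_fe=7.50):
    """Check the four excitation/ionization-balance conditions:
    no trend of Fe I abundance with excitation potential (chi) or with
    reduced equivalent width (rew), equal mean Fe I and Fe II abundances,
    and mean [Fe I/H] matching the model atmosphere [M/H]."""
    slope_chi = np.polyfit(chi, logeps_fe1, 1)[0]
    slope_rew = np.polyfit(rew, logeps_fe1, 1)[0]
    d_ion = np.mean(logeps_fe1) - np.mean(logeps_fe2)
    d_model = (np.mean(logeps_fe1) - solar_fe) - feh_model
    return (abs(slope_chi) < slope_tol and abs(slope_rew) < slope_tol
            and abs(d_ion) < abund_tol and abs(d_model) < abund_tol)
```

In the analysis itself, the model atmosphere parameters are iterated until all four conditions hold simultaneously.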
To explore the reason for this discrepancy, we also estimate effective temperatures by comparing the observed Balmer lines with synthetic spectra from \citet{bar03}. Our analysis of the H$\beta$ profile suggests effective temperatures between 4600\,K and 4800\,K for all three stars, in excellent agreement with our excitation-ionization balance measurements. As we show qualitatively in Figure~\ref{fig01}, our observed spectra are very similar to those of the well-studied metal-poor giant star {HD\,122563}. Given our independent effective temperature estimates, and since {HD\,122563} is a red giant branch star with $T_{\mathrm{eff}} = 4590$\,K, $\log{g} = 1.61$, and $\mathrm{[M/H]} = -2.64$ \citep{jof14}, we are confident in our derived spectroscopic effective temperatures. Moreover, we observe multiple saturated interstellar \ion{Na}{1} D absorption components in our data. These components are indicative of several optically-thick gas clouds along the line of sight, each with a distinct velocity. For these reasons, we assert that the discrepancy between photometric and spectroscopic temperatures is likely due to poorly-characterized reddening in the outer bulge region. Given the spectral resolution and S/N ratios of our data, we estimate that the uncertainties in our spectroscopically-derived stellar parameters are about $100$\,K in $T_{\rm eff}$, 0.2\,dex in $\log{g}$, 0.1\,dex in [Fe/H], and 0.1\,km s$^{-1}$ in microturbulence ($\xi$). We note that our stellar parameters ($T_{\rm eff}$, $\log{g}$, [Fe/H], $\xi$) would change if we used different model atmospheres or included a proper treatment of non-LTE effects. For metal-poor giants, the non-LTE treatment would increase the mean \ion{Fe}{1} line abundance by about 0.1\,dex and result in higher surface gravities for a given effective temperature. As an example, \citet{jof14} report a slightly cooler temperature and higher surface gravity for HD 122563 than we find for our three stars.
However, in that study $T_{\rm eff}$ and $\log{g}$ were not derived by excitation and ionization equilibrium. Instead, they were fixed by bolometric flux and angular diameter measurements from \citet{cre12}. With the stellar parameters fixed, \citet{jof14} noted that HD 122563 showed the largest abundance imbalance of \ion{Fe}{1} and \ion{Fe}{2} lines in their sample. This indicates that the application of the equilibrium method in LTE tends toward a different set of stellar parameters. In Figure~\ref{fig02} we plot our stars alongside giant-star (i.e., $\log{g} \lesssim 3.0$) comparison samples from \citet{yon13a} and \citet{roe14}. Although these authors estimated surface gravities directly from isochrones, our stellar parameters are comparable to their determinations. Consequently, we are confident of our stellar parameter estimates. \subsection{Detailed Abundances} Our high-resolution, high S/N Magellan/MIKE spectra allow us to measure the abundances of many light, odd-Z, $\alpha$, Fe-peak, and neutron-capture elements. For most elements, we determine individual line abundances from the measured equivalent widths of clean, unblended atomic lines. We take a synthesis approach for molecular features (e.g., CH), doublets (e.g., Li), or atomic transitions with significant hyperfine structure and/or isotopic splitting (namely Sc, V, Mn, Co, Cu, Ba, La, and Eu). We use molecular data (CH) from \citet{mas14}. Our hyperfine structure and isotopic splitting data come from \citet{kur95} for Sc, V, Mn, Co, and Cu, from \citet{bie99} for Ba, and from \citet{law01a,law01b} for La and Eu. We assume standard solar system isotopic fractions as collated by \citet{and89}. We report our equivalent width measurements in Table~\ref{tbl-5} and our derived abundances in Table~\ref{tbl-6}. We estimate lithium abundances through synthesis of the Li doublet at $\lambda$6707. This feature is quite weak in our spectra.
However, the abundances we obtain are typical for stars at the tip of the red giant branch. We synthesize the $G$-band molecular feature at $\lambda$4323 to estimate carbon abundances. None of our stars are carbon enhanced by the \citet{bee05} definition of [C/Fe] $\gtrsim +1.0$. On the other hand, one of our stars is carbon enhanced by the \citet{aok07} definition that takes stellar evolutionary effects into account. In either case, there is not much carbon present in the photospheres of our stars---[C/Fe] ranges from $-0.61$ in J183713-314109 to $+0.15$ in J181503-375120. We measure potassium abundances from equivalent widths of the strong \ion{K}{1} transitions at $\lambda$7664 and $\lambda$7698. Given the radial velocities of our targets, these \ion{K}{1} lines were largely separated from the telluric A-band feature near $\lambda$7600. We detect \ion{Na}{1} in all three stars and derive abundances from the strong $\lambda$5889 and $\lambda$5895 transitions. We measure \ion{Al}{1} from the $\lambda$3961 feature. All three stars appear $\alpha$-enhanced (Mg, Ti, Si, and Ca). On average, the $\alpha$-element abundances of these three metal-poor stars in the bulge are similar to those observed in large samples of halo metal-poor giant stars \citep[e.g.,][]{cay04,yon13a,roe14}. [Mg/Fe] varies between $+0.46$ and $+0.57$, while [Ca/Fe] varies only marginally from $+0.41$ to $+0.47$. However, in all stars we find that [\ion{Ti}{1}/Fe] and [\ion{Ti}{2}/Fe] are slightly lower than the other $\alpha$-elements, between [Ti/Fe]$ = +0.22$ and $+0.29$ (Figure~\ref{fig03}). In all stars, the mean abundances of neutral and ionized Ti transitions agree within 0.03--0.08\,dex. We measure [\ion{Si}{1}/Fe] abundances from the $\lambda$3905 transition, yielding [\ion{Si}{1}/Fe] abundance ratios between $+0.71$ and $+0.86$.
There are a large number of Fe-peak transitions available in our spectra: \ion{Sc}{2}, \ion{V}{1}, \ion{Cr}{1} \& \ion{Cr}{2}, \ion{Mn}{1}, \ion{Co}{1}, \ion{Ni}{1}, \ion{Cu}{1}, and \ion{Zn}{1}. While Sc, V, Cr, Mn, Co, Ni, and Zn are clearly measurable in all stars from multiple unblended lines, we do not detect \ion{Cu}{1} in J155730-293922 or J183713-314109. Instead, we provide upper limits for \ion{Cu}{1} from the $\lambda$5105 transition. We also report a low-significance detection of \ion{Cu}{1} in J181503-375120 of {[\ion{Cu}{1}/Fe] $= -0.51$}. Our Fe-peak abundance ratios generally follow the mean halo abundance trends observed by other authors in giant stars of similar metallicity \citep[e.g.,][]{cay04,yon13a,roe14}. We find that [\ion{Si}{1}/Fe], [\ion{Sc}{2}/Fe], and [\ion{Mn}{1}/Fe] abundances are at the extremes of the abundance distribution observed in halo metal-poor giant stars. We show this in Figure~\ref{fig05} and explore possible explanations for these observations in Section \ref{sec:discussion}. We measure elemental abundances from the first (Sr and Y) and second (Ba) neutron-capture peaks. We do not detect Eu or La in our targets, and therefore we report upper limits for these elements in Table~\ref{tbl-6}. Sr and Y have a common nucleosynthetic pathway, and we observe comparable abundance ratios for these elements in all three stars. As we show in Figure~\ref{fig04}, all of our measured neutron-capture abundances are indistinguishable from the abundances observed in halo metal-poor giant stars \citep{yon13a,roe14}. The uncertainties in chemical abundances are dominated by systematics, principally due to the uncertainties in determining stellar parameters. We vary the stellar parameters of each star by the estimated uncertainties and calculate the resulting change in abundances. We give the sign and magnitude of these effects in Table~\ref{tbl-7}, along with the quadrature sum of systematic uncertainties. 
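The error budget just described, stellar-parameter systematics added in quadrature with a random line-to-line term floored at 0.1\,dex, can be sketched as follows; the sensitivity values are illustrative, not the entries of Table~\ref{tbl-7}:

```python
import math

def total_uncertainty(systematics, sigma_random, floor=0.1):
    """Quadrature sum of stellar-parameter systematics (dex) and the
    random line-to-line uncertainty (dex), with a 0.1 dex floor on the
    random term, as adopted in the text."""
    sigma_sys_sq = sum(s ** 2 for s in systematics)
    return math.sqrt(sigma_sys_sq + max(sigma_random, floor) ** 2)

# Illustrative sensitivities to Teff, log g, [M/H], and xi perturbations:
print(round(total_uncertainty([0.08, 0.03, 0.01, 0.05], 0.04), 2))  # 0.14
```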
Due to a lack of lines for some elements, we adopt a minimum random uncertainty of 0.1\,dex. We estimate total uncertainties as the quadrature sum of random and systematic uncertainties, which we list in Table~\ref{tbl-7}. For uncertainties in [X/Fe] abundance ratios (e.g., as shown in Figures~\ref{fig03}--\ref{fig05}), we adopt the quadrature sum of the total uncertainties in [X/H] and [\ion{Fe}{1}/H]. \section{Discussion}\label{sec:discussion} Our initial survey was not targeted at the bulge, so we are observing all three stars at random orbital phases. Since a star on a radial orbit spends most of its orbit near apocenter, there is a strong prior that we are observing all three stars close to apocenter. Our estimated Galactocentric distances and orbital parameters for the three stars listed in Table~\ref{tbl-4} securely place J155730-293922 and J183713-314109 in the bulge on tightly bound orbits. In both cases, the currently-observed Galactocentric distances are consistent with the idea that both stars are near apocenter. At the same time, the proper motions reported by UCAC4 and SPM4 differ by up to 3$\sigma$. It seems clear that the quoted random proper motion uncertainties are not representative of the total uncertainties, which include the contribution from systematics. Both stars have $V \lesssim 13$ and have had their proper motions matched to the correct 2MASS sources, so the discrepancy is not due to faintness or misidentification. Nevertheless, the range in proper motions reported by UCAC4 and SPM4 should approximate the effect of the unreported systematic uncertainties. Since both UCAC4 and SPM4 place J155730-293922 and J183713-314109 on tightly bound orbits, there is no reason to reject the idea that they are indeed tightly bound.
We therefore argue that since J155730-293922 and J183713-314109 are metal-poor, located near the center of the Galaxy, and on tightly bound orbits, they are likely to be truly ancient stars according to the analysis described in \citet{tum10}. On the other hand, the orbital parameters listed in Table~\ref{tbl-4} for the star J181503-375120 suggest that it may be a halo star on a very eccentric orbit. Both UCAC4 and SPM4 agree that $\mu_{\alpha} \cos{\delta} \approx 20$ mas yr$^{-1}$ with high significance, indicating a substantial transverse velocity at its estimated distance of $9.0_{-2.2}^{+2.8}$ kpc. The problem with that scenario is that J181503-375120 would spend only a tiny fraction of its orbit near where it is observed today, and we would therefore be observing it at a special time. There are two possible interpretations of this observation. The first is that both UCAC4 and SPM4 have somehow overestimated the $\mu_{\alpha} \cos{\delta}$ proper motion of J181503-375120. This possibility cannot easily be rejected. Though the UCAC4 and SPM4 proper motion measurements were produced independently, they both used the same blue SPM plates for their first epoch astrometry. In that case, the apparently large proper motion of J181503-375120 could be the result of an issue with the same blue SPM plate. Moreover, both UCAC4 and SPM4 may be subject to residual systematic uncertainties at the level of 10 mas yr$^{-1}$. The second interpretation is that J181503-375120 is genuinely on a very eccentric orbit that takes it from the bulge all the way to the edge of the Local Group. Though we cannot reject the latter hypothesis, we suspect that the former is the better explanation. Nevertheless, the proper motion of J181503-375120 merits further attention. If its parallax is measured and its proper motion confirmed by Gaia, then it could be a hypervelocity star that has been ejected from the Galactic center by a three-body interaction involving the Milky Way's supermassive black hole.
In any case, J181503-375120 is currently located near the center of the Galaxy. Since all three stars in our sample are old, one might wonder whether the orbits we observe today might be significantly different from their orbits at higher redshift. Even though we will argue that our stars formed at $z \sim 10$, they were likely accreted by the Milky Way more recently. \citet{tum10} found that even metal-poor stars that formed at $z \sim 10$ are not typically accreted by a Milky Way analog until $z \sim 3$. \citet{wan11} showed that in the absence of a major merger, Milky Way-analog dark matter halos have accreted more than 75\% of their $z = 0$ mass inside of 2 kpc by $z \sim 3$. The Milky Way is not likely to have had a major merger in that interval, as its disk is quite old and its bulge appears to be a pseudobulge best explained by secular disk instabilities \citep[e.g.,][]{aum09,sch09,kor04,how09}. The fact that the mass enclosed by the orbits of our stars has not changed much since they likely entered the Milky Way's dark matter halo suggests that their orbits should not have changed significantly. Moreover, any merger activity would tend to diffuse stellar orbits outward, so in that situation the orbits of our stars would have been even more tightly bound in the past. This would not qualitatively affect our interpretation of their abundances. The inside-out formation of the Milky Way suggests that in the inner few kpc of the Galaxy, about 10\% of stars with $\mathrm{[Fe/H]} \lesssim -3.0$ formed at $z \gtrsim 15$ \citep{tum10}. Another 20--40\% of stars in the range $-4.0 \lesssim \mathrm{[Fe/H]} \lesssim -3.0$ formed at $10 \lesssim z \lesssim 15$. All three of our stars are currently in the inner Galaxy, while the kinematics of two of the three place them on tightly bound orbits.
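The \citet{tum10} fractions quoted above translate into simple binomial probabilities for a three-star sample; taking 10\% per star for formation at $z \gtrsim 15$ and roughly 30\% per star for formation at $z \gtrsim 10$:

```python
def p_at_least_one(p_star, n_stars):
    """Probability that at least one of n_stars formed before a given
    epoch, given a per-star formation probability p_star."""
    return 1.0 - (1.0 - p_star) ** n_stars

print(round(p_at_least_one(0.10, 3), 3))  # z >~ 15: 1 - 0.9**3 = 0.271
print(round(p_at_least_one(0.30, 3), 3))  # z >~ 10: 1 - 0.7**3 = 0.657
```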
The probability $P_{15}$ that at least one of our stars formed at $z \gtrsim 15$ is 1 minus the probability that none of them formed at $z \gtrsim 15$: $P_{15} = 1-0.9^3 \approx 0.3$. Likewise, the probability $P_{10}$ that at least one of our stars formed at $z \gtrsim 10$ is $P_{10} = 1-0.7^3 \approx 0.7$. In other words, there is a 30\% chance that at least one of these three stars formed at $z \gtrsim 15$ and a 70\% chance that at least one star formed at $z \gtrsim 10$. If we apply the \citet{tum10} analysis only to J155730-293922 and J183713-314109, then $P_{15} \approx 0.2$ and $P_{10} \approx 0.5$. Even though these stars are not the most metal-poor stars known, the combination of their low metallicity and tightly-bound orbits suggests that they may be among the most ancient stars with detailed chemical abundance measurements. In this scenario, our derived chemical abundances are indicative of the chemical abundances of the progenitor galaxies of the Milky Way during the epoch of the first galaxies. Generally, we find that our abundance ratios are near the mean of abundance distributions observed in halo metal-poor giant stars \citep{cay04,yon13a,roe14}. Si, Sc, and Mn are exceptions, though, which we discuss below. Based on four metal-poor bulge stars with $-2.7 \lesssim \mathrm{[Fe/H]} \lesssim -2.5$ from the EMBLA survey, \citet{how14} reached a similar conclusion: metal-poor stars in the bulge have a similar abundance pattern to metal-poor stars in the halo. They also noted large scatter in [\ion{Mg}{1}/Fe] from $-0.07$ to $+0.62$ in just four stars, with one star overabundant in [\ion{Ti}{2}/Fe] to the level of $+0.84$. We find very little scatter in [\ion{Mg}{1}/Fe], which ranges from $+0.46$ to $+0.57$. None of our stars are overabundant in [\ion{Ti}{1}/Fe], [\ion{Ti}{2}/Fe], or any other $\alpha$-element. In fact, we find that [\ion{Ti}{1}/Fe] and [\ion{Ti}{2}/Fe] are about $0.15$\,dex below the abundances of other $\alpha$-elements.
Our stars appear near the extremes of the silicon abundance distribution observed in halo metal-poor giant stars \citep[e.g.,][]{cay04,yon13a,roe14}. This is likely due to their low surface gravities and cool temperatures, however. \citet{bon09} found that giants exhibited higher [Si/Fe] abundance ratios than dwarfs by about 0.2\,dex. Similarly, cool stars usually appear to have high silicon \citep{pre06,lai08,yon13a}. Given these two effects, the slightly higher [Si/Fe] abundance ratios we find can most likely be attributed to a combination of low surface gravity and cool temperatures. Indeed, when we consider [Si/Fe] in giant stars ($\log{g} < 3$) in the \citet{roe14} sample, our [Si/Fe] ratios lie near the mean for our temperature range. That is to say, although our stars show relatively high [Si/Fe] ratios, the stars with high [Si/Fe] values in the comparison samples also usually have cooler temperatures. In short, [Si/Fe] appears to be strongly correlated with temperature. On the other hand, high silicon is consistent with the Galactic chemical enrichment model predictions of \citet{kob06}. While we regard the former as the most likely explanation, we cannot rule out the latter idea that the high silicon we observe is representative of the $z \gtrsim 10$ interstellar medium. The [\ion{Mn}{1}/Fe] abundance ratios we find are lower than those observed in metal-poor giants in the halo. We use the same hyperfine structure data for \ion{Mn}{1} as the referenced authors and derive abundances from common lines. \citet{cay04} and \citet{roe10,roe14} have noted that the \ion{Mn}{1} resonance triplet at 403\,nm yields systematically lower abundances than other neutral Mn lines. For that reason, \citet{roe14}\footnote{\citet{cay04} made similar adjustments.} empirically corrected their \ion{Mn}{1} triplet abundances by about +0.3\,dex, which explains most of the discrepancy we observe.
\citet{yon13a} made no corrections, and we still find our stars in the lower envelope of their [\ion{Mn}{1}/Fe] distribution. The remaining difference in [\ion{Mn}{1}/Fe] is probably attributable to our stars being at the tip of the giant branch. In halo metal-poor giant stars, many authors have noted a positive trend in the $T_{\mathrm{eff}}-{\rm [Mn/Fe]}$ plane. In other words, lower [Mn/Fe] abundances are found in cooler giants \citep{pre06,yon13a,roe14}. All three stars have low scandium abundances, with [\ion{Sc}{2}/Fe] $\lesssim -0.5$. \citet{yon13a} found a tight abundance relation between [\ion{Ti}{2}/H] and [\ion{Sc}{2}/H] in halo stars, which is suggestive of a common nucleosynthetic environment. Figure~\ref{fig06} shows that our stars deviate significantly from this relation. Unlike \ion{Si}{1} or \ion{Mn}{1}, our low [\ion{Sc}{2}/Fe] abundance ratios cannot be easily explained by correlations with $T_{\mathrm{eff}}$. \citet{yon13a} found a slight slope ($m = 0.05 \pm 0.06$) in the relationship between $T_{\mathrm{eff}}$ and [\ion{Sc}{2}/Fe], such that cooler stars have lower [\ion{Sc}{2}/Fe] abundance ratios. The typical range of [\ion{Sc}{2}/Fe] they measure for cool stars is $-0.10$ to $+0.50$ though. Our measurements are substantially below this range, with [\ion{Sc}{2}/Fe] = $-0.59$ to $-0.54$. Scandium probably remains the most discrepant element between Galactic chemical evolution models and observations of metal-poor stars, as models typically under-predict Sc abundances by a factor of ten. For example, \citet{kob06} predict constant ${[{\rm Sc/Fe}] \sim -1}$ for metal-poor stars, roughly an order of magnitude lower than the observed values of [Sc/Fe] $\sim 0$. The abundance ratios we find in the inner few kpc of the Galaxy bring our stars far closer to these predictions. However, advances in modeling are required for both abundance measurements (e.g., non-LTE treatment, $\langle$3D$\rangle$ photospheres) and Galactic chemical evolution models. 
Departures from local thermodynamic equilibrium or 3D effects will alter the inferred Sc abundances, while increasing the $\alpha$-rich freeze-out or delaying neutrino processes during explosive nucleosynthesis may be necessary to increase Sc yields in chemical evolution models \citep[e.g.,][]{fro06,kob06}. We searched the SAGA database\footnote{Described in \citet{sud08,sud11} and \citet{yam13} and available at \url{http://saga.sci.hokudai.ac.jp/wiki/doku.php}.} and the compilation of \citet{fre10a} for other Galactic giant stars with [\ion{Sc}{2}/Fe] $\lesssim -0.5$. That search returned three objects: BS 16929-005, HE 0533-5340, and HE 1207-3108. While BS 16929-005 was reported by \citet{hon04} to have [\ion{Sc}{2}/Fe] $= -0.53$, that measurement did not take into account the hyperfine structure that is known to be important for scandium abundance measurements \citep[e.g.,][]{pro00}. In comparison, \citet{lai08} accounted for hyperfine structure in BS 16929-005 and found [\ion{Sc}{2}/Fe] $= -0.03$. We regard the latter measurement as more reliable. \citet{coh13} found [\ion{Sc}{2}/Fe] $= -0.56$ for HE 0533-5340 and \citet{yon13a} found [\ion{Sc}{2}/Fe] $= -0.55$ for HE 1207-3108. However, both HE 0533-5340 and HE 1207-3108 are among the rare class of ``iron-rich" metal-poor stars in which most [X/Fe] abundances are sub-solar. This is in contrast to typical metal-poor stars, which are usually enhanced in at least the $\alpha$ elements. The combination of low [\ion{Sc}{2}/Fe] and $\alpha$ enhancement that we see in our three metal-poor giants in the bulge is unprecedented in any of the 381 metal-poor giant stars in the SAGA database with scandium abundance measurements. These three stars are therefore unlike any other known star in the Galaxy. We also searched \citet{fre10a} for metal-poor stars in dwarf galaxies with [\ion{Sc}{2}/Fe] $\lesssim -0.5$. 
We found two examples, one from \citet{fre10b} in Coma Berenices (SDSS J122657+235611/ComBer-S3) and one from \citet{she03} in Carina (Car 3). Car 3 is an ``iron-rich'' metal-poor star, so we do not consider it further. That leaves SDSS J122657+235611/ComBer-S3 with [\ion{Sc}{2}/Fe] $= -0.57$ as the only giant star known with a similar abundance pattern to our three metal-poor giants in the bulge. Coma Berenices is an ultra-faint dwarf spheroidal (dSph) galaxy with a $V$-band absolute magnitude of only $M_{V} = -3.4$ \citep{bel07,dej08}. It is also one of the most ancient galaxies known. Indeed, \citet{bro14} found a mean age of $13.9 \pm 0.3$ Gyr for Coma Berenices based on \textit{Hubble Space Telescope} Advanced Camera for Surveys photometry of its resolved stellar population. That made it the oldest galaxy in their sample. The apparent chemical abundance similarity between the ancient dSph Coma Berenices and our three stars in the bulge supports both the conclusion that our three stars are among the most ancient stars in our Galaxy and the idea that low [\ion{Sc}{2}/Fe] may be a chemical indicator of ancient stellar populations. Our detailed chemical abundance analysis has assumed that the atomic level populations are in LTE. It is well known that this assumption breaks down in the upper layers of stellar photospheres, where departures from LTE can significantly alter the inferred elemental abundance. The direction and magnitude of these abundance changes depend on stellar parameters, atomic number, ionization level, and absorption depth (i.e., the strength of the transition), among other factors. Many authors have investigated the abundance deviations due to departures from LTE in well-studied metal-poor giant stars that are comparable to our program stars, like HD\,122563 \citep[e.g.,][]{gra99,asp03,mas08,and10,han13}.
For metal-poor giant stars like those analyzed here, the abundance changes due to departures from LTE will be the largest for \ion{K}{1}, \ion{Co}{1}, and \ion{Mn}{1}. The change in \ion{K}{1} is significantly negative\footnote{Deviations are described following standard nomenclature: $\Delta{\rm NLTE} = \log{\epsilon}({\rm X})_{\rm NLTE} - \log{\epsilon}({\rm X})_{\rm LTE}$. A `positive correction' refers to a higher abundance after accounting for departures from LTE.}: $\Delta\log{\epsilon}(\ion{K}{1}) \approx -0.15$, such that in Figure \ref{fig03} we have shown uncorrected (i.e., LTE) \ion{K}{1} abundances from \citet{roe14} for a fair comparison. \ion{Co}{1} is expected to show the largest absolute change, with positive deviations up to about $+0.65$\,dex. Similarly, we can expect our \ion{Mn}{1} abundances to increase by about $+0.4$\,dex with the proper treatment of departures from LTE. However, these \ion{Mn}{1} corrections would be of the same approximate order and direction for the halo comparison samples. Therefore we assert that the \ion{Mn}{1} abundance ratios we find in metal-poor stars in the bulge would remain in the lower tail of the [\ion{Mn}{1}/Fe] abundance distribution observed in comparable halo stars. All other species examined here have expected abundance deviations less than $0.2$\,dex, with the average magnitude being about $0.1$\,dex \citep{ber14}. We note that systematic abundance differences can also be expected due to surface granulation and convection, complex features that cannot be accounted for in our 1D model atmospheres. Our observations indirectly suggest that the progenitor galaxies of the Milky Way had reached $\mathrm{[Fe/H]} \sim -3.0$ with an abundance pattern comparable to metal-poor halo stars by $z \sim 10$. The chemical state of high-redshift galaxies can be measured directly by observations of metal-poor damped Ly$\alpha$ systems (DLAs) in absorption in the spectra of background quasars.
Many authors\footnote{See for example \citet{mol00}, \citet{des01}, \citet{pro02}, \citet{des03}, \citet{ome06}, \citet{peti08}, \citet{pett08}, \citet{ell10}, \citet{pen10}, \citet{sri10}, and \citet{coo11a,coo11b}.} have measured the column densities and relative abundances of H, C, N, O, Al, Si, and Fe to $z \sim 4$. At higher redshift, C, O, Mg, Si, and Fe have been measured in DLAs at $z \sim 6$ \citep{bec12}. At $z \approx 7$, the abundance of one system has been bounded to less than 1/1,000 solar \citep{sim12}. Where [C/Fe], [O/Fe], and [Si/Fe] have been measured in high-redshift DLAs, it has been found that the average abundances are in good agreement with those observed in metal-poor stars: $\mathrm{[C/Fe]} \approx 0.15 \pm 0.03$, $\mathrm{[O/Fe]} \approx 0.40 \pm 0.01$, and $\mathrm{[Si/Fe]} \approx 0.37 \pm 0.01$. Our stars in the bulge are likely ancient, and their abundances are well matched by those observed in DLAs. Only about 500 Myr elapses between $z \sim 10$ and $z \sim 6$ \citep[e.g.,][]{wri06}, so it seems plausible that the $z \sim 10$ abundances as observed in our ancient stars (after correcting for $\log{g}$ and $T_{\mathrm{eff}}$ effects) are comparable to those directly observed at $z \sim 6$. \section{Conclusions}\label{sec:conclusions} We have measured the detailed chemical abundances of the three metal-poor stars with $\mathrm{[Fe/H]} \lesssim -2.7$ in the bulge that we discovered in \citet{sch14}. Two of these three stars are the most metal-poor stars in the bulge in the literature, while the third is comparable to the most metal-poor star identified in \citet{how14}. We have carefully estimated the Galactocentric distances and orbits of all three stars. While we find that all three have $d_{\mathrm{gc}} \lesssim 4$\,kpc, only J155730-293922 and J183713-314109 can be securely placed on tightly-bound orbits. J181503-375120 may be a halo star on a very eccentric orbit that is only passing through the bulge.
While UCAC4 and SPM4 proper motion measurements favor a very eccentric orbit, the orbit is so extreme that it may be more likely that there is an issue with the SPM blue plate that provides the first epoch astrometry for both catalogs. When combined with their metal-poor nature, the proximity of these stars to the center of the Galaxy and their tightly-bound orbits indicate that they may be some of the most ancient objects yet identified. We use the theoretical models of \citet{tum10} to estimate that there is a 30\% chance that at least one of these stars formed at $z \gtrsim 15$ and a 70\% chance that at least one formed at $10 \lesssim z \lesssim 15$. We therefore argue that the chemical abundances we observe in these metal-poor stars are representative of the chemical state of the interstellar medium in the progenitor galaxies of the Milky Way at $z \sim 10$. Compared to observations of metal-poor giant stars of similar effective temperatures found in the Galactic halo, we find similar [X/Fe] abundance ratios for most elements. However, we observe [\ion{Sc}{2}/Fe] abundance ratios lower than reported in the halo by about $0.5$\,dex. Scandium remains the element with the largest discrepancy between what is observed in halo metal-poor stars and what is predicted from models of Galactic chemical evolution. Interestingly, when compared to the values observed in halo metal-poor stars, our [\ion{Sc}{2}/Fe] abundances are closer to predictions for the chemical abundances of the first galaxies \citep[e.g.,][]{kob06}. For these reasons, the progenitor halos of the Milky Way likely reached $\mathrm{[Fe/H]} \sim -3.0$ by $z \sim 10$. Their chemical abundances were probably very similar to those observed in halo metal-poor stars, with the possible exception of Sc, which we observe to be low in these ancient stars in the bulge.

\acknowledgments We thank Judith Cohen, Anna Frebel, Gerry Gilmore, Paul Schechter, and Josh Winn. 
We are especially grateful to the anonymous referee for suggestions that improved this paper. This research has made use of NASA's Astrophysics Data System Bibliographic Services and both the SIMBAD database and VizieR catalog access tool, CDS, Strasbourg, France. The original description of the VizieR service was published by \citet{och00}. This research made use of Astropy, a community-developed core Python package for Astronomy \citep{astropy}. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This publication was partially based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{a}o (Brazil) and Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n Productiva (Argentina). This research has made use of the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research was made possible through the use of the AAVSO Photometric All-Sky Survey (APASS), funded by the Robert Martin Ayers Sciences Fund. 
A.~R.~C. acknowledges support through European Research Council grant 320360: The Gaia-ESO Milky Way Survey. Support for this work was provided by the MIT Kavli Institute for Astrophysics and Space Research through a Kavli Postdoctoral Fellowship. {\it Facilities:} \facility{CTIO:2MASS}, \facility{FLWO:2MASS}, \facility{Gemini:South (GMOS-S spectrograph)}, \facility{Magellan:Clay (MIKE spectrograph)}, \facility{WISE}
\section{Introduction} In the present paper we shall prove some quantitative estimates of unique continuation for fourth order elliptic equations arising in linear elasticity theory. The equations we are most concerned with are those describing the equilibrium of a thin plate having uniform thickness. Working in the framework of linear elasticity for infinitesimal deformations and under the kinematical assumptions of the Kirchhoff-Love theory (see \cite{Fi}, \cite{Gu}), the transversal displacement $u$ of the plate satisfies the following equation \begin{equation} \label{1-I} \mathcal{L}u:=\sum_{i,j,k,l=1}^2\partial_{ij}^2 (C_{ijkl}(x)\partial_{kl}^2 u)=0, \quad \hbox{in } \Omega, \end{equation} where $\Omega$ is the middle surface of the plate and $\{C_{ijkl}(x)\}_{i,j,k,l=1}^{2}$ is a fourth order tensor describing the response of the material of the plate. In the sequel we shall assume that the following standard symmetry conditions are satisfied \begin{equation} \label{2-I} C_{ijkl}(x)=C_{klij}(x)=C_{lkij}(x), \quad \hbox{ }{i,j,k,l=1,2}, \quad\hbox{ in } \Omega. \end{equation} In addition we shall assume that $C_{ijkl}\in C^{1,1}(\overline{\Omega})$, $i,j,k,l=1,2$, and that the following strong convexity condition is satisfied \begin{equation} \label{3-I} C_{ijkl}(x)A_{ij}A_{kl} \geq \gamma |A|^2, \quad \hbox{ in } \Omega, \end{equation} for every $2\times2$ symmetric matrix $A=\{A_{ij}\}_{i,j=1}^2$, where $\gamma$ is a positive constant and $|A|^2=\sum_{i,j=1}^2A_{ij}^2$. 
More precisely, the quantitative estimates of unique continuation which we obtain are in the form of a three sphere inequality (see Theorem \ref{theo:9-4.3}, Theorem \ref{theo:12-4.3} and Theorem \ref{theo:13-4.3}), which we have developed mainly with a view to applications to two kinds of inverse problems for thin elastic plates: a) the stability issue for the inverse problem of the determination of unknown boundaries, b) the derivation of size estimates for unknown inclusions made of different elastic material.\\ Let us give a brief description of problems a) and b).\\ \textit{Problem a)}. We consider a thin elastic plate, having middle surface $\Omega$, whose boundary consists of an accessible portion $\Gamma$ and an unknown inaccessible portion $I$, to be determined. Assuming that the boundary portion $I$ is free, a possible approach to determine $I$ consists in applying a couple field $\widehat{M}$ on $\Gamma$ and measuring the resulting transversal displacement $u$ and its normal derivative $\frac{\partial u}{\partial n}$ on an open subset of $\Gamma$. In \cite{M-R} it was proved that, under suitable a priori assumptions, a single measurement of this kind is sufficient to detect $I$. The stability issue, which we address here, asks whether or not small perturbations of the measurements produce small perturbations of the unknown boundary $I$. Since assigning a couple field $\widehat{M}$ results in prescribing the so-called Neumann conditions for the plate, that is two boundary conditions of second and third order respectively, it follows that Cauchy data are known on $\Gamma$. 
Therefore it is quite reasonable, also in view of the literature about stability results for the determination of unknown boundaries in other physical frameworks (see for instance \cite{A-B-R-V}, \cite{Si}, \cite{Ve}), that the first step toward such a stability result consists in proving stability estimates for the Cauchy problem for the fourth order equation \eqref{1-I}. For this reason, in the present paper we derive a stability result for the Cauchy problem, see Theorem \ref{theo:LSC}, having in mind applications to this inverse problem and to the analogous ones, consisting in the determination of cavities or rigid inclusions inside the plate. We refer to \cite{M-R-V3} and to \cite{M-R} respectively for uniqueness results for these two inverse problems.\\ \textit{Problem b)}. We consider a thin elastic plate, inside which an unknown inclusion made of different material might be present. Denoting by $\Omega$ and $D$ the middle surface of the plate and of the inclusion respectively, a problem of practical interest is the evaluation of the area of $D$. In \cite{M-R-V1} we derived upper and lower estimates of the area of $D$ in terms of boundary measurements, for the case of isotropic material and assuming a ``fatness'' condition on the set $D$, see \cite[Theorem 4.1]{M-R-V1}. 
Since the proof of that result was mainly based on a three sphere inequality for $|\nabla^2u|^2$ (here $\nabla^2u$ denotes the Hessian matrix of $u$), where $u$ is a solution of the plate equation, we emphasize here that Theorem 4.1 of \cite{M-R-V1} extends to the more general anisotropic assumptions on the elasticity tensor stated in Theorem \ref{theo:12-4.3} of the present paper, in which such a three sphere inequality is established.\\ Concerning the Cauchy problem, along a classical path, \cite{NIR}, recently revived in \cite{A-R-R-V} in the framework of second order elliptic equations, we derive the stability estimates for the Cauchy problem for equation \eqref{1-I} as a consequence of smallness propagation estimates from an open set for solutions to \eqref{1-I}. Such smallness propagation estimates are achieved by a standard iterative application of the three sphere inequality. In view of the applications to problems a) and b), we have paid particular attention to the sharp character of the exponents appearing in the three sphere inequality because of its natural connection with the unique continuation property for functions vanishing at a point with polynomial rate of convergence (strong unique continuation property, \cite{CoGr}, \cite{Co-Gr-T}, \cite{Ge}, \cite{LeB}, \cite{L-N-W}, \cite{M-R-V1}) or with exponential rate of convergence, \cite{Co-K}, \cite{Pr}. As a byproduct of our three sphere inequality, we reobtain the result in \cite{Co-K}, in the case of $C^{1,1}$ coefficients, stating that, if $u(x)=O\left(e^{-|x-x_0|^{-\beta}}\right)$ as $x\rightarrow x_0$, for some $x_0\in\Omega$ and for an appropriate $\beta>0$ which is precisely defined below, then $u\equiv0$ in $\Omega$. 
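The standard iterative application of the three sphere inequality mentioned above admits a simple quantitative sketch. In the toy computation below, the constants $C$, $\alpha$ and the bounds $\epsilon$ (on the smallest ball) and $E$ (global) are hypothetical sample values, not taken from the present paper; the point is only the bookkeeping of the exponents along the chain of balls.

```python
# Toy bookkeeping for propagation of smallness: iterate a three sphere
# inequality  ||u||_{r2} <= C * ||u||_{r1}^alpha * ||u||_{r3}^(1-alpha),
# using the intermediate ball of one step as the small ball of the next.
# C, alpha, eps, E below are hypothetical sample values.
def three_sphere_bound(small, large, C, alpha):
    """Bound on the intermediate ball from bounds on the small/large balls."""
    return C * small**alpha * large**(1.0 - alpha)

def propagate(eps, E, C, alpha, steps):
    """Iterate the inequality along a chain of balls."""
    bound = eps
    for _ in range(steps):
        bound = three_sphere_bound(bound, E, C, alpha)
    return bound

eps, E, C, alpha = 1e-8, 1.0, 2.0, 0.5
for N in (1, 2, 5):
    print(N, propagate(eps, E, C, alpha, N))

# Closed form after N steps:
# C**((1 - alpha**N)/(1 - alpha)) * eps**(alpha**N) * E**(1 - alpha**N)
N = 5
closed = C**((1 - alpha**N) / (1 - alpha)) * eps**(alpha**N) * E**(1 - alpha**N)
assert abs(propagate(eps, E, C, alpha, N) - closed) < 1e-12
```

This makes explicit the price of the propagation: after $N$ steps the smallness enters only through the exponent $\alpha^N$, which degrades with the number of balls in the chain.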
Indeed, it is worth stressing that such kinds of unique continuation properties, especially the quantitative version of the strong unique continuation property (three sphere inequalities with optimal exponent and doubling inequalities, in the interior and at the boundary), have provided crucial tools to prove optimal stability estimates for inverse problems with unknown boundaries \cite{A-B-R-V}, \cite{Si}, \cite{Ve} and to get size estimates for unknown inclusions, \cite{A-M-R1}, \cite{A-M-R2}, \cite{A-M-R3}, \cite{A-R-S}, \cite{M-R-V1}, \cite{M-R-V2}. Concerning problem b), we stress that the application of doubling inequalities allows one to obtain size estimates of the unknown inclusion $D$ under fully general hypotheses on $D$, which is assumed to be merely a measurable set, see \cite{M-R-V2}. The strong unique continuation property for equation \eqref{1-I} holds true (\cite{CoGr}, \cite{LeB}, \cite{L-N-W}, \cite{M-R-V1}) when the tensor $\{C_{ijkl}(x)\}_{i,j,k,l=1}^{2}$ satisfies isotropy hypotheses, that is \begin{equation} \label{10-I} C_{ijkl}(x)=\delta_{ij}\delta_{kl}\lambda(x)+\left(\delta_{ik}\delta_{jl} +\delta_{il}\delta_{jk}\right)\mu(x), \quad \hbox{ }{i,j,k,l=1,2}, \quad \hbox{ in } \Omega, \end{equation} \noindent where $\lambda$ and $\mu$ are the Lam\'{e} moduli. On the other hand, in view of Alinhac's theorem \cite{Ali}, it seems extremely improbable that the solutions to \eqref{1-I} can satisfy the strong unique continuation property under the general hypotheses \eqref{2-I} and \eqref{3-I} on the tensor $\{C_{ijkl}(x)\}_{i,j,k,l=1}^{2}$. Indeed, let $\widetilde{\mathcal{L}}=\sum_{h=0}^{4}a_{4-h}(x)\partial _{1}^{h}\partial _{2}^{4-h}$ be the principal part of the operator $\mathcal{L}$. Let $z_1, z_2, \overline{z}_1, \overline{z}_2$ (here $\overline{z}_j$ is the conjugate of the complex number $z_j$) be the complex roots of the algebraic equation $\sum_{h=0}^{4}a_{4-h}(x_0)z^{h}=0$. 
In \cite{Ali} it is proved that if $z_1\neq z_2$ then there exists an operator $Q$ of order less than four such that the strong unique continuation property in $x_0$ does not hold true for the solutions to the equation $\widetilde{\mathcal{L}}u+Qu=0$. A fortiori, it seems hopeless that solutions to \eqref{1-I} can satisfy a doubling inequality. To the best of our knowledge, concerning both the weak and the strong unique continuation property for equation \eqref{1-I}, under the general assumptions \eqref{2-I}, \eqref{3-I} and some reasonable smoothness condition on the coefficients $C_{ijkl}$, neither positive answers nor counterexamples are available in the literature. On the other hand, it is clear that, in order to face the issue of the unique continuation property for equation \eqref{1-I} under the above mentioned conditions, the two-dimensional character of equation \eqref{1-I} or the specific structure of the equation should play a crucial role. Indeed, an example of Pl\u{\i}s \cite{Pl}, \cite{Zu} shows that the unique continuation property fails for general three-dimensional fourth order elliptic equations with real $C^\infty$ coefficients. For the reasons we have just outlined, in the present paper we have departed somewhat from the specific equation \eqref{1-I} and we have derived the three sphere inequality that we are interested in as a consequence of a three sphere inequality for solutions to the equation \begin{equation} \label{4-I} P_4(u)+Q(u)=0, \quad \hbox{ in } B_1=\{x\in \mathbb{R}^n\ |\ |x|<1\}, \end{equation} where $n\geq 2$, $Q$ is a third order operator with bounded coefficients and $P_4$ is a fourth order elliptic operator such that \begin{equation} \label{5-I} P_4=L_2L_1, \end{equation} where $L_1$ and $L_2$ are two second order uniformly elliptic operators with real and $C^{1,1}(\overline{B_1})$ coefficients. 
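At the level of the symbol frozen at a point, the factorization \eqref{5-I} corresponds to splitting the quartic characteristic polynomial with roots $z_1, z_2, \overline{z}_1, \overline{z}_2$ into two real quadratic factors, one per conjugate pair. The numerical sketch below illustrates this for an arbitrary set of elliptic sample coefficients (not taken from the paper); for variable coefficients the factorization into genuine operators $L_2L_1$ requires more, namely the dichotomy condition discussed below.

```python
# Sketch: the frozen quartic symbol a0 z^4 + ... + a4 with no real roots
# splits into two monic real quadratics, z^2 - 2 Re(z_k) z + |z_k|^2,
# one per conjugate pair of roots. The coefficients are arbitrary
# elliptic sample values.
import numpy as np

coeffs = np.array([1.0, 0.0, 3.0, 0.0, 1.0])   # a0 z^4 + a1 z^3 + ... + a4
roots = np.roots(coeffs)
assert np.all(np.abs(roots.imag) > 1e-8)       # ellipticity: no real roots

# one monic real quadratic factor per root in the upper half plane
quads = [np.array([1.0, -2.0 * r.real, abs(r) ** 2])
         for r in roots if r.imag > 0]
recombined = coeffs[0] * np.polymul(quads[0], quads[1])
assert np.allclose(recombined, coeffs)         # the product recovers the quartic
```

Each quadratic factor is the symbol of a second order elliptic operator with real coefficients, mirroring $P_4=L_2L_1$.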
Our approach is also supported by the fact that the operator $\mathcal{L}$ can be written, under very general and simple conditions (see sections \ref{SecCauchy} and \ref{Sec4.3}), as follows \begin{equation} \label{6-I} \mathcal{L}=P_4+Q, \end{equation} where $P_4$ satisfies \eqref{5-I} and $Q$ is a third order operator with bounded coefficients. We have conventionally labeled such conditions (see Definition \ref{def:dichotomy} in Section \ref{SecCauchy}) the \emph{dichotomy condition}. On the other hand, the conditions under which the decomposition \eqref{6-I} is possible are, up to now, basically the same under which the unique continuation property holds for fourth order elliptic equations in two variables \cite{Wat}, \cite{Zu}. More precisely, such conditions guarantee the weak unique continuation property for solutions to $\mathcal{L}u=0$ provided that the complex characteristic lines of the principal part of the operator $\mathcal{L}$ satisfy some regularity hypothesis. We prove the three sphere inequality for solutions to equation \eqref{4-I} (provided that $P_4$ satisfies \eqref{5-I}) in Theorem \ref{theo:7-4.2}. From such a theorem we immediately deduce (Corollary \ref{cor:8-4.2}) the following unique continuation property. Let $L_k=\sum_{i,j=1}^ng_k^{ij}(x)\partial_{ij}^2$, $k=1,2$, where $g_k=\{g_k^{ij}(x)\}_{i,j=1}^{n}$ are symmetric matrix-valued functions whose entries belong to $C^{1,1}\left(\overline{B}_1\right)$. Assuming that $\{g_k^{ij}(x)\}_{i,j=1}^{n}$, $k=1,2$, satisfy a uniform ellipticity condition in $B_1$, let $\nu_*$ and $\nu^*$ ($\mu_*$ and $\mu^*$) be the minimum and the maximum eigenvalues of $\{g_1^{ij}(0)\}_{i,j=1}^{n}$ ($\{g_2^{ij}(0)\}_{i,j=1}^{n}$) respectively, and let $\beta>\sqrt{\frac{\mu^*\nu^*}{\mu_*\nu_*}}-1$. We have that \begin{equation} \label{30-I} \quad \hbox{if} \qquad u(x)=O\left(e^{-|x|^{-\beta}}\right), \quad \hbox{ as } x\rightarrow 0, \quad \hbox{ then } u\equiv0 \quad \hbox{ in } B_1. 
\end{equation} Since \eqref{30-I} has been proved for the first time in \cite{Co-K}, see also \cite{Co-Gr-T}, where the sharp character of property \eqref{30-I} has been emphasized, we believe it useful to compare our procedure with the one followed in \cite{Co-K}. In the present paper, as well as in \cite{Co-K}, the bulk of the proof consists in obtaining a Carleman estimate for $P_4=L_2L_1$ with weight function $e^{-\left(\sigma_0(x)\right)^{-\beta}}$, where $\beta>\sqrt{\frac{\mu^*\nu^*}{\mu_*\nu_*}}-1$ and $\left(\sigma_0(x)\right)^2$ is a suitable positive definite quadratic form (Theorem \ref{theo:6-4.2}). In turn, here and in \cite{Co-K}, the Carleman estimate for $P_4$ is obtained by an iteration of two Carleman estimates for the operators $L_1$ and $L_2$ with the same weight function $e^{-\left(\sigma_0(x)\right)^{-\beta}}$. However, while in \cite{Co-K} and \cite{Co-Gr-T} the proof of Carleman estimates for $L_1$ and $L_2$ is carried out by a careful analysis of the pseudoconvexity conditions, \cite{HO63}, \cite{HO2}, \cite{I04}, in the present paper, Section \ref{Sec4.1}, we obtain the same estimates in a more elementary and direct way. More precisely, we adapt appropriately a technique introduced in \cite{E-V} in the context of parabolic operators. A prototype of this technique was already used in \cite{Ke-Wa} in the context of boundary unique continuation for harmonic functions. Such a technique, based only on integration by parts and the fundamental theorem of calculus, is direct and elementary, and makes it possible to control easily the constants that occur in the final three sphere inequality. Finally, let us notice that the above results can also be extended to treat fourth order operators having leading part $\mathcal{L}u$ given by \eqref{1-I} and involving lower order terms. An example of practical relevance is, for instance, the equilibrium problem for a thin plate resting on an elastic foundation. 
According to the Winkler model \cite{Win}, the corresponding equation is \begin{equation} \label{lower_order} \mathcal{L}u+ku=0, \quad \hbox{in } \Omega, \end{equation} where $k=k(x)$ is a smooth, strictly positive function. Indeed, in view of Theorem \ref{theo:7-4.2}, the three sphere inequalities established in Section \ref{Sec4.3} extend to equation \eqref{lower_order}. The plan of the paper is as follows. In Section \ref{SecNotation} we introduce some basic notation. In Section \ref{SecCauchy} we present the main results for the Cauchy problem, see Theorem \ref{theo:LSC}. In Section \ref{Sec4.1} we prove a Carleman estimate for second order elliptic operators, Theorem \ref{theo:4-4.1}, which will be used in Section \ref{Sec4.2} to derive a Carleman estimate for fourth order operators obtained as composition of two second order elliptic operators, Theorem \ref{theo:6-4.2}. In the same Section, as a consequence of Theorem \ref{theo:6-4.2}, we also derive a three sphere inequality and the unique continuation property for such fourth order operators, see Theorem \ref{theo:7-4.2} and Corollary \ref{cor:8-4.2} respectively. Finally, in Section \ref{Sec4.3}, the results of Section \ref{Sec4.2} are applied to the anisotropic plate operator, obtaining the desired three sphere inequality, see Theorems \ref{theo:9-4.3}, \ref{theo:12-4.3} and \ref{theo:13-4.3}. \section{Notation\label{SecNotation}} Let $P=(x_1(P), x_2(P))$ be a point of $\mathbb{R}^2$. We shall denote by $B_r(P)$ the ball in $\mathbb{R}^2$ of radius $r$ and center $P$ and by $R_{a,b}(P)$ the rectangle of center $P$ and sides parallel to the coordinate axes, of length $a$ and $b$, namely $R_{a,b}(P)=\{x=(x_1,x_2)\ |\ |x_1-x_1(P)|<a,\ |x_2-x_2(P)|<b \}$. To simplify the notation, we shall denote $B_r=B_r(O)$, $R_{a,b}=R_{a,b}(O)$. \noindent When representing locally a boundary as a graph, we use the following definition. 
\begin{definition} \label{def:2.1} (${C}^{k,\alpha}$ regularity) Let $\Omega$ be a bounded domain in ${\mathbb{R}}^{2}$. Given $k,\alpha$, with $k\in\mathbb{N}$, $0<\alpha\leq 1$, we say that a portion $S$ of $\partial \Omega$ is of \textit{class ${C}^{k,\alpha}$ with constants $\rho_{0}$, $M_{0}>0$}, if, for any $P \in S$, there exists a rigid transformation of coordinates under which we have $P=0$ and \begin{equation*} \Omega \cap R_{\frac{\rho_0}{M_0},\rho_0}=\{x=(x_1,x_2) \in R_{\frac{\rho_0}{M_0},\rho_0}\quad | \quad x_{2}>\psi(x_1) \}, \end{equation*} where $\psi$ is a ${C}^{k,\alpha}$ function on $\left(-\frac{\rho_0}{M_0},\frac{\rho_0}{M_0}\right)$ satisfying \begin{equation*} \psi(0)=0, \end{equation*} \begin{equation*} \psi' (0)=0, \quad \hbox {when } k \geq 1, \end{equation*} \begin{equation*} \|\psi\|_{{C}^{k,\alpha}\left(-\frac{\rho_0}{M_0},\frac{\rho_0}{M_0}\right)} \leq M_{0}\rho_{0}. \end{equation*} \medskip \noindent When $k=0$, $\alpha=1$, we also say that $S$ is of \textit{Lipschitz class with constants $\rho_{0}$, $M_{0}$}. \end{definition} \begin{rem} \label{rem:2.1} We use the convention to normalize all norms in such a way that their terms are dimensionally homogeneous with the $L^\infty$ norm and coincide with the standard definition when the dimensional parameter equals one. For instance, the norm appearing above is meant as follows \begin{equation*} \|\psi\|_{{C}^{k,\alpha}\left(-\frac{\rho_0}{M_0},\frac{\rho_0}{M_0}\right)} = \sum_{i=0}^k \rho_0^i \|\psi^{(i)}\|_{{L}^{\infty}\left(-\frac{\rho_0}{M_0},\frac{\rho_0}{M_0}\right)}+ \rho_0^{k+\alpha}|\psi^{(k)}|_{\alpha, \left(-\frac{\rho_0}{M_0},\frac{\rho_0}{M_0}\right)}, \end{equation*} where \begin{equation*} |\psi^{(k)}|_{\alpha,\left(-\frac{\rho_0}{M_0},\frac{\rho_0}{M_0}\right)}= \sup_ {\overset{\scriptstyle x', \ y'\in \left(-\frac{\rho_0}{M_0},\frac{\rho_0}{M_0}\right)}{\scriptstyle x'\neq y'}} \frac{|\psi^{(k)}(x')-\psi^{(k)}(y')|} {|x'-y'|^\alpha}. 
\end{equation*} Similarly, denoting by $\nabla^i u$ the vector whose components are the derivatives of order $i$ of the function $u$, \begin{equation*} \|u\|_{{C}^{k,1}(\Omega)} =\sum_{i=0}^{k+1} {\rho_{0}}^{i}\|{\nabla}^{i} u\|_{{L}^{\infty}(\Omega)}, \end{equation*} \begin{equation*} \|u\|_{L^2(\Omega)}=\rho_0^{-1}\left(\int_\Omega u^2\right) ^{\frac{1}{2}}, \end{equation*} \begin{equation*} \|u\|_{H^m(\Omega)}=\rho_0^{-1}\left(\sum_{i=0}^m \rho_0^{2i}\int_\Omega|\nabla^i u|^2\right)^{\frac{1}{2}}, \end{equation*} and so on for boundary and trace norms such as $\|\cdot\|_{H^{\frac{1}{2}}(\partial\Omega)}$, $\|\cdot\|_{H^{-\frac{1}{2}}(\partial\Omega)}$. Notice also that, when $\Omega=B_{R}(0)$, then $\Omega$ satisfies Definition \ref{def:2.1} with $\rho_{0}=R$, $M_0=2$ and therefore, for instance, \begin{equation*} \|u\|_{H^m(B_R)}=R^{-1}\left(\sum_{i=0}^m R^{2i}\int_{B_R}|\nabla^i u|^2\right)^{\frac{1}{2}}. \end{equation*} \end{rem} Given a bounded domain $\Omega$ in $\mathbb{R}^2$ such that $\partial \Omega$ is of class $C^{k,\alpha}$, with $k\geq 1$, we consider as positive the orientation of the boundary induced by the outer unit normal $n$ in the following sense. Given a point $P\in\partial\Omega$, let us denote by $\tau=\tau(P)$ the unit tangent at the boundary in $P$ obtained by applying to $n$ a counterclockwise rotation of angle $\frac{\pi}{2}$, that is \begin{equation} \label{eq:2.tangent} \tau=e_3 \times n, \end{equation} where $\times$ denotes the vector product in $\mathbb{R}^3$, $\{e_1, e_2\}$ is the canonical basis in $\mathbb{R}^2$ and $e_3=e_1 \times e_2$. Given any connected component $\cal C$ of $\partial \Omega$ and having fixed a point $P\in\cal C$, let us define as positive the orientation of $\cal C$ associated to an arclength parametrization $\varphi(s)=(x_1(s), x_2(s))$, $s \in [0, l(\cal C)]$, such that $\varphi(0)=P$ and $\varphi'(s)=\tau(\varphi(s))$. Here $l(\cal C)$ denotes the length of $\cal C$. 
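In coordinates, the positive tangent \eqref{eq:2.tangent} is simply the outer normal rotated counterclockwise by $\frac{\pi}{2}$, i.e. $(n_1,n_2)\mapsto(-n_2,n_1)$; a quick numerical check of this identity (with an arbitrary sample normal):

```python
# tau = e3 x n restricted to the plane is the counterclockwise rotation
# of n by pi/2: (n1, n2) -> (-n2, n1). Checked against the full cross
# product in R^3 for a sample unit normal.
import numpy as np

def tangent(n):
    """tau = e3 x n for a planar vector n = (n1, n2)."""
    return np.array([-n[1], n[0]])

n = np.array([0.6, 0.8])                  # a sample unit normal
e3 = np.array([0.0, 0.0, 1.0])
tau3 = np.cross(e3, np.append(n, 0.0))    # cross product in R^3
assert np.allclose(tau3[:2], tangent(n))
assert abs(np.dot(tangent(n), n)) < 1e-12 # tau is orthogonal to n
```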
Throughout the paper, we denote by $\partial_i u$, $\partial_s u$, and $\partial_n u$ the derivatives of a function $u$ with respect to the $x_i$ variable, to the arclength $s$ and to the normal direction $n$, respectively, and similarly for higher order derivatives. We denote by $\mathbb{M}^2$ the space of $2 \times 2$ real-valued matrices and by ${\mathcal L} (X, Y)$ the space of bounded linear operators between Banach spaces $X$ and $Y$. For all $2 \times 2$ matrices $A$, $B$ and for every $\mathbb{L} \in{\mathcal L} ({\mathbb{M}}^{2}, {\mathbb{M}}^{2})$, we use the following notation: \begin{equation} \label{eq:2.notation_1} ({\mathbb{L}}A)_{ij} = L_{ijkl}A_{kl}, \end{equation} \begin{equation} \label{eq:2.notation_2} A \cdot B = A_{ij}B_{ij}, \end{equation} \begin{equation} \label{eq:2.notation_3} |A|= (A \cdot A)^{\frac {1} {2}}, \end{equation} \begin{equation} \label{eq:2.notation_3bis} A^{sym} = \frac{1}{2} \left ( A + A^t \right ), \end{equation} where $A^t$ denotes the transpose of the matrix $A$. Notice that here and in the sequel summation over repeated indices is implied. \section{Stability estimates for the Cauchy problem\label{SecCauchy}} Let us consider a thin plate $\Omega\times[-\frac{h}{2},\frac{h}{2}]$ with middle surface represented by a bounded domain $\Omega$ in $\mathbb{R}^2$ and having uniform thickness $h$, $h\ll\hbox{diam}(\Omega)$. Given a positive constant $M_1$, we assume that \begin{equation} \label{eq:M_1} |\Omega|\leq M_1\rho_0^2. \end{equation} Let us assume that the plate is made of nonhomogeneous linear elastic material with elasticity tensor $\mathbb{C}(x) \in{\cal L} ({\mathbb{M}}^{2}, {\mathbb{M}}^{2})$ and that body forces inside $\Omega$ are absent. We denote by $\hat M$ a couple field acting on the boundary $\partial\Omega$. 
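The tensor notation \eqref{eq:2.notation_1}--\eqref{eq:2.notation_3bis} can be spelled out concretely with index contractions; in the sketch below the fourth order tensor and the matrices are generic random placeholders, not an elasticity tensor:

```python
# (L A)_{ij} = L_{ijkl} A_{kl},  A . B = A_{ij} B_{ij},  |A| = (A . A)^{1/2},
# A^sym = (A + A^t)/2, written with einsum on random 2x2 data.
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((2, 2, 2, 2))         # generic fourth order tensor
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))

LA = np.einsum("ijkl,kl->ij", L, A)           # (L A)_{ij} = L_{ijkl} A_{kl}
A_dot_B = np.einsum("ij,ij->", A, B)          # A . B = A_{ij} B_{ij}
norm_A = np.sqrt(np.einsum("ij,ij->", A, A))  # |A| = (A . A)^{1/2}
A_sym = 0.5 * (A + A.T)                       # A^{sym}

assert LA.shape == (2, 2)
assert np.isclose(A_dot_B, np.sum(A * B))
assert np.isclose(norm_A, np.linalg.norm(A))
assert np.allclose(A_sym, A_sym.T)
```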
We shall assume throughout that the elasticity tensor $\mathbb{C}$ has cartesian components $C_{ijkl}$ which satisfy the following conditions \begin{equation} \label{eq:sym-conditions-C-components} C_{ijkl} = C_{klij} = C_{klji} \quad i,j,k,l =1,2, \hbox{ a.e. in } \Omega. \end{equation} We recall that the symmetry conditions \eqref{eq:sym-conditions-C-components} are equivalent to \begin{equation} \label{eq:sym-conditions-C-1} {\mathbb{C}}A={\mathbb{C}} {A}^{sym}, \end{equation} \begin{equation} \label{eq:eq:sym-conditions-C-2} {\mathbb{C}}A \quad \hbox{is symmetric}, \end{equation} \begin{equation} \label{eq:eq:sym-conditions-C-3} {\mathbb{C}}A \cdot B= {\mathbb{C}}B \cdot A, \end{equation} for all $2 \times 2$ matrices $A$, $B$. In order to simplify the presentation, we shall assume that the tensor $\mathbb{C}$ is defined in all of $\mathbb{R}^2$. On the elasticity tensor $\mathbb{C}$ we make the following assumptions: \medskip {I)} \textit{Regularity} \begin{equation} \label{eq:3.bound} \mathbb{C} \in C^{1,1}(\mathbb{R}^2, {\mathcal L} ({\mathbb{M}}^{2}, {\mathbb{M}}^{2})), \end{equation} with \begin{equation} \label{eq:3.bound_quantit} \sum_{i,j,k,l=1}^2 \sum_{m=0}^2 \rho_0^m \|\nabla^m C_{ijkl}\|_{L^\infty(\mathbb{R}^2)} \leq M, \end{equation} where $M$ is a positive constant; {II)} \textit{Ellipticity (strong convexity)} There exists $\gamma>0$ such that \begin{equation} \label{eq:3.convex} {\mathbb{C}}A \cdot A \geq \gamma |A|^2, \qquad \hbox{in } \mathbb{R}^2, \end{equation} for every $2\times 2$ symmetric matrix $A$. 
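For the isotropic tensor \eqref{10-I} recalled in the introduction, the strong convexity condition \eqref{eq:3.convex} can be checked explicitly: for symmetric $A$ one has ${\mathbb{C}}A\cdot A=\lambda(\mathrm{tr}\,A)^2+2\mu|A|^2$, so $\gamma=2\mu$ works whenever $\lambda,\mu>0$. A numerical spot check, with arbitrary sample values of the Lam\'{e} moduli:

```python
# Spot check of C A . A >= gamma |A|^2 for the isotropic tensor
# C_{ijkl} = lam d_ij d_kl + mu (d_ik d_jl + d_il d_jk), lam, mu > 0
# arbitrary sample moduli; for symmetric A,
# C A . A = lam*(tr A)^2 + 2*mu*|A|^2, so gamma = 2*mu.
import numpy as np

lam, mu = 1.3, 0.7
d = np.eye(2)
C = (lam * np.einsum("ij,kl->ijkl", d, d)
     + mu * (np.einsum("ik,jl->ijkl", d, d) + np.einsum("il,jk->ijkl", d, d)))

rng = np.random.default_rng(1)
for _ in range(100):
    M = rng.standard_normal((2, 2))
    A = 0.5 * (M + M.T)                        # symmetric test matrix
    CAA = np.einsum("ijkl,kl,ij->", C, A, A)   # C A . A
    assert CAA >= 2 * mu * np.sum(A * A) - 1e-12
    assert np.isclose(CAA, lam * np.trace(A)**2 + 2 * mu * np.sum(A * A))
```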
Condition \eqref{eq:sym-conditions-C-components} implies that instead of $16$ coefficients we actually deal with $6$ coefficients and we denote \begin{center} \( {\displaystyle \left\{ \begin{array}{lr} C_{1111}=A_0, \ \ C_{1122}=C_{2211}=B_0, \vspace{0.12em}\\ C_{1112}=C_{1121}=C_{1211}=C_{2111}=C_0, \vspace{0.12em}\\ C_{2212}=C_{2221}=C_{1222}=C_{2122}=D_0, \vspace{0.12em}\\ C_{1212}=C_{1221}=C_{2112}=C_{2121}=E_0, \vspace{0.12em}\\ C_{2222}=F_0, \vspace{0.25em}\\ \end{array} \right. } \) \vskip -3.0em \begin{eqnarray} \ & & \label{3.coeff6} \end{eqnarray} \end{center} and \begin{equation} \label{3.coeffsmall} a_0=A_0, \ a_1=4C_0, \ a_2=2B_0+4E_0, \ a_3=4D_0, \ a_4=F_0. \end{equation} Let $S(x)$ be the following $7\times 7$ matrix \begin{equation} \label{3. S(x)} S(x) = {\left( \begin{array}{ccccccc} a_0 & a_1 & a_2 & a_3 & a_4 & 0 & 0 \\ 0 & a_0 & a_1 & a_2 & a_3 & a_4 & 0 \\ 0 & 0 & a_0 & a_1 & a_2 & a_3 & a_4 \\ 4a_0 & 3a_1 & 2a_2 & a_3 & 0 & 0 & 0 \\ 0 & 4a_0 & 3a_1 & 2a_2 & a_3 & 0 & 0 \\ 0 & 0 & 4a_0 & 3a_1 & 2a_2 & a_3 & 0 \\ 0 & 0 & 0 & 4a_0 & 3a_1 & 2a_2 & a_3 \\ \end{array} \right)}, \end{equation} and \begin{equation} \label{3.D(x)} {\mathcal{D}}(x)= \frac{1}{a_0} |\det S(x)|. \end{equation} Let us introduce the fourth order \emph{plate tensor} \begin{equation} \label{3.P} \mathbb{P}= \frac{h^3}{12} \mathbb{C}, \quad\hbox{in } \mathbb{R}^2. \end{equation} With this notation we may rewrite the plate equation \eqref{1-I} in the equivalent compact form \begin{equation} \label{3.compact_plate} {\rm div}({\rm div} ( {\mathbb P}\nabla^2 u))=0, \quad\hbox{in } \Omega, \end{equation} where the divergence of a second order tensor field $T(x)$ is defined, as usual, by \begin{equation*} (\textrm{div}\, T(x))_i=\partial_j T_{ij}(x). 
\end{equation*} Our approach to the Cauchy problem leads us to consider the following complete, inhomogeneous equation \begin{equation} \label{3.compact_plate_inhom} {\rm div}({\rm div} ( {\mathbb P}\nabla^2 u))=f + {\rm div}F + {\rm div}({\rm div} \mathcal{F}), \quad\hbox{in } B_R, \end{equation} where $f\in L^2(\mathbb{R}^2)$, $F\in L^2(\mathbb{R}^2;\mathbb{R}^2)$, $\mathcal{F}\in L^2(\mathbb{R}^2;\mathbb{M}^2)$ satisfy the bound \begin{equation} \label{3.bound_inhom} \|f\|_{L^2(\mathbb{R}^2)}+\frac{1}{\rho_0}\|F\|_{L^2(\mathbb{R}^2;\mathbb{R}^2)}+ \frac{1}{\rho_0^2}\|\mathcal{F}\|_{L^2(\mathbb{R}^2;\mathbb{M}^2)} \leq \frac{\epsilon}{\rho_0^4}, \end{equation} for a given $\epsilon>0$. A weak solution to \eqref{3.compact_plate_inhom} is a function $u\in H^2(B_R)$ satisfying \begin{equation} \label{3.inhom_weak} \int_{B_R}\mathbb{P}\nabla^2 u\cdot\nabla^2\varphi=\int_{B_R}f\varphi-\int_{B_R}F\cdot\nabla\varphi+ \int_{B_R}\mathcal{F}\cdot\nabla^2\varphi, \quad\hbox{for every }\varphi\in H^2_0(B_R). \end{equation} In the sequel we shall use the following condition on the elasticity tensor that we have conventionally labeled \emph{dichotomy condition}. \begin{definition} \label{def:dichotomy}(\textbf{Dichotomy condition}) Let $\mathcal{O}$ be an open set of $\mathbb{R}^2$. We shall say that the tensor $\mathbb{P}$ satisfies the \emph{dichotomy condition} in $\mathcal{O}$ if one of the following conditions holds true \begin{subequations} \begin{eqnarray} \label{3.D(x)bound} && {\mathcal{D}}(x)>0, \quad\hbox{for every } x\in \overline{\mathcal{O}}, \\[2mm] \label{3.D(x)bound 2} && {\mathcal{D}}(x)=0, \quad\hbox{for every } x\in \overline{\mathcal{O}}, \end{eqnarray} \end{subequations} where ${\mathcal{D}}(x)$ is defined by \eqref{3.D(x)}. \end{definition} \begin{rem} \label{rem:dichotomy} Whenever \eqref{3.D(x)bound} holds we denote \begin{equation} \label{delta-1} \delta_1=\min_{\overline{\mathcal{O}}}{\mathcal{D}}. 
\end{equation} We emphasize that, in all the following statements, whenever a constant is said to depend on $\delta_1$ (among other quantities) it is understood that such dependence occurs \textit{only} when \eqref{3.D(x)bound} holds. \end{rem} \begin{rem} \label{rem:orthotropy} Let us briefly comment on the \emph{dichotomy condition} in the special class of \emph{orthotropic} materials, frequently used in practical applications. In particular, let us assume that through each point of the plate there pass three mutually orthogonal planes of elastic symmetry and that these planes are parallel at all points. In this case \begin{equation} \label{ortho-1} C_0=0, \quad D_0=0, \end{equation} so that \begin{equation} \label{ortho-2} a_0=A_0, \quad a_1=0, \quad a_2=2B_0+4E_0, \quad a_3=0, \quad a_4=F_0, \end{equation} and \begin{equation} \label{ortho-3} {\mathcal{D}}(x) = 16 a_0 a_4 ( a_2^2 - 4 a_0 a_4)^2. \end{equation} Since, by the ellipticity condition \eqref{eq:3.convex}, the coefficients $a_0$, $a_4$ are strictly positive, the dichotomy condition reduces to the vanishing or non-vanishing of the factor $a_2^2 - 4 a_0 a_4$. Introducing the engineering constitutive coefficients $E_1$, $E_2$, $G_{12}$, $\nu_{12}$, $\nu_{21}$, with $\nu_{12} E_2 = \nu_{21} E_1$ by the symmetry of $\mathbb{C}$, we have \begin{equation} \label{ortho-4} a_2^2 - 4 a_0 a_4 = 4E_1^2 \left ( \left ( \frac{\nu_{12}}{k} + \frac{1- \frac{\nu_{12}^2}{k} }{ m+\nu_{12}} \right )^2 - \frac{1}{k} \right ), \end{equation} where \begin{equation} \label{ortho-5} k= \frac{E_1}{E_2}, \quad m=\frac{E_1}{2G_{12}}-\nu_{12}. \end{equation} The \emph{isotropic} case corresponds to $k=1$ and $m=1$, so that, by \eqref{ortho-4}, ${\mathcal{D}}(x) \equiv 0$. Let us notice that \begin{equation} \label{ortho-6} \hbox{if } m = \sqrt k, \quad \hbox{then } {\mathcal{D}}(x) \equiv 0. \end{equation} This shows that there exist anisotropic materials such that \eqref{3.D(x)bound 2} is satisfied. 
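The orthotropic reduction \eqref{ortho-3} can be verified numerically against the definition \eqref{3.D(x)} of ${\mathcal{D}}(x)$ through the $7\times 7$ matrix $S(x)$; the coefficient values below are arbitrary sample numbers satisfying $a_1=a_3=0$:

```python
# Numerical check that, for a1 = a3 = 0, D = |det S| / a0 built from the
# 7x7 matrix S of the text reduces to 16 a0 a4 (a2^2 - 4 a0 a4)^2.
# The coefficient values are arbitrary.
import numpy as np

def D_of(a0, a1, a2, a3, a4):
    S = np.array([
        [a0,     a1,     a2,     a3,     a4,   0.0, 0.0],
        [0.0,    a0,     a1,     a2,     a3,   a4,  0.0],
        [0.0,    0.0,    a0,     a1,     a2,   a3,  a4 ],
        [4 * a0, 3 * a1, 2 * a2, a3,     0.0,  0.0, 0.0],
        [0.0,    4 * a0, 3 * a1, 2 * a2, a3,   0.0, 0.0],
        [0.0,    0.0,    4 * a0, 3 * a1, 2 * a2, a3, 0.0],
        [0.0,    0.0,    0.0,    4 * a0, 3 * a1, 2 * a2, a3],
    ])
    return abs(np.linalg.det(S)) / a0

a0, a2, a4 = 2.0, 3.0, 1.5                # orthotropic case: a1 = a3 = 0
lhs = D_of(a0, 0.0, a2, 0.0, a4)
rhs = 16 * a0 * a4 * (a2**2 - 4 * a0 * a4)**2
assert np.isclose(lhs, rhs)
```

This also makes transparent why the sign of $a_2^2-4a_0a_4$ is irrelevant: it enters ${\mathcal{D}}$ only through its square.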
Roughly speaking, this simple example makes clear that the value of ${\mathcal{D}}(x)$ cannot be interpreted as a ``measure of anisotropy''. Moreover, a case of practical interest corresponds to the vanishing of Poisson's coefficient $\nu_{12}$, which gives \begin{equation} \label{ortho-7} a_2^2 - 4 a_0 a_4 = 4E_1^2 \left ( \frac{1}{m^2}-\frac{1}{k} \right ), \end{equation} so that \begin{equation} \label{ortho-8} \hbox{if } m \neq \sqrt k, \quad \hbox{then } {\mathcal{D}}(x) > 0. \end{equation} This gives an explicit class of examples in which \eqref{3.D(x)bound} holds. \end{rem} \begin{theo} [Three sphere inequality - complete equation] \label{theo:3sfere_completa} Let $u \in H^4({B}_R)$ be a solution to the equation \eqref{3.compact_plate_inhom}, where $\mathbb{P}$, defined by \eqref{3.P}, satisfies \eqref{eq:sym-conditions-C-components}, \eqref{eq:3.bound_quantit}, \eqref{eq:3.convex} and the dichotomy condition in $B_R$. There exist positive constants $k$ and $s$, $k\in(0,1)$ only depending on $\gamma$ and $M$, $s\in(0,1)$ only depending on $\gamma$, $M$ and on $\delta_1=\min_{\overline{B}_R}{\mathcal{D}}$, such that for every $r_1$, $r_2$, $r_3$, $0<r_1<r_2<kr_3<sR$, the following inequality holds \begin{equation} \label{3.3sfere_eqcompl} \|u\|_{L^2(B_{r_2})}\leq C\left(\|u\|_{L^2(B_{r_1})}+\epsilon\right)^\alpha \left(\|u\|_{H^4(B_{r_3})}+\epsilon\right)^{1-\alpha} \end{equation} where $C>0$ and $\alpha\in(0,1)$ only depend on $\gamma$, $M$, $\delta_1$, $\frac{r_2}{r_1}$ and $\frac{r_3}{r_2}$. \end{theo} \begin{proof} Let us consider the unique solution $u_0$ to \begin{equation} \label{3.u_0} \left\{ \begin{array}{lr} {\rm div}({\rm div} ( {\mathbb P}\nabla^2 u_0))=f + {\rm div}F + {\rm div}({\rm div} \mathcal{F}), &\hbox{in } B_R,\\ u_0=0,& \hbox{on }\partial B_R,\\ \frac{\partial u_0}{\partial\nu}=0, &\hbox{on } \partial B_R.\\ \end{array} \right.
\end{equation} By using the weak formulation \eqref{3.inhom_weak} with $\varphi=u_0$, by the strong convexity condition \eqref{eq:3.convex}, by using the bound \eqref{3.bound_inhom} on the inhomogeneous term and by the Poincar\'{e} inequality in $H^2_0(B_R)$, we have \begin{equation} \label{3.u_0bound} \|u_0\|_{L^2(B_{R})}\leq \|u_0\|_{H^2_0(B_{R})}\leq C\epsilon, \end{equation} with $C$ only depending on $\gamma$. Noticing that $u-u_0$ satisfies the hypotheses of Theorem \ref{theo:13-4.3}, the thesis immediately follows. \end{proof} Let $\Sigma$ be an open connected portion of $\partial \Omega$ such that $\Sigma$ is of class $C^{1,1}$ with constants $\rho_0$, $M_0$, and there exists a point $P_0 \in \Sigma$ such that \begin{equation} \label{1-page6-par3} R_{ \frac{\rho_0}{M_0}, \rho_0}(P_0) \cap \partial \Omega \subset \Sigma. \end{equation} We shall consider as test function space the space $H_{co}^2(\Omega \cup \Sigma)$ consisting of the functions $\varphi \in H^2(\Omega)$ having support compactly contained in $\Omega \cup \Sigma$. We denote by $H^{ \frac{3}{2}}_{co}(\Sigma)$ the class of $H^{ \frac{3}{2}}(\Sigma)$ traces of functions $\varphi \in H_{co}^2(\Omega \cup \Sigma)$, and by $H^{ \frac{1}{2}}_{co}(\Sigma)$ the class of $H^{ \frac{1}{2}}(\Sigma)$ traces of the normal derivative $ \frac{\partial \varphi}{\partial n}$ of functions $\varphi \in H_{co}^2(\Omega \cup \Sigma)$. Moreover, for every positive integer $m$, we define $H^{- \frac{m}{2}}(\Sigma)$ as the dual space to $H^{\frac{m}{2}}(\Sigma)$ based on the $L^2(\Sigma)$ dual pairing. Let $g_1\in H^{ \frac{3}{2}}(\Sigma)$, $g_2\in H^{ \frac{1}{2}}(\Sigma)$ and $\widehat{M}\in H^{- \frac{1}{2}}(\Sigma; \mathbb{R}^2)$ be such that \begin{equation} \label{2-page6-par3} \|g_1\|_{H^{\frac{3}{2}}(\Sigma)}+\rho_0\|g_2\|_{H^{\frac{1}{2}}(\Sigma)}+ \rho_0^2\|\widehat{M}\|_{H^{-\frac{1}{2}}(\Sigma; \mathbb{R}^2)} \leq \eta, \end{equation} for some positive constant $\eta$.
We consider the following Cauchy problem \begin{center} \( {\displaystyle \left\{ \begin{array}{lr} \textrm{div}\,(\textrm{div}\, ( {\mathbb P}\nabla^2 u))=0, & \hbox{in}\ \Omega, \vspace{0.25em}\\ u=g_1, & \hbox{on}\ \Sigma, \vspace{0.25em}\\ \frac{\partial u}{\partial n}=g_2, & \hbox{on}\ \Sigma, \vspace{0.25em}\\ ({\mathbb {P}}\nabla^2 u) n \cdot n=-\widehat{M}_n, & \hbox{on}\ \Sigma, \vspace{0.25em}\\ \textrm{div}\,({\mathbb {P}} \nabla^2 u)\cdot n + (({\mathbb {P}}\nabla^2 u) n \cdot \tau),_{s}=\widehat{M}_{\tau,s}, & \hbox{on}\ \Sigma, \end{array} \right. } \) \vskip -8.4em \begin{eqnarray} & & \label{eq:1-page7-par3}\\ & & \label{eq:2-page7-par3}\\ & & \label{eq:3-page7-par3}\\ & & \label{eq:4-page7-par3}\\ & & \label{eq:5-page7-par3} \end{eqnarray} \end{center} where $\widehat{M}_\tau=\widehat{M}\cdot n$, $\widehat{M}_n=\widehat{M}\cdot \tau$ denote, respectively, the twisting moment and the bending moment applied at the boundary. A weak solution to \eqref{eq:1-page7-par3}--\eqref{eq:5-page7-par3} is a function $u \in H^2(\Omega)$ such that \begin{equation} \label{6-page7-par3} \int_\Omega \mathbb P \nabla^2 u \cdot \nabla^2 \varphi = - \int_\Sigma \left ( \widehat{M}_{\tau,s} \varphi + \widehat{M}_n \varphi_{,n} \right ), \quad \hbox{for every } \varphi \in H_{co}^2(\Omega \cup \Sigma), \end{equation} with \begin{equation} \label{7-page7-par3} u|_{\Sigma} = g_1, \quad \frac{\partial u}{\partial n}|_{\Sigma}=g_2. \end{equation} We denote \begin{equation} \label{1-page8-par3} R^-_{ \frac{\rho_0}{M_0}, \rho_0} (P_0)=\{ (x_1,x_2) \in R_{ \frac{\rho_0}{M_0}, \rho_0}(P_0) | \ x_2 < \psi(x_1)\}, \end{equation} that is \begin{equation} \label{2-page8-par3} R^-_{ \frac{\rho_0}{M_0}, \rho_0} (P_0)= R_{\frac{\rho_0}{M_0}, \rho_0}(P_0) \setminus \overline{\Omega}. \end{equation} \begin{lem} \label{lem:traces} Let $g_1 \in H^{ \frac{3}{2}}(\Sigma)$, $g_2 \in H^{ \frac{1}{2}}(\Sigma)$.
Then there exists $v \in H^2(R^-_{ \frac{\rho_0}{M_0}, \rho_0} (P_0))$ such that \begin{equation} \label{3-page8-par3} v|_{\Sigma \cap R_{\frac{\rho_0}{M_0}, \rho_0}(P_0)} = g_1, \end{equation} \begin{equation} \label{4-page8-par3} \frac{\partial v}{\partial n}|_{\Sigma \cap R_{\frac{\rho_0}{M_0}, \rho_0}(P_0)}= g_2 \end{equation} and \begin{equation} \label{5-page8-par3} \|v\|_{H^2(R^-_{ \frac{\rho_0}{M_0}, \rho_0} (P_0))} \leq C \left ( \|g_1\|_{H^{\frac{3}{2}}(\Sigma)}+\rho_0 \|g_2\|_{H^{\frac{1}{2}}(\Sigma)} \right ), \end{equation} where $C$, $C>0$, only depends on $M_0$. \end{lem} \begin{proof} The proof follows the lines of the proof of Lemma 6.1 of \cite{A-R-R-V}. \end{proof} Let us define \begin{equation} \label{1-page9-par3} \widetilde{u}= \left\{ \begin{array}{ll} u, & \hbox{in } \Omega, \\ & \\ v, & \hbox{in } R^-_{ \frac{\rho_0}{M_0}, \rho_0} (P_0),\\ \end{array} \right. \end{equation} \begin{equation} \label{2-page9-par3} \Omega_1 = \Omega \cup \left ( \Sigma \cap R_{ \frac{\rho_0}{M_0}, \rho_0} (P_0) \right ) \cup R^-_{ \frac{\rho_0}{M_0}, \rho_0} (P_0). \end{equation} Since $u$ and $v$ share the same Dirichlet data $(g_1, g_2)$ on $\Sigma$, we have that \begin{equation} \label{3-page9-par3} \widetilde{u} \in H^2(\Omega_1). \end{equation} \begin{theo} \label{theo:Extension} There exist $\widetilde{f} \in L^2(\Omega_1)$, $\widetilde{F} \in L^2(\Omega_1; \mathbb{R}^2)$, $\widetilde{\mathcal{F}} \in L^2(\Omega_1; \mathbb{M}^2)$ such that \begin{equation} \label{4-page9-par3} \|\widetilde{f}\|_{L^2(\Omega_1)}+ \frac{1}{\rho_0}\|\widetilde{F}\|_{L^2(\Omega_1;\mathbb{R}^2)} + \frac{1}{\rho_0^2}\|\widetilde{\mathcal{F}}\|_{L^2(\Omega_1; \mathbb{M}^2)}\leq \frac{C\eta}{\rho_0^4} \end{equation} and $\widetilde{u}$ satisfies in the weak sense the equation \begin{equation} \label{5-page9-par3} {\rm div}({\rm div} ( {\mathbb P}\nabla^2 \widetilde{u}))=\widetilde{f} + {\rm div}\widetilde{F} + {\rm div}({\rm div} \widetilde{\mathcal{F}}), \quad\hbox{in } \Omega_1.
\end{equation} Here, the constant $C$, $C>0$, only depends on $M_0$ and $\gamma$. \end{theo} \begin{proof} Let $\varphi$ be an arbitrary test function in $H^2_0(\Omega_1)$. It is clear that $\varphi|_{\Omega} \in H_{co}^2(\Omega \cup \Sigma)$. Denoting for simplicity $R^- =R^-_{ \frac{\rho_0}{M_0}, \rho_0} (P_0)$, by \eqref{6-page7-par3} we have \begin{equation} \label{1-page10-par3} \int_{\Omega_1} \mathbb{P} \nabla^2 \widetilde{u} \cdot \nabla^2 \varphi = - \int_\Sigma ( \widehat{M}_{\tau,s}\varphi +\widehat{M}_n \varphi_{,n}) + \int_{R^-} \mathbb{P} \nabla^2 v \cdot \nabla^2 \varphi. \end{equation} Let us define the functional $\Psi: H_0^2(\Omega_1) \rightarrow \mathbb{R}$ as \begin{equation} \label{2-page10-par3} \Psi(\varphi) = \int_\Sigma ( \widehat{M}_{\tau,s}\varphi +\widehat{M}_n \varphi_{,n}) = \rho_0 \left ( \frac{1}{\rho_0} \int_\Sigma ( \widehat{M}_{\tau,s}\varphi +\widehat{M}_n \varphi_{,n}) \right ). \end{equation} By standard trace embedding and by \eqref{2-page6-par3}, we have \begin{multline} \label{1-page11-par3} |\Psi(\varphi)| \leq \rho_0 \left ( \|\widehat{M}_{\tau,s}\|_{H^{- \frac{3}{2}}(\Sigma)} \|\varphi\|_{H^{\frac{3}{2}}(\Sigma)}+ \|\widehat{M}_n\|_{H^{-\frac{1}{2}}(\Sigma)} \|\varphi_{,n}\|_{H^{\frac{1}{2}}(\Sigma)} \right ) \leq \\ \leq C \|\widehat{M}\|_{H^{-\frac{1}{2}}(\Sigma)} \|\varphi\|_{H_0^{2}(\Omega_1)} \leq \frac{C\eta}{\rho_0^2} \|\varphi\|_{H_0^{2}(\Omega_1)}, \end{multline} where $C$, $C>0$, only depends on $M_0$. Therefore, $\Psi \in H^{-2}(\Omega_1)$ and \begin{equation} \label{2-page11-par3} \|\Psi\|_{H^{-2}(\Omega_1)} \leq \frac{C\eta}{\rho_0^2}. \end{equation} By the well-known Riesz Representation Theorem in Hilbert spaces, we can find $f \in H_0^2(\Omega_1)$ such that $\Psi(\varphi) = \langle\varphi, f\rangle_{H_0^2(\Omega_1)}$ for every $\varphi \in {H_0^2(\Omega_1)}$ and \begin{equation} \label{2BIS-page11-par3} \|\Psi\|_{H^{-2}(\Omega_1)} = \|f\|_{H_0^2(\Omega_1)}.
\end{equation} Let us set \begin{equation} \label{3-page11-par3} f_1 = \frac{f}{\rho_0^2}, \quad F_1= -\nabla f, \quad {\mathcal{F}_1} = \rho_0^2 \nabla^2 f. \end{equation} Then \begin{equation} \label{4-page11-par3} \rho_0\|f_1\|_{L^2(\Omega_1)}+ \|F_1\|_{L^2(\Omega_1; \mathbb{R}^2)} + \rho_0^{-1}\|\mathcal{F}_1\|_{L^2(\Omega_1; \mathbb{M}^2)}\leq \frac{C\eta}{\rho_0^3}. \end{equation} By \eqref{1-page10-par3} \begin{equation} \label{1-page12-par3} \int_{\Omega_1} \mathbb{P} \nabla^2 \widetilde{u} \cdot \nabla^2 \varphi = \int_{R^-} \mathbb{P} \nabla^2 v \cdot \nabla^2 \varphi - \int_{\Omega_1}f_1\varphi + \int_{\Omega_1}F_1 \cdot \nabla \varphi - \int_{\Omega_1} \mathcal{F}_1 \cdot \nabla^2 \varphi, \end{equation} for every $\varphi \in H_0^2(\Omega_1)$. Denoting \begin{equation} \label{2-page12-par3} \widetilde{f}=-f_1, \quad \widetilde{F}=-F_1, \quad \mathcal{ \widetilde{F} } = \left\{ \begin{array}{lr} - \mathcal{F}_1, & \hbox{in } \Omega, \\ \mathbb{P}\nabla^2 v - \mathcal{F}_1, &\hbox{in } R^{-}, \end{array} \right. \end{equation} we obtain \eqref{5-page9-par3}. By \eqref{2-page12-par3}, \eqref{3-page11-par3}, \eqref{eq:3.bound_quantit}, \eqref{2-page11-par3}, \eqref{2BIS-page11-par3}, \eqref{5-page8-par3}, \eqref{2-page6-par3} we obtain \eqref{4-page9-par3}. \end{proof} \begin{theo} [Propagation of smallness in the interior] \label{theo:HPS} Let $\Omega$ be a bounded domain in $\mathbb{R}^2$ satisfying \eqref{eq:M_1} and let $B_{r_0}(x_0)\subset\Omega$ be a fixed disc. Let $r$, $0<r\leq \frac{r_0}{2}$, be fixed and let $G\subset\Omega$ be a connected open set such that ${\rm dist}(G,\partial\Omega)\geq r$ and $B_{\frac{r_0}{2}}(x_0)\subset G$.
Let $u\in H^2_{loc}(\Omega)$ be a weak solution to the equation \begin{equation} \label{1-page13-par3} {\rm div}({\rm div} ( {\mathbb P}\nabla^2 u))=f + {\rm div}F + {\rm div}({\rm div} \mathcal{F}), \quad\hbox{in } \Omega \end{equation} where $\mathbb{P}$, defined by \eqref{3.P}, satisfies \eqref{eq:sym-conditions-C-components}, \eqref{eq:3.bound_quantit}, \eqref{eq:3.convex} and the dichotomy condition in $G$. Let $f$, $F$, $\mathcal{F}$ satisfy \eqref{3.bound_inhom}. Let us assume that \begin{equation} \label{2-page13-par3} \|u\|_{L^2(B_{r_0}(x_0))}\leq\eta, \end{equation} \begin{equation} \label{3-page13-par3} \|u\|_{L^2(\Omega)}\leq E_0, \end{equation} for given $\eta>0$, $E_0>0$. We have \begin{equation} \label{4-page13-par3} \|u\|_{L^2(G)}\leq C(\epsilon+\eta)^\delta(E_0+\epsilon+\eta)^{1-\delta}, \end{equation} where \begin{equation} \label{5-page13-par3} C=C_1\left(\frac{|\Omega|}{r^2}\right)^\frac{1}{2}, \end{equation} \begin{equation} \label{6-page13-par3} \delta\geq\alpha^{\frac{C_2|\Omega|}{r^2}}, \end{equation} with $C_1>0$ and $\alpha$, $0<\alpha<1$, only depending on $\gamma$, $M$ and $\delta_1$, and with $C_2$ only depending on $\gamma$ and $\delta_1$, where $\delta_1=\min_{\overline{G}}{\mathcal{D}}$. \end{theo} \begin{proof} The proof is essentially based on an iterated application of the three sphere inequality, see \cite[Proof of Theorem 5.1]{A-R-R-V} for details. \end{proof} \begin{theo} [Local stability for the Cauchy problem] \label{theo:LSC} Let $u\in H^2(\Omega)$ be a weak solution to the Cauchy problem \eqref{eq:1-page7-par3}--\eqref{eq:5-page7-par3}, where $\mathbb{P}$, defined by \eqref{3.P}, satisfies \eqref{eq:sym-conditions-C-components}, \eqref{eq:3.bound_quantit}, \eqref{eq:3.convex} and the dichotomy condition in the rectangle $R_{\frac{\rho_0}{M_0},\rho_0}(P_0)$, $\Sigma$ satisfies \eqref{1-page6-par3}, $f$, $F$, $\mathcal{F}$ satisfy \eqref{3.bound_inhom}, and $g_1$, $g_2$, $\widehat{M}$ satisfy \eqref{2-page6-par3}.
Assuming the a priori bound \begin{equation} \label{1-page14-par3} \|u\|_{L^2(\Omega)}\leq E_0, \end{equation} then \begin{equation} \label{2-page14-par3} \|u\|_{L^2\left(R_{\frac{\rho_0}{2M_0},\frac{\rho_0}{2}}(P_0)\cap\Omega\right)}\leq C(\epsilon+\eta)^\delta(E_0+\epsilon+\eta)^{1-\delta}, \end{equation} where $C>0$ and $\delta$, $0<\delta<1$, only depend on $\gamma$, $M$, $M_0$, $M_1$ and on $\delta_1=\min_{\overline{\mathcal{O}}}{\mathcal{D}}$, where $\mathcal{O}= R_{\frac{\rho_0}{M_0},\rho_0}(P_0)$. \end{theo} \begin{proof} Representing locally $\Omega$ in a neighborhood of $P_0$ as \begin{equation*} \Omega\cap R_{\frac{\rho_0}{M_0},\rho_0}(P_0)=\{(x_1,x_2)\in R_{\frac{\rho_0}{M_0},\rho_0}(P_0)\ |\ x_2>\psi(x_1)\}, \end{equation*} let \begin{equation*} r_0=\frac{\rho_0}{2(\sqrt{1+M_0^2}+1)}, \end{equation*} \begin{equation*} x_0=\left(0,r_0-\frac{\rho_0}{2}\right). \end{equation*} We have that \begin{equation*} B_{r_0}(x_0)\subset R^-_{\frac{\rho_0}{2M_0},\frac{\rho_0}{2}}(P_0), \end{equation*} so that, by \eqref{5-page8-par3} and \eqref{2-page6-par3}, \begin{equation*} \|\widetilde{u}\|_{L^2(B_{r_0}(x_0))}\leq C\eta. \end{equation*} The thesis easily follows by applying Theorem \ref{theo:HPS} to $\widetilde{u}$ with $\Omega=R_{\frac{\rho_0}{M_0},\rho_0}(P_0)$, $G=R_{\frac{\rho_0}{2M_0},\frac{\rho_0}{2}}(P_0)$, $r=\frac{r_0}{2}$. \end{proof} \section{Carleman estimate for second order elliptic operators\label{Sec4.1}} In this and in the next section we consider $n\geq2$, where $n$ is the space dimension. Moreover, in this section we use a notation for the Euclidean norm and scalar product which differs {}from the standard one used in the other sections. Let \begin{equation} \label{1-1} Pu=\partial_{i}(g^{ij}(x)\partial_{j}u) \end{equation} where $\{{ g^{ij}(x)}\} _{i,j=1}^{n}$ is a symmetric matrix-valued function which satisfies a uniform ellipticity condition and whose entries are Lipschitz continuous functions.
In order to simplify the calculations, in the sequel we shall use some standard notations in Riemannian geometry, but always dropping the corresponding volume element in the definition of the Laplace--Beltrami operator. More precisely, denoting by $g(x)=\{g_{ij}(x)\}_{i,j=1}^{n}$ the inverse of the matrix $\{g^{ij}(x)\}_{i,j=1}^{n}$ we have $g^{-1}(x)=\{g^{ij}(x) \}_{i,j=1}^{n}$ and we use the following notation for a smooth function $v$ and for two vector fields $\xi $ and $\eta$:

i. $\xi \cdot \eta =\sum\limits_{i,j=1}^{n}g_{ij}(x)\xi _{i}\eta _{j}$, \quad $| \xi|^{2}=\sum\limits_{i,j=1}^{n}g_{ij}(x)\xi _{i}\xi _{j}$;

ii. $\nabla v=( \partial_{1}v,\dots,\partial _{n}v) $, \quad $\nabla _{g}v(x)=g^{-1}(x)\nabla v(x)$, \\ $\textrm{div}\,(\xi)=\sum\limits_{i=1}^{n}\partial_{i}\xi _{i}, \quad \Delta_{g}v=\textrm{div}\,(\nabla_{g}v)$;

iii. $(\xi ,\eta )_{n}=\sum\limits_{i=1}^{n}\xi _{i}\eta _{i}$, \quad $ | \xi| _{n}^{2}=\sum\limits_{i=1}^{n}\xi _{i}^{2}$.

\bigskip With this notation the following formulae hold true when $u$, $v$ and $w$ are smooth functions \begin{equation} \label{1-2} Pu=\Delta _{g}u\text{, }\quad \Delta _{g}\left( v^{2}\right) =2v\Delta _{g}v+2\left\vert \nabla _{g}v\right\vert ^{2} \end{equation}% and \begin{equation} \label{2-2}\int_{\mathbb{R}^{n}}v\Delta _{g}wdx=\int_{\mathbb{R}^{n}}w\Delta _{g}vdx=-\int_{\mathbb{R}^{n}}\nabla _{g}v\cdot \nabla _{g}wdx. \end{equation}% We shall also use the following Rellich identity \begin{eqnarray} \label{3-2}2(B\cdot \nabla _{g}v)\Delta _{g}v =\textrm{div}\,\left( 2(B\cdot \nabla _{g}v)\nabla _{g}v-B|\nabla _{g}v| ^{2}\right)+\\+(\textrm{div}\, B) |\nabla _{g}v| ^{2}-2\partial _{i}B^{k}g^{ij}\partial _{j}v\partial _{k}v+B^{k}\partial _{k}g^{ij}\partial _{i}v\partial _{j}v,\nonumber \end{eqnarray}% where $B=(B^{1},...,B^{n})$ is a smooth vector field.
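For the reader's convenience, we check \eqref{3-2} in the Euclidean case $g^{ij}=\delta^{ij}$, where $\nabla_g=\nabla$, $\Delta_g=\Delta$ and the last term vanishes. By the Leibniz rule,
\begin{multline*}
\textrm{div}\,\left( 2(B\cdot \nabla v)\nabla v-B|\nabla v| ^{2}\right)
=2(B\cdot \nabla v)\Delta v+2\partial _{i}B^{k}\,\partial _{k}v\,\partial _{i}v
+2B^{k}\partial^2 _{ik}v\,\partial _{i}v-\\
-(\textrm{div}\, B) |\nabla v| ^{2}-2B^{k}\partial^2 _{ki}v\,\partial _{i}v
=2(B\cdot \nabla v)\Delta v+2\partial _{i}B^{k}\,\partial _{i}v\,\partial _{k}v-(\textrm{div}\, B) |\nabla v| ^{2},
\end{multline*}
since the second order terms cancel; rearranging the terms, this is precisely \eqref{3-2} for constant $g^{ij}=\delta^{ij}$.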
We denote by $w\in C^2(\mathbb{R}^{n}\setminus\{0\})$ a function that we shall choose later on such that $w(x)>0$ and $|\nabla _{g}w|>0$ in $\mathbb{R}^{n}\setminus\{0\}$. Given $f\in C^\infty(\mathbb{R}^{n}\setminus \{0\})$, let us set \begin{equation} \label{10-3}P_{\tau}(f)=w^{-\tau}P(w^{\tau}f ), \end{equation} \begin{equation} \label{50-3} A_w(f)=\frac{w}{|\nabla_{g}w|} \partial_{Y} f+\frac{1}{2}F_{w}^gf, \end{equation} where \begin{equation} \label{20-3} F_{w}^{g}=\frac{w\Delta _{g}w-|\nabla_{g}w|^2}{|\nabla_{g}w|^2}, \end{equation} \begin{equation} \label{30-3}Y=\frac{\nabla_{g}w}{|\nabla_{g}w|}, \end{equation} \begin{equation} \label{40-3}\partial_Y f=\nabla_g f\cdot Y. \end{equation} \bigskip With the notation introduced above we have% \begin{equation} \label{1-3}P_{\tau }(f)=P_{\tau }^{(s)}(f)+P_{\tau }^{(a)}(f), \end{equation}% where $P_{\tau }^{(s)}$ and $P_{\tau }^{(a)}$ are the symmetric and the antisymmetric part of the operator $P_{\tau }$ with respect to the $L^2$ scalar product, respectively. \\More precisely we have \begin{equation} \label{2-3}P_{\tau }^{(s)}(f)=\Delta _{g}f+\tau ^{2}\frac{\left\vert \nabla _{g}w \right\vert ^{2}}{w^2}f \end{equation}% and% \begin{equation} \label{3-3}P_{\tau }^{(a)}(f)=2\tau \frac{\left\vert \nabla _{g}w \right\vert ^{2}}{w^2} A_w(f). \end{equation}% Moreover, let us denote by $S^g_{w}$ the symmetric matrix $S^g_{w}=\{S_{w}^{g,ij}\}_{i,j=1}^{n}$, where \begin{equation} \label{1-4} S_{w}^{g,ij}=\frac{1}{2}\left(\left((\textrm{div}\, B)-F_{w}^g\right) g^{ij}-\partial _{k}B^{j}g^{ki}-\partial _{k}B^{i}g^{kj}+B^{k}\partial _{k}g^{ij}\right), \end{equation} with \begin{equation} \label{2-4} B= \frac{w}{\left\vert \nabla _{g}w \right\vert }Y=\frac{w\nabla _{g}w }{\left\vert \nabla _{g}w \right\vert ^{2}}. \end{equation} We also denote \begin{equation} \label{3-4} \mathcal{M}_{w}^{g}=S_{w}^g g.
\end{equation} Notice that \begin{equation} \label{4-4} \mathcal{M}_{w}^{g}\xi \cdot \eta =\xi \cdot \mathcal{M}_{w}^{g}\eta, \quad \text{for every }\xi,\eta \in \mathbb{R}^{n} \end{equation}% and, letting $\xi _{g}=g^{-1}\xi $, $\eta _{g}=g^{-1}\eta $, \begin{equation} \label{5-4} \mathcal{M}_{w}^{g}\xi _{g}\cdot \eta _{g}=(S_{w}^g\xi ,\eta )_{n},\quad \text{for every }\xi,\eta \in \mathbb{R}^{n}. \end{equation} \bigskip The proof of the following lemma is straightforward. \bigskip \begin{lem} \label{lem:1-4.1}Let $v\in C^2(\mathbb{R}^n\setminus\{0\})$ be a function that satisfies the conditions $v(x)>0$, $|\nabla_gv(x)|>0$ for every $x\in\mathbb{R}^n\setminus\{0\}$. Let $S_{v}^{g}$, $\mathcal{M}_{v}^{g}$, $F_{v}^{g}$ and $B$ be obtained substituting $w$ with $v$ in \eqref{1-4}, \eqref{3-4}, \eqref{20-3} and \eqref{2-4}, respectively. Let $\varphi\in C^2(0,+\infty)$ be such that $\varphi(s)>0$, $\varphi'(s)>0$, for every $s\in(0,+\infty) $. Let us denote \begin{equation} \label{1-5}\Phi(s)=\frac{\varphi(s)}{s\varphi'(s)}. \end{equation} We have \begin{equation} \label{2-5} \mathcal{M}_{v}^{g}\nabla_g v=S_{v}^g\nabla v=0, \end{equation} \begin{equation} \label{3-5} F_{\varphi(v)}^{g}=\Phi(v)F_{v}^{g}-\Phi'(v)v , \end{equation} \begin{equation} \label{4-5} \mathcal{M}_{\varphi(v)}^{g}\xi \cdot \eta = v\Phi'(v)\left(\xi\cdot\eta- \frac{(\nabla_gv\cdot\xi)(\nabla_gv\cdot\eta)}{|\nabla_{g}v|^2}\right)+\Phi(v)\mathcal{M}_{v}^{g} \xi \cdot\eta . \end{equation} \end{lem} \bigskip In the sequel we shall use the following notation \begin{equation} \label{1*-6}\nabla_g^N f=(\nabla_gw\cdot\nabla_g f) \frac{\nabla_g w}{|\nabla_g w|^2}=(\partial_Y f)\,Y, \end{equation} \begin{equation} \label{2*-6}\nabla_g^T f= \nabla_g f-\nabla_g^N f. \end{equation} Notice that $\nabla_g^N f$ and $\nabla_g^T f$ are, respectively, the normal and the tangential component (with respect to the Riemannian metric $\{g_{ij}\}_{i,j=1}^n$) of $\nabla_g f$ to the level surfaces of $w$.
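As a simple illustration of the above notation, let us consider the Euclidean case $g^{ij}=\delta^{ij}$ with $w(x)=|x|_n$. Then $\nabla_g w=\frac{x}{|x|_n}$, $|\nabla_g w|=1$ and $Y=\frac{x}{|x|_n}$, so that $\partial_Y f$ is the radial derivative of $f$, $\nabla_g^N f$ is the radial component of $\nabla f$ and $\nabla_g^T f$ its component tangent to the spheres $\{|x|_n=const\}$; moreover, since $\Delta |x|_n=\frac{n-1}{|x|_n}$, by \eqref{20-3} we have
\begin{equation*}
F_{w}^{g}=\frac{w\Delta w-|\nabla w|^2}{|\nabla w|^2}=(n-1)-1=n-2.
\end{equation*}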
In particular $\nabla_g^N f$ and $\nabla_g^T f$ are invariant with respect to transformations of the type $\widetilde{w}=\varphi(w)$, where $\varphi$ satisfies the hypotheses of Lemma \ref{lem:1-4.1}. We have \bigskip \begin{equation} \label{3*-6}\nabla_g^T f\cdot Y=0,\quad \nabla_g f=\nabla_g^N f+\nabla_g^T f, \end{equation} \begin{equation} \label{4*-6} |\nabla_g f|^2=|\nabla_g^N f|^2 +|\nabla_g^T f|^2=(\partial_Y f)^2+|\nabla_g^T f|^2, \end{equation} \begin{equation} \label{5*-6} \nabla_g^N f\cdot\nabla_g^T f=0. \end{equation} \noindent In addition, observe that by \eqref{4-4} and \eqref{2-5} we have \begin{equation} \label{1-6}\mathcal{M}_w^g\nabla_g f\cdot\nabla_g f=\mathcal{M}_w^g\nabla_g^T f\cdot\nabla_g^T f. \end{equation}% \begin{lem} \label{lem:2-4.1} Let $w\in C^2(\mathbb{R}^n\setminus\{0\})$ be such that $w(x)>0$, $|\nabla_gw(x)|>0$ for every $x\in\mathbb{R}^n\setminus\{0\}$. For every $\tau\neq 0$ we have \begin{multline} \label{2-6} \frac{w^2}{|\nabla_gw|^2}\left(P_\tau(f)\right)^2=\frac{w^2}{|\nabla_gw|^2}\left(P^{(s)}_\tau(f)\right)^2 +4{\tau}^2\left(\partial_Y f\right)^2\left(1+(2\tau)^{-1}F_{w}^{g}\right)+\\ +4\tau\left(\mathcal{M}_{w }^{g}\nabla_g^T f\cdot\nabla_g^T f+\frac{1}{2}F_{w }^{g}|\nabla_g^T f|^2\right)-\\ -2\tau^3\frac{|\nabla_gw|^2}{w^2}F_{w }^{g}\left(1+(2\tau)^{-1}F_{w}^{g}\right)f^2+ 2\tau F_{w }^{g}f P_\tau(f)+ \textrm{div}\,(q), \end{multline} where \begin{eqnarray} \label{2-7} q=\frac{2\tau w}{|\nabla_gw|}\left(2(\partial_Y f)\nabla_g f-|\nabla_g f|^2Y+\tau^2f^2\frac{|\nabla_gw|^2}{w^2}Y\right). \end{eqnarray} \end{lem} \begin{proof} By \eqref{1-3} we have \begin{multline} \label{3-7} \frac{w^2}{|\nabla_gw|^2}\left(P_{\tau }(f)\right)^2=\frac{w^2}{|\nabla_gw|^2}\left(P_{\tau }^{(s)}(f)\right)^2+\\+2\frac{w^2}{|\nabla_gw|^2}P^{(s)}_\tau(f)P^{(a)}_\tau(f)+\frac{w^2}{|\nabla_gw|^2}\left(P_{\tau }^{(a)}(f)\right)^2. \end{multline} Let us consider the second term at the right-hand side of \eqref{3-7}. 
We have \begin{multline} \label{10-7} 2\frac{w^2}{|\nabla_gw|^2}P^{(s)}_\tau(f)P^{(a)}_\tau(f)=4\tau\left(\Delta_g f+\tau^2\frac{|\nabla_gw|^2}{w^2}f\right)A_w(f)=\\ =4\tau\left(\frac{w\nabla_gw\cdot\nabla_gf}{|\nabla_gw|^2}\right)\Delta_g f+2\tau F_{w}^{g}f\Delta_g f+4\tau^3\frac{|\nabla_gw|^2}{w^2}A_w(f)f=\\=4\tau\left(\frac{w\nabla_gw\cdot\nabla_gf}{|\nabla_gw|^2}\right)\Delta_g f+2\tau F_{w}^{g}f\Delta_g f+2\tau^3\textrm{div}\,\left(\frac{\nabla_gw}{w}f^2\right). \end{multline} Now we transform the term $4\tau\left(\frac{w\nabla_gw\cdot\nabla_gf}{|\nabla_gw|^2}\right)\Delta_g f$ by applying the Rellich identity \eqref{3-2} with $B=\frac{w\nabla_gw}{|\nabla_gw|^2}$ and $v=f$. We obtain \begin{multline} \label{1-8} 2\frac{w^2}{|\nabla_gw|^2}P^{(s)}_\tau(f)P^{(a)}_\tau(f)=\\ =4\tau\mathcal{M}^g_w\nabla_g f\cdot\nabla_g f +2\tau F_{w}^{g}|\nabla_g f|^{2}+2\tau F_{w}^{g} f\Delta_g f+\textrm{div}\,(q), \end{multline} where $q$ is given by \eqref{2-7}. Now we transform the third term at the right-hand side of \eqref{1-8} by using the following trivial consequence of \eqref{1-3} \begin{equation} \label{3-8} \Delta_gf=P_\tau(f)-\tau^2\frac{|\nabla_gw|^{2}}{w^2}f-2\tau\frac{|\nabla_gw|^{2}}{w^2}A_w(f) \end{equation} and we obtain \begin{multline} \label{1-9} 2\tau F_{w}^{g} f\Delta_g f= 2\tau F_{w}^{g}fP_\tau(f)-\\- 2\tau^3\frac{|\nabla_gw|^2}{w^2}F_{w }^{g}\left(1+\frac{1}{\tau}F_{w}^{g}\right)f^2- 4\tau^2\frac{|\nabla_gw|}{w}F_{w}^{g}f\partial_Yf. \end{multline} Now, expanding the square in the third term at the right-hand side of \eqref{3-7}, we have \begin{multline} \label{2-9} \frac{w^2}{|\nabla_gw|^2}\left(P_{\tau }^{(a)}(f)\right)^2=\\ =4\tau^2(\partial_Yf)^2+\tau^2\frac{|\nabla_gw|^2}{w^2}\left(F_{w }^{g}\right)^2f^2 +4\tau^2\frac{|\nabla_gw|}{w}F_{w }^{g}f\partial_Yf, \end{multline} so that, by \eqref{4*-6}, \eqref{1-6}, \eqref{3-7}, \eqref{1-8}, \eqref{1-9} and \eqref{2-9} we obtain identity \eqref{2-6}.
\end{proof} \bigskip In the sequel of this section we assume that the matrix $\{{ g^{ij}(x)}\} _{i,j=1}^{n}$ satisfies the following conditions \begin{equation} \label{1-10} \lambda|\xi|_n^2\leq\sum_{i,j=1}^ng^{ij}(x)\xi_i\xi_j\leq\lambda^{-1}|\xi|_n^2,\quad \text { for every }x\in\mathbb{R}^n, \xi\in\mathbb{R}^n \end{equation} and \begin{equation} \label{2-10} \sum_{i,j=1}^n|g^{ij}(x)-g^{ij}(y)|\leq\Lambda|x-y|_n, \quad\text { for every }x\in\mathbb{R}^n, y\in\mathbb{R}^n, \end{equation} where $\lambda\in(0,1]$ and $\Lambda>0$. \bigskip Now we introduce some additional notation that we shall use in the sequel. Let $\Gamma=\{{ \gamma_{ij}}\} _{i,j=1}^{n}$ be a matrix that we shall choose later on. We assume that \begin{equation} \label{4-10} m_\ast|x|_n^2\leq\left(\Gamma x,x\right)_n\leq m^{\ast}|x|_n^2, \quad\text { for every }x\in\mathbb{R}^n, \end{equation} where $m_\ast$ and $m^{\ast}$ are the minimum and the maximum eigenvalue of $\Gamma$ respectively, and $m_\ast>0$. Let us denote \begin{equation} \label{10-10} \sigma(x)=\left(\left(\Gamma x,x\right)_n\right)^{1/2} \end{equation} and \begin{equation} \label{*-11} S^{(0)}=S_{\sigma}^{g(0)}, \end{equation} where we recall that \begin{equation} \label{1-11} S_\sigma^{g(0),ij}=\frac{1}{2}\left(\left((\textrm{div}\, B_0)-F_{\sigma}^{g(0)}\right) g^{ij}(0)-\partial _{k}B_0^{j}g^{ki}(0)-\partial _{k}B_0^{i}g^{kj}(0)\right) \end{equation} and \begin{equation} \label{10-11} B_0=\{B_0^i\}_{i=1}^n=\left\{\frac{\sigma(x)g^{ij}(0)\partial_j\sigma(x)}{g^{lm}(0)\partial_l\sigma(x) \partial_m\sigma(x)}\right\}_{i=1}^n, \end{equation} and \bigskip \begin{equation} \label{20-11} F_{\sigma}^{g(0)}=\frac{\sigma(x)g^{ij}(0)\partial^2_{ij}\sigma(x)-g^{ij}(0)\partial_i\sigma(x)\partial_j\sigma(x)} {g^{ij}(0)\partial_i\sigma(x)\partial_j\sigma(x)}.
\end{equation} \bigskip Moreover, for any fixed $\xi\in\mathbb{R}^n$, $(S^{(0)}\xi,\xi)_n$ is a homogeneous function of degree $0$ with respect to the $x$ variable, hence the following number is well defined \begin{equation} \label{2-11} \omega_0=\sup\left\{-(S^{(0)}\xi,\xi)_n\ |\ g^{ij}(0)\xi_i\xi_j=1,\ g^{ij}(0)\partial_i\sigma(x)\xi_j=0,\ x\in\mathbb{R}^n\setminus\{0\}\right\}. \end{equation} \bigskip We observe that $\omega_0$ is a nonnegative number. More precisely we have the following proposition. \begin{prop} \label{propRem} Let $Q=\sqrt{g(0)}\Gamma^{-1}\sqrt{g(0)}$, where $\sqrt{g(0)}$ is the positive square root of the matrix $g(0)$. Let $\varrho_{\ast}$ and $\varrho^{\ast}$ be the minimum and the maximum eigenvalues of the matrix $Q$ respectively. Then the following equality holds true \begin{equation} \label{1-11bis} \omega_0=\frac{\varrho^{\ast}}{\varrho_{\ast}}-1. \end{equation} \end{prop} \begin{proof} In order to prove \eqref{1-11bis}, let us denote \begin{equation} \label{2-11bis} K=\Gamma g^{-1}(0)\Gamma \end{equation} and let us notice that, with the conditions \begin{equation} \label{3-11bis} (g^{-1}(0)\xi,\xi)_n=1, \quad (g^{-1}(0)\nabla\sigma(x),\xi)_n=0 \end{equation} and with the normalization condition \begin{equation} \label{1-11ter} (Kx,x)_n=1, \end{equation} we have \begin{equation} \label{2-11ter} -(S^{(0)} \xi, \xi)_n=(\Gamma x, x)_n\left((K\Gamma^{-1}K x, x)_n + (g^{-1}(0)\Gamma g^{-1}(0) \xi,\xi)_n\right)-2.
\end{equation} Moreover, by introducing the new variables \begin{equation} \label{3-11ter} \eta=\left(\sqrt{g(0)}\right)^{-1}\xi, \quad y=\left(\sqrt{g(0)}\right)^{-1}\Gamma x, \end{equation} conditions \eqref{3-11bis} and \eqref{1-11ter} become respectively \begin{equation} \label{4-11ter} |\eta|_n^2=1, \quad (y,\eta)_n=0, \end{equation} and \begin{equation} \label{5-11ter} |y|_n^2=1 \end{equation} so that expression \eqref{2-11ter} is equal to \begin{equation} \label{6-11ter} H(y,\eta):=(Qy,y)_n\left((Q^{-1}y,y)_n + (Q^{-1}\eta,\eta)_n\right)-2. \end{equation} Thus we have \begin{equation} \label{10-11ter} \omega_0=\sup\left\{H(y,\eta)\ |\ |y|_n=1,|\eta|_n=1, (y,\eta)_n=0\right\}. \end{equation} \bigskip Now let $z_{\ast}$ and $z^{\ast}$ be two orthogonal unit eigenvectors of $Q$ such that $Qz_{\ast}=\varrho_{\ast}z_{\ast}$ and $Qz^{\ast}=\varrho^{\ast}z^{\ast}$. We have \begin{equation} \label{1-11quater} H(z^{\ast},z_{\ast})=\frac{\varrho^{\ast}}{\varrho_{\ast}}-1, \end{equation} hence \begin{equation} \label{2-11quater} \omega_0\geq\frac{\varrho^{\ast}}{\varrho_{\ast}}-1. \end{equation} \bigskip In order to complete the proof of \eqref{1-11bis} we need to prove that \begin{equation} \label{3-11quater} \omega_0\leq\frac{\varrho^{\ast}}{\varrho_{\ast}}-1. \end{equation} To this aim we recall the following Kantorovich inequality \cite{Ka}, \cite{Mi}. Let $\mathcal{A}$ be a $m\times m$ positive definite symmetric real matrix and let $\alpha_{\ast}$, $\alpha^{\ast}$ be the minimum and the maximum eigenvalues of $\mathcal{A}$ respectively, then for every $X\in\mathbb{R}^m$ we have \begin{equation} \label{4-11quater} (\mathcal{A}X,X)_m(\mathcal{A}^{-1}X,X)_m\leq \frac{1}{4}\left(\sqrt{\frac{\alpha^*}{\alpha_*}}+\sqrt{\frac{\alpha_*}{\alpha^*}}\right)^2|X|_m^4.
\end{equation} \bigskip Now let $m=2n$, $X=(y,\eta)^t$ and \begin{equation} \label{10-11pentium} \mathcal{A}=\left( \begin{array}{cc} Q & 0 \\ 0 & Q% \end{array}% \right), \end{equation} we have, for every $y,\eta\in \mathbb{R}^n$ such that $|y|_n=|\eta|_n=1$, $(y,\eta)_n=0$, \begin{equation} \label{1-11pentium} H(y,\eta) =(\mathcal{A}X,X)_{2n}(\mathcal{A}^{-1}X,X)_{2n}-(Q\eta,\eta)_n(\mathcal{A}^{-1}X,X)_{2n}-2. \end{equation} By Schwarz inequality we have \begin{multline} \label{2-11pentium}\qquad (Q\eta,\eta)_n(\mathcal{A}^{-1}X,X)_{2n} = (Q\eta,\eta)_n(Q^{-1} y,y)_n+\\+(Q\eta,\eta)_n(Q^{-1} \eta,\eta)_n\geq\frac{\varrho_{\ast}}{\varrho^{\ast}}+|\eta|_n^2=\frac{\varrho_{\ast}}{\varrho^{\ast}}+1. \end{multline} On the other hand, the first term on the right-hand side of \eqref{1-11pentium} can be estimated {}from above by inequality \eqref{4-11quater}. By the obtained inequality and by \eqref{2-11pentium} we get \eqref{3-11quater}, which completes the proof of \eqref{1-11bis}. \end{proof} \bigskip In the next Lemma and in the sequel we shall use the following notation when dealing with a matrix $A=\{a_{ij}\}_{i,j=1}^n$ \begin{equation} \label{10N-11} \left\vert A\right\vert=\left(\sum_{i,j=1}^na_{ij}^2\right)^{1/2}.
\end{equation} \begin{lem} \label{lem:3-4.1} There exists a constant $C,C\geq1,$ depending only on $\lambda,\Lambda,m_\ast$ and $m^{\ast}$ such that for every $x\in\mathbb{R}^n\setminus\{0\}$, $0<\sigma(x)\leq1$, the following inequalities hold true \begin{equation} \label{1-12} C^{-1}\leq\left\vert \nabla _{g}\sigma \right\vert\leq C,\text { } \left\vert F _{\sigma}^g \right\vert\leq C, \text { } \left\vert S ^{(0)} \right\vert\leq C, \end{equation} \begin{equation} \label{2-12} \left\vert F _{\sigma}^g -F _{\sigma}^{g(0)}\right\vert\leq C\sigma, \text { } \left\vert S_\sigma ^{g}- S ^{(0)} \right\vert\leq C\sigma, \end{equation} \begin{equation} \label{3-12} \mathcal{M}_\sigma^g\nabla_g^T f\cdot\nabla_g^T f\geq-(\omega_0+C\sigma)\left\vert \nabla _{g}^Tf\right\vert^2. \end{equation} \end{lem} \begin{proof} The proof of \eqref{1-12} and \eqref{2-12} is straightforward. We prove inequality \eqref{3-12}. Denote \begin{equation} \label{4-12}\zeta=g\nabla_g^T f. \end{equation} We have by \eqref{1-10}, \eqref{2-10}, \eqref{2-12} and \eqref{4-12} \begin{multline} \label{5-12} \mathcal{M}_\sigma^g\nabla_g^T f\cdot\nabla_g^T f=(S_{\sigma}^g\zeta,\zeta)_n\geq\\\geq(S^{(0)}\zeta,\zeta)_n-\left\vert ((S_{\sigma}^g-S^{(0)})\zeta,\zeta)_n\right\vert\geq(S^{(0)}\zeta,\zeta)_n-C\sigma\left\vert \nabla _{g}^Tf\right\vert^2, \end{multline} where $C$ depends only on $\lambda,\Lambda,m_\ast$ and $m^{\ast}$. \bigskip Now, let us consider the term $(S^{(0)}\zeta,\zeta)_n$ on the right-hand side of \eqref{5-12}. Denoting \begin{equation} \label{10-13}\tilde{\zeta }=\zeta+g(0)\left(g^{-1}(x)-g^{-1}(0)\right)\zeta, \end{equation} we have $g^{-1}(0)\tilde{\zeta }=g^{-1}(x)\zeta=\nabla_g^Tf$, hence \begin{equation} \label{2-13}g^{ij}(0)\tilde{\zeta }_j\partial_i\sigma=\nabla_g^Tf\cdot\nabla_g\sigma=0.
\end{equation} In addition we have \begin{equation} \label{3-13}|\zeta-\tilde\zeta|_n\leq C|\nabla_g^Tf|\sigma \end{equation} and \begin{equation} \label{4-13}g^{ij}(0)\tilde{\zeta }_j\tilde\zeta_i\leq\left(1+C\sigma\right)|\nabla_g^Tf|^2, \end{equation} where $C$ depends only on $\lambda,\Lambda,m_\ast$ and $m^{\ast}$. \bigskip By \eqref{2-11}, \eqref{1-12}, \eqref{2-13} and \eqref{3-13}, we obtain, for every $x\in\mathbb{R}^n\setminus\{0\}$ such that $0<\sigma(x)\leq1$, \begin{multline} \label{20-13} (S^{(0)}\zeta,\zeta)_n\geq(S^{(0)}\tilde\zeta,\tilde\zeta)_n-\\-\left\vert (S^{(0)}(\zeta-\tilde\zeta),\zeta-\tilde\zeta)_n \right\vert - 2\left\vert(S^{(0)}(\zeta-\tilde\zeta),\tilde\zeta)_n \right\vert\geq\\\geq-\omega_0(g^{-1}(0)\tilde\zeta,\tilde\zeta)_n-C|\zeta-\tilde\zeta|_n^2- 2C|\zeta-\tilde\zeta|_n|\tilde\zeta|_n\geq\\\geq-(\omega_0+C\sigma)|\nabla_g^Tf|^2, \end{multline} where $C$ depends only on $\lambda,\Lambda,m_\ast$ and $m^{\ast}$. By the latter inequality and by \eqref{5-12} we obtain \eqref{3-12}. \end{proof} \bigskip Let $r$ be a given positive number. In the sequel we shall denote by $B_r^{\sigma}$ the set $\left\{x\in\mathbb{R}^n|\sigma(x)<r\right\}$. In addition, in order to simplify the notation, we shall denote $\int_{\mathbb{R}^n}(\cdot)\,dx$ simply by $\int$ and, instead of writing \textquotedblleft$f$ is a function that belongs to $C_0^\infty\left(\mathbb{R}^n\setminus\{0\}\right)$ and $f$ is such that $\text{supp}(f)\subset B_r^{\sigma}\setminus\{0\}$\textquotedblright, we shall write simply \textquotedblleft$f\in C_0^\infty\left( B_r^{\sigma}\setminus\{0\}\right)$\textquotedblright. \bigskip \begin{theo} \label{theo:4-4.1} Let $\beta$ be a number such that $\beta>\omega_0$, let \begin{equation} \label{1-15} \varphi(s)=e^{-s^{-\beta}} \end{equation} and let $w(x)=\varphi\left(\sigma(x)\right)$.
There exist constants $C$, $\tau_1$ and $r_0$ ($C\geq 1$, $\tau_1\geq 1$, $0<r_0\leq 1$) depending only on $\lambda,\Lambda,m_\ast,m^{\ast}$ and $\beta$ such that for every $u\in C_0^\infty\left( B_{r_0}^{\sigma}\setminus\{0\}\right)$ and for every $\tau\geq\tau_1$ the following inequality holds true \begin{multline} \label{2-15} \tau\int\sigma^{\beta}w^{-2\tau}|\nabla_gu|^2+\tau^3\int\sigma^{-\beta-2}w^{-2\tau}u^2\leq C\int\sigma^{2\beta+2}w^{-2\tau}\left(\Delta_gu\right)^2. \end{multline} \end{theo} \begin{proof} Let $w(x)=\varphi\left(\sigma(x)\right)$, where $\sigma(x)=\left((\Gamma x,x)_n\right)^{1/2}$. Let us notice that $\varphi$ satisfies the hypotheses of Lemma \ref{lem:1-4.1} and that \begin{equation} \label{1-16}\Phi(s)=\frac{s^{\beta}}{\beta}. \end{equation} Let $u\in C_0^\infty\left( B_{1}^{\sigma}\setminus\{0\}\right)$ and $f=w^{-\tau}u$. By \eqref{4-5} and by \eqref{3-12} we have \begin{equation} \label{2-16} \mathcal{M}_w^g\nabla_g^T f\cdot\nabla_g^T f\geq{\sigma}^{\beta}\left(1-\frac{\omega_0}{\beta}-C\sigma\right)\left\vert \nabla _{g}^Tf\right\vert^2, \end{equation} where $C$ depends only on $\lambda,\Lambda,m_\ast,m^{\ast}$ and $\beta$.\\ Now, denoting \begin{equation} \label{10-16} \psi_0={\sigma}^{\beta}\left(-1+\frac{1}{\beta}F_{\sigma}^{g(0)}\right), \end{equation} by \eqref{3-5} we have \begin{equation} \label{20-16} F_{w}^{g}=\psi_0+\frac{{\sigma}^{\beta}}{\beta}\left(F_{\sigma}^{g}-F_{\sigma}^{g(0)}\right), \end{equation} hence by \eqref{1-12} and \eqref{2-12} of Lemma \ref{lem:3-4.1} we have, for every $x\in B_1^{\sigma}\setminus\{0\}$, \begin{equation} \label{2*-16} \left\vert F_{w}^{g}\right\vert \leq C{\sigma}^{\beta}, \quad\quad\quad \left\vert F_{w}^{g}-\psi_0\right\vert\leq C{\sigma}^{\beta+1}, \end{equation} where $C$, $C\geq1$, depends only on $\lambda,\Lambda,m_\ast,m^{\ast}$ and $\beta$.
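\bigskip Let us also record here, for later use, the elementary computation behind estimate \eqref{1-18} below: since $w=\varphi(\sigma)$ and $\varphi'(s)=\beta s^{-\beta-1}\varphi(s)$, we have
\begin{equation*}
\frac{\nabla_gw}{w}=\frac{\varphi'(\sigma)}{\varphi(\sigma)}\,\nabla_g\sigma=\beta\sigma^{-\beta-1}\nabla_g\sigma,
\qquad
\frac{|\nabla_gw|^2}{w^2}=\beta^2\sigma^{-2\beta-2}\left\vert\nabla_g\sigma\right\vert^2,
\end{equation*}
so that the two-sided bound on $\left\vert\nabla_g\sigma\right\vert$ in \eqref{1-12} immediately yields \eqref{1-18}.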
\bigskip Let $\psi_1$ be a function that we shall choose later on; by \eqref{2-3} we have \begin{multline} \label{1-17} \qquad \frac{w^2}{|\nabla_gw|^2}\left(P_{\tau }^{(s)}(f)\right)^2=\\=\frac{w^2}{|\nabla_gw|^2}\left(P_{\tau }^{(s)}(f)-\tau\frac{|\nabla_gw|^2}{w^2}\psi_1f+\tau\frac{|\nabla_gw|^2}{w^2}\psi_1f\right)^2 \geq\\ \geq 2\tau\psi_1f\left(P_{\tau }^{(s)}(f)-\tau\frac{|\nabla_gw|^2}{w^2}\psi_1f\right)=\\= 2\tau^3\left(\left(1-\frac{\psi_1}{\tau} \right)\psi_1\frac{|\nabla_gw|^2}{w^2}+\frac{1}{2\tau^2}\Delta_g\psi_1\right)f^2 -2\tau\psi_1|\nabla_gf|^2+\textrm{div}\,(q_1), \end{multline} where \begin{equation} \label{10-17}q_1=\tau\left(2\psi_1f\nabla_gf-f^2\nabla_g\psi_1\right). \end{equation} By inequalities \eqref{2-16} and \eqref{1-17}, by \eqref{4*-6} and by Lemma \ref{lem:2-4.1} we obtain \begin{multline} \label{2-17} \frac{w^2}{|\nabla_gw|^2}\left(P_\tau(f)\right)^2\geq 2\tau^3a_1f^2+\\+4\tau a_2|\nabla_g^Tf|^2+4{\tau}^2a_3\left(\partial_Y f\right)^2 + 2\tau F_{w }^{g}f P_{\tau}(f)+ \textrm{div}\,(q_2), \end{multline} where \begin{equation} \label{30-18} a_1=\frac{|\nabla_gw|^2}{w^2}\left(\left(\psi_1-F_w^g\right)-\frac{1}{\tau}\left(\frac{1}{2}\left(F_w^g\right)^2+ \psi_1^2\right)\right)+\frac{1}{2\tau^2}\Delta_g\psi_1, \end{equation} \begin{equation} \label{20-18} a_2={\sigma}^{\beta}\left(1-\frac{\omega_0}{\beta}-C\sigma\right)+\frac{1}{2}\left(F_w^g-\psi_1\right), \end{equation} \begin{equation} \label{10-18}a_3=1+\frac{1}{2\tau}(F_w^g-\psi_1), \end{equation} \begin{equation} \label{40-18} q_2=q+q_1.
\end{equation} \bigskip Now we choose \begin{equation} \label{50-18} \psi_1=\psi_0+\frac{\varepsilon\sigma^\beta}{\beta}, \end{equation} where $0<\varepsilon\leq \min\{1,\beta-\omega_0\}$.\\ Let us notice that for every $x\in B_1^{\sigma}\setminus\{0\}$, \begin{equation} \label{1-18}C^{-1}\sigma^{-2\beta-2}\leq\frac{|\nabla_gw|^2}{w^2}\leq C\sigma^{-2\beta-2}, \end{equation} \begin{equation} \label{2-18}F_w^g-\psi_1\geq-\frac{\sigma^\beta}{\beta}\left(\varepsilon+C\sigma\right), \end{equation} \begin{equation} \label{3-18}\psi_1-F_w^g\geq\frac{\sigma^\beta}{\beta}\left(\varepsilon-C\sigma\right), \end{equation} \begin{equation} \label{1-19}|\psi_1|\leq C\sigma^{\beta},\qquad |\Delta_g\psi_1|\leq C\sigma^{\beta-2}, \end{equation} where $C$, $C\geq1$, depends only on $\lambda,\Lambda,m_\ast,m^{\ast}$ and $\beta$, with \eqref{2-18}--\eqref{1-19} following from \eqref{10-16}--\eqref{2*-16} and \eqref{50-18}. From \eqref{1-18}--\eqref{1-19} we have that, for every $x\in B_1^{\sigma}\setminus\{0\}$ and for every $\tau\geq1$, \begin{equation} \label{10-19} a_1\geq C_{\ast}^{-1}\sigma^{-\beta-2}\left(\varepsilon-C_0\sigma-\frac{C_1}{\tau}\sigma^\beta\right), \end{equation} where $C_{\ast}, C_0, C_1$ ($C_{\ast}\geq1, C_0\geq1, C_1\geq1$) depend only on $\lambda,\Lambda,m_\ast,m^{\ast}$ and $\beta$. Therefore, if $0<\sigma(x)\leq\frac{\varepsilon}{2C_0}$ and $\tau\geq\frac{4C_1}{\varepsilon}$, then we have \begin{equation} \label{2-19} a_1 \geq\frac{\varepsilon}{4}C_{\ast}^{-1}\sigma^{-\beta-2}. \end{equation} Concerning $a_2$, we have by \eqref{2-18} \begin{equation} \label{20-19} a_2\geq{\sigma}^{\beta}\left(\frac{1}{2}\left(1-\frac{\omega_0}{\beta}\right)-C_2\sigma\right), \end{equation} where $C_2$, $C_2\geq1$, depends only on $\lambda,\Lambda,m_\ast,m^{\ast}$ and $\beta$.
Therefore, if $0<\sigma(x)\leq\dfrac{\beta-\omega_0}{4\beta C_2}$, then we have \begin{equation} \label{3-19} a_2\geq\frac{1}{4}{\sigma}^{\beta}\left(1-\frac{\omega_0}{\beta}\right). \end{equation} Concerning $a_3$, by \eqref{1-19} and \eqref{2*-16} we have that there exists $C_3$, $C_3\geq1$, depending only on $\lambda,\Lambda,m_\ast,m^{\ast}$ and $\beta$ such that if $\tau\geq C_3$ and $0<\sigma(x)\leq 1$ then \begin{equation} \label{4-19} a_3\geq\frac{1}{2}. \end{equation} \bigskip Now set $\tau_0=\max\{\frac{4C_1}{\varepsilon},C_3\}$ and $r_0=\min\{\frac{\varepsilon}{2C_0},\frac{\beta-\omega_0}{4\beta C_2}\}$; by \eqref{4*-6}, \eqref{2-17}, \eqref{2-19}, \eqref{3-19} and \eqref{4-19} we have \begin{multline} \label{1-20} \qquad\frac{w^2}{|\nabla_gw|^2}\left(P_\tau(f)\right)^2\geq \tau^3\sigma^{-\beta-2}\frac{\varepsilon }{2}C_{\ast}^{-1}f^2+\\ +\tau{\sigma}^{\beta}\left(1-\frac{\omega_0}{\beta}\right)|\nabla_gf|^2+ 2\tau F_{w }^{g}f P_{\tau}(f)+ \textrm{div}\,(q_2), \end{multline} for every $x\in B_{r_0}^{\sigma}\setminus\{0\}$ and $\tau\geq\tau_0$.\\ By Young's inequality, by the first inequality in \eqref{2*-16} and by \eqref{2-18} we have \begin{equation} \label{*1-20} |2\tau F_{w }^{g}f P_{\tau}(f)|\leq\frac{1}{2}\frac{w^2}{|\nabla_gw|^2}\left(P_\tau(f)\right)^2 +C_{4}\tau^2\sigma^{-2}f^2, \end{equation} where $C_4$, $C_4\geq1$, depends only on $\lambda,\Lambda,m_\ast,m^{\ast}$ and $\beta$.\\ By \eqref{1-20} and \eqref{*1-20} we have \begin{multline} \label{2-20} \qquad\frac{1}{2}\frac{w^2}{|\nabla_gw|^2}\left(P_\tau(f)\right)^2\geq \tau^3\sigma^{-\beta-2}\frac{\varepsilon }{4}C_{\ast}^{-1}f^2+\\ +\tau{\sigma}^{\beta}\left(1-\frac{\omega_0}{\beta}\right)|\nabla_gf|^2+ \textrm{div}\,(q_2), \end{multline} for every $x\in B_{r_0}^{\sigma}\setminus\{0\}$ and every $\tau\geq\tau_1:=\max\{\tau_0,\frac{4C_{\ast}C_4}{\varepsilon}\}$. \bigskip Finally, we choose $\varepsilon=\min\{1,\beta-\omega_0\}$.
Recalling that $f=w^{-\tau}u$, and integrating both sides of \eqref{2-20} over $B_{r_0}^{\sigma}\setminus\{0\}$, we obtain \eqref{2-15}. \end{proof} \begin{rem} \label{rem:nondiv} It is straightforward to check that estimate \eqref{2-15} remains valid for operators in non-divergence form $Pu=g^{ij}\partial^2_{ij}u$. Of course, the values of the constants, and in particular of $\tau_1$, might be different. \end{rem} \section{Carleman estimate for the product of two second order elliptic operators}\label{Sec4.2} In this section and in the sequel we return to the standard notation, that is, we denote by $|\cdot|$ and by $\cdot$ the Euclidean norm and scalar product respectively. Let $\{g_1^{ij}(x)\}_{i,j=1}^n$ and $\{g_2^{ij}(x)\}_{i,j=1}^n$ be two symmetric, real-valued matrix functions which satisfy conditions \eqref{1-10}, \eqref{2-10} and let us assume that \begin{equation} \label{1-22} \sum_{i,j=1}^n\|\nabla ^2 g_1^{ij}\|_{L^\infty(\mathbb{R}^n)}\leq \Lambda_1, \quad\sum_{i,j=1}^n\|\nabla ^2 g_2^{ij}\|_{L^\infty(\mathbb{R}^n)}\leq \Lambda_1, \end{equation} with $\Lambda_1>0$. Let us denote by $L_1$, $L_2$ and $\mathcal{L}$ the operators \begin{equation} \label{10-22} L_1(u)=\sum_{i,j=1}^n g_1^{ij}(x)\partial_{ij}^2 u,\quad L_2(u)=\sum_{i,j=1}^n g_2^{ij}(x)\partial_{ij}^2 u, \end{equation} \begin{equation} \label{1*-22} \mathcal{L}(u)=L_2(L_1 u). \end{equation} In the sequel we shall need the following standard proposition, which we prove for the reader's convenience. \begin{prop} \label{prop:5-4.2} Let $L_1$, $L_2$ and $\mathcal{L}$ be the operators defined above.
Given $a\in C^1(\mathbb{R}^n\setminus\{0\})$ and $u\in C^\infty_0(\mathbb{R}^n\setminus\{0\})$, the following inequalities hold true: \begin{equation} \label{2-22} \int a^2|\nabla ^2 u|^2\leq C\left(\int a^2|L_k u|^2+\int (a^2 + |\nabla a|^2)|\nabla u|^2\right),\quad k=1,2, \end{equation} \begin{equation} \label{1-23} \int a^2|\nabla ^3 u|^2\leq C\left(\int a^2|\mathcal{L} u||\nabla ^2 u|+\int (a^2 + |\nabla a|^2)|\nabla^2 u|^2\right), \end{equation} where $C$ only depends on $\lambda$ and $\Lambda$. \end{prop} \begin{proof} To simplify the notation, let us omit the index $k$ in $L_k$. For a fixed $l\in\{1,...,n\}$ we have \begin{multline} \int Lu\partial^2_{ll}u a^2=-\int\partial_l(a^2g^{ij}\partial^2_{ij} u)\partial_l u=\\ =-\int a^2 g^{ij}\partial^3_{ijl} u \partial_l u -2\int a \partial_l a g^{ij}\partial^2_{ij} u \partial_l u- \int (\partial_l g^{ij})\partial^2_{ij} u \partial_l u a^2=\\ =\int a^2 g^{ij}\partial^2_{il} u \partial^2_{jl} u +\int \partial_j(a^2g^{ij})\partial^2_{il} u \partial_l u -2\int a \partial_l a g^{ij}\partial^2_{ij} u \partial_l u- \int (\partial_l g^{ij})\partial^2_{ij} u \partial_l u a^2\geq\\ \geq\lambda\int a^2|\nabla\partial_l u|^2-C\int(|a|+|\nabla a|)|a||\nabla u||\nabla^2 u|, \end{multline} where $C$ only depends on $\lambda$ and $\Lambda$. Now, summing up the above inequalities with respect to $l$ and applying the inequality $2xy\leq x^2+y^2$, we get \eqref{2-22}. Now we prove \eqref{1-23}. First we observe that (see \cite{G-T}), multiplying both sides of the second equality in \eqref{10-22} by $a^2v$ and integrating by parts, we easily obtain \begin{equation} \label{2-23} \int a^2|\nabla v|^2\leq C\left(\int a^2|L_2 v||v|+\int (a^2 + |\nabla a|^2)v^2\right), \end{equation} where $C$ only depends on $\lambda$ and $\Lambda$. Let us apply \eqref{2-23} to $v=L_1 u$.
Noticing that, for a fixed $l\in\{1,...,n\}$, we have \begin{equation} \label{10-24} |L_1(\partial_l u)|\leq |\partial_l (L_1 u)|+ C|\nabla^2 u|, \end{equation} where $C$ only depends on $\Lambda$, we obtain \begin{equation} \label{1-24} \int a^2|L_1(\partial_l u)|^2\leq C\left(\int a^2|\mathcal{L} u||\nabla^2 u|+\int (a^2 + |\nabla a|^2)|\nabla^2 u|^2\right), \end{equation} where $C$ only depends on $\lambda$ and $\Lambda$. Finally, by applying inequality \eqref{2-22} to estimate from below the integral on the left hand side of \eqref{1-24}, and summing up with respect to $l$, we get \eqref{1-23}. \end{proof} In order to prove the next theorem we need to use some transformation formulae for the operator $\mathcal{L}$, which we recall now. Let $\Psi:\mathbb{R}^n\rightarrow \mathbb{R}^n$ be a $C^4$ diffeomorphism. We have \begin{equation} \label{2-24} (\mathcal{L} u)(\Psi^{-1}(y))=(\widetilde{\mathcal{L}} U)(y)+(Q U)(y), \end{equation} where $U(y)=u(\Psi^{-1}(y))$, $Q$ is a third order operator, $\widetilde{\mathcal{L}}=\widetilde{L}_2\widetilde{L}_1$, $\widetilde{L}_k=\sum_{i,j=1}^n \widetilde{g}_k^{ij}(y)\partial^2_{ij}$, $k=1,2$, and $\widetilde{g}_k^{-1}(\Psi(x))=\frac{\partial\Psi}{\partial x}(x)g_k^{-1}(x)\left(\frac{\partial\Psi}{\partial x}(x)\right)^t$, namely \begin{equation} \label{10-25} \widetilde{g}_k^{ij}(\Psi(x))=\sum_{r,s=1}^n g_k^{rs}(x)\frac{\partial\Psi_i}{\partial x_r}(x)\frac{\partial\Psi_j}{\partial x_s}(x), \quad i,j=1,...,n. \end{equation} We can find a linear map $\Psi$ such that $\widetilde{g}_1^{-1}(0)$ is the identity matrix and $\widetilde{g}_2^{-1}(0)$ is a diagonal matrix. More precisely, let $R_1$ be the matrix of a rotation such that $R_1g_1^{-1}(0)R_1^t=\text{diag}\{\nu_1,...,\nu_n\}$, where $\nu_i$, $i=1,...,n$, are the eigenvalues of $g_1^{-1}(0)$, and let $H=\text{diag}\{\frac{1}{\sqrt{\nu_1}},...,\frac{1}{\sqrt{\nu_n}}\}$. We have that $HR_1g_1^{-1}(0)R_1^tH^t$ is equal to the identity matrix.
Now let $R_2$ be the matrix of a rotation such that $\widetilde{g}_2^{-1}(0)=R_2HR_1g_2^{-1}(0)R_1^tH^tR_2^t$ has a diagonal form. We have that the desired map is $\Psi(x)=R_2HR_1x$. In addition, notice that if $\nu_*$, $\nu^*$ are the minimum and maximum eigenvalues of $g_1^{-1}(0)$ respectively and $\mu_*$, $\mu^*$ are the minimum and maximum eigenvalues of $g_2^{-1}(0)$ respectively, then \begin{equation} \label{1-26} \frac{\mu_*}{\nu^*}|x|^2\leq \widetilde{g}_2^{-1}(0)x \cdot x \leq \frac{\mu^*}{\nu_*}|x|^2, \quad \hbox{for every }x\in\mathbb{R}^n. \end{equation} \begin{theo} \label{theo:6-4.2} Let $\mathcal{L}$ be the operator defined by \eqref{1*-22}. Let $\nu_*$ and $\nu^*$ ($\mu_*$ and $\mu^*$) be the minimum and the maximum eigenvalues of $g_1^{-1}(0)$ (respectively $g_2^{-1}(0)$). Then there exists a symmetric matrix $\Gamma_0$ satisfying \begin{equation} \label{2-26} \lambda^2|x|^2\leq \sigma_0^2(x):=\Gamma_0x \cdot x \leq \lambda^{-2}|x|^2, \end{equation} and such that if $\beta>\sqrt{\frac{\mu^*\nu^*}{\mu_*\nu_*}}-1$ and \begin{equation} \label{3-26} w_0(x)=e^{-\left(\sigma_0(x)\right)^{-\beta}} \end{equation} then the following inequality holds true: \begin{equation} \label{1-27} \sum_{k=0}^3 \tau^{6-2k}\int \sigma_0^{-\beta-2+k(2\beta+2)}w_0^{-2\tau}|\nabla^k u|^2dx\leq C\int\sigma_0^{5\beta+6}w_0^{-2\tau}|\mathcal{L} u|^2dx, \end{equation} for every $u\in C^\infty_0(B_{r_1}^{\sigma_0}\setminus\{0\})$ and for every $\tau\geq\overline{\tau}$, where $r_1$, $0<r_1<1$, $C$ and $\overline{\tau}$ only depend on $\lambda$, $\Lambda$ and $\Lambda_1$. \end{theo} \begin{proof} By the comments preceding the statement of the theorem, without loss of generality we can assume that $g_1^{ij}(0)=\delta^{ij}$ and $g_2^{-1}(0)$ is of diagonal form, say $g_2^{-1}(0)=\text{diag}\{\mu_1,\mu_2,...,\mu_n\}$, where $0<\mu_1\leq \mu_2\leq...\leq\mu_n$.
We denote by $\Gamma=\{\gamma_{ij}\}_{i,j=1}^n$ a symmetric matrix that we shall choose later on, and by $m_*$ and $m^*$ the minimum and the maximum eigenvalues of $\Gamma$ respectively, with $m_*>0$. Let us set $\sigma(x)=(\Gamma x\cdot x)^{1/2}$. We denote by $S_k^{(0)}$, $k=1,2$, the matrix $S_\sigma^{g_k(0)}$ introduced in \eqref{*-11}. We denote by $\omega_0^k$ the numbers (compare with \eqref{2-11}) \begin{equation} \label{1-28} \omega_0^k=\sup\left\{-(S_k^{(0)}\xi)\cdot\xi\ |\ g_k^{ij}(0)\xi_i\xi_j=1,g_k^{ij}(0)\partial_i\sigma(x)\xi_j=0, x\in\mathbb{R}^n\setminus\{0\}\right\}. \end{equation} Let $\beta$ be a positive number such that $\beta>\max\{\omega_0^1,\omega_0^2\}$ and let $V\in C^\infty_0(B^\sigma_{r_0}\setminus\{0\})$, where $r_0$ has been defined in Theorem \ref{theo:4-4.1}. Since \begin{equation} \label{10-29} |\Delta_{g_k}V|\leq|L_kV|+C|\nabla V|, \quad k=1,2, \end{equation} where $C$ only depends on $\Lambda$, by \eqref{2-15} we have that there exists $\tau_2$, only depending on $\lambda$, $\Lambda$, $m_*$, $m^*$ and $\beta$ such that for $k=1,2,$ and for every $\tau\geq \tau_2$ \begin{equation} \label{1-29} \tau\int\sigma^\beta w^{-2\tau}|\nabla V|^2+\tau^3\int\sigma^{-\beta-2}w^{-2\tau} V^2\leq C \int\sigma^{2\beta+2}w^{-2\tau} |L_kV|^2. \end{equation} Now we iterate inequality \eqref{1-29}. First we notice that, by a standard density property, inequality \eqref{1-29} is valid for every $V\in H^2_0(B^\sigma_{r_0}\setminus\{0\})$. Let $u$ be an arbitrary function belonging to $C^\infty_0(B^\sigma_{r_0}\setminus\{0\})$ and let us set $v=L_1 u$. By applying inequality \eqref{1-29} to the function $V=\sigma^{\frac{3}{2}\beta+2}v$, we get \begin{multline} \label{1-30} \tau^3\int\sigma^{2\beta+2}w^{-2\tau}v^2=\tau^3\int\sigma^{-\beta-2}w^{-2\tau} (\sigma^{\frac{3}{2}\beta+2}v)^2\leq\\ \leq C \int\sigma^{2\beta+2}w^{-2\tau} |L_2(\sigma^{\frac{3}{2}\beta+2}v)|^2, \end{multline} for every $\tau\geq\tau_2$. 
Now observe that \begin{equation} \label{2-30} |L_2(\sigma^{\frac{3}{2}\beta+2}v)|\leq \sigma^{\frac{3}{2}\beta+2}|L_2v|+C\sigma^{\frac{3}{2}\beta+1}|\nabla v|+C\sigma^{\frac{3}{2}\beta}|v|, \end{equation} where $C$ only depends on $\lambda$, $\Lambda$, $m_*$, $m^*$ and $\beta$. By using \eqref{2-30} to estimate {}from above the right hand side of \eqref{1-30}, we have that there exists $\tau_3\geq\tau_2$ such that, for every $\tau\geq\tau_3$, \begin{equation} \label{3-30} \tau^3\int\sigma^{2\beta+2}w^{-2\tau}v^2\leq C \int\sigma^{5\beta+6}w^{-2\tau} |L_2v|^2+C \int\sigma^{5\beta+4}w^{-2\tau} |\nabla v|^2, \end{equation} where $C$ and $\tau_3$ only depend on $\lambda$, $\Lambda$, $m_*$, $m^*$ and $\beta$. Now we estimate {}from above the second term in the right hand side of \eqref{3-30}. To this aim we apply inequality \eqref{1-29} to the function $V=\sigma^{2\beta+2}v$ and we have \begin{equation} \label{1-31} \tau\int\sigma^{\beta}w^{-2\tau}|\nabla(\sigma^{2\beta+2}v)|^2\leq C \int\sigma^{2\beta+2}w^{-2\tau} |L_2(\sigma^{2\beta+2}v)|^2, \end{equation} for every $\tau\geq\tau_2$. Taking into account that \begin{equation} \label{10-31} |L_2(\sigma^{2\beta+2}v)|\leq \sigma^{2\beta+2}|L_2v|+C\sigma^{2\beta+1}|\nabla v|+C\sigma^{2\beta}|v|, \end{equation} and \begin{equation} \label{20-31} |\nabla(\sigma^{2\beta+2}v)|^2\geq \frac{1}{2}\sigma^{4\beta+4}|\nabla v|^2-C\sigma^{4\beta+2}v^2, \end{equation} where $C$ only depends on $\lambda$, $\Lambda$, $m_*$, $m^*$ and $\beta$, we have, by \eqref{1-31}, \begin{equation} \label{2-31} \tau\int\sigma^{5\beta+4}w^{-2\tau}|\nabla v|^2\leq C \int\sigma^{6\beta+6}w^{-2\tau} |L_2v|^2+C\tau\int\sigma^{5\beta+2}w^{-2\tau}v^2, \end{equation} for every $\tau\geq\tau_2$, where $C$ only depends on $\lambda$, $\Lambda$, $m_*$, $m^*$ and $\beta$. 
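\bigskip Let us briefly indicate where \eqref{2-30} and \eqref{10-31} come from: for any exponent $m\geq2$, the Leibniz rule for the non-divergence operator $L_2$ gives
\begin{equation*}
L_2(\sigma^{m}v)=\sigma^{m}L_2v+2\sum_{i,j=1}^ng_2^{ij}\,\partial_i(\sigma^{m})\,\partial_jv+v\,L_2(\sigma^{m}),
\end{equation*}
and, $\sigma$ being smooth in $\mathbb{R}^n\setminus\{0\}$ and positively homogeneous of degree one, $|\nabla(\sigma^{m})|\leq C\sigma^{m-1}$ and $|L_2(\sigma^{m})|\leq C\sigma^{m-2}$; the choices $m=\frac{3}{2}\beta+2$ and $m=2\beta+2$ give \eqref{2-30} and \eqref{10-31} respectively.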
Now we use \eqref{2-31} to estimate {}from above the second term on the right hand side of \eqref{3-30} and we have that there exists $\tau_4\geq\tau_3$ such that \begin{equation} \label{1-32} \int\sigma^{2\beta+2}w^{-2\tau}v^2\leq \frac{C}{\tau^3} \int\sigma^{5\beta+6}w^{-2\tau} |L_2v|^2, \end{equation} for every $\tau\geq\tau_4$, where $C$ and $\tau_4$ only depend on $\lambda$, $\Lambda$, $m_*$, $m^*$ and $\beta$. Recalling that $v=L_1u$ and by using \eqref{1-29} for $V=u$ and $k=1$, \eqref{1-32} yields \begin{equation} \label{2-32} \tau^6\int\sigma^{-\beta-2}w^{-2\tau}u^2+ \tau^4\int\sigma^{\beta}w^{-2\tau}|\nabla u|^2\leq C \int\sigma^{5\beta+6}w^{-2\tau} |L_2L_1u|^2, \end{equation} for every $\tau\geq\tau_4$, where $C$ only depends on $\lambda$, $\Lambda$, $m_*$, $m^*$ and $\beta$. Now we prove that \begin{equation} \label{3-32} \tau^2\int\sigma^{3\beta+2}w^{-2\tau}|\nabla^2u|^2+\int\sigma^{5\beta+4}w^{-2\tau}|\nabla^3 u|^2\leq C \int\sigma^{5\beta+6}w^{-2\tau} |L_2L_1u|^2, \end{equation} for every $\tau\geq\tau_4$, where $C$ only depends on $\lambda$, $\Lambda$, $m_*$, $m^*$ and $\beta$. Concerning the term with the second order derivatives on the left hand side of \eqref{3-32}, we can estimate it by using \eqref{2-22} with $a=(\sigma^{3\beta+2}w^{-2\tau})^{\frac{1}{2}}$ and $k=1$, obtaining \begin{equation} \label{1-33} \int\sigma^{3\beta+2}w^{-2\tau}|\nabla^2u|^2\leq C \int\sigma^{3\beta+2}w^{-2\tau} |L_1u|^2+C \tau^2\int\sigma^{\beta}w^{-2\tau} |\nabla u|^2, \end{equation} where $C$ only depends on $\lambda$, $\Lambda$, $m_*$, $m^*$ and $\beta$. By using \eqref{1-29} for $V=u$ and $k=1$ to estimate {}from above the second integral on the right hand side of \eqref{1-33} we get \begin{equation} \label{2-33} \int\sigma^{3\beta+2}w^{-2\tau}|\nabla^2u|^2\leq C\tau \int\sigma^{2\beta+2}w^{-2\tau} |L_1u|^2, \end{equation} for every $\tau\geq\tau_2$, where $C$ and $\tau_2$ only depend on $\lambda$, $\Lambda$, $m_*$, $m^*$ and $\beta$. 
Now, by \eqref{1-32} with $v=L_1u$ and by \eqref{2-33}, we have, for every $\tau\geq\tau_4$, \begin{equation} \label{1-34} \tau^2\int\sigma^{3\beta+2}w^{-2\tau}|\nabla^2u|^2\leq C \int\sigma^{5\beta+6}w^{-2\tau} |L_2L_1u|^2, \end{equation} where $C$ only depends on $\lambda$, $\Lambda$, $m_*$, $m^*$ and $\beta$. Now we estimate {}from above the term with the third order derivatives on the left hand side of \eqref{3-32}. By applying \eqref{1-23} with $a=(\sigma^{5\beta+4}w^{-2\tau})^{\frac{1}{2}}$, we have \begin{equation} \label{2-34} \int\sigma^{5\beta+4}w^{-2\tau}|\nabla^3u|^2\leq C \int\sigma^{5\beta+4}w^{-2\tau} |L_2L_1u||\nabla^2u|+C\tau^2\int\sigma^{3\beta+2}w^{-2\tau} |\nabla^2u|^2, \end{equation} where $C$ only depends on $\lambda$, $\Lambda$, $m_*$, $m^*$ and $\beta$. Noticing that \begin{multline} \label{10-34} \sigma^{5\beta+4}|L_2L_1u||\nabla^2u|=\left(\sigma^{\frac{3}{2}\beta+1}|\nabla^2u|\right) \left(\sigma^{\frac{7}{2}\beta+3}|L_2L_1u|\right)\leq\\ \leq\frac{1}{2}\left(\sigma^{3\beta+2}|\nabla^2u|^2+\sigma^{7\beta+6}|L_2L_1u|^2\right), \end{multline} by \eqref{1-34} and \eqref{2-34} we obtain the desired inequality \eqref{3-32}. By \eqref{2-32} and \eqref{3-32} we have \begin{equation} \label{1-35} \sum_{k=0}^3\tau^{6-2k}\int\sigma^{-\beta-2+k(2\beta+2)}w^{-2\tau}|\nabla^ku|^2\leq C\int\sigma^{5\beta+6}w^{-2\tau}|L_2L_1u|^2, \end{equation} for every $\tau\geq\tau_4$, where $\tau_4$ and $C$ only depend on $\lambda$, $\Lambda$, $m_*$, $m^*$ and $\beta$, for every $u\in C^\infty_0(B^\sigma_{r_0}\setminus\{0\})$. Now we choose $\Gamma=\Gamma_0:=\text{diag}\{\frac{1}{\sqrt \mu_1},...,\frac{1}{\sqrt \mu_n}\}$, $\sigma(x)=\sigma_0(x):=\left(\Gamma_0x \cdot x\right)^{1/2}$, $w(x)=w_0(x)$, where $w_0(x)$ is defined by \eqref{3-26}. By Proposition \ref{propRem} we have $\omega_0^1=\omega_0^2=\sqrt{\frac{\mu_n}{\mu_1}}-1$, hence estimate \eqref{1-35} holds for $\beta>\sqrt{\frac{\mu_n}{\mu_1}}-1$. Coming back to the old variables we obtain \eqref{1-27}. 
\end{proof} \begin{theo} \label{theo:7-4.2} Let $\mathcal{L}$ be the operator defined by \eqref{1*-22}. Let $\nu_*$, $\nu^*$, $\mu_*$, $\mu^*$ be as defined in Theorem \ref{theo:6-4.2}. Let us assume that $u\in H^4(B_R)$ satisfies the inequality \begin{equation} \label{1-39} |\mathcal{L}u|\leq N\sum_{k=0}^3 R^{-4+k}|\nabla^k u|, \quad \hbox{in } B_R, \end{equation} where $N$ and $R$ are positive numbers. Let $\beta>\sqrt{\frac{\mu^*\nu^*}{\mu_*\nu_*}}-1$. There exist positive constants $s_1\in(0,1)$ and $C\geq 1$, $C$ and $s_1$ only depending on $\lambda$, $\Lambda$, $\Lambda_1$ and $N$ such that, for every $\rho_1\in(0,s_1R)$ and for every $r$, $\rho$ satisfying $r<\rho<\frac{\rho_1\lambda^2}{2}$, \begin{multline} \label{2-39} \sum_{k=0}^3 \rho^{2k}\int_{B_\rho}|\nabla^k u|^2\leq C\max\left\{1,\left(\frac{\rho}{R}\right)^{-(5\beta-2)}\right\} e^{C\left((\lambda^{-1}\rho)^{-\beta}-\left(\frac{\rho_1\lambda}{2}\right)^{-\beta}\right)R^\beta}\cdot\\ \cdot\left(\left(\frac{r}{R}\right)^{5\beta-2}\sum_{k=0}^3 r^{2k}\int_{B_{r}}|\nabla^k u|^2\right)^{\vartheta_0}\cdot \left(\left(\frac{\rho_1}{R}\right)^{5\beta-2}\sum_{k=0}^3 \rho_1^{2k}\int_{B_{\rho_1}}|\nabla^k u|^2\right)^{1-\vartheta_0} , \end{multline} where \begin{equation} \label{3-39} \vartheta_0=\frac{(\lambda^{-1}\rho)^{-\beta}-\left(\frac{\lambda\rho_1}{2}\right)^{-\beta}} {\left(\frac{\lambda r}{2}\right)^{-\beta}-\left(\frac{\lambda\rho_1}{2}\right)^{-\beta}}. \end{equation} \end{theo} \begin{proof} First we observe that, denoting $\widetilde{g}_k^{-1}(x)=g_k^{-1}(Rx)$, $\widetilde{L}_k=\widetilde{g}_k^{ij}(x)\partial^2_{ij}$, $k=1,2$, $\widetilde{\mathcal{L}}=\widetilde{L}_2\widetilde{L}_1$, $\widetilde{u}(x)=u(Rx)$, $x\in B_1$, inequality \eqref{1-39} implies \begin{equation} \label{1-40} |\widetilde{\mathcal{L}}\widetilde{u}|\leq N\sum_{k=0}^3|\nabla^k \widetilde{u}|, \quad\hbox{in }B_1. \end{equation} For simplicity of notation we shall omit the symbol $\tilde{}$ . 
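\bigskip In this regard, notice that the rescaling leading to \eqref{1-40} is elementary: since $\nabla^k\widetilde{u}(x)=R^k(\nabla^ku)(Rx)$, we have $\widetilde{L}_k\widetilde{u}(x)=R^2(L_ku)(Rx)$, $k=1,2$, hence
\begin{equation*}
\widetilde{\mathcal{L}}\,\widetilde{u}(x)=R^4(\mathcal{L}u)(Rx),
\end{equation*}
and \eqref{1-39} gives $|\widetilde{\mathcal{L}}\widetilde{u}(x)|\leq N\sum_{k=0}^3R^{-4+k}R^{4}|(\nabla^ku)(Rx)|=N\sum_{k=0}^3|\nabla^k\widetilde{u}(x)|$, that is \eqref{1-40}.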
Let us introduce the following notation \begin{equation} \label{2-40} J(\rho)= \sum_{k=0}^3\rho^{2k}\int_{B_\rho^{\sigma_0}}|\nabla^k u|^2, \end{equation} where, we recall, $B_\rho^{\sigma_0}=\{x\in\mathbb{R}^n\ |\ \sigma_0(x)<\rho\}$ and $\sigma_0$ has been defined in Theorem \ref{theo:6-4.2}. Notice that \eqref{2-26} gives $B_{\lambda r}\subset B_r^{\sigma_0}\subset B_{\frac{r}{\lambda}}$, for every $r>0$. In particular inequality \eqref{1-40} is satisfied in $B_\lambda^{\sigma_0}$. Set $R_1=\min\{r_1,\lambda\}$, where $r_1$ has been introduced in Theorem \ref{theo:6-4.2}. Let $\rho_1\in(0,R_1]$ and $r\in \left(0,\frac{\rho_1}{2}\right)$. Let $\eta\in C^4_0(\mathbb{R})$ be such that $0\leq\eta\leq 1$, $\eta\equiv 1$ in $\left(r,\frac{\rho_1}{2}\right)$, $\eta\equiv 0$ in $\left(0, \frac{r}{2}\right)\cup(\rho_1,R_1)$, $\left|\frac{d^k}{dt^k}\eta\right|\leq \frac{C}{r^k}$ in $[\frac{r}{2},r]$, $\left|\frac{d^k}{dt^k}\eta\right|\leq \frac{C}{\rho_1^k}$ in $\left[\frac{\rho_1}{2},\rho_1\right]$ for $k=0,1,...,4$, where $C$ is an absolute constant. In addition, let $\xi(x)=\eta(\sigma_0(x))$. By a standard density theorem, inequality \eqref{1-27} holds for the function $\xi(x)u(x)$. Denote \begin{equation} \label{10-47} h_\tau(t)=t^{5\beta-2}e^{\frac{2\tau}{t^\beta}},\quad t\in(0,1). \end{equation} By standard calculations, one can derive that there exist $\overline{\tau}_1\geq\overline{\tau}$, $C$, $s_0\in(0,R_1)$, only depending on $\lambda$, $\Lambda$, $\Lambda_1$, $\beta$ and $N$, such that if $\rho_1\leq s_0$, $r<\rho<\frac{\rho_1}{2}$ and $\tau\geq \overline{\tau}_1$ then \begin{equation} \label{20-47} h_\tau(\rho)J(\rho)\leq C h_\tau\left(\frac{r}{2}\right)J(r)+C h_\tau\left(\frac{\rho_1}{2}\right)J(\rho_1).
\end{equation} Hence \begin{equation} \label{1-47} J(\rho)\leq C \left(\left(\frac{r/2}{\rho}\right)^{5\beta-2} e^{2\tau\left(-\frac{1}{\rho^\beta}+\frac{1}{(r/2)^\beta}\right)}J(r)+ \left(\frac{\rho_1/2}{\rho}\right)^{5\beta-2} e^{2\tau\left(-\frac{1}{\rho^\beta}+\frac{1}{(\rho_1/2)^\beta}\right)}J(\rho_1)\right), \end{equation} for every $\tau\geq\overline{\tau}_1$. Let us denote \begin{equation} \label{10-50} \widetilde{\vartheta}_0=\frac{\rho^{-\beta}-\left(\frac{\rho_1}{2}\right)^{-\beta}} {\left(\frac{ r}{2}\right)^{-\beta}-\left(\frac{\rho_1}{2}\right)^{-\beta}}, \end{equation} \begin{equation} \label{20-50} \alpha_0=\frac{1}{2}\frac{\log\left(\left(\frac{\rho_1}{r}\right)^{5\beta-2}\frac{J(\rho_1)}{J(r)}\right)} {\left(\frac{r}{2}\right)^{-\beta}-\left(\frac{\rho_1}{2}\right)^{-\beta}}. \end{equation} If $\alpha_0\geq \overline{\tau}_1$ then we choose $\tau=\alpha_0$ in \eqref{1-47}, obtaining \begin{equation} \label{1-51} J(\rho)\leq \frac{C}{\rho^{5\beta-2}}\left(r^{5\beta-2}J(r)\right)^{\widetilde{\vartheta}_0} \left(\rho_1^{5\beta-2}J(\rho_1)\right)^{1-\widetilde{\vartheta}_0}, \end{equation} where $C$ only depends on $\lambda$, $\Lambda$, $\Lambda_1$, $N$ and $\beta$. If $\alpha_0<\overline{\tau}_1$ then we have trivially \begin{multline} \label{2-51} J(\rho)\leq J(\rho_1)=\left(J(\rho_1)\right)^{\widetilde{\vartheta}_0}\left(J(\rho_1)\right)^{1-\widetilde{\vartheta}_0}\leq\\ \leq\frac{e^{2\overline{\tau}_1\left(\rho^{-\beta}-\left(\frac{\rho_1}{2}\right)^{-\beta}\right)}}{\rho_1^{5\beta-2}} \left(r^{5\beta-2}J(r)\right)^{\widetilde{\vartheta}_0} \left(\rho_1^{5\beta-2}J(\rho_1)\right)^{1-\widetilde{\vartheta}_0}. \end{multline} By \eqref{1-51} and \eqref{2-51} and scaling back the variables we get \eqref{2-39}. \end{proof} \begin{cor}[Unique continuation property] \label{cor:8-4.2} Let $\mathcal{L}$ be the same operator as in Theorem \ref{theo:7-4.2} and let $\nu_*$, $\nu^*$, $\mu_*$, $\mu^*$ be as defined in Theorem \ref{theo:6-4.2}.
Let us assume that $u\in H^4(B_R)$ satisfies the inequality \begin{equation} \label{1-53} |\mathcal{L}u|\leq N\sum_{k=0}^3 R^{-4+k}|\nabla^k u|,\quad\hbox{in }B_R, \end{equation} where $N$ and $R$ are positive numbers. Assume that \begin{equation} \label{2-53} \int_{B_r} u^2=O\left(e^{-\frac{C_0}{r^\kappa}}\right), \quad \hbox{as } r\rightarrow 0, \end{equation} where $C_0>0$ and $\kappa>\sqrt{\frac{\mu^*\nu^*}{\mu_*\nu_*}}-1$. Then we have \begin{equation} \label{3-53} u\equiv 0 \quad \hbox{in } B_R. \end{equation} \end{cor} \begin{proof} Let us fix $\rho_1\in(0,s_1R)$ and $\rho\in\left(r,\frac{\lambda^2}{2}\rho_1\right)$, where $s_1$ has been defined in Theorem \ref{theo:7-4.2}. Let \begin{equation} \label{4-53} \sqrt{\frac{\mu^*\nu^*}{\mu_*\nu_*}}-1<\beta<\kappa. \end{equation} By \eqref{2-39} and by the interpolation inequality \begin{equation} \label{10-54} \|u\|_{H^3(B_r)}\leq C \|u\|_{L^2(B_r)}^{\frac{1}{4}}\|u\|_{H^4(B_r)}^{\frac{3}{4}}, \end{equation} where $C>0$ is an absolute constant, we have \begin{equation} \label{1-54} \|u\|^2_{H^3(B_\rho)}\leq C\left(\left(\frac{r}{R}\right)^{5\beta-2}\|u\|_{L^2(B_r)}^{\frac{1}{2}}\right)^{\vartheta_0}, \end{equation} where $\vartheta_0$ is given by \eqref{3-39} and $C>0$ only depends on $\lambda$, $\Lambda$, $\Lambda_1$, $N$, $\beta$, $\rho$, $\rho_1$, $R$ and $\|u\|_{H^4(B_R)}$. By \eqref{2-53} and \eqref{4-53}, passing to the limit as $r\rightarrow 0$ in \eqref{1-54}, we obtain $u\equiv 0$ in $B_\rho$. The conclusion then follows by iteration.
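\bigskip Let us also make explicit the role of condition \eqref{4-53} in the limit as $r\rightarrow0$: by \eqref{3-39}, $\vartheta_0$ behaves like $c\,r^{\beta}$ as $r\rightarrow0$, with $c>0$ depending only on $\lambda$, $\beta$, $\rho$ and $\rho_1$, while \eqref{2-53} gives $\|u\|_{L^2(B_r)}^{1/2}=O\left(e^{-\frac{C_0}{4r^{\kappa}}}\right)$. Hence, up to a multiplicative constant,
\begin{equation*}
\left(\left(\frac{r}{R}\right)^{5\beta-2}\|u\|_{L^2(B_r)}^{\frac{1}{2}}\right)^{\vartheta_0}\leq
\exp\left(\vartheta_0\left((5\beta-2)\log\frac{r}{R}-\frac{C_0}{4r^{\kappa}}\right)\right)\rightarrow0,
\end{equation*}
since $\vartheta_0\,r^{-\kappa}\sim c\,r^{\beta-\kappa}\rightarrow+\infty$ and the logarithmic term is of lower order; it is here that the strict inequality $\beta<\kappa$ in \eqref{4-53} is used.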
\end{proof} \section{Three sphere inequalities for the plate operator\label{Sec4.3}} In this section we specialize the results of Section \ref{Sec4.2}, and in particular the three sphere inequality proved in Theorem \ref{theo:7-4.2}, to the plate equation \begin{equation} \label{1-56} {\mathcal{L}}u:=\partial_{ij}^2 (C_{ijkl}\partial_{kl}^2 u)=0, \quad \hbox{in } B_R, \end{equation} where $\{C_{ijkl}(x)\}_{i,j,k,l=1}^{2}$ is a fourth order tensor that satisfies the hypotheses \eqref{eq:sym-conditions-C-components}, \eqref{eq:3.bound_quantit}, \eqref{eq:3.convex} for $\Omega=B_R$ and the \emph{dichotomy condition} in $B_R$. In the following, without loss of generality, we assume $R=1$. In order to apply Theorem \ref{theo:6-4.2} we need to write the operator ${\mathcal{L}}$ in the following form \begin{equation} \label{2-57} {\mathcal{L}}=L_2 L_1 + \widetilde{Q}, \end{equation} where $L_1$ and $L_2$ are second order operators which satisfy a uniform ellipticity condition and whose coefficients belong to $C^{1,1}(B_1)$, and $\widetilde{Q}$ is a third order operator with bounded coefficients. In the sequel (Lemma \ref{lem:8-4.3}) we shall prove that \eqref{2-57} holds true under some additional assumptions on the tensor $\{C_{ijkl}(x)\}_{i,j,k,l=1}^2$. Let us denote \begin{equation} \label{2-58} p(x;\partial) u = \sum_{h=0}^4 a_{4-h}(x)\partial_1^h \partial_2^{4-h}u, \quad \hbox{for every } u\in H^4(B_1), \end{equation} where the coefficients $a_i(x)$, $i=0,...,4$, have been defined in \eqref{3.coeff6}, \eqref{3.coeffsmall}. By \eqref{3.coeff6} we have \begin{equation} \label{3-58} {\mathcal{L}}u=p(x;\partial) u + Qu, \quad \hbox{for every } u\in H^4(B_1), \end{equation} where $Q$ is a third order operator with bounded coefficients which satisfies the inequality \begin{equation} \label{4-58} |Qu| \leq cM \left ( |\nabla^3 u |+ |\nabla^2 u| \right), \quad \hbox{for every } u\in H^4(B_1), \end{equation} and $c$ is an absolute constant.
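The splitting \eqref{3-58} simply comes from the Leibniz rule: for $u\in H^4(B_1)$,
\begin{equation*}
\partial^2_{ij}\left(C_{ijkl}\partial^2_{kl}u\right)=C_{ijkl}\,\partial^4_{ijkl}u+2\left(\partial_iC_{ijkl}\right)\partial^3_{jkl}u+\left(\partial^2_{ij}C_{ijkl}\right)\partial^2_{kl}u
\end{equation*}
(with summation over repeated indices), so that $Qu$ collects the terms containing derivatives of the coefficients, and \eqref{4-58} follows from the bounds on the first and second derivatives of $C_{ijkl}$ in terms of $M$.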
In addition we denote \begin{equation} \label{5-58} p(x; \xi) = \sum_{h=0}^4 a_{4-h}(x) \xi_1^h \xi_2^{4-h}, \quad x \in \overline{B}_1, \ \xi \in \mathbb{R}^2, \end{equation} \begin{equation} \label{6-58} \widetilde{p}(x; t):= p(x; (t,1))=\sum_{h=0}^4 a_{4-h}(x)t^h, \quad x \in \overline{B}_1, \ t \in \mathbb{R}. \end{equation} Notice that by \eqref{eq:3.bound_quantit} we have \begin{equation} \label{1-59} p(x; \xi) \geq \gamma |\xi|^4, \quad x \in \overline{B}_1, \ \xi \in \mathbb{R}^2, \end{equation} \begin{equation} \label{2-59} \widetilde{p}(x; t) \geq \gamma (t^2+1)^2, \quad x \in \overline{B}_1, \ t \in \mathbb{R}. \end{equation} Now, for any fixed $x \in \overline{B}_1$, let $z_k(x)=\alpha_k(x)+i\beta_k(x)$, $\overline{z}_k(x)=\alpha_k(x)-i\beta_k(x)$ ($k=1,2$) be the complex solutions to the algebraic equation $\widetilde{p}(x;z)=0$. Here, $\alpha_k$ and $\beta_k$ are real-valued functions and $\beta_k(x) >0$, $k=1,2$, for every $x \in \overline{B}_1$. We have \begin{equation} \label{3-59} p(x; \xi) = p_2(x;\xi)p_1(x;\xi), \quad \hbox{for every } x \in \overline{B}_1, \ \xi \in \mathbb{R}^2, \end{equation} where \begin{equation} \label{4-59} p_k(x; \xi) = g_k^{ij}(x)\xi_i \xi_j, \quad k=1,2, \ x \in \overline{B}_1, \ \xi \in \mathbb{R}^2, \end{equation} \begin{multline} \label{5-59} g_k^{11}(x)= \sqrt{a_0(x)}, \ \ g_k^{12}(x)=g_k^{21}(x)=-\alpha_k(x) \sqrt{a_0(x)},\\ g_k^{22}(x)=\sqrt{a_0(x)} ( \alpha_k^2(x)+ \beta_k^2(x)), \quad k=1,2, \ \ x \in \overline{B}_1. \end{multline} Since in the sequel we need some basic properties of polynomials, we recall them here for the polynomial $\widetilde{p}(x; z)$, referring the reader to \cite[Chapter 5]{Wa} for an extended treatment. 
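As a simple illustration of the factorization \eqref{3-59}--\eqref{5-59} (an elementary example which is not needed in the sequel), consider the case in which ${\mathcal{L}}$ reduces to the biharmonic operator $\Delta^2$, so that $a_0=a_4=1$, $a_2=2$ and $a_1=a_3=0$. Then
$$\widetilde{p}(x;t)=t^4+2t^2+1=(t^2+1)^2,$$
whose roots are $z_1=z_2=i$, that is $\alpha_1=\alpha_2=0$ and $\beta_1=\beta_2=1$. By \eqref{5-59} we get $g_k^{11}=g_k^{22}=1$ and $g_k^{12}=0$, hence
$$p_1(x;\xi)=p_2(x;\xi)=|\xi|^2, \qquad p(x;\xi)=p_2(x;\xi)p_1(x;\xi)=|\xi|^4.$$
Moreover $\alpha_1=\alpha_2$ and $\beta_1=\beta_2$, so that the discriminant ${\mathcal{D}}(x)$ introduced below vanishes identically; this corresponds to the second alternative of the dichotomy condition.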
For any fixed $x \in \overline{B}_1$ we denote by $ {\mathcal{D}}(x)$ the absolute value of the discriminant of the polynomial $\widetilde{p}(x;z)$, that is \begin{equation} \label{1-60} {\mathcal{D}}(x)=a_0^6\left ( (z_1-z_2)(z_1-\overline{z}_1)(z_1-\overline{z}_2)(z_2-\overline{z}_1)(z_2-\overline{z}_2)(\overline{z}_1-\overline{z}_2) \right )^2, \end{equation} where $a_0=a_0(x)$ and $z_k=z_k(x)=\alpha_k(x)+i\beta_k(x)$, $k=1,2$. An elementary calculation yields \begin{equation} \label{2-60} {\mathcal{D}}(x)=16 a_0^6 \beta_1^2 \beta_2^2 \left [ (\alpha_1-\alpha_2)^2+ (\beta_1+\beta_2)^2 \right ]^2 \left [ (\alpha_1-\alpha_2)^2+ (\beta_1-\beta_2)^2 \right ]^2. \end{equation} In terms of the coefficients $a_h=a_h(x)$, $h=0,1,...,4$, it is also known that \begin{equation} \label{3-60} {\mathcal{D}}(x)= \frac{1}{a_0} |\det S(x)|, \end{equation} where $S(x)$ is the $7\times 7$ matrix defined by \eqref {3. S(x)}. Furthermore, let us denote by $\Psi$ the map of $\mathbb{R}^4$ into $\mathbb{R}^4$ defined by $\Psi(t_1,t_2,w_1,w_2)= \{\Psi_k(t_1,t_2,w_1,w_2)\}_{k=1}^4$, where \begin{equation} \label{5-61} \left\{ \begin{array}{l} \Psi_1(t_1,t_2,w_1,w_2)=t_1+t_2, \vspace{0.12em}\\ \Psi_2(t_1,t_2,w_1,w_2)= t_1^2+t_2^2+4t_1t_2+w_1+w_2, \vspace{0.12em}\\ \Psi_3(t_1,t_2,w_1,w_2)= t_1(t_2^2 +w_2)+t_2(t_1^2+w_1), \vspace{0.12em}\\ \Psi_4(t_1,t_2,w_1,w_2)=(t_1^2+w_1)(t_2^2+w_2). \end{array} \right. \end{equation} Notice that \begin{equation} \label{6-61} a_1=-2a_0\Psi_1(\alpha_1, \alpha_2, \beta_1^2, \beta_2^2), \end{equation} \begin{equation} \label{7-61} a_2=a_0\Psi_2(\alpha_1, \alpha_2, \beta_1^2, \beta_2^2), \end{equation} \begin{equation} \label{8-61} a_3=-2a_0\Psi_3(\alpha_1, \alpha_2, \beta_1^2, \beta_2^2), \end{equation} \begin{equation} \label{9-61} a_4=a_0\Psi_4(\alpha_1, \alpha_2, \beta_1^2, \beta_2^2). 
\end{equation} Let us denote by $\frac{\partial \Psi (t_1, t_2, w_1, w_2)}{\partial (t_1, t_2, w_1, w_2)}$ the Jacobian matrix of $\Psi$ and let $J(t_1, t_2, w_1, w_2)$ be its determinant. An elementary calculation shows that \begin{equation} \label{1-62} J(t_1, t_2, w_1, w_2)= - \left [ (t_1-t_2)^4+2(w_1+w_2)(t_1-t_2)^2+(w_1-w_2)^2 \right ]. \end{equation} Let us denote \begin{equation} \label{3-62} \gamma_1= \min \left \{ \gamma, \frac{1}{16M}, 1 \right \}. \end{equation} The following lemma holds. \begin{lem} \label{lem:8-4.3} Let $p_k(x;\xi)$, $k=1,2$, be defined by \eqref{4-59}. The following facts hold: \noindent (a) If \eqref{eq:sym-conditions-C-components} and \eqref{eq:3.bound_quantit} are satisfied, then \begin{equation} \label{4-62} \gamma_2 |\xi|^2 \leq p_k(x;\xi) \leq \gamma_2^{-1}|\xi|^2, \quad \hbox{for every } x\in \overline{B}_1, \ \xi \in \mathbb{R}^2, \ k=1,2, \end{equation} where $\gamma_2= 5^{-6}\gamma_1^{15}$. \noindent (b) If the dichotomy condition introduced in Definition \ref{def:dichotomy} holds true in $B_1$, then $g_k^{ij} \in C^{1,1}(\overline{B}_1)$, for $i,j,k=1,2$. More precisely, if \eqref{3.D(x)bound} holds true, then \begin{equation} \label{2-63} \sum_{i,j,k=1}^2 \left ( \| \nabla g_k^{ij} \|_{L^{\infty}(B_1)} \delta_1^{1/2} + \| \nabla^2 g_k^{ij} \|_{L^{\infty}(B_1)} \delta_1 \right ) \leq C_1, \end{equation} where $\delta_1= \min_{\overline{B}_1} {\mathcal{D}}(x)$ and $C_1$ only depends on $M$ and $\gamma$, whereas if \eqref{3.D(x)bound 2} holds true, then \begin{equation} \label{3-63} \sum_{i,j,k=1}^2 \left ( \| \nabla g_k^{ij} \|_{L^{\infty}(B_1)} + \| \nabla^2 g_k^{ij} \|_{L^{\infty}(B_1)} \right ) \leq C_2, \end{equation} where $C_2$ only depends on $M$ and $\gamma$. \end{lem} \begin{proof} First we prove (a). Let $x \in \overline{B}_1$ be fixed. In the rest of the proof of (a) we shall omit, for brevity, the dependence on $x$. 
By \eqref{1-59}, \eqref{eq:3.bound_quantit}, \eqref{3-62}, we have \begin{equation} \label{1-64} \gamma_1 |\xi|^4 \leq p(\xi) \leq \gamma_1^{-1} |\xi|^4, \quad \hbox{for every } \xi \in \mathbb{R}^2. \end{equation} Now we observe that the following inequalities hold true \begin{equation} \label{2-64} |\alpha_1+\alpha_2| \leq \gamma_1^{-2}, \end{equation} \begin{equation} \label{3-64} |\alpha_1^2+\beta_1^2+\alpha_2^2+\beta_2^2+4\alpha_1\alpha_2 | \leq \gamma_1^{-2}, \end{equation} \begin{equation} \label{4-64} |\alpha_1(\alpha_2^2+\beta_2^2)+\alpha_2(\alpha_1^2+\beta_1^2) | \leq \gamma_1^{-2}, \end{equation} \begin{equation} \label{5-64} \gamma_1^{2}\leq (\alpha_1^2+\beta_1^2)(\alpha_2^2+\beta_2^2) \leq \gamma_1^{-2}, \end{equation} \begin{equation} \label{6-64} \gamma_1^{2}(1+\alpha_1^2)^2\leq \beta_1^2\left [ (\alpha_1-\alpha_2)^2+\beta_2^2 \right ] \leq \gamma_1^{-2}(1+\alpha_1^2)^2, \end{equation} \begin{equation} \label{7-64} \gamma_1^{2}(1+\alpha_2^2)^2\leq \beta_2^2\left [ (\alpha_1-\alpha_2)^2+\beta_1^2 \right ] \leq \gamma_1^{-2}(1+\alpha_2^2)^2. \end{equation} Indeed, by \eqref{1-64} we have \begin{equation} \label{1-65} \gamma_1 \leq a_0 \leq \gamma_1^{-1}, \quad \gamma_1 \leq a_4 \leq \gamma_1^{-1}. \end{equation} On the other hand, by \eqref{1-65} and using \eqref{6-61}, \eqref{7-61}, \eqref{8-61}, \eqref{9-61} we obtain the inequalities \eqref{2-64}, \eqref{3-64}, \eqref{4-64}, \eqref{5-64}, respectively. Concerning \eqref{6-64}, by using \eqref{1-64} for $\xi=(\alpha_1,1)$ and taking into account \eqref{3-59}, we have \begin{equation} \label{2-65} \gamma_1(1+\alpha_1^2)^2\leq a_0 \beta_1^2\left [ (\alpha_1-\alpha_2)^2+\beta_2^2 \right ] \leq \gamma_1^{-1}(1+\alpha_1^2)^2. \end{equation} Inequality \eqref{6-64} follows {}from the first of \eqref{1-65} and \eqref{2-65}. Proceeding similarly for $\xi=(\alpha_2,1)$ we obtain \eqref{7-64}. 
Now, denoting \begin{equation} \label{3-65} \epsilon_0 = \frac{\gamma_1^3}{\sqrt{50}}, \end{equation} we are going to prove that the following inequalities hold \begin{equation} \label{4-65} \beta_k > \epsilon_0, \quad k=1,2, \end{equation} \begin{equation} \label{5-65} \beta_k \leq \frac{1}{\gamma_1 \epsilon_0}, \quad k=1,2, \end{equation} \begin{equation} \label{6-65} |\alpha_k| \leq \frac{1}{\gamma_1 \epsilon_0}, \quad k=1,2. \end{equation} In order to prove \eqref{4-65}, it is enough to consider the case $k=1$, as the case $k=2$ can be proved by the same arguments. We proceed by contradiction and we assume that \begin{equation} \label{1-66} \beta_1^2 \leq \epsilon_0^2. \end{equation} By \eqref{1-66} and \eqref{6-64} we get \begin{equation} \label{2-66} \frac{\gamma_1^{2}}{\epsilon_0^2} \leq (\alpha_1-\alpha_2)^2+\beta_2^2, \end{equation} hence at least one of the following inequalities must hold \begin{equation} \label{3a-66} \frac{\gamma_1^{2}}{2\epsilon_0^2} \leq \beta_2^2, \end{equation} \begin{equation} \label{3b-66} \frac{\gamma_1^{2}}{2\epsilon_0^2} \leq (\alpha_1-\alpha_2)^2. \end{equation} If the inequality \eqref{3a-66} holds, then by \eqref{5-64} we have \begin{equation} \label{4-66} \alpha_1^2 \leq \alpha_1^2 +\beta_1^2 \leq \frac{\gamma_1^{-2}}{\alpha_2^2+\beta_2^2} \leq \frac{\gamma_1^{-2}}{\beta_2^2} \leq 2 \gamma_1^{-4}\epsilon_0^2, \end{equation} hence \begin{equation} \label{1-67} |\alpha_1| \leq \sqrt{2} \gamma_1^{-2} \epsilon_0, \end{equation} and in turn inequalities \eqref{1-67}, \eqref{2-64} imply \begin{equation} \label{2-67} |\alpha_2| \leq (1+\sqrt{2}\epsilon_0) \gamma_1^{-2}. 
\end{equation} Therefore, by \eqref{3-64}, \eqref{3a-66}, \eqref{1-67}, \eqref{2-67}, and recalling that $\gamma_1 \in (0,1)$, we have \begin{equation} \label{3-67} \frac{\gamma_1^2}{2\epsilon_0^2} \leq \beta_2^2 \leq \alpha_2^2 +\beta_2^2+\alpha_1^2+\beta_1^2 < 25 \gamma_1^{-4}, \end{equation} hence we have $\epsilon_0 > \frac{\gamma_1^3}{ \sqrt{50} }$, a contradiction. Hence, \eqref{3a-66} cannot be true. If \eqref{3b-66} holds, then we have $|\alpha_1|+|\alpha_2|\geq |\alpha_1-\alpha_2|\geq \frac{\gamma_1}{\sqrt{2}\epsilon_0}$. Therefore, at least one of the following inequalities holds \begin{equation} \label{4-67} |\alpha_1| \geq \frac{\gamma_1}{2\sqrt{2}\epsilon_0}, \quad |\alpha_2| \geq \frac{\gamma_1}{2\sqrt{2}\epsilon_0}. \end{equation} If the first of \eqref{4-67} holds, then by \eqref{2-64} we have $|\alpha_2| \geq |\alpha_1| - \gamma_1^{-2} \geq \frac{\gamma_1}{2\sqrt{2}\epsilon_0} -\gamma_1^{-2}\geq \frac{\gamma_1}{4\sqrt{2}\epsilon_0}$ and, analogously, if the second of \eqref{4-67} holds, then we have $|\alpha_1| \geq \frac{\gamma_1}{4\sqrt{2}\epsilon_0}$. Hence, if \eqref{3b-66} holds, then we have \begin{equation} \label{1-68} |\alpha_1| \geq \frac{\gamma_1}{4\sqrt{2}\epsilon_0}, \quad |\alpha_2| \geq \frac{\gamma_1}{4\sqrt{2}\epsilon_0}. \end{equation} Inequalities \eqref{1-68} and \eqref{5-64} give \begin{equation} \label{2-68} \frac{\gamma_1^2}{32\epsilon_0^2} \leq \alpha_1^2 \leq \alpha_1^2 + \beta_1^2 \leq \frac{\gamma_1^{-2}}{\alpha_2^2 + \beta_2^2} \leq \frac{\gamma_1^{-2}}{\alpha_2^2} \leq 32 \gamma_1^{-4}\epsilon_0^2. \end{equation} As a consequence of the above inequality we have $\frac{\gamma_1^3}{32}\leq \epsilon_0^2$, that contradicts \eqref{3-65}. Therefore, \eqref{1-66} cannot be true and \eqref{4-65} is proved. By \eqref{5-64} and \eqref{4-65} we easily obtain \eqref{5-65} and \eqref{6-65}. 
Finally, by \eqref{4-65}--\eqref{6-65}, we obtain easily an estimate {}from above and {}from below of the eigenvalues of the matrices $\{g_k^{ij}(x)\}_{i,j=1}^2$ {}from which the estimate \eqref{4-62} follows. Now we prove the statement (b) of the lemma. By \eqref{1-62}, \eqref{1-65}, \eqref{4-65}--\eqref{6-65} we have \begin{equation} \label{1-69} \gamma_3 \sqrt{\mathcal{D}(x)} \leq J(x) \leq \gamma_3^{-1} \sqrt{\mathcal{D}(x)}, \quad \hbox{for every } x \in \overline{B}_1, \end{equation} where \begin{equation} \label{2-69} J(x)= |J(\alpha_1(x), \alpha_2(x), \beta_1^2(x), \beta_2^2(x))| \end{equation} and $\gamma_3= 10^{-6}\gamma_1^{25}\gamma_0^{-3}$. Assume that \eqref{3.D(x)bound} holds in $B_1$. In order to prove that $g_{k}^{ij} \in C^{1,1}(\overline{B}_1)$ and to derive estimate \eqref{2-63}, it is enough to apply the Inverse Mapping Theorem to the map $\Psi$. Indeed, by \eqref{5-61}, the vector-valued function $\omega(x)=(\alpha_1(x), \alpha_2(x), \beta_1^2(x), \beta_2^2(x))$ satisfies the following equality \begin{equation} \label{1-70} \Psi(\omega(x))=d(x), \quad x \in \overline{B}_1, \end{equation} where $d(x)=\left ( - \frac{a_1(x)}{2a_0(x)}, \frac{a_2(x)}{a_0(x)}, -\frac{a_3(x)}{2a_0(x)}, \frac{a_4(x)}{a_0(x)} \right )$, hence by \eqref{eq:3.convex}, \eqref{3.coeff6}, \eqref{3.coeffsmall}, \eqref{1-69}, \eqref{2-69}, \eqref{1-70} we obtain \eqref{2-63}. If \eqref{3.D(x)bound 2} holds true, then by \eqref{2-60} we have $\alpha_1(x)=\alpha_2(x)$ and $\beta_1(x)=\beta_2(x)$ for every $x \in \overline{B}_1$. Therefore, by \eqref{5-61}--\eqref{7-61} we have \begin{equation} \label{2-70} \alpha_1(x)=\alpha_2(x)= - \frac{a_1(x)}{4a_0(x)} \end{equation} and \begin{equation} \label{3-70} \beta_1^2(x)=\beta_2^2(x)= \frac{a_2(x)}{2a_0(x)}- \frac{3a_1^2(x)}{16a_0^2(x)}. \end{equation} By \eqref{eq:3.convex}, \eqref{3.coeff6}, \eqref{3.coeffsmall}, \eqref{1-65}, \eqref{4-65}, \eqref{2-70} and \eqref{3-70} we get \eqref{3-63}. 
\end{proof} \begin{theo} [Three sphere inequality - first version] \label{theo:9-4.3} Let us assume that $u \in H^4({B}_R)$ is a solution to the equation \begin{equation} \label{1-71} \partial_{ij}^2 (C_{ijkl}(x)\partial_{kl}^2 u)=0, \quad \hbox{in } B_R, \end{equation} where $\{C_{ijkl}(x)\}_{i,j,k,l=1}^{2}$ is a fourth order tensor whose entries belong to $C^{1,1}(\overline{B}_R)$. Assume that \eqref{eq:sym-conditions-C-components}, \eqref{eq:3.bound_quantit}, \eqref{eq:3.convex} and the dichotomy condition are satisfied in $B_R$. Let $\gamma_2=5^{-6}\gamma_1^{15}$ and $\beta= \frac{1}{\gamma_2^2}-1$. There exist positive constants $s_2$, $0<s_2<1$, and $C$, $C>1$, $s_2$ and $C$ only depending on $\gamma$, $M$ and on $\delta_1= \min_{\overline{B}_R} \mathcal{D}$, such that, for every $\rho_1 \in (0,s_2 R)$ and every $r$, $\rho$ satisfying $r<\rho<\frac{\rho_1\gamma_2^2}{2}$, the following inequality holds \begin{multline} \label{1-72} \sum_{k=0}^3 \rho^{2k} \int_{B_\rho} |\nabla^k u|^2 \leq C \exp \left ( C \left((\gamma_2^{-1}\rho)^{-\beta}-(\gamma_2 \frac{\rho_1}{2})^{-\beta}\right)R^{\beta} \right ) \cdot \\ \cdot \left ( \sum_{k=0}^3 r^{2k} \int_{B_r} |\nabla^k u|^2 \right )^{\theta_1} \left ( \sum_{k=0}^3 \rho_1^{2k} \int_{B_{\rho_1}} |\nabla^k u|^2 \right )^{1-\theta_1}, \end{multline} where \begin{equation} \label{2-72} \theta_1 = \frac{(\gamma_2^{-1}\rho)^{-\beta}-(\gamma_2 \frac{\rho_1}{2})^{-\beta}}{(\gamma_2 \frac{r}{2} )^{-\beta}-(\gamma_2 \frac{\rho_1}{2})^{-\beta}}. \end{equation} \end{theo} \begin{proof} Let us define \begin{equation} \label{10-75} \widetilde{u}(y)=u(Ry), \quad \widetilde{C}_{ijkl}(y)=C_{ijkl}(Ry), \ \ y \in \overline{B}_1, \ \ i,j,k,l=1,2. \end{equation} Then, $\widetilde{u} \in H^4(B_1)$ is a solution to the equation \begin{equation} \label{20-75} \partial_{ij}^2 (\widetilde{C}_{ijkl}(y)\partial_{kl}^2 \widetilde{u})=0, \quad \hbox{in } B_1. 
\end{equation} Now, by Lemma \ref{lem:8-4.3} we have that \begin{equation} \label{3-72} {\mathcal{L}}\widetilde{u}=L_2L_1 \widetilde{u} + Q \widetilde{u}, \end{equation} where $L_k=p_k(y; \partial)$, $k=1,2$, and \begin{equation} \label{4-72} p_k(y; \partial) = g_k^{ij}\partial_{ij}^2, \quad k=1,2. \end{equation} Here, $\{g_k^{ij}\}_{i,j=1}^2$, $k=1,2$, satisfy \eqref{2-63} or \eqref{3-63} (the former whenever \eqref{3.D(x)bound} holds, the latter whenever \eqref{3.D(x)bound 2} holds), \begin{equation} \label{5-73} \gamma_2 |\xi|^2 \leq g_k^{ij}(y)\xi_i\xi_j \leq \gamma_2^{-1}|\xi|^2, \quad y \in \overline{B}_1, \ \xi \in \mathbb{R}^2, \end{equation} and $Q$ is a third order operator with bounded coefficients satisfying \begin{equation} \label{6-73} |Q\widetilde{u}| \leq cM \left ( |\nabla^3\widetilde{u}| + |\nabla^2\widetilde{u}| \right ), \end{equation} where $c$ is an absolute constant. Therefore, {}from \eqref{3-72}--\eqref{6-73} and Theorem \ref{theo:7-4.2}, and coming back to the old variables, we obtain the three sphere inequality \eqref{1-72}. \end{proof} The following Poincar\'{e}-type inequality holds. \begin{prop} [Poincar\'{e} inequality] \label{prop:10-4.3} There exists a positive constant $C$ only depending on $n$ such that for every $u\in H^2(B_R, \mathbb{R}^n)$ and for every $r\in(0,R]$ \begin{equation} \label{1-76} \int_{B_R}|\tilde u_r|^2+R^2\int_{B_R}|\nabla \tilde u_r|^2\leq CR^4\left(\frac{R}{r}\right)^n\int_{B_R}|\nabla^2 u|^2, \end{equation} where \begin{equation} \label{2-76} \widetilde{u}_r(x)=u(x) -(u)_r-(\nabla u)_r \cdot x, \end{equation} \begin{equation} \label{3-76} (u)_r=\frac{1}{|B_r|}\int_{B_r}u, \qquad (\nabla u)_r=\frac{1}{|B_r|}\int_{B_r}\nabla u. \end{equation} \end{prop} \begin{proof} For a proof we refer to \cite[Example 4.3]{A-M-R4}. 
\end{proof} \begin{prop} [Caccioppoli-type inequality] \label{prop:11-4.3} Let us assume that $u \in H^4(B_R)$ is a solution to the equation \begin{equation} \label{4-76} \partial_{ij}^2 (C_{ijkl}(x)\partial_{kl}^2 u)=0, \quad \hbox{in } B_R, \end{equation} where $\{C_{ijkl}(x)\}_{i,j,k,l=1}^{2}$ is a fourth order tensor whose entries belong to $C^{1,1}(\overline{B}_R)$. Assume that \eqref{eq:sym-conditions-C-components}--\eqref{eq:3.convex} are satisfied. We have \begin{equation} \label{5-76} \int_{B_{\frac{t}{2}}} |\nabla^3 u|^2 \leq C \int_{B_t} \sum_{k=0}^2 \left ( t^{k-3} |\nabla^k u| \right )^2, \quad \hbox{for every } t \leq R, \end{equation} where $C$ is a positive constant only depending on $\gamma$ and $M$. \end{prop} \begin{proof} The proof of \eqref{5-76} is essentially the same as that of \cite[Proposition $6.2$]{M-R-V1}. Here, for the reader's convenience, we give a sketch of the proof. For every $t \in (0,R]$, let $\eta \in C_0^\infty(B_t)$ be such that $0 \leq \eta \leq1$ in $B_t$, $\eta \equiv 1$ in $B_{\frac{t}{2}}$ and \begin{equation} \label{1-77} \sum_{k=1}^3 t^k |\nabla^k \eta| \leq C, \quad \hbox{in } B_t, \end{equation} where $C$ is an absolute constant. Multiplying equation \eqref{4-76} by $\Delta (\eta^6 u)$, integrating over $B_t$ and integrating by parts twice, we have \begin{equation} \label{2-77} \int_{B_t} C_{ijkl} \partial_{kl}^2 u \partial_{ij}^2 \Delta (\eta^6 u)=0 \end{equation} and, integrating by parts, \begin{equation} \label{3-77} \int_{B_t} \left \{ C_{ijkl} \partial_{kl}^2 \partial_s u \partial_{ij}^2 \partial_s (\eta^6 u) + \partial_s(C_{ijkl})\partial_{kl}^2u \partial_{ij}^2 \partial_s (\eta^6 u) \right \}=0. 
\end{equation} By \eqref{eq:3.convex}, \eqref{1-77}, \eqref{3-77} and taking into account that $t\leq R$ we have \begin{equation} \label{1-78} \int_{B_t} \eta^6 C_{ijkl} \partial_{kl}^2 \partial_s u \partial_{ij}^2 \partial_s u = F[u], \end{equation} where $F$ satisfies the inequality \begin{equation} \label{2-78} |F[u]| \leq CM \int_{B_t} \left ( \sum_{k=0}^2 t^{k-3} |\nabla^k u| \right )^2 + CM\int_{B_t} |\nabla^3 u| \eta^3 \left ( \sum_{k=0}^2 t^{k-3} |\nabla^k u| \right ), \end{equation} where $C$ is an absolute constant. By \eqref{1-78}, \eqref{2-78}, \eqref{eq:3.bound_quantit} and Cauchy inequality ($2ab \leq \epsilon a^2 + \frac{1}{\epsilon}b^2$, for $\epsilon >0$) we have \begin{equation} \label{3-78} \gamma \int_{B_t} \eta^6 |\nabla^3 u|^2 \leq CM^2 \int_{B_t} \left ( \sum_{k=0}^2 t^{k-3} |\nabla^k u| \right )^2. \end{equation} Inequality \eqref{5-76} follows immediately by \eqref{3-78}. \end{proof} \begin{theo} [Three sphere inequality - second version] \label{theo:12-4.3} Let $u \in H^4({B}_R)$ be a solution to the equation \begin{equation} \label{1-79} \partial_{ij}^2 (C_{ijkl}(x)\partial_{kl}^2 u)=0, \quad \hbox{in } B_R, \end{equation} where $\{C_{ijkl}(x)\}_{i,j,k,l=1}^{2}$ is a fourth order tensor whose entries belong to $C^{1,1}(\overline{B}_R)$. Assume that \eqref{eq:sym-conditions-C-components}, \eqref{eq:3.bound_quantit}, \eqref{eq:3.convex} and the dichotomy condition are satisfied in $B_R$. Let $\gamma_2=5^{-6}\gamma_1^{15}$ and $\beta= \frac{1}{\gamma_2^2}-1$. 
There exist positive constants $s$, $0<s<1$, and $C$, $C\geq 1$, $s$ and $C$ only depending on $\gamma$, $M$ and on $\delta_1= \min_{\overline{B}_R} \mathcal{D}$, such that, for every $\rho_1 \in (0,s R)$ and every $r$, $\rho$ satisfying $r<\rho<\frac{\rho_1\gamma_2^2}{2}$, the following inequality holds \begin{multline} \label{1-80} \rho^4 \int_{B_\rho} |\nabla^2 u|^2 \leq C \exp \left( C \left((\gamma_2^{-1}\rho)^{-\beta}-(\gamma_2 \frac{\rho_1}{2})^{-\beta}\right)R^{\beta} \right) \cdot \\ \cdot \left ( r^{4} \int_{B_{2r}} |\nabla^2 u|^2 \right )^{\theta_1} \left ( \frac{\rho_1^6}{r^2} \int_{B_{2\rho_1}} |\nabla^2 u|^2 \right )^{1-\theta_1}, \end{multline} where \begin{equation} \label{2-80} \theta_1 = \frac{(\gamma_2^{-1}\rho)^{-\beta}-(\gamma_2 \frac{\rho_1}{2})^{-\beta}}{(\gamma_2 \frac{r}{2} )^{-\beta}-(\gamma_2 \frac{\rho_1}{2})^{-\beta}}. \end{equation} \end{theo} \begin{proof} Let $a \in \mathbb{R}$ and $\omega \in \mathbb{R}^2$ be constants to be chosen later. Since $u$ is a solution to \eqref{1-79}, also $v=u-a-\omega\cdot x$ is a solution to \eqref{1-79}. By \eqref{1-72} we have \begin{equation} \label{3-80} \rho^4 \int_{B_\rho} |\nabla^2 v|^2 \leq K \left ( H_v(r) \right )^{\theta_1} \left ( H_v(\rho_1) \right )^{1-\theta_1}, \end{equation} where \begin{equation} \label{10-81} K=C \exp \left ( C \left((\gamma_2^{-1}\rho)^{-\beta}-(\gamma_2 \frac{\rho_1}{2})^{-\beta}\right)R^{\beta} \right ) \end{equation} and \begin{equation} \label{20-81} H_v(t)= \sum_{k=0}^3 t^{2k} \int_{B_t} |\nabla^k v|^2, \quad t \in (0,R). \end{equation} By Proposition \ref{prop:11-4.3} we have \begin{equation} \label{1-81} H_v(r)\leq C\sum_{k=0}^2 r^{2k} \int_{B_{2r}} |\nabla^k v|^2, \end{equation} where $C$ only depends on $M$ and $\gamma$. Now, we choose \begin{equation} \label{30-81} a = \frac{1}{|B_{2r}|}\int_{B_{2r}} u, \quad \omega= \frac{1}{|B_{2r}|}\int_{B_{2r}} \nabla u. 
\end{equation} By Proposition \ref{prop:10-4.3} and \eqref{1-81} we have \begin{equation} \label{2-81} H_v(r) \leq Cr^4 \int_{B_{2r}} |\nabla^2 u|^2, \end{equation} where $C$ only depends on $M$ and $\gamma$. Similarly, by applying Propositions \ref{prop:10-4.3} and \ref{prop:11-4.3} we obtain \begin{equation} \label{3-81} H_v(\rho_1) \leq C \rho_1^4 \left ( \frac{\rho_1}{r} \right )^2 \int_{B_{2\rho_1}} |\nabla^2 u|^2, \end{equation} where $C$ only depends on $\gamma$ and $M$. {}From \eqref{3-80}, \eqref{2-81} and \eqref{3-81}, inequality \eqref{1-80} follows. \end{proof} \begin{theo} [Three sphere inequality - third version] \label{theo:13-4.3} Let $u \in H^4({B}_R)$ be a solution to the equation \begin{equation} \label{1-79bis} \partial_{ij}^2 (C_{ijkl}(x)\partial_{kl}^2 u)=0, \quad \hbox{in } B_R, \end{equation} where $\{C_{ijkl}(x)\}_{i,j,k,l=1}^{2}$ is a fourth order tensor whose entries belong to $C^{1,1}(\overline{B}_R)$. Assume that \eqref{eq:sym-conditions-C-components}, \eqref{eq:3.bound_quantit}, \eqref{eq:3.convex} and the dichotomy condition are satisfied in $B_R$. Let $\gamma_2=5^{-6}\gamma_1^{15}$ and $\beta= \frac{1}{\gamma_2^2}-1$. 
There exist positive constants $s$, $0<s<1$, and $C$, $C\geq 1$, $s$ and $C$ only depending on $\gamma$, $M$ and on $\delta_1= \min_{\overline{B}_R} \mathcal{D}$, such that, for every $\rho_1 \in (0,s R)$ and every $r$, $\rho$ satisfying $r<\rho<\frac{\rho_1\gamma_2^2}{2}$, the following inequality holds \begin{multline} \label{3sfere_Cauchy} \int_{B_\rho} u^2 \leq C \exp \left ( C ((\gamma_2^{-1}\rho)^{-\beta}-(\gamma_2 \frac{\rho_1}{2})^{-\beta})R^{\beta} \right ) \cdot \\ \cdot \left ( \int_{B_{r}} u^2 \right )^{\theta} \left ( \sum_{k=0}^4 \rho_1^{2k} \int_{B_{\rho_1}} |\nabla^k u|^2 \right )^{1-\theta}, \end{multline} where $\theta=\frac{\theta_1}{4}$, with $\theta_1$ given by \eqref{2-72}. \end{theo} \begin{proof} It follows immediately {}from \eqref{1-72} and the interpolation inequality \begin{equation*} \|u\|_{H^3(B_r)}\leq C \|u\|_{L^2(B_r)}^{\frac{1}{4}}\|u\|_{H^4(B_r)}^{\frac{3}{4}}, \end{equation*} where $C$ is an absolute constant and the norms are normalized according to the convention made in Section \ref{SecCauchy}. \end{proof} \textit{Acknowledgements}. We wish to express our gratitude to Professor Luis Escauriaza for deep, fruitful and stimulating discussions on the issues of Carleman estimates.
\section{\bf Introduction} In \cite{Bo1}, Bouc proposed the following conjecture: \begin{conjecture}\cite[Conjecture A]{Bo1} Let $G$ be a finite group. Then $\beta(G)$ is nilpotent if and only if $G$ is nilpotent. \end{conjecture} Here, $\beta(G)$ denotes the largest quotient of the finite group $G$ which is a $B$-group; the definition of $B$-group can be found in \cite{Bo1, Bo2} or in Section 2. In \cite{Bo1}, Bouc proved Conjecture 1.1 under the additional assumption that the finite group $G$ is solvable. In \cite{XZ}, Xu and Zhang considered some special cases where the finite group $G$ is not solvable. However, their result relies on a proposition of Baumann \cite{Ba}, which in turn relies on the Conlon theorem \cite[(80.51)]{CR}. In order to generalize the result of \cite{XZ}, we need a new method to compute $m_{G, N}$ directly. Here, $N$ is a normal subgroup of $G$, and the definition of $m_{G,N}$ can be found in \cite{Bo1,Bo2} or in Section 2. Concerning the computation of $m_{G,N}$, Bouc first computed the following. \begin{proposition}\cite[Proposition 5.6.1]{Bo2} Let $G$ be a finite group. Then $$m_{G,G}=\left\{ \begin{array}{ll} 0, &\mbox{if}~ G ~is~not~cyclic; \\[2ex] \frac{1}{|G|}\varphi(|G|), &\mbox{if} ~ G ~is ~cyclic,\end{array}\right.$$ where $\varphi$ is the Euler totient function. \end{proposition} For the general case, our main theorem describes $m_{G,N}$ as follows. \begin{thm*}Let $G$ be a finite group, $G$ not cyclic, and let $N\unlhd G$. Then $$ m_{G,N}=\frac{1}{|G|}\sum_{\substack{C\leq G\\ C~ \mathrm{is~ cyclic}}}\sum_{i=1}^n \sum_{\substack{\sigma\subseteq J\\ |\sigma|=i\\ C\leq H_\sigma}} (-1)^i\widetilde{\chi}(|N(\mathfrak{T}_{C}(G, H_\sigma))|)\cdot \varphi(|C|).$$ Here, we explain the symbols of the above formula as follows: (1) Let $\{H_1, H_2,\cdots, H_n\}$ be the set of all maximal subgroups of $G$ such that $N\lneq H_i$; (2) Set $J=\{1,2,\ldots, n\}$ and let $\sigma$ be a non-empty subset of $J$. 
Here, $|\sigma|$ denotes the cardinality of $\sigma$; (3) Set $H_{\sigma}=\bigcap_{i\in\sigma}H_i$; (4) Let $|N(\mathfrak{T}_{C}(G,H_\sigma))|$ be the geometric realization of the nerve of the poset $\mathfrak{T}_{C}(G,H_\sigma)$, and let $\widetilde{\chi}(|N(\mathfrak{T}_{C}(G,H_\sigma))|)$ be the reduced Euler characteristic of the space $|N(\mathfrak{T}_{C}(G, H_\sigma))|$. \end{thm*} Here, $\mathfrak{T}_{C}(G,H)$ is defined as follows: Let $C$ be a cyclic subgroup of $H$ and define $$\mathfrak{T}_{C}(G, H):=\{X|C\leq X\lneq G, X\nleq H\}.$$ We can see that $\mathfrak{T}_{C}(G, H)$ is a poset ordered by inclusion, and we may regard the poset $\mathfrak{T}_{C}(G, H)$ as a category with a single morphism $A\rightarrow B$ whenever $A$ is a subgroup of $B$. We write $N(\mathfrak{T}_{C}(G, H))$ for the nerve of the category $\mathfrak{T}_{C}(G, H)$ and $|N(\mathfrak{T}_{C}(G, H))|$ for the geometric realization of $N(\mathfrak{T}_{C}(G, H))$. After recalling the basic definitions and properties of $B$-groups in Section 2, we introduce a lemma about the M$\mathrm{\ddot{o}}$bius function in Section 3. This lemma is used in Section 4 to prove Proposition 4.1. In Section 5, we construct a class of posets $\mathfrak{T}_{C}(G, H)$ of $G$ for cyclic subgroups $C$, and we prove the Main Theorem in Section 6. \section{\bf The Burnside rings and $B$-groups} In this section we collect some known results about the Burnside rings and $B$-groups. For the background theory of Burnside rings and $B$-groups, we refer to \cite{Bo1}, \cite{Bo2}. \begin{definition}\cite[Notation 5.2.2]{Bo2} Let $G$ be a finite group and $N\unlhd G$. Denote by $m_{G,N}$ the rational number defined by: $$m_{G,N}=\frac{1}{|G|}\sum_{XN=G} |X|\mu(X, G),$$ where $\mu$ is the M$\mathrm{\ddot{o}}$bius function of the poset of subgroups of $G$. 
\end{definition} \begin{remark} If $N=1$, we have $$m_{G,1}=\frac{1}{|G|}\sum_{X1=G} |X|\mu(X, G)=\frac{1}{|G|}|G|\mu(G, G)=1\neq 0.$$ \end{remark} \begin{definition}\cite[Definition 2.2]{Bo1} The finite group $G$ is called a $B$-group if $m_{G,N}=0$ for any non-trivial normal subgroup $N$ of $G$. \end{definition} \begin{proposition}\cite[Proposition 5.4.10]{Bo2} Let $G$ be a finite group. If $N_{1}, N_{2}\unlhd G$ are maximal such that $m_{G,N_i}\neq 0$, $i=1,2$, then $G/N_{1}\cong G/N_{2}$. \end{proposition} \begin{definition}\cite[Notation 2.3]{Bo1} When $G$ is a finite group, and $N\unlhd G$ is maximal such that $m_{G,N}\neq 0$, set $\beta(G)=G/N.$ \end{definition} \begin{theorem}\cite[Theorem 5.4.11]{Bo2} Let $G$ be a finite group. 1. $\beta(G)$ is a $B$-group. 2. If a $B$-group $H$ is isomorphic to a quotient of $G$, then $H$ is isomorphic to a quotient of $\beta(G)$. 3. Let $M\unlhd G$. The following conditions are equivalent: \quad\quad (a) $m_{G,M}\neq 0$. \quad\quad (b) The group $\beta(G)$ is isomorphic to a quotient of $G/M$. \quad\quad (c) $\beta(G)\cong \beta(G/M)$. \end{theorem} We collect some properties of $m_{G,N}$ that will be needed later. \begin{proposition}\cite[Proposition 2.5]{Bo1} Let $G$ be a finite group. Then $G$ is a $B$-group if and only if $m_{G,N}=0$ for any minimal (non-trivial) normal subgroup $N$ of $G$. \end{proposition} \begin{proposition}\cite[Proposition 5.6.1]{Bo2} Let $G$ be a finite group. Then $m_{G,G}=0$ if and only if $G$ is not cyclic. If $G$ is cyclic of order $n$, then $m_{G, G}=\varphi(n)/n$, where $\varphi$ is the Euler totient function. \end{proposition} \begin{remark} If $G$ is a finite simple group, then $G$ is a $B$-group if and only if $G$ is not abelian. \end{remark} We collect two results on the relation between $G$ and $\beta(G)$. When $p$ is a prime number, recall that a finite group $G$ is called cyclic modulo $p$ (or $p$-hypo-elementary) if $G/O_{p}(G)$ is cyclic. M. Baumann proved the following theorem. 
\begin{theorem}\cite[Theorem 3]{Ba} Let $p$ be a prime number and $G$ be a finite group. Then $\beta(G)$ is cyclic modulo $p$ if and only if $G$ is cyclic modulo $p$. \end{theorem} In \cite{Bo1}, S. Bouc proved Conjecture 1.1 under the additional assumption that the finite group $G$ is solvable. \begin{theorem}\cite[Theorem 3.1]{Bo1} Let $G$ be a solvable finite group. Then $\beta(G)$ is nilpotent if and only if $G$ is nilpotent. \end{theorem} \section{\bf The M$\mathrm{\ddot{o}}$bius function of the posets of groups} In this section, we introduce a lemma about the M$\mathrm{\ddot{o}}$bius function. This lemma will be used to compute $m_{G,N}$, where $G$ is a finite group and $N\unlhd G$. Let $G$ be a finite group and let $\mu$ denote the M$\mathrm{\ddot{o}}$bius function of the subgroup lattice of $G$. Following \cite[p.94]{Y}, for $K,D\leq G$ we recall the zeta function of $G$: $$\zeta(K, D)=\left\{ \begin{array}{ll} 1, & \mbox{if}~ K\leq D; \\[2ex] 0, &\mbox{if} ~K\nleq D.\end{array}\right.$$ Set $n:=|\{K|K\leq G\}|$; then we have an $n\times n$ matrix $A$ defined by $$A:=(\zeta(K, D))_{K,D\leq G}.$$ It is easy to see that $A$ is an invertible matrix, so there exists $A^{-1}$ such that $$AA^{-1}=E,$$ where $E$ is the identity matrix. Recall that the M$\mathrm{\ddot{o}}$bius function is given by $$(\mu(K, D))_{K,D\leq G}=A^{-1}.$$ Now we enumerate the subgroup lattice of $G$ as $$\{K|K\leq G\}:=\{1=K_1, K_2,\ldots, K_n=G\},$$ where $n=|\{K|K\leq G\}|$. The main lemma of this section is the following; it is used to prove Proposition 4.1 in Section 4. The proof given below is due to the referee of \cite{LXZ}, and it is simpler than the proof in \cite[Section 3]{LXZ}. \begin{lemma}Let $G$ be a finite group. Let $\{K_i|i=1,2,\ldots, n\}$ be the set of all subgroups of $G$, and set $K_1=1$, $K_n=G$. Then we have $\mu(K_i, K_{i'})=0$ if $K_i\nleq K_{i'}$. 
\end{lemma} \begin{proof}Set $X:=\{K_i|i=1,2,\ldots, n\}$, regarded as a poset. One defines the incidence (or M$\ddot{\mathrm{o}}$bius) algebra $A_X$ of $X$ as the set of square matrices $m$ indexed by $X\times X$, with integral coefficients, such that $$\forall~ (x,y)\in X\times X, \quad m(x,y) \neq 0 \Longrightarrow x \leq y \ \hbox{in}\ X.$$ This clearly defines a unital subalgebra of the algebra of all square matrices indexed by $X\times X$, with integral coefficients. The incidence matrix $\zeta_X$ of $X$ belongs to $A_X$. Moreover, since $\zeta_X$ is unitriangular (up to a suitable permutation of $X$), the matrix $\zeta_X-\mathrm{Id}$ is nilpotent. Hence $$\zeta_X^{-1}=\big(\mathrm{Id}+(\zeta_X-\mathrm{Id})\big)^{-1}=\mathrm{Id}+\sum_{i=1}^{+\infty}(-1)^i(\zeta_X-\mathrm{Id})^i,$$ and the summation is actually finite. Since $\zeta_X- \mathrm{Id}\in A_X$, it follows that $\zeta_X^{-1}\in A_X$, so $\mu(x,y) \neq 0$ implies $x\leq y$, for any $x,y\in X.$ This proves the lemma. \end{proof} \section{\bf Computing $M'_{G, N}$} In \cite{LXZ}, we computed $m_{G,N}$ when $|G:N|=p$ for some prime number $p$, and we have the following observation: \begin{eqnarray*} &~&m_{G,N}+\frac{1}{|G|}\sum_{X\leq N} |X|\mu(X, G)\\ &=&\frac{1}{|G|}\sum_{\substack{XN= G\\ X\leq G}} |X|\mu(X, G)+\frac{1}{|G|}\sum_{X\leq N} |X|\mu(X, G)\\ &=&\frac{1}{|G|}\sum_{\substack{XN= G\\ X\leq G}} |X|\mu(X, G)+\frac{1}{|G|}\sum_{\substack{XN\neq G\\ X\leq G}} |X|\mu(X, G)\\ &=&\frac{1}{|G|}\sum_{X\leq G} |X|\mu(X, G)\\ &=&m_{G,G}=0, \mathrm{if}~ G \mathrm{~is ~not~ cyclic}. \end{eqnarray*} So to compute $m_{G,N}$, we can first compute $\frac{1}{|G|}\sum_{X\leq N} |X|\mu(X, G)$. Now, in the setting of \cite{LXZ} (where $N$ is maximal in $G$, so that $XN\neq G$ if and only if $X\leq N$), we set $$m_{G,N}':=\frac{1}{|G|}\sum_{\substack{XN\neq G\\ X\leq G}} |X|\mu(X, G)=\frac{1}{|G|}\sum_{X\leq N} |X|\mu(X, G);$$ and set $$M_{G,N}':=\sum_{X\leq N} |X|\mu(X, G)=|G|m_{G,N}'.$$ In \cite{LXZ}, we gave a relation between $M_{G,N}'$ and the Euler characteristic of the nerve of a certain poset of subgroups of $G$. 
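Before proceeding, we illustrate the above notions with an elementary example (included only as a sanity check, and not taken from \cite{LXZ}). Let $G$ be the Klein four-group, with subgroups $K_1=1$, $K_2$, $K_3$, $K_4$ of order $2$, and $K_5=G$. With respect to this enumeration, the matrix $A=(\zeta(K_i,K_j))$ and its inverse are
$$A=\left(\begin{array}{ccccc}1&1&1&1&1\\0&1&0&0&1\\0&0&1&0&1\\0&0&0&1&1\\0&0&0&0&1\end{array}\right),\qquad A^{-1}=\left(\begin{array}{ccccc}1&-1&-1&-1&2\\0&1&0&0&-1\\0&0&1&0&-1\\0&0&0&1&-1\\0&0&0&0&1\end{array}\right),$$
so that $\mu(1,G)=2$, $\mu(K_i,G)=-1$ for $i=2,3,4$, and $\mu(K_i,K_{i'})=0$ whenever $K_i\nleq K_{i'}$, as predicted by Lemma 3.1. Consequently
$$m_{G,G}=\frac{1}{|G|}\sum_{X\leq G}|X|\mu(X,G)=\frac{1}{4}\left(1\cdot 2+2\cdot(-1)\cdot 3+4\cdot 1\right)=0,$$
in accordance with Proposition 2.8, since $G$ is not cyclic.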
However, the condition $|G:N|=p$ is rather restrictive. Following the suggestions in the referee report of \cite{LXZ}, we remove this condition and obtain the following propositions; the motivation for this section is explained in Remark 4.3. Now, let $G$ be a finite group and $N \lneq G$. We set $$m_{G,N}':=\frac{1}{|G|}\sum_{X\leq N} |X|\mu(X, G);$$ and $$M_{G,N}':=\sum_{X\leq N} |X|\mu(X, G)=|G|m_{G,N}'.$$ Then we have the following propositions. \begin{proposition}Let $G$ be a finite group and $N\lneq G$. Then $$ M_{G,N}' =-\sum_{Y\lneq G}\sum_{X\leq N\cap Y} |X|\mu(X, Y).$$ \end{proposition} \begin{proof}Since $N$ is a proper subgroup of the finite group $G$, every $X \leq N$ is a proper subgroup of $G$. By the standard properties of the M\"{o}bius function, $$(\ast)~~~~~~~~~~\sum_{X\leq Y\leq G}\mu(X, Y)=0=\mu(X,G)+\sum_{X\leq Y\lneq G}\mu(X, Y).$$ It follows that $$\mu(X,G)=-\sum_{X\leq Y\lneq G}\mu(X, Y).$$ Substituting this value into the definition of $M'_{G,N}$ gives $$M_{G,N}'=-\sum_{X\leq N} |X|\sum_{X\leq Y\lneq G}\mu(X, Y).$$ By Lemma 3.1, if $X\nleq Y$ then $\mu(X,Y)=0$. So we have \begin{eqnarray*} \sum_{X\leq N} |X|\sum_{X\leq Y\lneq G}\mu(X, Y) &=&\sum_{X\leq N} |X|\sum_{Y\lneq G}\mu(X, Y)\\ &=&\sum_{Y\lneq G} \sum_{X\leq N}|X|\mu(X, Y)\\ &=&\sum_{Y\lneq G}\sum_{X\leq N\cap Y} |X|\mu(X, Y). \end{eqnarray*} Hence, we have $$ M_{G,N}' =-\sum_{Y\lneq G}\sum_{X\leq N\cap Y} |X|\mu(X, Y).$$ \end{proof} \begin{proposition}Let $G$ be a finite group and $N\lneq G$. Then $$M_{G,N}'=-\sum_{\substack{C\leq N \\ C~ is~ cyclic}}\varphi(|C|)-\sum_{\substack{Y\lneq G\\ Y\nleq N}}M'_{Y, Y\cap N}.$$ \end{proposition} \begin{proof}By Proposition 4.1, we have $$M_{G,N}' =-\sum_{Y\lneq G}\sum_{X\leq N\cap Y} |X|\mu(X, Y).$$ We compute $\sum_{X\leq N\cap Y} |X|\mu(X, Y)$ by considering the cases $Y\leq N$ and $Y\nleq N$ separately. \textbf{Case 1.} $Y\leq N$.
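Proposition 4.1 can be sanity-checked numerically. The sketch below (an illustrative example of our own choosing, not from the paper) verifies the identity for $G=C_4$ and $N=C_2$, where subgroups correspond to divisors of $4$ and intersections to greatest common divisors:

```python
from math import gcd

# Subgroups of C_4 <-> divisors of 4; containment is divisibility and the
# intersection of subgroups of orders a and b has order gcd(a, b).
subs = [1, 2, 4]
G, N = 4, 2

def mu(x, y):
    if y % x != 0:
        return 0
    if x == y:
        return 1
    return -sum(mu(x, z) for z in subs if z % x == 0 and z != y and y % z == 0)

# Left-hand side: M'_{G,N} = sum_{X <= N} |X| mu(X, G).
lhs = sum(x * mu(x, G) for x in subs if N % x == 0)

# Right-hand side: -sum_{Y < G} sum_{X <= N cap Y} |X| mu(X, Y).
rhs = -sum(x * mu(x, Y) for Y in subs if Y != G
           for x in subs if gcd(N, Y) % x == 0)

print(lhs, rhs)   # -2 -2
```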
We have \begin{eqnarray*} \sum_{X\leq N\cap Y} |X|\mu(X, Y) &=&\sum_{X\leq Y} |X|\mu(X, Y)\\ &=&|Y|m_{Y,Y}. \end{eqnarray*} If $Y$ is not cyclic, then $m_{Y,Y}=0$; if $Y$ is cyclic, then $m_{Y,Y}=\frac{\varphi(|Y|)}{|Y|}$. Hence, we have $$\sum_{X\leq N\cap Y} |X|\mu(X, Y)=\left\{ \begin{array}{ll} \varphi(|Y|), & \mbox{if}~ Y~is~ cyclic; \\[2ex] 0, &\mbox{if} ~Y~is~not~ cyclic.\end{array}\right.$$ \textbf{Case 2.} $Y\nleq N$. Then we have \begin{eqnarray*} \sum_{X\leq N\cap Y} |X|\mu(X, Y) &=&M_{Y, Y\cap N}' \end{eqnarray*} by the definition of $M_{Y, Y\cap N}'$. Hence, \begin{eqnarray*} M_{G,N}' &=&-\sum_{\substack{Y\lneq G\\ Y\leq N\\ Y~is~cyclic}}(\sum_{X\leq N\cap Y} |X|\mu(X, Y))\\ &~&-\sum_{\substack{Y\lneq G\\ Y\leq N\\ Y~is~not~cyclic}}(\sum_{X\leq N\cap Y} |X|\mu(X, Y))\\ &~&-\sum_{\substack{Y\lneq G\\ Y\nleq N}}\sum_{X\leq N\cap Y} |X|\mu(X, Y)\\ &=&-\sum_{\substack{Y\lneq G\\ Y\leq N\\ Y~is~cyclic}} \varphi(|Y|) -(\sum_{\substack{Y\lneq G\\ Y\leq N\\ Y~is~not~cyclic}} 0) -\sum_{\substack{Y\lneq G\\ Y\nleq N}}M_{Y, Y\cap N}'\\ &=&-\sum_{\substack{C\leq N\\ C~ is~ cyclic}}\varphi(|C|)-\sum_{\substack{Y\lneq G\\ Y\nleq N}}M'_{Y, Y\cap N}. \end{eqnarray*} \end{proof} \begin{remark} To compute $M_{G,N}'$, we need to compute $M_{Y, Y\cap N}'$ for every $Y\lneq G$ with $Y\nleq N$. Since each such $Y$ is a proper subgroup of $G$, this recursion terminates, so $M_{G,N}'$ can be computed in finitely many steps. \end{remark} \section{\bf A class poset of subgroups of $G$} To compute $M_{Y, Y\cap N}'$ for every $Y\lneq G$, we define a new class poset of subgroups of $G$ in this section, and we establish the relation between $M_{G,N}'$ and this class poset. \begin{definition} Let $G$ be a finite group and $N\lneq G$. Let $C$ be a cyclic subgroup of $N$, and define $$\mathfrak{T}_{C}(G, N):=\{X|C\leq X\lneq G, X\nleq N\}.$$ Note that $C\notin \mathfrak{T}_{C}(G, N)$, because $C\leq N$. Clearly $\mathfrak{T}_{C}(G,N)$ is a poset ordered by inclusion.
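Case 1 rests on the identity $\sum_{X\leq Y}|X|\mu(X,Y)=\varphi(|Y|)$ for cyclic $Y$. A quick numerical check (illustrative, not from the paper) for $Y=C_{12}$:

```python
from math import gcd

n = 12
divs = [d for d in range(1, n + 1) if n % d == 0]   # subgroup orders of C_12

def mu(x, y):
    """Moebius function of the divisor lattice, by the standard recursion."""
    if y % x != 0:
        return 0
    if x == y:
        return 1
    return -sum(mu(x, z) for z in divs if z % x == 0 and z != y and y % z == 0)

total = sum(d * mu(d, n) for d in divs)                   # sum |X| mu(X, Y)
phi = sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)   # Euler phi(12)
print(total, phi)   # 4 4
```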
We regard the poset $\mathfrak{T}_{C}(G, N)$ as a category with a unique morphism $Y\rightarrow Z$ whenever $Y$ is a subgroup of $Z$. We write $N(\mathfrak{T}_{C}(G, N))$ for the nerve of the category $\mathfrak{T}_{C}(G, N)$ and $|N(\mathfrak{T}_{C}(G, N))|$ for the geometric realization of $N(\mathfrak{T}_{C}(G, N))$. For more topological background, see \cite{DH}. Let $\sigma$ be a non-degenerate $n$-simplex of the nerve $N(\mathfrak{T}_{C}(G, N))$; that is, $$\sigma: \sigma(0)\to \sigma(1)\to \cdots \to \sigma(n)$$ where $\sigma(i)\in \mathfrak{T}_{C}(G, N)$ and $\sigma(i)\lneq \sigma(i+1)$ for all $i$. \end{definition} Since we use the Euler characteristic of $|N(\mathfrak{T}_{C}(G, N))|$ in Proposition 5.3, we recall its definition as follows: \begin{definition}\cite[\S 22]{M} The Euler characteristic (or Euler number) of a finite complex $K$ is defined, classically, by the equation $$\chi(K) =\sum_{i}(-1)^i \mathrm{rank}(C_i(K)).$$ Said differently, $\chi(K)$ is the alternating sum of the number of simplices of $K$ in each dimension. One can also use the reduced Euler characteristic $\widetilde{\chi}(K)$ of $K$, defined by $\widetilde{\chi}(K)=\chi(K)-1$. \end{definition} \begin{proposition}Let $G$ be a finite group and $N\lneq G$. Then \begin{eqnarray*} M_{G,N}' &=&-\sum_{\substack{C\leq N\\C~is~ cyclic}}\varphi(|C|)-\sum_{\substack{Y\lneq G\\ Y\nleq N}}M'_{Y, Y\cap N}\\ &=&\sum_{\substack{C\leq N\\ C~ is~ cyclic}}\widetilde{\chi}(|N(\mathfrak{T}_{C}(G, N))|)\cdot \varphi(|C|). \end{eqnarray*} Here, $|N(\mathfrak{T}_{C}(G, N))|$ is the simplicial complex associated to the poset $\mathfrak{T}_{C}(G, N)$, and $\widetilde{\chi}(|N(\mathfrak{T}_{C}(G, N))|)$ is the reduced Euler characteristic of the space $|N(\mathfrak{T}_{C}(G, N))|$.
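The reduced Euler characteristic of the nerve of a finite poset can be computed directly by counting chains, since the non-degenerate $i$-simplices are exactly the chains of $i+1$ elements. The sketch below (illustrative, not from the paper) does this for two small posets; the second has a maximum element, so its order complex is a cone and hence contractible, giving $\widetilde{\chi}=0$:

```python
from itertools import combinations

def reduced_euler(elements, lt):
    """Reduced Euler characteristic of the order complex of a finite poset:
    the non-degenerate i-simplices are the chains x_0 < x_1 < ... < x_i."""
    chi = 0
    for r in range(1, len(elements) + 1):
        for sub in combinations(elements, r):
            # a subset spans a simplex iff it is totally ordered in the poset
            if all(lt(x, y) or lt(y, x) for x, y in combinations(sub, 2)):
                chi += (-1) ** (r - 1)
    return chi - 1

lt = lambda x, y: x < y   # proper inclusion of frozensets

# Proper non-empty subsets of {1,2,3}: the order complex is a hexagon,
# i.e. a circle, so chi-tilde = -1.
proper = [frozenset(s) for r in (1, 2) for s in combinations({1, 2, 3}, r)]
print(reduced_euler(proper, lt))                              # -1

# Adding the maximum element {1,2,3} makes the complex a cone, hence
# contractible (the situation exploited in Proposition 5.4): chi-tilde = 0.
print(reduced_euler(proper + [frozenset({1, 2, 3})], lt))     # 0
```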
\end{proposition} \begin{proof}By Proposition 4.2 and Remark 4.3, we have \begin{eqnarray*} M_{Y, Y\cap N}' &=&-\sum_{\substack{C\leq Y\cap N\\ C~ is~ cyclic}}\varphi(|C|)-\sum_{\substack{Y_1\lneq Y\\ Y_1\nleq Y\cap N}}M'_{Y_1, Y_1\cap N}\\ &=&-\sum_{\substack{C\leq Y\cap N\\ C~ is~ cyclic}}\varphi(|C|)-\sum_{\substack{Y_1\lneq Y\\ Y_1\nleq N}}M'_{Y_1, Y_1\cap N}. \end{eqnarray*} Now we can repeat the operation of Proposition 4.2 on $M'_{Y_1, Y_1\cap N}$, by Remark 4.3. So \begin{eqnarray*} &~&\sum_{\substack{Y\lneq G\\ Y\nleq N}}M'_{Y, Y\cap N}\\ &=&\sum_{\substack{Y\lneq G\\ Y\nleq N}}(-\sum_{\substack{C\leq Y\cap N\\ C~is~ cyclic}}\varphi(|C|)- \sum_{\substack{Y_1\lneq Y\\ Y_1\nleq N}}M'_{Y_1, Y_1\cap N})\\ &=&-\sum_{\substack{Y\lneq G\\ Y\nleq N}}(\sum_{\substack{C\leq Y\cap N\\ C~is~ cyclic}}\varphi(|C|)) -\sum_{\substack{Y\lneq G\\ Y\nleq N}}\sum_{\substack{Y_1\lneq Y\\ Y_1\nleq N}}M'_{Y_1, Y_1\cap N}\\ &=&-\sum_{\substack{Y\lneq G\\Y\nleq N}}(\sum_{\substack{C\leq Y\cap N\\ C~is~ cyclic}}\varphi(|C|)) -\sum_{\substack{Y\lneq G\\Y\nleq N}}\sum_{\substack{Y_1\lneq Y\\Y_1\nleq N}}(-\sum_{\substack{C\leq Y_1\cap N\\C~is~ cyclic}}\varphi(|C|) -\sum_{\substack{Y_2\lneq Y_1\\Y_2\nleq N}}M'_{Y_2, Y_2\cap N})\\ &=&-\sum_{\substack{Y\lneq G\\Y\nleq N}}(\sum_{\substack{C\leq Y\cap N\\ C~is~ cyclic}}\varphi(|C|)) +\sum_{\substack{Y\lneq G\\Y\nleq N}}\sum_{\substack{Y_1\lneq Y\\Y_1\nleq N}}(\sum_{\substack{C\leq Y_1\cap N\\C~is~ cyclic}}\varphi(|C|))\\ &~&-\sum_{\substack{Y\lneq G\\Y\nleq N}}\sum_{\substack{Y_1\lneq Y\\Y_1\nleq N}}\sum_{\substack{Y_2\lneq Y_1\\Y_2\nleq N}}M'_{Y_2, Y_2\cap N}\\ &~&~\\ &=&\cdots\cdots\cdots\\ &~&~\\ &=&-\sum_{\substack{C\leq N\\ C~is~ cyclic}}\sum_{i}\sum_{\sigma\in N(\mathfrak{T}_{C}(G, N))_{i}}(-1)^{i}\cdot \varphi(|C|)\\ &=&-\sum_{\substack{C\leq N\\ C~is~ cyclic}}\chi(|N(\mathfrak{T}_{C}(G, N))|)\cdot \varphi(|C|). \end{eqnarray*} Here, $\sigma$ runs over the non-degenerate $i$-simplices of the nerve $N(\mathfrak{T}_{C}(G, N))$.
So, $$ M_{G,N}'=\sum_{\substack{C\leq N\\ C~is~ cyclic}}\widetilde{\chi}(|N(\mathfrak{T}_{C}(G, N))|)\cdot \varphi(|C|). $$ \end{proof} \begin{proposition}Let $G$ be a finite non-cyclic group, and let $N\unlhd G$ be such that $|G:N|=p$ for some prime number $p$. If the space $|N(\mathfrak{T}_{C}(G, N))|$ is contractible for each cyclic subgroup $C$ of $N$, then $m_{G, N}= 0$. \end{proposition} \begin{proof}By Proposition 5.3, we have \begin{eqnarray*} M_{G,N}' &=&-\sum_{\substack{C\leq N\\C~ is~ cyclic}}\varphi(|C|)-\sum_{Y\lneq G,Y\nleq N}M'_{Y, Y\cap N}\\ &=&-\sum_{\substack{C\leq N\\C~ is~ cyclic}}(1-\chi(|N(\mathfrak{T}_{C}(G, N))|))\cdot \varphi(|C|). \end{eqnarray*} Since $|N(\mathfrak{T}_{C}(G, N))|$ is contractible for each cyclic subgroup $C$ of $N$, we have $\chi(|N(\mathfrak{T}_{C}(G, N))|)=1$, and thus $M_{G,N}'=0$. By the definition of $M_{G,N}'$, we know that $$m_{G,N}+\frac{1}{|G|}M_{G,N}'=m_{G,G}=0.$$ So $m_{G,N}=0$. \end{proof} \section{\bf Computing $m_{G, N}$} Let $G$ be a finite group and $N\unlhd G$. We prove the Main Theorem in this section. Recall $$m_{G, N}=\frac{1}{|G|}\sum_{\substack{XN= G\\ X\leq G}} |X|\mu(X, G).$$ For $H\lneq G$, we set $$m_{G,H}':=\frac{1}{|G|}\sum_{X\leq H} |X|\mu(X, G);$$ and $$M_{G,H}':=\sum_{X\leq H} |X|\mu(X, G)=|G|m_{G,H}'.$$ Let $\{H_1, H_2,\cdots, H_n\}$ be the set of all maximal subgroups of $G$ such that $N\leq H_i$. Let $J=\{1,2,\ldots, n\}$ and let $\sigma$ be a non-empty subset of $J$. Set $H_{\sigma}:=\bigcap_{j\in \sigma}H_j$. \begin{theorem}Let $G$ be a finite group, $G$ not cyclic, and $N\unlhd G$.
Then $$ m_{G,N}=\frac{1}{|G|}\sum_{\substack{C\leq G\\ C~ \mathrm{is~ cyclic}}}\sum_{i=1}^n \sum_{\substack{\sigma\subseteq J\\ |\sigma|=i\\ C\leq H_\sigma}} (-1)^i\widetilde{\chi}(|N(\mathfrak{T}_{C}(G, H_\sigma))|)\cdot \varphi(|C|).$$ Here, $|N(\mathfrak{T}_{C}(G,H_\sigma))|$ is the simplicial complex associated to the poset $\mathfrak{T}_{C}(G,H_\sigma)$, and $\widetilde{\chi}(|N(\mathfrak{T}_{C}(G,H_\sigma))|)$ is the reduced Euler characteristic of the space $|N(\mathfrak{T}_{C}(G, H_\sigma))|$. \end{theorem} \begin{proof}Since $G$ is not cyclic, we have \begin{eqnarray*} 0 &=&\frac{1}{|G|}\sum_{X\leq G} |X|\mu(X, G)\\ &=&\frac{1}{|G|}\sum_{\substack{XN= G\\ X\leq G}} |X|\mu(X, G)+\frac{1}{|G|}\sum_{\substack{XN\neq G\\ X\leq G}} |X|\mu(X, G). \end{eqnarray*} If $X\leq G$ and $XN\neq G$, then there exists a maximal subgroup $H$ of $G$ such that $$X\leq XN \leq H\lneq G.$$ Let $\{H_1, H_2,\cdots, H_n\}$ be the set of all maximal subgroups of $G$ such that $N\leq H_i$. It follows that $$\sum_{\substack{XN\neq G\\ X\leq G}} |X|\mu(X, G)= \sum_{\substack{X\leq H_i\\ \mathrm{for~some}~ i} }|X|\mu(X,G).$$ Now, we focus on $$\sum_{\substack{X\leq H_i\\ \mathrm{for~some}~ i}}|X|\mu(X,G).$$ By the inclusion-exclusion principle, we see that \begin{eqnarray*} &~&\sum_{\substack{X\leq H_i\\ \mathrm{for~some}~ i} }|X|\mu(X,G)\\ &=&\sum_{i=1}^n M'_{G, H_i}-\sum_{1\leq i< j\leq n}M'_{G, H_i\cap H_j}\\ &~&+\sum_{1\leq i< j< k\leq n}M'_{G, H_i\cap H_j\cap H_k}+\cdots+(-1)^{n+1}\cdot M'_{G, \bigcap_{i=1}^{n}H_i}\\ &=&\sum_{i=1}^n\sum_{\substack{C\leq H_i\\ C~ \mathrm{is~ cyclic}}}\widetilde{\chi}(|N(\mathfrak{T}_{C}(G, H_i))|)\cdot \varphi(|C|)+\cdots+\\ &~&(-1)^{n+1}\sum_{\substack{C\leq \bigcap_{i=1}^{n}H_i\\ C~ \mathrm{is~ cyclic}}}\widetilde{\chi}(|N(\mathfrak{T}_{C}(G, \bigcap_{i=1}^{n}H_i))|)\cdot \varphi(|C|)\\ &=&-\sum_{\substack{C\leq G\\ C~ \mathrm{is~ cyclic}}}\sum_{i=1}^n \sum_{\substack{\sigma\subseteq J\\ |\sigma|=i\\ C\leq H_\sigma}} (-1)^i\widetilde{\chi}(|N(\mathfrak{T}_{C}(G,
H_\sigma))|)\cdot \varphi(|C|). \end{eqnarray*} Hence, we have $$ m_{G,N}=\frac{1}{|G|}\sum_{\substack{C\leq G\\ C~ \mathrm{is~ cyclic}}}\sum_{i=1}^n \sum_{\substack{\sigma\subseteq J\\ |\sigma|=i\\ C\leq H_\sigma}} (-1)^i\widetilde{\chi}(|N(\mathfrak{T}_{C}(G, H_\sigma))|)\cdot \varphi(|C|).$$ \end{proof} \begin{remark} Since $$m_{G, N}=\frac{1}{|G|}\sum_{\substack{XN= G\\ X\leq G}} |X|\mu(X, G)=-\frac{1}{|G|} \sum_{\substack{X\leq H~ \mathrm{for~some}\\ N\leq H\lneq G} }|X|\mu(X,G),$$ we see that $m_{G,N}$ depends on the subgroups $H$ with $N\leq H\lneq G$. This may be one reason why there exists a relation between $G$ and $\beta(G)$. \end{remark} \textbf{ACKNOWLEDGMENTS}\hfil\break The authors would like to thank Prof. S. Bouc for numerous discussions in Beijing in Oct. 2014. The second author would like to thank Prof. C. Broto for his constant encouragement in Barcelona, Spain. The authors also thank the reviewer of \cite{LXZ}; the proof of Lemma 3.1 and the formula $(\ast)$ in Proposition 4.1 are due to the reviewer.
\section{Introduction} \vspace{-1.5mm} Capturing an image is not an instant process; to capture enough photons, the photosensitive elements of a camera have to be exposed to light for a certain interval of time, called the exposure time. Therefore, if during this interval an object is moving in the observed scene or the camera is undergoing an arbitrary motion, the resulting image will contain a blurring artifact known as \emph{motion blur}. In general, motion blur is an unwanted artifact in vision applications, \emph{e.g. } image editing \cite{gunturk2012image}, visual SLAM \cite{lee2011simultaneous} and 3D reconstruction \cite{seok2013dense}, as it degrades the visual quality of images. To cope with this type of artifact, image deblurring aims to restore a sharp image from a blurred image. This problem is known to be ill-posed since the blur kernel used for deconvolution is generally unknown. Earlier studies assume uniform blur over the image to simplify the estimation of the single deconvolution kernel used to remove the blur \cite{fergus2006removing,cho2009fast,levin2009understanding}. Although these methods perform deblurring under the uniform-blur assumption, the assumption is often violated in practice. For instance, when the blur is caused by out-of-plane camera rotation, the blur pattern becomes spatially variant. Moreover, the problem is more complex when objects in a scene are moving, \emph{i.e. } dynamic blur. While the previous literature focuses on recovering a sharp image from a blurred image, we tackle a more challenging task, \emph{i.e. } video restoration from a blurred image. Restoring the underlying image sequence of a blurred image requires predicting both content and motion. We formulate video restoration from a blurred image as an inverse problem in which a clean sequence of images and their motion are latent factors, and the blurred image is the observation.
Some previous deblurring approaches \cite{Kim_2014_CVPR,Zhang_2015_CVPR,sellentECCv2016,wenqi17iccv,park17iccv,argaw2021optical} also estimate the underlying motion in a blurred image; however, their goal remains single-frame restoration. Recently, Jin \emph{et al. } \cite{Jin_2018_CVPR} proposed to extract video frames from a single motion-blurred image. Their approach is close to an image translation model and does not infer the underlying motion between the latent frames. Purohit \emph{et al. } \cite{purohit2019bringing} addressed this issue by estimating pixel-level motion from a given blurred input. However, their model is still prone to sequential error propagation, as frames are predicted in a sequential manner using a deblurred middle frame. Our work differs from previous works in two aspects. First, we use a single network to restore the underlying video frames from a single motion-blurred image in an end-to-end manner, while \cite{Jin_2018_CVPR,purohit2019bringing} jointly optimize multiple networks for the task. Second, our approach does not explicitly depend on a deblurred middle frame to restore the non-middle frames and is therefore relatively robust to the sequential error propagation caused by an erroneous middle frame. In this paper, we propose a novel framework to generate a clean sequence of images from a single motion-blurred image. Our framework is based on a single encoder-decoder structure with Spatial Transformer Network (STN) modules and Local Warping (LW) layers to restore an image sequence and its underlying motion. Specifically, a single encoder extracts intermediate features, which are passed to multiple decoders, together with the motion predicted by the STN and LW modules, to generate a sequence of deblurred images. We evaluate our model on two types of motion blur. For rotational blur, which is caused by abrupt camera motion, we generated a synthetic dataset from panoramic images \cite{jxiaoCVPR2012}.
For dynamic blur caused by fast-moving objects in a scene, we used a high-speed video dataset \cite{Nah_2017_CVPR}. The proposed model is evaluated on the panorama and high-speed video datasets under various motion patterns. Both the quantitative metrics and the qualitative results highlight that our method is more robust and performs favorably against the competing approach~\cite{Jin_2018_CVPR}. We also provide a comparison with single-image deblurring approaches on the GoPro benchmark dataset \cite{Nah_2017_CVPR} to evaluate the performance of the middle frame prediction. For further investigation, we demonstrate the transferability of our model by cross-dataset evaluation. In short, our contributions are as follows. 1) We propose a novel unified architecture to restore clean video frames from a single motion-blurred image in an end-to-end manner. 2) A simple yet effective mechanism is presented to generate a realistic rotational blur dataset from panoramic images. 3) We carefully design loss terms for stable network training and perform thorough experiments to analyze the transferability and flexibility of the proposed architecture. 4) Our model quantitatively and qualitatively performs favorably against the competing approaches. \vspace{-2mm} \section{Related Works} \vspace{-2mm} \label{sec:related} \paragraph{Image deblurring.} Image deblurring is an ill-posed inverse problem when the blur kernel is unknown, \emph{i.e. } a blind deconvolution problem, as different latent images can be transformed into the same blurred image depending on the blur kernel. Early deblurring studies \cite{cho2009fast,fergus2006removing,pan2014deblurring,michaeli2014blind,pan2016blind,chakrabarti2016neural,dong2017blind,yan2017image} assume a single blur kernel that is applied to the image globally. The restoration of blurred images is often modeled as a maximization problem of probabilistic models~\cite{cho2009fast,fergus2006removing}.
To narrow down the ambiguity of blur kernel estimation, natural image priors~\cite{michaeli2014blind,pan2014deblurring,pan2016blind,yan2017image} are exploited. While single blur kernel estimation approaches are effective when the blur kernel is shift-invariant, they fail when the blur is not spatially uniform. To restore images affected by motion blur from pure rotations, Dong \emph{et al. } \cite{dong2017blind} use the geometric information of the camera motion as a prior to recover the non-uniform blur model. Recently, deep network based methods \cite{Nah_2017_CVPR,Zhang_2018_CVPR} have been proposed to handle general blur patterns without the uniform blur assumption. Nah \emph{et al. } propose multi-scale deep networks with a multi-scale loss that mimics coarse-to-fine approaches to restore sharp images from non-uniformly blurred images. Zhang \emph{et al. } proposed spatially variant neural networks to learn spatially variant kernels. However, the approaches addressed here only recover a single image, while our goal is to recover the underlying sequence of frames from a given blurred image. \vspace{-3mm} \paragraph{Sequence restoration from a blurred image.} Recently, Jin \emph{et al. }\cite{Jin_2018_CVPR} proposed to extract a video sequence from a single motion-blurred image using multiple deep networks. They showed that deep networks can successfully generate an image sequence from a blurred image; however, a few limitations remain. Their proposed framework consists of multiple networks, each of which is specialized to predict a specific frame in a sequence. The networks are trained separately and sequentially, starting from the middle frame and proceeding to adjacent frames, taking previously predicted frames as inputs. As a result, the non-middle frame prediction heavily relies on previously predicted frames, including the middle frame itself; therefore, when the middle frame is erroneous, the error propagates across frames. Purohit \emph{et al. 
}\cite{purohit2019bringing} proposed a two-step strategy to generate a video from a motion-blurred image using three complementary networks. They used a video autoencoder to learn motion and frame generation from clean frames as a pretraining phase. Later, they introduced a motion disentangle network to extract motion from the blurred image. They also used an independent deblurring network, as their approach requires a clean middle frame generated from the blurred image in advance. Although their approach takes motion information into account, it generates frames sequentially, starting from the middle frame and proceeding to adjacent frames, which results in error propagation just as in \cite{Jin_2018_CVPR}. Unlike the previous works, our approach runs in an end-to-end manner within a single training stage and without error propagation across frames. \section{Dataset} \label{sec:dataset} \vspace{-1.5mm} Collecting a large number of natural motion-blurred images is a daunting task. Hence, a common practice in computer vision research is to generate blurry images by combining a sequence of sharp images using various approaches, ranging from simple averaging~\cite{Nah_2017_CVPR,Jin_2018_CVPR} to learnable methods~\cite{BrooksBarronCVPR2019}. The sources of motion blur in an image can be grouped into two main categories: rapid camera motion (camera shake) and dynamic motion of objects in the scene. In this section, we briefly explain how we generate a blurry image dataset by considering each case individually.
\vspace{-2mm} \begin{figure*}[!t] \begin{center} \setlength{\tabcolsep}{0.8pt} \resizebox{1.0\linewidth}{!}{ \begin{tabular}{cccc} \includegraphics[width=0.17\linewidth]{figures/pano.png} & \includegraphics[width=0.1\linewidth]{figures/sphere.png} & \includegraphics[width=0.1\linewidth]{figures/overlap.PNG} & \includegraphics[width=0.075\linewidth]{figures/blur.png} \vspace{-2mm} \\ \tiny(a) & \tiny(b) & \tiny(c) & \tiny(d) \end{tabular}} \end{center} \vspace{-5mm} \caption{\textbf{Rotational blur dataset generation}. (a) input panorama image, (b) panorama projection on a unit sphere, (c) intermediate frames between the initial and final images, (d) blurred image obtained by averaging the captured frames.} \label{fig:pano} \vspace{-4mm} \end{figure*} \paragraph{Rotational blur (synthetic).} In order to generate a rotation-blurred image dataset, we use the SUN360 panorama dataset~\cite{jxiaoCVPR2012}. This dataset provides various panoramas with a $360^{\circ}$ field of view. Hence, a virtual camera can be modeled to point at different orientations to represent camera rotation in $SO(3)$. Given a panorama $P$ of size $H \times W$, we developed a simple yet effective framework to generate blurred images. First, the panorama is projected onto a unit sphere by linearly mapping each pixel coordinate $(x,y) \in P$ into spherical coordinates $(\theta,\phi)$ with $\theta\in(0,2\pi)$ and $\phi \in (-\pi/2, \pi/2)$. Then, a synthetic image can be captured via a virtual camera by re-projecting the 3D points on the sphere onto an image plane, as briefly discussed in~\cite{icraMeiR07} and~\cite{oleksandrcvmp}. Using this procedure, we first capture an image by positioning the virtual camera at an arbitrary orientation. We call the image generated at this orientation the \textit{initial image}.
Then, we rotate the camera by a random rotation matrix (with $\beta = (\beta_x, \beta_y,\beta_z)$ its Euler angle representation) and capture a second image at the new camera position, called the \textit{final image}. We finally use a quaternion spherical linear interpolation technique (Slerp)~\cite{slerpref} to capture intermediate frames between the initial and final images. All the resulting images (initial, final and intermediate frames) are then averaged to generate a blurry image. The camera rotation angle is uniformly sampled from [$-10^{\circ}, 10^{\circ}$]. In order to generate a realistic blurred image, the number of intermediate images has to be adjusted automatically depending on the rotation magnitude between the initial and final frames. Therefore, we use a simple linear relationship between the number of frames to be generated ($n$) and the rotation magnitude as follows: $n = c + \frac{1}{3} \| \mathbf{\beta} \|$, where $c$ is a constant and $\| \mathbf{\beta} \|$ is the magnitude of $\beta$. In this manner, we use $1000$ panoramic images from which we generate $26,000$ training and $3,200$ test images of size $128\times128$px. The dataset generation process is summarized in \Fref{fig:pano}. \vspace{-3mm} \paragraph{Dynamic motion (real).} In order to generate more realistic and generic (arbitrary camera motions and dynamic scenes) blurred images, we take advantage of the GoPro high-speed video dataset~\cite{Nah_2017_CVPR}. This dataset provides 22 training and 11 test scenes, each scene containing frames of size $1280\times720$px. A blurry image is generated by averaging $n$ consecutive frames~\cite{Nah_2017_CVPR,Jin_2018_CVPR}. In our experiments, we fixed $n=7$ and generated $20,000$ training images by randomly cropping images of size $256\times256$px. We also generated $2000$ test images from the test videos by averaging 7 consecutive frames.
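The blur synthesis described above can be sketched in a few lines. In the sketch below (illustrative, not from the paper), random arrays stand in for the sharp high-speed frames, and the constant $c$ in the frame-count formula is a hypothetical value, since no value is stated in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dynamic blur: a blurry image is the per-pixel average of n consecutive
# sharp frames (random arrays stand in for real high-speed video frames).
n = 7
frames = rng.random((n, 256, 256, 3))    # n sharp frames, values in [0, 1]
blurry = frames.mean(axis=0)             # simulated motion-blurred image
middle = frames[n // 2]                  # middle latent frame to be recovered

# Rotational blur: the number of interpolated frames grows linearly with the
# rotation magnitude, n = c + ||beta|| / 3 (c is a hypothetical value here).
c = 10.0
beta = np.array([4.0, -7.0, 2.0])        # Euler angles in degrees
n_rot = c + np.linalg.norm(beta) / 3.0

print(blurry.shape, round(n_rot, 2))
```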
\section{Method} \label{sec:method} \vspace{-2mm} Given a blurry image $I_b$ synthesized by averaging $n$ latent frames, deblurring approaches predict the middle latent frame $I_m$. In this work, we restore the entire latent frame sequence $\{I_{m-\frac{n}{2}},\ldots,I_{m-1},I_m,I_{m+1},\ldots,I_{m+\frac{n}{2}}\}$, where $I_m$ is the deblurred middle frame and $\{I_{j}\}_{j=m-\frac{n}{2}}^{m+\frac{n}{2}}$ with $j \neq m$ are the recovered non-middle latent frames. The input blur is used as a motion cue to decode the non-middle latent frame features (with respect to the middle latent frame) using transformer networks, as shown in \Fref{fig:model}. \begin{figure*}[!t] \centering \includegraphics[width=0.9\textwidth,trim={12.2cm 7cm 16.5cm 9.0cm},clip]{figures/restoration_model.pdf} \caption{\textbf{Overview of our network}. (a) The middle frame is predicted using an encoder-decoder structure. The non-middle frames are reconstructed by transforming the multi-layer features of the middle frame. (b) The feature transformer network (FTN) transforms features locally via local warping (LW) and globally via a spatial transformer network (STN). The image transformer network (ITN) transforms the predicted middle frame via an STN. Finally, the predicted frames are passed through a refining network.} \label{fig:model} \vspace{-3mm} \end{figure*} \subsection{Middle latent frame} The middle latent frame $I_m$ is reconstructed using a U-net~\cite{RFB15a}-like network. The \textit{encoder} contains five convolutional blocks, each containing two convolution layers with a spatial kernel size of $3\times3$ and stride sizes of 2 and 1, respectively. It outputs encoded features at different feature levels, as shown in \Fref{fig:model}a. The encoded features are then decoded to predict the middle latent frame. The middle frame decoder network also contains five convolutional blocks to upsample features and to predict images at different scales.
In each block, a feature is first upscaled using a deconvolution layer with a kernel size of $4\times4$ and a stride of 2. The image predicted in the previous block is also upscaled in the same manner. The upsampled feature and its respective image are then concatenated channel-wise with the corresponding feature from the encoder (skip connection, as shown in \Fref{fig:model}a) and passed through five convolution layers with dense connections to output a feature, which is used to predict an image at the current block. In this manner, features and images are successively upsampled to predict a full-scale middle frame. Along with the last feature map from the decoder, the predicted image is finally passed through a \textit{refining} convolutional block. The purpose of this network is to further refine the predicted frame with contextual information by effectively enlarging the receptive field size of the network. \subsection{Non-middle latent frame} \vspace{-0.5mm} The non-middle latent frames are reconstructed from the encoded features via transformations learned by feature transformer networks (FTN) and image transformer networks (ITN), as shown in \Fref{fig:model}b. \vspace{-2.5mm} \paragraph{Feature transformer network.} The feature transformer network takes an encoded feature as input and transforms it into a non-middle latent feature in accordance with the learned motion. It consists of a spatial transformer network (STN) \cite{jaderberg2015spatial} and a local warping (LW) layer. The STNs learn to estimate global transformation parameters $\theta_{[R|T]}$ from the encoded features of a motion-blurred input and transform the features accordingly. In order to compensate for locally varying motions, we designed a local warping network. This network is conditioned on the input feature like the STN; however, instead of predicting global transformation parameters, it predicts pixel-wise displacements, \emph{i.e. } a \textit{motion flow}.
Given an input feature $U\in \mathbb{R} ^{H\times W\times C}$, the local warping network outputs a motion flow of size $H\times W\times 2$. By warping the input feature with the predicted motion flow, we obtain a locally transformed feature, which is concatenated with the globally transformed feature as shown in \Eref{eqn:decoder1}: \begin{equation} U^l_t = \mathrm{STN}^l (U_e^l) \oplus \mathrm{LW}^l (U_e^l), \qquad \label{eqn:decoder1} \end{equation} where $l=\{1,\; ...\; ,k\}$ is an index over $k$ feature levels, $U_e$ is an encoded feature and $U_t$ is a transformed feature. \vspace{-2mm} \paragraph{Image transformer network.} The middle frame decoder predicts frames at different feature levels. To guide the reconstruction of the non-middle latent frames with respect to the middle frame, we use STNs to spatially transform the estimated middle frames according to the learned inter-frame motion, \emph{i.e. } $I^l_t = \mathrm{STN}(I_m^l)$, where $I_m$ is the predicted middle frame and $I_t$ is the transformed image (see \Fref{fig:model}b). FTNs decode non-middle latent features from encoded features via learned local and non-local motions, while ITNs globally capture the motion of the non-middle latent frame relative to the middle latent frame. The outputs of both networks are aggregated channel-wise and passed through a decoder to predict a non-middle frame (\Fref{fig:model}b). We also feed the encoded feature into the non-middle frame decoder in order to guide the decoder to learn the spatial relation between the middle and the non-middle frame, as shown in \Eref{eqn:decoder2}: \vspace{-1mm} \begin{equation} I_p^l = \mathcal{D}^l(U_t^l \oplus I_t^l \oplus U_e^l) \label{eqn:decoder2} \end{equation} where $p = \{m-\frac{n}{2},\ldots,m-1,m+1,\ldots,m+\frac{n}{2}\}$ is an index over the non-middle latent frames and $\mathcal{D}$ is a non-middle frame decoder.
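The local warping operation can be illustrated with a minimal sketch: sample the feature map $U$ at positions displaced by the predicted flow. The real LW layer uses differentiable bilinear sampling, as in the STN; nearest-neighbour sampling is used here (an assumption made only to keep the sketch short):

```python
import numpy as np

def local_warp(U, flow):
    """Warp a feature map U (H, W, C) by a per-pixel flow (H, W, 2) giving,
    for every output location, the (dy, dx) offset of its source sample."""
    H, W, _ = U.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing='ij')
    src_y = np.clip(np.rint(ys + flow[..., 0]), 0, H - 1).astype(int)
    src_x = np.clip(np.rint(xs + flow[..., 1]), 0, W - 1).astype(int)
    return U[src_y, src_x]

U = np.arange(6, dtype=float).reshape(2, 3, 1)   # a tiny 2x3 one-channel map
zero_flow = np.zeros((2, 3, 2))
print((local_warp(U, zero_flow) == U).all())     # True: zero flow is identity

flow = np.zeros((2, 3, 2))
flow[..., 1] = 1.0                               # sample one pixel to the right
print(local_warp(U, flow)[..., 0])               # [[1. 2. 2.] [4. 5. 5.]]
```

Out-of-range source positions are simply clamped to the border here; a trained layer would instead rely on the sampler's own boundary handling.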
Given ground truth non-middle frames during training, our model learns the transformation parameters to be applied to the encoded features of a blurry input at different scales in order to output the desired non-middle frames. The fact that a unique transformer network is applied at each feature and image scale gives the model the capacity to learn various types of transformations, hence making it robust to different blur patterns, including large blurs. \subsection{Loss functions} \vspace{-0.5mm} To ensure stable training and to restore clean latent frame sequences in a temporally coherent manner, we carefully designed the following loss functions. \vspace{-2mm} \paragraph{Photometric loss.} For sharp video frame reconstruction, we trained our network with a weighted multi-scale photometric loss between the images predicted by the decoder network and the ground truth images. Bilinear downsampling is used to resize the ground truth image to the corresponding predicted frame size at each scale. Let $\{\hat{y}\}_{l=1}^k$ denote a set of predicted images from the smallest size ($\hat{y}_1$) to the full scale ($\hat{y}_k$), and let $\{y\}_{l=1}^k$ represent a set of downsampled ground truth images, where $y_k$ is the full scale ground truth image. For training a model predicting a sequence of $n$ frames from a single blurry image, we compute the multi-scale photometric loss as follows: \vspace{-2mm} \begin{equation} {\mathcal{L}}_{mp} = \sum_{j = 1}^{n}\sum_{l=1}^{k}{{\mathbf{w}}_l \cdot\big|{\mathbf{y}}_{j,l} -\hat{{\mathbf{y}}}_{j,l}\big|_1} \label{eqn:pml} \vspace{-2mm} \end{equation} where ${\mathbf{w}}_l$ is the loss weight coefficient for feature level $l$ and $j$ is an index over the frame sequence. \vspace{-2mm} \paragraph{Transformation consistency loss.} We used individual transformer networks at each feature level when predicting the non-middle frames.
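The multi-scale photometric loss can be sketched as follows (illustrative, not the paper's implementation; box downsampling and a mean reduction stand in for the bilinear resizing and the $\ell_1$ sum of \Eref{eqn:pml}):

```python
import numpy as np

def downsample(img, factor):
    """Box downsampling (stands in for the paper's bilinear resizing)."""
    H, W = img.shape
    return img.reshape(H // factor, factor, W // factor, factor).mean(axis=(1, 3))

def multiscale_photometric(preds, gts, weights):
    """preds[j][l]: prediction for frame j at scale l (coarse to fine);
    gts[j]: full-resolution ground truth for frame j."""
    k = len(weights)
    loss = 0.0
    for j, frame_preds in enumerate(preds):
        for l, pred in enumerate(frame_preds):
            target = downsample(gts[j], 2 ** (k - 1 - l))
            loss += weights[l] * np.abs(target - pred).mean()
    return loss

rng = np.random.default_rng(0)
gts = [rng.random((8, 8)) for _ in range(3)]     # n = 3 frames, k = 3 scales
k, weights = 3, [0.25, 0.5, 1.0]
perfect = [[downsample(f, 2 ** (k - 1 - l)) for l in range(k)] for f in gts]
print(multiscale_photometric(perfect, gts, weights))   # 0.0 for exact predictions
```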
This augments our model with the capacity to learn transformations at different levels, making it robust to various blur patterns. However, we expect the transformations at different scales to be aligned for successfully reconstructing temporally consistent non-middle frames. Especially at the initial stages of training, where the transformer parameters are random, it is beneficial for our model to understand the relationship between the transformations across different frame levels. In order to impose this notion on our model and facilitate smooth training, we propose the \textit{transformation consistency loss}. Let $\{\mbox{\boldmath $\theta$}_l\}_{l = 1}^k$ be the set of predicted transformation parameters at different scales. The transformation consistency loss for predicting $n-1$ non-middle frames can be defined as the term ${\mathcal{L}}_{tc}$ in \Eref{eqn:tc}, where $|.|_2$ is an $\ell_2$ loss between the transformation parameters. \vspace{-4mm} \begin{equation} {\mathcal{L}}_{tc} = \sum_{j = 1}^{n-1}\sum_{l=2}^{k}{\big|\mbox{\boldmath $\theta$}_{j,l} - \mbox{\boldmath $\theta$}_{j,l-1}\big|_2} \label{eqn:tc} \vspace{-3mm} \end{equation} \paragraph{Penalty term.} Predicting multiple frames from a single blurry image can be problematic when the model fails to learn any type of transformation and simply replicates the middle frame prediction as the non-middle frames. In order to remedy this issue, we design a penalty term to enforce symmetric diversity among the generated images. This is accomplished by explicitly maximizing the sum of absolute differences (SAD), \emph{i.e. }~minimizing the negative SAD, between a predicted frame and its time-symmetric (about the middle frame) ground truth frame. For example, when predicting seven frames $\{I_1,\ldots,I_4,\ldots,I_7\}$, we enforce the predicted image $I_1$ to be different content-wise from the ground truth image $I_7$ and vice versa.
The penalty is imposed in a symmetric manner (as a matter of design choice inspired by the network architecture) such that the model learns to be sensitive to smaller transformations close to the middle frame as well as larger transformations at the end frames. Given a predicted frame $\hat y_i$ and the corresponding time-symmetric ground truth $y_{n+1-i}$, the penalty term is computed as the term ${\mathcal{L}}_p$ in \Eref{eqn:pt}, where $m$ is the middle frame index and $n$ is the total number of frames. \vspace{-2mm} \begin{equation} {\mathcal{L}}_p = -\sum_{j = 1,j \neq m}^{n}\big|y_{n+1-j} - \hat y_j\big|_1 \label{eqn:pt} \end{equation} The final training loss function is defined as follows, \begin{equation} {\mathcal{L}} = {\mathcal{L}}_{mp} + \lambda_{tc}{\mathcal{L}}_{tc} + \lambda_p {\mathcal{L}}_{p}, \end{equation} where $\lambda_{tc}$ and $\lambda_{p}$ are weight coefficients for the transformation consistency loss and the penalty term, respectively. \vspace{-1.5mm} \paragraph{Temporal ambiguity and network training.} The task at hand has two main ambiguities: (i) \emph{temporal shuffling} and (ii) \emph{reverse ordering}. As explained in section 3, motion blur is the result of an averaging process, and restoring a temporally consistent (unshuffled) sharp frame sequence from a given motion-blurred input is a non-trivial task as the averaging destroys the temporal order. Jin \emph{et al. } \cite{Jin_2018_CVPR} mention that a photometric loss is not a sufficient constraint to make their network converge. Hence, they propose a pair-wise order-invariant loss to train their network. Purohit \emph{et al. } \cite{purohit2019bringing} also use the same loss function to fine-tune the recurrent video decoder in their network. We experimentally find that a multi-scale photometric loss is a sufficient constraint to train our network. We further impose more constraints using other loss terms to improve performance (see Ablation studies).
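The three loss terms above can be sketched numerically as follows. This is a toy NumPy illustration of Eqs. (\ref{eqn:pml}), (\ref{eqn:tc}) and (\ref{eqn:pt}): the data layout, the per-scale weights, and the $\lambda$ values are illustrative assumptions, not the paper's training configuration.

```python
import numpy as np

def multiscale_photometric(preds, gts, weights):
    # Eq. (pml): preds/gts are nested lists [frame][scale]; weights are per-scale.
    return sum(w * np.abs(g - p).sum()
               for pred, gt in zip(preds, gts)
               for w, p, g in zip(weights, pred, gt))

def transformation_consistency(thetas):
    # Eq. (tc): l2 distance between transformation parameters at adjacent scales.
    return sum(np.linalg.norm(th[l] - th[l - 1])
               for th in thetas for l in range(1, len(th)))

def penalty_term(preds_full, gts_full, mid):
    # Eq. (pt): negative SAD between each non-middle prediction and its
    # time-symmetric ground truth (about the middle frame).
    n = len(preds_full)
    return -sum(np.abs(gts_full[n - 1 - j] - preds_full[j]).sum()
                for j in range(n) if j != mid)

# toy example: 3 frames, 2 scales (2x2 and 4x4 images)
pred = [[np.zeros((2, 2)), np.zeros((4, 4))] for _ in range(3)]
gt = [[np.ones((2, 2)), np.ones((4, 4))] for _ in range(3)]
l_mp = multiscale_photometric(pred, gt, weights=[0.5, 1.0])
thetas = [[np.array([1.0, 0.0]), np.array([1.0, 0.0])] for _ in range(2)]
l_tc = transformation_consistency(thetas)
l_p = penalty_term([p[-1] for p in pred], [g[-1] for g in gt], mid=1)
loss = l_mp + 0.1 * l_tc + 0.1 * l_p  # assumed lambda weights
print(l_mp, l_tc, l_p)
```

Note how the penalty is negative by construction: minimizing the total loss pushes each non-middle prediction away from its time-symmetric ground truth, which is what prevents the degenerate "replicate the middle frame" solution.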
By design, our model allows motions to be learned in a symmetric manner (about the middle frame), with transformer networks close to the middle frame decoding smaller motions and those further from the middle frame decoding larger motions. This notion is enforced by the symmetric penalty term and the transformation consistency loss during training. The fact that our model is optimized in a joint manner allows frames to be reconstructed in a motion-guided sequence. Other than \emph{temporal shuffling}, another issue is \emph{reverse ordering}. Given a single motion-blurred input, recovering the ground truth order is a highly ill-posed problem since reversely ordered frames result in the same motion-blurred image. Neither our work nor previous works \cite{Jin_2018_CVPR,purohit2019bringing} are capable of predicting the right order. Hence, we evaluate frame reconstructions using both the ground truth order and its reverse order, then report the higher metric in the experiment section. A recent work by Argaw \emph{et al. }~\cite{argaw2021motionblurred} proposed an optical flow based approach to reconstruct sharp frames in a temporally ordered manner; however, their approach requires at least two blurry frames. \begin{table}[!t] \setlength{\tabcolsep}{7pt} \caption{Quantitative evaluation on Panorama blur dataset} \label{tbl:quant1} \centering \begin{tabular}{l|lccc} \toprule & Methods & $F_i$ & $F_m$ & $F_f$ \\ \midrule PSNR & Jin \emph{et al. } & 22.007 & 22.493 & 22.157 \\ & Ours & \textbf{23.693} & \textbf{24.049} & \textbf{23.874} \\ \midrule SSIM & Jin \emph{et al. } & 0.572 & 0.621 & 0.589\\ & Ours & \textbf{0.699} & \textbf{0.716}& \textbf{0.704}\\ \bottomrule \end{tabular} \vspace{-1.5mm} \end{table} \vspace{-1.5mm} \section{Experiment} \vspace{-1mm} \label{sec:experiment} \paragraph{Implementation details.} Our model is implemented using PyTorch \cite{paszke2017automatic}.
We chose Adam \cite{KingmaB14} as the optimizer with $\beta_1$ and $\beta_2$ fixed to 0.9 and 0.999, respectively. On our synthetic blur dataset, we train the model using images of size $128\times128$px and a mini-batch size of 8 to predict the initial, middle and final frames. A mini-batch size of 4 and an input size of $256\times256$px are used to predict sequences of frames when training on the high speed video dataset. In all experiments, we train our model for 80 epochs. We set the learning rate to $1\times10^{-4}$ at the start of training and decay it by half at epochs 40 and 60. All the training images are cropped from the original resolution images without resizing. \subsection{Video restoration results} \vspace{-0.5mm} In this section, we analyze the performance of our model for sequential frame restoration qualitatively and quantitatively on both camera shake blurs generated from panoramic scenes and dynamic blurs obtained from high speed videos. \vspace{-2mm} \paragraph{Quantitative evaluation.} We report test results using peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. To evaluate the quality of the generated images independently of the order-estimation ambiguity due to \emph{reverse ordering}, we report the higher PSNR/SSIM metric of either the ground truth order or the reverse order of frames \emph{i.e. } $\textrm{max}\{\textrm{PSNR/SSIM}(F_i \rightarrow F_f), \textrm{PSNR/SSIM}(F_f \rightarrow F_i)\}$, where $F_i$, $F_m$ and $F_f$ refer to the initial, middle and final frames in the restored sequence, respectively. We compared our approach with previous work \cite{Jin_2018_CVPR} on both rotational and dynamic blur datasets as tabulated in \Tref{tbl:quant1} and \Tref{tbl:quant2}. On the Panorama blur dataset, our model outperforms Jin \emph{et al. } by 1.65 dB on average. The middle and non-middle frame accuracies are similar on average (see \Tref{tbl:quant1}) mainly because rotational blurs are static blurs with uniform camera motion.
Hence, it is relatively easier for the network to infer the global motion and decode frames accordingly. In contrast, the GoPro blur dataset contains arbitrary camera motions and dynamic scenes, and hence decoding frames requires inferring non-uniform global and local motions between frames (with the middle frame as a reference). Therefore, the network performs reliably for middle frame prediction but less so for the end frames due to the randomness of the motions (see \Tref{tbl:quant2}). On the GoPro blur dataset, our model outperforms Jin \emph{et al. } by 2.51 dB on middle frame prediction and by 3.69 dB on non-middle frame predictions. This highlights the advantage of adopting a motion-based approach to leverage blur as a motion cue to decode latent frames rather than extracting frames sequentially in a generic manner. The performance gap between the middle frame and non-middle frames is relatively larger in Jin \emph{et al. } than in our method. This is due to the sequential prediction in Jin \emph{et al. }, which makes non-middle frame prediction heavily dependent on the generated middle frame, resulting in error propagation. As stated in~\cite{Jin_2018_CVPR}, this limitation is particularly problematic when a heavy blur affects the input image since the middle frame prediction becomes less reliable. Our approach is relatively robust to heavy blur as the proposed model generates frames independently from multiple decoders, therefore the error is not propagated (see \Fref{fig:errorprop}). We observed lower quantitative numbers in the panorama scenario than in the high speed video scenario for both Jin \emph{et al. } and our model. This is most likely because the panorama GT images are relatively sharper, while the high speed videos contain less sharp GT frames due to dynamic motion and short exposure times.
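The order-invariant evaluation described above (the reported metric is the maximum over the ground truth ordering and its reverse, since both orderings produce the same blur) can be sketched as follows. Averaging the per-frame PSNR over the sequence is an assumption about the aggregation; the paper reports per-frame-position metrics.

```python
import numpy as np

def psnr(pred, gt, peak=1.0):
    # Peak signal-to-noise ratio in dB for images in [0, peak].
    mse = np.mean((pred - gt) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def order_invariant_psnr(pred_seq, gt_seq):
    # max{PSNR(F_i -> F_f), PSNR(F_f -> F_i)}: score the prediction against the
    # ground truth sequence and against its reverse, keep the higher value.
    forward = np.mean([psnr(p, g) for p, g in zip(pred_seq, gt_seq)])
    backward = np.mean([psnr(p, g) for p, g in zip(pred_seq, gt_seq[::-1])])
    return max(forward, backward)

# toy sequence where the model recovered the frames in reverse order,
# each with a small constant intensity offset of 0.1
gt = [np.full((2, 2), v) for v in (0.0, 0.5, 1.0)]
pred = [g + 0.1 for g in gt[::-1]]
print(round(order_invariant_psnr(pred, gt), 2))
```

In the toy case the forward ordering scores poorly while the reversed ordering matches up to the 0.1 offset, so the reported value is the ~20 dB reverse-order score — mirroring how a correctly reconstructed but time-reversed sequence is not penalized.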
\begin{table}[!t] \setlength{\tabcolsep}{8pt} \begin{center} \caption{Quantitative evaluation on GoPro blur dataset} \label{tbl:quant2} \vspace{-1.5mm} \begin{tabular}{l|lccc} \toprule & Methods & $F_i$ & $F_m$ & $F_f$ \\ \midrule PSNR & Jin \emph{et al. } & 23.713 & 29.473 & 23.681 \\ & Ours & \textbf{27.357} & \textbf{31.989} & \textbf{27.414} \\ \midrule SSIM & Jin \emph{et al. } & 0.660 & 0.846 & 0.659 \\ & Ours & \textbf{0.794} & \textbf{0.885} & \textbf{0.793}\\ \bottomrule \end{tabular} \end{center} \vspace{-6mm} \end{table} \vspace{-2.5mm} \begin{figure*}[!t] \vspace{-0.2cm} \begin{center} \setlength{\tabcolsep}{1pt} \resizebox{1.0\linewidth}{!}{% \footnotesize \begin{tabular}{cccc|ccccc} Input & $F_i$ & $F_m$ & $F_f$ & Input & $F_i$ & $F_m$ & $F_f$ & \\ \includegraphics[width=0.1\linewidth]{figures/pano/blur5.png} & \includegraphics[width=0.1\linewidth]{figures/pano/blur5_img0_gt.png} & \includegraphics[width=0.1\linewidth]{figures/pano/blur5_img1_gt.png} & \includegraphics[width=0.1\linewidth]{figures/pano/blur5_img2_gt.png} & \includegraphics[width=0.1\linewidth]{figures/pano/blur3.png} & \includegraphics[width=0.1\linewidth]{figures/pano/blur3_img0_gt.png} & \includegraphics[width=0.1\linewidth]{figures/pano/blur3_img1_gt.png} & \includegraphics[width=0.1\linewidth]{figures/pano/blur3_img2_gt.png} & \raisebox{1.7\normalbaselineskip}[0pt][0pt]{\scriptsize{{\rotatebox[origin=c]{90}{GT}}}} \\ & \includegraphics[width=0.1\linewidth]{figures/pano/blur5_img0.png} & \includegraphics[width=0.1\linewidth]{figures/pano/blur5_img1.png} & \includegraphics[width=0.1\linewidth]{figures/pano/blur5_img2.png} & & \includegraphics[width=0.1\linewidth]{figures/pano/blur3_img0.png} & \includegraphics[width=0.1\linewidth]{figures/pano/blur3_img1.png} & \includegraphics[width=0.1\linewidth]{figures/pano/blur3_img2.png} & \raisebox{1.7\normalbaselineskip}[0pt][0pt]{\scriptsize{\rotatebox[origin=c]{90}{Ours}} } \end{tabular}} \end{center} \vspace{-3.5mm} \caption{Rotation 
blurred images generated from panorama scenes. The top row is ground truth frames and the bottom row is restored frames from the blurs.} \label{fig:qual_pano} \vspace{-3.5mm} \end{figure*} \begin{figure*}[!t] \begin{center} \setlength{\tabcolsep}{1pt} \resizebox{1.0\linewidth}{!}{% \footnotesize \begin{tabular}{cc|cc} Input & \hspace{-0.1cm} GT \hspace{0.9cm} Jin \emph{et al. } \hspace{0.9cm} Ours& Input & \hspace{-0.1cm} GT \hspace{0.9cm} Jin \emph{et al. } \hspace{0.9cm} Ours \\ \includegraphics[width=0.1\linewidth]{video/blur1.png} & \animategraphics[width=0.3\linewidth]{7}{video/1/frame-}{0}{6} & \includegraphics[width=0.1\linewidth]{video/blur3.png} & \animategraphics[width=0.3\linewidth]{7}{video/2/frame-}{0}{6} \end{tabular}} \end{center} \vspace{-5mm} \caption{Heavily blurred (dynamic) inputs from the high speed videos and the restored video frames. Click on the images in \textit{Adobe Reader} to play the videos.} \label{fig:qual_gopro1} \vspace{-3.3mm} \end{figure*} \paragraph{Qualitative evaluation.} The qualitative results for panoramic scenes and high speed videos show that our model can successfully restore multiple frames from a blurred input under various blur patterns (see \Fref{fig:qual_pano} and \Fref{fig:qual_gopro1}). We compare our approach and previous method \cite{Jin_2018_CVPR} on relatively heavily blurred images from the high speed video dataset. As can be seen from \Fref{fig:qual_gopro1}, our method reconstructs contents consistently across frames and restores visually sharper videos compared to \cite{Jin_2018_CVPR}. We experimentally observed that failure cases occur for temporally undersampled and severely blurred inputs (see \Fref{fig:fail_case}). The image contents of such inputs are usually destroyed, and hence, the STNs \cite{jaderberg2015spatial} and the LW layers in our network fail to learn the underlying motion from the heavily blurred inputs \emph{i.e. } feature decoding fails. 
\subsection{Middle frame deblurring results} \vspace{-1mm} In addition to video restoration, we evaluate the performance of our model on the image deblurring task in comparison with state-of-the-art image deblurring approaches \cite{Nah_2017_CVPR,DeblurGAN,tao2018srndeblur} on a benchmark blur dataset provided by \cite{Nah_2017_CVPR}. The dataset provides 1111 test blurred images with $1280\times 720$px resolution. We compared the middle frame prediction ($F_m$) of our pretrained 7-frame prediction model with state-of-the-art deblurring approaches, and the results are summarized in \Tref{Tab:middle_comparison}. As can be inferred from \Tref{Tab:middle_comparison}, our video restoration model gives a competitive performance on the image deblurring task compared to state-of-the-art deblurring approaches. The slight performance loss can be attributed to the fact that our model was trained on a blur dataset generated by averaging 7 frames, while the benchmark dataset contains blurred images obtained by averaging more than 7 sequential frames (larger blurs). \begin{table}[ht] \caption{Middle frame deblurring comparison with deblurring approaches on the benchmark GoPro blur dataset \cite{Nah_2017_CVPR} using the PSNR metric.} \vspace{-2mm} \label{Tab:middle_comparison} \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{|l|l|l|l|l|} \hline \multicolumn{3}{|c|}{Single image deblurring} & \multicolumn{2}{c|}{Video restoration} \\ \hline Nah \emph{et al. } & Kupyn \emph{et al. } & Tao \emph{et al. } & Jin \emph{et al. 
} & Ours\\ \hline 29.08 & 28.70 & 30.26 & 26.98 & 29.84 \\ \hline \end{tabular} } \vspace{-5mm} \end{table} \begin{figure}[!t] \begin{center} \setlength{\tabcolsep}{0.4pt} \renewcommand{\arraystretch}{0.25} \resizebox{1.0\linewidth}{!}{% \begin{tabular}{cccc} \tiny \tiny{Input} & \tiny{$F_{i}$} & \tiny{$F_{m}$} & \tiny{$F_{f}$} \\ \includegraphics[width=0.1\linewidth]{failure/blur/GOPR0386_11_00_0609_08_blur.png} & \includegraphics[width=0.1\linewidth]{failure/ours/GOPR0386_11_00_0609_08_img7.png} & \includegraphics[width=0.1\linewidth]{failure/ours/GOPR0386_11_00_0609_08_img4.png} & \includegraphics[width=0.1\linewidth]{failure/ours/GOPR0386_11_00_0609_08_img1.png} \\ \includegraphics[width=0.1\linewidth]{failure/blur/GOPR0854_11_00_0987_01_blur.png} & \includegraphics[width=0.1\linewidth]{failure/ours/GOPR0854_11_00_0987_01_img1.png} & \includegraphics[width=0.1\linewidth]{failure/ours/GOPR0854_11_00_0987_01_img4.png} & \includegraphics[width=0.1\linewidth]{failure/ours/GOPR0854_11_00_0987_01_img7.png} \end{tabular}} \end{center} \vspace{-4mm} \caption{Failure cases} \label{fig:fail_case} \vspace{-4mm} \end{figure} \section{Analysis} \vspace{-0.5mm} \paragraph{Cross-dataset evaluation.} We report a cross-dataset \textit{panorama}$\rightarrow$\textit{high speed video} evaluation to assess the generalization capability of our model. A model trained on the panoramic scenes is evaluated on high speed video test set (\Tref{Tab::analysis}). Despite a performance degradation, our model trained on the panorama dataset performs on par with the competing approach \cite{Jin_2018_CVPR} trained on the high speed video dataset. The absence of dynamic motion on the panorama dataset, which is apparent in high speed videos, can be one contributing factor explaining the performance loss in addition to the domain gap \emph{e.g. } image contents, sharpness, blurriness. 
\vspace{-3mm} \paragraph{Size of blur.} We analyze our model for various blur sizes by plotting the performance of the model with respect to the camera rotation magnitudes of the blurred images in the panorama test set. As can be inferred from \Fref{fig:blur_size}, the model performs better for smaller rotations and performance in general decreases for large blurs. \vspace{-3mm} \paragraph{Sequential error propagation.} Previous works \cite{Jin_2018_CVPR,purohit2019bringing} are prone to error propagation as frames are reconstructed in a sequential manner starting from the middle frame. Particularly, if the deblurred middle frame is erroneous, then, the error propagates across the non-middle frames. Our work is relatively robust to sequential error propagation since all frames are predicted in a single-step without explicit middle frame dependency, hence, error does not propagate. As can be inferred from \Fref{fig:errorprop}, for heavily blurred inputs, Jin \emph{et al. } predicts erroneous middle frame and hence, the predicted non-middle frames are also erroneous. By contrast, our approach successfully recovers non-middle frames even when the middle frame prediction fails. \vspace{-1mm} \begin{figure}[ht] \centering \includegraphics[width=1.0\linewidth]{figures/size_blur_new.png} \caption{PSNR value vs. 
camera rotation magnitude for panorama test set} \label{fig:blur_size} \vspace{-2.3mm} \end{figure} \vspace{-3mm} \begin{table}[ht] \setlength{\tabcolsep}{10pt} \caption{Quantitative results for cross-dataset evaluation} \label{Tab::analysis} \centering \resizebox{0.85\linewidth}{!}{ \begin{tabular}{|lccc|} \hline \multicolumn{4}{|c|}{Panorama$\rightarrow$ high speed} \\ \hline & $F_i$ & $F_m$ & $F_f$ \\ \hline PSNR & 23.383 & 30.300 & 23.380 \\ SSIM & 0.649 & 0.832& 0.651 \\ \hline \end{tabular} } \vspace{-3.5mm} \end{table} \begin{figure}[!t] \begin{center} \setlength{\tabcolsep}{0.4pt} \renewcommand{\arraystretch}{0.25} \resizebox{1.0\linewidth}{!}{% \begin{tabular}{ccccc} \tiny \tiny{Input} & \tiny{$F_{i}$} & \tiny{$F_{m}$} & \tiny{$F_{f}$}&\\ \includegraphics[width=0.1\linewidth]{error_prop/GOPR0384_11_05_0406_05_blur.png} & \includegraphics[width=0.1\linewidth]{error_prop/Jin/GOPR0384_11_05_0406_05_img1.png}& \includegraphics[width=0.1\linewidth]{error_prop/Jin/GOPR0384_11_05_0406_05_img4.png}& \includegraphics[width=0.1\linewidth]{error_prop/Jin/GOPR0384_11_05_0406_05_img7.png}& \raisebox{0.8\normalbaselineskip}{\rotatebox[origin=c]{90}{\tiny{Jin \emph{et al. }}}} \\ & \includegraphics[width=0.1\linewidth]{error_prop/ours/GOPR0384_11_05_0406_05_img1.png}& \includegraphics[width=0.1\linewidth]{error_prop/ours/GOPR0384_11_05_0406_05_img4.png}& \includegraphics[width=0.1\linewidth]{error_prop/ours/GOPR0384_11_05_0406_05_img7.png}& \raisebox{0.8\normalbaselineskip}{\rotatebox[origin=c]{90}{\tiny{Ours}}} \end{tabular}} \end{center} \vspace{-4mm} \caption{Sequential error propagation} \label{fig:errorprop} \end{figure} \section{Ablation studies} \label{sec:ablation} \paragraph{Network components.} The STNs in the feature transformer network are the core part of our model for network convergence. The addition of local warping (LW) layer also significantly improves the performance of our model. 
The best model performance is nonetheless achieved with all network components (FTN and ITN) combined (\Tref{Tab::ablation}). The refining block improves performance by a margin of 0.43 dB on average. \vspace{-2mm} \paragraph{Loss terms.} As mentioned earlier, the multi-scale photometric loss (PML) is a sufficient constraint to make our network converge during training. We also experimentally find that a model trained with the transformation consistency loss (TCL) not only converges faster with smoother behavior but also gives a better performance during testing. The penalty term (PT) gives a marginal performance improvement when predicting fewer frames, as the photometric loss is already a sufficient constraint (see \Tref{Tab::ablation}). In the 3-frame prediction model, the penalty term improved performance marginally by around 0.25 dB, while in the 7-frame prediction model, it improved performance by approximately 0.6 dB. The penalty term encourages the model to consider subtle differences, especially when the motion is small. \vspace{-3.5mm} \begin{table}[!t] \setlength{\tabcolsep}{3.5pt} \vspace{-0.5mm} \caption{Ablation studies on GoPro blur dataset for network components and loss terms using the PSNR metric.} \vspace{-2mm} \label{Tab::ablation} \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{|l|l|ccc|} \hline & & $F_i$ & $F_m$ & $F_f$ \\ \hline & FTN [STN] & 25.86 & 31.20 & 25.78 \\ Network& FTN [STN $\oplus$ LW] & 26.67 & 32.02 & 26.58 \\ components& FTN [STN] $\oplus$ ITN & 26.06 & 31.78 & 26.05 \\ & FTN [STN $\oplus$ LW] $\oplus$ ITN & 27.35 & 31.98 & 27.41 \\ \hline & PML & 25.98 & 30.77 & 25.97 \\ {Loss terms} & PML $\oplus$ TCL & 27.08 & 31.78 & 27.12 \\ & PML $\oplus$ TCL $\oplus$ PT & 27.35 & 31.98 & 27.41 \\ \hline \end{tabular} } \vspace{-4.3mm} \end{table} \vspace{-1.5mm} \section{Conclusion} \vspace{-2mm} We present a novel unified architecture that restores video frames from a single blurred image in an end-to-end manner without motion supervision.
We evaluate our model on two datasets with rotational blurs and dynamic blurs, and demonstrate qualitatively and quantitatively favorable performance against the competing approach. The cross-dataset evaluation demonstrates that our model can generalize even when the training and test sets have significantly different blur patterns and a large domain gap. Unlike previous approaches, our model predicts frames in a single step without middle frame dependency. This is advantageous not only because it is simple to use but also because it is robust to heavy blurs, where middle frame prediction often fails. Overall, the simplicity and flexibility of our method make it a promising approach for future applications such as deblurring and temporal super resolution. \paragraph{Acknowledgements.} This work was supported by NAVER LABS Corporation [SSIM: Semantic \& scalable indoor mapping]. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} \hskip0.6cm Convexity and generalized convexity play a significant role in many fields, for example, in biological systems, economics, and optimization \cite{kS,GrinalattLinnainmaa2011,RuelAyres1999}. \vspace{1.0mm} Generalized convex functions, labelled semilocal convex functions, were introduced by Ewing \cite{Ewing} by using more general semilocal preinvexity and $ \eta $-semidifferentiability. After that, optimality conditions for weak vector minima were given \cite{Preda1997}. Also, optimality conditions and duality results for a nonlinear fractional problem involving $ \eta $-semidifferentiability were established \cite{Preda2003}. Furthermore, some optimality conditions and duality results for semilocal E-convex programming were established \cite{Hu2007}. E-convexity was extended to E-preinvexity \cite{FulgaPreda}. Recently, semilocal E-preinvexity (SLEP) and some of its applications were introduced \cite{Jiao2011,Jiao2012,Jiao2013}. \vspace{1.0mm} Generalized convex functions on manifolds such as Riemannian manifolds were studied by many authors; see \cite{Agarwal,BG,Ferrara,Mordukhovch2011}. Udriste \cite{Udriste1994} and Rapcsak \cite{Rapcsak1997} considered a generalization of convexity called geodesic convexity. In 2012, geodesic E-convex (GEC) sets and geodesic E-convex functions on Riemannian manifolds were studied \cite{IAA2012}. Moreover, geodesic semi E-convex (GsEC) functions were introduced \cite{Iqbal}. Recently, geodesic strongly E-convex (GSEC) functions were introduced and some of their properties were discussed \cite{AW}.
\section{Geodesic Semilocal E-Preinvexity} \hskip0.6cm \begin{definition}\label{df} A nonempty set $ B \subset \aleph $ is said to be \begin{enumerate} \item geodesic E-invex (GEI) with respect to $ \eta $ if there is exactly one geodesic $ \gamma_{E(\kappa_{1}), E(\kappa_{2})}:\left[0,1 \right]\longrightarrow \aleph $ such that \begin{eqnarray*} \gamma_{E(\kappa_{1}), E(\kappa_{2})}(0)=E(\kappa_{2}),\quad \acute{\gamma}_{E(\kappa_{1}), E(\kappa_{2})}(0)=\eta(E(\kappa_{1}),E(\kappa_{2})),\quad \gamma_{E(\kappa_{1}), E(\kappa_{2})}(t)\in B, \end{eqnarray*} $ \forall \kappa_{1},\kappa_{2}\in B$ and $t\in[0,1]; $ \item a geodesic local E-invex (GLEI) set with respect to $ \eta $, if there is $ u(\kappa_{1},\kappa_{2})\in\left(\left.0,1 \right] \right. $ such that $ \forall t\in [0,u (\kappa_{1},\kappa_{2})] $, \begin{equation}\label{eq2} \gamma_{E(\kappa_{1}),E(\kappa_{2})}(t)\in B \ \ \forall\kappa_{1},\kappa_{2}\in B; \end{equation} \item a geodesic local starshaped E-convex set, if there is a map $ E $ such that corresponding to each pair of points $ \kappa_{1},\kappa_{2}\in B $, there is a maximal positive number $ u(\kappa_{1},\kappa_{2})\leq 1 $ such that \begin{equation}\label{eq1} \gamma_{E(\kappa_{1}),E(\kappa_{2})}(t)\in B, \ \ \forall t\in [0, u(\kappa_{1},\kappa_{2})]. \end{equation} \end{enumerate} \end{definition} \begin{definition} A function $ f: A\subset \aleph\longrightarrow \mathbb{R} $ is said to be \begin{enumerate} \item geodesic E-preinvex (GEP) on $ A\subset \aleph$ with respect to $ \eta $ if $A$ is a GEI set and $$f\left(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t)\right)\leq t f(E(\kappa_{1}))+(1-t)f(E(\kappa_{2})) , \ \ \forall \kappa_{1},\kappa_{2}\in A, t\in[0,1]; $$ \item geodesic semi E-preinvex (GSEP) on $ A $ with respect to $ \eta $ if $A$ is a GEI set and $$f\left(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t)\right)\leq t f(\kappa_{1})+(1-t)f(\kappa_{2}) , \ \ \forall \kappa_{1},\kappa_{2}\in A, t\in[0,1]. 
$$ \item geodesic local E-preinvex (GLEP) on $ A\subset \aleph $ with respect to $ \eta $, if for any $ \kappa_{1},\kappa_{2}\in A $ there exists $ 0<v(\kappa_{1},\kappa_{2})\leq u(\kappa_{1},\kappa_{2}) $ such that $ A $ is a GLEI set and $$f\left(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t)\right)\leq t f(E(\kappa_{1}))+(1-t)f(E(\kappa_{2})) , \ \ \forall t\in[0,v(\kappa_{1},\kappa_{2})].$$ \end{enumerate} \end{definition} \begin{definition} A function $ f:\aleph\longrightarrow \mathbb{R} $ is geodesic semilocal E-convex (GSLEC) on a geodesic local starshaped E-convex set $ B\subset \aleph $ if for each pair $ \kappa_{1},\kappa_{2}\in B $ (with a maximal positive number $ u(\kappa_{1},\kappa_{2})\leq1 $ satisfying \ref{eq1}), there exists a positive number $ v(\kappa_{1},\kappa_{2})\leq u(\kappa_{1},\kappa_{2}) $ satisfying $$f\left(\gamma_{E(\kappa_{1}), E(\kappa_{2})}(t)\right)\leq t f(\kappa_{1})+(1-t)f(\kappa_{2}) , \ \ \forall t\in[0,v(\kappa_{1},\kappa_{2})].$$ \end{definition} \begin{remark} Every GEI set with respect to $ \eta $ is a GLEI set with respect to $ \eta $, with $ u(\kappa_{1},\kappa_{2})=1, \forall \kappa_{1},\kappa_{2}\in \aleph $. On the other hand, the converse is not necessarily true, as the next example shows. \end{remark} \begin{example} Put $ A=\left[ \left. -4,-1\right) \right. 
\cup[1,4] $, \begin{eqnarray*} E(\kappa) &=& \begin{cases} \kappa^{2} \ \ if\ \ \left|\kappa \right| \leq 2,\\ -1\ \ if \ \ \left|\kappa \right| > 2; \end{cases} \end{eqnarray*} \begin{eqnarray*} \eta (\kappa,\iota) &=& \begin{cases} \kappa-\iota \ \ if\ \ \kappa\geqslant 0, \iota\geqslant 0\ \ or\ \ \kappa\leq 0, \iota\leq0 ,\\ -1-\iota \ \ if \ \ \kappa>0, \iota\leq 0 \ \ or \ \ \kappa\geqslant0 ,\iota< 0, \\ 1-\iota \ \ if \ \ \kappa<0, \iota\geqslant 0 \ \ or\ \ \kappa\leq 0, \iota>0; \end{cases} \end{eqnarray*} \begin{eqnarray*} \gamma_{\kappa,\iota}(t) &=& \begin{cases} \iota+t(\kappa-\iota) \ \ if \ \ \kappa\geqslant 0, \iota\geqslant 0 \ \ or \ \ \kappa\leq 0, \iota\leq0 ,\\ \iota+t(-1-\iota) \ \ if \ \ \kappa>0, \iota\leq 0 \ \ or\ \ \kappa\geqslant0 ,\iota< 0, \\ \iota+t(1-\iota) \ \ if \ \ \kappa<0, \iota\geqslant 0 \ \ or \ \ \kappa\leq 0, \iota>0. \end{cases} \end{eqnarray*} Hence $ A $ is a GLEI set with respect to $ \eta $. However, for $ \kappa=3, \iota=0 $ we have $ \gamma_{E(\kappa),E(\iota)}(t_{1})=-t_{1} $ for $ t_{1}\in[0,1] $; taking $ t_{1}=1 $, we obtain $\gamma_{E(\kappa),E(\iota)}(t_{1})=-1\notin A $. \end{example} \begin{definition}\label{de1} A function $ f: \aleph\longrightarrow \mathbb{R} $ is GSLEP on $ B\subset \aleph $ with respect to $ \eta $ if for any $ \kappa_{1},\kappa_{2}\in B $, there is $ 0<v(\kappa_{1},\kappa_{2})\leq u(\kappa_{1},\kappa_{2})\leq1 $ such that $ B $ is a GLEI set and \begin{equation}\label{eq3} f\left(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t)\right)\leq t f(\kappa_{1})+(1-t)f(\kappa_{2}) , \ \ \forall t\in[0,v(\kappa_{1},\kappa_{2})]. \end{equation} If instead $$f\left(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t)\right)\geqslant t f(\kappa_{1})+(1-t)f(\kappa_{2}) , \ \ \forall t\in[0,v(\kappa_{1},\kappa_{2})],$$ then $ f $ is said to be geodesic semilocal E-preconcave on $ B $. \end{definition} \begin{remark} Any GSLEC function is a GSLEP function. Also, any GSEP function with respect to $ \eta $ is a GSLEP function. 
On the other hand, their converses are not necessarily true. \end{remark} The next example shows a GSLEP function which is neither a GSLEC function nor a GSEP function. \begin{example} Assume that $ E: \mathbb{R}\longrightarrow \mathbb{R} $ is given as \begin{eqnarray*} E(m) &=& \begin{cases} 0 \ \ if\ \ m<0,\\ 1 \ \ if\ \ 1<m\leq 2,\\ m \ \ if\ \ 0\leq m\leq1 \ \ or\ \ m>2 \end{cases} \end{eqnarray*} and the map $ \eta: \mathbb{R}\times \mathbb{R}\longrightarrow \mathbb{R} $ is defined as \begin{eqnarray*} \eta (m,n) &=& \begin{cases} 0 \ \ if\ \ m= n,\\ 1-m \ \ if\ \ m\neq n ; \end{cases} \end{eqnarray*} also, \begin{eqnarray*} \gamma_{m,n}(t) &=& \begin{cases} n \ \ if\ \ m= n,\\ n+t(1-m) \ \ if\ \ m\neq n. \end{cases} \end{eqnarray*} Then $ \mathbb{R} $ is a geodesic local starshaped E-convex set and a geodesic local E-invex set with respect to $ \eta $. Assume that $ h: \mathbb{R}\longrightarrow \mathbb{R} $, where \begin{eqnarray*} h(m) &=& \begin{cases} 0 \ \ if\ \ 1<m\leq 2,\\ 1 \ \ if\ \ m>2, \\ -m+1 \ \ if\ \ 0\leq m\leq1,\\ -m+2 \ \ if\ \ m<0. \end{cases} \end{eqnarray*} Then $ h $ is GSLEP on $ \mathbb{R} $ with respect to $ \eta $. However, when $ m_{0}=2, n_{0}=3 $ and for any $ v\in\left(\left.0,1 \right] \right. $, there is a sufficiently small $ t_{0}\in\left(\left.0,v \right] \right. $ such that $$h\left(\gamma_{E(m_{0}),E(n_{0})}(t_{0}) \right)=1>(1-t_{0})=t_{0}h(m_{0})+(1-t_{0})h(n_{0}) .$$ Then $ h(m) $ is not a GSLEC function on $ \mathbb{R} $. \vspace{1.0mm} Similarly, taking $ m_{1}=1, n_{1}=4 $, we have $$h\left(\gamma_{E(m_{1}),E(n_{1})}(t_{1}) \right)=1>(1-t_{1})=t_{1}h(m_{1})+(1-t_{1})h(n_{1}) $$ for some $ t_{1}\in[0,1] $.\\ Hence $ h(m) $ is not a GSEP function on $ \mathbb{R} $ with respect to $ \eta $. 
\end{example} \begin{definition}\label{de2} A function $ h:S\subset \aleph\longrightarrow \mathbb{R} $, where $ S $ is a GLEI set, is said to be geodesic quasi-semilocal E-preinvex (GqSLEP) (with respect to $ \eta $) if for all $ \kappa_{1},\kappa_{2}\in S $ satisfying $ h(\kappa_{1})\leq h(\kappa_{2}) $, there is a positive number $ v(\kappa_{1},\kappa_{2})\leq u(\kappa_{1},\kappa_{2}) $ such that $$h\left(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t)\right) \leq h(\kappa_{2}), \forall t\in[0,v(\kappa_{1},\kappa_{2})]. $$ \end{definition} \begin{definition}\label{de3} A function $ h:S\subset \aleph\longrightarrow \mathbb{R} $, where $ S $ is a GLEI set, is said to be geodesic pseudo-semilocal E-preinvex (GpSLEP) (with respect to $ \eta $) if for all $ \kappa_{1},\kappa_{2}\in S $ satisfying $ h(\kappa_{1})<h(\kappa_{2}) $, there are positive numbers $ v(\kappa_{1},\kappa_{2})\leq u(\kappa_{1},\kappa_{2}) $ and $ w(\kappa_{1},\kappa_{2}) $ such that $$h\left(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t)\right) \leq h(\kappa_{2})-t w(\kappa_{1},\kappa_{2}), \forall t\in[0,v(\kappa_{1},\kappa_{2})]. $$ \end{definition} \begin{remark} Every GSLEP function on a GLEI set with respect to $ \eta $ is both a GqSLEP function and a GpSLEP function. \end{remark} \begin{definition}\label{de4} A function $ h:S\longrightarrow \mathbb{R} $ is called geodesic E-$ \eta $-semidifferentiable at $ \kappa^{*}\in S $, where $ S\subset \aleph $ is a GLEI set with respect to $ \eta $, if $ E(\kappa^{*})=\kappa^{*} $ and \begin{equation*} h'_{+}\left(\gamma_{\kappa^{*},E(\kappa)}(t) \right)= \lim_{t\longrightarrow 0^{+}} \frac{1}{t}\left[h\left(\gamma_{\kappa^{*},E(\kappa)}(t)\right) -h(\kappa^{*}) \right] \end{equation*} exists for every $ \kappa\in S $. \end{definition} \begin{remark} \begin{enumerate} \item If $ \aleph=\mathbb{R}^{n} $, then geodesic E-$ \eta $-semidifferentiability reduces to E-$ \eta $-semidifferentiability \cite{Jiao2011}. 
\item If $ \aleph=\mathbb{R}^{n} $ and $ E=I $, then geodesic E-$ \eta $-semidifferentiability reduces to $ \eta $-semidifferentiability \cite{Niculescu2007Optimality}. \item If $ \aleph=\mathbb{R}^{n} $, $ E=I $ and $ \eta(\kappa,\kappa^{*})=\kappa-\kappa^{*} $, then geodesic E-$ \eta $-semidifferentiability reduces to semidifferentiability \cite{Jiao2011}. \end{enumerate} \end{remark} \begin{lemma}\label{lemma2} \begin{enumerate} \item Assume that $ h $ is a GSLEP (E-preconcave) function and geodesic E-$ \eta $-semidifferentiable at $ \kappa^{*}\in S\subset \aleph $, where $ S $ is a GLEI set with respect to $ \eta $. Then $$h(\kappa)-h(\kappa^{*})\geqslant (\leq) h'_{+}(\gamma_{\kappa^{*},E(\kappa)}(t)), \forall \kappa\in S.$$ \item Let $ h $ be GqSLEP (GpSLEP) and geodesic E-$ \eta $-semidifferentiable at $ \kappa^{*}\in S\subset \aleph $, where $ S $ is a GLEI set with respect to $ \eta $. Then $$h(\kappa)\leq(<) h(\kappa^{*})\Rightarrow h'_{+}(\gamma_{\kappa^{*},E(\kappa)}(t))\leq (<)0, \forall \kappa\in S.$$ \end{enumerate} \end{lemma} The above lemma follows directly from Definitions \ref{de1}, \ref{de2}, \ref{de3} and \ref{de4}. \begin{theorem} Let $ f: S\subset \aleph\longrightarrow \mathbb{R} $ be a GLEP function on a GLEI set $ S $ with respect to $ \eta $; then $ f $ is a GSLEP function iff $ f(E(\kappa))\leq f(\kappa),\forall \kappa\in S $. \end{theorem} \begin{proof} Assume that $ f $ is a GSLEP function on the set $ S $ with respect to $ \eta $; then $ \forall \kappa_{1},\kappa_{2}\in S $, there is a positive number $ v(\kappa_{1},\kappa_{2})\leq u(\kappa_{1},\kappa_{2}) $ such that $$f(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t))\leq tf(\kappa_{2})+(1-t)f(\kappa_{1}), t\in[0,v(\kappa_{1},\kappa_{2})].$$ By letting $ t=0 $, we get $ f(E(\kappa_{1}))\leq f(\kappa_{1}),\forall \kappa_{1}\in S $.\\ Conversely, suppose that $ f $ is a GLEP function on a GLEI set $ S $; then for any $ \kappa_{1},\kappa_{2}\in S $, there exist $ u(\kappa_{1},\kappa_{2}) \in \left(0,1 \right]
$ (\ref{eq2}) and $ v(\kappa_{1},\kappa_{2}) \in \left(0,u(\kappa_{1},\kappa_{2}) \right] $ such that $$f(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t))\leq tf(E(\kappa_{1}))+(1-t)f(E(\kappa_{2})), t\in[0,v(\kappa_{1},\kappa_{2})].$$ Since $ f(E(\kappa)) \leq f(\kappa) $ for all $ \kappa\in S $, then $$f(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t))\leq tf(\kappa_{1})+(1-t)f(\kappa_{2}), t\in[0,v(\kappa_{1},\kappa_{2})].$$ \end{proof} \begin{definition} The set $ \omega=\left\lbrace(\kappa,\alpha):\kappa\in B\subset \aleph, \alpha\in \mathbb{R} \right\rbrace $ is said to be a GLEI set with respect to $ \eta $ corresponding to $ \aleph $ if there are two maps $ \eta, E $ and a maximal positive number $ u((\kappa_{1},\alpha_{1}), (\kappa_{2},\alpha_{2}))\leq 1 $, for each $ (\kappa_{1},\alpha_{1}), (\kappa_{2},\alpha_{2})\in \omega $, such that $$ \left(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t),t\alpha_{1}+(1-t)\alpha_{2} \right)\in \omega, \forall t\in\left[0,u((\kappa_{1},\alpha_{1}), (\kappa_{2},\alpha_{2})) \right]. $$ \end{definition} \begin{theorem}\label{th1} Let $ B\subset \aleph$ be a GLEI set with respect to $ \eta $. Then $ f $ is a GSLEP function on $ B $ with respect to $ \eta $ iff its epigraph $$ \omega_{f}=\left\lbrace (\kappa_{1},\alpha):\kappa_{1}\in B, f(\kappa_{1})\leq\alpha, \alpha\in \mathbb{R} \right\rbrace $$ is a GLEI set with respect to $ \eta $ corresponding to $ \aleph $. \end{theorem} \begin{proof} Suppose that $ f $ is a GSLEP function on $ B $ with respect to $ \eta $ and $ (\kappa_{1},\alpha_{1}), (\kappa_{2},\alpha_{2})\in \omega_{f} $; then $ \kappa_{1},\kappa_{2}\in B, f(\kappa_{1})\leq \alpha_{1}, f(\kappa_{2})\leq \alpha_{2} $. By applying Definition \ref{df}, we obtain $ \gamma_{E(\kappa_{1}),E(\kappa_{2})}(t)\in B, \forall t\in\left[0, u(\kappa_{1},\kappa_{2}) \right].
$\\ Moreover, since $ f $ is a GSLEP function, there is a positive number $ v(\kappa_{1},\kappa_{2})\leq u(\kappa_{1},\kappa_{2}) $ such that $$f\left(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t)\right)\leq tf(\kappa_{1})+(1-t)f(\kappa_{2})\leq t\alpha_{1}+(1-t)\alpha_{2}, \forall t\in[0,v(\kappa_{1},\kappa_{2})],$$ that is, $$\left(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t), t\alpha_{1}+(1-t)\alpha_{2} \right)\in \omega_{f}, \forall t\in[0,v(\kappa_{1},\kappa_{2})]. $$ \vspace{1.0mm} Conversely, if $ \omega_{f} $ is a GLEI set with respect to $ \eta $ corresponding to $ \aleph $, then for any points $ (\kappa_{1},f(\kappa_{1})) , (\kappa_{2},f(\kappa_{2}))\in \omega_{f}$, there is a maximal positive number $ u((\kappa_{1},f(\kappa_{1})), (\kappa_{2},f(\kappa_{2})))\leq 1 $ such that $$\left( \gamma_{E(\kappa_{1}),E(\kappa_{2})}(t), tf(\kappa_{1}) +(1-t)f(\kappa_{2})\right) \in \omega_{f},\forall t\in\left[0, u((\kappa_{1},f(\kappa_{1})),(\kappa_{2},f(\kappa_{2}))) \right].$$ That is, $\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t) \in B $ and $$f\left(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t) \right)\leq tf(\kappa_{1}) +(1-t)f(\kappa_{2}), \ \ t\in\left[0,u((\kappa_{1},f(\kappa_{1})),(\kappa_{2},f(\kappa_{2}))) \right]. $$ Thus, $ B $ is a GLEI set and $ f $ is a GSLEP function on $ B $. \end{proof} \begin{theorem} If $ f $ is a GSLEP function on a GLEI set $ B\subset \aleph $ with respect to $ \eta $, then the level set $ K_{\alpha}=\left\lbrace \kappa_{1}\in B: f(\kappa_{1})\leq \alpha \right\rbrace $ is a GLEI set for any $ \alpha\in \mathbb{R} $. \end{theorem} \begin{proof} For any $ \alpha\in \mathbb{R} $ and $ \kappa_{1},\kappa_{2}\in K_{\alpha} $, we have $ \kappa_{1},\kappa_{2}\in B $ and $ f(\kappa_{1})\leq\alpha, f(\kappa_{2})\leq\alpha $.
Since $ B $ is a GLEI set, there is a maximal positive number $ u(\kappa_{1},\kappa_{2})\leq1 $ such that $$ \gamma_{E(\kappa_{1}),E(\kappa_{2})}(t)\in B, \ \ \forall t\in\left[0,u(\kappa_{1},\kappa_{2}) \right] .$$ In addition, since $ f $ is GSLEP, there is a positive number $ v(\kappa_{1},\kappa_{2})\leq u(\kappa_{1},\kappa_{2}) $ such that \begin{eqnarray} f\left(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t) \right)&\leq& t f(\kappa_{1}) +(1-t)f(\kappa_{2})\nonumber\\&\leq& t\alpha+(1-t)\alpha\nonumber\\ &=& \alpha, \ \ \forall t\in\left[0,v(\kappa_{1},\kappa_{2}) \right].\nonumber\end{eqnarray} That is, $ \gamma_{E(\kappa_{1}),E(\kappa_{2})}(t)\in K_{\alpha}, \ \ \forall t\in\left[0,v(\kappa_{1},\kappa_{2}) \right] $. Therefore, $ K_{\alpha} $ is a GLEI set with respect to $ \eta $ for any $ \alpha \in \mathbb{R} $. \end{proof} \begin{theorem} Let $ f:B\subset \aleph\longrightarrow \mathbb{R} $, where $ B $ is a GLEI set. Then $ f $ is a GSLEP function with respect to $ \eta $ iff for each pair of points $ \kappa_{1},\kappa_{2}\in B $ and any $ \alpha,\beta\in \mathbb{R} $ with $ f(\kappa_{1})<\alpha $ and $ f(\kappa_{2})<\beta $, there is a positive number $ v(\kappa_{1},\kappa_{2})\leq u(\kappa_{1},\kappa_{2})\leq 1 $ such that $$f\left(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t) \right) \leq t \alpha +(1-t)\beta , \ \ \forall t\in\left[0,v(\kappa_{1},\kappa_{2}) \right].$$ \begin{proof} Let $ \kappa_{1},\kappa_{2}\in B $ and $ \alpha,\beta\in \mathbb{R} $ such that $ f(\kappa_{1})<\alpha $ and $ f(\kappa_{2})<\beta $.
Since $ B $ is GLEI, there is a maximal positive number $ u(\kappa_{1},\kappa_{2})\leq 1 $ such that $$\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t) \in B , \ \ \forall t\in\left[0,u(\kappa_{1},\kappa_{2}) \right].$$ In addition, since $ f $ is a GSLEP function, there is a positive number $ v(\kappa_{1},\kappa_{2})\leq u(\kappa_{1},\kappa_{2}) $ such that $$f\left(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t) \right) \leq tf(\kappa_{1})+(1-t)f(\kappa_{2})\leq t \alpha +(1-t)\beta , \ \ \forall t\in\left[0,v(\kappa_{1},\kappa_{2}) \right].$$ Conversely, let $ (\kappa_{1},\alpha) \in \omega_{f} $ and $ (\kappa_{2},\beta) \in \omega_{f} $; then $ \kappa_{1},\kappa_{2}\in B $, $ f(\kappa_{1})\leq\alpha $ and $ f(\kappa_{2})\leq\beta $. Hence, $ f(\kappa_{1})<\alpha+\varepsilon $ and $ f(\kappa_{2})<\beta+\varepsilon $ hold for any $ \varepsilon>0 $. According to the hypothesis, for $ \kappa_{1},\kappa_{2}\in B $ there is a positive number $ v(\kappa_{1},\kappa_{2})\leq u(\kappa_{1},\kappa_{2})\leq 1 $ such that $$f\left(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t) \right) \leq t \alpha +(1-t)\beta+\varepsilon , \ \ \forall t\in\left[0,v(\kappa_{1},\kappa_{2}) \right].$$ Letting $ \varepsilon\longrightarrow0^{+} $, we obtain $$f\left(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t)\right) \leq t \alpha +(1-t)\beta , \ \ \forall t\in\left[0,v(\kappa_{1},\kappa_{2}) \right].$$ That is, $\left(\gamma_{E(\kappa_{1}),E(\kappa_{2})}(t) , t \alpha +(1-t)\beta\right) \in \omega_{f} , \ \ \forall t\in\left[0,v(\kappa_{1},\kappa_{2}) \right].$\\ Therefore, $ \omega_{f} $ is a GLEI set corresponding to $ \aleph $. From Theorem \ref{th1}, it follows that $ f $ is a GSLEP function on $ B $ with respect to $ \eta $.
\end{proof} \end{theorem} \section{Optimality Criteria} \hskip0.6cm \vspace{1.0mm} In this section, let us consider the following nonlinear fractional multiobjective programming problem:\\ \begin{eqnarray*} (VFP) \begin{cases} \text{minimize}\ \ \frac{f(\kappa)}{g(\kappa)}=\left(\frac{f_{1}(\kappa)}{g_{1}(\kappa)},\cdots,\frac{f_{p}(\kappa)}{g_{p}(\kappa)} \right),\\ \text{subject to}\ \ h_{j}(\kappa)\leq0, j\in Q=\left\lbrace 1,2,\cdots, q\right\rbrace , \\ \kappa\in K_{0}, \end{cases} \end{eqnarray*} where $ K_{0}\subset \aleph $ is a GLEI set and $ g_{i}(\kappa)>0, \forall \kappa\in K_{0} , i\in P=\left\lbrace 1,2,\cdots, p\right\rbrace $. \vspace{1.0mm} Let $ f=(f_{1},f_{2},\cdots, f_{p}), g=(g_{1},g_{2},\cdots,g_{p}) $ and $ h=(h_{1},h_{2},\cdots,h_{q}) $,\\ and denote by $ K=\left\lbrace \kappa:h_{j}(\kappa)\leq 0, j\in Q, \kappa\in K_{0}\right\rbrace $ the feasible set of problem ($ VFP $).\\ For $ \kappa^{*}\in K $, we put $ Q(\kappa^{*})=\left\lbrace j:h_{j}(\kappa^{*})= 0, j\in Q \right\rbrace $, $ L(\kappa^{*})=Q\setminus Q(\kappa^{*}) $. \vspace{1.0mm} We also formulate the associated nonlinear multiobjective programming problem: \begin{eqnarray*} (VFP_{\lambda}) \begin{cases} \text{minimize}\ \ \left( f_{1}(\kappa)-\lambda_{1}g_{1}(\kappa),\cdots, f_{p}(\kappa)-\lambda_{p}g_{p}(\kappa) \right),\\ \text{subject to}\ \ h_{j}(\kappa)\leq0, j\in Q=\left\lbrace 1,2,\cdots, q\right\rbrace , \\ \kappa\in K_{0}, \end{cases} \end{eqnarray*} where $ \lambda=(\lambda_{1},\lambda_{2},\cdots ,\lambda_{p})\in \mathbb{R}^{p} $. \vspace{1.0mm} The following lemma connects the weak efficient solutions of ($ VFP $) and ($ VFP_{\lambda} $). \begin{lemma}\label{Lemma1} A point $ \kappa^{*} $ is a weak efficient solution for ($ VFP $) iff $ \kappa^{*} $ is a weak efficient solution for ($ VFP_{\lambda^{*}} $), where $ \lambda^{*}=(\lambda^{*}_{1},\cdots,\lambda^{*}_{p} )=\left(\frac{f_{1}(\kappa^{*})}{g_{1}(\kappa^{*})},\cdots,\frac{f_{p}(\kappa^{*})}{g_{p}(\kappa^{*})} \right) $.
\end{lemma} \begin{proof} Assume that there is a feasible point $ \kappa\in K $ such that $$f_{i}(\kappa)-\lambda^{*}_{i}g_{i}(\kappa)<f_{i}(\kappa^{*})-\lambda^{*}_{i}g_{i}(\kappa^{*})=0,\forall i\in P. $$ Then $$f_{i}(\kappa)<\frac{f_{i}(\kappa^{*})}{g_{i}(\kappa^{*})}g_{i}(\kappa)$$ and hence $$\frac{f_{i}(\kappa)}{g_{i}(\kappa)}<\frac{f_{i}(\kappa^{*})}{g_{i}(\kappa^{*})},$$ which contradicts the weak efficiency of $ \kappa^{*} $ for ($ VFP $). \vspace{1.0mm} Now let us take a feasible point $ \kappa\in K $ such that $$\frac{f_{i}(\kappa)}{g_{i}(\kappa)}<\frac{f_{i}(\kappa^{*})}{g_{i}(\kappa^{*})}= \lambda^{*}_{i};$$ then $ f_{i}(\kappa)-\lambda^{*}_{i}g_{i}(\kappa)<0=f_{i}(\kappa^{*})-\lambda^{*}_{i}g_{i}(\kappa^{*}), \forall i\in P $, which is again a contradiction to the weak efficiency of $ \kappa^{*} $ for ($ VFP_{\lambda^{*}} $). \end{proof} Next, some sufficient optimality conditions for problem ($ VFP $) are established. \begin{theorem}\label{th2} Let $ \bar{\kappa}\in K, E(\bar{\kappa})=\bar{\kappa} $, let $ f,h $ be GSLEP and $ g $ be geodesic semilocal E-preincave, and let them all be geodesic E-$ \eta $-semidifferentiable at $ \bar{\kappa} $. Further, assume that there are $ \zeta^{o}=\left(\zeta^{o}_{i}, i=1,\cdots,p \right)\in\mathbb{R}^{p} $ and $ \xi^{o}=\left(\xi^{o}_{j}, j=1,\cdots,m \right)\in\mathbb{R}^{m} $ such that \begin{equation}\label{eq4} \sum_{i=1}^{p}\zeta^{o}_{i}f'_{i+}\left(\gamma_{\bar{\kappa},E(\kappa)}(t) \right)+\sum_{j=1}^{m}\xi^{o}_{j} h'_{j+}\left(\gamma_{\bar{\kappa},E(\kappa)}(t) \right)\geqslant 0, \ \forall \kappa\in K, t\in[0,1], \end{equation} \begin{equation}\label{eq5} g'_{i+}\left(\gamma_{\bar{\kappa},E(\kappa)}(t) \right)\leq 0, \forall \kappa\in K, i\in P, \end{equation} \begin{equation}\label{eq6} \xi^{o}h(\bar{\kappa})=0, \end{equation} \begin{equation}\label{eq7} \zeta^{o}\geqslant 0 , \xi^{o}\geqslant 0. \end{equation} Then $ \bar{\kappa} $ is a weak efficient solution for ($ VFP $).
\end{theorem} \begin{proof} By contradiction, suppose that $ \bar{\kappa} $ is not a weak efficient solution for ($ VFP $); then there exists a point $ \widehat{\kappa}\in K $ such that \begin{equation}\label{eq8} \frac{f_{i}(\widehat{\kappa})}{g_{i}(\widehat{\kappa})}<\frac{f_{i}(\bar{\kappa})}{g_{i}(\bar{\kappa})}, i\in P. \end{equation} By the above hypotheses and Lemma \ref{lemma2}, we have \begin{equation}\label{eq9} f_{i}(\widehat{\kappa})-f_{i}(\bar{\kappa})\geqslant f'_{i+}\left(\gamma_{\bar{\kappa},E(\widehat{\kappa})}(t)\right) , i\in P, \end{equation} \begin{equation}\label{eq10} g_{i}(\widehat{\kappa})-g_{i}(\bar{\kappa})\leq g'_{i+}\left(\gamma_{\bar{\kappa},E(\widehat{\kappa})}(t)\right) , i\in P, \end{equation} \begin{equation}\label{eq11} h_{j}(\widehat{\kappa})-h_{j}(\bar{\kappa})\geqslant h'_{j+}\left(\gamma_{\bar{\kappa},E(\widehat{\kappa})}(t)\right) , j\in Q. \end{equation} Multiplying (\ref{eq9}) by $ \zeta^{o}_{i} $ and (\ref{eq11}) by $ \xi^{o}_{j} $ and summing, we get \begin{eqnarray}\label{eq12}&& \sum_{i=1}^{p} \zeta^{o}_{i} \left(f_{i}(\widehat{\kappa})-f_{i}(\bar{\kappa}) \right) + \sum_{j=1}^{m} \xi^{o}_{j} \left(h_{j}(\widehat{\kappa})-h_{j}(\bar{\kappa}) \right)\nonumber\\&&\hspace{0.5in} \geqslant \sum_{i=1}^{p}\zeta^{o}_{i} f'_{i+}\left(\gamma_{\bar{\kappa},E(\widehat{\kappa})}(t)\right) +\sum_{j=1}^{m}\xi^{o}_{j} h'_{j+}\left(\gamma_{\bar{\kappa},E(\widehat{\kappa})}(t)\right) \geqslant 0. \end{eqnarray} Since $ \widehat{\kappa}\in K $ and $ \xi^{o}\geqslant 0 $, by (\ref{eq6}) and (\ref{eq12}) we have \begin{equation}\label{eq13} \sum_{i=1}^{p} \zeta^{o}_{i} \left(f_{i}(\widehat{\kappa})-f_{i}(\bar{\kappa}) \right)\geqslant 0. \end{equation} Utilizing (\ref{eq7}) and (\ref{eq13}), there is at least one $ i_{0} $ ($ 1\leq i_{0}\leq p $) such that \begin{equation}\label{eq14} f_{i_{0}}(\widehat{\kappa})\geqslant f_{i_{0}}(\bar{\kappa}).
\end{equation} On the other hand, (\ref{eq5}) and (\ref{eq10}) imply \begin{equation}\label{eq15} g_{i}(\widehat{\kappa})\leq g_{i}(\bar{\kappa}), i\in P. \end{equation} By using (\ref{eq14}), (\ref{eq15}) and $ g>0 $, we have \begin{equation}\label{eq16} \frac{f_{i_{0}}(\widehat{\kappa})}{g_{i_{0}}(\widehat{\kappa})}\geqslant\frac{f_{i_{0}}(\bar{\kappa})}{g_{i_{0}}(\bar{\kappa})}, \end{equation} which is a contradiction with (\ref{eq8}). The proof of the theorem is completed. \end{proof} Similarly, we can prove the next theorem: \begin{theorem} Consider that $ \bar{\kappa}\in B, E(\bar{\kappa})=\bar{\kappa} $ and $ f,h $ are geodesic E-$ \eta $-semidifferentiable at $ \bar{\kappa} $. If there exist $ \zeta^{o}\in \mathbb{R}^{p} $ and $ \xi^{o}\in \mathbb{R}^{m} $ such that conditions (\ref{eq4})-(\ref{eq7}) hold and $ \zeta^{o}f(\kappa)+\xi^{o}h(\kappa) $ is a GSLEP function, then $ \bar{\kappa} $ is a weak efficient solution for ($ VFP $). \end{theorem} \begin{theorem} Consider $ \bar{\kappa}\in B, E(\bar{\kappa})=\bar{\kappa} $ and $ \lambda_{i}^{o}=\frac{f_{i}(\bar{\kappa})}{g_{i}(\bar{\kappa})}\ (i\in P) $. Suppose that $ f_{i}(\kappa)-\lambda_{i}^{o}g_{i}(\kappa)\ (i\in P) $ are all GpSLEP functions, $ h_{j}(\kappa)\ (j\in \aleph(\bar{\kappa})) $ are all GqSLEP functions and $ f,g,h $ are all geodesic E-$ \eta $-semidifferentiable at $ \bar{\kappa} $. If there exist $ \zeta^{o}\in \mathbb{R}^{p} $ and $ \xi^{o}\in \mathbb{R}^{m} $ such that \begin{eqnarray}\label{eq17} \sum_{i=1}^{p}\zeta_{i}^{o}\left(f'_{i+}\left( \gamma_{\bar{\kappa},E(\kappa)}(t)\right) -\lambda_{i}^{o}g'_{i+}\left( \gamma_{\bar{\kappa},E(\kappa)}(t)\right) \right) +\xi^{o}h'_{j+}\left( \gamma_{\bar{\kappa},E(\kappa)}(t)\right) \geqslant 0, \end{eqnarray} \begin{eqnarray}\label{eq18} \xi^{o}h(\bar{\kappa})=0, \end{eqnarray} \begin{equation}\label{eq19} \zeta^{o}\geqslant 0, \xi^{o}\geqslant 0, \end{equation} then $ \bar{\kappa} $ is a weak efficient solution for ($ VFP $). \end{theorem} \begin{proof} Assume that $ \bar{\kappa} $ is not a weak efficient solution for ($ VFP $).
Then there exists $ \kappa^{*}\in K $ such that $$\frac{f_{i}(\kappa^{*})}{g_{i}(\kappa^{*})}<\frac{f_{i}(\bar{\kappa})}{g_{i}(\bar{\kappa})},\ \ \ i\in P.$$ Then $$f_{i}(\kappa^{*})-\lambda_{i}^{o}g_{i}(\kappa^{*})<0,\ \ \ i\in P,$$ which means that $$f_{i}(\kappa^{*})-\lambda_{i}^{o}g_{i}(\kappa^{*})< f_{i}(\bar{\kappa})-\lambda_{i}^{o}g_{i}(\bar{\kappa})=0,\ \ \ i\in P. $$ By the GpSLEP of $ \left( f_{i}(\kappa)-\lambda_{i}^{o}g_{i}(\kappa)\right) (i\in P) $ and Lemma \ref{lemma2}, we have $$ f'_{i+}\left( \gamma_{\bar{\kappa},E(\kappa^{*})}(t)\right) -\lambda_{i}^{o}g'_{i+}\left( \gamma_{\bar{\kappa},E(\kappa^{*})}(t)\right)<0 ,\ \ \ i\in P. $$ Utilizing $\zeta^{o}\geqslant 0 $, then \begin{equation}\label{eq20} \sum_{i=1}^{p}\zeta_{i}^{o}\left(f'_{i+}\left( \gamma_{\bar{\kappa},E(\kappa^{*})}(t)\right) -\lambda_{i}^{o}g'_{i+}\left( \gamma_{\bar{\kappa},E(\kappa^{*})}(t)\right) \right)< 0. \end{equation} For $ h(\kappa^{*})\leq 0 $ and $ h_{j}(\bar{\kappa})= 0,\ \ \ j\in \aleph(\bar{\kappa}) $, we have $ h_{j}(\kappa^{*})\leq h_{j}(\bar{\kappa}),\ \ \ \forall j\in \aleph(\bar{\kappa}). $ By the GqSLEP of $ h_{j} $ and Lemma \ref{lemma2}, we have $$h'_{j+}\left( \gamma_{\bar{\kappa},E(\kappa^{*})}(t)\right) \leq 0,\ \ \ \forall j\in \aleph(\bar{\kappa}). $$ Considering $ \xi^{o}\geqslant 0 $ and $ \xi_{j}^{o}= 0,\ \ \ j\notin \aleph(\bar{\kappa}), $ then \begin{equation}\label{eq21} \sum_{j=1}^{m}\xi_{j}^{o}h'_{j+}\left( \gamma_{\bar{\kappa},E(\kappa^{*})}(t)\right) \leq 0. \end{equation} Hence, by (\ref{eq20}) and (\ref{eq21}), we have \begin{eqnarray} \sum_{i=1}^{p}\zeta_{i}^{o}\left(f'_{i+}\left( \gamma_{\bar{\kappa},E(\kappa^{*})}(t)\right) -\lambda_{i}^{o}g'_{i+}\left( \gamma_{\bar{\kappa},E(\kappa^{*})}(t)\right) \right) +\xi^{o}h'_{j+}\left( \gamma_{\bar{\kappa},E(\kappa^{*})}(t)\right) < 0,\nonumber\\ \end{eqnarray} which is a contradiction with relation (\ref{eq17}) at $ \kappa^{*}\in K $. Therefore, $ \bar{\kappa} $ is a weak efficient solution for ($ VFP $).
\end{proof} \begin{theorem} Consider $ \bar{\kappa}\in B, E(\bar{\kappa})=\bar{\kappa} $ and $ \lambda_{i}^{o}=\frac{f_{i}(\bar{\kappa})}{g_{i}(\bar{\kappa})}\ (i\in P) $. Also, assume that $ f,g,h $ are geodesic E-$ \eta $-semidifferentiable at $ \bar{\kappa} $. If there exist $ \zeta^{o}\in \mathbb{R}^{p} $ and $ \xi^{o}\in \mathbb{R}^{m} $ such that the conditions (\ref{eq17})-(\ref{eq19}) hold and $ \sum_{i=1}^{p}\zeta^{o}_{i}\left(f_{i}(\kappa)-\lambda^{o}_{i}g_{i}(\kappa) \right)+\xi^{o}_{\aleph(\bar{\kappa})}h_{\aleph(\bar{\kappa})}(\kappa) $ is a GpSLEP function, then $ \bar{\kappa} $ is a weak efficient solution for ($ VFP $). \end{theorem} \begin{corollary} Let $ \bar{\kappa}\in B, E(\bar{\kappa})=\bar{\kappa} $ and $ \lambda_{i}^{o}=\frac{f_{i}(\bar{\kappa})}{g_{i}(\bar{\kappa})}\ (i\in P) $. Further, let $ f, h_{\aleph(\bar{\kappa})} $ all be GSLEP functions, let $ g $ be a geodesic semilocal E-preincave function and let $ f,g,h $ all be geodesic E-$ \eta $-semidifferentiable at $ \bar{\kappa} $. If there exist $ \zeta^{o}\in \mathbb{R}^{p} $ and $ \xi^{o}\in \mathbb{R}^{m} $ such that the conditions (\ref{eq17})-(\ref{eq19}) hold, then $ \bar{\kappa} $ is a weak efficient solution for ($ VFP $).
\end{corollary} \vspace{1.0mm} The dual problem for ($ VFP $) is formulated as follows: \begin{eqnarray*} (VFD) \begin{cases} \text{maximize}\ \ \left(\zeta_{i}, i=1,2,\cdots, p \right) ,\\ \text{subject to}\ \ \sum_{i=1}^{p}\alpha_{i}\left(f'_{i+}\left( \gamma_{\lambda,E(\kappa)}(t)\right) -\zeta_{i}g'_{i+}\left( \gamma_{\lambda,E(\kappa)}(t)\right) \right)\\\hspace{1.5in} +\sum_{j=1}^{m}\beta_{j}h'_{j+}\left( \gamma_{\lambda,E(\kappa)}(t)\right) \geqslant 0 , \\ \kappa\in K_{0}, t\in[0,1],\\ f_{i}(\lambda)-\zeta_{i}g_{i}(\lambda)\geqslant0,\ \ \ i\in P, \beta_{j}h_{j}(\lambda)\geqslant 0,\ \ \ j\in \aleph,\\ \end{cases} \end{eqnarray*} where $\zeta =(\zeta_{i}, i=1,2,\cdots, p)\geqslant 0$, $\alpha =(\alpha_{i}, i=1,2,\cdots, p)> 0$,\\ $\beta =(\beta_{i}, i=1,2,\cdots, m)\geqslant 0$, $\lambda\in K_{0}. $ \vspace{1.0mm} Denote the feasible set of problem ($ VFD $) by $ K' $. \begin{theorem}[General Weak Duality] Let $ \kappa\in K $, $ (\alpha,\beta,\lambda,\zeta)\in K' $ and $ E(\lambda)=\lambda $. If $ \sum_{i=1}^{p}\alpha_{i}(f_{i}-\zeta_{i}g_{i}) $ is a GpSLEP function, $ \sum_{j=1}^{m}\beta_{j}h_{j} $ is a GqSLEP function and they are all geodesic E-$ \eta $-semidifferentiable at $ \lambda $, then $ \frac{f(\kappa)}{g(\kappa)}\nleq \zeta $. \end{theorem} \begin{proof} Suppose, on the contrary, that $ \frac{f(\kappa)}{g(\kappa)}\leq \zeta $. From $ \alpha>0 $ and $ (\alpha, \beta,\lambda,\zeta)\in K' $, we have $$\sum_{i=1}^{p}\alpha_{i}(f_{i}(\kappa)-\zeta_{i}g_{i}(\kappa))<0\leq\sum_{i=1}^{p}\alpha_{i}(f_{i}(\lambda)-\zeta_{i}g_{i}(\lambda)). $$ By the GpSLEP of $ \sum_{i=1}^{p}\alpha_{i}(f_{i}-\zeta_{i}g_{i}) $ and Lemma \ref{lemma2}, we obtain $$\left( \sum_{i=1}^{p}\alpha_{i}(f_{i}-\zeta_{i}g_{i}) \right)'_{+}\left(\gamma_{\lambda,E(\kappa)}(t) \right) <0, $$ that is, $$\sum_{i=1}^{p}\alpha_{i}\left(f'_{i+}\left( \gamma_{\lambda,E(\kappa)}(t)\right) -\zeta_{i}g'_{i+}\left( \gamma_{\lambda,E(\kappa)}(t)\right) \right)<0.
$$ Also, from $ \beta\geqslant 0$ and $ \kappa\in K $, we get $$\sum_{j=1}^{m}\beta_{j}h_{j}(\kappa)\leq 0 \leq\sum_{j=1}^{m}\beta_{j}h_{j}(\lambda). $$ Using the GqSLEP of $ \sum_{j=1}^{m}\beta_{j}h_{j} $ and Lemma \ref{lemma2}, one has $$\left( \sum_{j=1}^{m}\beta_{j}h_{j} \right)'_{+}\left(\gamma_{\lambda,E(\kappa)}(t) \right) \leq 0. $$ Then $$ \sum_{j=1}^{m}\beta_{j}h'_{j+} \left(\gamma_{\lambda,E(\kappa)}(t) \right) \leq 0. $$ Therefore $$\sum_{i=1}^{p}\alpha_{i}\left(f'_{i+}\left( \gamma_{\lambda,E(\kappa)}(t)\right) -\zeta_{i}g'_{i+}\left( \gamma_{\lambda,E(\kappa)}(t)\right) \right) +\sum_{j=1}^{m}\beta_{j}h'_{j+}\left( \gamma_{\lambda,E(\kappa)}(t)\right) < 0, $$ which is a contradiction with $ (\alpha,\beta,\lambda,\zeta)\in K' $. \end{proof} \begin{theorem} Consider that $ \kappa\in K $, $ (\alpha, \beta,\lambda,\zeta)\in K' $ and $ E(\lambda)=\lambda $. If $ \sum_{i=1}^{p}\alpha_{i}(f_{i}-\zeta_{i}g_{i})+\sum_{j=1}^{m}\beta_{j}h_{j} $ is a GpSLEP function and geodesic E-$ \eta $-semidifferentiable at $ \lambda $, then $ \frac{f(\kappa)}{g(\kappa)}\nleq \zeta $. \end{theorem} \begin{theorem}[General Converse Duality] Let $ \bar{\kappa}\in K $ and $ (\kappa^{*},\alpha^{*}, \beta^{*},\zeta^{*})\in K' $, $ E(\kappa^{*})=\kappa^{*} $, where $\zeta^{*}= \frac{f(\kappa^{*})}{g(\kappa^{*})}=\frac{f(\bar{\kappa})}{g(\bar{\kappa})}=(\zeta^{*}_{i}, \ \ \ i=1,2,\cdots, p) $. If $ f_{i}-\zeta_{i}^{*}g_{i}\ (i\in P) $ and $ h_{j}\ (j\in \aleph) $ are all GSLEP functions and all geodesic E-$ \eta $-semidifferentiable at $ \kappa^{*} $, then $ \bar{\kappa} $ is a weak efficient solution for ($ VFP $).
\end{theorem} \begin{proof} By using the hypotheses and Lemma \ref{lemma2}, for any $ \kappa\in K $ we obtain $$\left( f_{i}(\kappa)-\zeta_{i}^{*}g_{i}(\kappa)\right) -\left(f_{i}(\kappa^{*})-\zeta_{i}^{*}g_{i}(\kappa^{*}) \right)\geqslant f'_{i+}\left( \gamma_{\kappa^{*},E(\kappa)}(t)\right) -\zeta_{i}^{*}g'_{i+}\left( \gamma_{\kappa^{*},E(\kappa)}(t)\right), $$ $$h_{j}(\kappa)-h_{j}(\kappa^{*})\geqslant h'_{j+}\left( \gamma_{\kappa^{*},E(\kappa)}(t)\right). $$ \vspace{1.0mm} Utilizing the first constraint condition of ($ VFD $), $ \alpha^{*}>0,\beta^{*}\geqslant 0, \zeta^{*}\geqslant 0 $ and the two inequalities above, we get \begin{eqnarray}&& \sum_{i=1}^{p}\alpha^{*}_{i}\left(\left( f_{i}(\kappa)-\zeta_{i}^{*}g_{i}(\kappa)\right) -\left(f_{i}(\kappa^{*})-\zeta_{i}^{*}g_{i}(\kappa^{*}) \right) \right) + \sum_{j=1}^{m}\beta^{*}_{j}\left(h_{j}(\kappa)-h_{j}(\kappa^{*}) \right) \nonumber\\\hspace{0.5in} &\geqslant& \sum_{i=1}^{p}\alpha^{*}_{i}\left(f'_{i+}\left( \gamma_{\kappa^{*},E(\kappa)}(t)\right) -\zeta_{i}^{*}g'_{i+}\left( \gamma_{\kappa^{*},E(\kappa)}(t)\right) \right)\nonumber\\\hspace{0.5in}&+&\sum_{j=1}^{m}\beta^{*}_{j} h'_{j+}\left( \gamma_{\kappa^{*},E(\kappa)}(t)\right)\nonumber\\\hspace{0.5in} &\geqslant& 0. \end{eqnarray} In view of $ h_{j}(\kappa)\leq 0, \beta^{*}_{j}\geqslant 0, \beta^{*}_{j}h_{j}(\kappa^{*})\geqslant 0\ (j\in \aleph) $ and $ \zeta^{*}_{i}= \frac{f_{i}(\kappa^{*})}{g_{i}(\kappa^{*})}\ \ \ (i\in P) $, then \begin{equation}\label{eq22} \sum_{i=1}^{p}\alpha^{*}_{i}\left( f_{i}(\kappa)-\zeta_{i}^{*}g_{i}(\kappa)\right)\geqslant 0, \ \ \ \forall \kappa\in K . \end{equation} Suppose that $ \bar{\kappa} $ is not a weak efficient solution for ($ VFP $). From $ \zeta^{*}_{i}= \frac{f_{i}(\bar{\kappa})}{g_{i}(\bar{\kappa})}\ \ \ (i\in P) $ and Lemma \ref{Lemma1}, it follows that $ \bar{\kappa} $ is not a weak efficient solution for ($ VFP_{\zeta^{*}} $).
Hence, there exists $ \tilde{\kappa}\in K $ such that $$ f_{i}(\tilde{\kappa})-\zeta_{i}^{*}g_{i}(\tilde{\kappa}) <f_{i}(\bar{\kappa})-\zeta_{i}^{*}g_{i}(\bar{\kappa}) = 0,\ \ \ i\in P, $$ and hence $ \sum_{i=1}^{p}\alpha^{*}_{i}\left(f_{i}(\tilde{\kappa})-\zeta_{i}^{*}g_{i}(\tilde{\kappa}) \right)<0 $. This is a contradiction to inequality (\ref{eq22}). The proof of the theorem is completed. \end{proof}
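To make Lemma \ref{Lemma1} concrete, the sketch below checks the single-ratio case numerically on a toy instance of our own choosing ($f(\kappa)=\kappa^{2}+1$, $g(\kappa)=\kappa$ on $[0.5,3]$): the grid minimiser of $f/g$ also minimises the parametric objective $f-\lambda^{*}g$, whose optimal value is $0$.

```python
# Numerical illustration of Lemma 1 (single-objective case): kappa*
# minimises f/g over the feasible set iff it minimises f - lambda* g,
# where lambda* = f(kappa*)/g(kappa*).  Toy instance, our own choice.
f = lambda k: k * k + 1.0
g = lambda k: k                               # g > 0, as (VFP) requires

K = [0.5 + i / 1000.0 for i in range(2501)]   # grid over [0.5, 3]

kappa_star = min(K, key=lambda k: f(k) / g(k))
lam_star = f(kappa_star) / g(kappa_star)

# The parametric problem has the same minimiser, and its optimal value is 0.
kappa_param = min(K, key=lambda k: f(k) - lam_star * g(k))
```

Here the common minimiser is $\kappa^{*}=1$ with $\lambda^{*}=2$, since $f(\kappa)-2g(\kappa)=(\kappa-1)^{2}$.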
\section*{Appendix A} \label{SECappendixA} Here, we describe the integrations which show that \cref{EQcollapsed2} is equivalent to \cref{EQfinal}. \subsection*{A.1 Collapsing $\theta$} Here, we show how to calculate \begin{equation} \mathrm{P}(z|K) = \int_\Theta \mathrm{p}(z,\theta | K) \; \mathrm{d} \theta \, . \label{EQappPz} \end{equation} This corresponds to the first integration expression in \cref{EQcollapsed}. $z$ is a vector which records, for each of the $N$ nodes, which cluster it has been assigned to. The probability for each cluster is in a vector $\theta$, where \[1 = \sum_{k=1}^K \theta_k \, . \] We integrate over the support of the Dirichlet distribution, which we have denoted with $\Theta$ in \cref{EQappPz}, \[ \theta \sim \mbox{Dirichlet}(\alpha,\alpha, \dots) \, , \] where we have made the common simplification in our prior that all members of the vector $\alpha$ are identical; $\alpha_k = \alpha$. $\theta$ is drawn from the Dirichlet prior, \[ \mathrm{p}(\theta) = \frac1{\mathrm{B}(\alpha)} \prod_{k=1}^K \theta_k^{\alpha_k-1} \, , \] where the normalizing constant $\mathrm{B}(\alpha)$ is \[ \mathrm{B}(\alpha) = \frac{ \prod_{k=1}^K \Gamma(\alpha_k) }{ \Gamma\left( \sum_{k=1}^K \alpha_k \right) } \, . \] When we collapse $\theta$, the expression for $\mathrm{P}(z | K)$ becomes the Multivariate P\'olya distribution. In the derivation, we have defined $n_k$ to be the number of nodes in cluster $k$, i.e. \[n_k = \sum_{i=1}^N \left\{ \begin{array}{cc} 1 & \text{if} \; z_i = k \\ 0 & \text{if} \; z_i \neq k\end{array} \right. \, . 
\] In the following expression, we will also find it useful to define another vector of length $K$, \[ \alpha' = (\alpha_1 + n_1, \alpha_2 + n_2, \dots, \alpha_K + n_K) \, , \] \[ \begin{split} \int_\Theta \mathrm{p}(z,\theta | K) \; \mathrm{d}\theta = & {} \int_\Theta \mathrm{p}(\theta | K) \mathrm{P}(z | \theta , K) \; \mathrm{d}\theta \\ = & {} \int_\Theta \mathrm{p}(\theta | K) \prod_{k=1}^K \theta_k^{n_k} \; \mathrm{d}\theta \\ = & {} \int_\Theta \frac1{\mathrm{B}(\alpha)} \prod_{k=1}^K \theta_k^{\alpha_k-1} \prod_{k=1}^K \theta_k^{n_k} \; \mathrm{d}\theta \\ = & {} \int_\Theta \frac1{\mathrm{B}(\alpha)} \prod_{k=1}^K \theta_k^{\alpha_k+n_k-1} \; \mathrm{d}\theta \\ = & {} \frac{\mathrm{B}(\alpha')}{\mathrm{B}(\alpha)} \int_\Theta \frac1{\mathrm{B}(\alpha')} \prod_{k=1}^K \theta_k^{\alpha_k+n_k-1} \; \mathrm{d}\theta \\ = & {} \frac{\mathrm{B}(\alpha')}{\mathrm{B}(\alpha)} \\ = & {} \frac{\Gamma(\sum_{k=1}^K \alpha_k)}{\Gamma(N+\sum_{k=1}^K \alpha_k)} \prod_{k=1}^K \frac{ \Gamma(n_k + \alpha_k) }{ \Gamma(\alpha_k) } \\ = & {} \frac{\Gamma(K \alpha)}{\Gamma(N+K \alpha)} \prod_{k=1}^K \frac{ \Gamma(n_k + \alpha) }{ \Gamma(\alpha) } \, . \end{split} \] \iffalse \subsection*{A.2 Prior on $K$} \cite{Nowicki-01} do not put a prior on $K$ as the number of clusters is taken as an input. We take the approach used in \cite{NobileAllocationSampler} and put a Poisson prior on K, with rate $\lambda = 1$, \begin{equation} K \sim \mbox{Poisson}(1) \nonumber \mathrm{P}(K) & = \frac{\lambda^K}{K!} e^{-\lambda} = \frac1{K!} e^{-1} \propto \frac1{K!} \label{PK} \end{equation} \fi \subsection*{A.2 Collapsing $\pi$} Now we look at the second integration expression in \cref{EQcollapsed}. This describes how to calculate the probability of a network, $x$, given a clustering, $z$, and the number of clusters, $K$. \[ \mathrm{P}(x|z,K) = \int_{\Pi} \mathrm{P}(x,\pi|z,K) \; \mathrm{d} \pi \, . 
\] This depends on whether we're using the unweighted (Bernoulli) or integer-weighted (Poisson) model for edges. It is also possible to allow real-valued weights with a Normal distribution and suitable priors; an example of such a model is solved in Appendix B.2 of \cite{WyseFriel}. That paper is relevant for all the derivations here, as its collapsing approach is quite similar to the one used in this paper. The number of pairs of nodes in the block between clusters $k$ and $l$ will be denoted $p_{kl}$; for blocks on the diagonal, $p_{kk}$ will depend on whether the edges are directed and on whether self loops are allowed; see \cref{EQpkkCountPairs} for details. The relevant probabilities for a given block will be shown to be a function only of $p_{kl}$ and of the total weight (or total number of edges) in that block. We'll denote this total weight as \[ y_{kl} = \sum_{i,j | z_i = k, z_j=l} x_{ij} \, . \] In an undirected graph, we should consider each pair of nodes only once, \[ y_{kl} = \sum_{i,j | i<j, z_i = k, z_j=l} x_{ij} \, . \] We are to calculate the integral for a single block. $x_{(kl)}$ represents the submatrix of $x$ corresponding to pairs of nodes in clusters $k$ and $l$. Our goal is to simplify the expression such that there is one factor for each block, \[ \begin{split} \mathrm{P}(x|z,K) & = \prod \mathrm{P}(x_{(kl)}|z,K) \\ & = \prod \int \mathrm{P}(x_{(kl)},\pi_{kl}|z,K) \; \mathrm{d} \pi_{kl} \, . \end{split} \] For directed graphs, the product is $\prod_{k,l}$, giving $K \times K$ blocks. But for undirected graphs, the product is $\prod_{k,l|k \leq l}$, giving $\frac12 K(K+1)$ blocks. The domain of the integration will be either $\int_0^1$ or $\int_0^\infty$, depending on which of the two data models, unweighted or weighted, is in effect. We'll first consider the unweighted (Bernoulli) model. 
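The block statistics $y_{kl}$ and the pair counts $p_{kl}$ defined above can be tabulated in a single pass over the node pairs. A minimal sketch (our own code, for the undirected case without self loops, so that $p_{kk}=\frac12 n_k(n_k-1)$):

```python
# Tabulate the block statistics: y[k][l] is the total edge weight between
# clusters k and l, and p[k][l] is the number of node pairs in that block.
# Undirected network without self loops (our assumption); the directed and
# self-loop variants change only the pair iteration.
def block_stats(x, z, K):
    """x: symmetric adjacency matrix (list of lists), z: cluster labels."""
    N = len(x)
    y = [[0] * K for _ in range(K)]
    p = [[0] * K for _ in range(K)]
    for i in range(N):
        for j in range(i + 1, N):          # each unordered pair once
            k, l = sorted((z[i], z[j]))    # store blocks with k <= l
            y[k][l] += x[i][j]
            p[k][l] += 1
    return y, p
```

With these tables in hand, each block's factor in $\mathrm{P}(x|z,K)$ depends only on $y_{kl}$ and $p_{kl}$, as the derivations below show.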
The probability of a node in cluster $k$ connecting to a node in cluster $l$ is constrained by \[0 < \pi_{kl} <1 \, ,\] and each element of $x_{(kl)}$ is drawn from a Bernoulli distribution with parameter $\pi_{kl}$, \[ \mathrm{P}(x_{(kl)} | \pi_{kl} , z, K) = \pi_{kl}^{y_{kl}} (1-\pi_{kl})^{p_{kl}-y_{kl}} \, . \] The prior for $\pi_{kl}$ is a Beta($\beta_1,\beta_2$) distribution. \[ \begin{split} \mathrm{P}(x_{(kl)} |z,K) = & {} \int_0^1 \mathrm{p}(x_{(kl)} , \pi_{kl} |z,K) \; \mathrm{d} \pi_{kl} \\ = & {} \int_0^1 \mathrm{p}(\pi_{kl}) \; \mathrm{P}(x_{(kl)} | \pi_{kl} ,z,K) \; \mathrm{d} \pi_{kl} \\ = & {} \int_0^1 \frac{ \pi_{kl}^{\beta_1-1} (1-\pi_{kl})^{\beta_2-1} }{\text{B}(\beta_1,\beta_2)} \; \pi_{kl}^{y_{kl}} (1-\pi_{kl})^{p_{kl}-y_{kl}} \; \mathrm{d} \pi_{kl} \\ = & {} \int_0^1 \frac{ \pi_{kl}^{y_{kl}+\beta_1-1} (1-\pi_{kl})^{p_{kl}-y_{kl}+\beta_2-1} }{\text{B}(\beta_1,\beta_2)} \; \mathrm{d} \pi_{kl} \\ = & {} \frac{ \text{B}(y_{kl}+\beta_1,p_{kl}-y_{kl}+\beta_2) }{\text{B}(\beta_1,\beta_2)} \\ & {} \times \int_0^1 \frac{ \pi_{kl}^{y_{kl}+\beta_1-1} (1-\pi_{kl})^{p_{kl}-y_{kl}+\beta_2-1} }{ \text{B}(y_{kl}+\beta_1,p_{kl}-y_{kl}+\beta_2) } \; \mathrm{d} \pi_{kl} \\ = & {} \frac{ \text{B}(y_{kl}+\beta_1,p_{kl}-y_{kl}+\beta_2) }{\text{B}(\beta_1,\beta_2)} \, , \end{split} \] where $\mathrm{B}(\beta_1, \beta_2) = \frac{ \Gamma(\beta_1) \Gamma(\beta_2) }{ \Gamma(\beta_1+\beta_2) }$ is the Beta function. This result is closely related to the Beta-binomial distribution. Next, we'll consider the Poisson model for edges in more detail. Again, we will see that $p_{kl}$ and $y_{kl}$ are sufficient for $\mathrm{P}(x_{(kl)} | K, z)$. In this integer-weighted model, an edge (or non-edge) between a node in cluster $k$ and a node in cluster $l$ gets its weight from a Poisson distribution \[ x_{ij}|\pi_{kl} \sim \mbox{Poisson}(\pi_{kl}) \, , \] and $\pi_{kl}> 0$. 
This gives us, iterating over the pairs of nodes in the block, \[ \mathrm{P}(x_{(kl)} | \pi_{kl} , z, K) = \prod_{i,j \in k,l} \frac{ \pi_{kl}^{x_{ij}} }{ x_{ij}! } \mbox{exp}(-\pi_{kl}) \, . \] We can combine this expression for every block, \[ \begin{split} \mathrm{P}(x| \pi , z, K) & = \prod_{kl} \mathrm{P}(x_{(kl)} | \pi_{kl} , z, K) \\ & = \prod_{kl} \prod_{i,j \in k,l} \frac{ \pi_{kl}^{x_{ij}} }{ x_{ij}! } \mbox{exp}(-\pi_{kl}) \\ & = \prod_{ij} \frac1{ x_{ij}! } \prod_{kl} \prod_{i,j \in k,l} { \pi_{kl}^{x_{ij}} } \mbox{exp}(-\pi_{kl}) \, . \end{split} \] We can ignore the factor $\prod_{ij} \frac1{x_{ij}!}$, as exactly one such term is included for every pair of nodes in the network. It contributes a constant factor to \cref{EQfinal}; this factor depends only on the network $x$, and not on $K$ or $z$ or any other variable of interest, and hence we can ignore it for the purposes of \cref{EQfinal}. Therefore, for our purposes, we can use the following expression, which omits only that constant factor, \[ \mathrm{P}(x_{(kl)} | \pi_{kl} , z, K) \propto \prod_{i,j \in k,l} { \pi_{kl}^{x_{ij}} } \mbox{exp}(-\pi_{kl}) \, . \] We'll place a Gamma prior on the rates, \[ \pi_{kl} \sim \mbox{Gamma}(s, \phi) \, . \] \[ \begin{split} \mathrm{P}(x_{(kl)} & | z, K) = \int_0^\infty \mathrm{p} (x_{(kl)}, \pi_{kl} | z, K) \mathrm{d} \pi_{kl} \\ & = {} \int_0^\infty \pi_{kl}^{s-1} \frac{ e^{- \pi_{kl} / \phi} } { \Gamma(s) \phi^s } \prod_{i,j \in k,l} \frac{ \pi_{kl}^{x_{ij}} }{ x_{ij}! } e^{-\pi_{kl}} \mathrm{d} \pi_{kl} \\ = {} \prod_{i,j \in k,l} & \frac1{x_{ij}!} \int_0^\infty \pi_{kl}^{s-1} \frac{ e^{- \pi_{kl} / \phi} } { \Gamma(s) \phi^s } \prod_{i,j \in k,l} { \pi_{kl}^{x_{ij}} } e^{-\pi_{kl}} \mathrm{d} \pi_{kl} \\ & \propto {} \int_0^\infty \pi_{kl}^{s-1} \frac{ e^{- \pi_{kl} / \phi} } { \Gamma(s) \phi^s } \prod_{i,j \in k,l} { \pi_{kl}^{x_{ij}} } e^{-\pi_{kl}} \mathrm{d} \pi_{kl} \\ & = {} \int_0^\infty \pi_{kl}^{s-1 + \sum x_{ij}} \; \frac{ \exp(-\pi_{kl} p_{kl} - \frac{\pi_{kl}}{\phi}) } { \Gamma(s) \phi^s } \mathrm{d} \pi_{kl} \, . \end{split} \] Recall that we defined $y_{kl} = \sum_{i,j \in k,l} x_{ij} $. We'll now substitute that in and also use the following definitions: \[ \begin{split} s' & = s + y_{kl} \\ \frac1{\phi'} & = p_{kl} + \frac1{\phi} \, . \end{split} \] Where $\text{Gamma}(s,\phi)$ was the prior on $\pi_{kl}$, $\text{Gamma}(s',\phi')$ is the posterior now that we have observed edges with total weight $y_{kl}$ between $p_{kl}$ pairs of nodes. Returning to the integral, which we denote $f$, and rearranging such that we can cancel out the integral (because it is the integral of a Gamma density and hence equals 1), \[ \begin{split} f(x_{(kl)} | z, K) & = {} \int_0^\infty \pi_{kl}^{s'-1} \; \frac{ \exp(- \frac{\pi_{kl}}{\phi'}) } { \Gamma(s) \phi^s } \mathrm{d} \pi_{kl} \\ & = {} \frac{ \Gamma(s') \phi'^{s'} }{ \Gamma(s) \phi^s } \int_0^\infty \pi_{kl}^{s'-1} \; \frac{ \exp(- \frac{\pi_{kl}}{\phi'}) } { \Gamma(s') \phi'^{s'} } \mathrm{d} \pi_{kl} \\ & = {} \frac{ \Gamma(s') \phi'^{s'} }{ \Gamma(s) \phi^s } \\ & = {} \frac{ \Gamma(s + y_{kl}) \left(\frac{1}{p_{kl} + \frac1{\phi} }\right)^{s + y_{kl}} }{ \Gamma(s) \phi^s } \, . \end{split} \] \begin{comment} To recap the symbols: $N$ is the number of nodes. $K$ is the number of clusters. $z$ is a vector of length $N$ giving the cluster membership of each node. $n_k$ is the number of nodes in cluster $k$, i.e. $n_k$ is a function of $z$.
$B$ is the number of blocks, where $B = K \times K$ for directed networks, and for undirected $B = \frac12 K \times (K+1)$. $p_b$ is the number of pairs of nodes in block $b$. For off-diagonal blocks, $p_b = n_i n_j$ where $i,j$ are the clusters involved in block $b$. For on-diagonal blocks, $i=j$ and therefore $n_i = n_j$, and the form of $p_b$ depends on whether self loops are allowed: either $\frac12 n_i (n_i-1)$ or $\frac12 n_i (n_i+1)$ for undirected, or $n_i (n_i-1)$ or $n_i n_i$ for directed. $y_b$ is the total weight (or number) of edges in block $b$. The priors are specified by $\alpha, s, \phi, \beta_1, \beta_2$. There are defaults in our software, changeable via command-line parameters. The defaults are $\alpha=1$; for the Bernoulli model, $\beta_1 = \beta_2 = 1$ in the Beta prior, giving us a Uniform(0,1). The prior used in the Poisson model is Gamma($s,\phi$) with $s = 1, \phi=1000$, as this is approximately $\mathrm{P}(\pi_b) = \frac1{\pi_b}$. \end{comment} \section{Conclusion} \label{SECconclusion} The original stochastic blockmodel was tested on a small network with two clusters. We have shown how Bayesian models, collapsing, and modern MCMC techniques can combine to create an algorithm which can accurately estimate the clusters, and the number of clusters, without compromising on speed. It is sometimes stated that MCMC is necessarily slower than other methods, ``effectively leading to severe size limitations (around 200 nodes)'' \citep{GazalVariational}. The MCMC method we have presented scales to thousands of nodes and is more scalable than a recent variational method. We do not claim that MCMC will always be faster than the alternatives, but we observe that the scalability of Metropolis-Hastings MCMC depends on the particular model and on the particular proposal functions used. It may be an open question which methods will prove most scalable in the long term, as further improvements are made to all methods.
Our application to the survey data demonstrated that \emph{block-modelling} can detect structure in networks that might be missed by \emph{community-finding} algorithms. Sometimes the links between clusters are more interesting than the links within clusters. \subsection*{Acknowledgements} This research was supported by Science Foundation Ireland (SFI) Grant No. 08/SRC/I1407 - Clique Research Cluster \section{Estimation} \label{SECestimation} In this section, we describe our MCMC algorithm which samples, given a network $x$, from the posterior $K,z|x$. The moves are Metropolis-Hastings moves \citep{HastingsMetropolis}. We define the moves and calculate the proposal probabilities and close the section with a discussion of the label-switching phenomenon, where we use the method proposed in \cite{NobileAllocationSampler} to summarize the clusterings found by the sampler. Our algorithm is closely based on the \emph{allocation sampler} \cite{NobileAllocationSampler}, which was originally presented in the context of a mixture-of-Gaussians model.
In fact, it can be applied to any model that can be collapsed to the form $\mathrm{P}(x,z,K)$ where $x$ is some fixed (observed) data and the goal is to sample the clustering and the number of clusters $(z,K)$. In the Gibbs sampler used in \cite{Nowicki-01}, the parameters are not collapsed, and sampling is from \[ z,\pi,\theta | x,K . \] In their experiment on the Hansell dataset, $K$ was fixed to 2. As a result of this value for $K$, $\theta$ reduced to a single real number specifying the relative expected size of the two clusters. Expressions were presented for $p( \theta | z,\pi,x,K)$, $P( z | \theta,\pi,x,K )$ and $ p(\pi | z,\theta,x,K)$ such that the various elements $z_i$ (or $\pi_{kl}$) are conditionally independent of each other, given $(\pi,x,K)$ (or $(z,x,K)$), allowing for a straightforward Gibbs sampler. In contrast, we develop an algorithm that searches across the full sample space of all possible clusterings, $z$, for all $K$, drawing from the posterior, \[ z, K | x , \] \noindent using \cref{EQfinal} as the desired stationary distribution of the Markov Chain. We use four moves: \begin{itemize} \item \emph{MK}: Metropolis move to increase or decrease $K$, adding or removing an empty cluster. \item \emph{GS}: Gibbs sampling on a randomly-selected node. Fixing all but one node in $z$, select a new cluster assignment for that node. \item \emph{M3}: Metropolis-Hastings on the labels in two clusters. This is the M3 move proposed in \cite{NobileAllocationSampler}. Two clusters are selected at random and the nodes are reassigned to the two clusters using a novel scheme fully described in that paper. $K$ is not affected by this move. \item \emph{AE}: The \emph{absorb-eject} move is a Metropolis-Hastings merge/split cluster move, as described in \cite{NobileAllocationSampler}. This move does affect $K$ along with $z$. \end{itemize} At each iteration, one of these four moves is selected at random and attempted.
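Schematically, the outer loop is just a uniform random choice among the four moves (a sketch in Python; each move function is a hypothetical stand-in that applies its own accept/reject internally and returns the, possibly unchanged, state):

```python
import random

def run_sampler(state, moves, iterations, rng=random):
    """Outer MCMC loop: each iteration picks one of the moves (MK, GS,
    M3, AE) uniformly at random and attempts it. A move is a function
    state -> state."""
    samples = []
    for _ in range(iterations):
        state = rng.choice(moves)(state)
        samples.append(state)
    return samples
```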
All the moves are essentially Metropolis-Hastings moves; a move to modify $z$ and/or $K$ is generated randomly, proposing a new state $(z',K')$, and the ratio of the new density to the old density $\frac{\mathrm{P}(z',K'|x)}{\mathrm{P}(z,K|x)}=\frac{\mathrm{P}(x,z',K')}{\mathrm{P}(x,z,K)}$ is calculated. This ratio is often easy to calculate quickly as, for certain moves, only a small number of factors in \cref{EQfinal} are affected by the proposed move. We must also calculate the probability of this particular move being proposed, and of the reverse move being proposed. The \emph{proposal probability ratio} is combined with the \emph{posterior mass ratio} to give us the move \emph{acceptance probability}, \begin{equation} \operatorname{min} \left( 1, \frac { \mathrm{P}(x,z',K') } { \mathrm{P}(x,z,K) } \times \frac { \mathrm{P}_\text{prop}( (K',z') \rightarrow (K,z)) } { \mathrm{P}_\text{prop}( (K,z) \rightarrow (K',z')) } \right) \, , \label{EQdetailedbalance} \end{equation} where $\mathrm{P}_\text{prop}( (K,z) \rightarrow (K',z'))$ is the probability that the algorithm, given current state $(K,z)$, will propose a move to $(K',z')$. \iffalse We needn't concern ourselves with moves that simply translate a state to itself. For each state $(z,K)$ we must consider each state $(z',K') \neq (z,K)$ that can be reached in one move from $(z,K)$. There should be a one-to-one relationship between these (i.e. the reverse step should always be possible) and we require that \[ \frac {\mathrm{P}_\text{transition}( (z,K) \rightarrow (z',K') )} {\mathrm{P}_\text{transition}( (z',K') \rightarrow (z,K) )} =\frac { \mathrm{P}(z',K'|x) } { \mathrm{P}(z,K|x) } \] \fi In the remainder of this section, we discuss the four moves in detail, derive the proposal probabilities and describe the computational complexity of the moves. \subsection{MK} The \emph{MK} move increases or decreases the number of clusters by adding or removing an empty cluster.
If \emph{MK} is selected, then the algorithm selects with 50\% probability whether to attempt to add an empty cluster, or to delete one. If it chooses to attempt a delete, then one cluster is selected at random; if that cluster is not empty, then the attempt is abandoned. If it chooses to attempt an insert, it selects a new cluster identifier randomly from $\{1,\dots,K+1\}$ for the new cluster and inserts a new empty cluster with that identifier, renaming any existing clusters as necessary. The proposal probabilities are \[ \begin{split} \mathrm{P}_\text{prop}( (K,z) \rightarrow (K+1,z')) & = \frac {0.5} {K+1} \\ \mathrm{P}_\text{prop}( (K',z') \rightarrow (K'-1,z)) & = \left\{ \begin{array}{rl} \frac {0.5} {K'} & \text{if } K' > 1 \\ 0 & \text{otherwise} \end{array} \right. \, . \end{split} \] By adding an empty cluster, $K$ increases to $K'=K+1$ and the posterior mass change is: \[ \begin{split} \frac{\mathrm{P}(x,z,K')}{\mathrm{P}(x,z,K)} & = \frac{K!}{(K+1)!} \frac {\left( \frac {\Gamma(\alpha (K+1)) \prod_{k=1}^{K+1} \Gamma(n_k + \alpha)} {\Gamma(\alpha)^{K+1} \Gamma(N + \alpha (K+1))} \right)} {\left( \frac {\Gamma(\alpha K) \prod_{k=1}^K \Gamma(n_k + \alpha)} {\Gamma(\alpha)^K \Gamma(N + \alpha K)} \right)} \\ & = \frac { \Gamma(\alpha (K+1)) \Gamma(N + \alpha K)} { (K+1) \Gamma(\alpha K) \Gamma(N + \alpha (K+1)) }\,. \end{split} \] \iffalse Using this formula, we define our acceptance probability for \emph{MK}, when it has been proposed to add an empty community at a particular offset, to be \[ \operatorname{min} \left( 1, \frac { 50 \% / (K') } { 50 \% / (K+1) } \frac { \mathrm{P}(x,z',K') } { \mathrm{P}(x,z,K) } \right) \] And $K'$ = $K+1$ and so this is a simple symmetric proposal and the acceptance ratio cancels to \[ \operatorname{min} \left( 1, \frac { \mathrm{P}(x,z',K') } { \mathrm{P}(x,z,K) } \right) \] \fi The computational complexity of this move is constant. 
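In log space, the \emph{MK} accept step combines this posterior mass ratio with \cref{EQdetailedbalance}; since the insert and delete proposal probabilities are both $0.5/(K+1)$, the proposal ratio is 1. A sketch (Python, with hypothetical function names):

```python
import random
from math import exp, lgamma, log

def log_mk_ratio(N, K, alpha=1.0):
    """Log posterior-mass ratio P(x,z,K+1)/P(x,z,K) when an empty cluster
    is added: the x-dependent factors cancel, leaving only the terms
    shown in the text."""
    return (lgamma(alpha * (K + 1)) + lgamma(N + alpha * K)
            - log(K + 1) - lgamma(alpha * K) - lgamma(N + alpha * (K + 1)))

def mh_accept(log_mass_ratio, log_prop_ratio, u=None):
    """Generic Metropolis-Hastings test: accept with probability
    min(1, mass ratio * proposal ratio), computed in log space.
    u is an optional uniform draw, exposed for testing."""
    if u is None:
        u = random.random()
    log_r = log_mass_ratio + log_prop_ratio
    return log_r >= 0 or u < exp(log_r)
```

For $\alpha=1$ the ratio simplifies to $K/((K+1)(N+K))$, which gives a quick check of the formula.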
\subsection{GS} The Gibbs update move, \emph{GS}, selects a node $i$ at random to be assigned to a new cluster. All other nodes are kept fixed in their current cluster assignment, i.e. a single element of the vector $z$ is updated. Denote by $z' = z_{\{z_i \rightarrow k\}}$ the modified clustering resulting from a move of node $i$ to cluster $k$. For each possible value of $ z_i \in \{1,\dots,K\} $, $z_i$ is chosen with probability proportional to $\mathrm{P}(x,z_{\{z_i \rightarrow k\}},K)$. The proposal is then always accepted. Bear in mind that this move often simply reassigns the node to the same cluster it was in before the \emph{GS} move was attempted. The calculations involved in \emph{GS} are quite complex as many of the factors in \cref{EQfinal} are affected. The sizes of the clusters are changed as the node is considered for inclusion in each cluster, and the number of edges and pairs of nodes are changed in many of the blocks. The computational complexity is $\mathcal{O}(K^2)+\mathcal{O}(N)$ as every block needs to be considered for each of the $K$ possible moves and every node may be checked to see if it is connected or not to the current node. The $\mathcal{O}(N)$ term is just a theoretical worst case over all possible networks. Our algorithm iterates over the neighbours of the current node and this is sufficient to perform all the necessary calculations. There is no need to iterate over the non-neighbours, and therefore the average complexity is equal to the average degree, which will be much less than $N$ in real-world sparse networks. For small $K$, and assuming a given average degree, the complexity of the \emph{GS} move is independent of $N$. \subsection{M3} \emph{M3} is a more complex move and was introduced in \cite{NobileAllocationSampler}. Two distinct clusters are selected at random, $j$ and $k$.
All the nodes in these two clusters are removed from their current clusters and placed in a list which is then randomly reordered -- call this ordered list $A = \{a_1, \dots, a_{n_j+n_k}\}$, of size equal to the total number of nodes in the two clusters. The software creates a temporary cluster to store these nodes until they are reassigned to the original two clusters. One node at a time is selected from $A$ and is assigned to one of the two clusters according to some assignment probability. As the nodes are assigned (or reassigned) the new cluster assignments are stored in a list $B_h = \{b_1, \dots, b_{h-1}\}$, where $b_i$ is the new cluster assignment of node $a_i$ and $B_h$ represents the assignments before the $h^{\rm th}$ node in A is processed. Iterating through the list $A$, $a_h$ is assigned to either cluster $j$ or cluster $k$ with probability satisfying \[ p^{a_h \rightarrow j}_{B_h} + p^{a_h \rightarrow k}_{B_h} = 1 \, , \] \noindent conditional on the nodes $B_h$ that have already been (re-)assigned. Conceptually, any arbitrary assignment distribution can be chosen, as long as the probabilities for each choice are non-zero and sum to one. Once all nodes in the list have been assigned to the two clusters, the proposal probability is given by \[ \mathrm{P}_\text{prop}( z \rightarrow z') = \prod_{h =1}^{n_j + n_k} p^{a_h\rightarrow b_h}_{B_h} \, . \] We remark that while the order in which the nodes are reinserted is random, it can be shown that this random ordering does not affect the acceptance probability. In \cite{NobileAllocationSampler}, it is proposed to choose the ratio of the assignment probabilities as the ratio of the two posterior probabilities resulting from the assignments of the first $h$ nodes. Specifically, denote by $z_{\{a_h\rightarrow l, B_h\}}$, the clustering that assigns the first $h-1$ nodes of A according to $B_h$ and assigns $a_h$ to cluster $l$. 
Let $P(x', z_{\{a_h\rightarrow l, B_h\}}, K)$ be the posterior probability of this clustering on the network $x'$ \emph{where all unassigned nodes and edges involving these nodes are ignored}. Then \[ \begin{split} \frac {p^{a_h \rightarrow j}_{B_h}} {p^{a_h \rightarrow k}_{B_h}} = \frac { \mathrm{P}(x',z_{\{a_h\rightarrow j, B_h\}},K) } { \mathrm{P}(x',z_{\{a_h\rightarrow k, B_h\}},K) } \, . \end{split} \] This heuristic should guide the selection towards `good' choices. \iffalse It should be noted that, when calculating the posterior mass $\mathrm{P}(x,z_{B_h}^{A_h=*},K)$, we condition on the nodes that have been assigned and we otherwise ignore those nodes in A which have not yet been assigned. All pairs of nodes involving those not-yet-assigned nodes are ignored in the calculation. The original clustering is called $z$ and the proposed new clustering is $z'$. At the end, after all the nodes in A have been (re)assigned, the temporary cluster is empty again and is removed. \fi To calculate the proposal probability of the reverse proposal, the list $A$ is again traversed to calculate \[ \mathrm{P}_\text{prop}( z' \rightarrow z) = \prod_{h =1}^{n_j + n_k} p^{a_h\rightarrow z_{a_h}}_{B'_h} \, , \] \noindent where $B'_h = \{z_{a_1},\dots, z_{a_{h-1}}\}$. Our algorithm has been optimized for sparser networks. The complexity of \emph{M3} is made up of three terms. First, it is possible that many or all nodes will be reassigned, causing a complexity of $\mathcal{O}(N)$ while updating the data structure that records the size of each cluster. Second, we keep a record of the number of edges within each block; the M3 move will consider each edge in the network at most once, as it moves the edge from one block to another, leading to a complexity of $\mathcal{O}(M)$, where $M$ is the number of edges in the network.
Finally, once the data structures have been updated, a new posterior mass must be calculated by iterating over each cluster and over each block, querying the summary data structures, to sum the new terms in \cref{EQfinal}; this has a complexity of $\mathcal{O}(K^2)$. Together, this gives a complexity of $\mathcal{O}(N)+\mathcal{O}(M)+\mathcal{O}(K^2)$. The first term may be ignored, since for most networks that are considered here and in the literature, $M > N$. As long as the number of clusters is small, $K^2 \ll M$, the $\mathcal{O}(M)$ term dominates. While in the worst case $M=N^2$, in practice, for the sparse networks we consider, $M\ll N^2$. \subsection{AE} In the \emph{absorb-eject} \emph{AE} move, a cluster is selected at random and split into two clusters, or else the reverse move can merge two clusters. This move therefore can both change the number of clusters $K$ and change the clustering $z$. The move will first choose, with 50\% probability, whether to attempt a merge or split. In the case of the split move, one of the $K$ clusters is selected at random. Also, the cluster identifier of the proposed new cluster is selected at random from $\{1,\dots, K+1\}$. Finally, the nodes in the original cluster are assigned between the two clusters. This is similar to the \emph{M3} move and a heuristic to guide the assignment, as in \emph{M3}, could be considered. Instead, as in \cite{NobileAllocationSampler}, we use a \emph{probability of ejection}, $p_E$, selected randomly from a $\text{Uniform}(0,1)$ distribution, such that each node is assigned to the new cluster with probability $p_E$. In such a move, the proposal probability is dependent on $p_E$. Rather than specify an ejection probability, we integrate over the choice of $p_E$ in much the same manner as collapsing.
Given $(z,K)$ and a proposal to split into $(z', K'=K+1)$, where a cluster of size $n_k$ is split into clusters of size $n_{j_1}$ and $n_{j_2}$, the resulting proposal probability for an eject move is \[ \mathrm{P}_\text{prop}( (z,K) \rightarrow (z',K') ) = \frac { \Gamma ( n_{j_1} + 1 ) \Gamma ( n_{j_2} + 1 ) } { K (K+1) \Gamma ( n_k + 2 ) } \,. \] For a merge, the proposal probability is simply obtained as the probability of selecting the two clusters for merger from the $K'=K+1$ possible clusters. One cluster is selected which will retain its current nodes and which will expand to contain the nodes in another, randomly selected, cluster, \[ \mathrm{P}_\text{prop}( (z',K') \rightarrow (z,K) ) = \frac1K \frac1{K+1} \,. \] The complexity is similar to that of the M3 move. \subsection{Applying the moves} In all simulations discussed in \cref{SECevaluation}, the algorithm is seeded by initializing $K=2$ and assigning the nodes randomly to one of the two initial clusters. The first two moves, \emph{MK} and \emph{GS}, are sufficient to sample the space but have slow mixing. The \emph{AE} move is sufficient on its own as it can add or remove clusters as well as move the nodes to reach any $(z,K)$ state. In practice, we'll see in \cref{SECevaluation} that the combination of \emph{AE} and \emph{M3} is good in the initialization stages to burn in to a good estimate of both $z$ and $K$ and to lessen the dependence on the initialization. It is possible to envisage many extensions to these moves. For example, a form of \emph{M3} could be made which selects three clusters to rearrange. The \emph{AE} move could be extended to include the assignment heuristic of the \emph{M3} move. \subsection{Label Switching} \label{SEClabelswitching} For any given $z$, with $K$ clusters (assuming they are all non-empty), there are $K!$ ways to relabel the clusters, resulting in $K!$ effectively equivalent clusterings.
The posterior has this symmetry and, as the MCMC algorithm proceeds, it often swaps the labels on two clusters, in particular during the \emph{M3} move. This is known as the \emph{label switching phenomenon}. The posterior distribution for any $z_i | x,K$ assigns node $i$ to each of the $K$ clusters with probability $\frac1K$, so in the long run every node is assigned with equal probability to every cluster. While each $z_i$ is uniformly distributed between $1$ and $K$, the components of $z$ are dependent on each other, and pairs of nodes that tend to share a cluster will tend to have the same values at their corresponding components of $z$. Depending on the context, this may not be an issue of concern. For example, if the aim is to estimate $K$ or to estimate the probability of two nodes sharing the same cluster, see \cref{FIGKis2or3share}, or to estimate the size of the largest cluster, then label switching is not a problem. However, it is sometimes desirable to undo this label switching by relabelling the clusters, such that nodes are typically assigned to a single cluster identifier along with those other nodes that they typically share a cluster with. Such a relabelling can, for example, make it easier to identify the nodes which are not strongly tied to any one cluster. We use the algorithm in \cite{NobileAllocationSampler} to undo the label switching by attempting to maximize the similarity between pairs of clusterings, after the burn-in clusterings have been discarded. Given two $z$ vectors, at two different points in the Markov Chain, $t$ and $u$, define the distance between them to be \[ D(z^{(t)}, z^{(u)}) = \sum_{i=1}^N I(z^{(t)}_i,z^{(u)}_i) \, , \] where $I$ is an indicator function that returns 0 if node $i$ is assigned to the same cluster at point $t$ and point $u$, and returns 1 otherwise. For each $z^{(t)}$, consider $z^{(*t)}$, one of the $K!$ possible relabelled versions of $z^{(t)}$.
The Markov Chain is run for $a$ iterations, discarding the first $b$ iterations as burn-in. Ideally, the goal is to find the relabelling that minimizes the sum over all pairs of $u$ and $t$, \[ \sum_{t=b}^a \sum_{u=t+1}^a D(z^{(*t)}, z^{(*u)}) \, , \] but it is not computationally feasible to search across the full space of all relabellings. Each state can be relabelled in approximately $K!$ different ways; the precise number depends on the number of non-empty clusters. There are $a$ states altogether, therefore the space of all possible relabellings of all states will have $(K!)^a$ elements; this will be intractable for non-negligible $a$. In our experiments, $a$ tends to be of the order of one million. Instead, we use the \emph{online} algorithm proposed in \cite{NobileAllocationSampler}. It first orders the states from the Markov chain by the number of non-empty clusters. Then, it iterates through the states, comparing each state to all the preceding relabelled states and relabelling the current state such that the total distance to all the preceding relabelled states is minimized. We will see in \cref{SECsurvey} how this algorithm helps to summarize the output of the Markov Chain. This algorithm is fast. On a 2.4~GHz Intel~Xeon in a server with 128GB RAM, it takes 43 seconds to process the output of 1 million iterations of that data. In comparison, it takes 610 seconds to run the SBM MCMC algorithm in order to get the states to feed into the label-unswitching algorithm. Note also that the algorithm doesn't take up much memory --- even with a network with 10 million edges, the memory usage doesn't exceed 2GB. Once the label-switched set of states is obtained, a posterior distribution of the clustering for each node, $ z_i | x $, can be calculated.
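The per-state relabelling step can be sketched by brute force over all $K!$ permutations (Python, with cluster labels indexed from 0; this exhaustive version is feasible only for small $K$, which is exactly why the online greedy ordering is used in practice):

```python
from itertools import permutations

def clustering_distance(zt, zu):
    """D(z_t, z_u): the number of nodes whose cluster labels disagree."""
    return sum(1 for a, b in zip(zt, zu) if a != b)

def best_relabelling(z, reference, K):
    """Return the relabelling of z (one of K! candidates) that minimizes
    the distance D to an already-relabelled reference clustering."""
    best, best_d = list(z), clustering_distance(z, reference)
    for perm in permutations(range(K)):
        cand = [perm[c] for c in z]          # apply label permutation
        d = clustering_distance(cand, reference)
        if d < best_d:
            best, best_d = cand, d
    return best
```

The online algorithm applies this step state by state, comparing against the already-relabelled predecessors rather than against a single reference.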
There is a similarity here with variational methods \cite{Daudin-08, LatoucheILvbStatMod} as they model the posterior in this manner, where each node's variational posterior is independent of the other nodes' variational posterior. It may be interesting to compare these approximate posteriors to the approximate posterior found by our method. In the experiments we perform later in \cref{SECevaluation,SECsurvey}, the vast majority of nodes are strongly assigned by this label-switching algorithm to one of the clusters with at least 99\% probability in the posterior. Therefore, the distance $D(.,.)$ between each state and this `summary' state is usually quite small. We take this as an indication that the online heuristic has done a reasonable job of minimizing the distance between the states, at least for those networks. \iffalse \subsection{Computation} Our software is written in C++ and is available at \url{http://sites.google.com/site/aaronmcdaid/sbm}. We also hope to make it available as an R package. The \emph{M3} proposal can be very computationally demanding. It involves repeatedly looking at the posterior density when a single node is to be assigned to one of two clusters. Even though there are $K^2$ blocks, it can be shown that at most $4K-4$ blocks are affected when we propose moving a node from one cluster to another. We can take advantage of that to efficiently calculate the ratio of the two potential proposals. For large $K$ this becomes very important. We take this efficient approach throughout the implementation of the algorithm; we consider only those factors in the posterior density which might change as a result of the various proposals under consideration. \fi \section{Survey of interaction data} \label{SECsurvey} A survey was performed by a team involving one of the authors of this paper at a summer school.
We asked the 74 participants to fill in a survey and record which other participants they knew before the summer school and also which participants they met during the school. 40 of the participants responded and gave us permission to make their survey response available publicly in anonymized format. We created a directed, unweighted network from the data by linking A to B if A recorded either type of relationship with B, resulting in 1,138 edges. This network data is available at \url{https://github.com/aaronmcdaid/Summer-school-survey-network}. \begin{centering} \begin{figure} \includegraphics[width=1.0\columnwidth]{survey/SurveyLabelSwitched} \caption{The interaction survey network of \cref{SECsurvey}. Node-to-cluster membership matrix. 74 rows, one for each participant. There are 8 columns, one for each of the main seven clusters, plus an extra cluster which, with very small probability, is occupied by some nodes. Most nodes are strongly assigned to one cluster, but the grey areas off the diagonal show a small number of nodes that are partially assigned to multiple clusters. } \label{FIGsurveyLabelSwitched} \end{figure} \end{centering} Using the procedure described in \cref{SEClabelswitching}, we are able to summarize the output of the Markov chain in \cref{FIGsurveyLabelSwitched}. This is a matrix which records, for each (relabelled) cluster and node, the posterior probability of that participant being a member of that cluster. Each row represents one participant of the summer school, and the total weight in each row sums to 1.0. We have ordered the rows in this figure in order to bring similar rows together, helping to highlight the sets of nodes which tend to be clustered together in the Markov Chain. As may be observed, most of the participants are strongly assigned to one cluster. Every node is assigned to one of the clusters with at least 75\% posterior probability, and the majority of nodes have at least 99\% posterior probability. \begin{figure}[ht!]
\includegraphics[width=1.0\columnwidth]{survey/adjOrdered.png} \caption{The interaction survey network of \cref{SECsurvey} as a 74$\times$74 adjacency matrix for the 74 participants in the summer school. 7 clusters were found by our method, and this matrix is ordered by the summary clustering found by the label-unswitching method of \cref{SEClabelswitching}. In the text in \cref{SECsurvey}, we interpret the clusters found and show how many of the clusters correspond to the different types of people that attended the event. There were 33 people who did not respond; these can be seen in the last two clusters.} \label{FIGsurveyAdjOrdered} \end{figure} The number of clusters selected is 7, with 90.7\% posterior probability. We can summarize this into a single clustering by assigning each node to its `best' cluster as found by the label-unswitching procedure. In \cref{FIGsurveyAdjOrdered}, we see this clustering. This particular clustering (or label-switched equivalents) has a posterior probability of 20.7\%. (The order in which the clusters are presented differs between \cref{FIGsurveyAdjOrdered} and \cref{FIGsurveyLabelSwitched}.) We then analyzed the clusters to see if they could be meaningfully interpreted. The first thing that stands out is that the final two rows of blocks are empty; these are simply the 33 people who did not respond to the survey. It is interesting to see that the non-respondents have been split into two clusters. Looking at the final two columns of blocks, the differences in how other clusters linked to the non-respondents can be seen. With the help of one of the organizers, we verified that the second cluster (counting from the top, or from the left) is made up of the \emph{Organizers} of the summer school, with one exception. These were people based in the research institute who were involved in organizing the summer school.
Therefore, it is no surprise that the corresponding rows and columns of the adjacency matrix in \cref{FIGsurveyAdjOrdered} are quite dense. The \emph{Organizers} interacted with almost everybody. The third and fourth clusters are also made up of people who are based in the research institute where the summer school was hosted but who weren't on the programme committee. We call these \emph{Locals}. The first cluster is made up of \emph{Visitors}. These were people from further afield who attended the school and spoke at the summer school. Looking at the blocks at the top left of \cref{FIGsurveyAdjOrdered}, you can see that the \emph{Locals} know each other and the \emph{Visitors} interacted with each other. But the two groups do not tend to interact strongly with each other. The \emph{Organizers} are the glue that holds everybody together. The fifth cluster appears simply to be made up of participants who did not interact very much with anybody -- in fact they did not even interact with each other. We can now interpret the fact that there are two clusters of non-respondents. One of those clusters (the sixth cluster) is made up of local people. Their names appeared in the surveys of the \emph{Organizers} and \emph{Locals}. The final cluster, the other non-respondent cluster, is made up of a broader range of people. It includes many non-responder \emph{Visitors}, including many of the speakers at the summer school. A community finding algorithm would not have been able to find these results, as it would expect to find dense clusters and is tied to the assumption that the probability of pairs of nodes being connected is, all other things being equal, greater if they share a cluster than if they do not share a cluster. This would manifest as dense blocks on the diagonal of this adjacency matrix. Clearly, a community-finding algorithm could not find the non-respondent clusters.
Also, a community-finding algorithm might have merged the \emph{Organizers} and \emph{Locals} clusters. This is because those two clusters are quite dense internally and also have many connections between them. The only difference between these two clusters is how they interact with the rest of the network; this demonstrates how the rich block structure of the Stochastic Block Model, including the various cluster-cluster interactions, can be helpful in clustering this data. We ran the algorithm for 1 million iterations on this survey data, discarding the first 500,000 iterations as burn-in. The acceptance rates were as follows: 2.3\% for \emph{AE}, 64.6\% for \emph{M3}, 1.1\% for \emph{MK}. In the case of the Gibbs sampler, 2.5\% of the time it assigned a node to a new cluster; otherwise the node was reassigned to its old cluster. The \emph{M3} and \emph{AE} moves are both Metropolis-Hastings moves: a change to the clustering is proposed and then accepted or rejected. Sometimes an accepted move actually returns every node to its previous position, or merely swaps the labels between two clusters. If we consider these as `rejections', then the rate at which genuinely new states are reached is just 1.0\% for \emph{M3}. So, \emph{M3} is accepted a lot, but it usually only moves between label-switched equivalents; this tells us that the algorithm is able to move quickly between the various modes of the distribution, and also suggests that the posterior is quite peaked around the modes. \subsection{Estimating the Network Probability, \texorpdfstring{$\mathrm{P}(x)$}{P(x)} } \label{SECPx} In Section 4, we discussed how the fully Bayesian approach to the SBM presented here allows us to select between models with different numbers of clusters $K$ without resorting to model selection criteria such as the ICL.
It is also worth remarking that in certain circumstances, such as for the survey data presented here, it is possible to compute an estimate of the network probability, $\mathrm{P}(x)$; that is, the probability, given just the total number of nodes $N$, that the network $x$ is observed from an SBM. This provides an absolute measure of the fit of the SBM to the observed data and could be used to test the hypothesis that the data is drawn from an SBM against some alternative model. In the survey data there is one clustering which, along with its label-switched equivalents, takes up 20.7\% of the posterior probability; call this $\hat{z}$. Thus we have a value $\hat{z}$ which is visited very often by the sampler, and this allows an accurate estimate of $\mathrm{P}(K,\hat{z} | x)$ to be obtained using \[ 7! \times \mathrm{P}(K=7, z=\hat{z} | x) = 0.207\,. \] Now, inserting $x$, $K$ and $\hat{z}$ into the expression for the joint distribution, an estimate of $\mathrm{P}(x)$ can be obtained using \[ \mathrm{P}(x) \, \mathrm{P}(K=7,z=\hat{z}|x) = \mathrm{P}(x,z=\hat{z},K=7)\,. \] In the case of the survey data we obtain $\log_2\mathrm{P}(x) \approx -2{,}482$. To put some perspective on this value, we can compare with a model that selects $x$ uniformly at random from all possible directed networks over $N=74$ nodes. In this case, we obtain $\log_2\mathrm{P}(x) = -N(N-1) = -5{,}402$. As a second alternative, if $x$ were generated from an Erd\H{o}s--R\'enyi model, averaged over all possible edge probabilities drawn uniformly at random, then $\log_2\mathrm{P}(x) \approx -4{,}130$. \section{Evaluation} \label{SECevaluation} In this section we first look at experiments based on synthetic data and follow in the next section with an application of the collapsed SBM to a survey network gathered by one of the authors at a recent summer school.
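The estimate of $\mathrm{P}(x)$ described in \cref{SECPx} reduces to simple arithmetic once the joint probability and the posterior mass of $\hat{z}$ are known. The following Python sketch (the function names and the worked example are ours, for illustration only, not the paper's implementation) also includes the two baseline models used for comparison:

```python
import math

def log2_uniform_directed(n):
    # Uniform model over all directed networks on n nodes: each of the
    # n*(n-1) ordered pairs carries one bit, so log2 P(x) = -n*(n-1).
    return -n * (n - 1)

def log2_erdos_renyi_marginal(n, e):
    # Erdos-Renyi with edge probability p ~ Uniform(0,1), p integrated out:
    # P(x) = Beta(e+1, m-e+1), with m = n*(n-1) ordered pairs and e edges.
    m = n * (n - 1)
    ln = math.lgamma(e + 1) + math.lgamma(m - e + 1) - math.lgamma(m + 2)
    return ln / math.log(2)

def log2_px(log2_joint, class_mass, k):
    # Chib-style estimate: P(x) = P(x, z=zhat, K) / P(K, z=zhat | x),
    # where the posterior mass of zhat's equivalence class is shared
    # equally among its k! label-switched copies.
    return log2_joint - math.log2(class_mass / math.factorial(k))
```

For the survey network, $N=74$ gives the uniform baseline $\log_2 \mathrm{P}(x) = -5402$, matching the value quoted above.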
The synthetic analysis proceeds by generating networks of various sizes from the model and examining whether the algorithm can correctly estimate the number of clusters and the cluster assignments. As mentioned in the previous section, all our experiments are done on a 2.4~GHz Intel~Xeon in a server with 128GB RAM, and the memory usage never exceeded 2GB. \subsection{Estimating $z$} \label{SECestimatingz} A 40-node directed, unweighted network is generated from the model, containing 4 clusters of 10 nodes each. The block densities $\pi_{kl}$ are generated by drawing from a $\text{Uniform}(0,1) \equiv \text{Beta}(1,1)$ distribution for each of the $4 \times 4 = 16$ blocks. \begin{figure}[h!] \centering \includegraphics[width=.8\columnwidth]{synth/AdjK4O10} \caption{The adjacency matrix (with $\delta = 0$) for the four-cluster synthetic network used in \cref{SECestimatingz}. Each of the four clusters has 10 nodes.} \label{FIGK4_O50} \end{figure} To challenge the algorithm further, we add noise to the synthetic data, similar to the simulation experiment described in Section 4 of \cite{WyseFriel}. The values in the matrix $\pi$ are scaled linearly. For a given $\delta$, define $\pi^{(\delta)}_{kl} = \delta+ \pi_{kl} (1-2 \delta)$. While the values in the original $\pi$ are drawn from the full range, $[0,1]$, the elements in the matrix $\pi^{(\delta)}$ are in the range $(\delta, 1-\delta)$. Networks for various values of $\delta$ between 0 and 0.5 are generated. The original network model corresponds to $\delta=0$. The network with $\delta=0.5$ corresponds to an Erd\H{o}s--R\'enyi model with $p=0.5$ --- a random graph model with no block structure.
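The $\delta$-scaling and the generative step can be sketched in a few lines of Python (an illustrative sketch with our own function names, not the paper's code):

```python
import random

def scale_pi(pi, delta):
    # pi^(delta)_kl = delta + pi_kl * (1 - 2*delta): squeezes the block
    # densities from [0,1] into (delta, 1-delta).  At delta=0.5 every
    # density becomes 0.5, i.e. an Erdos-Renyi graph with no block structure.
    return [[delta + p * (1 - 2 * delta) for p in row] for row in pi]

def generate_directed_sbm(z, pi, seed=0):
    # z[i] gives node i's cluster; each ordered pair (i, j), i != j, is
    # connected independently with probability pi[z[i]][z[j]].
    rng = random.Random(seed)
    n = len(z)
    return {(i, j): int(rng.random() < pi[z[i]][z[j]])
            for i in range(n) for j in range(n) if i != j}
```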
\begin{table}[h] \centering
\begin{tabular}{r r r r r r}
\hline
$\delta$ & \begin{turn}{80} $\mathrm{P}(K=4 | x)$ \; \end{turn} & $\hat{K}_\text{mode}$ & \begin{turn}{80} $\mathrm{P}(K=\hat{K}_\text{mode} | x)$ \end{turn} & \begin{turn}{80} $\mathrm{P}(\hat{z} \equiv z | x)$ \end{turn} & $\tau$ \\
\hline
0.0 & 0.8982 & 4 & 0.8982 & 0.974 & 50.12 \\
0.1 & 0.8799 & 4 & 0.8799 & 0.952 & 63.99 \\
0.2 & 0.8769 & 4 & 0.8769 & 0.124 & 80.18 \\
0.3 & 0.0073 & 2 & 0.7865 & 0.000 & 371.96 \\
0.4 & 0.0075 & 1 & 0.6293 & 0.000 & 1365.58 \\
\hline
\end{tabular}
\caption{The performance decreases as the noise level, $\delta$, increases. The fifth column, $\mathrm{P}(\hat{z} \equiv z|x)$, reports how often the sampler visits the `correct' answer; i.e.\ where the visited state was equivalent, subject to relabelling, to the model from which the network was generated. } \label{TBLdelta} \end{table} The algorithm is run for one million iterations, discarding the first 500,000 of these as burn-in. \Cref{TBLdelta} shows how the performance is affected as $\delta$ increases. The second column is the posterior probability for the ``correct'' answer for $K$, $\mathrm{P}(K=4|x)$. As the value of $\delta$ increases, the network approaches the Erd\H{o}s--R\'enyi model and therefore there is no longer any structure to detect; this explains why the accuracy decreases as $\delta$ increases. Next is the modal value of $K$, which maximizes the posterior $\mathrm{P}(K|x)$, followed by the posterior probability of the modal value, $\mathrm{P}(K = \hat{K}_\text{mode}|x)$. The fifth column, $\mathrm{P}(\hat{z} \equiv z | x)$, is the probability that the (non-empty) clusters are equivalent (allowing for relabelling) to the clustering used to generate the data. Note that sometimes there are empty clusters in the estimate and therefore $\mathrm{P}(\hat{z} \equiv z | x)$ can be larger than $\mathrm{P}(K=4 | x)$.
The final column reports $\tau$, the Integrated Autocorrelation Time (IAT) for the estimate of $K$, defined as $\tau = 1 + 2 \sum_{t=1}^{\infty} \rho(t)$, where $\rho(t)$ is the autocorrelation at lag $t$. As the sampler visits the states, we consider how correlated the estimate of $K$ is with the estimates for preceding states. A low autocorrelation, as summarized by the IAT, is an indicator of good mixing. \begin{comment} Given that these are synthetic networks, we can compare the ``true'' clustering to that found by our algorithm. The \emph{normalized mutual information}(NMI) is commonly used when comparing two (non-overlapping) clusterings to each other \cite{DanonNMI}. The algorithm succeeded when $\delta$ was low, telling us that the NMI was well correlated with the posterior density. But we can see, in \cref{FIGnmiVSpostK4}, how this correlation breaks down as $\delta$ reaches 0.3. For that diagram, we fixed $K=4$ in the algorithm in order to attempt assist it, but it still was unable to find a good clustering. There is now clearly no correlation between the NMI and the posterior mass. \begin{figure}[h] \includegraphics[width=.9\columnwidth]{synth/pmfVSnmiKis4} \caption{With the synthetic network, where $\delta=0.3$, the posterior density did not correlate with the NMI. This confirms that this network is now too noisy, due to the large $\delta$, for the model to pick out the correct clustering. We fixed $K=4$ in this case.} \label{FIGnmiVSpostK4} \end{figure} \end{comment} \subsection{Estimating $K$} \label{SECestimatingK} We perform three different types of experiments to judge the ability of the algorithm to correctly estimate the number of clusters with networks of increasing size. First, we repeat the experiments of \cite{LatoucheILvbStatMod}. The true number of clusters, $K_{true}$, is set to range from 3 to 7.
For each $K_{true}$, 100 networks are randomly generated. The number of nodes in each network, $N$, is set to 50. The nodes are assigned to the clusters randomly, with $\theta_1 = \dots = \theta_{K_{true}} = \frac{1}{K_{true}}$. Two parameters are used to control the density of the blocks. The first, $\lambda$, is the density within clusters, i.e.\ $\pi_{kk} = \lambda$. Also, one of the clusters is selected to be a special cluster of `hubs', well connected to the other nodes in the network, by setting $\pi_{1k} = \pi_{k1} = \lambda$. The second parameter, $\epsilon$, represents the inter-block density of all the other blocks, i.e.\ $\pi_{kl} = \epsilon$ for $k \ne l$ with $k, l \ne 1$. As in the experiments of \cite{LatoucheILvbStatMod}, the parameter values are $\lambda=0.9$ and $\epsilon=0.1$.
\begin{table}[h] \centering
\subtable[ILvb \label{TBLilvb} ]{
\begin{tabular}{|r|r r r r r|}
\hline
& \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7} \\
\hline
\textbf{3} & 100 &0 &0 &0 &0 \\
\textbf{4} & 0 &99 &1 &0 &0 \\
\textbf{5} & 0 &4 &96 &0 &0 \\
\textbf{6} & 0 &0 &24 &76 &0 \\
\textbf{7} & 0 &5 &29 &41 &25 \\
\hline
\end{tabular}
}
\subtable[our algorithm \label{TBLSBMonLatoucheP} ]{
\begin{tabular}{|r|r r r r r|}
\hline
& \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7} \\
\hline
\textbf{3} &99 &1 &0 &0 &0 \\
\textbf{4} &0 &99 &1 &0 &0 \\
\textbf{5} &0 &4 &96 &0 &0 \\
\textbf{6} &0 &0 &25 &75 &0 \\
\textbf{7} &0 &5 &27 &46 &22 \\
\hline
\end{tabular}
}
\caption{ The rows represent $K_{true}$ and the columns are the estimates from the $ILvb$ of \cite{LatoucheILvbStatMod} and from our algorithm. } \label{TBLbothLatoucheExpers} \end{table} Each network is run through the variational method of \cite{LatoucheILvbStatMod}. The estimated value of $K$ which maximizes the $ILvb$ measure is taken as the estimate of the number of clusters.
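The block-density matrix used to generate these networks -- $\lambda$ on the diagonal and on the hub cluster's row and column, $\epsilon$ elsewhere -- can be sketched as follows (a Python sketch with our own naming; clusters are zero-indexed, so the hub is index 0):

```python
def latouche_pi(k, lam=0.9, eps=0.1):
    # Density lam within clusters (the diagonal) and on the first
    # row/column (the 'hub' cluster); eps for every other block.
    return [[lam if (a == b or a == 0 or b == 0) else eps
             for b in range(k)] for a in range(k)]
```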
A contingency table, showing the true number of clusters against the estimate from $ILvb$, is displayed in \cref{TBLbothLatoucheExpers}~\subref{TBLilvb}. For low $K_{true}$ the algorithm is very accurate, and for larger values there is a tendency to underestimate the number of clusters. For example, when $K_{true}=7$, the estimate was $\hat{K}=6$ for 41 of the networks and $\hat{K}=7$ for only 25 of the 100 networks. The results from our algorithm, shown in \cref{TBLbothLatoucheExpers}~\subref{TBLSBMonLatoucheP}, are similar to those obtained using the $ILvb$. \subsection{Synthetic SBM networks} \label{SECsynthSBM} The experiments of \cref{SECestimatingK} involve synthetic data generated according to a model of \emph{community structure}, where edges tend to form primarily within clusters. In order to explicitly test our algorithm in the more general setting of \emph{block structure}, we generated another set of networks with data generated directly from the SBM. Similarly to the previous experiment, for each of a range of values of $K_{true}$, 100 networks are generated. $K_{true}$ is now set to range from 10 to 20 and the number of nodes is set to $N=100$, in order that the clusters not be too small. Each element $\pi_{kl}$ is chosen randomly from $\text{Uniform}(0,1)$ and, for each of the 100 networks, a new $\pi$ is created randomly. As these are undirected networks, only the upper triangular portion of $\pi$ is used when generating the network. Again, we compared the estimates of $K$ found by the $ILvb$ to those found by our algorithm. \begin{table}[h!]
\centering
\begin{tabular}{|r|r r r r r r r r r r r|}
\hline
& \textbf{10} & \textbf{11} & \textbf{12} & \textbf{13} & \textbf{14} & \textbf{15} & \textbf{16} & \textbf{17} & \textbf{18} & \textbf{19} & \textbf{20} \\
\hline
\textbf{10}& \underline{72}& 15& 0& 1& 0& 0& 0& 0& 0& 0& 0 \\
\textbf{11}& 15& \underline{75}& 5& 3& 1& 0& 0& 0& 0& 0& 0 \\
\textbf{12}& 5& 20& \underline{64}& 6& 5& 0& 0& 0& 0& 0& 0 \\
\textbf{13}& 2& 3& 21& \underline{66}& 8& 0& 0& 0& 0& 0& 0 \\
\textbf{14}& 0& 0& 4& 21& \underline{61}& 10& 4& 0& 0& 0& 0 \\
\textbf{15}& 0& 0& 2& 8& 28& \underline{51}& 9& 0& 2& 0& 0 \\
\textbf{16}& 0& 0& 1& 4& 15& 32& \underline{33}& 11& 4& 0& 0 \\
\textbf{17}& 0& 0& 0& 2& 4& 11& 30& \underline{45}& 8& 0& 0 \\
\textbf{18}& 0& 0& 0& 1& 3& 12& 20& 30& \underline{23}& 10& 1 \\
\textbf{19}& 0& 0& 0& 0& 0& 1& 12& 24& 38& \underline{13}& 10 \\
\textbf{20}& 0& 0& 0& 0& 0& 1& 7& 6& 23& 29& \underline{23} \\
\hline
\end{tabular}
\caption{The true number of clusters (rows) against the number estimated by $ILvb$ (columns). The diagonal entries are underlined to aid readability, as these represent the correct answer. We see here a tendency to underestimate the number of clusters, especially for larger $K_{true}$. } \label{TBLK10to20ILvB} \end{table} \begin{table} [h!]
\centering
\begin{tabular}{|r|r r r r r r r r r r r|}
\hline
& \textbf{10} & \textbf{11} & \textbf{12} & \textbf{13} & \textbf{14} & \textbf{15} & \textbf{16} & \textbf{17} & \textbf{18} & \textbf{19} & \textbf{20} \\
\hline
\textbf{10} & \underline{95}& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0 \\
\textbf{11} & 6& \underline{93}& 1& 0& 0& 0& 0& 0& 0& 0& 0 \\
\textbf{12} & 1& 8& \underline{90}& 1& 0& 0& 0& 0& 0& 0& 0 \\
\textbf{13} & 0& 2& 12& \underline{86}& 0& 0& 0& 0& 0& 0& 0 \\
\textbf{14} & 0& 0& 1& 9& \underline{90}& 0& 0& 0& 0& 0& 0 \\
\textbf{15} & 0& 0& 0& 1& 13& \underline{84}& 2& 0& 0& 0& 0 \\
\textbf{16} & 0& 0& 0& 0& 1& 22& \underline{73}& 4& 0& 0& 0 \\
\textbf{17} & 0& 0& 0& 0& 0& 2& 29& \underline{65}& 4& 0& 0 \\
\textbf{18} & 0& 0& 0& 0& 0& 1& 9& 28& \underline{62}& 0& 0 \\
\textbf{19} & 0& 0& 0& 0& 0& 1& 3& 7& 38& \underline{51}& 0 \\
\textbf{20} & 0& 0& 0& 0& 0& 0& 0& 3& 11& 28& \underline{57} \\
\hline
\end{tabular}
\caption{The true number of clusters (rows) against the number estimated by our collapsed MCMC algorithm (columns). The diagonal entries are underlined to aid readability, as these represent the correct answer. The accuracy is better here than in \cref{TBLK10to20ILvB}; we can see that the numbers on the diagonal are larger. } \label{TBLK10to20SBMp} \end{table} The results are shown in \cref{TBLK10to20ILvB,TBLK10to20SBMp}. Each row of data represents the 100 networks generated for a given $K_{true}$. Each column represents the estimated $\hat{K}$. Ideally, the algorithm would correctly estimate the number of clusters in most cases, corresponding to large numbers on the diagonal. We have underlined the diagonal entries for clarity. Note that the entries in each row do not always sum exactly to 100, since there are cases where the algorithms underestimate or overestimate the number of clusters beyond the range shown. For the $ILvb$ algorithm, it is necessary to specify a range of $K$ to be tested; we specified the range from 5 to 30.
Our MCMC algorithm requires no such hint. For networks with a small number of clusters, both algorithms perform well, with 72\% accuracy for $ILvb$ and 95\% accuracy for our algorithm. As the true number of clusters increases, the performance decreases. Our algorithm maintains at least 50\% accuracy in all cases, whereas the accuracy for $ILvb$ falls to 23\%. When they are incorrect, both algorithms have a tendency to underestimate the number of clusters. In \cref{SEClargernetworks}, a more thorough investigation of the speed and scalability of our algorithm with respect to larger networks is given, but we close our comparison with $ILvb$ with some remarks on speed. For the first set of small networks above, both methods are very fast; they complete within seconds. For example, the $ILvb$ can be calculated for all values of $K$ from 10 to 20 in a total of under five seconds. We have not defined a convergence criterion for our MCMC algorithm, and therefore we make no attempt to halt the sampling early in order to define a `runtime' for our algorithm. But on the occasions when both methods reach the correct result, our algorithm typically does so within nine seconds, and the sampler remains at, or very close to, the correct clustering for the remainder of the run. \begin{comment} We generate 10 directed, unweighted synthetic networks, each with a different number of clusters, chosen in steps of 5: $K$ = 5, 10, 15, \dots 50. The size (number of nodes) of each cluster is fixed at 10, resulting in $10K$ nodes and $K^2$ blocks. The elements of the edge density matrix $\pi$ are drawn from $\text{Uniform}(0,1)$.
The purpose of this experiment is to demonstrate that the algorithm can detect the correct number of clusters even though it has been initialized badly; we seed the algorithm with two clusters to which nodes are randomly assigned. We focus initially on the case $K=20$. The corresponding adjacency matrix is shown in \cref{FIGK20adjacency}. \begin{figure}[ht!] \centering \includegraphics[width=.8\columnwidth]{synth/K20vsI} \caption{The estimate of K for the 200-node, 20-cluster network. The x-axis is the number of iterations, note the log scale. The correct number of clusters, $K=20$, is reached in under 10,000 iterations (7.8 seconds). On the right of this plot, you see that the estimate of $K$ does often drift above 20, but in almost all cases these are merely empty clusters. } \label{FIGsynthK20} \end{figure} We ran the SBM MCMC algorithm for 1,000,000 iterations. In \cref{FIGsynthK20}, we see how long it takes to converge on the correct value of K. Note the log scale on the x-axis. $K=20$ is reached in under 10,000 iterations. Thereafter, the value sometimes drifts above 20, but this is almost always because the algorithm temporarily has a small number of empty clusters in those states. In the case of all 10 synthetic networks, in the default setting, with all four moves enabled, in all cases it takes less than two minutes to reach the correct clustering, or just a few seconds for smaller $K$, see \cref{FIGsynthK}. \end{comment} Finally, to demonstrate the importance of the \emph{AE} move, in \cref{FIGsynthK} the time taken by our algorithm to reach the correct clustering for three synthetic networks is shown. The numbers of clusters in the networks are 5, 20, and 50, respectively, with $\pi$ drawn from $\text{Uniform}(0,1)$. In each case, there were exactly 10 nodes in each cluster, giving $N=10 \times K$ nodes in each network. The x-axis displays the number of iterations and the y-axis the number of clusters at that stage in the run of the sampler.
The correct clustering is reached in approximately 10,000 iterations. We found that the \emph{AE} move is quite important, at least in the early stages. If \emph{AE} is disabled (see \cref{FIGsynthK_noAE}), then it takes about 320,000 iterations for $K=50$, instead of just 20,000 iterations when all moves are in effect. For fast burn-in, \emph{M3} and \emph{AE} are necessary. With similar experiments we noticed that, once the chain has burned in, the \emph{M3} move is sufficient for good performance and the other simple moves, \emph{GS} and \emph{MK}, do not make major contributions. \begin{figure} \centering \subfigure[All moves enabled]{ \includegraphics[width=.45\columnwidth]{KagainstItersThreeNetworks} } \subfigure[\emph{AE} move disabled]{ \includegraphics[width=.45\columnwidth]{KagainstItersThreeNetworks_noAE} \label{FIGsynthK_noAE} } \caption{The estimates of $K$ in the synthetic networks, with $K = 5,20,50$. The x-axis (logarithmic scale) is the number of iterations; as the algorithm proceeds, in each case it converges on the correct estimate of $K$. The networks had $10\times K$ nodes each. In the lower plot, we see the performance when the \emph{AE} move has been disabled, demonstrating its importance during burn-in.} \label{FIGsynthK} \end{figure} \subsection{Larger networks} \label{SEClargernetworks} Next, we investigate larger networks to demonstrate the scalability of the algorithm. A number of synthetic networks are generated, each with approximately ten thousand nodes and ten million edges. The number of clusters ranges from 3 to 50, and the number of nodes in each cluster, $O$, is set such that the total number of nodes, $N = K \times O$, is close to 10,000. If we used the default SBM edge model, the number of edges would be approximately 50 million. As this would take up a lot of computer memory, we instead modify the prior for the per-block densities to be $\text{Uniform}(0,0.2)$ in order to ensure that the expected number of edges is 10 million.
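The prior adjustment above follows from simple expected-value arithmetic: if block densities are drawn i.i.d.\ with mean $m$, a directed network has $N(N-1)\,m$ expected edges. A quick check (our own illustrative sketch):

```python
def expected_edges_directed(n, mean_density):
    # Each of the n*(n-1) ordered pairs is present with probability
    # equal, in expectation over the prior, to the mean block density.
    return n * (n - 1) * mean_density

# Uniform(0,1) prior: mean density 0.5 -> about 50 million edges on
# 10,000 nodes.  Uniform(0,0.2): mean 0.1 -> about 10 million edges.
```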
Large real-world networks are typically quite sparse, even more sparse than this synthetic network. The details, including the speed and accuracy, are in \cref{TBLlarger}. The SBM algorithm is run for 100,000 iterations on each of these networks and the time to converge is recorded. In each case, when the algorithm first visits the `correct' state, it remains in that state for practically all the remaining iterations. We record the number of iterations taken before the algorithm reaches the correct state, and the time that has elapsed at that point. It typically converges within one hour, but it takes nearly four hours for the 50-cluster network. Methods that scale to thousands of nodes have been presented in the literature, such as \cite{Daudin-08} and \cite{LatoucheILvbStatMod}. To our knowledge, ours is the only method which has been demonstrated on networks with 10,000 or more nodes. We have attempted to load these networks into the $R$ software package in order to run them through $ILvb$. However, the memory requirements for such large adjacency matrices become prohibitive. For large networks, it may be necessary to consider a different implementation language and techniques in order to fully explore the scalability of a variational method such as $ILvb$. Instead, we generated five 500-node networks, with 20 clusters each, according to the SBM model and ran $ILvb$ on them, using only one value of $K$, namely $K=20$. It takes between 38 and 56 seconds, depending on which of the five networks is used. In comparison, on the same data, our algorithm takes between 17 and 35 seconds, despite the fact that it is given no clue as to the correct value of $K$. With 1,000-node networks, the runtimes for $ILvb$ are between 636 and 814 seconds, whereas our algorithm takes between 55 and 78 seconds.
This suggests our algorithm scales better than the $ILvb$ -- although perhaps this is an implementation issue rather than a limitation of the variational model. In practice, it is necessary to run $ILvb$ for every possible value of $K$, and this fact should be incorporated into any evaluation of its runtime. For larger networks, the range of possible values of $K$ increases, making this a significant issue. In contrast, an algorithm based on the allocation sampler, such as ours, does not suffer this limitation, suggesting that our algorithm is well suited to large networks.
\begin{table} \small \centering
\begin{tabular}{ r r r r r r }
\hline
$K$ & $O$ & $N$ & $E$ & $i$ & $t$ \\
\hline
3 & \numprint{3333} & \numprint{9999} & \numprint{9722580} & \numprint{41} & \numprint{3317} \\
4 & \numprint{2500} & \numprint{10000} & \numprint{8526987} & \numprint{149} & \numprint{2977} \\
5 & \numprint{2000} & \numprint{10000} & \numprint{8627869} & \numprint{190} & \numprint{2460} \\
6 & \numprint{1667} & \numprint{10002} & \numprint{9974998} & \numprint{416} & \numprint{3265} \\
7 & \numprint{1429} & \numprint{10003} & \numprint{9316651} & \numprint{749} & \numprint{3449} \\
8 & \numprint{1250} & \numprint{10000} & \numprint{11059656} & \numprint{962} & \numprint{3710} \\
9 & \numprint{1111} & \numprint{9999} & \numprint{9581440} & \numprint{1383} & \numprint{4052} \\
10 & \numprint{1000} & \numprint{10000} & \numprint{9989886} & \numprint{1277} & \numprint{3785} \\
20 & \numprint{500} & \numprint{10000} & \numprint{9871938} & \numprint{5655} & \numprint{4779} \\
30 & \numprint{333} & \numprint{9990} & \numprint{9821594} & \numprint{12497} & \numprint{6999} \\
40 & \numprint{250} & \numprint{10000} & \numprint{9862703} & \numprint{37742} & \numprint{12452} \\
50 & \numprint{200} & \numprint{10000} & \numprint{10008963} & \numprint{40958} & \numprint{24028} \\
\hline
\end{tabular}
\caption{ The time-to-convergence for the larger synthetic networks.
The networks have $N = K \times O$ nodes, made up of $K$ clusters each with $O$ nodes. After $i$ iterations ($t$ seconds), the algorithm reached the correct result and remained in, or close to, that state for the remainder of the 100,000 iterations. It should be noted that much of the runtime is simply taken up with loading the network into memory; the time spent in the MCMC algorithm itself is smaller than the $t$ figure presented here. } \label{TBLlarger} \end{table} \subsection{Autocorrelation in $K$} \label{SECsmallAutoCorrs} \tikzset{vertex/.style={black} } \tikzstyle{abstract}=[rectangle, draw=black, rounded corners, fill=white, drop shadow, text centered, anchor=north, text=black, text width=3cm] \tikzstyle{dot}=[fill=black,circle,minimum size=1pt] \begin{figure} \centering \subfigure[Adjacency matrix]{ \tikz[scale=0.5] { \draw[help lines] (0,0) grid (6,6); \foreach \x/\y in {0/0,0/1,1/0,1/1,0/4,0/5,1/4,1/5,4/4,4/5,5/4,5/5} { \draw[fill=black] (\y,6-1-\x) rectangle +(1,1); } } \label{FIGKis2or3} } \subfigure[Percentage posterior probability of two nodes sharing a cluster.]{ \small \begin{tabular}{|r|r|r|r|r|r|} \hline & 97 & 4 & 4 & 75 & 75 \\ \hline 97 & & 4 & 4 & 75 & 75 \\ \hline 4 & 4 & 99 & 99 & 4 & 4 \\ \hline 4 & 4 & 99 & 99 & 4 & 4 \\ \hline 75 & 75 & 4 & 4 & & 97 \\ \hline 75 & 75 & 4 & 4 & 97 & \\ \hline \end{tabular} \label{FIGKis2or3share} } \subfigure[Autocorrelation on $K$.]{ \includegraphics[width=.4\columnwidth]{Kis2or3acor} \label{FIGKis2or3acor} } \caption{ Adjacency matrix used in the analysis of varying $K$ in \cref{SECsmallAutoCorrs}. \Cref{FIGKis2or3share} shows, for every pair of nodes, the estimated posterior probability that they share a cluster. \Cref{FIGKis2or3acor} shows the autocorrelation in the estimate of $K$. } \end{figure} An autocorrelation analysis can reveal the mixing properties of the algorithm.
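As a concrete sketch, the IAT defined in \cref{SECestimatingz}, $\tau = 1 + 2 \sum_{t \geq 1} \rho(t)$, can be estimated from the sampled chain of $K$ values. In this Python sketch (ours), the sum is truncated at the first non-positive empirical autocorrelation, a common practical cutoff; we do not claim this is the exact truncation rule used in our reported figures:

```python
def integrated_autocorrelation_time(chain):
    # tau = 1 + 2 * sum_t rho(t), truncated at the first non-positive
    # empirical autocorrelation, since rho(t) is noise-dominated at
    # large lags.  A tau near 1 indicates good mixing.
    n = len(chain)
    mean = sum(chain) / n
    var = sum((x - mean) ** 2 for x in chain)
    tau = 1.0
    for t in range(1, n):
        rho = sum((chain[i] - mean) * (chain[i + t] - mean)
                  for i in range(n - t)) / var
        if rho <= 0:
            break
        tau += 2 * rho
    return tau
```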
However, in the above examples, and in the survey data discussed in \cref{SECsurvey}, the estimates of $K$ are very much peaked around a single value. Often the larger values of $K$ are associated with empty clusters, and the estimate of the number of non-empty clusters is even more peaked. This makes it difficult to use $K$ as an interesting variable on which to perform autocorrelation analysis. To address this, we examine the 6-node network in \cref{FIGKis2or3}, for which a greater variance in the values of $K$ is observed. Define $K_1$ to be the number of non-empty clusters, $K_1 \leq K$. The posterior probability for $K=2$ is 57.0\%, and for $K=3$ it is 31.4\%. For the non-empty clusters, it is 73.4\% for $K_1=2$ and 24.4\% for $K_1=3$. The autocorrelation in the estimates of $K$ is shown in \cref{FIGKis2or3acor}. The acceptance rates on this small 6-node network are relatively high: 8.1\% for \emph{MK}, 4.2\% for \emph{GS}, 20.5\% for \emph{AE}, 46.0\% for \emph{M3}. We will see lower acceptance rates in the next section when the algorithm is applied to the survey network. \section{Extensions} \label{SECextending} In this paper we have applied parameter collapsing to a subset of the models presented in \cite{Nowicki-01}. It is possible to apply this strategy to their full range of models in a similar fashion to that described here, allowing $K$ to be estimated in this fully Bayesian approach. For example, it is possible to allow for missing data, and also to allow for a broader `alphabet' as mentioned previously. It is also possible to conceive of a hierarchical extension to the SBM which is suitable for collapsing. At the `top level', the standard SBM models connections between clusters. Next, the internal connections within each cluster are again modelled as an SBM, producing a set of sub-clusters within each cluster. This process can be applied recursively to any depth, and the resulting ``clustering'' can be represented as a tree.
As far as we are aware, this generalized model has not appeared in the literature. However, it is interesting that two specializations of this model have appeared. In \cite{BaderParkBinaryTree}, an SBM is used at the top level to model the between-cluster interactions, and each cluster is then modelled as a \emph{binary} tree. This is a specialization with the restriction that the number of sub-clusters (and sub-sub-clusters, etc.) is fixed at $K=2$ at every level except the top level. Also, in \cite{ClausetHierNetworks} a further restriction is presented, where even the ``top-level'' $K$ is fixed at 2. \section{Introduction} This paper is concerned with \emph{block-modelling} -- an approach to clustering the nodes in a network, based on the pattern of inter-connections between them. The starting point for the method presented here is the \emph{stochastic block model} (SBM) \cite{Nowicki-01}. The goal is to improve the speed and scalability, without compromising on accuracy. We use conjugate priors and integration in order to focus on the marginal distribution of interest; this marginalization is also referred to as the `collapsing' of the nuisance parameters \citep{LiuCollapsedGibbs, WyseFriel}. This allows us to implement an efficient algorithm based on the \emph{allocation sampler} of \cite{NobileAllocationSampler}. We incorporate existing extensions, such as the weighted-edge model of \cite{MariadassouWeightedSBM}, and show how they can be accommodated within our collapsing and within our algorithm. As required by the \emph{allocation sampler}, we place a prior on the number of clusters, allowing the number of clusters to be directly estimated. Together, these techniques allow us to avoid the more complex forms of transdimensional MCMC, and they also allow us to avoid the need for post-hoc model selection via criteria such as the ICL.
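To illustrate the kind of closed-form expression that such conjugacy yields, consider a single block of a directed Bernoulli SBM with a $\text{Beta}(\alpha,\beta)$ prior on its density: integrating the density out gives a ratio of Beta functions. The Python sketch below uses our own function names and shows only this per-block factor; our full collapsed marginal also integrates out $\theta$ and combines all blocks:

```python
import math

def log_block_marginal(edges, pairs, alpha=1.0, beta=1.0):
    # Beta-Bernoulli conjugacy: integrating the block density out of a
    # Bernoulli likelihood with a Beta(alpha, beta) prior gives
    #   P(block) = B(edges + alpha, pairs - edges + beta) / B(alpha, beta)
    # where `pairs` is the number of node pairs in the block and
    # `edges` is how many of those pairs are connected.
    def log_beta(a, b):
        return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return log_beta(edges + alpha, pairs - edges + beta) - log_beta(alpha, beta)
```

Summing this log-factor over all blocks, with each block's edge and pair counts, gives the collapsed $\log \mathrm{P}(x \mid z, K)$ for this simple case.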
We show that our method can accurately and efficiently estimate the number of clusters -- an improvement over many existing methods. Our algorithm, the data we have used in \cref{SEClargernetworks}, and our survey data used in \cref{SECsurvey} are available at \url{http://sites.google.com/site/aaronmcdaid/sbm}. The concept of clustering is broad and originated outside of network analysis, with input data in the form of real-valued vectors describing the locations of the data points in a Euclidean space. Network clustering takes a set of connected nodes as input and finds a partition of the nodes based on the network structure. This finds application in many different contexts. For instance, in bio-informatics, networks of protein-protein interactions are analysed and clustering is applied to find functional groups of proteins. Interest in social network analysis has grown greatly in recent years, with the availability of many networks of human interactions, such as Facebook datasets. Clustering of such social networks has been applied in order to find social communities. In the following, we will distinguish the community-finding problem from the more general setting of block-modelling. In network analysis, the input data may be described mathematically as a graph, which is a set of nodes (where each node represents an entity, say, a person) and a set of edges linking pairs of nodes together. An edge might represent a friendship on Facebook or a phone call on a mobile phone network. In \cref{SECsurvey}, we apply our method to the network of interactions between participants at a summer school. Given a network, the goal in block-modelling is to cluster the nodes such that pairs of nodes are clustered together if their connectivity patterns to the clusters in the rest of the network are similar. A cluster might, for example, consist of a set of nodes which do not tend to have connections among themselves at all.
Given two nodes in this cluster (node $i$ and node $j$), the neighbours of $i$ tend to be in the same clusters as are the neighbours of $j$. Community-finding has focussed on finding clusters of high internal edge density, where an edge between two nodes will tend to pull the two nodes into the same cluster, and a non-edge will tend to push them into separate clusters. This contrasts with block-modelling, which allows clusters to have \emph{low} internal edge density. Block-modelling is able to find such community structure, but it is a more general method that is also able to find other types of structure. A variety of other, non-probabilistic, approaches have been used to tackle the broad problem \margnote{More refs, for Rev\#1. But no discussion, to satisfy Rev\#2. Thoughts?} of block-modelling \citep{EverettColoration,ChanGBM}. Outside of block-modelling, there are other solutions for community-finding in networks \citep{NewmanGirvan,girvan-2002,NewmanFastishMod}. Many probabilistic clustering models have also been applied \citep{HandcockRafteryTantrum,HoffLatentSpace,Airoldi2008MMSB}. There is a huge variety of methods, and we will not attempt to summarize them further; for the rest of this paper, we will focus on the SBM and on algorithms for the SBM. For more details, in particular about community finding, see the excellent review article of \cite{fortunato-2010}. The remainder of the paper is structured as follows. In \cref{SECsbm}, we define the SBM and define the notation used in the paper. We then define, in \cref{SECours}, the conjugate priors and integration that we use in order to access the relevant marginal distribution. \Cref{SECrelatedProb} discusses other closely-related models and algorithms and in particular gives consideration to the issue of how to select the number of clusters (model selection), comparing the approach we have used to other approaches and noting connections among the methods. 
\Cref{SECestimation} describes the algorithm we use; \margnote{Rev\#2 is \emph{very} unhappy with the term `collapsing' - see his comments (in red) on page 40 below.} without collapsing, it would be necessary to use full Reversible Jump MCMC (\cite{GreenRJMCMC}) to search a sample space of varying dimension, and this could be much slower. In \cref{SECevaluation}, we evaluate our method on synthetic networks, showing how the number of clusters can be estimated accurately and the nodes assigned to their correct cluster with high probability. We also test the scalability and efficiency of the algorithm by considering synthetic datasets with ten thousand nodes and ten million edges. In \cref{SECsurvey}, we evaluate our method on a dataset of interactions, gathered by a survey, of participants at a doctoral summer school attended by one of the authors of this paper. The method is able to detect interesting structures, demonstrating the differences between \emph{block-modelling} and \emph{community-finding}. \Cref{SECconclusion} draws some conclusions. \section{Collapsing the SBM} \label{SECours} In this section, we show how \emph{collapsing} can be used to give a more convenient and efficient expression for the model. This refers to the integration of nuisance parameters out of the model; see \cite{WyseFriel} for an application to a different, but related, bipartite model. The SBM has been partially collapsed by \cite{KempTRcollapsedSBM}, but we will consider the full collapsing of both $\pi$ and $\theta$. As our primary interest is in the clustering $z$ and the number of clusters $K$, we integrate out $\pi$ and $\theta$, yielding an explicit expression for the marginal $\mathrm{P}(x, z, K)$. We emphasize that integration does not change the model; it merely yields \margnote{Collapsing is \emph{not} an extension, as pointed out by Rev\#2. Perhaps there is a word, other than `extension', that we want to use? I'm happy as is. 
} a more convenient representation of the relevant parts of the posterior. This integration is made possible by the choice of conjugate priors for $\pi$ and $\theta$. We treat $K$ as a random variable and place a Poisson prior on $K$ with rate $\lambda=1$, conditioning on $K>0$, \begin{equation} K \sim \mbox{Poisson}(1) \, | \, K>0 \, , \label{EQpriorK} \end{equation} which gives us \[ \mathrm{P}(K) = \frac{ \frac{\lambda^K}{K!} e^{-\lambda} }{ 1-\mathrm{P}(K=0) } = \frac{1}{K!(e-1)} \,. \] We are only interested in these expressions as functions of $K$ and $z$ up to proportionality, as this will be sufficient for our Markov Chain over $(K,z|x)$, and hence we can simply use $\mathrm{P}(K) \propto \frac1{K!}$. The Poisson prior is used in the \emph{allocation sampler}, the algorithm upon which our method is based \citep{NobileAllocationSampler}. This allows the estimation of the number of clusters as an output of the model rather than requiring a user to specify $K$ as an input or to use a more complex form of model selection. Thus, we have a fully Bayesian approach where, other than $N$, which is taken as given, every other quantity is a random variable with specified priors where necessary, \begin{equation} \begin{split} \mathrm{p}(x, \pi, z, \theta, K) = \mathrm{P}(K) & \times \mathrm{p}(z, \theta | K) \\ & \times \mathrm{p}(x , \pi | z) \, . \end{split} \label{EQuncollapsed} \end{equation} With \cref{EQuncollapsed} we could create an algorithm which, given a network $x$, would allow us to sample the posterior $\pi, z, \theta, K | x$. However, we are only interested in estimates of $z,K | x$. We now show how to collapse $\pi$ and $\theta$. Define $\mathbb{R}_+$ to be the set of non-negative real numbers, and write the set of real numbers between 0 and 1 as $[0,1]$. Define $\Theta$ to be the \emph{unit simplex}, i.e. the subset of $\mathbb{R}_+^K$ where $1=\sum_{k=1}^K \theta_k$. Define $\Pi$ to be the domain of $\pi$. 
For the Poisson model this is $\mathbb{R}_+^B$ while for the Bernoulli model this is $[0,1]^B$, where $B$ is the number of blocks. We can access the same posterior for $z$ and $K$ by \emph{collapsing} two of the factors in \cref{EQuncollapsed}, \begin{equation} \begin{split} \mathrm{P}(x, z, K) = \mathrm{P}(K) \times \int_\Theta \mathrm{p}(z, \theta | K) \; \mathrm{d}\theta \\ \times \int_\Pi \mathrm{p}(x , \pi | z) \; \mathrm{d}\pi \, , \end{split} \label{EQcollapsed} \end{equation} or, equivalently, using the block-by-block independence $x_{(kl)} | z,K$, \begin{equation} \begin{split} \mathrm{P}(x, z, K) = \mathrm{P}(K) \times \int_\Theta \mathrm{p}(z, \theta | K) \; \mathrm{d}\theta \\ \times \prod_{k,l} \int_{\Pi_{kl}} \mathrm{p}(x_{(kl)} , \pi_{kl} | z) \; \mathrm{d}\pi_{kl} \, . \end{split} \label{EQcollapsed2} \end{equation} This allows the creation of an algorithm which searches only over $K$ and $z$. The algorithm never needs to concern itself with $\theta$ or $\pi$. Collapsing greatly simplifies the sample space over which the MCMC algorithm has to search. Without collapsing, the dimensionality of the sample space would change if our estimate of $K$ changed; this would require a Reversible-Jump Markov Chain Monte Carlo (RJMCMC) algorithm (see \cite{GreenRJMCMC}). \margnote{Per Rev\#2, I've deleted the argument about how collapsing will improve mixing. This is quite frustrating.} \begin{comment} Second, collapsing should improve the mixing of the Markov chain. Without collapsing, the estimates for $z$ and $\pi$ would be highly correlated. This correlation would make it difficult for the algorithm to successfully propose updates to one while the other is fixed. By collapsing out $\pi$ and $\theta$, the algorithm should be able to move more easily across the sample space of $K,z | x$. Each new estimate of $z$ can be less dependent on the old estimate of $z$. 
\end{comment} Finally, if estimates for the full posterior, including $\pi$ and $\theta$, are required, it should be noted that it is very easy to sample $\pi,\theta | x,z,K$, meaning that nothing is lost by the use of collapsing. Many of the other models described in \cref{SECrelatedProb} are collapsible, and this may be an avenue for future research. The integration of \cref{EQcollapsed2} allows an expression for the full posterior distribution to be obtained. Details of the derivation of this expression are given in Appendix A. Let $n_k$ be the number of nodes in cluster $k$. $n_k$ is a function of $z$. For the Bernoulli model, let $y_{kl}$ be the number of edges that exist in block $kl$, i.e. the block between clusters $k$ and $l$. For the Poisson model, $y_{kl}$ is the total edge weight. $y$ is a function of $x$ and $z$. Let $p_{kl}$ be the maximum number of edges that can be formed between clusters $k$ and $l$. For off-diagonal blocks, $p_{kl} = n_k n_l$. For diagonal blocks, $p_{kk}$ depends on the form of the network as follows, \begin{equation} p_{kk} = \left\{ \begin{array}{ll} \frac12 n_k (n_k-1) & \mbox{undirected, no self-loops} \\ \frac12 n_k (n_k+1) & \mbox{undirected, self-loops} \\ n_k (n_k-1) & \mbox{directed, no self-loops}\\ n_k^2 & \mbox{directed, self-loops} \end{array} \right. \, . \label{EQpkkCountPairs} \end{equation} The full posterior may be written as \begin{equation} \begin{split} \mathrm{P}(x, z, K) \propto {} & \frac1{K!} \\ & \times \frac {\Gamma(\alpha K) \prod_{k=1}^K \Gamma(n_k + \alpha)} {\Gamma(\alpha)^K \Gamma(N + \alpha K)} \\ & \times \prod f(x_{(kl)} | z) \,, \end{split} \label{EQfinal} \end{equation} where the final product is understood to take place over all blocks. The form of the function $f(x_{(kl)} | z)$ depends on the edge model. 
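As a concrete illustration of \cref{EQfinal}, the following minimal Python sketch (our own illustrative code with hypothetical names, not the implementation we distribute) computes the unnormalized log-posterior $\log \mathrm{P}(x, z, K)$ for an undirected Bernoulli network without self-loops, using the Beta-function form of $f$ given below in \cref{EQfBernoulli}:

```python
import math
from itertools import combinations_with_replacement

def log_beta(a, b):
    # log of the Beta function B(a, b) = Gamma(a)Gamma(b)/Gamma(a+b)
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def log_posterior(adj, z, K, alpha=1.0, beta1=1.0, beta2=1.0):
    """Unnormalized log P(x, z, K) for an undirected Bernoulli SBM
    without self-loops, in the collapsed form (illustrative sketch)."""
    N = len(adj)
    n = [0] * K                       # cluster sizes n_k, a function of z
    for k in z:
        n[k] += 1
    # prior on K: P(K) proportional to 1/K!
    lp = -math.lgamma(K + 1)
    # collapsed Dirichlet term for the cluster sizes
    lp += (math.lgamma(alpha * K) - math.lgamma(N + alpha * K)
           + sum(math.lgamma(nk + alpha) for nk in n)
           - K * math.lgamma(alpha))
    # edge counts y_kl per block (k <= l for the undirected case)
    y = [[0] * K for _ in range(K)]
    for i in range(N):
        for j in range(i + 1, N):
            if adj[i][j]:
                a, b = sorted((z[i], z[j]))
                y[a][b] += 1
    # per-block collapsed Bernoulli likelihood f(x_(kl) | z)
    for k, l in combinations_with_replacement(range(K), 2):
        p_kl = n[k] * (n[k] - 1) // 2 if k == l else n[k] * n[l]
        lp += (log_beta(beta1 + y[k][l], beta2 + p_kl - y[k][l])
               - log_beta(beta1, beta2))
    return lp
```

Only ratios of these values are needed for Metropolis--Hastings acceptance probabilities, so the unknown normalizing constant is irrelevant.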
If Bernoulli, then \begin{equation} f(x_{(kl)} | z) = \frac{ \text{B}(\beta_1 + y_{kl}, p_{kl} - y_{kl} + \beta_2) }{ \text{B}(\beta_1, \beta_2) }\,, \label{EQfBernoulli} \end{equation} where $\text{B}(\cdot,\cdot)$ is the Beta function. If Poisson, then \begin{equation} f(x_{(kl)} | z) = \frac{ \Gamma(s+y_{kl}) \left( \frac1{p_{kl} + \frac1\phi} \right)^{s+y_{kl}} }{ \Gamma(s) \phi^s }\,. \label{EQfPoisson} \end{equation} \section{Related estimation procedures for the SBM} \label{SECrelatedProb} \margnote{This section has been shortened a lot, as per Rev\#2. It now focusses strictly on the SBM, and very-closely-related models. I start with the models that are most different from ours, and end with those that are most similar. I no longer say very much about their algorithms; I limit myself to their overall goal and strategy. } Before defining our algorithm, we look at related work, particularly other methods that are based on the SBM. We will focus on models which are identical to, or very similar to, the SBM. Therefore, we will not discuss other models which are loosely related, such as that of \cite{newman-2007}, or the ``degree-corrected'' SBM of \cite{KarrerDegreeCorrected}. All methods discussed here are aimed at estimating $z$, but they differ in the approach they take to the parameters $\pi$ and $\theta$ and in whether they allow the number of clusters, $K$, to be estimated. We also discuss the issue of model selection, i.e. how the various methods estimate the number of clusters. This question was avoided in the original paper of \cite{Nowicki-01}, where the number of clusters is fixed to $K=2$ in the evaluation. \begin{comment} In the literature, it is often stated (see \cite{Daudin-08}) that it is computationally extremely difficult to \emph{exactly} evaluate the summation $\mathrm{P}(x | \pi, \theta, K) = \sum_{z} \mathrm{P}(x, z | \pi, \theta, K)$ and to find the values of $\pi$ and $\theta$ which would maximize that expression (for a fixed $K$ and $x$). 
While this is true, we would like to note that we are usually interested primarily or exclusively in estimating $z$. The primary challenge is to estimate the clustering $z$, and this might be difficult even if we have estimates for $\theta, \pi$. \end{comment} The method of \cite{Daudin-08} takes a network, $x$, and number of clusters $K$, and applies a variational algorithm. Point estimates are used for $\pi$ and $\theta$, but the clustering $z$ is represented as a distribution of possible cluster assignments for each node. This makes the method analogous to the EM algorithm for the MLE -- finding the pair $(\pi,\theta)$ which maximizes $\mathrm{P} (x | \pi, \theta, K)$. The model used by \cite{ZanghiOnline} is a special case of the model of \cite{Daudin-08}. The cluster-cluster density matrix, $\pi$, is simplified so that it is represented by two parameters, $\lambda$ and $\epsilon$, with the on-diagonal blocks $\pi_{kk} = \lambda$ and the off-diagonal blocks $\pi_{kl} = \epsilon$ (for $k \neq l$). A Classification EM (CEM) algorithm to find $ \underset{z,\pi,\theta}{\operatorname{arg max}} \; \mathrm{P} (x,z | \pi, \theta, K)$ is briefly described in \cite{ZanghiOnline} but not implemented. Instead, they implement an \emph{online} algorithm. One node of the network is considered at a time and is assigned to the cluster which maximizes $\mathrm{P} (x,z | \pi, \theta, K)$, updating estimates of $\pi$ and $\theta$ with each addition. Implicitly, their goal is to use point estimates both for the parameters \emph{and} for the clustering, to find $(\hat{z},\hat\pi,\hat\theta)$ that would maximize $\mathrm{P}(x,z | \pi,\theta, K)$; as such, it is loosely related to the profile likelihood \citep{BickelNonparametricView}. The methods just discussed are based, directly or indirectly, on the frequentist approach of finding the maximum likelihood estimate of the parameters, $(\pi,\theta)$, i.e. 
the values $\hat\pi,\hat\theta$ that would maximize the likelihood of the observed network, \[ \mathrm{P}(x | \pi,\theta, K) = \sum_{z} \mathrm{P}(x, z | \pi,\theta, K) \, . \] The estimate of $z$ that is used in this frequentist approach is the conditional distribution of $z$ based on this point estimate of the parameters and on the observed network, $ z | x,\hat\pi, \hat\theta, K$. In practice though, it is not tractable to calculate or maximize this likelihood exactly, and hence a variety of different approximations and heuristics have been used. In a Bayesian method, such as ours, a distribution of estimates for $(\pi,\theta)$ is used instead of a point estimate. The goal is to directly sample from $z|x,K$. Another example of this Bayesian approach is the variational algorithm used in \cite{HofmanBayesNetworkModularity}, which is based on the simpler $\lambda$ and $\epsilon$ parameterization of the $\pi$ matrix used in \cite{ZanghiOnline}. The modelling choices of \cite{LatoucheILvbStatMod}, where a new model selection criterion called $ILvB$ is introduced, are essentially identical to the standard SBM; each element $\pi_{kl}$ of $\pi$ is independent, and conjugate priors are specified. A variety of other variational approximations are considered by \cite{GazalVariational}, where there is more focus on parameter estimation and less focus on model selection. A further specialization of this model is possible by employing the $\lambda, \epsilon$ parameterization with $\lambda > \epsilon$, which explicitly constrains the expected edge density within clusters to be larger than the expected edge density between clusters. This can be considered to be \emph{community-finding} as opposed to \emph{block-modelling}. The authors of this paper considered this in \cite{McDaidSCFcompstat}. 
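To make the $\lambda, \epsilon$ parameterization concrete, the following sketch (illustrative code with names of our own choosing, not part of the released software) draws an undirected Bernoulli network in which every within-cluster pair is connected with probability $\lambda$ and every between-cluster pair with probability $\epsilon$; the community-finding case corresponds to $\lambda > \epsilon$.

```python
import random

def sample_sbm(n_per_cluster, K, lam, eps, seed=0):
    """Draw an undirected Bernoulli SBM network (no self-loops) under
    the lambda/epsilon parameterization: edge probability `lam` within
    clusters and `eps` between clusters. Returns (adjacency, z)."""
    rng = random.Random(seed)
    # equal-sized clusters, for simplicity of illustration
    z = [k for k in range(K) for _ in range(n_per_cluster)]
    N = len(z)
    adj = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(i + 1, N):
            p = lam if z[i] == z[j] else eps
            if rng.random() < p:
                adj[i][j] = adj[j][i] = 1
    return adj, z
```

Such generators are a standard way to produce the synthetic test networks used when evaluating recovery of $z$ and $K$.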
\begin{comment} In the methods described so far, the aim has been to maximize the complete data log-likelihood, typically something of the form: \[ \mathrm{P} (x,z | \Phi) = \mathrm{P} (z | \Phi) \times \mathrm{P} (x | z , \Phi) \, . \] However, there are some methods that ignore the first factor, $ \mathrm{P} (z | \Phi) $ , and instead focus exclusively on finding $z$ to maximize $ \mathrm{P} (x | z , \Phi) $. One such method is \cite{KarrerDegreeCorrected}. \cite{KarrerDegreeCorrected} describe an extension to the standard SBM which they describe as ``degree-corrected''. Each node has a parameter, $\gamma_i$, associated with it which indirectly controls the degree of the node. They model a network with integer-weighted edges, where the weight between two nodes $i$ and $j$ is drawn from a Poisson distribution with rate $\gamma_i \gamma_j \pi_{k l}$, where $k=z_i$ and $l=z_j$ are the clusters of node $i$ and node $j$ respectively. For a given clustering $z$, they show that it is easy to maximize over $\Phi \equiv (\pi, \gamma)$; \[ \underset{\Phi}{\operatorname{arg max } }\; \mathrm{P} (x | z, \Phi) \, . \] Using this estimator $\hat\Phi$, which is a function of $z$ (and $x$ of course), their aim then is to find a clustering $z$ which maximizes \[ \underset{ \ensuremath{ \mathbf{z} } }{\operatorname{arg max } }\; \mathrm{P} (x | z, \hat\Phi(z,x)) \, . \] They use a heuristic algorithm based on \cite{KernighanLin}. \end{comment} \subsection{Model selection} \margnote{ As per Rev\#2, this subsection is now much shorter. It is now much more focussed on issues that are directly relevant to our algorithm.} Later, in our experiments in \cref{SECevaluation}, we will demonstrate the ability of the allocation sampler to accurately estimate the number of clusters. In this subsection, we will briefly discuss some of the theoretical issues around the estimation of the number of clusters. 
The methods that involve the MLE for the parameters carry a risk of overfitting; for larger values of $K$, the parameter space of $\pi$ and $\theta$ becomes much larger and therefore the estimates of $\mathrm{P}_{\theta = \theta_{\mathrm{MLE}}}(x | K)$ will become over-optimistic, and will tend to overestimate $K$ \citep{schwarz1978}. Therefore, an alternative formulation such as the ICL is needed; see \cite{ZanghiOnline} and \cite{Daudin-08} for derivations of the ICL in the context of models based on the SBM. Instead of using the MLE directly, those measures apply priors to the parameters and integrate over the priors, as described in \cite{BiernackiICL}, such that the average likelihood is used instead of the maximum likelihood. Typically, such integrations cannot be performed exactly and the ICL criterion consists of approximations that are based on first finding an estimate of the MLE, and then adding correction terms to it. For the rest of this subsection, we will not consider those approximate methods and will instead consider the exact solutions to the integrations. The \emph{integrated~classification~likelihood}, which the ICL intends to approximate, \[\mathrm{P}(x,z|K) = \int \int \mathrm{P}(x,z,\pi,\theta|K) \, \mathrm{d}\pi \, \mathrm{d}\theta \, , \] can be solved exactly in some models. The SBM is one of those models, and the posterior mass that our algorithm samples from is exactly equal to the integrated~classification~likelihood (if a uniform prior is used for $K$ instead of the default Poisson). While it is easy to exactly calculate the integrated classification likelihood for a given $(z,K)$, it would not be tractable to search across all possible $(z,K)$ to find the state that maximizes the integrated classification likelihood, except for the smallest of networks. The BIC is an attempt to approximate the \emph{integrated likelihood} \[\mathrm{P}(x|K) = \sum_z \int \int \mathrm{P}(x,z,\pi,\theta|K) \, \mathrm{d}\pi \, \mathrm{d}\theta . 
\] An exact evaluation of this integrated likelihood is not tractable for the SBM; it would require a summation over all possible clusterings $z$. If we were to use a uniform prior for $K$, then $\mathrm{P}(K|x) \propto \mathrm{P}(x|K)$ and an irreducible ergodic Markov chain algorithm such as ours would visit each value of $K$ in proportion to the integrated likelihood for that value of $K$. Of course, our algorithm only gives a \emph{sample} from the true posterior, and there cannot be any guarantee that the distribution of the sample is representative of the true distribution. The purpose of these last few paragraphs is to demonstrate that there are other (approximate) ways to calculate the \emph{integrated likelihood} and the \emph{integrated classification likelihood}. The Bayesian methods provide approximations that may, in practice, be at least as good as the approximations that would be provided by methods such as the ICL. The model-selection criterion $ILvB$ \cite{LatoucheILvbStatMod} is based on a variational approximation to a fully Bayesian model. As a result of its Bayesian model, it is an approximation of the integrated likelihood and no further adjustment is required for model selection. As with any variational Bayes method, it is assumed that the independence assumptions within the variational approximation yield a good approximation of the true posterior. A second assumption made by those authors is that the Kullback--Leibler divergence, the difference between the true posterior and the variational approximation, is independent of $K$. If these two assumptions hold, then the measure they use, which they call the $ILvB$, is equivalent to $\mathrm{P}(x | K)$, the \emph{integrated likelihood}. To select the number of clusters, they use that value of $K$ which maximizes the $ILvB$. 
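The observation above, that under a uniform prior on $K$ an ergodic chain visits each value of $K$ in proportion to its integrated likelihood, suggests a simple Monte Carlo estimate of $\mathrm{P}(K|x)$: record the value of $K$ at each post-burn-in sweep and normalize the visit counts. A minimal sketch (our own illustrative helper, not part of the released code):

```python
from collections import Counter

def posterior_K(trace):
    """Estimate P(K | x) from the list of K values visited by the
    Markov chain after burn-in: normalized visit frequencies."""
    counts = Counter(trace)
    total = len(trace)
    return {k: c / total for k, c in sorted(counts.items())}
```

The mode of this estimated distribution is then a natural point estimate of the number of clusters.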
\begin{comment} Methods that involve the maximization of a likelihood function across a parameter space of varying dimension, like many of the methods that have been described in this section, are often biased towards families of models with a larger number of parameters. In the block models, and often in the broader problem of clustering, the problem manifests itself as a bias towards a larger number of clusters. Therefore it is necessary, in the uncollapsed maximization approaches, to use a model selection criterion. It is important to identify the question of interest before selecting a criterion. For example, the goal might be to find the value of $K$ which maximizes $\mathrm{P}(x | K)$ or $\mathrm{P}(x , K)$. This is a common approach, and a number of criteria have been developed. However, in clustering we are typically interested in a subtly different question. Our goal is to find clusterings which are well-described by the data; this involves finding the pair $(K,z)$ which maximizes $\mathrm{P}(x,z | K)$ or $\mathrm{P}(x,z,K)$. If you use a uniform prior on $K$, which is typically (implicitly) the case in many model selection criteria, then $\mathrm{P}(x | K) = \mathrm{P}(x , K)$ and $\mathrm{P}(x,z | K) = \mathrm{P}(x,z,K)$. These two goals are related to each other via the summation $ \mathrm{P}(x | K) = \sum_z \mathrm{P}(x,z | K) $. In other words, the former goal is to find the value of $K$ which maximizes the left hand side of this equation. Or, alternatively, the goal may be to ignore the summation and instead to find the single term on the right hand side which is maximal. With suitable priors, solving a single term in that summation, $\mathrm{P}(x,z | K)$, is relatively straightforward; this is the basis of this paper. But performing the summation across all possible colorings $z$ is not tractable. Therefore, some sort of approximation or numerical calculation is required if one wants to maximize $\mathrm{P}(x | K)$. 
To answer the first question, maximizing $\mathrm{P}(x|K)$, the Bayes Information Criterion (BIC) of \cite{SchwarzBIC} is often used. The maximization algorithm is run for many different values of $K$, and the BIC is used to select a value of $\hat{K}$. To answer the second question, \cite{Daudin-08} and \cite{ZanghiOnline} use an approximation to the Integrated Classification Likelihood (ICL), a criterion due to \cite{BiernackiICL} which can be used in clustering problems as it tackles the second question described above. The ICL amounts to approximating the \emph{integrated} likelihood $P(x, z| K)$, and assumes a uniform prior on $K$, over a finite number of possible cluster sizes. In this criterion, the integration is approximated by taking the maximum of the likelihood (over $\theta$ and $\pi$) and adding terms to (approximately) correct for this maximization. As the required regularity conditions do not hold in the mixture context, there is a lack of theoretical justification for use of the ICL. The ICL is an attempt to approximate the integrated likelihood of the complete data. By using suitable priors and collapsing, we are able to exactly solve this integrated likelihood, $\mathrm{P}(x,z | K)$, in our \cref{EQfinal,EQfBernoulli,EQfPoisson}. Thus, our choice of priors allows an exact expression for the integrated likelihood to be obtained and we avoid the theoretical and practical difficulties of applying the ICL or BIC. The value of $K$ in the pair $(K,z)$ which is selected by the ICL is typically smaller than the value of $K$ selected by the BIC. This is not a problem or error, but is due to the fact that ICL is answering a different question. If your goal is to find a good clustering, the ICL is appropriate; but if your goal is to estimate the number of clusters and you are not particularly interested in finding a good clustering, then the BIC is appropriate. 
An alternative to the BIC for estimating $\mathrm{P}(x|K)$, developed specifically for the SBM, is a criterion developed in \cite{LatoucheILvbStatMod}. In fully Bayesian models, such as \cite{Nowicki-01} and our extension, parameters are sampled instead of maximized and so bias to a larger number of parameters is avoided and we do not need to add explicit extra terms such as those in the BIC and ICL. In our approach, the number of clusters $K$ is assigned a prior and it does not need to be treated differently to any other variable. As a result, our method can be used to answer both questions in a straightforward manner without relying on the approximations used in the BIC and ICL. Also, our method does not require the user to run the algorithm repeatedly for many values of $K$. If the goal is to estimate the number of clusters, then our method can be used; as our algorithm proceeds, the value of $K$ that is visited most often will be the value which maximizes $\mathrm{P}(x , K)$. On the other hand, the pair $(K,z)$ which is visited most often by our algorithm is the pair that will maximize the ICL. If there is any discrepancy between the value of $K$ (or $(K,z)$) selected by the BIC(or ICL) and that selected by our method, it is due either to the approximations used in the BIC or ICL or in the underlying maximization algorithm or it is because our MCMC algorithm has not yet converged. Or, of course, it may be due to the choice of prior on $K$. The BIC and ICL implicitly use a Uniform prior on $K$ and our method can easily be modified to use a uniform prior instead of the Poisson prior we use by default; alternatively, the extra term could be added to the BIC or ICL to use a non-uniform prior if desired. Finally, it should be noted that the label switching procedure (\cref{SEClabelswitching}) allows us a slightly different approach. This procedure identifies a `consensus' clustering which summarizes all of the pairs ($K,z$) which have a good fit to the data. 
Instead of finding the single pair which maximizes $\mathrm{P}(x,z|K)$, this procedure essentially averages over all the pairs, giving greater weight to pairs with a large $\mathrm{P}(x,z|K)$. One could argue that this gives us the average of the posterior, whereas the ICL attempts to select the mode of the posterior. The mode of any distribution is very sensitive to isolated peaks in the density function, whereas an average, such as the mean or median, may be more robust. This may be desirable in certain contexts and suggests the label-unswitching procedure as a suitable method. \end{comment} \section{Reviewers' comments} We would like to thank the reviewers for their useful and interesting comments. In this response, we have copied the reviewers' comments in the original order, indented and coloured green, and we will make our detailed responses at the appropriate place below. However, we will make some brief overall comments first. In this review copy, we have arranged for line numbers to appear in the left margin, and we will refer to those line numbers in some of our responses. The title has been changed, as suggested by Reviewer \#2. It is now ``Improved Bayesian Inference for the Stochastic Block Model with application to large networks''. The overall structure has changed in some important ways. The order of the presentation is the same, but some subsections have been added or removed or changed significantly. More detailed responses will be made later. \begin{itemize} \item Section 1: The introduction. Now includes some extra references. \item Section 2: Define the SBM. We have nothing novel here, we just define the model based on existing literature. \item Section 3: The term `collapsing' is introduced, and our marginalization is derived. \item Section 4: Related estimation procedures: Discuss algorithms already in the literature, focussing exclusively on the SBM. 
Also includes a completely rewritten subsection on the various approaches to model selection for the SBM. \item Section 5: Estimation: Our algorithm. \item Section 6: Synthetic evaluation: Includes some new experiments. \item Section 7: Our analysis of the summer school survey network. \item Section 8: Conclusion: rewritten to account for the changes that have been made. \end{itemize} \subsection{Reviewer 1} \green{The authors propose an interesting approach for the stochastic block model that I do recommend for publication. Contrary to existing techniques which rely on approximations (variational decompositions for instance), they derive exact quantities, using common conjugate priors such as Beta and Dirichlet priors. These collapsed quantities are maximized for inference using a MCMC algorithm, related to the allocation sampler which was proposed in the context of a more standard Gaussian mixture model. Four moves (Metropolis, Gibbs sampling, Metropolis-Hastings, absorb-eject move) are proposed to search over the space of possible clusterings and number of classes. I see three main advantages of their work compared to existing strategies for the stochastic block model : Usually, existing methods are run several times for various values of the number of classes K, a model selection criterion is computed, and K is chosen such that the corresponding criterion is maximized. Here, because a Poisson prior distribution is considered and thanks to the MCMC moves, K is estimated automatically from the data. Second, the authors maximize an exact collapsed distribution and I expect this to lead to better results than inference strategies based on variational approximations. Finally, the experiment section illustrates that their algorithm is rather fast, making it possible to use the stochastic block model on large networks (few thousand nodes) while the algorithm of Nowicki \& Snijders can be only be used for networks with a few hundred of nodes for instance. 
Moreover, the authors made their code publicly available as well as the original data set they use in the experiment section. Some more specific comments below :} \green{In the Introduction :} \green{* I agree with the authors that the community finding problem should be distinguish with the more general problem of block modeling.} Thanks for the accurate summary of the paper. \green{* Some common references are missing. In particular, I would like the authors to add references to community detection algorithms which are highly used for graph clustering. I understand that in the paper the authors consider methods based on probabilistic models, however, I feel these algorithms represent an important part of the literature and should be mentioned. I suggest that the authors describe in a few sentences the modularity criterion and one or two strategies to optimize it. The following references could be added: -\cite{NewmanGirvan} Newman and Girvan. Finding and evaluating community structure in networks. Physical Review. 2004 -\cite{girvan-2002} Girvan and Newman. Community structure in social and biological networks. PNAS. 2002 - \cite{NewmanFastishMod} Newman. Fast algorithm for detecting community structure in networks. Physical Review. 2004 References (and a bit of description) to three common probabilistic models for graph should also be added : - \cite{HandcockRafteryTantrum} The latent position cluster model (LPCM) : Handcock, Raftery, and Tantrum. Model based clustering for social networks. Journal of the royal statistical society. 2007. - \cite{HoffLatentSpace} LPCM generalizes the work the latent space model : Hoff and Raftery, and Handcock. Latent space approaches to social network analysis. Journal of the royal statistical society. 2002. - \cite{Airoldi2008MMSB} The mixed membership model : Airoldi, Blei, Fienberg, Xing. Mixed membership stochastic block models. 2008 } These references have been added. 
Near the end of Section~1(`Introduction'), line 57, we have listed a wide variety of alternative approaches to clustering the nodes of a network, other than those that are directly based on the SBM. This includes both statistical and non-statistical approaches. However, we have not given any further details or description of these methods, to save space; the reader is directed to Fortunato's review \citep{fortunato-2010} if they would like a more comprehensive summary. The models and methods that are based on the SBM are discussed in later sections. \green{ * Page 3, section 2. When describing the stochastic block model, I suggest the authors first mention that the model is sometimes described in a frequentist setting as in Daudin et al (2008) and that they are here considering a Bayesian framework as in (Nowiki and Snijders, 2001) and Latouche et al (2012). } We discuss this at the bottom of page 4, just before Subsection~2.1(`Data model variations') at lines 137 to 145, and also in Section~4(`Related estimation procedures') at lines 258 to 268, noting the practical difference between the frequentist and Bayesian approaches. The frequentist approaches use a point estimate for the parameters, specifically the MLE, whereas the Bayesian approach considers a full distribution based on a prior. We avoid any philosophy; our goal is to provide practical guidance on what is different from an algorithmic point of view. \green{ * Page 3 : "Section 4," -$>$ remove "," * Page 5 : "rather then requiring" -$>$ "rather than requiring" * Page 10 : "alternatively, the an" -$>$ "alternatively, an" } These typographical errors, and others we had missed, have been corrected throughout the paper. \green{ * Page 4 : sometimes the authors forget to add comma or point at the end of the formulae. First formulae : comma missing. Third formulae : point missing. Check the rest of the paper, especially the appendix section. 
* Page 11 : first formulae : point missing } This was an issue throughout the paper. Every formula has now been checked and edited. \green{ * Page 9 : the equation should be removed or given directly in the text or being introduced first. A simple : "$P(x|K)$ is given by : ." might be sufficient. } Done. Now on page 12, line 313. This is Subsection~4.1(`Model selection'). An expression for $\mathrm{P}(x|K)$, based on a suitable summation and integration, is now given in a more direct manner. \green{ * Page 7, section 4 : When the authors mention the constraints on pi that Zanghi et al (2008) used, they could refer to the work of Hofman and Wiggins (2008) which is very similar. } This reference \cite{HofmanBayesNetworkModularity} has been added to Section~4(`Related estimation procedures'), line 270. \green{ When they first introduce BIC ("BIC is used to ."), the authors should recall that in the case of the stochastic block model, BIC is not tractable (because the likelihood is not tractable). Also, for the rest of the section, do not describe BIC as if it was a model selection criterion among others that could be used to estimate K. Again, it is not tractable in the case of the stochastic block model. } This is a good point. Subsection~4.1(`Model selection') has been rewritten. We focus on issues more directly related to the SBM, and have removed much of the more general discussion. We have mentioned the BIC, but we have shortened our discussion of it; even though it is not tractable in practice, it may be interesting to briefly discuss the \emph{integrated likelihood}, $\mathrm{P}(x|K)$, which the BIC attempts to approximate. This allows us to discuss how our allocation sampler, and the $ILvb$ of \cite{LatoucheILvbStatMod}, may also be used to approximate the \emph{integrated likelihood}. 
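For a toy network, the integrated likelihood discussed here can be evaluated exactly by brute force. The sketch below is our illustration only (it is not the allocation sampler or the $ILvb$): it sums the collapsed $\mathrm{P}(x,z|K)$, with $\theta$ and $\pi$ integrated out analytically under the Dirichlet and Beta priors, over all $K^N$ clusterings of a directed binary network without self-loops.

```python
import itertools
import math

def log_beta(a, b):
    # log of the Beta function B(a, b)
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def log_p_x_z(x, z, K, alpha=1.0, b1=1.0, b2=1.0):
    """Collapsed log P(x, z | K) for a directed binary network without
    self-loops: theta is integrated against Dirichlet(alpha, ..., alpha)
    and each pi_kl against Beta(b1, b2)."""
    N = len(z)
    sizes = [z.count(k) for k in range(K)]
    # Dirichlet-multinomial term for the cluster memberships
    lp = math.lgamma(K * alpha) - math.lgamma(N + K * alpha)
    lp += sum(math.lgamma(nk + alpha) - math.lgamma(alpha) for nk in sizes)
    # Beta-Bernoulli term for each of the K^2 blocks
    for k in range(K):
        for l in range(K):
            pairs = edges = 0
            for i in range(N):
                for j in range(N):
                    if i != j and z[i] == k and z[j] == l:
                        pairs += 1
                        edges += x[i][j]
            lp += log_beta(b1 + edges, b2 + pairs - edges) - log_beta(b1, b2)
    return lp

def log_p_x_given_K(x, K):
    """log P(x|K) by exhaustive summation over all K^N clusterings --
    feasible only for toy networks, but exact (log-sum-exp for stability)."""
    N = len(x)
    terms = [log_p_x_z(x, list(z), K)
             for z in itertools.product(range(K), repeat=N)]
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))
```

For $K=1$ the sum has a single term, and with uniform priors it reduces to the Beta-Bernoulli marginal of all $N(N-1)$ ordered pairs, which makes the function easy to check by hand.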
\green{ * Page 17 : when the authors first mention the Erdos-Renyi model, they could recall that this model corresponds to a random graph model with no block structure. * same page : "as expected, this decreases as delta increases. " -$>$ explain why. } On page 22, line 605, we now state that, as delta tends to 0.5, the network approaches the Erdos-Renyi model, and therefore that the accuracy decreases due to the lack of block structure. \green{ Experiment section : I enjoyed reading the experiment section which, through a series of analysis of both toy data and real data sets, illustrates the capacity of the approach to estimate K and retrieve the clustering structure. However, the authors could easily do a little more to demonstrate the utility of their approach : I would like to see how the criterion of Latouche et al (2012) performs against the proposed approach. } We have included two new experiments based on the experiments of Latouche et al (2012) \cite{LatoucheILvbStatMod}. In subsection~6.2(`Estimating K in community-structure networks') we copied the \emph{community-structure} experiments exactly from their paper, using their software, and compared the answer with that from our method. In subsection~6.3(`Synthetic SBM networks') we modified their data-generation software to generate data more strictly from the SBM, and performed the same sort of experiments again. We used larger networks, with larger $K$, in this second set of experiments. \subsection{Reviewer 2} \green{ The paper describes a bayesian MCMC inference for The Stochastic Block Model. The method is a significant improvement to published work on the same problem. The proposed MCMC procedure allows to deal with quite larger data sets than the old one, which was limited to networks with 200 nodes. Moreover it gives a procedure to estimate the number of groups. Therefore the proposed algorithm is a real progress, which deserves publication. 
However it suffers from its poor presentation which has to be significantly improved. Major comments } \dots \green{ 1. The presentation is confusing because there is no clear distinction between the model and the algorithm used to estimate the parameters. } We have changed the structure as described at the top of our response above. The model is described in section 2, our collapsing/marginalization in section 3, and our algorithm in section 5. Alternative SBM algorithms in the literature have been discussed in section 4. \green{ The authors claimed that their model is an extension of the Stochastic Block Model. But this is not true. The model is exactly the SBM. } Thanks for the reference to the Mariadassou work \cite{MariadassouWeightedSBM}. We were not aware that the Poisson edge-weight model was not new, and therefore we thought that we had extended the model. We have made a number of changes throughout the paper to correct for this. \green{ The "collapsing" procedure is only the computation of a marginal distribution in the Bayesian version of the SBM. } Agreed. It is a technique to simplify the representation of the posterior which allows for simpler mathematics and simpler algorithms, and hence it is more of an `algorithmic' issue than a `modelling' issue. We no longer refer to this as an `extension'; it does not change the model and it does not change the results for $\mathrm{P}(z,K|x)$ that we extract from the posterior. \green{ The extension to weighted networks (Poisson and Gaussian) have been already published (see for example M. Mariadassou, S. Robin, and C. Vacher, Uncovering structure in valued graphs: A variational approach, Ann. Appl. Statist. 4(2) (2010), pp. 715-742). } Agreed. Discussed above. \green{ Therefore the model is not new, but the MCMC algorithm is. The authors have to correct this confusion in many places in the paper. For example, the title should be modified. 
I suggest "New MCMC Bayesian Inference for the Stochastic Block Model", but the authors may find another title in the same spirit. The words "we extend the SBM model" should be suppressed everywhere in the paper. } We now have a new title - `Improved Bayesian Inference for the Stochastic Block Model with application to large networks'. The word `extend' has been suppressed. \green{ 2. There is some dispersion in the presentation and the authors should have a better focus and shorten the paper. The paper should be reorganized in a more logical way: Introduction, model, published methods (frequentist and bayesian with more focus on bayesian ones) and algorithms for estimating parameters, the proposed MCMC algorithm incuding the estimation of K and label switching, Simulations, Analysis of a real data set and conclusion. } Done - that's the structure now. \green{ 2.1 The presentation of the alternative published methods has to be modified: some of them are out of the scope, others are lacking. The subject is the SBM, so it is not usefull to refer to other models in the section "Related probabilistic models". This section is confusing because of the confusion between models and algorithms. The authors must focus on the SBM and suppress the comments about other models page 8. They can report these references to the conclusion section if they think that it is usefull althought I do not think it is. } Section 4 now focusses exclusively on existing SBM methods from the literature. As suggested by Reviewer\#1, in Section~1(`Introduction') we have added more references for a variety of non-SBM methods, including non-statistical approaches. \green{ However they missed the variationnal bayesian estimation method which is a direct competitor to their method (S. Gazal et al. Accuracy of variational estimates for random graph mixture models, Journal of Statistical Computation and Simulation, 2011). The paper from Bickel, P. and Chen, A. 
(2010), A nonparametric view of network models and newman-girvan and other modularities, PNAS, has also some connection, using a profile likelihood which has some similarities with the "collapsed MCMC". } Done. The Bickel reference has been added to Section~4 (page~10, line~262), and Gazal on line 283. We have also referenced Gazal in our updated Conclusion, where we question the claim that MCMC is necessarily unscalable. We do not intend to argue that MCMC will always be competitive with the best algorithms; our goal is simply to caution against overly-general claims about MCMC. \green{ The section 4 named Related probabilistic models should be renamed Related estimation procedures. } Done. \green{ 2.2 The two pages about model selection could be shortened because most of the contents is well known. The authors should focus about what they are doing. The explanation about why is too long, not clear and creates some confusion about their own algorithm. } Agreed. This section has been rewritten and is shorter. See our response to Reviewer\#1 above for more details, lines 1099-1106. \green{ 2.3 I am not convinced by the term "collapsing". The term "collapsibility" has a precise meaning for contingency tables. It is used here in a different way. In this paper the "collapsing" is nothing else than the computation of a marginal distribution, when this marginal can be expressed explicitly. Why using a new term when the term "marginal distribution" which is more precise could be used? I know that the term collapsing has been already used in one or two papers, but very few (bayesian) statistician have heard about it. The section 3 could be renamed `Marginal distribution of P(x,z,K)'. } We have edited the paper to be more careful when introducing the term collapsing for the first time, on line 8. 
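The quantity in question is indeed just a marginal distribution that can be written down explicitly. For a single Bernoulli block with a Beta prior, the `collapsed' likelihood is the Beta-Bernoulli marginal, available in closed form with no approximation; a minimal sketch of our own (function names are ours):

```python
import math

def log_beta(a, b):
    # log of the Beta function B(a, b)
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def collapsed_block_loglik(edges, pairs, beta1=1.0, beta2=1.0):
    """Log marginal likelihood of one SBM block with pi_kl integrated out:
       int_0^1 pi^edges (1 - pi)^(pairs - edges) Beta(pi; beta1, beta2) d pi
       = B(beta1 + edges, beta2 + pairs - edges) / B(beta1, beta2).
    Exact Beta-Bernoulli conjugacy; no approximation is involved."""
    return (log_beta(beta1 + edges, beta2 + pairs - edges)
            - log_beta(beta1, beta2))
```

With the uniform prior ($\beta_1=\beta_2=1$), a block with 2 edges out of 5 ordered pairs has marginal probability $B(3,4)/B(1,1) = 1/60$, which the function reproduces.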
There are other Bayesian papers that have used ``collapsing'' to mean simply the calculation of the marginal distribution \cite{LiuCollapsedGibbs,WyseFriel}; a search for ``collapsed Bayesian'' on Google yields many results. It does appear that the term is already popular in this context, although we acknowledge that its meaning here differs from the meaning it has for contingency tables. \green{ 2.4 The authors claim that the proposed algorithm can deal with large networks. However the M3 procedure is of order $N^2$. Therefore it seems that some optimization made for sparse networks is necessary to analyze 10 000 nodes. The authors should not overgeneralize their conclusions and say what happens for a not-sparse 10 000 nodes network. } Our algorithm has been optimized to handle sparse networks; see lines~419-425 and lines~460-475. We have edited our paper to confirm that $N^2$ is the worst case, and to more carefully justify our claims regarding sparser networks. \green{ Minor comments Page 5 the prior on K is a Poisson(1) distribution. What about the case K=0? } We had neglected that. Thanks. We now discuss this near the start of Section~3(`Collapsing'). In practice, this cancels out in the posterior as we are only interested in the posterior mass up to proportionality. \green{ Page 6 l 13-16. The argument about the correlation between old and new estimate of z is not convincing. Why the reader should believe the authors on this point? In the absence of precise mathematical argument, this sentence should be suppressed. } This has been removed. We might consider it again in a future publication, although perhaps a similar argument (more carefully made) has already appeared in the literature. \green{ Page 16 l 11. The exact search of the optimal relabeling cannot be done. Instead an heuristic is used. No information is given about the loss between the exact and the heuristic procedures. The reader would like to have some indication about it. 
} More information and discussion have been added in Subsection~5.6(`Label Switching'), lines 544-549 and 570-576. The issue has not been considered in great detail in the other literature on which our method is based. We have calculated the size of the space of all relabellings and shown that it is very large, ruling out a naive algorithm. In our experiments, if we take two relabelled states at random, we see that only very few of the nodes are assigned differently in the two states. Our paper argues that this suggests the heuristic has performed well. But we do not claim to have proved this. \green{ Page 21 Table 2. Time to convergence is given for different cases but I have not been able to find any indication about the hardware used. } The hardware used is described in Subsection~5.6(`Label Switching'), lines 556-557, and at the start of Section~6(`Evaluation'), lines 584-586. The same hardware was used for all experiments. \green{ Page 24 Extensions. The hierarchical extension could be suppressed and the idea kept for a new paper. } Agreed. This section, the old Section 8, has now been removed. \green{ Page 24 l -8 : I do not understand the sentence "It can be meaningful to have clusters that are empty". } Agreed. That sentence has been removed. Our original intention was to observe that it can be valid for there to be empty clusters. But that is a very abstract comment, and it is not needed for our paper. The word `meaningful' was not very helpful. \subsection{Editor's comments} \green{ Editor's Comments (provide a point to point response to these comments) ------------------ } \green{ 1. Follow all the editorial guidelines when you prepare the revised mns. (eg abstract in the third person and no statements like this paper etc.) Re-write the abstract in the third person and without references. } Done. \green{ 2. Have the paper proof read and corrected. } Done. \green{ 4. Delete the vertical lines from the tables. } Done. \green{ 5. 
Incorporate all footnotes in the main text of the paper (i.e. no footnote apart from the first footnote indicating the details of the corresponding authors). } Done. \green{ 6. Add full stops and commas at the end of equations. } Done. \green{ GUIDELINES FOR PREPARING THE REVISED PAPER --------------------------------------------------------------------- Please take the following in to account when you prepare the final version of your paper: } \green{ 1. Could you please write the abstract in the third person without having expressions like "We", "In this paper", "Here", "This work", etc. PLEASE AVOID EQUATIONS IN THE ABSTRACT. } Done. \green{ 2. Avoid references in the abstract. IF THIS IS REALLY NECESSARY, then provide complete and abbreviated information. I.e. (Authors, abbr. journal, pages, vol., year). } Done. \green{ 3. Do not use vertical lines in tables. } Done. \green{ 4. Add the tables at the appropriate place in the paper, i.e. NOT at the end. If you use LATEX, then please incorporate the figures at their appropriate place in the paper too. } Done. \green{ 5. Add full stops and commas at the end of equations. } Done. \green{ 6. The article is written using the CSDA guidelines (see author instructions of the journal) and http://www.elsevier.com/locate/csda 7. If you are using Latex, then could you please use the style files of the publishers which can be found at http://www.elsevier.com/locate/latex/journals/ } We now use the \verb#elsarticle.cls# style, although further work may be required to get the right style. \green{ 8. The complete postal address, email, tel. and fax of the corresponding author should be shown as a footnote in the first page. Avoid having any other footnotes. } Done. \green{ 9. If you have attachments (software or data sets) that will appear as annexes in the electronic version of your mns, then please do mention this as a footnote in the first page. } We have no attachments. \green{ 10. 
In multiple equations have commas at the end of each eqn and an "and" between the last pair. E.g. eqn 1, eqn 2, eqn 3 and eqn } Done. \section{Stochastic Block Model (SBM)} \label{SECsbm} As formulated in \cite{Nowicki-01}, a network describes a relational structure on a set of nodes. Each edge in the network describes a relationship between the two nodes it links. A general case of a finite alphabet of states relating a pair of nodes is considered, but in the simplest case, discussed by the same authors in \cite{SnijdersSBM1997}, relationships are binary -- an edge joining a pair of nodes either exists or not. The network can be undirected, corresponding to symmetric relationships between the nodes, or may be \emph{directed}, where a relationship from node $i$ to node $j$ does not necessarily imply the same relationship exists from node $j$ to node $i$. Finally, a \emph{self-loop} -- a relationship from a node to itself -- may or may not be allowed. Throughout the paper, we use $\mathrm{P}(\cdot)$ to refer to probability mass (i.e. of discrete quantities) and $\mathrm{p}(\cdot)$ to refer to probability density (i.e. of continuous quantities). $N$ is the number of nodes in the network and $K$ is the number of clusters. In the algorithm proposed in \cite{Nowicki-01}, these are given as input values, although in our approach, we treat $K$ as a random variable with a given prior distribution. Given $N$ and $K$, the SBM describes a random process for assigning the nodes to clusters and then generating a network. Specifically, the cluster memberships are represented by a random vector $z$ of length $N$ such that $z_i \in \{1, \dots, K\}$ records the cluster containing node $i$. $z_i$ follows a multinomial distribution, \[ z_i \overset{iid}\sim \text{Multinomial}(1; \theta_1, \dots, \theta_K) \, , \] \noindent such that $\theta_i$ is the probability of a node being assigned to cluster $i$ ($\sum_{k=1}^K \theta_k = 1$). 
The vector $\theta$ is itself a random variable drawn from a Dirichlet prior with dimension $K$. The parameter to the Dirichlet is a vector $(\alpha_1,\dots,\alpha_K)$ of length $K$. We follow \cite{Nowicki-01} by fixing the components of this vector to a single value $\alpha$, and by default $\alpha = 1$, \[ \theta \sim \text{Dirichlet}(\alpha_1 = \alpha, \alpha_2 = \alpha, \dots, \alpha_K = \alpha ) \,.\] This describes fully how the $N$ nodes are assigned to the $K$ clusters. Next we describe how, given this clustering $z$, the edges are added between the nodes. A network can be represented as an $N \times N$ adjacency matrix, $x$, such that $x_{ij}$ represents the relation between node $i$ and node $j$ (taking values 1 or 0 in the binary case). Denote by $x_{(kl)}$ the submatrix corresponding to the \emph{block} of connections between nodes in cluster $k$ and nodes in cluster $l$. If the network is undirected, there are $\frac1 2 K (K+1)$ blocks, corresponding to each pair of clusters; and if the network is directed, there are $K^2$ blocks, corresponding to each \emph{ordered} pair of clusters. It is generally simpler to discuss the directed model; unless otherwise stated, the formulae presented here apply only to the directed case. The definitions and derivations can easily be applied to the undirected case, provided that care is taken only to consider each pair of nodes exactly once. If self-loops are not allowed, then the diagonal entries of $x$, $x_{ii}$, are excluded from the model. It is assumed that, given $K$ and $z$, connections are formed independently within a block so that \[ P(x | z, K, \pi) = \prod_{k,l} P(x_{(kl)} | z, K, \pi_{kl}) \, , \] where \[ P(x_{(kl)} | z, K, \pi_{kl}) = \prod_{\{i | z_i = k\}} \prod_{\{j | z_j = l\}} P(x_{ij} | z, K, \pi_{kl})\,, \] and the matrix $\pi = \{\pi_{kl}\}$ describes the cluster-cluster interactions. $\pi$ is a $K\times K$ matrix, but for undirected networks only the diagonal and upper triangle are relevant. 
Specifically, for binary networks, $\pi_{kl}$ represents the edge density within the block, and edges follow the Bernoulli distribution, \[ x_{ij} | z, K, \pi \sim \mathrm{Bernoulli}(\pi_{z_i z_j}) \,. \] Each of the $\pi_{kl}$ is drawn from the conjugate $\text{Beta}(\beta_1, \beta_2)$ prior, \[ \pi_{kl} \overset{iid}\sim \text{Beta}(\beta_1, \beta_2)\,. \] Again we follow \cite{Nowicki-01} and choose $\beta_1=\beta_2=1$, giving a Uniform prior. This completes the description of the Bayesian presentation of the SBM. A different approach is taken in other work, such as that of \cite{Daudin-08}, \margnote{ Both reviewers want to discuss freq-vs-Bayes. I'm trying to satisfy them while also focussing on the practical differences (which are quite small) instead of the philosophy. The `frequentist' SBM methods don't use unbiased estimators or any of the machinery of a full-blown frequentist. In practice, the methods give the same point estimates, \emph{if} we were to use a Bayesian point estimate. } where, using essentially the same model, the goal is to take a point estimate of the parameters, $(\pi,\theta)$, for a given number of clusters $K$. Specifically, the aim is to find the MLE; the value of $(\hat\pi,\hat\theta)$ which maximizes $\mathrm{P}(x | \pi, \theta, K)$. This is described as the frequentist approach, in contrast to the fully Bayesian approach where a distribution of parameter values is allowed instead of a point estimate. We will return to this issue in a little more detail in \cref{SECrelatedProb} in order to discuss the practical differences from an algorithmic point of view. \begin{comment} It should be noted that the priors we use for $\pi$ and $\theta$ are uniform priors and therefore that a Bayesian point estimate such as the MAP is identical to the MLE; where a Bayesian uses uniform priors, a frequentist will get the same numerical answer without priors. 
This is not the place to further discuss these philosophical issues of Bayesian and frequentist inference. In later sections, when we will discuss other algorithms in the literature, we will focus strictly on the practical aspects, in particular observing that the only difference from an algorithmic point of view is the distinction between methods that take a point estimate of the parameters and the methods which consider a distribution of estimates for the parameters. \end{comment} \subsection{Data model variations} The model is naturally extended in \cite{Nowicki-01} to allow for a finite alphabet of two or more relational states, where instead of using a Bernoulli with a Beta prior for $x$ and $\pi$, we can use a Multinomial and a Dirichlet to model this alphabet. The Bernoulli-and-Beta-prior model is just a special case of the Multinomial-and-Dirichlet-prior model. Alternatively, we can allow an infinite support and extend the model to allow for non-negative integer weights on the edges, by placing a Poisson distribution on $P(x|\pi, z)$, as seen in \cite{MariadassouWeightedSBM}. Now $\pi_{kl}$ represents the edge rate and is drawn from a Gamma prior, \[ \begin{split} x_{ij} | z, K, \pi & \sim \text{Poisson}(\pi_{z_i z_j}) \\ \pi_{kl} & \sim \mbox{Gamma}(s, \phi) \,. \end{split} \] We do not suggest any default for the hyperparameters $s$ and $\phi$. A further extension to real-valued weights is also possible, by using a Gaussian for $p(x|\pi,z)$ and suitable prior on $\pi$, following \cite{WyseFriel}. These variations, and others, are described in \cite{MariadassouWeightedSBM}, but they do not discuss conjugate priors. \margnote{ Rewritten subsection, so as not to claim that we've discovered anything new.} The integration approach and algorithm described later in this paper can be applied to many variants of the edge model; however, in the remainder of the paper we focus on the Bernoulli and Poisson models that are supported in our software. 
In summary, given $N$ and $K$, the random process generates $\theta$, $z$, $\pi$ and ultimately the network $x$. The two main variables of interest are the clustering $z$ and the network $x$. In a typical application, we have observed a network $x$, perhaps with an estimate of $K$, and our goal is to estimate $z$.
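For concreteness, the generative process just summarized can be sketched in a few lines (this is our illustration only, not the implementation in our software; in particular, interpreting $\phi$ as the Gamma scale parameter is our assumption, since no convention is fixed above):

```python
import numpy as np

def sample_sbm(N, K, alpha=1.0, beta1=1.0, beta2=1.0,
               weighted=False, s=1.0, phi=1.0, seed=None):
    """Draw (z, x) from the directed SBM without self-loops.
    weighted=False: Bernoulli edges, pi_kl ~ Beta(beta1, beta2).
    weighted=True : Poisson edge counts, pi_kl ~ Gamma(s, phi)
                    (phi used as a scale parameter -- an assumption)."""
    rng = np.random.default_rng(seed)
    theta = rng.dirichlet([alpha] * K)        # cluster proportions
    z = rng.choice(K, size=N, p=theta)        # cluster memberships
    if weighted:
        pi = rng.gamma(shape=s, scale=phi, size=(K, K))   # edge rates
    else:
        pi = rng.beta(beta1, beta2, size=(K, K))          # edge densities
    rates = pi[z][:, z]                       # N x N matrix of pi_{z_i z_j}
    if weighted:
        x = rng.poisson(rates)
    else:
        x = (rng.random((N, N)) < rates).astype(int)
    np.fill_diagonal(x, 0)                    # exclude self-loops
    return z, x
```

A call such as `sample_sbm(100, 3, seed=0)` returns a clustering `z` and adjacency matrix `x`; passing `weighted=True` switches to the Poisson edge-weight variant.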
\section{Introduction} The formalism developed in Ref.~\cite{Hagelstein:2015yma} was illustrated by a modification of the proton electric form factor (FF), $G_E$, which could reconcile the discrepancy in the various proton radius extractions. As is correctly pointed out in the Comment \cite{Arrington:2015}, this modification is inconsistent with the analyticity constraints. The latter require that all the singularities of $G_E(Q^2)$ lie on the negative $Q^2$--axis, whereas the modification has a pole near the positive axis resulting in a resonance-like structure, as seen in \Figref{GEminus1} (red dashed curve), as well as in the figure of the Comment. \begin{figure}[tbh] \centering \includegraphics[scale=0.48]{GEminus1.pdf} \caption{$G_E(Q^2)-1$ and $\ol G_E(Q^2)-1$ as a function of $Q$. The solid black curve shows the empirical FF from Ref.~\cite{Arrington:2006hm}. The dashed red curve shows the modified FF from Ref.~\cite{Hagelstein:2015yma}. The dotted blue curve is the modified FF of this work.} \label{fig:GEminus1} \end{figure} Here we present a modified $G_E$, shown in \Figref{GEminus1} (blue dotted curve), that complies with the consistency requirements put forward in the Comment, and yet resolves the discrepancy in exactly the same way as described in the original paper. The rest of this Reply can be viewed as the revised Sec.\ III of Ref.~\cite{Hagelstein:2015yma}: \bigskip \centerline{\bf III. RESOLVING THE PUZZLE} \medskip \setcounter{equation}{19} We assume the electric FF to separate into a smooth ($\ol G_E$) and a nonsmooth part ($\widetilde G_E$), such that \begin{equation} G_E(Q^2) = \ol G_E(Q^2) + \widetilde G_E(Q^2).\eqlab{newFF} \end{equation} For the smooth part we shall take a well-known parametrization which fits the $ep$ data, while for the nonsmooth one we take \begin{equation} \widetilde G_E(Q^2)=\frac{A\,Q_0^2\,Q^2\left[Q^2+\eps^2\right]}{\left[Q_0^2+Q^2\right]^4}, \end{equation} where $A$, $\eps$ and $Q_0$ are real parameters. 
The poles of this function are at negative $Q^2$ (timelike region) and hence it obeys the analyticity constraint. \begin{figure}[tbh] \centering \includegraphics[scale=0.48]{Correction.pdf} \caption{The correction, $\widetilde G_E(Q^2)$, for $Q_0=1.6$ MeV, $A=1.2\times10^{-4}$ MeV$^2$ and $\eps=0.143$ MeV (solid green), and the weighting function, $w(Q)$, for $e$H (blue dotted) and $\mu$H (red dashed) as functions of $Q$. The dash-dotted line indicates the onset of electron-proton scattering data.} \label{fig:correction} \end{figure} According to \Figref{correction}, in order to make a maximal impact on the puzzle, the fluctuation $\widetilde G_E$ must be located at the extrema of $w(Q)$ in Eq.~(19a)\footnote{Equation numbers below 20 refer to the equations in Ref.~\cite{Hagelstein:2015yma}.} around either the $e$H or $\mu$H inverse Bohr radius. Here we shall only consider the latter case and set one of the position parameters to the MeV scale: \begin{equation} Q_0=1.6\, \mbox{MeV}. \end{equation} This choice in turn constrains the choice of the smooth part $\bar G_E$, if one wants to solve the puzzle. Indeed, since with this $Q_0$ the nonsmooth part affects mostly the $\mu$H result, the smooth part must have a radius consistent with the $e$H value. We therefore adopt the chain-fraction fit of Arrington and Sick \cite{Arrington:2006hm}: \begin{equation} \ol G_E(Q^2)=\frac{1}{1+\frac{3.478 \,Q^2}{1-\frac{0.140 \,Q^2}{1-\frac{1.311 \,Q^2}{1+\frac{1.128 \,Q^2}{1-0.233 \,Q^2}}}}} . \end{equation} Fixing $Q_0$, the other two parameters of $\widetilde G_E$, $A$ and $\eps$, are fitted by requiring our FF to yield the empirical Lamb shift contribution, in both normal and muonic hydrogen, i.e.: \begin{subequations} \eqlab{LS} \begin{eqnarray} && E^{\mathrm{FF}(exp.)}_{2P-2S}(e\mathrm{H}) = -0.620(11) \, \mbox{neV}, \label{LSeH}\\ && E^{\mathrm{FF}(exp.)}_{2P-2S} (\mu\mathrm{H}) = -3650(2) \, \mbox{$\upmu$eV} \label{LSmuH}. 
\end{eqnarray} \end{subequations} Note that these are not the experimental Lamb shifts, but only the finite-size contributions, described by Eqs.~(2) and (4), with the corresponding empirical values for the radii. In the $e$H case we have taken the CODATA value of the proton radius, Eq.~(3a), which is obtained as a weighted average over several hydrogen spectroscopy measurements, and $R_{E(2)} = 2.78(14)$ fm \cite{Borie:2012zz}. In the $\mu$H case we have taken the values from Ref.~\cite{Antognini:2013rsa}, hence Eq.~(3b) for the radius and the same value of $R_{E(2)}$ as above. Figure~\ref{fig:Parameter} shows for which $A$ and $\eps$ our FF complies with either the $e$H (blue dot-dashed curve) or $\mu$H (red solid curve) Lamb shift. For $A=1.2\times10^{-4}$ MeV$^2$ and $\eps=0.143$ MeV, our FF describes them both, thus resolving the puzzle (the description of the $ep$ data by $\bar G_E$ is not affected by the addition of $\widetilde G_E$). Figure \ref{fig:correction} shows the fitted $\widetilde G_E$, and the weighting function (17) for $e$H and $\mu$H. The modification thus enhances the FF in the region below the onset of $ep$ data ($Q<63$ MeV). The overlap between the correction and the positive contribution of the $\mu$H weighting function clearly dominates, resulting in the desired matching to the experimental Lamb shifts given in \Eqref{LS}. We emphasize that the magnitude of the change in the FF is extremely tiny, \begin{equation} \big\vert\widetilde G_E/\, \ol G_E\big\vert <3\times10^{-6}, \end{equation} for any positive $Q^2$. The Comment suggests that a comparison of our correction to the deviation of the FF from unity is fairer. For our newly proposed $\widetilde G_E$, we find this ratio to be: $$ \big\vert\widetilde G_E/\, (\ol G_E-1)\big\vert < 0.57\,, $$ which does not seem unreasonable either. Furthermore, our new FF modification satisfies another criterion put forward in the Comment, namely: $G_E(Q^2) < 1$ for $Q^2 >0$. 
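The quoted bound can be checked numerically. The sketch below is ours, not part of the Reply; the unit conventions ($Q$ in MeV for $\widetilde G_E$, and $Q^2$ in GeV$^2$ inside the chain-fraction fit) are our assumptions. It scans the ratio $|\widetilde G_E/\ol G_E|$ over a grid covering both the MeV-scale bump and the scattering-data region, and also compares a numerical estimate of $-6\,\mathrm{d}\widetilde G_E/\mathrm{d}Q^2$ at $Q^2=0$ with the closed-form expression $-6A\eps^2/Q_0^6$:

```python
import numpy as np

# parameters quoted in the Reply (MeV units)
A, Q0, EPS = 1.2e-4, 1.6, 0.143

def G_tilde(Q):
    """Nonsmooth correction; Q in MeV."""
    Q2 = Q ** 2
    return A * Q0**2 * Q2 * (Q2 + EPS**2) / (Q0**2 + Q2) ** 4

def G_bar(Q):
    """Arrington-Sick chain-fraction fit; its argument Q^2 is taken
    to be in GeV^2 (our assumption), so Q (MeV) is converted first."""
    q2 = (Q / 1000.0) ** 2
    return 1.0 / (1 + 3.478 * q2 /
                 (1 - 0.140 * q2 /
                 (1 - 1.311 * q2 /
                 (1 + 1.128 * q2 /
                 (1 - 0.233 * q2)))))

Q = np.linspace(1e-3, 1000.0, 1_000_000)   # MeV grid
ratio = np.abs(G_tilde(Q) / G_bar(Q))      # peaks near Q ~ Q0

# second moment of the correction: small-Q^2 limit vs closed form
h = 1e-4                                   # MeV
num = -6.0 * G_tilde(h) / h**2             # -6 dG~/dQ^2 at Q^2 = 0
ana = -6.0 * A * EPS**2 / Q0**6
```

The scan confirms that the ratio stays below the $3\times10^{-6}$ bound, with its maximum near $Q\simeq Q_0$, and the two moment values agree to high accuracy.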
Nevertheless, the modification obviously has a profound effect on the $\mu$H Lamb shift. Its effect on the second and third moments is given by: \begin{eqnarray} \widetilde{\langle r^2\rangle}_E&\equiv &-6 \frac{\mathrm{d}}{\mathrm{d} Q^2} \widetilde G_E(Q^2)\Big|_{Q^2= 0} =-\frac{6 A\eps^2}{Q_0^6},\quad\\ \widetilde{\langle r^3\rangle}_E&\equiv &\frac{48}{\pi} \int_0^\infty \!\frac{\mathrm{d} Q}{Q^4}\, \left\{ \widetilde G_E(Q^2) +\mbox{$\frac{1}{6}$} \widetilde{\langle r^2\rangle}_E Q^2\right\},\nonumber\\ &=&\frac{15 A(Q_0^2-7\eps^2)}{2Q_0^7}. \end{eqnarray} The numerical values of these moments, together with their ``would-be'' effect on the Lamb shift and the non-expanded Lamb-shift result, are given in Table \ref{Table}. One can see that the expansion in moments breaks down for the modified FF contribution to $\mu$H. \begin{figure}[tbh] \centering \includegraphics[scale=0.48]{Parameters.pdf} \caption{Parameters of $\widetilde G_E$ for which the $e$H and $\mu$H Lamb shifts of \Eqref{LS} are reproduced. 
For fixed $Q_0=1.6$ MeV, we chose $A=1.2\times10^{-4}$ MeV$^2$ and $\eps=0.143$ MeV, as indicated by the dashed lines.} \label{fig:Parameter} \end{figure} \begin{table}[tbh] \caption{Lamb shift and moments corresponding to our model FF, with $Q_0=1.6$ MeV, $A=1.2\times10^{-4}$ MeV$^2$ and $\eps=0.143$ MeV.} \label{Table} \begin{tabular}{c|c|c|c|c} &Eq.&$\ol G_E$&$\widetilde G_E$&$ G_E$\\ \hline $\langle r^2\rangle_E \, [\mbox{fm}^2]$&(6a)&$(0.9014)^2$&$-(0.1849)^2$&$(0.8823)^2$\\ $\langle r^3\rangle_E \,[\mbox{fm}^3]$&(12)&$(1.052)^3$&$(8.539)^3$&$(8.544)^3$\\ \hline Lamb-shift, expanded & (11) & && \\ $E_{2P-2S}^{\mathrm{FF}(1)}(e\mathrm{H})[\text{neV}]$ &&$-0.6569$&$0.0371$&$-0.6198$\\ $E_{2P-2S}^{\mathrm{FF}(1)}(\mu\mathrm{H})[\upmu\text{eV}]$&&$-4202$&$11542$&$7340$\\ \hline Lamb-shift, exact & (19a) & && \\ $E_{2P-2S}^{\mathrm{FF}(1)}(e\mathrm{H}) [\text{neV}]$&&$-0.6569$&$0.0370 $&$-0.6200$\\ $E_{2P-2S}^{\mathrm{FF}(1)}(\mu\mathrm{H})[\upmu\text{eV}]$&&$-4202$&$552$&$-3650$\\ \end{tabular} \end{table} In conclusion, we have reworked the low-$Q$ modification of the empirical proton FF $G_E$ such that it complies with the criteria put forward in the Comment \cite{Arrington:2015}. The original (`old') and the reworked (`new') modifications are shown in \Figref{GEminus1}, together with the unmodified form. The old and new modifications are quite different, yet both allow the $e$H and $\mu$H Lamb shifts to be described simultaneously, while maintaining the agreement with the $ep$ scattering data. The new modification looks much more reasonable from the standpoint of the Comment. However, we emphasize once more that this is not a proposal for the solution of the puzzle --- not until a physical mechanism for this effect is found. For a current update on the status of the proton-radius puzzle, see \cite[Sec.\ 7]{Hagelstein:2015egb} and references therein.
\section*{Acknowledgements} This work was supported by the Deutsche Forschungsgemeinschaft (DFG) through the Collaborative Research Center SFB 1044 [The Low-Energy Frontier of the Standard Model], and the Graduate School DFG/GRK 1581 [Symmetry Breaking in Fundamental Interactions].
\section{Introduction} The advent of neural networks in natural language processing (NLP) has significantly improved state-of-the-art results within the field. Initially, recurrent neural networks and long short-term memory networks dominated the field. Later, the transformer model caused a revolution in NLP by dropping the recurrent part and only keeping attention mechanisms \citep{vaswaniAttention2017}. The transformer model led to other popular language models, e.g. GPT-2 \citep{radford2018gpt,radford2019gpt2}. BERT \citep{devlinBERT2019a} improved over previous models and recurrent networks by allowing the system to learn from input text in a bidirectional way, rather than only from left to right or right to left. This model was later re-implemented, critically evaluated and improved in the RoBERTa model \citep{liuRoBERTa2019}. These large-scale attention-based models provide the advantage of being able to solve NLP tasks by having a common, expensive pre-training phase, followed by a smaller fine-tuning phase. The pre-training happens in an unsupervised way by providing large corpora of text in the desired language. The second phase needs only a relatively small annotated dataset for fine-tuning to outperform previous popular approaches on any of a large number of possible language tasks. While language models are usually trained on English data, some multilingual models also exist. These are usually trained on a large quantity of text in different languages. For example, Multilingual-BERT is trained on a collection of corpora in 104 different languages \citep{devlinBERT2019a}, and generalizes language components well across languages \citep{pires2019multilingual}. However, models trained on data from one specific language usually improve the performance of multilingual models for this particular language \citep{martinCamemBERT2019,devriesBERTje2019}.
Training a RoBERTa model~\citep{liuRoBERTa2019} on a Dutch dataset thus also potentially increases performance for many downstream Dutch NLP tasks. In this paper, we introduce RobBERT\footnote{The model named itself RobBERT when it was prompted with \textit{``Ik heet $<$mask$>$BERT.''} (\textit{``My name is $<$mask$>$BERT.''}), which we found quite a suitable name.}, a Dutch RoBERTa-based pre-trained language model, and critically evaluate its performance on various language tasks against other Dutch language models. We also propose several new tasks for testing the model's zero-shot ability, evaluate its performance on smaller datasets, and measure the importance of a language-specific tokenizer. Finally, we provide an extensive fairness evaluation using recent techniques and a new translated dataset. \section{Related Work} \label{sec:related-work} Transformer models have been successfully used for a wide range of language tasks. Initially, transformers were introduced for use in machine translation, where they efficiently improved the state-of-the-art \citep{vaswaniAttention2017}. This cornerstone was used in BERT, a transformer model obtaining state-of-the-art results for eleven natural language processing tasks, such as question answering and natural language inference \citep{devlinBERT2019a}. BERT is pre-trained with large corpora of text using two unsupervised tasks. The first task is called masked language modeling (MLM), making the model guess which word is masked in a certain position in the text. The second task is next sentence prediction (NSP), in which the model has to predict whether two sentences occur subsequently in the corpus or are randomly sampled from the corpus. These tasks allow the model to create internal representations of a language, which can thereafter be reused for different language tasks.
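To make the MLM task concrete, the masking scheme published for BERT (15\% of token positions are selected; of those, 80\% are replaced by a mask token, 10\% by a random vocabulary token, and 10\% left unchanged) can be sketched in pure Python. The token lists and vocabulary below are invented for illustration and are not part of any actual training pipeline.

```python
import random

def mask_tokens(tokens, vocab, mask_token="<mask>", seed=None):
    """BERT-style masking: select 15% of positions; replace 80% of those
    with the mask token, 10% with a random vocabulary token, and keep 10%
    unchanged. Returns the corrupted tokens and a label list holding the
    original token at selected positions and None elsewhere."""
    rng = random.Random(seed)
    corrupted, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < 0.15:
            labels[i] = tok  # the model must predict this token
            r = rng.random()
            if r < 0.8:
                corrupted[i] = mask_token
            elif r < 0.9:
                corrupted[i] = rng.choice(vocab)
            # else: keep the original token, but still predict it
    return corrupted, labels
```

During pre-training, the loss is computed only at positions with a non-None label, which is what makes the task self-supervised: the corpus itself provides the targets.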
This architecture has been shown to be a general language model that could be fine-tuned with little data in a relatively efficient way for a diverse range of tasks and still outperform previous architectures \citep{devlinBERT2019a}. Transformer models are also capable of generating contextualized word embeddings \citep{peters2018elmo}. Traditional word embeddings, e.g. word2vec~\citep{mikolovEfficient2013} and GloVe~\citep{penningtonGlove2014}, lack the capability of differentiating words based on context (e.g. \emph{``a stick''} versus \emph{``let's stick to''}). Transformer models like BERT, on the other hand, automatically incorporate the context a word occurs in into its embedding. The attention mechanism in transformer encoder models also allows for better resolution of coreferences between words \citep{joshi2019spanbert}. For example, in the sentence \emph{``The trophy doesn't fit in the suitcase because it's too big.''}, the word \emph{``it''} would refer to the suitcase instead of the trophy if the last word were changed to \emph{``small''} \citep{levesque2012winograd}. Being able to resolve these coreferences is important, for example, for translation, as dependent words might change form, e.g. due to word gender. While BERT has been shown to be a powerful language model, it also received scrutiny on its training and pre-processing. The authors of RoBERTa \citep{liuRoBERTa2019} showed that while the NSP pre-training task made the model perform better, this was not for its intended reason, as the model might merely predict relatedness between corpus sentences rather than subsequent sentences. That \citet{devlinBERT2019a} trained a better model when using NSP than without it is likely due to the model learning long-range dependencies that were longer than when just using single sentences. As such, the RoBERTa model uses only the MLM task, and uses multiple full sentences in every input.
Other researchers later improved the NSP task by instead making the model predict for two subsequent sentences whether they occur in the given or flipped order in the corpus \citep{lan2019albert}. \citet{devlinBERT2019a} also presented a multilingual model (mBERT) with the same architecture as BERT, but trained on Wikipedia corpora in 104 languages. Unfortunately, the quality of these multilingual embeddings is considered worse than their monolingual counterparts, as \citet{ronnqvistMultilingual2019} illustrated for German and English models in a generative setting. The monolingual French CamemBERT model~\citep{martinCamemBERT2019} also outperformed mBERT on all tasks. \citet{brandsen2019bert} also outperformed mBERT on several Dutch tasks using their Dutch BERT-based language model, called BERT-NL, trained on the small SoNaR corpus \citep{oostdijk2013sonar}. More recently, \citet{devriesBERTje2019} showed similar results for Dutch using their BERTje model, outperforming multilingual BERT in a wide range of tasks, such as sentiment analysis and part-of-speech tagging, by pre-training on multiple corpora. Since both these works are concurrent with ours, we compare our results with BERTje and BERT-NL in this paper. \section{Pre-training RobBERT} We pre-trained RobBERT using the RoBERTa training regime. We trained two different versions, one where only the pre-training corpus was replaced with a Dutch corpus \emph{(RobBERT v1)} and one where both the corpus and the tokenizer were replaced with Dutch versions \emph{(RobBERT v2)}. These two versions allow us to evaluate the importance of having a language-specific tokenizer. \subsection{Data} We pre-trained our model on the Dutch section of the OSCAR corpus, a large multilingual corpus which was obtained by language classification of the Common Crawl corpus \citep{ortizsuarezAsynchronous2019}.
This Dutch corpus is 39GB in size, with 6.6 billion words spread over 126 million lines of text, where each line can contain multiple sentences. This corpus is thus much larger than the corpora used for similar Dutch BERT models, as BERTje used a 12GB corpus and BERT-NL used the SoNaR-500 corpus (about 2.2GB) \citep{devriesBERTje2019,brandsen2019bert}. \subsection{Tokenizer} For RobBERT v2, we changed the default byte pair encoding (BPE) tokenizer of RoBERTa to a Dutch tokenizer. The vocabulary of the Dutch tokenizer was constructed using the Dutch section of the OSCAR corpus~\citep{ortizsuarezAsynchronous2019} with the same byte-level BPE algorithm as RoBERTa~\citep{liuRoBERTa2019}. This tokenizer gradually builds its vocabulary by replacing the most common consecutive tokens with a new, merged token. We limited the vocabulary to 40k words, which is 10k words fewer than RobBERT v1, as the latter's vocabulary included a non-negligible number of Unicode tokens that are not used in Dutch. These are likely caused by misclassified sentences during the creation of the OSCAR corpus~\citep{ortizsuarezAsynchronous2019}. \subsection{Training} RobBERT shares its architecture with RoBERTa's base model, which itself is a replication and improvement of BERT \citep{liuRoBERTa2019}. Like BERT, its architecture consists of 12 self-attention layers with 12 heads each \citep{devlinBERT2019a}, with 117M trainable parameters. One difference from the original BERT model is the different pre-training task specified by RoBERTa, which uses only the MLM task and not the NSP task. During pre-training, it thus only predicts which words are masked in certain positions of given sentences. The training process uses the Adam optimizer~\citep{kingmaAdam2017} with polynomial decay of the learning rate $l_r=10^{-6}$ and a ramp-up period of 1000 iterations, with hyperparameter $\beta_1=0.9$ and RoBERTa's default $\beta_2=0.98$.
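A minimal sketch of such a learning-rate schedule (linear ramp-up to the peak rate, then polynomial decay towards zero) is given below. The peak rate and warm-up length follow the text; the total step count and the decay power are illustrative assumptions, not values taken from the paper.

```python
def lr_at(step, peak_lr=1e-6, warmup=1000, total_steps=16000, power=1.0):
    """Learning rate at a given optimisation step: linear warm-up over
    `warmup` steps, then polynomial decay to zero at `total_steps`.
    `total_steps` and `power` are illustrative assumptions."""
    if step < warmup:
        return peak_lr * step / warmup
    # fraction of the decay phase still remaining
    frac = (total_steps - step) / (total_steps - warmup)
    return peak_lr * max(0.0, frac) ** power
```

With `power=1.0` the decay is linear, which is a common default for this schedule family; higher powers front-load the decay.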
Additionally, a weight decay of 0.1 and a small dropout of 0.1 help prevent the model from overfitting \citep{srivastavaDropout2014}. RobBERT was trained on a computing cluster with 4 Nvidia P100 GPUs per node, where the number of nodes was dynamically adjusted while keeping a fixed batch size of 8192 sentences. At most 20 nodes were used (i.e. 80 GPUs), and the median was 5 nodes. By using gradient accumulation, the batch size could be set independently of the number of GPUs available, in order to maximally utilize the cluster. Using the Fairseq library~\citep{ott2019fairseq}, the model trained for two epochs, which amounts to over 16k batches in total and took about three days on the computing cluster. In between training jobs on the computing cluster, two Nvidia 1080 Ti GPUs also covered some parameter updates for RobBERT v2. \section{Evaluation} We evaluated RobBERT on multiple downstream Dutch language tasks. For testing text classification, we evaluate on sentiment analysis and on demonstrative and relative pronoun prediction. The latter task helps evaluate the zero-shot prediction abilities, i.e. using only the pre-trained model without any fine-tuning. Both classification tasks are also used to measure how well RobBERT performs on smaller datasets, by only using subsets of the data. For testing RobBERT's token tagging capabilities, we used both part-of-speech (POS) tagging and named entity recognition (NER) tasks. \begin{table*}[tbh] \centering \caption{Results of RobBERT fine-tuned on several downstream classification tasks, compared to the state-of-the-art models for the tasks. For accuracy, we also report the 95\% confidence intervals.
\textit{(Results annotated with * from \citet{vanderburghMerits2019}, ** from \citet{devriesBERTje2019}, *** from \citet{brandsen2019bert}, **** from \citet{alleinBinary2020})}} \label{tab:results} \resizebox{\textwidth}{!}{% \begin{tabular}{@{}lll|ll@{}} \toprule & \multicolumn{2}{c}{\textbf{10k}} & \multicolumn{2}{c}{\textbf{Full dataset}} \\ Task + model & ACC (95\% CI) [\%] & F1 [\%] & ACC (95\% CI) [\%] & F1 [\%] \\ \midrule \textbf{Sentiment Analysis (DBRD)} & & & & \\ \multicolumn{1}{r}{\citet{vanderburghMerits2019}} & --- & --- & 93.8* & --- \\ \multicolumn{1}{r}{BERTje~\citep{devriesBERTje2019}} & --- & --- & 93.0** & --- \\ \multicolumn{1}{r}{BERT-NL~\citep{brandsen2019bert}} & --- & --- & --- & 84.0*** \\ \multicolumn{1}{r}{RobBERT v1} & 86.730 (85.32, 88.14) & 86.729 & 94.422 (93.47,95.38) & 94.422 \\ \multicolumn{1}{r}{RobBERT v2} & \textbf{94.379} (93.42, 95.33) & \textbf{94.378} & \textbf{95.144} (94.25,96.04) & \textbf{95.144} \\ \textbf{Die/Dat (Europarl)} & & & & \\ \multicolumn{1}{r}{Baseline~\citep{alleinBinary2020}} & --- & --- & 75.03**** & --- \\ \multicolumn{1}{r}{mBERT~\citep{devlinBERT2019a}} & 92.157 (92.06, 92.25) & 90.898 & 98.285 (98.24,98.33) & 98.033 \\ \multicolumn{1}{r}{BERTje~\citep{devriesBERTje2019}} & 93.096 (92.84, 93.36) & 91.279 & 98.268 (98.22,98.31) & 98.014 \\ \multicolumn{1}{r}{RobBERT v1} & 97.006 (96.95, 97.07)& 96.571 & {98.406} (98.36, 98.45) & {98.169} \\ \multicolumn{1}{r}{RobBERT v2} & \textbf{97.816} (97.76, 97.87)& \textbf{97.514}& \textbf{99.232} (99.20, 99.26) & \textbf{99.121} \\ \bottomrule \end{tabular}% } \end{table*} \subsection{Sentiment Analysis} We replicated the high-level sentiment analysis task used to evaluate BERT-NL~\citep{brandsen2019bert} and BERTje \citep{devriesBERTje2019} to be able to compare our methods. This task uses a dataset called Dutch Book Reviews dataset (DBRD), in which book reviews from \url{hebban.nl} are labeled as positive or negative \citep{vanderburghMerits2019}. 
Although the dataset contains 118,516 reviews, only 22,252 of these reviews are actually labeled as positive or negative, which are split into 90\% train and 10\% test sets. This dataset was released in a paper analysing the performance of a ULMFiT model (Universal Language Model Fine-tuning for Text Classification model) \citep{vanderburghMerits2019}. We fine-tuned RobBERT on the first 10,000 training examples as well as on the full dataset. While the ULMFiT model is first fine-tuned using the unlabeled reviews before training the classifier \citep{vanderburghMerits2019}, it is unclear whether the other BERT models utilized the unlabeled reviews for further pre-training \citep{sun2019classification} or only used the labeled data for fine-tuning the pre-trained model. We did the latter, meaning further improvement is possible by additionally pre-training on unlabeled in-domain sequences. Another unknown is how these models dealt with reviews that were longer than the maximum number of tokens, as the average book review length is 547 tokens, with 40\% of the documents being longer than our model could handle. For our experiments, we only gave the last tokens of a review as input, as we found the training performance to be better, likely because these often contain a summarizing comment. We trained our model for 2000 iterations with a batch size of 128 and a warm-up of 500 iterations, reaching a learning rate of $10^{-5}$. The training took approximately 2 hours on 2 Nvidia 1080 Ti GPUs; the best-performing RobBERT v2 model was selected based on a validation accuracy of 0.994. We see that RobBERT outperforms the other BERT models. Both versions of RobBERT also outperform the state-of-the-art ULMFiT model, although the difference is only statistically significant for RobBERT v2.
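The tail-truncation strategy described above can be sketched as follows. The budget of 510 content tokens (512 minus two special tokens) is our assumption of the usual RoBERTa input limit, not a number stated in the text.

```python
def truncate_to_tail(tokens, max_tokens=510):
    """Keep only the final tokens of a long review so it fits the model's
    input budget. The tail is kept rather than the head because reviews
    often end with a summarizing comment; 510 is an assumed budget
    (512 minus two special tokens), not a value from the paper."""
    if len(tokens) <= max_tokens:
        return tokens
    return tokens[-max_tokens:]
```

The head-truncation alternative would simply slice `tokens[:max_tokens]`; the text reports that tail truncation trained better on this dataset.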
\subsection{Die/Dat Disambiguation}\label{ss:die-dat} Since BERT models perform well on coreference resolution tasks \citep{joshi2019coreference}, we propose to evaluate RobBERT on the recently introduced \emph{``die/dat disambiguation''} task \citep{alleinBinary2020}, as a novel way to evaluate the zero-shot ability of Dutch BERT models. In Dutch, depending on the sentence, both \emph{``die''} and \emph{``dat''} can be either demonstrative or relative pronouns; in addition, they can also be used as a subordinating conjunction, i.e. to introduce a clause. The use of either of these words depends on the gender of the word it refers to. \citet{alleinBinary2020} presented multiple models trained on the Europarl~\citep{koehnEuroparl2005a} and SoNaR corpora~\citep{oostdijkConstruction2013}, achieving accuracies ranging from 75.03\% on Europarl to 84.56\% on SoNaR. For this task, we use the Dutch Europarl corpus~\citep{koehnEuroparl2005a}, with the first 1.3M sequences ({\tt head}) for training and the last 399k ({\tt tail}) as test set. Every sequence containing \emph{``die''} or \emph{``dat''} creates an example for every occurrence of either word by masking the occurrence. For the test set, this resulted in about 289k masked sentences. BERT-like models can solve this task using two different approaches. Since the task is about predicting words, their default MLM task can be used to guess which of the two words is more probable in a particular masked position. This allows the comparison of zero-shot BERT models, i.e. without any fine-tuning on the training data (\autoref{tab:zeroshot}). The second approach uses the masked sentences to create two versions by filling the mask with either ``die'' or ``dat'', separating them using the \texttt{[SEP]} token, and making the model predict which of the two sentences is correct. This fine-tuning was performed using 4 Nvidia GTX 1080 Ti GPUs, taking 30 minutes for 13 epochs on 10k sequences and about 24 hours for 3 epochs on the full dataset.
We did no hyperparameter tuning, as the initial hyperparameters ($l_r=10^{-5}, \epsilon=10^{-9}$, warm-up of 250 steps, batch size of 32 (10k) or 128 (full dataset), dropout of 0.1) were satisfactory. To measure RobBERT's performance on smaller datasets, we trained the model twice for both the sentiment analysis task and the \emph{die/dat} disambiguation task, once with a subset of 10k utterances and once with the full training dataset. \begin{table}[htb] \centering \caption{Performance of predicting \textit{die/dat} as most likely candidate for a mask using zero-shot BERT models (i.e. without fine-tuning) as well as a majority class predictor (ZeroR), tested on the 288,799 test set sentences} \label{tab:zeroshot} \resizebox{0.9\linewidth}{!}{% \begin{tabular}{@{}lr@{}} \toprule \textbf{Model} & \textbf{Accuracy [\%]} \\ \midrule \multicolumn{1}{r}{ZeroR (majority class)} & 66.70 \\ \multicolumn{1}{r}{mBERT~\citep{devlinBERT2019a}} & 90.21 \\ \multicolumn{1}{r}{BERTje~\citep{devriesBERTje2019}}& 94.94 \\ \multicolumn{1}{r}{RobBERT v1} & 98.03 \\ \multicolumn{1}{r}{RobBERT v2} & \textbf{98.75} \\ \bottomrule \end{tabular}% } \end{table} RobBERT outperforms previous approaches as well as other BERT models, both with and without fine-tuning (see Table \ref{tab:results} and Table \ref{tab:zeroshot}). It is also able to reach similar performance using less data. The fact that RobBERT outperforms other BERT models in both the fine-tuned and the zero-shot setting is also an indication that the base model has internalised more knowledge about Dutch than the others, likely due to the improved pre-training regime and the larger corpus. We can also see that having a Dutch tokenizer strongly helps reduce the error rate for this task, halving the error rate when fine-tuned on the full dataset.
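The masked-example construction used for this task (one example per occurrence of \emph{``die''} or \emph{``dat''}) can be sketched in pure Python. Whitespace tokenization is a simplification here; occurrences with attached punctuation, e.g. \emph{``dat,''}, would need proper tokenization.

```python
def make_die_dat_examples(sentence, mask_token="<mask>"):
    """Create one (masked sentence, answer) pair per occurrence of
    'die' or 'dat', masking exactly that occurrence and leaving any
    other occurrences in place."""
    words = sentence.split()
    examples = []
    for i, w in enumerate(words):
        if w.lower() in ("die", "dat"):
            masked = words[:i] + [mask_token] + words[i + 1:]
            examples.append((" ".join(masked), w))
    return examples
```

In the zero-shot setting, each masked sentence is scored by the MLM head for both candidate words and the more probable one is predicted; in the fine-tuned setting, the two filled-in variants are paired as a binary classification input.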
The reason the BERT-based models outperform the previous RNN-based approach is likely the encoder's ability to better deal with coreference resolution \citep{joshi2019spanbert}, and by extension to decide which word the \textit{``die''} or \textit{``dat''} refers to. The fact that RobBERT strongly outperforms the other BERT models on subsets of the data indicates that it is a suitable candidate for Dutch tasks that only have limited data available. \subsection{Part-of-speech Tagging} \label{ss:pos} Part-of-speech (POS) tagging involves labeling tokens rather than labeling sequences. For this, we used a different head with a classification output for each token, all activated by a softmax function. When a word consists of multiple tokens, the first token is used for the label of the word. We perform the same POS fine-tuning regime as RoBERTa \citep{liuRoBERTa2019} to evaluate RobBERT's performance. When fine-tuning, we employ a linearly decaying learning rate with a warm-up for 6\% of the total optimisation steps \citep{liuRoBERTa2019}. For all the encoder-based models in our evaluation, we also perform a limited hyperparameter search on the development set with learning rate $l_r \in \{10^{-5}, 2\cdot 10^{-5}, 3\cdot 10^{-5}, 10^{-4}\}$ and batch size~$\in \{16, 32, 48\}$, which is also based on RoBERTa's fine-tuning. \begin{table}[tbh] \centering \caption{POS tagging on Lassy UD.
For accuracy, we also report the 95\% confidence intervals.} \label{tab:results-tokens} \resizebox{\linewidth}{!}{% \begin{tabular}{@{}lll@{}} \toprule Task + model & ACC (95\% CI) [\%] \\ \midrule \multicolumn{1}{r}{Frog~\citep{bosch2007frog}} & 91.7 (91.2, 92.2) \\ \multicolumn{1}{r}{mBERT~\citep{devlinBERT2019a}} & \textbf{96.5} (96.2, 96.9) \\ \multicolumn{1}{r}{BERTje~\citep{devriesBERTje2019}} & 96.3 (96.0, 96.7) \\ \multicolumn{1}{r}{RobBERT v1} & 96.4 (96.0, 96.7) \\ \multicolumn{1}{r}{RobBERT v2} & 96.4 (96.0, 96.7) \\ \bottomrule \end{tabular} } \end{table} To evaluate the POS performance, we used the Universal Dependencies (UD) version of the Lassy dataset \citep{vanNoord2013lassy}, containing 17 different POS tags. We compared its performance with Frog, a popular memory-based Dutch POS tagging approach, and with other BERT models. Surprisingly, multilingual BERT marginally outperformed both Dutch BERT models, although not statistically significantly, with both RobBERT models in second place at almost equal accuracy. The higher performance of multilingual BERT could indicate that it benefits from transferable language structures from other languages, helping it perform well on POS tagging. Alternatively, this could signal a limit of the UD Lassy dataset, or at least of the performance of BERT-like models on this dataset. \begin{figure} \centering \includegraphics[width=\linewidth]{fig/robbert_pos_accuracy_2.pdf} \caption{POS tagging accuracy on the test set for different sizes of training sets.} \label{fig:robbert-pos-acc} \end{figure} We also evaluated the models on several smaller subsets of the training data, to illustrate how much data is needed to achieve acceptable results. For all models, the same hyperparameters obtained for \autoref{tab:results-tokens} are used for all subsets, under the assumption that using a subset of the training data also works well under the same hyperparameters.
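The first-token labelling rule used for these token-level tasks (the label of a multi-token word is attached to its first sub-token, while the remaining sub-tokens are excluded from the loss) can be sketched as follows. The ignore index of $-100$ is a convention common in NLP libraries that we assume here for illustration; it is not a value from the paper.

```python
IGNORE = -100  # conventional "no loss at this position" index (an assumption)

def align_labels(word_labels, tokens_per_word):
    """Expand word-level POS labels to token-level labels: the first
    sub-token of each word carries the word's label, and the remaining
    sub-tokens receive the ignore index so they do not contribute to
    the loss."""
    token_labels = []
    for label, n_subtokens in zip(word_labels, tokens_per_word):
        token_labels.append(label)
        token_labels.extend([IGNORE] * (n_subtokens - 1))
    return token_labels
```

At evaluation time, the prediction at each word's first sub-token is compared against the gold word-level tag, so accuracy is computed per word rather than per sub-token.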
The hyperparameters which yielded the results of RobBERT v2 are $l_r=10^{-4}$, batch size of 16 and dropout of 0.1. The separate development set was used to select the best-performing model after each epoch; the selected model had a cross-entropy loss of 0.172 on the development set. While all BERT models perform similarly after seeing all instances of the UD Lassy dataset, there is a clear difference when using smaller training sets (\autoref{fig:robbert-pos-acc}). RobBERT v2 outperforms all other models when using only 1,000 data points or fewer, again showing that it is more capable of dealing with smaller datasets. \subsection{Named Entity Recognition} Named entity recognition (NER) is the task of labeling named entities in a sentence. It is thus a token-level task, just like POS tagging, meaning we can use the same setup and hyperparameter tuning as described in \autoref{ss:pos}. We use the CoNLL-2002 dataset and evaluation script\footnote{Retrieved from \url{https://www.clips.uantwerpen.be/conll2002/ner/}}, which uses BIO labeling with four entity types, namely organisations, locations, people and miscellaneous \citep{sang2002conll}. The hyperparameters yielding the results for RobBERT v2 are $l_r=3\cdot10^{-5}$, batch size of 32 and dropout of 0.1. The separate development set was used to select the best-performing model after each epoch. As the $F_1$ score is generally used for evaluation of this task, we opted to use this metric instead of cross-entropy loss for selecting the best-performing model, which had an $F_1$ score of 0.8769 on the development set. We compared the $F_1$ scores on the NER task in \autoref{tab:results-tokens-ner}.
\begin{table}[htb] \centering \caption{NER for various models, $F_1$ score calculated with the CoNLL 2002 evaluation script, except for $\dagger$ which used the Seqeval Python library, * from \citet{wuBeto2019}, ** from \citet{brandsen2019bert}, *** from \citet{devriesBERTje2019}.} \label{tab:results-tokens-ner} \resizebox{0.9\linewidth}{!}{% \begin{tabular}{@{}lll@{}} \toprule Task + model & $F_1$ score [\%] \\ \midrule \multicolumn{1}{r}{Frog~\citep{bosch2007frog}} & 57.31 \\ \multicolumn{1}{r}{mBERT~\citep{devlinBERT2019a}} & 84.19 \\ \multicolumn{1}{r}{mBERT \citep{wuBeto2019}} & \textbf{90.94}* \\ \multicolumn{1}{r}{BERT-NL~\citep{brandsen2019bert}} & 89.7$^\dagger$** \\ \multicolumn{1}{r}{BERTje~\citep{devriesBERTje2019}} & 88.3*** \\ \multicolumn{1}{r}{RobBERT v1} & 87.53 \\ \multicolumn{1}{r}{RobBERT v2} & 89.08 \\ \bottomrule \end{tabular} } \end{table} We can see that \citet{wuBeto2019} outperform all other BERT models using a multilingual BERT model with an $F_1$ score of 90.94. When we used the token labeling fine-tuning regime described earlier on multilingual BERT, we were only able to reach an $F_1$ score of 84.19, thus being outperformed by the Dutch BERT models. One possibility is that the authors used a more effective fine-tuning regime, or that they trained their model longer. \section{RobBERT and Fairness} As language models are trained on large corpora, there is a risk that minorities and protected groups are ill-represented, e.g. by encoding stereotypes~\citep{bolukbasi2016man, zhaoGender2019, gonen-goldberg-2019-lipstick}. In word embeddings, these studies often rely on analogies~\citep{bolukbasi2016man,caliskanSemantics2017} or embedding analysis~\cite{gonen-goldberg-2019-lipstick}. These approaches are not directly transferable to BERT models, since the sentence a word occurs in influences its embedding.
Efforts to generalize these approaches often rely on templates~\citep{mayMeasuring2019, kurita-etal-2019-measuring}. These can be intentionally neutral (``\emph{{\tt \textless mask\textgreater} is a word}'') or they might resemble an analogy in textual form (``\emph{{\tt \textless mask\textgreater} is a zookeeper.}''). One can then perform an association test between possible values for the {\tt \textless mask\textgreater} slot, similar to a word embedding association test~\citep{caliskanSemantics2017}. In this section, we discuss two distinct potential origins of representational harm \citep{blodgettLanguage2020} a language model could exhibit, and evaluate these on RobBERT v2. The two discussed behaviours are (i) stereotyping of gender roles in occupations and (ii) unequal predictive power for texts written by men and women. These exemplifications highlight how language models risk affecting the experience of the end user, or replicating and reinforcing stereotypes. \subsection{Gender Stereotyping} \label{ss:stereotyping-bias} To assess how gender stereotypes of professions are present, we performed a template-based association test similar to \citet{kurita-etal-2019-measuring} and the \emph{semantically unbleached} templates of \citet{mayMeasuring2019}. We used RobBERT's LM head---trained during pre-training with the MLM task---to fill in the \emph{\tt \textless mask\textgreater} slot for each template, in the same manner as the zero-shot task described in \autoref{ss:die-dat}. These templates have a second slot, which is used to iterate over the professions. For the list of professions and the gender projection on the \emph{he-she} axis, we build on the work of \citet{bolukbasi2016man}, who crowdsourced the associated gender for various professions. Ideally, we would use a similarly crowdsourced Dutch dataset.
However, since this does not yet exist, we opted for manually translating these English professions using the guidelines established by the European Parliament for gender-neutral professions~\citep{dimitriospapadimoulisGenderneutraal2018}. This means that, for neutral English professions that have no neutral Dutch counterpart but rather an inclusive binary male variant and a female variant with explicit gender, we opted for the inclusive form (e.g. for lawyer: using \emph{``advocaat''} and not \emph{``advocate''}). In the rare case that an inclusive or neutral form translated to an exclusive binary form, we excluded this profession. \begin{figure}[tb] \makebox[1.05\linewidth][r]{ \centering \includegraphics[width=1.125\linewidth]{fig/gender_diff.pdf} } \caption{Ranking difference between gendered pronouns for various professions. Three templates were used for evaluation, where {\tt \textless T\textgreater} is replaced by a profession. In the leftmost template, the pronoun and profession refer to different entities.} \label{fig:gender-diff-lm} \end{figure} We evaluated three templates on RobBERT, including one control template without co-referent entities (``{\tt \textless mask\textgreater} \emph{goes to a} {\tt \textless T\textgreater}'') (\autoref{fig:gender-diff-lm}). For the control template, there should be, and indeed is, no correlation between the ranking difference of the two pronouns and the associated gender of a profession. Interestingly, none of the instances has a positive ranking difference, meaning the language model always ranks the male pronoun as more likely. When the profession and the {\tt \textless mask\textgreater} slot refer to the same entity, the general assessment of the language model correlates with the associated gender. But again, RobBERT estimates the male pronoun to be more likely in almost all cases, even when these professions have a gendered suffix. Curiously, actress (``\emph{actrice}'') is the only word where this is not the case.
Since RobBERT estimates the male pronoun to be more likely even in the control template, we suspect that the effect is due to more coverage of men in the training corpus. \subsection{Unequal Predictive Performance} \label{ss:system-performance-bias} Unfairness is particularly problematic if it leads to unequal predictive performance. This problem has been demonstrated for decision support systems, including recidivism prediction~\citep{angwinMachine2016} and public employment services~\citep{allhutterAlgorithmic2020}. Such predictions can be downstream tasks of language understanding, for example when job resumés are processed \citep{van-hautte-etal-2020-leveraging}. To review fairness in downstream tasks, we evaluated the sentiment analysis task on DBRD, a dataset with scraped book reviews. Although this task in itself may have low impact for end users, it still serves as an illustrative example of how fine-tuned models can behave unfairly. To investigate whether such bias might arise in our fine-tuned model, we analyzed its outcome for different values of a sensitive attribute (in this case gender), as is commonly done in fair machine learning research~\citep{zemel2013learning, hardtEqualityOpportunitySupervised2016, delobelleEthical2020}. To this end, we augmented the held-out test set of DBRD with gender as a sensitive attribute for each review\footnote{We make this augmentation of DBRD available under CC-by-NC-SA at \url{https://people.cs.kuleuven.be/~pieter.delobelle/data.html}.}. Values were obtained from the author profiles of reviews with a self-reported binary gender (\emph{`man'} or \emph{`vrouw'}), covering 64\% of reviews. The remaining 36\% of reviews did not report author gender and were discarded for this evaluation. Of the remaining gender-labelled reviews, 76\% were written by women; the dataset is thus unbalanced.
We quantify the gender difference with two metrics: (i) Demographic Parity Ratio (DPR), which expresses a relative difference between predicted outcomes $\hat{y}$ conditioned on the sensitive attribute $a$~\citep{dworkFairnessAwareness2012}, following \[ \frac{P( \hat{y} \mid \neg a)}{P( \hat{y} \mid a)}, \] and (ii) Equal Opportunity (EO)~\citep{hardtEqualityOpportunitySupervised2016}, which in addition also conditions on the true outcome $y$, as a task-specific fairness measure \citep{dworkFairnessAwareness2012}, following \[ P( \hat{y} \mid \neg a, y) - P( \hat{y} \mid a, y). \] \citet{hardtEqualityOpportunitySupervised2016} also relate EO to ROC curves to evaluate fairness when dealing with a binary predictor and a score function. To derive a binary predictor, we used 0 as a threshold value. \autoref{fig:dbrd-gender} shows the single resulting predictor, with the ROC curves split on the sensitive attribute, for each of the two rating levels (above 3 and above 5 stars, respectively). \begin{figure}[t] \centering \includegraphics[width=1.05\linewidth]{fig/dbrd.pdf} \caption{ROC of the fine-tuned model to predict positive reviews for male and female reviewers} \label{fig:dbrd-gender} \end{figure} The results of \autoref{fig:dbrd-gender} show that there is a small difference in opportunity, which is especially pronounced for the highly positive classifier. For positive reviews, the EO difference is 0.028 at the indicated threshold and the DPR is 70.2\%. The DPR would indicate unfairness, as values below 80\% are often considered unfair. However, this metric has received some criticism, and when including the true outcome in EO, the difference in probabilities is close to 0, which does not signal any unfairness. When taking the ROC curves into account (\autoref{fig:dbrd-gender}), the EO score can be explained by the good predictive performance.
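Both metrics are straightforward to compute from binary predictions once the score is thresholded at 0, as described above. The following is a minimal sketch with made-up data, not the DBRD evaluation itself:

```python
# Sketch of the two fairness metrics from the text, computed over binary
# predictions (score > 0) with the sensitive attribute a (True/False).
# The scores and labels below are illustrative, not the DBRD results.

def demographic_parity_ratio(y_pred, a):
    """DPR: P(yhat=1 | not a) / P(yhat=1 | a)."""
    p_not_a = sum(p for p, s in zip(y_pred, a) if not s) / a.count(False)
    p_a = sum(p for p, s in zip(y_pred, a) if s) / a.count(True)
    return p_not_a / p_a

def equal_opportunity_diff(y_true, y_pred, a):
    """EO: P(yhat=1 | not a, y=1) - P(yhat=1 | a, y=1)."""
    def tpr(group):
        pos = [p for y, p, s in zip(y_true, y_pred, a) if s == group and y == 1]
        return sum(pos) / len(pos)
    return tpr(False) - tpr(True)

scores = [0.8, -0.2, -0.4, 0.1, -0.5, 0.9]          # classifier scores
y_pred = [int(s > 0) for s in scores]                # threshold at 0, as in the text
y_true = [1, 0, 1, 1, 0, 1]
a      = [True, True, True, False, False, False]     # e.g. True = 'vrouw'

print(demographic_parity_ratio(y_pred, a), equal_opportunity_diff(y_true, y_pred, a))
```

A DPR of 1 and an EO difference of 0 would indicate parity; in the toy data above the unprotected group is favoured on both metrics.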
When considering highly positive reviews, however, the differences become more pronounced and the model has better predictive performance for reviews written by women. \section{Code} The training and evaluation code of this paper, as well as the RobBERT model and the fine-tuned models, are publicly available for download at \url{https://github.com/iPieter/RobBERT}. \section{Limitations and Future Work} There are several potential improvements for creating a better pre-trained RobBERT-like model. First, since BERT-based models are still being actively researched, one could potentially improve the training regime using new unsupervised pre-training tasks when they are discovered, e.g. sentence order prediction \citep{lan2019albert}. Second, while RobBERT is trained on lines that contain multiple sentences, it does not put subsequent lines of the corpus after each other due to the shuffled nature of the OSCAR corpus \citep{ortizsuarezAsynchronous2019}. This is unlike RoBERTa, which does put full sentences next to one another if they do not exceed the available sequence length, in order to learn the long-range dependencies between words that the original BERT learned using its controversial NSP task. Creating an unshuffled version of OSCAR might thus further improve the performance of the pre-trained model. Third, there might be some benefit to modifying the tokenizer to use morpheme-based tokens, as Dutch uses compound words. Fourth, one could improve the model's fairness during pre-training. We illustrated how representational harm in downstream tasks can affect the end user's experience, like the unequal predictive performance for the DBRD task. Various methods have been proposed to mitigate unfair behaviour in AI models~\citep{zemel2013learning, delobelleEthical2020}. While we refrained from training fair pre-trained and fine-tuned models in this research, training such models could be an interesting contribution.
In addition, with the increased attention on fairness in machine learning, a broader view of the impact of large pre-trained language models on other protected groups is also called for. The RobBERT model itself can be used in new settings to help future research. First, RobBERT could be used in a model that uses a BERT-like transformer stack for the encoder and a generative model as a decoder~\citep{raffelExploring2019,lewis2019bart}. Second, RobBERT can serve as the basis for a large number of Dutch language tasks that we did not examine in this paper. Given RobBERT's state-of-the-art performance on small as well as on large datasets, it could help advance results when fine-tuned on new datasets. \section{Conclusion} We introduced a new language model for Dutch based on RoBERTa, called RobBERT, and showed that it outperforms earlier approaches as well as other BERT-based language models for several different Dutch language tasks. More specifically, we found that RobBERT significantly outperformed other BERT-like models when dealing with smaller datasets, making it a useful resource for a large range of application domains. We expect this model to serve as a base for fine-tuning on other tasks, and thus help foster new models that can advance results for Dutch language tasks. \section*{Acknowledgements} Pieter Delobelle was supported by the Research Foundation - Flanders under EOS No. 30992574 and received funding from the Flemish Government under the ``Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen'' programme. Thomas Winters is a fellow of the Research Foundation - Flanders (FWO-Vlaanderen). Most computational resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation - Flanders (FWO) and the Flemish Government, department EWI. We are especially grateful to Luc De Raedt for his guidance as well as for providing the facilities to complete this project.
We are thankful to Liesbeth Allein and her supervisors for inspiring us to use the \textit{die/dat} task. We are also grateful to \citet{ott2019fairseq, paszke2019pytorch, Haghighi2018, wolfHuggingFace2019} for their software packages.
\section{Introduction} A classical result of B. Fuglede states that if $A$ is a bounded normal operator on a Hilbert space $H$ and $T$ is a bounded linear operator on $H$ satisfying $TA=AT$, then $TA^{\ast }=A^{\ast }T$, \cite{F}. More generally, C.R. Putnam showed that if $A$ and $B$ are bounded normal operators on $H$ such that $AT=TB$, then $A^{\ast }T=TB^{\ast }$, \cite{P}. This more general version actually follows from the original Fuglede theorem, \cite{B}. Various different proofs of Fuglede's theorem are known; see, for example, \cite{DS}, p. 934, \cite{Ha1}, \cite{RR}, \cite{R}. An abstraction of the Fuglede theorem could be as follows. Let $X$ be a Banach space and $A$ belong to a subalgebra of the bounded linear operators on $X$ which has an involution $\sharp $. Does it follow that $TA^{\sharp }=A^{\sharp }T$ whenever $T$ is a bounded linear operator on $X$ satisfying $TA=AT$? For $X=L^{p}\left( \mathbb{T}\right) $, with $1\leq p<\infty $ and $\mathbb{T}$ the circle group, let $\mathcal{M}_{p}\left( \mathbb{T}\right) $ denote the algebra of all bounded functions $\varphi :\mathbb{Z}\rightarrow \mathbb{C}$ having the property that, for every $f\in L^{p}\left( \mathbb{T}\right) $ there exists $g\in L^{p}\left( \mathbb{T}\right) $, necessarily unique, such that its Fourier transform satisfies $\hat{g}=\varphi \hat{f}$ on $\mathbb{Z}$. Denoting $g$ by $M_{\varphi }^{\left( p\right) }f$, the bounded linear operator $M_{\varphi }^{\left( p\right) }$ so generated on $L^{p}\left( \mathbb{T}\right) $ is called the $p$-multiplier operator corresponding to $\varphi $. It is well known that the complex conjugate $\bar{\varphi}$ of $\varphi $ belongs to $\mathcal{M}_{p}\left( \mathbb{T}\right) $ whenever $\varphi \in \mathcal{M}_{p}\left( \mathbb{T}\right) $. Clearly $M_{\varphi }^{\left( p\right) }\longmapsto M_{\bar{\varphi}}^{\left( p\right) }$ is an involution on the algebra of all $p$-multiplier operators on $L^{p}\left( \mathbb{T}\right) $.
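For orientation, the abstract question above specialises to multiplier operators as the following implication (a restatement of the setting just described, displayed here for emphasis):

```latex
% Fuglede-type statement for p-multiplier operators on L^p(\mathbb{T}):
% does commutation with M_\varphi^{(p)} force commutation with the
% "conjugate" multiplier M_{\bar\varphi}^{(p)}?
\[
  T M_{\varphi}^{\left( p\right) } = M_{\varphi}^{\left( p\right) } T
  \quad \Longrightarrow \quad
  T M_{\bar{\varphi}}^{\left( p\right) } = M_{\bar{\varphi}}^{\left( p\right) } T,
  \qquad \varphi \in \mathcal{M}_{p}\left( \mathbb{T}\right),
\]
% for every bounded linear operator T on L^{p}(\mathbb{T}).
```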
In this setting the ``Fuglede theorem'' is indeed valid, \cite{MR}. The aim of this note is to give a far-reaching generalization of the above result beyond the setting of $L^{p}$-spaces. The group $\mathbb{T}$ can be replaced with any infinite compact abelian group $G$ and the $L^{p}$-spaces can be replaced by the significantly larger class of \textit{translation invariant Banach function spaces} $E$ over $G$. However, for this class of spaces $E$ three phenomena arise which do not occur for $L^{p}$-spaces over $G$. First, the space $E$ need \textit{not} contain $L^{\infty }\left( G\right) $. Second, the reflection of a function in $E$ need not belong to $E$ (which has the effect that $\varphi \in \mathcal{M}_{E}\left( G\right) $ need not imply that $\bar{\varphi}$ belongs to $\mathcal{M}_{E}\left( G\right) $) and third, the space of all continuous functions $C\left( G\right) $ on $G$, even if $C\left( G\right) \subseteq E$, need not be dense in $E$. However, for the very large and important class of spaces $E$ which \textit{do} have all three of these properties it is established in Theorem \ref{Thm01} that the ``Fuglede theorem'' does hold. Since the methods used rely heavily on the fact that the spaces $E$ are all \textit{Banach function spaces} over $G$ (in particular, Banach lattices) and that certain non-trivial properties of $E$ arising from translation invariance have to be developed and established, we need to include some sections to address these features. An attempt has been made to keep the note as self-contained as possible. \section{Preliminaries} In this section we collect together some definitions and facts concerning Banach function spaces. Let $\left( \Omega ,\Sigma ,\mu \right) $ be a measure space. Since we are interested in the case where $\Omega =G$ is an infinite compact abelian group with normalized Haar measure $\mu $, we will assume that $\mu \left( \Omega \right) =1$ and that $\mu $ is atomless.
We denote by $L^{0}\left( \mu \right) $ the space of all $\mathbb{C}$-valued, $\Sigma $-measurable functions (with the usual identification of functions which are equal $\mu $-a.e. on $\Omega $). Then $L^{0}\left( \mu \right) $ is a Dedekind complete complex Riesz space (or, vector lattice). Recall that a subset $A\subseteq L^{0}\left( \mu \right) $ is called an (order) \textit{ideal} whenever $A$ is a linear subspace of $L^{0}\left( \mu \right) $ with the property that $\left\vert f\right\vert \leq \left\vert g\right\vert $, with $f\in L^{0}\left( \mu \right) $ and $g\in A$, implies that $f\in A$. \begin{definition} A \emph{Banach function space} (briefly B.f.s.) over $\left( \Omega ,\Sigma ,\mu \right) $ is a Banach space $\left( E,\left\Vert \cdot \right\Vert _{E}\right) $, where $E\subseteq L^{0}\left( \mu \right) $ is an ideal and $\left\Vert \cdot \right\Vert _{E}$ is an absolutely monotone norm on $E$ (that is, $\left\Vert f\right\Vert _{E}\leq \left\Vert g\right\Vert _{E}$ whenever $\left\vert f\right\vert \leq \left\vert g\right\vert $ in $E$). \end{definition} Evidently, B.f.s.' are complex (Dedekind complete) Banach lattices. Therefore, we can freely apply the general theory of Banach lattices to B.f.s.' For the theory of Banach lattices we refer the reader to \cite{AB2}, \cite{MN}, \cite{Sch}, \cite{Z}, and for the theory of B.f.s.' to \cite{Za1} (Chapter 15), \cite{BS}. The class of B.f.s.' includes the $L^{p}$-spaces ($1\leq p\leq \infty $), Orlicz spaces, Lorentz spaces, Marcinkiewicz spaces and many more. If $E$ and $F$ are two B.f.s.' over $\left( \Omega ,\Sigma ,\mu \right) $ satisfying $E\subseteq F$, then the natural embedding of $E$ into $F$ is continuous. Indeed, the embedding operator is linear and positive, and a positive linear operator on a Banach lattice is automatically continuous (see e.g. Theorem 83.12 in \cite{Z}). \begin{remark} In Chapter 15 of \cite{Za1} and in the series of papers \cite{LZ}, B.f.s.'
are introduced via function norms. We briefly recall this approach. Denote by $M\left( \mu \right) $ the set of all (equivalence classes of) extended $\mathbb{C}$-valued measurable functions on $\Omega $ (that is, every $f\in M\left( \mu \right) $ is of the form $f=g+ih$ with $g,h:\Omega \rightarrow \left[ -\infty ,\infty \right] $ measurable). The set of all $\left[ 0,\infty \right] $-valued functions in $M\left( \mu \right) $ is denoted by $M\left( \mu \right) ^{+}$. A \emph{function norm} $\rho $ is defined to be a map $\rho :M\left( \mu \right) ^{+}\rightarrow \left[ 0,\infty \right] $ satisfying: \begin{enumerate} \item[(i)] If $u\in M\left( \mu \right) ^{+}$ and $\rho \left( u\right) =0$, then $u=0$; \item[(ii)] $\rho \left( \lambda u\right) =\lambda \rho \left( u\right) $ for all $\lambda \in \mathbb{R}^{+}$ and $u\in M\left( \mu \right) ^{+}$; \item[(iii)] $\rho \left( u+v\right) \leq \rho \left( u\right) +\rho \left( v\right) $ for all $u,v\in M\left( \mu \right) ^{+}$; \item[(iv)] $\rho \left( u\right) \leq \rho \left( v\right) $ whenever $u\leq v$ in $M\left( \mu \right) ^{+}$. \end{enumerate} \noindent If $u\in M\left( \mu \right) ^{+}$ and $\rho \left( u\right) <\infty $, then $u\left( x\right) <\infty $ for $\mu $-a.e. $x\in \Omega $ (i.e., $u\in L^{0}\left( \mu \right) ^{+}$); see \cite{Za1}, Section 63, Theorem 1. Define \begin{equation} E_{\rho }=\left\{ f\in M\left( \mu \right) :\rho \left( \left\vert f\right\vert \right) <\infty \right\} . \label{eq0109} \end{equation} Then $E_{\rho }$ is an ideal in $L^{0}\left( \mu \right) $. Setting $\left\Vert f\right\Vert _{E_{\rho }}=\rho \left( \left\vert f\right\vert \right) $ for all $f\in E_{\rho }$, the space $\left( E_{\rho },\left\Vert \cdot \right\Vert _{E_{\rho }}\right) $ is a normed space. If $E_{\rho }$ is complete, then it is a B.f.s. Conversely, if $\left( E,\left\Vert \cdot \right\Vert _{E}\right) $ is a B.f.s.
and we set \begin{equation*} \rho \left( u\right) =\left\{ \begin{array}{ccc} \left\Vert u\right\Vert _{E} & \text{if} & u\in E^{+} \\ \infty & \text{if} & u\notin E^{+} \end{array} \right. ,\ \ \ u\in M\left( \mu \right) ^{+}, \end{equation*} then $\rho $ is a function norm and the space $E$ is recovered via (\ref{eq0109}). This shows that the approach in \cite{Za1} is equivalent to ours and so, all results from \cite{Za1} may be used in our setting (see also \cite{Z}, Section 112). \end{remark} Given a B.f.s. $\left( E,\left\Vert \cdot \right\Vert _{E}\right) $ over $\left( \Omega ,\Sigma ,\mu \right) $, there exists a set $A_{0}\in \Sigma $ with the property that $f=0$ $\mu $-a.e. on $\Omega \backslash A_{0}$ for each $f\in E$ and, for every $A\in \Sigma $ satisfying $A\subseteq A_{0}$ and $\mu \left( A\right) >0$, there exists $B\in \Sigma $ such that $B\subseteq A$, $\mu \left( B\right) >0$ and $\chi _{B}\in E$ (see \cite{Za1}, Section 72 or \cite{Z}, Section 86). The set $A_{0}$ is uniquely determined (modulo $\mu $-null sets) and is called the \textit{carrier} of $E$. Moreover, there exists a sequence $\left( A_{n}\right) _{n=1}^{\infty }$ in $\Sigma $ such that $A_{n}\subseteq A_{0}$ and $\chi _{A_{n}}\in E$ for all $n$, and $A_{n}\uparrow _{n}A_{0}$. Any B.f.s. that we encounter in this note has its carrier equal to $\Omega $. In general, if $E\neq \left\{ 0\right\} $, then one can replace, without loss of generality, $\Omega $ by $A_{0}$. Recall that a B.f.s. $\left( E,\left\Vert \cdot \right\Vert _{E}\right) $ over $\left( \Omega ,\Sigma ,\mu \right) $ has \textit{order continuous norm} (briefly o.c.-norm) if $\left\Vert f_{\tau }\right\Vert _{E}\downarrow _{\tau }0$ whenever $f_{\tau }\downarrow _{\tau }0$ in $E$ (that is, $\left( f_{\tau }\right) $ is a downwards directed net in the positive cone $E^{+}$ with infimum $0$).
Since $E$ is Dedekind complete, it follows from Theorem 103.6 in \cite{Z} that $E$ has o.c.-norm if and only if $E$ has $\sigma $-o.c.-norm (that is, $\left\Vert f_{n}\right\Vert _{E}\downarrow _{n}0$ for any sequence $\left( f_{n}\right) _{n=1}^{\infty }$ in $E^{+}$ satisfying $f_{n}\downarrow _{n}0$). For a sequence $\left( f_{n}\right) _{n=1}^{\infty }$ in $E^{+}$ the condition $f_{n}\downarrow _{n}0$ is equivalent to $f_{n}\left( x\right) \downarrow 0$ for $\mu $-a.e. $x\in \Omega $. By way of example, for $1\leq p<\infty $ the space $E=L^{p}\left( \mu \right) $ has o.c.-norm, but $L^{\infty }\left( \mu \right) $ does not have o.c.-norm. Let $\left( E,\left\Vert \cdot \right\Vert _{E}\right) $ be a B.f.s. over $\left( \Omega ,\Sigma ,\mu \right) $ with carrier $\Omega $. The \textit{K\"{o}the dual} (or \textit{associate space}) $E^{\times }$ of $E$ is defined by \begin{equation*} E^{\times }=\left\{ g\in L^{0}\left( \mu \right) :\left\Vert g\right\Vert _{E^{\times }}<\infty \right\} , \end{equation*} where \begin{equation} \left\Vert g\right\Vert _{E^{\times }}=\sup \left\{ \int_{\Omega }\left\vert fg\right\vert d\mu :f\in E,\left\Vert f\right\Vert _{E}\leq 1\right\} ,\ \ \ g\in L^{0}\left( \mu \right) . \label{eq01} \end{equation} Then $\left( E^{\times },\left\Vert \cdot \right\Vert _{E^{\times }}\right) $ is a B.f.s. over $\left( \Omega ,\Sigma ,\mu \right) $ with carrier equal to $\Omega $; see \cite{Za1}, Sections 68, 69 and also \cite{Z}, p. 418. From the definition it is also clear that \textit{H\"{o}lder's inequality} holds, that is, \begin{equation} \left\vert \int_{\Omega }fg\;d\mu \right\vert \leq \int_{\Omega }\left\vert fg\right\vert d\mu \leq \left\Vert f\right\Vert _{E}\left\Vert g\right\Vert _{E^{\times }},\ \ \ f\in E,\ \ g\in E^{\times }.
\label{eq02} \end{equation} According to \cite{Za1}, Section 69, Theorem 1, we also have \begin{equation} \left\Vert g\right\Vert _{E^{\times }}=\sup \left\{ \left\vert \int_{\Omega }fg\;d\mu \right\vert :f\in E,\left\Vert f\right\Vert _{E}\leq 1\right\} ,\ \ \ g\in E^{\times }. \label{eq06} \end{equation} For each $g\in E^{\times }$, define the linear functional $\varphi _{g}:E\rightarrow \mathbb{C}$ by setting \begin{equation*} \varphi _{g}\left( f\right) =\int_{\Omega }fg\;d\mu ,\ \ \ f\in E. \end{equation*} It is easy to see that $\varphi _{g}\in E^{\ast }$ (the dual Banach space of $E$) and that $\left\Vert \varphi _{g}\right\Vert _{E^{\ast }}=\left\Vert g\right\Vert _{E^{\times }}$ for all $g\in E^{\times }$ (see (\ref{eq06})). Consequently, the map $g\longmapsto \varphi _{g}$, for $g\in E^{\times }$, is an isometric isomorphism from $E^{\times }$ into $E^{\ast }$. Via this map, $E^{\times }$ may be identified with a closed subspace of $E^{\ast }$ and we frequently denote the \textit{dual pairing} of $E$ and $E^{\times }$ by $\left\langle \cdot ,\cdot \right\rangle $, that is, \begin{equation} \left\langle f,g\right\rangle =\int_{\Omega }fg\;d\mu ,\ \ \ f\in E,\ \ g\in E^{\times }. \label{eq0401} \end{equation} Since the carrier of $E^{\times }$ is equal to $\Omega $ (whenever the carrier of $E$ is $\Omega $), it follows that $E^{\times }$ separates the points of $E$ (but, in general, $E^{\times }$ is not norming). The functionals in $E^{\ast }$ which are of the form $\varphi _{g}$ for some $g\in E^{\times }$ may be identified with the band $E_{n}^{\ast }$ in $E^{\ast }$ of all order continuous functionals on $E$; see \cite{Za1}, Section 69, Theorem 3, and also \cite{Z}, Section 112. In particular, $E^{\ast }=E^{\times }$ if and only if $E$ has o.c.-norm (cf. Theorem 2.4.2 in \cite{MN}). A B.f.s.
$E$ over $\left( \Omega ,\Sigma ,\mu \right) $ has the \textit{Fatou property} if, for any net $0\leq u_{\alpha }\uparrow _{\alpha }$ in $E$ satisfying $\sup_{\alpha }\left\Vert u_{\alpha }\right\Vert _{E}<\infty $, there exists $u\in E$ such that $0\leq u_{\alpha }\uparrow _{\alpha }u$ and $\left\Vert u_{\alpha }\right\Vert _{E}\uparrow _{\alpha }\left\Vert u\right\Vert _{E}$. In this case, sequences also suffice; cf. \cite{Z}, Theorem 113.2. By a theorem of Halperin and Luxemburg (see \cite{Za1}, Section 71, Theorem 1), a B.f.s. $E$ has the Fatou property if and only if $E=E^{\times \times }$ isometrically. We point out that in \cite{BS} the Fatou property is included in the definition of a B.f.s. \section{Translation invariant Banach function \protect\linebreak spaces} Let $G$ be an infinite compact abelian group and denote by $\mu $ normalized Haar measure on the Borel $\sigma $-algebra $\mathcal{B}\left( G\right) $. The space of all regular, complex Borel measures in $G$ is denoted by $M\left( G\right) $. The group operation is denoted by $+$ and the identity element of $G$ is denoted by $0$. Moreover, $\mu \left( -A\right) =\mu \left( A\right) $ for all $A\in \mathcal{B}\left( G\right) $. For the general theory of locally compact abelian groups we refer the reader, for instance, to \cite{La}, \cite{Ru1}. We denote $L^{0}\left( \mu \right) $ by the more traditional notation $L^{0}\left( G\right) $ and the corresponding $L^{p}$-spaces by $L^{p}\left( G\right) $, $1\leq p\leq \infty $. A B.f.s. over $\left( G,\mathcal{B}\left( G\right) ,\mu \right) $ is simply called a B.f.s. over $G$. For each $y\in G$, define the \textit{translation operator} $\tau _{y}:L^{0}\left( G\right) \rightarrow L^{0}\left( G\right) $ by setting \begin{equation*} \left( \tau _{y}f\right) \left( x\right) =f\left( x-y\right) ,\ \ \ x\in G, \end{equation*} for all $f\in L^{0}\left( G\right) $. \begin{definition} \label{Def1101}A \emph{translation invariant B.f.s.} over $G$ is a B.f.s.
$E\subseteq L^{0}\left( G\right) $ such that $\tau _{y}f\in E$ with $\left\Vert \tau _{y}f\right\Vert _{E}=\left\Vert f\right\Vert _{E}$ for all $y\in G$, whenever $f\in E$. \end{definition} Such B.f.s.' include all spaces $L^{p}\left( G\right) $, $1\leq p\leq \infty $, all Orlicz spaces, Lorentz spaces and Marcinkiewicz spaces over $G$. Actually, any rearrangement invariant B.f.s. over $G$ is translation invariant. However, there exist plenty of translation invariant B.f.s.' which are \textit{not} rearrangement invariant. We present an example (see also Example \ref{Ex01} below). \begin{example} \label{Ex02}Let $G_{1}$ and $G_{2}$ be two infinite compact abelian groups (with normalized Haar measure $\mu _{1}$ and $\mu _{2}$, respectively) and consider the product group $G=G_{1}\times G_{2}$ (with normalized Haar measure $\mu =\mu _{1}\times \mu _{2}$). Let $1<p,q<\infty $. Define the function norm $\left\Vert \cdot \right\Vert _{p\times q}$ on $L^{0}\left( G\right) $ by \begin{equation*} \left\Vert f\right\Vert _{p\times q}=\left( \int_{G_{1}}\left( \int_{G_{2}}\left\vert f\left( x,y\right) \right\vert ^{q}d\mu _{2}\left( y\right) \right) ^{p/q}d\mu _{1}\left( x\right) \right) ^{1/p},\ \ \ f\in L^{0}\left( G\right) , \end{equation*} and let \begin{equation*} E_{p\times q}=\left\{ f\in L^{0}\left( G\right) :\left\Vert f\right\Vert _{p\times q}<\infty \right\} . \end{equation*} Equipped with the norm $\left\Vert \cdot \right\Vert _{p\times q}$ the space $E_{p\times q}$ is a translation invariant B.f.s. over $G$ with the Fatou property and o.c.-norm. It is readily verified that $E_{p\times q}$ is \emph{not} rearrangement invariant whenever $p\neq q$. \end{example} The following observation may be intuitively clear. \begin{lemma} \label{Lem1104}Let $E\neq \left\{ 0\right\} $ be a translation invariant B.f.s. over $G$. Then the carrier of $E$ is $G$. \end{lemma} \begin{proof} Let $B\in \mathcal{B}\left( G\right) $ with $\mu \left( B\right) >0$ be given.
We have to show that there exists $C\in \mathcal{B}\left( G\right) $ satisfying $\mu \left( C\right) >0$, $C\subseteq B$ and $\chi _{C}\in E$. By assumption there is $0<f\in E$. Hence, there exist $\varepsilon >0$ and $A\in \mathcal{B}\left( G\right) $ satisfying $\mu \left( A\right) >0$ and $0<\varepsilon \chi _{A}\leq f$. In particular, $\chi _{A}\in E$. It follows from Theorem F in Section 59 of \cite{Ha} that there exists $y_{0}\in G$ such that $\left( \tau _{y_{0}}\chi _{A}\right) \chi _{B}>0$. Setting $C=\left( A+y_{0}\right) \cap B$, it follows that $\mu \left( C\right) >0$ and $\chi _{C}\leq \tau _{y_{0}}\chi _{A}\in E$, which implies that $\chi _{C}\in E$. Hence, the set $C$ satisfies the requirements and the proof is complete.\medskip \end{proof} Using the definition (see (\ref{eq01})), it is routine to verify that the K\"{o}the dual $E^{\times }$ is translation invariant whenever $E\neq \left\{ 0\right\} $ is a translation invariant B.f.s. over $G$. If $E\neq \left\{ 0\right\} $ is any rearrangement invariant B.f.s. over $G$, then $L^{\infty }\left( G\right) \subseteq E\subseteq L^{1}\left( G\right) $ (as $\mu $ is atomless with $\mu \left( G\right) <\infty $). The following result shows, for any translation invariant B.f.s. $E$ over $G$, that the inclusion $E\subseteq L^{1}\left( G\right) $ always holds. Even if $E\neq \left\{ 0\right\} $, the other inclusion $L^{\infty }\left( G\right) \subseteq E$ may fail, in general (see Example \ref{Ex1103} below). \begin{proposition} \label{Prop1107}Let $E$ be any translation invariant B.f.s. over $G$. Then $E\subseteq L^{1}\left( G\right) $ with a continuous embedding. \end{proposition} \begin{proof} We can assume that $E\neq \left\{ 0\right\} $. Since then also $E^{\times }\neq \left\{ 0\right\} $, there exists $0\leq g\in E^{\times }$ with $\left\Vert g\right\Vert _{E^{\times }}=1$.
Given any $0\leq f\in E$, the non-negative function $\left( x,y\right) \longmapsto f\left( x+y\right) g\left( y\right) $, for $\left( x,y\right) \in G\times G$, is known to be Borel measurable on $G\times G$ (\cite{Ru1}, p. 5). Since $\int_{G}fd\mu =\int_{G}\tau _{-y}fd\mu $ for all $y\in G$ (the value $+\infty $ is possible at this stage), we deduce from the Fubini-Tonelli theorem (for positive functions) that \begin{multline*} \int_{G}\left( \int_{G}f\left( x+y\right) g\left( y\right) d\mu \left( y\right) \right) d\mu \left( x\right) \\ =\int_{G}\left( \int_{G}f\left( x+y\right) d\mu \left( x\right) \right) g\left( y\right) d\mu \left( y\right) \\ =\left( \int_{G}fd\mu \right) \left( \int_{G}g\;d\mu \right) . \end{multline*} But, for each $x\in G$, it follows from (\ref{eq02}) that \begin{equation*} \int_{G}f\left( x+y\right) g\left( y\right) d\mu \left( y\right) =\int_{G}\left( \tau _{-x}f\right) g\;d\mu \leq \left\Vert f\right\Vert _{E}\left\Vert g\right\Vert _{E^{\times }}=\left\Vert f\right\Vert _{E}<\infty \end{equation*} and so, by the previous identity, we have that \begin{equation} \left( \int_{G}fd\mu \right) \left( \int_{G}g\;d\mu \right) \leq \mu \left( G\right) \left\Vert f\right\Vert _{E}=\left\Vert f\right\Vert _{E}<\infty . \label{eq1119} \end{equation} Since $g>0$, its integral $\alpha =\int_{G}g\;d\mu >0$ and so $\left\Vert f\right\Vert _{1}=\int_{G}fd\mu <\infty $, that is, $f\in L^{1}\left( G\right) $. It follows that $E\subseteq L^{1}\left( G\right) $. As noted earlier, the natural embedding $E\subseteq L^{1}\left( G\right) $ is necessarily continuous. \medskip \end{proof} \begin{corollary} \label{Cor1102}Let $E\neq \left\{ 0\right\} $ be a translation invariant B.f.s. over $G$. \begin{enumerate} \item[(i)] With continuous inclusions we have that $L^{\infty }\left( G\right) \subseteq E^{\times }\subseteq L^{1}\left( G\right) $.
\item[(ii)] If, in addition, $E$ has the Fatou property, then $L^{\infty }\left( G\right) \subseteq E\subseteq L^{1}\left( G\right) $, with continuous inclusions. \end{enumerate} \end{corollary} \begin{proof} (i). Since $E^{\times }\neq \left\{ 0\right\} $ is also a translation invariant B.f.s., we can apply Proposition \ref{Prop1107} to $E^{\times }$ to conclude that $E^{\times }\subseteq L^{1}\left( G\right) $ continuously. Again by Proposition \ref{Prop1107}, now applied to $E$, we have $E\subseteq L^{1}\left( G\right) $ continuously and hence, $L^{\infty }\left( G\right) =L^{1}\left( G\right) ^{\times }\subseteq E^{\times }$, continuously. (ii). Part (i), applied to $E^{\times }$, yields $L^{\infty }\left( G\right) \subseteq E^{\times \times }\subseteq L^{1}\left( G\right) $ continuously. If $E$ has the Fatou property, then $E=E^{\times \times }$, which completes the proof.\medskip \end{proof} There exist non-trivial translation invariant B.f.s.' over $G$ for which $L^{\infty }\left( G\right) \nsubseteq E$. \begin{example} \label{Ex1103}This example is inspired by the results in \cite{Ru3}. Let $G$ be an infinite compact abelian group, equipped with normalized Haar measure $\mu $. Let $\mathcal{O}$ denote the collection of all open and dense subsets of $G$. Note that $U_{1}\cap U_{2}\in \mathcal{O}$ whenever $U_{1},U_{2}\in \mathcal{O}$ and that $x+U\in \mathcal{O}$ whenever $U\in \mathcal{O}$ and $x\in G$. Define \begin{equation*} J_{\mathcal{O}}=\left\{ f\in L^{\infty }\left( G\right) :\exists \ U\in \mathcal{O}\ \text{such that }f\chi _{U}=0\right\} . \end{equation*} It is readily verified that $J_{\mathcal{O}}$ is an ideal in $L^{\infty }\left( G\right) $, which is also translation invariant (that is, $\tau _{x}f\in J_{\mathcal{O}}$ whenever $f\in J_{\mathcal{O}}$ and $x\in G$). Furthermore, $J_{\mathcal{O}}\neq \left\{ 0\right\} $.
Indeed, there exists $U\in \mathcal{O}$ with $\mu \left( U\right) <1$ (actually, for each $\varepsilon >0$ there exists $V\in \mathcal{O}$ such that $\mu \left( V\right) <\varepsilon $; see Theorem 2.4 in \cite{Ru3}), in which case $0\neq \chi _{G\backslash U}\in J_{\mathcal{O}}$. Let $E_{\mathcal{O}}$ denote the norm closure of $J_{\mathcal{O}}$ in $L^{\infty }\left( G\right) $. Since the norm closure of any ideal is again an ideal (cf. Theorem 100.11 in \cite{Z}), it follows that $E_{\mathcal{O}}$ is an ideal in $L^{\infty }\left( G\right) $ and it is easily verified that $E_{\mathcal{O}}$ is translation invariant. Hence, $\left( E_{\mathcal{O}},\left\Vert \cdot \right\Vert _{\infty }\right) $ is a non-zero translation invariant B.f.s. over $G$. The claim is that $\chi _{G}\notin E_{\mathcal{O}}$ (and hence, that $L^{\infty }\left( G\right) \nsubseteq E_{\mathcal{O}}$). Actually, $E_{\mathcal{O}}\cap C\left( G\right) =\left\{ 0\right\} $. Indeed, suppose that $0\neq f\in E_{\mathcal{O}}\cap C\left( G\right) $. Then there exist $\varepsilon >0$ and a non-empty open set $W\subseteq G$ such that $\left\vert f\left( x\right) \right\vert >\varepsilon $, for $x\in W$. Choose $g\in J_{\mathcal{O}}$ such that $\left\Vert f-g\right\Vert _{\infty }<\varepsilon /2$, in which case $\left\vert g\left( x\right) \right\vert >\varepsilon /2$ for all $x\in W$. But, then $g$ cannot vanish $\mu $-a.e. on any set from $\mathcal{O}$, contradicting that $g\in J_{\mathcal{O}}$. \end{example} Our next aim is to show that $L^{\infty }\left( G\right) \subseteq E$ whenever $E\neq \left\{ 0\right\} $ and $E$ has o.c.-norm (see Proposition \ref{Prop1108}). First we require some preliminary results. \begin{lemma} \label{Lem1105}Let $E\neq \left\{ 0\right\} $ be a translation invariant B.f.s. over $G$ with o.c.-norm. For each set $A\in \Sigma $ with $\chi _{A}\in E$ the following assertions hold.
\begin{enumerate} \item[(i)] For each $\varepsilon >0$ there exists $\delta >0$ such that $\left\Vert \chi _{B}\right\Vert _{E}\leq \varepsilon $ whenever $B\in \mathcal{B}\left( G\right) $ satisfies $B\subseteq A$ and $\mu \left( B\right) \leq \delta $. \item[(ii)] If a sequence $\left( A_{n}\right) _{n=1}^{\infty }$ in $\mathcal{B}\left( G\right) $ satisfies $A_{n}\downarrow \emptyset $, then \begin{equation*} \lim_{n\rightarrow \infty }\sup_{y\in G}\left\Vert \chi _{A_{n}}\left( \tau _{y}\chi _{A}\right) \right\Vert _{E}=0. \end{equation*} \item[(iii)] For each $B\in \mathcal{B}\left( G\right) $ we have that $\lim_{y\rightarrow 0}\left\Vert \left( \chi _{B}-\tau _{y}\chi _{B}\right) \chi _{A}\right\Vert _{E}=0$. \item[(iv)] $\lim_{y\rightarrow 0}\left\Vert \chi _{A}-\tau _{y}\chi _{A}\right\Vert _{E}=0$. \end{enumerate} \end{lemma} \begin{proof} (i). If the statement does not hold, then there exist $\varepsilon >0$ and a sequence $\left( B_{n}\right) _{n=1}^{\infty }$ in $\mathcal{B}\left( G\right) $ with $B_{n}\subseteq A$, $\mu \left( B_{n}\right) \leq 2^{-n}$ and $\left\Vert \chi _{B_{n}}\right\Vert _{E}\geq \varepsilon $ for all $n\in \mathbb{N}$. The sets $C_{n}=\bigcup\nolimits_{k=n}^{\infty }B_{k}$ satisfy $\mu \left( C_{n}\right) \leq 2^{-n+1}$ for all $n\in \mathbb{N}$ and $C_{n}\downarrow _{n}$. Hence, $\chi _{A}\geq \chi _{C_{n}}\downarrow 0$ in $E$ and so, by order continuity of the norm, we have $\left\Vert \chi _{C_{n}}\right\Vert _{E}\downarrow 0$. But, $\chi _{C_{n}}\geq \chi _{B_{n}}$ implies that $\left\Vert \chi _{C_{n}}\right\Vert _{E}\geq \left\Vert \chi _{B_{n}}\right\Vert _{E}\geq \varepsilon $ for all $n$, which is a contradiction. (ii). Given $\varepsilon >0$, by part (i) there exists $\delta >0$ such that $\left\Vert \chi _{B}\right\Vert _{E}\leq \varepsilon $ whenever $B\in \mathcal{B}\left( G\right) $ satisfies $B\subseteq A$ and $\mu \left( B\right) \leq \delta $.
Since $A_{n}\downarrow \emptyset $, there exists $N\in \mathbb{N}$ such that $\mu \left( A_{n}\right) \leq \delta $ for all $n\geq N$. Observe that
\begin{equation*}
\left\Vert \chi _{A_{n}}\left( \tau _{y}\chi _{A}\right) \right\Vert _{E}=\left\Vert \tau _{-y}\left( \chi _{A_{n}}\tau _{y}\chi _{A}\right) \right\Vert _{E}=\left\Vert \left( \tau _{-y}\chi _{A_{n}}\right) \chi _{A}\right\Vert _{E},\ \ n\in \mathbb{N},\ \ \ y\in G.
\end{equation*}
Since $\left( \tau _{-y}\chi _{A_{n}}\right) \chi _{A}=\chi _{\left( A_{n}-y\right) \cap A}$ and
\begin{equation*}
\mu \left( \left( A_{n}-y\right) \cap A\right) \leq \mu \left( A_{n}-y\right) =\mu \left( A_{n}\right) \leq \delta ,\ \ \ n\geq N,
\end{equation*}
it follows that $\left\Vert \left( \tau _{-y}\chi _{A_{n}}\right) \chi _{A}\right\Vert _{E}\leq \varepsilon $ for all $y\in G$ and $n\geq N$. Hence,
\begin{equation*}
\sup_{y\in G}\left\Vert \left( \tau _{-y}\chi _{A_{n}}\right) \chi _{A}\right\Vert _{E}\leq \varepsilon ,\ \ \ n\geq N,
\end{equation*}
which completes the proof of part (ii).

(iii). Fix $B\in \mathcal{B}\left( G\right) $. Note that $\left\vert \chi _{B}-\tau _{y}\chi _{B}\right\vert =\chi _{\left( B+y\right) \Delta B}$, for $y\in G$, where $\Delta $ denotes the symmetric difference. It follows from the continuity of translations in $L^{1}\left( G\right) $ (see \cite{Ru1}, p.~3) that $\lim_{y\rightarrow 0}\left\Vert \chi _{B}-\tau _{y}\chi _{B}\right\Vert _{1}=0$ and hence, that $\lim_{y\rightarrow 0}\mu \left( \left( B+y\right) \Delta B\right) =0$. Given $\varepsilon >0$, it follows from part (i) that there exists $\delta >0$ such that $\left\Vert \chi _{C}\right\Vert _{E}\leq \varepsilon $ whenever $C\in \mathcal{B}\left( G\right) $ satisfies $C\subseteq A$ and $\mu \left( C\right) \leq \delta $.
If $U$ is a neighbourhood of $0\in G$ such that $\mu \left( \left( B+y\right) \Delta B\right) \leq \delta $ for all $y\in U$, then $\mu \left( A\cap \left( \left( B+y\right) \Delta B\right) \right) \leq \delta $ and so,
\begin{equation*}
\left\Vert \left( \chi _{B}-\tau _{y}\chi _{B}\right) \chi _{A}\right\Vert _{E}=\left\Vert \chi _{A\cap \left( \left( B+y\right) \Delta B\right) }\right\Vert _{E}\leq \varepsilon ,\ \ \ y\in U.
\end{equation*}
This suffices for the proof of part (iii).

(iv). Since the carrier of $E$ is $G$ (cf. Lemma \ref{Lem1104}), there exists a sequence $\left( H_{n}\right) _{n=1}^{\infty }$ in $\mathcal{B}\left( G\right) $ with $\chi _{H_{n}}\in E$ for all $n$ and $H_{n}\uparrow G$, i.e., $G\backslash H_{n}\downarrow \emptyset $. Given $\varepsilon >0$, it follows from part (ii) that there is $N\in \mathbb{N}$ such that $\sup_{y\in G}\left\Vert \chi _{G\backslash H_{N}}\left( \tau _{y}\chi _{A}\right) \right\Vert _{E}\leq \varepsilon /3$. By part (iii), there is a neighbourhood $U$ of $0\in G$ such that $\left\Vert \chi _{H_{N}}\left( \chi _{A}-\tau _{y}\chi _{A}\right) \right\Vert _{E}\leq \varepsilon /3$ for all $y\in U$. Consequently, for each $y\in U$ we have
\begin{eqnarray*}
\left\Vert \chi _{A}-\tau _{y}\chi _{A}\right\Vert _{E} &\leq &\left\Vert \chi _{H_{N}}\left( \chi _{A}-\tau _{y}\chi _{A}\right) \right\Vert _{E}+\left\Vert \chi _{G\backslash H_{N}}\left( \chi _{A}-\tau _{y}\chi _{A}\right) \right\Vert _{E} \\
&\leq &\varepsilon /3+\left\Vert \chi _{G\backslash H_{N}}\left( \tau _{0}\chi _{A}\right) \right\Vert _{E}+\left\Vert \chi _{G\backslash H_{N}}\left( \tau _{y}\chi _{A}\right) \right\Vert _{E}\leq \varepsilon .
\end{eqnarray*}
This completes the proof of part (iv).\medskip
\end{proof}

We can now establish the result alluded to above.

\begin{proposition}
\label{Prop1108}Let $E\neq \left\{ 0\right\} $ be a translation invariant B.f.s. over $G$ with o.c.-norm. The following statements hold.
\begin{enumerate}
\item[(i)] For each $f\in E$ it is the case that $\lim_{y\rightarrow 0}\left\Vert f-\tau _{y}f\right\Vert _{E}=0$.

\item[(ii)] $L^{\infty }\left( G\right) \subseteq E\subseteq L^{1}\left( G\right) $ with continuous inclusions.

\item[(iii)] $C\left( G\right) $ is a dense subspace of $E$.
\end{enumerate}
\end{proposition}

\begin{proof}
(i). Denote by $\limfunc{sim}\left( \mathcal{B}\left( G\right) \right) $ the space of all simple functions based on $\mathcal{B}\left( G\right) $. Claim: $\limfunc{sim}\left( \mathcal{B}\left( G\right) \right) \cap E$ is dense in $E$. Indeed, given $0\leq f\in E$, there exists a sequence $\left( s_{n}\right) _{n=1}^{\infty }$ in $\limfunc{sim}\left( \mathcal{B}\left( G\right) \right) ^{+}$ such that $0\leq s_{n}\uparrow _{n}f$ $\mu $-a.e. on $G$. Since $E$ is an ideal in $L^{0}\left( G\right) $, it is clear that $s_{n}\in \limfunc{sim}\left( \mathcal{B}\left( G\right) \right) \cap E$, for $n\in \mathbb{N}$. The order continuity of the norm in $E$ implies that $\left\Vert f-s_{n}\right\Vert _{E}\rightarrow 0$ as $n\rightarrow \infty $. This suffices for the proof of the claim.

Let $s\in \limfunc{sim}\left( \mathcal{B}\left( G\right) \right) \cap E$. Then $s=\sum_{j=1}^{k}\alpha _{j}\chi _{A_{j}}$, where $A_{j}\in \mathcal{B}\left( G\right) $ with $\chi _{A_{j}}\in E$ and $\alpha _{j}\in \mathbb{C}$, for $j=1,\ldots ,k$. Therefore, Lemma \ref{Lem1105} (iv) implies that $\left\Vert s-\tau _{y}s\right\Vert _{E}\rightarrow 0$ as $y\rightarrow 0$ in $G$. Since $\limfunc{sim}\left( \mathcal{B}\left( G\right) \right) \cap E$ is dense in $E$, the result of (i) is now clear.

(ii). That $E\subseteq L^{1}\left( G\right) $ has been shown in Proposition \ref{Prop1107}. For the inclusion of $L^{\infty }\left( G\right) $ into $E$, we begin with a general observation, which is of independent interest (only special cases will be used in the proofs of (ii) and (iii)). Let $f\in E$ and $\lambda \in M\left( G\right) $.
Since $f\in L^{1}\left( G\right) $ (cf. Proposition \ref{Prop1107}), the convolution $f\ast \lambda \in L^{1}\left( G\right) $ exists (see e.g. \cite{Ru1}, Section 1.3.2). The claim is that $f\ast \lambda \in E$. Indeed, it follows from part (i) that the function $F:y\longmapsto \tau _{y}f$, for $y\in G$, is continuous from $G$ into $E$. So the range of $F$ is a compact metric space and hence, is separable. Via the Pettis measurability theorem (\cite{DU}, p.~42) it follows that $F$ is strongly $\left\vert \lambda \right\vert $-measurable and bounded. Consequently, the Bochner integral $\int_{G}^{\left( B\right) }Fd\lambda =\int_{G}^{\left( B\right) }\tau _{y}fd\lambda \left( y\right) \in E\subseteq L^{1}\left( G\right) $ exists. Using \cite{DU}, Theorem II.2.6 (for continuous functionals on $E$) and Fubini's theorem, we find for all $g\in L^{\infty }\left( G\right) \subseteq E^{\times }\subseteq E^{\ast }$ that
\begin{eqnarray*}
\left\langle \int_{G}^{\left( B\right) }\tau _{y}fd\lambda \left( y\right) ,g\right\rangle &=&\int_{G}\left\langle \tau _{y}f,g\right\rangle d\lambda \left( y\right) \\
&=&\int_{G}\left( \int_{G}\left( \tau _{y}f\right) \left( x\right) g\left( x\right) d\mu \left( x\right) \right) d\lambda \left( y\right) \\
&=&\int_{G}\left( \int_{G}f\left( x-y\right) d\lambda \left( y\right) \right) g\left( x\right) d\mu \left( x\right) =\left\langle f\ast \lambda ,g\right\rangle .
\end{eqnarray*}
We can conclude that
\begin{equation}
f\ast \lambda =\int_{G}^{\left( B\right) }\tau _{y}fd\lambda \left( y\right) \in E,\ \ \ f\in E,\ \ \lambda \in M\left( G\right) .  \label{eq03}
\end{equation}
This proves the claim.

Fix any $0<f_{0}\in E$. By what has just been proved, we have that $f_{0}\ast \mu \in E$. But, $f_{0}\ast \mu =\left( \int_{G}f_{0}d\mu \right) \chi _{G}$ and so we can conclude that $\chi _{G}\in E$ and hence, that $L^{\infty }\left( G\right) \subseteq E$.
Furthermore, if $f\in L^{\infty }\left( G\right) $, then $\left\vert f\right\vert \leq \left\Vert f\right\Vert _{\infty }\chi _{G}$ implies that $\left\Vert f\right\Vert _{E}\leq \left\Vert \chi _{G}\right\Vert _{E}\left\Vert f\right\Vert _{\infty }$. Consequently, the embedding $L^{\infty }\left( G\right) \subseteq E$ is continuous.

(iii). Let $f\in E$ be fixed and $\varepsilon >0$ be given. It follows from (i) that there exists an open neighbourhood $U$ of $0$ in $G$ such that $\left\Vert f-\tau _{y}f\right\Vert _{E}\leq \varepsilon $ for all $y\in U$. Defining $h=\mu \left( U\right) ^{-1}\chi _{U}$, we have $f\ast h\in C\left( G\right) $ (as $f\in L^{1}\left( G\right) $ and $h\in L^{\infty }\left( G\right) $; cf. \cite{Ru1}, Section 1.1.6). Furthermore, (\ref{eq03}) implies that
\begin{equation*}
f-\left( f\ast h\right) =\mu \left( U\right) ^{-1}\int_{U}^{\left( B\right) }\left( f-\tau _{y}f\right) d\mu \left( y\right) ,
\end{equation*}
which implies that
\begin{equation*}
\left\Vert f-\left( f\ast h\right) \right\Vert _{E}\leq \mu \left( U\right) ^{-1}\int_{U}\left\Vert f-\tau _{y}f\right\Vert _{E}d\mu \left( y\right) \leq \varepsilon ;
\end{equation*}
cf. \cite{DU}, Theorem II.2.4 (ii). The proof is thereby complete. \medskip
\end{proof}

\begin{remark}
\label{Rem01}
\begin{enumerate}
\item[(a)] Let $E\neq \left\{ 0\right\} $ be a translation invariant B.f.s. over $G$ with o.c.-norm. Then $f\ast \lambda \in E$ for all $f\in E$ and $\lambda \in M\left( G\right) $; see the proof of part (ii) in the above proposition. Moreover, it follows from (\ref{eq03}) that $\left\Vert f\ast \lambda \right\Vert _{E}\leq \left\Vert \lambda \right\Vert _{M\left( G\right) }\left\Vert f\right\Vert _{E}$. Consequently, for each $\lambda \in M\left( G\right) $, the operator $T_{\lambda }^{E}:E\rightarrow E$ of convolution with $\lambda $ exists and satisfies $\left\Vert T_{\lambda }^{E}\right\Vert \leq \left\Vert \lambda \right\Vert _{M\left( G\right) }$.
It can be shown that a similar result holds for translation invariant B.f.s.' over $G$ which have the Fatou property. However, for general translation invariant B.f.s.' over $G$ this result fails. In fact, for the translation invariant B.f.s. $E_{\mathcal{O}}$ of Example \ref{Ex1103} it can be shown that the only measures $\lambda \in M\left( G\right) $ for which the convolution operator $T_{\lambda }^{E_{\mathcal{O}}}:E_{\mathcal{O}}\rightarrow E_{\mathcal{O}}$ exists, are the discrete measures. See \cite{OM}.

\item[(b)] A translation invariant B.f.s. $E$ over $G$ with the property that \linebreak $\left\Vert f-\tau _{y}f\right\Vert _{E}\rightarrow 0$ as $y\rightarrow 0$ in $G$, for every $f\in E$, necessarily has o.c.-norm. The proof of this fact is somewhat involved and is not needed in this note. See \cite{OM}. This fact also implies that any translation invariant B.f.s. over $G$ in which $C\left( G\right) $ is dense, necessarily has o.c.-norm.
\end{enumerate}
\end{remark}

Next we discuss the phenomenon that a general translation invariant B.f.s. need not be reflection invariant (see the definition below). For any $f\in L^{0}\left( G\right) $, its reflection $\tilde{f}\in L^{0}\left( G\right) $ is defined by
\begin{equation*}
\tilde{f}\left( x\right) =f\left( -x\right) ,\ \ \ x\in G.
\end{equation*}

\begin{definition}
\label{Def1102}A B.f.s. $E$ over $G$ is called \emph{reflection invariant} if it has the property that $\tilde{f}\in E$ with $\left\Vert \tilde{f}\right\Vert _{E}=\left\Vert f\right\Vert _{E}$ whenever $f\in E$.
\end{definition}

Evidently, every rearrangement invariant B.f.s. over $G$ is reflection invariant. Note that the B.f.s. $E_{p\times q}$ of Example \ref{Ex02}, with $p\neq q$, is translation and reflection invariant but, it is not rearrangement invariant. Furthermore, it is routine to verify, for any reflection invariant B.f.s. $E$ over $G$ with carrier equal to $G$, that the K\"{o}the dual $E^{\times }$ is also reflection invariant.
However, a translation invariant B.f.s. over $G$ need not be reflection invariant.

\begin{example}
\label{Ex01}Let $0<g\in L^{1}\left( G\right) $ be fixed. Define
\begin{equation}
\left\Vert f\right\Vert _{E_{g}}=\sup_{y\in G}\int_{G}\left\vert f\right\vert \left( \tau _{y}g\right) d\mu ,\ \ \ f\in L^{0}\left( G\right) ,  \label{eqRI02}
\end{equation}
and let
\begin{equation}
E_{g}=\left\{ f\in L^{0}\left( G\right) :\left\Vert f\right\Vert _{E_{g}}<\infty \right\} ,  \label{eqRI01}
\end{equation}
equipped with the norm $\left\Vert \cdot \right\Vert _{E_{g}}$. It is readily verified that $E_{g}$ is a translation invariant B.f.s. over $G$ with the Fatou property. Actually, $E_{g}$ is the largest translation invariant B.f.s. $F$ over $G$ with the property that $g\in F^{\times }$.

Consider the group $G=\mathbb{T}$ with normalized Haar measure (that is, $d\mu =\frac{1}{2\pi }dx$ on $\left[ -\pi ,\pi \right] $, where $dx$ denotes Lebesgue measure). Functions on $\mathbb{T}$ will be identified with $2\pi $-periodic functions on $\mathbb{R}$. The $2\pi $-periodic function $g:\mathbb{R}\rightarrow \mathbb{R}$ is defined by setting
\begin{equation*}
g\left( x\right) =\left\{
\begin{array}{cc}
0 & \text{if }-\pi <x\leq 0 \\
\frac{1}{\sqrt{x}} & \text{if }0<x\leq \pi
\end{array}
\right. .
\end{equation*}
Define the function norm $\left\Vert \cdot \right\Vert _{E_{g}}$ on $L^{0}\left( G\right) $ by (\ref{eqRI02}) and let $E_{g}$ be the translation invariant B.f.s. defined by (\ref{eqRI01}). It is clear that the function $g\notin E_{g}$. However, a direct computation shows that $\tilde{g}\in E_{g}$. Therefore, $E_{g}$ is not reflection invariant. It can be shown that the space $E_{g}$ does not have o.c.-norm. There also exist translation invariant B.f.s.' over $\mathbb{T}$ which are not reflection invariant but, have both the Fatou property and o.c.-norm.
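To indicate the computations behind these two claims (a sketch only; the constants are not optimized), note that taking $y=0$ in (\ref{eqRI02}) already gives
\begin{equation*}
\left\Vert g\right\Vert _{E_{g}}\geq \int_{G}g^{2}d\mu =\frac{1}{2\pi }\int_{0}^{\pi }\frac{dx}{x}=\infty ,
\end{equation*}
so that $g\notin E_{g}$. On the other hand, for each $y\in \mathbb{T}$ the integrand of $\int_{G}\tilde{g}\left( \tau _{y}g\right) d\mu $ is dominated, on each interval of overlap of the supports of $\tilde{g}$ and $\tau _{y}g$, by an expression of the form $\left( \left( b-x\right) \left( x-a\right) \right) ^{-1/2}$ with singularities only at the endpoints, and the elementary identity $\int_{a}^{b}\left( \left( b-x\right) \left( x-a\right) \right) ^{-1/2}dx=\pi $ then yields a bound for these integrals which is uniform in $y\in \mathbb{T}$, so that $\tilde{g}\in E_{g}$.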
\end{example}

\section{Fourier multiplier functions}

As before, $G$ is an infinite compact abelian group equipped with normalized Haar measure $\mu $. Let $\Gamma $ be the dual group of $G$; see, for example, Section 1.2 of \cite{Ru1}. For $x\in G$ and $\gamma \in \Gamma $ we write $\gamma \left( x\right) =\left( x,\gamma \right) $. Since we frequently consider $\Gamma $, as well as its linear span $\tau \left( G\right) $, as subsets of $C\left( G\right) $, we shall write the group operation in $\Gamma $ as \textit{multiplication}, that is, if $\gamma _{1},\gamma _{2}\in \Gamma $, then $\gamma _{1}\gamma _{2}\in \Gamma $ is given by
\begin{equation*}
\left( x,\gamma _{1}\gamma _{2}\right) =\left( x,\gamma _{1}\right) \left( x,\gamma _{2}\right) ,\ \ \ x\in G.
\end{equation*}
The identity element of $\Gamma $ is, of course, $\chi _{G}$. In particular,
\begin{equation}
\left( x,\gamma ^{-1}\right) =\overline{\left( x,\gamma \right) }=\left( -x,\gamma \right) ,\ \ \ x\in G,\ \ \gamma \in \Gamma ,  \label{eqMF12}
\end{equation}
where the bar denotes complex conjugation. Denote by $\tau \left( G\right) $ the linear subspace of $C\left( G\right) $ consisting of all \textit{trigonometric polynomials} on $G$, that is, $\tau \left( G\right) $ is the linear span of $\Gamma \subseteq C\left( G\right) $. The following fact, which is a consequence of Proposition \ref{Prop1108} (iii), will be used.

\begin{proposition}
\label{Prop02}Let $E\neq \left\{ 0\right\} $ be a translation invariant B.f.s. over $G$ with o.c.-norm. Then $\tau \left( G\right) $ is a dense subspace of $E$.
\end{proposition}

\begin{proof}
The dual group $\Gamma $ separates the points of $G$ (see \cite{Ru1}, Section 1.5.2) and so it follows from the Stone-Weierstrass theorem that $\tau \left( G\right) $ is dense in $C\left( G\right) $ with respect to $\left\Vert \cdot \right\Vert _{\infty }$. By Proposition \ref{Prop1108} (iii), $C\left( G\right) $ is dense in $E$.
Since the embedding of $C\left( G\right) $ into $E$ is continuous (as $L^{\infty }\left( G\right) \subseteq E$ continuously), we can conclude that $\tau \left( G\right) $ is dense in $E$. \medskip
\end{proof}

For each $f\in L^{1}\left( G\right) $ its Fourier transform, denoted by $\hat{f}:\Gamma \rightarrow \mathbb{C}$, is given by
\begin{equation*}
\hat{f}\left( \gamma \right) =\int_{G}f\left( x\right) \left( -x,\gamma \right) d\mu \left( x\right) ,\ \ \ \gamma \in \Gamma ,
\end{equation*}
and satisfies $\hat{f}\in c_{0}\left( \Gamma \right) $ with $\left\Vert \hat{f}\right\Vert _{\infty }\leq \left\Vert f\right\Vert _{1}$. Furthermore, if $f\in L^{1}\left( G\right) $ and $\hat{f}=0$, then $f=0$. For these facts, see for instance Section 1.2 of \cite{Ru1}.

Let $E\neq \left\{ 0\right\} $ be a translation invariant B.f.s. over $G$. Recall that $E$ is contained in $L^{1}\left( G\right) $ and so $\hat{f}$ exists for $f\in E$. A function $\varphi :\Gamma \rightarrow \mathbb{C}$ is called an $E$\textit{-multiplier function} for $G$ if, for every $f\in E$, there exists a function $g\in E$ such that $\hat{g}=\varphi \hat{f}$ (pointwise on $\Gamma $). If such a function $g$ exists, then it is necessarily unique by the injectivity of the Fourier transform. Denote $g$ by $M_{\varphi }^{E}f$. The so defined map $M_{\varphi }^{E}:E\rightarrow E$ is linear and satisfies
\begin{equation}
\left( M_{\varphi }^{E}f\right) ^{\wedge }=\varphi \hat{f},\ \ \ f\in E.  \label{eqMF02}
\end{equation}
The collection of all $E$-multiplier functions for $G$ is denoted by $\mathcal{M}_{E}\left( G\right) $, which clearly is a (complex) vector space of functions on $\Gamma $. For $E=L^{p}\left( G\right) $, where $1\leq p\leq \infty $, we use the simpler notation $\mathcal{M}_{p}\left( G\right) $ in place of $\mathcal{M}_{L^{p}\left( G\right) }\left( G\right) $.
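By way of a standard illustration (not needed in the sequel), consider $G=\mathbb{T}$, so that $\Gamma =\mathbb{Z}$. Since convolution with the non-negative, norm-one Fej\'{e}r kernel $K_{N}$ is a contraction on each $L^{p}\left( \mathbb{T}\right) $, the function
\begin{equation*}
\varphi _{N}\left( n\right) =\left( 1-\frac{\left\vert n\right\vert }{N+1}\right) ^{+}=\hat{K}_{N}\left( n\right) ,\ \ \ n\in \mathbb{Z},
\end{equation*}
belongs to $\mathcal{M}_{p}\left( \mathbb{T}\right) $ for every $1\leq p\leq \infty $, with $M_{\varphi _{N}}^{E}f=f\ast K_{N}$ for $E=L^{p}\left( \mathbb{T}\right) $.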
A routine application of the closed graph theorem shows that $M_{\varphi }^{E}$ is continuous, i.e., $M_{\varphi }^{E}\in \mathcal{L}\left( E\right) $, for every $\varphi \in \mathcal{M}_{E}\left( G\right) $. Here, $\mathcal{L}\left( E\right) $ denotes the Banach space of all continuous linear operators of $E$ into itself, equipped with the operator norm. It should be observed that if $E$ is a translation invariant B.f.s. over $G$ with $L^{\infty }\left( G\right) \subseteq E$ (in which case $\gamma \in E$ for all $\gamma \in \Gamma $) then, for all $\varphi \in \mathcal{M}_{E}\left( G\right) $, we have
\begin{equation}
M_{\varphi }^{E}\gamma =\varphi \left( \gamma \right) \gamma ,\ \ \ \gamma \in \Gamma .  \label{eq04}
\end{equation}
Indeed, given $\varphi \in \mathcal{M}_{E}\left( G\right) $ and $\gamma \in \Gamma $, we have that $\hat{\gamma}=\chi _{\left\{ \gamma \right\} }$, from which it follows that $\left( M_{\varphi }^{E}\gamma \right) ^{\wedge }=\varphi \hat{\gamma}=\left( \varphi \left( \gamma \right) \gamma \right) ^{\wedge }$. This implies (\ref{eq04}). It follows from (\ref{eqMF02}) that $\mathcal{M}_{E}\left( G\right) $ is an algebra for pointwise multiplication of functions on $\Gamma $.

It is shown in the following lemma that always $\mathcal{M}_{E}\left( G\right) \subseteq \ell ^{\infty }\left( \Gamma \right) $. Recall that if $E\neq \left\{ 0\right\} $ is a translation invariant B.f.s. over $G$, then $L^{\infty }\left( G\right) \subseteq E^{\times }$; see Corollary \ref{Cor1102} (i). In particular, $\gamma \in E^{\times }$ for each $\gamma \in \Gamma $. Moreover, $\left\Vert \gamma \right\Vert _{E^{\times }}=\left\Vert \chi _{G}\right\Vert _{E^{\times }}$ for all $\gamma \in \Gamma $, because $\left\vert \gamma \right\vert =\chi _{G}$. Recall also that $\left\Vert f\right\Vert _{E^{\times }}=\left\Vert f\right\Vert _{E^{\ast }}$ for each $f\in E^{\times }$; see the discussion following formula (\ref{eq06}).
\begin{lemma}
\label{LemMF06}Let $E\neq \left\{ 0\right\} $ be a translation invariant B.f.s. over $G$. Then $\mathcal{M}_{E}\left( G\right) \subseteq \ell ^{\infty }\left( \Gamma \right) $ and
\begin{equation}
\left\Vert \varphi \right\Vert _{\infty }\leq \left\Vert M_{\varphi }^{E}\right\Vert ,\ \ \ \varphi \in \mathcal{M}_{E}\left( G\right) .  \label{eqMF03}
\end{equation}
\end{lemma}

\begin{proof}
Fix $\varphi \in \mathcal{M}_{E}\left( G\right) $ and set $T=M_{\varphi }^{E}$. As observed prior to the lemma, $T$ is continuous and so we can consider the Banach space adjoint operator $T^{\ast }:E^{\ast }\rightarrow E^{\ast }$. Let $\gamma \in \Gamma $ and consider $\gamma \in E^{\times }\subseteq E^{\ast }$. The claim is that $T^{\ast }\gamma =\varphi \left( \gamma ^{-1}\right) \gamma \in E^{\times }\subseteq E^{\ast }$. Indeed, given $f\in E$ we have that
\begin{eqnarray*}
\left\langle f,T^{\ast }\gamma \right\rangle &=&\left\langle Tf,\gamma \right\rangle =\int_{G}\left( Tf\right) \left( x\right) \left( x,\gamma \right) d\mu \left( x\right) \\
&=&\int_{G}\left( Tf\right) \left( x\right) \left( -x,\gamma ^{-1}\right) d\mu \left( x\right) =\left( Tf\right) ^{\wedge }\left( \gamma ^{-1}\right) \\
&=&\varphi \left( \gamma ^{-1}\right) \hat{f}\left( \gamma ^{-1}\right) =\varphi \left( \gamma ^{-1}\right) \int_{G}f\left( x\right) \left( -x,\gamma ^{-1}\right) d\mu \left( x\right) \\
&=&\varphi \left( \gamma ^{-1}\right) \int_{G}f\left( x\right) \left( x,\gamma \right) d\mu \left( x\right) =\left\langle f,\varphi \left( \gamma ^{-1}\right) \gamma \right\rangle .
\end{eqnarray*}
Since $E$ separates the elements of $E^{\ast }$, the claim follows.
Defining the \textit{reflected function} $\tilde{\varphi}:\Gamma \rightarrow \mathbb{C}$ by $\tilde{\varphi}\left( \gamma \right) =\varphi \left( \gamma ^{-1}\right) $, for $\gamma \in \Gamma $, we find that
\begin{equation*}
\left\Vert T\right\Vert =\left\Vert T^{\ast }\right\Vert \geq \sup_{\gamma \in \Gamma }\frac{\left\Vert T^{\ast }\gamma \right\Vert _{E^{\times }}}{\left\Vert \gamma \right\Vert _{E^{\times }}}=\sup_{\gamma \in \Gamma }\frac{\left\vert \tilde{\varphi}\left( \gamma \right) \right\vert \left\Vert \gamma \right\Vert _{E^{\times }}}{\left\Vert \gamma \right\Vert _{E^{\times }}}=\left\Vert \tilde{\varphi}\right\Vert _{\infty }.
\end{equation*}
Since $\left\Vert \tilde{\varphi}\right\Vert _{\infty }=\left\Vert \varphi \right\Vert _{\infty }$, this suffices for the proof. \medskip
\end{proof}

For the case $E=L^{p}\left( G\right) $ with $1\leq p<\infty $, the inequality (\ref{eqMF03}) is well known; see Corollary 4.1.2 of \cite{La}, for example.

Define the family of operators
\begin{equation}
\mathcal{M}_{E}^{\limfunc{op}}\left( G\right) =\left\{ M_{\varphi }^{E}:\varphi \in \mathcal{M}_{E}\left( G\right) \right\} \subseteq \mathcal{L}\left( E\right) .  \label{eqMF13}
\end{equation}
For each $x\in G$, a direct calculation shows that the Dirac point measure $\delta _{x}$ satisfies $\tau _{x}f=f\ast \delta _{x}$, for $f\in E$. Since $\tau _{x}\in \mathcal{L}\left( E\right) $ and $\hat{\delta}_{x}\left( \gamma \right) =\left( -x,\gamma \right) $, for $\gamma \in \Gamma $, it follows that $\left( x,\cdot \right) \in \mathcal{M}_{E}\left( G\right) $ for every $x\in G$ and $\left\{ \tau _{x}:x\in G\right\} \subseteq \mathcal{M}_{E}^{\limfunc{op}}\left( G\right) $. In particular, the identity operator $I=\tau _{0}\in \mathcal{M}_{E}^{\limfunc{op}}\left( G\right) $ and the constant function $\hat{\delta}_{0}=\mathbf{1}\in \mathcal{M}_{E}\left( G\right) $. The following result is a consequence of Lemma \ref{LemMF06}.
\begin{corollary}
\label{CorMF05}Let $E\neq \left\{ 0\right\} $ be a translation invariant B.f.s. over $G$. The set $\mathcal{M}_{E}\left( G\right) $ is a unital subalgebra of $\ell ^{\infty }\left( \Gamma \right) $ and the map $\Phi :\mathcal{M}_{E}\left( G\right) \rightarrow \mathcal{L}\left( E\right) $, defined by
\begin{equation*}
\Phi \left( \varphi \right) =M_{\varphi }^{E},\ \ \ \varphi \in \mathcal{M}_{E}\left( G\right) ,
\end{equation*}
is a unital algebra isomorphism onto the subalgebra $\mathcal{M}_{E}^{\limfunc{op}}\left( G\right) \subseteq \mathcal{L}\left( E\right) $. Moreover, $\mathcal{M}_{E}^{\limfunc{op}}\left( G\right) $ is a unital commutative Banach subalgebra of $\mathcal{L}\left( E\right) $.
\end{corollary}

\begin{proof}
It is routine to verify that $\mathcal{M}_{E}\left( G\right) $ is a unital subalgebra of $\ell ^{\infty }\left( \Gamma \right) $ and that the map $\Phi :\mathcal{M}_{E}\left( G\right) \rightarrow \mathcal{L}\left( E\right) $ is a unital algebra homomorphism. Consequently, $\mathcal{M}_{E}^{\limfunc{op}}\left( G\right) =\Phi \left( \mathcal{M}_{E}\left( G\right) \right) $ is a unital commutative subalgebra of $\mathcal{L}\left( E\right) $. The injectivity of $\Phi $ follows from Lemma \ref{LemMF06}.

To show that $\mathcal{M}_{E}^{\limfunc{op}}\left( G\right) $ is norm closed in $\mathcal{L}\left( E\right) $, let $\left( M_{\varphi _{n}}^{E}\right) _{n=1}^{\infty }$ be a sequence in $\mathcal{M}_{E}^{\limfunc{op}}\left( G\right) $ satisfying $\left\Vert S-M_{\varphi _{n}}^{E}\right\Vert \rightarrow 0$ as $n\rightarrow \infty $ for some $S\in \mathcal{L}\left( E\right) $. Lemma \ref{LemMF06} implies that $\left( \varphi _{n}\right) _{n=1}^{\infty }$ is a Cauchy sequence in $\ell ^{\infty }\left( \Gamma \right) $. So, there exists $\varphi \in \ell ^{\infty }\left( \Gamma \right) $ such that $\left\Vert \varphi -\varphi _{n}\right\Vert _{\infty }\rightarrow 0$ as $n\rightarrow \infty $.
It follows, for a fixed $f\in E\subseteq L^{1}\left( G\right) $, that $\hat{f}\in c_{0}\left( \Gamma \right) $ and so
\begin{equation}
\left( M_{\varphi _{n}}^{E}f\right) ^{\wedge }=\varphi _{n}\hat{f}\rightarrow \varphi \hat{f},\ \ \ n\rightarrow \infty ,  \label{eqMF18}
\end{equation}
in $\ell ^{\infty }\left( \Gamma \right) $. On the other hand, $M_{\varphi _{n}}^{E}f\rightarrow Sf$ in $E$ as $n\rightarrow \infty $. Since $E$ is continuously included in $L^{1}\left( G\right) $, it follows that $M_{\varphi _{n}}^{E}f\rightarrow Sf$ in $L^{1}\left( G\right) $. By the continuity of the Fourier transform from $L^{1}\left( G\right) $ into $c_{0}\left( \Gamma \right) $ we can conclude that $\left( M_{\varphi _{n}}^{E}f\right) ^{\wedge }\rightarrow \left( Sf\right) ^{\wedge }$ in $\ell ^{\infty }\left( \Gamma \right) $. Then (\ref{eqMF18}) implies that $\left( Sf\right) ^{\wedge }=\varphi \hat{f}$. But, $f\in E$ is arbitrary and so $\varphi \in \mathcal{M}_{E}\left( G\right) $ with $S=M_{\varphi }^{E}$. The proof is complete.\medskip\
\end{proof}

\begin{remark}
\begin{enumerate}
\item[(a)] For $E=L^{2}\left( G\right) $ it is a classical result that $\mathcal{M}_{2}\left( G\right) =\ell ^{\infty }\left( \Gamma \right) $, which is an immediate consequence of the fact that the Fourier transform is a surjective isometry from $L^{2}\left( G\right) $ onto $\ell ^{2}\left( \Gamma \right) $. Conversely, if $E$ is a translation invariant B.f.s. over $G$ with $L^{\infty }\left( G\right) \subseteq E$ and $\mathcal{M}_{E}\left( G\right) =\ell ^{\infty }\left( \Gamma \right) $, then it follows from Proposition 1.6 of \cite{dPR} that $E=L^{2}\left( G\right) $ with equivalent norms.

\item[(b)] Let $E\neq \left\{ 0\right\} $ be a translation invariant B.f.s. over $G$ with \emph{o.c.-norm}. As observed in Remark \ref{Rem01}, for each $\lambda \in M\left( G\right) $ the operator $T_{\lambda }^{E}:E\rightarrow E$ of convolution with $\lambda $ exists.
Recalling that $\left( f\ast \lambda \right) ^{\wedge }=\hat{\lambda}\hat{f}$ for $f\in L^{1}\left( G\right) $ and $\lambda \in M\left( G\right) $, it follows that $\hat{\lambda}\in \mathcal{M}_{E}\left( G\right) $ with $M_{\hat{\lambda}}^{E}=T_{\lambda }^{E}$ for all $\lambda \in M\left( G\right) $. Hence,
\begin{equation*}
\left\{ \hat{\lambda}:\lambda \in M\left( G\right) \right\} \subseteq \mathcal{M}_{E}\left( G\right) .
\end{equation*}
A similar result holds if $E$ has the Fatou property.
\end{enumerate}
\end{remark}

For any function $f\in L^{0}\left( G\right) $, the function $f^{\natural }\in L^{0}\left( G\right) $ is defined by
\begin{equation*}
f^{\natural }\left( x\right) =\overline{f\left( -x\right) },\ \ \ \mu \text{-a.e.}\ \ x\in G.
\end{equation*}
Let $E$ be a reflection and translation invariant B.f.s. over $G$. Then
\begin{equation*}
f^{\natural }\in E\ \ \text{and\ \ }\left\Vert f^{\natural }\right\Vert _{E}=\left\Vert f\right\Vert _{E},\ \ \ f\in E,
\end{equation*}
and the map $f\longmapsto f^{\natural }$, for $f\in E$, is a conjugate linear isometric involution in $E$. For each operator $T\in \mathcal{L}\left( E\right) $ the operator $T^{\natural }\in \mathcal{L}\left( E\right) $ is defined by
\begin{equation*}
T^{\natural }f=\left( Tf^{\natural }\right) ^{\natural },\ \ \ f\in E,
\end{equation*}
and satisfies $\left\Vert T^{\natural }\right\Vert =\left\Vert T\right\Vert $. Then $T\longmapsto T^{\natural }$, for $T\in \mathcal{L}\left( E\right) $, is a conjugate linear isometric involution in $\mathcal{L}\left( E\right) $.

\begin{proposition}
\label{Prop01}Let $E\neq \left\{ 0\right\} $ be a translation and reflection invariant B.f.s. over $G$.

\begin{enumerate}
\item[(i)] If $\varphi \in \mathcal{M}_{E}\left( G\right) $, then also $\bar{\varphi}\in \mathcal{M}_{E}\left( G\right) $ and $M_{\bar{\varphi}}^{E}=\left( M_{\varphi }^{E}\right) ^{\natural }$.
\item[(ii)] The map $M_{\varphi }^{E}\longmapsto M_{\bar{\varphi}}^{E}$, for $\varphi \in \mathcal{M}_{E}\left( G\right) $, is a conjugate linear isometric involution in $\mathcal{M}_{E}^{\limfunc{op}}\left( G\right) $.
\end{enumerate}
\end{proposition}

\begin{proof}
(i). Let $\varphi \in \mathcal{M}_{E}\left( G\right) $. A direct (but careful) computation shows that
\begin{equation*}
\left( \left( M_{\varphi }^{E}\right) ^{\natural }f\right) ^{\wedge }=\bar{\varphi}\hat{f},\ \ \ f\in E.
\end{equation*}
Consequently, $\bar{\varphi}\in \mathcal{M}_{E}\left( G\right) $ and $M_{\bar{\varphi}}^{E}=\left( M_{\varphi }^{E}\right) ^{\natural }$.

(ii). This follows immediately from part (i) in combination with the observations made prior to the present proposition.\medskip
\end{proof}

For $E=L^{2}\left( G\right) $ and $\varphi \in \mathcal{M}_{2}\left( G\right) =\ell ^{\infty }\left( \Gamma \right) $, it follows (via Plancherel's formula) that $M_{\bar{\varphi}}^{\left( 2\right) }=\left( M_{\varphi }^{\left( 2\right) }\right) ^{\ast }$ is the Hilbert space adjoint of $M_{\varphi }^{\left( 2\right) }$.

\section{A Fuglede type theorem}

Again let $G$ be an infinite compact abelian group with normalized Haar measure $\mu $. As observed in Proposition \ref{Prop01} (i), if $E$ is a translation and reflection invariant B.f.s. over $G$, then $\bar{\varphi}\in \mathcal{M}_{E}\left( G\right) $ whenever $\varphi \in \mathcal{M}_{E}\left( G\right) $. For $E=L^{2}\left( G\right) $, we noted that $M_{\bar{\varphi}}^{\left( 2\right) }=\left( M_{\varphi }^{\left( 2\right) }\right) ^{\ast }$ is the Hilbert space adjoint of $M_{\varphi }^{\left( 2\right) }$. In particular, $M_{\varphi }^{\left( 2\right) }$ is a normal operator on $L^{2}\left( G\right) $ (as $\mathcal{M}_{2}^{\limfunc{op}}\left( G\right) $ is commutative; cf. Corollary \ref{CorMF05}).
Consequently, it follows from Fuglede's theorem (see, for instance, Theorem IX.6.7 in \cite{Co}) that if $\varphi \in \mathcal{M}_{2}\left( G\right) =\ell ^{\infty }\left( \Gamma \right) $ and $T\in \mathcal{L}\left( L^{2}\left( G\right) \right) $ satisfy $M_{\varphi }^{\left( 2\right) }T=TM_{\varphi }^{\left( 2\right) }$, then also $M_{\bar{\varphi}}^{\left( 2\right) }T=TM_{\bar{\varphi}}^{\left( 2\right) }$. As noted in the Introduction, this latter result was extended to the case $E=L^{p}\left( \mathbb{T}\right) $ with $1\leq p<\infty $ in Theorem 3.1 of \cite{MR}. The purpose of this section is to extend this result to all translation and reflection invariant B.f.s.' with o.c.-norm over arbitrary compact abelian groups (see Theorem \ref{Thm01} below).

For any $f\in L^{1}\left( G\right) $ we denote
\begin{equation*}
\limfunc{supp}\left( \hat{f}\right) =\left\{ \gamma \in \Gamma :\hat{f}\left( \gamma \right) \neq 0\right\} .
\end{equation*}
We begin with the following result.

\begin{lemma}
\label{LemFTA01}Let $E$ be a translation invariant B.f.s. over $G$ with $L^{\infty }\left( G\right) \subseteq E$. Let $T\in \mathcal{L}\left( E\right) $ and $\varphi ,\psi \in \mathcal{M}_{E}\left( G\right) $.

\begin{enumerate}
\item[(i)] Suppose that $M_{\varphi }^{E}T=TM_{\psi }^{E}$. Then, for every $\gamma \in \Gamma $, it is the case that $\varphi \left( \xi \right) =\psi \left( \gamma \right) $ for all $\xi \in \limfunc{supp}\left( \left( T\gamma \right) ^{\wedge }\right) $.

\item[(ii)] Assume, in addition, that $E$ has o.c.-norm. Let $\varphi $ and $\psi $ have the property that, for each $\gamma \in \Gamma $, we have $\varphi \left( \xi \right) =\psi \left( \gamma \right) $ for all $\xi \in \limfunc{supp}\left( \left( T\gamma \right) ^{\wedge }\right) $. Then $M_{\varphi }^{E}T=TM_{\psi }^{E}$.
\end{enumerate}
\end{lemma}

\begin{proof}
(i). Fix $\gamma \in \Gamma $.
Since $L^{\infty }\left( G\right) \subseteq E$, we have that $\gamma \in E$ and so \begin{equation} M_{\varphi }^{E}T\gamma =TM_{\psi }^{E}\gamma . \label{eqFTA03} \end{equation} Recalling from (\ref{eq04}) that $M_{\psi }^{E}\gamma =\psi \left( \gamma \right) \gamma $, it follows that $TM_{\psi }^{E}\gamma =\psi \left( \gamma \right) T\gamma $. Hence, \begin{equation} \left( TM_{\psi }^{E}\gamma \right) ^{\wedge }=\psi \left( \gamma \right) \left( T\gamma \right) ^{\wedge }. \label{eqFTA01} \end{equation} On the other hand, \begin{equation} \left( M_{\varphi }^{E}T\gamma \right) ^{\wedge }=\varphi \cdot \left( T\gamma \right) ^{\wedge }. \label{eqFTA02} \end{equation} A combination of (\ref{eqFTA03}), (\ref{eqFTA01}) and (\ref{eqFTA02}) yields that \begin{equation} \varphi \left( \xi \right) \left( T\gamma \right) ^{\wedge }\left( \xi \right) =\psi \left( \gamma \right) \left( T\gamma \right) ^{\wedge }\left( \xi \right) , \label{eqFTA04} \end{equation} for all $\xi \in \Gamma $. If $\limfunc{supp}\left( \left( T\gamma \right) ^{\wedge }\right) =\emptyset $, then there is nothing to be proved. So, suppose there exists $\xi \in \limfunc{supp}\left( \left( T\gamma \right) ^{\wedge }\right) $. Then $\left( T\gamma \right) ^{\wedge }\left( \xi \right) \neq 0$ and hence, (\ref{eqFTA04}) implies that $\varphi \left( \xi \right) =\psi \left( \gamma \right) $. This proves part (i). (ii). Fix $\gamma \in \Gamma $. If $\xi \in \Gamma \backslash \limfunc{supp}\left( \left( T\gamma \right) ^{\wedge }\right) $, i.e., $\left( T\gamma \right) ^{\wedge }\left( \xi \right) =0$, then it is clear that (\ref{eqFTA04}) holds. On the other hand, if $\xi \in \limfunc{supp}\left( \left( T\gamma \right) ^{\wedge }\right) $, then it follows from the hypothesis that (\ref{eqFTA04}) holds. Hence, \begin{equation*} \varphi \cdot \left( T\gamma \right) ^{\wedge }=\psi \left( \gamma \right) \left( T\gamma \right) ^{\wedge }.
\end{equation*} Via (\ref{eqFTA01}) and (\ref{eqFTA02}) this yields that \begin{equation*} \left( M_{\varphi }^{E}T\gamma \right) ^{\wedge }=\left( TM_{\psi }^{E}\gamma \right) ^{\wedge }. \end{equation*} The uniqueness of Fourier transforms in $L^{1}\left( G\right) $ then implies (\ref{eqFTA03}). Since $\gamma \in \Gamma $ is arbitrary, the linearity of $T$, $M_{\varphi }^{E}$ and $M_{\psi }^{E}$ imply that \begin{equation*} M_{\varphi }^{E}Tg=TM_{\psi }^{E}g,\ \ \ g\in \tau \left( G\right) , \end{equation*} where, as before, $\tau \left( G\right) \subseteq L^{\infty }\left( G\right) \subseteq E$ denotes the space of all trigonometric polynomials on $G$. But, $E$ has o.c.-norm and so $\tau \left( G\right) $ is dense in $E$; see Proposition \ref{Prop02}. The operators $T$, $M_{\varphi }^{E}$ and $M_{\psi }^{E}$ are continuous and hence, we can conclude that $M_{\varphi }^{E}T=TM_{\psi }^{E}$. The proof is thereby complete. \medskip \end{proof} The following Fuglede type theorem is a consequence of Lemma \ref{LemFTA01}. \begin{theorem} \label{Thm01}Let $E\neq \left\{ 0\right\} $ be a translation and reflection invariant B.f.s. over $G$ with o.c.-norm. Suppose that $\varphi ,\psi \in \mathcal{M}_{E}\left( G\right) $ and $T\in \mathcal{L}\left( E\right) $ satisfy $M_{\varphi }^{E}T=TM_{\psi }^{E}$. Then $M_{\bar{\varphi}}^{E}T=TM_{\bar{\psi}}^{E}$. \end{theorem} \begin{proof} The reflection invariance of $E$ ensures that $\bar{\varphi},\bar{\psi}\in \mathcal{M}_{E}\left( G\right) $ and the order continuity of the norm in $E$ implies that $L^{\infty }\left( G\right) \subseteq E$. By Lemma \ref{LemFTA01} (i), the condition $M_{\varphi }^{E}T=TM_{\psi }^{E}$ implies, for every $\gamma \in \Gamma $, that \begin{equation} \varphi \left( \xi \right) =\psi \left( \gamma \right) ,\ \ \ \xi \in \limfunc{supp}\left( \left( T\gamma \right) ^{\wedge }\right) .
\label{eqFTA05} \end{equation} It is clear that the functions $\bar{\varphi}$ and $\bar{\psi}$ (in place of $\varphi $ and $\psi $, respectively) also satisfy (\ref{eqFTA05}). Therefore, via Lemma \ref{LemFTA01} (ii) applied to $\bar{\varphi},\bar{\psi}\in \mathcal{M}_{E}\left( G\right) $, we can conclude that $M_{\bar{\varphi}}^{E}T=TM_{\bar{\psi}}^{E}$. This completes the proof. \medskip \end{proof} The following result is a special case of Theorem \ref{Thm01}. \begin{corollary} Let $E\neq \left\{ 0\right\} $ be any rearrangement invariant B.f.s. over $G$ with o.c.-norm. Suppose that $\varphi ,\psi \in \mathcal{M}_{E}\left( G\right) $ and $T\in \mathcal{L}\left( E\right) $ satisfy $M_{\varphi }^{E}T=TM_{\psi }^{E}$. Then $M_{\bar{\varphi}}^{E}T=TM_{\bar{\psi}}^{E}$. \end{corollary}
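The mechanism behind Lemma \ref{LemFTA01} and Theorem \ref{Thm01} can be illustrated in a finite-dimensional toy setting (this is our own illustration, not part of the argument above): for a finite cyclic group, multiplier operators act diagonally in the Fourier basis, an intertwiner $T$ must be supported on pairs of characters where the two symbols agree, and the Fuglede-type identity for the conjugate symbols then follows immediately. A minimal numerical sketch:

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)

# Multiplier symbols on the dual of Z_N (complex-valued, with repeated values
# so that a non-trivial intertwiner exists).
vals = rng.standard_normal(3) + 1j * rng.standard_normal(3)
phi = vals[[0, 0, 1, 1, 2, 2, 0, 1]]
psi = vals[[1, 2, 0, 0, 2, 1, 1, 0]]

# In the Fourier basis, M_phi and M_psi are diagonal.  The analogue of
# Lemma (i)/(ii): M_phi T = T M_psi iff the (Fourier-side) matrix of T is
# supported on pairs (xi, gamma) with phi(xi) == psi(gamma).
support = np.equal.outer(phi, psi)
T = np.where(support,
             rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)),
             0)

M_phi, M_psi = np.diag(phi), np.diag(psi)
assert np.allclose(M_phi @ T, T @ M_psi)        # the hypothesis holds
assert np.allclose(np.conj(phi)[:, None] * T,   # the Fuglede-type conclusion:
                   T * np.conj(psi)[None, :])   # M_phibar T = T M_psibar
print("Fuglede-type identity verified for N =", N)
```

The key point reproduced here is that intertwining forces the support condition, and the support condition is insensitive to conjugating the symbols.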
\section{Introduction} Radio observations of gamma-ray burst (GRB) afterglows have rarely been successful in constraining their projected size or proper motion due to the large distances involved. In a handful of cases \citep{Taylor1998,Taylor1999,Frail2000,Alxander2019}, such as that of GRB~970508 \citep{Frail1997}, scintillation of the radio source induced by scattering of the emission by the interstellar medium has been used as an indirect probe of the source size. On the other hand, the only case so far in which Very Long Baseline Interferometry (VLBI) observations could produce a direct measurement of the size of a GRB afterglow is that of GRB~030329 \citep{Taylor2004}. More recently, VLBI observations of GRB~170817A \citep{Mooley2018,Ghirlanda2019} led to direct inference of the effects of relativistic motion, that is, an apparently superluminal displacement of the source centroid. In these favourable cases, the joint modelling of the light curves and of the evolution of the apparent size \citep{Mesler2013} or the centroid displacement \citep{Ghirlanda2019,Hotokezaka2019} helped to mitigate the problem of afterglow model degeneracies, which most often prevents the determination of the source's physical properties unless some parameters are fixed based on educated guesses. At the other end of the electromagnetic spectrum, observations of GRB afterglows at teraelectronvolt (TeV) photon energies \citep{Zhang2001,Nava2018} have also shown potential in breaking the modelling degeneracies and constraining the underlying physical processes. Such photon energies are in principle beyond the reach of synchrotron emission from shock-accelerated electrons \citep{DeJager1992,Nava2018,HESS2021}: inverse Compton scattering of the synchrotron photons by the same relativistic electrons (`synchrotron self-Compton', \citealt{Rybicki1986,Panaitescu1998,Chiang1999,Panaitescu2000,Sari2001}) is expected to dominate at these energies.
This process was shown to provide a viable explanation \citep{MAGIC2019IC} for the TeV emission component recently detected \citep{MAGIC2019OBS} by the Major Atmospheric Gamma Imaging Cherenkov (MAGIC) telescopes in association with GRB~190114C. Different emission processes mean different dependencies on the physical properties of the source, which enhances the prospects for breaking the degeneracies. Unfortunately, TeV observations of gamma-ray bursts are notoriously difficult, and only a few detections have been reported so far \citep{Atkins2000,MAGIC2019OBS,HESS201920B,GCN_201015A,GCN_201216C}, including the source \citep{HESS2019GCN,HESS2021} we study in this work. \section{Results} \subsection{The GRB~190829A event} GRB~190829A is a long GRB detected by the Gamma-ray Burst Monitor (GBM) onboard the \textit{Fermi} satellite on 2019 August 29 at 19:55:53 UT \citep{2019GCN.25551Fermi} and shortly thereafter \citep{2019GCN.25552Swift} by the Burst Alert Telescope (BAT) onboard the Neil Gehrels Swift Observatory (\textit{Swift} hereafter). GRB~190829A is the third GRB detected \citep{HESS2019GCN} at teraelectronvolt photon energies after GRB~190114C \citep{MAGIC2019} and GRB~180720B \citep{HESS2019}, but, compared to these, it features a smaller isotropic-equivalent energy \citep[][$E_\mathrm{iso} \sim 3 \times 10^{50}$ erg -- see also Appendix~\ref{sec:GBM_reduction}]{2019GCN.25660KonusWind}. The redshift of the host galaxy, $z = 0.0785$ (\citealt{Valeev2019}, corresponding to a luminosity distance of approximately $368$ Mpc adopting Planck cosmological parameters -- \citealt{Planck2016} -- or equivalently an angular diameter distance of $316$ Mpc), makes this event one of the closest long GRBs known to date.
The afterglow emission of GRB~190829A has been monitored up to several months after the burst: after an initial peak and a fading phase, a re-brightening in the optical light curve at $\sim$5 days was attributed to the associated supernova emission (confirmed by the spectroscopic observations of the 10.4m Gran Telescopio Canarias telescope, hereafter GTC -- \citealt{Hu2020}). Radio afterglow emission was first detected by the Australia Telescope Compact Array (ATCA) at 5.5 GHz \citep{Laskar2019} and then by the Northern Extended Millimeter Array (NOEMA) at 90 GHz \citep{2019GCN.25589NOEMA}, 20.2 hours and 29.48 hours after the burst, respectively. Subsequent high-cadence radio observations were performed with the Meer Karoo Array Telescope (MeerKAT) at 1.3 GHz and Arcminute Microkelvin Imager--Large Array (AMI-LA) at 15.5 GHz, reporting a fading radio source up to 143 days after the initial gamma-ray burst \citep{Rhodes2020}. \subsection{VLBI observations and Sedov length constraint} We conducted VLBI observations of GRB~190829A with the Very Long Baseline Array (VLBA) at 15 and 5 GHz and the European VLBI Network (EVN) alongside the enhanced Multi-Element Remotely Linked Interferometer Network (e-MERLIN) at 5 GHz, for a total of nine epochs between 9 and 116 days after the GRB (see Table \ref{tab:obs}). \begin{figure*}[t] \includegraphics[width=\textwidth]{Figs/Fig1_new.pdf} \caption{Source size upper limits and comparison with the model. Central panel: Downward arrows show our one-, two- and three-sigma upper limits on the source size at each epoch (see Appendix~\ref{sec:sedov_length_constraint_method}). The dashed lines show the source size evolution as predicted by our afterglow model (analytical $t^{5/8}$ scaling -- \citealt{Granot1999} -- in black, source size from our numerical model in red) assuming our best-fit parameters (see Appendix~\ref{sec:afterglow_model_fitting}). 
The pink shaded band shows the 90\% credible interval implied by our afterglow parameter uncertainties. The surrounding smaller panels show previews of the cleaned radio maps for each epoch (full-size maps are available on Zenodo -- see \citealt{zenodo_supplementary}). } \label{fig:size_evolution} \end{figure*} Despite the good angular resolution reached in all observations, the source remained consistently unresolved. In order to obtain reliable upper limits on the source size, we fitted a circular Gaussian model to the data through a Markov Chain Monte Carlo approach (Appendix~\ref{sec:vlbi_source_fit_method}), which we first tested against simulated sources immersed in real noise (Appendix~\ref{sec:vlbi_source_fit_validation}). From the analysis of our nine-epoch data, we obtained the limits reported in Table~\ref{tab:fit} and shown in Figure~\ref{fig:size_evolution}. Assuming an intrinsic source size evolution $s\propto t^{5/8}$, as expected \citep{Granot1999} for the observed size $s$ of a relativistic blastwave whose expansion is described by the self-similar Blandford-McKee solution \citep{Blandford1976}, we could translate our measurements into a largely model-independent upper limit on the ratio between the blastwave energy $E$ and the number density $n$ of the surrounding ambient medium, which sets the fundamental length scale of the expansion, namely the Sedov length \citep{Blandford1976} $\ell_\mathrm{S}=(3E/(4\pi n m_\mathrm{p} c^2))^{1/3}$, where $m_\mathrm{p}$ is the proton mass and $c$ is the speed of light. Since \citep{Blandford1976,Granot1999} $s\propto \ell_\mathrm{S}^{3/8} t^{5/8}$, we have that $E/n \propto \ell_\mathrm{S}^3 \propto s^8 t^{-5}$. After carefully evaluating the proportionality constant (Appendix~\ref{sec:sedov_length_constraint_method}) and adopting a flat prior on the source size, we obtained the posterior probabilities shown in Figure~\ref{fig:E_n_constraint}.
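The steep $E/n \propto s^8\,t^{-5}$ dependence makes even modest size limits informative. The inversion can be sketched numerically; here the normalisation coefficient ($\approx 3.9\times 10^{16}$ cm, from the \citealt{Granot1999} apparent-size scaling) and the input numbers are assumptions for illustration only, not the measured limits of our table:

```python
import numpy as np

MPC_CM = 3.0857e24           # one megaparsec in cm
D_A_CM = 316 * MPC_CM        # angular-diameter distance quoted in the text
Z = 0.0785                   # host-galaxy redshift

def e_over_n_limit(theta_mas, t_obs_days, coeff_cm=3.9e16):
    """Upper limit on E/n (erg cm^3) implied by an apparent-radius upper limit.

    Inverts s ~ coeff * (E_52/n)^(1/8) * [t/(1+z)]^(5/8) cm, i.e. the
    s ∝ ℓ_S^(3/8) t^(5/8) Blandford-McKee scaling; coeff_cm ≈ 3.9e16 cm is
    an approximate normalisation assumed here for illustration.
    """
    s_cm = theta_mas * 1e-3 / 206265.0 * D_A_CM   # angle → transverse size
    t_rest = t_obs_days / (1 + Z)
    return 1e52 * (s_cm / (coeff_cm * t_rest ** (5 / 8))) ** 8

# Illustrative numbers: a 0.1 mas apparent-radius limit 9 days post-burst.
print(f"log10[(E/n)/erg cm^3] < {np.log10(e_over_n_limit(0.1, 9.0)):.1f}")
```

Note the eighth-power dependence: halving the size limit tightens the $E/n$ bound by a factor of $2^8 \simeq 256$, which is why the most compact epoch dominates the constraint.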
We note that, since we do not resolve the source, only upper limits derived from these posteriors are meaningful. The most stringent upper limit is that from our first EVN epoch (solid turquoise line in Fig.~\ref{fig:E_n_constraint}), which yielded $\log[(E/n)/\mathrm{erg\,cm^{3}}] < 55.6$ at the 90\% credible level. After combining the posterior probabilities from all the epochs (grey thick line in Fig.~\ref{fig:E_n_constraint}, see Appendix~\ref{sec:sedov_length_constraint_method}) we obtained $\log[(E/n)/\mathrm{erg\,cm^{3}}] < 54.1$ at the 90\% credible level. \begin{figure}[t] \includegraphics[width=\columnwidth]{Figs/combined_E_n_constraint.pdf} \caption{Constraint on the energy-to-density ratio. Turquoise, orange and fuchsia lines show the posterior probability on $\log(E/n)$ obtained (Appendix~\ref{sec:sedov_length_constraint_method}) from the source size measurements (Appendix~\ref{sec:vlbi_source_fit_method}) in our VLBI imaging epochs, assuming the source to be a relativistic shock in self-similar expansion \citep{Blandford1976,Granot1999}. The grey line shows the combined posterior from all epochs. The red solid line is the posterior obtained from fitting our forward plus reverse shock afterglow emission model to the available multi-wavelength data.} \label{fig:E_n_constraint} \end{figure} \subsection{Time-dependent multi-wavelength modelling and interpretation} In order to test this result and gain deeper physical insight into this source, we performed a self-consistent modelling of all the available multi-wavelength observations of the afterglow. We included both the forward and reverse shock emission in our model, assuming a uniform angular profile for all jet properties within an opening angle $\theta_\mathrm{jet}$ and computing the shock dynamics self-consistently from deceleration down to the late side-expansion phase.
We computed the radiation in the shock downstream comoving frame including the effects of inverse Compton scattering on electron cooling (accounting for the Klein-Nishina suppression of the cross section above the relevant photon energy), assuming a fixed fraction $\epsilon_\mathrm{e}$ of the available energy density to be in relativistic electrons (which we assumed to be a fraction $\chi_\mathrm{e}$ of the total electrons, and to be injected with a power law energy distribution with index $p>2$), and a fraction $\epsilon_\mathrm{B}$ to be in the form of an effectively isotropic magnetic field. To compute the observed emission, we integrated over equal-arrival-time surfaces and considered relativistic beaming effects.\\ Figure~\ref{fig:lightcurves} shows the GRB~190829A afterglow light curves in the X-ray, optical and radio bands obtained by combining publicly available data (marked with circles -- see Appendix~\ref{sec:data_from_literature}) with the flux densities measured in our VLBI campaign (marked with stars -- see Appendix~\ref{sec:vlbi_data_reduction}). The lines represent the predictions of our best-fit afterglow model (Appendix~\ref{sec:afterglow_model}): the dashed lines show the contribution from the reverse shock only, while the solid lines also include the forward shock, which dominates the emission at all wavelengths from around one day onwards. In addition, Figure~\ref{fig:SSC_SED} shows the predicted spectral energy distributions at 5 h (blue) and 30 h (red) after the gamma-ray burst, which agree with the emission detected \citep{HESS2019GCN,HESS2021} by the High Energy Stereoscopic System (HESS; butterfly-shaped symbols show one-sigma uncertainties -- including systematics -- when assuming a power-law spectral shape). In our interpretation, therefore, the HESS emission is synchrotron self-Compton from the forward shock.
In contrast to what was reported in the main text of \citet{HESS2021}, we do not find significant photon-photon absorption, at least for our model parameters (see Appendix~\ref{sec:tau_gammagamma_afterglow}). From this modelling, we obtained $\log[(E/n)/\mathrm{erg\,cm^{3}}] = 53.9_{-0.2}^{+0.4}$ (90\% credible level, posterior shown by the red line in Fig.~\ref{fig:E_n_constraint}), in agreement with the VLBI size upper limits, as can also be appreciated from Fig.~\ref{fig:size_evolution}, where the source size evolution entailed by the afterglow emission model (red dashed line) is compared with our source size upper limits. We regard our ability to interpret all the available data self-consistently as a success of the standard gamma-ray burst afterglow model, confirming our general understanding of these sources, but we stress that, in order to obtain these results, we had to include a number of often overlooked (even though widely agreed upon in most cases) elements in the model. \begin{figure*} \centering \includegraphics[width=\textwidth]{Figs/multi_wave_light_curve.pdf} \caption{Multi-wavelength data and emission model. Circles represent X-ray fluxes (blue, values shown on the right-hand axis) or flux densities (all other colours, values shown on the left-hand axis) measured at the position of GRB~190829A at different times after the GRB trigger in several bands (see the legend). Optical flux densities have been corrected for both the Milky Way and host galaxy extinction, and the contribution of the host galaxy has been subtracted. The host galaxy contribution \citep{Rhodes2020} has also been subtracted from the AMI-LA radio flux densities at 15.5 GHz. Stars mark the flux densities measured in our VLBI epochs. Solid lines of the corresponding colours show the predictions of our emission model including both the forward and reverse shocks. Dashed lines single out the contribution of the reverse shock emission.
We interpret the initial plateau in the X-ray data as the superposition of the prompt emission tail and the rising reverse shock emission. } \label{fig:lightcurves} \end{figure*} \begin{figure*}[t] \begin{center} \includegraphics[width=\textwidth]{Figs/SSC_SED.pdf} \end{center} \caption{Predicted spectral energy distributions at the times of the HESS detections. We show with blue (resp.~red) solid lines our model at 5 hours (resp.~30 hours) after the gamma-ray trigger, with 90\% and 50\% credible bands in lighter shades. The HESS `butterflies' include the reported \citep{HESS2021} systematic error contribution (summed in quadrature). We also show XRT butterflies at the corresponding times (from our own analysis, see Appendix~\ref{sec:swift_data}), plus GTC optical and NOEMA, ATCA and AMI-LA radio datapoints taken at observing times lying within 0.2 dex.} \label{fig:SSC_SED} \end{figure*} The results of our afterglow model fitting (see Table~\ref{tab:afterglow_params} and Figure~\ref{fig:mcmc_total}) provided unique insights into the physics of gamma-ray bursts and of the forward and reverse shocks that form as the jet expands into the interstellar medium. Remarkably, we found that the usual simplifying assumption $\chi_\mathrm{e}=1$ in the forward shock is excluded (that is, we were unable to find a statistically acceptable solution when assuming all electrons in the shock downstream to be accelerated to relativistic speeds) and we obtained $\chi_\mathrm{e}<0.04$ at 90\% credibility when adopting a wide prior $-10<\log(\chi_\mathrm{e})<0$.
On the other hand, with such a wide prior we found our uncertainty on the total (collimation-corrected, two-sided) jet kinetic energy to extend towards unrealistically large values $E_\mathrm{jet}=E\,(1-\cos\theta_\mathrm{jet})\gtrsim 10^{53}\,\mathrm{erg}$ (assuming two oppositely oriented, identical jets of half-opening angle $\theta_\mathrm{jet}$), corresponding to very small fractions of accelerated electrons $\chi_\mathrm{e}\lesssim 10^{-3}$. When adopting a tighter prior $-2<\log(\chi_\mathrm{e})<0$, motivated by particle-in-cell simulations of relativistic collisionless shocks (which typically find $\chi_\mathrm{e}$ to be around a few per cent, \citealt{Spitkovsky2008,Sironi2011}), we obtained best-fit values consistent within the uncertainties, but the unrealistic-energy tails were removed. In what follows, we report the results for this latter prior choice (we report one-sigma credible intervals unless otherwise stated), while the results for the wider prior are given in the Appendix (Table~\ref{tab:afterglow_params}). The jet isotropic-equivalent kinetic energy at the onset of the afterglow is $E = 2.5^{+1.9}_{-1.3}\times 10^{53}\,\mathrm{erg}$ and the jet half-opening angle is $\theta_\mathrm{jet}=15.4_{-0.9}^{+1.3}$ degrees, implying a total jet energy $E_\mathrm{jet}= 9_{-4}^{+9}\times 10^{51}\,\mathrm{erg}$, which is about one half of the energy in the associated supernova \citep{Hu2020}. Given the observed gamma-ray isotropic-equivalent energy $E_\mathrm{\gamma,iso}=(2.91\pm 0.18)\times 10^{50}\,\mathrm{erg}$ (see Appendix~\ref{sec:GBM_reduction}), the implied gamma-ray efficiency is $\eta_\gamma = E_\mathrm{\gamma,iso}/(E_\mathrm{\gamma,iso} + E) = 1.2^{+1.0}_{-0.5}\times 10^{-3}$.
This efficiency is much lower than typical estimates for other gamma-ray bursts in the literature \citep{Fan2006,Zhang2007,Wygoda2016,Beniamini2016}, even though we note that a recently published study \citep{Cunningham2020} of GRB~160625B also found a low efficiency when leaving the $\chi_\mathrm{e}$ parameter free to vary. The prompt emission efficiency we find is compatible with that expected in the case of internal shocks within the jet \citep{Rees1994} with a moderate Lorentz factor contrast \citep{Kumar1999}. The jet bulk Lorentz factor before the onset of the deceleration is $\Gamma_0=57^{+4}_{-5}$. Considering the isotropic-equivalent radiated energy $E_\mathrm{iso}\sim 3\times 10^{50}\,\mathrm{erg}$, this is in agreement with the $\Gamma - E_\mathrm{iso}$ correlation found for long GRBs (see Fig.~\ref{fig:eiso_gamma} and \citealt{Ghirlanda2018}). The external medium number density (assumed constant) is relatively low, $n=2.1_{-1.0}^{+3.7}\times 10^{-1}\,\mathrm{cm^{-3}}$. This could be tentatively explained by the large offset of the GRB location with respect to the host galaxy centre. Indeed, using the GRB coordinates derived from our VLBI observations and the host galaxy centre position from the 2MASS catalogue \citep{Skrutskie2006}, we measure a separation of 9.6 arcsec, corresponding to a physical projected separation of 14.7 kiloparsec. This is comparable to the largest previously measured offset in long GRBs \citep[][that of GRB~080928]{Blanchard2016}, placing the burst, in principle, in the underdense outskirts of its host galaxy. On the other hand, even though the surrounding interstellar medium density may be low, the associated supernova indicates that the progenitor must have been a massive star, which should have polluted the environment with its stellar wind.
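The quoted projected offset follows directly from the angular separation and the angular-diameter distance given earlier in the text:

```python
import numpy as np

ARCSEC_TO_RAD = np.pi / (180 * 3600)
d_A_kpc = 316e3          # angular-diameter distance (316 Mpc) in kpc
sep_arcsec = 9.6         # VLBI position vs. 2MASS host-galaxy centre

offset_kpc = sep_arcsec * ARCSEC_TO_RAD * d_A_kpc
print(f"projected offset ~ {offset_kpc:.1f} kpc")   # ~14.7 kpc, as quoted
```

This is of course a lower limit on the true three-dimensional offset, since any separation along the line of sight is unobservable.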
By contrast, the sharp increase in the flux density preceding the light curve peak as seen in the optical and X-rays is inconsistent with a wind-like external medium, which would result in a much shallower rise \citep{Kobayashi2003}. This places stringent constraints on the properties of the pre-supernova stellar wind, whose termination shock radius \citep{vanMarle2006} must be smaller than the nominal deceleration radius in the progenitor wind $R_\mathrm{dec,w}=E/(4\pi A m_\mathrm{p}\Gamma_0^{2} c^2)$. The parameter $A$ sets the stellar wind density and can be expressed as a function of the wind mass loss rate $\dot M_\mathrm{w}$ and velocity $v_\mathrm{w}$ as $A=3\times 10^{35} \dot M_\mathrm{w,-5}v_\mathrm{w,3}^{-1}\equiv 3\times 10^{35} A_\star$, where $\dot M_\mathrm{w,-5}=\dot M_\mathrm{w}/10^{-5}\,\mathrm{M_\odot/yr}$ and $v_\mathrm{w,3}=v_\mathrm{w}/1000\,\mathrm{km/s}$. Requiring the wind termination shock radius \citep{vanMarle2006}, which depends on the wind properties and also on the external interstellar medium density $n_0$ and on the progenitor lifetime $t_\star$, to be smaller than $R_\mathrm{dec,w}$, we obtain \begin{equation} \dot M_\mathrm{w,-5} < 3\times 10^{-4} E_\mathrm{52}^{10/13}\Gamma_{0,2}^{-20/13} v_\mathrm{w,3}^{9/13} n_\mathrm{0,2}^{3/13} t_\mathrm{\star,Myr}^{-4/13}, \end{equation} where $E_\mathrm{52}=E/10^{52}$, $\Gamma_{0,2}=\Gamma_0/100$, $n_{0,2}=n_0/100\,\mathrm{cm^{-3}}$ and $t_{\star,Myr}=t_\star/1\,\mathrm{Myr}$. Inserting our best-fit afterglow parameters, we obtain $\dot M_\mathrm{w,-5} < 7 \times 10^{-2} v_\mathrm{w,3}^{9/13} n_\mathrm{0,2}^{3/13} t_\mathrm{\star,Myr}^{-4/13}$.
For the fiducial wind speed, external interstellar medium density (here we set $n_0=100\,\mathrm{cm^{-3}}$, assuming that, despite the large offset, the progenitor was embedded in a star forming region -- but the dependence on this parameter is very weak) and progenitor lifetime parameters, this limits the wind mass loss rate to $\dot M_\mathrm{w}< 7 \times 10^{-7}\,\mathrm{M_\odot\,yr^{-1}}$, which can be achieved only in the case of a very low metallicity or a low Eddington ratio \citep{Sander2020}. Alternatively, the low wind mass loss rate could be explained as the result of wind anisotropy induced by the fast rotation of the progenitor star \citep{Ignace1996,Eldridge2007}, which would reduce the wind mass loss rate along the stellar rotation axis. For the forward shock, we found a relativistic electron power law slope $p_\mathrm{FS}=2.010_{-0.0025}^{+0.0021}$, reminiscent of that expected for first-order Fermi acceleration in non-relativistic strong shocks \citep{Bell1978}, and slightly lower than the $p\sim 2.2$ expected for relativistic shocks \citep{Sironi2011}. When $p$ is close to (or below) 2, as in our case, the adopted value of the maximum electron energy $\gamma_\mathrm{max}$ starts impacting the normalisation of the relativistic electron energy spectrum. For this reason, we also fitted an additional free parameter $\gamma_\mathrm{max}/\gamma_\mathrm{min}$, which sets the ratio (assumed constant throughout the evolution) of the maximum to the minimum electron energy in the injected relativistic electron power law. We find $\log(\gamma_\mathrm{max}/\gamma_\mathrm{min})>4.6$ at the 90\% credible level. The one-sigma credible interval on the fraction of accelerated electrons is $\chi_\mathrm{e,FS}= 2.3_{-1.3}^{+1.1}\times 10^{-2}$ (note that the uncertainty extends down to $\chi_\mathrm{e}\sim 10^{-3}$ when adopting the wider prior -- see Table~\ref{tab:afterglow_params} -- as discussed above).
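The sensitivity to $\gamma_\mathrm{max}$ near $p=2$ can be seen from the moments of the injected power law: the mean electron Lorentz factor, which links the energy normalisation to the electron number, grows roughly logarithmically with $\gamma_\mathrm{max}/\gamma_\mathrm{min}$ when $p\to 2$, but saturates for steeper slopes. A small self-contained check (our own illustration, with arbitrary input values):

```python
def mean_gamma(p, gmin, gmax):
    """Mean Lorentz factor of electrons injected as N(gamma) ∝ gamma^-p
    on [gmin, gmax], from the closed-form moments of the power law."""
    num = (gmax ** (2 - p) - gmin ** (2 - p)) / (2 - p)   # energy moment
    den = (gmax ** (1 - p) - gmin ** (1 - p)) / (1 - p)   # number moment
    return num / den

# Near p = 2 the mean energy keeps growing with gamma_max (≈ log growth)...
print(mean_gamma(2.01, 1.0, 1e3), mean_gamma(2.01, 1.0, 1e6))
# ...while for a steeper slope it converges to gmin * (p-1)/(p-2) = 3.
print(mean_gamma(2.5, 1.0, 1e3), mean_gamma(2.5, 1.0, 1e6))
```

For $p=2.01$ the mean roughly doubles when $\gamma_\mathrm{max}/\gamma_\mathrm{min}$ goes from $10^3$ to $10^6$, whereas for $p=2.5$ it is essentially unchanged; this is why $\gamma_\mathrm{max}/\gamma_\mathrm{min}$ must be fitted when $p\approx 2$.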
The electron energy density fraction is $\epsilon_\mathrm{e,FS}=3.0^{+2.9}_{-1.7}\times 10^{-2}$, slightly lower than, but comparable to, the $\epsilon_\mathrm{e}\sim 0.1$ expected for mildly relativistic, weakly magnetised shocks \citep{Sironi2011}. By contrast, the magnetic field energy density fraction is $\epsilon_\mathrm{B,FS}=2.5^{+3.5}_{-1.3}\times 10^{-5}$, in line with previous studies of gamma-ray burst afterglows \citep{BarniolDuran2014,Wang2015}, implying inefficient magnetic field amplification by turbulence behind the shock or a relatively fast decay of such turbulence with the distance from the shock front \citep{Lemoine2013,Lemoine2015}. Interestingly, the best-fit values we found for the jet isotropic-equivalent energy $E$, the interstellar medium number density $n$ and the forward shock microphysical parameters $\epsilon_\mathrm{e,FS}$ and $\epsilon_\mathrm{B,FS}$ all closely resemble those found \citep{MAGIC2019IC} for another gamma-ray burst recently detected at TeV energies, GRB~190114C, under the constant external density assumption. For the reverse shock, we fixed $\chi_\mathrm{e,RS}=1$ as usual to reduce the number of parameters, since we could not constrain it to be lower than this value. We found that, in order to interpret the X-ray and optical peaks at $t\sim 10^{-2}\,\mathrm{days}$ as reverse shock emission (corresponding to the end of reverse shock crossing, see Eq.~\ref{eq:tdec}) without overpredicting \citep[see the typical radio reverse shock light curve shapes in][which show late-time bumps]{Resmi2016} the later radio data, the magnetic field in the shock downstream must have decayed rapidly after the reverse shock crossed the jet.
In particular, we found that the magnetic energy density must have decayed at least as fast as $B^2\propto V^{-\eta_\mathrm{B}}$ with $\eta_\mathrm{B}\geq 6$, where $V$ is the comoving volume of the shell (see Appendix~\ref{sec:afterglow_model} for a detailed description of the assumed dynamics before and after the shock crossing), in contrast to the usual simplifying assumption \citep{Kobayashi2000lightcurves} that $\epsilon_\mathrm{B}$ remains constant before and after the shock crossing. We consider this reasonable, since the magnetic field is expected \citep{Chang2008} to decay due to Landau damping of the shock-generated turbulence (which produces the magnetic field) after the shock crossing. For $\eta_\mathrm{B}\geq 6$ our results are independent of the exact value of $\eta_\mathrm{B}$, and we obtained $\epsilon_\mathrm{e,RS}=0.28_{-0.16}^{+0.32}$, $\epsilon_\mathrm{B,RS}=1.2_{-0.8}^{+1.8}\times 10^{-3}$ and an accelerated electron power law index $p_\mathrm{RS}=2.13_{-0.08}^{+0.04}$. \subsection{Viewing angle limits} The inference on the afterglow parameters described so far is based on the assumption of an on-axis viewing angle. On the other hand, a slightly off-axis viewing angle could explain the relatively low luminosity and low peak energy of the observed prompt emission \citep{Sato2021}. This would imply, however, some degree of proper motion in the VLBI images \citep{Fernandez2021}, which can in principle be tested with our observations.
Considering the EVN 5 GHz and VLBA 15 GHz epochs only, as these were performed under sufficiently homogeneous observing strategies and shared the same phase-reference calibrator (see Appendix~\ref{sec:vlbi_data_reduction}), the largest displacement compatible at one sigma with the absence of an observed proper motion in our data is $0.71$ mas (one-sigma upper limit -- including systematic errors -- on the displacement between our first VLBA 15 GHz epoch and our last EVN epoch, which are the two most widely separated in both time and centroid position). At the source distance, this corresponds to $\delta r_\mathrm{max} < 1.088$ parsec over the interval from $t_0=7.89$ to $t_1=69.5$ rest-frame days separating the two observations. In order to turn this into a limit on the source properties, we note that the apparent displacement $\delta r$ of an off-axis jet is bound to be smaller than, or at most equal to, the size increase $\delta s$ of a spherical relativistic blastwave with the same $E/n$ ratio (i.e., the same Sedov length) over the same time, since the jet can be thought of as a portion of that sphere. Again using the self-similar expansion law from \citet{Granot1999}, a relativistic blastwave would need to have $\log[(E/n)/\mathrm{erg\,cm^{3}}] \geq 59.3$ in order to produce an expansion $\delta s \geq \delta r_\mathrm{max}$ over the same time range, which is well beyond any conceivable value for a gamma-ray burst. This means that our astrometric measurements are not sufficiently precise to exclude any viewing angle. \begin{figure}[t] \includegraphics[width=\columnwidth]{Figs/theta_Gamma_comparison_diff.pdf} \caption{Limit on the viewing angle from compactness arguments on the prompt emission. Shaded areas show the allowed regions of the ($\Gamma$, $\theta_{\rm view}-\theta_{\rm jet}$) plane, derived from the compactness limits A (photon-photon pair production), B (scattering off $e^{\pm}$), and C (scattering off $e^-$ associated with baryons).
The solid black line ($\Gamma \theta = 1$) separates on-axis observers from off-axis ones. The green star marks the bulk Lorentz factor $\Gamma$ value inferred from the afterglow lightcurve modelling, while the purple star marks the parameters used in \citet{Sato2021}. Both solutions are in the allowed region that ensures the source to be transparent to the observed high-energy prompt emission photons.} \label{fig:compactness} \end{figure} A relatively tight limit on the viewing angle can be obtained, on the other hand, by requiring the jet to be optically thin to the photons we observed during the prompt emission. In particular, we performed the calculation of the optical depth to $\gamma$-ray photons for an arbitrary viewing angle and jet Lorentz factor \citep{Matsumoto2019b}, given the observed spectrum. We focused on the brightest emission episode, namely episode II, that provides the most stringent limit on the viewing angle. Photons of energy $E$ must have been able to escape from the emitting region and not pair-annihilate with other photons of energy $\geq (\delta m_e c^2)^2 / E$, where $\delta$ is the relativistic Doppler factor $\delta = [\Gamma(1-\beta \cos\theta)]^{-1}$ (limit A); they must not have been scattered off by pairs produced by the annihilation of other high energy photons (limit B); and they must not have been scattered off by the electrons associated with the baryons in the outflow (limit C). The first two sources of opacity depend on the observed spectrum, while the third one depends on the matter content of the jet, which we conservatively assumed to be the lowest compatible with the observed spectrum. 
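The quantity controlling all three limits is the relativistic Doppler factor $\delta=[\Gamma(1-\beta\cos\theta)]^{-1}$, which maps observed photon energies to the emitter frame. A quick sketch (our own illustration, using the two parameter sets marked by the stars in Fig.~\ref{fig:compactness}) of how the off-axis geometry suppresses it:

```python
import numpy as np

def doppler(gamma, theta_rad):
    """Relativistic Doppler factor delta = 1 / [Gamma (1 - beta cos theta)]."""
    beta = np.sqrt(1.0 - 1.0 / gamma ** 2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(theta_rad)))

# On-axis afterglow solution (green star): Gamma ~ 57 viewed at theta ~ 0,
# where delta approaches 2 * Gamma.
print(f"on-axis : delta = {doppler(57, 0.0):.0f}")

# Narrow-jet solution of Sato et al. (purple star): Gamma = 350 viewed at
# theta_view - theta_jet = 0.031 - 0.015 = 0.016 rad, i.e. Gamma*theta ~ 5.6,
# outside the Gamma*theta = 1 beaming cone: delta is strongly suppressed
# relative to the on-axis value 2 * Gamma = 700.
print(f"off-axis: delta = {doppler(350, 0.016):.0f}")
```

A smaller $\delta$ means larger comoving photon energies and densities for the same observed spectrum, which is why the off-axis region of the plane is so strongly constrained by the opacity requirements.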
Given the prompt emission spectrum observed in episode II, we computed the optical depth as a function of the bulk Lorentz factor $\Gamma$ and the viewing angle $\theta_{\rm view}$ for limits A, B and C following \citet{Matsumoto2019b}, assuming an emission duration $\delta t = 3\,\mathrm{s}$, which corresponds to the brightest peak in the emission episode. Figure~\ref{fig:compactness} shows the regions of the ($\Gamma$, $\theta_{\rm view} - \theta_{\rm jet}$) plane for which the optical depths are smaller than unity for the three limits. The solid black line corresponds to $\Gamma (\theta_\mathrm{view}-\theta_\mathrm{jet})= 1$, dividing the plot into on- and off-axis regions (inside or outside the relativistic beaming cone of the material at the jet border). As shown in the plot, the value of $\Gamma$ derived from our afterglow modelling (represented by the green star) lies within the relatively small allowed region. The resulting upper limit on the viewing angle is $\theta_\mathrm{view}-\theta_\mathrm{jet}\lesssim 2^{\circ}$. Adopting the jet opening angle $\theta_{\rm jet} = 15^{\circ}$ obtained from the afterglow modelling, a viewing angle greater than 17$^{\circ}$ would not be compatible with the observed emission. Recently, a two-component jet model has been proposed \citep{Sato2021} to explain the multi-wavelength observations of GRB~190829A. In particular, a narrow ($\theta_{\rm jet} = 0.015 \, \mathrm{rad} = 0.86^{\circ}$) and fast ($\Gamma = 350$) jet was invoked to reproduce the bumps observed in the optical and X-rays at $t \sim 1.4 \times 10^{3} \,\mathrm{s}$ after the trigger time, while a wide ($\theta_{\rm jet} = 0.1 \, \mathrm{rad} = 5.73^{\circ}$) and slow ($\Gamma = 20$) co-axial jet explains the late ($t \gtrsim 10^{5} \,\mathrm{s}$) X-ray and radio emission. In this scenario, the observer is at an angle $\theta_{\rm view} = 0.031 \, \mathrm{rad} \, (1.78^{\circ})$ with respect to the jet axis.
Since the authors of that work point out that the narrow jet could be responsible for the prompt emission of both episodes I and II, we also applied the compactness argument to this solution for comparison. As shown in Figure~\ref{fig:compactness}, the parameters they assumed for the narrow jet are still inside the allowed region, although quite close to its limit, and therefore the solution with an off-axis narrow jet as the source of the observed gamma-rays is not ruled out by the compactness argument. \section{Summary and discussion} Our VLBI observations and analysis provide evidence in support of the GRB~190829A afterglow being produced by a relativistic blastwave, at least at $t\geq 9\,\mathrm{days}$. We found that a forward plus reverse shock afterglow model, assuming an on-axis viewing angle and a uniform external medium density, is able to reproduce the observed light curves from the gamma rays down to the radio at 1.4 GHz, provided that only a relatively small fraction $\chi_\mathrm{e}\lesssim \mathrm{few}\times 10^{-2}$ of the electrons has been accelerated to relativistic speeds in the forward shock, and that the magnetic field in the reverse-shocked jet decays rapidly after the shock crossing. The required external medium density is relatively low, which points to a very weak progenitor stellar wind. The size evolution entailed by the model is in agreement with the limits set by our VLBI observations. On the other hand, while our calculations are based on the assumption of an on-axis jet, our analysis cannot exclude a viewing angle slightly off the jet border, in which case our derived parameters (especially those related to the reverse shock) would possibly require some modification. The jet and forward shock parameters obtained from our analysis are similar to those found for GRB~190114C \citep{MAGIC2019IC} in the constant external density scenario.
As a final note, we point out that other interpretations of this GRB, differing from the one presented in this paper, have been proposed in the literature. The main point of qualitative disagreement among these interpretations is the X-ray/optical peak at $t\sim 10^{-2}\,\mathrm{days}$ (i.e.~around $10^3$ s): \cite{ZhangT2020} attribute it to late central engine activity; \cite{ZhangL2021} invoke the interaction of the blastwave with a pre-accelerated, electron-positron-pair enriched shell formed due to annihilation of prompt emission photons, partly scattered by the dusty external medium; \cite{Fraija2020} propose instead a magnetar spin-down-powered origin. Given that the reverse shock is a natural consequence of the jet interaction with the external medium, our interpretation (within which we are able to explain all the data self-consistently) can be preferred over these on the basis of Occam's razor. Finally, \cite{Rhodes2020} proposed a forward plus reverse shock interpretation. In contrast with our interpretation, though, they attribute the 15.5 GHz data at $t>1\,\mathrm{day}$ to a reverse shock in the thick shell regime. We note that, in this regime, the reverse shock emission would peak at the end of the prompt emission (around 70 s post-trigger), so that the X-ray/optical peak would remain unexplained. \section{Acknowledgements} OS thanks Marco Landoni and the Information and Communication Technologies (ICT) office of the Italian National Institute for Astrophysics (INAF) for giving access to the computational resources needed to complete this work; he also acknowledges the Italian Ministry of University and Research (MUR) grant `FIGARO' (1.05.06.13) and the INAF-Prin 2017 (1.05.01.88.06) for financial support. TA, PM and YKZ made use of the computing resources of the China SKA Regional Centre prototype under support from National Key R\&D Programme of China (grant number 2018YFA0404603), NSFC (12041301) and Youth Innovation Promotion Association of CAS.
GG acknowledges support from MIUR, PRIN 2017 (grant 20179ZF5KS). BM acknowledges support from the Spanish Ministerio de Econom\'ia y Competitividad (MINECO) under grant AYA2016-76012-C3-1-P and from the Spanish Ministerio de Ciencia e Innovaci\'on under grants PID2019-105510GB-C31 and CEX2019-000918-M of ICCUB (Unidad de Excelencia ``Mar\'ia de Maeztu'' 2020-2023). The European VLBI Network (EVN) is a joint facility of independent European, African, Asian, and North American radio astronomy institutes. Scientific results from data presented in this publication are derived from the following EVN project code(s): EG010. e-VLBI research infrastructure in Europe is supported by the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement number RI-261525 NEXPReS. e-MERLIN is a National Facility operated by the University of Manchester at Jodrell Bank Observatory on behalf of STFC. The research leading to these results has received funding from the European Commission Horizon 2020 Research and Innovation Programme under grant agreement No. 730562 (RadioNet). The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Scientific results from data presented in this publication are derived from the following VLBA project codes: BA140, BO062. This work made use of the Swinburne University of Technology software correlator, developed as part of the Australian Major National Research Facilities Programme and operated under licence. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. \paragraph{Data availability.} The European Very-Long Baseline Interferometry Network data (PIs: Ghirlanda \& An) whose analysis has been presented in this study are publicly available at the EVN Data Archive at JIVE \footnote{\url{http://www.jive.nl/select-experiment}} under the identifiers RG010A, RG010B and RG010C.
The Very Long Baseline Array data are publicly available at the National Radio Astronomy Observatory (NRAO) Data Archive \footnote{\url{https://science.nrao.edu/facilities/vlba/data-archive}} under the identifiers BO062 (5 GHz epochs, PI: Orienti) and BA140 (15 GHz epochs, PI: An). The Neil Gehrels Swift Observatory data analysed in this study are publicly available at the UK Swift Data Centre \footnote{\url{https://www.swift.ac.uk/xrt\_spectra}} at the University of Leicester. The Fermi/GBM data analysed in this study are publicly available at the National Aeronautics and Space Administration (NASA) High-Energy Astrophysics Science Archive Research Centre (HEASARC) Fermi GBM Burst Catalog \footnote{\url{https://heasarc.gsfc.nasa.gov/W3Browse/fermi/fermigbrst.html}}. All reduced data and computer code are available from the corresponding authors upon reasonable request.
\section{Stochastic calculus}\label{appendix_stochastics} In this appendix we gather some theory on stochastic integrals that we rely on. In Subsection \ref{sec:Reminder.finite.dimension}, we collect standard definitions and results on stochastic calculus in Euclidean space. We mostly follow \cite{Kuo} and keep the setting as simple as possible. For further results, one could for instance refer to \cite{karatzas2012brownian}. At the end of Subsection \ref{sec:Reminder.finite.dimension}, we also make precise the notion of solutions to the SDEs for the particle orientations $\xi_i$ in \eqref{eq:Particles.T_D} and \eqref{eq:Particles.T_u}, which are SDEs on the manifold $\S^2$. Instead of dealing with SDEs on manifolds in an abstract way (see for instance \cite{Hsu}), we follow a more pedestrian approach. Namely, we just consider the corresponding SDEs in the ambient space $\mathbb{R}^3$ and show a posteriori that the solutions stay on $\S^2$. \medskip Finally, Subsection \ref{sec:infinite} is devoted to stochastic calculus in infinite dimensions. Stochastic calculus in infinite dimensional Hilbert spaces is quite well developed, see for instance the monograph \cite{da2014stochastic}, where stochastic integrals of the form $\int f(s) \, d B(s)$ are considered for Hilbert-valued Brownian motions $B(s) \in E$ and functions $f$ such that $f(s)\colon E \to H$ is a Hilbert-Schmidt operator to another Hilbert space $H$. Corresponding stochastic integrals in the Stratonovich sense are used for instance in \cite{MaurelliModinSchmeding19}. For our purpose, it is sufficient to consider the case where $E = \mathbb{R}^d$ and $H$ is an infinite dimensional separable Hilbert space. This case is considerably easier than the general framework. Therefore, we choose to present here the necessary theory in a self-contained way, based only on the finite dimensional stochastic calculus in the preceding subsection.
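Although the theory below is purely analytical, two of the central objects (the quadratic variation and the Itô--Stratonovich correction) can be illustrated by a short numerical sketch. The discretisation parameters below are arbitrary; this is an informal illustration, not part of the theory:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 200_000
dt = T / n

# One sample path of Brownian motion on [0, T]
dB = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate(([0.0], np.cumsum(dB)))

# Sample quadratic variation: sum of squared increments, approximating [B](T) = T
qv = np.sum(dB**2)

# Ito integral of B against dB: left-endpoint Riemann sums
ito = np.sum(B[:-1] * dB)
# Stratonovich integral: midpoint rule, telescoping exactly to B(T)^2 / 2
strat = np.sum(0.5 * (B[:-1] + B[1:]) * dB)

print(qv, strat - ito)  # qv close to T = 1; difference equals qv / 2
```

The observed difference between the two integrals is exactly half the sample quadratic variation, matching the correction term $\frac12[X,B](t)$ in the definition of the Stratonovich integral recalled below.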
\subsection{Reminder on stochastic calculus in finite dimensional space} \label{sec:Reminder.finite.dimension} In what follows we consider a probability space $(\Omega,\mathcal{A}, P)$ and recall that a continuous-time stochastic process is a map $t \mapsto X(t)$, $t\in \mathbb{R}^+$, such that $X(t)$ is a random variable. We first recall the definition of a Brownian motion. \begin{defi}[Brownian Motion] A continuous-time stochastic process $B : \mathbb{R}^+ \times \Omega \to \mathbb{R}$ is called a Brownian motion starting in $a\in \mathbb{R}$ if and only if \begin{enumerate}[label=(\roman*).,ref=(\roman*)] \item $B(0,\omega) = a$ for each $\omega \in \Omega$. \item For any partition $0 \leq t_0 <t_1 <\cdots<t_n$, the increments $B(t_{i+1})-B(t_i)$ are independent random variables with distribution $B(t_{i+1})-B(t_i) \sim N(0,t_{i+1}-t_i)$. \item $P$-almost every sample path $t \mapsto B(t,\omega)$ is continuous. \end{enumerate} An $\mathbb{R}^d$-valued stochastic process $B(t,\omega)=(B_1(t,\omega),\cdots,B_d(t,\omega))$ is called a multi-dimensional Brownian motion if and only if its components are independent one-dimensional Brownian motions. \end{defi} In order to recall the definition of stochastic integrals, we first introduce filtrations and the notion of adapted processes. \begin{defi}[Filtrations and adapted processes] Let $(\Omega,\mathcal{A}, P)$ be a probability space. \begin{itemize} \item A continuous-time filtration on $(\Omega, \mathcal{A})$ is a family $ (\mathcal{F}_t)_{t \geq 0}$, indexed by time, of increasing $\sigma$-algebras $\mathcal{F}_t \subseteq \mathcal{A}$, that is, satisfying $\mathcal{F}_t \subseteq \mathcal{F}_s$ for $t \leq s$. \item A real-valued stochastic process $M$ on $(\Omega, \mathcal{A}, P)$ is adapted with respect to $(\mathcal{F}_t)_{t \geq 0}$ if $M(t)$ is $\mathcal{F}_t$-measurable for all $t \geq 0$.
\item The natural (canonical) filtration $(\mathcal{F}_t)_{t \geq 0}$ generated by a stochastic process $M$, denoted by $\sigma(M)$, is the filtration formed, for each $t$, by the smallest $\sigma$-algebra for which the maps $\omega \mapsto M(s,\omega)$ are measurable for all $ 0 \leq s \leq t$. \end{itemize} \end{defi} \begin{defi}[Martingale] A stochastic process $M$ adapted to a filtration $(\mathcal{F}_t)_{t \geq 0}$ is a martingale if the following conditions hold: \begin{itemize} \item $E[|M(t)|] < + \infty$ for all $t \geq 0$. \item $E[M(t) \mid \mathcal{F}_s] = M(s)$ for all $ 0 \leq s \leq t $. \end{itemize} \end{defi} \subsubsection*{Itô integral with respect to Brownian motion and Itô formula} We consider below a Brownian motion and a filtration $(\mathcal{F}_t)_{ t \geq 0}$ such that \begin{itemize} \item For all $t \geq 0$, $B(t)$ is $\mathcal{F}_t$-measurable. \item For all $0 \leq s \leq t$, the random variable $B(t)-B(s)$ is independent of the $\sigma$-algebra $\mathcal{F}_s$. \end{itemize} We recall the construction of the Itô integral, which parallels that of the Riemann integral, proceeding first for step stochastic processes given by $$ f(\omega)=\underset{i=0}{\overset{n-1}{\sum}}A_i(\omega) 1_{]s_i,s_{i+1}]}, $$ for a given decomposition $s_0=0<s_1<\cdots<s_n=T $ of $[0,T]$ such that $ A_i$ is $\mathcal{F}_{s_i}$-measurable and $E[A_i^2]<+\infty$. We set $$I(f)=\underset{i=0}{\overset{n-1}{\sum}}A_i (B({s_{i+1}}) -B({s_{i}})). $$ This defines a linear mapping $I$; moreover, we have $E[ I(f)]=0$ and \begin{equation}\label{isometry_ito} E( I(f)^2)=\int_0^T E(|f(t)|^2) dt \end{equation} for each step stochastic process, see \cite[Lemma 4.3.2]{Kuo}. Next, the idea is to extend this definition to every stochastic process $f$ in the space $L^2_{\text{ad}}([0,T]\times \Omega)$, defined as the space of stochastic processes $f$ satisfying \begin{itemize} \item $ \int_0^T E |f(s)|^2 ds < +\infty $, \item $f$ is adapted to the filtration $(\mathcal{F}_s)_{0 \leq s \leq T}$.
\end{itemize} Indeed, each stochastic process $f\in L^2_{\text{ad}}([0,T]\times \Omega)$ can be approximated by a sequence of step stochastic processes $(f_k)_k$ in the following sense: $$ \underset{k \to \infty}{\lim} \int_0^T E | f(s)-f_k(s)|^2 ds = 0, $$ see \cite[Lemma 4.3.3]{Kuo}. According to this, one can define the Itô integral as the limit \begin{equation}\label{def_ito} I(f)= \underset{k \to \infty}{\lim} I(f_k), \end{equation} where the limit does not depend on the choice of the sequence $(f_k)_k$. In particular, thanks to \eqref{isometry_ito}, $ I \colon L^2_{\text{ad}}([0,T]\times \Omega) \to L^2(\Omega)$ is an isometry. We gather below additional properties, see \cite[Theorem 4.3.5, Theorem 4.6.1 and Theorem 4.6.2]{Kuo}. \begin{thm}[Itô Isometry] \label{th:Ito} Let $f \in L^2_{\text{ad}}([0,T]\times \Omega)$. The stochastic process corresponding to the Itô integral $I(f)$ defined in \eqref{def_ito} and denoted by $$ X(t)= \int_0^t f(s) dB(s), \: 0 \leq t \leq T, $$ is a continuous martingale with respect to the filtration $ (\mathcal{F}_t)_{0 \leq t \leq T}$ such that $E[X(t)]=0$ and $$ E \left| \int_0^t f(s) dB(s) \right|^2=\int_0^t E|f(s)|^2 ds. $$ \end{thm} \begin{rem} \label{rem:multiIto} The definition of Itô integrals and the Itô isometry extend straightforwardly to the multidimensional case: for $d$ independent Brownian motions $B_i$, $1 \leq i \leq d$, and an $\mathbb{R}^{d \times d}$-valued process $ \sigma\in {L}^2_{\text{ad}}([0,t] \times \Omega )$, \begin{align} \label{eq:mulit.Ito.isometry} \Big\| \int_0^t \sigma(s) dB(s)\Big\|^2_{L^2(\Omega)} = \int_0^t \mathbb{E}\|\sigma(s)\|_{HS}^2 \, ds, \end{align} where $\|A\|_{HS} = \sqrt{\Tr(A^T A)}$ denotes the Hilbert-Schmidt norm of a matrix $A \in \mathbb{R}^{d \times d}$. \end{rem} Following \cite[Subsection 7.5]{Kuo} we recall the multidimensional Itô formula.
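As an informal numerical illustration of the Itô isometry of Theorem \ref{th:Ito} (not part of the theory), take the adapted integrand $f(s)=B(s)$, for which $\int_0^t E|f(s)|^2\,ds = t^2/2$; a left-endpoint Monte Carlo approximation over many paths, with arbitrarily chosen discretisation parameters, recovers this value:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T = 20_000, 200, 1.0
dt = T / n_steps

# Brownian increments for many independent paths
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
# Left endpoints B(s_i) of each step (with B(0) = 0)
B_left = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)[:, :-1]])

# I(f) = sum_i B(s_i) (B(s_{i+1}) - B(s_i)): an adapted step-process integral
I = np.sum(B_left * dB, axis=1)

# E[I] should vanish and E[I^2] should approximate T^2 / 2 = 0.5
print(I.mean(), (I**2).mean())
```

The left-endpoint evaluation is essential: it makes the step process adapted, which is exactly the condition under which the isometry \eqref{isometry_ito} holds.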
\begin{defi}[Itô process] Let $ \sigma\in {L}^2_{\text{ad}}([0,T] \times \Omega)$ be an $\mathbb{R}^{d\times d}$-valued process and $b\in{L}^2_{\text{ad}}([0,T] \times \Omega)$ an $\mathbb{R}^d$-valued process. Then, the stochastic process \begin{equation}\label{def_ito_process} X(t)=X_0 + \int_0^t \sigma(s) dB(s) + \int_0^t b(s) ds \end{equation} is called an Itô process with drift $b$ and diffusion $\sigma$. \end{defi} \begin{lem}[Itô formula] \label{lem:Ito.formula} Let $\phi \in {C}(\mathbb{R}\times\mathbb{R}^d ; \mathbb{R}^d)$ with continuous partial derivatives $\partial_t\phi $, $\nabla_x \phi $, $\nabla_x^2\phi$. Then $\phi(t,X(t))$ is also an Itô process, satisfying \begin{multline}\label{eq:Ito_formula} \phi(t,X(t))=\phi(0,X(0))+ \int_0^t \nabla \phi(s,X(s)) \sigma(s) dB(s)\\ + \int_0^t \left[\partial_t \phi(s,X(s)) + b(s) \cdot \nabla_x\phi(s,X(s)) + \frac{1}{2} \sigma(s)\sigma(s)^\top :\nabla_x^2 \phi(s,X(s)) \right ]ds. \end{multline} \end{lem} \subsubsection*{Stratonovich Integral} We now turn to the definition of the Stratonovich integral. We first recall the definitions of quadratic variation and covariation. \begin{defi}[Quadratic variation and covariation] The quadratic variation of a real-valued stochastic process $M$ is the process, written $[M]$, defined as the following limit in probability, if it exists: $$ [M](t) = \underset{\|P\|\to 0}{\lim} \underset{i}{\overset{n}{\sum}} (M(t_i)-M(t_{i-1}))^2, $$ where $P=\{t_0, t_1,\cdots, t_n \}$ is any partition of $[0,t]$ and $\|P\|=\underset{1 \leq i \leq n}{\max}|t_i-t_{i-1}|$. Moreover, for two real-valued stochastic processes $X, Y$, the covariation, denoted by $[X,Y]$, is defined as $$ [X,Y](t) = \underset{\|P\| \to 0 }{\lim} \underset{i}{\overset{n}{\sum}} (X({t_i})-X({t_{i-1}}))(Y({t_i})-Y({t_{i-1}})). $$ \end{defi} \begin{rem}\label{rem_quadratic_variation} \begin{enumerate}[label=(\roman*).,ref=(\roman*)] \item The quadratic variation exists for all continuous finite variation processes, and is zero.
\item The quadratic variation exists for all right-continuous, square integrable martingales with left-hand limits, see \cite[Formula (6.4.4)]{Kuo}. \item If $B$ and $W$ are two independent Brownian motions, $f\in L^2_{\text{ad}}([0,t]\times \Omega)$ and $X(t)= \int_0^t f(s) dB(s)$, then $[X,W](t)=0$ and $[X,B](t)=\int_0^t f(s) ds$. \end{enumerate} \end{rem} \begin{defi}[Stratonovich integral] \label{def:Strato} Let $X \in L^2_{\text{ad}}([0,T]\times \Omega)$ such that $[X]$ exists. The Stratonovich integral is defined as \begin{equation}\label{eq:Ito_vs_Stratonovich} \int_0^t X(s) \circ dB(s) = \int_0^t X(s) dB(s) +\frac{1}{2} [X,B](t). \end{equation} \end{defi} \begin{rem}\label{rem:Strato} \begin{enumerate}[label=(\roman*).,ref=(\roman*)] \item \label{it:Strato.Ito.process} In particular, if $X$ is given as an Itô process of the form \eqref{def_ito_process}, the independence of $B_i$, $B_j$ for $i \neq j$ and Remark \ref{rem_quadratic_variation} iii) yield \begin{align*} \int_0^t X(s) \circ dB(s) &= \int_0^t X(s) dB(s) +\frac{1}{2} \underset{i,j}{\sum} \left[\int_0^t \sigma_{ij}(s) dB_j,B_i \right ](t)\\ &= \int_0^t X(s) dB(s) +\frac{1}{2} \int_0^t \text{tr}\, \sigma(s) ds. \end{align*} \item \label{it:Conversion.composition} One can extend the above formula to all $g \in {C}^1([0,T] \times \mathbb{R}^d; \mathbb{R}^d)$ in the following way: \begin{align} \label{eq:Conversion.composition} \int_0^t g(s,X(s)) \circ dB(s) &= \int_0^t g(s,X(s)) dB(s) +\frac{1}{2} \int_0^t \text{tr}(\nabla g \, \sigma ) ds, \end{align} where we used the Itô formula \eqref{eq:Ito_formula} in the case $g\in {C}^2$ and a density argument in the case ${g\in {C}^1}$. \item \label{it:stratonovich.chain.rule} The chain rule holds for Stratonovich integrals.
More precisely, if \rhnew{$\sigma \in L^2_{\text{ad}}([0,T]\times \Omega;\mathbb{R}^{d\times d})$} and $Y(t) := \int_0^t \sigma(s) \circ \, d B(s) $ and $\phi : [0,T] \times \mathbb{R}^d \to \mathbb{R}$ is a smooth function, then \begin{align} \phi(T,Y(T)) = \phi(0,Y(0)) + \int_0^T \partial_t \phi(t,Y(t)) \, d t + \int_0^T \nabla \phi(t,Y(t)) \rhnew{\sigma(t)} \circ \, d B(t). \end{align} \end{enumerate} \end{rem} \subsubsection*{Stochastic differential equations}\label{section_SDE_existence} Following \cite[Subsection 10.3]{Kuo}, we summarize the existence and uniqueness theory for SDEs of the form $$ dX(s)=\sigma(s,X(s)) dB(s)+b(s,X(s))ds, \qquad 0\leq s \leq T, \: X(0)= X^0, $$ which must be interpreted as the stochastic integral equation \begin{equation}\label{eq:SDE} X(t)=X^0 +\int_0^t \sigma(s,X(s)) dB(s)+\int_0^t b(s,X(s))ds. \end{equation} \begin{defi} \label{def:SDE} We say that a jointly measurable stochastic process $X$ on $[0, T]$ is a solution of the SDE \eqref{eq:SDE} if \begin{enumerate}[label=(\roman*).,ref=(\roman*)] \item The stochastic process $\sigma(s,X(s)) $ lies in $L^2_{\text{ad}}([0,T]\times \Omega)$ and $\displaystyle \int_0^t \sigma(s,X(s)) dB(s)$ is an Itô integral for each $t\in [0,T]$. \label{it:sigma} \item Almost all sample paths of the stochastic process $ b(s,X(s))$ belong to $L^1(0,T)$. \item For each $t\in [0,T]$, equation \eqref{eq:SDE} holds true almost surely. \label{it:SDE} \end{enumerate} \end{defi} A global existence and uniqueness result can be obtained, in a similar way as for ODEs, in the case where both $\sigma(t,\cdot)$ and $b(t,\cdot)$ satisfy a global Lipschitz property on $\mathbb{R}^d$. \begin{thm}\label{thm_SDE_existence_uniqueness} Let $\sigma = (\sigma_{ij})_{1\leq i,j\leq d}$ and $b = (b_i)_{1\leq i \leq d}$ be measurable on $[0,T] \times \mathbb{R}^d$ and satisfy a global Lipschitz property in $x$, uniformly on $[0,T]$.
Assume that $X^0$ is an $\mathcal{F}_0$-measurable random variable with $\mathbb{E}[|X^0|^2] <\infty$. Then the SDE \eqref{eq:SDE} has a unique continuous solution $X\in L^2_{\text{ad}}([0,T] \times \Omega)$. \end{thm} We end this subsection by making the following observations regarding SDEs on a smooth compact submanifold $M \subset \mathbb{R}^d$. In this case, it is convenient to consider Stratonovich SDEs, which read in integral form \begin{equation}\label{eq:SDE.M} X(t)=X^0 +\int_0^t \sigma(s,X(s)) \circ dB(s)+\int_0^t b(s,X(s))ds,\: t \in[0,T], \end{equation} for $(\sigma,b) \colon [0,T] \times M \to \mathbb{R}^{d \times d}\times\mathbb{R}^d$. We first remark how such a Stratonovich SDE may be transformed into an Itô SDE: \begin{rem} \label{rem:SDE.Strato.Ito} If $\sigma \in C^1([0,T] \times M)$ and $X$ solves the Itô SDE \begin{align} X(t)=X^0 +\int_0^t \sigma(s,X(s)) dB(s)+\int_0^t \left[ b(s,X(s)) + \frac 1 2 (\sigma : \nabla \sigma)(s,X(s)) \right] ds, \end{align} where $(\sigma : \nabla \sigma)_i = \sum_{k} \sigma_{k} \cdot \nabla \sigma_{ki}$, then $X$ satisfies \eqref{eq:SDE.M} almost surely. Indeed, this follows immediately from \eqref{eq:Conversion.composition} with $g = \sigma$. \end{rem} \begin{thm} \label{thm:SDE.M} Assume that $X^0 \colon \Omega \to M$ is $\mathcal F_0$-measurable, and that $\sigma \in C^2([0,T] \times M;\mathbb{R}^{d \times d})$ and $b \in C^1([0,T] \times M ;\mathbb{R}^d)$ satisfy $b(s,x) \in T_x M$ and $\sigma(s,x) v \in T_x M$ for all $(s,x) \in [0,T] \times M$ and for all $v \in \mathbb{R}^d$. Then, there exists a unique solution $X$ to \eqref{eq:SDE.M}. \end{thm} \begin{proof} For simplicity we restrict to the case $M = \S^{d-1}$. Since $M \subset \mathbb{R}^d$ is compact and smooth, there exist extensions $\tilde \sigma \in C_c^2([0,T] \times \mathbb{R}^d)$ and $\tilde b \in C_c^1([0,T] \times \mathbb{R}^d)$ such that $(\tilde \sigma, \tilde b)|_M = (\sigma,b)$.
By Theorem \ref{thm_SDE_existence_uniqueness} (and the remark above), there exists a unique solution to the SDE \eqref{eq:SDE.M} with $M$ replaced by $\mathbb{R}^d$ and $\sigma$ replaced by $\tilde \sigma$. By the Stratonovich chain rule, Remark \ref{rem:Strato} \ref{it:stratonovich.chain.rule}, and the assumptions on $\sigma$ and $b$, we have \begin{align} \, d |X|^2 = 0, \end{align} which ensures that $X$ takes values in $M=\S^{d-1}$. Thus, $X$ is a solution to \eqref{eq:SDE.M}. Conversely, every solution to \eqref{eq:SDE.M} also satisfies the Itô SDE with $\tilde \sigma$ on $\mathbb{R}^d$, for which we already know uniqueness. \end{proof} \subsection{Extension to infinite dimensions} \label{sec:infinite} Let $H$ be an infinite dimensional separable Hilbert space, and let $(e_k)_k$ be an orthonormal basis of $H$. Let $(\Omega, \mathcal{A},P)$ be a probability space and $(\mathcal{F}_t)_{t \geq 0}$ a filtration as in the previous subsection. Let us denote by $L^2_{\text{ad}}([0,T]\times \Omega;\mathcal{L}(\mathbb{R}^d,H))$ the space of operator-valued adapted processes $X:[0,T]\times \Omega \to \mathcal{L}(\mathbb{R}^d,H)$ such that $$ \|X\|_{L^2_{\text{ad}}([0,T]\times \Omega;\mathcal{L}(\mathbb{R}^d,H))}^2:=\mathbb{E} \int_0^T \| X(t)\|_{HS}^2 dt < \infty, $$ where \begin{align} \| X\|^2_{HS}=\text{tr} (XX^\star)=\underset{k}{\sum}\| X^\star e_k\|_{\mathbb{R}^d}^2 = \sum_l \| X a_l\|_{H}^2 \end{align} for the standard basis $(a_l)_l$ of $\mathbb{R}^d$. Note that we can identify $\L(\mathbb{R}^d,H) = H^d$ through $A b = \sum_i A_i b_i$ for $A \in \L(\mathbb{R}^d,H)$, $b \in \mathbb{R}^d$, where $A_i = A a_i$. Then $A^\star e_k = \sum_i (A_i,e_k)a_i$. We extend the definition of the Itô integral in the following way. \begin{defi} Let $B$ be a $d$-dimensional Brownian motion and $X \in L^2_{\text{ad}}([0,T]\times \Omega;\mathcal{L}(\mathbb{R}^d,H))$.
Then, for $0 \leq t \leq T$, \begin{align} \label{def:Ito.infinite} \int_0^t X(s) \, d B(s) := \sum_{k \in \mathbb{N}} \sum_{i=1}^d \left(\int_0^t (X_i(s),e_k) \, d B_i(s) \right)e_k. \end{align} \end{defi} By density, we have that this is well defined and the Itô isometry carries over. \begin{prop} \label{prop:Ito.isometry.infinite} Let $B$ and $X$ be as above. Then $\int_0^t X(s) \, d B(s) \in L^2(\Omega;H)$ and for all $0 \leq t \leq T$ \begin{align} \label{eq:Ito.isometry.infinite} \mathbb{E}\left\|\int_0^t X(s) \, d B(s) \right\|^2_H = \int_0^t \mathbb{E}\|X(s)\|_{HS}^2 \, d s. \end{align} \end{prop} \begin{proof} We observe that \begin{align} \int_0^t X(s) \, d B(s) &= \sum_k \lambda_k e_k, \\ \lambda_k &= \sum_i\int_0^t (X_i(s),e_k) \, d B_i(s). \end{align} Then, by Theorem \ref{th:Ito}, \begin{align} \mathbb{E} \lambda_k^2 = \sum_{i=1}^d \int_0^t\mathbb{E}|(X_i(s),e_k)|^2 \, d s = \int_0^t\mathbb{E}|(X(s),e_k)|^2 \, d s, \end{align} and thus \begin{align} \mathbb{E} \sum_k \lambda_k^2 = \int_0^t \mathbb{E}\|X(s)\|_{HS}^2 \, d s. \end{align} Therefore, by Parseval's identity and the dominated convergence theorem, the integral \eqref{def:Ito.infinite} is well-defined and \eqref{eq:Ito.isometry.infinite} holds. \end{proof} Similarly, we can define the Stratonovich integral as follows. \begin{defi} \label{defi:Strato.infinite} Let $B$ be a $d$-dimensional Brownian motion and $X \in L^2_{\text{ad}}([0,T]\times \Omega;\mathcal{L}(\mathbb{R}^d,H))$. Moreover, assume that \begin{align} \sum_{k \in \mathbb{N}} \sum_{i=1}^d \mathbb{E} \left[[(X_i,e_k),B_i]^2(T)\right] < \infty. \end{align} Then, we define the Stratonovich integral for $ 0\leq t \leq T$ \begin{align} \label{def:Strato.infinite} \int_0^t X(s) \circ \, d B(s) := \sum_{k \in \mathbb{N}} \sum_{i=1}^d \left(\int_0^t (X_i(s),e_k) \, d B_i(s) + \frac{1}{2}[(X_i,e_k),B_i](t) \right)e_k.
\end{align} \end{defi} \begin{rem} Clearly Proposition \ref{prop:Ito.isometry.infinite} implies that, under the assumptions of this definition, the Stratonovich integral is well-defined and satisfies \begin{align} \mathbb{E}\left\|\int_0^t X(s) \circ \, d B(s) \right\|^2_H \leq 2 \int_0^t \mathbb{E}\|X(s)\|_{HS}^2 \, d s + \frac 1 2 \sum_{k \in \mathbb{N}} \mathbb{E} \left[ \left( \sum_{i=1}^d [(X_i,e_k),B_i](t)\right)^2 \right] . \end{align} \end{rem} We are especially concerned with Stratonovich integrals when the integrand is of the form $X(s) = g(Y(s))$, where $Y$ is an Itô process. \begin{prop} \label{pro:Strato.infinite} Let $0 < t \leq T$, let $B$ be a $d$-dimensional Brownian motion, and let $g \in C^1(\mathbb{R}^d ; \L(\mathbb{R}^d,H))$ with $\|g\|_{C^1(\mathbb{R}^d ;\L(\mathbb{R}^d,H))} < \infty$, $\sigma \in L^2_{\text{ad}}([0,T]\times \Omega;\mathbb{R}^{d \times d})$ and $b \in L^2_{\text{ad}}([0,T]\times \Omega;\mathbb{R}^{d})$. Let $Y \colon \Omega \times [0,T] \to \mathbb{R}^d$ be the Itô process \begin{align} \label{eq:Y.Ito} Y(t) = Y(0) + \int_0^t \sigma(s) \, d B(s) + \int_0^t b(s) \, d s. \end{align} Then $g \circ Y$ satisfies the assumptions of Definition \ref{defi:Strato.infinite} and \begin{align} \label{eq:Strato.composition.infinite} Z(t):= \int_0^t g(Y(s)) \circ \, d B(s) = \int_0^t g(Y(s)) dB(s)+ \sum_i \sum_j \frac{1}{2}\int_0^t \partial_i g_j(Y(s))\sigma_{ij}(s) \, d s. \end{align} Moreover, $Z \in H^{\frac 1 2_-}((0,T); H)$ and \begin{align} \label{est:Z.H^s} \mathbb{E}\left[ \|Z\|_{H^{\frac 1 2_-}((0,T); H)}^2 \right]^{\frac 1 2 } \leq C_T \left(\| g \|_{L^\infty(\mathbb{R}^d; H^d)} + \| \sigma\|_{L^\infty(\Omega \times [0,T];\mathbb{R}^{d \times d}_{HS})} \|\nabla g\|_{L^\infty( \mathbb{R}^d;H^{d \times d}_{HS})} \right), \end{align} for a constant $C_T$ that depends only on $T$. \end{prop} \begin{proof} Clearly $g \circ Y \in L^2_{\text{ad}}([0,T]\times \Omega;\mathcal{L}(\mathbb{R}^d,H))$.
Moreover, for all $k \in \mathbb{N}$, Definition \ref{def:Strato} and \eqref{eq:Conversion.composition} imply \begin{align} \sum_{i=1}^d [(g_i \circ Y,e_k),B_i](t) = \sum_{i=1}^d \sum_{j=1}^d \int_0^t (\partial_j g_i(Y(s)),e_k) \sigma_{ij}(s) \, d s. \end{align} In particular, \begin{align} \label{est:Z.H^s.1} \sum_{k \in \mathbb{N}} \mathbb{E} \left[ \sum_i [(g_i \circ Y,e_k),B_i]^2(t) \right] &\leq t^2 \left\| \sum_{i,j = 1}^d \sigma_{ij} \partial_i g_j \circ Y \right\|^2_{L^\infty(\Omega \times (0,t);H)} \\ &\leq t^2 \|\nabla g\|^2_{L^\infty((0,t);H^{d\times d})} \|\sigma\|^2_{L^\infty(\Omega \times (0,t))}. \end{align} Thus, $Z$ is well-defined and satisfies \eqref{eq:Strato.composition.infinite}. For $s \in (0,1)$, recall \begin{align} \|f\|_{H^s((0,T);H)} = \|f\|_{L^2((0,T);H)} + \left(\int_0^T \int_0^T \frac{\|f(t_1) - f(t_2)\|^2_H}{|t_1-t_2|^{1+2s}} \, d t_1 \, d t_2 \right)^{\frac 1 2}. \end{align} Thus, \eqref{est:Z.H^s} follows from \eqref{eq:Ito.isometry.infinite} and \eqref{est:Z.H^s.1} (adapted to $(t_1,t_2)$). \end{proof} \begin{rem} \label{rem:manifold} This proposition can be directly extended to the case where $M \subset \mathbb{R}^d$ is a smooth submanifold, $g \in C^1(M; \L(\mathbb{R}^d,H))$ with $\|g\|_{C^1(M;\L(\mathbb{R}^d,H))} < \infty$ and $Y \colon \Omega \times [0,T] \to M$ is the Itô process as above with the constraint $\sigma (s) v \in T_{Y(s)}M$ for all $v \in \mathbb{R}^d$ (which is in fact a necessary condition for $Y$ to stay in $M$). Indeed, we can find $\tilde g \in C^1(\mathbb{R}^d; \L(\mathbb{R}^d,H))$ such that $g = \tilde g$ on $M$ and apply the above proposition. Due to the condition $\sigma (s) v \in T_{Y(s)}M$, we have $\sum_i \sum_j \partial_i \tilde g_j \sigma_{ij} = \sum_j D_{\sigma_j} g_j $, where $D_{\sigma_j}$ is the derivative in the direction $\sigma_j = (\sigma_{ij})_i \in T_{Y}M$.
Therefore \eqref{eq:Strato.composition.infinite} becomes \begin{align} \label{eq:Strato.composition.infinite.manifold} Z(t) := \int_0^t g(Y(s)) \circ \, d B(s) = \int_0^t g(Y(s)) dB(s)+ \sum_j \frac{1}{2}\int_0^t D_{\sigma_j} g_j(Y(s)) \, d s, \end{align} and \eqref{est:Z.H^s} still holds true. \end{rem} \section{Approximation of the operator \texorpdfstring{$L_n$}{Ln}} \label{sec:L_n} In this section, we study the solution operator $L_n$ which was defined in Subsection \ref{sec:well-posedness} as the solution operator for the Stokes problem \eqref{def:L_n.2}. Throughout this section, we denote by $x_i,\xi_i$, $1 \leq i \leq n$, a generic (time-independent) collection of positions and orientations satisfying \eqref{ass:phi.log.n}--\eqref{ass:uniform_bound}. We introduce the following explicit approximate solution operator \begin{equation} \begin{array}{lccc} {L}_{n,app} \colon& \mathbb{R}^{3n} &\to & \L(\mathbb{R}^{3n}, L^{\frac 3 2_-}_{\mathfrak s, \mathrm{loc} }(\mathbb{R}^3)), \\ & \Xi:=(\xi_1,\cdots,\xi_n) & \mapsto & L_{n,app}(\Xi), \end{array} \end{equation} defined for all $T:=(T_1,\cdots,T_n)\in \mathbb{R}^{3n}$ by \begin{align} \label{def:L_n,app} \left({L}_{n,app}((\xi_1,\dots, \xi_n))T\right)(x) = \sum_i ([T_i]_M + \mathcal{S}(\xi_i) T_i) : \nabla \Phi(x-x_i), \end{align} where $\mathcal{S}(\xi_i)$ is defined in \eqref{def:mS_i}, $[T_i]_M$ is defined in \eqref{eq:[T]_M} and \begin{align*} \Phi(x) = \frac{1}{8 \pi} \left(\frac {\operatorname{Id}} {|x|} + \frac{x \otimes x}{|x|^3} \right) \end{align*} is the fundamental solution of the Stokes equations. Moreover, in \eqref{def:L_n,app} and later on, we use the convention that for a matrix $M \in \mathbb{R}^{3\times3}$ we write \begin{align} (M : \nabla \Phi)_\alpha = \sum_{\beta, \gamma} M_{\gamma \beta}\, \partial_{x_\beta} \Phi_{\alpha \gamma}.
\end{align} From the definition, we see that $v= {L}_{n,app}((\xi_1,\dots, \xi_n))T$ is a distributional solution to \begin{equation} \label{eq:distributional.v} -\Delta v +\nabla p = \dv\left( \underset{i}{\sum} ([T_i]_M+\mathcal{S}(\xi_i)T_i)\delta_{x_i} \right), \quad \dv v=0, \: \text{ in } \mathbb{R}^3. \end{equation} We emphasize that, in contrast to $L_n$, the operator $ L_{n,app}$ depends only on the particle positions and orientations but not on the scaling factor $r$; the only dependence on the particle shape and orientations is through the function $\mathcal{S}(\xi_i)$. Due to the slow decay of $\Phi$ at infinity, $L_{n,app} T$ fails to be in $L^p(\mathbb{R}^3)$ for $p< 3/2$. To capture the decay at infinity we therefore consider $L^{p,2}_{\mathfrak s,w,B(0,M)}$ (see \eqref{def:L^p,2_w,K}) with weight $w$ as in \eqref{def:weight.function} and for \begin{align} \label{def:M} M := \sup_n \max\{1,2\, \, \underset{i}{\max}|x_i|,8r\}, \end{align} which is finite thanks to assumption \eqref{ass:uniform_bound}. We make the following observation regarding $L_{n,app}$. \begin{lem} \label{lem:L.tilde} Let $1<p<3/2$, $M$ be as in \eqref{def:M} and $w$ as in \eqref{def:weight.function}. Then, the operator $ L_{n,\mathrm{app}}$ satisfies $ L_{n,\mathrm{app}} \in C^1((\S^2)^n;\L(\mathbb{R}^{3n}, L^{p,2}_{\mathfrak s,w,B(0,M)}))$ with \begin{align} \| L_{n,\mathrm{app}} \|_{C^1((\S^2)^n;\rh{(L^{p,2}_{w,B(0,M)})^{3n})}} \leq C \sqrt{n}. \end{align} \end{lem} \begin{rem} Note that we identified $\L(\mathbb{R}^{3n}, L^{p,2}_{\mathfrak s,w,B(0,M)})$ with $ (L^{p,2}_{\mathfrak s,w,B(0,M)})^{3n}$ in order to clarify that we consider the norm \begin{align} \|L_{n,app}\|^2_{C^1((\S^2)^n;(L^{p,2}_{w,B(0,M)})^{3n})} = \sum_{i,\alpha} \| L_{n,app} e_{i,\alpha}\|^2_{L^{p,2}_{w,B(0,M)}} + \sum_{i,j,\alpha} \|\nabla_{\xi_j} L_{n,app} e_{i,\alpha}\|^2_{L^{p,2}_{w,B(0,M)}}.
\end{align} \end{rem} \begin{proof} Since $\mathcal S$ is a smooth function of $\xi$, the assertion follows immediately from the decay of $\nabla \Phi$, which implies $\nabla \Phi(\cdot - x_i) \in L_{\mathrm{loc}}^{\frac 3 2_-}(\mathbb{R}^3)$ and $\nabla \Phi(\cdot - x_i) \in L^2_w(\mathbb{R}^3\setminus B(0,M))$. \end{proof} The main result of this section is the following proposition. \begin{prop}\label{pro:L_n} Let $M$ be as in \eqref{def:M} and $w$ as in \eqref{def:weight.function}. Then, there exists $N \in \mathbb{N}$ such that for all $n \geq N$ and all $1< p<3/2$ the operator $ L_{n}$ defined in Subsection \ref{sec:well-posedness} satisfies $ L_{n} \in C^1((\S^2)^n;\L(\mathbb{R}^{3n}, L^{p,2}_{\mathfrak s,w,B(0,M)}))$ with \begin{equation*} \|L_n - L_{n,\mathrm{app}}\|_{C^1((\S^2)^n;\rh{(L^{p,2}_{w,B(0,M)})^{3n})}} \leq C \sqrt{n} (\phi_n \log n+r^{-2+\frac{3}{p}}) . \end{equation*} \end{prop} The proof of this proposition is given at the end of this section, in Subsection \ref{section:proof_main_prop_L_n}, where we also show how it implies Theorem \ref{th:well-posed}. \subsection{An intermediate semi-explicit approximation} For the proof of Proposition \ref{pro:L_n}, we introduce yet another approximation of $L_n$, denoted $ L_{n,\mathrm{app}}^{im}$. For this approximation, we neglect interactions between the particles, but we treat each particle by solving a Stokes problem in the exterior domain of that particle. In this sense, $L_{n,\mathrm{app}}^{im}$ can be seen as intermediate between $L_n$ and $ L_{n,\mathrm{app}}$.
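Schematically, the role of the intermediate operator in the proof of Proposition \ref{pro:L_n} is to split the error by the triangle inequality into the two contributions estimated in the remainder of this section,
\begin{align*}
\|L_n - L_{n,\mathrm{app}}\| \leq \|L_n - L_{n,\mathrm{app}}^{im}\| + \|L_{n,\mathrm{app}}^{im} - L_{n,\mathrm{app}}\| \leq C \sqrt{n}\, \phi_n \log n + C \sqrt{n}\, r^{-2+\frac 3 p},
\end{align*}
where the first difference is controlled through the method of reflections (Subsection \ref{sec:reflections}) and the second through the single-particle expansion (Proposition \ref{pro:L_n,app}).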
More precisely, we define $L_{n, \mathrm{app}}^{im} \colon \mathbb{R}^{3 n} \to \L(\mathbb{R}^{3 n }, \dot H^1_{\mathfrak s}(\mathbb{R}^3))$ by \begin{align} \label{eq:L_n,app.representation} L_{n,\mathrm{app}}^{im}(\xi_1,\dots,\xi_n) T = \sum_i U_i[T_i], \end{align} where $U_i[T_i]$ is defined as the solution to \begin{equation} \label{eq:U_i.single.rod} \left \{ \begin{array}{rcl} -\mu \Delta w_i + \nabla p_i &=& 0 \quad \text{ in }\mathbb{R}^3 \setminus \mathcal{B}_i,\\ \dv w_i &=&0 \quad \text{ in }\mathbb{R}^3 \setminus \mathcal{B}_i,\\ D w_i &=& 0 \quad \text{ in } \mathcal{B}_i, \\ \displaystyle \int_{\partial \mathcal{B}_i} \Sigma(w_i,p_i) \nu &=& 0, \\ \displaystyle \int_{\partial \mathcal{B}_i} [\Sigma(w_i,p_i) \nu ] \times (x - x_i) &=& T_i. \end{array} \right. \end{equation} The dependence on the particle orientation of $L_{n,\mathrm{app}}^{im}$ can be made explicit. Indeed, consider the problem \begin{equation} \label{eq:single.rod} \left \{ \begin{array}{rcl} -\mu \Delta w + \nabla p &=& 0, \quad \text{ in }\mathbb{R}^3 \setminus r \mathcal{B},\\ \dv w &=&0 \quad \text{ in }\mathbb{R}^3 \setminus r \mathcal{B},\\ D w &=& 0 \quad \text{ in } r \mathcal{B}, \\ \displaystyle \int_{\partial r \mathcal{B}} \Sigma(w,p) \nu &=& 0, \\ \displaystyle \int_{\partial r \mathcal{B}} [\Sigma(w,p) \nu ] \times x &=& T, \end{array} \right. \end{equation} where $T \in \mathbb{R}^3$ is a given torque. We denote by $$ (U[T],P[T]) $$ the unique solution $(w,p) \in \dot H^1_{\mathfrak s}(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$ to \eqref{eq:single.rod}. Then, for $R_i \in SO(3)$ such that $R_i e_3 = \xi_i$, the solution $(U_i[T],P_i[T])$ to \eqref{eq:U_i.single.rod} is given by \begin{equation}\label{def:U_i} \left(U_i[T](x),P_i[T](x)\right) = \left(R_i U[R_i^T T](R_i^T(x-x_i)), P[R_i^T T](R_i^T(x-x_i)) \right).
\end{equation} Indeed, \begin{align*} & \int_{\partial \mathcal{B}_i} R_i \left[\Sigma\left(U[R_i^T T], P[R_i^T T] \right)(R_i^T(x-x_i)) R_i^T \nu \right]\times (x - x_i) \\ &= \int_{\partial r \mathcal{B}} R_i \left[\Sigma\left(U[R_i^T T], P[R_i^T T] \right)(y) \nu \right] \times (R_i y) \\ &= T, \end{align*} where we used that the normal $\nu$ also gets rotated and that $(R_i a) \times (R_i b) = R_i (a \times b)$. \medskip We now prove the following estimates comparing the explicit approximation $L_{n,app}$ with the semi-explicit approximation $L_{n,app}^{im}$. \begin{prop} \label{pro:L_n,app} Let $i\in \{1, \cdots,n \}$ and $T_i \in \mathbb{R}^3$. Then, for all $1 < p < 3/2$, $U_i[T_i] \in C^1(\S^2;\L(\mathbb{R}^{3n}, L^{p,2}_{\mathfrak s,w,B(0,M)}))$ where $w$ and $M$ are as in \eqref{def:weight.function} and \eqref{def:M}, respectively, and for all $x \in \mathbb{R}^3 \setminus \mathcal{B}_i$ and all $l \in \mathbb{N}$ \begin{align} \label{est:nabla.U_i.pointwise} |\nabla^l (U_i[T_i] )(x)| + |\nabla^l \nabla_{\xi_i} (U_i[T_i] )(x)| \leq \frac {C|T_i| }{|x - x_i|^{l+2}}, \end{align} and \begin{align} \label{est:nabla.U_i.L^p} \|U_i[T_i] \|_{C^1(\S^2;\L(\mathbb{R}^{3n}, L^{p,2}_{\mathfrak s,w,B(0,M)}))} \leq C |T_i|. \end{align} In particular, $L^{im}_{n,\mathrm{app}} \in C^1((\S^2)^n;\L(\mathbb{R}^{3n},L^{p,2}_{\mathfrak s,w,B(0,M)}))$ and \rh{ \begin{align} \label{est:L.n.app.L.n.app.im} \| L_{n,app}^{im}-{L}_{n,app}\|_{C^1((\S^2)^n;\rh{(L^{p,2}_{w,B(0,M)})^{3n})}} \leq C \sqrt{n}r^{-2 + \frac 3 p} . \end{align} } \end{prop} The proof relies on the following expansion of $U[T]$ (see e.g. \cite[Proposition~2.2]{HillairetWu19} for a similar result).
\begin{prop}\label{prop_HW} There exists $\mathcal{H}[T]$ such that for all $x \in \mathbb{R}^3 \setminus r\mathcal{B}$ $$ U[T](x)=([T]_M + \mathcal S\rhnew{(e_3)} T) : \nabla \Phi(x)+\mathcal{H}[T](x). $$ The error term satisfies for all $l\in\mathbb{N} $ and all $x \in \mathbb{R}^3 \setminus r\mathcal{B}$ \begin{align} \label{est:H} \left| \nabla^l \mathcal{H}[T](x) \right| \leq C \frac{r|T|}{|x|^{3+l}}, \end{align} where the constant $C$ depends on $l$ and the reference particle $\mathcal{B} $. \end{prop} \rh{ \begin{proof} By scaling, it suffices to show the assertion for $r = 1$. Let $|x| > 4$. Then, using the force free condition for $U[T]$ as well as the fact that $T$ and $\mathcal S$ are by definition the torque and stresslet associated to $U[T]$, we have \begin{align} &U[T](x) - ([T]_M + \mathcal{S}(e_3) T) : \nabla \Phi(x) \\ &= -\int_{\partial \mathcal{B}} \Sigma[U[T],P[T]](y) \nu \cdot \left( \Phi(x-y) - \Phi(x) - y \cdot \nabla \Phi(x) \right) \, d y \\ &\leq 2 \|D U[T]\|_{L^2(\mathbb{R}^3)} \|D \psi\|_{L^2(\mathbb{R}^3)} \end{align} for any divergence free function $\psi \in \dot H^1(\mathbb{R}^3)$ with $\psi(y) = \Phi(x-y) - \Phi(x) - y \cdot \nabla \Phi(x) $ on $\partial \mathcal{B}$. By Lemma \ref{lem:extension} below and the decay of $\nabla ^2\Phi$, we find such a $\psi$ with \begin{align} \|D \psi\|_{L^2(\mathbb{R}^3)} \leq C \frac{1}{|x|^3}. \end{align} Moreover, by integration by parts, we have \begin{align} 2 \|D U[T]\|^2_{L^2(\mathbb{R}^3)} = T \cdot \mathcal{R}_2^{-1} T \leq C . \end{align} Collecting these estimates yields the assertion for $|x| > 4$. On $\partial \mathcal{B}$, \eqref{est:H} holds as well since $U[T](x) = \omega \times x = (\mathcal{R}_2^{-1} T) \times x$ on $\partial \mathcal{B}$. Thus, by standard regularity theory, \eqref{est:H} also holds in $B(0,4) \setminus \mathcal{B}$. \end{proof} } The following lemma, which we used above, is standard.
For spherical particles it can be found for example in \cite[Lemma 4.4]{NiethammerSchubert19}, and the proof given there also applies to the general smooth particles considered here. \begin{lem} \label{lem:extension} Let $\varphi \in H^1_{\mathfrak s}(\mathcal{B})$. Then, there exists $\psi \in H^1_{\mathfrak s, 0}(B(0,2))$ such that $D \psi = D \varphi$ in $\mathcal{B}$ and \begin{align} \|D \psi\|_{L^2(B(0,2))} \leq C \|D \varphi \|_{L^2(\mathcal{B})}, \end{align} where $C$ depends only on $\mathcal{B}$. \end{lem} \begin{rem} By translation and scaling, the same statement holds for $\mathcal{B}$ replaced by $\mathcal{B}_i$, where the constant $C$ is independent of $i$. \end{rem} \begin{proof}[Proof of Proposition \ref{pro:L_n,app}] \rh{ We focus on the estimates of derivatives in $\xi_i$ in \eqref{est:nabla.U_i.pointwise}, \eqref{est:nabla.U_i.L^p} and \eqref{est:L.n.app.L.n.app.im}. The estimates for the functions themselves can be obtained analogously. The right-hand side of the representation \eqref{def:U_i} allows us to view $U_i$ as a function of $R_i$ instead of $\xi_i$. We recall that this is possible since the right-hand side is the same for all $R_i \in SO(3)$ with $R_i e_3 = \xi_i$. It is then sufficient to consider the derivative with respect to $R_i$.\footnote{To see this, observe that we might locally fix the choice of $R_i$ such that $R_i[\xi_i]$ is differentiable in $\xi$ with $|\nabla_{\xi_i} R| \leq C$.} } Fix $ 1 \leq i \leq n$ \am{and let $T_i \in \mathbb{R}^3$}. \rh{It suffices to consider the case $x_i = 0$. (Note that the weighted norm $L^2_w(\mathbb{R}^3 \setminus B(0,M))$ is not translation invariant.
However, by definition of $M$ in \eqref{def:M} we have $|x - x_i| \geq \frac 1 2 |x|$ for all $x \in \mathbb{R}^3 \setminus B(0,M)$ and thus the position of $x_i$ does not matter.)} Combining \eqref{def:U_i} and Proposition \ref{prop_HW} and using how $\mathcal S$ and $\Phi$ transform under rotations, we have \begin{align} U_i[T_i](x) - ([T_i]_M + \mathcal S(\xi_i)T_i) : \nabla \Phi(x) = R_i \mathcal H[R_i^T T_i](R_i^T x), \end{align} and thus, by the chain rule and \eqref{est:H}, \begin{align} \label{eq:nabla.U_i.1} \left|\nabla_{\xi_i} \left( ([T_i]_M + \mathcal S(\xi_i)T_i) : \nabla \Phi(x) - U_i[T_i]\right)(x)\right| \leq C \frac{r|T_i|}{|x|^{3}} \quad \text{in } \mathbb{R}^3 \setminus \mathcal B_i. \end{align} Using the decay of $\nabla \Phi$, this implies \eqref{est:nabla.U_i.pointwise} for $l=0$. The estimate for $l \geq 1$ is analogous. Estimate \eqref{eq:nabla.U_i.1} directly implies $\|\nabla_{\xi_i} U_i[T_i]\|_{L^2_{w}(\mathbb{R}^3 \setminus B(0,M))} \leq C |T_i|$. Moreover, we have $U_i[T_i](x) = r^{-3} (\mathcal{R}_2^{-1} T_i) \times x$ in $\mathcal{B}_i$, and thus \begin{align} \label{eq:nabla.U_i.2} \left|\nabla_{\xi_i} \left( ([T_i]_M+ \mathcal S(\xi_i)T_i) : \nabla \Phi(x) - U_i[T_i]\right)(x)\right| \leq C \frac{|T_i|}{|x|^{2}} \quad \text{in } \mathcal B_i. \end{align} This together with \eqref{est:nabla.U_i.pointwise} implies $\|\nabla_{\xi_i} U_i[T_i]\|_{L^p_{\mathrm{loc}}( B(0,M))} \leq C |T_i|$ for $p < 3/2$ and therefore \eqref{est:nabla.U_i.L^p} holds. It remains to show \eqref{est:L.n.app.L.n.app.im}. By linearity \rhnew{of $L_{n,app}$ and $L_{n,app}^{im}$ in the particles} it suffices to show \begin{align} \label{est:L.n.app.im.one.particle} \|U_i[T_i] - ([T_i]_M + \mathcal S(\xi_i)T_i) : \nabla \Phi\|_{C^1(\S^2; L^{p,2}_{w,B(0,M)})} \leq C r^{-2 + 3/p} |T_i|. \end{align} However, the pointwise estimates \eqref{eq:nabla.U_i.1}--\eqref{eq:nabla.U_i.2} directly imply \eqref{est:L.n.app.im.one.particle}.
\end{proof} \subsection{Estimates for \texorpdfstring{$L_n - L_{n,\mathrm{app}}^{im}$}{Ln - Ln,app} through the method of reflections} \label{sec:reflections} \rh{In order to estimate $L_n - L_{n,app}^{im}$, we write $L_n$ in terms of single particle operators only, relying on the so-called method of reflections, similarly to \cite{Hofer18MeanField, Hoefer19}. We emphasize that $L_n - L_{n,app}^{im}$ can also be estimated by energy and duality methods (see e.g. \cite{Gerard-VaretHoefer21}). More precisely, arguing as in \cite{Gerard-VaretHoefer21} one can show that \begin{align} \label{est:L_n-L_n,app.energy} \|(L_n - L_{n,app}^{im})T\|_{L^p_{\mathrm{loc}}} \leq C \sum_i \|D L_{n,app}^{im} T\|_{L^1(\mathcal{B}_i)}. \end{align} Using the decay estimate from Proposition \ref{prop_HW} together with assumptions \eqref{ass:phi.log.n}, \eqref{ass:well.separated}, the right-hand side is bounded by $C \phi_n \log n \sum_i |T_i|$; we recover this bound through the method of reflections, see Proposition \ref{pro:L_n.L_n,app.l^1}. The reason why we nevertheless rely on the method of reflections in this section is the derivatives with respect to the particle orientations $\xi_i$: the single particle problems occurring in the method of reflections depend explicitly on $\xi_i$, which gives direct access to these shape derivatives. } More precisely, following the notation from \cite{Hofer18MeanField, Hoefer19}, we introduce $Q_i$ as the solution operator that maps a function $w \in H^1_{\mathfrak s}(\mathcal{B}_i)$ to the solution $v \in \dot H^1_{\mathfrak s}(\mathbb{R}^3)$ to \begin{equation} \label{eq:Q} \left\{ \begin{array}{ll} - \Delta v + \nabla p = 0, \quad \dv v = 0& \quad \text{ in } \mathbb{R}^3 \setminus \mathcal{B}_i, \\ D v = D w &\quad \text{ in } \ml{\mathcal{B}_i}, \\ \displaystyle 0 = \int_{\partial \mathcal{B}_i} \Sigma[v,p] n = \int_{\partial \mathcal{B}_i} \Sigma[v,p] n \times (x- x_i). &
\end{array}\right. \end{equation} Then, we claim that \begin{align} \label{eq:MOR} L_n = \lim_{k \to \infty} (1 - \sum_i Q_i)^k L_{n,\mathrm{app}}^{im} \end{align} in the sense of convergence of operators $\mathbb{R}^{3n} \to \dot H^1_{\mathfrak s}(\mathbb{R}^3)$. Indeed, this has been proven for spherical particles in \cite{Hofer18MeanField} under the assumptions \eqref{ass:phi.log.n}, \eqref{ass:well.separated} and in \cite{Hoefer19} assuming only \eqref{ass:well.separated}. Since we impose \eqref{ass:phi.log.n} in order to control the right-hand side of \eqref{est:L_n-L_n,app.energy}, we follow here the (simpler) approach of \cite{Hofer18MeanField}. The adaptation to non-spherical particles is rather straightforward and is based on the following decay estimates for $Q_i$. \begin{lem} \label{lem:decay.Q} Let $w \in H^1_{\mathfrak s}(\mathcal{B}_i) $ be such that $ \nabla w \in L^\infty(\mathcal{B}_i)$. There exists a universal constant $C>0$ such that for all $l \in \mathbb{N}$ and all $x \in \mathbb{R}^3 \setminus B(x_i,2r)$ \begin{align} \label{est:Q_i.pointwise} |\nabla^l (Q_i w)(x)| \leq \frac {C r^3}{|x - x_i|^{l+2}} \|D w\|_{L^\infty(\mathcal{B}_i)} . \end{align} Moreover, \begin{align} \label{est:Q_i.H^1} \|Q_i w\|_{\dot H^1(\mathbb{R}^3)} \leq C r^{3/2} \|D w\|_{L^\infty(\mathcal{B}_i)}, \end{align} and for all $p< 3/2$, with $w$ and $M$ as in \eqref{def:weight.function} and \eqref{def:M}, respectively, \begin{align} \label{est:Q_i.L^p} \|Q_i w\|_{L^{p,2}_{w,B(0,M)}} \leq C r^3 \|D w\|_{L^\infty(\mathcal{B}_i)}. \end{align} \end{lem} \begin{proof} We observe that $Q_i w$ minimizes the $\dot H^1$ norm among all divergence free functions with $D v = D w$ in $\mathcal{B}_i$. By Lemma \ref{lem:extension} we thus immediately obtain \eqref{est:Q_i.H^1}. Now, let $p < 3/2$, $K \subset \mathbb{R}^3$ be compact and let $g \in L^{p'}$ with $\supp g \subset K$. Let $\varphi$ be the solution to \begin{align*} -\Delta \varphi + \nabla p = g , \quad \dv \varphi = 0 \qquad \text{in } \mathbb{R}^3.
\end{align*} Then, by standard regularity theory and Sobolev embedding \begin{align*} \|\nabla \varphi\|_\infty \leq C_{K,p} \|g\|_{L^{p'}}. \end{align*} Consequently, using the equation that $Q_i w$ solves \begin{align*} \int g \cdot Q_i w &= \int D \varphi : D Q_i w \\ & = \int_{\mathcal{B}_i} D \varphi : D Q_i w + \int_{\mathbb{R}^3 \setminus \mathcal{B}_i} D \varphi : D Q_i w \\ & \leq C r^{3} \|D w\|_{L^\infty(\mathcal{B}_i)} \|g\|_{L^{p'}} + C r^{3/2} \|D w\|_{L^\infty(\mathcal{B}_i)} \|D \psi\|_{L^2(\mathcal{B}_i)} \end{align*} for all $\psi \in \dot H^1_{\mathfrak s}(\mathbb{R}^3)$ such that $D \psi = D \varphi$ in $\mathcal{B}_i$. Appealing again to Lemma \ref{lem:extension}, such a function $\psi$ exists with \begin{align*} \|D \psi\|_{L^2(\mathbb{R}^3)} \leq C r^{3/2} \|D \varphi\|_{L^\infty(\mathcal{B}_i)} \leq C r^{3/2} \|g\|_{L^{p'}}. \end{align*} Combination of the above estimates with $K=B(0,M)$ yields the $L^p(B(0,M))$ estimate in \eqref{est:Q_i.L^p}. The proof of \eqref{est:Q_i.pointwise} is similar to the proof of Proposition \ref{prop_HW} in the sense that we have for $|x-x_i|\geq 2r$ $$ |Q_i w(x)| \leq C \frac{r^{3/2}}{|x-x_i|^2} \| D w \|_{L^2(\mathcal{B}_i)}. $$ In particular, this yields the $L^2_w(\mathbb{R}^3\setminus B(0,M))$ estimate in \eqref{est:Q_i.L^p}. Indeed, since the above decay is valid for $x \not \in B(x_i,2r)$ and since $|x-x_i|\geq \frac{1}{2}|x| $ for all $x \not \in B(0,M)$ and $1 \leq i \leq n$, we obtain \begin{align*} \|Q_i w\|_{L^2_w(\mathbb{R}^3\setminus B(0,M))}&\leq C r^3 \|D w\|_{L^\infty(\mathcal{B}_i)} \left (\int_{\mathbb{R}^3\setminus B(0,M)} \frac{(1+|x|)^{a}}{|x|^4}dx\right)^{1/2}. \end{align*} This concludes the proof since the integral on the right-hand side is finite for \amnew{$a<1$}.
\end{proof} \begin{prop} \label{pro:L_n.L_n,app.l^1} \rhnew{There exists $N_0 \in \mathbb{N}$ such that for all $n \geq N_0$} \begin{align} \label{eq:convergence.MOR} L_n = \lim_{k \to \infty} (1 - \sum_i Q_i)^k L_{n,\mathrm{app}}^{im} \end{align} in the sense of convergence of linear operators from $\mathbb{R}^{3n}$ to $\dot H^1_{\mathfrak s}$. Moreover, with $M$ and $w$ as in \eqref{def:M} and \eqref{def:weight.function}, respectively, for all $1\leq p < 3/2$, \rh{ \begin{equation} \label{est:L_n.L_n,app.l^1} \|L_n - L_{n,\mathrm{app}}^{im}\|_{C((\S^2)^n;(L^{p,2}_{w,B(0,M)})^{3n})} \leq C \sqrt{n} \phi_n \log n . \end{equation} } \end{prop} \begin{proof} The convergence \eqref{eq:convergence.MOR} can be proved exactly as in \cite{Hofer18MeanField}. We therefore only give the proof of \eqref{est:L_n.L_n,app.l^1}. Let $p< 3/2$ and $T \in \mathbb{R}^{3n}$. We denote $v = L_n T$ and $v_k = (1 - \sum_i Q_i)^k L_{n,\mathrm{app}}^{im} T$. To prove \eqref{est:L_n.L_n,app.l^1}, we apply Lemma \ref{lem:decay.Q} to see that \begin{align} \label{est:v_k+1.v_k_L^p_loc} \|v_{k+1} - v_k\|_{L^2_w(\mathbb{R}^3\setminus B(0,M))}+ \|v_{k+1} - v_k\|_{L^p(B(0,M))}& \leq \sum_i \|Q_i v_k\|_{L^p(B(0,M))} + \|Q_i v_k\|_{L^2_w(\mathbb{R}^3\setminus B(0,M))} \notag\\ & \leq C \sum_i r^3 \|D v_k\|_{L^\infty({\mathcal{B}_i})}. \end{align} Now, using the fact that $D(Q_i v_k) = D v_k$ in $\mathcal{B}_i$ we get $$Dv_{k+1}= D v_k - \underset{j}{\sum} D Q_j v_k=- \underset{j \neq i}{\sum} D Q_j v_k, \text{ in } \mathcal{B}_i. $$ Thus, using again Lemma \ref{lem:decay.Q} and assumption \eqref{ass:well.separated}, we get \begin{align*} \sum_i \|D v_{k+1}\|_{L^\infty(\mathcal{B}_i)} &\leq \sum_i \sum_{j \neq i} \|D(Q_j v_k)\|_{L^\infty(\mathcal{B}_i)} \\ & \leq C \sum_i \sum_{j \neq i} \frac{r^3}{|x_i - x_j|^3} \|D v_{k}\|_{L^\infty(\mathcal{B}_j)}\\ & \leq C \phi_n \log n \sum_j \|D v_{k}\|_{L^\infty(\mathcal{B}_j)}.
\end{align*} Hence, by iteration \begin{align} \label{est:Dv_k} \sum_i \|D v_{k}\|_{L^\infty(\mathcal{B}_i)} \leq (C \phi_n \log n)^k \sum_i\|D v_0\|_{L^\infty( \mathcal{B}_i)}. \end{align} By \eqref{eq:L_n,app.representation}, we can write $v_0 = \sum_i U_i[T_i]$, and thus the decay estimates from Proposition \ref{prop_HW} yield \begin{align} \label{est:Dv_0} \sum_i \|D v_0\|_{L^\infty(\mathcal{B}_i)} \leq C \sum_i \sum_{j \neq i} \frac{|T_j|}{|x_i - x_j|^3} \leq C \phi_n \log n |T|_{l^1}. \end{align} Combining \eqref{est:v_k+1.v_k_L^p_loc}, \eqref{est:Dv_k} and \eqref{est:Dv_0} yields \begin{align*} \|v_{k+1} - v_k\|_{L^p_{\mathrm{loc}}} + \|v_{k+1} - v_k\|_{L^2_w(\mathbb{R}^3\setminus B(0,M))}\leq (C \phi_n \log n)^{k+1} |T|_{l^1}. \end{align*} Summing these errors in $k$ yields \eqref{est:L_n.L_n,app.l^1}. \end{proof} We now turn to the derivatives in $\xi_i$. We begin by estimating the derivative of $Q_i$ with respect to $\xi_i$. \begin{lem} \label{lem:decay.nabla.Q} Let $w \in H^2_{\mathfrak s}(B(x_i,2r))$ be such that $ \nabla w \in W^{1,\infty}(B(x_i,2r)) $. Then, $\nabla_{\xi_i} (Q_i w)\in L^2_\mathrm{loc}(\mathbb{R}^3)$ and for all $x \in \mathbb{R}^3 \setminus B(x_i,2r)$ and all $l \in \mathbb{N}$ \begin{align} \label{est:nabla.Q_i.pointwise} |\nabla^l \nabla_{\xi_i} (Q_i w)(x)| \leq \frac {C r^3}{|x - x_i|^{l+2}} \left(\|D w\|_{L^\infty(\mathcal{B}_i)} + r \|\nabla D w\|_{L^\infty(\mathcal{B}_i)} \right). \end{align} Moreover, for all $p< 3/2$ and with $M$ and $w$ as in \eqref{def:M} and \eqref{def:weight.function}, respectively, \begin{align} \label{est:nabla.Q_i.L^p} \|\nabla_{\xi_i} Q_i w\|_{L^{p,2}_{w,B(0,M)}} \leq C r^3 \left(\|D w\|_{L^\infty(\mathcal{B}_i)} + r \|\nabla D w\|_{L^\infty(\mathcal{B}_i)} \right). \end{align} \end{lem} \begin{proof} We begin by dropping the index $i$ and assuming $x_i = 0$. Moreover, we set $Q = Q[\xi]$ to denote the dependence on $\xi$.
By considering the defining equation for $Q$, \eqref{eq:Q}, we observe that for any $R \in SO(3)$ with $R \xi = e_3$ \begin{align} \label{eq:Q[xi]} (Q[\xi] w)(x) = R (Q[e_3] \bar w) (R^T x), \end{align} where $\bar w(x) = R^T w(R x)$. Note that this corresponds to the way in which $U_i$ is obtained from $U$ in \eqref{def:U_i}. Analogously to the argument at the beginning of the proof of Proposition \ref{pro:L_n,app}, it suffices to view $Q$ as a function of $R$ to derive estimates for the derivative. By the assumptions on $w$ and the chain rule, $\nabla_\xi \bar w \in H^1(B(0,2r))$ with \begin{align} \label{est:bar.w} |\nabla_\xi D_x \bar w|(x) \leq C |\nabla w|(x) + C |x||\nabla^2 w|(x). \end{align} Consequently, since $w \in H^2(B(0,2r))$ and $Q$ is a linear operator with values in $\dot{H}^1$, the representation \eqref{eq:Q[xi]} implies that $\nabla_\xi (Q[\xi] w) \in L^2_{\mathrm{loc}}(\mathbb{R}^3)$. Moreover, we can combine \eqref{eq:Q[xi]} and \eqref{est:bar.w} with Lemma \ref{lem:decay.Q} to obtain for $x \in \mathbb{R}^3 \setminus B(0,2r)$ \begin{align} \label{eq:expansion.nabla.Q} \begin{aligned} |\nabla_{\xi} (Q w)(x)| &\leq C |(Q[e_3] \bar w) (R^T x)| + C |x| |\nabla(Q[e_3] \bar w) (R^T x)| + C |(Q[e_3] \nabla_\xi \bar w) (R^T x)| \\ & \leq C \frac{r^3}{|x|^{2}} \|D w\|_{L^\infty(\mathcal{B}_i)} + C \frac{r^4}{|x|^{2}} \|\nabla D w\|_{L^\infty(\mathcal{B}_i)}. \end{aligned} \end{align} This establishes \eqref{est:nabla.Q_i.pointwise} for $l=0$ and the $L^2_w(\mathbb{R}^3\setminus B(0,M))$ estimate in \eqref{est:nabla.Q_i.L^p}. The estimate for $l \geq 1$ is analogous. Moreover, \eqref{eq:expansion.nabla.Q} implies \begin{align} \|\nabla_{\xi} (Q w)\|_{L^p(B(0,M)\setminus B(0,2r))} \leq C r^3 \|D w\|_{L^\infty(\mathcal{B}_i)} + C r^4 \|\nabla D w\|_{L^\infty(\mathcal{B}_i)}. \end{align} It remains to estimate the $L^p$ norm inside $B(0,2r)$. We note that the first inequality in \eqref{eq:expansion.nabla.Q} also holds for all $x \in B(0,2r)$.
Together with Lemma \ref{lem:decay.Q} and \eqref{est:bar.w}, it implies for all $p < 3/2$ \begin{align*} \|\nabla_{\xi} (Q w)\|_{L^p( B(0,2r))} &\leq C r^3 \|D \bar w\|_{L^\infty (B(0,2r))} + \|x\|_{L^6(B(0,2r))} \|\nabla Q w\|_{L^2(\mathbb{R}^3)} + r^3 \|\nabla_\xi D_x \bar w\|_{L^\infty(\mathcal{B}_i)} \\ & \leq C r^3 \|D w\|_{L^\infty(\mathcal{B}_i)} + C r^4 \|\nabla D w\|_{L^\infty(\mathcal{B}_i)}. \end{align*} This finishes the proof. \end{proof} To estimate $\nabla_{\xi_i} (L_n - L_{n,\mathrm{app}}^{im})$, the idea is to proceed similarly to Proposition \ref{pro:L_n.L_n,app.l^1}. However, this is more delicate due to the loss of regularity when taking the derivative in $\xi_i$. More precisely, by Lemma \ref{lem:decay.nabla.Q} we only have that $\nabla_{\xi_i} (Q_i w)$ lies in $L^2_\mathrm{loc}$. In fact, it is easy to see that $\nabla_{\xi_i} (Q_i w)$ will in general not possess a weak derivative, regardless of how much regularity we impose on $w$. A consequence of this loss of regularity is that $Q_i \nabla_{\xi} Q_j$ is not defined a priori since $Q_i$ needs data in $H^1(\mathcal{B}_i)$. However, Lemma \ref{lem:decay.nabla.Q} ensures that $Q_i \nabla_{\xi} Q_j$ is well defined for $i \neq j$. On the other hand, one can exploit that $Q_i Q_i = Q_i$ to avoid any appearance of $Q_i \nabla_{\xi} Q_i$. To this end, we rewrite the series expansion \eqref{eq:MOR}. Namely, adopting the notation $v_k = (1 - \sum_i Q_i)^k L_{n,\mathrm{app}}^{im} T$, we have \begin{multline} \label{eq:MOR.expansion} v_k = v_0 - \sum_{i_1} Q_{i_1} v_0 + \sum_{i_1} \sum_{i_2 \neq i_1} Q_{i_2} Q_{i_1} v_0 - \sum_{i_1} \sum_{i_2 \neq i_1} \sum_{i_3 \neq i_2} Q_{i_3} Q_{i_2} Q_{i_1} v_0 + \dots \\ + (-1)^k \sum_{i_1} \sum_{i_2 \neq i_1} \dots \sum_{i_k \neq i_{k-1}} Q_{i_k} Q_{i_{k-1}} \dots Q_{i_1} v_0. \end{multline} This representation can be directly deduced from \eqref{eq:MOR} by just using $Q_i Q_i = Q_i$ (see \cite[Section 2]{HoferVelazquez18} for details).
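To illustrate the rewriting \eqref{eq:MOR.expansion}, consider the case $k=2$: using only $Q_i Q_i = Q_i$ to absorb the diagonal of the double sum,
\begin{align*}
\Big(1 - \sum_i Q_i\Big)^2 v_0 &= v_0 - 2\sum_{i_1} Q_{i_1} v_0 + \sum_{i_1} \sum_{i_2} Q_{i_2} Q_{i_1} v_0 \\
&= v_0 - 2\sum_{i_1} Q_{i_1} v_0 + \sum_{i_1} Q_{i_1} v_0 + \sum_{i_1} \sum_{i_2 \neq i_1} Q_{i_2} Q_{i_1} v_0 \\
&= v_0 - \sum_{i_1} Q_{i_1} v_0 + \sum_{i_1} \sum_{i_2 \neq i_1} Q_{i_2} Q_{i_1} v_0,
\end{align*}
which is \eqref{eq:MOR.expansion} for $k=2$. In particular, no product of the form $Q_i Q_i$, and hence later no term $Q_i \nabla_{\xi} Q_i$, remains.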
\begin{prop}\label{pro:nabla.L_n.L_n,app} \rhnew{There exists $N_0 \in \mathbb{N}$ such that for all $n \geq N_0$ the following holds.} \rh{Let $M$ be as in \eqref{def:M} and $w$ as in \eqref{def:weight.function}, $1 \leq i \leq n$ and $1 \leq \alpha \leq 3$. Then, for all $1<p < 3/2$, $L_n e_{i,\alpha}$ is differentiable in $\xi_j$ for all $1 \leq j \leq n$ as a function in $L^{p,2}_{\mathfrak s,w,B(0,M)}$ and we have \begin{align} \label{est:nabla.L_n.L_n,app} \|\nabla_{\xi_i} (L_n - L_{n,\mathrm{app}}^{im}) e_{i,\alpha}\|_{L^{p,2}_{w,B(0,M)}} \leq C \phi_n \log n . \end{align} Moreover, for $j\neq i$ \begin{align} \label{est:nabla.L_n.L_n,app.different} \|\nabla_{\xi_j} (L_n - L_{n,\mathrm{app}}^{im}) e_{i,\alpha}\|_{L^{p,2}_{w,B(0,M)}} \leq C \frac{r^3 }{|x_i-x_j|^3}. \end{align} } \end{prop} \begin{proof} In the following we will assume $i=1$. Since the $L^2_w(\mathbb{R}^3 \setminus B(0,M)) $ and $L^p(B(0,M))$ estimates are analogous, we only treat the $L^p(B(0,M))$ estimates below. Let $v_k = (1 - \sum_i Q_i)^k L_{n,\mathrm{app}}^{im} e_{1,\alpha}$. Then, by virtue of \eqref{eq:MOR.expansion} we have \begin{align*} v_{k+1} - v_{k} = (-1)^{k+1} \sum_{i_1 \neq 1} \sum_{i_2 \neq i_1} \dots \sum_{i_{k+1} \neq i_{k}} Q_{i_{k+1}} Q_{i_{k}} \dots Q_{i_1} v_0. \end{align*} Here we used that $D v_0 = D L_{n,\mathrm{app}}^{im} e_{1,\alpha} = D U_1[e_{\alpha}] = 0$ in $\mathcal{B}_1$ to deduce that the first sum only runs over $i_1 \neq 1$.
Thanks to $\nabla_{\xi_1} Q_i = 0$ for $i \neq 1$, taking the derivative in $\xi_1$ yields \begin{align} \label{eq:MOR.expansion.nabla} \begin{aligned} &\nabla_{\xi_1} (v_{k+1} - v_{k}) \\ &= (-1)^{k+1} \sum_{i_1 \neq 1} \sum_{i_2 \neq i_1} \dots \sum_{i_{k+1} \neq i_{k}} Q_{i_{k+1}} Q_{i_{k}} \dots Q_{i_1} \nabla_{\xi_1} v_0 \\ &+ \sum_{l =2}^{k+1} (-1)^{k+1} \sum_{i_1 \neq 1} \sum_{i_2 \neq i_1} \dots \sum_{\substack{i_{l -1} \neq 1 \\ i_{l-1} \neq i_{l-2}}} \sum_{i_{l+1} \neq 1} \dots \sum_{i_{k+1} \neq i_{k}} Q_{i_{k+1}} \dots Q_{i_{l+1}} \nabla_{\xi_1} Q_1 Q_{i_{l-1}} \dots Q_{i_1} v_0 \\ &=: \Psi_{k,1} + \Psi_{k,2}. \end{aligned} \end{align} Although the right-hand side looks very complicated, it can be estimated analogously to the proof of Proposition \ref{pro:L_n.L_n,app.l^1}. Indeed, inductive application of Lemma \ref{lem:decay.Q} and finally an application of Proposition \ref{pro:L_n,app} yields \begin{align*} \|\Psi_{k,1}\|_{L^p_\mathrm{loc}} & \leq C r^3 \sum_{i_1 \neq 1} \sum_{i_2 \neq i_1} \dots \sum_{i_{k+1} \neq i_{k}} \|D Q_{i_{k}} \dots Q_{i_1} \nabla_{\xi_1} v_0\|_{L^\infty(\mathcal{B}_{i_{k+1}})} \\ & \leq C r^3 \sum_{i_1 \neq 1} \sum_{i_2 \neq i_1} \dots \sum_{i_{k+1} \neq i_{k}} \frac{r^3}{|x_{i_k} - x_{i_{k+1}}|^3} \| D Q_{i_{k-1}} \dots Q_{i_1} \nabla_{\xi_1} v_0\|_{L^\infty(\mathcal{B}_{i_k})} \\ & \leq C r^3 (C \phi_n \log n)^k \sum_{i_1 \neq 1} \|D \nabla_{\xi_1} v_0\|_{L^\infty(\mathcal{B}_{i_1})} \\ & \leq C r^3 (C \phi_n \log n)^k \sum_{i_1 \neq 1}\frac{1}{|x_1 - x_{i_1}|^3} \\ & \leq (C \phi_n \log n)^{k+1}. \end{align*} For the second term on the right-hand side of \eqref{eq:MOR.expansion.nabla}, we proceed similarly. We observe that the combination of Lemmas \ref{lem:decay.Q} and \ref{lem:decay.nabla.Q} implies for $i,j \neq 1$ and any function $\psi \in H^1_{\mathfrak s}(\mathcal{B}_i)$ such that $\nabla \psi \in L^\infty(\mathcal{B}_i)$ that
\begin{align*} \|D \nabla_{\xi_1} Q_1 Q_i \psi \|_{L^\infty(\mathcal{B}_j)} &\leq C \frac{r^3}{|x_j - x_1|^3} \left( \|D Q_i \psi\|_{L^\infty(\mathcal{B}_1)} + r \|D\nabla Q_i \psi\|_{L^\infty(\mathcal{B}_1)} \right) \\ & \leq C \frac{r^3}{|x_j - x_1|^3} \frac{r^3}{|x_i - x_1|^3} \left(1 + \frac{r}{|x_i - x_1|} \right) \|D \psi\|_{L^\infty(\mathcal{B}_i)} \\ & \leq C \frac{r^3}{|x_j - x_1|^3} \| D\psi\|_{L^\infty(\mathcal{B}_i)}. \end{align*} Similarly, in the case $l=k+1$ we have \begin{align*} \| \nabla_{\xi_1} Q_1 Q_i \psi \|_{L^p_{loc}} \leq C \frac{r^3}{|x_i - x_1|^3} \| D\psi\|_{L^\infty(\mathcal{B}_i)}. \end{align*} In this way, we can estimate the second term on the right-hand side of \eqref{eq:MOR.expansion.nabla} by \begin{align*} \|\Psi_{k,2}\|_{L^p_\mathrm{loc}} & \leq k (C \phi_n \log n)^{k+1}, \end{align*} where the factor $k$ originates from the sum over $l$. Combining these estimates for $\Psi_{k,1}$ and $\Psi_{k,2}$ yields \begin{align*} \|\nabla_{\xi_1} (v_{k+1} - v_{k})\|_{L^p_\mathrm{loc}} \leq (k+1) (C \phi_n \log n)^{k+1}. \end{align*} Summation over $k$ yields the first assertion. For the second estimate, we set $j=2$ and remark that since $\nabla_{\xi_2} v_0=\nabla_{\xi_2}U_1[e_\alpha]=0 $, we have \begin{align*} &\nabla_{\xi_2}(v_{k+1}-v_k) \\ &=\sum_{l =1}^{k+1} (-1)^{k+1} \sum_{i_1 \neq 1} \sum_{i_2 \neq i_1} \dots \sum_{\substack{i_{l -1} \neq 2 \\ i_{l-1} \neq i_{l-2}}} \sum_{i_{l+1} \neq 2} \dots \sum_{i_{k+1} \neq i_{k}} Q_{i_{k+1}} \dots Q_{i_{l+1}} \nabla_{\xi_2} Q_2 Q_{i_{l-1}} \dots Q_{i_1} v_0 \end{align*} with the convention that for $l=1$ the term corresponds to $$ \sum_{i_2 \neq 2} \dots \sum_{i_{k+1} \neq i_{k}} Q_{i_{k+1}} \dots Q_{i_2} \nabla_{\xi_2} Q_2 v_0.
$$ We then have \begin{align} \begin{aligned} &\| \nabla_{\xi_2}(v_{k+1}-v_k)\|_{L^p_\mathrm{loc}} \leq r^3 \sum_{l =1}^{k+1} \sum_{i_1 \neq 1} \dots \sum_{\substack{i_{l -1} \neq 2 \\ i_{l-1} \neq i_{l-2}}} \sum_{i_{l+1} \neq 2}\sum_{i_{l+2} \neq i_{l+1}} \dots \sum_{i_{k+1} \neq i_{k}}\\ & \frac{Cr^3}{|x_{i_{k+1}}-x_{i_k}|^3}\cdots \frac{Cr^3}{|x_{i_{l+2}}-x_{i_{l+1}}|^3}\frac{Cr^3}{|x_{i_{l+1}}-x_2|^3 } \frac{Cr^3}{|x_{2}-x_{i_{l-1}}|^3}\cdots \frac{C }{|x_{i_1}-x_1 |^3}\\ & \leq r^3\underset{l=1}{\overset{k+1}{\sum}} (C\phi_n \log n)^{k-l+1} \sum_{i_1 \neq 1} \dots \sum_{\substack{i_{l -1} \neq 2 \\ i_{l-1} \neq i_{l-2}}} \frac{Cr^3}{|x_{2}-x_{i_{l-1}}|^3}\cdots \frac{C}{|x_{i_1}-x_1 |^3}\\ &=: \underset{l=1}{\overset{k+1}{\sum}} (C\phi_n \log n)^{k-l+1} I_{l-1} , \end{aligned} \label{est:nabla.xi_2} \end{align} where for $l\geq 1$ $$ I_l:= \underset{i_1\neq 1}{\sum} \dots \sum_{\substack{i_{l} \neq 2 \\ i_{l} \neq i_{l-1}}} \frac{Cr^3}{|x_{2}-x_{i_l}|^3} \frac{Cr^3}{|x_{i_l}-x_{i_{l-1}}|^3}\dots \frac{Cr^3}{|x_{i_1}-x_1|^3}, $$ and $I_0:=\frac{C r^3}{|x_2-x_1 |^3}$.
Now we aim to show by induction that for some constant $\bar C>C$ we have for $l\geq 1$ \begin{align} \label{est:I_k} I_l \leq ( \bar C \phi_n \log n)^{l} \frac{Cr^3}{|x_1-x_2|^3}. \end{align} Indeed, for $l>1$, we have, by separating the term with $i_{l-1}=2$, \begin{align*} I_l&= \underset{i_1\neq 1}{\sum} \dots \sum_{i_{l-1} \neq i_l, 2} \,\sum_{i_{l} \neq 2 }\frac{Cr^3}{|x_{2}-x_{i_l}|^3}\frac{Cr^3}{|x_{i_l}-x_{i_{l-1}}|^3} \frac{Cr^3}{|x_{i_{l-1}}-x_{i_{l-2}}|^3}\dots \frac{Cr^3}{|x_{i_1}-x_1|^3}\\ &+ \underset{i_1\neq 1}{\sum} \dots \sum_{i_{l-2} \neq 2 }\, \sum_{i_{l} \neq 2} \frac{Cr^3}{|x_{2}-x_{i_l}|^3} \frac{Cr^3}{|x_{i_l}-x_{2}|^3} \frac{Cr^3}{|x_{2}-x_{i_{l-2}}|^3}\dots \frac{Cr^3}{|x_{i_1}-x_1|^3}\\ &\leq 8 (C\phi_n \log n ) I_{l-1}+ (C \phi_n \log n)^2I_{l-2}, \end{align*} where we used $\log n \geq 1$ (for $n \geq 3$) for the second term and, for the first term, that for any $i_l\neq 2 \neq i_{l-1}$ we have $$ \frac{1}{|x_2-x_{i_l}|} \frac{1}{|x_{i_l}-x_{i_{l-1}}|} \leq \frac{1}{|x_2-x_{i_{l-1}}|} \left(\frac{1}{|x_2-x_{i_l}|} +\frac{1}{|x_{i_l}-x_{i_{l-1}}|} \right). $$ This yields \eqref{est:I_k} by the induction hypothesis, upon choosing $\bar C$ such that $ 8C \bar C+C^2\leq \bar C^2$. Moreover, using the same arguments as above, one can show that the induction hypothesis is satisfied for $l=0,1$. Inserting \eqref{est:I_k} into \eqref{est:nabla.xi_2} and summing over $k$ yields \eqref{est:nabla.L_n.L_n,app.different}. \end{proof} \subsection{Proofs of Proposition \ref{pro:L_n} and Theorem \ref{th:well-posed}}\label{section:proof_main_prop_L_n} \begin{proof}[Proof of Proposition \ref{pro:L_n}] The Proposition is a direct consequence of Propositions \ref{pro:L_n,app}, \ref{pro:L_n.L_n,app.l^1} and \ref{pro:nabla.L_n.L_n,app}.
Indeed, thanks to Propositions \ref{pro:L_n,app} and \ref{pro:L_n.L_n,app.l^1}, we have for all $1<p<3/2$ \begin{align*} &\left \| L_n-L_{n,app}\right\|_{L^\infty((\mathbb{S}^2)^n; (L^{p,2}_{w,B(0,M)})^{3n})} \\ &\leq C \rh{\sqrt n} (\phi_n \log n + r^{\frac 3 p - 2}). \end{align*} For the derivative we get from Proposition \ref{pro:nabla.L_n.L_n,app} for all $1<p<3/2$ \begin{align*} &\underset{i}{\sum}\underset{j,\alpha}{\sum} \left \|\nabla_{\xi_i} (L_n-L_{n,app}) e_{j,\alpha}\right\|^2_{L^\infty((\mathbb{S}^2)^n, L^{p,2}_{w,B(0,M)})} \\ &\leq C \underset{i}{\sum} \underset{\alpha}{\sum}\left(\sum_{j\neq i} \frac{r^6}{|x_i-x_j|^6} + (\phi_n \log n)^2 \right) \\ &\leq C n \phi_n^2(1+\log^2 n), \end{align*} where we used that for all $i$, $ \underset{j \neq i}{\sum} \frac{r^6}{|x_i-x_j|^6} \leq C \frac{r^6}{d_{\min}^6} \leq C \phi_n^2$ thanks to assumption \ref{ass:well.separated}. Combining again with Proposition \ref{pro:L_n,app} yields the assertion. \end{proof} For the proof of Theorem \ref{th:well-posed}, we first deduce the following corollary which is a direct consequence of Lemma \ref{lem:L.tilde} and Proposition \ref{pro:L_n} combined with Lemma \ref{lem:embedding.Lebesque.weighted}. \begin{cor} \label{cor:L_n.H^-s} For all $n$ sufficiently large and for all $1/2<s<1$ and $p=\frac{6}{3+2s}$ \begin{align} \|L_n \|_{C^1((\S^2)^n;( H^{-s}_w(\mathbb{R}^3))^{3n})} \leq C \sqrt{n}, \\ \|L_n - L_{n,\mathrm{app}}\|_{C^1((\S^2)^n;( H^{-s}_w(\mathbb{R}^3))^{3n})} \leq C \sqrt{n} (\phi_n \log n+r^{-2+\frac{3}{p}}) \label{est:L_n.L_n,app.cor}. \end{align} \end{cor} \begin{proof}[Proof of Theorem \ref{th:well-posed}] The assertion follows immediately from Corollary \ref{cor:L_n.H^-s} combined with Proposition \ref{pro:Strato.infinite} (see also Remark \ref{rem:manifold}). \end{proof} \section*{Acknowledgements} \rhnew{ We would like to thank Francesco de Vecchi for insights about Brownian motions on spheres.
We are grateful to Immanuel Zachhuber for pointing out references on stochastic integrals in infinite dimensional spaces. Moreover, we thank Henrik Matthiesen for helpful discussions on some aspects related to global analysis. } R.H. is supported by the German National Academy of Sciences Leopoldina, grant LPDS 2020-10. Moreover, R.H. acknowledges support by the Agence Nationale de la Recherche, Project BORDS, grant ANR-16-CE40-0027-01 and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the collaborative research center ``The Mathematics of Emerging Effects'' (CRC 1060, Projekt-ID 211504053) and the Hausdorff Center for Mathematics (GZ 2047/1, Projekt-ID 390685813). \amnew{M.L. is supported by the Italian Ministry of Education, University and Research (MIUR), in the framework of PRIN project 2017FKHBA8 001. A.M. is supported by the SingFlows project, grant ANR-18-CE40-0027 of the French National Research Agency (ANR).} \begin{refcontext}[sorting=nyt] \printbibliography \end{refcontext} \end{document} \section{Proof of Lemma \ref{lem:Expectations}} \label{sec:Expectations} \begin{proof} {We drop the index $i$ in the proof. Let $b\in {C}^\infty_c((0,t), \mathbb{R}^3)$. Recalling \eqref{eq:strato.xi_i.T_D}, we appeal to the Itô--Stratonovitch conversion formula \eqref{eq:Conversion.composition} (after extending the occurring functions of $\xi$ to the whole space $\mathbb{R}^3$) to deduce, with $\mathcal R = \sqrt{\mathcal{R}_2}$, \begin{align*} \mathbb{E} \left( \int_0^t b(s)\cdot \sqrt{\mathcal{R}_2(\xi(s))} \circ d B(s) \right)&= \mathbb{E}\left(\int_0^t \mathcal{R}(\xi(s))b(s) \circ d B(s) \right)\\ & = \mathbb{E}\left(\frac{1}{2} \int_0^t tr\left(\nabla_\xi \mathcal{R}(\xi(s)) b(s) \sigma_D(\xi(s)) \right) ds\right), \end{align*} where we used that the expectation of Itô integrals vanishes.
From \eqref{eq:R_2}, we have \begin{align} \label{eq:nabla.R} \nabla_\xi \sqrt{\mathcal{R}_2(\xi) }b=(\sqrt{\gamma_{rot,\parallel}} - \sqrt{\gamma_{rot}}) [(\xi\cdot b) \mathrm{Id} +\xi \otimes b] \end{align} and we recall from \eqref{eq:sigma} that $\sigma_D = \sqrt 2 [\xi]_M$ (with the notation \eqref{eq:[T]_M}). In particular, since $\sigma_D$ is skew-symmetric, \begin{align} tr\left(\nabla_\xi \mathcal{R}(\xi) b \sigma_D(\xi) \right) &= (\sqrt{\gamma_{rot,\parallel}} - \sqrt{\gamma_{rot}}) tr\left((\xi \otimes b) \sigma_D(\xi) \right) \\ &= \sqrt 2 (\sqrt{\gamma_{rot,\parallel}} - \sqrt{\gamma_{rot}}) ((\xi \times a_\alpha) \cdot b) (\xi \cdot a_\alpha) \end{align} for any orthonormal basis $(a_\alpha)$ of $\mathbb{R}^3$, with implicit summation over $\alpha$. Since $(\xi \cdot a_\alpha)\, \xi\times a_\alpha =\xi \times \xi=0$, we conclude \eqref{eq:torque.average}. \medskip Let $A\in {C}^\infty_c((0,t), \Sym_0(3))$. From \eqref{eq:R_2} and \eqref{def:mS_i} we deduce that \amnew{for any $T\in \mathbb{R}^3$ \begin{align} A: \left(\mathcal{S}(\xi) \sqrt{\mathcal{R}_2(\xi)}T\right) &= \gamma_E \gamma_{rot}^{-\frac 1 2} A: \left(T\times\xi \otimes \xi \right)= \gamma_E \gamma_{rot}^{-\frac 1 2} \left([\xi]_M A \xi \right)\cdot T , \\ \nabla_\xi \left([\xi]_M A \xi \right)&=[\xi]_M A-[A\xi]_M, \end{align} where we used, for the first line, that for any $u,v,w\in\mathbb{R}^3$, \begin{equation}\label{eq:cross_product} [u]_Mv=u\times v, \quad u\cdot (v\times w)=v \cdot(w\times u). \end{equation} } Thus, analogously to the above, \begin{align*} & \mathbb{E} \left(\int_0^t A(s) : \mathcal{S}(\xi(s)) \sqrt{\mathcal{R}_2(\xi(s))} \circ d B(s)\right) \\&= \frac{\gamma_E}{\sqrt{2\gamma_{rot}}} \int_0^t tr (([\xi(s)]_M A(s)-[A(s)\xi(s)]_M ) [\xi(s)]_M ) ds.
\end{align*} Using \eqref{eq:cross_product} and $[v]_M^\top=-[v]_M$ for any $v\in\mathbb{R}^3$, we get for any orthonormal basis $(a_\alpha)$ of $\mathbb{R}^3$ (again summing over $\alpha$) \begin{align*} tr (([\xi]_M A-[A\xi]_M ) [\xi]_M )&= \left([\xi]_M a_\alpha\right) \cdot \left([A\xi]_M a_\alpha\right)-\left([\xi]_M a_\alpha\right)\cdot \left(A[\xi]_M a_\alpha\right)\\ &=(\xi \times a_\alpha)\cdot [(A\xi) \times a_\alpha]-[A (\xi\times a_\alpha)]\cdot (\xi\times a_\alpha)\\ &=A\xi \cdot [a_\alpha \times (\xi\times a_\alpha)]- A: (\xi\times a_\alpha)\otimes (\xi\times a_\alpha)\\ &=A : \xi \otimes [a_\alpha \times (\xi\times a_\alpha)]- A: (\xi\times a_\alpha)\otimes (\xi\times a_\alpha)\\ &=A : (2\xi\otimes \xi)+ A : (\xi \otimes \xi - \mathrm{Id}). \end{align*} For the last line, we choose $a_1=\xi$ to get $a_\alpha \times (\xi\times a_\alpha)=\xi$ for $\alpha=2,3$ and \begin{align} \sum_\alpha (a_\alpha \times \xi) \otimes (\xi \times a_\alpha) &= \xi \otimes \xi -\mathrm{Id}, \end{align} which yields the desired result. } \end{proof} \section{Introduction} \label{sec:aim} Suspensions of non-spherical rigid Brownian particles in viscous fluids are prototypes of viscoelastic fluids. It is classical (see e.g. \cite{DoiEdwards88, KimKarilla13, Graham}) to model such complex fluids by an effective system consisting of a Fokker-Planck equation coupled to the Navier-Stokes equations which feature a viscoelastic stress $\sigma$ that depends on the particle density.
In the absence of external forces, such a model for rod-like particles reads in dimensionless form \begin{equation} \label{eq:full.model} \def\arraystretch{1.8} \left\{ \begin{array}{l} \displaystyle \partial_t f + \dv (u f) + \dv_{\xi} (P_{\xi^\perp} \nabla_x u \xi f) = \frac 1 {\mathrm{De}} \Delta_\xi f + \rh{\frac {\lambda_1} {\mathrm{De}}} \dv_x((\mathrm{Id} + \xi \otimes \xi) \nabla_x f), \\ \displaystyle \Re (\partial_t u + (u \cdot \nabla) u ) - \Delta u + \nabla p - \dv \sigma = 0, \\ \displaystyle \sigma = \sigma_v + \sigma_e = \rh{\lambda_2} \int_{\S^2} ((D u) : \xi \otimes \xi )\, \xi \otimes \xi \, d \xi + \rh{\frac {\lambda_3}{\mathrm{De}}} \int_{\S^2} (3 \xi \otimes \xi - \mathrm{Id}) f \, d \xi, \\ \displaystyle u(0,\cdot) = u_0, \quad f(0,\cdot) = f_0. \end{array} \right. \end{equation} Here $u(t,x)$ and $p(t,x)$ are the fluid velocity and pressure, and $f(t,x,\xi)$ is the density of particles at time $ \rhnew{t\geq 0} $, position $ \rhnew{x \in \mathbb{R}^3} $ and orientation $\xi \in \S^2$. \rhnew{Moreover, $P_{\xi^\perp}$ denotes the orthogonal projection in $\mathbb{R}^3$ to the subspace $\xi^\perp$.} \rh{\rhnew{Furthermore}, $\Re$, $\mathrm{De}$, $\lambda_1$, $\lambda_2$, $\lambda_3$ are dimensionless parameters, where $\Re$ is the Reynolds number and $\mathrm{De}$ is the Deborah number, which is the ratio between the observation time scale and the diffusion time scale for the particle orientation. The parameter $\lambda_1 \ll 1$ depends only on the aspect ratio of the rod-like particles. The parameters $\lambda_2, \lambda_3$ also depend on the diluteness of the suspension. It is usually argued that it is necessary for the validity of the Doi model that the particles can freely rotate, which means $n \ell^3 \ll 1$ (the so-called dilute regime in the terminology of \cite{GuazzelliHinch11}), where $n$ is the number density of the particles and $\ell$ the length of the rod-like particles.
In this dilute regime $\lambda_2 \ll 1$, $\lambda_3 \ll 1$.} \rh{The two parts of the viscoelastic stress, $\sigma_v$ and $\sigma_e$, are sometimes referred to as the viscous part and the elastic part, respectively. The viscous part already occurs for suspensions of non-Brownian spherical particles, for which it was first studied theoretically by Einstein \cite{Ein06} and later for ellipsoidal particles by Jeffery \cite{Jeffery22}. The viscous stress $\sigma_v$ as in \eqref{eq:full.model} can be obtained from Jeffery's computation as the leading order term for very elongated ellipsoids. Jeffery also computed the periodic orbits (Jeffery orbits) of such particles in a constant shear flow, taking into account that non-spherical inertialess particles are partly affected by the pure straining motion of the fluid. In the case of rod-like particles, the symmetric and skew-symmetric parts of the gradient of the fluid velocity contribute equally to the particle rotation (asymptotically for infinite aspect ratio), as reflected by the full gradient in the third term of the Fokker-Planck equation in \eqref{eq:full.model}. The elastic part of the stress only arises for Brownian non-spherical particles. Theoretical studies on such elastic stresses go back to the 1940s, see e.g. \cite{Shima40,KuhnKuhn45,KirkwoodRiseman48} and also the later works \cite{HinchLeal71, HinchLeal72, Brenner74} and the references therein. To our knowledge, the elastic stress $\sigma_e$ has so far not been obtained rigorously from a corresponding microscopic model. On the contrary, the scaling up has been described as \emph{mysterious} by Constantin and Masmoudi in \cite{constantin2008global}. In the present paper, we provide such a mathematically rigorous derivation of the elastic stress starting from a simplified microscopic system.
We will treat both the case of Deborah numbers $\mathrm{De}$ of order one and the case of very small Deborah numbers.} \subsection{Previous mathematical results} In the mathematical literature, the model \eqref{eq:full.model} is often called the Doi model and has attracted a lot of attention over the last years. Global well-posedness of such models has been studied in different contexts regarding the solution concept, the space dimension and the model for the fluid, where in addition to the incompressible Navier-Stokes equations also the Stokes equations and the compressible Navier-Stokes equations have been considered. We refer to \cite{constantin2005nonlinear, constantin2007regularity, constantin2008global,constantin2009holder, constantin2010global, OttoTzavaras08, BaeTrivisa12, BaeTrivisa13}. In \cite{HelzelTzavaras17, helzel2006multiscale} some further insights on rod-like suspensions are presented when effects of gravity are included. In \cite{saintillan2008instabilities}, \cite{saintillan2008instabilities2} a generalization of the Doi model for active particles is introduced. Existence of global weak entropy solutions for this generalization of the Doi model is studied in \cite{chen2013global}. \rh{All the previously mentioned papers neglect the presence of the additional viscous stress $\sigma_v$, i.e., they pretend $\sigma = \sigma_e$. Global well-posedness for the Doi model including the full viscoelastic stress $\sigma$ is treated in \cite{LionsMasmoudi07, ZhangZhang08, La19}.} \medskip \rh{Regarding the derivation of the Doi model as a rigorous mean-field limit, there are only partial results so far. On the one hand, the viscous stress has been obtained in the quasistatic case. On the other hand, fully coupled systems have been obtained for non-Brownian spherical particles. In all these results the fluid is modeled by the stationary Stokes equations instead of the Navier-Stokes equations.
When the time evolution of the particles is neglected, the Stokes equations with a viscous stress corresponding to $\sigma_v$ above have recently been obtained rigorously as homogenization limits for spherical particles in \cite{NiethammerSchubert19} and subsequently for arbitrary shapes of the particles in \cite{HillairetWu19}. These results have been refined in \cite{ DuerinckxGloria20, DuerinckxGloria21, Duerinckx20,Gerard-VaretHillairet19, Gerard-VaretHoefer21, Gerard-VaretMecherbet20}.} Dynamical models regarding the sedimentation of inertialess non-Brownian spherical particles have been rigorously derived in \cite{JabinOtto04, Hofer18MeanField,Mecherbet18, Hofer&Schubert}. \medskip \am{ \rh{Another prominent example of viscoelastic fluids are suspensions of flexible polymers, which are typically modelled by chains of monomers connected by springs, see e.g. \cite{BAH87, BCAH87, DoiEdwards88}.} We refer to \cite{LL07,LL11} for a \rh{mathematical} introduction to the modelling of such fluids. The corresponding \rh{viscoelastic} stress tensor $\sigma$ is given by the so-called Kramers expression, which takes into account the polymer chain configurations and the force needed to extend or to compress the springs. In particular, two models can be found in the literature regarding the force modelling the polymer extension: the potential associated with Hooke's law, for which the force is proportional to the length of the chain, and the FENE (Finitely Extensible Nonlinear Elastic) potential, which takes into account the finite extensibility of the chain. A simplified model, the so-called dumbbell model, consists in assuming that the polymer is constituted of only two monomers, see \cite[Subsections 2.3 and 2.4]{LL11} for more details. We refer to \cite{JLL04,JLL02,JLLO06} and the references therein for well-posedness, long-time behaviour and numerical investigations of such models.
} \medskip Another popular model for the evolution of particles depending on their orientation is the class of so-called Vicsek models. In these models, the particle evolution is also described by a (kinetic) Fokker-Planck equation. The particles (which might be microswimmers, but also birds, fish or pedestrians) move in the direction of their orientation. Moreover, the particles change their orientation due to Brownian motion and interaction with each other. This interaction is given in terms of an interaction kernel instead of the interaction through the fluid in the Doi model. Vicsek models have been studied mathematically, including well-posedness, rigorous mean-field results and long-time behavior; we refer for instance to \cite{FigalliKangMorales15, GambaKang16, BriantDiezMerino21}. \subsection{Heuristics} \label{sec:heuristics} Let us give a heuristic argument for the production of the elastic stress due to Brownian rod-like particles (general non-spherical particles are analogous). A similar argument can be found in \cite[p. 309]{DoiEdwards88}. \begin{figure} \centering \begingroup \tikzset{every picture/.style={scale=0.7}}% \begin{subfigure}[b]{.3\textwidth} \centering \input{turning-rods} \caption{Rotations due to fluid\\velocity gradients} \label{fig:rotating.rod.gradient} \end{subfigure} \begin{subfigure}[b]{.3\textwidth} \centering \input{Turning-rods-rotated} \caption{Stresslet caused by\\rotations} \label{fig:rotating.rod.rotated} \end{subfigure} \begin{subfigure}[b]{.3\textwidth} \centering \input{Viscoelastic-stress} \caption{Average stresslet due to Brownian motion} \label{fig:rotating.rod.stress} \end{subfigure} \caption{Heuristic explanation of the viscoelastic stress arising from rotational Brownian motion} \endgroup \label{fig:rotating.rod} \end{figure} Consider a rod oriented in some direction as in Figure \ref{fig:rotating.rod.gradient}.
Since the rod is inertialess, it follows the fluid flow and consequently rotates if the surrounding fluid is rotating as well. However, as we see in the lower figure, it also rotates under purely straining motion of the fluid. This is the reason why the full gradient of $u$ appears in the third term of the Fokker-Planck equation in \eqref{eq:full.model} and not only its skew-symmetric part. This is directly related (through the symmetry of the resistance tensor, see Subsection \ref{sec:resistance}) to the fact that a rigid rod-like particle subject to a torque needs to exert a stresslet on the fluid in order to maintain its shape. Therefore, Brownian torques on the particles arising from thermal noise lead to corresponding Brownian stresslets. It can then be argued that there is an average net stresslet at each of the particles as shown in Figure \ref{fig:rotating.rod.stress}, which gives rise to the elastic stress $\sigma_e$. There is one additional subtlety in the above argument: By linearity of the Stokes equations, the instantaneous stresslet produced by a torque which corresponds to a rotation to the left (of a vertically oriented rod) is exactly the negative of the effect of a torque on the same rod corresponding to a rotation to the right. Due to their random nature, such torques occur with equal probability and therefore seem to cancel out. However, since the random torques produce a Brownian motion of the particle orientation, one has to take into account quadratic effects: One has to consider the stresslets produced by such torques after the rod has already started to rotate, as visualized in Figure \ref{fig:rotating.rod.rotated}. Summing these contributions leads indeed to the effective stresslet as in Figure \ref{fig:rotating.rod.stress}.
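This relaxation mechanism can be illustrated numerically. The following sketch is not part of the analysis; it simulates the rotational Brownian motion of the orientation by composing small random rotations (all parameter values are illustrative choices) and shows how the orientational anisotropy $3\langle \xi_3^2\rangle - 1$, which sources the average stresslet in Figure \ref{fig:rotating.rod.stress}, is relaxed by rotational diffusion:

```python
import numpy as np

rng = np.random.default_rng(0)

def rotate(xi, v):
    """Rodrigues rotation of the unit vectors xi about the axes v by the angles |v|."""
    th = np.linalg.norm(v, axis=-1, keepdims=True)
    k = v / np.where(th > 0, th, 1.0)
    c, s = np.cos(th), np.sin(th)
    return c * xi + s * np.cross(k, xi) + (1 - c) * k * np.sum(k * xi, -1, keepdims=True)

def simulate(n=4000, t=0.2, dt=2e-3):
    """Rotational Brownian motion d xi = sqrt(2) xi x dB (Stratonovitch sense),
    realized by composing small random rotations; all rods start along e_3."""
    xi = np.tile(np.array([0.0, 0.0, 1.0]), (n, 1))
    for _ in range(int(round(t / dt))):
        xi = rotate(xi, np.sqrt(2 * dt) * rng.standard_normal((n, 3)))
    return xi

xi = simulate()
# The orientational order 3<xi_3^2> - 1 relaxes like 2 exp(-6t): the anisotropy
# of the orientation distribution decays by rotational diffusion.
order = 3 * np.mean(xi[:, 2] ** 2) - 1
print(order)  # roughly 2*exp(-1.2), i.e. about 0.60
```

Since the generator of this diffusion is the Laplace--Beltrami operator $\Delta_\xi$ appearing in \eqref{eq:full.model}, degree-two spherical harmonics decay at the rate $l(l+1) = 6$, which is what the printed value reflects.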
We summarize that there are three key ingredients that cause the elastic part of the stress: 1) the Brownian torque on the fluid; 2) the corresponding Brownian motion of the particle orientation; 3) the particle anisotropy. We refer to Subsection \ref{subsec:proof_strategy} for a more formalized version of this heuristic argument. \subsection{Formal statement of the main results} The rigorous derivation of the complete system \eqref{eq:full.model} from a microscopic description seems to be completely out of reach for the moment due to the highly singular interaction of the particles. Instead, we consider a simplified microscopic model keeping those aspects that produce the viscoelastic stress: First, we model the fluid by the stationary Stokes equations with no-slip conditions and balance laws on each particle. Second, we freeze the time evolution of the particle centers and assume that there is no exchange of net force between the particles and the fluid. Third, we model the evolution of the particle orientations as if each particle were alone in an infinite fluid. Correspondingly, the Brownian torques at each particle are independent of each other. We also include an external torque acting on the particles, which could be related to an external fluid flow, a magnetic force or chemotaxis. We mainly consider such a torque in order to have a nontrivial particle distribution for very small Deborah numbers $\mathrm{De} \to 0$. For simplicity, we do not include the effect of these torques on the fluid.
After non-dimensionalization, this leads to the system \begin{subequations} \label{eq:micro.T_D.intro} \begin{equation} \label{eq:Stokes.micro.T_D.intro} \left \{ \begin{array}{rcl} \displaystyle - \Delta u_n+ \nabla p_n&=& 0 \quad \text{ in }\mathbb{R}^3 \setminus \bigcup_{i=1}^n \mathcal{B}_i(s),\\ \displaystyle \dv u_n&=&0 \quad \text{ in }\mathbb{R}^3 \setminus \bigcup_{i=1}^n \mathcal{B}_i,\\ \displaystyle D u_n&=&0 \quad \text{ in } \bigcup_{i=1}^n \mathcal{B}_i, \\ \displaystyle \int_{\partial \mathcal{B}_i} \Sigma(u_n,p_n) \nu & = &0, \\ \displaystyle \int_{\partial \mathcal{B}_i} [\Sigma(u_n,p_n) \nu ] \times (x-x_i) &=& \frac {\phi_n} n \sqrt{2 \gamma_{rot}\mathcal{R}_2(\xi_i(s))} \circ \dot B_i(s), \end{array} \right. \end{equation} \begin{align} \label{eq:Particles.T_D.intro} \left \{ \begin{array}{rcl} \displaystyle \, d \xi_i(s) &=& \sqrt{2} \xi_i(s) \times \circ \, d B_i(s) + P_{\xi_i^\perp} h(s,\xi_i(s),x_i) \, d s , \\ \xi_i(0) &=& \xi_{i,0}. \end{array} \right. \end{align} \end{subequations} Here, the particles $\mathcal{B}_i$, $1 \leq i \leq n$, are obtained from an arbitrary axisymmetric reference particle (not necessarily rod-like) by translation, rotation and rescaling with a factor $r$ which only depends on the number of particles $n$. Moreover, $\phi_n = n r^3$ is the volume fraction of the particles, $x_i \in \mathbb{R}^3$ and $\xi_i \in \S^2$ are the particle centers and orientations, $\mathcal{R}_2(\xi_i)$ and $\gamma_{rot}$ are related to the particle resistance to rotation, $D u_n$ is the symmetric gradient of $u_n$, $\Sigma(u_n,p_n)$ is the fluid stress, and $\nu$ is the outer unit normal. Moreover, $B_i$ are Brownian motions, $\dot B_i$ the corresponding white noises, and $\circ$ indicates a product in the Stratonovitch sense. Finally, $h$ is a given function related to an external torque.
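To illustrate the orientation equation \eqref{eq:Particles.T_D.intro}, the following sketch (not from the analysis; the projected Euler scheme, the constant torque field $h$, and all parameter values are assumptions made for this example) integrates $d\xi = \sqrt 2\, \xi \times \circ\, dB + P_{\xi^\perp} h \, ds$ and compares the long-time average of $\xi \cdot e_3$ with the prediction of the stationary density $f \propto e^{h\cdot\xi}$ of the corresponding Fokker-Planck equation:

```python
import numpy as np

rng = np.random.default_rng(1)

def step(xi, h, dt):
    """One projected Euler step for d xi = sqrt(2) xi x dB + P_{xi^perp} h ds;
    renormalizing after each step keeps xi on the unit sphere."""
    eta = rng.standard_normal(xi.shape)
    drift = h - xi * np.sum(xi * h, axis=-1, keepdims=True)  # P_{xi^perp} h
    xi = xi + np.sqrt(2 * dt) * np.cross(xi, eta) + dt * drift
    return xi / np.linalg.norm(xi, axis=-1, keepdims=True)

alpha = 2.0
h = np.array([0.0, 0.0, alpha])      # constant external torque field (assumption)
xi = rng.standard_normal((2000, 3))  # independent particles, random initial data
xi /= np.linalg.norm(xi, axis=-1, keepdims=True)
for _ in range(600):                 # run to t = 3 with dt = 5e-3
    xi = step(xi, h, 5e-3)
# The stationary Fokker-Planck density is proportional to exp(h . xi), so the
# mean of xi_3 should be close to coth(alpha) - 1/alpha, about 0.537 here.
print(np.mean(xi[:, 2]))
```

One checks that $f \propto e^{h \cdot \xi}$ (for $h$ constant) indeed satisfies $\dv_\xi(P_{\xi^\perp} h f - \nabla_\xi f) = 0$, since $\nabla_\xi e^{h\cdot\xi} = P_{\xi^\perp} h\, e^{h\cdot\xi}$ on $\S^2$.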
\rh{We emphasize that the equation for the fluid \eqref{eq:Stokes.micro.T_D.intro} implicitly depends on time through the particle orientations as well as the random torques. The latter also prevent an interpretation of this equation pointwise in time.} For details about the modeling we refer to Section \ref{section2} where we obtain this system (after non-dimensionalization) as a simplified version of a more physically accurate microscopic system. We refer to Subsection \ref{subsec:discussion} for a discussion of the difficulties in treating such a more accurate system. A benefit of this simplified system is that it reveals very clearly that the elastic part of the stress arises because of the interplay of the rotational Brownian motion, the Brownian torque on the fluid and the particle anisotropy, as explained in the previous subsection. We emphasize that the ratio between the viscoelastic time scale and the rotational diffusion time scale is of the order of the volume fraction of the particles $\phi_n$, which corresponds to the parameter $\lambda_3$ in \eqref{eq:full.model}. Our methods restrict us to assume $\phi_n \to 0$ (see Subsection \ref{sec:assumptions} for the precise assumptions). However, we emphasize that we consider in this paper two fundamentally different scalings: \begin{enumerate}[label=(\roman*)., ref=(\roman*)] \item The case of Deborah numbers of order $1$: Here we consider the observation timescale to be comparable to the diffusion time scale; hence the rescaled viscoelastic stress term is small whereas the rescaled diffusion coefficient is of order one ($\gamma_{rot}$ is of order one). This case corresponds to the system \eqref{eq:micro.T_D.intro} above.
\item Very small Deborah numbers, where we consider the observation timescale to be comparable to the viscoelastic time scale; hence the rescaled viscoelastic stress term is of order one whereas the rescaled diffusion coefficient is large, see \eqref{eq:Stokes.micro.T_u}, \eqref{eq:Particles.T_u} for the corresponding microscopic system. \end{enumerate} In the first case, we obtain that the empirical measure of the particles converges to the solution of the instationary Fokker-Planck equation \begin{equation}\label{eq:Fokker-Planck.instationary} \left\{ \begin{array}{rcl} \partial_t f + \dv_\xi (P_{\xi^\perp} h f - \nabla_\xi f ) &=& 0, \\ f(0,\cdot) &=& f_0. \end{array} \right. \end{equation} Since in this case the total viscoelastic stress, which is the only source term for the fluid equation, is of order $\phi_n$, we consider rescaled fluid velocities $\phi_n^{-1} u_n$. We show that this rescaled sequence $\phi_n^{-1} u_n$ converges to the solution of the Stokes equations with an additional viscoelastic stress, namely \begin{align} {\def\arraystretch{1.6}} \label{eq:viscoelastic} \left\{ \begin{array}{l} \displaystyle -\Delta u + \nabla p = \dv \sigma, \qquad \dv u = 0,\\ \displaystyle \sigma(t,x) = \int \gamma_E (\mathrm{Id} - 3 \xi \otimes \xi) f(t,x,\xi) \, d \xi. \end{array} \right. \end{align} The parameter $\gamma_E \in \mathbb{R}$ depends only on the shape of the reference particle. We emphasize that the viscoelastic stress $\sigma$ here corresponds to the elastic part $\sigma_e$ in \eqref{eq:full.model}. The viscous part does not appear in this case because its effect is of order $\phi_n^2$. It is classical that the viscous stress produced by the particles is proportional to their volume fraction to leading order. In our case, however, since the elastic stress, as the only source term, is also of order $\phi_n$, the total effect of the viscous stress is indeed quadratic in the volume fraction.
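For a concrete feel for the limit stress in \eqref{eq:viscoelastic}, the following sketch (illustrative only; the sample sizes and the value $\gamma_E = 1$ are arbitrary choices) evaluates the Monte Carlo analogue of $\sigma = \gamma_E \int (\mathrm{Id} - 3\, \xi \otimes \xi) f \, d\xi$: it vanishes for isotropic orientations and is uniaxial and trace-free for fully aligned rods:

```python
import numpy as np

rng = np.random.default_rng(2)

def elastic_stress(xi, gamma_E=1.0):
    """Monte Carlo analogue of sigma = gamma_E int (Id - 3 xi (x) xi) f dxi
    for orientations xi sampled from the density f."""
    M = np.mean(xi[:, :, None] * xi[:, None, :], axis=0)  # E[xi (x) xi]
    return gamma_E * (np.eye(3) - 3 * M)

# Isotropic orientations: E[xi (x) xi] = Id/3, so the stress vanishes.
xi = rng.standard_normal((200000, 3))
xi /= np.linalg.norm(xi, axis=1, keepdims=True)
print(np.max(np.abs(elastic_stress(xi))))   # close to 0

# Fully aligned rods give the uniaxial, trace-free stress diag(1, 1, -2).
aligned = np.tile([0.0, 0.0, 1.0], (10, 1))
print(elastic_stress(aligned))
```

Note that $\sigma$ is trace-free for any orientation distribution, since $\operatorname{tr}(\xi \otimes \xi) = |\xi|^2 = 1$ on $\S^2$.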
In the second case, which corresponds to very small Deborah numbers $\mathrm{De} \to 0$, we obtain in the limit the (quasi-)stationary Fokker-Planck equation for the particles \begin{align} \label{eq:Fokker-Planck.stationary} \left\{ \def\arraystretch{1.6} \begin{array}{rcl} \dv_\xi (P_{\xi^\perp} h f - \nabla_\xi f ) &=& 0, \\ \displaystyle \int f(t,\cdot) \, d \xi &=& \int f_0 \, d \xi. \end{array} \right. \end{align} In this case, the fluid velocity $u_n$ itself converges to the solution to \eqref{eq:viscoelastic}. The Fokker-Planck equation \eqref{eq:Fokker-Planck.stationary} depends on time only through $h$. \rhnew{Since the equation is elliptic, the initial configuration $f_0$ only enters as a constraint on the spatial density $\int f(t,\cdot) \, d \xi$, which is constant in time because the particles do not move in space. However, the solution $f$ to \eqref{eq:Fokker-Planck.stationary} does not, in general, satisfy $f(0,\cdot) = f_0$. This discrepancy in the initial data arises from the fast diffusion for very small Deborah numbers which creates an initial layer for the solution of the microscopic particle density. This initial layer can be related to the instationary Fokker-Planck equation \eqref{eq:Fokker-Planck.instationary} in the sense that $f(0,\cdot)$ is given as the long-time limit $\lim_{t\to \infty} \tilde f(t)$ of the solution $\tilde f$ to \eqref{eq:Fokker-Planck.instationary} with $h$ replaced by $\tilde h(t,\cdot) = h(0,\cdot)$.} \medskip We emphasize that already making sense of the equations for the fluid velocity $u_n$ in \eqref{eq:micro.T_D.intro} is non-trivial due to the Stratonovitch white noise in the torque that acts as a boundary condition for the Stokes equations. We overcome this issue by using the linearity of the problem in order to define $u_n$ through a Hilbert-space-valued Stratonovitch integral, see Subsection \ref{th:well-posed}.
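The white-noise character of the torques indicates which time regularity one can hope for. As a back-of-the-envelope check (a standard computation, not part of the proofs: we assume white noise on an interval with i.i.d. standard Gaussian Fourier coefficients), the expected squared $H^{-s}$ norm is $\sum_k (1+k^2)^{-s}$, which is finite precisely for $s > \frac12$:

```python
import numpy as np

def expected_Hs_norm_sq(s, kmax):
    """E ||white noise||_{H^{-s}}^2 = sum_{k <= kmax} (1 + k^2)^(-s)
    for i.i.d. standard Gaussian Fourier coefficients."""
    k = np.arange(1, kmax + 1, dtype=float)
    return np.sum((1 + k**2) ** (-s))

# s > 1/2: the tail of the series is negligible (the norm is finite).
tail_convergent = expected_Hs_norm_sq(0.75, 10**5) - expected_Hs_norm_sq(0.75, 10**4)
# s = 1/2: each decade adds about log(10) = 2.3, so the series diverges.
tail_critical = expected_Hs_norm_sq(0.5, 10**5) - expected_Hs_norm_sq(0.5, 10**4)
print(tail_convergent, tail_critical)
```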
For the regularity in time of $u_n$ we obtain the optimal regularity $H^{-s}$, $s>\frac{1}{2}$, which corresponds to the regularity of white noise. For homogenization problems of the Stokes equations in perforated domains where the stresslets at the particles produce a nontrivial term in the limit, it is classical to obtain $L^p$-convergence in space of the fluid velocity for $p < 3/2$. Due to issues with Banach-valued stochastic integrals, we will work in negative Sobolev spaces $H^{-s}$, $s>\frac{1}{2}$, with respect to time and space. Moreover, the Stratonovitch nature of the white noise makes it necessary to consider shape derivatives of the solution to the Stokes equations in perforated domains with prescribed torques. Such estimates and approximations for this solution will be obtained through the method of reflections that has already been used in several related works (e.g. \cite{Hofer18MeanField, Mecherbet18, NiethammerSchubert19}) but will here be used for the first time to estimate shape derivatives. \subsection{Organization of the rest of the paper} The rest of the paper is organized as follows: \begin{itemize} \item \rh{Section 2 is devoted to the modeling of the microscopic system that eventually leads to \eqref{eq:micro.T_D.intro}. After specifying the assumptions on the particle shape and recalling properties of the grand-resistance tensor for a single (axisymmetric) particle in Subsections \ref{sec:particles} and \ref{sec:resistance}, we introduce a full microscopic model of inertialess Brownian particles in a Stokes flow in Subsection \ref{sec:dynamics}. Subsequent simplifications and nondimensionalization in Subsections \ref{sec:simplification} and \ref{sec:nondim} then lead to \eqref{eq:micro.T_D.intro}.} \item In Section \ref{section3} we first present the main assumptions on the initial particle configuration and specify the notion of solutions to \eqref{eq:micro.T_D.intro}.
The main convergence results are then stated in Subsection \ref{sec:convergence}. In Subsection \ref{subsec:notatation} we introduce the main notations used in this paper. We summarize the key steps of the proof in Subsection \ref{subsec:proof_strategy} and we discuss the limitations and possible generalizations of our approach in Subsection \ref{subsec:discussion}. \item Sections \ref{sec:L_n}--\ref{section6} are devoted to the proof of the main results. For a more detailed outline of these sections and the appendices, we refer to Subsection \ref{subsec:proof_strategy}. \end{itemize} \section{Passage to the limit for very small Deborah numbers}\label{section6} In this section, we prove Theorem \ref{th:main.T_u}, i.e. the passage to the quasi-stationary system \eqref{eq:Fokker-Planck.stationary}, \eqref{eq:viscoelastic} starting from the microscopic dynamics \eqref{eq:Stokes.micro.T_u}--\eqref{eq:Particles.T_u}. Large parts of the proof are analogous to the proof of Theorem \ref{th:main.T_D} given in the previous section. The main difference concerns the tightness of the law of the empirical measure $S_n$ (defined in \eqref{eq:empirical_measure}). Indeed, we cannot expect tightness in $C([0,T];\mathcal{P}_1(\mathbb{R}^3 \times \S^2))$ since the solution to the limit problem \eqref{eq:Fokker-Planck.stationary} is discontinuous at time $0$. This discontinuity arises from the fast diffusion that induces a boundary layer at the initial time. This fast diffusion also makes tightness in $C([\delta,T];\mathcal{P}_1(\mathbb{R}^3 \times \S^2))$ difficult to obtain. We therefore work in a weaker space instead, namely $H^{0_-}((0,T);H^{-3/2_-}(\mathbb{R}^3 \times \S^2))$. \begin{lem}\label{lem:tightness_Sn.T_u} Let $T> 0$ and let $S_n$ be the empirical measure defined in \eqref{eq:empirical_measure}. Then, the family of laws $\{Q^{S_n}\}$ of the empirical measures $S_n$ is tight in the space $H^{0_-}((0,T);H^{-3/2_-}(\mathbb{R}^3 \times \S^2))$.
\end{lem} \begin{proof} We note that since $H^{3/2_+}(\mathbb{R}^3 \times \S^2)$ is compactly embedded into $C^{0}(\mathbb{R}^3 \times \S^2)$, the empirical measures $S_n$ are uniformly bounded in $L^\infty((0,T);H^{-3/2_-}(\mathbb{R}^3 \times \S^2))$, and bounded sets of this space are relatively compact in $H^{0_-}((0,T);H^{-3/2_-}(\mathbb{R}^3 \times \S^2))$. \end{proof} Since the fast diffusion is balanced by the different scaling of $u_n$, tightness of the law of $u_n$ follows completely analogously to the previous section. \begin{lem}\label{lem:bound_un.T_u} Let \begin{align}\label{def:U_n,app_T_u} u_{n,app} &:= U_{n,app}', \\ U_{n,app}(t) &:= \frac {\sqrt{\am{2 \gamma_{rot}\phi_n}}} n \int_0^t L_{n,app}(\Xi(s)) \sqrt{\mathfrak R_2(\Xi(s))} \circ \, d (B_1(s), \dots, B_n(s)). \end{align} Then, $u_{n,app} \in L^2(\Omega;H^{-s}(0,T;H^{-s}_{\mathfrak s}(\mathbb{R}^3)))$ is well-defined and there exists $N_0 \in \mathbb{N}$ such that for all $s>\frac12$ and all $n \geq N_0$ \begin{align}\label{eq:lem_bound_un.T_u_1} \mathbb{E}\left[\norm{u_n}^2_{H^{-s}((0,T); H^{-s}_w(\mathbb{R}^3))}\right]&\leq C, \\ \label{eq:lem_bound_un.T_u_2} \lim_{n\to\infty}\mathbb{E}\left[\norm{u_n-u_{n,app}}^2_{H^{-s}((0,T); H^{-s}_w(\mathbb{R}^3))}\right]&=0. \end{align} In particular, the family of laws $\{Q^{u_n}\}$ of $u_n$, defined through \eqref{def:U_n}--\eqref{eq:u_n.U_n}, is tight in the space $H^{-s}((0,T),H^{-s}_{\mathfrak s}(\mathbb{R}^3))$. \end{lem} \medskip In order to pass to the limit, we consider again functionals $\bar \Psi_\psi(g), \Phi_\varphi(v,g), \Theta_\theta(g)$ for $(v,g) \in H^{-s}((0,T); H^{-s}_{\mathfrak s}(\mathbb{R}^3))\times H^{0_-}((0,T);H^{-3/2_-})$ and $\varphi\in C^\infty_c((0,T)\times \mathbb{R}^3)$ with $\dv \varphi = 0$, $\psi\in C^\infty_c((0,T)\times \mathbb{R}^3\times\S^2)$ and $\theta \in C_c^\infty((0,T) \times \mathbb{R}^3)$.
Here, $\Phi_\varphi$ is still given by \eqref{eq:formal.approximation_weak}, while $\bar \Psi_\psi$ and $\Theta_\theta$ are defined as \begin{align}\label{eq:empiricalmeasure_weak_stationnary} \bar \Psi_\psi(g)&:=\langle g,\Delta_\xi \psi \rhnew{+}\nabla_\xi\psi \cdot P_{\xi^\perp} h\rangle, \\ \label{eq:def_Theta} \Theta_\theta(g) &:= \langle f_0 - g, \theta \otimes 1 \rangle. \end{align} Analogously to Lemma \ref{lem:identity_S_n}, we have the following. \begin{lem} \label{lem:identity_S_n.T_u} For every test function $\psi\in C^\infty_c((0,T)\times\mathbb{R}^3\times \S^2)$, the empirical measure $S_n$ satisfies the following identity, \begin{align*} \bar \Psi_\psi(S_n) &= \rhnew{-} \phi_n\int_0^T\langle S_n(t),\partial_t \psi(t,x,\xi)\rangle \, d t \rhnew{-} \int_0^T \frac{1}{n}\sum_{i=1}^n \nabla \psi(t,x_i,\xi_i(t))\sigma_D(\xi_i(t)) \, d B_i(t), \end{align*} with $\sigma_D$ as in \eqref{eq:sigma}. \end{lem} The following proposition is the analogue of Proposition \ref{prop:Q_sol_delta}. \begin{prop}\label{prop:Q_sol_delta.T_u} For each $\delta>0$, for each $\varphi\in C^\infty_c((0,T),\mathbb{R}^3)$ with $\dv \varphi = 0$, $\psi\in C^\infty_c((0,T)\times \mathbb{R}^3\times\S^2)$ and $\theta \in C_c^\infty((0,T) \times \mathbb{R}^3)$ \begin{multline*} \lim_{n\to\infty} Q^{n}\Big((v,g)\in H^{-s}((0,T); H^{-s}_{\mathfrak s}(\mathbb{R}^3))\times H^{0_-}((0,T);H^{-3/2_-}(\mathbb{R}^3 \times \S^2)) :\\ |\Phi_\varphi(v,g)|+|\bar \Psi_\psi(g)| + |\Theta_\theta(g)|>\delta\Big)=0.
\end{multline*} \end{prop} \begin{proof} The proof is analogous to the one of Proposition \ref{prop:Q_sol_delta}.\am{ Note in particular that Lemma \ref{lem:Expectations} has to be adapted accordingly since $\Xi$ satisfies \eqref{eq:Particles.T_u} instead of \eqref{eq:Particles.T_D}, which yields the appearance of a factor $\sqrt{\phi_n}^{-1}$ in \eqref{eq:stress.average} that is compensated by the factor $\sqrt{\phi_n}$ appearing in the definition of $U_{n,app}$, see \eqref{def:U_n,app_T_u}.} To deal with the additional functional $\Theta_\theta$, we notice that \begin{align} \Theta_\theta(S_n) = \int_0^T \langle \frac 1 n \sum_i \delta_{x_i, \xi_{i,0}} - f_0, \theta(t,\cdot) \otimes 1 \rangle \, d t, \end{align} and therefore this functional can be handled due to assumption \eqref{ass:initial.convergence}. \end{proof} \begin{proof}[Proof of Theorem \ref{th:main.T_u}] The proof is completely analogous to the proof of Theorem \ref{th:main.T_D}. The only difference is that, regarding the uniqueness, we rely on Theorem \ref{thm:uniqueness.stationary} below instead of Theorem \ref{thm:uniqueness.instationary}. \end{proof} \begin{thm} \label{thm:uniqueness.stationary} Let $f_0\in L^2(\S^2\times \mathbb{R}^3)$ and let $g \in H^{0_-}((0,T);H^{-3/2_-})$ satisfy $\bar \Psi_\psi(g) = 0$, $\Theta_\theta(g)= 0$ for all $\psi \in C_c^\infty((0,T) \times \mathbb{R}^3 \times \S^2)$ and $\theta \in C_c^\infty((0,T) \times \mathbb{R}^3)$. \am{Then, $g$ is the unique weak solution to \eqref{eq:Fokker-Planck.stationary} in $L^2((0,T)\times \S^2\times \mathbb{R}^3)$ such that for almost all $x\in \mathbb{R}^3$, $g(\cdot,\cdot,x)\in C^\infty((0,T)\times \S^2)$.} \end{thm} \section{Passage to the limit for Deborah numbers of order \texorpdfstring{$1$}{1} }\label{section5} In this section we prove Theorem \ref{th:main.T_D}. We recall the strategy from Subsection \ref{subsec:proof_strategy}.
First, we show that the laws of the empirical measure of the particles and of the fluid velocity field respectively are tight in suitable function spaces. For $S_n$ this is classical but we include the proof for completeness in Subsection \ref{sec:tightness.S_n}. Tightness of $\phi_n^{-1} u_n$, which we show in Subsection \ref{sec:tightness.u_n}, follows from the estimates in Section \ref{sec:L_n} which also allow us to replace $u_n$ by more explicit functions $u_{n,app}$. The tightness of the laws implies weak convergence along subsequences by the Prokhorov Theorem. To conclude the proof of Theorem \ref{th:main.T_D} we introduce certain functionals in Subsection \ref{sec:sub.proof.main.T_D}. As we show later in Appendix \ref{appendixA}, these functionals vanish precisely on the solution of the desired limit system \eqref{eq:Fokker-Planck.instationary}, \eqref{eq:viscoelastic}. For the proof of Theorem \ref{th:main.T_D} it therefore remains to show that the laws of the microscopic system concentrate on the zeroes of these functionals as $n \to \infty$. \subsection{Tightness of \texorpdfstring{$S_n$}{Sn}}\label{sec:tightness.S_n} \begin{lem}\label{lem:tightness_Sn} Let $T> 0$ and let $S_n$ be the empirical measure defined in \eqref{eq:empirical_measure}. Then, the family of laws $\{Q^{S_n}\}$ of the empirical measures $S_n$ is tight in the space $C([0,T],\mathcal{P}_1(\mathbb{R}^3\times \S^2))$. \end{lem} \begin{proof} By standard arguments, tightness follows from the uniform bounds \begin{equation}\label{eq:lem_bound_Sn_1} \mathbb{E}\left[\sup_{t\in[0,T]}\int_{\mathbb{R}^3}\int_{\S^2}(|x|+|\xi|)S_n(t)(dx,d\xi)\right]\leq C, \end{equation} \begin{equation}\label{eq:lem_bound_Sn_2} \mathbb{E}\left[\int_0^T\int_0^T \frac{\mathcal{W}_1(S_n(t_1),S_n(t_2))^p}{|t_1-t_2|^{1+s p}} \, d t_1 \, d t_2\right]\leq C, \end{equation} for some $s\in (0,\frac12)$, $p > s^{-1}$. 
Indeed, by the Ascoli-Arzelà theorem in metric spaces, the set \begin{multline*} \mathcal{K}_{M,R}=\left\{f\in C([0,T],\mathcal{P}_1(\mathbb{R}^3\times \S^2))\,\,s.t.\sup_{t\in[0,T]} \int_{\mathbb{R}^3}\int_{\S^2}(|x|+|\xi|)f_t \, d \xi \, d x \leq M,\right.\\\left. \int_0^T\int_0^T \frac{\mathcal{W}_1(f(t_1),f(t_2))^p}{|t_1-t_2|^{1+s p}}\, d t_1 \, d t_2\leq R\right\} \end{multline*} is relatively compact in $C([0,T],\mathcal{P}_1(\mathbb{R}^3\times \S^2))$. Indeed, for $f \in \mathcal K_{M,R}$ and $\varphi \in C^1(\mathbb{R}^3 \times \S^2)$, we have for $\theta = s - 1/p$, due to the Sobolev inequality for fractional spaces (see e.g. \cite[Section 8]{di2012hitchhiker}), \[|\langle f(t),\varphi\rangle-\langle f(s),\varphi\rangle|\leq C |t-s|^\theta[\langle f,\varphi\rangle]_{s,p} \leq C R^{1/p} |t-s|^\theta \| \nabla \varphi \|_\infty, \] where $[\cdot]_{s,p}$ denotes the Sobolev-Slobodeckij seminorm in time. Taking the supremum over $\varphi$ with $\|\nabla \varphi\|_\infty \leq 1$ yields equicontinuity with respect to $\mathcal{W}_1$, and thus $\mathcal K_{M,R}$ is relatively compact. For any $\varepsilon > 0$, by Chebyshev's inequality and \eqref{eq:lem_bound_Sn_1}--\eqref{eq:lem_bound_Sn_2}, choosing $M$ and $R$ big enough, \begin{align*} Q^{S_n}(\mathcal{K}^c_{M,R})< \varepsilon, \end{align*} which yields tightness of the laws $\{Q^{S_n}\}$. Thus, it remains to show \eqref{eq:lem_bound_Sn_1}--\eqref{eq:lem_bound_Sn_2}. We recall that the particle positions $x_i$ do not evolve in time and stay in a compact set $K \subset \mathbb{R}^3$ due to assumption \eqref{ass:uniform_bound}. Moreover, the particle orientations lie on the sphere $\S^2$, which is also compact; thus the first estimate \eqref{eq:lem_bound_Sn_1} is trivial.
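Let us record, for the reader's convenience, the two standard facts about $\mathcal{W}_1$ that are used in this proof: the Kantorovich-Rubinstein duality and the coupling bound complementing it,
\[
\mathcal{W}_1(\mu,\nu) = \sup_{\mathrm{Lip}(\varphi)\leq 1} \langle \mu-\nu,\varphi\rangle, \qquad \mathcal{W}_1(S_n(t_1),S_n(t_2)) \leq \frac1n\sum_{i=1}^n \left|\xi_i(t_1)-\xi_i(t_2)\right|,
\]
where the second inequality follows by choosing the transport plan that matches the $i$-th particle at time $t_1$ with the $i$-th particle at time $t_2$.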
By definition of the Wasserstein distance, by Jensen's inequality and by \eqref{eq:strato.xi_i.T_D}, \begin{align}\label{eq:bound_Sn_1} \mathbb{E}\left[\mathcal{W}_1(S_n(t_1),S_n(t_2))^p\right]&\leq \mathbb{E}\left[\left(\frac1n \sum_{i=1}^n \left|\xi_i(t_1)-\xi_i(t_2)\right|\right)^p\right] \notag\\ &\leq C \frac1n \sum_{i=1}^n \mathbb{E}\left[\bigg|\int_{t_1}^{t_2} \sigma_D(\xi_i(\tau)) \, d B_i(\tau) \bigg|^p + \bigg| \int_{t_1}^{t_2} A(\tau,x_i,\xi_i(\tau)) \, d \tau \bigg|^p\right], \end{align} where $A(t,x,\xi):=P_{\xi^\perp} h(t,\xi,x) -\am{2\xi} $. Then, by the Burkholder-Davis-Gundy inequality (\ml{see e.g. Theorem 3.28 in \cite{karatzas2012brownian}}), we deduce \begin{align}\label{eq:bound_Sn_2} \mathbb{E}\left[\mathcal{W}_1(S_n(t_1),S_n(t_2))^p\right]& \leq C \frac1n \sum_{i=1}^n\mathbb{E}\left[\left(\int_{t_1}^{t_2} \norm{\sigma_D(\xi_i(\tau))}_{HS}^2\, d \tau \right)^{p/2}\right]+C |t_1-t_2|^p \notag \\ &\leq C|t_1-t_2|^{p/2}, \end{align} where $HS$ stands for the Hilbert-Schmidt norm. Therefore, \[\mathbb{E}\left[\int_0^T\int_0^T \frac{\mathcal{W}_1(S_n(t_1),S_n(t_2))^p}{|t_1-t_2|^{1+s p}}\, d t_1 \, d t_2\right]\leq C \int_0^T\int_0^T |t_1-t_2|^{p(1/2 - s) -1}\, d t_1 \, d t_2 \leq C\] since $s < \frac 1 2$. \end{proof} \subsection{Tightness of \texorpdfstring{$u_n$}{un}} \label{sec:tightness.u_n} Recall from \eqref{eq:u_n.U_n}--\eqref{def:U_n.T_D} the definition of $u_n$. Analogously, we define \begin{align} \label{def:u_n,app} u_{n,app} &:= U_{n,app}', \\ U_{n,app}(t) &:= \frac {{\phi_n}\am{\sqrt{2\gamma_{rot}}}} n \int_0^t L_{n,app}(\Xi(s)) \sqrt{\mathfrak R_2(\Xi(s))} \circ \, d (B_1(s), \dots, B_n(s)), \end{align} where $L_{n,app}$ is the operator defined at the beginning of Section \ref{sec:L_n}. \begin{lem}\label{lem:bound_un.T_D} Let $s > \frac 1 2 $. Then, for all $n \in \mathbb{N}$, the stochastic integral in \eqref{def:u_n,app} is well defined and $u_{n,app} \in L^2(\Omega; H^{-s}(0,T; H_{\mathfrak s}^{-s}(\mathbb{R}^3)))$.
Moreover, there exists $N_0 \in \mathbb{N}$ such that for all $n \geq N_0$ and all $s>\frac12$ \begin{align}\label{eq:lem_bound_un.T_D_1} \mathbb{E}\left[\norm{\phi_n^{-1}u_n}^2_{H^{-s}((0,T); H^{-s}_w(\mathbb{R}^3))}\right]&\leq C, \\ \label{eq:lem_bound_un.T_D_2} \lim_{n\to\infty}\mathbb{E}\left[\norm{\phi_n^{-1}(u_n-u_{n,app})}^2_{H^{-s}((0,T); H^{-s}_w(\mathbb{R}^3))}\right]&=0. \end{align} \end{lem} \begin{proof} To estimate the ${H^{-s}((0,T); H^{-s}_w(\mathbb{R}^3))}$ norm of $u_n$ and $u_n - u_{n,app}$, by \eqref{eq:char.negative.sobolev}, it suffices to estimate the corresponding $H^{-s+1}((0,T),H^{-s}_w(\mathbb{R}^3))$ norms of $U_n$ and $U_n - U_{n,app}$. Thus, by assumption \eqref{ass:phi.log.n}, to show \eqref{eq:lem_bound_un.T_D_1}--\eqref{eq:lem_bound_un.T_D_2}, it suffices to prove \begin{align}\label{eq:lem_bound_Un.T_D_1} \mathbb{E}\left[\norm{\phi_n^{-1}U_{n,app}}^2_{H^{1-s}((0,T);H^{-s}_w(\mathbb{R}^3))}\right]\leq C, \\ \mathbb{E}\left[\norm{\phi_n^{-1}(U_n-U_{n,app})}^2_{H^{1-s}((0,T);H^{-s}_w(\mathbb{R}^3))}\right]\leq C (\phi_n \log n \rh{+ r^{3/p -2}})^2, \label{eq:lem_bound_Un.T_D_2} \end{align} \rh{for $p= \frac{6}{3+2s}$}. To estimate these norms, we appeal to \eqref{est:Z.H^s} and Remark \ref{rem:manifold}. To apply this estimate, we first recall from \eqref{eq:Particles.T_D} that for the vector $\Xi=(\xi_1,\dots, \xi_n)$, \[d\Xi(t)=\Sigma_D(\Xi(t))\circ \, d B(t)+H(t,\Xi(t),x) \, d t\] where $H(t,\Xi,x)=\left(P_{\xi_1^\perp} h(t,\xi_1,x_1),\dots, P_{\xi_n^\perp} h(t,\xi_n,x_n)\right)$, $B=\left(B_1,\dots,B_n\right)$ and \am{$\Sigma_D(\Xi)$ is a block diagonal matrix in $\mathbb{R}^{3n\times 3n}$ whose blocks are $\sigma_D(\xi_i)$}, defined in \eqref{eq:sigma}. In particular $\|\Sigma_D(\Xi)\|_{HS} \leq C \sqrt n$.
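This last bound can be checked directly from \eqref{eq:sigma}: each block is $\sqrt{2}$ times a skew-symmetric cross-product matrix, so for $\xi \in \S^2$,
\[
\norm{\sigma_D(\xi)}_{HS}^2 = 2\left(2\xi_1^2+2\xi_2^2+2\xi_3^2\right) = 4|\xi|^2 = 4, \qquad \norm{\Sigma_D(\Xi)}_{HS}^2 = \sum_{i=1}^n \norm{\sigma_D(\xi_i)}_{HS}^2 = 4n,
\]
so that one may take $C = 2$.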
Then, by \eqref{est:Z.H^s} and Remark \ref{rem:manifold}, \begin{align} &\left(\mathbb{E}\left[\norm{\phi_n^{-1}U_{n,app}}^2_{H^{1-s}((0,T);H^{-s}_w(\mathbb{R}^3))}\right]\right)^{\frac 1 2} \\ &\leq \frac C n \left(\|L_{n,app}\|_{L^\infty((\S^2)^n;(H^{-s}_w(\mathbb{R}^3))^{3n})} + \sqrt n \|\nabla L_{n,app} \|_{L^\infty((\S^2)^n;(H^{-s}_w(\mathbb{R}^3))^{3n\times3n})}\right). \end{align} Applying Corollary \ref{cor:L_n.H^-s} yields \eqref{eq:lem_bound_Un.T_D_1}. The proof of estimate \eqref{eq:lem_bound_Un.T_D_2} is completely analogous. \end{proof} \begin{lem}\label{lem:tightness_un} For all $s > 1/2$, the family of laws $\{Q^{u_n}\}$ of $u_n$, defined in \eqref{def:U_n.T_D}, is tight in the space $H^{-s}((0,T),H^{-s}_{\mathfrak s}(\mathbb{R}^3))$. \end{lem} \begin{proof} For $s>z>\frac12$, let \[\mathcal{K}_M=\{v \in H^{-z}((0,T),H^{-z}_{\mathfrak s,w}(\mathbb{R}^3)) : \norm{v}_{H^{-z}((0,T), H^{-z}_w(\mathbb{R}^3))}\leq M\}.\] By Lemma \ref{lem:compact.weighted} and \cite[Theorem 5.1]{Amann00}, this set is relatively compact in $H^{-s}((0,T),H^{-s}(\mathbb{R}^3))$. Thus, for all $\varepsilon > 0$, by Chebyshev's inequality, by Lemma \ref{lem:bound_un.T_D} and choosing $M$ big enough, \[ Q^{u_n}(\mathcal{K}^c_M)=\P\left(\norm{u_n}_{H^{-s}((0,T), H^{-s}_w(\mathbb{R}^3))}>M\right)\leq \frac{\mathbb{E}\left[\norm{u_n}_{H^{-s}((0,T), H^{-s}_w( \mathbb{R}^3))}\right]}{M}<\varepsilon, \] which concludes the proof. \end{proof} \subsection{Proof of Theorem \texorpdfstring{\ref{th:main.T_D}}{main TD}} \label{sec:sub.proof.main.T_D} As outlined at the beginning of this section, we now introduce functionals whose zeroes are the solutions of the system \eqref{eq:Fokker-Planck.instationary}, \eqref{eq:viscoelastic}.
For $(v,g) \in H^{-s}((0,T); H_{\mathfrak s}^{-s}(\mathbb{R}^3))\times C([0,T];\mathcal{P}_1(\mathbb{R}^3\times \S^2))$, we define for each $\varphi\in C^\infty_c((0,T)\times {\color{blue} \mathbb{R}^3})$ with $\dv \varphi = 0$ and $\psi\in C^\infty_c([0,T]\times \mathbb{R}^3\times\S^2)$, \begin{align}\label{eq:empiricalmeasure_weak} \Psi_\psi(g)&= \langle f_0,\psi(0)\rangle - \langle g(T),\psi(T)\rangle + \int_0^T\langle g,\partial_t \psi + \nabla_\xi\psi \cdot P_{\xi^\perp} h + \Delta_\xi \psi\rangle \, d t,\\ \label{eq:formal.approximation_weak} \Phi_\varphi(v,g)&= \langle v,\Delta \varphi\rangle - {\gamma_E}\langle g , (\textrm{Id}-3\xi \otimes \xi):\nabla \varphi \rangle. \end{align} Note that $\langle \cdot, \cdot \rangle$ in \eqref{eq:empiricalmeasure_weak} denotes the pairing in $\mathbb{R}^3 \times \S^2$ whereas \rhnew{the first pairing $\langle \cdot, \cdot \rangle$ in \eqref{eq:formal.approximation_weak} denotes the pairing in $(0,T) \times \mathbb{R}^3$ and the second pairing $\langle \cdot, \cdot \rangle$ in \eqref{eq:formal.approximation_weak} denotes the pairing in $(0,T) \times \mathbb{R}^3 \times \S^2$.} \begin{lem} \label{lem:identity_S_n} For every test function $\psi\in C^\infty_c([0,T]\times\mathbb{R}^3\times \S^2)$, the empirical measure $S_n$ satisfies the following identity, \begin{align*} \Psi_\psi(S_n) = \rhnew{\langle f_0 - S_n(0),\psi(0)\rangle -} \int_0^T \frac{1}{n}\sum_{i=1}^n \nabla \psi(t,x_i,\xi_i(t))\sigma_D(\xi_i(t)) dB_i(t) \end{align*} with $\sigma_D$ as in \eqref{eq:sigma}. \end{lem} \begin{proof} The proof follows by applying It\^o's formula (Lemma \ref{lem:Ito.formula}) to $\psi(t,x_i,\xi_i(t))$ using \eqref{eq:strato.xi_i.T_D}. To avoid the issue of having to apply It\^o's formula on the manifold $\S^2$, one might first extend $\psi$ to a smooth function $\tilde \psi \in C_c^\infty([0,T] \times \mathbb{R}^3 \times \mathbb{R}^3\setminus \{0 \})$.
Then, observing that $\frac{1}{2}\sigma_D(\xi)\sigma_D(\xi)^T = \mathrm{Id} - \xi \otimes \xi = \rh{\mathrm{Id} - \frac{\xi \otimes \xi}{|\xi|^2} =} P_{\xi^\perp}$ \rh{on $\S^2$}, we find \begin{align*} \tilde \psi(T,x_i,\xi_i(T)) &= \tilde \psi(0,x_i,\xi_{i,0}) + \int_0^T (\partial_t + P_{\xi^\perp} h \cdot \nabla_\xi - 2 \xi \cdot \nabla_\xi + P_{\xi^\perp} : \nabla_\xi^2) \tilde \psi(t,x_i,\xi_i(t)) \, d t \\ &+ \int_0^T \nabla_\xi \tilde \psi(t,x_i,\xi_i(t))\sigma_D(\xi_i(t)) \, d B_i(t). \end{align*} We then observe that \begin{align} P_{\xi^\perp} : \nabla_\xi^2 \tilde \psi - \rh{ 2} \xi \cdot \nabla_\xi \tilde \psi = \dv ( P_{\xi^\perp} \nabla \tilde \psi) = \Delta_{\S^2} \tilde \psi, \end{align} where $\Delta_{\S^2}$ is the Laplace-Beltrami operator. This concludes the proof. \end{proof} \begin{prop}\label{prop:Q_sol_delta} Denote by $Q^n$ the law of $(\phi_n^{-1}u_n,S_n)$ on the space $H^{-s}((0,T); H^{-s}_{\mathfrak s}(\mathbb{R}^3))\times C([0,T],\mathcal{P}_1(\mathbb{R}^3\times \S^2))$. Then for all $\delta>0$, all $\varphi\in C^\infty_c((0,T),{\mathbb{R}^3})$ with $\dv \varphi = 0$ and for all $\psi\in C^\infty_c([0,T]\times \mathbb{R}^3\times\S^2)$ \begin{multline*} \lim_{n\to\infty} Q^{n}\Big((v,g)\in H^{-s}((0,T); H^{-s}_{\mathfrak s}(\mathbb{R}^3))\times C([0,T],\mathcal{P}_1(\mathbb{R}^3\times \S^2)) :\\ \,\, |\Phi_\varphi(v,g)|+|\Psi_\psi(g)|>\delta\Big)=0. \end{multline*} \end{prop} \begin{proof} By Chebyshev's inequality, it suffices to show that \begin{align}\label{eq:Q_step0} \lim_{n \to \infty} \mathbb{E}\left[ |\Phi_\varphi(\phi_n^{-1}u_{n},S_{n})| + |\Psi_\psi(S_{n})|\right] = 0. \end{align} We start by studying the first term.
With $u_{n,app}$ as in \eqref{def:u_n,app} we have \begin{align} \langle \phi_n^{-1}u_{n,app},\Delta\varphi \rangle &= \rhnew{\phi_n^{-1}}\int_0^T \langle U_{n,app}(t), \partial_t \Delta \varphi(t,\cdot) \rangle \, d t \\ &= \frac{\sqrt{2 \gamma_{rot}}}{n} \int_0^T \left\langle \int_0^t L_{n,app}(\Xi(s)) \sqrt{\mathfrak R_2(\Xi(s))} \circ \, d B(s), \partial_t \Delta \varphi(t,\cdot) \right \rangle \, d t . \end{align} \rh{Now let us recall that since $L:=L_{n,app}(\Xi(s)) \sqrt{\mathfrak R_2(\Xi(s))} \in \mathcal{L}(\mathbb{R}^{3n}, H)$ with $H:=H^{-s}_{\mathfrak s}(\mathbb{R}^3)$, we may identify $\mathcal{L}(\mathbb{R}^{3n}, H)$ with $H^{3n}$ through $L_{i,\alpha} = L e_{i,\alpha}$ where $e_{i,\alpha}$ denote the canonical basis vectors of $\mathbb{R}^{3n}$. Then, with $\langle \cdot , \cdot \rangle$ denoting the pairing between $H^{-s}(\mathbb{R}^3) $ and $H^{s}(\mathbb{R}^3) $, and $(\cdot,\cdot)$ the scalar product in $H$, we use $L_{i,\alpha} = \sum_k ( L_{i,\alpha}, \epsilon_k ) \epsilon_k$, where $(\epsilon_k)_k$ is an orthonormal basis of $H$, in order to get \begin{align} \sum_{i,\alpha} \left \langle \int_0^t L_{i,\alpha}(s) \circ \, d B_{i,\alpha}(s), h \right \rangle &= \sum_{i,\alpha,k} \left \langle \int_0^t ( L_{i,\alpha}(s), \epsilon_k ) \epsilon_k \circ \, d B_{i,\alpha}(s), h \right \rangle \\ &= \sum_{i,\alpha,k}\int_0^t \langle ( L_{i,\alpha}(s), \epsilon_k ) \epsilon_k, h \rangle \circ \, d B_{i,\alpha}(s) \\ &= \sum_{i,\alpha}\int_0^t \langle L_{i,\alpha}(s) , h\rangle \circ \, d B_{i,\alpha}(s). 
\end{align} } \am{Hence, this yields, with $e_\alpha$ denoting the canonical basis vectors of $\mathbb{R}^3$, \begin{align} &\langle \phi_n^{-1}u_{n,app},\Delta\varphi \rangle\\ &= \frac{\sqrt{2 \gamma_{rot}}}{n}\underset{i,\alpha,\beta}{\sum} \int_0^T \int_0^t \langle L_{n,app}(\Xi(s))e_{i,\alpha}, \partial_t \Delta \varphi(t,\cdot) \rangle \left(\sqrt{\mathcal R_2(\xi_i(s))}\right)_{\alpha,\beta} \circ \, d B_{i,\beta}(s) \, d t \\ &=-\frac{\sqrt{2 \gamma_{rot}}}{n} \underset{i,\alpha,\beta}{\sum} \int_0^T \int_0^t \left([e_{\alpha}]_M+\mathcal{S}(\xi_i(s))e_{\alpha} \right):\partial_t \nabla \varphi(t,x_i) \left(\sqrt{\mathcal R_2(\xi_i(s))}\right)_{\alpha,\beta} \circ \, d B_{i,\beta}(s) \, d t \\ &=- \frac{\sqrt{2 \gamma_{rot}}}{n} \sum_i\int_0^T \left(\int_0^t \partial_t D \varphi(t,x_i):\mathcal{S}(\xi_i(s))\sqrt{\mathcal{R}_2(\xi_i(s))} \circ \, d B_i(s) \right.\\ &\qquad \qquad \qquad \qquad \qquad \left.+2 \int_0^t\partial_t\curl \varphi(t,x_i)\cdot \sqrt{\mathcal{R}_2(\xi_i(s))} \circ \, d B_i(s) \right) \amnew{\, d t}\\ &= \frac{\sqrt{2 \gamma_{rot}}}{n} \sum_i \int_0^T D \varphi(t,x_i):\mathcal{S}(\xi_i(t))\sqrt{\mathcal{R}_2(\xi_i(t))}\circ \, d B_i(t) \\ & +\frac{\sqrt{2 \gamma_{rot}}}{n} \sum_i\int_0^T 2\curl \varphi(t,x_i)\cdot \sqrt{\mathcal{R}_2(\xi_i(t))} \circ \, d B_i(t) . \end{align} where we used \eqref{eq:distributional.v}, \eqref{eq:skew_curl} and an integration by parts formula for Stratonovitch integrals which follows from the chain rule, Remark \ref{rem:Strato} \ref{it:stratonovich.chain.rule}. 
} Thus, \begin{align} \label{eq:I_1.I_2} \Phi_\varphi(\phi_n^{-1}u_{n},S_{n})=\mathcal{I}_1+\mathcal{I}_2, \end{align} where \[\mathcal{I}_1=\langle \phi_n^{-1}\left(u_{n}-u_{n,app}\right),\Delta \varphi\rangle,\] and \begin{align*} \mathcal{I}_2=: \frac 1 n \sum_i \mathcal J_i&= \frac 1 n \sum_i\left( \sqrt{2 \gamma_{rot}} \int_0^T 2\curl \varphi(t,x_i)\cdot \sqrt{\mathcal{R}_2(\xi_i(t))} \circ \, d B_i(t) \right.\\ &+ \sqrt{2 \gamma_{rot}} \int_0^T D \varphi(t,x_i):\mathcal{S}(\xi_i(t))\sqrt{\mathcal{R}_2(\xi_i(t))} \circ \, d B_i(t) \\ & \left.- \gamma_E\int_0^T(3\xi_i \otimes \xi_i-\textrm{Id}):\nabla \varphi(t,x_i) \, d t \right) . \end{align*} By Lemma \ref{lem:bound_un.T_D}, we have \begin{align} \label{est:I_1} \lim_{n \to \infty} \mathbb{E}\left[|\mathcal{I}_1|\right] = 0. \end{align} To estimate $\mathcal{I}_2$, we use Lemma \ref{lem:Expectations}, which implies $\mathbb{E}[\mathcal J_i] = 0$. Furthermore, since the particle orientations $\xi_i$ are independent, the terms $\mathcal J_i$ are independent as well. \rh{Moreover, they have a bounded variance by the It\^o-Stratonovitch conversion formula, Remark \ref{rem:Strato} \ref{it:Conversion.composition}, and the It\^o isometry, Theorem \ref{th:Ito}.} Therefore, \begin{align}\label{est:I_2} \mathbb{E}[\mathcal I_2^2] = \frac 1 {n^2} \sum_i \mathbb{E}[\mathcal J_i^2] \leq \frac{C}{n} \|\nabla \varphi \|^2_{L^\infty}. \end{align} Inserting \eqref{est:I_1}--\eqref{est:I_2} in \eqref{eq:I_1.I_2} yields \begin{align}\label{est:Phi_phi} \lim_{n \to \infty} \mathbb{E}\left[ |\Phi_\varphi(\phi_n^{-1}u_{n},S_{n})|\right] = 0.
\end{align} Regarding the second term of \eqref{eq:Q_step0}, Lemma \ref{lem:identity_S_n} implies \[\Psi_\psi(S_n) - \langle f_0 - S_n(0),\psi(0)\rangle = -\frac{1}{n}\sum_{i=1}^n\int_0^T \nabla \psi(t,x_i,\xi_i(t))\sigma_D(\xi_i(t)) \, d B_i(t) =: \frac 1 n \sum_i \tilde{\mathcal J_i}.\] \rhnew{Using assumption \eqref{ass:initial.convergence}, we have $\mathbb{E}[|\langle f_0 - S_n(0),\psi(0)\rangle|] \to 0$}. Combining this with the independence of the $\tilde{\mathcal J_i}$ \rh{and their bounded variance} as above, we conclude \begin{align} \label{est:Psi_psi} \lim_{n \to \infty} \mathbb{E}\left[\left|\Psi_\psi(S_n)\right|\right] = 0. \end{align} Combining \eqref{est:Phi_phi} and \eqref{est:Psi_psi} yields \eqref{eq:Q_step0}, which completes the proof. \end{proof} To be in a position to prove Theorem \ref{th:main.T_D}, we state the following uniqueness results, which are proved in Appendix \ref{appendixA}. \begin{thm} \label{thm:uniqueness.instationary} Let $f_0 \in L^2(\S^2\times \mathbb{R}^3)$ and let $f \in C([0,T];\mathcal P_1(\mathbb{R}^3 \times \S^2))$ satisfy $\Psi_\psi(f) = 0$ for all $\psi \in C_c^\infty([0,T] \times \mathbb{R}^3 \times \S^2)$. \am{Then, $f$ is the unique weak solution to \eqref{eq:Fokker-Planck.instationary} \rhnew{in $C([0,T];L^2(\mathbb{R}^3 \times \S^2))$} such that for almost all $x\in \mathbb{R}^3$, $f(\cdot,\cdot,x)\in L^2(0,T; H^1(\S^2)) $ and $f'(\cdot,\cdot,x)\in L^2(0,T; H^{-1}(\S^2)) $}. \end{thm} \begin{thm} \label{thm:uniqueness.Stokes} \am{Let $f \in L^2((0,T) \times \mathbb{R}^3 \times \S^2)$.} Assume $u \in H^{-s}((0,T),H_{\mathfrak s}^{-s}(\mathbb{R}^3))$ satisfies $\Phi_\varphi(u,f) = 0$ for all $\varphi\in C^\infty_c((0,T),\mathbb{R}^3)$ with $\dv \varphi = 0$.
Then $u$ is the unique weak solution in $L^2(0,T;\dot H_{\mathfrak s}^1(\mathbb{R}^3))$ to \eqref{eq:viscoelastic}. \end{thm} \begin{proof}[Proof of Theorem \ref{th:main.T_D}] By Lemmas \ref{lem:tightness_Sn} and \ref{lem:tightness_un}, the family of laws $\{Q^n\}$ of $(\phi_n^{-1}u_n,S_n)$ is tight, and thus we can extract a subsequence $Q^{n_k}$ which converges weakly to some probability measure $Q$ on $H^{-s}((0,T); H^{-s}_{\mathfrak s}(\mathbb{R}^3))\times C([0,T];\mathcal{P}_1(\mathbb{R}^3\times \S^2))$. We will argue that $Q= \delta_{(u,f)}$, where $(u,f)$ is the unique solution to \eqref{eq:Fokker-Planck.instationary}--\eqref{eq:viscoelastic}. Then, by standard arguments, uniqueness of $(u,f)$ implies weak convergence of the whole sequence $Q^n$, and, since $Q$ is deterministic, we deduce convergence in probability of $(\phi_n^{-1}u_n,S_n)$ to $(u,f)$. It thus remains to show that $\delta_{(u,f)}$ is the only accumulation point of $Q^n$. Let $Q^{n_k}$ be a converging subsequence and $Q$ its limit. First we observe that for all $\varphi\in C^\infty_c((0,T) \times \mathbb{R}^3)$ with $\dv \varphi = 0$ and all $\psi\in C^\infty_c([0,T]\times \mathbb{R}^3\times\S^2)$ the functionals $\Psi_\psi$ and $\Phi_\varphi$ are continuous with respect to the topology of $ H^{-s}(0,T; H^{-s}_{\mathfrak s}(\mathbb{R}^3))\times C([0,T],\mathcal{P}_1(\mathbb{R}^3\times \S^2)) $.
Therefore, by the Portmanteau theorem and Proposition \ref{prop:Q_sol_delta}, for all $\delta > 0$ \[Q\left((v,g)\,:\, |\Phi_\varphi(v,g)|+|\Psi_\psi(g)|>\delta\right)\leq\liminf_{k\to\infty} Q^{n_k}\left((v,g)\,:\, |\Phi_\varphi(v,g)|+|\Psi_\psi(g)|>\delta\right)=0,\] which implies \[Q\left((v,g)\,\,:\,\, |\Phi_\varphi(v,g)|+|\Psi_\psi(g)|=0\right)=1.\] Since the functionals $\Psi_\psi$ and $\Phi_\varphi$ are also continuous with respect to $\psi$ and $\varphi$, a density argument yields \begin{align*} &Q\left((v,g)\,:\, |\Phi_\varphi(v,g)|+|\Psi_\psi(g)|=0,\, \forall \,\varphi\in C^\infty_c((0,T),\mathbb{R}^3) \text{ with } \dv \varphi = 0, \right. \\ & \qquad \qquad \left. \psi\in C^\infty_c([0,T]\times \mathbb{R}^3\times\S^2)\right)=1. \end{align*} Thus, Theorem \ref{thm:uniqueness.instationary} implies that the law of $S_n$ indeed concentrates on $f$. Finally, by Theorem \ref{thm:uniqueness.Stokes} we conclude $Q=\delta_{(u,f)}$. \end{proof} \section{Main Results}\label{section3} \subsection{Assumptions} \label{sec:assumptions} \rh{We will impose the following assumptions for the rest of the paper.} \ml{We work in the framework of a filtered probability space, denoted by $(\Omega, \mathcal{F}, \left\{\mathcal{F}_t\right\}, \mathbb{P})$.} We recall that we consider $n$ particles occupying $\mathcal{B}_i(\xi_i) = x_i + r R_i(\xi_i) \mathcal{B}$, where $\mathcal{B} \subset \mathbb{R}^3$ is the reference particle specified in \eqref{rod_reference}, the positions $x_i \in \mathbb{R}^3$ are static random variables, the orientations $\xi_i \in \S^2$ are random and time-dependent, and the rotation matrix $R_i(\xi_i) \in SO(3)$ is any matrix which satisfies $R_i(\xi_i) e_3 = \xi_i$. We emphasize that $\mathcal{B}_i, x_i, r, \xi_i$ all implicitly depend on $n$. We assume that the particle volume fraction $\phi_n = nr^3$ tends to zero sufficiently fast, namely \begin{align} \label{ass:phi.log.n}\tag{H1} \lim_{n \to \infty} \phi_n \log n = 0.
\end{align} Moreover, we assume that the particle centers $x_i$ are well-separated in the sense that there exists $c > 0 $, independent of $n$, such that \begin{align} \label{ass:well.separated} \tag{H2} d_{\min} := \min_{i \neq j} |x_i - x_j| \geq c n^{-\frac 1 3}. \end{align} We remark that the above assumptions together imply in particular that there exists $n_0 \in \mathbb{N}$ such that for all $n \geq n_0$ \begin{align} \label{eq:separation} \min_{i \neq j} \inf_{(\zeta_i, \zeta_j) \in \S^2 \times \S^2} \dist(\mathcal{B}_i(\zeta_i), \mathcal{B}_j(\zeta_j)) \geq cr. \end{align} We assume in addition that the particles are contained in a bounded domain uniformly in $n$: \begin{equation}\label{ass:uniform_bound} \underset{n}{\sup} \underset{1\leq i \leq n}{\max}|x_i| < +\infty. \tag{H3} \end{equation} We assume that the $B_i$ are independent $\mathbb{R}^3$-valued Brownian motions, all independent of the $\xi_{i,0}$, defined on the filtered probability space $(\Omega, \mathcal{F}, \left\{\mathcal{F}_t\right\}, \mathbb{P})$. Moreover, we assume that the initial particle orientations $\xi_{i,0}$ are independent random variables and we assume the convergence of the initial empirical measure to some $f_0 \in \mathcal P_1(\mathbb{R}^3 \times \S^2) \cap L^2(\S^2\times \mathbb{R}^3)$,\footnote{To see that for a given compactly supported function $f_0$, there exist $x_i, \xi_{i,0}$ which satisfy both assumptions \eqref{ass:well.separated} and \eqref{ass:initial.convergence}, one might first generate i.i.d. variables $\tilde x_i, \xi_{i,0}$ with law $f_0 \, d x$ and then define the positions $x_i \in n^{-1/3}\mathbb{Z}^3$ to be the closest points to $\tilde x_i$. } i.e. \begin{align} \label{ass:initial.convergence} \tag{H4} \lim_{n \to \infty}\mathbb{E}\left[ \mathcal W_1\left(f_0, \frac 1 n \sum_i \delta_{x_i,\xi_{i,0}}\right) \right] = 0.
\end{align} Here, for two probability measures $g_1,g_2 \in \mathcal{P}_1(\mathbb{R}^3 \times \S^2)$ \rh{with bounded first moment}, we denote by $\mathcal{W}_1(g_1,g_2)$ their $1$-Wasserstein distance. \rh{It is well known that the $\mathcal{W}_1$-distance metrizes weak convergence of measures.} Finally, we assume for simplicity that the external force $h$ is smooth, i.e. \begin{align} \label{ass:h} \tag{H5} h \in C^\infty([0,\infty) \times \mathbb{R}^3 \times \S^2). \end{align} \subsection{Well-posedness of the microscopic system} \label{sec:well-posedness} We now specify the notion of solutions to the systems \eqref{eq:micro.T_u} and \eqref{eq:micro.T_D}. The uncoupled SDEs for the particle orientations are well known to be well-posed for $h$ as in \eqref{ass:h} (see Definition \ref{def:SDE} and Theorem \ref{thm:SDE.M}). Note that by introducing the diffusion matrix $\sigma_D$, \begin{equation}\label{eq:sigma} \sigma_D(\xi) := \sqrt{2} \begin{pmatrix} 0 & -\xi_3 & \xi_2\\ \xi_3 & 0 & - \xi_1 \\ -\xi_2 & \xi_1 & 0 \end{pmatrix}, \end{equation} and by the It\^o-Stratonovitch conversion, Remark \ref{rem:SDE.Strato.Ito}, we can rewrite \eqref{eq:Particles.T_D} and \eqref{eq:Particles.T_u} respectively as \begin{equation}\label{eq:strato.xi_i.T_D} \, d \xi_i = \sigma_D(\xi_i) \, d B_i -2\xi_i \, d s+ P_{\xi_i^\perp} h(s,\xi_i,x_i) \, d s , \end{equation} \begin{equation}\label{eq:strato.xi_i.T_u} \, d \xi_i (s)= \sqrt{\frac {1} {\phi_n}}\sigma_D(\xi_i(s)) \, d B_i(s)- {\frac {2} {\phi_n}}\xi_i(s) \, d s + \frac{1}{\phi_n} P_{\xi_i^\perp} h(s,\xi_i(s),x_i) \, d s. \end{equation} Since the fluid is modeled by the stationary Stokes equations, we can only expect the fluid velocity to be pathwise as (ir-)regular in time as the white noise that drives the fluid velocity through the prescribed torques. It is well known that white noise is in $H^{-s}$ for any $s > 1/2$.
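Let us briefly indicate the reason for this regularity: on $(0,T)$, white noise is the distributional time derivative of a Brownian motion $B$, whose paths belong to $C^{\frac12-}([0,T])\subset H^{s'}(0,T)$ almost surely for every $s'<\frac12$ (but not for $s'=\frac12$), while differentiation maps $H^{s'}(0,T)$ into $H^{s'-1}(0,T)$. Hence, almost surely,
\[
B' \in H^{-s}(0,T) \qquad \text{for every } s>\frac12.
\]
The same mechanism is behind the definition of $u_n$ below as the distributional derivative of a stochastic integral.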
To give a meaning to $u_n$ and to obtain this regularity, we write $u_n$ as the distributional derivative of a suitable stochastic integral. To this end, we first introduce the operator $L_n \colon (\S^2)^n \to \L(\mathbb{R}^{3n}, \dot H^1(\mathbb{R}^3))$. For any fixed set of particle positions $(x_1, \dots, x_n)$ it associates to every set of orientations $(\zeta_1,\dots \zeta_n)$ the solution to the Stokes system with given torques $(T_1, \dots, T_n)$. More precisely, \begin{align}\label{def:L_n.1} v = L_n((\zeta_1,\dots, \zeta_n))(T_1, \dots, T_n) , \end{align} is defined to be the solution to \begin{equation} \label{def:L_n.2} \left \{ \begin{array}{rcl} - \Delta v+ \nabla q&=& 0 \quad \text{ in }\mathbb{R}^3 \setminus \bigcup_{i=1}^n \mathcal{B}_i,\\ \dv v&=&0 \quad \text{ in }\mathbb{R}^3 \setminus \bigcup_{i=1}^n \mathcal{B}_i,\\ D v&=&0 \quad \text{ in } \bigcup_{i=1}^n \mathcal{B}_i, \\ \displaystyle \int_{\partial \mathcal{B}_i} \Sigma(v,q) \nu & = &0, \\ \displaystyle \int_{\partial \mathcal{B}_i} [\Sigma(v,q) \nu ] \times (x-x_i) &=& T_i, \end{array} \right. \end{equation} where here (abusing the notation) $\mathcal{B}_i =x_i + r R(\zeta_i) \mathcal{B}$. It is straightforward to see that for non-overlapping particles $\mathcal{B}_i$, the linear problem \eqref{def:L_n.1} admits a unique weak solution $v \in \dot H^1(\mathbb{R}^3)$ (see e.g. \cite{NiethammerSchubert19}). In particular, $L_n$ is well-defined for all particle positions $(x_1, \dots, x_n)$ which satisfy \eqref{eq:separation}. This allows us to formally define \begin{align} \label{def:U_n} U_n(t) := \frac {\sqrt{\am{2\gamma_{rot}}\phi_n}} n \int_0^t L_n(\Xi(s)) \sqrt{\mathfrak R_2(\Xi(s))} \circ \, d (B_1(s), \dots, B_n(s)), \end{align} where $\Xi = (\xi_1 ,\dots \xi_n)$, $\xi_i$ are the solutions to \eqref{eq:Particles.T_u}, and $\mathfrak R_2(\Xi) \in \mathbb{R}^{3n \times 3n}$ is the block-diagonal matrix with blocks $\mathcal{R}_2(\xi_i)$ (see \eqref{eq:R_2}).
Then, we define the (distributional) solution $u_n$ to \eqref{eq:Stokes.micro.T_u} as the distributional derivative of $U_n$, \begin{align} \label{eq:u_n.U_n} u_n := U_n'. \end{align} Similarly, concerning the system \eqref{eq:micro.T_D}, we set \begin{align} \label{def:U_n.T_D} U_n(t) &:= r^3 \am{\sqrt{2\gamma_{rot}}} \int_0^t L_n(\Xi(s)) \sqrt{\mathfrak R_2(\Xi(s))} \circ \, d (B_1(s), \dots, B_n(s)), \\ u_n &:= U_n', \end{align} where $\xi_i$ are the solutions to \eqref{eq:Particles.T_D}. To make these formal definitions of $u_n$ rigorous, we need to make sense of the stochastic integrals in \eqref{def:U_n} and \eqref{def:U_n.T_D}. In Appendix \ref{appendix_stochastics}, we collect some statements about the definition and properties of such stochastic integrals with values in separable Hilbert spaces. Essentially, all standard results immediately carry over due to the It\^o isometry. Therefore, $U_n$, and thereby $u_n$, is well-defined provided we show that $L_n$ is continuously differentiable with respect to the orientations $\xi_i$. We will show the following global well-posedness result for the microscopic dynamics. \begin{thm} \label{th:well-posed} Let \eqref{ass:phi.log.n}--\eqref{ass:well.separated} be satisfied. Then, for all $n \in \mathbb{N}$, there exists a unique solution $(\xi_1,\dots,\xi_n)$ to \eqref{eq:Particles.T_D} and \eqref{eq:Particles.T_u}, respectively. Moreover, there exists $N_0 \in \mathbb{N}$ such that for all $n \geq N_0$ and all $s > 1/2$, the operator $L_n$ defined through \eqref{def:L_n.1}--\eqref{def:L_n.2} satisfies $L_n \in C^1((\S^2)^n; \L(\mathbb{R}^{3n},H^{-s}_{\mathfrak s}(\mathbb{R}^3)))$, where $H^{-s}_{\mathfrak s}(\mathbb{R}^3)$ denotes the subspace of divergence free functions in $H^{-s}(\mathbb{R}^3)$. In particular, the integral \eqref{def:U_n} (respectively \eqref{def:U_n.T_D}) is well defined and for all $T>0$ \begin{align} u_n \in L^2(\Omega;H^{-s}(0,T;H^{-s}_{\mathfrak s}(\mathbb{R}^3))).
\end{align} \end{thm} \subsection{Convergence results} \label{sec:convergence} We are finally prepared to state the main results of our paper, the convergence to the mean-field limits for \eqref{eq:micro.T_u} and \eqref{eq:micro.T_D}. We denote by $S_n$ the empirical measure \begin{equation}\label{eq:empirical_measure} S_n(t) := \frac 1 n \sum_i \delta_{x_i,\xi_i(t)}. \end{equation} We first state the main result concerning system \eqref{eq:micro.T_D}. \begin{thm} \label{th:main.T_D} Let assumptions \eqref{ass:phi.log.n}--\eqref{ass:h} be satisfied. For $n \geq N_0$ as in Theorem \ref{th:well-posed}, let $\xi_i$, $1 \leq i \leq n$ and $u_n$ be the unique solutions to \eqref{eq:micro.T_D}. Then, for all $t> 0$ and all $s > 1/2$ the following convergence in probability holds: \begin{align} \forall \varepsilon > 0 ~ \lim_{n \to \infty} \P\left( \|\phi_n^{-1} u_n - u\|_{H^{-s}((0,t), H^{-s}(\mathbb{R}^3))} + \sup_{\tau \in [0,t]} \mathcal{W}_1(S_n(\tau),f(\tau)) > \varepsilon\right) = 0, \end{align} \am{where $f\in C([0,t], \mathcal P_1(\mathbb{R}^3 \times \S^2)\cap L^2(\mathbb{R}^3 \times \S^2))$ is the unique weak solution to \eqref{eq:Fokker-Planck.instationary} such that for almost all $x\in \mathbb{R}^3$, $f(\cdot,\cdot,x)\in L^2(0,t;H^1(\S^2))$, $f'(\cdot,\cdot,x)\in L^2(0,t;H^{-1}(\S^2))$ and $u\in L^2(0,t;\dot{H}^1_{\mathfrak{s}}(\mathbb{R}^3))$ the unique weak solution to \eqref{eq:viscoelastic}.} \end{thm} Similarly, we show the following convergence result for system \eqref{eq:micro.T_u}. \begin{thm} \label{th:main.T_u} Let assumptions \eqref{ass:phi.log.n}--\eqref{ass:h} be satisfied. For each $n \geq N_0$ as in Theorem \ref{th:well-posed}, let $\xi_i$, $1 \leq i \leq n$ and $u_n$ be the unique solutions to \eqref{eq:micro.T_u}. 
Then, for all $t > 0$ and all $s_1 > 1/2$, $s_2 > 0$, $s_3 > 3/2$ the following convergence in probability holds: \begin{align} \forall \varepsilon > 0 ~ \lim_{n \to \infty} \P\left(\|u_n - u\|_{H^{-s_1}((0,t) ;H^{-s_1}(\mathbb{R}^3))} + \|S_n - f\|_{H^{-s_2}(0,t;H^{-s_3}(\mathbb{R}^3 \times \S^2))} > \varepsilon\right) = 0, \end{align} \am{where $f\in L^2((0,t)\times \S^2\times \mathbb{R}^3)$ is the unique weak solution to \eqref{eq:Fokker-Planck.stationary} such that for almost all $x\in \mathbb{R}^3$, $f(\cdot,\cdot,x)\in C^\infty((0,t)\times\S^2)$, and $u\in L^2(0,t;\dot{H}^1_{\mathfrak{s}}(\mathbb{R}^3))$ the unique weak solution to \eqref{eq:viscoelastic}.} \end{thm} \subsection{Notation}\label{subsec:notatation} \begin{itemize} \item Throughout the paper we will often deal with vectors $V \in (\mathbb{R}^3)^n$, where $n \in \mathbb{N}$ is the number of particles. We will use the convention to denote the components $V_i \in \mathbb{R}^3$, $1 \leq i \leq n$ by Latin indices and use Greek indices to denote the components of these vectors, e.g. $V_{i,\alpha} \in \mathbb{R}$, $1 \leq \alpha \leq 3$. With this convention, we allow for the slight abuse to write $V \in \mathbb{R}^{3n}$ instead of $V \in (\mathbb{R}^3)^n$.\am{ In particular, we denote by $e_{i,\alpha}$, $1 \leq i \leq n$, $1 \leq \alpha \leq 3$ the canonical basis of $\mathbb{R}^{3n}$. } Moreover, in the special case of the particle positions and orientations we will use capital letters to denote the collections $X = (x_1, \dots, x_n) \in \mathbb{R}^{3n}$, and $\Xi = (\xi_1, \dots, \xi_n) \in (\S^2)^n$ in order to avoid confusion with variables $x,\xi$ that appear in the limit systems.
\item \am{For any vector $T \in \mathbb{R}^3$, we set \begin{align}\label{eq:[T]_M} [T]_M := \begin{pmatrix} 0 & -T_3 & T_2\\ T_3 & 0 & - T_1 \\ -T_2 & T_1 & 0 \end{pmatrix}, \end{align} the unique skew-symmetric matrix associated to $T\in \mathbb{R}^3$ satisfying for any $x\in\mathbb{R}^3$ and any smooth test function $\phi\in {C}_c^\infty(\mathbb{R}^3;\mathbb{R}^3)$ \begin{align}\label{eq:skew_curl} [T]_M x = T \times x,&& [T]_M:\nabla \phi(x)= 2 T \cdot \curl \phi(x). \end{align} } \item We denote by $\Sym_0(3)$ the space of symmetric traceless matrices $A \in \mathbb{R}^{3\times3}$ and by $\sym B$ (resp. $\sym_0 B$) the symmetric (resp. symmetric traceless) part of $B \in \mathbb{R}^{3\times3}$. \item For a reflexive Banach space $X$, an open set $\mathcal O \subset \mathbb{R}^d$, $p \in (1,\infty)$, $s \in \mathbb{R}$, we denote by $W^{s,p}(\mathcal O;X)$ the usual fractional Sobolev space of $X$-valued functions. For $s = k+ \gamma$, $k \in \mathbb{N}$, $\gamma \in (0,1)$ the norm in $W^{s,p}(\mathcal O;X)$ is given by \begin{align} \|u\|^p_{W^{s,p}(\mathcal O;X)} := \|u\|^p_{W^{k,p}(\mathcal O;X)} + [\nabla^k u]^p_{s,p}, \\ [f]^p_{s,p} := \int_{\mathcal O \times \mathcal O} \frac{\|f(x) - f(y)\|_X^p}{|x-y|^{d+\gamma p}} \, d y \, d x. \end{align} For $s > 0$, we define $W^{-s,p'}(\mathcal O;X) := (W^{s,p}_0(\mathcal O;X'))^\ast$. We recall (see e.g. \cite{Amann00}) that if $X$ is reflexive, then an equivalent norm in $W^{-k + \theta,p}(\mathcal O;X)$, $\theta \in (0,1)$ is given by \begin{align} \label{eq:char.negative.sobolev} \|u\|_{W^{-k + \theta,p}(\mathcal O;X)} = \inf \left\{ \sum_{|\beta| \leq k} \|u_\beta\|_{{W^{\theta,p}(\mathcal O;X)}} : u= \sum \partial^\beta u_\beta \right\}.
\end{align} We also introduce weighted fractional Sobolev spaces $W^{s,p}_w(\mathcal O)$ with a weight $w \geq 0$ through \begin{align} \|v\|^p_{W^{s,p}_w(\mathcal O)} &:= \|v\|^p_{W^{k,p}_w(\mathcal O)} + [\nabla^k v]^p_{w,s,p} ,\\ [f]^p_{w,s,p} &:= \int_{\mathcal O \times \mathcal O} \frac{|f(x) - f(y)|^p}{|x-y|^{d+\gamma p}} w(x) w(y) \, d y \, d x, \\ \|v\|^p_{W^{k,p}_w(\mathcal O)} &:= \sum_{l = 0}^k \int_{\mathcal O} |\nabla^l v|^p w \, d x, \end{align} for $s = k + \gamma$ as above, and $W^{-s,p'}_{\frac 1 w}(\mathcal O) := (W^{s,p}_{0,w}(\mathcal O))^\ast$. Throughout the paper, we will fix the appearing weight function $w$ as \begin{align} \label{def:weight.function} w(x)=(1+|x|)^a, && \text{for some } 0<a<1. \end{align} \rh{As usual we denote the Hilbert spaces $H^s(\mathcal O) := W^{s,2}(\mathcal O)$ and $H^s_w(\mathcal O) = W^{s,2}_w(\mathcal O)$.} \item We allow for the abuse of notation \begin{align} \|g\|_{W^{s,p}_{\mathrm{loc}}(\mathbb{R}^3)} \leq C, \end{align} to indicate that for all compact $K \subset \mathbb{R}^3$ there exists $C$ depending on $K$ such that \begin{align} \|g\|_{W^{s,p}(K)} \leq C. \end{align} Similarly, we write $g \in {W^{s,p_-}(U)}$ and \begin{align} \|g\|_{W^{s,p_-}(U)} \leq C , \end{align} to indicate that for all $1 \leq q < p$ there exists $C$ depending on $q$ such that \begin{align} \|g\|_{W^{s,q}(U)} \leq C. \end{align} The notation $\|g\|_{W^{s_-,p}(U)} \leq C$ should be understood analogously. \item \rh{For an open set $\mathcal O \subset \mathbb{R}^3$, $p \geq 1$ and $w$ as in \eqref{def:weight.function}, we introduce \begin{align} \|g\|_{L^{p,2}_{w,\mathcal O}} &:= \|g\|_{L^p(\mathcal O)} + \|g\|_{L^2_w(\mathbb{R}^3 \setminus \mathcal O)}, \\ L^{p,2}_{w,\mathcal O} &:= \{ g \in L^1_\mathrm{loc}(\mathbb{R}^3) : \|g\|_{L^{p,2}_{w,\mathcal O}} < \infty \}. \label{def:L^p,2_w,K} \end{align} } \item \rh{ We use an index $\mathfrak s$ to denote subspaces of (distributionally) divergence free functions, i.e.
we use the notation $L^p_{\mathfrak s}(\mathcal O)$, $H^s_{\mathfrak s}(\mathcal O)$, $H^s_{\mathfrak s,w}(\mathcal O)$, $W^{s,p}_{\mathfrak s}(\mathcal O)$, $W^{s,p}_{\mathfrak s,w}(\mathcal O)$, $L^{p,2}_{\mathfrak s,w,\mathcal O}$ and all these subspaces are closed. } \item \am{Finally, in Sections \ref{sec:L_n} -- \ref{section6} we will adopt the usual convention to denote by $C>0$ any constant that might change from line to line and that might depend on certain quantities which are fixed, like the reference particle or the constant appearing in \eqref{ass:well.separated}. We emphasize that the constant $C$ will never depend on $n$, though.} \end{itemize} \subsection{Outline of the proof of the main results}\label{subsec:proof_strategy} Starting from \eqref{eq:micro.T_D}, the first step towards the derivation of the mesoscopic system \eqref{eq:Fokker-Planck.instationary}, \eqref{eq:viscoelastic} is the following approximation for the fluid equations \eqref{eq:Stokes.micro.T_D}: \begin{align} \label{eq:u_n.tilde} - \Delta u_{n,app} + \nabla p_{n,app} = \frac{\phi_n} n \sum_i ([T_i]_M + \mathcal{S}(\xi_i)T_i) \nabla \delta_{x_i}, \end{align} where $T_i = \sqrt{\am{2\gamma_{rot}}\mathcal{R}_2(\xi_i)} \circ \dot B_i$ and we recall that $\mathcal{S}(\xi_i)$ from \eqref{def:mS_i} is the tensor that relates the torque to the stresslet and $[T_i]_M$ is defined in \eqref{eq:[T]_M}. Thus the approximation \eqref{eq:u_n.tilde} consists in a formal superposition principle of point torques and stresslets; each particle contributes to $u_{n,app}$ as if it were alone in the fluid, and only its total torque and stresslet act on the fluid at the particle center. The rigorous justification of this approximation is the content of Section \ref{sec:L_n}.
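To fix ideas, we recall the classical single-particle building block behind this superposition (a standard formula, see e.g. \cite{KimKarilla13}): with unit viscosity, a point torque $T$ placed at the origin generates the rotlet

```latex
\begin{align*}
  u(x) = \frac{T \times x}{8 \pi |x|^3} , \qquad |u(x)| \leq \frac{|T|}{8 \pi |x|^2} ,
\end{align*}
% and a point stresslet produces a flow with the same |x|^{-2} decay. The
% right-hand side of \eqref{eq:u_n.tilde} superposes such singular fields
% centered at the particle positions x_i.
```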
More precisely, Section \ref{sec:L_n} deals with the approximation of the solution $L_n T$ to \eqref{def:L_n.2} by the solution $ L_{n,app} T$ to \eqref{eq:u_n.tilde} for any given $T \in \mathbb{R}^{3n}$ and any given collection of particle positions $x_i$ and orientations $\xi_i$, $1 \leq i\leq n$ which satisfy assumptions \eqref{ass:phi.log.n}--\eqref{ass:uniform_bound}. To this end, we will first introduce an intermediate approximation $L^{im}_{n,app}$ defined as the superposition of single particle problems, but with the full boundary conditions (no-slip and balance equations) for the single particle. Estimates for approximations similar to \eqref{eq:u_n.tilde} have been given for example in \cite{HoferVelazquez18, Hofer18MeanField, Gerard-Varet19, NiethammerSchubert19, Gerard-VaretHillairet19, HillairetWu19}. The main novelty here consists of estimates for $\nabla_{\xi_i} (L_n - L_{n,app}^{im}) T$, which are needed because of the Stratonovich integral in \eqref{def:U_n}. Since the fluid domain $\mathbb{R}^3 \setminus \cup_i \mathcal{B}_i$ depends on the particle orientations, we enter here the topic of shape derivatives. Such shape derivatives for the Stokes equations with different boundary conditions have been considered for example in \cite{Simon91,BadraCaubetDambrine11}. Instead of identifying the boundary value problem solved by $\nabla_{\xi_i} L_n$ as in \cite{Simon91,BadraCaubetDambrine11}, we rely on the method of reflections to analyze these shape derivatives in Subsection \ref{sec:reflections}. The method of reflections has been used for related problems, see for example \cite{HoferVelazquez18, Hofer18MeanField, NiethammerSchubert19, Gerard-VaretHillairet19, HillairetWu19, Mecherbet18, Mecherbet19Cluster}. It allows us to express $L_n$ in terms of single particle problems only. Since these single particle operators have an explicit dependence on the particle orientation, this yields a useful expression for the shape derivative.
As reflected by the approximation \eqref{eq:u_n.tilde}, the given torques at each particle position $x_i$ perturb the fluid velocity in a singular way, like $|x-x_i|^{-2}$. This is the reason why we only obtain sufficient estimates under the assumptions \eqref{ass:phi.log.n}--\eqref{ass:well.separated}. Moreover, due to the singular nature of the perturbation, these estimates are locally only in $L^p$ for $p < 3/2$. Since the stochastic integral \eqref{def:U_n} does not seem compatible with such $L^p$ spaces (see e.g. \cite{van2015stochastic}), we work instead in weighted negative Sobolev spaces $H^{-s}_w(\mathbb{R}^3)$, $s > 1/2$, which are Hilbert spaces. In Appendix \ref{sec:weighted sobolev space}, we show that ${L^{p,2}_{w,K}}$ (see \eqref{def:L^p,2_w,K}) embeds into $H^{-s}_w(\mathbb{R}^3)$ for a compact set $K$. Therefore, in Section \ref{sec:L_n}, we will always work in the space ${L^{p,2}_{w,K}}$, where we choose $K$ to contain all the particles thanks to assumption \eqref{ass:uniform_bound}. The results in Section \ref{sec:L_n} immediately imply that the stochastic integral \eqref{def:U_n} that defines $u_n$ is well-defined, which yields Theorem \ref{th:well-posed}. The approximation \eqref{eq:u_n.tilde} suggests, through an appropriate version of the Law of Large Numbers, that the limit of $u_n$ is given in terms of the expectation of the torque and stresslet exerted on the fluid by each particle. The following lemma states that these expectations indeed correspond to the formula for the viscoelastic stress in \eqref{eq:viscoelastic}. The proof of the lemma is a straightforward calculation which we postpone to Appendix \ref{sec:Expectations}. \begin{lem} \label{lem:Expectations} Let $\xi_i$ be the solution to \eqref{eq:Particles.T_D}.
\am{For any $A\in {C}^\infty_c((0,t), \Sym_0(3))$ and $b\in {C}^\infty_c((0,t), \mathbb{R}^{3})$ we have \begin{align} \mathbb{E} \left[\int_0^t b(s) \cdot \sqrt{ \mathcal{R}_2(\xi_i(s))} \circ \, d B_i(s) \right] &= 0 , \label{eq:torque.average} \\ \mathbb{E} \left[ \int_0^t A(s): \mathcal{S}(\xi_i(s)) \sqrt{ \mathcal{R}_2(\xi_i(s))} \circ \, d B_i(s) \right] &= \frac{\gamma_E}{\sqrt{2\gamma_{rot}}} \mathbb{E}\int_0^t[3 \xi_i(s) \otimes \xi_i(s) - \mathrm{Id}]:A(s) \, d s. \label{eq:stress.average} \end{align} } \end{lem} The proof of the main convergence results, Theorems \ref{th:main.T_D} and \ref{th:main.T_u}, is completed in Sections \ref{section5} and \ref{section6}, respectively. Since the particle dynamics is uncoupled, we do not face here the problem of propagation of chaos as in classical mean field systems. However, a slight complication arises because the particle positions are not assumed to be independent (which is impossible due to assumption \eqref{ass:well.separated}). If they were independent, the law of each particle would be given by the Fokker-Planck equation \eqref{eq:Fokker-Planck.instationary} in the case of system \eqref{eq:Particles.T_D}, and we would deduce convergence immediately by the Law of Large Numbers. Instead, we follow here a compactness approach used for example in \cite{meleard1996asymptotic,oelschlager1985law,kipnis1998scaling,flandoli2021navier}. This approach is roughly described as follows. First, one shows tightness of the laws of the empirical measure $S_n$ and the fluid velocity $u_n$ in suitable function spaces. Then one introduces functionals such that, on the one hand, $u_n$ and $S_n$ concentrate on the zeroes of these functionals, and, on the other hand, these zeroes are precisely the unique solutions of the desired limit systems \eqref{eq:viscoelastic} together with \eqref{eq:Fokker-Planck.instationary} and \eqref{eq:Fokker-Planck.stationary}, respectively.
The choice of the functionals corresponds to distributional solutions of the limit system. We therefore need to show that such solutions are unique in the negative Sobolev spaces we use. The proof of these uniqueness results, which we give in Appendix \ref{appendixA} for completeness, is based on well-posedness and regularity of the corresponding dual problems. This is completely standard for the Stokes equations and the instationary Fokker-Planck equations. We carry out the slightly more involved proof for the stationary Fokker-Planck system in more detail. \subsection{Limitations and possible generalizations}\label{subsec:discussion} In this subsection we comment on open questions related to limitations and possible generalizations of the analysis in this paper. \medskip We dropped the evolution of the particle translations. It seems very challenging to include translations with the current techniques because they require well-separated particles in the sense of assumption \eqref{ass:well.separated}. Although this condition has been shown to propagate in time in \cite{Hofer18MeanField,Mecherbet18, Hofer&Schubert} under suitable assumptions for sedimenting inertialess non-Brownian rigid spherical particles, the presence of Brownian forces and torques in the current model could break such propagation. \medskip As discussed in Subsection \ref{sec:dynamics}, it seems physically more accurate to prescribe random forces and torques given by \eqref{eq:full.Stokes.Einstein} and the evolution of the particle orientations through \eqref{eq:angularVelocity}. For vanishing particle volume fraction as in assumption \eqref{ass:phi.log.n}, it seems not unrealistic to treat such a microscopic model (still ignoring the evolution of the particle centers).
One might worry about rotations caused by hydrodynamic interactions between the particles, which have a singularity like $|x|^{-3}$: a stresslet $S$ at a particle at $x_i$ creates a fluid velocity roughly like $\nabla \Phi(x-x_i) : S$, where $\Phi$ is the fundamental solution of the Stokes equations. The induced rotation at another particle then scales like the gradient of this function. However, as the viscoelastic stress is of smaller order than the diffusion rate by a factor $\phi_n$, this singular interaction could still be negligible if $\phi_n \to 0$. In the Doi model \eqref{eq:full.model} this corresponds to $\lambda_2 = \lambda_3 = 0$ for $\phi_n \to 0$. The situation changes completely when one considers non-vanishing volume fractions $\phi_n$. In that case, the singular interaction cannot be neglected. In fact, the critical singularity $|x|^{-3}$ of the interaction leaves little hope that one can pass to the formal mean-field limit that would give rise to the term $\dv_{\xi} (P_\xi^\perp \nabla_x u \xi f)$ in (a version of) the Doi model \eqref{eq:full.model}. Indeed, discrepancies between discrete and continuous convolutions with such singular convolution kernels have been studied in a related setting in \cite{Gerard-VaretHillairet19}. \medskip On the technical side, we are restricted to vanishing volume fractions $\phi_n$, more precisely to assumption \eqref{ass:phi.log.n}, because of the factor $\phi_n \log n$ appearing in the estimates for the shape derivatives (see Proposition \ref{pro:L_n}). It would be desirable to analyze whether this bound is optimal or could be improved under suitable assumptions on the particle configuration. \medskip We dropped the fluid inertia and modeled the fluid by the stationary Stokes equations. An interesting question would be to investigate the instationary Navier-Stokes equations. Heuristically, the result should be unchanged since on the microscopic scale of the particles the fluid inertia does not matter.
Mathematically though, already the problem of giving a meaning to the microscopic fluid velocity changes completely. To our knowledge, even in the case of quasistatic non-Brownian particles, no rigorous homogenization results regarding the effective viscosity of suspensions are available for the Navier-Stokes equations. This has been obtained only for prescribed particle velocities (i.e. inertial particles) in \cite{Feireisl2016}, where the effective equation contains the typical Brinkman term. \medskip In this paper the particles are all obtained from (isotropic) rescaling of a fixed reference particle. In order to model rod-like particles, it would be interesting to consider particles which become more and more slender as $n \to \infty$. We refer to \cite{HoeferPrangeSueur22} for the analysis of the evolution of such filaments in a fluid. \medskip Finally, an important open problem is the rigorous derivation of models (expressions for the viscoelastic stress to begin with) of flexible polymers mentioned in the introduction. In the simplest setting one could start with a dumbbell model, where the flexible polymer is just modeled by two rigid balls connected by an (infinitesimally small) spring. \section{The microscopic model}\label{section2} \subsection{The particles} \label{sec:particles} We consider $n$ identical particles $\mathcal{B}_i$ given by scaling, rotation and translation of a reference particle $\mathcal{B}$. We assume that $0 \in \mathcal{B} \subset B(0,1)$ is a smooth compact set with rotational symmetry, i.e. \begin{align}\label{rod_reference} R \mathcal{B} = \mathcal{B} \quad \text{for all } R \in SO(3) \text{ with } R e_3 =e_3, \end{align} where $e_3 = (0,0,1) \in \mathbb{R}^3$. We then consider $n$ identical particles $\mathcal{B}_i = x_i + r R(\xi_i) \mathcal{B}$, where $r>0$ is a scaling factor, $x_i \in \mathbb{R}^3$ is the position and $\xi_i \in \S^2$ the orientation of the $i$-th particle.
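A prototypical example of an admissible reference particle (our illustration; any smooth axisymmetric compact set works) is a prolate spheroid with symmetry axis $e_3$:

```latex
\begin{align*}
  \mathcal{B} = \Big\{ x \in \mathbb{R}^3 :
    \frac{x_1^2 + x_2^2}{a^2} + x_3^2 \leq 1 \Big\},
  \qquad 0 < a \leq 1 .
\end{align*}
% Indeed, 0 \in \mathcal{B} \subset B(0,1) and R\mathcal{B} = \mathcal{B} for every
% R \in SO(3) with R e_3 = e_3, so the symmetry assumption \eqref{rod_reference}
% holds; small a corresponds to an increasingly slender rod.
```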
The rotation matrix $R(\xi_i) \in SO(3)$ is chosen such that $R(\xi_i) e_3 = \xi_i$. Note that this constraint does not characterize $R(\xi_i)$ uniquely, but due to the rotational symmetry of $\mathcal{B}$, the choice of $R(\xi_i)$ does not affect $\mathcal{B}_i$. \subsection{The resistance tensor} \label{sec:resistance} For the setup of the model, we need to recall the notion of the (Stokes) resistance tensor. A detailed discussion on this topic can be found for example in \cite[Chapter 5]{KimKarilla13}. In the following $\Sym_0(3)$ denotes the symmetric traceless matrices in $\mathbb{R}^{3 \times 3}$. For an arbitrary (smooth) bounded domain $A \subset \mathbb{R}^3$, the (grand) resistance tensor $\mathcal{R}_A \in \mathbb{R}^{6 \times 6}$ relates the translational and angular velocities $V, \omega \in \mathbb{R}^3$ and the rate of strain $E \in \Sym_0(3)$ of a particle in a quiescent Stokes flow to the force, torque and stresslet exerted on the fluid. More precisely, let $x_0 \in \mathbb{R}^3$ be a fixed reference point, and consider the solution $(v,p) \in (\dot H^1(\mathbb{R}^3),L^2(\mathbb{R}^3))$ (where $\dot{H}^1(\mathbb{R}^3)$ stands for the standard homogeneous Sobolev space) to the Stokes equations with some viscosity $\mu > 0$ \begin{equation} \label{eq:resistance.problem} \left\{ \begin{array}{ll} - \mu \Delta v + \nabla p = 0, \quad \dv v = 0 &\quad \text{in } \mathbb{R}^3 \setminus A, \\ v = V + \omega \times (x - x_0) + E(x-x_0) & \quad \text{in } A. \end{array} \right. 
\end{equation} Then, the force, torque and stresslet exerted on the fluid by the particle are given by \begin{align} F &= -\int_{\partial A} \Sigma[v,p] \nu, \\ T &= -\int_{\partial A} \Sigma[v,p] \nu \times (x-x_0), \\ S &= - \sym_0\left(\int_{\partial A} \Sigma[v,p] \nu \otimes (x-x_0) \right), \end{align} with $\nu$ the outer normal at $\partial A$ and $\Sigma[v,p] = 2 \mu Dv - p \mathrm{Id}$ the fluid stress, where $D v = \sym(\nabla v)$ and $\sym B$ and $\sym_0 B$ denote the symmetric and symmetric traceless part of a matrix $B \in \mathbb{R}^{d \times d}$, respectively. Then, the resistance tensor is defined through\footnote{By linearity, one can replace the velocities $V,\omega,E$ by the corresponding quantities relative to prescribed velocities of the fluid at infinity, which we set equal to zero by imposing $v \in \dot H^1(\mathbb{R}^3)$. Considering nonzero velocities at infinity, $E$ might be nonzero even for a rigid particle.} \begin{equation} \label{def:R} \begin{pmatrix} F \\ T \\ S \end{pmatrix}= \mu \mathcal{R}_A \begin{pmatrix} V \\ \omega \\ E \end{pmatrix}. \end{equation} By linearity of the Stokes equations, $\mathcal{R}_A$ is well defined. Moreover, by integration by parts, one sees that $\mathcal{R}_A$ is symmetric and positive definite.
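As a simple sanity check (classical values for the sphere, see e.g. \cite{KimKarilla13}), if $A = B(0,1)$ is the unit ball and $x_0 = 0$, the resistance tensor is block diagonal:

```latex
\begin{align*}
  F = 6 \pi \mu V, \qquad T = 8 \pi \mu \omega, \qquad
  S = \tfrac{20}{3} \pi \mu E .
\end{align*}
% In the notation introduced below, this means \gamma_\perp = \gamma_\parallel = 6\pi,
% \gamma_{rot} = \gamma_{rot,\parallel} = 8\pi and \gamma_E = 0: a sphere exhibits
% no torque-stresslet coupling, in line with the fact that \gamma_E \neq 0 only
% for genuinely non-spherical spheroids.
```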
We denote \begin{align*} \bar \mathcal{R} := \mathcal R_{\mathcal B} := \begin{pmatrix} \bar \mathcal{R}_1 & \bar \mathcal{R}_{12} & \bar \mathcal{R}_{13} \\ \bar \mathcal{R}_{12}^T & \bar \mathcal{R}_2 & \bar \mathcal{R}_{23} \\ \bar \mathcal{R}_{13}^T & \bar \mathcal{R}^T_{23} & \bar \mathcal{R}_3 \end{pmatrix}, \end{align*} and find due to rotational symmetry \begin{align} \bar \mathcal{R}_1 &= \gamma_\perp (\mathrm{Id} - e_3 \otimes e_3) + \gamma_\parallel e_3 \otimes e_3, \label{eq:translational.resistence.reference}\\ \bar \mathcal{R}_2 &= \gamma_{rot} (\mathrm{Id} - e_3 \otimes e_3) + \gamma_{rot,\parallel} e_3 \otimes e_3, \label{eq:angular.resistence.reference} \\ \bar \mathcal{R}_{23}^T \omega &= \gamma_E \Sym\left( (\omega \times e_3) \otimes e_3\right ), \end{align} for some $\gamma_\perp,\gamma_\parallel, \gamma_{rot}, \gamma_{rot,\parallel} > 0$, $\gamma_E \in \mathbb{R}$. Formulas for these quantities in the case of spheroids can be found in \cite[Subsection 3.3]{KimKarilla13}. In particular, $\gamma_E \neq 0$ for all spheroids except for spheres. {(We omit the corresponding formulas for $\bar \mathcal{R}_3$, $\bar \mathcal{R}_{12}$, $\bar \mathcal{R}_{13}$ (see \cite[Subsection 5.5]{KimKarilla13}).
In fact, $\bar \mathcal{R}_{12}=0 = \bar \mathcal{R}_{13}$ according to \cite{Graham} by choosing $x_0$ as the so-called center of hydrodynamic reaction.)} By straightforward transformation arguments, the resistance tensor of $\mathcal{B}_i$ with respect to $x_i$ is given by \begin{equation} \label{def:R_B_i} \mathcal{R}_{\mathcal{B}_i} = \begin{pmatrix} r \mathcal{R}_1 (\xi_i) & r^2 \mathcal{R}_{12} (\xi_i) & r^2 \mathcal{R}_{13} (\xi_i) \\ r^2 \mathcal{R}_{12}^T (\xi_i) & r^3 \mathcal{R}_2(\xi_i) & r^3 \mathcal{R}_{23}(\xi_i) \\ r^2 \mathcal{R}_{13}^T (\xi_i) & r^3 \mathcal{R}_{23}^T(\xi_i) & r^3 \mathcal{R}_{3}(\xi_i) \end{pmatrix}, \end{equation} where \begin{align} \mathcal{R}_1(\xi_i) &= \gamma_\perp (\mathrm{Id} - \xi_i \otimes \xi_i) + \gamma_\parallel \xi_i \otimes \xi_i, \label{eq:R_1}\\ \mathcal{R}_2(\xi_i) &= \gamma_{rot} (\mathrm{Id} - \xi_i \otimes \xi_i) + \gamma_{rot,\parallel} \xi_i \otimes \xi_i, \label{eq:R_2} \\ \mathcal{R}_{23}^T(\xi_i) \omega &= \gamma_E \Sym\left( (\omega \times \xi_i) \otimes \xi_i\right ). \label{eq:R_23} \end{align} For an inertialess rigid particle, one is interested in determining the particle velocities $V,\omega$ as well as the stresslet $S$ when the force $F$, the torque $T$ and the rate of strain $E$ (which corresponds to the rate of strain of the fluid far from the particle) are given. This is known as the mobility problem.
We consider here only the case $E=0$, $\rhnew{F=0}$. In this case, we have \begin{align*} \omega &= \mu^{-1} r^{-3} \mathcal{R}_2^{-1} T, \\ S &= \mu r^3 (\mathcal{R}_{23})^T \omega = \mathcal{R}_{23}^T \mathcal{R}_2^{-1} T. \end{align*} The mapping $\mathcal{R}_{23}^T \mathcal{R}_2^{-1}$, which relates the torque to the stresslet, will play an important role in the analysis of the viscoelastic stress; hence we introduce the following operator \begin{equation} \label{def:mS_i} \begin{array}{lccl} \mathcal{S} :& \mathbb{S}^2 &\to& \L(\mathbb{R}^3, \Sym_0 (\mathbb{R}^3)),\\ & \xi & \mapsto & \mathcal{R}_{23}^T(\xi) \mathcal{R}_2^{-1}(\xi). \end{array} \end{equation} Note that $\mathcal{S}$ is smooth. In particular, there exists a constant $C>0$ such that \begin{equation}\label{reg_mS_i} \|\mathcal{S} \|_{{C}^1(\mathbb{S}^2;\mathcal{L}(\mathbb{R}^3, \Sym_0 (\mathbb{R}^3)))} \leq C. \end{equation} \subsection{The dynamics} \label{sec:dynamics} We assume that the fluid satisfies the Stokes equations with no-slip condition at the particles: \begin{equation} \label{eq:u_n} \left \{ \begin{array}{rcll} -\mu \Delta u_n+ \nabla p_n&=& 0 &\quad \text{ in }\mathbb{R}^3 \setminus \bigcup_{i=1}^n \mathcal{B}_i,\\ \dv u_n&=&0 &\quad\text{ in } \mathbb{R}^3 \setminus \bigcup_{i=1}^n \mathcal{B}_i,\\ u_n&=& v_i+\omega_i \times (x-x_i) &\quad \text{ in } \mathcal{B}_i. \end{array} \right. \end{equation} Here, $v_i, \omega_i \in \mathbb{R}^3$ are the translational and angular velocities of $\mathcal{B}_i$. Neglecting the particle inertia, the velocities $v_i, \omega_i$ are not given but are determined through the following conditions, prescribing the total force and torque acting on each particle: \begin{align} \label{eq:F_i.T_i} \int_{\partial B_i} \Sigma(u_n,p_n) \nu &=F_i ,\\ \int_{\partial B_i} [\Sigma(u_n,p_n) \nu ] \times (x-x_i)&=T_i.
\end{align} Since the particles are inertialess, these forces and torques balance the forces and torques acting on the particles, which are the sum of the external forces and torques and of the random forces and torques acting on each particle due to collisions with fluid particles, $F_i = F_i^E + F_i^B$ and $T_i = T_i^E + T_i^B$. \rh{ According to the fluctuation-dissipation theorem, the random forces and torques are given by \begin{align} \label{eq:full.Stokes.Einstein} (F^B,T^B) = \sqrt{2 k_B \Theta \mu\mathscr R_n} \mlnew{\circ \dot B}, \end{align} see e.g. \cite{Roux92}. Here, $k_B$ is the Boltzmann constant, $\Theta$ the absolute temperature, $B$ is a $6n$-dimensional Brownian motion and $F^B, T^B \in \mathbb{R}^{3n}$ are the vectors containing all the forces and torques $F_i^B, T_i^B$. Moreover, $\mathscr R_n \in \mathbb{R}^{6n\times6n}$ is the resistance matrix of all the particles (excluding stresslet/strain). More precisely, similarly to \eqref{def:R}, $\mathscr R_n$ relates given velocities $V_i,\omega_i$ at all particles to forces and torques $F_i,T_i$ by solving the corresponding $n$ particle problem instead of \eqref{eq:resistance.problem}. In particular, $\mathscr R_n$ depends on the positions and orientations of all the $n$ particles. } The fluid equations are complemented by the equations of motion for the particles \begin{align} \dot{x}_i &= v_i \label{eq:velocity},\\ \dot{\xi}_i &= \omega_i \times \xi_i. \label{eq:angularVelocity} \end{align} \subsection{Simplification of the model} \label{sec:simplification} As outlined in the introduction, deriving the Doi model from \eqref{eq:u_n}--\eqref{eq:angularVelocity} seems presently out of reach. We now detail the simplifications that lead from \eqref{eq:u_n}--\eqref{eq:angularVelocity} to the model \eqref{eq:Stokes.micro.T_D.intro}--\eqref{eq:Particles.T_D.intro}.
Instead of \eqref{eq:velocity} and \eqref{eq:angularVelocity}, we fix the particle centers and set the forces $F_i$ equal to $0$, \begin{align} \dot{x}_i &= 0 \label{eq:velocity0}, \\ F_i &= 0. \end{align} Moreover, instead of the equation of motion for the particle orientation \eqref{eq:angularVelocity}, we assume \begin{align} \, d \xi_i(s) &= \xi_i(s) \times \sqrt{2 k_B \Theta \mu^{-1} r^{-3} \mathcal{R}_{2}^{-1}} \circ \, d B_i(s) + P_{\xi_i^\perp} h(s,\xi_i(s),x_i) \, d s \label{eq:acceleration2}, \end{align} where $\mathcal{R}_2$ is as in \eqref{def:R_B_i}. The first term on the right-hand side above corresponds to an angular velocity $\omega_i$ caused by random collisions with fluid particles as if the particle $\mathcal{B}_i$ were alone in the fluid, i.e. $\omega_i = \mu^{-1} r^{-3} \mathcal{R}_{2}^{-1} \rhnew{T_i^B}$ \rhnew{with $T_i^B = \sqrt{2 k_B \Theta \mu r^{3} \mathcal{R}_2(\xi_i)} \mlnew{\circ \dot B_i}$}. This is known as hydrodynamic decoupling and is at least formally justified for small particle volume fraction. Here, and in the following, \mlnew{$B_i$ is a Brownian motion in $\mathbb{R}^3$}. The additional term in \eqref{eq:acceleration2} containing $h$ could be understood as arising from an external torque, for example associated with an external fluid flow, a magnetic field or chemotaxis. By an action-reaction principle, a corresponding torque should act on the fluid as well, but we will omit this for the sake of simplicity. More precisely, regarding the fluid equations, we consider the random torques $T_i^B$ as the only torques acting on the fluid.
Then, the simplified model at a glance is given by \begin{subequations} \label{eq:reduced} \begin{align} \label{eq:Stokes.reduced} \left \{ \begin{array}{rcl} \displaystyle -\mu \Delta u_n+ \nabla p_n&=& 0 \quad \text{ in }\mathbb{R}^3 \setminus \bigcup_{i=1}^n \mathcal{B}_i,\\ \dv u_n&=&0\quad \text{ in }\mathbb{R}^3 \setminus \bigcup_{i=1}^n \mathcal{B}_i,\\ \displaystyle D u_n&=&0\quad \text{ in } \bigcup_{i=1}^n \mathcal{B}_i, \\ \displaystyle \int_{\partial \mathcal B_i} \Sigma(u_n,p_n) \nu & = &0 , \\ \displaystyle \int_{\partial \mathcal B_i} [\Sigma(u_n,p_n) \nu ] \times (x-x_i) & = & \sqrt{2 k_B \Theta \mu r^{3} \mathcal{R}_2(\xi_i)} \mlnew{\circ \dot B_i}, \end{array} \right. \end{align} \begin{align} \label{eq:Particles.reduced} \left \{ \begin{array}{rcl} \displaystyle \, d \xi_i(s) &=& \xi_i(s) \times \sqrt{2 k_B \Theta \mu^{-1} r^{-3} \gamma_{rot}^{-1}} \circ \, d B_i(s) + P_{\xi_i^\perp} h(s,\xi_i(s),x_i) \, d s, \\ \xi_i(0) &=& \xi_{i,0}, \\ x_i(s) & = & x_{i,0}. \end{array} \right. \end{align} \end{subequations} Here we used \eqref{eq:R_2} and properties of the cross product to simplify the equation for $\xi_i$. Moreover, for ease of notation, we replaced the condition $u_n = v_i + \omega_i \times (x - x_i)$ in $\mathcal{B}_i$ by the equivalent condition $D u_n = 0$ in $\mathcal{B}_i$. \subsection{Nondimensionalization} \label{sec:nondim} We examine the expected order of magnitude of various terms and perform a nondimensionalization of the equation. A similar reasoning (for a model including translations and also gravity) can be found in \cite{OttoTzavaras08}. We recall that we make the assumption that we are in the so-called dilute regime, meaning \begin{align} \label{eq:diluteness} \phi_n := \frac{n r^3}{L^3} \ll 1, \end{align} where $L$ is the characteristic length of the cloud of particles such that $n/L^3$ is the number density. This assumption entails that the particles can rotate freely.
Note that $\phi_n$ is proportional to the particle volume fraction. For very elongated rods the reference particle $\mathcal{B}$ has a very small volume, so that the actual particle volume fraction might be much smaller than $\phi_n$. However, we fix the reference particle $\mathcal{B}$ to be independent of $n$. Throughout the paper we will often simply refer to $\phi_n$ as the particle volume fraction. From \eqref{eq:Particles.reduced} we obtain a rotational diffusion constant $$D_r \sim \frac{k_B \Theta}{\mu \gamma_{rot} r^3}. $$ This gives rise to a typical timescale for diffusion $$T_D = \frac 1 {D_r} \sim \frac{\mu r^3 \gamma_{rot}}{k_B \Theta}. $$ On the other hand, we look at the viscoelastic stress $\sigma$. Following the heuristic given in Subsection \ref{sec:heuristics}, we recall that a non-isotropic particle $\mathcal{B}_i$ induces a stresslet on the fluid proportional to the torque. The average stresslet produced by one particle then arises from the variation of the stresslet with respect to changes of the orientation. Thus, combining the formula for the torque from \eqref{eq:Stokes.reduced} with the random part of the change of orientation in \eqref{eq:Particles.reduced} formally leads to a stresslet of order $$|S_{i}| \sim \sqrt{k_B \Theta \mu^{-1} r^{-3}} \sqrt{k_B \Theta \mu r^3} = k_B \Theta, $$ for each individual particle. For a rigorous argument on how the individual stress arises, we refer to Lemma \ref{lem:Expectations}. To obtain the total \rhnew{viscoelastic} stress, we multiply by the number density, $|\sigma| \sim \frac n {L^3} |S_i|$. Since the induced fluid gradient is of order $ |\nabla u| \sim \frac{|\sigma|}{\mu}$, we arrive at the viscoelastic timescale $$T_u = \frac 1 {|\nabla u|} \sim \frac{\mu \gamma_{rot} L^3}{n k_B \Theta}. $$ In particular, we have $$\frac{T_D}{T_u} \sim \frac{n r^3}{L^3} = \phi_n \ll 1, $$ which means that the diffusion happens on a much faster timescale than the viscoelastic stress.
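The scaling relations above can be checked by elementary arithmetic. The sketch below uses purely illustrative parameter values (not taken from the text) and verifies that the two timescales, computed independently from their defining formulas, have ratio $\phi_n$.

```python
# Consistency check of the two timescales: T_D / T_u should equal phi_n = n r^3 / L^3.
# All numerical values are illustrative (SI units), chosen only to satisfy diluteness.
k_B, Theta = 1.380649e-23, 293.0        # Boltzmann constant, room temperature
mu, gamma_rot = 1.0e-3, 1.0             # fluid viscosity, O(1) geometric factor
r, L, n = 1.0e-7, 1.0e-4, 1.0e6         # particle size, cloud size, particle number

D_r = k_B * Theta / (mu * gamma_rot * r**3)      # rotational diffusion constant
T_D = 1.0 / D_r                                   # diffusive timescale
T_u = mu * gamma_rot * L**3 / (n * k_B * Theta)   # viscoelastic timescale
phi_n = n * r**3 / L**3                           # "volume fraction"
```

With these values $\phi_n = 10^{-3} \ll 1$, so the diluteness assumption \eqref{eq:diluteness} holds and $T_D \ll T_u$.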
Consequently, when nondimensionalizing, we can choose to rescale to the diffusive timescale $T=T_D$ or to rescale to the viscoelastic timescale $T= T_u$. We first rescale \eqref{eq:reduced} with the characteristic time $T_D$ and the length $L$. Keeping the same symbols for the rescaled quantities, lengthy but straightforward calculations\footnote{Here one needs to use that the Brownian motion scales as $B(Tt) \sim \sqrt T B(t)$.} yield \begin{subequations} \label{eq:micro.T_D} \begin{equation} \label{eq:Stokes.micro.T_D} \left \{ \begin{array}{rcl} - \Delta u_n+ \nabla p_n&=& 0 \quad \text{ in }\mathbb{R}^3 \setminus \bigcup_{i=1}^n \mathcal{B}_i,\\ \dv u_n&=&0 \quad \text{ in }\mathbb{R}^3 \setminus \bigcup_{i=1}^n \mathcal{B}_i,\\ D u_n&=&0 \quad \text{ in } \bigcup_{i=1}^n \mathcal{B}_i, \\ \displaystyle \int_{\partial B_i} \Sigma(u_n,p_n) \nu & = &0 , \\ \displaystyle \int_{\partial B_i} [\Sigma(u_n,p_n) \nu ] \times (x-x_i) &=& r^3 \sqrt{ 2 \gamma_{rot} \mathcal{R}_2(\xi_i(s))} \mlnew{\circ \dot B_i(s)}, \end{array} \right. \end{equation} \begin{align} \label{eq:Particles.T_D} \left \{ \begin{array}{rcl} \displaystyle \, d \xi_i(s) &=& \sqrt{2} \xi_i(s) \times \circ \, d B_i(s) + P_{\xi_i^\perp} h(s,\xi_i(s),x_i) \, d s , \\ \xi_i(0) &=& \xi_{i,0}. \end{array} \right. \end{align} \end{subequations} Note that from now on the nondimensional fluid stress is given as $\Sigma(u_n,p_n) = 2 D u_n - p_n \mathrm{Id}$. Note that due to the rescaling of the lengthscale with $L$, the ``volume fraction'' $\phi_n$ is now given by \begin{align} \phi_n = n r^3. 
\end{align} Finally, we remark that we dropped the \rhnew{trivial equation $x_i(s) = x_{i,0}$, and we consider instead the positions $x_i$ as given time-independent quantities.} Similarly rescaling instead with the characteristic time $T_u$ yields \begin{subequations} \label{eq:micro.T_u} \begin{equation} \label{eq:Stokes.micro.T_u} \left \{ \begin{array}{rcl} - \Delta u_n+ \nabla p_n&=& 0\quad \text{ in }\mathbb{R}^3 \setminus \bigcup_{i=1}^n \mathcal{B}_i,\\ \dv u_n&=&0\quad \text{ in }\mathbb{R}^3 \setminus \bigcup_{i=1}^n \mathcal{B}_i,\\ D u_n&=&0\quad \text{ in } \bigcup_{i=1}^n \mathcal{B}_i, \\ \displaystyle \int_{\partial B_i} \Sigma(u_n,p_n) \nu & = &0, \\ \displaystyle \int_{\partial B_i} [\Sigma(u_n,p_n) \nu ] \times (x-x_i) &=& \frac 1 n \sqrt{2 \gamma_{rot} \phi_n \mathcal{R}_2(\xi_i(s))} \mlnew{ \circ \dot B_i(s)}, \end{array} \right. \end{equation} \begin{align} \label{eq:Particles.T_u} \left \{ \begin{array}{rcl} \displaystyle \, d \xi_i(s) &=& \xi_i(s) \times \sqrt{\frac {2} {\phi_n}} \circ \, d B_i(s) + \frac 1 \phi_n P_{\xi_i^\perp} h(s,\xi_i(s),x_i) \, d s , \\ \xi_i(0) &=& \xi_{i,0}. \end{array} \right. \end{align} \end{subequations} \rh{Clearly \eqref{eq:micro.T_D} can be recovered from \eqref{eq:micro.T_u} upon rescaling time with $\phi_n$. Thus, in the limit $n\to \infty$ with $\phi_n \to 0$ one can interpret \eqref{eq:micro.T_D} as an initial layer of \eqref{eq:micro.T_u}. We emphasize that in this sense, the function $h$ in \eqref{eq:micro.T_D} corresponds to $h(\phi_n t, \cdot)$ for $h$ as in \eqref{eq:micro.T_u}. } \section{Uniqueness results for the Stokes and Fokker-Planck equations in negative Sobolev spaces}\label{appendixA} In this appendix we show the uniqueness results for solutions of the Stokes and the (in-)stationary Fokker-Planck equations as stated in Theorems \ref{thm:uniqueness.Stokes}, \ref{thm:uniqueness.instationary} and \ref{thm:uniqueness.stationary}.
\subsection{Proof of Theorem \ref{thm:uniqueness.Stokes}} \begin{proof}[Proof of Theorem \ref{thm:uniqueness.Stokes}] Well-posedness of \eqref{eq:viscoelastic} in \am{$L^2(0,T;\dot H^1_{\mathfrak{s}}(\mathbb{R}^3))$} is classical. By linearity, it therefore remains to show that there is at most one function $u \in H^{-s}((0,T),H^{-s}(\mathbb{R}^3))$ that satisfies $\Phi_\varphi(u,0) = 0$ for all $\varphi\in C^\infty_c((0,T),\mathbb{R}^3)$ with $\dv \varphi = 0$. Let $g \in C_c^{\infty}((0,T) \times \mathbb{R}^3)$ and define the pair $v_g \in C_c^\infty((0,T); \dot H_{\mathfrak s}^{2 +s}(\mathbb{R}^3) \cap \dot H_{\mathfrak s}^2(\mathbb{R}^3))$, $p_g \in C_c^\infty((0,T); \dot H^{1 +s}(\mathbb{R}^3) \cap \dot H^1(\mathbb{R}^3))$ to be the solution to the Stokes equations \[-\Delta v_g+\nabla p_g=g,\quad \dv v_g=0.\] Then, by density of divergence-free $C^\infty_{c}(\mathbb{R}^3)$ functions in $\dot H_{\mathfrak s}^{2 +s}(\mathbb{R}^3) \cap \dot H_{\mathfrak s}^2(\mathbb{R}^3)$, we have \begin{align} 0 = \Phi_{v_g}(u,0) = \langle u, g \rangle, \end{align} which finishes the proof. \end{proof} \subsection{Proof of Theorem \ref{thm:uniqueness.instationary}} \begin{proof}[Proof of Theorem \ref{thm:uniqueness.instationary}] Let $\varphi \in C_c^\infty([0,T] \times \mathbb{R}^3 \times \S^2)$ and let $\psi \in C_c^\infty([0,T] \times \mathbb{R}^3 \times \S^2)$ be the unique classical solution to the backwards parabolic equation \begin{equation}\left\{ \begin{array}{rcl} - \partial_t \psi - \Delta_\xi \psi - P_{\xi^\perp} h \cdot \nabla \psi &=& \varphi , \\ \psi(T,\cdot) &=& 0. \end{array} \right. \end{equation} By standard regularity theory for parabolic equations, $\psi$ is well-defined. Thus, for $f$ as in the statement \begin{align} 0 = \Psi_\psi(f) = \langle f_0, \psi(0) \rangle - \int_0^T \langle f, \varphi \rangle \, d t.
\end{align} Since $f_0$ is given, this identity characterizes $f$ uniquely, and therefore $f$ must coincide with the unique classical solution to \eqref{eq:Fokker-Planck.instationary}. \end{proof} \subsection{Proof of Theorem \ref{thm:uniqueness.stationary}} The proof of Theorem \ref{thm:uniqueness.stationary} relies on the following theorem regarding the elliptic operator \begin{align} L v = -\dv (\nabla v - \bar h v), \end{align} where \am{$\bar h \in C^\infty(\S^2;\mathbb{R}^3)$} with \begin{align} \label{eq:h.in.TS^2} \xi \cdot \am{\bar h = 0 }\quad \text{for all }\xi\in \S^2. \end{align} The condition \eqref{eq:h.in.TS^2} ensures that $\bar h$ takes values in the tangent space $T \S^2$. We denote the formal adjoint by \begin{align} L^\ast v := - \Delta v -\bar h \cdot \nabla v. \end{align} \rh{In what follows we will often deal with functions $\bar h$ depending on a parameter $z \in \mathbb{R}^m$ and denote by $L_{z}$ and $L_{z}^\ast$ the corresponding operators as above.} \begin{thm}\label{th:regularity.f.bar} \begin{enumerate}[label=(\roman*)., ref=(\roman*)] \item \label{it:unique.stationary} Let $\bar h \in C^\infty(\S^2;\mathbb{R}^3)$ satisfy \eqref{eq:h.in.TS^2}. Then $\dim \ker L = 1$ and all elements in $\ker L $ have a sign. In particular, there exists a unique classical solution $\bar f$ to \begin{align} \label{eq:stationary.normalized} - L \bar f = 0, && \int_{\S^2} \bar f = 1. \end{align} Furthermore, this solution $\bar f$ depends smoothly on $\bar h$. More precisely, if $(z,\xi) \in \mathbb{R}^m \times \S^2 \mapsto \bar h_z(\xi)$ is a smooth family such that $\bar h_z$ satisfies \eqref{eq:h.in.TS^2} for all $z \in \mathbb{R}^m$, then the family of solutions $z \mapsto \bar f_z$ is smooth. \item \label{it.smooth.dependence.dual} Let $(z,\xi) \in \mathbb{R}^m \times \S^2 \mapsto \bar h_z(\xi)$ be as above.
Then, for all $K \Subset \mathbb{R}^m$, $k \in \mathbb{N}$ there exists $C> 0$ such that for all $z \in K$ and all $\psi \in (\ker L_z)^\perp$, the unique solution $v \in H^1(\S^2)$ to $L_z^\ast v = \psi$ with $\int v = 0$ satisfies \begin{align} \label{est:uniform.spectral.gap} \|v\|_{H^{k+2}} \leq C \|\psi\|_{H^k}. \end{align} In particular, if $\varphi \in C_c^\infty(\mathbb{R}^m \times \S^2)$, and $v(z,\cdot)$ is the unique solution with $\int v(z,\cdot) = 0$ to $L^\ast_z v(z,\cdot) = \varphi(z,\cdot)$, then $v \in C_c^\infty(\mathbb{R}^m \times \S^2)$. \end{enumerate} \end{thm} \begin{proof} We rely on classical results concerning elliptic PDEs on compact manifolds, for which we refer to \cite{Taylor}. Existence and uniqueness for the problem \eqref{eq:stationary.normalized} is classical, see e.g. \cite{zeeman1988stability}. We give a short proof for completeness. The operator $L$ is a compact perturbation of the self-adjoint operator $-\Delta$. Thus, the index of $L$ is $0$, i.e. \begin{align} \dim \ker L = \dim \ker L^\ast. \end{align} Since $L^\ast$ contains no zero-order terms, the maximum principle implies $\ker L^\ast = \{ const\}$. In particular $\dim \ker L = \dim \ker L^\ast = 1$. It is easy to see that $g \in \ker L$ implies that also the positive and negative parts $g_+$, $g_-$ lie in $\ker L$. Thus $g = g_+$ or $g= g_-$; otherwise $g_+$ and $g_-$ would be linearly independent, contradicting $\dim \ker L = 1$. \medskip The proof of the assertion that $\bar f_z$ depends smoothly on $z$ is similar to the proof of \ref{it.smooth.dependence.dual}, which we prove first. Let $v,\psi$ be as in the statement. From standard elliptic regularity theory it follows that \begin{align} \label{est:elliptic.regularity} \|v\|_{H^{k+2}} \leq C( \|v\|_{L^2} + \|\psi\|_{H^k}) \end{align} for a constant $C$ independent of $z$ in $K$. For \eqref{est:uniform.spectral.gap}, it remains to show that \begin{align*} \|v\|_{L^2} \leq C \|\psi\|_{L^2}.
\end{align*} Assume for the sake of contradiction that there exists no such constant. Then, there exists a sequence $z_n \in K$, $\psi_n \in (\ker L_{z_n})^\perp$, $v_n \in H^1(\S^2)$ such that $L^\ast_{z_n} v_n = \psi_n$, $\int v_n = 0$, $\|v_n\|_{L^2} = 1$ and $\|\psi_n\|_{L^2} \leq 1/n$. By \eqref{est:elliptic.regularity}, $v_n$ is bounded in $H^2(\S^2)$, thus by taking a suitable subsequence $v_n \to v_\ast$ in $H^1(\S^2)$ and $z_n \to z_\ast$ for some $v_\ast \in H^2(\S^2)$, $z_\ast \in K$. Note that by the smoothness assumption on $h$, we have $h_{z_n} \cdot \nabla v_n \to h_{z_\ast} \cdot \nabla v_\ast$ in $L^2$. Thus, $L^\ast_{z_\ast} v_\ast = 0$. Since $\int v_\ast = 0$ and $\|v_\ast\|_{L^2} = 1$, this contradicts $\ker L_{z_\ast}^\ast = \{const\}$. This establishes \eqref{est:uniform.spectral.gap}. \medskip For the second part of \ref{it.smooth.dependence.dual}, let $\varphi \in C_c^\infty(\mathbb{R}^m \times \S^2)$, and $v(z,\cdot)$ be the unique solution with $\int v(z,\cdot) = 0$ to $L^\ast_z v(z,\cdot) = \varphi(z,\cdot)$. Note that $v(z,\cdot) = 0$ if $\varphi(z,\cdot) = 0.$ Moreover, for $z_1,z_2 \in \mathbb{R}^m$, we have \begin{align*} L^\ast_{z_1} (v(z_1,\cdot) - v(z_2,\cdot)) = ( h_{z_2} - h_{z_1}) \cdot \nabla v(z_2,\cdot) + \varphi(z_1,\cdot) - \varphi(z_2,\cdot). \end{align*} Thus, by \eqref{est:uniform.spectral.gap}, we have \begin{align*} \|v(z_1,\cdot) - v(z_2,\cdot)\|_{H^{k+2}(\S^2)} &\leq C \| h_{z_1} - h_{z_2}\|_{W^{k,\infty}(\S^2)} \|\nabla v(z_2,\cdot)\|_{H^k(\S^2)} \\ &+ \|\varphi(z_1,\cdot)- \varphi (z_2,\cdot)\|_{H^k(\S^2)} \\ & \leq C |z_1 - z_2|. \end{align*} Thus, $v \in W^{1,\infty}(\mathbb{R}^m;H^k(\S^2))$. Differentiating the equation with respect to $z_j$ leads to \begin{align} L^\ast \tilde v = \tilde \varphi + \tilde h \cdot \nabla v, \end{align} where $\tilde v = \partial_{z_j} v, \tilde h = \partial_{z_j} h$ and $\tilde \varphi = \partial_{z_j} \varphi$. Note that $\int \tilde v(z,\cdot) = 0$.
Thus, repeating the above argument, we find that $\tilde v \in W^{1,\infty}(\mathbb{R}^m;H^k(\S^2))$ and by induction $v \in C_c^\infty(\mathbb{R}^m \times \S^2)$. \medskip To see that $\bar f_z$ as in the second part of assertion \ref{it:unique.stationary} depends smoothly on $z$, we proceed analogously: Similarly as before, we observe that for any $\psi \in (\ker L^\ast_z)^\perp$ the unique solution to $L_z v = \psi$ with $\int v = 0$ satisfies \begin{align} \label{est:regularity.L} \|v\|_{H^{k+2}} \leq C \|\psi\|_{H^k}, \end{align} where $C$ depends only on $k$, uniformly on compact sets $K \subset \mathbb{R}^m$. Note that the only difference between this estimate and \eqref{est:uniform.spectral.gap} is that $\int v = 0$ is not equivalent to $v \in (\ker L_z)^\perp$. However, inspection of the proof above reveals that we never used orthogonality but only that \begin{align} \label{eq:ker.sign} \{v \in \ker L_z : \int v= 0 \} = \{0\}. \end{align} This still holds true, since $\dim \ker L_z =1$ and $\bar f_z \in \ker L_z$ with $\int \bar f_z =1$. Thus, \eqref{est:regularity.L} holds. Next, we observe that $\bar f_{z}$ satisfies \begin{align} \label{est:bar.f} \|\bar f_z\|_{H^k} \leq C \end{align} uniformly on compact sets $K \subset \mathbb{R}^m$. Indeed, by elliptic regularity corresponding to \eqref{est:elliptic.regularity}, it suffices to treat the case $k =0$. Observe that \begin{align} \bar f_z = \frac{\tilde f_z}{\int \tilde f_z \, d \xi} \end{align} for the unique normalized non-negative eigenvector $\tilde f_z \in \ker L_z$ with $\|\tilde f_z\|_2 = 1$. Thus, it suffices to show that \begin{align} \int \tilde f_z \, d \xi \geq c, \end{align} uniformly on compact sets $K \subset \mathbb{R}^m$. Again, we argue by contradiction. Indeed, if such an estimate were not true, we would find a sequence $z_n \to z_\ast$ such that $\tilde f_{z_n} \to f_\ast$ in $H^1(\S^2)$ with $\int f_\ast = 0$. But then $f_\ast \in \ker L_{z_\ast}$ with $\|f_\ast\|_2 =1$, which contradicts \eqref{eq:ker.sign}.
This proves \eqref{est:bar.f}. Finally, for $z_1,z_2 \in \mathbb{R}^m$ we have \begin{align} L_{z_1}( \bar f_{z_1} - \bar f_{z_2}) = \dv((h_{z_1} - h_{z_2}) \bar f_{z_2}). \end{align} Using \eqref{est:bar.f}, the right-hand side is bounded by $C |z_1 - z_2|$ in $H^k$ and thus \eqref{est:regularity.L} implies Lipschitz estimates in $z$ as before and we conclude again by differentiation and iteration. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:uniqueness.stationary}] We denote by $L_{t,x}$ the operator corresponding to $\bar h_{t,x}(\xi) = P_{\xi^\perp} h(t,x,\xi)$, and by $\bar f_{t,x}$ the corresponding unique solution to \eqref{eq:stationary.normalized}. We claim that $\bar \Psi_\psi(g) = 0$ for all $\psi \in C_c^\infty((0,T) \times \mathbb{R}^3 \times \S^2)$ implies \begin{align} \label{eq:g=bar.f.mu} g = \frac{1}{\|\bar f_{\boldsymbol \cdot}\|_{L^2}^2} \bar f_{\boldsymbol \cdot} \mu \end{align} for some $\mu \in (C_c^\infty((0,T) \times \mathbb{R}^3))^\ast$ in the sense that \begin{align} \langle g, \psi \rangle = \langle \mu, \frac{1}{\|\bar f_{\boldsymbol \cdot}\|_{L^2}^2} \int_{\S^2} \bar f_{\boldsymbol \cdot}(\xi) \psi(\cdot,\xi) \, d \xi \rangle \end{align} Note that this is well-defined since $(t,x,\xi) \mapsto \bar f_{t,x}(\xi)$ is smooth by Theorem \ref{th:regularity.f.bar} \ref{it:unique.stationary}. From this identity, $\int_{\S^2} \bar f_{t,x}(\xi) \, d \xi = 1$ and $\Theta_\theta(g)= 0$ for all $\theta \in C_c^\infty((0,T) \times \mathbb{R}^3)$, we immediately deduce \begin{align} \mu = \|\bar f_{t,x}\|_{L^2}^2 \int_{\S^2} f_0(x,\xi) \, d \xi, \end{align} which yields uniqueness \am{and smoothness in $(t,\xi)\in(0,T)\times \S^2$ of the solution for almost all $x\in \mathbb{R}^3$.} It remains to prove \eqref{eq:g=bar.f.mu}. 
Let $\psi \in C_c^\infty((0,T) \times \mathbb{R}^3 \times \S^2)$ and let \begin{align} \varphi(t,x,\xi) = \psi(t,x,\xi) - \frac{1}{\|\bar f_{t,x}\|_{L^2}^2} \int \bar f_{t,x}(\zeta) \psi(t,x,\zeta) \, d \zeta \bar f_{t,x}(\xi). \end{align} Then, $\varphi(t,x,\cdot) \in (\ker L_{t,x})^\perp$. Hence, by Theorem \ref{th:regularity.f.bar} \ref{it.smooth.dependence.dual}, there exists $\tilde \varphi(t,x,\xi) \in C_c^\infty((0,T) \times \mathbb{R}^3 \times \S^2)$ such that $L_{t,x}^\ast \tilde \varphi(t,x,\xi) = \varphi(t,x,\xi)$ and $\int_{\S^2} \tilde \varphi(t,x,\xi) \, d \xi = 0$. Thus, $\langle g, \varphi \rangle = \Psi_{\tilde \varphi}(g) = 0$ and hence \begin{align} \langle g, \psi \rangle = \langle g,\frac{1}{\|\bar f_{\boldsymbol \cdot}\|_{L^2}^2} \int \bar f_{\cdot}(\zeta) \psi(\cdot,\zeta) \, d \zeta \bar f_{\boldsymbol \cdot} \rangle \end{align} which yields the claim with \begin{align} \langle \mu, v \rangle = \langle g, \frac{1}{\|\bar f_{\boldsymbol \cdot}\|_{L^2}^2} \bar f_{\boldsymbol \cdot} v \rangle. \end{align} This concludes the proof of the theorem. \end{proof} \section{Some embeddings in weighted Sobolev spaces}\label{sec:weighted sobolev space} In this appendix we show some embeddings for the weighted fractional Sobolev space introduced in Subsection \ref{subsec:notatation}. \begin{lem} \label{lem:embedding.Lebesque.weighted} Let $K$ be an open bounded set in $\mathbb{R}^3$. Consider the non-negative function $w(x)=(1+|x|)^a$ with $a\geq 0$. Then, for $p\geq\frac{6}{3+2s}$, \rh{we have the continuous embedding} \[L^{p,2}_{w,K} \hookrightarrow H^{-s}_w(\mathbb{R}^3).\] \end{lem} \begin{proof} By recalling the definition of the $H^s(K)$ norm and the $H^s_{1/w}(\mathbb{R}^3)$ norm and noting that $w$ is bounded on the bounded set $K$, so that $1/w$ is bounded from below on $K$, we have \begin{equation} \norm{f}_{H^s(K)}\leq C \norm{f}_{H^s_{1/w}(\mathbb{R}^3)}. \end{equation} By Sobolev embedding we have that $H^s(K)\hookrightarrow L^{p'}(K)$ since $p' \leq \frac{6}{3-2s}$.
Hence, \begin{equation}\label{eq:embedding_1} H^s_{1/w}(\mathbb{R}^3)\hookrightarrow H^s(K)\hookrightarrow L^{p'}(K). \end{equation} Moreover, since $1/w\leq 1$, it is straightforward to prove that \begin{equation}\label{eq:embedding_2} H^s_{1/w}(\mathbb{R}^3)\hookrightarrow L^2_{1/w}(\mathbb{R}^3\setminus K). \end{equation} Now we observe that we can identify ${L^{p,2}_{w,K}} = L^2_{w}(\mathbb{R}^3\setminus K)\times L^{p}(K)$. Combining \eqref{eq:embedding_1} and \eqref{eq:embedding_2} we get that \[ H^s_{1/w}(\mathbb{R}^3)\hookrightarrow L^2_{1/w}(\mathbb{R}^3\setminus K)\times L^{p'}(K).\] Since $V \hookrightarrow W$ implies $W^\ast \hookrightarrow V^\ast$ and $\left(L^2_{1/w}(\mathbb{R}^3\setminus K)\times L^{p'}(K)\right)^*=L^2_{w}(\mathbb{R}^3\setminus K)\times L^{p}(K)$ we conclude the desired embedding. \end{proof} \begin{lem} \label{lem:compact.weighted} For \rh{$z>s>\frac{1}{2}$}, the embedding $H^{-s}_w(\mathbb{R}^3)\hookrightarrow H^{-z}(\mathbb{R}^3) $ is compact. \end{lem} \begin{proof} \rh{By Schauder's Theorem, it suffices to show compactness of the embedding $H^z(\mathbb{R}^3)\hookrightarrow H^{s}_{1/w}(\mathbb{R}^3)$.} To this end, we will prove that the unit ball of $H^{z}(\mathbb{R}^3)$ is precompact in $H^{s}_{1/w}(\mathbb{R}^3)$. Fix $L>0$. We introduce the ball $B(0,L)$ of radius $L$ in $\mathbb{R}^3$ and a cut-off function $\eta:\mathbb{R}^3\to\mathbb{R}$ such that $\norm{\eta}_{C^1(\mathbb{R}^3)}\leq C$ and \[\eta(x)=\begin{cases} 1\,\,\textrm{if}\quad x\in B(0,L),\\ 0\,\,\textrm{if}\quad x\in B(0,L+2)^c.
\end{cases}\] \rh{Let $\phi \in H^z(\mathbb{R}^3)$.} Then, \begin{align}\label{eq:norm_etaphi} \norm{\phi \eta}_{H^z(\mathbb{R}^3)}&\leq C \norm{\phi}_{H^z(\mathbb{R}^3)}\norm{\eta}_{C^1(\mathbb{R}^3)}, \\ \label{eq:norm_etaphi2} \norm{\phi(1-\eta)}_{H^s_{\frac{1}{w}}(\mathbb{R}^3)}&\leq \frac{C}{(1+L)^{a/2}} \norm{\phi}_{H^s(\mathbb{R}^3)}\norm{(1-\eta)}_{C^1(\mathbb{R}^3)}. \end{align} Indeed, \begin{align*} [\phi \eta]_{H^z(\mathbb{R}^3)}^2 &= \int_{\mathbb{R}^3}\int_{\mathbb{R}^3} \frac{|(\phi\eta)(x)-(\phi\eta)(y)|^2}{|x-y|^{3+2z}}dxdy \\ &\leq 2 \norm{\eta}^2_{C(\mathbb{R}^3)} [\phi]^2_{H^z(\mathbb{R}^3)}+2\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\frac{|\phi(y)|^2|\eta(x)-\eta(y)|^2}{|x-y|^{3+2z}}dxdy, \end{align*} and the second term is further estimated as \begin{align*} \int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\frac{|\phi(y)|^2|\eta(x)-\eta(y)|^2}{|x-y|^{3+2z}}dxdy &\leq\norm{\eta}^2_{C^1(\mathbb{R}^3)}\int_{\mathbb{R}^3}\int_{|x-y|\leq 1}\frac{|\phi(y)|^2}{|x-y|^{1+2z}}dxdy\\ &+\norm{\eta}^2_{C^1(\mathbb{R}^3)} \int_{\mathbb{R}^3}\int_{|x-y|> 1}\frac{|\phi(y)|^2}{|x-y|^{3+2z}}dxdy\\ &\leq C \norm{\eta}^2_{C^1(\mathbb{R}^3)}\norm{\phi}^2_{L^2(\mathbb{R}^3)}. \end{align*} Moreover, \begin{align*} \|\phi (1-\eta)\|_{H^s_{1/w}(\mathbb{R}^3)}^2 &=\int_{B(0,L)^c} \frac{|\phi(x)(1-\eta(x))|^2}{w(x)} \\ &+\int_{B(0,L)^c}\int_{B(0,L)^c}\frac{|(\phi(1-\eta))(x)-(\phi(1-\eta))(y)|^2}{|x-y|^{3+2s} w(x) w(y)}dxdy\\ &\leq \int_{\mathbb{R}^3} |\phi(x)(1-\eta(x))|^2\frac{1}{(1+L)^a}dx\\ &+\frac{1}{(1+L)^{2a}}\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\frac{|(\phi(1-\eta))(x)-(\phi(1-\eta))(y)|^2}{|x-y|^{3+2s}}dxdy\\ &\leq \norm{\phi(1-\eta)}^2_{H^s(\mathbb{R}^3)}\frac{1}{(1+L)^{a}}. \end{align*} \rh{ Let $\varepsilon >0$.
Since $H^{z}_0(B(0,{L+2}))$ is compactly embedded into $H^{s}_0(B(0,{L+2}))$, there exist $N \in \mathbb{N}$ and functions $u_i \in H^{s}_0(B(0,{L+2}))$, $1 \leq i \leq N$, such that \begin{align} B_{H^{z}_0(B(0,{L+2}))}(0,C) \subset \bigcup_{i=1}^N B_{H^{s}_0(B(0,{L+2}))}(u_i,\varepsilon), \end{align} where $C$ is the constant from \eqref{eq:norm_etaphi} and $B_X(u,\rho)$ denotes the ball in $X$ with center $u$ and radius $\rho$. Thus, (extending the functions $u_i$ by $0$ to functions in $H^{z}(\mathbb{R}^3)$ that vanish in $\mathbb{R}^3 \setminus B(0,L+2)$), for each $v \in H^{z}(\mathbb{R}^3)$ with $\|v\|_{H^{z}(\mathbb{R}^3)} \leq 1$, there exists $1 \leq i \leq N$ such that \begin{align} \|v - u_i\|_{H^{s}_{1/w}(\mathbb{R}^3)} \leq \|\eta v- u_i\|_{H^{s}(\mathbb{R}^3)} + \|(1-\eta) v\|_{H^{s}_{1/w}(\mathbb{R}^3)} \leq \varepsilon + \frac{C}{(1+L)^{a/2}}. \end{align} Choosing $L$ sufficiently large finishes the proof. } \end{proof}
\section{(1) Stochastic projected Gross-Pitaevskii (SPGPE) Theory} The SPGPE is a first-principles reservoir theory that quantitatively describes a finite-temperature ultracold Bose gas~[46]. Within this framework, highly-populated modes of the quantum field (generally $\gtrsim 1$ atoms on average) are treated as a coherent classical field $\psi$, which interacts with an incoherent thermal reservoir composed of the remaining sparsely-occupied high-energy modes. This leads to a stochastic equation of motion for the classical field $\psi$, which in Stratonovich form is \begin{align} \label{eq:SPGPEFull} i\hbar d\psi &= \mathcal{P}\big{\{} (1-i\gamma)(\mathcal{L}-\mu)\psi dt + i\hbar d\xi_\gamma (\textbf{r},t) + V_\varepsilon(\textbf{r},t)\psi dt - \hbar\psi dU_{\varepsilon}(\textbf{r},t) \big{\}} \,, \end{align} where $\mathcal{L}=H_0+g|\psi|^2$ for the single-particle Hamiltonian $H_0$. Here $g=4\pi a_s \hbar^2/m$ is the two-body interaction strength for an s-wave scattering length $a_s$. The explicit inclusion of the projector $\mathcal{P}$ ensures the dynamic separation of the field into a low-energy coherent region and an incoherent reservoir. The noise terms $d\xi_\gamma$ and $dU_{\varepsilon}$ correspond to incoherent thermal fluctuations from the number-damping and energy-damping processes, respectively, and, together with the deterministic dissipation terms, ultimately drive any initial state to a steady state at thermal equilibrium within the grand-canonical ensemble at temperature $T$ and chemical potential $\mu$. The strengths of the number-damping and energy-damping dissipation processes are characterized by the dimensionless quantity $\gamma$ and the length-squared quantity $\mathcal{M}$, respectively.
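One common implementation of the projector $\mathcal{P}$ is spectral: in a plane-wave basis it simply zeroes every mode whose single-particle energy exceeds $\epsilon_{\rm cut}$. A minimal one-dimensional sketch (with $\hbar=m=1$ and illustrative grid and cutoff values, not tied to any experiment):

```python
import numpy as np

# Spectral sketch of the projector P: transform to the plane-wave basis, drop all
# modes with single-particle energy above eps_cut, transform back.  1D toy model.
M_grid, L_box, eps_cut = 256, 50.0, 4.0
k = 2.0 * np.pi * np.fft.fftfreq(M_grid, d=L_box / M_grid)   # angular wavenumbers
low_energy = 0.5 * k**2 <= eps_cut                            # coherent-region modes

def project(psi):
    """Apply P by masking above-cutoff Fourier components."""
    return np.fft.ifft(low_energy * np.fft.fft(psi))

x = np.linspace(0.0, L_box, M_grid, endpoint=False)
psi = np.exp(-(x - L_box / 2)**2) + 0.1 * np.cos(3.0 * x)     # arbitrary test field
psi_C = project(psi)
```

Projection is idempotent and leaves the below-cutoff modes untouched, which is what makes the separation into a coherent region and a reservoir dynamically consistent.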
These parameters can be \emph{a priori} determined from the reservoir chemical potential $\mu$, temperature $T$, and the energy cutoff $\epsilon_\text{cut}$~[51]: \begin{align} \label{eq:ND_Rate} \gamma&=\frac{8a_s^2}{\lambda_\text{dB}^2}\sum_{j=1}^\infty \frac{e^{\beta \mu (j+1)}}{e^{2\beta\epsilon_\text{cut}j}} \Phi[e^{\beta (\mu-2\epsilon_\text{cut})},1,j] \,, \\ \label{eq:ED_Rate} \mathcal{M} &= \frac{16\pi a_s^2}{\exp\left(\frac{\epsilon_{\rm cut}-\mu}{k_B T}\right)-1} \,, \end{align} where $a_s$ is the s-wave scattering length, $\beta=1/(k_\text{B} T)$, $\lambda_\text{dB}=\sqrt{2\pi\hbar^2/(m k_\text{B} T)}$ is the thermal de~Broglie wavelength, and $\Phi[z,x,a]$ is the Lerch transcendent. In the main text, we define $N_{\rm cut}\equiv(e^{(\epsilon_{\rm cut}-\mu)/(k_B T)}-1)^{-1}$ as the number of reservoir atoms at the cutoff energy, as it is the thermal equilibrium number distribution for an ideal gas evaluated at the cutoff energy. This is a good estimate of the true number of atoms at the cutoff energy, as the cutoff energy $\epsilon_{\rm cut}$ should be sufficiently large compared to $\mu$ such that high-energy modes very close to the cutoff are essentially non-interacting (see the discussion below). The energy-damping coefficient can then be written as $\mathcal{M}=2\sigma_sN_{\rm cut}$, where $\sigma_s\equiv 8\pi a_s^2$ is the s-wave scattering cross section.
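Both reservoir rates are straightforward to evaluate numerically; the sketch below uses mpmath's built-in Lerch transcendent and purely illustrative reduced parameters ($\beta\mu=1$, $\epsilon_{\rm cut}=2\mu$, $a_s/\lambda_{\rm dB}=0.01$), not values from a specific experiment. It also checks the identity $\mathcal{M}=2\sigma_s N_{\rm cut}$ quoted above.

```python
from mpmath import lerchphi, exp, pi

# Reduced (dimensionless) reservoir parameters -- illustrative values only.
beta_mu = 1.0                   # mu / (k_B T)
beta_eps = 2.0 * beta_mu        # eps_cut / (k_B T), i.e. eps_cut = 2 mu
a_over_lam = 0.01               # a_s / lambda_dB

# Number-damping rate gamma, Eq. (eq:ND_Rate): truncated series; for these
# parameters the terms decay like e^{-3j}, so j <= 40 is far more than enough.
gamma_nd = 8.0 * a_over_lam**2 * float(sum(
    exp(beta_mu * (j + 1) - 2.0 * beta_eps * j)
    * lerchphi(exp(beta_mu - 2.0 * beta_eps), 1, j)
    for j in range(1, 40)))

# Cutoff occupation N_cut and energy-damping rate M (in units of a_s^2).
N_cut = 1.0 / (float(exp(beta_eps - beta_mu)) - 1.0)
M_over_as2 = 16.0 * float(pi) / (float(exp(beta_eps - beta_mu)) - 1.0)
```

In units of $a_s^2$ the identity reads $\mathcal{M}/a_s^2 = 16\pi N_{\rm cut} = 2(8\pi)N_{\rm cut}$, which the computed values satisfy by construction.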
The energy-damping dissipation process is described by an effective potential term $V_\varepsilon$: \begin{equation} \label{scattering potential} V_\varepsilon(\textbf{r},t) = -\hbar\int d^3 \textbf{r}' \varepsilon_\textrm{3D}(\textbf{r}-\textbf{r}') \nabla_{\textbf{r}'}\cdot \textbf{j}(\textbf{r}',t)\,, \end{equation} which is a convolution between the divergence of the particle current \begin{equation} \textbf{j}(\textbf{r},t) = \frac{i\hbar}{2m}[\psi\nabla\psi^*-\psi^*\nabla\psi] \end{equation} and the scattering kernel \begin{equation} \varepsilon_\textrm{3D}(\textbf{r}) = \frac{\mathcal{M} }{(2\pi)^3}\int d^3 \textbf{k} \frac{e^{i \textbf{k}\cdot \textbf{r}}}{|\textbf{k}|} \,. \end{equation} The noise terms in the SPGPE are random Gaussian variables with zero mean and correlations: \begin{align} \langle d\xi_\gamma^*(\mathbf{r},t)d\xi_\gamma(\mathbf{r}',t')\rangle &= \frac{2\gamma k_\text{B} T}{\hbar} \delta (\mathbf{r}-\mathbf{r'}) \delta(t-t')dt \,, \\ \label{eq:ED_NoiseCorr} \langle dU_{\varepsilon}(\mathbf{r},t)dU_{\varepsilon}(\mathbf{r'},t')\rangle &= \frac{2k_\text{B}T}{\hbar}\varepsilon_\textrm{3D}(\mathbf{r}-\mathbf{r'})\delta(t-t')dt \,. \end{align} Note that the number-damping noise $d\xi_\gamma$ is complex, whereas the energy-damping noise $dU_{\varepsilon}$ is real-valued. To zeroth order in the reservoir processes, the SPGPE satisfies the continuity equation \begin{align} \nabla \cdot \mathbf{j} + \frac{\partial\rho}{\partial t} =0 \,, \end{align} where $\rho=|\psi|^2$ is the fluid density (the leading correction occurs at order $\gamma \ll 1$). Therefore, at leading order in the dissipation parameters, the energy-damping potential directly opposes changes in the density: \begin{equation} \label{eq:approx_continuity} V_\varepsilon(\textbf{r},t)dt = \hbar\int d^3 \textbf{r}' \varepsilon(\textbf{r}-\textbf{r}') d\rho(\textbf{r}',t) + \mathcal{O}(\gamma \mathcal{M}) \,. 
\end{equation} This relation is used below in our derivation of the stochastic point-vortex equation. \subsection{Determination of the energy cutoff} Determining the value of the energy cutoff $\epsilon_{\rm cut}$ for a particular experiment is a non-trivial yet essential aspect of first-principles modelling with the SPGPE. To ensure the validity of the SPGPE framework, the choice of cutoff must satisfy two key properties. Firstly, $\epsilon_{\rm cut}$ must be chosen such that each of the modes in the low-energy region is appreciably occupied, i.e. has an occupation of no fewer than $\mathcal{O}(1)$ atoms~[46]. Secondly, the cutoff should be sufficiently large (compared to $\mu$) to ensure the interacting modes of the system are contained within the low-energy coherent region. The latter requirement is typically satisfied for $\epsilon_{\rm cut}\gtrsim 2\mu$~[39]. These constraints do not uniquely specify a particular energy cutoff, but rather tightly constrain appropriate choices of $\epsilon_{\rm cut}$. In principle, this means calculations in the SPGPE framework will depend weakly on the precise choice of $\epsilon_{\rm cut}$. In practice, it is therefore important for first-principles SPGPE calculations to demonstrate robustness of results to small variations of $\epsilon_{\rm cut}$ (on the order of $10\%$) -- see, for example, Refs.~[5,46,51,61]. In the main text, results are shown for a $15\%$ variation in $\epsilon_\text{cut}$. For the comparison to Ref.~[48] presented in Fig.~3 of the main text, we find the choice of $\epsilon_\text{cut}=2\mu$ to satisfy the two requirements described above across the temperature range considered. Significantly increasing the cutoff beyond this value results in the highest-energy modes becoming too sparsely occupied, particularly for the lower range of temperatures considered. For example, setting $\epsilon_\text{cut}=3\mu$ results in $N_{\rm cut}\approx 0.4$ for the $T=200\,$nK case in Fig.~3 of the main text.
Significantly reducing the cutoff below $2\mu$ will result in a number of appreciably-occupied interacting modes of the system inappropriately becoming part of the incoherent region. \section{(2) Quasi-2D SPGPE and approximate energy-damping potential} \label{append:Approx2DKernel} For studies of two-dimensional systems, it is convenient to work with a quasi-2D form of the SPGPE, where the transverse ($z$) degrees of freedom are integrated out. The resulting quasi-2D SPGPE (Eq.~(2) of the main text) has the same form as the 3D SPGPE Eq.~(\ref{eq:SPGPEFull}), with the following modified 2D scattering kernel~[60]: \begin{equation} \varepsilon(\textbf{x}) = \frac{1}{2\pi}\int d^2 \textbf{k} \, e^{i \textbf{k}\cdot \textbf{x}}\underbrace{\left[\frac{\mathcal{M}}{(2\pi)^2}F\left(\frac{(l_z |\textbf{k}|)^2}{4}\right)\right]}_{\tilde \varepsilon(\textbf{k})} \,, \end{equation} where $F(x)=e^xK_0(x)$, with $K_0$ a modified Bessel function of the second kind. This reduction to 2D assumes the transverse field can be described by a Gaussian of $1\sigma$ radius $l_z$, as described in Section 5 of this document. For the study of 2D vortex dynamics, we only require that this lengthscale be of the order of the healing length, $l_z\approx\xi$, which ensures that Kelvin waves along the vortex filaments are suppressed~[15]. Since this is a much less restrictive condition than the oblate confinement needed to realize a thermodynamically two-dimensional gas (i.e. the BKT transition), experiments can investigate two-dimensional vortex dynamics in a convenient regime where condensate fraction, temperature, etc. are all well defined~[52].\par The convolution with the scattering kernel in the energy-damping potential adds a level of complexity that prevents most integrals involving $V_\varepsilon$ from being solved analytically. In the main text we treat this by approximating the kernel as flat in Fourier space, evaluated at the vortex core scale $k=\xi^{-1}$.
This results in a simplified form of the kernel: \begin{align} \label{eq:approximate_edkernel} \varepsilon(\mathbf{x}) &\approx 2\pi \tilde{\varepsilon}(\xi^{-1}) \delta(\mathbf{x}) \,, \\ &= 2\sigma_{\rm ED} N_{\rm cut}\delta(\mathbf{x}) \,, \end{align} where $\sigma_{\rm ED}=\sigma_s F\left(\frac{l_z^2}{4\xi^2} \right)/(2\pi)$ is the effective energy-damping scattering cross section defined in the main text, and the factor of $2\pi$ in the first line arises due to the convolution theorem for the two-dimensional Fourier transform (in the unitary transform convention). \section{(3) Full derivation of stochastic point-vortex equation} Our derivation of the stochastic point-vortex equation described in the main text uses an effective Lagrangian formulation of the quasi-2D SPGPE and the approximate form of the energy-damping kernel Eq.~(\ref{eq:approximate_edkernel}) described in the previous section. Since idealized point-vortex dynamics can be rigorously derived from the GPE in the limit of well-separated vortices~[24], we describe the non-dissipative Gross-Pitaevskii dynamics of our system via the point-vortex Lagrangian~[74]: \begin{align} L_\text{PV} &= 2\pi\hbar\rho_0 \bigg( \sum_n \frac{q_n}{2}\epsilon_{ij} \dot{X}_n^i X_n^j + \frac{\hbar}{m}\sum_{m\neq n}q_n q_m \log\frac{|\mathbf{r}_m-\mathbf{r}_n|}{l} \bigg), \end{align} where $\epsilon_{ij}$ is the Levi-Civita symbol, $\rho_0$ is the background 2D density of the fluid, and $\mathbf{r}_n=(X_n,Y_n)^T$ and $q_n=\pm 1$ are the 2D position vector and charge of the $n$-th vortex, respectively. The box size $l$ is included here as a cutoff to regularize the PV theory; it does not explicitly appear at any point in our derivation below.
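As an illustration of the conservative dynamics encoded in $L_\text{PV}$, the background superfluid velocity seen by each vortex can be evaluated directly as a pairwise sum over all other vortices. The following sketch uses dimensionless units with $\hbar/m=1$; the function name and conventions are our own illustration, not part of the derivation:

```python
def pv_velocity(positions, charges, hbar_over_m=1.0):
    """Background superfluid velocity v_n^0 at each vortex:
    v_n^0 = (hbar/m) * sum_{m != n} (q_m / r_nm^2) * (-dy_nm, dx_nm),
    with dx_nm = X_n - X_m, dy_nm = Y_n - Y_m."""
    vels = []
    for n, (xn, yn) in enumerate(positions):
        vx = vy = 0.0
        for m, (xm, ym) in enumerate(positions):
            if m == n:
                continue
            dx, dy = xn - xm, yn - ym
            r2 = dx * dx + dy * dy
            vx += -charges[m] * dy / r2 * hbar_over_m
            vy += charges[m] * dx / r2 * hbar_over_m
        vels.append((vx, vy))
    return vels
```

For a vortex-antivortex pair separated by $d$, both vortices acquire the same velocity of magnitude $\hbar/(md)$ perpendicular to their separation, so the dipole translates rigidly, as expected from the ideal point-vortex model.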
The SPGPE energy-damping terms are real-valued and can be treated as potential energy terms that can simply be added to $L_\text{PV}$, giving the effective point-vortex Lagrangian \begin{align} L_\text{eff} &= L_\text{PV} + L_\text{damping} + L_\text{noise} \end{align} where the additional reservoir terms due to energy damping are: \begin{align} L_\text{damping} &\equiv \int d^2 \mathbf{x} \, V_\varepsilon(\mathbf{x},t)\rho(\mathbf{x},t) \label{eq:Ldamping} \\ L_\text{noise} dt &\equiv -\hbar \int d^2\mathbf{x} \, dU_{\varepsilon}(\mathbf{x},t) \rho(\mathbf{x},t) \,. \end{align} These terms do not depend upon the full classical field $\psi$, and can therefore be evaluated with an appropriate ansatz for the 2D fluid density $\rho=|\psi|^2$ alone. We choose a Gaussian ansatz for $\rho$ that separates the contribution of the vortices from the infinite background: \begin{align} \label{eq:DensityAnsatz} \rho(\textbf{x}) = \rho_0\left[1-\sum_{n} \exp\left(-\frac{|\mathbf{x}-\mathbf{r}_n(t)|^2}{2\xi^2}\right)\right], \end{align} which for a single vortex agrees well with the exact GPE solution for the core in the range $|\mathbf{x}-\mathbf{r}_n(t)| \lesssim \xi$. Although this ansatz is not strictly non-negative, it remains non-negative to a good approximation in the point-vortex regime where vortices are well separated (a condition required for the validity of the point-vortex model in general). As we show below, this ansatz allows the above integrals to be solved \emph{exactly} for an $N$-vortex system. An additional benefit of this form of ansatz is that the infinite background term does not need to be manually discarded, as only derivatives of the density contribute to the final equation of motion.
We first focus on the damping term Eq.~(\ref{eq:Ldamping}), which for the above ansatz and $\varepsilon(\mathbf{x}) \approx 2\sigma_{\rm ED} N_{\rm cut}\delta(\mathbf{x})$ is given by: \begin{align} L_\text{damping} &= 2\alpha_\varepsilon\pi\hbar\rho_0\sum_{nm}\exp\left(-\frac{r_{mn}^2}{4\xi^2}\right) \left(\dot{X}_m\delta x_{mn}+\delta y_{mn}\dot{Y}_m \right), \end{align} where we are using the shorthand $\delta x_{mn}\equiv X_m-X_n$, $\delta y_{mn}\equiv Y_m-Y_n$, $r_{mn}^2 \equiv \delta x_{mn}^2 + \delta y_{mn}^2$, and identified the expression for the mutual friction coefficient $\alpha_\varepsilon\equiv \sigma_{\rm ED}N_{\rm cut}\rho_0/2 = \sigma_s N_{\rm cut}\rho_0F(l_z^2/(4\xi^2))/(4\pi)$ defined in the main text. Neglecting $L_\text{noise}$ for now, we can derive the dissipative dynamics by taking the Euler-Lagrange equations with respect to $L_\text{PV}+L_\text{damping}$: \begin{align} \label{eq:ELEqns_Ideal} q_n\bigg[\begin{pmatrix} -\dot{Y}_n\\ \dot{X}_n \end{pmatrix} +\frac{\hbar}{m}\sum_{m \neq n} \frac{q_m}{r_{mn}^2} \begin{pmatrix} \delta x_{nm}\\ \delta y_{nm} \end{pmatrix} \bigg] =\frac{\alpha_\varepsilon}{2\xi^2}\sum_{m}\exp\left(-\frac{r_{mn}^2}{4\xi^2}\right)\begin{pmatrix} \dot{X}_m(2\xi^2-\delta x_{mn}^2)-\dot{Y}_m\delta x_{mn}\delta y_{mn} \\ \dot{Y}_m(2\xi^2-\delta y_{mn}^2)-\dot{X}_m \delta x_{mn}\delta y_{mn} \end{pmatrix}, \end{align} where we have cancelled common factors of $2\pi\hbar\rho_0$. In the point-vortex limit $r_{mn}^2 \gg \xi^2$, we may make the approximation $\exp[-r_{mn}^2 / (4 \xi^2)] \approx \delta_{mn}$, resulting in a very simple set of equations: \begin{align} \begin{pmatrix} \dot{X}_n\\ \dot{Y}_n \end{pmatrix} \approx \frac{\hbar}{m}\sum_{m \neq n}\frac{q_m}{r_{mn}^2} \begin{pmatrix} -\delta y_{nm}\\ \delta x_{nm} \end{pmatrix} +\frac{\alpha_\varepsilon}{q_n}\begin{pmatrix} \dot{Y}_n\\ -\dot{X}_n \end{pmatrix} \,. 
\end{align} Since the first term on the right-hand side is exactly the background superfluid velocity at the $n$-th vortex, $\mathbf{v}_n^0$, we may write this equation as (assuming $q_n=\pm 1$): \begin{align} \mathbf{\dot{r}}_n &= \mathbf{v}^0_n -\alpha_\varepsilon q_n \mathbf{\hat{z}}{\times}\dot{\mathbf{r}}_n \notag \\ \label{eq:dissipative-point-vortex} &= \mathbf{v}^0_n -\alpha_\varepsilon q_n\mathbf{\hat{z}}{\times}\mathbf{v}^0_n + \mathcal{O}\left( \alpha_\varepsilon^2\right) \,, \end{align} where in the last line we substituted in the zeroth-order result $\mathbf{\dot{r}}_n=\mathbf{v}^0_n+\mathcal{O}(\alpha_\varepsilon)$. This equation is precisely the damped PVM (Eq.~(1) of the main text), with mutual friction coefficient $\alpha_\varepsilon$. We now turn our attention to the energy-damping noise term in the full effective Lagrangian. In contrast to the approach for the damping term, we will find it convenient to evaluate the spatial integrals after first taking the Euler-Lagrange equations with respect to the full effective Lagrangian $L_{\rm eff}$, which gives the equations of motion: \begin{align} \begin{pmatrix} dX_n\\ dY_n \end{pmatrix} \approx \underbrace{\frac{\hbar}{m}\sum_{m \neq n}\frac{q_m}{r_{nm}^2} \begin{pmatrix} -\delta y_{nm}\\ \delta x_{nm} \end{pmatrix}dt +\alpha_\varepsilon q_n\begin{pmatrix} dY_n\\ -dX_n \end{pmatrix}}_{\text{RHS of Eq.~(\ref{eq:dissipative-point-vortex})}} + d\mathbf{U}_n(t)\,, \end{align} where we have again assumed $q_n=\pm 1$, and defined the stochastic noise vector \begin{align} d\mathbf{U}_n(t) &\equiv \frac{1}{2\pi \rho_0 q_n} \int d^2\mathbf{x} \, dU_{\varepsilon}(\mathbf{x},t) \begin{pmatrix} -(\partial \rho(\mathbf{x}) / \partial Y_n) \\ (\partial \rho(\mathbf{x}) / \partial X_n) \end{pmatrix}. \end{align} We will now consider the noise correlations of this stochastic noise vector using Eq.~\eqref{eq:ED_NoiseCorr}.
To simplify our expressions, we denote the $i$th element of the vectors $d\mathbf{U}_n(t)$ and $\textbf{r}_n = (X_n, Y_n)^T$ by $dU^i_n(t)$ and $X_n^i$, respectively, and define $\sigma^{ij}=1$ if $i = j$ and $\sigma^{ij} = -1$ if $i \neq j$. Following from the properties of $dU_\varepsilon$, $d\mathbf{U}_n(t)$ is a Gaussian noise vector with zero mean and correlations: \begin{subequations} \begin{align} \label{eq:projected_noisecorr} \langle dU^i_n(t) dU^j_m(t') \rangle &= \frac{\sigma^{ij}}{(2\pi\rho_0)^2} \int d^2\mathbf{x} \int d^2\mathbf{y} \, \frac{\partial \rho(\mathbf{x})}{\partial X^i_n}\frac{\partial \rho(\mathbf{y})}{\partial X^j_m} \langle dU_{\varepsilon}(\mathbf{x},t)dU_{\varepsilon}(\mathbf{y},t')\rangle \\ &= \frac{2k_B T\sigma^{ij}}{\hbar(2\pi\rho_0)^2}\delta(t-t')dt \int d^2\mathbf{x} \frac{\partial \rho(\mathbf{x})}{\partial X^i_n}\left(\int d^2\mathbf{y} \, \frac{\partial \rho(\mathbf{y})}{\partial X^j_m} \varepsilon\left(\mathbf{x}-\mathbf{y}\right)\right) \,. \end{align}\end{subequations} The integrand of the bracketed integral in Fourier space is a local product of the Fourier transform of $\partial\rho/\partial X^j_m$ and the kernel $\tilde{\varepsilon}(\mathbf{k})$. Following the same argument made in the main text and Section (2) of this document for the energy-damping potential term, we may approximate this integral by noting $\partial\rho/\partial X^j_m$ will be peaked in Fourier space at $k=\xi^{-1}$, allowing us to treat the kernel as constant at this scale: $\tilde{\varepsilon}(\mathbf{k})\approx \tilde{\varepsilon}(\xi^{-1})$. 
This allows us to make the substitution Eq.~\eqref{eq:approximate_edkernel} in the above correlation, \begin{subequations}\begin{align} \langle dU^i_n(t) dU^j_m(t') \rangle&\approx \sigma^{ij}\frac{4 k_B T \sigma_{\rm ED}N_{\rm cut}}{\hbar (2\pi\rho_0)^2} \delta(t-t')dt\int d^2\mathbf{x}\frac{\partial \rho(\mathbf{x})}{\partial X^i_n} \int d^2\mathbf{y} \, \frac{\partial \rho(\mathbf{y})}{\partial X^j_m}\delta(\mathbf{x-y}) \\ &= \sigma^{ij}\frac{k_B T \sigma_{\rm ED}N_{\rm cut}}{\pi^2\hbar\rho_0^2} \delta(t-t')dt \int d^2\mathbf{x} \, \frac{\partial \rho(\mathbf{x})}{\partial X^i_n}\frac{\partial \rho(\mathbf{x})}{\partial X^j_m} \,. \end{align}\end{subequations} The integral can be solved analytically for our choice of density ansatz \begin{subequations} \begin{align} \int d^2\mathbf{x} \, \frac{\partial \rho(\mathbf{x})}{\partial X^i_n}\frac{\partial \rho(\mathbf{x})}{\partial X^j_m} &= \frac{\pi \rho_0^2}{4\xi^2}\exp\left(-\frac{r_{mn}^2}{4\xi^2}\right)\left(2\xi^2\delta_{ij} - \delta X^i_{mn}\delta X^j_{mn}\right) \\ &\approx \frac{\pi\rho_0^2}{2}\delta_{ij}\delta_{nm} \,, \end{align} \end{subequations} where in the second line we have again made the approximation $\exp[-r_{mn}^2 / (4 \xi^2)] \approx \delta_{mn}$ valid in the point-vortex regime $r_{mn}^2 \gg \xi^2$. Therefore, off-diagonal correlations vanish, leading to the simple expression: \begin{subequations}\begin{align} \langle dU^i_n(t) dU^j_m(t') \rangle &= 2\frac{\alpha_\varepsilon k_B T}{2\pi\hbar \rho_0} \delta_{ij}\delta_{nm}\delta(t-t')dt, \end{align}\end{subequations} noting $\sigma^{ij}\delta_{ij}=\delta_{ij}$ and $\sigma_{\rm ED}N_{\rm cut} = 2\alpha_\varepsilon/\rho_0$. 
This correlation allows us to express the noise vector in terms of white noise processes: \begin{align} d\mathbf{U}_n(t)\equiv \sqrt{2\eta}d\mathbf{w}_n(t) = \sqrt{2\eta} \begin{pmatrix} dW^x_n(t) \\ dW^y_n(t) \end{pmatrix} \,, \end{align} where $dW_n^i$ are real Gaussian noises with zero mean and correlation $\langle dW_n^i(t) dW_m^j(t') \rangle = \delta_{ij} \delta_{nm} dt$ (i.e. Wiener increments), and we have defined the diffusion coefficient (units $\text{length}^2/\text{time}$): \begin{align} \eta \equiv\alpha_\varepsilon \frac{k_B T}{2\pi\hbar\rho_0} \,. \end{align} This leads to the stochastic point-vortex equation \begin{align} \label{eq:stochasticdPV_withED} d\mathbf{r}_i &= \left(\mathbf{v}^0_i - \alpha_\varepsilon q_i \mathbf{\hat{z}}{\times} \mathbf{v}^0_i\right) dt +\sqrt{2\eta} d\mathbf{w}_i\,, \end{align} which is presented as Eq.~(5) of the main text. \subsection{Numerical validation: Dipole decay} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{Dipole_ED_AnalyticvsNumeric.pdf} \caption{Numerical validation of the damped point-vortex model for dipole decay. (a) Dipole size as a function of time for various initial sizes $d_0\in [2,25]\xi$, as given by the analytic expression Eq.~(\ref{eq:dipole_analytic}) (lines) and numerical integration of the noiseless SPGPE (pluses). (b) Decay time $\tau_c$, defined by $d(\tau_c)=d_c$ for $d_c=2\xi$ and given in terms of the speed of sound $c=\sqrt{\mu/m}$, as a function of initial dipole size $d_0\in [4,15]\xi$. The analytic expression Eq.~(\ref{eq:dipole_analytic}) compares well with the numerical values, with increasing agreement for $d\gg d_c$. } \label{fig:Dipole_ED_AnalyticvsNumeric} \end{figure} Here we numerically validate our approximate treatment of the energy-damping kernel (i.e.
Eq.~\eqref{eq:approximate_edkernel}) by comparing the predictions of our derived point-vortex equation against direct integration of the quasi-2D SPGPE (with the exact expression for the scattering kernel $\varepsilon(\textbf{x})$) for a vertical thickness of $l_z=\xi$. For simplicity, we neglect the noise in both equations, effectively comparing the predictions of each equation for the \emph{mean} vortex dynamics. This allows us to separate deviations due to the approximate form of the kernel (used in both the damping and noise terms) from sampling errors in averaging over a finite number of stochastic trajectories. Specifically, we consider the decay of a vortex-antivortex dipole due to damping, for which the analytic solution to the damped point-vortex equation is well known (see, for example, [69]). For our model, this solution can be written as: \begin{align} d(t) = \sqrt{d(0)^2 -4\frac{\hbar}{m}\alpha_\varepsilon t}, \label{eq:dipole_analytic} \end{align} where $d(t)$ is the separation between the two vortices and we have neglected the contribution of number damping. By defining a critical scale $d_c$ at which the vortex-antivortex pair are expected to annihilate, we can estimate a timescale for decay, $\tau_c=m(d(0)^2-d_c^2)/(4\hbar\alpha_\varepsilon)$. An estimate of this critical scale is $d_c=2\xi$, where $\xi$ is the healing length of the fluid~[57,69]. Our simulations are performed in dimensionless healing-length units of $\xi$ (space) and $\xi/c=\hbar/\mu$ (time), with $l_z=\xi$ and $\mathcal{M}=2\sigma_s N_{\rm cut} \rho_0 = 0.1$ together giving a mutual friction coefficient $\alpha_\varepsilon\approx 0.006$. Here $c=\sqrt{\mu/m}$ is the speed of sound in the superfluid. \par Figure \ref{fig:Dipole_ED_AnalyticvsNumeric} compares the analytic expression Eq.~(\ref{eq:dipole_analytic}) to direct numerical integration of the noiseless quasi-2D SPGPE (with $\gamma=0$) for a range of initial dipole sizes.
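The analytic decay law Eq.~(\ref{eq:dipole_analytic}) and the associated decay time $\tau_c$ are straightforward to evaluate. The sketch below uses dimensionless healing-length units with $\hbar/m=1$; the helper names are our own, and the default $d_c=2$ follows the critical-scale estimate quoted above:

```python
import math

def dipole_size(t, d0, alpha, hbar_over_m=1.0):
    """Analytic dipole separation d(t) = sqrt(d0^2 - 4*(hbar/m)*alpha*t),
    with number damping neglected (healing-length units)."""
    return math.sqrt(d0 * d0 - 4.0 * hbar_over_m * alpha * t)

def decay_time(d0, alpha, dc=2.0, hbar_over_m=1.0):
    """Decay timescale tau_c solving d(tau_c) = dc:
    tau_c = (d0^2 - dc^2) / (4*(hbar/m)*alpha)."""
    return (d0 * d0 - dc * dc) / (4.0 * hbar_over_m * alpha)
```

For instance, with the mutual friction coefficient $\alpha_\varepsilon\approx 0.006$ quoted above, an initial separation $d(0)=10\xi$ gives $\tau_c = 96/(4\times 0.006) = 4000$ in units of $\xi/c$.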
We see strong quantitative agreement between the analytic and numerical results, particularly for large inter-vortex distances $d(t)\gg \xi$. We observe a growing discrepancy between the analytic and numerical results for dipole sizes $d(t)\lesssim 5\xi$, indicating the expected breakdown of a point-vortex description of the vortex dynamics. When the approximate form of the energy-damping kernel is also used in the numerical simulations, we observe a slight improvement in the agreement for dipoles with initial separation $d(0)\lesssim10\xi$. This demonstrates that there is a quantitative deviation due to the treatment of the kernel, but one that is small and only noticeable close to the breakdown of the point-vortex regime. \section{(4) Brownian motion in the mean-field approximation} Here we show that vortex evolution under the stochastic damped PVM corresponds to Brownian motion in the mean-field limit. Under the mean-field approximation, each vortex interacts with a mean-field flow induced by all other vortices, allowing us to approximate Eq.~\eqref{eq:stochasticdPV_withED} by: \begin{align} \begin{pmatrix} dX_i\\dY_i \end{pmatrix} &= \langle \mathbf{v}_i^0 - \alpha_\varepsilon q_i \mathbf{\hat{z}}{\times} \mathbf{v}^0_i\rangle dt + \sqrt{2\eta}\begin{pmatrix} dW_i^x\\dW_i^y \end{pmatrix} \,, \end{align} where we have replaced the background superfluid velocity at the $i$-th vortex, $\mathbf{v}_i^0$, with its stochastic average $\langle \mathbf{v}_i^0 \rangle$. In other words, under this assumption the $i$-th vortex interacts with the \emph{mean} velocity field produced by the dynamics of all other vortices.
Using the shorthand $\langle \mathbf{v}_i^0\rangle - \alpha_\varepsilon q_i \mathbf{\hat{z}}{\times} \langle \mathbf{v}^0_i\rangle \equiv (a_i(t),b_i(t),0)^T$, we can then write the solution of the above equation as a drift-diffusion process: \begin{subequations} \begin{align} X_i(t) &= x_0 + \int_0^t a_i(t') dt' + \sqrt{2\eta}\int_0^t dW_i^x(t') \,,\\ Y_i(t) &= y_0 + \int_0^t b_i(t')dt' + \sqrt{2\eta}\int_0^t dW_i^y(t') \,. \end{align} \end{subequations} From here we may compute the variance of the positions by noting that the noise terms have zero mean ($\langle dW_i^\alpha \rangle = 0$): \begin{align} \langle \Delta X_i^2 \rangle \equiv \left\langle (X_i(t)- \langle X_i(t) \rangle )^2 \right\rangle &= 2\eta \int_0^t dt' \int_0^t dt'' \langle dW_i^x(t')dW_i^x(t'') \rangle \\ &= 2\eta\int_0^t dt' = 2\eta t \,. \end{align} An identical calculation gives $\langle \Delta Y_i^2 \rangle=2\eta t$. Finally, this allows us to compute the growth of the variance induced by thermal fluctuations, in the mean-field approximation: \begin{align} \langle \Delta r_i^2 \rangle \equiv \langle \Delta X_i^2 +\Delta Y_i^2 \rangle = 4\eta t \,. \end{align} This can be interpreted as Brownian motion of vortices around the background flow, with diffusive growth of the position variance of each vortex. \par \section{(5) Estimation of atomic cloud parameters for calculations of the mutual friction coefficient} \label{sec:red_2D} \subsection{Reduction to 2D theory: Calculation of $l_z,\mu_{\rm 2D},\rho_0$ from 3D cloud parameters} A key parameter in our stochastic point-vortex theory is the vertical thickness of the atomic cloud $l_z$.
In terms of the quasi-2D SPGPE, this is defined as the $1\sigma$ radius of the transverse wavefunction, which is treated as a Gaussian~[60]. In this work we compute $l_z$ for a given harmonically trapped system from the analytical variational ground state for a Gaussian ansatz, as given in Ref.~[64]. Specifically, we find $l_z$ as the solution to the following algebraic equation ($b_i = l_i/\sqrt{\hbar/(m\omega_i)} $): \begin{align} \frac{1}{2}\hbar\omega_i\left(b_i^2 - \frac{1}{b_i^2}\right) - \frac{1}{2(2\pi)^{3/2}}\frac{gN_0}{l_{\rm geo}^3}\frac{1}{b_1b_2b_3}=0 \end{align} where $\omega_{\rm geo}=(\omega_x\omega_y\omega_z)^{1/3}$ is the geometric mean of the trapping frequencies, $l_{\rm geo}=\sqrt{\hbar/(m\omega_{\rm geo})}$, and $N_0$ is the number of condensate atoms. Integrating out the $z$ dimension results in an effective 2D chemical potential and interaction strength: \begin{align} \mu_{\rm 2D} &= \mu - \frac{m\omega_z^2l_z^2 }{4}- \frac{\hbar^2}{4ml_z^2} \,, \\ g_{\rm 2D} &= \frac{g}{\sqrt{2\pi}l_z} \,, \end{align} which we use to estimate the healing length $\xi=\hbar/\sqrt{m\mu_{\rm 2D}}$ and the 2D background density $\rho_0=\mu_{\rm 2D}/g_{\rm 2D}$. \subsection{Estimation of chemical potential for comparison to ZNG simulations of Ref.~[38]} In the main text we compare our microscopically-derived expression for the mutual friction coefficient to the numerical calculations of Ref.~[38]. In their calculations, the total atom number $N_T$ of the gas was fixed for all temperatures studied, resulting in a different chemical potential for each temperature considered -- therefore changing the effective energy cutoff for each temperature.
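For repulsive interactions, the left-hand side of the variational equation is monotonically increasing in each $b_i$, so the $z$-component can be found by simple bisection. A minimal sketch in toy dimensionless units: the lumped `interaction` constant stands in for $gN_0/(2(2\pi)^{3/2}l_{\rm geo}^3)$, and $b_x$, $b_y$ are held fixed rather than solved self-consistently, both assumptions of this illustration:

```python
def variational_bz(hbar_omega_z, interaction, bx, by, lo=0.5, hi=10.0, tol=1e-10):
    """Solve 0.5*hbar_omega_z*(b^2 - 1/b^2) - interaction/(bx*by*b) = 0
    for b = b_z by bisection. `interaction` lumps the prefactor
    g*N0 / (2*(2*pi)**1.5 * l_geo**3) in toy units."""
    def f(b):
        return 0.5 * hbar_omega_z * (b * b - 1.0 / (b * b)) - interaction / (bx * by * b)
    # f is monotonically increasing for b > 0 and interaction >= 0,
    # so the bracket [lo, hi] can simply be bisected.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In the non-interacting limit the solution reduces to $b_z=1$ (i.e. $l_z$ equals the harmonic oscillator length), and repulsive interactions push $b_z>1$, broadening the transverse profile as expected.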
For each temperature $T$, we compute the chemical potential by first estimating the number of condensate atoms $N_0$, using the thermodynamic expression~[63] \begin{align} \frac{N_0}{N_T} = \left[1-\left(\frac{T}{T_c^0}\right)^3\right] -\frac{3\omega_{\rm arith} \zeta(2) }{2\omega_{\rm geo} [\zeta(3)]^{2/3}}\left(\frac{T}{T_c^0}\right)^2 N_T^{-1/3} \,, \end{align} where $\omega_{\rm arith}=(\omega_x+\omega_y+\omega_z)/3$ is the arithmetic mean of the trapping frequencies, and $T_c^0=177$~nK is the ideal-gas critical temperature for the parameters in Ref.~[38]. The first term in this equation is simply the ideal-gas relation, and the second accounts for the first-order shift in the critical temperature due to the finite size of the trapped gas. From $N_0$, the chemical potential can then be estimated in the Thomas-Fermi approximation~[63]: \begin{align} \mu = \frac{\hbar \omega_{\rm geo}}{2}\left(\frac{15 N_0 a_s}{l_{\rm geo}}\right)^{2/5} \,. \end{align} This value of $\mu$ is then used to set the energy cutoff $\epsilon_{\rm cut}=2\mu$ and compute the 2D background density as described above. \hrulefill {\small \begin{enumerate} \setlength\itemsep{0.05em} \setcounter{enumi}{4} \item S.~J.~Rooney, T.~W.~Neely, B.~P.~Anderson, and A.~S.~Bradley, Physical Review A \textbf{88}, 063620 (2013). \setcounter{enumi}{14} \item S.~J.~Rooney, P.~B.~Blakie, B.~P.~Anderson, and A.~S.~Bradley, Physical Review A \textbf{84}, 023637 (2011). \setcounter{enumi}{17} \item T.~Simula, M.~J.~Davis, and K.~Helmerson, Physical Review Letters \textbf{113}, 165302 (2014). \setcounter{enumi}{23} \item A.~Lucas and P.~Surowka, Physical Review A \textbf{90}, 053617 (2014). \setcounter{enumi}{37} \item B.~Jackson, N.~P.~Proukakis, C.~F.~Barenghi, and E.~Zaremba, Physical Review A \textbf{79}, 053615 (2009). \setcounter{enumi}{38} \item S.~J.~Rooney, A.~S.~Bradley, and P.~B.~Blakie, Physical Review A \textbf{81}, 023630 (2010). \setcounter{enumi}{45} \item P.~Blakie, A.~Bradley, M.~Davis, R.
Ballagh, and C.~Gardiner, Advances in Physics \textbf{57}, 363 (2008). \setcounter{enumi}{47} \item G.~Moon, W.~J.~Kwon, H.~Lee, and Y.-i.~Shin, Physical Review A \textbf{92}, 051601 (2015). \setcounter{enumi}{50} \item A.~S.~Bradley, C.~W.~Gardiner, and M.~J.~Davis, Physical Review A \textbf{77}, 033616 (2008). \setcounter{enumi}{51} \item A.~S.~Bradley and B.~P.~Anderson, Physical Review X \textbf{2}, 041001 (2012). \setcounter{enumi}{59} \item A.~S.~Bradley, S.~J.~Rooney, and R.~G.~McDonald, Physical Review A \textbf{92}, 033631 (2015). \setcounter{enumi}{60} \item Z.~Mehdi, A.~S.~Bradley, J.~J.~Hope, and S.~S.~Szigeti, SciPost Physics \textbf{11}, 80 (2021). \setcounter{enumi}{62} \item F.~Dalfovo, S.~Giorgini, L.~P.~Pitaevskii, and S.~Stringari, Reviews of Modern Physics \textbf{71}, 463 (1999). \setcounter{enumi}{63} \item C.~J.~Pethick and H.~Smith, ``Bose–Einstein Condensation in Dilute Gases'' (Cambridge University Press, 2008). \setcounter{enumi}{68} \item W.~J.~Kwon, G.~Del~Pace, K.~Xhani, L.~Galantucci, A.~Muzi Falconi, M.~Inguscio, F.~Scazza, and G.~Roati, Nature \textbf{600}, 64 (2021). \setcounter{enumi}{73} \item This expression has an additional factor of two compared to that of Ref.~[24], which is required for agreement between the conserved energy of the PV Lagrangian and the GPE Hamiltonian. We have numerically confirmed this for the case of a well-separated vortex dipole. \end{enumerate} } \end{document}
\chapter*{Abstract} The aim of this project is to develop tracking and estimation techniques relevant to underwater targets. The received measurements of the targets have to be processed using models of the target dynamics to obtain better estimates of target states such as position, velocity, etc. This work includes an exploration of particle filtering techniques for target tracking. The particle filter is a numerical approximation method for implementing a recursive Bayesian estimation procedure. It does not require the assumptions of linearity and Gaussianity made by traditional Kalman filter (KF) based techniques. Hence it is capable of handling non-Gaussian noise distributions and non-linearities in the target's measurements as well as in the target dynamics. The performance of particle filters is verified using simulations and compared with the EKF. Particle filters can track maneuvering targets by increasing the number of particles. However, particle filters have a higher computational load, which increases further in the case of multiple targets and highly maneuvering targets. The efficient use of particle filters for multi-target tracking using the Independent Partition Particle Filter (IPPF), and for tracking highly maneuvering targets using the Multiple Model Particle Filter (MMPF), is also explored in this work. These techniques require only a smaller number of particles and help in reducing the computational cost. The performance of these techniques is also simulated and verified. A data association problem exists in multi-target tracking due to the lack of information at the observer about the proper association between the targets and the received measurements. The problem becomes more involved when the targets move close together and there are clutter and missed target detections at the observer. The Monte Carlo Joint Probabilistic Data Association Filter (MC-JPDAF) efficiently solves the data association problem in such situations. MC-JPDAF also incorporates multiple observers.
Its performance is simulated and verified. Due to the inability of the standard MC-JPDAF to track highly maneuvering targets, the Monte Carlo Multiple Model Joint Probabilistic Data Association Filter (MC-MMJPDAF), which combines the technique of the Multiple Model Particle Filter (MMPF) within the framework of the MC-JPDAF, has been proposed. The simulation results show the efficiency of the proposed method. The results from the simulation of particle filter based methods show that they handle maneuvering and multiple-target tracking; the methods have also been verified with some field data. \chapter{Algorithm for sampling indices from a distribution} \label{appendix:a} Suppose there are $N$ particles with indices from $1$ to $N$, i.e. $\{\mathbf{x}^{(1)},\mathbf{x}^{(2)},\mathbf{x}^{(3)},....,\mathbf{x}^{(N)}\}$, and it is required to sample $R$ particles from these given particles such that the distribution of the indices of the sampled particles follows a desired probability distribution $\rho(\cdotp)$; then either of the following two methods can be used. The desired distribution $\rho(\cdotp)$ is specified using a set of indices from $1$ to $M$ and their corresponding weights $\rho^{(1)}, \rho^{(2)}, \rho^{(3)}, . . ., \rho^{(M)}$. The desired distribution function $\rho(\cdotp)$ is usually a function of the initial particles themselves, such as their likelihood function or their cumulative distribution function. This technique is used in systematic resampling and weighted resampling of a set of particles. \section{$O(NR)$ algorithm} Let $X$ have a probability distribution function $F_X(x)$. Given a uniform random variable $Y$, the transformation $X\equiv F_X^{-1}(Y)$ will generate a random variable with probability distribution $F_X(x)$. This technique can be used to generate random variables with specified distributions from a uniform random variable. Hence this technique is used here to generate indices from $1$ to $M$ with distribution $\rho(\cdotp)$.
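The inverse-CDF principle above translates directly into code. A minimal sketch of Method 1 (0-based indices rather than the pseudocode's 1-based ones; the weights are assumed normalized, and `random` supplies the uniform draws):

```python
import random

def sample_indices(weights, R, rng=random):
    """Draw R indices from the distribution given by `weights` via
    inverse-CDF sampling: build the cumulative sums c, draw u ~ U[0,1],
    and return the first index m with c[m] >= u."""
    c = []
    total = 0.0
    for w in weights:
        total += w
        c.append(total)
    out = []
    for _ in range(R):
        u = rng.random()
        m = 0
        # linear scan of the cumulative sums (O(N) worst case per draw)
        while m < len(c) - 1 and c[m] < u:
            m += 1
        out.append(m)
    return out
```

Each of the $R$ draws scans the cumulative sums independently, which is the source of the $O(NR)$ worst-case cost of this method.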
This algorithm requires $R$ random number generations and $N$ comparisons in the worst case at every iteration. Hence the order of this algorithm is $O(NR)$. This algorithm has been derived from the algorithm described in \cite{4} for Regime Transition. The pseudo code of the algorithm is shown in Table~\ref{tab:Generate_Dist1}. The algorithm is illustrated in Fig.~\ref{fig:Distribution_sampling_M}. \begin{table}[H] \caption{Method 1: Generating indices from a given distribution} \centering \begin{tabular}{l} \hline \begin{minipage}{4in} \vskip 4pt $[\{j(n)\}_{n=1}^R]=$Generate indices$[\{\rho^{(n)}\}_{n=1}^{N}, R]$ \begin{itemize} \item $c(0)=0$ \item FOR $i=1:N$, \begin{itemize} \item $c(i)=c(i-1)+ \rho^{(i)}$ \end{itemize} \item END FOR \item FOR $n=1:R$, \begin{itemize} \item Draw $u_n\sim \mathcal{U} [0,1]$ \item $m=1$ \item WHILE $(c(m)<u_n)$ \begin{itemize} \item $m=m+1$ \end{itemize} \item END WHILE \item Set $j(n)=m$ \end{itemize} \item END FOR \end{itemize} \vskip 4pt \end{minipage} \\ \hline \end{tabular} \label{tab:Generate_Dist1} \end{table} \begin{figure}[H] \centering \includegraphics[scale=0.5]{BootstrapPF_fig/Distribution_sampling_M} \caption{Method 1: Generating indices from a given distribution} \label{fig:Distribution_sampling_M} \end{figure} \section{$O(\max(N,R))$ algorithm} This algorithm is based on the same principle of generating random variables with specified distributions from a uniform random variable, as explained in the previous section. This method is simple to implement and reduces the computational load. It requires $R+N$ comparisons and hence is $O(\max(N,R))$. This algorithm has been derived from the Systematic Resampling algorithm described in \cite{4} for removing sample degeneracy in particle filters. The pseudo code of the algorithm is shown in Table~\ref{tab:Generate_Dist2}. The algorithm is illustrated in Fig.~\ref{fig:Distribution_sampling_S}.
\begin{table}[H] \caption{Method 2: Generating indices from a given distribution} \centering \begin{tabular}{l} \hline \begin{minipage}{4in} \vskip 4pt $[\{j(n)\}_{n=1}^R]=$Generate indices$[\{\rho^{(n)}\}_{n=1}^{N}, R]$ \begin{itemize} \item $c(0)=0$ \item FOR $i=1:N$, \begin{itemize} \item $c(i)=c(i-1)+ \rho^{(i)}$ \end{itemize} \item END FOR \item Draw $u_1\sim \mathcal{U} [0, \frac{1}{R}]$ \item $m=1$ \item FOR $n=1:R$, \begin{itemize} \item $u_n=u_1+R^{-1}(n-1)$ \item WHILE $(c(m)<u_n)$ \begin{itemize} \item $m=m+1$ \end{itemize} \item END WHILE \item Set $j(n)=m$ \end{itemize} \item END FOR \end{itemize} \vskip 4pt \end{minipage} \\ \hline \end{tabular} \label{tab:Generate_Dist2} \end{table} \begin{figure}[H] \centering \includegraphics[scale=0.5]{BootstrapPF_fig/Distribution_sampling_S} \caption{Method 2: Generating indices from a given distribution} \label{fig:Distribution_sampling_S} \end{figure} \end{appendices} \chapter{Bayesian Estimation} \label{chap:Bayesian} The Bayesian approach to estimating the state $\mathbf{x}_k$ from the measurements $\mathbf{Z}_k$ is to calculate the posterior distribution of $\mathbf{x}_k$ conditioned on the measurements $\mathbf{Z}_k$. This conditional pdf is denoted as $p(\mathbf{x}_k|\mathbf{Z}_k)$. The estimation based on this posterior distribution is called Bayesian because it is constructed using Bayes rule: \begin{equation} p(\mathbf{x}_k|\mathbf{Z}_k)=\dfrac{p(\mathbf{Z}_k|\mathbf{x}_k)p(\mathbf{x}_k)}{p(\mathbf{Z}_k)} \end{equation} where $p(\mathbf{x}_k)$ is the prior target distribution, $p(\mathbf{Z}_k|\mathbf{x}_k)$ is the measurement likelihood (a measure of how likely the measurement is, given the state), and $p(\mathbf{Z}_k)$ is called the evidence, which is a normalizing factor. Once $p(\mathbf{x}_k|\mathbf{Z}_k)$ is estimated, we can compute statistical properties of the target estimate such as the mean, median, covariance, etc.
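The Bayes-rule computation above is simple to make concrete on a discrete state space, where the evidence is just the sum of the unnormalized products. A minimal sketch (a two-state toy example with made-up numbers, not from the text):

```python
def bayes_update(prior, likelihood):
    """Posterior p(x|z) proportional to p(z|x) * p(x),
    normalized by the evidence p(z) = sum over states."""
    unnorm = [l * p for l, p in zip(likelihood, prior)]
    evidence = sum(unnorm)
    return [u / evidence for u in unnorm]

# Toy example: uniform prior over two states, measurement favors state 0.
posterior = bayes_update([0.5, 0.5], [0.8, 0.2])  # -> [0.8, 0.2]
```

With a uniform prior, the posterior is simply the normalized likelihood, as the example shows.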
\section{Recursive Bayesian Estimation} The requirement is to recursively compute the posterior target density $p(\mathbf{x}_k|\mathbf{Z}_k)$, whose computation requires only the estimated target density at the previous time, $p(\mathbf{x}_{k-1}|\mathbf{Z}_{k-1})$, and the current measurement $\mathbf{z}_k$. No history of observations or estimates is required. The first measurement is obtained at $k=1$. Hence the initial density of the state ${\mathbf{x}_0}$ can be written as \begin{equation} p(\mathbf{x}_0)=p(\mathbf{x}_0|\mathbf{Z}_0) \end{equation} where $\mathbf{Z}_0$ is the empty set of measurements. The conditional pdf $p(\mathbf{x}_k|\mathbf{Z}_{k-1})$ can be written as \begin{align} p(\mathbf{x}_{k}|\mathbf{Z}_{k-1})& = \int p[(\mathbf{x}_k,\mathbf{x}_{k-1})|\mathbf{Z}_{k-1}]d\mathbf{x}_{k-1} \\ & = \int p(\mathbf{x}_k|\mathbf{x}_{k-1},\mathbf{Z}_{k-1})p(\mathbf{x}_{k-1}|\mathbf{Z}_{k-1})d\mathbf{x}_{k-1} \label{eqn:condpdf1} \end{align} But according to \eqref{eqn:markov1}, under the Markovian assumption the state $\mathbf{x}_k$ is determined only by $\mathbf{x}_{k-1}$ and $w_{k-1}$. Hence \eqref{eqn:condpdf1} can be written as \begin{equation} p(\mathbf{x}_{k}|\mathbf{Z}_{k-1})=\int p(\mathbf{x}_k|\mathbf{x}_{k-1})p(\mathbf{x}_{k-1}|\mathbf{Z}_{k-1})d\mathbf{x}_{k-1} \label{eqn:condpdf2} \end{equation} The pdf $p(\mathbf{x}_k|\mathbf{x}_{k-1})$ is referred to as the transitional density and is available from the system equation $f_k(\cdotp)$ and the process noise $w_k$. The pdf $p(\mathbf{x}_{k-1}|\mathbf{Z}_{k-1})$ is available at the initial time as $p(\mathbf{x}_0|\mathbf{Z}_0)$.
Then the posterior conditional pdf of $\mathbf{x}_k$, $p(\mathbf{x}_k|\mathbf{Z}_k)$, can be written as \begin{align} p(\mathbf{x}_{k}|\mathbf{Z}_{k})& = p(\mathbf{x}_{k}|\mathbf{z}_k,\mathbf{Z}_{k-1}) \\ & = \dfrac{p(\mathbf{x}_{k},\mathbf{z}_k,\mathbf{Z}_{k-1})}{p(\mathbf{z}_{k},\mathbf{Z}_{k-1})}\label{eqn:condpdf3} \\ & = \dfrac{p(\mathbf{z}_{k}|\mathbf{x}_k,\mathbf{Z}_{k-1})p(\mathbf{x}_k|\mathbf{Z}_{k-1})p(\mathbf{Z}_{k-1})}{p(\mathbf{z}_{k}|\mathbf{Z}_{k-1})p(\mathbf{Z}_{k-1})}\label{eqn:condpdf4} \\ & = \dfrac{p(\mathbf{z}_{k}|\mathbf{x}_k,\mathbf{Z}_{k-1})p(\mathbf{x}_k|\mathbf{Z}_{k-1})}{p(\mathbf{z}_{k}|\mathbf{Z}_{k-1})}\label{eqn:condpdf5} \\ & = \dfrac{p(\mathbf{z}_{k}|\mathbf{x}_k)p(\mathbf{x}_k|\mathbf{Z}_{k-1})}{p(\mathbf{z}_{k}|\mathbf{Z}_{k-1})} \label{eqn:condpdf6} \end{align} In \eqref{eqn:condpdf3} and \eqref{eqn:condpdf5}, Bayes rule is used, and in \eqref{eqn:condpdf6}, \eqref{eqn:markov1} is used. The pdf $p(\mathbf{z}_k|\mathbf{x}_k)$ can be obtained using the measurement equation $h(\cdotp)$. The pdf $p(\mathbf{x}_k|\mathbf{Z}_{k-1})$ is available from \eqref{eqn:condpdf2}. The pdf $p(\mathbf{z}_k|\mathbf{Z}_{k-1})$, which is a normalizing constant, may be obtained as follows. \begin{align} p(\mathbf{z}_{k}|\mathbf{Z}_{k-1})& = \int p(\mathbf{z}_k,\mathbf{x}_k|\mathbf{Z}_{k-1})d\mathbf{x}_{k} \\ & = \int p(\mathbf{z}_k|\mathbf{x}_k,\mathbf{Z}_{k-1})p(\mathbf{x}_k|\mathbf{Z}_{k-1})d\mathbf{x}_{k} \\ & = \int p(\mathbf{z}_k|\mathbf{x}_k)p(\mathbf{x}_k|\mathbf{Z}_{k-1})d\mathbf{x}_{k} \label{eqn:condpdf7} \end{align} The pdfs $p(\mathbf{z}_k|\mathbf{x}_k)$ and $p(\mathbf{x}_k|\mathbf{Z}_{k-1})$ in \eqref{eqn:condpdf7} are available as discussed previously. Hence all the pdfs on the right side of \eqref{eqn:condpdf6} are available, and the formal solution to the recursive Bayesian estimation problem can be summarized as in Table \ref{tab:Recursive_Bayesian} \cite{7,9,11}.
\begin{table}[h] \caption{Recursive Bayesian Estimator \cite{4}} \centering \begin{tabular}{l} \hline \begin{minipage}{5in} \vskip 4pt \begin{enumerate} \item For $k=0$, initialize $p(\mathbf{x}_0|\mathbf{Z}_{0})=p(\mathbf{x}_0)$ \item For $k>0$ \begin{itemize} \item Prediction step: Calculate the a priori pdf using \eqref{eqn:condpdf2}. \begin{equation} p(\mathbf{x}_{k}|\mathbf{Z}_{k-1})=\int p(\mathbf{x}_k|\mathbf{x}_{k-1})p(\mathbf{x}_{k-1}|\mathbf{Z}_{k-1})d\mathbf{x}_{k-1} \end{equation} \item Update step: Calculate the posterior pdf using \eqref{eqn:condpdf6}. \begin{equation} p(\mathbf{x}_{k}|\mathbf{Z}_{k})=\dfrac{p(\mathbf{z}_{k}|\mathbf{x}_k)p(\mathbf{x}_k|\mathbf{Z}_{k-1})}{p(\mathbf{z}_{k}|\mathbf{Z}_{k-1})} \end{equation} \end{itemize} \end{enumerate} \vskip 4pt \end{minipage} \\ \hline \end{tabular} \label{tab:Recursive_Bayesian} \end{table} The measurement $\mathbf{z}_k$ is used to update the prior density $p(\mathbf{x}_k|\mathbf{Z}_{k-1})$ to obtain the posterior density. Thus, in principle, the posterior pdf $p(\mathbf{x}_k|\mathbf{Z}_{k})$ can be obtained recursively in two stages: prediction and update. In general the implementation of this conceptual solution is not practical, since it requires the storage of the entire pdf, which is an infinite-dimensional object. An analytical solution to these recursive equations cannot be determined in general because of the complex, high-dimensional integrals involved, and is known only for a few cases. For example, in the system described by \eqref{eqn:state_model} and \eqref{eqn:measurement_model}, if $f(\cdotp)$ and $h(\cdotp)$ are linear, the initial density $p(\mathbf{x}_0)$ is Gaussian, and the noise sequences $w_k$ and $v_k$ are zero-mean, mutually independent, additive and Gaussian, then the optimal Bayesian solution is the Kalman filter.
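For a finite state space the integrals in the two steps of Table \ref{tab:Recursive_Bayesian} become sums, and the recursion can be run directly. The transition matrix, likelihood values and measurement sequence below are invented purely for illustration:

```python
import numpy as np

# Hypothetical 2-state model.
P = np.array([[0.9, 0.1],   # P[i, j] = p(x_k = j | x_{k-1} = i)
              [0.2, 0.8]])

def likelihood(z):
    """p(z_k | x_k) for each of the two states (made-up measurement model)."""
    return np.array([0.8, 0.3]) if z == 0 else np.array([0.2, 0.7])

belief = np.array([0.5, 0.5])            # p(x_0 | Z_0) = p(x_0)
for z in [0, 0, 1]:                      # measurement sequence
    predicted = belief @ P               # prediction step: sum over x_{k-1}
    unnorm = likelihood(z) * predicted   # numerator of the update step
    belief = unnorm / unnorm.sum()       # division by p(z_k | Z_{k-1})
```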
The exact implementation of the Kalman filter is feasible since its posterior density $p(\mathbf{x}_k|\mathbf{Z}_k)$ also turns out to be Gaussian and can be completely represented by its mean and covariance, which are finite dimensional. Hence the storage of the posterior density $p(\mathbf{x}_k|\mathbf{Z}_{k})$ becomes convenient, and the recursive Bayesian solution reduces to the recursive estimation of the mean and covariance of the posterior density $p(\mathbf{x}_k|\mathbf{Z}_{k})$. Thus the Kalman filter is the optimal filter for the type of system mentioned above, and no other filter does better. In practice $f(\cdotp)$ and $h(\cdotp)$ may be nonlinear, and $p(\mathbf{x}_0)$, $w_k$ and $v_k$ may be non-Gaussian. In such cases the posterior densities may be multimodal and/or non-Gaussian, and approximations or suboptimal Bayesian solutions are required for a practical realization. Analytical and numerical approximation methods for the implementation of the recursive Bayesian solution include the extended Kalman filter, the unscented Kalman filter, the particle filter, etc. The particle filter is explored in the subsequent chapters. \section{Summary} The Bayesian estimation problem can be conceptually solved recursively in two steps: prediction and update. The Kalman filter is the optimal filter when the target state dynamics and the measurement equation are linear, all the random elements in the model are additive Gaussian, and the process and measurement noise are zero mean. In general, an exact implementation of the recursive Bayesian solution is not possible, and hence analytical and numerical approximation techniques are required. A numerical approximation technique called the particle filter is explored, for target tracking, in the subsequent chapters. \chapter{Particle Filtering} \label{chap:Particle_filtering} The particle filter is a class of sequential Monte Carlo methods for solving recursive Bayesian filtering problems.
Monte Carlo methods are computational algorithms based on repeated random sampling. They define a domain of possible inputs, generate random input samples from a probability distribution over this domain, perform the computation on these input samples to obtain output samples, and infer the output probability distribution from these output samples \cite{11}. The particle filter was initially developed for target tracking by N.~J. Gordon et al. \cite{7}. Significant modifications have since been made by A.~Doucet et al. \cite{8,10,13} and B.~Ristic et al. \cite{4}, and are explored in this chapter. The particle filter does not require the linearity and Gaussianity assumptions of the traditional Kalman filter (KF), extended Kalman filter (EKF), etc. Hence it is capable of handling non-Gaussian noise distributions and nonlinearities in the target's measurements as well as in the target dynamics. The posterior distribution of the state of the system at every instant $k$ is represented by a set of $N$ random samples $\mathbf{x}_k^{(i)}$, called particles, with associated weights $w_k^{(i)}$. The weights are normalized such that $\sum_{i=1}^N w_k^{(i)}=1$. This particle set $\{\mathbf{x}_k^{(i)},w_k^{(i)}\}_{i=1}^N$ can then be regarded as representing a probability distribution \begin{equation} p_N(\mathbf{x}_k)=\sum_{i=1}^N w_k^{(i)}\delta(\mathbf{x}_k-\mathbf{x}_k^{(i)}) \end{equation} where $\delta(\cdotp)$ is the Dirac $\delta$-function. The particle set represents the probability distribution $p(\mathbf{x})$ if $p_N\rightarrow p$ as $N\rightarrow \infty$. Thus we have a discrete weighted approximation of a probability distribution function, and the properties of the distribution $p(\mathbf{x})$ can be approximately calculated using these samples.
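Given such a weighted particle set, moments of the underlying distribution are approximated by weighted sums. A short numeric sketch (synthetic particles, NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000
particles = rng.normal(2.0, 1.0, size=N)   # hypothetical samples x_k^(i)
weights = np.full(N, 1.0 / N)              # normalized weights w_k^(i)

mean = np.sum(weights * particles)               # E[x] ~ sum_i w^(i) x^(i)
var = np.sum(weights * (particles - mean) ** 2)  # Var[x] by the same device
# Both approach the true values (2 and 1) as N grows.
```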
\section{Monte Carlo Approach} Suppose $\pi(\mathbf{x})$ is a probability density function with $\mathbf{x}\in \mathbb{R}^{n_x}$ satisfying \begin{eqnarray} \pi(\mathbf{x}) & \geq & 0 \\ \int\pi(\mathbf{x})d\mathbf{x} & = & 1 \end{eqnarray} where $n_x$ is the dimension of the state vector and $\mathbb{R}$ is the set of real numbers. If $N\gg1$ independent random samples $\{\mathbf{x}^{(i)};i=1,\ldots,N\}$ are available from the distribution $\pi(\mathbf{x})$, then its discrete approximation is given by \begin{equation} p_N(\mathbf{x})=\dfrac{1}{N}\sum_{i=1}^N \delta(\mathbf{x}-\mathbf{x}^{(i)}) \end{equation} Any integral of a function over the probability density $\pi(\mathbf{x})$ can then be approximated by an equivalent summation over the samples of $p_N(\mathbf{x})$, which converges to the true value as $N\rightarrow \infty$. Suppose it is required to evaluate a multidimensional integral \begin{equation} I=\int g(\mathbf{x})d\mathbf{x} \end{equation} The Monte Carlo approach is to factorize $g(\mathbf{x})=f(\mathbf{x})p(\mathbf{x})$ such that $p(\mathbf{x})\geq0$ and $\int p(\mathbf{x})d\mathbf{x}=1$, where $p(\mathbf{x})$ is interpreted as a probability distribution from which samples can be drawn easily and $f(\mathbf{x})$ is a function of $\mathbf{x}$. Then the integral can be written as \begin{eqnarray} I & = & \int f(\mathbf{x})p(\mathbf{x})d\mathbf{x} \label{eqn:condpdf8} \\ & = & E_{p(\mathbf{x})}[f(\mathbf{x})] \end{eqnarray} where $E_{p(\mathbf{x})}[\cdotp]$ is the expectation with respect to the distribution $p(\mathbf{x})$. Hence the integral $I$ is the expectation of $f(\mathbf{x})$ with respect to the distribution $p(\mathbf{x})$.
The Monte Carlo estimate of $I$ can then be obtained by generating $N$ samples $\{\mathbf{x}^{(i)}\}_{i=1}^N$ from the distribution $p(\mathbf{x})$ and calculating the summation \begin{eqnarray} I_N & = & \int f(\mathbf{x})p_N(\mathbf{x})d\mathbf{x} \\ & = & \dfrac{1}{N}\sum_{i=1}^N f(\mathbf{x}^{(i)}) \label{eqn:condpdf9}\\ & \approx & I \end{eqnarray} This estimate is unbiased and converges to the true value $I$ as $N\rightarrow \infty$. If the distribution $p(\mathbf{x})$ is standard and has a closed analytical form, then the generation of random samples from it is possible. But since in target tracking the posterior distribution may be multivariate and non-standard, it is not always possible to sample efficiently from it. There are two problems in the basic Monte Carlo method, as mentioned in \cite{10}. \textit{Problem 1:} Sampling from the distribution $p(\mathbf{x})$ is not possible if it is a complex, high-dimensional probability distribution. \textit{Problem 2:} The computational complexity of sampling from the target distribution $p(\mathbf{X}_k)$, where $\mathbf{X}_k=\{\mathbf{x}_j;j=0,\ldots,k\}$, increases at least linearly with the number of variables $k$. \section{Importance Sampling} Importance sampling helps in addressing \textit{Problem 1} discussed above. Suppose we are interested in generating samples from a distribution $p(\mathbf{x})$ that is difficult to sample from. Importance sampling indirectly generates samples from a suitable distribution $q(\mathbf{x})$ that is easy to sample from, and modifies these samples by appropriate weighting so that they represent samples from the distribution $p(\mathbf{x})$. Thus importance sampling makes the calculation of $E_{p(\mathbf{x})}[f(\mathbf{x})]$ feasible. The pdf $q(\mathbf{x})$ is referred to as the proposal or importance density.
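The weighting idea can be checked numerically. In this sketch (all distributions chosen arbitrarily) we estimate $E_{p}[x^2]=1$ for a standard normal $p(\mathbf{x})$ using samples drawn from a wider normal proposal $q(\mathbf{x})$:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
N = 20000
samples = rng.normal(0.0, 2.0, size=N)   # draw from the proposal q = N(0, 2^2)
w_tilde = normal_pdf(samples, 0.0, 1.0) / normal_pdf(samples, 0.0, 2.0)  # p/q
w = w_tilde / w_tilde.sum()              # normalized importance weights

estimate = np.sum(w * samples ** 2)      # approximates E_p[x^2], true value 1
```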
The integral in \eqref{eqn:condpdf8} can be modified as \begin{align} \label{eq:PF_integral} I & = \int f(\mathbf{x})p(\mathbf{x})d\mathbf{x} \\ & = \int f(\mathbf{x})\dfrac{p(\mathbf{x})}{q(\mathbf{x})}q(\mathbf{x})d\mathbf{x} \\ & = E_{q(\mathbf{x})}\left[f(\mathbf{x})\dfrac{p(\mathbf{x})}{q(\mathbf{x})}\right] \end{align} provided $p(\mathbf{x})>0 \Rightarrow q(\mathbf{x})>0$ for all $\mathbf{x}\in\mathbb{R}^{n_x}$ and $p(\mathbf{x})/q(\mathbf{x})$ has an upper bound. Then, according to \eqref{eqn:condpdf9}, the Monte Carlo estimate of $I$ can be obtained by generating samples $\mathbf{x}^{(i)};i=1,\ldots,N; N\gg1$ from the distribution $q(\mathbf{x})$ and evaluating \begin{align} I_N & = \dfrac{1}{N}\sum_{i=1}^N f(\mathbf{x}^{(i)})\dfrac{p(\mathbf{x}^{(i)})}{q(\mathbf{x}^{(i)})} \\ & = \dfrac{1}{N}\sum_{i=1}^N f(\mathbf{x}^{(i)})\tilde{w}(\mathbf{x}^{(i)}) \\ \tilde{w}(\mathbf{x}^{(i)}) & = \dfrac{p(\mathbf{x}^{(i)})}{q(\mathbf{x}^{(i)})}; \;\qquad i=1,\ldots,N \label{eqn:condpdf10} \end{align} where the $\tilde{w}(\mathbf{x}^{(i)})$ are called the importance weights. The weights are then normalized so that they form a probability distribution: \begin{eqnarray} w^{(i)}=\dfrac{{\tilde{w}}^{(i)}}{\sum_{i=1}^N {\tilde{w}}^{(i)}} \label{eqn:condpdf11} \end{eqnarray} Thus the random samples from the distribution $p(\mathbf{x})$ are equivalent to the random samples from the distribution $q(\mathbf{x})$ with the associated weights $w^{(i)}$ given in \eqref{eqn:condpdf11}. The samples $\{\mathbf{x}^{(i)}\}_{i=1}^N$ from $q(\mathbf{x})$ with weights $\{w^{(i)}\}_{i=1}^N$ represent the probability distribution $p(\mathbf{x})$ as $N\rightarrow \infty$ and can be used to estimate the integral $I$. \section{Sequential Importance Sampling} Sequential importance sampling helps in addressing \textit{Problem 2} described above. Unlike plain importance sampling, it requires only a fixed computational complexity at every time step.
It is also known as bootstrap filtering, particle filtering or the condensation algorithm. It is the sequential version of the Bayesian filter using importance sampling. Consider a joint posterior distribution $p(\mathbf{X}_k|\mathbf{Z}_k)$, where $\mathbf{X}_k=\{\mathbf{x}_j;j=0,\ldots,k\}$ is the sequence of all target states up to time $k$ and $\mathbf{Z}_k=\{\mathbf{z}_j;j=0,\ldots,k\}$ is the sequence of all target measurements up to time $k$. Let $\{\mathbf{X}_k^{(i)},w_k^{(i)}\}_{i=1}^N$ be the particles such that \begin{equation} p(\mathbf{X}_k|\mathbf{Z}_k) \approx \sum_{i=1}^N w_k^{(i)} \delta (\mathbf{X}_k-\mathbf{X}_k^{(i)}) \end{equation} If the importance density $q(\mathbf{X}_k|\mathbf{Z}_k)$ is used to generate the particles $\{\mathbf{X}_k^{(i)}\}_{i=1}^N$, then the corresponding weights, by analogy with \eqref{eqn:condpdf10}, can be written as \begin{equation} w_k^{(i)}\varpropto \dfrac{p(\mathbf{X}_k^{(i)}|\mathbf{Z}_k)}{q(\mathbf{X}_k^{(i)}|\mathbf{Z}_k)}; \;\qquad i=1,\ldots,N \label{eqn:condpdf12} \end{equation} We can express the importance function using Bayes rule as \begin{eqnarray} q(\mathbf{X}_k|\mathbf{Z}_k)& = & q(\mathbf{x}_k,\mathbf{X}_{k-1}|\mathbf{Z}_k)\\ & = & q(\mathbf{x}_k|\mathbf{X}_{k-1},\mathbf{Z}_k)q(\mathbf{X}_{k-1}|\mathbf{Z}_k) \label{eqn:condpdf13}\\ & = & q(\mathbf{x}_k|\mathbf{X}_{k-1},\mathbf{Z}_k)q(\mathbf{x}_{k-1}|\mathbf{X}_{k-2},\mathbf{Z}_k)\cdots q(\mathbf{x}_{1}|\mathbf{X}_{0},\mathbf{Z}_k)q(\mathbf{X}_{0}|\mathbf{Z}_k) \label{eqn:condpdf14}\\ & = & q(\mathbf{x}_{0}|\mathbf{Z}_k)\Pi_{n=1}^k q(\mathbf{x}_n|\mathbf{X}_{n-1},\mathbf{Z}_k) \end{eqnarray} In order to make the importance sampling recursive at every instant $k$ without modifying the previously simulated trajectories $\{\mathbf{X}_{k-1}^{(i)}\}_{i=1}^N$, the new set of samples at time $k$, $\mathbf{X}_{k}^{(i)}\sim q(\mathbf{X}_k|\mathbf{Z}_k)$, must be obtained using the previous set of samples $\mathbf{X}_{k-1}^{(i)}\sim q(\mathbf{X}_{k-1}|\mathbf{Z}_{k-1})$, and the importance density must be chosen such that
$q(\mathbf{X}_{k-1}|\mathbf{Z}_{k})=q(\mathbf{X}_{k-1}|\mathbf{Z}_{k-1})$. Then \eqref{eqn:condpdf13} can be written as \begin{eqnarray} q(\mathbf{X}_k|\mathbf{Z}_k) & = & q(\mathbf{x}_k|\mathbf{X}_{k-1},\mathbf{Z}_k)q(\mathbf{X}_{k-1}|\mathbf{Z}_{k-1}) \label{eqn:condpdf15}\\ & = & q(\mathbf{x}_{0}|\mathbf{Z}_0)\Pi_{n=1}^k q(\mathbf{x}_n|\mathbf{X}_{n-1},\mathbf{Z}_n) \end{eqnarray} Thus the importance density at $k$ can be expressed in terms of the importance density at $k-1$, so that the new samples $\mathbf{X}_{k}^{(i)} \sim q(\mathbf{X}_k|\mathbf{Z}_k)$ can be obtained by augmenting each previous sample $\mathbf{X}_{k-1}^{(i)} \sim q(\mathbf{X}_{k-1}|\mathbf{Z}_{k-1})$ with the new state $\mathbf{x}_k^{(i)} \sim q(\mathbf{x}_k|\mathbf{X}_{k-1},\mathbf{Z}_{k})$. These particles, along with their new importance weights, approximate the posterior distribution $p(\mathbf{X}_k|\mathbf{Z}_k)$ as $N\rightarrow \infty$. In order to calculate the new importance weights for the above samples recursively, the pdf $p(\mathbf{X}_k|\mathbf{Z}_k)$ can be written using \eqref{eqn:condpdf5} as \cite{4}
\begin{eqnarray} p(\mathbf{X}_{k}|\mathbf{Z}_{k}) & = & \dfrac{p(\mathbf{z}_{k}|\mathbf{X}_k,\mathbf{Z}_{k-1})p(\mathbf{X}_k|\mathbf{Z}_{k-1})}{p(\mathbf{z}_{k}|\mathbf{Z}_{k-1})} \\ & = & \dfrac{p(\mathbf{z}_{k}|\mathbf{X}_k,\mathbf{Z}_{k-1})p(\mathbf{x}_k|\mathbf{X}_{k-1},\mathbf{Z}_{k-1})p(\mathbf{X}_{k-1}|\mathbf{Z}_{k-1})}{p(\mathbf{z}_{k}|\mathbf{Z}_{k-1})} \label{eqn:condpdf16} \end{eqnarray} Using the assumption in \eqref{eqn:markov1}, \eqref{eqn:condpdf16} can be written as \begin{eqnarray} p(\mathbf{X}_{k}|\mathbf{Z}_{k})& = & \dfrac{p(\mathbf{z}_{k}|\mathbf{x}_k)p(\mathbf{x}_k|\mathbf{x}_{k-1})}{p(\mathbf{z}_{k}|\mathbf{Z}_{k-1})}p(\mathbf{X}_{k-1}|\mathbf{Z}_{k-1})\\ & \varpropto & p(\mathbf{z}_k|\mathbf{x}_k)p(\mathbf{x}_k|\mathbf{x}_{k-1})p(\mathbf{X}_{k-1}|\mathbf{Z}_{k-1}) \label{eqn:condpdf17} \end{eqnarray} The proportionality follows because $p(\mathbf{z}_{k}|\mathbf{Z}_{k-1})$ is a normalizing constant. Using \eqref{eqn:condpdf17} and \eqref{eqn:condpdf15}, \eqref{eqn:condpdf12} can be rewritten as \begin{eqnarray} w_k^{(i)} & \varpropto & \dfrac{p(\mathbf{z}_k|\mathbf{x}_k^{(i)})p(\mathbf{x}_k^{(i)}|\mathbf{x}_{k-1}^{(i)})}{q(\mathbf{x}_k^{(i)}|\mathbf{X}_{k-1}^{(i)},\mathbf{Z}_{k})}\dfrac{p(\mathbf{X}_{k-1}^{(i)}|\mathbf{Z}_{k-1})}{q(\mathbf{X}_{k-1}^{(i)}|\mathbf{Z}_{k-1})} \\ & = & w_{k-1}^{(i)}\dfrac{p(\mathbf{z}_k|\mathbf{x}_k^{(i)})p(\mathbf{x}_k^{(i)}|\mathbf{x}_{k-1}^{(i)})}{q(\mathbf{x}_k^{(i)}|\mathbf{X}_{k-1}^{(i)},\mathbf{Z}_{k})} \end{eqnarray} If the importance density also satisfies $q(\mathbf{x}_k|\mathbf{X}_{k-1},\mathbf{Z}_k)=q(\mathbf{x}_k|\mathbf{x}_{k-1},\mathbf{z}_k)$, then the importance weight can be calculated recursively as \begin{eqnarray} \label{eq:imp_weights} w_k^{(i)}\varpropto w_{k-1}^{(i)}\dfrac{p(\mathbf{z}_k|\mathbf{x}_k^{(i)})p(\mathbf{x}_k^{(i)}|\mathbf{x}_{k-1}^{(i)})}{q(\mathbf{x}_k^{(i)}|\mathbf{x}_{k-1}^{(i)},\mathbf{z}_{k})} \label{eqn:condpdf18} \end{eqnarray} Thus sequential importance sampling filter 
consists of the recursive propagation of the particles $\mathbf{x}_k^{(i)}$ according to \eqref{eqn:condpdf15} and the update of the importance weights $w_k^{(i)}$ according to \eqref{eqn:condpdf18}. Hence, in order to obtain the particles at instant $k$, only the past particles $\{\mathbf{x}_{k-1}^{(i)},w_{k-1}^{(i)}\}_{i=1}^N$ and the measurement $\mathbf{z}_k$ are required; the past trajectories $\mathbf{X}_{k-2}^{(i)}$ and measurements $\mathbf{Z}_{k-1}$ can be discarded, so only a fixed computational complexity is needed. Thus it addresses \textit{Problem 2} discussed previously, and the posterior filtered density $p(\mathbf{x}_k|\mathbf{Z}_k)$ can be calculated recursively. The pseudo-code for the sequential importance sampling (SIS) filter is repeated in Table~\ref{tab:SIS} from \cite{8}. \begin{table}[t] \caption{Sequential Importance Sampling (SIS) \cite{8}} \centering \begin{tabular}{l} \hline \begin{minipage}{5in} \vskip 4pt \begin{enumerate} \item For $k=0$, \begin{itemize} \item For $i=1,\ldots,N$: Initialize \begin{itemize} \item Sample $\mathbf{x}_0^{(i)}\sim q(\mathbf{x}_0|\mathbf{z}_0)$ \item Evaluate the unnormalized importance weights \begin{eqnarray} {\tilde{w}}_0^{(i)}& = & \dfrac{p(\mathbf{z}_0|\mathbf{x}_0^{(i)})p(\mathbf{x}_0^{(i)})}{q(\mathbf{x}_0^{(i)}|\mathbf{z}_{0})} \end{eqnarray} \end{itemize} \item For $i=1,\ldots,N$: \begin{itemize} \item Normalize the importance weights \begin{eqnarray} w^{(i)}& = & \dfrac{{\tilde{w}}_0^{(i)}}{\sum_{i=1}^N {\tilde{w}}_0^{(i)}} \end{eqnarray} \end{itemize} \end{itemize} \item For $k>0$ \begin{itemize} \item For $i=1,\ldots,N$: \begin{itemize} \item Sample $\mathbf{x}_k^{(i)} \sim q(\mathbf{x}_k|\mathbf{X}_{k-1},\mathbf{Z}_{k})$ \item Evaluate the unnormalized importance weights \begin{eqnarray} {\tilde{w}}_k^{(i)} & \varpropto & w_{k-1}^{(i)}\dfrac{p(\mathbf{z}_k|\mathbf{x}_k^{(i)})p(\mathbf{x}_k^{(i)}|\mathbf{x}_{k-1}^{(i)})}{q(\mathbf{x}_k^{(i)}|\mathbf{x}_{k-1}^{(i)},\mathbf{z}_{k})} \end{eqnarray} \end{itemize}
\item For $i=1,\ldots,N$: \begin{itemize} \item Normalize the importance weights \begin{eqnarray} w^{(i)}& = & \dfrac{{\tilde{w}}^{(i)}}{\sum_{i=1}^N {\tilde{w}}^{(i)}} \end{eqnarray} \end{itemize} \end{itemize} \end{enumerate} \vskip 4pt \end{minipage} \\ \hline \end{tabular} \label{tab:SIS} \end{table} The weight update and the proposal for each particle in the sequential importance sampling filter can be calculated in parallel. Hence the availability of parallel computational hardware such as graphics processing units (GPUs) and FPGAs facilitates the implementation of the SIS filter without losing time efficiency. \section{Implementation Issues} \subsection{Degeneracy} According to \cite{8}, the variance of the importance weights increases over time if the importance density is of the form \eqref{eqn:condpdf15}. Hence after a certain number of recursive steps the weights degenerate, such that most particles have negligible weight. A large computational effort is then wasted on updating particles whose contribution to the posterior estimate is negligible, and only a few high-weight particles contribute effectively to the posterior distribution $p(\mathbf{x}_k|\mathbf{Z}_k)$. The level of degeneracy can be estimated from the effective sample size $N_{eff}$: \begin{eqnarray} N_{eff}& = & \dfrac{1}{\sum_{i=1}^N (w_k^{(i)})^2} \end{eqnarray} The two extreme cases are \begin{enumerate} \item If the weights are uniform, $w_k^{(i)}=\frac{1}{N}$, for $i=1,...,N$, then $N_{eff}=N$. \item If the weights are such that $w_k^{(j)}=1$ and $w_k^{(i)}=0$ for $i\neq j$, then $N_{eff}=1$. \end{enumerate} For all other intermediate cases $1<N_{eff}<N$. Thus higher degeneracy implies a smaller $N_{eff}$ and vice versa.\\\\ \textit{Solution: Resampling}\\ Resampling is a technique to reduce degeneracy. If degeneracy is observed, i.e., $N_{eff}$ falls below some threshold $N_{thr}$, then resampling is performed.
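The effective sample size is a one-line computation; the two extreme cases above give a quick check (a minimal sketch, NumPy assumed):

```python
import numpy as np

def effective_sample_size(w):
    """N_eff = 1 / sum_i (w_i)^2 for normalized weights w."""
    w = np.asarray(w)
    return 1.0 / np.sum(w ** 2)

uniform = np.full(100, 1.0 / 100)   # all particles equally weighted
degenerate = np.zeros(100)          # all mass on a single particle
degenerate[0] = 1.0
```

Here \texttt{effective\_sample\_size(uniform)} gives $100$ and \texttt{effective\_sample\_size(degenerate)} gives $1$, matching the two cases listed above. When $N_{eff}$ drops below the threshold, resampling, described next, is triggered.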
It keeps the samples with significant weights and discards those with negligible weights. It replaces the old set of particles and weights with a new set, removing low-weight particles and replicating high-weight particles, and assigns uniform weights so that the resultant particles represent the posterior pdf in a better form for later iterations. Thus it transforms the set $\{\mathbf{x}_k^{(i)}, w_k^{(i)}\}_{i=1}^N$ into $\{\mathbf{x}_k^{(i)}, N^{-1}\}_{i=1}^N$ such that the final set represents the same distribution as the first. The concept of resampling is illustrated in Fig \ref{fig:resampling}.\\ \begin{figure}[t!] \centering \includegraphics[scale=0.5]{IPPF_fig/resampling} \caption{Resampling of a set of particles representing a distribution $P(\mathbf{x})$ is illustrated. The size of the particles represents their weight.} \label{fig:resampling} \end{figure} One implementation of resampling is multinomial resampling \cite{4,11}, which involves generating uniformly distributed random samples in the range $(0,1)$ and using them to obtain samples from the required target posterior density by inverse transformation. It has three main steps. First, it generates independent uniform random samples $u_j\sim \mathcal{U}[0,1]$ for $j=1,\ldots,N$. Second, it accumulates the weights $w_k^{(i)}$ into a sum until it just exceeds $u_j$: \begin{equation} \sum_{i=1}^{m-1} w_k^{(i)} <u_j\leq \sum_{i=1}^{m} w_k^{(i)} \end{equation} Hence it projects $u_j$ onto the cumulative sum of the weights $w_k^{(i)}$ as shown in Fig \ref{fig:Mresampling}. Third, the new particle ${\tilde{\mathbf{x}}}_k^{(j)}$ is set equal to the old particle $\mathbf{x}_k^{(m)}$ with weight $1/N$, and this is repeated until $N$ samples are obtained. Particles with large weights have a higher chance of being selected and multiplied. The pseudo code is given in Table \ref{tab:Mresampling}. \begin{figure}[t!]
\centering \includegraphics[scale=0.4]{BootstrapPF_fig/Multiresampling} \caption{Multinomial Resampling: The high-weight particles, such as the particles with indices $2$, $4$, etc., are selected more often.} \label{fig:Mresampling} \end{figure} \begin{table}[p!] \caption{Multinomial Resampling \cite{4,11}} \centering \begin{tabular}{l} \hline \begin{minipage}{4in} \vskip 4pt $[\{\tilde{\mathbf{x}}_k^{(n)},\tilde{w}_k^{(n)}\}_{n=1}^{N}]=$RESAMPLE$[\{\mathbf{x}_k^{(i)}, w_k^{(i)}\}_{i=1}^N]$ \begin{itemize} \item $c(0)=0$ \item FOR $i=1:N$, \begin{itemize} \item $c(i)=c(i-1)+ w_k^{(i)}$ \end{itemize} \item END FOR \item FOR $n=1:N$, \begin{itemize} \item Draw $u_n\sim \mathcal{U} [0,1]$ \item m=1 \item WHILE $(c(m)<u_n)$ \begin{itemize} \item$ m=m+1$ \end{itemize} \item END WHILE \item Set $\tilde{\mathbf{x}}_k^{(n)}=\mathbf{x}_k^{(m)}$ \item Set $\tilde{w}_k^{(n)}=N^{-1}$ \end{itemize} \item END FOR \end{itemize} \vskip 4pt \end{minipage} \\ \hline \end{tabular} \label{tab:Mresampling} \end{table} Another, slightly different, method is systematic resampling \cite{4,11}. It follows the same procedure as multinomial resampling except that stratified, rather than independent, uniform random variables are used: a uniform random number $u_1\sim \mathcal{U}[0,N^{-1}]$ is generated once and the rest are obtained by cumulatively incrementing $u_1$ by $1/N$; the inverse transformation is then performed, as shown in Fig \ref{fig:Sresampling}, to obtain the required target posterior distribution. The pseudo code is repeated in Table \ref{tab:Sresampling} from \cite{4}. \begin{table}[p!]
\caption{Systematic Resampling} \centering \begin{tabular}{l} \hline \begin{minipage}{4in} \vskip 4pt $[\{\tilde{\mathbf{x}}_k^{(n)},\tilde{w}_k^{(n)}\}_{n=1}^{N}]=$RESAMPLE$[\{\mathbf{x}_k^{(i)}, w_k^{(i)}\}_{i=1}^N]$ \begin{itemize} \item $c(0)=0$ \item FOR $i=1:N$, \begin{itemize} \item $c(i)=c(i-1)+ w_k^{(i)}$ \end{itemize} \item END FOR \item Draw the starting point $u_1\sim \mathcal{U} [0, \frac{1}{N}]$ \item m=1 \item FOR $n=1:N$, \begin{itemize} \item $u_n=u_1+N^{-1}(n-1)$ \item WHILE $(c(m)<u_n)$ \begin{itemize} \item$ m=m+1$ \end{itemize} \item END WHILE \item Set $\tilde{\mathbf{x}}_k^{(n)}=\mathbf{x}_k^{(m)}$ \item Set $\tilde{w}_k^{(n)}=N^{-1}$ \end{itemize} \item END FOR \end{itemize} \vskip 4pt \end{minipage} \\ \hline \end{tabular} \label{tab:Sresampling} \end{table} \begin{figure}[t!] \centering \includegraphics[scale=0.4]{BootstrapPF_fig/Sresampling} \caption{Systematic Resampling} \label{fig:Sresampling} \end{figure} Thus resampling involves $N$ draws from the initial particle set, using the particle weights as selection probabilities, and assigning each selected particle a weight of $w^{(i)}=\frac{1}{N}$ for $i=1,...,N$. This strategy of resampling combined with importance sampling is termed sampling importance resampling (SIR). Even though resampling helps to remove degeneracy, it introduces another issue, known as sample impoverishment, which is described next. The accuracy of any estimate of a function of the distribution decreases with resampling. Resampling also limits the opportunity to parallelize the propagation and update of the particles, since they have to be combined to compute the cumulative distribution required for resampling. Hence, in order to minimize the frequency of resampling, a proper proposal function has to be used so that there is significant overlap between the prior particles and the likelihood. Strategies for selecting a good proposal function are explained in section \ref{sec:imp_fn_selection}.
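The resampling transformation $\{\mathbf{x}_k^{(i)}, w_k^{(i)}\}_{i=1}^N \rightarrow \{\tilde{\mathbf{x}}_k^{(i)}, N^{-1}\}_{i=1}^N$ can be sketched compactly (our own NumPy version of the systematic scheme; \texttt{np.searchsorted} replaces the WHILE loop of the pseudo code):

```python
import numpy as np

def systematic_resample(particles, weights, rng):
    """Replace weighted particles by N equally weighted copies, replicating
    high-weight particles and dropping low-weight ones."""
    N = len(particles)
    c = np.cumsum(weights)                            # cumulative weights c(i)
    u = rng.uniform(0.0, 1.0 / N) + np.arange(N) / N  # stratified uniforms
    idx = np.searchsorted(c, u)                       # inverse-transform lookup
    return particles[idx], np.full(N, 1.0 / N)

rng = np.random.default_rng(3)
x = np.array([0.0, 1.0, 2.0, 3.0])
w = np.array([0.01, 0.01, 0.01, 0.97])   # one dominant particle
x_new, w_new = systematic_resample(x, w, rng)
# x_new consists mostly of copies of the dominant particle 3.0,
# each carrying weight 1/4.
```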
\subsection{Sample Impoverishment} When there is very little overlap between the prior and the likelihood, only a few particles have significant weight. A subsequent resampling causes a loss of diversity among the particles: particles with large weights are sampled many times, so the resultant sample contains many repeated, and hence fewer distinct, points. This is called sample impoverishment. After some iterations it leads to a situation in which all particles collapse to a single particle. \\\\ \textit{Solution: Roughening}\\ One method to counter sample impoverishment is to increase the number of particles $N$, but this increases the computational demand. Roughening is an efficient method proposed in \cite{7} to counter sample impoverishment. Here random noise $\Delta \mathbf{x}$ is added to each component of each particle after the resampling process such that: \begin{eqnarray} \mathbf{x}_k^{(i)}(m)& = & \mathbf{x}_k^{(i)}(m)+\Delta \mathbf{x}(m) \\ \Delta \mathbf{x} & \sim & \mathcal{N}(0,KMN^{-1/d}) \\ M(m)& = & \max_{i,j}|\mathbf{x}_{k}^{(i)}(m)-\mathbf{x}_{k}^{(j)}(m)|; \;\;\; m=1,...,d \label{eqn:roughening} \end{eqnarray} where $K$ is a scalar tuning parameter, $N$ is the number of particles, $d$ is the dimension of the state space, and $M$ is the vector containing the maximum difference between particle elements along each dimension before roughening. A high value of $K$ blurs the distribution, while a low value of $K$ creates clusters of points around the original samples. Hence $K$ is a compromise and has to be tuned; a value of $K=0.2$ was used in \cite{7}. The pseudo code for roughening is shown in Table~\ref{tab:Roughening}. \begin{table}[ht!]
\caption{Roughening} \centering \begin{tabular}{l} \hline \begin{minipage}{4in} \vskip 4pt $[\{\mathbf{x}_k^{(n)},w_k^{(n)}\}_{n=1}^{N}]=$ROUGHEN$[\{\mathbf{x}_k^{(i)},w_k^{(i)}\}_{i=1}^{N}]$ \begin{enumerate} \item For $m=1,...,d$ \begin{equation} M(m)=\max_{i,j}|\mathbf{x}_{k}^{(i)}(m)-\mathbf{x}_{k}^{(j)}(m)| \end{equation} \item For $i=1,...,N$ \begin{itemize} \item Calculate the random noise vector \begin{eqnarray} \Delta \mathbf{x} & \sim & \mathcal{N}(0,KMN^{-1/d}) \end{eqnarray} \item For $m=1,...,d$ \begin{eqnarray} \mathbf{x}_k^{(i)}(m)& = & \mathbf{x}_k^{(i)}(m)+\Delta \mathbf{x}(m) \end{eqnarray} \end{itemize} \end{enumerate} \vskip 4pt \end{minipage} \\ \hline \end{tabular} \label{tab:Roughening} \end{table} Other solutions for sample impoverishment include prior editing, Markov chain Monte Carlo resampling, the regularized particle filter, the auxiliary particle filter, etc. \section{Selection of Importance function} \label{sec:imp_fn_selection} A good selection of the importance density minimizes the frequency of resampling. Since an increase in the variance of the particle weights causes degeneracy, a better approach is to select the importance density that minimizes the variance of the importance weights, given the available information $\mathbf{X}_{k-1}$ and $\mathbf{Z}_k$. \subsection{Optimal Importance function} The best choice of importance density is the one that minimizes the variance of the weights.
According to \cite{8}, the optimal importance density, which minimizes the variance of the importance weights conditional upon the simulated trajectories $\mathbf{X}_{k-1}^{(i)}$ and the observations $\mathbf{Z}_k$, is given by \begin{eqnarray} q(\mathbf{x}_k|\mathbf{X}_{k-1}^{(i)},\mathbf{Z}_k)& = & p(\mathbf{x}_k|\mathbf{x}_{k-1}^{(i)},\mathbf{z}_k)\\ & = & \dfrac{p(\mathbf{x}_k,\mathbf{x}_{k-1}^{(i)},\mathbf{z}_k)}{p(\mathbf{x}_{k-1}^{(i)},\mathbf{z}_k)} \\ & = & \dfrac{p(\mathbf{z}_k|\mathbf{x}_k,\mathbf{x}_{k-1}^{(i)})p(\mathbf{x}_k|\mathbf{x}_{k-1}^{(i)})p(\mathbf{x}_{k-1}^{(i)})}{p(\mathbf{z}_k|\mathbf{x}_{k-1}^{(i)})p(\mathbf{x}_{k-1}^{(i)})} \\ & = & \dfrac{p(\mathbf{z}_k|\mathbf{x}_k,\mathbf{x}_{k-1}^{(i)})p(\mathbf{x}_k|\mathbf{x}_{k-1}^{(i)})}{p(\mathbf{z}_k|\mathbf{x}_{k-1}^{(i)})} \label{eqn:condpdf19} \end{eqnarray} Then the weight update equation for particles drawn from this optimal importance density is obtained using \eqref{eqn:condpdf18} and \eqref{eqn:condpdf19} as \begin{eqnarray} w_k^{(i)} & \varpropto & w_{k-1}^{(i)}p(\mathbf{z}_k|\mathbf{x}_{k-1}^{(i)}) \end{eqnarray} Another advantage of the optimal importance function is that the importance weight at instant $k$ does not depend on $\mathbf{x}_k^{(i)}$; hence the evaluation of the weight $w_k^{(i)}$ and the proposal of $\mathbf{x}_k^{(i)}$ can be parallelized. In order to use this optimal importance function, we should be able to sample particles from $p(\mathbf{x}_k|\mathbf{x}_{k-1}^{(i)},\mathbf{z}_k)$ and to evaluate \begin{eqnarray} p(\mathbf{z}_k|\mathbf{x}_{k-1}^{(i)}) & = & \int p(\mathbf{z}_k|\mathbf{x}_k)p(\mathbf{x}_k|\mathbf{x}_{k-1}^{(i)})d\mathbf{x}_k \end{eqnarray} at least up to a normalizing constant. 
But these exact calculations are possible only for some special cases, such as systems of the form \begin{eqnarray} \mathbf{x}_k & = & \mathbf{f}_{k-1}(\mathbf{x}_{k-1})+\mathbf{w}_{k-1} \label{eqn:condpdf20a}\\ \mathbf{z}_k & = & \mathbf{H}_{k}\mathbf{x}_{k}+\mathbf{v}_{k}\label{eqn:condpdf20b} \end{eqnarray} where $\mathbf{f}_{k-1}(\cdotp)$ can be a nonlinear function, $\mathbf{H}_k$ is a matrix, and $\mathbf{w}_k$ and $\mathbf{v}_k$ are mutually independent zero-mean white Gaussian noise processes with known covariances $Q_k$ and $R_k$ respectively. \subsection{Suboptimal Importance Functions} \subsubsection{Importance Function Obtained by Local Linearization} For systems of the form \eqref{eqn:condpdf21a} and \eqref{eqn:condpdf21b}, where both the system and measurement equations are nonlinear, a local linearization of the function $\mathbf{h}_{k}(\cdotp)$ is carried out, as in the extended Kalman filter, to obtain the linearized matrix $H_k$, so that the problem reduces to the system defined in \eqref{eqn:condpdf20a} and \eqref{eqn:condpdf20b}. \begin{eqnarray} \mathbf{x}_k & = & \mathbf{f}_{k-1}(\mathbf{x}_{k-1})+\mathbf{w}_{k-1} \label{eqn:condpdf21a}\\ \mathbf{z}_k & = & \mathbf{h}_{k}(\mathbf{x}_{k})+\mathbf{v}_{k} \label{eqn:condpdf21b}\\ H_k & = & \left.\dfrac{\partial \mathbf{h}_{k}(\mathbf{x}_{k})}{\partial \mathbf{x}_{k}}\right|_{\mathbf{x}_k=\mathbf{f}_{k-1}(\mathbf{x}_{k-1})} \end{eqnarray} \subsubsection{Prior Importance Function} One popular choice of importance density is the transitional prior itself. 
\begin{eqnarray} q(\mathbf{x}_k|\mathbf{x}_{k-1}^{(i)},\mathbf{z}_k) & = & p(\mathbf{x}_k|\mathbf{x}_{k-1}^{(i)})\label{eqn:condpdf22} \end{eqnarray} For a system with the state space representation of \eqref{eqn:condpdf20a} and \eqref{eqn:condpdf20b}, the prior becomes \begin{equation} p(\mathbf{x}_k|\mathbf{x}_{k-1}^{(i)})=\mathcal{N}(\mathbf{x}_k;\mathbf{f}_{k-1}(\mathbf{x}_{k-1}^{(i)}),Q_{k-1}) \end{equation} Using \eqref{eqn:condpdf18} and \eqref{eqn:condpdf22}, the weight update equation simplifies to \begin{equation} w_k^{(i)}\varpropto w_{k-1}^{(i)}p(\mathbf{z}_k|\mathbf{x}_{k}^{(i)}) \label{eqn:condpdf23} \end{equation} This method has the advantage that the importance weights are easily calculated and the importance density is easily sampled. But it is less efficient, since the particles are proposed without knowledge of the observation, so the overlap between the prior and the likelihood may be small. \section{Generic Particle Filters} The pseudo code for a generic particle filter which incorporates resampling and roughening is shown in Table~\ref{tab:GPF} \cite{4,8,11}. A graphical representation of a PF with $N=22$ samples and using the transitional prior as the importance density is shown in Fig.~\ref{fig:BootstrapPF}. At the top we have the target distribution $p(\mathbf{x}_k|\mathbf{z}_k)$, which is approximated using the particles $\{\mathbf{x}_k^{(i)},w_k^{(i)}\}_{i=1}^{N}$. If $N_{eff}<N_{thr}$, resampling is executed on these particles to obtain uniform weight particles $\{\mathbf{x}_k^{(i)},N^{-1}\}_{i=1}^{N}$, which still approximate the target distribution $p(\mathbf{x}_k|\mathbf{z}_k)$. Resampling is followed by roughening to modify duplicate particles. The resultant particles are used for prediction via the transitional prior to get particles $\{\mathbf{x}_{k+1}^{(i)},N^{-1}\}_{i=1}^{N}$ that approximate the density $p(\mathbf{x}_{k+1}|\mathbf{z}_k)$. 
Next, the weight update is carried out using the likelihood $p(\mathbf{z}_{k+1}|\mathbf{x}_{k+1})$ to obtain particles $\{\mathbf{x}_{k+1}^{(i)},w_{k+1}^{(i)}\}_{i=1}^{N}$ that approximate the density $p(\mathbf{x}_{k+1}|\mathbf{z}_{k+1})$. \begin{figure}[t!] \centering \includegraphics[scale=0.5]{BootstrapPF_fig/BootstrapPF} \caption{A single cycle of a particle filter with $N=22$ and the transitional prior as the importance density} \label{fig:BootstrapPF} \end{figure} \begin{table}[ht!] \caption{Generic Particle Filter \cite{4,11}} \centering \resizebox{!}{4.5in} { \begin{tabular}{l} \hline \begin{minipage}{6in} \vskip 4pt \begin{enumerate} \item For $k=0$, \begin{itemize} \item For $i=1,....,N$: Initialize \begin{itemize} \item Sample $\mathbf{x}_0^{(i)}\sim q(\mathbf{x}_0|\mathbf{z}_0)$ \item Evaluate the unnormalized importance weights \begin{eqnarray} {\tilde{w}}_0^{(i)}& = & \dfrac{p(\mathbf{z}_0|\mathbf{x}_0^{(i)})p(\mathbf{x}_0^{(i)})}{q(\mathbf{x}_0^{(i)}|\mathbf{z}_{0})} \end{eqnarray} \end{itemize} \item For $i=1,....,N$: \begin{itemize} \item Normalize the importance weights \begin{eqnarray} w_0^{(i)}& = & \dfrac{{\tilde{w}}_0^{(i)}}{\sum_{j=1}^N {\tilde{w}}_0^{(j)}} \end{eqnarray} \end{itemize} \end{itemize} \item For $k>0$ \begin{itemize} \item For $i=1,....,N$: \begin{itemize} \item Sample $\mathbf{x}_k^{(i)} \sim q(\mathbf{x}_k|\mathbf{x}_{k-1}^{(i)},\mathbf{z}_{k})$ \item Evaluate the unnormalized importance weights \begin{eqnarray} {\tilde{w}}_k^{(i)} & \varpropto & w_{k-1}^{(i)}\dfrac{p(\mathbf{z}_k|\mathbf{x}_k^{(i)})p(\mathbf{x}_k^{(i)}|\mathbf{x}_{k-1}^{(i)})}{q(\mathbf{x}_k^{(i)}|\mathbf{x}_{k-1}^{(i)},\mathbf{z}_{k})} \end{eqnarray} \end{itemize} \item For $i=1,....,N$: \begin{itemize} \item Normalize the importance weights \begin{eqnarray} w_k^{(i)}& = & \dfrac{{\tilde{w}}_k^{(i)}}{\sum_{j=1}^N {\tilde{w}}_k^{(j)}} \end{eqnarray} \end{itemize} \end{itemize} \item Calculate $N_{eff}$ \begin{eqnarray} N_{eff}& = & \dfrac{1}{\sum_{i=1}^N (w_k^{(i)})^2} \end{eqnarray} \item If $N_{eff}<N_{thr}$ \begin{itemize} \item Resample the particles using the algorithm in Table~\ref{tab:Sresampling} or Table~\ref{tab:Mresampling} \begin{eqnarray} [\{\mathbf{x}_k^{(n)},w_k^{(n)}\}_{n=1}^{N}] & = & RESAMPLE[\{\mathbf{x}_k^{(i)},w_k^{(i)}\}_{i=1}^{N}] \end{eqnarray} \item Roughen the particles using the algorithm in Table~\ref{tab:Roughening} \begin{eqnarray} [\{\mathbf{x}_k^{(n)},w_k^{(n)}\}_{n=1}^{N}] & = & ROUGHEN[\{\mathbf{x}_k^{(i)},w_k^{(i)}\}_{i=1}^{N}] \end{eqnarray} \end{itemize} \end{enumerate} \vskip 4pt \end{minipage} \\ \hline \end{tabular} } \label{tab:GPF} \end{table} \section{Bootstrap Filter} The bootstrap filter proposed in \cite{7} is also known as the sequential importance resampling (SIR) filter. It is a special case of the above generic particle filter: it uses the transitional prior as the importance density and performs resampling at every step. For this choice of importance density the weight update equation is given by \eqref{eqn:condpdf23}. Since resampling is done at every step, the resampled particles at the previous instant have weights $w_{k-1}^{(i)}=N^{-1}$ for all $i=1,...,N$. Hence the weight update equation reduces to \begin{equation} w_k^{(i)}\varpropto p(\mathbf{z}_k|\mathbf{x}_{k}^{(i)}) \end{equation} The bootstrap filter has the advantage that the importance weights are easily calculated and the importance density is easily sampled. The pseudocode for the bootstrap filter is shown in Table~\ref{tab:BootstrapPF}. \begin{table}[ht!] 
\caption{Bootstrap Particle Filter \cite{4,11}} \centering \begin{tabular}{l} \hline \begin{minipage}{6in} \vskip 4pt \begin{enumerate} \item For $k=0$, \begin{itemize} \item For $i=1,....,N$: Initialize \begin{itemize} \item Sample $\mathbf{x}_0^{(i)}\sim p(\mathbf{x}_0)$ \item Assign particle weights \begin{eqnarray} w_0^{(i)}& = & N^{-1} \end{eqnarray} \end{itemize} \end{itemize} \item For $k>0$ \begin{itemize} \item For $i=1,....,N$: \begin{itemize} \item Sample $\mathbf{x}_k^{(i)} \sim p(\mathbf{x}_k|\mathbf{x}_{k-1}^{(i)})$ \item Evaluate the unnormalized importance weights \begin{eqnarray} {\tilde{w}}_k^{(i)} & \varpropto & p(\mathbf{z}_k|\mathbf{x}_k^{(i)}) \end{eqnarray} \end{itemize} \item For $i=1,....,N$: \begin{itemize} \item Normalize the importance weights \begin{eqnarray} w_k^{(i)}& = & \dfrac{{\tilde{w}}_k^{(i)}}{\sum_{j=1}^N {\tilde{w}}_k^{(j)}} \end{eqnarray} \end{itemize} \item Resample the particles using the algorithm in Table~\ref{tab:Sresampling} or Table~\ref{tab:Mresampling} \begin{eqnarray} [\{\mathbf{x}_k^{(n)},w_k^{(n)}\}_{n=1}^{N}] & = & RESAMPLE[\{\mathbf{x}_k^{(i)},w_k^{(i)}\}_{i=1}^{N}] \end{eqnarray} \item Roughen the particles using the algorithm in Table~\ref{tab:Roughening} \begin{eqnarray} [\{\mathbf{x}_k^{(n)},w_k^{(n)}\}_{n=1}^{N}] & = & ROUGHEN[\{\mathbf{x}_k^{(i)},w_k^{(i)}\}_{i=1}^{N}] \end{eqnarray} \end{itemize} \end{enumerate} \vskip 4pt \end{minipage} \\ \hline \end{tabular} \label{tab:BootstrapPF} \end{table} \section{Other Particle Filters} Variations in the selection of the importance density and/or modifications of the resampling step have resulted in various versions of the particle filter, such as \begin{enumerate} \item Auxiliary SIR filter \item Regularized particle filter \item MCMC particle filter \item Multiple model particle filter (MMPF) \item Independent partition particle filter (IPPF) \end{enumerate} Of these particle filters, the IPPF and MMPF will be considered in later chapters. 
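Before turning to the simulations, the bootstrap filter described above can be illustrated in a few lines of code. The following is a minimal one-dimensional sketch, not the implementation used for the results below; the model functions \texttt{f} and \texttt{h} and the noise levels are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf_step(particles, z, f, h, q_std, r_std):
    """One cycle of the bootstrap (SIR) filter: propose from the
    transitional prior, weight by the likelihood, then resample."""
    N = len(particles)
    # Propose: sample from the transitional prior p(x_k | x_{k-1})
    particles = f(particles) + q_std * rng.standard_normal(N)
    # Weight update: w ∝ p(z_k | x_k), since weights are uniform
    # after the resampling of the previous step
    w = np.exp(-0.5 * ((z - h(particles)) / r_std) ** 2)
    w /= w.sum()
    # Point estimate before resampling (resampling only adds variance)
    x_hat = np.sum(w * particles)
    # Multinomial resampling back to uniform weights
    idx = rng.choice(N, size=N, p=w)
    return particles[idx], x_hat
```

A single call propagates a particle cloud one step and returns the resampled cloud together with the conditional-mean estimate; roughening could be applied to the returned particles exactly as in Table~\ref{tab:Roughening}.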
\section{Simulation Results} A target motion scenario and its measurements are simulated according to the given models, and the estimates obtained using the generic particle filter algorithm are compared with the true trajectories. For comparison, the extended Kalman filter is also applied to the same target tracking problem and the results are compared. We have a target which undergoes constant velocity and constant turn motions. The state vector consists of the position and velocity of the target, \begin{eqnarray} \mathbf{x}= \begin{bmatrix} x & v_{x} & y & v_{y} \end{bmatrix}^T \end{eqnarray} The initial true state of the target was $\mathbf{x}_{0}=\begin{bmatrix} 100&20&100&20 \end{bmatrix}^T$. From time $k=0s$ to $k=20s$, $k=61s$ to $k=70s$ and $k=91s$ to $k=100s$, the target has constant velocity motion. From $k=21s$ to $k=60s$ and $k=71s$ to $k=90s$, it moves in a clockwise constant turn rate motion of $6 rad/s$. The measurement sensor is located at the origin. The target's range $r$ and bearing $\theta$ at time $k$ are available as the measurement $\mathbf{z}_k$. \begin{eqnarray} \mathbf{z}_k=h(\mathbf{x}_{k})+\mathbf{v}_k \\ \mathbf{v}_k\sim \mathcal{N}(0,Q_v) \end{eqnarray} where $\mathbf{v}_k$ is the measurement error and $h(\cdotp)$ is the measurement model. The measurement error $\mathbf{v}_k$ is uncorrelated and has a zero mean Gaussian distribution with covariance matrix $Q_v$. 
\begin{eqnarray} \mathbf{z}_k= \begin{bmatrix} r\\ \theta\\ \end{bmatrix} \end{eqnarray} \begin{eqnarray} \mathbf{Q_v}= \begin{bmatrix} \sigma_{r}^{2} &0 \\ 0 &\sigma_{\theta}^{2} \\ \end{bmatrix}= \begin{bmatrix} 10 &0 \\ 0 &1\\ \end{bmatrix} \end{eqnarray} The measurement model $h(\cdotp)$ for the target is given by: \begin{eqnarray} h(\mathbf{x}_{k})= \begin{bmatrix} \sqrt{x^2_k+y^2_k}\\ \tan^{-1}\left(\dfrac{y_k}{x_k}\right) \end{bmatrix} \end{eqnarray} The initial state estimate is assumed to be a Gaussian vector with mean $\mathbf{x}_{0}$ and error covariance $P_{0}$, such that \begin{equation} \mathbf{x}_{0}=\begin{bmatrix} 100&20&100&20 \end{bmatrix}^T \end{equation} \begin{equation} P_{0}=diag \left( 100,\;10,\;100,\;10 \right) \end{equation} Hence the initial particles $\{\mathbf{x}_{0}^{(i)}\}_{i=1}^N$ were generated from the distribution \begin{equation} \mathbf{x}_{0}^{(i)} \sim \mathcal{N}(\mathbf{x}_{0},P_{0}) \end{equation} In this implementation of the particle filter, the transitional prior, which is a suboptimal choice of importance density, is used to propose particles. The state transition model $f(\cdotp)$ for estimation of the state at time $k$ is such that: \begin{eqnarray} \mathbf{x}_{k} & = & f(\mathbf{x}_{k-1})+\mathbf{w}_{k-1} \end{eqnarray} where $\mathbf{w}_{k-1}$ is the zero mean process noise. The state transition model $f(\cdotp)$ used in this implementation of the generic particle filter is the constant velocity model. Hence $f(\cdotp)$ is linear and given by the matrix $F$: \begin{equation} F=\begin{bmatrix} 1&T&0&0\\ 0&1&0&0\\ 0&0&1&T\\ 0&0&0&1\\ \end{bmatrix} \end{equation}\\ where $T$ is the sampling period of the target dynamics. The process noise is assumed to have a diagonal covariance matrix $Q_{w}$: \begin{equation} Q_w = diag \left( 5,\;1,\;5,\;1 \right) \end{equation} The number of particles used was $N=500$. 
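The simulation models above translate directly into code. The sketch below is illustrative rather than the code used to generate the results; the sampling period ($T=1\,s$) and the use of the four-quadrant arctangent for the bearing are assumptions.

```python
import numpy as np

T = 1.0  # sampling period of the target dynamics (assumed)

# Constant-velocity transition matrix F for the state [x, vx, y, vy]
F = np.array([[1.0, T, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, T],
              [0.0, 0.0, 0.0, 1.0]])

def h(x):
    """Range-bearing measurement z = [r, theta] of state [x, vx, y, vy]."""
    px, py = x[0], x[2]
    return np.array([np.hypot(px, py), np.arctan2(py, px)])

def propose(particles, Q_w, rng):
    """Propose particles from the transitional prior N(F x_{k-1}, Q_w).

    particles: array of shape (N, 4), one state per row."""
    noise = rng.multivariate_normal(np.zeros(4), Q_w, size=len(particles))
    return particles @ F.T + noise
```

With \texttt{Q\_w = np.diag([5., 1., 5., 1.])}, one call to \texttt{propose} performs the prediction step of the generic particle filter for all $N$ particles at once.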
The detailed implementation algorithm for the target tracking problem is given in Table~\ref{tab:GPF_impltn}. Since resampling can only reduce the accuracy of the estimates of the distribution, estimates such as the conditional mean, the sample covariance and the mean square error (MSE) are calculated before resampling. The results shown are calculated over 100 Monte Carlo runs. \begin{table}[ht!] \caption{Implementation of GPF} \centering \resizebox{!}{4in} { \begin{tabular}{l} \hline \begin{minipage}{6in} \vskip 4pt \begin{enumerate} \item For $k=0$, initialize all particles: \begin{itemize} \item For $i=1,...,N$, generate samples $\mathbf{x}_{0}^{(i)} \sim \mathcal{N}(\mathbf{x}_{0},P_{0})$ \item For $i=1,...,N$, assign weights $w_{0}^{(i)}=N^{-1}$ \end{itemize} \item For $k>0$, \begin{itemize} \item For $i=1, 2,..., N$ \begin{itemize} \item Draw sample $\mathbf{x}_{k}^{(i)}$ using the transitional prior. \begin{equation} a_k^{(i)}= F\mathbf{x}_{k-1}^{(i)} \end{equation} \begin{equation} \mathbf{x}_{k}^{(i)} \sim p(\mathbf{x}_{k}\mid \mathbf{x}_{k-1}^{(i)})=\mathcal{N}(a_k^{(i)},Q_{w}) \end{equation} \item Evaluate the unnormalized importance weights \begin{eqnarray} {\tilde{w}}_k^{(i)} & \varpropto & w_{k-1}^{(i)}\dfrac{p(\mathbf{z}_k|\mathbf{x}_k^{(i)})p(\mathbf{x}_k^{(i)}|\mathbf{x}_{k-1}^{(i)})}{q(\mathbf{x}_k^{(i)}|\mathbf{x}_{k-1}^{(i)},\mathbf{z}_{k})} \end{eqnarray} \end{itemize} \item For $i=1,....,N$: \begin{itemize} \item Normalize the importance weights \begin{eqnarray} w_k^{(i)}& = & \dfrac{{\tilde{w}}_k^{(i)}}{\sum_{j=1}^N {\tilde{w}}_k^{(j)}} \end{eqnarray} \end{itemize} \item Calculate the target estimates such as the conditional mean, the covariance and the mean square error (MSE). 
\item Calculate $N_{eff}$ \begin{eqnarray} N_{eff}& = & \dfrac{1}{\sum_{i=1}^N (w_k^{(i)})^2} \end{eqnarray} \item If $N_{eff}<N_{thr}$ \begin{itemize} \item Resample the particles using the algorithm in Table~\ref{tab:Sresampling} or Table~\ref{tab:Mresampling} \begin{eqnarray} [\{\mathbf{x}_k^{(n)},w_k^{(n)}\}_{n=1}^{N}] & = & RESAMPLE[\{\mathbf{x}_k^{(i)},w_k^{(i)}\}_{i=1}^{N}] \end{eqnarray} \item Roughen the particles using the algorithm in Table~\ref{tab:Roughening} \begin{eqnarray} [\{\mathbf{x}_k^{(n)},w_k^{(n)}\}_{n=1}^{N}] & = & ROUGHEN[\{\mathbf{x}_k^{(i)},w_k^{(i)}\}_{i=1}^{N}] \end{eqnarray} \end{itemize} \end{itemize} \end{enumerate} \vskip 4pt \end{minipage} \\ \hline \end{tabular} } \label{tab:GPF_impltn} \end{table} The true trajectory of the target and its estimate are shown in Fig.~\ref{GPF_traj}. The state estimates of the target are shown in Fig.~\ref{GPF_state}. The mean square error (MSE) of the position estimates is shown in Fig.~\ref{GPF_MSE}. \begin{figure}[h] \centering \subfloat [$xy$ track] {\label{GPF_traj}\includegraphics[scale=0.4]{BootstrapPF_fig/PF_fig/colour/xy_plot} } \subfloat [MSE] {\label{GPF_MSE}\includegraphics[scale=0.4]{BootstrapPF_fig/PF_fig/colour/MSE} }\\ \caption{Target's true $xy$ track with its estimate and MSE obtained using the generic PF.} \end{figure} \begin{figure}[h] \centering \subfloat [$xy$ track] {\label{EKF_traj}\includegraphics[scale=0.4]{BootstrapPF_fig/EKF_fig/colour/xy_plot} } \subfloat [MSE] {\label{EKF_MSE}\includegraphics[scale=0.4]{BootstrapPF_fig/EKF_fig/colour/MSE} }\\ \caption{Target's true $xy$ track with its estimate and MSE obtained using the EKF.} \end{figure} \subsection{Comparison of Particle Filter with EKF} For comparison, the extended Kalman filter (EKF) is also implemented for the same target motion scenario. The true trajectory of the target and its estimate are shown in Fig.~\ref{EKF_traj}. The state estimates are calculated over 100 Monte Carlo runs and are shown in Fig.~\ref{EKF_state}. 
The mean square error (MSE) of the position estimate is shown in Fig.~\ref{EKF_MSE}. The results show that the estimates obtained using the EKF diverge. This demonstrates that the particle filter has better tracking accuracy under nonlinear target motions, and that it can handle moderate maneuvers of the target using only a constant velocity model, without the need for maneuvering models. \section{Summary} The particle filter is a class of Monte Carlo methods for solving the recursive Bayesian estimation problem. It represents the probability distribution of a target using particles and associated weights. It does not require the assumptions of linearity and Gaussianity, and is capable of handling complex noise distributions and nonlinearities in the target's measurements as well as in the target dynamics. Importance sampling provides an alternative way to sample particles from a complex distribution using another suitable, easy-to-sample distribution called the importance density. Sequential importance sampling performs importance sampling recursively and reduces its computational complexity. The particle filter consists of proposing particles using the importance function and updating the weights of these particles at every iteration, and it is amenable to parallel implementation. Implementation issues like degeneracy and sample impoverishment are addressed by resampling and roughening respectively. Selection of a good importance density also reduces the frequency of resampling. Simulations confirm that the particle filter outperforms the EKF in tracking maneuvering targets, at the expense of increased computational cost. The independent partition particle filter (IPPF) for multi-target tracking and the multiple model particle filter (MMPF) for maneuvering target tracking are explored in the subsequent chapters. 
\begin{figure}[p] \centering \subfloat [position $x$ ] {\includegraphics[scale=0.4]{BootstrapPF_fig/PF_fig/colour/x_state} }\\ \subfloat [position $y$ ] {\includegraphics[scale=0.4]{BootstrapPF_fig/PF_fig/colour/y_state} }\\ \subfloat [velocity $v_x$] {\includegraphics[scale=0.4]{BootstrapPF_fig/PF_fig/colour/vx_state} }% \subfloat [velocity $v_y$] {\includegraphics[scale=0.4]{BootstrapPF_fig/PF_fig/colour/vy_state} }\\ \caption{Target's true states and their estimates obtained using generic PF.} \label{GPF_state} \end{figure} \begin{figure}[p] \centering \subfloat [position $x$ ] {\includegraphics[scale=0.4]{BootstrapPF_fig/EKF_fig/colour/x_state} }\\ \subfloat [position $y$ ] {\includegraphics[scale=0.4]{BootstrapPF_fig/EKF_fig/colour/y_state} }\\ \subfloat [velocity $v_x$] {\includegraphics[scale=0.4]{BootstrapPF_fig/EKF_fig/colour/vx_state} }% \subfloat [velocity $v_y$] {\includegraphics[scale=0.4]{BootstrapPF_fig/EKF_fig/colour/vy_state} }\\ \caption{Target's true states and their estimates obtained using EKF.} \label{EKF_state} \end{figure} \chapter{Multi-target Tracking using Independent Partition Particle Filter (IPPF)} \label{chap:IPPF} \thispagestyle{empty} Partitioned sampling was developed by MacCormick et al.~\cite{3} for tracking more than one target. The independent partition particle filter (IPPF), proposed by Orton et al.~\cite{1}, is a convenient way to propose particles when part or all of the joint multi-target density factorizes. These techniques are explored in this chapter. In particle filters, the number of particles required to model a distribution increases with the dimension $n_{x}$ of the state space. The upper bound on the variance of the estimation error has the form $cN^{-1}$, where $c$ is a constant and $N$ is the number of particles used by the particle filter. The constant $c$ depends heavily on the state vector dimension $n_{x}$ of the system \cite{4}. 
For a poorly chosen importance density, the variance of the estimation error of the particle filter becomes exponential in $n_{x}$; this is referred to as the \textquotedblleft curse of dimensionality\textquotedblright. Hence the number of required particles $N$ must be higher for higher dimensional systems such as multi-target tracking systems. In the multi-target case, the proportion of the state space covered by the region of reasonably high likelihood becomes smaller. A particle with one very improbable target state, with all the remaining states being probable, may be rejected during the resampling step of the particle filter, since the particle as a whole is improbable. It is the low probability of the bad estimates that determines the fate of the whole particle; hence good parts of the particle are penalized because of the bad parts. A better approach is to ensure that either the whole particle is probable or the whole particle is improbable. This can be done by redistributing the set of weighted particles so as to increase the density of particles in certain regions of interest, and accounting for the redistribution by suitable weights, so that the underlying distribution described by the former particles is not altered. This is accomplished by the weighted resampling technique described in \cite{1,2,3}. \section{Weighted Resampling} Weighted resampling with respect to a function $g(\mathbf{x})$ is an operation on the particle set which populates the peaks of $g(\mathbf{x})$ with particles without altering the distribution actually represented by the particle set. Given a weighted set of particles, weighted resampling populates certain parts of the configuration space with particles in the desired manner, so that the representation is more efficient for future operations. It has the advantage that subsequent operations on this particle set will produce a more accurate representation of the desired probability distributions. 
Weighted resampling is carried out with respect to a strictly positive weighting function $g(\mathbf{x})$, which is analogous to the importance function used in standard importance sampling. Let the $i^{th}$ particle be $\mathbf{x}^{(i)}$ with weight $w^{(i)}$. \begin{equation} \mathbf{x}=\{\mathbf{x}^{(1)},\mathbf{x}^{(2)},\mathbf{x}^{(3)},....,\mathbf{x}^{(N)}\} \end{equation} \begin{equation} \mathbf{w}=\{w^{(1)}, w^{(2)}, w^{(3)}, . . . . , w^{(N)}\} \end{equation} Given a set of $N$ particles $\mathbf{x}$ with corresponding weights $\mathbf{w}$, it produces a new particle set by resampling from $\mathbf{x}$ using secondary weights which are proportional to $g(\mathbf{x})$. This has the effect of selecting many particles in regions where $g(\mathbf{x})$ is peaked. The weights of the resampled particles are calculated in such a way that the overall distribution represented by the new particle set is the same as the old one. Asymptotically, any strictly positive function is acceptable as the weighting function $g(\mathbf{x})$, but it is better to select a function which is advantageous for our application. We would like the weighted resampling step to position as many particles as possible near peaks in the posterior; a natural choice is therefore to take $g(\mathbf{x})$ to be the likelihood function of the target itself. The algorithm for one dimensional weighted resampling with respect to the weighting function $g(\mathbf{x})$ is repeated in Table~\ref{tab:WR} from \cite{3}. Here $\mathbf{x}_{k},w_{k}$ represent the particles at time $k$. \begin{table}[h] \caption{Weighted Resampling \cite{3}} \centering \begin{tabular}{l} \hline \begin{minipage}{4.5in} \vskip 4pt $[\{\mathbf{x}_{k}^{(j)},w_{k}^{(j)}\}_{j=1}^{N}]$ = Weighted Resampling $[\{\mathbf{x}_{k}^{(i)}, w_{k}^{(i)}\}_{i=1}^{N}, g(\cdotp)]$ \begin{itemize} \item Define secondary weights $\rho^{(i)} = \dfrac{g(\mathbf{x}^{(i)})}{\sum^{N}_{j=1}g(\mathbf{x}^{(j)})}$. 
\item Sample indices $j(i)$ from the distribution formed by $\rho^{(i)}$ for $i=1,2,3,...,N$, as explained in Appendix~\ref{appendix:a} \item Set $\mathbf{x}^{(i)} = \mathbf{x}^{(j(i))}$ \item Set $w^{(i)} = \dfrac{w^{(j(i))}}{\rho^{(j(i))}}$ \item Normalize the new weights such that $\sum_{i} w^{(i)}=1$: $$w^{(i)} \leftarrow \dfrac{w^{(i)}}{\sum^{N}_{j=1}w^{(j)}}$$ \end{itemize} \vskip 4pt \end{minipage} \\ \hline \end{tabular} \label{tab:WR} \end{table} \begin{figure}[h] \centering \includegraphics[scale=0.5]{IPPF_fig/weighted_resampling} \caption{Weighted resampling of a set of particles representing a distribution $P(x)$ with respect to a function $g(x)$: after weighted resampling, more particles are located near the peak of $g(x)$. The resultant particles may have non-uniform weights but still represent the same initial distribution $P(x)$.} \label{fig:weighted_resampling} \end{figure} The fourth step in the weighted resampling algorithm has the effect of counteracting the extent to which the particles were biased by the secondary weights. An intuitive proof that weighted resampling does not alter the underlying distribution is given in \cite{3}. Thus weighted resampling has a similar objective and effect to importance resampling. The difference between weighted resampling and standard resampling is illustrated in Fig.~\ref{fig:weighted_resampling} and Fig.~\ref{fig:resampling}. \section{Independent Partitioned Sampling} Partitioned sampling is a general term for the method which consists of dividing the state vector into two or more partitions, sequentially applying the dynamics for each partition, and then following with an appropriate resampling operation. The objective of partitioned sampling is to use one's intuition about the problem to choose a decomposition of the dynamics which simplifies the problem, and a weighting function $g(\mathbf{x})$ that yields a better rearrangement of the particles. 
If the weighting function for the intermediate resampling is chosen to be highly peaked near the peak in the likelihood for that partition, then the weighted resampling step will increase the number of particles close to that peak. After applying this method to each partition, the result is that more particles are likely to contain mostly good states, so that fewer are rejected at the final resampling step. For independent targets, \cite{3} introduces the independent partition particle filter. Here the state $\mathbf{x}_{t}$ is assumed to be separable into independent partitions, each partition containing the state of one target. Thus $\mathbf{x}_{t}$ is the union of several partitions, $\mathbf{x}_{t} \equiv \{\mathbf{x}_{t}(1),\mathbf{x}_{t}(2),\mathbf{x}_{t}(3),.....,\mathbf{x}_{t}(K_{t})\}$, where we have $K_{t}$ partitions, which is the same as the number of targets. If the prior is assumed to be independent, and if the likelihood and the importance function are also independent with respect to the same partitioning, then the posterior has the same independence. In this scenario, weighted resampling allows the particles to interact and swap target states; it thus performs the crossover of targets among the particles implicitly. Suppose there are two targets $A$ and $B$, represented using five particles $\{A_{1},B_{1}\},\{A_{2},B_{2}\},\{A_{3},B_{3}\},\{A_{4},B_{4}\},\{A_{5},B_{5}\}$. Suppose $A_{3},A_{4},B_{2},B_{5}$ are less probable states and $A_{1},A_{2},A_{5},B_{1},B_{3},B_{4}$ are highly probable. Then weighted resampling applied to each partition performs a crossover among the particles and can generate five new particles such as $\{A_{2},B_{1}\},\{A_{1},B_{3}\},\{A_{2},B_{4}\},\{A_{5},B_{1}\},\{A_{1},B_{4}\}$, each of which consists only of probable states. Hence the new particles become more concentrated near the peak of the posterior. 
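A minimal sketch of the weighted resampling operation underlying this crossover is given below (function and variable names are illustrative assumptions, and multinomial index sampling is used):

```python
import numpy as np

def weighted_resample(particles, weights, g, rng):
    """Weighted resampling w.r.t. a strictly positive function g:
    populate the peaks of g with particles without altering the
    distribution represented by the weighted particle set."""
    N = len(particles)
    rho = g(particles)
    rho = rho / rho.sum()             # normalized secondary weights
    j = rng.choice(N, size=N, p=rho)  # sample indices from rho
    new_p = particles[j]
    new_w = weights[j] / rho[j]       # undo the bias introduced by rho
    return new_p, new_w / new_w.sum() # renormalize the weights
```

Dividing by $\rho^{(j(i))}$ is the step that counteracts the bias of the secondary weights, so the new weighted set still represents the original distribution while clustering particles near the peak of $g$.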
The algorithm for the independent partition particle filter from \cite{4} is repeated in Table~\ref{tab:IPPF}. \begin{table}[ht] \caption{Independent Partition Particle Filter (IPPF) \cite{4}} \centering \resizebox{!}{4in} { \begin{tabular}{l} \hline \begin{minipage}{7in} \vskip 4pt \begin{enumerate} \item For $t=0$, initialize all particles: \begin{itemize} \item For $i=1,...,N$ sample $\mathbf{x}_{0}^{(i)} \sim p(\mathbf{x}_{0})$, where $p(\mathbf{x}_{0})$ is the prior distribution of the target. \item For $i=1,...,N$ calculate weights $w_{0}^{(i)}$ according to $p(\mathbf{x}_{0})$. \end{itemize} \item For $t>0$, \begin{itemize} \item For $k=1, 2,..., K_{t}$ \begin{itemize} \item For $i=1, 2,..., N$ \begin{itemize} \item Draw a sample from the importance density: $\mathbf{x}_{t}^{(i)}(k) \sim q_{k}(\mathbf{x}_{t}(k)\mid \mathbf{x}_{t-1}^{(i)}(k),z_{t})$ \item Compute the secondary weights $g(\mathbf{x}_{t}^{(i)}(k))$ \end{itemize} \end{itemize} \item For $k=1, 2,..., K_{t}$ \begin{itemize} \item For $i=1, 2,..., N$ \begin{itemize} \item Normalize the secondary weights $$\rho^{(i)}(k)= \dfrac{g(\mathbf{x}_{t}^{(i)}(k))}{\sum^{N}_{j=1}g(\mathbf{x}_{t}^{(j)}(k))}$$ \end{itemize} \end{itemize} \item For $k=1, 2,..., K_{t}$ \begin{itemize} \item For $i=1, 2,..., N$ \begin{itemize} \item Sample indices $j_{k}(i)$ from the distribution formed by $\rho^{(n)}(k)$ for $n=1,2,3,...,N$ by any of the methods given in Appendix~\ref{appendix:a}. \end{itemize} \end{itemize} \item For $i=1, 2,..., N$ \begin{itemize} \item Set the new particles $\mathbf{x}_{t}^{(i)} \equiv \{\mathbf{x}_{t}^{(j_{1}(i))}(1),\mathbf{x}_{t}^{(j_{2}(i))}(2),\mathbf{x}_{t}^{(j_{3}(i))}(3),.....,\mathbf{x}_{t}^{(j_{K_{t}}(i))}(K_{t})\}$ and compute their corresponding particle weights. 
\end{itemize} \item For $i=1, 2,..., N$ evaluate the importance weights $$ w_{t}^{(i)}=\dfrac{w_{t-1}^{(i)} p(z_{t}\rvert \mathbf{x}_{t}^{(i)}) p(\mathbf{x}_{t}^{(i)}\rvert \mathbf{x}_{t-1}^{(i)})}{q(\mathbf{x}_{t}^{(i)}\mid \mathbf{x}_{t-1}^{(i)},z_{t}) \prod_{k=1}^{K_{t}}\rho^{(j_{k}(i))}(k)} $$ \item For $i=1, 2,..., N$, normalize the weights: $$w^{(i)} = \dfrac{w^{(i)}}{\sum^{N}_{j=1}w^{(j)}}$$ \end{itemize} \item If required, resample the particles and apply roughening. \end{enumerate} \vskip 4pt \end{minipage} \\ \hline \end{tabular} } \label{tab:IPPF} \end{table} \section{Simulation Results} To verify the effectiveness of the algorithm, the targets' motion scenario and their measurements are simulated according to the given models, and the estimates obtained using the algorithm are compared with the true trajectories. For comparison, estimation is also performed using the standard bootstrap particle filter on the same target tracking problem and the results are compared. \subsection{Multi-target tracking using IPPF} We have two independent targets $A$ and $B$ which undergo constant velocity and constant turn motions. The state vector consists of the positions and velocities of the two targets, \begin{eqnarray} \mathbf{x}= \begin{bmatrix} \mathbf{x}(1) &\mathbf{x}(2) \end{bmatrix}^T= \begin{bmatrix} x(1) & v_{x}(1) & y(1) & v_{y}(1) &\vdots& x(2) & v_{x}(2) &y(2) & v_{y}(2) \end{bmatrix}^T \end{eqnarray} The initial true states of the targets are $\mathbf{x}_{0}=\begin{bmatrix} 500&50&500&50&450&40&350&-40 \end{bmatrix}^T$. From time $t=0s$ to $t=100s$, both targets have constant velocity motion. From $t=101s$ to $t=150s$, they move in a clockwise constant turn rate motion of $3 rad/s$. From $t=151s$ to $t=250s$, both targets again have constant velocity motion. The measurement sensor is located at the origin. 
The targets' ranges $r_{1},r_{2}$ and bearings $\theta_{1},\theta_{2}$ at time $t$ are available as the measurement $\mathbf{z}_t$
\begin{eqnarray} \mathbf{z}_t=h(\mathbf{x}_{t})+\mathbf{v}_t \end{eqnarray}
\begin{eqnarray} \mathbf{v}_t\sim \mathcal{N}(0,Q_v) \end{eqnarray}
where $\mathbf{v}_t$ is the measurement error and $h(\cdotp)$ is the measurement model. We assume that the data association of the targets is already done and we know exactly which measurements belong to which targets. The measurement error $\mathbf{v}_t$ is uncorrelated and has a zero mean Gaussian distribution with covariance matrix $Q_v$. $z_{t}(k)$ represents the measurement of target $k$.
\begin{eqnarray} \mathbf{z}_t= \begin{bmatrix} z_{t}(1)\\ z_{t}(2) \end{bmatrix}= \begin{bmatrix} r_{1}\\ \theta_{1}\\ r_{2}\\ \theta_{2} \end{bmatrix} \end{eqnarray}
\begin{eqnarray} \mathbf{Q_v}= \begin{bmatrix} Q_{v:1} &\mathbf{0}\\ \mathbf{0} &Q_{v:2} \\ \end{bmatrix}= \begin{bmatrix} \sigma_{r_{1}}^{2} &0 &0 &0\\ 0 &\sigma_{\theta_{1}}^{2} &0 &0\\ 0 &0 &\sigma_{r_{2}}^{2} &0\\ 0 &0 &0 &\sigma_{\theta_{2}}^{2} \end{bmatrix}= \begin{bmatrix} 10 &0 &0 &0\\ 0 &0.5 &0 &0\\ 0 &0 &10 &0\\ 0 &0 &0 &0.5\\ \end{bmatrix} \end{eqnarray}
The measurement model $h(\cdotp)$ for the targets is given by:
\begin{eqnarray} h(\mathbf{x}_{t})= \begin{bmatrix} h_1(\mathbf{x}_{t}(1))\\ h_2(\mathbf{x}_{t}(2))\\ \end{bmatrix} \end{eqnarray}
The measurement model $h_k(\cdotp)$ for target $k=1,2$ is given by:
\begin{eqnarray} z_{t}(k)= h_k(\mathbf{x}_{t}(k))= \begin{bmatrix} \sqrt{x^2_t(k)+y^2_t(k)}\\ \tan^{-1}\left(\dfrac{y_t(k)}{x_t(k)}\right) \end{bmatrix} \end{eqnarray}
The initial state estimate is assumed to be a Gaussian vector with mean $\mathbf{x}_{0}$ and error covariance $P_{0}$, such that
\begin{equation} \mathbf{x}_{0}=\begin{bmatrix} 250&50&750&50&250&40&250&-40 \end{bmatrix}^T \end{equation}
\begin{equation} P_{0}=diag \left(100,10,100,10,100,10,100,10\right) \end{equation}
Hence initial particles
$\{\mathbf{x}_{0}^{(i)}\}_{i=1}^N$ were generated based on the distribution
\begin{equation} \mathbf{x}_{0}^{(i)} \sim \mathcal{N}(\mathbf{x}_{0},P_{0}) \end{equation}
In this implementation of the particle filter, the transitional prior, a suboptimal choice of importance density, is used to propose particles. The state transition model $f(\cdotp)$ for estimation of the state at time $t$ is such that:
\begin{eqnarray} \mathbf{x}_{t} & = & f(\mathbf{x}_{t-1})+\mathbf{w}_{t-1}\\ f(\mathbf{x}_{t-1}) & = & \begin{bmatrix} f_1(\mathbf{x}_{t-1}(1))\\ f_2(\mathbf{x}_{t-1}(2))\\ \vdots\\ f_{K_t}(\mathbf{x}_{t-1}(K_t))\\ \end{bmatrix} \end{eqnarray}
where $\mathbf{w}_{t-1}$ is the process noise with zero mean. For target $k$, the state transition model $f_{k}(\cdotp)$ for estimation of the state at time $t$ is such that:
\begin{equation} \mathbf{x}_{t}(k)=f_{k}(\mathbf{x}_{t-1}(k))+\mathbf{w}_{t-1}(k) \end{equation}
The state transition model $f_{k}(\cdotp)$ used in this implementation of the IPPF is the constant velocity model. Hence $f_{k}(\cdotp)$ is a matrix $F$ given by:
\begin{equation} F=\begin{bmatrix} 1&T&0&0\\ 0&1&0&0\\ 0&0&1&T\\ 0&0&0&1\\ \end{bmatrix} \end{equation}
where $T$ is the sampling period of the target dynamics. Since both targets are estimated based on the same type of state transition model, the importance density used is the same for both targets, i.e. $f_{1}(\cdotp)=f_{2}(\cdotp)$. The process noise is assumed to have a diagonal covariance matrix $Q_{w}$:
\begin{equation} Q_w = diag \left(10 ,2.5, 35 , 2.5, 10 , 2, 10, 2 \right) \end{equation}
A total of $N=100$ particles were used. The detailed implementation algorithm for the two-target tracking problem is given in Table.\ref{tab:IPPF_impltn}.
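The constant velocity propagation and range-bearing measurement used above can be sketched as follows. This is a minimal Python/NumPy illustration, not the implementation used in the simulations; in particular the sampling period $T=1\,s$ is an assumption, since the text does not state it numerically.

```python
import numpy as np

T = 1.0  # assumed sampling period (not specified numerically in the text)

# Constant velocity transition matrix F for one target partition [x, vx, y, vy]
F = np.array([[1.0, T, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, T],
              [0.0, 0.0, 0.0, 1.0]])

def h_k(xk):
    """Range-bearing measurement model h_k for one target state [x, vx, y, vy]."""
    rng = np.sqrt(xk[0] ** 2 + xk[2] ** 2)
    bearing = np.arctan2(xk[2], xk[0])  # atan2 resolves the quadrant, unlike tan^-1(y/x)
    return np.array([rng, bearing])

# Propagate one particle partition deterministically and measure it
xk = np.array([500.0, 50.0, 500.0, 50.0])   # initial state of target A's partition
xk_next = F @ xk    # mean of the transitional prior (process noise omitted here)
z = h_k(xk_next)    # noiseless range and bearing
```

In the filter itself, zero-mean Gaussian noise with covariance $Q_w$ would be added to `F @ xk` when proposing particles, and the likelihood would compare `z` against the noisy measurement.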
\begin{table}[ht] \caption{Implementation of IPPF} \centering \resizebox{!}{4.5in} { \begin{tabular}{l} \hline \begin{minipage}{8in} \vskip 4pt
\begin{enumerate}
\item For $t=0$, initialize all particles:
\begin{itemize}
\item For $i=1,...,100$, generate samples $\mathbf{x}_{0}^{(i)} \sim \mathcal{N}(\mathbf{x}_{0},P_{0})$
\item For $i=1,...,100$, assign weights $w_{0}^{(i)}=\dfrac{1}{100}$
\end{itemize}
\item For $t>0$,
\begin{itemize}
\item For $k=1, 2$
\begin{itemize}
\item For $i=1, 2,..., 100$
\begin{itemize}
\item Draw sample $\mathbf{x}_{t}^{(i)}(k)$ using the transitional prior.
\begin{equation} a_t^{(i)}(k)= F\mathbf{x}_{t-1}^{(i)}(k) \end{equation}
\begin{equation} \mathbf{x}_{t}^{(i)}(k) \sim p(\mathbf{x}_{t}(k)\mid \mathbf{x}_{t-1}^{(i)}(k))=\mathcal{N}(a_t^{(i)}(k),Q_{w}) \end{equation}
\item Compute secondary weights using the likelihood $p(z_{t}(k)\mid \mathbf{x}_{t}^{(i)}(k))$ and the observation model $h_k$ for target $k$.
\begin{equation} b_t^{(i)}(k)=h_k(\mathbf{x}_{t}^{(i)}(k)) \end{equation}
\begin{equation} g(\mathbf{x}_{t}^{(i)}(k))=\mathcal{N}(z_t(k);b_t^{(i)}(k),Q_{v:k}) \end{equation}
\end{itemize}
\end{itemize}
\item For $k=1, 2$
\begin{itemize}
\item For $i=1, 2,..., 100$, normalize the secondary weights
\begin{equation} \rho^{(i)}(k)= \frac{g(\mathbf{x}_{t}^{(i)}(k))}{\sum^{100}_{j=1}g(\mathbf{x}_{t}^{(j)}(k))} \end{equation}
\end{itemize}
\item For $k=1, 2$
\begin{itemize}
\item For $i=1, 2,..., 100$
\begin{itemize}
\item Sample indices $j_{k}(i)$ from the distribution formed by $\rho^{(n)}(k)$ for $n=1,2,3,...,100$ by any of the methods given in Appendix.\ref{appendix:a}.
\end{itemize}
\end{itemize}
\item For $i=1, 2,..., 100$
\begin{itemize}
\item Set the new particles $\mathbf{x}_{t}^{(i)} \equiv \{\mathbf{x}_{t}^{(j_{1}(i))}(1),\mathbf{x}_{t}^{(j_{2}(i))}(2)\}$ and compute their corresponding particle weights, $w_{t}^{(i)} \equiv w_{t}^{(j_{1}(i))} \times w_{t}^{(j_{2}(i))}$
\end{itemize}
\item For $i=1, 2,..., 100$
\begin{itemize}
\item Evaluate the likelihood of the particles $$p(\mathbf{z}_{t}\rvert \mathbf{x}_{t}^{(i)})=\mathcal{N}(\mathbf{z}_{t};h(\mathbf{x}_{t}^{(i)}),\mathbf{Q_v})$$
\item Evaluate the importance weights $$ w_{t}^{(i)}=\frac{w_{t-1}^{(i)} p(z_{t}\rvert \mathbf{x}_{t}^{(i)})}{\prod_{k=1}^{2}\rho^{(j_{k}(i))}(k)} $$
\end{itemize}
\item For $i=1, 2,..., 100$, normalize weights: $$w_{t}^{(i)} = \frac{w_{t}^{(i)}}{\sum^{100}_{j=1}w_{t}^{(j)}}$$
\end{itemize}
\item If required, resample the particles and do roughening.
\end{enumerate} \vskip 4pt \end{minipage} \\ \hline \end{tabular} } \label{tab:IPPF_impltn} \end{table}
The true trajectories of the targets and their estimates are shown in Fig.\ref{IPPF_xy_state}. The state estimates of the targets are shown in Fig.\ref{IPPF_state}. The mean square error (MSE) of the position estimates for 100 Monte Carlo runs is shown in Fig.\ref{IPPF_T1_MSE} and Fig.\ref{IPPF_T2_MSE}.
\subsection{Comparison of IPPF with Standard Bootstrap PF}
For comparison, the standard bootstrap particle filter is also implemented for the same target scenario with $N=100$ particles. The true trajectories of the targets and their estimates are shown in Fig.\ref{PF_xy_state}. The state estimates of the targets are shown in Fig.\ref{PF_state}. The mean square error (MSE) of the position estimates for 100 Monte Carlo runs is shown in Fig.\ref{PF_T1_MSE} and Fig.\ref{PF_T2_MSE}. The results show that the PF estimates diverge significantly compared with the IPPF estimates.
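The per-partition weighted resampling and recombination that gives the IPPF this advantage can be sketched as follows. This is a minimal Python/NumPy illustration, not the thesis code; the callables `propose` and `secondary_weight` stand in for the transitional prior and the secondary weight function $g(\cdot)$ and are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def ippf_recombine(particles, propose, secondary_weight):
    """One IPPF time step over K independent target partitions.

    particles: (N, K, d) array of N particles, each with K partitions of dim d.
    propose: function (N, d) -> (N, d) sampling the transitional prior.
    secondary_weight: function (N, d) -> (N,) evaluating g(x_t(k)).
    Returns the recombined particles and, per particle, the product of the
    selected normalized secondary weights (the denominator correction term
    appearing in the importance weight).
    """
    N, K, _ = particles.shape
    new = np.empty_like(particles)
    rho_prod = np.ones(N)
    for k in range(K):
        part = propose(particles[:, k, :])      # propagate partition k
        g = secondary_weight(part)              # secondary weights g(x_t^{(i)}(k))
        rho = g / g.sum()                       # normalize within the partition
        idx = rng.choice(N, size=N, p=rho)      # weighted resampling of partition k
        new[:, k, :] = part[idx]                # mix-and-match good partitions
        rho_prod *= rho[idx]                    # accumulate prod_k rho^{(j_k(i))}(k)
    return new, rho_prod
```

With uniform secondary weights the recombination becomes a plain random shuffle of partitions and the correction term reduces to $(1/N)^K$, which cancels in the normalization, consistent with the weight update in Table.\ref{tab:IPPF}.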
This shows that the IPPF improves the particle survival rate when there are multiple targets, and hence fewer particles can be used while maintaining robustness.
\begin{figure}[h] \centering \subfloat [IPPF estimate] {\label{IPPF_xy_state}\includegraphics[scale=0.4]{IPPF_fig/colour/IPPF_xy_plot} } \subfloat [PF estimate] {\label{PF_xy_state}\includegraphics[scale=0.4]{IPPF_fig/colour/PF_xy_plot} }\\ \caption{Targets' true $xy$ tracks and their estimated tracks obtained using IPPF and PF: The PF diverges during multi-target tracking, but the IPPF performs well even with the same number of particles.} \label{IPPF_traj} \end{figure}
\begin{figure}[h] \centering \subfloat [IPPF Target 1 estimate] {\label{IPPF_T1_MSE}\includegraphics[scale=0.4]{IPPF_fig/colour/IPPF_T1_MSE} } \subfloat [IPPF Target 2 estimate] {\label{IPPF_T2_MSE}\includegraphics[scale=0.4]{IPPF_fig/colour/IPPF_T2_MSE} }\\ \subfloat [PF Target 1 estimate] {\label{PF_T1_MSE}\includegraphics[scale=0.4]{IPPF_fig/colour/PF_T1_MSE} } \subfloat [PF Target 2 estimate] {\label{PF_T2_MSE}\includegraphics[scale=0.4]{IPPF_fig/colour/PF_T2_MSE} }\\ \caption{MSE of the position estimates obtained using IPPF and PF.} \label{IPPF_MSE} \end{figure}
\begin{figure}[h] \centering \subfloat [position $x$ ] {\includegraphics[scale=0.4]{IPPF_fig/colour/IPPF_x_state} }\\ \subfloat [position $y$ ] {\includegraphics[scale=0.4]{IPPF_fig/colour/IPPF_y_state} }\\ \subfloat [velocity $v_x$] {\includegraphics[scale=0.4]{IPPF_fig/colour/IPPF_vx_state} }% \subfloat [velocity $v_y$] {\includegraphics[scale=0.4]{IPPF_fig/colour/IPPF_vy_state} }\\ \caption{Targets' true states and their estimates obtained using IPPF.} \label{IPPF_state} \end{figure}
\begin{figure}[h] \centering \subfloat [position $x$ ] {\includegraphics[scale=0.4]{IPPF_fig/colour/PF_x_state} }\\ \subfloat [position $y$ ] {\includegraphics[scale=0.4]{IPPF_fig/colour/PF_y_state} }\\ \subfloat [velocity $v_x$]
{\includegraphics[scale=0.4]{IPPF_fig/colour/PF_vx_state} }% \subfloat [velocity $v_y$] {\includegraphics[scale=0.4]{IPPF_fig/colour/PF_vy_state} }\\ \caption{Targets' true states and their estimates obtained using the standard bootstrap PF.} \label{PF_state} \end{figure}
\section{Summary}
In high dimensional systems, the proportion of high likelihood particles is smaller. Hence a larger number of particles is required for high dimensional systems such as multi-target tracking. Weighted resampling is used to efficiently modify the particles using the measurement likelihood so that fewer particles are rejected during resampling. Weighted resampling does not alter the underlying probability distribution of the particles. Independent partitioned sampling facilitates the application of the target dynamics and measurement update individually on each independent target, and allows the use of weighted resampling on each target for a better rearrangement of the particles, with the result that most particles are likely to contain good states and fewer are rejected during resampling. The incorporation of independent partition sampling and weighted resampling helps the IPPF to track multiple targets with fewer particles.
\chapter{Multiple Model Particle Filter (MMPF)} \label{chap:MMPF} \thispagestyle{empty}
The Multiple Model Particle Filter (MMPF), proposed by McGinnity et al. \cite{6}, is an extension of the standard particle filter to the multiple model target tracking problem. Apart from straight-line motion, a maneuvering target can exhibit different types of dynamics, such as circular or accelerated motion, and it can switch abruptly from one type of motion to another. Such processes are difficult to represent using a single kinematic model of the target. Hence filters with multiple models representing different possible maneuvering states are run in parallel, operating simultaneously on the measurements.
The validity of these models is evaluated and the final target state estimate is a probability-weighted combination of the individual filters. In the multiple model particle filter, each particle consists of a state vector augmented by an index variable representing the model. Thus particles have a continuous valued vector $\mathbf{x}_t$ of target kinematic variables, like position, velocity, acceleration, etc., and a discrete valued regime variable $A_t$ that represents the index of the model which generated $\mathbf{x}_t$ during the time period $(t-1,t]$. The regime variable can take one of a fixed set of $s$ models, i.e., $A_t\in S=\{1,2, . . . .,s\}$. The posterior density $p(y_t \mid z_{1:t})$ is represented using $N$ particles $\{y_t^n,w_t^n\}_{n=1}^N$, i.e., the augmented state vectors and their weights. The posterior model probabilities $\{\pi_i(t)\}_{i=1}^s$ are approximately equal to the proportion of the samples from each model in the index set $\{A_t^n\}_{n=1}^N$. It will be assumed that model switching is a Markovian process with known mode transition probabilities $\pi_{ij}$.
\begin{equation} \pi_{ij}=P[A_t=j \mid A_{t-1}=i]\;;\qquad i,j \in S=\{1,2, . . . .,s\} \end{equation}
\begin{equation} \pi_{ij}\geq0 \end{equation}
\begin{equation} \sum_{j=1}^s \pi_{ij} = 1 \end{equation}
The mode transition probabilities are assumed to be time invariant and independent of the base state, and hence the system is assumed to be an $s$-state homogeneous Markov chain with mode transition probability matrix $\Pi=[ \pi_{ij}]_{s\times s}$, where $i,j \in S$. These mode transition probabilities are designed based on the estimator performance requirements. A lower value of $\pi_{ij}$ will give a smaller peak error during maneuvers but a higher RMS error during quiescent periods. Similarly, a higher value of $\pi_{ij}$ will give a larger peak error during maneuvers but a lower RMS error during quiescent periods \cite{5}.
\begin{table}[H] \caption{Multiple Model Particle Filter, MMPF \cite{4}} \centering \begin{tabular}{l} \hline \begin{minipage}{4in} \vskip 4pt
$[\{y_t^n, w_t^n\}_{n=1}^N]=$MMPF$[\{y_{t-1}^n, w_{t-1}^n\}_{n=1}^{N},z_t]$
\begin{itemize}
\item Regime transition (Table.\ref{tab:RT}):\\ $[\{A_t^n\}_{n=1}^N]=$RT$[\{A_{t-1}^n\}_{n=1}^{N},\Pi]$
\item Regime Conditioned SIS (Table.\ref{tab:RTSIS}):\\ $[\{\mathbf{x}_t^n, w_t^n\}_{n=1}^N]=$RC-SIS$[\{\mathbf{x}_{t-1}^n, A_t^n, w_{t-1}^n\}_{n=1}^{N},z_t]$
\item If required, resample the particles and do roughening.
\end{itemize}
\vskip 4pt \end{minipage} \\ \hline \end{tabular} \label{tab:MMPF} \end{table}
The algorithm for the multiple model particle filter is repeated in Table.\ref{tab:MMPF} from \cite{4,6}. The first step is to generate the index set $\{A_t^n\}_{n=1}^N$ based on the transition probability matrix $\Pi$. This gives the appropriate model and importance density to be used by each particle at time $t-1$ for generating the particle at time $t$. This is called regime transition. Its pseudo-code is repeated in Table.\ref{tab:RT} from \cite{4}.
\begin{table}[H] \caption{Regime Transition \cite{4}} \centering \begin{tabular}{l} \hline \begin{minipage}{4in} \vskip 4pt
$[\{A_t^n\}_{n=1}^N]=$RT$[\{A_{t-1}^n\}_{n=1}^{N},\Pi]$
\begin{itemize}
\item FOR $i=1:s$,
\begin{itemize}
\item $c_i(0)=0$
\item FOR $j=1:s$,
\begin{itemize}
\item $c_i(j)=c_i(j-1)+ \pi_{ij}$
\end{itemize}
\item END FOR
\end{itemize}
\item END FOR
\item FOR $n=1:N$,
\begin{itemize}
\item Draw $u_n\sim \mathcal{U} [0,1]$
\item Set $i=A_{t-1}^n$
\item m=1
\item WHILE $(c_i(m)<u_n)$
\begin{itemize}
\item$ m=m+1$
\end{itemize}
\item END WHILE
\item Set $A_{t}^n=m$
\end{itemize}
\item END FOR
\end{itemize}
\vskip 4pt \end{minipage} \\ \hline \end{tabular} \label{tab:RT} \end{table}
It implements the rule that if $A_{t-1}^n=i$, then $A_t^n$ should be set to $j$ with probability $\pi_{ij}$.
It finds the cumulative distribution function of the random variable $A_t$ conditioned on $A_{t-1}=i$, i.e. $\sum_{j=1}^{m} \pi_{ij}$ for $1\leq m \leq s$. It generates a uniform random variable $u_n\sim \mathcal{U} [0,1]$ and sets $A_t^n$ to $m\in S=\{1,2, . . . .,s\}$ such that
\begin{equation} \sum_{j=1}^{m-1} \pi_{ij} < u_n \leq \sum_{j=1}^{m} \pi_{ij} \end{equation}
The regime conditioned SIS filtering is done next. Its pseudo-code is repeated in Table.\ref{tab:RTSIS} from \cite{4}. The optimal regime conditioned importance density is
\begin{equation} q(\mathbf{x}_{t}\mid \mathbf{x}_{t-1}^{(n)},A_t^n,z_{t})_{opt}=p(\mathbf{x}_{t}\mid \mathbf{x}_{t-1}^{(n)},A_t^n,z_{t}) \end{equation}
A suboptimal choice of the regime conditioned importance density is the transitional prior.
\begin{equation} q(\mathbf{x}_{t}\mid \mathbf{x}_{t-1}^{(n)},A_t^n,z_{t})_{sub-opt}=p(\mathbf{x}_{t}\mid \mathbf{x}_{t-1}^{(n)},A_t^n) \end{equation}
The posterior prediction density is formed by transforming each particle using the model indexed by its corresponding augmented regime variable. After regime conditioned SIS filtering, the posterior density will automatically be weighted towards high likelihood regions as well as towards the more appropriate models. If necessary, resampling is done on the posterior density to reduce the effect of degeneracy.
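The regime transition rule above amounts to inverse-CDF sampling of the Markov chain, row by row of $\Pi$. A minimal Python/NumPy sketch (using 0-based mode indices rather than the 1-based indices of the pseudo-code):

```python
import numpy as np

def regime_transition(A_prev, Pi, rng):
    """If A_{t-1}=i, set A_t=j with probability Pi[i, j] (indices 0..s-1).

    A_prev: integer array of previous regime indices.
    Pi:     (s, s) row-stochastic mode transition probability matrix.
    """
    s = Pi.shape[0]
    C = np.cumsum(Pi, axis=1)       # per-row CDF: C[i, m] = sum_{j<=m} Pi[i, j]
    u = rng.random(len(A_prev))     # u_n ~ U[0, 1]
    # smallest m with C[i, m] >= u_n, exactly the WHILE loop of the pseudo-code;
    # the min() guards against floating-point rounding in the last CDF entry
    return np.array([min(int(np.searchsorted(C[i], un)), s - 1)
                     for i, un in zip(A_prev, u)])

rng = np.random.default_rng(1)
Pi = np.array([[0.9, 0.1],
               [0.3, 0.7]])
A = regime_transition(np.zeros(10000, dtype=int), Pi, rng)
```

Starting every particle in mode 0, roughly $10\%$ of the regime variables should switch to mode 1 after one step, matching the first row of $\Pi$.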
\begin{table}[H] \caption{Regime Conditioned SIS \cite{4}} \centering \begin{tabular}{l} \hline \begin{minipage}{4in} \vskip 4pt
$[\{\mathbf{x}_t^n, w_t^n\}_{n=1}^N]=$RC-SIS$[\{\mathbf{x}_{t-1}^n, A_t^n, w_{t-1}^n\}_{n=1}^{N},z_t]$
\begin{itemize}
\item FOR $n=1:N$,
\begin{itemize}
\item Draw $\mathbf{x}_t^n\sim q(\mathbf{x}_{t}\mid \mathbf{x}_{t-1}^{(n)},A_t^n,z_{t})$
\item Evaluate the importance weights up to a normalizing constant
\begin{equation}w_{t}^{(n)}=\frac{w_{t-1}^{(n)} p(z_{t}\rvert \mathbf{\mathbf{x}}_{t}^{(n)},A_{t}^{(n)}) p(\mathbf{x}_{t}^{(n)}\rvert \mathbf{x}_{t-1}^{(n)},A_{t}^{(n)})}{q(\mathbf{x}_{t}\mid \mathbf{x}_{t-1}^{(n)},A_t^n,z_{t})} \end{equation}
\end{itemize}
\item END FOR
\item FOR $n=1, 2,..., N$, normalize weights:
\begin{equation} w^{(n)}_{t} = \dfrac{w^{(n)}_t}{\sum^{N}_{j=1}w^{(j)}_{t}} \end{equation}
\item END FOR
\end{itemize}
\vskip 4pt \end{minipage} \\ \hline \end{tabular} \label{tab:RTSIS} \end{table}
\section{Simulation Results}
To verify the effectiveness of the algorithm, the target's motion scenario and its measurements are simulated according to the given models, and the estimates obtained using the algorithm are compared with the true trajectories. For comparison, estimation is also done using the standard bootstrap particle filter and the interacting multiple model-extended Kalman filter (IMM-EKF) for the same target tracking scenario.
\subsection{Target Tracking using MMPF}
We have one target which undergoes constant velocity and constant turn motions. The augmented state vector consists of the position $x,y$ and velocities $v_x, v_y$ of the target, and the regime variable $A$,
\begin{equation} \mathbf{x}= \begin{bmatrix} x & v_{x} & y & v_{y} &A \end{bmatrix}^T \end{equation}
The initial unaugmented true state of the target is $\mathbf{x}_{0}=\begin{bmatrix} 500&100&500&0 \end{bmatrix}^T$. From $t=0s$ to $t=20s$, $t=49s$ to $t=60s$, and $t=81s$ to $t=100s$ the target follows constant velocity motion.
From $t=21s$ to $t=48s$ and $t=61s$ to $t=80s$, it moves in clockwise constant turn rate motion of $60 rad/s$. The measurements are the target's range $r$ and bearing $\theta$, available as $\mathbf{z}$.
\begin{eqnarray} \mathbf{z}_t=h(\mathbf{x}_{t})+\mathbf{v}_t \end{eqnarray}
\begin{eqnarray} \mathbf{v}_t\sim \mathcal{N}(0,Q_v) \end{eqnarray}
where $\mathbf{v}_t$ is the measurement error and $h(\cdotp)$ is the measurement model. The measurement error $\mathbf{v}_t$ is uncorrelated and has a zero mean Gaussian distribution with covariance matrix $Q_v$
\begin{eqnarray} \mathbf{z}= \begin{bmatrix} r\\ \theta\\ \end{bmatrix} \end{eqnarray}
\begin{eqnarray} \mathbf{Q_v}= \begin{bmatrix} \sigma_{r}^{2} &0\\ 0 &\sigma_{\theta}^{2} \end{bmatrix}= \begin{bmatrix} 10 &0 \\ 0 &0.1\\ \end{bmatrix} \end{eqnarray}
The sensor is located at the origin. The initial state estimate is assumed to be a Gaussian vector with mean $\mathbf{x}_{0}$ and error covariance $P_{0}$, such that
\begin{equation} \mathbf{x}_{0}=\begin{bmatrix} 500&100&500&0 \end{bmatrix}^T \end{equation}
\begin{equation} P_{0}=diag\left(100,10,100,10\right) \end{equation}
Hence initial unaugmented particles $\{\mathbf{x}_{0}^{(i)}\}_{i=1}^{N}$ were generated based on the distribution
\begin{equation} \mathbf{x}_{0}^{(i)} \sim \mathcal{N}(\mathbf{x}_{0},P_{0}) \end{equation}
The process noise is assumed to have the diagonal covariance matrix $Q_{w}$:
\begin{equation} Q_w = diag(20,\;10,\;35,\;10) \end{equation}
The two target motion models used by this implementation of the MMPF are the constant velocity model and the constant turn rate model with a turn rate of $60 rad/s$. Hence the regime variable can take either of two values: $A=1$ for the constant velocity model and $A=2$ for the constant turn rate model.
The state transition model $f_{k}(\cdotp)$ for estimation of the target state at time $t$ using the $k^{th}$ model is such that:
\begin{equation} \mathbf{x}_{t}=f_{k}(\mathbf{x}_{t-1})+\mathbf{w}_{t-1} \end{equation}
where $\mathbf{w}_{t-1}$ is the process noise with zero mean and covariance $Q_w$. $f_{1}(\cdotp)$ is the constant velocity model and $f_{2}(\cdotp)$ is the constant turn rate model with a turn rate of $60 rad/s$. Hence $f_{1}(\cdotp)$ and $f_{2}(\cdotp)$ are matrices $F_1$ and $F_2$ respectively, given by:
\begin{equation} F_1=\begin{bmatrix} 1&T&0&0\\ 0&1&0&0\\ 0&0&1&T\\ 0&0&0&1\\ \end{bmatrix} \end{equation}
\begin{equation} F_2= \begin{bmatrix} 1 &\dfrac{\sin (\Omega T)}{\Omega} &0 &-\dfrac{1 - \cos(\Omega T)}{\Omega}\\ 0 &\cos(\Omega T) &0 &-\sin(\Omega T)\\ 0 &\dfrac{1-\cos(\Omega T)}{\Omega} &1 &\dfrac{\sin(\Omega T)}{\Omega}\\ 0 &\sin(\Omega T) &0 &\cos(\Omega T) \\ \end{bmatrix} \end{equation}
where $T$ is the sampling period of the target dynamics and $\Omega$ is the turn rate. In this implementation of the particle filter, the transitional prior, a suboptimal choice of the importance density $q(\mathbf{x}_{t} \mid \mathbf{x}_{t-1}^{(n)},A_t^n,z_{t})$, is used to propose the particles. Thus the importance density used is:\\
\begin{eqnarray*} q(\mathbf{x}_{t}\mid \mathbf{x}_{t-1}^{(n)},A_t^n,z_{t})&=&p(\mathbf{x}_{t}\mid \mathbf{x}_{t-1}^{(n)},A_t^n)\\ &=&\left\{ \begin{array}{rl} \mathcal{N}(f_{1}(\mathbf{x}_{t-1}^{(n)}),Q_{w}) & \text{if } A_t^n = 1\\ \mathcal{N}(f_{2}(\mathbf{x}_{t-1}^{(n)}),Q_{w}) & \text{if } A_t^n = 2\\ \end{array} \right. \end{eqnarray*}
The mode transition probability matrix assumed by the filter for the target was
\begin{equation} \Pi=\begin{bmatrix} 0.9 &0.1\\ 0.3 &0.7 \end{bmatrix} \end{equation}
A total of $N=100$ particles were used. The initial mode probability is assumed to be uniform.
\begin{equation} \pi_i(0)=0.5 \;;\qquad i=1,\;2 \end{equation}
Hence the particles were divided equally between the considered target motion models, i.e. the regime variables of 50 particles were associated with the constant velocity model ($A=1$) and the rest with the constant turn rate model ($A=2$). The true trajectory of the target and its track estimate are shown in Fig. \ref{fig:MMPF_xy_plot}. The state estimates of the target are shown in Fig. \ref{fig:MMPF_x_state}, Fig. \ref{fig:MMPF_y_state}, Fig. \ref{fig:MMPF_vx_state} and Fig. \ref{fig:MMPF_vy_state}. The mean square error (MSE) of the position estimates for 100 Monte Carlo runs is shown in Fig. \ref{fig:MMPF_MSE}. The simulation results show that the MMPF can successfully track maneuvering targets if the information about the various maneuvering models is given. The ratio of regime variables corresponding to each model gives the mode probabilities, which are plotted in Fig. \ref{fig:MMPF_mode_prob}. It clearly indicates that when the target is in a particular motion mode, particles resembling this motion model are automatically selected more often by the MMPF and given larger weight. Thus the model probability gives information about the current target motion model.
\subsection{Comparison of MMPF with Standard Bootstrap PF}
The standard bootstrap particle filter is implemented for the same target and measurement scenario with $N=100$ particles. The same states $\mathbf{x}= \begin{bmatrix} x & v_{x} & y & v_{y} \end{bmatrix}^T$, the constant velocity model, the process noise $Q_w$ and the transitional prior as the importance density were used by the filter. The true trajectory of the target and the track estimate are shown in Fig.\ref{fig:PF_xy_plot}. The state estimates are shown in Fig. \ref{fig:MMPF_x_state}, Fig. \ref{fig:MMPF_y_state}, Fig. \ref{fig:MMPF_vx_state} and Fig. \ref{fig:MMPF_vy_state}.
The MSE of the position estimates for 100 Monte Carlo runs is shown in Fig.\ref{fig:PF_MSE}. The results show that the estimates diverge and that the standard bootstrap particle filter is not able to track highly maneuvering targets using a single model. The PF can track properly only with a much larger number of particles, whereas the MMPF achieves the same performance with fewer particles. Thus the MMPF improves the tracking of highly maneuvering targets compared with the standard bootstrap particle filter.
\subsection{Comparison of MMPF with IMM-EKF}
The IMM-EKF filter is implemented for the same target and measurement scenario. The same states $\mathbf{x}= \begin{bmatrix} x & v_{x} & y & v_{y} \end{bmatrix}^T$, the constant velocity and constant turn models, and the process noise $Q_w$ were used by the filter. The true trajectory of the target and the track estimate are shown in Fig.\ref{fig:IMMEKF_xy_plot}. The state estimates are shown in Fig. \ref{fig:MMPF_x_state}, Fig. \ref{fig:MMPF_y_state}, Fig. \ref{fig:MMPF_vx_state} and Fig. \ref{fig:MMPF_vy_state}. The MSE of the position estimates for 100 Monte Carlo runs is shown in Fig.\ref{fig:IMMEKF_MSE}. The results show that the state estimates have a larger MSE compared with the MMPF estimates. The velocity estimates in particular deviate strongly from the true states. Also, the mode probabilities calculated by the filter do not always match the true mode probabilities. Thus the IMM-EKF is less capable of tracking maneuvering targets than the multiple model particle filter (MMPF).
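The two transition matrices $F_1$ and $F_2$ and the regime-conditioned proposal used by the MMPF above can be sketched as follows. This is a minimal Python/NumPy illustration; the function names are ours, and the concrete values passed in any call (such as $T$ or $Q_w$) are assumptions of the sketch.

```python
import numpy as np

def cv_matrix(T):
    """Constant velocity transition matrix F1 for state [x, vx, y, vy]."""
    return np.array([[1.0, T, 0.0, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, T],
                     [0.0, 0.0, 0.0, 1.0]])

def ct_matrix(omega, T):
    """Constant turn rate transition matrix F2 for state [x, vx, y, vy]."""
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([[1.0, s / omega,         0.0, -(1.0 - c) / omega],
                     [0.0, c,                 0.0, -s],
                     [0.0, (1.0 - c) / omega, 1.0, s / omega],
                     [0.0, s,                 0.0, c]])

def propose(x_prev, mode, omega, T, Qw_diag, rng):
    """Regime-conditioned transitional prior: pick F1 or F2 by the mode variable."""
    F = cv_matrix(T) if mode == 1 else ct_matrix(omega, T)
    return F @ x_prev + rng.normal(0.0, np.sqrt(Qw_diag))  # x_t ~ N(F x_{t-1}, Q_w)
```

As a sanity check, $F_2 \to F_1$ as $\Omega \to 0$, and a quarter turn ($\Omega T = \pi/2$) rotates the velocity vector by $90$ degrees.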
\begin{figure}[p] \centering \subfloat [MMPF estimate ] {\label{fig:MMPF_xy_plot}\includegraphics[scale=0.4]{MMPF_fig/colour/MMPF_xy_plot} }\\ \subfloat [PF estimate ] { \label{fig:PF_xy_plot}\includegraphics[scale=0.4]{MMPF_fig/colour/PF_xy_plot} } \subfloat [IMM-EKF estimate ] { \label{fig:IMMEKF_xy_plot}\includegraphics[scale=0.4]{MMPF_fig/colour/IMMEKF_xy_plot} }\\ \caption{Target's true $xy$ track and its estimated tracks obtained using MMPF, PF and IMM-EKF: The MMPF estimate is more accurate than the PF and IMM-EKF estimates. The PF estimate diverges completely during the maneuver of the target. (The PF can track properly only with a larger number of particles, whereas the MMPF achieves the same performance with fewer particles.)} \label{fig:Ch_MMPF_xy_plot} \end{figure}
\begin{figure}[p] \centering \subfloat [MMPF estimate ] { \label{fig:MMPF_mode_prob}\includegraphics[scale=0.4]{MMPF_fig/colour/MMPF_mode_prob} } \subfloat [IMM-EKF estimate ] { \label{fig:IMMEKF_mode_prob} \includegraphics[scale=0.4]{MMPF_fig/colour/IMMEKF_mode_prob} }\\ \caption{Mode probabilities estimated using MMPF and IMM-EKF: The error is smaller for the MMPF than for the IMM-EKF; the mode probabilities calculated by the MMPF match the true mode probabilities more closely. Thus the MMPF mode probabilities can give information about the current target motion model.} \label{fig:Ch_MMPF_mode_prob} \end{figure}
\begin{figure}[p] \centering \subfloat [MMPF estimate] {\label{fig:MMPF_MSE}\includegraphics[scale=0.4]{MMPF_fig/colour/MMPF_MSE} }\\ \subfloat [PF estimate] {\label{fig:PF_MSE}\includegraphics[scale=0.4]{MMPF_fig/colour/PF_MSE} } \subfloat [IMM-EKF estimate] {\label{fig:IMMEKF_MSE}\includegraphics[scale=0.4]{MMPF_fig/colour/IMMEKF_MSE} }\\ \caption{MSE of the position estimate for 100 Monte Carlo runs obtained using MMPF, PF and IMM-EKF: MMPF and IMM-EKF have similar performance, but the PF estimates diverge.
(The PF can track properly only with a larger number of particles, whereas the MMPF achieves the same performance with fewer particles.)} \label{fig:Ch_MMPF_MSE} \end{figure}
\begin{figure}[p] \centering \subfloat [MMPF estimate ] {\includegraphics[scale=0.4]{MMPF_fig/colour/MMPF_x_state} }\\ \subfloat [PF estimate ] {\includegraphics[scale=0.4]{MMPF_fig/colour/PF_x_state} } \subfloat [IMM-EKF estimate] {\includegraphics[scale=0.4]{MMPF_fig/colour/IMMEKF_x_state} }\\ \caption{Target's true state $x$ and its estimates obtained using MMPF, PF and IMM-EKF: The performance of MMPF and IMM-EKF in estimating state $x$ is similar. The PF estimates diverge.} \label{fig:MMPF_x_state} \end{figure}
\begin{figure}[p] \centering \subfloat [MMPF estimate ] {\includegraphics[scale=0.4]{MMPF_fig/colour/MMPF_y_state} }\\ \subfloat [PF estimate ] {\includegraphics[scale=0.4]{MMPF_fig/colour/PF_y_state} } \subfloat [IMM-EKF estimate] {\includegraphics[scale=0.4]{MMPF_fig/colour/IMMEKF_y_state} }\\ \caption{Target's true state $y$ and its estimates obtained using MMPF, PF and IMM-EKF: The performance of MMPF and IMM-EKF in estimating state $y$ is similar. The PF estimates diverge.} \label{fig:MMPF_y_state} \end{figure}
\begin{figure}[p] \centering \subfloat [MMPF estimate ] {\includegraphics[scale=0.4]{MMPF_fig/colour/MMPF_vx_state} }\\ \subfloat [PF estimate ] {\includegraphics[scale=0.4]{MMPF_fig/colour/PF_vx_state} } \subfloat [IMM-EKF estimate] {\includegraphics[scale=0.4]{MMPF_fig/colour/IMMEKF_vx_state} }\\ \caption{Target's true state $v_x$ and its estimates obtained using MMPF, PF and IMM-EKF: The velocity estimates of the MMPF are better than those of the IMM-EKF.
The PF estimates of $v_x$ diverge.} \label{fig:MMPF_vx_state} \end{figure}
\begin{figure}[p] \centering \subfloat [MMPF estimate ] {\includegraphics[scale=0.4]{MMPF_fig/colour/MMPF_vy_state} }\\ \subfloat [PF estimate ] {\includegraphics[scale=0.4]{MMPF_fig/colour/PF_vy_state} } \subfloat [IMM-EKF estimate] {\includegraphics[scale=0.4]{MMPF_fig/colour/IMMEKF_vy_state} }\\ \caption{Target's true state $v_y$ and its estimates obtained using MMPF, PF and IMM-EKF: The velocity estimates of the MMPF are better than those of the IMM-EKF. The PF estimates of $v_y$ diverge.} \label{fig:MMPF_vy_state} \end{figure}
\section{Summary}
Targets can deviate abruptly from one type of motion to another, which is difficult to represent using a single kinematic model. Particle filters can track maneuvering targets using the constant velocity model alone only by increasing the number of particles. The number of particles required for tracking highly maneuvering targets can be considerably reduced by incorporating multiple kinematic models. Thus the multiple model particle filter (MMPF) proposes particles using multiple models. The model used by a particular particle is determined by its regime/mode variable. These mode variables switch between the models according to the transition probability matrix. The particles with the correct mode have large likelihood and are selected more often during resampling. Thus the MMPF filters out and multiplies the particles which are closer to the true dynamics of the target and uses them efficiently. Hence fewer particles are enough to track highly maneuvering targets. The simulations show that the MMPF has better tracking capability than the standard PF and the interacting multiple model extended Kalman filter (IMM-EKF).
\chapter{Monte Carlo Joint Probabilistic Data Association Filter (MC-JPDAF)} \label{chap:MC-JPDAF}
Bar-Shalom et al. \cite{14} developed the Joint Probabilistic Data Association Filter (JPDAF) for solving the data association problem in multi-target tracking.
It is the most widely applied method for multi-target tracking under data association uncertainty. The Monte Carlo Joint Probabilistic Data Association Filter (MC-JPDAF) was developed by Vermaak et al. \cite{15} for solving the data association problem in multi-target tracking within the particle filter framework. It incorporates clutter and missing measurements, and also measurements from multiple observers. The data association problem arises from the lack of information at the observer about the proper association between the targets and the received measurements. The problem becomes more involved when the targets move close together and there are clutter and missed target detections at the observer. In the literature, there are various other strategies to solve the data association problem, such as Multiple Hypothesis Tracking (MHT) and the Nearest Neighbour Standard Filter (NNSF). MHT keeps track of all possible association hypotheses over time. Its computational complexity increases with time, since the number of hypotheses grows exponentially. The NNSF associates each measurement with the nearest target and neglects many other feasible hypotheses. The JPDAF considers all possible hypotheses at each time step. The infeasible hypotheses are discarded using a gating procedure to reduce the computational complexity. It calculates the posterior probability of the remaining hypotheses. The filtered estimate for each hypothesis is calculated, and the estimates are combined by weighting each with its corresponding posterior hypothesis probability. For estimation in the extended Kalman filter framework, the JPDAF relies on linear Gaussian models for the evaluation of target measurement hypotheses. Non-linear models can be accommodated by suitable linearization using the EKF, but its performance degrades as the non-linearity becomes severe. The MC-JPDAF combines the JPDAF with the particle filtering technique to accommodate non-linear and non-Gaussian models.
The remaining part of the chapter explores the MC-JPDAF and is organized as follows. Section \ref{sec:Model_Description} describes the hypothesis models for the target and measurement association, the model for the association prior and the likelihood model. Section \ref{sec:MCJPDAF} describes the MC-JPDAF: the general JPDAF framework is described first and the MC-JPDAF algorithm is explained afterwards. \section{Model Description} \label{sec:Model_Description} This section describes the target and measurement models, the two types of data association hypothesis and the conversion between them. \subsection{Target model} The number of targets $K$ is assumed to be known and fixed. The state of target $k$ at time $t$ is represented by $\mathbf{x}_{k,t},k=1,2,\ldots,K$. The combined state of all targets at time $t$ is represented by $\mathbf{x}_t=\{\mathbf{x}_{1,t},\mathbf{x}_{2,t},\ldots,\mathbf{x}_{K,t}\}$. Each target has independent Markov dynamics $p_k(\mathbf{x}_{k,t}|\mathbf{x}_{k,t-1})$. Hence the dynamics of the combined state factorizes over the individual targets \begin{equation} p(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\displaystyle\prod_{k=1}^{K} p_k(\mathbf{x}_{k,t}|\mathbf{x}_{k,t-1}) \end{equation} \subsection{Measurement and data association model} It is assumed that there are $N_o$ observers whose locations are given by $P_0^1,P_0^2,P_0^3,\ldots,P_0^{N_o}$. The observers are assumed to be static. The total number of measurements from an observer $i$ at a given time is denoted by $M^i$, which can vary with time due to missed target measurements and clutter measurements. The measurements from a given observer $i$ are denoted by $\mathbf{y}^i=(\mathbf{y}_1^i,\mathbf{y}_2^i,\mathbf{y}_3^i,\ldots,\mathbf{y}_{M^i}^i)$. The combined set of measurements from all $N_o$ observers is denoted by $\mathbf{y}=(\mathbf{y}^1,\mathbf{y}^2,\mathbf{y}^3,\ldots,\mathbf{y}^{N_o})$. Clutter measurements occur due to multipath effects, observer errors, etc.
It is also assumed that every measurement at an observer has only one source and that no more than one measurement can originate from a single target. A target may also go undetected, all the measurements may be clutter, and there may be no measurements at a particular time. The data association is represented using a set of association variables. There are two types of representation for the data association hypothesis. \begin{enumerate} \item Measurement-to-Target association ($M{\rightarrow}T$) \item Target-to-Measurement association ($T{\rightarrow}M$) \end{enumerate} Both carry the same information and there is a one-to-one mapping between them; each can be converted into the other. \subsubsection{Measurement-to-Target association ($M{\rightarrow}T$) hypothesis} It is denoted by $\lambda=(\lambda^1,\lambda^2,\ldots,\lambda^{N_o})$, where $\lambda^i=(\mathbf{r}^i,M_C^i,M_T^i)$ is the hypothesis for the measurements from observer $i$. The hypothesis $\lambda^i$ indicates that there are $M_C^i$ clutter measurements and $M_T^i$ target-detected measurements. The sum of $M_C^i$ and $M_T^i$ gives the total number of measurements $M^i$ at observer $i$ \begin{equation} M^i=M_C^i+M_T^i. \end{equation} The measurements are indexed from $1$ to $M^i$ and the targets are indexed from $1$ to $K$. The association vector $\mathbf{r}^i=(r_1^i,r_2^i,\ldots,r_{M^i}^i)$ gives, for each of the measurements $1$ to $M^i$, the index of the target that caused it. The association vector at observer $i$ is given by \begin{equation} r_j^i= \begin{cases} 0 \hfill \text{ if measurement $j$ is due to clutter}\\ k \hfill \text{ if measurement $j$ is due to target $k$} \end{cases} \end{equation} \textit{Example }: $\mathbf{r}^i=(3,4,0,1,0,5)$ Here there are $M^i=6$ measurements, of which the third and fifth are due to clutter. The detected targets are $1,3,4$ and $5$. The first measurement corresponds to target $3$ and the second measurement corresponds to target $4$.
The fourth measurement corresponds to target $1$ and the sixth measurement corresponds to target $5$. \subsubsection{Target-to-Measurement association ($T{\rightarrow}M$) hypothesis} It is denoted by $\tilde{\lambda}=(\tilde{\lambda}^1,\tilde{\lambda}^2,\ldots,\tilde{\lambda}^{N_o})$, where $\tilde{\lambda}^i=(\mathbf{\tilde{r}}^i,M_C^i,M_T^i)$ is the target-to-measurement association hypothesis at observer $i$. It is similar to the $M{\rightarrow}T$ association hypothesis except for the association vector $\tilde{\mathbf{r}}^i$. The association vector $\tilde{\mathbf{r}}^i=(\tilde{r}_1^i,\tilde{r}_2^i,\ldots,\tilde{r}_{K}^i)$ gives the measurements corresponding to targets $1$ to $K$. Missed target detections are denoted by $0$. The association vector at observer $i$ is given by \begin{equation} \tilde{r}_k^i= \begin{cases} 0 \hfill \text{ if target $k$ is undetected}\\ j\in(1,\ldots,M^i) \hfill \text{ if target $k$ generated measurement $j$} \end{cases} \end{equation} \textit{Example }: $\mathbf{\tilde{r}}^i=(2,4,0,1,5)$ The above association hypothesis denotes that there are $K=5$ targets, of which the third is undetected. The first target corresponds to the second measurement and the second target corresponds to the fourth measurement. The fourth target corresponds to the first measurement and the fifth target corresponds to the fifth measurement. \subsubsection{Conversion between $M{\rightarrow}T$ and $T{\rightarrow}M$ hypotheses} Under the previously discussed assumptions, both representations are equivalent and carry the same information; one can be uniquely converted into the other. The pseudo code for the conversion between $M{\rightarrow}T$ and $T{\rightarrow}M$ hypotheses is given in Table \ref{tab:T2M} and Table \ref{tab:M2T}.
\begin{table}[h] \caption{$M{\rightarrow}T$ to $T{\rightarrow}M$ conversion} \centering \begin{tabular}{l} \hline \begin{minipage}{4.5in} \vskip 4pt $[(\mathbf{\tilde{r}}^i,M_C^i,M_T^i)]$ = $T{\rightarrow}M$ CONVERSION $[(\mathbf{r}^i,M_C^i,M_T^i),K,M^i]$ \begin{itemize} \item $\mathbf{\tilde{r}}^i = zeros(1,K)$. \item FOR $m=1:M^i$, \begin{itemize} \item IF($\mathbf{r}_m^i\neq0$) \begin{itemize} \item $\mathbf{\tilde{r}}^i_{\mathbf{r}_m^i} = m$ \end{itemize} \item END IF \end{itemize} \item END FOR \end{itemize} \vskip 4pt \end{minipage} \\ \hline \end{tabular} \label{tab:T2M} \end{table} \begin{table}[h] \caption{ $T{\rightarrow}M$ to $M{\rightarrow}T$ conversion} \centering \begin{tabular}{l} \hline \begin{minipage}{4.5in} \vskip 4pt $[(\mathbf{r}^i,M_C^i,M_T^i)]$ = $M{\rightarrow}T $ CONVERSION $[(\mathbf{\tilde{r}}^i,M_C^i,M_T^i),K,M^i]$ \begin{itemize} \item $\mathbf{r}^i = zeros(1,M^i)$. \item FOR $k=1:K$, \begin{itemize} \item IF($\mathbf{\tilde{r}}_k^i\neq0$) \begin{itemize} \item $\mathbf{r}^i_{\mathbf{\tilde{r}}_k^i} = k$ \end{itemize} \item END IF \end{itemize} \item END FOR \end{itemize} \vskip 4pt \end{minipage} \\ \hline \end{tabular} \label{tab:M2T} \end{table} \textit{Example }: $[(\mathbf{r}^i=\{0,3,1,2\}),K=3]$=CONVERSION $[(\mathbf{\tilde{r}}^i=\{3,4,2\}),M^i=4]$ \subsection{Association prior} The prior distribution of association hypothesis is assumed independent of state and past values of the association hypothesis. 
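For illustration, the conversion pseudo code in Tables \ref{tab:T2M} and \ref{tab:M2T} can be sketched in Python as follows (a minimal sketch; the function names are ours, not from \cite{15}):

```python
def m2t_to_t2m(r, K):
    """M->T vector r (length M^i, entries are 1-based target indices,
    0 = clutter) converted to the T->M vector r_tilde (length K, 0 = miss)."""
    r_tilde = [0] * K
    for m, target in enumerate(r, start=1):
        if target != 0:
            r_tilde[target - 1] = m  # target 'target' generated measurement m
    return r_tilde


def t2m_to_m2t(r_tilde, M):
    """Inverse conversion: T->M vector r_tilde to the M->T vector r."""
    r = [0] * M
    for k, meas in enumerate(r_tilde, start=1):
        if meas != 0:
            r[meas - 1] = k  # measurement 'meas' was generated by target k
    return r
```

On the worked example above, `m2t_to_t2m([0, 3, 1, 2], 3)` gives `[3, 4, 2]` and `t2m_to_m2t([3, 4, 2], 4)` recovers `[0, 3, 1, 2]`, confirming the one-to-one mapping.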
The prior distribution at observer $i$ can be written as \begin{align} p(\tilde{\lambda}^i) & = p(\tilde{\mathbf{r}}^i,M_C^i,M_T^i) \\ & = p(\tilde{\mathbf{r}}^i\mid M_T^i,M_C^i)p(M_T^i,M_C^i) \\ & = p(\tilde{\mathbf{r}}^i\mid M_T^i,M_C^i)p(M_T^i) p(M_C^i) \end{align} where the numbers of target and clutter measurements are assumed independent. The number of valid hypotheses conditional on the number of target and clutter measurements is given by \begin{equation} N_{\tilde{\lambda}^i}(M_C^i,M_T^i)= {^K\mathrm{C}_{M_T^i}} {^{M^i}\mathrm{P}_{M_T^i}} \end{equation} and follows from the number of ways of choosing $M_T^i$ targets from the $K$ targets, multiplied by the number of possible associations between the $M^i$ measurements and the $M_T^i$ target detections. The prior for the association vector is assumed to be uniform over all valid hypotheses and is given by \begin{equation} p(\tilde{\mathbf{r}}^i\mid M_T^i,M_C^i)= {[N_{\tilde{\lambda}^i}(M_C^i,M_T^i)]}^{-1} \end{equation} The number of clutter measurements is assumed to follow a Poisson distribution with mean ${\lambda}_C^i = \mu^i \tilde{V}^i$, where $\tilde{V}^i$ is the volume of the space observed by the sensor and $\mu^i$ is the spatial density of the clutter. The number of target measurements is assumed to follow a binomial distribution. \begin{eqnarray} p(M_C^i)& = & {({\lambda}_C^i)}^{M_C^i}\exp(-{\lambda}_C^i)/M_C^i! \\ p(M_T^i)& = & \displaystyle \binom{K}{M_T^i}P_D^{M_T^i}(1-P_D)^{K-M_T^i} \end{eqnarray} From an implementation point of view, a sequentially factorized form of the association prior is used. It allows the association prior to be calculated directly from a given target-to-measurement association hypothesis.
\begin{eqnarray} p(\tilde{\lambda}^i)=p(M_C^i)\displaystyle\prod_{k=1}^{K}p(\tilde{r}_k^{i}\mid\tilde{\mathbf{r}}_{1:k-1}^{i}) \label{eq:association_prior1} \end{eqnarray} where \begin{equation} p(\tilde{r}_k^{i}=j\mid\tilde{\mathbf{r}}_{1:k-1}^{i})\propto \begin{cases} 1-P_D \;\qquad\text{if $j=0$,}\\ 0 \;\qquad\text{if $j>0$ and $j \in \{\tilde{r}_1^i,\ldots,\tilde{r}_{k-1}^i\}$},\\ \frac{P_D}{M_k^i} \;\qquad \text{otherwise.} \end{cases} \label{eq:association_prior2} \end{equation} \section{Monte Carlo JPDAF} \label{sec:MCJPDAF} In JPDAF, the distributions of interest are the marginal filtering distributions for each of the targets rather than the joint distribution. The marginal filtering distribution of each target is updated recursively using the recursive Bayesian estimator. The prediction step is performed independently for each target. Due to the uncertainty in the data association, the update step cannot be performed independently for each target; hence a soft assignment of targets to measurements is performed. JPDAF enumerates all possible hypotheses at each time step, discards the infeasible ones using a gating procedure to reduce the computational complexity, and calculates the posterior probability of the remaining hypotheses. For estimation in the Kalman filter framework, JPDAF relies on linear Gaussian models to evaluate the target measurement hypotheses. Non-linear models can be accommodated by suitable linearization using the EKF, but its performance degrades as the non-linearity becomes severe. MC-JPDAF implements the JPDAF using Monte Carlo techniques to accommodate non-linear and non-Gaussian models. In this section, the general JPDAF framework and its Monte Carlo implementation, the MC-JPDAF, are discussed. \subsection{General JPDAF framework} JPDAF is a sub-optimal method for data association in tracking multiple targets under target measurement uncertainty. It assumes independent targets.
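The sequentially factorized prior of \eqref{eq:association_prior1} and \eqref{eq:association_prior2} can be sketched in Python as follows. This is an illustrative sketch only: we interpret $M_k^i$ as the number of measurements still unassigned when target $k$ is processed (so that the per-target factors remain normalized); the function name and this interpretation are ours.

```python
from math import exp, factorial

def association_prior(r_tilde, M, P_D, lam_C):
    """Sequential association prior p(lambda~) for one observer.

    r_tilde : T->M association vector (0 = missed detection)
    M       : number of measurements at the observer
    P_D     : probability of detection, lam_C : mean clutter count
    M_k is taken as the number of measurements still unassigned (assumption).
    """
    used = set()
    p = 1.0
    for j in r_tilde:
        if j == 0:
            p *= 1.0 - P_D                 # target undetected
        elif j in used:
            return 0.0                     # measurement already claimed
        else:
            p *= P_D / (M - len(used))     # P_D / M_k
            used.add(j)
    M_C = M - len(used)                    # remaining measurements are clutter
    return p * lam_C ** M_C * exp(-lam_C) / factorial(M_C)
```

A hypothesis assigning the same measurement to two targets gets prior zero, as required by the second case of \eqref{eq:association_prior2}.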
The recursive Bayesian estimation for multiple targets proceeds similarly to the single-target estimation previously discussed in Table \ref{tab:Recursive_Bayesian}. Estimation proceeds independently for each target $k$ except in the update step, where the likelihood $p_k(\mathbf{y}_{t}|\mathbf{x}_{k,t})$ cannot be calculated independently for each target due to the target data association uncertainty. At each time step $t$, JPDAF solves this data association problem by a soft assignment of targets to measurements according to the posterior marginal association probability $\beta_{jk}^i$, \begin{equation} \beta_{jk}^i=p(\mathbf{\tilde{r}}_{k,t}^{i}=j\mid \mathbf{y}_{1:t}) \end{equation} where $\beta_{jk}^i$ is the posterior probability that measurement $j$ is associated with target $k$ and $\beta_{0k}^i$ is the posterior probability of target $k$ being undetected. JPDAF uses the posterior marginal association probabilities to define the likelihood of target $k$ as \begin{equation} p_k(\mathbf{y}_{t}|\mathbf{x}_{k,t})=\displaystyle\prod_{i=1}^{N_o}\left[\beta_{0k}^i+\displaystyle\sum_{j=1}^{M^i}\beta_{jk}^ip_T^i(\mathbf{y}_{j,t}^i\mid \mathbf{x}_{k,t})\right] \end{equation} Here the likelihood of each target is assumed to be independent over the observers. The likelihood of the target with respect to a given observer is a mixture of the likelihoods for the various target-to-measurement associations, weighted by their posterior marginal association probabilities. The posterior marginal association probability $\beta_{jk}^i$ is computed by summing the posterior probabilities of all valid joint association hypotheses in which the same association event exists.
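The soft-assignment likelihood above is a product over observers of association mixtures, which can be sketched directly in Python (illustrative function name; inputs are the $\beta$'s and per-measurement likelihoods already evaluated at a given state $\mathbf{x}_{k,t}$):

```python
def target_likelihood(beta, meas_lik):
    """JPDAF soft-assignment likelihood of one target state x_k.

    beta     : per observer, the list [beta_0k, beta_1k, ..., beta_Mk]
    meas_lik : per observer, the list [p_T(y_1|x_k), ..., p_T(y_M|x_k)]
    Returns the product over observers of
    beta_0k + sum_j beta_jk * p_T(y_j | x_k).
    """
    L = 1.0
    for b, lik in zip(beta, meas_lik):
        L *= b[0] + sum(b_j * l_j for b_j, l_j in zip(b[1:], lik))
    return L
```

For a single observer with one measurement, $\beta_{0k}=0.2$, $\beta_{1k}=0.8$ and $p_T=0.5$ give a likelihood of $0.2 + 0.8\times0.5 = 0.6$.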
\begin{align} \beta_{jk}^i & = p(\mathbf{\tilde{r}}_{k,t}^{i}=j\mid \mathbf{y}_{1:t})\\ & = \displaystyle\sum_{\{\tilde{\lambda}_t^i:\tilde{r}_{k,t}^i=j\}} p(\tilde{\lambda}_t^i\mid\mathbf{y}_{1:t}) \end{align} The joint association probability $p(\tilde{\lambda}_t^i\mid\mathbf{y}_{1:t})$ can be expressed as \begin{align} p(\tilde{\lambda}_t^i\mid\mathbf{y}_{1:t}) & = p(\tilde{\lambda}_t^i\mid\mathbf{y}_t, \mathbf{y}_{1:t-1})\\ & = \frac{1}{c}p(\mathbf{y}_t\mid\tilde{\lambda}_t^i,\mathbf{y}_{1:t-1})p(\tilde{\lambda}_t^i\mid \mathbf{y}_{1:t-1})\\ & \propto p(\tilde{\lambda}_t^i) p(\mathbf{y}_t\mid\tilde{\lambda}_t^i,\mathbf{y}_{1:t-1})\\ & \propto p(\tilde{\lambda}_t^i)\displaystyle\prod_{j=1}^{M^i}p_{r_{j,t}^i}(\mathbf{y}_{j,t}^i\mid\mathbf{y}_{1:t-1}) \label{eq:JointAsscnProb1} \end{align} The clutter likelihood model for the observer is assumed to be uniform over the measurement space of volume $V^i$, where $V^i=2\pi R_{max}^i$ and $R_{max}^i$ is the maximum range of sensor $i$. Since there are $M_C^i$ clutter measurements, \eqref{eq:JointAsscnProb1} becomes \begin{equation} p(\tilde{\lambda}_t^i\mid\mathbf{y}_{1:t})\propto p(\tilde{\lambda}_t^i) (V^i)^{-M_C^i}\displaystyle\prod_{j\in \mathcal{I}^i}p_{r_{j,t}^i}(\mathbf{y}_{j,t}^i\mid \mathbf{y}_{1:t-1}) \label{eq:JointAsscnProb2} \end{equation} where $\mathcal{I}^i=\{ j\in \{1,\ldots,M^i\}:r_j^i\neq0\}$. The number of clutter measurements in each hypothesis is calculated by converting the $T{\rightarrow}M$ hypotheses to $M{\rightarrow}T$ hypotheses and counting the zero entries in each. The association prior $p(\tilde{\lambda}_t^i)$ is calculated using \eqref{eq:association_prior1} and \eqref{eq:association_prior2}. $p_k(\mathbf{y}_{j,t}^i\mid \mathbf{y}_{1:t-1})$ is the predictive likelihood for measurement $j$ associated with target $k$. The $M{\rightarrow}T$ hypothesis representation makes it easy to obtain the target $r_{j,t}^i$ associated with measurement $j$ in \eqref{eq:JointAsscnProb2}.
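The computation of the joint association posteriors \eqref{eq:JointAsscnProb2} and the marginals $\beta_{jk}^i$ at one observer can be sketched in Python as follows. This is a simplified sketch: for brevity a uniform hypothesis prior is assumed (so $p(\tilde{\lambda}_t^i)$ cancels on normalization), and the function name is ours.

```python
def association_posteriors(hyps, pred_lik, M, V):
    """Normalized posteriors of T->M hypotheses and the marginals beta.

    hyps     : list of T->M association vectors r_tilde (length K, 0 = miss)
    pred_lik : K x M matrix, pred_lik[k][j] = p_k(y_{j+1} | y_{1:t-1})
    M, V     : number of measurements and measurement-space volume
    Each weight is V^{-M_C} times the product of predictive likelihoods
    of the detected targets (uniform hypothesis prior assumed).
    """
    weights = []
    for r_tilde in hyps:
        n_det = sum(1 for j in r_tilde if j != 0)
        w = V ** (-(M - n_det))              # clutter factor V^{-M_C}
        for k, j in enumerate(r_tilde):
            if j != 0:
                w *= pred_lik[k][j - 1]
        weights.append(w)
    s = sum(weights)
    weights = [w / s for w in weights]
    K = len(hyps[0])
    beta = [[sum(w for w, r in zip(weights, hyps) if r[k] == j)
             for j in range(M + 1)] for k in range(K)]
    return weights, beta
```

Since every hypothesis assigns target $k$ to exactly one entry $j\in\{0,\ldots,M\}$, each row of $\beta$ sums to one, as a marginal probability must.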
The predictive likelihood can be calculated using the following integral. \begin{equation} p_k(\mathbf{y}_{j,t}^i\mid \mathbf{y}_{1:t-1})= \int p_T^i(\mathbf{y}_{j,t}^i\mid \mathbf{x}_{k,t})p_k(\mathbf{x}_{k,t}\mid \mathbf{y}_{1:t-1})d\mathbf{x}_{k,t} \end{equation} The recursive Bayesian estimation of the general JPDAF framework is summarized in Table \ref{tab:General JPDAF}, following \cite{15}. \begin{table}[ht] \caption{General JPDAF Algorithm \cite{15}} \centering \resizebox{!}{4in} { \begin{tabular}{l} \hline \begin{minipage}{7in} \vskip 4pt \begin{enumerate} \item Prediction step: FOR $k=1..K$, calculate the a priori pdf \begin{equation} p_k(\mathbf{x}_{k,t}|\mathbf{y}_{1:t-1})=\int p_k(\mathbf{x}_{k,t}|\mathbf{x}_{k,t-1})p_k(\mathbf{x}_{k,t-1}|\mathbf{y}_{1:t-1})d\mathbf{x}_{k,t-1} \end{equation} \item FOR $k=1..K$, calculate the target likelihood by the following method: \begin{itemize} \item FOR $k=1..K$, $i=1..N_o$, $j=1..M^i$, calculate the predictive likelihood \begin{equation} p_k(\mathbf{y}_{j,t}^i\mid \mathbf{y}_{1:t-1}) \approx \int p_T^i(\mathbf{y}_{j,t}^i\mid \mathbf{x}_{k,t})p_k(\mathbf{x}_{k,t}\mid \mathbf{y}_{1:t-1})d\mathbf{x}_{k,t} \label{eq:pred_like} \end{equation} \item FOR observer $i=1..N_o$, enumerate all valid target to measurement association hypotheses $\tilde{\lambda}^i_t$. Convert the $T{\rightarrow}M$ hypotheses to $M{\rightarrow}T$ hypotheses and calculate the number of clutter measurements $M_C^i$ in each hypothesis. \item FOR observer $i=1..N_o$, calculate the association prior of all hypotheses.
\begin{equation} p(\tilde{r}_k^{i}=j\mid\tilde{\mathbf{r}}_{1:k-1}^{i})\propto \begin{cases} 1-P_D \;\qquad\text{if $j=0$}\\ 0 \;\qquad\text{if $j>0$ and $j \in \{\tilde{r}_1^i,\ldots,\tilde{r}_{k-1}^i\}$}\\ \frac{P_D}{M_k^i} \;\qquad \text{otherwise} \end{cases} \end{equation} \begin{eqnarray} p(\tilde{\lambda}^i)=p(M_C^i)\displaystyle\prod_{k=1}^{K}p(\tilde{r}_k^{i}\mid\tilde{\mathbf{r}}_{1:k-1}^{i}) \end{eqnarray} \item FOR $i=1..N_o$, compute the joint association posterior probability and normalize it at each observer $i$. \begin{eqnarray} p(\tilde{\lambda}_t^i\mid\mathbf{y}_{1:t}) & \propto p(\tilde{\lambda}_t^i) (V^i)^{-M_C^i}\displaystyle\prod_{j\in \mathcal{I}^i}p_{r_{j,t}^i}(\mathbf{y}_{j,t}^i\mid \mathbf{y}_{1:t-1}) \end{eqnarray} \begin{eqnarray} \displaystyle\sum p(\tilde{\lambda}_t^i\mid\mathbf{y}_{1:t}) & = 1 \qquad \text{at each observer $i$.} \end{eqnarray} \item FOR $k=1..K$, $i=1..N_o$, $j=0..M^i$, calculate the marginal association posterior probability \begin{equation} \beta_{jk}^i = \displaystyle\sum_{\{\tilde{\lambda}_t^i:\tilde{r}_{k,t}^i=j\}} p(\tilde{\lambda}_t^i\mid\mathbf{y}_{1:t}) \end{equation} \item FOR $k=1..K$, compute the target likelihood. \begin{equation} p_k(\mathbf{y}_{t}|\mathbf{x}_{k,t})=\displaystyle\prod_{i=1}^{N_o}\left[\beta_{0k}^i+\displaystyle\sum_{j=1}^{M^i}\beta_{jk}^ip_T^i(\mathbf{y}_{j,t}^i\mid \mathbf{x}_{k,t})\right] \end{equation} \end{itemize} \item Update step: FOR $k=1..K$, calculate the posterior pdf. \begin{equation} p_k(\mathbf{x}_{k,t}|\mathbf{y}_{1:t})=\dfrac{p_k(\mathbf{y}_{t}|\mathbf{x}_{k,t})p_k(\mathbf{x}_{k,t}|\mathbf{y}_{1:t-1})}{p_k(\mathbf{y}_{t}|\mathbf{y}_{1:t-1})} \end{equation} \end{enumerate} \vskip 4pt \end{minipage} \\ \hline \end{tabular} } \label{tab:General JPDAF} \end{table} \subsection{Monte Carlo implementation of JPDAF} Similar to JPDAF, the distributions of interest in MC-JPDAF are the marginal distributions for each of the targets.
MC-JPDAF implements the general JPDAF in the particle filter approach. It approximates the marginal filtering distribution of each target using particles. Target $k$ is represented using $N$ samples, $\{\mathbf{x}_{k,t}^{(n)},w_{k,t}^{(n)}\}_{n=1}^N$. The recursive Bayesian estimation in JPDAF for each target $k$ is implemented using the sequential importance sampling of the standard particle filter. The new samples at every time step are obtained using the proposal distribution, \begin{equation} \mathbf{x}_{k,t}^{(n)}\sim q_k(\mathbf{x}_{k,t}\mid \mathbf{x}_{k,t-1}^{(n)},\mathbf{y}_t) \end{equation} The importance weights $w_{k,t}^{(n)}$ are obtained for each target $k$ recursively using sequential importance sampling, similar to \eqref{eq:imp_weights}. \begin{equation} w_{k,t}^{(n)}\varpropto w_{k,t-1}^{(n)}\dfrac{p_k(\mathbf{y}_t|\mathbf{x}_{k,t}^{(n)})p_k(\mathbf{x}_{k,t}^{(n)}|\mathbf{x}_{k,t-1}^{(n)})}{q_k(\mathbf{x}_{k,t}^{(n)}|\mathbf{x}_{k,t-1}^{(n)},\mathbf{y}_{t})} ; \;\qquad\displaystyle \sum_{n=1}^N w_{k,t}^{(n)}=1 \end{equation} The target likelihood $p_k(\mathbf{y}_{t}\mid\mathbf{x}_{k,t}^{(n)})$ is calculated using the algorithm described in Table \ref{tab:General JPDAF}. The integral of equation \eqref{eq:pred_like} is also implemented using sequential importance sampling. \begin{align} p_k(\mathbf{y}_{j,t}^i\mid \mathbf{y}_{1:t-1}) & = \int p_T^i(\mathbf{y}_{j,t}^i\mid \mathbf{x}_{k,t})p_k(\mathbf{x}_{k,t}\mid \mathbf{y}_{1:t-1})d\mathbf{x}_{k,t}\\ & = \int p_T^i(\mathbf{y}_{j,t}^i\mid \mathbf{x}_{k,t})\frac{p_k(\mathbf{x}_{k,t}\mid \mathbf{y}_{1:t-1})}{q_k(\mathbf{x}_{k,t}\mid\mathbf{x}_{k,t-1},\mathbf{y}_t)}q_k(\mathbf{x}_{k,t}\mid\mathbf{x}_{k,t-1},\mathbf{y}_t)d\mathbf{x}_{k,t} \label{eq:pred_like_eval} \end{align} The above integral is similar to equation \eqref{eq:PF_integral}.
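The importance-sampling evaluation of \eqref{eq:pred_like_eval} amounts to a weighted sum over particles, sketched below in Python (illustrative names; the densities are assumed to be evaluated elsewhere and passed in as arrays of per-particle values):

```python
def predictive_weights(w_prev, p_trans, q_prop):
    """alpha^(n) proportional to w_{t-1}^(n) p(x_t^(n)|x_{t-1}^(n)) / q(...),
    normalized to sum to one."""
    alpha = [w * p / q for w, p, q in zip(w_prev, p_trans, q_prop)]
    s = sum(alpha)
    return [a / s for a in alpha]

def predictive_likelihood(alpha, meas_lik):
    """p_k(y_j | y_{1:t-1}) ~ sum_n alpha^(n) p_T(y_j | x^(n))."""
    return sum(a * l for a, l in zip(alpha, meas_lik))
```

Note that with the transitional prior as the proposal ($q=p$), the $\alpha^{(n)}$ reduce to the normalized previous weights $w_{k,t-1}^{(n)}$.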
The Monte Carlo estimate of it can be obtained by generating $N$ samples $\{\mathbf{x}_{k,t}^{(n)}\}_{n=1}^N$ from the proposal distribution $q_k(\mathbf{x}_k\mid\mathbf{x}_{k,t-1},\mathbf{y}_t)$, calculating the summation and normalizing the weights. \begin{align} p_k(\mathbf{y}_{j,t}^i\mid \mathbf{y}_{1:t-1}) & \propto \displaystyle \sum_{n=1}^N \alpha_{k,t}^{(n)} p_T^i(\mathbf{y}_{j,t}^i\mid \mathbf{x}_{k,t}^{(n)}) \end{align} where \begin{equation} \alpha_{k,t}^{(n)}\propto w_{k,t-1}^{(n)}\frac{p_k(\mathbf{x}_{k,t}^{(n)}\mid\mathbf{x}_{k,t-1}^{(n)})}{q_k(\mathbf{x}_{k,t}^{(n)}\mid\mathbf{x}_{k,t-1}^{(n)},\mathbf{y}_t)} ; \;\qquad \displaystyle \sum_{n=1}^N \alpha_{k,t}^{(n)}=1 \end{equation} \subsection{Gating of hypotheses} The number of hypotheses at a given observer $i$ is given by \begin{align} N_{\tilde{\lambda}}& = \displaystyle\sum_{M_T=0}^{\min(K,M^i)} N_{\tilde{\lambda}}(M_C,M_T) \\ & = \displaystyle{\sum_{M_T=0}^{\min(K,M^i)}} {^K\mathrm{C}_{M_T}} {^{M^i}\mathrm{P}_{M_T}} \end{align} The number of hypotheses increases exponentially with the number of targets $K$ and the number of measurements $M^i$. This increases the computational complexity and is almost infeasible for practical scenarios. Hence gating is used to reduce the number of hypotheses to a feasible level. A validation region is calculated for each target $k$ using the available information. All measurements that fall inside the validation region are considered possible measurements, and measurements that fall outside the validation region are considered impossible for target $k$. Hypotheses containing impossible target measurements are ignored. Thus the number of valid hypotheses is reduced. \begin{figure}[t!] \centering {\includegraphics[scale=0.5]{MCJPDAF_fig/gating}} \caption{Gating of measurement: Targets are shown in their measurement space using circles. The ellipses indicate their validation region.
The measurements are shown in squares.} \label{gating} \end{figure} Suppose $\hat{\mathbf{y}}_k=g(\mathbf{x}_k,\mathbf{p_0})$ is the predicted measurement of target $k$; then, under a Gaussian assumption for the likelihood model, the Monte Carlo approximation of the predictive likelihood can be expressed as \begin{align} p_k(\mathbf{y}\mid \mathbf{y}_{1:t-1}) & \approx \displaystyle{\sum_{n=1}^N}\alpha_k^{(n)}\mathcal{N}(\mathbf{y}\mid \hat{\mathbf{y}}_k^{(n)},\Sigma_{\mathbf{y}})\\ & \approx \mathcal{N}(\mu_{\hat{\mathbf{y}}_k},\Sigma_{\hat{\mathbf{y}}_k}) \label{eq:appr_pr_lh} \end{align} where \begin{eqnarray} \mu_{\hat{\mathbf{y}}_k} & = & \displaystyle{\sum_{n=1}^N}\alpha_k^{(n)}\mathbf{g}(\mathbf{x}_k^{(n)},\mathbf{p_0}) \\ \Sigma_{\hat{\mathbf{y}}_k} & = & \Sigma_{\mathbf{y}}+\displaystyle{\sum_{n=1}^N}\alpha_k^{(n)}[\mathbf{g}(\mathbf{x}_k^{(n)},\mathbf{p_0})-\mu_{\hat{\mathbf{y}}_k}][\mathbf{g}(\mathbf{x}_k^{(n)},\mathbf{p_0})-\mu_{\hat{\mathbf{y}}_k}]^T \end{eqnarray} Given a measurement $\mathbf{y}_j$, the squared distance of the measurement with respect to the predicted measurement of target $k$ can be calculated as \begin{equation} d_k^2(\mathbf{y}_j)=(\mathbf{y}_j-\mu_{\hat{\mathbf{y}}_k})^T \Sigma_{\hat{\mathbf{y}}_k}^{-1}(\mathbf{y}_j-\mu_{\hat{\mathbf{y}}_k}) \end{equation} The set of validated measurements for target $k$ is \begin{equation} \mathcal{Y}_k=\{\mathbf{y}_j:d_k^2(\mathbf{y}_j)\leq\varepsilon\} \end{equation} where $\varepsilon$ is the parameter that decides the volume of the validation region. $d_k^2$ is approximately chi-square distributed with degrees of freedom equal to the dimension of $\mathbf{y}_j$. Chi-square hypothesis testing is performed on the proposed target-measurement association hypotheses. A hypothesis is accepted if its chi-square statistic $d_k^2$ satisfies $d_k^2<\chi^2_\alpha$, yielding the set of gated hypotheses $\tilde{\Lambda}_t^i$ at each observer $i$.
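The gating test can be sketched in Python for the two-dimensional range-bearing measurements used here (illustrative function names; the $2\times2$ covariance is inverted in closed form):

```python
from math import log

def mahalanobis2_2d(y, mu, Sigma):
    """Squared distance d_k^2 = (y - mu)^T Sigma^{-1} (y - mu) for a
    2-D measurement with 2x2 covariance Sigma (nested lists)."""
    dx, dy = y[0] - mu[0], y[1] - mu[1]
    a, b = Sigma[0][0], Sigma[0][1]
    c, d = Sigma[1][0], Sigma[1][1]
    det = a * d - b * c
    # inverse of [[a, b], [c, d]] is [[d, -b], [-c, a]] / det
    return (d * dx * dx - (b + c) * dx * dy + a * dy * dy) / det

def gate(y, mu, Sigma, alpha=0.01):
    """Accept measurement y for a target if d^2 <= chi-square critical value.
    For 2 degrees of freedom the chi-square CDF is 1 - exp(-x/2), so the
    critical value at significance alpha is -2 ln(alpha) (about 9.21
    for alpha = 0.01)."""
    return mahalanobis2_2d(y, mu, Sigma) <= -2.0 * log(alpha)
```

The closed-form threshold $-2\ln\alpha$ is a convenient special case for 2-dimensional measurements; for other dimensions a chi-square table or library quantile function would be needed.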
The gating reduces the number of hypotheses to a feasible level. For example, if we consider the situation in Fig.~\ref{gating}, where there are three targets and three measurements, an exhaustive enumeration results in 34 hypotheses, as explained in Table~\ref{tab:all_hyp}. After gating, the number of hypotheses reduces to 5, as shown in Table \ref{tab:gated_hyp}. The summary of the MC-JPDAF with gating is given in Table \ref{tab:MCJPDAF}. \begin{table}[H] \caption{Enumeration of hypotheses for $K=3,$ $M^i=3$} \centering \begin{tabular}{c c c c c} \hline\hline Cases & \multicolumn{3}{c}{Hypotheses} & No. of hypotheses\\ & $\tilde{r}_1$& $\tilde{r}_2$& $\tilde{r}_3$& \\ \hline \hline $M_T=3$, $M_C=0$& $M_p$ & $M_q$& $M_r$&6 \\[1ex] $M_T=2$, $M_C=1$& $M_p$ & $M_q$& $0$&6 \\[-1ex] & $M_p$ & $0$& $M_r$&6 \\[-1ex] & $0$ & $M_q$& $M_r$&6 \\[1ex] $M_T=1$, $M_C=2$& $M_p$ & $0$& $0$&3 \\[-1ex] & $0$ & $M_q$& $0$&3 \\[-1ex] & $0$ & $0$& $M_r$&3 \\[1ex] $M_T=0$, $M_C=3$& $0$ & $0$& $0$&1 \\[1ex] \hline &&&&Total=34\\[1ex] \hline\hline \end{tabular} \label{tab:all_hyp} \end{table} \begin{table}[H] \caption{Hypotheses after gating} \centering \begin{tabular}{ccc} \hline $\tilde{r}_1$ & $\tilde{r}_2$& $\tilde{r}_3$ \\ \hline 0&0&0\\ 0&0&2\\ 0&0&3\\ 3&0&0\\ 3&0&2\\ \hline \end{tabular} \label{tab:gated_hyp} \end{table} \begin{table}[ht] \caption{Monte Carlo JPDAF Algorithm \cite{15}} \centering \resizebox{!}{4in} { \begin{tabular}{l} \hline \begin{minipage}{7in} \vskip 4pt \begin{itemize} \item Prediction step: FOR $k=1..K$, $n=1:N$, draw samples \begin{equation} \mathbf{x}_{k,t}^{(n)}\sim q_k(\mathbf{x}_{k,t}\mid \mathbf{x}_{k,t-1}^{(n)},\mathbf{y}_t) \end{equation} \item Evaluate the predictive weights up to a normalizing constant \begin{equation} \alpha_{k,t}^{(n)}\propto w_{k,t-1}^{(n)}\frac{p_k(\mathbf{x}_{k,t}^{(n)}\mid\mathbf{x}_{k,t-1}^{(n)})}{q_k(\mathbf{x}_{k,t}^{(n)}\mid\mathbf{x}_{k,t-1}^{(n)},\mathbf{y}_t)}
\end{equation} \item Normalize the predictive weights \begin{equation} \displaystyle \sum_{n=1}^N \alpha_{k,t}^{(n)}=1 \end{equation} \item FOR $k=1..K$, $i=1..N_o$, $j=1..M^i$, calculate the predictive likelihood \begin{equation} p_k(\mathbf{y}_{j,t}^i\mid \mathbf{y}_{1:t-1}) \approx \displaystyle \sum_{n=1}^N \alpha_{k,t}^{(n)} p_T^i(\mathbf{y}_{j,t}^i\mid \mathbf{x}_{k,t}^{(n)}) \end{equation} \item FOR observer $i=1..N_o$, enumerate all valid target to measurement association hypotheses $\tilde{\lambda}^i_t$. \item Perform gating on the valid target to measurement hypotheses by the following procedure: \begin{itemize} \item For $k=1..K$, calculate the approximation for the predictive likelihood of target $k$ using \eqref{eq:appr_pr_lh} \begin{eqnarray} p_k(\mathbf{y}\mid \mathbf{y}_{1:t-1}) & \approx & \mathcal{N}(\mu_{\hat{\mathbf{y}}_k},\Sigma_{\hat{\mathbf{y}}_k}) \\ \mu_{\hat{\mathbf{y}}_k} & = & \displaystyle{\sum_{n=1}^N}\alpha_k^{(n)}\mathbf{g}(\mathbf{x}_k^{(n)},\mathbf{p_0}) \\ \Sigma_{\hat{\mathbf{y}}_k} & = & \Sigma_{\mathbf{y}}+\displaystyle{\sum_{n=1}^N}\alpha_k^{(n)}[\mathbf{g}(\mathbf{x}_k^{(n)},\mathbf{p_0})-\mu_{\hat{\mathbf{y}}_k}][\mathbf{g}(\mathbf{x}_k^{(n)},\mathbf{p_0})-\mu_{\hat{\mathbf{y}}_k}]^T \end{eqnarray} \item For $k=1..K$, $i=1..N_o$, $j=1..M^i$, calculate the squared distance $d_k^2(\mathbf{y}_j)$ between the predicted and observed measurements using the measurement innovations. \begin{equation} d_k^2(\mathbf{y}_j)=(\mathbf{y}_j-\mu_{\hat{\mathbf{y}}_k})^T \Sigma_{\hat{\mathbf{y}}_k}^{-1}(\mathbf{y}_j-\mu_{\hat{\mathbf{y}}_k}) \end{equation} \item Perform chi-square hypothesis testing on the proposed target-measurement association hypotheses. Accept a hypothesis if its chi-square statistic $d_k^2$ satisfies the relation $d_k^2<\chi^2_\alpha$ to obtain the set of gated hypotheses $\tilde{\Lambda}_t^i$ at each observer $i$.
\end{itemize} \end{itemize} \vskip 4pt \end{minipage} \\ \hline \end{tabular} } \label{tab:MCJPDAF} \end{table} \begin{table}[ht] \caption{Monte Carlo JPDAF Algorithm (contd.)\cite{15}} \centering \resizebox{!}{4in} { \begin{tabular}{l} \hline \begin{minipage}{7in} \vskip 4pt \begin{itemize} \item Convert the $T{\rightarrow}M$ hypotheses to $M{\rightarrow}T$ hypotheses and calculate the number of clutter measurements $M_C^i$ in each hypothesis. \item FOR observer $i=1..N_o$, calculate the association prior of all hypotheses. \begin{equation} p(\tilde{r}_k^{i}=j\mid\tilde{\mathbf{r}}_{1:k-1}^{i})\propto \begin{cases} 1-P_D \;\qquad\text{if $j=0$} \\ 0 \;\qquad\text{if $j>0$ and $j \in \{\tilde{r}_1^i,\ldots,\tilde{r}_{k-1}^i\}$} \\ \frac{P_D}{M_k^i} \;\qquad \text{otherwise} \end{cases} \end{equation} \begin{eqnarray} p(\tilde{\lambda}^i)=p(M_C^i)\displaystyle\prod_{k=1}^{K}p(\tilde{r}_k^{i}\mid\tilde{\mathbf{r}}_{1:k-1}^{i}) \end{eqnarray} \item FOR $i=1..N_o$, compute the joint association posterior probability and normalize it at each observer $i$. \begin{eqnarray} p(\tilde{\lambda}_t^i\mid\mathbf{y}_{1:t}) & \propto p(\tilde{\lambda}_t^i) (V^i)^{-M_C^i}\displaystyle\prod_{j\in \mathcal{I}^i}p_{r_{j,t}^i}(\mathbf{y}_{j,t}^i\mid \mathbf{y}_{1:t-1}) \end{eqnarray} \begin{eqnarray} \displaystyle\sum p(\tilde{\lambda}_t^i\mid\mathbf{y}_{1:t}) & = 1 \qquad \text{at each observer $i$.} \end{eqnarray} \item FOR $k=1..K$, $i=1..N_o$, $j=0..M^i$, calculate the marginal association posterior probability \begin{equation} \beta_{jk}^i = \displaystyle\sum_{\{\tilde{\lambda}_t^i\in\tilde{\Lambda}_t^i:\tilde{r}_{k,t}^i=j\}} p(\tilde{\lambda}_t^i\mid\mathbf{y}_{1:t}) \end{equation} \item FOR $k=1..K$, compute the target likelihood.
\begin{equation} p_k(\mathbf{y}_{t}\mid\mathbf{x}_{k,t}^{(n)})=\displaystyle\prod_{i=1}^{N_o}\left[\beta_{0k}^i+\displaystyle\sum_{j=1}^{M^i}\beta_{jk}^ip_T^i(\mathbf{y}_{j,t}^i\mid \mathbf{x}_{k,t}^{(n)})\right] \end{equation} \item Update step: FOR $k=1..K$, $n=1..N$, calculate and normalize the particle weights. \begin{equation} w_{k,t}^{(n)}\varpropto w_{k,t-1}^{(n)}\dfrac{p_k(\mathbf{y}_t|\mathbf{x}_{k,t}^{(n)})p_k(\mathbf{x}_{k,t}^{(n)}|\mathbf{x}_{k,t-1}^{(n)})}{q_k(\mathbf{x}_{k,t}^{(n)}|\mathbf{x}_{k,t-1}^{(n)},\mathbf{y}_{t})} ; \;\qquad\displaystyle \sum_{n=1}^N w_{k,t}^{(n)}=1 \end{equation} \item FOR $k=1..K$, if required, resample the particles $\{\mathbf{x}_{k,t}^{(n)}\}_{n=1}^N$ and perform roughening. \end{itemize} \vskip 4pt \end{minipage} \\ \hline \end{tabular} } \label{tab:MCJPDAF_cntd} \end{table} \section{Simulation Results} To verify the effectiveness of the algorithm, the targets' motion scenario and their measurements are simulated according to the given models, and the estimates obtained using the algorithm are compared with the true trajectories. \subsection{Multi-target tracking using MC-JPDAF} There are three independent targets with nearly constant velocity motion. The state vector consists of the positions and velocities of the targets. The state of the $k$-th target at time $t$ is given by \begin{eqnarray} \mathbf{x}_{k,t}= \begin{bmatrix} x_{k,t} & \dot{x}_{k,t} & y_{k,t} & \dot{y}_{k,t} \end{bmatrix}^T \end{eqnarray} The initial true positions of the targets are $(-50,50)$, $(-50,0)$, $(-50,-50)$ in meters and their velocities are $(1,-1.5)$, $(1,0)$, $(1,0.75)$ in meters per second, respectively. The targets move according to a near-constant-velocity model with $\sigma_x=\sigma_y=5\times10^{-4}$. All targets have the state transition model $F$ such that \begin{eqnarray} \mathbf{x}_{k,t} & = & F\mathbf{x}_{k,t-1}+\mathbf{w}_{k,t-1} \end{eqnarray} where $\mathbf{w}_{k,t}$ is the process noise with zero mean and covariance $Q_{w,k}$.
The matrices $F$ and $Q_{w,k}$ are given by \begin{equation} F=\begin{bmatrix} 1&T&0&0\\ 0&1&0&0\\ 0&0&1&T\\ 0&0&0&1\\ \end{bmatrix} \end{equation} \begin{equation} Q_{w,k} = \begin{bmatrix} \sigma_x^2T^3/3 &\sigma_x^2T^2/2 &0 &0\\ \sigma_x^2T^2/2 &\sigma_x^2T &0 &0\\ 0 &0 &\sigma_y^2T^3/3 &\sigma_y^2T^2/2 \\ 0 &0 &\sigma_y^2T^2/2 &\sigma_y^2T \\ \end{bmatrix} \end{equation} where $T$ is the sampling period of the target dynamics. The measurement sensors are located at $(-45,-45)$ and $(45,45)$ meters, respectively. The $k$-th target's range $r_{k}$ and bearing $\theta_{k}$ at time $t$ are available as the measurement $\mathbf{y}_{k,t}^i$ at a time step of $T=1$ at each observer $i$. \begin{eqnarray} \mathbf{y}_{k,t}^i= \begin{bmatrix} r_{k,t}\\ \theta_{k,t}\\ \end{bmatrix} \end{eqnarray} The errors in the range and bearing are such that $\sigma_R=5$ and $\sigma_\theta=0.05$. The maximum range detected by the sensor is $100$~m. The probability of detection of a target is $P_D=0.9$ and the clutter rate is $\lambda_C=5$. The exact association of the measurements to the targets is unknown at the observers. The measurement model $h(\cdot)$ for target $k$ at the $i$-th observer is given by \begin{eqnarray} \mathbf{y}_{k,t}^i= h_k^i(\mathbf{x}_{k,t})+\mathbf{v}_{k,t}^i= \begin{bmatrix} \sqrt{(x_{k,t}-x_o^i)^2+(y_{k,t}-y_o^i)^2}\\ \tan^{-1}\left(\dfrac{y_{k,t}-y_o^i}{x_{k,t}-x_o^i}\right) \end{bmatrix} +\mathbf{v}_{k,t}^i \end{eqnarray} with $\mathbf{p}_0^i=(x_o^i,y_o^i)$. The maximum range of the sensor is $R_{max}^i=100$ and the volume of the measurement space is $V^i=2\pi R_{max}^i$. The measurement error $\mathbf{v}_{k,t}^i$ is uncorrelated and has a zero mean Gaussian distribution with covariance matrix $\Sigma_{\mathbf{y}_k}$. \begin{eqnarray} \Sigma_{\mathbf{y}_k}= \begin{bmatrix} \sigma_R^{2} &0 \\ 0 &\sigma_\theta^2 \\ \end{bmatrix} \end{eqnarray} The measurement errors are assumed to be the same at all the observers.
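The simulation models above (constant-velocity transition and range-bearing measurement) can be sketched in Python as follows; this is a noiseless illustrative sketch with our own helper names, using $T=1$ as in the simulation:

```python
import math

T = 1.0                      # sampling period (s)
F = [[1, T, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, T],
     [0, 0, 0, 1]]           # constant-velocity transition matrix

def propagate(x):
    """One noiseless step x_t = F x_{t-1} for the state [x, vx, y, vy]."""
    return [sum(F[r][c] * x[c] for c in range(4)) for r in range(4)]

def measure(x, obs):
    """Range and bearing of state x seen from observer position obs."""
    dx, dy = x[0] - obs[0], x[2] - obs[1]
    return [math.hypot(dx, dy), math.atan2(dy, dx)]
```

For example, a target at the origin with unit velocities moves to position $(1,1)$ after one step, and a target at $(3,4)$ seen from an observer at the origin has range $5$ and bearing $\tan^{-1}(4/3)$.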
The initial state estimate is assumed to be a Gaussian vector with mean $ \hat{\mathbf{x}}_{k,0}=\mathbf{x}_{k,0}$ and error covariance $P_{k,0} = \mathrm{diag}(5,0.1,5,0.1)$. Hence the initial particles for each target $\{\mathbf{x}_{k,0}^{(n)}\}_{n=1}^N$ were generated from the distribution \begin{equation} \mathbf{x}_{k,0} \sim \mathcal{N}(\mathbf{x}_{k,0},P_{k,0}) \end{equation} In this implementation of the particle filter, the transitional prior, which is a sub-optimal choice of importance density, is used to propose particles. \begin{eqnarray} q(\mathbf{x}_{k,t}^{(n)}|\mathbf{x}_{k,t-1}^{(n)},\mathbf{y}_{t}) & = & p(\mathbf{x}_{k,t}|\mathbf{x}_{k,t-1}^{(n)}) \end{eqnarray} The process noise used for estimation is such that $\sigma_x=\sigma_y=5\times10^{-2}$. The squared distance $d_k^2$ of a measurement with respect to the predicted measurement follows a chi-square distribution with 2 degrees of freedom. The significance level used for the gating of hypotheses is $\alpha=0.01$, for which the chi-square critical value is $\chi^2_\alpha=9.21$. A hypothesis is rejected if its chi-square statistic $d_k^2$ satisfies the relation $d_k^2>\chi^2_\alpha$. A total of $N=100$ particles were used. The simulation was carried out for $20$ Monte Carlo runs and the estimates were obtained. The true trajectories of the targets and their estimates for a single run are shown in Fig.\ref{MCJPDAF_cov_plot}. The ellipses indicate the 2-$\sigma$ region of the estimate covariances. The state estimates of the targets are shown in Fig.\ref{MCJPDAF_state}. The mean square error (MSE) of the position estimates is shown in Fig.\ref{MCJPDAF_MSE}. The results show that MCJPDAF handles data association uncertainty efficiently. It maintained good track of the target states in all the Monte Carlo runs and there were no diverged track estimates. The missing measurements and clutter did not have any significant effect on the estimates.
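The gating rule above is a Mahalanobis-distance test against the chi-square critical value; a minimal sketch follows (the helper name `gate` is ours, and the hard-coded critical value $9.21$ is the one quoted in the text for $\alpha=0.01$ with 2 degrees of freedom):

```python
import numpy as np

CHI2_CRIT = 9.21  # chi-square critical value for alpha = 0.01, 2 dof

def gate(y, mu, Sigma, crit=CHI2_CRIT):
    """Gate a measurement y against a target whose predictive likelihood
    is approximated as N(mu, Sigma): accept iff the Mahalanobis squared
    distance d^2 = (y - mu)^T Sigma^{-1} (y - mu) does not exceed crit."""
    r = y - mu
    d2 = float(r @ np.linalg.solve(Sigma, r))
    return d2 <= crit, d2
```

For example, with $\Sigma=\mathrm{diag}(\sigma_R^2,\sigma_\theta^2)=\mathrm{diag}(25, 0.0025)$, a measurement one range standard deviation away ($d^2=1$) is accepted, while one four standard deviations away ($d^2=16$) is rejected.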
\begin{figure}[h] \centering {\includegraphics[scale=0.5]{MCJPDAF_fig/colour/MCJPDAF_cov_plot}} \caption{Targets' true $xy$ track and their estimated track covariance obtained using MCJPDAF for a single run: The locations of the sensors are shown as triangles.} \label{MCJPDAF_cov_plot} \end{figure} \begin{figure}[h] \centering {\label{MCJPDAF_T1_MSE}\includegraphics[scale=0.5]{MCJPDAF_fig/colour/MCJPDAF_MSE} } \caption{MSE of the position estimates from $20$ Monte Carlo runs, obtained using MCJPDAF.} \label{MCJPDAF_MSE} \end{figure} \begin{figure}[h] \centering \subfloat [position $x$ ] {\includegraphics[scale=0.4]{MCJPDAF_fig/colour/MCJPDAF_x_state} }\\ \subfloat [position $y$ ] {\includegraphics[scale=0.4]{MCJPDAF_fig/colour/MCJPDAF_y_state} }\\ \subfloat [velocity $v_x$] {\includegraphics[scale=0.4]{MCJPDAF_fig/colour/MCJPDAF_vx_state} }% \subfloat [velocity $v_y$] {\includegraphics[scale=0.4]{MCJPDAF_fig/colour/MCJPDAF_vy_state} }\\ \caption{Targets' true states and their estimates from a single run obtained using MCJPDAF.} \label{MCJPDAF_state} \end{figure} \begin{figure}[h] \centering \subfloat [Sensor 1] {\includegraphics[scale=0.4]{MCJPDAF_fig/colour/MCJPDAF_range_meas_s1} } \subfloat [Sensor 2] {\includegraphics[scale=0.4]{MCJPDAF_fig/colour/MCJPDAF_range_meas_s2} }\\ \caption{Targets' true range and their measurements: The target measurements are shown as dots and the clutter measurements are shown as squares.} \label{MCJPDAF_range_meas} \end{figure} \begin{figure}[h] \centering \subfloat [Sensor 1] {\includegraphics[scale=0.4]{MCJPDAF_fig/colour/MCJPDAF_bearing_meas_s1} } \subfloat [Sensor 2] {\includegraphics[scale=0.4]{MCJPDAF_fig/colour/MCJPDAF_bearing_meas_s2} }\\ \caption{Targets' true bearing and their measurements: The target measurements are shown as dots and the clutter measurements are shown as squares.} \label{MCJPDAF_bearing_meas} \end{figure} \begin{figure}[p] \centering \includegraphics[scale=0.7]{MCJPDAF_fig/colour/MCJPDAF_missed_targets}
\caption{The index of the targets which were undetected at the sensors for a single run.} \label{MCJPDAF_missed_targets} \end{figure} \subsection{Performance of MCJPDAF with varying $\lambda_C$ and $P_D$} To study the effect of missed target detections and clutter measurements on the estimates, simulations were carried out with varying clutter rate $\lambda_C$ and detection probability $P_D$. An easy problem $(\lambda_C=0.5,P_D=1.0)$, a medium problem $(\lambda_C=2.0,P_D=0.8)$ and a difficult problem $(\lambda_C=5,P_D=0.5)$ were simulated for 20 Monte Carlo runs and their results are compared. With the increase in clutter rate and decrease in target detection probability, the 2-$\sigma$ covariance region increased in area and the mean square error of the position estimates also increased. This caused a few swapped track estimates, which are shown in Fig.\ref{MCJPDAF_swaps}. In 20 Monte Carlo runs, there was one swapped track estimate between targets 2 and 3 in both the medium and the difficult problem. The target position estimates with the 2-$\sigma$ covariance regions are shown in Fig.\ref{MCJPDAF_cov_plot_case_all} and their MSE is shown in Fig.\ref{MCJPDAF_MSE_case_all}.
\begin{figure}[h] \centering \subfloat [Easy problem] {\includegraphics[scale=0.4]{MCJPDAF_fig/MCJPDAF_cov_plot_case1} }\\ \subfloat [Medium problem] {\includegraphics[scale=0.4]{MCJPDAF_fig/MCJPDAF_cov_plot_case2} }\\ \subfloat [Difficult problem] {\includegraphics[scale=0.4]{MCJPDAF_fig/MCJPDAF_cov_plot_case3} }% \caption{Targets' true $xy$ track and their estimated track covariance obtained using MCJPDAF for a single run: The locations of the sensors are shown as triangles.} \label{MCJPDAF_cov_plot_case_all} \end{figure} \begin{figure}[h] \centering \subfloat [Easy problem] {\includegraphics[scale=0.4]{MCJPDAF_fig/MCJPDAF_MSE_case1} }\\ \subfloat [Medium problem] {\includegraphics[scale=0.4]{MCJPDAF_fig/MCJPDAF_MSE_case2} }\\ \subfloat [Difficult problem] {\includegraphics[scale=0.4]{MCJPDAF_fig/MCJPDAF_MSE_case3} }% \caption{MSE of the position estimates from $20$ Monte Carlo runs, obtained using MCJPDAF.} \label{MCJPDAF_MSE_case_all} \end{figure} \begin{figure}[h] \centering \subfloat [Medium problem: Swap between targets 2 and 3] {\includegraphics[scale=0.4]{MCJPDAF_fig/MCJPDAF_swap_case2} }\\ \subfloat [Difficult problem: Swap between targets 2 and 3] {\includegraphics[scale=0.4]{MCJPDAF_fig/MCJPDAF_swap_case3} }% \caption{Track of swapped target estimates.} \label{MCJPDAF_swaps} \end{figure} \section{Summary} The data association problem arises due to the lack of information at the observer about the proper association between the targets and the received measurements. The problem becomes more involved when the targets move close together and there are clutter and missed target detections at the observer. MC-JPDAF combines the JPDAF with the particle filtering technique to accommodate non-linear and non-Gaussian models and to solve the data association problem. It incorporates clutter and missing measurements, as well as measurements from multiple observers. There are two types of representation for a data association hypothesis, i.e.
measurement to target association ($M{\rightarrow}T$) and target to measurement association ($T{\rightarrow}M$). Both carry the same information and there is a one-to-one mapping between them. They can be converted from one representation to the other and are used depending upon the situation. The infeasible hypotheses are discarded using a gating procedure to reduce computational complexity. The posterior probabilities of the remaining hypotheses are then calculated. The filtered estimate under each hypothesis is computed, and the estimates are combined by weighting each with its corresponding posterior hypothesis probability. With an increase in clutter rate and a decrease in target detection probability, the mean square error and the number of diverged tracks increase slightly. MC-JPDAF requires a smaller number of particles for estimation. It efficiently solves the data association problem, and there were almost no diverged tracks with moderate clutter rate and detection probability. \chapter{Field Data Performance Analysis} Bearings-only tracking is performed on field data. The problem is to track a ship using the bearing measurements obtained from another moving ship. The field data consist of the recorded bearing measurements of the target as well as the ownship course and speed. All angles are measured clockwise positive from North. The target ship moves with a constant velocity of $20$ knots. The observer ship moves with a nearly constant speed of $12$ knots. It undertakes a maneuver in the midst of its course. The initial target range is $4000$ m and the initial ownship course is $180$ degrees. The measurements are obtained at a regular interval of $T=1.28$ s. The details of the field data are summarised in Table.\ref{tab:scenarios}. Target estimation is done using the extended Kalman filter and the particle filter, and their performance is compared. \begin{table}[htbp] \centering \caption{Field Data} \begin{tabular}{p{0.5in}p{0.5in}p{0.5in}p{0.5in}p{0.5in}p{0.5in}p{0.5in}p{0.5in}} \hline S.
No & Initial Target Rng (m) & Initial Target Brg (Deg) & Target Crs (Deg) & Target Spd (Knots) & Initial Ownship Crs (Deg) & Initial Ownship Spd (knots) & Ownship Maneuver Crs (Deg) \\ \hline 1 & 4000 & 265 & 121 & 20 & 180 & 12 & 115 \\ \hline \end{tabular}% \label{tab:scenarios}% \end{table}% \section{Simulation and Results} The bearings-only tracking is done using a technique similar to the one mentioned in \cite{4}. The target state vector is defined as \begin{eqnarray} \mathbf{x}^{t}_k= \begin{bmatrix} x^{t}_k & \dot{x}^{t}_k & y^{t}_k & \dot{y}^{t}_k \end{bmatrix}^T \nonumber \end{eqnarray} where $k$ is the time index. The ownship state is defined similarly as \begin{eqnarray} \mathbf{x}^{o}_k= \begin{bmatrix} x^{o}_k & \dot{x}^{o}_k & y^{o}_k & \dot{y}^{o}_k \end{bmatrix}^T \nonumber \end{eqnarray} The relative state vector is defined by \begin{align} \mathbf{x}_k & = \mathbf{x}_k^t-\mathbf{x}_k^o\\ & = \begin{bmatrix} x &\dot{x} &y &\dot{y} \end{bmatrix}^T \nonumber \end{align} The ownship initial position is assumed to be at the origin. The initial position of the target is $(4000\cos(265^\circ),\, 4000\sin(265^\circ))$. The initial state covariance is assumed to be $P = \mathrm{diag}(15,\,0.2,\,15,\,0.2)$. The target has the state transition model $\mathbf{F}$ such that: \begin{eqnarray} \mathbf{x}_{k} & = & \mathbf{F}(\mathbf{x}_{k-1})+\mathbf{w}_k-\mathbf{U}_{k,k-1} \nonumber \end{eqnarray} where $\mathbf{w}_k$ is the process noise with zero mean and covariance $Q_{w}$ and $\mathbf{U}_{k,k-1}$ is the deterministic input accounting for the ownship motion.
The matrices $\mathbf{F}$, $Q_{w}$ and $\mathbf{U}_{k,k-1}$ are given by \begin{equation} \mathbf{F}=\begin{bmatrix} 1&T&0&0\\ 0&1&0&0\\ 0&0&1&T\\ 0&0&0&1\\ \end{bmatrix} \nonumber \end{equation} \begin{equation} Q_{w} = \begin{bmatrix} \sigma_x^2(T^3)/3 &\sigma_x^2(T^2)/2 &0 &0\\ \sigma_x^2(T^2)/2 &\sigma_x^2T &0 &0\\ 0 &0 &\sigma_y^2(T^3)/3 &\sigma_y^2(T^2)/2 \\ 0 &0 &\sigma_y^2(T^2)/2 &\sigma_y^2T \\ \end{bmatrix} \nonumber \end{equation} \begin{equation} \mathbf{U}_{k,k-1} = \begin{bmatrix} \dot{x}_{k}^o-\dot{x}_{k-1}^o \\0 \\\dot{y}_{k}^o-\dot{y}_{k-1}^o \\0 \end{bmatrix} \nonumber \end{equation} where $T$ is the sampling period of the target dynamics and $\sigma_x=\sigma_y=5\times10^{-2}$. $\mathbf{U}_{k,k-1}$ is assumed to be known at every moment. In this simulation, it is calculated from the ownship speed and course available at every instant $k$. The target's bearing $\theta$ at time $k$ is available as the measurement $\theta_{k}$ at time steps of $T$ at the ownship. \begin{eqnarray} \theta_{k}= h(\mathbf{x}_k)+v_{k}= \tan^{-1}\left(\dfrac{x_{k}}{y_{k}}\right) + v_k \nonumber \end{eqnarray} The error in the bearing is such that $\sigma_\theta=0.1$ radians. The target was at an initial bearing of $265$ degrees and a course of $121$ degrees. A total of $209$ measurements were available at a sampling interval of $T$ s. The estimates of the target states are obtained from the estimates of the relative state vector $\mathbf{x}_k$ by removing the bias due to the ownship, i.e., \begin{eqnarray} \mathbf{x}_k^t=\mathbf{x}_k +\mathbf{x}_k^o \nonumber \end{eqnarray} \subsection{Extended Kalman Filter Estimation} The true trajectory of the target and its estimate are shown in Fig.\ref{FD_KFS1_xy_plot}. The state estimates of the target are shown in Fig.\ref{FD_KFS1_state}. The mean square error (MSE) of the position estimates is shown in Fig.\ref{FD_KFS1_MSE}. The bearing measurement and its estimate are shown in Fig.\ref{FD_KFS1_bearing_meas}.
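The relative-state prediction with the deterministic ownship input $\mathbf{U}_{k,k-1}$, and the clockwise-from-North bearing convention (hence $\tan^{-1}(x/y)$ rather than $\tan^{-1}(y/x)$), can be sketched as follows. The helper names are ours, and this is an illustrative sketch rather than the thesis implementation.

```python
import numpy as np

def bearing(x):
    """Bearing of the relative state [x, vx, y, vy], measured
    clockwise positive from North: atan2(x, y)."""
    return np.arctan2(x[0], x[2])

def ownship_input(v_own_prev, v_own_curr):
    """Deterministic input U_{k,k-1}: the change in ownship velocity
    (vx, vy) between steps, placed in the velocity slots of the state."""
    return np.array([v_own_curr[0] - v_own_prev[0], 0.0,
                     v_own_curr[1] - v_own_prev[1], 0.0])

def predict(x, F, U, w=None):
    """Relative-state prediction x_k = F x_{k-1} + w_k - U_{k,k-1}."""
    if w is None:
        w = np.zeros(4)
    return F @ x + w - U
```

When the ownship does not maneuver, `ownship_input` is zero and the prediction reduces to the ordinary constant-velocity step; during the maneuver the velocity change is subtracted from the relative state, exactly as in the equation above.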
\begin{figure}[h] \centering {\includegraphics[scale=0.4]{Field_Data_fig/KF_S1/xy_plot} } \caption{Target's true $xy$ track and its estimated track obtained using EKF.} \label{FD_KFS1_xy_plot} \end{figure} \begin{figure}[h] \centering {\includegraphics[scale=0.4]{Field_Data_fig/KF_S1/MSE} } \caption{MSE of the position estimates obtained using EKF.} \label{FD_KFS1_MSE} \end{figure} \begin{figure}[h] \centering \subfloat [position $x$ ] {\includegraphics[scale=0.4]{Field_Data_fig/KF_S1/x_state} }\\ \subfloat [position $y$ ] {\includegraphics[scale=0.4]{Field_Data_fig/KF_S1/y_state} }\\ \subfloat [velocity $v_x$] {\includegraphics[scale=0.4]{Field_Data_fig/KF_S1/vx_state} }% \subfloat [velocity $v_y$] {\includegraphics[scale=0.4]{Field_Data_fig/KF_S1/vy_state} }\\ \caption{Target's true states and their estimates obtained using EKF.} \label{FD_KFS1_state} \end{figure} \begin{figure}[h] \centering {\includegraphics[scale=0.4]{Field_Data_fig/KF_S1/Bearing} } \caption{Target's true bearing and its estimate using EKF.} \label{FD_KFS1_bearing_meas} \end{figure} \subsection{Particle Filter Estimation} The number of particles used is $N=1000$. The true trajectory of the target and its estimate are shown in Fig.\ref{FD_PFS1_xy_plot}. The state estimates of the target are shown in Fig.\ref{FD_PFS1_state}. The mean square error (MSE) of the position estimates is shown in Fig.\ref{FD_PFS1_MSE}. The bearing measurement and its estimate are shown in Fig.\ref{FD_PFS1_bearing_meas}. The results show that the PF gives a better estimate of the target bearing than the EKF. The state estimates of the target do not differ significantly.
\begin{figure}[h] \centering {\includegraphics[scale=0.4]{Field_Data_fig/PF_S1/xy_plot} } \caption{Target's true $xy$ track and its estimated track obtained using PF.} \label{FD_PFS1_xy_plot} \end{figure} \begin{figure}[h] \centering {\includegraphics[scale=0.4]{Field_Data_fig/PF_S1/MSE} } \caption{MSE of the position estimates obtained using PF.} \label{FD_PFS1_MSE} \end{figure} \begin{figure}[h] \centering {\includegraphics[scale=0.4]{Field_Data_fig/PF_S1/Bearing} } \caption{Target's true bearing and its estimate using PF.} \label{FD_PFS1_bearing_meas} \end{figure} \begin{figure}[h] \centering \subfloat [position $x$ ] {\includegraphics[scale=0.4]{Field_Data_fig/PF_S1/x_state} }\\ \subfloat [position $y$ ] {\includegraphics[scale=0.4]{Field_Data_fig/PF_S1/y_state} }\\ \subfloat [velocity $v_x$] {\includegraphics[scale=0.4]{Field_Data_fig/PF_S1/vx_state} }% \subfloat [velocity $v_y$] {\includegraphics[scale=0.4]{Field_Data_fig/PF_S1/vy_state} }\\ \caption{Target's true states and their estimates obtained using PF.} \label{FD_PFS1_state} \end{figure} \chapter{Monte Carlo Multiple Model Joint Probabilistic Data Association Filter (MC-MMJPDAF)} The Monte Carlo Multiple Model Joint Probabilistic Data Association Filter (MC-MMJPDAF) is a technique proposed for tracking maneuvering multiple targets under data association uncertainty. It is an extension of MC-JPDAF for maneuvering targets. The original MC-JPDAF was proposed for slowly maneuvering targets and diverges for highly maneuvering targets. MC-MMJPDAF incorporates the technique used in the Multiple Model Particle Filter discussed in chapter \ref{chap:MMPF}, which uses multiple models to account for different types of target dynamics such as circular motion, accelerated motion, etc. The resulting filter is capable of maneuvering multi-target tracking.
The model description is almost the same as for the MC-JPDAF, except that each particle consists of a state vector $\mathbf{x}_{k,t}$ for target $k$, augmented by an index variable $A_{k,t}$ representing the model. Thus particles have a continuous-valued vector $\mathbf{x}_{k,t}$ of target kinematic variables, such as position, velocity, acceleration, etc., and a discrete-valued regime variable $A_{k,t}$ that represents the index of the model which generated $\mathbf{x}_{k,t}$ during the time period $(t-1,t]$. The regime variable can be one of a fixed set of $s$ models, i.e., $A_{k,t}\in S=\{1,2,\ldots,s\}$. The posterior density of the $k$-th target $p(\mathbf{x}_{k,t} \mid \mathbf{y}_{1:t})$ is represented using $N$ particles $\{b_t^n,w_t^n\}_{n=1}^N$, i.e., the augmented state vector and the weight. The posterior model probabilities $\{\pi_i(k,t)\}_{i=1}^s$ are approximately equal to the proportion of the samples from each model in the index set $\{A_{k,t}^n\}_{n=1}^N$. The combined augmented state for all targets is represented as $\{b_{t-1}^n, w_{t-1}^n\}_{n=1}^{N}$. The combined regime variable for all targets is represented as $A_{t}^n$. The algorithm for the Monte Carlo Multiple Model Joint Probabilistic Data Association Filter (MC-MMJPDAF) is presented in Table.\ref{tab:MCMMJPDAF}. \begin{table}[H] \caption{Monte Carlo Multiple Model Joint Probabilistic Data Association Filter (MC-MMJPDAF)} \centering \begin{tabular}{l} \hline \begin{minipage}{6in} \vskip 4pt $[\{b_t^n, w_t^n\}_{n=1}^N]=$MC-MMJPDAF$[\{b_{t-1}^n, w_{t-1}^n\}_{n=1}^{N},\mathbf{y}_t]$ \begin{itemize} \item For $k=1..K$ perform Regime transition (Table.\ref{tab:RT}):\\ $[\{A_{k,t}^n\}_{n=1}^N]=$RT$[\{A_{k,t-1}^n\}_{n=1}^{N},\Pi]$ \item For $k=1..K$ perform Regime Conditioned MC-JPDAF (Table.\ref{tab:RCMCJPDAF}):\\ $[\{\mathbf{x}_{k,t}^n, w_{k,t}^n\}_{n=1}^N]=$RC-MCJPDAF$[\{\mathbf{x}_{k,t-1}^n, A_{k,t}^n, w_{k,t-1}^n\}_{n=1}^{N},\mathbf{y}_t]$ \item If required, resample the particles and do roughening.
\end{itemize} \vskip 4pt \end{minipage} \\ \hline \end{tabular} \label{tab:MCMMJPDAF} \end{table} \begin{table}[ht] \caption{Regime Conditioned MC-JPDAF} \centering \resizebox{!}{4in} { \begin{tabular}{l} \hline \begin{minipage}{7in} \vskip 4pt $[\{\mathbf{x}_{t}^n, w_{t}^n\}_{n=1}^N]=$RC-MCJPDAF$[\{\mathbf{x}_{t-1}^n, A_{t}^n, w_{t-1}^n\}_{n=1}^{N},y_t]$ \begin{itemize} \item Prediction step: FOR $k=1..K$, $n=1..N$, draw samples \begin{equation} \mathbf{x}_{k,t}^{(n)}\sim q_k(\mathbf{x}_{k,t}\mid \mathbf{x}_{k,t-1}^{(n)},A_{k,t}^n,\mathbf{y}_t) \end{equation} \item Evaluate the predictive weights up to a normalizing constant \begin{equation} \alpha_{k,t}^{(n)}\propto w_{k,t-1}^{(n)}\frac{p_k(\mathbf{y}_t\mid \mathbf{x}_{k,t}^{(n)})p_k(\mathbf{x}_{k,t}^{(n)}\mid\mathbf{x}_{k,t-1}^{(n)},A_{k,t}^n)}{q_k(\mathbf{x}_{k,t}^{(n)}\mid\mathbf{x}_{k,t-1}^{(n)},A_{k,t}^n,\mathbf{y}_t)} \end{equation} \item Normalize the predictive weights \begin{equation} \displaystyle \sum_{n=1}^N \alpha_{k,t}^{(n)}=1 \end{equation} \item FOR $k=1..K$, $i=1..N_o$, $j=1..M^i$, calculate the predictive likelihood \begin{equation} p_k(\mathbf{y}_{j,t}^i\mid \mathbf{y}_{1:t-1}) \approx \displaystyle \sum_{n=1}^N \alpha_{k,t}^{(n)} p_T^i(\mathbf{y}_{j,t}^i\mid \mathbf{x}_{k,t}^{(n)}) \end{equation} \item FOR observer $i=1..N_o$, enumerate all valid target to measurement association hypotheses $\tilde{\lambda}^i_t$.
\item Perform gating on the valid target to measurement hypotheses by the following procedure: \begin{itemize} \item For $k=1..K$, calculate the approximation for the predictive likelihood of target $k$ using \eqref{eq:appr_pr_lh} \begin{eqnarray} p_k(\mathbf{y}\mid \mathbf{y}_{1:t-1}) & \approx & \mathcal{N}(\mu_{\hat{\mathbf{y}}_k},\Sigma_{\hat{\mathbf{y}}_k}) \\ \mu_{\hat{\mathbf{y}}_k} & = & \displaystyle{\sum_{n=1}^N}\alpha_k^{(n)}\mathbf{g}(\mathbf{x}_k^{(n)},\mathbf{p_0}) \\ \Sigma_{\hat{\mathbf{y}}_k} & = & \Sigma_{\mathbf{y}}+\displaystyle{\sum_{n=1}^N}\alpha_k^{(n)}[\mathbf{g}(\mathbf{x}_k^{(n)},\mathbf{p_0})-\mu_{\hat{\mathbf{y}}_k}][\mathbf{g}(\mathbf{x}_k^{(n)},\mathbf{p_0})-\mu_{\hat{\mathbf{y}}_k}]^T \end{eqnarray} \item For $k=1..K$, $i=1..N_o$, $j=1..M^i$, calculate the squared distance $d_k^2(\mathbf{y}_j)$ between the predicted and observed measurements using the measurement innovations. \begin{equation} d_k^2(\mathbf{y}_j)=(\mathbf{y}_j-\mu_{\hat{\mathbf{y}}_k})^T \Sigma_{\hat{\mathbf{y}}_k}^{-1}(\mathbf{y}_j-\mu_{\hat{\mathbf{y}}_k}) \end{equation} \item Perform chi-square hypothesis testing on the proposed target-measurement association hypotheses. Accept a hypothesis if its chi-square statistic $d_k^2$ satisfies the relation $d_k^2<\chi^2_\alpha$ to obtain the set of gated hypotheses $\tilde{\Lambda}_t^i$ at each observer $i$. \end{itemize} \end{itemize} \vskip 4pt \end{minipage} \\ \hline \end{tabular} } \label{tab:RCMCJPDAF} \end{table} \begin{table}[ht] \caption{Regime Conditioned MC-JPDAF Algorithm (contd..)} \centering \resizebox{!}{4in} { \begin{tabular}{l} \hline \begin{minipage}{7in} \vskip 4pt \begin{itemize} \item Convert $T{\rightarrow}M$ hypotheses to $M{\rightarrow}T$ hypotheses and calculate the number of clutter measurements $M_C^i$ in each hypothesis. \item FOR observer $i=1..N_o$, calculate the association prior of all hypotheses.
\begin{equation} p(\tilde{\mathbf{r}}_k^{i}\mid\tilde{\mathbf{r}}_{k-1}^{i})\propto \begin{cases} 1-P_D \;\qquad\text{if $j=0$} \\ 0 \;\qquad\text{if $j>0$ and $j \in \{\tilde{r}_1^i\cdots\tilde{r}_{k-1}^i\}$} \\ \frac{P_D}{M_k^i} \;\qquad \text{otherwise} \end{cases} \end{equation} \begin{eqnarray} p(\tilde{\lambda}^i)=p(M_C^i)\displaystyle\prod_{k=1}^{K}p(\tilde{\mathbf{r}}_k^{i}\mid\tilde{\mathbf{r}}_{k-1}^{i}) \end{eqnarray} \item FOR $i=1..N_o$, compute the joint association posterior probability and normalize it at each observer $i$. \begin{eqnarray} p(\tilde{\lambda}_t^i\mid\mathbf{y}_{1:t}) & \propto p(\tilde{\lambda}_t^i) (V^i)^{-M_C^i}\displaystyle\prod_{j\in \mathcal{I}^i}p_{r_{j,t}^i}(\mathbf{y}_{j,t}^i\mid \mathbf{y}_{1:t-1}) \end{eqnarray} \begin{eqnarray} \displaystyle\sum p(\tilde{\lambda}_t^i\mid\mathbf{y}_{1:t}) & = 1 \qquad \text{at each observer $i$.} \end{eqnarray} \item FOR $k=1..K$, $i=1..N_o$, $j=0..M^i$, calculate the marginal association posterior probability \begin{equation} \beta_{jk}^i = \displaystyle\sum_{\{\tilde{\lambda}_t^i\in\tilde{\Lambda}_t^i:\tilde{r}_{k,t}^i=j\}} p(\tilde{\lambda}_t^i\mid\mathbf{y}_{1:t}) \end{equation} \item FOR $k=1..K$, compute the target likelihood. \begin{equation} p_k(\mathbf{y}_{t}\mid\mathbf{x}_{k,t}^{(n)})=\displaystyle\prod_{i=1}^{N_o}\left[\beta_{0k}^i+\displaystyle\sum_{j=1}^{M^i}\beta_{jk}^ip_T^i(\mathbf{y}_{j,t}^i\mid \mathbf{x}_{k,t}^{(n)})\right] \end{equation} \item Update step: FOR $k=1..K$, $n=1..N$, calculate and normalize the particle weights. \begin{equation} w_{k,t}^{(n)}\varpropto w_{k,t-1}^{(n)}\dfrac{p_k(\mathbf{y}_t|\mathbf{x}_{k,t}^{(n)})p_k(\mathbf{x}_{k,t}^{(n)}|\mathbf{x}_{k,t-1}^{(n)},A_{k,t}^n)}{q(\mathbf{x}_{k,t}^{(n)}|\mathbf{x}_{k,t-1}^{(n)},A_{k,t}^n,\mathbf{y}_{t})} ; \;\qquad\displaystyle \sum_{n=1}^N w_{k,t}^{(n)}=1 \end{equation} \item FOR $k=1..K$, if required, resample the particles $\{\mathbf{x}_{k,t}^{(n)}\}_{n=1}^N$ and do roughening.
\end{itemize} \vskip 4pt \end{minipage} \\ \hline \end{tabular} } \label{tab:RCMCJPDAF_cntd} \end{table} \section{Simulation Results} To verify the effectiveness of the algorithm, the targets' motion scenario and their measurements are simulated according to the given models, and the estimates obtained using the algorithm are compared with the true trajectories. The augmented state vector consists of the position $x,y$ and velocities $v_x, v_y$ of the target, and the regime variable $A$: \begin{eqnarray} \mathbf{x}_{k,t}= \begin{bmatrix} x_{k,t} & \dot{x}_{k,t} & y_{k,t} & \dot{y}_{k,t} &A_{k,t} \end{bmatrix}^T \end{eqnarray} The targets have constant velocity motion as well as coordinated turn motion. The two targets move with constant velocities of $(1.0,1.5)$ and $(1.0,1.5)$. The first target undergoes maneuvers from $t=30$ to $t=36$ and from $t=64$ to $t=70$ with a turn rate of $0.1641$ rad/s anti-clockwise, and the second target undergoes maneuvers from $t=30$ to $t=36$ and from $t=64$ to $t=70$ with a turn rate of $-0.1641$ rad/s, i.e., clockwise. The initial positions of the targets are $(-50,25)$ and $(-50,-25)$ respectively.
All the targets have the state transition model $F$ such that: \begin{eqnarray} \mathbf{x}_{k,t} & = & F(\mathbf{x}_{k,t-1})+\mathbf{w}_{k,t-1} \end{eqnarray} The constant velocity model, anti-clockwise coordinated turn model and clockwise coordinated turn model are given by the transition matrices $F_1$, $F_2$ and $F_3$ \begin{equation} F_1=\begin{bmatrix} 1&T&0&0\\ 0&1&0&0\\ 0&0&1&T\\ 0&0&0&1\\ \end{bmatrix} \end{equation} \begin{equation} F_2= \begin{bmatrix} 1 &\dfrac{\sin (\Omega T)}{\Omega} &0 &-\dfrac{1 - \cos(\Omega T)}{\Omega}\\ 0 &\cos(\Omega T) &0 &-\sin(\Omega T)\\ 0 &\dfrac{1-\cos(\Omega T)}{\Omega} &1 &\dfrac{\sin(\Omega T)}{\Omega}\\ 0 &\sin(\Omega T) &0 &\cos(\Omega T) \\ \end{bmatrix} \end{equation} \begin{equation} F_3= \begin{bmatrix} 1 &\dfrac{\sin (-\Omega T)}{-\Omega} &0 &-\dfrac{1 - \cos(-\Omega T)}{-\Omega}\\ 0 &\cos(-\Omega T) &0 &-\sin(-\Omega T)\\ 0 &\dfrac{1-\cos(-\Omega T)}{-\Omega} &1 &\dfrac{\sin(-\Omega T)}{-\Omega}\\ 0 &\sin(-\Omega T) &0 &\cos(-\Omega T) \\ \end{bmatrix} \end{equation} where $T$ is the sampling period of the target dynamics and $\Omega$ is the turn rate. The measurement sensors are located at $(-65,-60)$ and $(45,45)$ meters, respectively. The $k$-th target's range $r_{k}$ and bearing $\theta_{k}$ at time $t$ are available as the measurement $\mathbf{y}_{k,t}^i$ at a time step of $T=1$ at each observer $i$. \begin{eqnarray} \mathbf{y}_{k,t}^i= \begin{bmatrix} r_{k,t}\\ \theta_{k,t}\\ \end{bmatrix} \end{eqnarray} The errors in the range and bearing are such that $\sigma_R=5$ and $\sigma_\theta=0.02$. The maximum range detected by the sensor is $100$ m. The probability of detection of a target is $P_D=0.9$ and the clutter rate is $\lambda_C=0.5$. The exact association of the measurements to the targets is unknown at the observers.
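The three transition matrices can be generated from one parametrised coordinated-turn matrix, since $F_2$ and $F_3$ differ only in the sign of the turn rate and $F_1$ is the $\Omega\to 0$ limit. A minimal sketch (the name `ct_matrix` is ours) for the state ordering $[x,\dot{x},y,\dot{y}]$:

```python
import numpy as np

def ct_matrix(T, omega):
    """Coordinated-turn transition matrix: omega > 0 gives the
    anti-clockwise model F2, omega < 0 the clockwise model F3,
    and omega -> 0 recovers the constant-velocity model F1."""
    if abs(omega) < 1e-12:
        return np.array([[1.0, T,   0.0, 0.0],
                         [0.0, 1.0, 0.0, 0.0],
                         [0.0, 0.0, 1.0, T  ],
                         [0.0, 0.0, 0.0, 1.0]])
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([[1.0, s / omega,       0.0, -(1 - c) / omega],
                     [0.0, c,               0.0, -s              ],
                     [0.0, (1 - c) / omega, 1.0, s / omega       ],
                     [0.0, s,               0.0, c               ]])
```

A useful sanity check is that the coordinated-turn step rotates the velocity without changing the speed, as a turn should.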
The measurement model $h(\cdotp)$ for target $k$ at the $i$-th observer is given by: \begin{eqnarray} \mathbf{y}_{k,t}^i= h_k(\mathbf{x}_{k,t})^i+\mathbf{v}_{k,t}^i= \begin{bmatrix} \sqrt{(x_{k,t}-x_o^i)^2+(y_{k,t}-y_o^i)^2}\\ \tan^{-1}\left(\dfrac{y_{k,t}-y_o^i}{x_{k,t}-x_o^i}\right) \end{bmatrix} \end{eqnarray} with $\mathbf{p}_0^i=(x_o^i,y_o^i)$. The maximum range of the sensor is $R_{max}^i=100$ and the volume of the measurement space is $V^i=2\pi R_{max}^i$. The measurement error $\mathbf{v}_{k,t}^i$ is uncorrelated and has a zero mean Gaussian distribution with covariance matrix $\Sigma_{\mathbf{y}_k}$. \begin{eqnarray} \Sigma_{\mathbf{y}_k}= \begin{bmatrix} \sigma_R^{2} &0 \\ 0 &\sigma_\theta^2 \\ \end{bmatrix} \end{eqnarray} The measurement errors are assumed to be the same at all the observers. The initial state estimate is assumed to be a Gaussian vector with mean $ \hat{\mathbf{x}}_{k,0}=\mathbf{x}_{k,0}$ and error covariance $P_{k,0} = \mathrm{diag}(5,0.1,5,0.1)$. Hence the initial particles for each target $\{\mathbf{x}_{k,0}^{(n)}\}_{n=1}^N$ were generated from the distribution \begin{equation} \mathbf{x}_{k,0} \sim \mathcal{N}(\mathbf{x}_{k,0},P_{k,0}) \end{equation} In this implementation of the particle filter, the transitional prior, which is a sub-optimal choice of importance density, is used to propose particles. Thus the importance density used is:\\ \begin{eqnarray*} q(\mathbf{x}_{k,t}\mid \mathbf{x}_{k,t-1}^{(n)},A_{k,t}^n,\mathbf{y}_{t})&=&p(\mathbf{x}_{k,t}\mid \mathbf{x}_{k,t-1}^{(n)},A_{k,t}^n)\\ &=&\left\{ \begin{array}{rl} \mathcal{N}(F_{1}(\mathbf{x}_{k,t-1}),Q_{w}) & \text{if } A_{k,t}^n = 1\\ \mathcal{N}(F_{2}(\mathbf{x}_{k,t-1}),Q_{w}) & \text{if } A_{k,t}^n = 2\\ \mathcal{N}(F_{3}(\mathbf{x}_{k,t-1}),Q_{w}) & \text{if } A_{k,t}^n = 3\\ \end{array} \right. \end{eqnarray*} The process noise used for estimation is such that $\sigma_x=\sigma_y=5\times10^{-2}$.
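The regime transition step and the regime-conditioned transitional-prior proposal above can be sketched as follows. This is a toy illustration under our own naming (`regime_transition`, `propose`), not the thesis code: each particle first samples its next model index from the mode transition matrix $\Pi$, then is propagated with the transition matrix of that model plus process noise.

```python
import numpy as np

def regime_transition(A_prev, Pi, rng):
    """Sample the next regime index for each particle: row a of the
    mode transition matrix Pi gives the probabilities of moving from
    model a to each of the s models."""
    return np.array([rng.choice(len(Pi), p=Pi[a]) for a in A_prev])

def propose(x_prev, A, models, Q, rng):
    """Transitional-prior proposal: propagate each particle with the
    transition matrix of its sampled regime (models[a]) and add
    zero-mean Gaussian process noise with covariance Q."""
    out = np.empty_like(x_prev)
    for n, (x, a) in enumerate(zip(x_prev, A)):
        out[n] = models[a] @ x + rng.multivariate_normal(np.zeros(len(x)), Q)
    return out
```

With `models = [F1, F2, F3]` this reproduces the three-branch importance density above; the posterior mode probabilities are then just the proportions of particles carrying each index.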
The mode transition probability matrix assumed by the filter for the target was \begin{equation} \pi_{ij}=\begin{bmatrix} 0.8 &0.1 &0.1\\ 0.1 & 0.8 &0.1 \\ 0.1 &0.1 & 0.8 \end{bmatrix} \end{equation} The initial mode probability is assumed to be \begin{equation} \pi_i(0)=\left\{ \begin{array}{rl} 1& \text{if } i=1\\ 0 & \text{if } i=2\\ 0 & \text{if } i=3\\ \end{array} \right. \end{equation} The squared distance $d_k^2$ of a measurement with respect to the predicted measurement follows a chi-square distribution with 2 degrees of freedom. The significance level used for the gating of hypotheses is $\alpha=0.01$, for which the chi-square critical value is $\chi^2_\alpha=9.21$. A hypothesis is rejected if its chi-square statistic $d_k^2$ satisfies the relation $d_k^2>\chi^2_\alpha$. A total of $N=102$ particles were used. The simulation was carried out for $100$ Monte Carlo runs and the estimates were obtained. The true trajectories of the targets and their estimates for a single run are shown in Fig.\ref{MCMMJPDAF_cov_plot}. The ellipses indicate the 2-$\sigma$ region of the estimate covariances. The state estimates of the targets are shown in Fig.\ref{MCMMJPDAF_state}. The mean square error (MSE) of the position estimates is shown in Fig.\ref{MCMMJPDAF_MSE}. The results show that MC-MMJPDAF handles data association as well as target maneuvers efficiently. It maintained good track of the target states in all the Monte Carlo runs and there were no diverged track estimates. The missing measurements and clutter did not have any significant effect on the estimates. \section{Summary} The proposed MC-MMJPDAF combines the Multiple Model Particle Filter (MMPF) for highly maneuvering targets and the Monte Carlo Joint Probabilistic Data Association Filter (MC-JPDAF) for multi-target tracking with data association uncertainty in the presence of clutter and missed target measurements.
The simulation results show that MC-MMJPDAF efficiently maintains good track of the state estimates in maneuvering multi-target tracking. \begin{figure}[h] \centering {\includegraphics[scale=0.5]{MCMMJPDAF_fig/colour/MCJPDAF_cov_plot}} \caption{Targets' true $xy$ track and their estimated track covariance obtained using MC-MMJPDAF for a single run: The locations of the sensors are shown as blue triangles.} \label{MCMMJPDAF_cov_plot} \end{figure} \begin{figure}[h] \centering {\label{MCMMJPDAF_T1_MSE}\includegraphics[scale=0.4]{MCMMJPDAF_fig/colour/MCJPDAF_MSE} } \caption{MSE of the position estimates from $100$ Monte Carlo runs, obtained using MC-MMJPDAF.} \label{MCMMJPDAF_MSE} \end{figure} \begin{figure}[h] \centering \subfloat [Target 1] {\includegraphics[scale=0.4]{MCMMJPDAF_fig/colour/MCJPDAF_T1mode_prob} } \subfloat [Target 2] {\includegraphics[scale=0.4]{MCMMJPDAF_fig/colour/MCJPDAF_T2mode_prob} }\\ \caption{Targets' mode probability estimates.} \end{figure} \begin{figure}[h] \centering \subfloat [position $x$ ] {\includegraphics[scale=0.4]{MCMMJPDAF_fig/colour/MCJPDAF_x_state} }\\ \subfloat [position $y$ ] {\includegraphics[scale=0.4]{MCMMJPDAF_fig/colour/MCJPDAF_y_state} }\\ \subfloat [velocity $v_x$] {\includegraphics[scale=0.4]{MCMMJPDAF_fig/colour/MCJPDAF_vx_state} }% \subfloat [velocity $v_y$] {\includegraphics[scale=0.4]{MCMMJPDAF_fig/colour/MCJPDAF_vy_state} }\\ \caption{Targets' true states and their estimates from a single run obtained using MC-MMJPDAF.} \label{MCMMJPDAF_state} \end{figure} \begin{figure}[h] \centering \subfloat [Sensor 1] {\includegraphics[scale=0.4]{MCMMJPDAF_fig/colour/MCJPDAF_range_meas_s1} } \subfloat [Sensor 2] {\includegraphics[scale=0.4]{MCMMJPDAF_fig/colour/MCJPDAF_range_meas_s2} }\\ \caption{Targets' true range and their measurements: The target measurements are shown as dots and the clutter measurements are shown as squares.} \label{MCMMJPDAF_range_meas} \end{figure} \begin{figure}[h] \centering \subfloat [Sensor 1]
{\includegraphics[scale=0.4]{MCMMJPDAF_fig/colour/MCJPDAF_bearing_meas_s1} } \subfloat [Sensor 2] {\includegraphics[scale=0.4]{MCMMJPDAF_fig/colour/MCJPDAF_bearing_meas_s2} }\\ \caption{Targets' true bearing and their measurements: The target measurements are shown as dots and the clutter measurements are shown as squares.} \label{MCMMJPDAF_bearing_meas} \end{figure} \begin{figure}[p] \centering \includegraphics[scale=0.7]{MCMMJPDAF_fig/colour/MCJPDAF_missed_targets} \caption{The indices of the targets which were undetected at the sensors for a single run.} \label{MCMMJPDAF_missed_targets} \end{figure} \chapter{Conclusions} Multi-target tracking systems are non-linear and have non-Gaussian distributions. Kalman filter (KF) based techniques rely on linear and Gaussian models for estimation. Their performance degrades as non-linearity becomes severe. The particle filter (PF) is an efficient numerical approximation method for implementing the recursive Bayesian solution. It represents the probability distribution of the target using particles and associated weights. It is capable of handling complex noise distributions and non-linearities in the target's measurements as well as in the target dynamics. The performance of the particle filter depends on the number of particles and the proposal distributions used. Simulations confirm that the particle filter outperforms the EKF at the expense of computational cost. The computations in the particle filter are highly parallelizable and can be efficiently implemented on FPGAs and GPUs. In high dimensional systems like multi-target tracking, the proportion of high-likelihood particles is smaller and a larger number of particles is required. Independent partition sampling and weighted resampling help the Independent Partition Particle Filter (IPPF) propose particles better and hence track multiple targets with fewer particles.
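As a concrete illustration of the resampling step that every particle filter discussed above relies on, here is a minimal sketch of systematic resampling in pure Python (the function name and interface are illustrative, not from this report):

```python
import random

def systematic_resample(weights):
    """Systematic resampling: draw one uniform offset, then take n
    evenly spaced points through the cumulative weight function.
    Returns the indices of the particles to replicate."""
    n = len(weights)
    step = sum(weights) / n
    u = random.uniform(0.0, step)      # single random offset
    indices = []
    cumulative = weights[0]
    j = 0
    for i in range(n):
        point = u + i * step           # i-th evenly spaced point
        while cumulative < point and j < n - 1:
            j += 1                     # advance to the particle whose
            cumulative += weights[j]   # cumulative weight covers it
        indices.append(j)
    return indices

# a high-weight particle is replicated; low-weight ones tend to vanish
idx = systematic_resample([0.7, 0.1, 0.1, 0.1])
```

Compared with multinomial resampling, the single random draw keeps each particle's replication count close to $n\,w_i$, which lowers the variance introduced by the resampling step.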
For maneuvering multi-target tracking, the Monte Carlo Multiple Model Joint Probabilistic Data Association Filter (MC-MMJPDAF) is proposed, which efficiently handles maneuvering multiple targets as well as data association uncertainty in the presence of clutter measurements and missed target detections. It also incorporates measurements from multiple observers. It combines the Monte Carlo Joint Probabilistic Data Association Filter (MC-JPDAF) and the Multiple Model Particle Filter (MMPF). MC-JPDAF implements the standard Joint Probabilistic Data Association (JPDA) approach in a particle filtering framework for non-linear and non-Gaussian systems. MC-JPDAF solves the data association efficiently but can track only slowly maneuvering targets. It requires a smaller number of particles for estimation. MMPF is used for highly maneuvering targets. MMPF helps to track abrupt deviations of targets with fewer particles by incorporating multiple kinematic models. MMPF has better tracking capabilities than the standard particle filter and the Interacting Multiple Model Extended Kalman Filter (IMM-EKF). MC-MMJPDAF thus efficiently combines the multi-target data association capability of MC-JPDAF and the maneuvering target tracking capability of MMPF for tracking maneuvering multiple targets. The simulation results show that there were almost no diverged tracks at moderate clutter rates and target detection probabilities, and confirm the efficiency of the proposed technique. The particle filtering technique efficiently handles maneuvering, multi-target tracking, and has been verified with some field data. \chapter*{Acknowledgment} I would like to thank my guide Prof. V Rajbabu for his valuable guidance, encouragement and patience during the work. \par I would like to express my sincere thanks to the Naval Research Board (NRB) for providing the necessary funding for the project.
I am thankful to the Bharti Center for Communication for providing me with the resources and facilities for my project activities, which I have used to the fullest. \par Finally, I would like to thank everybody who has helped me in any way in my project, enabling me to complete this work. \begin{flushright}\hspace{248pt}\textbf{T M Feroz Ali}\end{flushright} \newpage \tableofcontents \newpage \listoffigures \newpage \bibliographystyle{IEEEtran} \newpage \input{Abstract} \newpage \pagenumbering{arabic} \input{Introduction} \input{Bayesian_estimation} \input{BootstrapPF} \input{chapter1_IPPF} \input{chapter2_MMPF} \input{chapter3_MCJPDA} \input{chapter5_MCMMJPDAF} \input{chapter6_Conclusion} \input{bibliography} \chapter{Introduction} \section{Need for Estimation} The need for estimation arises because the measurements of a system may be noisy, incomplete, or delayed, and exact modeling of the system is not always possible. Depending only on the measurements is not feasible when the measurements are highly noisy, when the delay between the occurrence of the process and the arrival of the measurement is large, when some states of the system are not observable, when clutter exists along with the targets, or when target-measurement association ambiguity exists. Estimation helps in obtaining filtered inferences with smaller variance than the noisy measurements and in predicting the future behavior of the system. Hence, estimation techniques are needed to infer the system state better from the given system models and measurements. \section{Objective} The objective of target tracking is to continuously estimate and track the states of the target, such as position, velocity and acceleration, using the available measurements of the target. The target's motion may be one or two dimensional and may be of constant velocity or maneuvering.
The initial state of the target may be unknown. The possible motion models of the target are assumed to be known. There may be multiple targets, which may be close together or far apart. The measurement sensor is assumed to be stationary. The measurements of the target may be available as range, bearing and/or Doppler frequency measurements. The accuracy and noise distribution of the measurement sensors are also assumed to be known. \section{Mathematical Formulation} Given a discrete stochastic model of a dynamic system (moving target) using a state space representation \begin{eqnarray} \mathbf{x}_k & = & \mathbf{f}_{k-1}(\mathbf{x}_{k-1},\mathbf{w}_{k-1}) \label{eqn:state_model} \\ \mathbf{z}_k & = & \mathbf{h}_{k}(\mathbf{x}_{k},\mathbf{v}_{k}) \label{eqn:measurement_model} \end{eqnarray} where $k$ is the time index, $\mathbf{x}_k$ is the state vector, $\mathbf{w}_k$ is the process noise, $\mathbf{z}_k$ is the measurement of the target, $\mathbf{v}_k$ is the measurement noise, $\mathbf{f}_k(\cdotp)$ is the time-varying system function, $\mathbf{h}_k(\cdotp)$ is the measurement function and $T$ is the sampling interval of the discrete system, the task is to recursively estimate the state $\mathbf{x}_k$ of the system from its available measurements $\mathbf{Z}_k = \mathbf{z}_{1:k}\equiv \{\mathbf{z}_i;i=1,...,k\}$. The state vector $\mathbf{x}_k$ contains all the information required to describe the target dynamics. The noise sequences $\mathbf{w}_k$ and $\mathbf{v}_k$ are assumed to be zero mean, white and mutually independent with known probability distribution functions. The initial target state distribution $p(\mathbf{x}_0)$ is assumed to be known and to be independent of the noise sequences $\mathbf{w}_k$ and $\mathbf{v}_k$ \cite{4,7,9,11}. Two fundamental assumptions about the system are that the state $\mathbf{x}_k$ is Markov of order one,
\begin{equation} p(\mathbf{x}_k|\mathbf{x}_{1:k-1},\mathbf{z}_{1:k-1})=p(\mathbf{x}_k|\mathbf{x}_{k-1}) \label{eqn:markov1} \end{equation} and $\mathbf{z}_k$ is conditionally independent of past states and measurements: \begin{equation} p(\mathbf{z}_k|\mathbf{x}_{1:k},\mathbf{z}_{1:k-1})=p(\mathbf{z}_k|\mathbf{x}_{k}) \label{eqn:markov2} \end{equation}\\ where $\mathbf{x}_{1:k}\equiv \{\mathbf{x}_i;i=1,...,k\}$. \section{Measurements and System Models} The states of the targets considered in this report are the positions and velocities $x, y, v_x, v_y$ in the Cartesian coordinate system. \begin{equation} \mathbf{x}= \begin{bmatrix} x & v_{x} & y & v_{y} \end{bmatrix}^T \end{equation} The motion models considered for the target are the constant velocity model and the constant turn rate model. The constant velocity model is described by \begin{align} \mathbf{x}_{k} & = f(\mathbf{x}_{k-1})+\mathbf{w}_{k-1}\\ & = F_1 \mathbf{x}_{k-1}+\mathbf{w}_{k-1} \end{align} where $F_1$ is given by \begin{equation} F_1=\begin{bmatrix} 1&T&0&0\\ 0&1&0&0\\ 0&0&1&T\\ 0&0&0&1\\ \end{bmatrix} \end{equation} and $T$ is the sampling period of the target dynamics. The constant turn rate model with turn rate $\Omega$~rad/s is given by \begin{align} \mathbf{x}_{k} & = f(\mathbf{x}_{k-1})+\mathbf{w}_{k-1}\\ & = F_2 \mathbf{x}_{k-1}+\mathbf{w}_{k-1} \end{align} where $F_2$ is given by \begin{equation} F_2= \begin{bmatrix} 1 &\dfrac{\sin (\Omega T)}{\Omega} &0 &-\dfrac{1 - \cos(\Omega T)}{\Omega}\\ 0 &\cos(\Omega T) &0 &-\sin(\Omega T)\\ 0 &\dfrac{1-\cos(\Omega T)}{\Omega} &1 &\dfrac{\sin(\Omega T)}{\Omega}\\ 0 &\sin(\Omega T) &0 &\cos(\Omega T) \\ \end{bmatrix} \end{equation} The available measurements of the target considered in this report are range and bearing.
They are related to the target states by the measurement model: \begin{align} z_{k} & = h(\mathbf{x}_{k})+\mathbf{v}_k\\ & = \begin{bmatrix} \sqrt{x^2_k+y^2_k}\\ \tan^{-1}\left(\dfrac{y_k}{x_k}\right) \end{bmatrix} + \mathbf{v}_k \end{align} \section{Organization of Report} Chapter \ref{chap:Bayesian} describes Bayesian estimation and the conceptual solution for recursive Bayesian estimation. The particle filter, a numerical Monte Carlo approximation method for implementing the recursive Bayesian solution, is described in Chapter \ref{chap:Particle_filtering}. An advanced particle filtering technique called the Independent Partition Particle Filter (IPPF), for tracking multiple targets efficiently, is described in Chapter \ref{chap:IPPF}. In Chapter \ref{chap:MMPF}, we discuss the Multiple Model Particle Filter (MMPF), which is used for tracking highly maneuvering targets. \chapter*{Dissertation approval} \thispagestyle{empty} This dissertation entitled {\it Maneuvering, Multi-Target Tracking using Particle Filters} by T M Feroz Ali~is approved for the degree of Master of Technology. \par \vspace{1cm} \begin{flushright} \begin{tabular}{c} \textbf{Examiners}\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ \\ \hline \\ \\ \hline \\ \\ \textbf{Supervisor}\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ \\ \hline \\ \\ \textbf{Chairman}\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ \\ \hline \end{tabular} \vspace{2cm} \end{flushright} \begin{flushleft} \begin{tabular}{lc} Date:&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ \cline{2-2} \\ Place:&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ \cline{2-2} \end{tabular} \end{flushleft} \chapter*{Declaration} \thispagestyle{empty} I declare that this written submission represents my ideas in my own words and where others' ideas or words have been included, I have adequately cited and referenced the original sources.
I also declare that I have adhered to all principles of academic honesty and integrity and have not misrepresented or fabricated or falsified any idea/data/fact/source in my submission. I understand that any violation of the above will be cause for disciplinary action by the Institute and can also evoke penal action from the sources which have thus not been properly cited or from whom proper permission has not been taken when needed. \par \vspace{3cm} \begin{flushright} \begin{tabular}{c} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ \hline \\ \end{tabular} \end{flushright}
\section{Introduction} Let ${\mathbb B}(\mathscr H)$ denote the $C^{*}$-algebra of all bounded linear operators on a Hilbert space ${\mathscr H}$. In the case when ${\rm dim}{\mathscr H}=n$, we identify ${\mathbb B}({\mathscr H})$ with the matrix algebra $\mathbb{M}_n$ of all $n\times n$ matrices with entries in the complex field. An operator $A\in{\mathbb B}(\mathscr H)$ is said to be a contraction if $A^*A\leq I$. The numerical radius of $T\in {\mathbb B}({\mathscr H})$ is defined by $$\omega(T):=\sup\{\left| \langle Tx, x\rangle\right| : x\in {\mathscr H}, \parallel x \parallel=1\}.$$ It is well known that $\omega(\,\cdot\,)$ defines a norm on ${\mathbb B}({\mathscr H})$, which is equivalent to the usual operator norm. In fact, $\frac{1}{2}\| \,\cdot\, \|\leq \omega(\,\cdot\,) \leq\| \,\cdot\, \|$; see \cite{gof}. An important inequality for $\omega(A)$ is the power inequality stating that $\omega(A^n)\leq \omega(A)^n\,\,(n=1,2,\cdots)$. For further information about numerical radius inequalities we refer the reader to \cite{aA, ando, sheikh} and references therein. Let ${\mathscr H_{1}},{\mathscr H_{2}}$ be Hilbert spaces, and consider the direct sum ${\mathscr H}={\mathscr H_{1}}\oplus{\mathscr H_{2}}$. With respect to this decomposition, every operator $T\in {\mathbb B}({\mathscr H})$ has a $2\times 2$ operator matrix representation $T=[T_{ij}]$ with entries $T_{ij}\in {\mathbb B}({\mathscr H_{j}}, {\mathscr H_{i}})$, the space of all bounded linear operators from ${\mathscr H_{j}}$ to ${\mathscr H_{i}}\,\,(1\leq i,j \leq 2)$. Operator matrices provide a useful tool for studying Hilbert space operators and have been extensively studied in the literature. Let $A\in {\mathbb B}({\mathscr H_{1}}, {\mathscr H_{1}})$, $B\in {\mathbb B}({\mathscr H_{2}}, {\mathscr H_{1}})$, $C\in {\mathbb B}({\mathscr H_{1}}, {\mathscr H_{2}})$ and $D\in {\mathbb B}({\mathscr H_{2}}, {\mathscr H_{2}})$.
The operator $\left[\begin{array}{cc} A&0\\ 0&D \end{array}\right]$ is called the diagonal part of $\left[\begin{array}{cc} A&B\\ C&D \end{array}\right]$ and $\left[\begin{array}{cc} 0&B\\ C&0 \end{array}\right]$ is the off-diagonal part.\\ The classical Young inequality says that if $p, q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$, then $ab\leq \frac{a^{p}}{p}+\frac{b^{q}}{q}$ for positive real numbers $a, b$. In \cite{FUJ}, the authors gave the following refinement of the scalar Young inequality: $\left(a^{\frac{1}{p}}b^{\frac{1}{q}}\right)^{m}+r_{0}^{m}\left(a^{\frac{m}{2}}-b^{\frac{m}{2}}\right)^{2}\leq\left(\frac{a}{p}+\frac{b}{q}\right)^{m},$ where $r_{0}=\min \{ \frac{1}{p}, \frac{1}{q}\}$ and $m=1, 2,\cdots$. In particular, if $p=q=2$, then \begin{align}\label{12} \left(a^{\frac{1}{2}}b^{\frac{1}{2}}\right)^{m}+\left(\frac{1}{2}\right)^{m}\left(a^{\frac{m}{2}}-b^{\frac{m}{2}}\right)^{2}\leq 2^{-m}(a+b)^{m}. \end{align} It has been shown in \cite{naj} that if $T\in {\mathbb B}({\mathscr H})$, then \begin{align}\label{13} \omega(T)\leq \frac{1}{2} \| |T| + |T^{*}| \|, \end{align} where $|T|=(T^{*}T)^{\frac{1}{2}}$ is the absolute value of $T$. Recently, in \cite{CAL}, the authors extended this inequality to off-diagonal operator matrices of the form $T=\left[\begin{array}{cc} 0&X\\ Y&0 \end{array}\right]\in {\mathbb B}({\mathscr H_1\oplus\mathscr H_2})$ as follows: \begin{align}\label{133} \omega(T)\leq \frac{1}{2} \left\| |X| + |Y^{*}| \right\|^{\frac{1}{2}}\left\| |X^*| + |Y| \right\|^{\frac{1}{2}}. \end{align} Let $T_{1}, T_{2},\cdots,T_{n}\in {\mathbb B}({\mathscr H})$. The functional $\omega_{p}$ of the operators $T_{1},\cdots,T_{n}$ for $p\geq 1$ is defined in \cite{FUJ2} as follows: \begin{align*} \omega_{p}(T_{1},\cdots,T_{n}):= \sup_{\| x \| = 1} \left(\sum_{i=1}^{n} | \left\langle T_{i}x, x\right\rangle |^{p}\right)^{\frac{1}{p}}. \end{align*} If $p=2$, then we have the Euclidean operator radius of $T_{1},\cdots,T_{n}$, which was defined in \cite{pop}.
In \cite{sheikh}, the authors showed the following upper bound for the functional $\omega_{p}$: \begin{align*} \omega_{p}^p(T_{1},\cdots,T_{n})\leq {1\over2}\left\|\sum_{i=1}^n\left(f^{2p}(|T_i|)+g^{2p}( |T_i^*|)\right)\right\|-\inf_{\|x\|=1} \zeta (x), \end{align*} where $T_i \in {\mathbb B}({\mathscr H})\,\,(i=1,2,\cdots,n)$, $f$, $g$ are nonnegative continuous functions on $[0, \infty)$ such that $f(t)g(t)=t\,(t\in [0, \infty))$, $p\geq 1$ and {\footnotesize\begin{align*} \zeta (x)=\frac{1}{2}\sum_{i=1}^n\left(\left\langle f^{2p}(|T_i|)x,x\right\rangle^{1\over2}-\left\langle g^{2p}( |T_i^*|)x,x\right\rangle^{1\over2}\right)^2. \end{align*}} In this paper, we show some inequalities involving powers of the numerical radius for off-diagonal parts of $2\times 2$ operator matrices. In particular, we extend inequalities \eqref{13} and \eqref{133} to nonnegative continuous functions $f$, $g$ on $[0, \infty)$ such that $f(t)g(t)=t\,(t\in [0, \infty))$. Moreover, we present some inequalities involving the generalized Euclidean operator radius $\omega_{p}$. \section{Main results} \bigskip To prove our first result, we need the following lemmas. \begin{lemma}\cite{ROD, yam}\label{1} Let $X\in {\mathbb B}({\mathscr H})$. Then\newline $(a)\,\,\omega(X)=\underset{\theta \in \mathbb{R} }{\max }\left\Vert \textrm{Re}\left( e^{i\theta }X\right) \right\Vert =\underset{\theta \in \mathbb{R} }{\max }\left\Vert \textrm{Im}\left( e^{i\theta }X\right) \right\Vert . $\newline $(b)\,\,\omega\left(\left[\begin{array}{cc} 0&X\\ X&0 \end{array}\right]\right) = \omega(X).$ \end{lemma} \bigskip The next lemma follows from the spectral theorem for positive operators and the Jensen inequality; see \cite{KIT}. \begin{lemma}\label{3} Let $T\in{\mathbb B}({\mathscr H})$, $ T \geq 0$ and $x\in {\mathscr H}$ such that $\|x\|\leq1$.
Then\\ $(a)\,\, \left\langle Tx, x\right\rangle^{r} \leq \left\langle T^{r}x, x\right\rangle$ for $ r\geq 1.$\\ $(b)\,\,\left\langle T ^{r}x, x\right\rangle \leq \left\langle Tx, x\right\rangle^{r}$ for $ 0<r\leq 1$.\\ \end{lemma} \begin{proof} Let $ r\geq 1$ and $x\in {\mathscr H}$ such that $\|x\|\leq1$. Fix $u=\frac{x}{\|x\|}$. Using the McCarthy inequality we have $\left\langle Tu, u\right\rangle^{r} \leq \left\langle T^{r}u, u\right\rangle$, whence \begin{align*} \left\langle Tx, x\right\rangle^{r} &\leq \|x\|^{2r-2} \left\langle T^{r}x, x\right\rangle\\&\leq\left\langle T^{r}x, x\right\rangle\qquad(\textrm {since\,}\|x\|\leq1\,\textrm {and\,}2r-2\geq0). \end{align*} Hence, we get the first inequality. The proof of the second inequality is similar. \end{proof} \begin{lemma}\cite[Theorem 1]{KIT}\label{5} Let $T\in{\mathbb B}({\mathscr H})$ and $x, y\in {\mathscr H}$ be any vectors. If $f$, $g$ are nonnegative continuous functions on $[0, \infty)$ satisfying the relation $f(t)g(t)=t\,(t\in[0, \infty))$, then \begin{align*} | \left\langle Tx, y \right\rangle |^2 \leq \left\langle f^2(|T |)x ,x \right\rangle\, \left\langle g^2(| T^{*}|)y,y\right\rangle. \end{align*} \end{lemma} \bigskip Now, we are in a position to demonstrate the main results of this section by using some ideas from \cite{CAL, sheikh}. \begin{theorem}\label{main1} Let $T=\left[\begin{array}{cc} 0&X\\ Y&0 \end{array}\right]\in {\mathbb B}({\mathscr H_1\oplus\mathscr H_2})$, $r\geq 1$ and $f$, $g$ be nonnegative continuous functions on $[0, \infty)$ satisfying the relation $f(t)g(t)=t\,(t\in[0, \infty))$. Then \begin{align*} \omega^{r}(T)\leq 2^{r-2}\left\|f^{2r}(|X|)+g^{2r}(|Y^*|)\right\|^\frac{1}{2}\left\|f^{2r}(|Y|)+g^{2r}(|X^*|)\right\|^\frac{1}{2} \end{align*} and \begin{align*} \omega^{r}(T)\leq 2^{r-2}\left\|f^{2r}(|X|)+f^{2r}(|Y^*|)\right\|^\frac{1}{2}\left\|g^{2r}(|Y|)+g^{2r}(|X^*|)\right\|^\frac{1}{2}.
\end{align*} \end{theorem} \begin{proof} Let $\mathbf{x}=\left[\begin{array}{cc} x_1\\ x_2 \end{array}\right] \in {\mathscr H_1\oplus\mathscr H_2}$ be a unit vector (i.e., $\|x_1\|^2+\|x_2\|^2=1$). Then \begin{align*} &|\left\langle T\mathbf{x}, \mathbf{x} \right\rangle |^{r}\\& =|\left\langle Xx_2, x_1 \right\rangle+\left\langle Yx_1, x_2 \right\rangle |^{r}\\& \leq\left(|\left\langle Xx_2, x_1 \right\rangle|+|\left\langle Yx_1, x_2 \right\rangle |\right)^{r} \qquad (\textrm {by the triangular inequality})\\& \leq\frac{2^r}{2}\left(|\left\langle Xx_2, x_1 \right\rangle|^r+|\left\langle Yx_1, x_2 \right\rangle |^{r}\right) \qquad (\textrm {by the convexity\,} f(t)=t^r)\\& \leq\frac{2^r}{2}\Big(\left(\left\langle f^2(|X|)x_2, x_2 \right\rangle^\frac{1}{2}\left\langle g^2(|X^*|)x_1, x_1 \right\rangle^\frac{1}{2}\right)^r \\&\qquad+\left(\left\langle f^2(|Y|)x_1, x_1 \right\rangle^\frac{1}{2}\left\langle g^2(|Y^*|)x_2, x_2 \right\rangle^\frac{1}{2} \right)^{r}\Big) \qquad(\textrm {by Lemma\,\,}\ref{5})\\&\leq\frac{2^r}{2}\left(\left\langle f^{2r}(|X|)x_2, x_2 \right\rangle^\frac{1}{2}\left\langle g^{2r}(|X^*|)x_1, x_1 \right\rangle^\frac{1}{2} +\left\langle f^{2r}(|Y|)x_1, x_1 \right\rangle^\frac{1}{2}\left\langle g^{2r}(|Y^*|)x_2, x_2 \right\rangle^\frac{1}{2}\right)\\& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad (\textrm {by Lemma\,\,\ref{3}(a)})\\& \leq\frac{2^r}{2}\left(\left\langle f^{2r}(|X|)x_2, x_2 \right\rangle+\left\langle g^{2r}(|Y^*|)x_2, x_2 \right\rangle\right)^\frac{1}{2}\\&\,\,\,\times \left(\left\langle f^{2r}(|Y|)x_1, x_1 \right\rangle+\left\langle g^{2r}(|X^*|)x_1, x_1 \right\rangle\right)^\frac{1}{2} \,\,\, (\textrm {by the Cauchy-Schwarz inequality})\\& =\frac{2^r}{2}\left\langle (f^{2r}(|X|)+g^{2r}(|Y^*|))x_2, x_2 \right\rangle^\frac{1}{2} \left\langle (f^{2r}(|Y|)+g^{2r}(|X^*|))x_1, x_1 \right\rangle^\frac{1}{2} \\&
\leq\frac{2^r}{2}\left\|f^{2r}(|X|)+g^{2r}(|Y^*|)\right\|^\frac{1}{2}\left\|f^{2r}(|Y|)+g^{2r}(|X^*|)\right\|^\frac{1}{2}\|x_1\|\|x_2\|\\& \leq\frac{2^r}{2}\left\|f^{2r}(|X|)+g^{2r}(|Y^*|)\right\|^\frac{1}{2}\left\|f^{2r}(|Y|)+g^{2r}(|X^*|)\right\|^\frac{1}{2}\left(\frac{\|x_1\|^2+\|x_2\|^2}{2}\right) \\&\qquad\qquad\qquad\qquad\qquad\qquad\qquad(\textrm {by the arithmetic-geometric mean inequality})\\& =\frac{2^r}{4}\left\|f^{2r}(|X|)+g^{2r}(|Y^*|)\right\|^\frac{1}{2}\left\|f^{2r}(|Y|)+g^{2r}(|X^*|)\right\|^\frac{1}{2}. \end{align*} Hence, we get the first inequality. Now, applying this fact \begin{align}\label{main1eq} &|\left\langle T\mathbf{x}, \mathbf{x} \right\rangle |^{r}\nonumber\\& =|\left\langle Xx_2, x_1 \right\rangle+\left\langle Yx_1, x_2 \right\rangle |^{r}\nonumber\\& \leq\left(|\left\langle Xx_2, x_1 \right\rangle|+|\left\langle Yx_1, x_2 \right\rangle |\right)^{r} \qquad (\textrm {by the triangular inequality})\nonumber\\& \leq\frac{2^r}{2}\left(|\left\langle Xx_2, x_1 \right\rangle|^r+|\left\langle Yx_1, x_2 \right\rangle |^{r}\right) \qquad (\textrm {by the convexity\,} f(t)=t^r)\nonumber\\& \leq\frac{2^r}{2}\left(\left(\left\langle f^2(|X|)x_2, x_2 \right\rangle^\frac{1}{2}\left\langle g^2(|X^*|)x_1, x_1 \right\rangle^\frac{1}{2}\right)^r\right.\nonumber \\&\qquad +\left.\left(\left\langle g^2(|Y|)x_1, x_1 \right\rangle^\frac{1}{2}\left\langle f^2(|Y^*|)x_2, x_2 \right\rangle^\frac{1}{2} \right)^{r}\right) \qquad(\textrm {by Lemma\,\,}\ref{5}) \end{align} and a similar argument to the proof of the first inequality we have the second inequality and this completes the proof of the theorem. \end{proof} \bigskip Theorem \ref{main1} includes a special case as follows. \begin{corollary}\label{corol1} Let $T=\left[\begin{array}{cc} 0&X\\ Y&0 \end{array}\right]\in {\mathbb B}({\mathscr H_1\oplus\mathscr H_2})$, $0\leq p\leq1$ and $r\geq1$. 
Then \begin{align*} \omega^{r}(T)\leq 2^{r-2} \left\| | X |^{2rp} + | Y^* |^{2r(1-p)} \right\|^{\frac{1}{2}}\left\|| Y |^{2rp} + | X^* |^{2r(1-p)} \right\|^{\frac{1}{2}} \end{align*} and \begin{align*} \omega^{r}(T)\leq 2^{r-2} \left\| | X |^{2rp} + | Y^* |^{2rp} \right\|^{\frac{1}{2}}\left\|| Y |^{2r(1-p)} + | X^* |^{2r(1-p)} \right\|^{\frac{1}{2}}. \end{align*} \end{corollary} \begin{proof} The result follows immediately from Theorem \ref{main1} for $f(t)=t^p$ and $g(t)=t^{1-p}\,\,(0\leq p\leq1)$. \end{proof} \begin{remark} Taking $f(t)=g(t)=t^{\frac{1}{2}}\,(t\in[0,\infty))$ and $r=1$ in Theorem \ref{main1}, we get (see \cite[Theorem 4]{CAL}\label{100}) \begin{align*} \omega(T)\leq \frac{1}{2} \left\| | X | + | Y^* | \right\|^{\frac{1}{2}}\left\|| Y | + | X^* | \right\|^{\frac{1}{2}}, \end{align*} where $T=\left[\begin{array}{cc} 0&X\\ Y&0 \end{array}\right]\in {\mathbb B}({\mathscr H_1\oplus\mathscr H_2})$. \end{remark} \bigskip If we put $Y=X$ in Theorem \ref{main1}, then by using Lemma \ref{1}(b) we get an extension of Inequality \eqref{13}. \begin{corollary} Let $X\in {\mathbb B}({\mathscr H})$, $r\geq 1$ and $f$, $g$ be nonnegative continuous functions on $[0, \infty)$ satisfying the relation $f(t)g(t)=t\,(t\in[0, \infty))$. Then \begin{align*} \omega^{r}(X)\leq 2^{r-2}\left\|f^{2r}(|X|)+g^{2r}(|X^*|)\right\| \end{align*} and \begin{align*} \omega^{r}(X)\leq 2^{r-2}\left\|f^{2r}(|X|)+f^{2r}(|X^*|)\right\|^\frac{1}{2}\left\|g^{2r}(|X|)+g^{2r}(|X^*|)\right\|^\frac{1}{2}. \end{align*} \end{corollary} \begin{corollary} Let $X, Y\in {\mathbb B}({\mathscr H})$ and $0\leq p \leq1$. 
Then% \begin{align*} \omega^{\frac{r}{2}}\left( XY\right) \leq 2^{r-2} \left\| | X |^{2rp} + | Y^* |^{2r(1-p)} \right\|^{\frac{1}{2}}\left\|| Y |^{2rp} + | X^* |^{2r(1-p)} \right\|^{\frac{1}{2}} \end{align*} and \begin{align*} \omega^{\frac{r}{2}}\left( XY\right) \leq 2^{r-2} \left\| | X |^{2rp} + | Y^* |^{2rp} \right\|^{\frac{1}{2}}\left\|| Y |^{2r(1-p)} + | X^* |^{2r(1-p)} \right\|^{\frac{1}{2}} \end{align*} \bigskip for $r\geq 1$. \end{corollary} \begin{proof} It follows from the power inequality $\omega^{\frac{1}{2}}\left( T^{2}\right) \leq \omega\left( T\right)$ that \begin{equation*} \omega^{\frac{1}{2}}\left( T^{2}\right) =\omega^{\frac{1}{2}}\left( \left[ \begin{array}{cc} XY & 0 \\ 0 &YX% \end{array}% \right] \right) =\max \left\{ \omega^{\frac{1}{2}}\left( XY\right) ,\omega^{\frac{1}{2}% }\left( YX\right) \right\}. \label{7} \end{equation*}% The required result follows from Corollary \ref{corol1}. \end{proof} \begin{corollary} Let $X, Y\in {\mathbb B}({\mathscr H})$ and $r\geq 1$% . Then% \begin{equation*} \left\Vert X\pm Y^{\ast }\right\Vert ^{r}\leq 2^{2r-2}\left\Vert \left\vert X\right\vert ^{r}+\left\vert Y^{\ast }\right\vert ^{r}\right\Vert ^{\frac{1}{2}% }\left\Vert \left\vert Y\right\vert ^{r}+\left\vert X^{\ast }\right\vert ^{r}\right\Vert ^{\frac{1}{2}}. \end{equation*}% In particular, if $X$\ and $Y$\ are normal operators, then% \begin{equation} \left\Vert X\pm Y\right\Vert ^{r}\leq 2^{2r-2}\left\Vert \left\vert X\right\vert ^{r}+\left\vert Y\right\vert ^{r}\right\Vert . 
\label{8} \end{equation} \end{corollary} \begin{proof} Applying Lemma \ref{1}(a) and Corollary \ref{corol1} (for $p=\frac{1}{2}$), we have% \begin{eqnarray*} \left\Vert X+Y^{\ast }\right\Vert ^{r} &=&\left\Vert T+T^{\ast }\right\Vert ^{r} \\ &\leq &2^{r}\underset{\theta \in \mathbb{R} }{\max }\left\Vert \textrm{Re}\left( e^{i\theta }T\right) \right\Vert ^{r} \\ &=&2^{r}\omega^{r}\left( T\right) \\ &\leq &2^{2r-2}\left\Vert \left\vert X\right\vert ^{r}+\left\vert Y^{\ast }\right\vert ^{r}\right\Vert ^{\frac{1}{2}}\left\Vert \left\vert Y\right\vert ^{r}+\left\vert X^{\ast }\right\vert ^{r}\right\Vert ^{\frac{1}{% 2}} \end{eqnarray*}% where $T=\left[ \begin{array}{cc} 0 & X \\ Y & 0% \end{array}% \right] .$ Similarly, \begin{eqnarray*} \left\Vert X-Y^{\ast }\right\Vert ^{r} &=&\left\Vert T-T^{\ast }\right\Vert ^{r} \\ &\leq &2^{r}\underset{\theta \in \mathbb{R} }{\max }\left\Vert \textrm{Im}\left( e^{i\theta }T\right) \right\Vert ^{r} \\ &=&2^{r}\omega^{r}\left( T\right) \\ &\leq &2^{2r-2}\left\Vert \left\vert X\right\vert ^{r}+\left\vert Y^{\ast }\right\vert ^{r}\right\Vert ^{\frac{1}{2}}\left\Vert \left\vert Y\right\vert ^{r}+\left\vert X^{\ast }\right\vert ^{r}\right\Vert ^{\frac{1}{% 2}}. \end{eqnarray*}% Hence we get the desired result. For the particular case, observe that $\left\vert Y^{\ast }\right\vert =\left\vert Y\right\vert $ and $|X^{\ast }|=|X|.$ \end{proof} \begin{remark} It should be mentioned here that inequality $\left( \text{\ref{8}}\right) $, which has been given earlier, is a generalized form of the well-known inequality (see \cite{Bour}): if $X$ and $Y$ are normal operators, then% \begin{equation}\label{normal} \left\Vert X+Y\right\Vert \leq \left\Vert \left\vert X\right\vert +\left\vert Y\right\vert \right\Vert .
\end{equation}% The normality of $X$ and $Y$ is necessary; that is, Inequality \eqref{normal} is not true for arbitrary operators $X$ and $Y$; see \cite{khal}. \end{remark} \bigskip In the next theorem, we show another upper bound for the numerical radius involving off-diagonal operator matrices. \begin{theorem}\label{main11} Let $T=\left[\begin{array}{cc} 0&X\\ Y&0 \end{array}\right]\in {\mathbb B}({\mathscr H_1\oplus\mathscr H_2})$, $r\geq 1$ and $f$, $g$ be nonnegative continuous functions on $[0, \infty)$ satisfying the relation $f(t)g(t)=t\,(t\in[0, \infty))$. Then \begin{align*} \omega^{2r}(T)\leq 4^{r-2}\left(\frac{\left\|\left(f^{2r}(|X|)+g^{2r}(|Y^*|)\right)^p\right\|}{p^2} +\frac{\left\|\left(f^{2r}(|Y|)+g^{2r}(|X^*|)\right)^q\right\|}{q^2}\right) \end{align*} and \begin{align*} \omega^{2r}(T)\leq 4^{r-2}\left(\frac{\left\|\left(f^{2r}(|X|)+f^{2r}(|Y^*|)\right)^p\right\|}{p^2} +\frac{\left\|\left(g^{2r}(|Y|)+g^{2r}(|X^*|)\right)^q\right\|}{q^2}\right), \end{align*} where $\frac{1}{p}+\frac{1}{q}=1$ and $p\geq1$.
\end{theorem} \begin{proof} If $\mathbf{x}=\left[\begin{array}{cc} x_1\\ x_2 \end{array}\right] \in {\mathscr H_1\oplus\mathscr H_2}$ is a unit vector, then by a similar argument to the proof of Theorem \ref{main1} we have \begin{align}\label{remark1} &|\left\langle T\mathbf{x}, \mathbf{x} \right\rangle |^{r}\nonumber\\& =|\left\langle Xx_2, x_1 \right\rangle+\left\langle Yx_1, x_2 \right\rangle |^{r}\nonumber\\& \leq\left(|\left\langle Xx_2, x_1 \right\rangle|+|\left\langle Yx_1, x_2 \right\rangle |\right)^{r} \qquad (\textrm {by the triangular inequality})\nonumber\\& \leq\frac{2^r}{2}\left(|\left\langle Xx_2, x_1 \right\rangle|^r+|\left\langle Yx_1, x_2 \right\rangle |^{r}\right) \qquad (\textrm {by the convexity\,} f(t)=t^r)\nonumber\\& \leq\frac{2^r}{2}\Big(\left(\left\langle f^2(|X|)x_2, x_2 \right\rangle^\frac{1}{2}\left\langle g^2(|X^*|)x_1, x_1 \right\rangle^\frac{1}{2}\right)^r \nonumber\\&\qquad+\left(\left\langle f^2(|Y|)x_1, x_1 \right\rangle^\frac{1}{2}\left\langle g^2(|Y^*|)x_2, x_2 \right\rangle^\frac{1}{2} \right)^{r}\Big) \qquad(\textrm {by Lemma\,\,}\ref{5})\nonumber\\&\leq\frac{2^r}{2}\left(\left\langle f^{2r}(|X|)x_2, x_2 \right\rangle^\frac{1}{2}\left\langle g^{2r}(|X^*|)x_1, x_1 \right\rangle^\frac{1}{2} +\left\langle f^{2r}(|Y|)x_1, x_1 \right\rangle^\frac{1}{2}\left\langle g^{2r}(|Y^*|)x_2, x_2 \right\rangle^\frac{1}{2}\right)\nonumber\\& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad (\textrm {by Lemma\,\,\ref{3}(a)})\nonumber\\& \leq\frac{2^r}{2}\left(\left\langle f^{2r}(|X|)x_2, x_2 \right\rangle+\left\langle g^{2r}(|Y^*|)x_2, x_2 \right\rangle\right)^\frac{1}{2}\nonumber \\&\,\,\,\times\left(\left\langle f^{2r}(|Y|)x_1, x_1 \right\rangle+\left\langle g^{2r}(|X^*|)x_1, x_1 \right\rangle\right)^\frac{1}{2} \,\,(\textrm {by the Cauchy-Schwarz inequality})\nonumber\\& =\frac{2^r}{2}\left\langle (f^{2r}(|X|)+g^{2r}(|Y^*|))x_2, x_2 \right\rangle^\frac{1}{2} \left\langle (f^{2r}(|Y|)+g^{2r}(|X^*|))x_1, x_1 \right\rangle^\frac{1}{2}
\nonumber\\& \leq\frac{2^r}{2}\left(\frac{\left\langle (f^{2r}(|X|)+g^{2r}(|Y^*|))x_2, x_2 \right\rangle^\frac{p}{2}}{p} +\frac{ \left\langle (f^{2r}(|Y|)+g^{2r}(|X^*|))x_1, x_1 \right\rangle^\frac{q}{2}}{q}\right)\nonumber\\& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad (\textrm {by the Young inequality}) \nonumber\\& \leq\frac{2^r}{2}\left(\frac{\left\langle (f^{2r}(|X|)+g^{2r}(|Y^*|))^px_2, x_2 \right\rangle^\frac{1}{2}}{p} +\frac{ \left\langle (f^{2r}(|Y|)+g^{2r}(|X^*|))^qx_1, x_1 \right\rangle^\frac{1}{2}}{q}\right)\nonumber\\& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad(\textrm {by Lemma\,\,\ref{3}(a)}) \nonumber \\&\leq\frac{2^r}{2}\left(\frac{\left\|\left(f^{2r}(|X|)+g^{2r}(|Y^*|)\right)^p\right\|^\frac{1}{2}}{p}\|x_2\| +\frac{ \left\|\left(f^{2r}(|Y|)+g^{2r}(|X^*|)\right)^q\right\|^\frac{1}{2}}{q}\|x_1\|\right). \end{align} Let $\alpha=\frac{\left\|\left(f^{2r}(|X|)+g^{2r}(|Y^*|)\right)^p\right\|^\frac{1}{2}}{p}$ and $\beta=\frac{\left\|\left(f^{2r}(|Y|)+g^{2r}(|X^*|)\right)^q\right\|^\frac{1}{2}}{q}$. It follows from $$\underset{\|x_1\|^2+\|x_2\|^2=1}{\max}(\alpha\|x_2\|+\beta\|x_1\|)=\underset{\theta \in [0,2\pi] }{\max}(\alpha\sin\theta+\beta\cos\theta)=\sqrt{\alpha^2+\beta^2}$$ and Inequality \eqref{remark1} that \begin{align*} &|\left\langle T\mathbf{x}, \mathbf{x} \right\rangle |^{r}\nonumber\\& \leq\frac{2^r}{2}\left(\frac{\left\|\left(f^{2r}(|X|)+g^{2r}(|Y^*|)\right)^p\right\|}{p^2} +\frac{ \left\|\left(f^{2r}(|Y|)+g^{2r}(|X^*|)\right)^q\right\|}{q^2}\right)^\frac{1}{2}. \end{align*} Taking the supremum over all unit vectors $\mathbf{x}\in \mathscr {H}_1\oplus \mathscr {H}_2$ we get the first inequality. Now, according to inequality \eqref{main1eq} and the same argument as in the proof of the first inequality, we obtain the second inequality.
\end{proof} \begin{remark} If $T=\left[\begin{array}{cc} 0&X\\ Y&0 \end{array}\right]\in {\mathbb B}({\mathscr H_1\oplus\mathscr H_2})$ and $\frac{1}{p}+\frac{1}{q}=1$, then by using Theorem \ref{main1} and the Young inequality we obtain the inequalities \begin{align*} \omega^{r}(T)\leq 2^{r-2}\left(\frac{\left\|f^{2r}(|X|)+g^{2r}(|Y^*|)\right\|^\frac{p}{2}}{p}+\frac{\left\|f^{2r}(|Y|)+g^{2r}(|X^*|)\right\|^\frac{q}{2}}{q}\right) \end{align*} and \begin{align*} \omega^{r}(T)\leq 2^{r-2}\left(\frac{\left\|f^{2r}(|X|)+f^{2r}(|Y^*|)\right\|^\frac{p}{2}}{p}+\frac{\left\|g^{2r}(|Y|)+g^{2r}(|X^*|)\right\|^\frac{q}{2}}{q}\right), \end{align*} where $r\geq 1$ and $f$, $g$ are nonnegative continuous functions on $[0, \infty)$ satisfying the relation $f(t)g(t)=t\,(t\in[0, \infty))$. Theorem \ref{main11} provides some further upper bounds for $\omega(T)$. \end{remark} \bigskip In the special case of Theorem \ref{main11} for $Y=X$ and $p=q=2$, we have the next result. \begin{corollary} Let $X\in {\mathbb B}({\mathscr H})$, $r\geq 1$ and $f$, $g$ be nonnegative continuous functions on $[0, \infty)$ satisfying the relation $f(t)g(t)=t\,(t\in[0, \infty))$. Then \begin{align*} \omega^{2r}(X)\leq 2^{2r-3}\left\|\left(f^{2r}(|X|)+g^{2r}(|X^*|)\right)^2\right\| \end{align*} and \begin{align*} \omega^{2r}(X)\leq 2^{2r-4}\left(\left\|\left(f^{2r}(|X|)+f^{2r}(|X^*|)\right)^2\right\|+\left\|\left(g^{2r}(|X|)+g^{2r}(|X^*|)\right)^2\right\|\right). \end{align*} \end{corollary} \bigskip Applying Inequality \eqref{12} we obtain the following theorem. \begin{theorem}\label{main3} Let $T=\left[\begin{array}{cc} 0&X\\ Y&0 \end{array}\right]\in {\mathbb B}({\mathscr H_1\oplus\mathscr H_2})$ and $f$, $g$ be nonnegative continuous functions on $[0, \infty)$ satisfying the relation $f(t)g(t)=t$ $(t\in [0, \infty))$.
Then for $r\geq 1$ {\footnotesize\begin{align*} \omega^{r}(T)\leq 2^{r-2}\left(\left\|f^{2r}(|X|)+g^{2r}(|Y^*|)\right\|+\left\|f^{2r}(|Y|)+g^{2r}(|X^*|)\right\|\right)-2^{r-2}\inf_{\|(x_1,x_2)\|=1} \zeta (x_1,x_2), \end{align*}} where {\footnotesize\begin{align*} \zeta (x_1,x_2)=\left(\left\langle \left(f^{2r}(|X|)+g^{2r}(|Y^*|)\right)x_2,x_2\right\rangle^\frac{1}{2}-\left\langle \left(f^{2r}(|Y|)+g^{2r}(|X^*|)\right)x_1,x_1\right\rangle^\frac{1}{2}\right)^2. \end{align*}} \end{theorem} \begin{proof} Let $\mathbf{x}=\left[\begin{array}{cc} x_1\\ x_2 \end{array}\right] \in {\mathscr H_1\oplus\mathscr H_2}$ be a unit vector. Then {\begin{align*} |\langle T\mathbf{x}&, \mathbf{x}\rangle|^{r} \\& = |\langle Xx_2,x_1\rangle+\langle Yx_1, x_2\rangle|^{r} \\&\leq\left(|\langle Xx_2,x_1\rangle|+|\langle Yx_1, x_2\rangle|\right)^{r}\qquad(\textrm {by the triangle inequality}) \\&\leq\frac{2^r}{2}\left(|\langle Xx_2,x_1\rangle|^r+|\langle Yx_1, x_2\rangle|^r\right)\qquad(\textrm {by the convexity of\,} f(t)=t^r) \\&\leq\frac{2^r}{2}\left(\langle f^2(|X|)x_2,x_2\rangle^\frac{r}{2}\langle g^2(|X^*|)x_1,x_1\rangle^\frac{r}{2} +\langle f^2(|Y|)x_1,x_1\rangle^\frac{r}{2}\langle g^2(|Y^*|)x_2,x_2\rangle^\frac{r}{2}\right) \\&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad(\textrm {by Lemma\,\,}\ref{5}) \\&\leq\frac{2^r}{2}\left(\langle f^{2r}(|X|)x_2,x_2\rangle^\frac{1}{2}\langle g^{2r}(|X^*|)x_1,x_1\rangle^\frac{1}{2} +\langle f^{2r}(|Y|)x_1,x_1\rangle^\frac{1}{2}\langle g^{2r}(|Y^*|)x_2,x_2\rangle^\frac{1}{2}\right) \\&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad(\textrm {by Lemma\,\,\ref{3}(a)}) \\&\leq\frac{2^r}{2}\left(\langle f^{2r}(|X|)x_2,x_2\rangle+\langle g^{2r}(|Y^*|)x_2,x_2\rangle\right)^\frac{1}{2}\left(\langle f^{2r}(|Y|)x_1,x_1\rangle+\langle g^{2r}(|X^*|)x_1,x_1\rangle\right)^\frac{1}{2} \\&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad(\textrm {by the Cauchy-Schwarz inequality}) \\&=\frac{2^r}{2}\langle \left(f^{2r}(|X|)+g^{2r}(|Y^*|)\right)x_2,x_2\rangle^\frac{1}{2}\langle \left(f^{2r}(|Y|)+g^{2r}(|X^*|)\right)x_1,x_1\rangle^\frac{1}{2}\\& \leq\frac{2^r}{4}\left(\langle \left(f^{2r}(|X|)+g^{2r}(|Y^*|)\right)x_2,x_2\rangle+\langle
\left(f^{2r}(|Y|)+g^{2r}(|X^*|)\right)x_1,x_1\rangle\right)\\& \,\,\,\,\,-\frac{2^r}{4}\left(\langle \left(f^{2r}(|X|)+g^{2r}(|Y^*|)\right)x_2,x_2\rangle^\frac{1}{2}-\langle \left(f^{2r}(|Y|)+g^{2r}(|X^*|)\right)x_1,x_1\rangle^\frac{1}{2}\right)^2\\& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad(\textrm {by Inequality\,\,}\eqref{12})\\& \leq\frac{2^r}{4}\left(\left\|f^{2r}(|X|)+g^{2r}(|Y^*|)\right\|+\left\|f^{2r}(|Y|)+g^{2r}(|X^*|)\right\|\right)\\& \,\,\,\,\,-\frac{2^r}{4}\left(\langle \left(f^{2r}(|X|)+g^{2r}(|Y^*|)\right)x_2,x_2\rangle^\frac{1}{2}-\langle \left(f^{2r}(|Y|)+g^{2r}(|X^*|)\right)x_1,x_1\rangle^\frac{1}{2}\right)^2. \end{align*}} Taking the supremum over all unit vectors $\mathbf{x}=\left[\begin{array}{cc} x_1\\ x_2 \end{array}\right] \in {\mathscr H_1\oplus\mathscr H_2}$ we get the desired inequality. \end{proof} \bigskip If we put $Y=X$ in Theorem \ref{main3}, then we get the next result. \begin{corollary} Let $X\in {\mathbb B}({\mathscr H})$ and $f$, $g$ be nonnegative continuous functions on $[0, \infty)$ satisfying the relation $f(t)g(t)=t$ $(t\in [0, \infty))$. Then for $r\geq 1$ {\footnotesize\begin{align*} \omega^{r}(X)\leq 2^{r-1}\|f^{2r}(|X|)+g^{2r}(|X^*|)\|-2^{r-2}\inf_{\|(x_1,x_2)\|=1} \zeta (x_1,x_2), \end{align*}} where {\footnotesize\begin{align*} \zeta (x_1,x_2)=\left(\left\langle \left(f^{2r}(|X|)+g^{2r}(|X^*|)\right)x_2,x_2\right\rangle^\frac{1}{2}-\left\langle \left(f^{2r}(|X|)+g^{2r}(|X^*|)\right)x_1,x_1\right\rangle^\frac{1}{2}\right)^2.
\end{align*}} \end{corollary} \begin{remark} If $\mathbf{x}=\left[\begin{array}{cc} x_1\\ x_2 \end{array}\right] \in {\mathscr H_1\oplus\mathscr H_2}$ is a unit vector, then by using the inequality \begin{align*} |\langle T\mathbf{x}&, \mathbf{x}\rangle|^{r} \\& = |\left\langle Xx_2,x_1\right\rangle+\left\langle Yx_1, x_2\right\rangle|^{r} \\&\leq\left(|\left\langle Xx_2,x_1\right\rangle|+|\left\langle Yx_1, x_2\right\rangle|\right)^{r} \\&\leq\frac{2^r}{2}\left(|\left\langle Xx_2,x_1\right\rangle|^r+|\left\langle Yx_1, x_2\right\rangle|^r\right) \\&\leq\frac{2^r}{2}\left(\left\langle f^2(|X|)x_2,x_2\right\rangle^\frac{r}{2}\left\langle g^2(|X^*|)x_1,x_1\right\rangle^\frac{r}{2}+\left\langle g^2(|Y|)x_1,x_1\right\rangle^\frac{r}{2}\left\langle f^2(|Y^*|)x_2,x_2\right\rangle^\frac{r}{2}\right) \end{align*} and the same argument as in the proof of Theorem \ref{main3} we get the following inequality {\footnotesize\begin{align*} \omega^{r}(T)\leq \frac{2^r}{4}\left(\|f^{2r}(|X|)+f^{2r}(|Y^*|)\|+\|g^{2r}(|Y|)+g^{2r}(|X^*|)\|\right)-\frac{2^r}{4}\inf_{\|(x_1,x_2)\|=1} \zeta (x_1,x_2), \end{align*}} where $T=\left[\begin{array}{cc} 0&X\\ Y&0 \end{array}\right]\in {\mathbb B}({\mathscr H_1\oplus\mathscr H_2})$, $f$, $g$ are nonnegative continuous functions on $[0, \infty)$ satisfying the relation $f(t)g(t)=t$ $(t\in [0, \infty))$, $r\geq 1$ and {\footnotesize\begin{align*} \zeta (x_1,x_2)=\left(\left\langle \left(f^{2r}(|X|)+f^{2r}(|Y^*|)\right)x_2,x_2\right\rangle^\frac{1}{2}-\left\langle \left(g^{2r}(|Y|)+g^{2r}(|X^*|)\right)x_1,x_1\right\rangle^\frac{1}{2}\right)^2. \end{align*}} \end{remark} \section{Some upper bounds for $\omega_p$} \bigskip In this section, we obtain some upper bounds for $\omega_p$. We first show the following theorem.
\begin{theorem}\label{main4} Let $ \widetilde{S}_{i}=\left[\begin{array}{cc} A_i&0\\ 0&B_i \end{array}\right], \widetilde{T}_{i}=\left[\begin{array}{cc} 0&X_{i}\\ Y_{i}&0 \end{array}\right]$ and $ \widetilde{U}_{i}=\left[\begin{array}{cc} C_i&0\\ 0&D_i \end{array}\right] $ be operator matrices in $ {\mathbb B}({\mathscr H_1\oplus\mathscr H_2})$$\,\,(1\leq i\leq n)$ such that $A_i, B_i, C_i$ and $D_i$ are contractions. Then \begin{align*} \omega_{p}^{p}&({\widetilde{S}}^*_1\widetilde{T}_{1}\widetilde{U}_1, \cdots, {\widetilde{S}}^*_n\widetilde{T}_{n}\widetilde{U}_n)\\&\leq 2^{p-2}\sum_{i=1}^{n}\left\|D^*_if^{2p}(|X_{i}|)D_i+ B^*_ig^{2p}(|Y^*_{i}|)B_i\right\|^\frac{1}{2}\left\|C^*_if^{2p}(|Y_{i}|)C_i+A^*_ig^{2p}(|X^*_{i}|)A_i\right\|^\frac{1}{2} \end{align*} and \begin{align*} \omega_{p}^{p}&(\widetilde{S}^*_1\widetilde{T}_{1}\widetilde{U}_1, \cdots, \widetilde{S}^*_n\widetilde{T}_{n}\widetilde{U}_n)\\&\leq 2^{p-2}\sum_{i=1}^{n}\left\|D^*_if^{2p}(|X_{i}|)D_i+ B^*_if^{2p}(|Y^*_{i}|)B_i\right\|^\frac{1}{2}\left\|C^*_ig^{2p}(|Y_{i}|)C_i+A^*_ig^{2p}(|X^*_{i}|)A_i\right\|^\frac{1}{2}, \end{align*} where $p\geq1$.
\end{theorem} \begin{proof} For any unit vector $\mathbf{x}=\left[\begin{array}{cc} x_1\\ x_2 \end{array}\right] \in {\mathscr H_1\oplus\mathscr H_2}$ we have {\footnotesize\begin{align*} &\sum_{i=1}^{n}|\langle \widetilde{S}^*_i\widetilde{T}_{i}\widetilde{U}_i\mathbf{x}, \mathbf{x}\rangle|^{p} \\&=\sum_{i=1}^{n}|\langle A^*_iX_{i}D_ix_2, x_1\rangle+\langle B_i^*Y_{i}C_ix_1, x_2\rangle|^{p} \\&\leq \sum_{i=1}^{n}\left(|\langle A^*_iX_{i}D_ix_2, x_1\rangle|+|\langle B_i^*Y_{i}C_ix_1, x_2\rangle|\right)^{p} \qquad (\textrm {by the triangle inequality}) \\&\leq\frac{2^p}{2}\sum_{i=1}^{n}\left(|\langle A^*_iX_{i}D_ix_2, x_1\rangle|^{p}+|\langle B_i^*Y_{i}C_ix_1, x_2\rangle|^{p}\right) \qquad (\textrm {by the convexity of\,} f(t)=t^p) \\&=\frac{2^p}{2}\sum_{i=1}^{n}\left(|\langle X_{i}D_ix_2, A_ix_1\rangle|^{p}+|\langle Y_{i}C_ix_1, B_ix_2\rangle|^{p}\right) \\&\leq\frac{2^p}{2}\sum_{i=1}^{n}\Big(\langle f^2(|X_{i}|)D_ix_2, D_ix_2\rangle^\frac{p}{2}\langle g^2(|X^*_{i}|)A_ix_1, A_ix_1\rangle^\frac{p}{2} \\&\,\,\,\,\,+\langle f^2(|Y_{i}|)C_ix_1, C_ix_1\rangle^\frac{p}{2}\langle g^2(|Y^*_{i}|)B_ix_2, B_ix_2\rangle^\frac{p}{2}\Big)\qquad\qquad(\textrm {by Lemma\,\,}\ref{5}) \\&\leq\frac{2^p}{2}\sum_{i=1}^{n}\Big(\langle f^{2p}(|X_{i}|)D_ix_2, D_ix_2\rangle^\frac{1}{2}\langle g^{2p}(|X^*_{i}|)A_ix_1, A_ix_1\rangle^\frac{1}{2} \\&\,\,\,\,\,+\langle f^{2p}(|Y_{i}|)C_ix_1, C_ix_1\rangle^\frac{1}{2}\langle g^{2p}(|Y^*_{i}|)B_ix_2, B_ix_2\rangle^\frac{1}{2}\Big)\qquad\qquad(\textrm {by Lemma\,\,\ref{3}(a)}) \\&=\frac{2^p}{2}\sum_{i=1}^{n}\Big(\langle D^*_if^{2p}(|X_{i}|)D_ix_2, x_2\rangle^\frac{1}{2}\langle A^*_ig^{2p}(|X^*_{i}|)A_ix_1, x_1\rangle^\frac{1}{2} \\&\,\,\,\,\,+\langle C^*_if^{2p}(|Y_{i}|)C_ix_1, x_1\rangle^\frac{1}{2}\langle B^*_ig^{2p}(|Y^*_{i}|)B_ix_2, x_2\rangle^\frac{1}{2}\Big) \\&\leq\frac{2^p}{2}\sum_{i=1}^{n}\left(\left\langle D^*_if^{2p}(|X_{i}|)D_ix_2, x_2\right\rangle+\left\langle B^*_ig^{2p}(|Y^*_{i}|)B_ix_2, x_2\right\rangle\right)^\frac{1}{2}\\& \,\,\,\,\,\times\left(\left\langle C^*_if^{2p}(|Y_{i}|)C_ix_1, x_1\right\rangle+\left\langle
A^*_ig^{2p}(|X^*_{i}|)A_ix_1, x_1\right\rangle\right)^\frac{1}{2}\,\,(\textrm {by the Cauchy-Schwarz inequality}) \end{align*} \begin{align*} &=\frac{2^p}{2}\sum_{i=1}^{n}\Big(\left\langle \left(D^*_if^{2p}(|X_{i}|)D_i+ B^*_ig^{2p}(|Y^*_{i}|)B_i\right)x_2, x_2\right\rangle\Big)^\frac{1}{2}\\&\,\,\,\,\,\times\Big(\left\langle \left(C^*_if^{2p}(|Y_{i}|)C_i+A^*_ig^{2p}(|X^*_{i}|)A_i\right)x_1, x_1\right\rangle\Big)^\frac{1}{2} \\&\leq\frac{2^p}{2}\sum_{i=1}^{n}\left\|D^*_if^{2p}(|X_{i}|)D_i+ B^*_ig^{2p}(|Y^*_{i}|)B_i\right\|^\frac{1}{2}\left\|C^*_if^{2p}(|Y_{i}|)C_i+A^*_ig^{2p}(|X^*_{i}|)A_i\right\|^\frac{1}{2}\|x_1\|\|x_2\| \\&\leq\frac{2^p}{2}\sum_{i=1}^{n}\left\|D^*_if^{2p}(|X_{i}|)D_i+ B^*_ig^{2p}(|Y^*_{i}|)B_i\right\|^\frac{1}{2}\\&\,\,\,\,\,\times\left\|C^*_if^{2p}(|Y_{i}|)C_i+A^*_ig^{2p}(|X^*_{i}|)A_i\right\|^\frac{1}{2} \left(\frac{\|x_1\|^2+\|x_2\|^2}{2}\right)\qquad(\textrm {by the AM-GM inequality})\\& =\frac{2^p}{4}\sum_{i=1}^{n}\left\|D^*_if^{2p}(|X_{i}|)D_i+ B^*_ig^{2p}(|Y^*_{i}|)B_i\right\|^\frac{1}{2}\left\|C^*_if^{2p}(|Y_{i}|)C_i+A^*_ig^{2p}(|X^*_{i}|)A_i\right\|^\frac{1}{2}. \end{align*}} Taking the supremum over all unit vectors $\mathbf{x}\in {\mathscr H_1\oplus\mathscr H_2}$ we obtain the first inequality.
Using the inequality \begin{align*} &\sum_{i=1}^{n}|\left\langle \widetilde{S}^*_i\widetilde{T}_{i}\widetilde{U}_i\mathbf{x}, \mathbf{x}\right\rangle|^{p} \\&=\sum_{i=1}^{n}|\left\langle A^*_iX_{i}D_ix_2, x_1\right\rangle+\left\langle B_i^*Y_{i}C_ix_1, x_2\right\rangle|^{p} \\&\leq \sum_{i=1}^{n}\left(|\left\langle A^*_iX_{i}D_ix_2, x_1\right\rangle|+|\left\langle B_i^*Y_{i}C_ix_1, x_2\right\rangle|\right)^{p} \qquad (\textrm {by the triangle inequality}) \\&\leq\frac{2^p}{2}\sum_{i=1}^{n}\left(|\left\langle A^*_iX_{i}D_ix_2, x_1\right\rangle|^{p}+|\left\langle B_i^*Y_{i}C_ix_1, x_2\right\rangle|^{p}\right) \qquad (\textrm {by the convexity of\,} f(t)=t^p) \\&=\frac{2^p}{2}\sum_{i=1}^{n}\left(|\left\langle X_{i}D_ix_2, A_ix_1\right\rangle|^{p}+|\left\langle Y_{i}C_ix_1, B_ix_2\right\rangle|^{p}\right) \\&\leq\frac{2^p}{2}\sum_{i=1}^{n}\Big(\left\langle f^2(|X_{i}|)D_ix_2, D_ix_2\right\rangle^\frac{p}{2}\left\langle g^2(|X^*_{i}|)A_ix_1, A_ix_1\right\rangle^\frac{p}{2} \\&\,\,\,\,\,+\left\langle g^2(|Y_{i}|)C_ix_1, C_ix_1\right\rangle^\frac{p}{2}\left\langle f^2(|Y^*_{i}|)B_ix_2, B_ix_2\right\rangle^\frac{p}{2}\Big)\\& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad(\textrm {by Lemma\,\,}\ref{5}) \end{align*} and arguing as in the proof of the first inequality, we reach the second inequality. \end{proof} \bigskip In the special case of Theorem \ref{main4} for $A_i=B_i=C_i=D_i=I\,\,(1\leq i\leq n)$ we have the next result. \begin{corollary} Let $T_{i}=\left[\begin{array}{cc} 0&X_{i}\\ Y_{i}&0 \end{array}\right] \in {\mathbb B}({\mathscr H_1\oplus\mathscr H_2})\,\,(1\leq i\leq n)$.
Then \begin{align*} \omega_{p}^{p}(T_{1}, T_{2},\cdots,T_{n})\leq 2^{p-2}\sum_{i=1}^{n}\left\|f^{2p}(|X_{i}|)+g^{2p}(|Y^*_{i}|)\right\|^\frac{1}{2} \left\|f^{2p}(|Y_{i}|)+g^{2p}(|X^*_{i}|)\right\|^\frac{1}{2} \end{align*} and \begin{align*} \omega_{p}^{p}(T_{1}, T_{2},\cdots,T_{n})\leq 2^{p-2}\sum_{i=1}^{n}\left\|f^{2p}(|X_{i}|)+f^{2p}(|Y^*_{i}|)\right\|^\frac{1}{2} \left\|g^{2p}(|Y_{i}|)+g^{2p}(|X^*_{i}|)\right\|^\frac{1}{2} \end{align*} for $p\geq 1$. \end{corollary} \bigskip If we put $f(t)=g(t)=t^{\frac{1}{2}}\,(t\in[0,\infty))$, then we get the next result. \begin{corollary} Let $T_{i}=\left[\begin{array}{cc} 0&X_{i}\\ Y_{i}&0 \end{array}\right] \in {\mathbb B}({\mathscr H_1\oplus\mathscr H_2})\,\,(1\leq i\leq n)$. Then \begin{align*} \omega_{p}^{p}(T_{1}, T_{2},\cdots,T_{n})\leq 2^{p-2}\sum_{i=1}^{n}\left\||X_{i}|^p+|Y^*_{i}|^p\right\|^\frac{1}{2} \left\||Y_{i}|^p+|X^*_{i}|^p\right\|^\frac{1}{2} \end{align*} for $p\geq 1$. \end{corollary} \begin{theorem} \label{th1}Let $T_{i}=\left[ \begin{array}{cc} A_{i} & B_{i} \\ C_{i} & D_{i}% \end{array}% \right]\in {\mathbb B}({\mathscr H_1}\oplus{\mathscr H_2}) \,\,(1\leq i\leq n)$ and $p\geq 1$. Then \begin{align*} \omega _{p}^{p}(T_{1},&\ldots ,T_{n})\\&\leq 2^{-p}\sum_{i=1}^{n}\left( \omega \left( A_{i}\right) +\omega \left( D_{i}\right) +\sqrt{\left( \omega \left( A_{i}\right) -\omega \left( D_{i}\right) \right) ^{2}+\left( \left\Vert B_{i}\right\Vert +\left\Vert C_{i}\right\Vert \right) ^{2}}% \right) ^{p}. \label{el2} \end{align*}% In particular, \begin{equation*} \omega \left(\left[ \begin{array}{cc} A & B \\ C & D% \end{array}% \right] \right)\leq \frac{1}{2}\left( \omega \left( A\right) +\omega \left( D\right) +\sqrt{\left( \omega \left( A\right) -\omega \left( D\right) \right) ^{2}+\left( \left\Vert B\right\Vert +\left\Vert C\right\Vert \right) ^{2}}\right) .
\end{equation*} \end{theorem} \begin{proof} Let $\mathbf{x}=\left[ \begin{array}{c} x_1 \\ x_2% \end{array}% \right] $ be a unit vector in ${\mathscr H_1\oplus\mathscr H_2}$. Then \begin{align*} \left\vert \left\langle T_{i}\mathbf{x},\mathbf{x}\right\rangle \right\vert & =\left\vert \left\langle \left[ \begin{array}{cc} A_{i} & B_{i} \\ C_{i} & D_{i}% \end{array}% \right] \left[ \begin{array}{c} x_1 \\ x_2% \end{array}% \right] ,\left[ \begin{array}{c} x_1 \\ x_2% \end{array}% \right] \right\rangle \right\vert \\ & =\left\vert \left\langle \left[ \begin{array}{c} A_{i}x_1+B_{i}x_2 \\ C_{i}x_1+D_{i}x_2% \end{array}% \right] ,\left[ \begin{array}{c} x_1 \\ x_2% \end{array}% \right] \right\rangle \right\vert \\ & =\left\vert \left\langle A_{i}x_1,x_1\right\rangle +\left\langle B_{i}x_2,x_1\right\rangle +\left\langle C_{i}x_1,x_2\right\rangle +\left\langle D_{i}x_2,x_2\right\rangle \right\vert \\ & \leq\left\vert \left\langle A_{i}x_1,x_1\right\rangle \right\vert +\left\vert \left\langle B_{i}x_2,x_1\right\rangle \right\vert +\left\vert \left\langle C_{i}x_1,x_2\right\rangle \right\vert +\left\vert \left\langle D_{i}x_2,x_2\right\rangle \right\vert . \end{align*}% Thus, \begin{align*} &\omega _{p}^{p}(T_{1},\ldots ,T_{n})\\& =\sup_{\Vert \mathbf{x}\Vert =1}\sum_{i=1}^{n}\left\vert \left\langle T_{i}\mathbf{x},\mathbf{x}\right\rangle \right\vert ^{p} \\ & \leq \sup_{\Vert x_1\Vert ^{2}+\Vert x_2\Vert ^{2}=1}\sum_{i=1}^{n}\left( \left\vert \left\langle A_{i}x_1,x_1\right\rangle \right\vert +\left\vert \left\langle B_{i}x_2,x_1\right\rangle \right\vert +\left\vert \left\langle C_{i}x_1,x_2\right\rangle \right\vert +\left\vert \left\langle D_{i}x_2,x_2\right\rangle \right\vert \right) ^{p} \\ & \leq \sum_{i=1}^{n}\left( \sup_{\Vert x_1\Vert ^{2}+\Vert x_2\Vert ^{2}=1}\left( \left\vert \left\langle A_{i}x_1,x_1\right\rangle \right\vert +\left\vert \left\langle B_{i}x_2,x_1\right\rangle \right\vert +\left\vert \left\langle C_{i}x_1,x_2\right\rangle \right\vert +\left\vert
\left\langle D_{i}x_2,x_2\right\rangle \right\vert \right) \right) ^{p} \\ & \leq \sum_{i=1}^{n}\left( \sup_{\Vert x_1\Vert ^{2}+\Vert x_2\Vert ^{2}=1}\left( \omega \left( A_{i}\right) \left\Vert x_1\right\Vert ^{2}+\omega \left( D_{i}\right) \left\Vert x_2\right\Vert ^{2}+\left( \left\Vert B_{i}\right\Vert +\left\Vert C_{i}\right\Vert \right) \left\Vert x_1\right\Vert \left\Vert x_2\right\Vert \right) \right) ^{p} \\ & =\sum_{i=1}^{n}\left( \sup_{\theta \in \left[ 0,2\pi \right] }\left( \omega \left( A_{i}\right) \cos^2 \theta +\omega \left( D_{i}\right) \sin^2 \theta +\left( \left\Vert B_{i}\right\Vert +\left\Vert C_{i}\right\Vert \right) \cos \theta \sin \theta \right) \right) ^{p} \\ & =2^{-p}\sum_{i=1}^{n}\left( \omega \left( A_{i}\right) +\omega \left( D_{i}\right) +\sqrt{\left( \omega \left( A_{i}\right) -\omega \left( D_{i}\right) \right) ^{2}+\left( \left\Vert B_{i}\right\Vert +\left\Vert C_{i}\right\Vert \right) ^{2}}\right) ^{p}. \end{align*}% This completes the proof. \end{proof} \bigskip For $A_{i}=D_{i}$ and $B_{i}=C_{i}\,\,(1\leq i\leq n)$ we get the following result. \begin{corollary} Let $T_{i}=\left[ \begin{array}{cc} \pm A_{i} & \pm B_{i} \\ \pm B_{i} & \pm A_{i}% \end{array}% \right] \ $be an operator matrix with $A_{i},B_{i}\in \mathbb{B}(\mathscr{H}% ) $ $\,(1\leq i\leq n)$. Then for all $p\geq 1$, \begin{equation*} \omega _{p}^{p}(T_{1},\ldots ,T_{n})\leq \sum_{i=1}^{n}\left( \omega \left( A_{i}\right) +\left\Vert B_{i}\right\Vert \right) ^{p}. \end{equation*}% In particular, if $A,B\in \mathbb{B}(\mathscr{H})$, then% \begin{equation*} \omega \left(\left[ \begin{array}{cc} \pm A & \pm B \\ \pm B & \pm A% \end{array}% \right] \right)\leq \omega \left( A\right) +\left\Vert B\right\Vert . \end{equation*} \end{corollary} \bigskip If we take $B_{i}=C_{i}=0\,\,(1\leq i\leq n)$ in Theorem $\ref{th1}$, then we get the following inequality. 
\begin{corollary} \label{t1}Let $T_{i}=\left[ \begin{array}{cc} A_{i} & 0 \\ 0 & D_{i}% \end{array}% \right]\in{\mathbb B}({\mathscr H_1\oplus\mathscr H_2}) \,\,(1\leq i\leq n)$. Then for all $p\geq 1$, \begin{equation*} \omega _{p}^{p}(T_{1},\ldots ,T_{n})\leq \sum_{i=1}^{n}\max \left( \omega ^{p}\left( A_{i}\right) ,\omega ^{p}\left( D_{i}\right) \right) . \end{equation*} \end{corollary} \bigskip For $C_{i}=D_{i}=0\,\,(1\leq i\leq n)$ we obtain a result that generalizes and refines the inequality $\omega\left( \left[ \begin{array}{cc} A & B \\ 0 & 0% \end{array}% \right] \right) \leq \omega(A)+\frac{\left\Vert B\right\Vert }{2}.$ \begin{corollary} Let $% T_{i}=\left[ \begin{array}{cc} A_{i} & B_{i} \\ 0 & 0% \end{array}% \right]\in{\mathbb B}({\mathscr H_1\oplus\mathscr H_2}) \, \,(1\leq i\leq n)$ and $p\geq 1$. Then \begin{equation*} \omega _{p}^{p}(T_{1},\ldots ,T_{n})\leq 2^{-p}\sum_{i=1}^{n}\left( \omega \left( A_{i}\right) +\sqrt{\omega ^{2}\left( A_{i}\right) +\left\Vert B_{i}\right\Vert ^{2}}\right) ^{p}. \end{equation*}% In particular, \begin{equation*} \omega \left( \left[ \begin{array}{cc} A & B \\ 0 & 0% \end{array}% \right] \right) \leq \frac{1}{2}\left( \omega \left( A\right) +\sqrt{% \omega ^{2}\left( A\right) +\left\Vert B\right\Vert ^{2}}\right) . \end{equation*} \end{corollary} \bigskip If we put $A_{i}=D_{i}=0\,\,(1\leq i\leq n)$, then we deduce the following result. \begin{corollary} Let $T_{i}=\left[ \begin{array}{cc} 0 & B_{i} \\ C_{i} & 0% \end{array}% \right]\in{\mathbb B}({\mathscr H}_1\oplus{\mathscr H}_2) \,(1\leq i\leq n)$ and $p\geq 1$. Then \begin{equation*} \omega _{p}^{p}(T_{1},\ldots ,T_{n})\leq 2^{-p}\sum_{i=1}^{n}\left( \left\Vert B_{i}\right\Vert +\left\Vert C_{i}\right\Vert \right) ^{p}.
\end{equation*}% In particular, if $B\in \mathbb{B}(\mathscr{H}_{2},\mathscr{H}_{1})$ and $% C\in \mathbb{B}(\mathscr{H}_{1},\mathscr{H}_{2})$, then% \begin{equation*} \omega \left( \left[ \begin{array}{cc} 0 & B \\ C & 0% \end{array}% \right] \right) \leq \frac{1}{2}\left( \left\Vert B\right\Vert +\left\Vert C\right\Vert \right). \end{equation*} \end{corollary} \textbf{Acknowledgement.} The first author would like to thank the Tusi Mathematical Research Group (TMRG). \bigskip \bibliographystyle{amsplain}
\section{Introduction} \label{intro} The idea of duality has received considerable attention recently, particularly in the context of string theory. This is a subject with a long history, which may be traced back to Olive and Montonen's conjectured duality in Yang-Mills theory \cite{ovmon}. In the $N=4$ Yang-Mills theory, one has two kinds of particle: small fluctuations in the scalar or Yang-Mills fields, and magnetic monopoles. The small fluctuations couple to the Yang-Mills field like electrically charged particles couple to the Maxwell field. They are therefore regarded as electrically charged elementary states. But the magnetic monopoles, which are the solitons of the theory, can also claim to be regarded as particles. Olive and Montonen conjectured \cite{ovmon} that there was a dual Yang-Mills theory, with coupling constant $g'=1/g$. Monopoles in the dual theory would behave like the elementary electrically charged states of the original theory, and vice versa. This concept of duality was later extended to a lattice of theories related by the discrete group $SL(2,Z)$. There is some evidence that the low energy scattering of monopoles is consistent with what one would expect from this duality, which is called $S$-duality, but no proof has been given that it goes beyond a symmetry of the equations of motion to a symmetry of the full quantum theory. Despite this lack of proof, there has been extensive speculation on how $S$-duality could extend to gravity and string-inspired supergravity theories \cite{sen}. The suggestion is that extreme, non-rotating black holes should be identified as the solitons of the theory. These states do have some particle-like properties, as there are families of electric and magnetic black holes, which fall into multiplets under the action of the global supersymmetry group at infinity. 
The similarity with other solitons has been increased by our recent discovery \cite{entar} that all extreme black holes have zero entropy, as one would expect for elementary particles. However, the original Montonen and Olive idea of duality was supposed to relate electrically charged elementary states, or small fluctuations in the fields, with magnetically charged monopoles, or solitons. But in the gravitational case there are both magnetically and electrically charged solitons. This has led people to try to identify extreme black holes, the solitons, with electrically or magnetically charged elementary states in string theory \cite{rususs,duff}. The only evidence so far is that one can find black holes with the same masses and charges as a certain class of elementary states \cite{duff}. But this is not very surprising, because the masses are determined by the charges and Bogomol'nyi bounds in both cases. Behind all these attempts to extend $S$-duality to extreme black holes is the idea that electrically and magnetically charged black holes behave in a similar way. This is true in the classical theory, because duality between electric and magnetic fields is a symmetry of the equations. This does not, however, imply that it is a symmetry of the quantum theory, as the action is not invariant under duality. The Maxwell action is $F^2 = B^2-E^2$, and it therefore changes sign when magnetic fields are replaced by electric. The purpose of this paper is to show that despite this difference in the action, the semi-classical approximations to the Euclidean path integral for dual electric and magnetic solutions are identical, at least where we have been able to evaluate them. In particular, we show that the rate at which black holes are pair created in cosmological and electromagnetic backgrounds is duality-invariant. We will now define our terms more precisely. It is well known that the Einstein-Maxwell equations exhibit duality. 
One can replace magnetic fields with electric fields and a solution remains a solution. More precisely, if $(g,F)$ are a metric and field tensor that satisfy the field equations, then $(g,*F)$ also satisfy the equations, where $*F$ is the Lorentzian dual of $F$, that is, \begin{equation} \label{Ldual} *F_{\mu\nu} = \frac{1}{2} \epsilon_{\mu\nu\rho\sigma} F^{\rho\sigma}, \end{equation} with $\epsilon_{0123} = \sqrt{-g}$, and $g$ the determinant of the metric. If $F$ represents a magnetic field, $*F$ will represent an electric field, referred to as the dual electric field. In particular, for every magnetically charged black hole solution, there is a corresponding electrically charged black hole solution. This electric-magnetic duality extends to theories with a dilaton. The only difference is that one now takes \begin{equation} *F_{\mu\nu} = \frac{1}{2} e^{-2a\phi} \epsilon_{\mu\nu\rho\sigma} F^{\rho\sigma} \end{equation} and \begin{equation} \phi \to -\phi, \end{equation} where $\phi$ is the dilaton field. We will, however, restrict attention to the duality (\ref{Ldual}) in Einstein-Maxwell theory for the sake of simplicity. Now, if $g_{\mu\nu}$ is a Lorentzian metric, its determinant will be negative, so $\epsilon_{\mu\nu\rho\sigma}$ as defined above will be real. However, if $g_{\mu\nu}$ is a Euclidean metric, its determinant will be positive, and thus $\epsilon_{\mu\nu\rho\sigma}$ will be imaginary. That is, the Lorentzian duality (\ref{Ldual}) takes real magnetic fields to real electric fields in Lorentzian space, but real magnetic fields to imaginary electric fields in Euclidean space. This is consistent, as an electric field that is real in a Lorentzian space is imaginary in its Euclidean continuation. One might therefore think that, in using the Euclidean path integral, one should use Euclidean duality instead of Lorentzian duality, and replace magnetic fields with electric fields that were real in Euclidean space. 
That is, perhaps one should take \begin{equation} \label{Edual} *F_{\mu\nu} = \frac{i}{2} \epsilon_{\mu\nu\rho\sigma} F^{\rho\sigma} \end{equation} instead of (\ref{Ldual}). This duality also has the advantage that it leaves the Maxwell action unchanged. However, it reverses the sign of the energy momentum tensor, so the solutions would have different geometry. That is, if $*F$ is given by (\ref{Edual}), then it is no longer true that $(g,*F)$ satisfy the field equations whenever $(g,F)$ do. In particular, there is no extreme black hole solution with real electric fields in Euclidean space. It seems therefore that if duality is to be a symmetry of black holes, it must be a duality between real electric and magnetic fields in Lorentzian space, rather than in Euclidean space. There is then a difference in action between the dual electric and magnetic solutions. What effect will this have? One of the most interesting applications of the Euclidean path integral approach is the study of semi-classical instabilities, or tunnelling processes. One uses instantons, Euclidean solutions of the field equations, to estimate the rate at which such classically-forbidden tunnelling processes occur. The rate at which a process occurs is just given by the partition function $Z$, defined by \begin{equation} \label{parf} Z = \int d[g] d[A] e^{-I}, \end{equation} where the integral is subject to some appropriate boundary conditions at infinity. When there is a Euclidean solution which satisfies the boundary conditions, we can approximate the integral by the saddle-point, which gives $Z \approx e^{-I}$, where $I$ is the action of the instanton, so it would seem that the difference in action between dual solutions must surely imply a difference in the rate for such processes. 
In particular, the Euclidean black hole solutions can be used as instantons for black hole nucleation or pair creation, and we might therefore think that electrically and magnetically charged black holes should be produced at different rates. However, a more careful analysis of the partition function shows that this is not the case. The point is that magnetic and electric solutions differ not only in their actions, but in the nature of the boundary conditions we can impose on them. If we consider a single black hole, we can choose a particular charge sector in the magnetic case, but we have to introduce a chemical potential for the charge in the electric case. That is to say, we can impose the magnetic charge as a boundary condition at infinity, but we can only impose the chemical potential, and not the electric charge, as a boundary condition in the electric case. Thus the partition function in the magnetic case is a function of the temperature and charge, $Z(\beta,Q)$, while in the electric case the partition function is a function of the chemical potential $\omega$, rather than $Q$, $Z(\beta,\omega)$. It is not surprising to find that these two quantities differ. What we need to do is obtain a partition function $Z(\beta,Q)$ in the electric case. To do this, we must introduce a charge projection operator \cite{quhair}. The introduction of the charge projection operator is like performing a Fourier transform on the wavefunction, to trade $\omega$ for its canonically conjugate momentum $Q$. The effect of this transform is to make the partition function as a function of charge the same for the electrically and magnetically charged black holes. The difference in action precisely cancels the additional term introduced in the partition function by the Fourier transform. We can also calculate $Z(\beta,Q)$ in the electric case directly, by using (\ref{parf}) with an action which is adapted to holding the electric charge fixed. 
To make the action give the classical equations of motion under a variation which holds the electric charge on the boundary fixed, we need to include an additional surface term in the action. This will make the action of dual electric and magnetic solutions identical. We are particularly interested in instantons describing black hole pair creation. To obtain pair creation of black holes, one has to have some force that is pulling the holes apart. The case that has been extensively studied is the formation of charged black holes in a background electric or magnetic field \cite{entar,garstrom,dgkt,dggh,2u1}. Here the negative electromagnetic potential energy of the holes in the background electric or magnetic field can compensate for the positive rest mass energy of the black holes. The pair creation of magnetically-charged black holes in a background magnetic field has been the subject of most work in this area, and the action and pair creation rate for this case have been calculated in \cite{garstrom,dgkt}. It was assumed in earlier work that the treatment of the electric case was a trivial extension of the magnetic; we now realize that this is not in fact the case. We consider the pair creation of electric black holes in a background electric field, and show by calculating $Z(\beta,Q)$ directly that the pair creation rate in this case is the same as in the magnetic case. The effective cosmological constant in the inflationary period of the universe can also accelerate objects away from each other, and so it should be possible to find instantons describing the pair production of black holes in a cosmological background. In the case without gauge fields, the relevant solution is the Schwarzschild de Sitter metric. This has been interpreted in the past as a single black hole in a de Sitter universe, but it really represents a pair of black holes at antipodal points on the three sphere space section of the de Sitter universe, accelerating away from each other. 
If one takes $t =i\tau$, one obtains a Euclidean metric. One can remove the conical singularities in this metric if the black hole and cosmological horizons have the same temperature. For the Schwarzschild de Sitter metric, this occurs in the limiting case known as the Nariai metric, which is just the analytical continuation of $S^2\times S^2$, with both spheres having the same radius \cite{desit}. If one cuts this solution in half, one obtains the amplitude to propagate from nothing to a three-surface $\Sigma$ with topology $S^2 \times S^1$ according to the no boundary proposal. One can regard $S^2 \times S^1$ as corresponding to the space section of the Nariai universe, which will settle down to two black holes in de Sitter space (see \cite{desit} for more details). The action of $S^2 \times S^2$ is $I=- 2 \pi /\Lambda$. This is greater than the action $I= - 3 \pi/ \Lambda$ of $S^4$, which corresponds to de Sitter space. Thus the amplitude to pair create neutral black holes in de Sitter space is suppressed, as one would hope. \footnote{If one were to use the tunnelling proposal \cite{vil,lin} instead of the no boundary proposal, one would find that the probability of the pair creation of neutral black holes was enhanced rather than suppressed relative to the probability for the spontaneous formation of a de Sitter universe. This is further evidence against the tunnelling proposal.} One can also consider the pair creation of electrically or magnetically charged black holes in de Sitter space. Here the relevant solutions are the Reissner-Nordstr\"om de Sitter metrics, which can again be extended to Euclidean metrics. More than one instanton can be constructed in this case; these instantons are discussed in more detail in \cite{moss,romans,robb}. We will consider only the simplest case, where the instanton is again $S^2 \times S^2$, but where the spheres now have different radii. The action for the magnetic instanton is less negative than that of the neutral case. 
Thus the pair creation of magnetic black holes is suppressed relative to that of neutral black holes, which is in turn suppressed relative to the background de Sitter space. All this is what one might expect on physical grounds. But in the electric case, the action is less than the action of the neutral case, and can be less than the action of the background de Sitter space if the electric charge is large enough. This at first seemed to suggest that de Sitter space would be unstable to decay by pair production of electrically-charged black holes. Presumably, we have to apply a charge projection operator to obtain comparable partition functions here, as in the single black hole case. However, the $S^2 \times S^2$ instanton has no boundary, so we at first thought that it was not possible to have a chemical potential in this case. However, as we said above, what we actually want to consider is the amplitude to propagate from nothing to a three-surface $\Sigma$ with topology $S^2 \times S^1$, and we can impose the potential on the boundary $\Sigma$. The instanton giving the semi-classical approximation to this amplitude is just half of $S^2 \times S^2$. In the magnetic case, the magnetic charge can be given as a boundary condition on this surface, but in the electric case, the boundary data give only the potential $\omega$. If we again make the Fourier transform to trade $\omega$ for $Q$, the semi-classical approximation to the wavefunction as a function of charge is the same for the electrically and magnetically charged black holes. Thus the pair creation of both magnetic and electric black holes is suppressed in the early universe. We will also discuss the entropy of black hole solutions. For the asymptotically-flat black holes, the partition function $Z(\beta,Q)$ can be interpreted as the canonical partition function, while $Z(\beta,\omega)$ can be interpreted as the grand canonical partition function. 
Using the instantons to approximate the partition function, we can show that the entropy of the asymptotically-flat black holes is $S = {\cal A}_{bh}/4$ for both electrically and magnetically charged black holes. For the cosmological solutions, the square of the wavefunction $\Psi(Q,\pi^{ij}=0)$ can be regarded as the density of states or microcanonical partition function. Thus the entropy is just given by the logarithm of this density of states. Using the instantons to approximate this density of states, we find that the entropy is $S = {\cal A}/4$, where ${\cal A}$ is the total area of all the horizons in the instanton. In section \ref{chproj}, we review the calculation of the action for the Reissner-Nordstr\"om black holes, and the introduction of the charge projection operator. In section \ref{desitsec}, we describe the Reissner-Nordstr\"om de Sitter solution, and derive an instanton which can be interpreted as describing black hole pair production in a background de Sitter space. We then calculate its action. We go on to argue, in section \ref{cosm}, that a charge projection can be performed in this case as well, and that the partition function as a function of the charge is the same in the electric and magnetic cases. In section \ref{elecesec}, we review the electric Ernst solution, and obtain the instanton which describes pair creation of electrically-charged black holes in an electric field. In section \ref{elacsec}, we calculate the action for this instanton, and thus obtain the pair creation rate. In section \ref{entsec}, we review the derivation of the entropy for the Reissner-Nordstr\"om black holes, and discuss its definition for the Reissner-Nordstr\"om de Sitter solutions. \section{Action and charge projection in Reissner-Nordstr\"om} \label{chproj} Let us first consider asymptotically-flat black hole solutions. 
To simplify the later calculation of the entropy, we will evaluate the action of these black holes by a Hamiltonian decomposition, following the treatment given in \cite{haho}. If there is a Maxwell or Yang-Mills field, one takes spatial components of the vector potential $A_i$ as the canonical coordinates on three-surfaces of constant time. The conjugate momenta are the electric field components $E^i$. The time component $A_t$ of the potential is regarded as a Lagrange multiplier for the Gauss law constraint ${\rm div} E=0$. Let us first assume that the manifold has topology $\Sigma \times S^1$. Then the action is \begin{equation} \label{act} I= -\int dt \left[\int_{\Sigma_t} (p^{\mu\nu}\ {}^3 \dot g_{\mu\nu} + E^i \dot{A}_i) - H \right]. \end{equation} There is a well-known ambiguity in the gravitational action for manifolds with boundary, as one can add any function of the boundary data to the action, and its variation will still give the same equations of motion \cite{brown}. We will adopt the approach of \cite{haho}, and require that the action of some suitable background vanish. We define a suitable background to be one which agrees with the solution asymptotically, that is, which induces the same metric and gauge fields on $S^2_\infty$. If we assume that the background is a solution of the equations of motion, the Hamiltonian $H$ is \cite{bmy} \begin{eqnarray} \label{ham} H &=& \frac{1}{8 \pi} \int_{\Sigma_t} (N{\cal H} +N^i {\cal H}_i + N A_t {\rm div} E) \\ &&- {1\over 8\pi} \int_{S^2_\infty} [N (^2 K - ^2 K_0) + N^i p_{ij} + 2 N A_t(E - E_0)], \nonumber \end{eqnarray} where $^2 K$ is the extrinsic curvature of the boundary $S^2_{\infty}$ of the surface $\Sigma_t$, $E$ is the electric field, and $^2 K_0$ and $E_0$ represent these quantities evaluated in the background. In order to get the action in this canonical form, we have had to integrate by parts the terms in the action involving spatial gradients of $A_t$. 
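Schematically (suppressing the lapse and the normalization of (\ref{ham})), the integration by parts responsible for this rearrangement is simply the divergence theorem, \begin{equation} \int_{\Sigma_t} E^i \partial_i A_t = - \int_{\Sigma_t} A_t \, {\rm div} E + \oint_{S^2_\infty} A_t E^i \, dS_i , \end{equation} which converts the spatial gradient of $A_t$ into the Gauss law constraint plus a boundary integral. 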
This produces the $A_t$ surface term in the Hamiltonian. This surface term is zero for magnetic monopoles and magnetic black holes. It is also zero for any solution with electric fields, but no horizons, because one can choose a gauge in which $A_t$ vanishes at infinity. Thus, the existence of this surface term in the Hamiltonian does not seem to have been generally noticed. However, it is non-zero for electrically charged black holes, because the gauge transformation required to make $A_t=0$ at infinity is not regular on the horizon. One can pass from a Lorentzian black hole solution to a Euclidean one by introducing an imaginary time coordinate $\tau = -i t$. One then has to identify $\tau$ with period $\beta = 2 \pi/ \kappa$ to make the metric regular on the horizon, where $\kappa$ is the surface gravity of the horizon. One can then use the relation between the action and the Hamiltonian to calculate the action of the Euclidean black hole solution. As the solution is static, the Euclidean action (\ref{act}) is $\beta$ times the Hamiltonian. However, the Euclidean section for a non-extreme black hole does not have topology $\Sigma \times S^1$, and so (\ref{act}) only gives the action of the region swept out by the surfaces of constant $\tau$. This is the whole of the Euclidean solution, except for the fixed point locus of the time translation Killing vector on the horizon. The contribution to the action from the corner between two surfaces $\tau_1$ and $\tau_2$ is \begin{equation} \frac{\kappa}{8\pi} (\tau_2 - \tau_1) {\cal A}_{bh}, \end{equation} where ${\cal A}_{bh}$ is the area of the horizon. Thus the action is $I= \beta H -{\cal A}_{bh}/4$ \cite{ecs}. For solutions of the field equations, the three-surface integral vanishes, because of the gravitational and electromagnetic constraint equations. Thus, the value of the Hamiltonian comes entirely from the surface terms. 
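Note that over a full period of Euclidean time, $\tau_2 - \tau_1 = \beta = 2\pi/\kappa$, the corner contribution evaluates to \begin{equation} \frac{\kappa}{8\pi}\, \beta \, {\cal A}_{bh} = \frac{\kappa}{8\pi}\, \frac{2\pi}{\kappa}\, {\cal A}_{bh} = \frac{{\cal A}_{bh}}{4}, \end{equation} independently of the surface gravity, which accounts for the area term in $I = \beta H - {\cal A}_{bh}/4$. 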
Now we will calculate the action in this way for the nonextreme electric and magnetic Reissner-Nordstr\"om solutions. Recall that the Reissner-Nordstr\"om metric is given by \begin{eqnarray} \label{exrn} ds^2 &=& -\left( 1-\frac{2M}{ r} + \frac{Q^2}{ r^2}\right) dt^2 + \left( 1-\frac{2M}{ r} +\frac{Q^2}{ r^2}\right)^{-1} dr^2 \nonumber \\ &&+r^2( d\theta^2 + \sin^2 \theta d \phi^2), \end{eqnarray} where $M$ is the mass and $Q$ is the charge of the black hole. The gauge potential for this solution is \begin{equation} \label{mag} F = Q \sin \theta d\theta \wedge d\phi \end{equation} for a magnetically-charged solution, and \begin{equation} \label{elec} F = - \frac{Q}{r^2} dt \wedge dr \end{equation} for an electrically-charged solution. We will not consider dyonic solutions. The metric has two horizons, at $r=r_\pm = M \pm \sqrt{ M^2 -Q^2}$. We analytically continue $t \to i \tau$, and identify $\tau$ with period $\beta = 2 \pi / \kappa$, where $\kappa = (r_+ - r_-)/2 r_+^2 $ is the surface gravity of the horizon at $r=r_+$. The surfaces of constant $\tau$ meet at the event horizon $r=r_+$, whose area is \begin{equation} \label{arearn} {\cal A}_{bh} = 4\pi r_+^2 = \frac{4\pi}{\kappa} (M -Q U), \end{equation} where $U = Q/r_+$. The second equality is obtained by exploiting the definitions of $r_\pm$ and $\kappa$. If we consider the magnetically charged black hole solution, the gauge potential will be \begin{equation} \label{magrn} A = Q (1 -\cos \theta)d\phi, \end{equation} where we have chosen a gauge which is regular on the axis $\theta =0$. For a magnetic black hole, the electromagnetic surface term in the Hamiltonian vanishes, and the Hamiltonian is just given by the gravitational surface term. However, as the background spacetime usually used to calculate the Hamiltonian for the Reissner-Nordstr\"om black holes is just periodically-identified flat space, this surface term is equal to the usual ADM mass \cite{haho}. 
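To spell out the second equality in (\ref{arearn}): since $r_+ + r_- = 2M$ and $r_+ r_- = Q^2$, \begin{equation} M - Q U = M - \frac{Q^2}{r_+} = \frac{r_+ + r_-}{2} - r_- = \frac{r_+ - r_-}{2} = \kappa r_+^2 , \end{equation} so that $(4\pi/\kappa)(M - QU) = 4\pi r_+^2 = {\cal A}_{bh}$. 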
Thus the Hamiltonian is simply \begin{equation} H=M, \end{equation} and if $\tau$ is identified with period $\beta= 2 \pi / \kappa$, the action is \begin{equation} \label{magac} I = \beta M - {\cal A}_{bh}/4 = \frac{\pi}{\kappa} ( M + Q U). \end{equation} For the electrically charged black hole solution, the gauge potential is \begin{equation} \label{elrn} A = -i(Q/r - \Phi)d\tau, \end{equation} where $\Phi = U$ is the potential at infinity and we have chosen a gauge which is regular on the black hole horizon. Note that this gauge potential is pure imaginary, as we have analytically continued $t \to i\tau$. We take the point of view that one should simply accept that the gauge potential in Euclidean space is imaginary; if one analytically continued the charge to obtain a real gauge potential, the metric would be changed, and one could no longer sensibly compare the electric and magnetic solutions, as they would no longer be dual solutions. In this case, the Hamiltonian is still just equal to the surface term, but now the electromagnetic surface term survives as well. The Hamiltonian can now be calculated to be \begin{equation} H = M - Q \Phi, \end{equation} and we see that $\Phi$ may be interpreted as the electrostatic potential in this case. Thus, if $\tau$ is identified with period $\beta = 2 \pi / \kappa$, the action is \begin{equation} \label{elac} I = \beta(M - Q \Phi) - {\cal A}_{bh}/4 = \frac{\pi}{\kappa} (M - Q U), \end{equation} as asserted in \cite{actionint}. If we were to calculate the action directly, as was done in \cite{actionint}, we would find that the sign difference of the $Q U$ term in the action is due to the fact that $F^2 = 2Q^2/r^4$ for the magnetic solution, but $F^2 = -2Q^2/r^4$ for the electric solution. As we have said in the introduction, the na\"{\i}ve expectation that the rate of pair creation is simply approximated by the action ignores an important difference between the electric and magnetic cases. 
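For later comparison, note that both (\ref{magac}) and (\ref{elac}) follow from the same algebra, using $\beta = 2\pi/\kappa$ and ${\cal A}_{bh}/4 = (\pi/\kappa)(M - QU)$ from (\ref{arearn}): \begin{eqnarray} \beta M - {\cal A}_{bh}/4 &=& \frac{2\pi}{\kappa} M - \frac{\pi}{\kappa} (M - QU) = \frac{\pi}{\kappa} (M + QU), \nonumber \\ \beta (M - Q\Phi) - {\cal A}_{bh}/4 &=& \frac{2\pi}{\kappa} (M - QU) - \frac{\pi}{\kappa} (M - QU) = \frac{\pi}{\kappa} (M - QU), \end{eqnarray} where $\Phi = U$ has been used in the second line. 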
The partition function is \begin{equation} \label{pathin} Z = \int d[g] d[A] e^{-I[g,A]}, \end{equation} where the integral is over all metrics and potentials inside a boundary $\Sigma^\infty$ at infinity, which agree with the given boundary data on $\Sigma^\infty$. Now for the Euclidean black holes, the appropriate boundary is $\Sigma^\infty = S^2_\infty \times S^1$, and the boundary data are the three-metric $h_{ij}$ and gauge potential $A_i$ on the boundary at infinity. In the magnetic case, one can evaluate the magnetic charge by taking the integral of $F_{ij}$ over the $(\theta,\phi)$ two-sphere lying in the boundary, so the magnetic charge is a boundary condition. That is, we are evaluating the partition function in a definite charge sector. In the electric case, however, $A_i$ is constant on the boundary, so all we can construct is an integral of it over the boundary. This is the chemical potential $\omega = \int A_\tau d\tau$, where we define this integral to be in the direction of increasing $\tau$. That is, we are evaluating the partition function in a sector of fixed $\omega$. This can be written in a shorthand form as $Z(\beta,\omega)$. To obtain the partition function in a sector of definite charge, we have to introduce a charge projection operator in the path integral \cite{quhair}. This gives\footnote{There is a sign difference between this expression and the analogous expression in \cite{quhair}, but this is just due to a difference of conventions.} \begin{equation} \label{proj1} Z(\beta,Q) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\omega e^{i\omega Q} Z(\beta,\omega). \end{equation} We can think of $\omega$ as a canonical coordinate, in which case its canonically conjugate momentum is $Q$, and we can think of (\ref{proj1}) as a Fourier transform. Clearly, what we want to compare is the semi-classical approximation to the partition functions $Z(\beta,Q)$ in the magnetic and the electric case. 
For the magnetic case, the magnetic Reissner-Nordstr\"om solution provides the saddle-point contribution to the path integral, so \begin{equation} \label{qpfun} \ln Z(\beta,Q) = - I = - \beta M + {\cal A}_{bh}/4. \end{equation} In the electric case, the Fourier transform (\ref{proj1}) can also be calculated by a saddle-point approximation. At the saddle-point, $\omega = i \beta \Phi$, so \begin{eqnarray} \ln Z(\beta,Q) &=& - I + i \omega Q \nonumber \\ &=& - \beta (M-Q \Phi) + {\cal A}_{bh}/4 + i \omega Q \\ &=& - \beta M + {\cal A}_{bh}/4. \nonumber \end{eqnarray} Thus we see that the semi-classical approximation to the partition function is the same for dual electric and magnetic black holes. Alternatively, it is possible to construct a partition function $Z(\beta,Q)$ for the electric case directly; that is, we can write $Z(\beta,Q)$ in a path-integral form for a suitable choice of action \cite{robb}. In the path integral, we want to use the action for which it is natural to fix the boundary data on $\Sigma$ specified in the path integral (\ref{pathin}). That is, we want to use an action whose variation gives the Euclidean equations of motion when the variation fixes these boundary data on $\Sigma$ \cite{brown}. If we consider the action (\ref{act}), we can see that its variation will be \begin{eqnarray} \delta I &=& \mbox{ (terms giving the equations of motion) } \nonumber \\ &&+ \mbox{ (gravitational boundary terms) } \nonumber \\ && + \frac{1}{4\pi} \int_\Sigma d^3 x \sqrt{h} F^{\mu\nu} n_\mu \delta A_\nu, \end{eqnarray} where $n_\mu$ is the normal to $\Sigma$ and $h_{ij}$ is the induced metric on $\Sigma$ (see \cite{brown} for a more detailed discussion of the gravitational boundary terms). Thus, the variation of (\ref{act}) will only give the equations of motion if the variation is at fixed gauge potential on the boundary, $A_i$. 
For the magnetic Reissner-Nordstr\"om solutions, fixing the gauge potential fixes the charge on each of the black holes, as the magnetic charge is just given by the integral of $F_{ij}$ over a two-sphere lying in the boundary. However, in the electric case, fixing the gauge potential $A_i$ can be regarded as fixing $\omega$. Holding the charge fixed in the electric case is equivalent to fixing $n_\mu F^{\mu i}$ on the boundary, as the electric charge is given by the integral of the dual of $F$ over a two-sphere lying in the boundary. Therefore, the appropriate action is \begin{equation} \label{elac2} I_{el} = I - \frac{1}{4\pi} \int_{\Sigma} d^3 x \sqrt{h} F^{\mu\nu} n_\mu A_\nu, \end{equation} as its variation is \begin{eqnarray} \delta I_{el} &=& \mbox{ (terms giving the equations of motion) } \nonumber \\ &&+\mbox{ (gravitational boundary terms) } \nonumber \\ &&- \frac{1}{4\pi} \int_\Sigma d^3 x \delta(\sqrt{h} F^{\mu\nu} n_\mu) A_\nu, \end{eqnarray} and so it gives the equations of motion when $\sqrt{h} n_\mu F^{\mu i}$, and thus the electric charge, is held fixed. That is, if we use (\ref{elac2}) in (\ref{pathin}) in the electric case, the partition function we obtain is $Z(\beta,Q)$. The observation that the magnetic charge must be imposed as a boundary condition in the path integral has another, more troubling consequence. In the derivation of the action for the asymptotically flat black holes above, we have assumed that periodically-identified flat space is a suitable background, so we can take $^2 K_0$ and $E_0$ in (\ref{ham}) to be the values of these quantities in flat space. However, a suitable background is one which agrees with the solution asymptotically; that is, it must satisfy the boundary conditions in the path integral (\ref{pathin}). In the magnetic case, periodically-identified flat space cannot satisfy these boundary conditions, as it has no magnetic charge. Flat space is not a suitable background to use in the evaluation of this action. 
The best we can do for single black holes is to compare the action of the non-extreme black holes with the action of the extreme black hole of the same charge, as this is a suitable background. It is natural to choose the actions of the extreme black holes so that the partition functions for fixed magnetic and electric charges are equal. Such problems will not arise in the case of pair creation in a cosmological background, as the instantons are compact, so there is no need for a suitable background solution to calculate the action. \section{Reissner-Nordstr\"om de Sitter instantons} \label{desitsec} We will now describe the cosmological instanton, and calculate its action. The Reissner-Nordstr\"om de Sitter metric describes a pair of oppositely-charged black holes at antipodal points in de Sitter space, as the Euclidean section has topology $S^2 \times S^2$. The spatial sections therefore have topology $S^2 \times S^1$, which may be thought of as a Wheeler wormhole, topology $S^2 \times R^1$, attached to a spatial slice of de Sitter space, topology $S^3$. The metric is \begin{equation} \label{RNdesit} ds^2 = -V(r) dt^2 + {dr^2 \over V(r)} + r^2 (d\theta^2 + \sin^2 \theta d\phi^2), \end{equation} where \begin{equation} V(r) = 1 - {2M \over r} + {Q^2 \over r^2} - {\Lambda \over 3} r^2. \end{equation} We restrict consideration to just purely magnetically or purely electrically charged solutions. The Maxwell field for the magnetically charged solution is (\ref{mag}), and the Maxwell field for the electrically charged solution is (\ref{elec}). In general, $V(r)$ has four roots, which we will label $r_1 <r_2 \leq r_3 \leq r_4$. The two roots $r_2$ and $r_3$ are the inner and outer black hole horizons, while $r_4$ is the cosmological horizon. The smallest root $r_1$ is negative, and thus has no physical significance. We analytically continue $t \to i \tau$ to obtain a Euclidean solution. 
If the analytically continued metric is to be positive definite, $r$ must lie between $r_3$ and $r_4$, where $V(r)$ is positive. Then to have a regular solution, the surface gravities at $r_3$ and $r_4$ must be equal, so that the potential conical singularities at these two horizons can be eliminated with a single choice of the period of $\tau$. This can be achieved in one of three ways: either $r_3 = r_4$, $|Q| = M$, or $r_2 = r_3$ \cite{moss,romans,robb}. Let us consider in detail the case where the roots $r_3$ and $r_4$ are coincident, which is analogous to the neutral black hole instanton studied in \cite{desit}. As in \cite{desit}, the proper distance between $r=r_3$ and $r=r_4$ remains finite in the limit $r_3 \to r_4$, as we can see by making a similar change of coordinates. Let us set $r_3 = \rho - \epsilon$, $r_4 = \rho + \epsilon$. Then \begin{equation} V(r) = -\frac{\Lambda}{3r^2}(r - \rho - \epsilon)(r - \rho + \epsilon)(r-r_1)(r-r_2). \end{equation} If we make a coordinate transformation \begin{equation} r = \rho + \epsilon \cos \chi, \psi = A \epsilon \tau, \end{equation} where \begin{equation} A =\frac{\Lambda}{3 \rho^2} (\rho-r_1)(\rho-r_2), \end{equation} then \begin{equation} V(r) \approx A \epsilon^2 \sin^2 \chi. \end{equation} Thus, in the limit $\epsilon \to 0$, the metric becomes \begin{equation} \label{coinmetric} ds^2 = {1 \over A} (d\chi^2 + \sin^2 \chi d\psi^2) + {1 \over B} (d\theta^2 + \sin^2 \theta d\phi^2), \end{equation} where $\chi$ and $\theta$ both run from $0$ to $\pi$, and $\psi$ and $\phi$ both have period $2\pi$. This metric has been previously mentioned in \cite{romans}. We assume that $B = 1/\rho^2 > A$ (this corresponds to real $Q$, as we see below). 
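To check the limiting form of the metric, note that with $r = \rho + \epsilon \cos \chi$ we have $(r - \rho - \epsilon)(r - \rho + \epsilon) = -\epsilon^2 \sin^2 \chi$, so that $V(r) \approx A \epsilon^2 \sin^2 \chi$ to leading order in $\epsilon$. Moreover, $dr = -\epsilon \sin \chi \, d\chi$ and $d\psi = A \epsilon \, d\tau$, so \begin{equation} \frac{dr^2}{V(r)} \to \frac{d\chi^2}{A}, \qquad V(r) \, d\tau^2 \to \frac{\sin^2 \chi}{A} \, d\psi^2, \end{equation} which reproduces the first factor of (\ref{coinmetric}); the round two-sphere of radius $\rho = B^{-1/2}$ gives the second. 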
The cosmological constant is given by $\Lambda = (A+B)/2$, and the Maxwell field is \begin{equation} F = Q \sin \theta d \theta \wedge d \phi \end{equation} in the magnetically charged case, and \begin{equation} F = - iQ \frac{B}{A} \sin \chi d \chi \wedge d\psi \end{equation} in the electrically charged case, where $Q^2 = (B-A)/(2 B^2)$. This metric is completely regular and, as the instanton is compact, its action is straightforward to compute; it is \begin{equation} \label{spaction} I = -{1 \over 16 \pi} \int (R - 2 \Lambda -F^2) = -{ \Lambda V^{(4)} \over 8 \pi} \pm {Q^2 B^2 V^{(4)} \over 8 \pi}, \end{equation} where $V^{(4)} =16 \pi^2/(AB)$ is the four-volume of the instanton. Substituting $\Lambda = (A+B)/2$ and $Q^2 B^2 = (B-A)/2$, the two terms are $-\pi (A+B)/(AB)$ and $\pm \pi (B-A)/(AB)$; the action for the magnetic case (upper sign) is thus $I = -2 \pi /B$, and for the electric case the action is $I = -2 \pi /A$. Since the action for the instanton describing the creation of neutral black holes is $I = -2\pi/\Lambda$ \cite{desit}, we have $I_{\rm magnetic} > I_{\rm neutral} > I_{\rm electric}$. Further, $I_{\rm de\ Sitter} > I_{\rm electric}$ if $A < 2 \Lambda /3$. Since the action is supposed to give the approximate rate for pair creation, this seems to say that de Sitter space should be disastrously unstable to the pair creation of large electrically charged black holes. \section{Charge Projection for Reissner-Nordstr\"om de Sitter} \label{cosm} Clearly, there is an analogy between this problem and the difficulty with the Reissner-Nordstr\"om solution, and so what we need to do is to introduce a charge projection operator in the path integral in the electric case. However, as the instanton is compact, there appears to be no boundary on which to specify boundary data, and in particular no notion of a chemical potential. In fact, we are again forgetting something. The pair creation of black holes in a de Sitter background is described, by the no-boundary proposal, by the propagation from nothing to a three-surface $\Sigma$ with topology $S^2 \times S^1$. 
This process is described by a wavefunction \begin{equation} \Psi = \int d[g] d[A] e^{-I}, \end{equation} where the integral is over all metrics and potentials on manifolds with boundary $\Sigma$, which agree with the given boundary data on $\Sigma$. This amplitude is dominated by a contribution from a Euclidean solution which has boundary $\Sigma$ and satisfies the boundary conditions there. For pair creation of black holes, the instanton is in fact {\em half} of $S^2 \times S^2$. In the semi-classical approximation, $\Psi \approx e^{-I}$, where $I$ is the action of this instanton. In the usual approach reviewed in section \ref{chproj}, we take advantage of the fact that the instanton is exactly half of the bounce, so that the tunnelling rate is $\Psi^2 = Z= e^{-I_b}$, where $I_b = 2I$ is the action of the bounce. This is helpful, as this latter action is easier to calculate, but in passing from $\Psi$ to $Z$ we have lost information about the boundary data on the surface on which the bounce is sliced in half. If there is a boundary at infinity, this is not very important,\footnote{It is easy to apply the methods we outline below to re-derive the results of section \ref{chproj} using the instanton (half the bounce) to describe tunnelling from a spatial slice of hot flat space to a spatial slice of electrically charged Reissner-Nordstr\"om.} but in the cosmological case this information is crucial. Consider the pair creation of charged black holes in a cosmological background. Then $\Sigma$ has topology $S^2 \times S^1$, and the boundary data on $\Sigma$ will be $h_{ij}$ and $A_i$, the three-metric and gauge potential. 
In the magnetic case, we can again define the charge by the integral of $F_{ij}$ over the $S^2$ factor (the charge in this case is the magnitude of the charge on each of the black holes), but in the electric case, we can fix only the potential \begin{equation} \label{omega} \omega = \int A, \end{equation} where the integral is around the $S^1$ direction in $\Sigma$. This latter quantity is equal to the flux of the electric field across the disk. Let $M_-$ be a Euclidean solution of the field equations which agrees with the given data on $\Sigma$, which is its only boundary. If $M_-$ has topology $S^2 \times D^2$, which is the case we are interested in, the $S^1$ direction in $\Sigma$ is the boundary of the two disk $D^2$. For the boundary data which describes a pair of charged black holes, $M_-$ will just be half the $S^2 \times S^2$ Euclidean section of the Reissner-Nordstr\"om de Sitter solution. Let us choose coordinates so that the boundary $\Sigma$ corresponds to the surface $\psi=0, \psi =\pi$ in the metric (\ref{coinmetric}), and so that the integral in (\ref{omega}) is from the black hole horizon $\chi = \pi$ to the cosmological horizon $\chi=0$ along $\psi=0$, and back along $\psi=\pi$. The momentum canonically conjugate to $\omega$ is the electric charge $Q$. Now we are ready to make the Fourier transform \begin{equation} \Psi(Q,h_{ij}) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i\omega Q} \Psi(\omega,h_{ij}) \end{equation} to obtain the wavefunction in a definite charge sector in the electric case. We should make another Fourier transform, in both cases, as a natural requirement on the three-surface $\Sigma$ is that its extrinsic curvature vanish. This guarantees that $\Sigma$ bisects the bounce, and ensures that our manifold can be matched smoothly onto a Lorentzian extension. 
We should therefore perform a Fourier transform to trade $h_{ij}$ for its conjugate momentum $\pi^{ij} = \sqrt{h}( K^{ij} - K h^{ij})$, where $K^{ij}$ is the extrinsic curvature of $\Sigma$, and then set $\pi^{ij} =0$. Thus \begin{equation} \Psi(Q,\pi^{ij}) = \frac{1}{2\pi}\int d[h_{ij}] e^{i h_{ij} \pi^{ij}} \Psi(Q,h_{ij}). \end{equation} In the saddle-point approximation, \begin{equation} \Psi(Q,\pi^{ij}=0) = \Psi(Q,h_{ij}=h_{ij}^0), \end{equation} where $h_{ij}^0$ is the induced metric on the three-surface $\psi=0, \psi=\pi$ in the Reissner-Nordstr\"om de Sitter solution. That is, because we are setting $\pi^{ij}=0$, there is no additional term in the semi-classical value which arises from this transformation. For the electrically charged Reissner-Nordstr\"om de Sitter instanton (\ref{coinmetric}), the only vector potential which is regular everywhere on $M_-$ is \begin{equation} \label{dsvec} A = i Q \frac{B}{A} \sin \chi \; \psi d\chi. \end{equation} We have to insist that the gauge potential be regular on the instanton in the electric case to obtain gauge-independent results, as we can only determine the gauge potential on the boundary.\footnote{This can be clearly seen in the Reissner-Nordstr\"om case; we could set $A_t=0$ at infinity if we did not insist that it be regular at the horizon.} Note that there is {\it no} electric vector potential regular everywhere on the Euclidean section of the electrically charged Reissner-Nordstr\"om de Sitter solution, as (\ref{dsvec}) is not periodic in $\tau$. Using (\ref{dsvec}), we see that in the semi-classical approximation, $\omega = 2\pi i Q B/A$ and thus, in the electric case, \begin{equation} \label{elres} \ln \Psi(Q, \pi^{ij}=0) = -I +i \omega Q= \frac{\pi}{A} - \frac{2 \pi Q^2 B}{A} = \frac{\pi}{B}, \end{equation} as the action of $M_-$ is $-\pi/A$, half the action of the electric instanton. 
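To spell this out: with $\omega = 2\pi i Q B/A$, the Fourier term is $i\omega Q = -2\pi Q^2 B/A$, and the last equality in (\ref{elres}) follows on substituting $Q^2 = (B-A)/(2B^2)$, \begin{equation} \frac{\pi}{A} - \frac{2\pi Q^2 B}{A} = \frac{\pi}{A} - \frac{\pi (B-A)}{AB} = \frac{\pi B - \pi (B-A)}{AB} = \frac{\pi}{B}. \end{equation} 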
For the magnetic solution, \begin{equation} \label{magres} \ln \Psi(Q, \pi^{ij}=0) = -I = \frac{\pi}{B}, \end{equation} so the pair creation rate turns out to be identical in the two cases. As $\Psi^2 \leq e^{2\pi/\Lambda} < e^{3\pi/\Lambda}$, these processes are suppressed relative to both de Sitter space and the neutral black hole instanton of \cite{desit}. \section{Electric Ernst instantons} \label{elecesec} Black holes may be pair created by a background electromagnetic field. An appropriate instanton which describes such pair creation is provided by the Ernst solution, which represents a pair of oppositely-charged black holes undergoing uniform acceleration in a background electric or magnetic field. The magnetic case has been extensively discussed, notably in \cite{dgkt,dggh,entar}. We now turn to the consideration of the electric case, to see if the pair creation rate is the same. An attempt was made to compare the electric case to a charged star instanton in \cite{elec}. However, the action for Ernst was not explicitly calculated. We find that the calculation of the pair creation rate in this case introduces several new features, but the pair creation rate given by $Z(\beta,Q)$ is identical to that obtained in the magnetic case. We will review the electric Ernst and Melvin solutions in this section, and describe the calculation of the action in the following section. The solution describing the background electric field is the electric version of Melvin's solution \cite{melvin}, \begin{equation} \label{melvinm} ds^2=\Lambda^2 \left(-dt^2+dz^2+d\rho^2\right) +\Lambda^{-2}\rho^2 d\varphi^2, \end{equation} where \begin{equation} \label{Llim} \Lambda = 1+ \frac{\widehat{B}_M^2}{ 4} \rho^2 , \end{equation} and the gauge field is \begin{equation} \label{melving} A_t = \widehat{B}_M z. \end{equation} The Maxwell field is $F^2 = -2\widehat{B}_M^2/\Lambda^4$, which is a maximum on the axis $\rho=0$ and decreases to zero at infinity. 
The parameter $\widehat{B}_M$ gives the value of the electric field on the axis. The metric for the electric Ernst solution is \begin{eqnarray} \label{ernstm} ds^2&=&(x-y)^{-2}A^{-2}\Lambda^2 \left[G(y)dt^2-G^{-1}(y)dy^2 \right. \\ &&+ \left. G^{-1}(x)dx^2\right] + (x-y)^{-2}A^{-2}\Lambda^{-2}G(x) d\varphi^2, \nonumber \end{eqnarray} where \begin{equation} G(\xi) = (1-\xi^2 - r_+ A \xi^3) (1+r_- A \xi), \end{equation} and \begin{equation} \Lambda=\left(1+\frac{1}{ 2}Bqx\right)^2+\frac{B^2}{ 4A^2(x-y)^2}G(x), \end{equation} while the gauge potential is \cite{elec} \begin{eqnarray} \label{gpot} A_t &=& -\frac{B G(y)}{2 A^2 (x-y)^2}\left[ 1 + \frac{1}{2} B q x + \frac{1}{2} B q (x-y) \right] \\ &&- \frac{B}{2 A^2} (1+ r_+ A y) (1+r_- Ay) \left( 1 - \frac{1}{2} B qy \right) + qy + k, \nonumber \end{eqnarray} where $k$ is a constant, and $q^2 = r_+r_-$. If we label the roots of $G(\xi)$ by $\xi_1,\xi_2,\xi_3,\xi_4$ in increasing order, then $x$ must be restricted to lie in $\xi_3 \leq x \leq \xi_4$ to obtain a metric of the right signature. Because of the conformal factor $(x-y)^{-2}$ in the metric, $y$ must be restricted to $-\infty < y \leq x$. The axis $x=\xi_3$ points towards spatial infinity, and the axis $x=\xi_4$ points towards the other black hole. The surface $y=\xi_1$ is the inner black hole horizon, $y = \xi_2$ is the black hole event horizon, and $y=\xi_3$ the acceleration horizon. The black holes are non-extreme if $\xi_1 < \xi_2$, and extreme if $\xi_1 = \xi_2$. Note that it is {\em not} possible to choose $k$ so that $A_t$ vanishes at both $y=\xi_2$ and $y=\xi_3$. We choose $k$ so that $A_t$ vanishes at $y=\xi_3$. As discussed in \cite{dgkt}, to ensure that the metric is free of conical singularities at both poles, $x=\xi_3, \xi_4$, we must impose the condition \begin{equation} \label{nonodes} G^\prime(\xi_3)\Lambda(\xi_4)^2 = -G^\prime(\xi_4)\Lambda(\xi_3)^2, \end{equation} where $\Lambda(\xi_i)\equiv \Lambda(x=\xi_i)$. 
For later convenience, we define $L \equiv \Lambda (x=\xi_3)$. We also define a physical electric field parameter $\widehat{B}_E = B G'(\xi_3) / 2 L^{3/2}$. When (\ref{nonodes}) is satisfied, the spheres are regular as long as $\varphi$ has period \begin{equation} \label{phiperiod} \Delta\varphi=\frac{4\pi L^2}{ G^\prime(\xi_3) }\ . \end{equation} As in the magnetic case \cite{entar}, if we set $r_+ = r_- = 0$, the Ernst metric reduces to the Melvin metric in accelerated form, \begin{eqnarray} ds^2 &=& \frac{\Lambda^2 }{ A^2 (x-y)^2} \left[ (1-y^2) dt^2 - \frac{dy^2}{ (1-y^2)} \right. \\ &&+ \left. \frac{dx^2}{(1-x^2) }\right] + \frac{1-x^2 }{\Lambda^2 (x-y)^2 A^2} d\varphi^2, \nonumber \end{eqnarray} where \begin{equation} \Lambda = 1+ \frac{\widehat{B}_E^2}{ 4} \frac{1-x^2}{ A^2 (x-y)^2}\ . \end{equation} The gauge field in this limit is \begin{equation} A_t = -\frac{\widehat{B}_E (1-y^2)}{2 A^2 (x-y)^2}. \end{equation} The acceleration parameter $A$ is now a coordinate degree of freedom. Ernst also reduces to Melvin at large spatial distances, that is, as $x,y \rightarrow \xi_3$. We Euclideanize (\ref{ernstm}) by setting $\tau = it$. In the non-extremal case, $\xi_1 < \xi_2$, the range of $y$ is taken to be $\xi_2 \leq y \leq \xi_3$ to obtain a positive definite metric (we assume $\xi_2 \ne \xi_3$). To avoid conical singularities at the acceleration and black hole horizons, we take the period of $\tau$ to be \begin{equation} \label{pert} \beta = \Delta \tau = \frac{4 \pi }{ G'(\xi_3)} \end{equation} and require \begin{equation} \label{nost} G'(\xi_2) = -G'(\xi_3), \end{equation} which gives \begin{equation} \label{nostrtt} \xi_2 - \xi_1 = \xi_4 - \xi_3. \end{equation} The resulting Euclidean section has topology $S^2 \times S^2 -\{pt\}$, where the point removed is $x=y=\xi_3$. This instanton is interpreted as representing the pair creation of two oppositely charged black holes connected by a wormhole. 
If the black holes are extremal, $\xi_1=\xi_2$, the black hole event horizon lies at infinite spatial distance from the acceleration horizon, and gives no restriction on the period of $\tau$. The range of $y$ is then $\xi_2 < y \leq \xi_3$, and the period of $\tau$ is taken to be (\ref{pert}). The topology of the Euclidean section is $R^2 \times S^2 - \{ pt \}$, where the removed point is again $x=y=\xi_3$. This instanton is interpreted as representing the pair creation of two extremal black holes with infinitely long throats. \section{Action in electric Ernst} \label{elacsec} Now, to calculate the pair creation rate, we need to calculate the action for the instanton. As in section \ref{cosm}, an instanton describing the pair creation of black holes in a Melvin background is given by cutting the Euclidean section above in half. That is, the boundary $\Sigma$ inside which we want the instanton to interpolate consists of a three-boundary $S^3_\infty$ `at infinity', plus a boundary $\Sigma_s$ which can be identified with the surface $\tau=0, \tau=\beta/2$ in the Euclidean section. Since we want to consider the pair creation rate at fixed electric charge, the appropriate action is (\ref{elac2}). That is, in the instanton approximation, the partition function, and thus the pair creation rate, is approximately given by $Z(\beta,Q) \approx e^{-2I_{Ernst}}$, where $I_{Ernst}$ is the action (\ref{elac2}) of the instanton. Because the Euclidean section is not compact, the physical action is only defined relative to a suitable background \cite{haho}, which in this case is the electric Melvin solution. We need to ensure that we use the same boundary $S^3_\infty$ in calculating the contributions to the action from the Ernst and Melvin metrics. This is achieved by insisting that the same boundary conditions are satisfied at the boundary in these two metrics \cite{entar}. 
That is, we insist that the Ernst and Melvin solutions induce the same fields on the boundary (up to contributions which vanish when we take the limit that the boundary tends to infinity). Let us take the boundary $S^3_\infty$ to lie at \begin{equation} x= \xi_3 + \epsilon_E \chi,\ \ y = \xi_3 + \epsilon_E (\chi -1), \end{equation} in the Ernst solution, and define new coordinates by \begin{equation} \label{cchangei} \varphi = \frac{ 2L^2 }{ G'(\xi_3)} \varphi',\ \tau = \frac{2 }{ G'(\xi_3)} \tau'. \end{equation} We assume that $S^3_\infty$ lies at \begin{equation} x = -1 + \epsilon_M \chi, y = -1 + \epsilon_M (\chi -1) \end{equation} in the accelerated coordinate system in the Melvin solution. The metrics for the electric Ernst and Melvin solutions are the same as for the magnetic solutions, so we know from \cite{entar} that the induced metrics on the boundary can be matched by taking \begin{equation} \label{abar} \bar{A}^2 = -\frac{G'(\xi_3)^2 }{ 2 L^2 G''(\xi_3)} A^2, \end{equation} and \begin{equation} \label{expans} \epsilon_M = -\frac{G''(\xi_3) }{ G'(\xi_3)} \epsilon_E [1+ O(\epsilon_E^2)], \ \ \widehat{B}_M = \widehat{B}_E [1+ O(\epsilon_E^2)]. \end{equation} However, we cannot match the gauge potentials at the same time. We should work with a different gauge potential, as the gauge potential (\ref{gpot}) is not regular at both the horizons in the spacetime. A suitable gauge potential, which is regular everywhere on the instanton, is \begin{eqnarray} \label{gauge2} A &=& -F_{x\tau} \tau dx - F_{y\tau} \tau dy \\ &=& i \tau \left[ \frac{B}{A^2(x-y)^3} G(y) \left( 1 + \frac{1}{2} Bqx \right)\right] dx \nonumber \\ &&+ i \tau \left[ q \left( 1 + \frac{1}{2} Bqx \right)^2 - \frac{B}{A^2(x-y)^3} G(x) \left( 1 + \frac{1}{2} Bqx \right) \right. \nonumber \\ && \left. 
+ \frac{B}{2 A^2 (x-y)^2} G'(x) \left( 1 + \frac{1}{2} Bqx \right) - \frac{B^2 q}{4 A^2 (x-y)^2} G(x) \right] dy, \nonumber \end{eqnarray} and the induced gauge potential on $S^3_\infty$ in the Ernst solution is \begin{equation} A_{\chi} = \frac{2i L^2 \tau' \widehat{B}_E}{A^2 \epsilon_E G'(\xi_3)} \left[ 1+ \frac{G''(\xi_3)}{G'(\xi_3)} (\chi-1) \epsilon_E + \frac{Bq \chi \epsilon_E}{L^{1/2}}\right] , \end{equation} while in the Melvin solution it is \begin{equation} A_\chi = \frac{i \tau \widehat{B}_M}{\bar{A}^2 \epsilon_M} \left[ 1 + \epsilon_M (\chi -1) \right], \end{equation} so they are {\em not} matched by (\ref{abar},\ref{expans}) (Note that, even if we worked in the gauge (\ref{gpot}), the induced gauge potentials on the boundary still wouldn't match). This seemed for a long time to be an insuperable difficulty, but we have now realized that, in the electric case, we no longer want to match $A_i$. Instead, we should match $n_\mu F^{\mu i}$, and calculate the action (\ref{elac2}), which will give the pair creation rate at fixed electric charge. The induced value of $n_\mu F^{\mu i}$ on $S^3_\infty$ in the Ernst solution is \begin{equation} n_\mu F^{\mu t'} = \frac{ A \epsilon_E^{1/2} G'(\xi_3)^{1/2} \widehat{B}_E}{2 L \lambda^3} \left[ 1 + \frac{G''(\xi_3)}{4 G'(\xi_3)} \epsilon_E (2\chi+1) \right], \end{equation} where \begin{equation} \label{blambdaE} \lambda = \frac{\widehat{B}_E^2 L^2 }{ A^2 G'(\xi_3) \epsilon_E} \chi + \frac{\widehat{B}_E^2 L^2 G''(\xi_3) }{ 2 A^2 G'(\xi_3)^2} \chi^2 +1, \end{equation} while in the Melvin solution it is \begin{equation} n_\mu F^{\mu t} = \frac{\bar{A} \epsilon_M^{1/2} \widehat{B}_M}{\sqrt{2} \Lambda^3} \left[ 1 - \frac{1}{4} \epsilon_M (2\chi+1) \right], \end{equation} where \begin{equation} \label{blambdaM} \Lambda = \frac{\widehat{B}_M^2 }{ 2 \bar{A}^2 \epsilon_M} \chi - \frac{\widehat{B}_M^2 }{ 4 \bar{A}^2} \chi^2 +1. \end{equation} We see that these two quantities are indeed matched by (\ref{abar},\ref{expans}). 
The action (\ref{elac2}) of the region of the Ernst solution inside $\Sigma$ can be written as a surface term, as we can see by writing it in covariant form: \begin{eqnarray} I_{el} &=& \frac{1}{16 \pi} \int d^4 x \sqrt{g}(-R + F^2) - \frac{1}{8\pi} \int_{\Sigma} d^3 x \sqrt{h} K \\ &&- \frac{1}{4\pi} \int_{\Sigma} d^3 x \sqrt{h} F^{\mu\nu} n_\mu A_\nu \nonumber \\ &=& - \frac{1}{8\pi}\int_{\Sigma} d^3 x \sqrt{h} K - \frac{1}{8\pi} \int_{\Sigma} d^3 x \sqrt{h} F^{\mu\nu} n_\mu A_\nu, \nonumber \end{eqnarray} as the volume integral of $R$ is zero by the field equations, and the volume integral of the Maxwell Lagrangian $F^2$ can be converted to a surface term by the field equations. The explicit surface term in (\ref{elac2}) just reverses the sign of the electromagnetic surface term obtained from the $F^2$ volume integral; that is, it has the effect of reversing the sign of the electromagnetic contribution to the action. Using the gauge choice (\ref{gauge2}), we see that the action is \begin{eqnarray} I_{el} &=& - \frac{1}{8\pi}\int_{S^3_\infty} d^3 x \sqrt{h} K - \frac{1}{8\pi} \int_{\Sigma_s} d^3 x \sqrt{h} F^{\mu\nu} n_\mu A_\nu, \\ &=& - \frac{1}{8\pi}\int_{S^3_\infty} d^3 x \sqrt{h} K - \frac{1}{16\pi} \frac{\beta}{2} \int dx dy d\varphi \sqrt{g} F^2 \nonumber \end{eqnarray} In the first line, we have used the fact that the extrinsic curvature of $\Sigma_s$ vanishes, and that $n^\mu A^\nu F_{\mu\nu} = 0$ on $S^3_\infty$; in the second line, we used (\ref{gauge2}). This is the same as the expression for the action in \cite{entar} (as the Maxwell term changes sign), and the matching conditions are the same, so we can use the calculation of the action in \cite{entar} to conclude that \begin{equation} I_{Ernst} = \frac{\pi L^2}{A^2 G'(\xi_3) (\xi_3-\xi_1)}. \end{equation} The pair creation rate is approximately $e^{-2I_{Ernst}}$, so it is thus identical to that for the magnetic case. Note that this applies to both extreme and non-extreme black holes. 
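The step converting the Maxwell volume integral into a surface term can be made explicit. A sketch, using only the source-free Maxwell equation $\nabla_\mu F^{\mu\nu} = 0$ and the antisymmetry of $F^{\mu\nu}$:

```latex
% Sketch: reduction of the Maxwell volume term to a surface term.
F^2 = F_{\mu\nu} F^{\mu\nu}
    = 2 F^{\mu\nu} \nabla_\mu A_\nu
    = 2 \nabla_\mu \left( F^{\mu\nu} A_\nu \right)
      - 2 A_\nu \nabla_\mu F^{\mu\nu}
    = 2 \nabla_\mu \left( F^{\mu\nu} A_\nu \right),
% hence, by the divergence theorem,
\frac{1}{16\pi} \int_M d^4x \sqrt{g}\, F^2
    = \frac{1}{8\pi} \int_{\Sigma} d^3x \sqrt{h}\, n_\mu F^{\mu\nu} A_\nu .
```

Meanwhile $R = 0$ follows from the trace of the Einstein equation, since the Maxwell stress tensor is traceless in four dimensions. Combining this surface term with the explicit term $-\frac{1}{4\pi} \int_{\Sigma} d^3 x \sqrt{h}\, n_\mu F^{\mu\nu} A_\nu$ flips the sign of the electromagnetic contribution, as stated above.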
In particular, the pair creation of non-extreme black holes is enhanced over that of extreme black holes by a factor of $e^{{\cal A}_{bh}/4}$, as it was in the magnetic case \cite{entar}. \section{Entropy of charged black holes} \label{entsec} We turn now to a discussion of the thermodynamics of black holes. Consider first the asymptotically-flat black holes. In the electric case, one can calculate the partition function for the grand canonical ensemble at temperature $T$ and electrostatic chemical potential $\Phi$. One does a path integral over all fields that have given period and potential at infinity. In the semi-classical approximation, the dominant contribution to the path integral comes from solutions of the field equations with the given boundary conditions. These are the electrically charged Reissner-Nordstr\"om solutions. The semi-classical approximation to the partition function is $Z(\beta,\omega) \approx e^{- I}$, where $I$ is the action of the solution. But in the grand canonical ensemble, $\ln Z(\beta,\omega) =- \Omega/ T$, where $\Omega$ is a thermodynamic potential \cite{actionint}, \begin{equation} \Omega = M - TS - Q\Phi. \end{equation} Comparing this with the expression (\ref{elac}) for the action, one finds that the $M$ and $Q \Phi$ terms cancel, leaving the entropy equal to a quarter of the area, $S = {\cal A}_{bh}/4$, as expected. In the case of magnetic black holes, the entropy still comes out to be ${\cal A}_{bh}/4$, but the calculation is rather different. 
Since the magnetic charge is defined by the asymptotic form of the potential, or equivalently, by the choice of the electromagnetic fiber bundle, there is a separate canonical ensemble for each value of the magnetic charge, which is necessarily an integer, unlike the electric charge of an ensemble, which is a continuous variable.\footnote{We might add that the angular momentum of a black hole is a continuous variable, and is not quantised, because it is the expectation value in an ensemble, not a quantum number of a pure state. In the grand canonical ensemble, one therefore has to introduce angular velocity $\Omega $ as a chemical potential for angular momentum, like one introduces the electrostatic potential as a chemical potential for charge. The Hamiltonian gets an additional $\Omega J$ surface term. } In the magnetic case, the charge is always quantised, even for an ensemble. There is thus no need for a chemical potential for magnetic charge. That is, the partition function $Z$ depends on the charge, and should therefore be interpreted as the canonical partition function, so $\ln Z(\beta,Q) = -F/T$, where $F$ is the free energy, \begin{equation} F = M -TS, \end{equation} while the action is (\ref{magac}), and the entropy is therefore again $S ={\cal A}_{bh}/4$. We should note that, if we make the Fourier transform (\ref{proj1}) in the electric case, the partition function $Z(\beta,Q)$ is also interpreted as the canonical partition function. Therefore, this Fourier transform may be thought of as a Legendre transform giving the free energy in terms of the thermodynamic potential $\Omega$ \cite{quhair}, \begin{equation} F(\beta,Q) =\Omega(\beta,\Phi) + Q \Phi. \end{equation} The result we get for the entropy doesn't depend on whether we work with the partition function $Z(\beta,\omega)$ or $Z(\beta,Q)$ in the electric case. 
The absence of the Maxwell surface term in the Hamiltonian for magnetic black holes means that they have higher action than their electric counterparts. For an extreme black hole, the region swept out by the surfaces of constant time covers the whole instanton, so $I=\beta H$ \cite{haho,entar}. Now, as before, $H=M$ for a magnetic black hole, so the action of an extreme magnetic black hole is $I=\beta M$, where $\beta$ is now an {\em arbitrary} period with which one can identify an extreme black hole. For the electric case, $H= M-Q\Phi$, but $Q=M$ implies $r_+=M$, and thus $\Phi=1$, so $I=0$ for an extreme electric black hole. Both of these actions are proportional to $\beta$. This means that if you substitute the actions into the usual formula \begin{equation} \label{canent} S = - \left( \beta \frac{\partial}{\partial \beta} -1 \right) \ln Z \end{equation} for the entropy, where $Z \approx e^{-I}$, you find that both extreme electric and magnetic black holes have zero entropy, as previously announced \cite{entar}. Now for the cosmological black holes, we again need to work with the wavefunction $\Psi(Q,\pi^{ij}=0)$ rather than the partition function $Z$. Because it does not depend on the temperature, $\Psi^2$ can be interpreted as the microcanonical partition function, or density of states \cite{brown}. In fact, it should be clear that $\Psi$ represents a closed system, and the partition function should just be interpreted as counting the number of states, so the entropy should be $S = \ln Z$, or more accurately, \begin{equation} S = 2 \ln \Psi(Q,\pi^{ij}=0). \end{equation} Note that it is $\Psi(Q,\pi^{ij}=0)$, and not $\Psi(\omega,\pi^{ij}=0)$, which gives the microcanonical partition function. 
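For the extreme magnetic black hole, substituting $\ln Z = -I = -\beta M$ into (\ref{canent}) makes the vanishing entropy explicit:

```latex
S = -\left( \beta \frac{\partial}{\partial \beta} - 1 \right) \ln Z
  = -\beta \frac{\partial}{\partial \beta} (-\beta M) + (-\beta M)
  = \beta M - \beta M = 0 ,
```

and the extreme electric case is immediate, since $I = 0$ gives $\ln Z = 0$ identically.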
If we evaluate this entropy in the semi-classical approximation, where $\Psi(Q,\pi^{ij}=0)$ is given by (\ref{elres},\ref{magres}), we get \begin{equation} S = 2\pi /B = {\cal A}/4 \end{equation} in both cases, as there are two horizons, which both have area $4\pi/B$, so the total area is ${\cal A} = 8\pi/B$. That is, we find that the usual relation between entropy and area holds here too, despite the fact that $\Psi$ has a very different interpretation in this case. \section{Discussion} We have seen that the actions of dual electric and magnetic solutions of the Einstein-Maxwell equations differ. This is presumably a general property of $S$-duality, and initially led us to wonder whether $S$-duality could be a symmetry of a quantum theory given by a path integral. However, we found that despite the difference in actions, the semi-classical approximation to the partition function in a definite charge sector was the same for dual electric and magnetic solutions. In particular, we found that the rate at which electrically and magnetically charged black holes are created in a background electromagnetic field or in a cosmological background is the same. The pair creation of both types of charged black holes in a cosmological background by the instanton studied here is suppressed relative to de Sitter space, as we might expect, and it is also more strongly suppressed than the creation of neutral black holes. The instantons describing pair creation of black holes in a cosmological background are studied in more detail in \cite{robb}. For all the instantons, the rate at which the pair creation occurs is suppressed relative to de Sitter space. These calculations are all just semi-classical, but they do seem to offer some encouragement to the suggestion that electric-magnetic duality is more than just a symmetry of the equations of motion. The conclusion seems to be that duality is a symmetry of the quantum theory, but in a very non-obvious way. 
As Einstein said, God is subtle, but he is not malicious. \acknowledgements The authors were greatly helped by discussions with John Preskill while they were visiting Cal Tech. S.F.R. thanks the Association of Commonwealth Universities and the Natural Sciences and Engineering Research Council of Canada for financial support.
\section{Introduction} Tree estimators and random forests are nonparametric estimators that enjoy widespread popularity in applied data science, powering models in e-commerce, finance and macroeconomics, and medicine. Random forests, introduced by Breiman \cite{breiman01}, are a bagging algorithm that aggregates estimates from a collection of tree estimators. Since each tree has low bias but high variance, aggregation helps performance by balancing the bias-variance trade-off. Other than good empirical performance, random forests also enjoy the following advantages over other nonparametric methods: they scale naturally with high dimensional data, as the optimal cut at each node may be computed in parallel by quantizing each covariate; categorical variables and missing data can be easily incorporated; and they are more interpretable since variable importance can be explicitly characterized. One fruitful application of random forests in economics is in estimating heterogeneous treatment effects, i.e., a function of the form $f(x) = \E(Y_1 - Y_0 \mid X = x)$. In order to conduct inference in econometric applications (e.g., to test the null $H_0: f(x) = 0$), knowledge about the rate of convergence or asymptotic distribution of the estimator $\hat f(x)$ is required. Moreover, functions of $f$ are often of practical interest: for example, when we wish to study the difference in treatment effects for two different subpopulations, the quantity of interest is \begin{equation} f(x) - f(\bar x), \quad \text{where $x$ and $\bar x$ describe the two subpopulations}. \end{equation} More generally, we might also be interested in a weighted treatment effect, where a subpopulation $x$ is given an importance weight $\mu$. In this case, the functional of $f$ is \begin{equation} \int_{x \in \mathcal{X}} f(x) d\mu, \quad \text{where $\mu$ is not necessarily the density of $x$}, \end{equation} and the integral is taken over the feature space $\mathcal{X}$. 
Inference on functions of $f$ requires not only the asymptotic distribution of the point estimate $f(x)$, but also the correlation between estimates at distinct points $f(x)$ and $f(\bar x)$. This paper studies the correlation structure of a class of random forest models whose asymptotic distributions were worked out in \cite{crf}. We find sufficient conditions under which the asymptotic covariance of estimates at distinct points vanishes relative to the variance; moreover, we provide finite sample heuristics based on our calculations. To the best of our knowledge, this is the first set of results on the correlation of random forest estimates. The present paper builds on and extends the results in \cite{crf}, which in turn builds on related work \cite{Wager2015AdaptiveCO} on the concentration properties of tree estimators and random estimators in general. See also \cite{athey2019}, which extends the random forest model considered here to estimate moment conditions. Stability results established in this paper have appeared in \cite{arsov2019stability}, who study notions of algorithmic stability for random forests and logistic regression and derive generalization error guarantees. Also closely related to our paper are \cite{chernozhukov2017} and \cite{chen2018}, concerning finite sample Gaussian approximations of sums and U-statistics in high dimensions. In this context, our paper provides a stepping stone towards applying the theory of finite sample U-statistics to random forests, where bounds on the correlation structure play a central role. The paper is structured as follows. In Section 2, we introduce the random forest model and state the assumptions required for our results; Section 3 contains our main theoretical contributions; Section 4 builds on Section 3 and discusses heuristics useful in finite sample settings; Section 5 evaluates our heuristics with numerical experiments, and Section 6 concludes. All proofs are found in the appendix. 
\section{Setup} In this paper, we study Gaussian approximations of random forest estimators. Throughout, we assume that a random sample $(Z_i)_{i=1}^n = (X_i, Y_i)_{i=1}^n$ is given, where $X_i \in \mathcal{X} \subseteq \mathbf{R}^p$ is a vector of regressors and $Y_i \in \mathbf{R}$ the response. A tree estimator $T(\xi; Z_1, \dots, Z_s)$ works by recursively partitioning the space of covariates $\mathcal{X}$ into a collection of non-overlapping hyper-rectangles. The estimator starts with the entire space $\mathcal{X}$ (assumed to be bounded), and uses the points $Z_1$, \dots, $Z_s$ to produce a coordinate $j \in \{ 1, \dots, p\}$ and an index $t \in \mathbf{R}$. This divides $\mathcal{X}$ into two hyperrectangles, \begin{equation} \{ x \in \mathcal{X} : x_j < t \} \quad \text{and} \quad \{ x \in \mathcal{X}: x_j > t \}. \end{equation} In the next step, each of these two sets is partitioned into two hyperrectangles, using the points that land in each. The process continues until a particular hyperrectangle satisfies a stopping criterion. This process corresponds to a tree in the natural way, so we will refer to a generic hyperrectangle as a ``node'', and to the leaves of the tree, where splitting ceases, as ``terminal nodes''. Given the partition of $\mathcal{X}$ into terminal nodes $N_1, \dots, N_q$, the prediction is the average of the responses \begin{equation} T(x; \xi, Z_{i_1}, \dots, Z_{i_s}) = \sum_{k=1}^q \mathbf{1}(x \in N_k) \cdot \frac{1}{|N_k|} \sum_{i: X_i \in N_k} Y_i, \end{equation} where $|N_k|$ is defined to be the number of samples $X_i$ inside $N_k$. Here, $\xi$ plays the role of randomization, as the splitting decisions may be randomized. 
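The recursive partitioning just described can be sketched in code. The stopping rule (`min_leaf`), random coordinate choice, and median threshold below are illustrative placeholders, not the splitting rule analysed in the paper:

```python
import numpy as np

class Node:
    """A hyperrectangle in the partition; leaves store the mean response."""
    def __init__(self, idx):
        self.idx = idx          # indices of the sample points inside this node
        self.split = None       # (coordinate j, threshold t) once split
        self.left = self.right = None
        self.value = None       # terminal-node prediction

def grow(X, Y, idx, rng, min_leaf=5):
    """Recursively partition until the stopping criterion is met."""
    node = Node(idx)
    if len(idx) < 2 * min_leaf - 1:          # stopping criterion
        node.value = Y[idx].mean()           # average response in the leaf
        return node
    j = rng.integers(X.shape[1])             # illustrative: random coordinate
    t = np.median(X[idx, j])                 # illustrative: median threshold
    left, right = idx[X[idx, j] < t], idx[X[idx, j] >= t]
    if len(left) < min_leaf or len(right) < min_leaf:
        node.value = Y[idx].mean()
        return node
    node.split = (j, t)
    node.left = grow(X, Y, left, rng, min_leaf)
    node.right = grow(X, Y, right, rng, min_leaf)
    return node

def predict(node, x):
    """Descend to the terminal node containing x and return its value."""
    while node.split is not None:
        j, t = node.split
        node = node.left if x[j] < t else node.right
    return node.value
```

The returned prediction is always an average of observed responses, so it lies between the minimum and maximum of $Y$.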
Given a base learner $T$, a random forest estimator at $x$ is defined to be the order-$s$ $U$-statistic with kernel $T$, i.e., \begin{equation} \RF(x; Z_1, \dots, Z_n) = \frac{1}{\binom{n}{s}} \sum_{i_1, \dots, i_s} \E_{\xi} T(x; \xi, Z_{i_1}, \dots, Z_{i_s}), \end{equation} where the randomization device $\xi$ is marginalized over. In this paper, we study the asymptotic distribution of the vector \begin{equation} \RF(x_1, \dots, x_q; Z_1, \dots, Z_n) \coloneqq \begin{bmatrix} \RF(x_1; Z_1, \dots, Z_n) \\ \vdots\\ \RF(x_q; Z_1, \dots, Z_n) \end{bmatrix} \in \mathbf{R}^q, \end{equation} where the random forest is employed to produce an estimate at $q$ points $x_1, \dots, x_q$. Previous work \cite{crf} has shown that under regularity conditions \begin{equation} \frac{\RF(x, Z_1, \dots, Z_n) - m(x)}{\sigma_n(x)} \xRightarrow{\;\rm dist\;} N(0,1) \qquad \text{where } m(x) = \E(Y \mid X = x) \end{equation} and $\sigma_n^2(x)$ is the variance of $\RF(x)$. In this paper, we extend their result to cover multivariate predictions and establish the joint normality \begin{equation} \Sigma^{-1/2}\{ \RF(x_1, \dots, x_q) - m(x_1, \dots, x_q) \} \xRightarrow{\;\rm dist\;} N(0, I_{q \times q}) \end{equation} where $m(x_1, \dots, x_q)$ is the vector with components $\E(Y \mid X = x_k)$. This is the main technical result of the paper. In addition, we provide numerical simulations to gauge the finite sample behavior of the multivariate random forest estimator. \subsection{Assumptions} Our results build on \cite{crf}, so we work with the same model of random forests. Specifically, we will assume the following properties of the underlying tree algorithm. \begin{assumption}[Honesty] Conditional on $\{ X_i \}$, the tree structure (i.e., the splitting coordinates and splitting indices) is independent of the tree estimates. \end{assumption} There are several ways to satisfy this assumption. 
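Since $\binom{n}{s}$ is astronomically large, practical implementations approximate the complete $U$-statistic by averaging the base learner over $B$ random size-$s$ subsamples; a minimal sketch, where the `base_learner` interface and the choice of $B$ are our own illustrative assumptions:

```python
import numpy as np

def forest_estimate(x, X, Y, base_learner, s, B=300, seed=0):
    """Monte Carlo approximation of the order-s U-statistic with kernel T:
    average the base learner over B random size-s subsamples; drawing the
    learner's internal randomness from rng stands in for marginalizing xi."""
    rng = np.random.default_rng(seed)
    n = len(Y)
    total = 0.0
    for _ in range(B):
        idx = rng.choice(n, size=s, replace=False)   # one size-s subset
        total += base_learner(x, X[idx], Y[idx], rng)
    return total / B

# Illustrative base learner: a 1-nearest-neighbour rule standing in for a tree.
def nn_learner(x, Xs, Ys, rng):
    return Ys[np.argmin(np.sum((Xs - x) ** 2, axis=1))]
```

With the trivial base learner that returns the subsample mean of $Y$, the estimator is simply an average of unbiased subsample means, which is a convenient sanity check.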
The first is to calculate splits based only on $X_i$: for example, a splitting coordinate and index is chosen to minimize the average squared distance to the center of each of the two nodes, as in clustering algorithms. The second way is to perform data splitting: partition the data into two sets $\mathcal{I}_1$ and $\mathcal{I}_2$. Observations in $\mathcal{I}_1$ and the covariates $X_i$ for $i \in \mathcal{I}_2$ may be freely used during the splitting process, while the responses $Y_i$ for $i \in \mathcal{I}_2$ are used to determine terminal node values. Finally, a third method assumes the existence of side information $\{ W_i \}$: splits are made in the domain of $\{ X_i \}$ using $\{ W_i \}$ as surrogate responses, with $\{ Y_i \}$ being used for prediction at the terminal nodes. For simplicity, we will assume that the first scheme is used, namely that the splitting algorithm uses only $X_i$ and not $Y_i$ in placing the splits. Our results remain valid for the third scheme. \begin{assumption}[Randomized Cyclic Splits] At each non-terminal node, the splitting algorithm decides to do a data-independent split with probability $\delta > 0$, by flipping a ``$\delta$-coin'' whose distribution is independent of everything else. The first time the coin lands heads, the first coordinate is chosen as the splitting coordinate; the second time, the second coordinate is chosen, and so on, such that on the $J$-th time the coin lands heads, the $((J-1) \bmod p) + 1$-th coordinate is chosen. \end{assumption} This is a modification of the assumption in \cite{crf}, in which each coordinate has a probability $\delta$ of being chosen at each split; the latter is most easily implemented by flipping a $p\delta$-coin, and selecting one of the $p$ coordinates uniformly at random to split when the coin lands heads. Our assumption does away with the randomization in the second step. 
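The cyclic rule in the assumption can be stated compactly as a generator; returning `None` (our own convention) signals that the split is left to the data-driven algorithm:

```python
import numpy as np

def cyclic_coordinate_chooser(p, delta, rng):
    """At each node: with probability delta force a data-independent split,
    cycling through coordinates 0, 1, ..., p-1, 0, 1, ... (0-indexed, so the
    J-th heads picks coordinate (J-1) mod p); otherwise yield None, leaving
    the choice to the data-driven splitting algorithm."""
    heads = 0  # number of times the delta-coin has landed heads so far
    while True:
        if rng.random() < delta:      # the delta-coin lands heads
            yield heads % p
            heads += 1
        else:
            yield None                # free, data-dependent split
```

Note that the forced coordinates always appear in cyclic order regardless of how the heads are interleaved with data-driven splits.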
\begin{assumption}[The Splitting Algorithm is $(\alpha, k)$-Regular] There exists some $\alpha \in (0, 1/2)$ such that whenever a split occurs in a node with $m$ sample points, the two hyper-rectangles contain at least $\alpha m$ points each. Moreover, splitting ceases at a node only if the node contains fewer than $2k-1$ points for some $k$. In particular, this implies that $\Var T$ has bounded entries, since the number of points in each terminal node is bounded above. \end{assumption} As shown in \cite{Wager2015AdaptiveCO}, this implies that with all but exponentially small probability, the splitting axis also shrinks by a factor between $\alpha$ and $1-\alpha$. The following assumption is specific to our model. It specifies that the \emph{potential} splits do not depend on the data; in practice, this is ``essentially'' without loss of generality as the domain of $X$ is fixed. For example, the assumption is satisfied if all splits occur at indices that lie on a specified grid, e.g., points which can be represented by a 64-bit floating point number. In keeping with the $(\alpha, k)$-regularity assumption above, we also assume that no potential split may shrink the splitting axis by more than a proportion $\alpha$. \begin{assumption}[Predetermined Splits] The potential splits at each node do not depend on the realizations of the data. Furthermore, the number of potential splits at each node is finite, and each potential split shrinks the length of the corresponding axis by at most $\alpha$. \end{assumption} We will also require other assumptions regarding the ``stability'' of splits; these assumptions will be introduced later. Finally, we follow \cite{crf} and place the following distributional assumptions on the data generating process. 
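The $(\alpha, k)$-regularity requirement on sample counts can be expressed as a filter over the predetermined grid of candidate thresholds; a sketch, where the grid itself is an assumed input (per the predetermined-splits assumption):

```python
import numpy as np

def regular_splits(x_col, grid, alpha):
    """Return the candidate thresholds from the predetermined grid that are
    alpha-regular in the sample-count sense: each side of the split keeps
    at least alpha * m of the m points currently in the node."""
    m = len(x_col)
    keep = []
    for t in grid:
        n_left = np.sum(x_col < t)           # points falling left of t
        if alpha * m <= n_left <= (1 - alpha) * m:
            keep.append(t)
    return keep
```

Splitting would then be restricted to the thresholds this filter retains, guaranteeing both children contain at least $\alpha m$ points.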
\begin{assumption}[Distributional Assumptions on $(X,Y)$] The covariate $X$ is distributed on $[0,1]^p$ with a density that is bounded away from zero and infinity; without loss of generality, we may assume that $X$ is uniformly distributed on the $p$-dimensional hypercube. Furthermore, the regression functions $x \mapsto \E(Y \mid X = x)$, $x \mapsto \E(Y^2 \mid X = x)$, and $x \mapsto \E(Y^3\mid X=x)$ are uniformly Lipschitz continuous. \end{assumption} \section{Gaussianity of Multivariate $U$-Statistics} \subsection{Hajek Projections} For $m \geq 1$ and a statistic $f(Z_1, \dots, Z_m) \in \mathbf{R}^q$, recall that the Hajek projection of $f$ is defined to be \begin{equation} \mathring f(Z_1, \dots, Z_m) = \sum_{i=1}^m \E[f(Z_1, \dots, Z_m) \mid Z_i] - (m-1) \E f(Z_1, \dots, Z_m). \end{equation} In particular, when $Z_1, \dots, Z_m$ is an IID sequence and $f$ is symmetric in its arguments, we have \begin{equation} \mathring f(Z_1, \dots, Z_m) = \sum_{i=1}^m f_1(Z_i) - (m-1)\E f, \end{equation} where $f_1(z) = \E[f \mid Z_1 = z]$. Applying these calculations to our setting, where $f$ is the random forest estimator, yields \begin{equation} \mathring\RF(Z_1, \dots, Z_n) - \mu = \sum_{i=1}^n \E(\RF - \mu\mid Z_i) = \frac{1}{\binom{n}{s}} \sum_{i=1}^n \E \biggl[ \sum_{i_1, \dots, i_s} \E_{\xi}T(\xi; Z_{i_1}, \dots, Z_{i_s}) - \mu \mid Z_i \biggr]. \end{equation} Since our samples are independent, $\E(\E_{\xi}T(\xi;Z_{i_1}, \dots, Z_{i_s}) \mid Z_i) = \mu$ whenever $i \notin \{ i_1, \dots, i_s \}$. As $\{ i_1, \dots, i_s \}$ runs over the size-$s$ subsets of $\{ 1, \dots, n \}$, there are exactly $\binom{n-1}{s-1}$ subsets which contain~$i$. For each of these subsets, \begin{equation} \E(\E_{\xi} T(\xi; Z_{i_1}, \dots, Z_{i_s}) - \mu \mid Z_i) \eqqcolon T_1(Z_i) - \mu, \end{equation} where $T_1(z) = \E[\E_{\xi} T(\xi; Z_1, Z_2, \dots, Z_s) \mid Z_1 = z]$. 
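For an additive kernel the $U$-statistic coincides with its own Hajek projection, which permits a small numerical sanity check of the projection formula; the toy kernel $h(z_1, z_2) = (z_1 + z_2)/2$ and its first-order projection $h_1$ below are our own illustrative example:

```python
import numpy as np
from itertools import combinations

def u_statistic(Z, h, s):
    """Complete order-s U-statistic: the average of h over all size-s subsets."""
    return np.mean([h(Z[list(c)]) for c in combinations(range(len(Z)), s)])

def hajek_projection(Z, s, mu, h1):
    """First-order (Hajek) projection of the order-s U-statistic:
    mu + (s/n) * sum_i (h1(Z_i) - mu), with mu = E h."""
    return mu + (s / len(Z)) * np.sum(h1(Z) - mu)

# Toy additive kernel h(z1, z2) = (z1 + z2)/2, whose first-order projection
# is h1(z) = (z + mu)/2; mu is the mean of the assumed (centered) population.
mu = 0.0
h = lambda zz: zz.mean()
h1 = lambda z: (z + mu) / 2
```

For this kernel, both the complete $U$-statistic and its Hajek projection reduce to the sample mean, so the two agree exactly.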
Therefore, \begin{equation} \mathring\RF - \mu = \frac{1}{\binom{n}{s}} \sum_{i=1}^n \binom{n-1}{s-1} (T_1(Z_i) - \mu) = \frac{s}{n} \sum_{i=1}^n (T_1(Z_i) - \mu). \end{equation} \subsection{Asymptotic Gaussianity via Hajek Projections} The standard technique for deriving the asymptotic distribution of a $U$-statistic is to establish a lower bound on the variance of its Hajek projection; this is the approach taken in \cite{crf}, and we follow it here. Let $V$ be the variance of $\mathring\RF$; using our previous result, we can write \begin{equation} \label{eq:Vvariance} V = \Var\biggl[ \frac{s}{n} \sum_{i=1}^n (T_1(Z_i) - \mu) \biggr] = \frac{s^2}{n} \Var(T_1(Z_1)) = \frac{s}{n} \Var\biggl[ \sum_{i=1}^s T_1(Z_i) \biggr] = \frac{s}{n} \Var \mathring T, \end{equation} where $\mathring T$ is the Hajek projection of the statistic $(Z_1, \dots, Z_s) \mapsto \E_{\xi}T(\xi; Z_1, \dots, Z_s)$. Under standard regularity conditions for a multivariate triangular-array Central Limit Theorem, \begin{equation} V^{-1/2}(\mathring\RF - \mu) \xRightarrow{\;\rm dist\;} N(0, I). \end{equation} To establish the joint normality of $\RF$, write \begin{equation} V^{-1/2}(\RF - \mu) = V^{-1/2}(\RF - \mathring \RF) + V^{-1/2}(\mathring \RF - \mu), \end{equation} so all that is required is to prove that $V^{-1/2}(\RF - \mathring \RF) \xrightarrow{\;\P\;} 0$. We will show that this quantity converges to zero in mean square. Setting $e = V^{-1/2}(\RF - \mathring \RF)$, \begin{equation} \label{eq:error squared} \begin{split} \E (e^{\intercal} e) &= \E(\RF - \mathring \RF)^{\intercal} V^{-1} (\RF - \mathring \RF) = \E \tr V^{-1}(\RF - \mathring \RF)(\RF - \mathring \RF)^{\intercal} \\ &= \tr V^{-1} \E(\RF - \mathring \RF)(\RF - \mathring \RF)^{\intercal} = \tr V^{-1/2} \Var(\RF - \mathring \RF)V^{-1/2}.
\end{split} \end{equation} By Proposition~\ref{prop:hoeffding}, we have the following Hoeffding decomposition with respect to the matrix $V^{-1}$: \begin{equation} \RF - \mathring \RF = \frac{1}{\binom{n}{s}} \biggl[ \sum_{i<j} \binom{n-2}{s-2} (T_2(Z_i, Z_j) - \mu) + \sum_{i < j < k} \binom{n-3}{s-3} (T_3(Z_i, Z_j, Z_k) - \mu) + \cdots \biggr], \end{equation} where $T_2$, $T_3$, etc.\ are the second- and third-order projections of $T$, obeying the orthogonality conditions \begin{equation} \label{eq:4} \E[ (T_k - \mu)^{\intercal} V^{-1} (T_{k'} - \mu)] = 0, \qquad \text{for $k \neq k'$.} \end{equation} In addition, being projections of $T$, the higher-order projections also satisfy \begin{equation} \label{eq:7} \E[(T_k - \mu)^{\intercal} V^{-1} (T_k - \mu)] \leq \E[(T - \mu)^{\intercal} V^{-1} (T - \mu)]. \end{equation} Therefore, \eqref{eq:Vvariance} and \eqref{eq:error squared} imply that \begin{equation} \label{eq:1} \E(e^{\intercal}e) \leq \frac{s}{n} \tr\bigl[(\Var \mathring T)^{-1} \Var T\bigr]. \end{equation} \cite{crf} obtain a bound relating the diagonal elements of $\Var \mathring T$ and $\Var T$, namely \begin{equation} \label{eq:3} \frac{(\Var T)_{kk}}{(\Var \mathring T)_{kk}} \leq c (\log s)^p, \qquad \text{for each $k = 1, \dots, q$}, \end{equation} for a constant $c$. In the next section, we shall bound the off-diagonal elements of $\Var \mathring T$ and $\Var T$ in order to establish $\E(e^{\intercal} e) \to 0$. \begin{proposition}[Hoeffding Decomposition for Multivariate $U$-statistics] \label{prop:hoeffding} Fix a positive definite matrix $M$. Let $f(x_1, \dots, x_n) \in \mathbf{R}^q$ be a vector-valued function that is symmetric in its arguments and let $X_1, \dots, X_n$ be a random sample such that $f(X_1, \dots, X_n)$ has finite variance.
Then there exist functions $f_1, f_2, \dots, f_n$ such that \begin{equation} \label{eq:5} f(X_1, \dots, X_n) = \E(f) + \sum_{i=1}^n f_1(X_i) + \sum_{i < j} f_2(X_i, X_j) + \dots + f_n(X_1, \dots, X_n), \end{equation} where $f_k$ is a function of $k$ arguments, such that \begin{equation} \label{eq:6} \E f_k(X_1, \dots, X_k) = 0 \quad \text{and}\quad \E [f_k(X_1, \dots, X_k)^{\intercal} M f_{\ell}(X_1, \dots, X_{\ell})] = 0 \quad \text{for $k \neq \ell$}. \end{equation} \end{proposition} \subsection{Covariance Calculations} The aim of this subsection is to prove the following asymptotic behavior of the covariance matrix $\Var \mathring T$ under suitable stability conditions on the splitting algorithm: \begin{equation} \label{eq:covariance bounds} (\Var \mathring T)_{k, l} = o(s^{-\epsilon}) \text{ for all $k \neq l$ and some $\epsilon > 0$}. \end{equation} The next proposition shows that this is enough to establish $\E(e^{\intercal} e) \to 0$. \begin{proposition}\label{prop:trace-bound} Suppose $\Var \mathring T$ satisfies the condition in \eqref{eq:covariance bounds} and the entries of $\Var T$ are bounded. Then \begin{equation} \label{eq:2} \frac{s}{n} \tr\bigl[(\Var \mathring T)^{-1} \Var T\bigr] \to 0. \end{equation} \end{proposition} \begin{remark} Proposition~\ref{prop:trace-bound} does not require that the off-diagonal entries of $\Var T$ vanish, only that they are bounded. Moreover, we only require the much weaker bound $(\Var \mathring T)_{k, l} = o(\frac{1}{\log^p s})$. However, tighter bounds are still useful in applications: first, they allow us to control the ``approximation'' error $\E(e^{\intercal} e)$; second, they are useful for bounding the off-diagonal terms in $V^{-1}(\RF - \mu)$. \end{remark} The rest of this section establishes the covariance bound for $\Var \mathring T$. Recall that \begin{equation} \label{eq:10} \mathring T - \mu = \sum_{i=1}^s (\E(T \mid Z_i) - \mu), \quad \text{so that}\quad \Var \mathring T = s \Var(\E(T \mid Z_1)) \text{ due to independence}.
\end{equation} By the law of total variance (the cross term vanishes by the tower property), \begin{equation} \label{eq:11} \Var \E(T \mid Z_1) = \Var[\E(T \mid Z_1) - \E(T \mid X_1)] + \Var[\E(T \mid X_1)], \end{equation} and furthermore, honesty implies \begin{equation} \label{eq:12} \E(T_k \mid Z_1) - \E(T_k \mid X_1) = \E(I_k \mid X_1) (Y_1 - \E(Y_1 \mid I_k = 1, X_1)), \end{equation} where $T_k$ is the $k$-th coordinate of $T$ (i.e., the estimate of the tree at $x_k$) and $I_k$ is the indicator variable for the event that $X_1$ belongs to the terminal node containing $x_k$. In particular, the off-diagonal entry at $(j, k)$ of $\Var[\E(T \mid Z_1) - \E(T \mid X_1)]$ is equal to \begin{equation} \label{eq:15} \E[\E(I_k \mid X_1) \E(I_j \mid X_1) (Y_1 - \E(Y_1 \mid X_1, I_k = 1)) (Y_1 - \E(Y_1 \mid X_1, I_j = 1))]. \end{equation} Following the same truncation argument as in \cite[Theorem 5]{crf}, this quantity is bounded by \begin{equation} \E[\E(I_k \mid X_1) \E(I_{j} \mid X_1)]. \end{equation} Alternatively, this bound also holds in the case when $Y$ is bounded. (Note that a direct application of the Cauchy--Schwarz inequality yields the weaker bound $\sqrt{\E[\E(I_k \mid X_1)^2 \E(I_{j} \mid X_1)^2]}$.) Recall that $\E(I_k \mid X_1 = x)$ is the probability that $x$ and $x_k$ belong to the same terminal node; likewise $\E(I_j \mid X_1 = x)$ is the probability that $x$ and $x_j$ belong to the same terminal node. The following proposition shows the intuitive result that for sufficiently large $s$, at least one of these probabilities is small. \begin{proposition} \label{prop:terminal-node-probability} Fix two distinct points $x$ and $\bar x \in [0,1]^p$, and let $M(x, \bar x)$ be the probability that $x$ and $\bar x$ belong to the same terminal node. If $\delta > 1/2$, where $\delta$ is the exponent from the splitting stability assumption below, then \begin{equation} \label{eq:19} M(x, \bar x) = o(s^{-(1+\epsilon)}) \quad \text{for some $\epsilon > 0$}. \end{equation} The same holds for the quantity $M(x, \bar x \mid X_1 = \bar x)$, the probability conditional on $X_1 = \bar x$.
\end{proposition} \todo{Discuss the significance of the metric $M(x, \bar x)$ that measures how likely $x$ and $\bar x$ are to lie in the same terminal node.} \todo{Simulations should show that the off-diagonal terms are strongly correlated with $M(x, \bar x)$.} This proposition shows that the contribution of $\Var[\E(T \mid Z_1) - \E(T \mid X_1)]$ to the cross covariances of $\Var \E(T \mid Z_1)$ is small (i.e., smaller than the required $(\log s) / s$). The requirement that $\delta > 1/2$, while needed for our theoretical results, may not be needed in practice. The reason is that $\delta > 1/2$ is required for a uniform bound on the quantity \begin{equation} \E(I_k \mid X_1) \E(I_{j} \mid X_1), \end{equation} while what the proposition demands is a bound on its expectation. Indeed, in the extreme case $x = 0$ and $\bar x = 1$, it is easy to see that the expectation satisfies the required bound even when $\delta \leq 1/2$. In light of this, we could instead impose the high-level condition that \begin{equation} M(x, \bar x) = o\biggl( \frac{1}{\log^p s \cdot s} \biggr), \end{equation} where $M(x, \bar x) = \E[\E(I_k \mid X_1) \E(I_{j} \mid X_1)]$ as above. \subsubsection{Bounding $\Var \E(T \mid X_1)$} We turn next to bounding the cross covariances of $\Var[\E(T \mid X_1)]$. It will be convenient to change notation: write $x$ for $x_j$ and $\bar x$ for $x_k$, and let $x_1$ denote the value of $X_1$. Thus, we need to show that \begin{equation} \label{eq:covariance-bound} \E[ (\E(T \mid X_1 = x_1) - \mu) (\E(\bar T \mid X_1 = x_1) - \bar \mu) ] = o\biggl( \frac{\log s}{s} \biggr), \end{equation} where $T$ and $\bar T$ are the tree estimates at $x$ and $\bar x$, and $\mu$ and $\bar \mu$ their (unconditional) expectations. The plan is to show that whenever $\|x - x_1\|_{\infty}$ is bounded away from zero, the difference \begin{equation} \label{eq:20} \E(T \mid X_1 = x_1) - \mu = \E(T \mid X_1 = x_1) - \E(T) \end{equation} is small.
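The quantity $M(x, \bar x)$ can also be estimated by direct simulation. The sketch below is purely illustrative: the splitting rule it uses (a uniformly random axis and a cut at a uniform fraction in $[\alpha, 1-\alpha]$ of that axis) is a hypothetical stand-in for the data-driven rules analyzed in the text, and the function name is ours.

```python
import numpy as np

def same_cell_probability(x, x_bar, p=1, depth=4, alpha=0.25,
                          n_trees=2000, seed=0):
    """Monte Carlo estimate of M(x, x_bar): the probability that two
    fixed points land in the same terminal cell of a random
    alpha-regular recursive partition of [0,1]^p."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_trees):
        lo, hi = np.zeros(p), np.ones(p)
        together = True
        for _ in range(depth):
            j = rng.integers(p)
            cut = lo[j] + (hi[j] - lo[j]) * rng.uniform(alpha, 1 - alpha)
            if (x[j] <= cut) != (x_bar[j] <= cut):
                together = False   # the split separates the two points
                break
            # both points fall on the same side; shrink the cell
            if x[j] <= cut:
                hi[j] = cut
            else:
                lo[j] = cut
        hits += together
    return hits / n_trees
```

Under this toy rule, the estimate is $1$ when $x = \bar x$ and decays quickly with the separation $\|x - \bar x\|_{\infty}$, in line with Proposition~\ref{prop:terminal-node-probability}.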
Intuitively, the knowledge of the sample point $X_1 = x_1$ changes the expectation of the tree estimate at $x$ only when $x_1$ affects the position of the terminal node containing $x$. This is unlikely to happen when $x_1$ is far away from $x$: as soon as $X_1$ leaves the node containing $x$ in the splitting process, its effect on the terminal node diminishes. Toward this end, fix $x$, and let $\Pi$ denote the terminal node that contains $x$. Since the set of potential splits is unchanged when conditioning on $X_1$, $\Pi$ is a discrete random variable and we may write \begin{equation} \label{eq:tree-formula} \E(T) = \sum_{\pi} \P(\Pi = \pi) \mu_{\pi} \quad \text{and}\quad \E(T \mid X_1 = x_1) = \sum_{\pi} \P(\Pi = \pi \mid X_1 = x_1) \mu_{\pi}' \end{equation} where $\mu_{\pi} = \E(T \mid \Pi = \pi)$ and $\mu_{\pi}' = \E(T \mid \Pi = \pi, X_1 = x_1)$. Recall that $\Pi$ is generated by a recursive splitting procedure, and we can make a natural correspondence between \eqref{eq:tree-formula} and an expectation taken over a directed acyclic graph (DAG) in the following way. Let $[0, 1]^p$ be the root of the DAG; for every potential split at $[0,1]^p$, there is a directed edge to a new vertex, namely the half of the split that contains $x$. If a vertex is a leaf (i.e., splitting ceases at that node), then the vertex has no outgoing edges; otherwise, a vertex corresponding to a non-terminal node has an outgoing edge for each potential split, with each edge going to another vertex that is the halfspace containing $x$. To each terminal vertex $v$ in the DAG, which naturally corresponds to a terminal node $\pi$ containing $x$, associate a value $f(v) \coloneqq \mu_{\pi}$. Each edge $e = (v \to w)$ corresponds to a split $s$ taking the node $v$ to the halfspace $w$; associate with this edge the transition probability \begin{equation} \label{eq:24} p(e) \coloneqq \P(\text{$s$ is chosen at $v$} \mid \text{current node is $v$}) \eqqcolon \P(w \mid v).
\end{equation} Given the transition probabilities, the value $f$ may be extended to each vertex $v$ via the recursion \begin{equation} \label{eq:25} f(v) \coloneqq \sum_{e: v \to w} \P(w \mid v) f(w); \qquad \text{thus, $f(v)$ is the continuation value at $v$.} \end{equation} In this way, the expectation $\E(T) = f(\text{root}) = f([0,1]^p)$. If instead we assign the terminal values $f'(v) = \mu'_{\pi}$ and the transition probabilities \begin{equation} \label{eq:27} p'(e) = \P(\text{$s$ is chosen at $v$} \mid \text{current node is $v$}, X_1 = x_1) = \P'(w \mid v), \end{equation} then we recover $\E(T \mid X_1 = x_1) = f'([0,1]^p)$. Therefore, bounding $\E(T \mid X_1 = x_1) - \E(T)$ reduces to bounding the difference in the continuation values. We will need to assume that $p'(e) \approx p(e)$; precisely, we require the following assumption on the differences in the splitting probabilities. \begin{splitting-stability} For any node $v$, the total variation distance between the distributions $\{ p(e) \}_{e: v \to w}$ and $\{ p'(e) \}_{e: v \to w}$ is bounded by the effective volume of $v$. Specifically, there exists some $\delta>0$ such that for all $v$, \begin{equation} \label{eq:29} \operatorname{TV}(p, p') \leq \biggl( \frac{1}{s|v|} \biggr)^{1+\delta} \end{equation} up to some constant. Here, $|v|$ denotes the volume of the hyperrectangle at $v$, i.e., \begin{equation} \label{eq:30} |v| = \biggl| \prod_{j=1}^p (a_j, b_j) \biggr| = \prod_{j=1}^p |b_j - a_j|. \end{equation} \end{splitting-stability} Recall that with probability at least $1 - 1/s$, the number of sample points $X_1, \dots, X_s$ in the hyperrectangle $v$ is approximately $s|v|$, by our uniform distribution assumption. In general, since the density of $X$ is bounded away from zero and infinity, the number of sample points will be bounded (above and below) by a constant times $s|v|$.
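The recursion \eqref{eq:25} is an ordinary backward induction over the DAG; a minimal sketch (the vertex labels and data layout are ours, chosen only for illustration):

```python
def continuation_value(v, children, terminal_value):
    """Backward induction for the continuation value f(v): a leaf
    returns its terminal value, and an internal vertex returns the
    average of its children's values weighted by the transition
    probabilities.

    children: dict mapping a vertex to a list of (prob, child) pairs;
    vertices absent from `children` are leaves."""
    if v not in children:
        return terminal_value[v]
    return sum(p * continuation_value(w, children, terminal_value)
               for p, w in children[v])
```

Running the same recursion with the perturbed probabilities $p'$ and terminal values $\mu'_{\pi}$ yields $f'$, so the difference $f([0,1]^p) - f'([0,1]^p)$ can be computed by two passes over the same DAG.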
Therefore, the stability assumption places a restriction on the procedure used to select optimal splits: namely, if the decision is made on the basis of $m$ points, then moving any one of the points changes the optimal split with probability bounded by $m^{-(1+\delta)}$. In practice, most splitting procedures satisfy a stronger bound. A set of sufficient conditions is given in the following proposition. \begin{proposition} \label{prop:split-stability-sufficient} Assume that the optimal split is chosen based on the quantities \begin{equation} f_1(\mu_1, \dots, \mu_Q), \dots, f_P(\mu_1, \dots, \mu_Q) \end{equation} for some $Q \geq 1$, where $\mu_1, \dots, \mu_Q$ are the sample averages of the points being split, \begin{equation} \mu_k = \frac{1}{n_v} \sum_{i: X_i \in v} m_k(X_i). \end{equation} Specifically, the optimal split depends on which $f_i$ achieves the largest value, i.e., on $\argmax_i f_i(\mu)$. Here, $f_1, \dots, f_P$ are assumed to be Lipschitz, and the functions $m_1, \dots, m_Q$ are such that $m_k(X)$ is 1-sub-exponential. Then the splitting stability assumption is satisfied. \end{proposition} \begin{remark} Recall that the $X_i$ are bounded---therefore, $m_k(X)$ being sub-exponential allows the use of squared loss to compute the optimal split. \end{remark} In fact, the conditions in Proposition~\ref{prop:split-stability-sufficient} are sufficient to guarantee an exponential bound rather than a polynomial one. Thus, the proposition should be viewed simply as providing a ``plausibility argument'' that stable splitting rules are common in practice. The next proposition shows that when the splitting probabilities are bounded by $(s|v|)^{-(1+\delta)}$ as in the splitting stability assumption, the continuation values obey a bound of the same order. \begin{proposition} \label{prop:value-bounds} Suppose the splitting probabilities satisfy a generic bound \begin{equation} \TV(p, p') \leq \log(s) \Delta(s|v|) \quad \text{at each node $v$}.
\end{equation} Then for any node $v$ containing $x$ but not $x_1$, \begin{equation} |f(v) - f'(v)| \leq C \Delta(s|v|) \quad \text{for some constant $C$}. \end{equation} (The constant $C$ does not depend on $v$.) \end{proposition} Finally, we can put the bounds on $\TV(p, p')$ and $|f-f'|$ together and prove the required bound on the off-diagonal entries. \begin{proposition}\label{prop:2-bound} Suppose that the splitting rule is stable as in \eqref{eq:29} and that $\delta > 1-\alpha$. For $x \neq x_1$, \begin{equation} \label{eq:26} |\E(T \mid X_1 = x_1) - \E(T)| = o\biggl( \frac{1}{s^{1+\epsilon}} \biggr) \end{equation} for some $\epsilon > 0$. In particular, the off-diagonal entries of $\Var \E(T \mid X_1)$ are $o(s^{-(1+\epsilon)})$. \end{proposition} Note that Proposition~\ref{prop:terminal-node-probability} required only that $\delta > 1/2$; since $\alpha < 1/2$ by definition (so that $1 - \alpha > 1/2$), the present requirement is necessarily more restrictive. As before, this requirement is plausibly conservative in applications: we use it to give the bound \begin{equation} |v| \geq \alpha^L, \end{equation} even though the right-hand side is tight for at most a $1/2^L$ proportion of the nodes. Note that the ``average'' node has volume approximately $(1/2)^L$, so that $\delta > 1/2$ may be more appropriate. Putting everything together, we have shown the required result that \begin{equation} \label{eq:31} \Var(\mathring T)_{k, l} = o(s^{-\epsilon}) \qquad \text{for some $\epsilon > 0$}. \end{equation} Therefore, by Proposition~\ref{prop:trace-bound} and the trace calculations, the asymptotic joint normality of the random forest estimator is established. \section{Heuristics and Practical Recommendations} The previous sections established the asymptotic normality result \begin{equation} \label{eq:32} V^{-1/2}(\mathring \RF - \mu) \xRightarrow{\;\rm dist\;} N(0,I), \qquad \text{where } V = \Var \mathring \RF = \frac{s}{n} \Var \mathring T.
\end{equation} Recall that $\mu$ is the expectation of $\RF$, while the target function is $m(x) = \E(Y \mid X = x)$. \cite{crf} show that $(\RF - \mu)/\sqrt{V} \xRightarrow{\;\rm dist\;} N(0, 1)$ in the univariate case; since we have shown that $V$ is diagonally dominant, their result carries over, so that $V^{-1/2}(\RF - m) \xRightarrow{\;\rm dist\;} N(0, I)$. Moreover, \cite{crf} propose a jackknife estimator that can consistently estimate $\sqrt{V}$ in the scalar case. Our diagonal-dominance result suggests that the random forest estimates at $x$ and $\bar x \in [0, 1]^p$ are independent in the limit $n\to\infty$, \begin{equation} \label{eq:33} \begin{split} \Var(\RF(x) + \RF(\bar x)) &= \Var(\RF(x)) + \Var(\RF(\bar x)) + 2 \Cov(\RF(x), \RF(\bar x)) \\&\approx \Var(\RF(x)) + \Var(\RF(\bar x)), \end{split} \end{equation} so that the jackknife estimator for the scalar case can be used to obtain confidence intervals. However, the quality of the approximation above depends on the decay of the off-diagonal terms. In this section, we provide a back-of-the-envelope bound for the covariance term that may be useful for practitioners. We stress that the following calculations are mostly heuristic: as we have shown above, the covariance term depends on quantities such as $M(x, \bar x)$, which are heavily dependent on the splitting algorithm. Since the aim is to produce a ``usable'' result, we focus on heuristics. To begin, recall that the covariance term is upper bounded by\footnote{This is a very crude upper bound as we have dropped the quantity $\Delta(\alpha^{\ell} s)$ from the infinite series.} \begin{equation} M(x, \bar x) + \log^2s \biggl(\sum_{\ell = 0}^{\infty} p_{\ell}\biggr) \biggl(\sum_{\ell = 0}^{\infty} \bar p_{\ell}\biggr), \end{equation} where $p_{\ell} = \P(L \geq \ell)$ is the probability that $x$ and $x_1$ are not separated after $\ell$ splits (c.f.\ the proof of Proposition~\ref{prop:2-bound}) and likewise for $\bar p_{\ell}$.
If we let $I$ (resp.\ $\bar I$) denote the indicator that $X_1$ is in the terminal node of $x$ (resp.\ $\bar x$), then $I = 1$ coincides with the event that $L = \log_2 s$, so that \begin{equation} \E(I \mid X_1 = x_1) = \P(L = \log_2 s) \leq \frac{\E L}{\log_2 s}. \end{equation} Replacing the inequality with an approximation, we have $\E L \approx (\log_2 s) \E(I \mid X_1 = x_1)$. All of this shows that the covariance term is bounded by \begin{equation} (\log^4s) M(x, \bar x), \quad \text{up to appropriate constants}. \end{equation} Towards a useful heuristic, we will consider a bound on the correlation instead of the covariance. In our notation, \cite{crf} lower bound $M(x, x)$ (and $M(\bar x, \bar x)$) in their proof, while we upper bound $M(x, \bar x)$. Ignoring the logarithmic terms, we have \begin{equation} \biggl|\frac{\Cov(T(x), T(\bar x))}{\sqrt{\Var T(x) \cdot \Var T(\bar x)}}\biggr| \approx \frac{M(x, \bar x)}{\sqrt{M(x, x) M(\bar x, \bar x)}}. \end{equation} Recall that $M(x, \bar x) = \E[\E(I \mid X_1) \E(\bar I \mid X_1)]$ decays superlinearly as $\bar x$ moves away from $x$. Using the previous expression (note that $M(x,x) \approx M(\bar x, \bar x)$ by the symmetry between $x$ and $\bar x$), we can bound the correlation from purely geometric considerations. Since the integrand \begin{equation} \E(I \mid X_1) \E(\bar I \mid X_1) \end{equation} decays as $X_1$ moves away from $x$ (and $\bar x$), we may imagine that the integral $M(x,x)$ comes from contributions of points $X_1$ near $x$, say those points in an $L_{\infty}$-box of side length $d$ with volume $d^p$, i.e., points $\{ y \in [0,1]^p : \| x - y \|_{\infty} \leq d/2\}$.
For $M(x, \bar x)$, the contribution would come from points that are within $d/2$ of both $x$ \emph{and} $\bar x$, and to a first-order approximation, the volume of these points $\{ y \in [0,1]^p : \| x - y \|_{\infty} \leq d/2, \| \bar x - y \|_{\infty} \leq d/2 \}$ is \begin{equation} (d - z_1) \dots (d - z_p) \approx d^p - (z_1 + \dots + z_p)d^{p-1}, \quad \text{where $z_i = |x_i - \bar x_i|$}, \end{equation} where the approximation is accurate if $|z_i| \ll 1$. The proportion of the volume of these points is therefore $1 - \frac{1}{d}\| x - \bar x \|_1$, which leads to the heuristic \begin{equation} \biggl|\frac{\Cov(T(x), T(\bar x))}{\sqrt{\Var T(x) \cdot \Var T(\bar x)}} \biggr| \approx 1 - c \| x - \bar x \|_1, \qquad \text{for some constant $c$}. \end{equation} The RHS has the correct scaling when $x = \bar x$, i.e., the correlation equals one when $\| x - \bar x \|_1 = 0$. To maintain the correct scaling at the other extreme, $\| x - \bar x \|_1 = p$, we should take $c = 1/p$, so that \begin{equation} \biggl|\frac{\Cov(T(x), T(\bar x))}{\sqrt{\Var T(x) \cdot \Var T(\bar x)}}\biggr| \approx 1 - \frac{1}{p}\sum_{i=1}^p |x_i - \bar x_i|. \end{equation} Of course, this heuristic is ``wrong'' in that it does not depend on $s$; our theoretical results show that even for non-diametrically opposed points, the correlation drops to zero as $s \to \infty$. Therefore, another recommendation is to use \begin{equation} \label{linear-heuristic} \biggl|\frac{\Cov(T(x), T(\bar x))}{\sqrt{\Var T(x) \cdot \Var T(\bar x)}}\biggr| \approx \max \biggl( 1 - \frac{s^{\epsilon}}{p}\sum_{i=1}^p |x_i - \bar x_i|, \; 0 \biggr), \end{equation} for some $\epsilon > 0$, where the $s^{\epsilon}$ dependence comes from matching the heuristic $c = 1/d$ above with the proof of Proposition~\ref{prop:terminal-node-probability}. \section{Simulations} In this section, we conduct numerical experiments of varying sample sizes to gauge the empirical relevance of our theoretical results and heuristics.
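As a baseline for these experiments, the linear correlation heuristic \eqref{linear-heuristic} can be implemented directly; the sketch below (the function name and signature are ours) takes $\epsilon = 0$ by default, which recovers the $s$-independent version:

```python
import numpy as np

def heuristic_correlation(x, x_bar, s=None, eps=0.0):
    """Linear heuristic for |corr(T(x), T(x_bar))|: one minus the
    (optionally s^eps-sharpened) mean absolute coordinate difference,
    clipped at zero so the result stays a valid correlation magnitude."""
    x, x_bar = np.asarray(x, float), np.asarray(x_bar, float)
    p = x.size
    scale = (s ** eps) if s is not None else 1.0
    return max(1.0 - (scale / p) * np.abs(x - x_bar).sum(), 0.0)
```

By construction the heuristic equals one for coincident points and zero for diametrically opposed corners of the hypercube.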
In our experiments, we set $p = 2$, so that the covariates $X$ are distributed on the unit square. Instead of adopting a uniform distribution, for ease of interpretation, the distribution of $X$ is \begin{equation} \label{eq:16} X \sim \begin{cases} \bar N(\mu_1, I_2) & \text{with probability $1/4$} \\ \bar N(\mu_2, I_2) & \text{with probability $1/4$}\\ \bar N(\mu_3, I_2) & \text{with probability $1/4$}\\ \bar N(\mu_4, I_2) & \text{with probability $1/4$} \end{cases} \quad \text{where}\quad \begin{cases} \mu_1 = (0.3, 0.3)^{\intercal} \\ \mu_2 = (0.3, 0.7)^{\intercal} \\ \mu_3 = (0.7, 0.3)^{\intercal} \\ \mu_4 = (0.7, 0.7)^{\intercal} \\ \end{cases} \end{equation} and $\bar N$ denotes a truncated multivariate Gaussian distribution on the unit square.\footnote{ That is, $\bar N(\mu, \Sigma)$ denotes the conditional distribution of $x \sim N(\mu, \Sigma)$ on the event $x \in [0, 1]^2$.} Thus, $X$ has a bounded density on the unit square, with four peaks at $\mu_1, \dots, \mu_4$. The distribution of $Y$ conditional on $X = (x_1, x_2)$ is \begin{equation} \label{eq:18} Y \sim \frac{x_1 + x_2}{2} + \frac{1}{5} N(0, 1). \end{equation} The random splitting probability is $\delta = 1/2$, and the regularity parameters are $\alpha = 0.01$ and $k = 1$, so that the tree is grown to the fullest extent (i.e., terminal nodes may contain a single observation), with the boundaries of each terminal node lying on the $101 \times 101$ grid of the unit square. For each sample size $n$, five thousand trees are grown, and the estimates are aggregated to compute the correlation. Figure~\ref{fig:1} plots the correlation of estimates at $x$ and $\bar x$ as a function of the $L_1$ norm $\|x - \bar x\|_1$. The calculation is performed by first fixing $x$, then calculating the sample correlation (across the five thousand trees) as $\bar x$ ranges over each cell; each correlation is then associated with the corresponding $L_1$ norm $\|x - \bar x\|_1$.
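For concreteness, the covariate design \eqref{eq:16} can be sampled by simple rejection; the sketch below (the function name and rejection strategy are ours, not a prescription) draws a mixture component uniformly and resamples until the Gaussian draw lands in the unit square:

```python
import numpy as np

def sample_covariates(n, seed=0):
    """Draw n covariates from the four-component truncated-Gaussian
    mixture of eq. (16): pick a center uniformly among the four mu_k,
    then rejection-sample N(mu_k, I_2) until the draw lies in [0,1]^2."""
    rng = np.random.default_rng(seed)
    centers = np.array([[0.3, 0.3], [0.3, 0.7], [0.7, 0.3], [0.7, 0.7]])
    out = np.empty((n, 2))
    for i in range(n):
        mu = centers[rng.integers(4)]
        while True:
            x = rng.normal(mu, 1.0)          # identity covariance
            if np.all((0.0 <= x) & (x <= 1.0)):
                out[i] = x
                break
    return out
```

Rejection sampling is exact here because conditioning $N(\mu, I_2)$ on the unit square is precisely what $\bar N$ denotes; the acceptance rate is modest but the cost is negligible at these sample sizes.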
This process is then repeated by varying the reference point $x$, and the correlation at $\| x - \bar x \|_1$ is the average of the correlations observed. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{C.pdf} \caption{Correlation as a function of sample size and $L_1$ norm.} \label{fig:1} \end{figure} The figure demonstrates that the linear heuristic \eqref{linear-heuristic} given in the previous section is conservative: it is evident that the correlation decreases super-linearly as $x$ and $\bar x$ become separated. Figure~\ref{fig:2} plots the correlation on a logarithmic scale, which shows that the correlation decays exponentially in a neighborhood of unity. In other words, the simulations suggest that the correct heuristic is of the form \begin{equation} \label{eq:28} \biggl|\frac{\Cov(T(x), T(\bar x))}{\sqrt{\Var T(x) \cdot \Var T(\bar x)}}\biggr| \approx e^{-\lambda\| x - \bar x \|_1} \end{equation} for some suitable $\lambda$. Note, however, that the rate of decay of the correlation depends heavily on the exact splitting algorithm employed. Thus, an avenue for future empirical work is to investigate the decay behavior of popular random forest algorithms used in practice. \begin{figure}[t] \centering \includegraphics{Clog.pdf} \caption{Logarithm of correlation as a function of sample size and $L_1$ norm.} \label{fig:2} \end{figure} \FloatBarrier \section{Conclusion and Points for Future Work} Regression trees and random forests are popular and effective non-parametric estimators in practical applications. A recent paper by Athey and Wager shows that the random forest estimate at any point is asymptotically Gaussian; in this paper, we extend this result to the multivariate case and show that the vector of estimates at multiple points is jointly normal. Specifically, the covariance matrix of the limiting normal distribution is diagonal, so that the estimates at any two points are asymptotically independent for sufficiently deep trees.
Moreover, the off-diagonal terms are bounded by quantities capturing how likely two points are to fall in the same cell of the tree's partition. Certainly, stability properties of the base learner are central to our results, c.f.\ Propositions~\ref{prop:terminal-node-probability} and \ref{prop:split-stability-sufficient}. We test our proposed covariance bound and the associated coverage rates of confidence intervals in numerical simulations. This paper provides a theoretical basis for performing inference on functionals of the target function (e.g., a heterogeneous treatment effect) when the functional is based on values of the target function at multiple points in the feature space. Specifically, we show that the covariance vanishes in the limit relative to the variance, and provide heuristics on the size of the correlation in finite samples. There are a couple of interesting avenues for future work: the first is to extend our framework to cover categorical or discrete-valued features. Here, new assumptions would be required in order to maintain the guarantee that node sizes are ``not too small.'' Another direction is to use our bounds---with possible improvements---on the covariance matrix of the underlying $U$-statistic and leverage recent results \cite{chernozhukov2017,chen2018} in order to provide finite-sample bounds. This would provide a more sound theoretical underpinning of our heuristics.
\section{Introduction}\label{sec:int} Over the past decade, convolutional neural networks (CNNs) have been accepted as the core of many computer vision solutions. Deep models trained on massive amounts of data have delivered impressive accuracy on a variety of tasks, including but not limited to semantic segmentation, face recognition, object detection and recognition. In spite of these successes, mobile devices cannot take much advantage of such models, mainly due to their inadequate computational resources. Camera-based games are particularly attractive when operated with object recognition and detection techniques, hence it is eagerly anticipated to deploy advanced CNNs (e.g., AlexNet~\cite{Krizhevsky2012}, VGG-Net~\cite{Simonyan2014} and ResNet~\cite{He2016}) on tablets and smartphones. Nevertheless, as the winner of the ILSVRC-2012 competition, AlexNet comes along with nearly 61 million real-valued parameters and 1.5 billion floating-point operations (FLOPs) to classify an image, making it resource-intensive in different aspects. Running it for real-time applications would require considerable CPU/GPU workloads and memory footprints, which is prohibitive on typical mobile devices. A similar situation occurs with deeper networks like VGG-Net and ResNet. Recently, CNNs with binary weights have been designed to resolve this problem. By forcing the connection weights to only two possible values (normally $+$1 and $-$1), researchers attempt to eliminate the floating-point multiplications (FMULs) required for network inference, as these are considered to be the most expensive operations. In addition, since the real-valued weights are converted to binary ones, these networks commit much less space to storing their parameters, which leads to a great saving in memory footprint and thus energy cost~\cite{Han2015}. Several methods have been proposed to train such networks~\cite{Courbariaux2015, Courbariaux2016, Rastegari2016}.
However, the reported accuracy of the obtained models is unsatisfactory on large datasets (e.g., ImageNet)~\cite{Rastegari2016}. Even worse, since straightforwardly widening the networks does not guarantee any increase in accuracy~\cite{Juefei-Xu2016}, it is also unclear how we can make a trade-off between the model precision and the expected accuracy with these methods. \begin{figure}[t] \begin{center} \includegraphics[width=0.94\linewidth]{1.png} \end{center} \vskip -0.1in \caption{Sketching a network model by exploiting binary structure within pre-trained filter banks, after which the full-precision model can be converted to an efficient one with binary (in black and light grey) connections.} \label{fig:1} \vskip -0.1in \end{figure} In this paper, we introduce network sketching as a new way of pursuing binary-weight CNNs, where the binary structures are exploited in pre-trained models rather than being trained from scratch. To seek the possibility of yielding state-of-the-art models, we propose two theoretically grounded algorithms, making it possible to regulate the precision of sketching for more accurate inference. Moreover, to further improve the efficiency of the generated models (a.k.a., sketches), we also propose an algorithm to associatively implement binary tensor convolutions, with which the required number of floating-point additions and subtractions (FADDs)\footnote {Without ambiguity, we collectively abbreviate floating-point additions and floating-point subtractions as FADDs.} is likewise reduced. Experimental results demonstrate that our method works extraordinarily well on both AlexNet and ResNet. That is, with a bit more FLOPs required and a little more memory space committed, the generated sketches outperform the existing binary-weight AlexNets and ResNets by large margins, producing near state-of-the-art recognition accuracy on the ImageNet dataset. The remainder of this paper is structured as follows.
In Section~\ref{sec:rel}, we briefly introduce the related work on CNN acceleration and compression. In Section~\ref{sec:ske}, we highlight the motivation of our method and provide some theoretical analyses for its implementations. In Section~\ref{sec:spe}, we introduce the associative implementation for binary tensor convolutions. Finally, Section~\ref{sec:exp} experimentally demonstrates the efficacy of our method and Section~\ref{sec:con} draws the conclusions. \section{Related Works}\label{sec:rel} The deployment problem of deep CNNs has been a concern for years. Efficient models can be learnt either from scratch or from pre-trained models. Generally, training from scratch demands strong integration of network architecture and training policy~\cite{Lin2016}, so here we mainly discuss representative works on the latter strategy. Early works are usually hardware-specific. Not restricted to CNNs, Vanhoucke et al.~\cite{Vanhoucke2011} take advantage of programmatic optimizations to produce a $3\times$ speedup on x86 CPUs. On the other hand, Mathieu et al.~\cite{Mathieu2013} perform fast Fourier transforms (FFTs) on GPUs and propose to compute convolutions efficiently in the frequency domain. Additionally, Vasilache et al.~\cite{Vasilache2015} introduce two new FFT-based implementations for more significant speedups. More recently, low-rank based matrix (or tensor) decomposition has been used as an alternative way to accomplish this task. Mainly inspired by the seminal works of Denil et al.~\cite{Denil2013} and Rigamonti et al.~\cite{Rigamonti2013}, low-rank based methods attempt to exploit parameter redundancy among different feature channels and filters.
By properly decomposing pre-trained filters, these methods~\cite{Denton2014, Jaderberg2014, Lebedev2014, Zhang2015, Liu2015} can achieve appealing speedups ($2\times$ to $4\times$) with acceptable accuracy drops ($\leq 1\%$).~\footnote{Some other works concentrate on learning low-rank filters from scratch~\cite{Tai2016, Ioannou2016}, which is out of the scope of our paper.} Unlike the above-mentioned methods, some research works regard memory saving as the top priority. To tackle the storage issue of deep networks, Gong et al.~\cite{Gong2014}, Wu et al.~\cite{Wu2016} and Lin et al.~\cite{Lin2016} consider applying quantization techniques to pre-trained CNNs, aiming to compress the networks with only minor concessions in inference accuracy. Another powerful category of methods in this scope is network pruning. Starting from the early works of LeCun et al.~\cite{Lecun1989} and Hassibi \& Stork~\cite{Hassibi1993}, pruning methods have delivered surprisingly good compressions on a range of CNNs, including advanced ones like AlexNet and VGG-Net~\cite{Han2015, Srinivas2015, Guo2016}. In addition, due to the reduction in model complexity, a fair speedup can be observed as a byproduct. As a method for generating binary-weight CNNs, our network sketching is orthogonal to most of the existing compression and acceleration methods. For example, it can be jointly applied with low-rank based methods, by first decomposing the weight tensors into low-rank components and then sketching them. As for cooperation with quantization-based methods, sketching first and conducting product quantization thereafter would be a good choice. \section{Network Sketching}\label{sec:ske} In general, convolutional layers and fully-connected layers are the most resource-hungry components in deep CNNs. Fortunately, both of them are considered to be over-parameterized~\cite{Denil2013,Veit2016}.
In this section, we highlight the motivation of our method and present its implementation details on the convolutional layers as an example; fully-connected layers can be handled in a similar way. Suppose that the learnable weights of a convolutional layer $\mathcal L$ can be arranged and represented as $\{ \mathbf W^{(i)}:0\leq i < n \}$, in which $n$ indicates the target number of feature maps, and $\mathbf W^{(i)} \in \mathbb R^{c\times w\times h}$ is the weight tensor of its $i$th filter. Storing all these weights would require $32\times c\times w\times h\times n$ bits of memory, and the direct implementation of the convolutions would require $s\times c\times w\times h\times n$ FMULs (along with the same number of FADDs), in which $s$ indicates the spatial size of the target feature maps. Since many convolutional models are believed to be informationally redundant, it is possible to seek low-precision and compact counterparts of them for better efficiency. With this in mind, we consider exploiting binary structures within $\mathcal L$ by using a divide-and-conquer strategy: we first approximate the pre-trained filters with linear combinations of binary basis tensors, and then group identical basis tensors to pursue maximal network efficiency. Details are described in the following two subsections, in which we drop the superscript $(i)$ from $\mathbf W$ because the arguments apply to all $n$ weight tensors.
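As a quick sanity check on these counts, the two formulas above can be evaluated directly. The following Python helper is our own illustration; the dimensions in the example call are made up, not taken from any particular model:

```python
def layer_costs(c, w, h, n, s):
    """Storage (in bits) and FMULs of a full-precision convolutional layer.

    c, w, h: channel and spatial dimensions of each filter;
    n: number of filters; s: spatial size of each output feature map.
    """
    t = c * w * h                  # number of elements per filter
    storage_bits = 32 * t * n      # 32-bit floats for every weight
    fmuls = s * t * n              # one multiplication per weight per output position
    return storage_bits, fmuls

# Hypothetical layer: 256 filters of size 48x5x5, 27x27 output positions.
bits, fmuls = layer_costs(c=48, w=5, h=5, n=256, s=27 * 27)
```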
\subsection{Approximating the Filters}\label{sec:app} As stated above, our first goal is to find a binary expansion of $\mathbf W$ that approximates it well (as illustrated in Figure~\ref{fig:2}), which means $\mathbf W \approx \left \langle \mathbf B, \mathbf a \right \rangle = \sum_{j=0}^{m-1} \mathbf a_j \mathbf B_j $, in which $\mathbf B \in \{+1,-1\}^{c\times w\times h\times m} $ and $\mathbf a \in \mathbb R^m$ are the concatenations of $m$ binary tensors $\{ \mathbf B_0,...,\mathbf B_{m-1} \}$ and the same number of scale factors $\{ \mathbf a_0,...,\mathbf a_{m-1} \}$, respectively. We herein investigate the appropriate choice of $\mathbf B$ and $\mathbf a$ with a fixed $m$. Two theoretically grounded algorithms are proposed in Sections~\ref{sec:dir} and~\ref{sec:ref}, respectively. \begin{figure}[ht] \begin{center} \includegraphics[width=0.92\linewidth]{2.png} \end{center} \caption{Approximating the real-valued weight tensor with a sum of scaled binary tensors.} \label{fig:2} \vskip -0.1 in \end{figure} \subsubsection{Direct Approximation}\label{sec:dir} For ease of understanding, we shall first introduce the direct approximation algorithm. Generally, the reconstruction error (or approximation error, round-off error) $e^2:=\| \mathbf W- \langle \mathbf B, \mathbf a \rangle \|^2$ should be minimized to retain the model accuracy after expansion. However, as a concrete decision problem, directly minimizing $e^2$ seems NP-hard, and solving it can thus be very time-consuming~\cite{Davis1997}. In order to finish in reasonable time, we propose a heuristic algorithm, in which $\mathbf B_j$ and $\mathbf a_j$ are learnt sequentially, with each selected to be the current optimum with respect to the $e^2$ minimization criterion.
That is, \begin{equation}\label{eq:1} \mathbf B_j, \mathbf a_j = \argmin_{B\in \mathcal B,\ a \in \mathbb R} \ \left \| \hat{\mathbf W}_j - a B \right \|^2, \end{equation} in which $\mathcal B := \{+1,-1\}^{c\times w\times h}$, the norm operator $\| \cdot \|$ is defined as $\|\mathbf X\|:=\langle \mathbf X, \mathbf X\rangle^{1/2}$ for any 3-D tensor $\mathbf X$, and $\hat{\mathbf W}_j$ indicates the approximation residue after combining all the previously generated tensor(s). In particular, $\hat{\mathbf W}_j = \mathbf W$ if $j=0$, and \begin{equation}\label{eq:2} \hat{\mathbf W}_j = \mathbf W-\sum_{k=0}^{j-1} \mathbf a_k \mathbf B_k \end{equation} if $ j \geq 1$. Through derivative calculations, it can be shown that Equation~(\ref{eq:1}) is equivalent to \begin{equation}\label{eq:3} \mathbf B_j = \mathrm{sgn}(\hat{\mathbf W}_j)\quad \mathrm{and}\quad \mathbf a_j=\frac{\langle \mathbf B_j , \hat{\mathbf W}_j \rangle}{t}, \end{equation} in which the function $\mathrm{sgn}(\cdot)$ calculates the element-wise sign of the input tensor, and $t = c\times w\times h$. \begin{algorithm}[tbp] \caption{The direct approximation algorithm:} \begin{algorithmic} \STATE {\bfseries Input:} $\mathbf {W}$: the pre-trained weight tensor, $m$: the desired cardinality of binary basis. \\ \STATE{\bfseries Output:} $\{ \mathbf B_j, \mathbf a_j:\ 0\leq j < m \}$: a binary basis and a series of scale factors.\\ \STATE Initialize $j \leftarrow 0$ and $\hat{\mathbf W}_j\leftarrow \mathbf W$. \\ \REPEAT \STATE Calculate $\mathbf B_j$ and $\mathbf a_j$ by Equation~(\ref{eq:3}). \STATE Calculate $\hat{\mathbf W}_{j+1} = \hat{\mathbf W}_j-\mathbf a_j \mathbf B_j$ and update $j \leftarrow j+1$. \\ \UNTIL{ $j$ reaches its maximal number $m$. } \end{algorithmic}\label{alg:1} \end{algorithm} The above algorithm is summarized in Algorithm~\ref{alg:1}.
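For concreteness, the update of Equation~(\ref{eq:3}) can be sketched in a few lines of NumPy (our own illustration; the weight tensor is flattened to a vector, which leaves the algorithm unchanged):

```python
import numpy as np

def direct_sketch(W, m):
    """Greedy expansion of W as sum_j a_j * B_j with B_j in {-1, +1}^t.

    Each step applies Equation (3): B_j is the sign of the current
    residue and a_j = <B_j, residue> / t.
    """
    residue = np.asarray(W, dtype=np.float64).ravel().copy()
    t = residue.size
    Bs, As = [], []
    for _ in range(m):
        B = np.where(residue >= 0, 1.0, -1.0)   # sgn(residue), with sgn(0) := +1
        a = np.dot(B, residue) / t              # mean absolute value of the residue
        Bs.append(B)
        As.append(a)
        residue -= a * B                        # update the approximation residue
    return np.stack(Bs), np.asarray(As)
```

The reconstruction $\sum_j \mathbf a_j \mathbf B_j$ produced this way obeys the error bound established in Theorem~\ref{theo:1}.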
It is considered heuristic (or greedy) in the sense that each $\mathbf B_j$ is selected to be the current optimum, regardless of whether it will preclude better approximations later on. Furthermore, some simple deductions give the following theoretical result. \begin{theorem}\label{theo:1} For any $m\geq 0$, Algorithm~\ref{alg:1} achieves a reconstruction error $e^2$ satisfying \begin{equation}\label{eq:4} e^2 \leq \| \mathbf W \|^2 (1-1/t)^m. \end{equation} \end{theorem} \begin{proof} Since $\mathbf B_j = \mathrm{sgn}(\hat{\mathbf W}_j)$, we can obtain that \begin{equation}\label{eq:5} \langle \mathbf B_j, \hat{\mathbf W}_j \rangle = \sum \nolimits_l | \hat{w}^{(l)}_j | \geq \| \hat{\mathbf W}_j \|, \end{equation} in which $\hat{w}^{(l)}_j$ is an entry of $\hat{\mathbf W}_j$, with the superscript $(l)$ indicating its index. From Equations~(\ref{eq:2}) and~(\ref{eq:5}), we have \begin{equation}\label{eq:6} \begin{split} \| \hat{\mathbf W}_{j+1} \|^2 & = \|\hat{\mathbf W}_j \|^2 - \mathbf a_j \langle \mathbf B_j, \hat{\mathbf W}_j \rangle \\ & = \|\hat{\mathbf W}_j \|^2 \left (1 - \frac{\langle \mathbf B_j, \hat{\mathbf W}_j \rangle^2}{ t\|\hat{\mathbf W}_j \|^2} \right ) \\ & \leq \|\hat{\mathbf W}_j \|^2 \left (1 - 1/t \right ). \end{split} \end{equation} The result follows by applying Formula~(\ref{eq:6}) for $j$ varying from $0$ to $m-1$. \end{proof} \subsubsection{Approximation with Refinement}\label{sec:ref} We can see from Theorem~\ref{theo:1} that, by utilizing the direct approximation algorithm, the reconstruction error $e^2$ decays exponentially at a rate proportional to $1/t$. That is, given a $\mathbf W$ of small size (i.e., when $t$ is small), the approximation in Algorithm~\ref{alg:1} can be pretty good with only a handful of binary tensors. Nevertheless, when $t$ is relatively large, the reconstruction error will decrease slowly and the approximation can be unsatisfactory even with a large number of binary tensors.
In this subsection, we propose to refine the direct approximation algorithm for a better reconstruction property. In Algorithm~\ref{alg:1}, each $\mathbf B_j$ and $\mathbf a_j$ is chosen to minimize $e^2$ with its counterparts fixed. However, in most cases, it is doubtful whether $\mathbf B$ and $\mathbf a$ are optimal overall. If not, we may need to refine at least one of them for the sake of a better approximation. On account of its computational simplicity, we turn to the specific case in which $\mathbf B$ is fixed. That is, supposing the direct approximation has already produced $\hat{\mathbf B}$ and $\hat{\mathbf a}$, we seek another scale vector $\mathbf a$ satisfying $\|\mathbf W-\langle \hat{\mathbf B},\mathbf a \rangle \|^2 \leq \|\mathbf W-\langle \hat{\mathbf B},\hat{\mathbf a} \rangle \|^2$. To achieve this, we follow the least squares regression method and get \begin{equation}\label{eq:7} \mathbf a_j = \left ( B_j^T B_j \right)^{-1} B_j^T \cdot \mathrm{vec}(\mathbf W), \end{equation} in which the operator $\mathrm{vec}(\cdot)$ returns a column vector whose elements are taken from the input tensor, and $B_j$ denotes $[ \mathrm{vec}(\mathbf B_{0}),...,\mathrm{vec}(\mathbf B_{j})]$. \begin{algorithm}[tbp] \caption{Approximation with refinement:} \begin{algorithmic} \STATE {\bfseries Input:} $\mathbf {W}$: the pre-trained weight tensor, $m$: the desired cardinality of binary basis. \\ \STATE{\bfseries Output:} $\{ \mathbf B_j, \mathbf a_j: 0\leq j < m \}$: a binary basis and a series of scale factors.\\ \STATE Initialize $j \leftarrow 0$ and $\hat{\mathbf W}_j\leftarrow \mathbf W$. \\ \REPEAT \STATE Calculate $\mathbf B_j$ and $\mathbf a_j$ by Equation~(\ref{eq:3}) and~(\ref{eq:7}). \\ \STATE Update $j \leftarrow j+1$ and calculate $\hat{\mathbf W}_j$ by Equation~(\ref{eq:2}). \\ \UNTIL{ $j$ reaches its maximal number $m$.
} \end{algorithmic}\label{alg:2} \end{algorithm} The above algorithm with scale factor refinement is summarized in Algorithm~\ref{alg:2}. Intuitively, the refinement operation attributes a memory-like feature to our method, and the following theorem ensures that it converges faster than Algorithm~\ref{alg:1}. \begin{theorem}\label{theo:2} For any $m\geq 0$, Algorithm~\ref{alg:2} achieves a reconstruction error $e^2$ satisfying \begin{equation}\label{eq:8} e^2 \leq \left \| \mathbf W \right \|^2 \prod_{j=0}^{m-1} \left (1-\frac{1}{t-\lambda(j,t)} \right ), \end{equation} in which $\lambda(j,t)\geq0$, for $0 \leq j \leq m-1$. \end{theorem} \begin{proof} To simplify the notations, let us further denote $w_j := \mathrm{vec}(\hat{\mathbf W}_j)$ and $b_{j+1} := \mathrm{vec}(\mathbf B_{j+1})$. We can then obtain, by block matrix multiplication and inversion, that \begin{equation} (B_{j+1}^T B_{j+1})^{-1} =\begin{bmatrix} \Phi+\Phi \psi \psi^T\Phi/r & -\Phi \psi/r \\ -\psi^T \Phi/r & 1/r \end{bmatrix}, \end{equation} in which $\Phi = (B_j^T B_j)^{-1}$, $\psi = B_{j}^T b_{j+1}$, and $r = b_{j+1}^T b_{j+1}-\psi^T \Phi \psi$. Consequently, we have the following equation for $j=0,...,m-1$, \begin{equation}\label{eq:ske11} w_{j+1}=\left ( I- \frac{\Lambda (b_{j+1} b_{j+1}^T)}{b_{j+1}^T \Lambda b_{j+1}} \right ) w_j, \end{equation} by defining $\Lambda := I-B_j\Phi B_j^T$. As we know, given positive semi-definite matrices $X$ and $Y$, $\mathrm{tr}(XY)\leq \mathrm{tr}(X)\mathrm{tr}(Y)$.
Then, Equation~(\ref{eq:ske11}) gives, \begin{equation}\nonumber \begin{split} \| \hat{\mathbf W}_{j+1} \|^2 & \leq \| \hat{\mathbf W}_j \|^2 - \frac{w_j^T (b_{j+1} b_{j+1}^T) \Lambda (b_{j+1} b_{j+1}^T) w_j}{ (b_{j+1}^T \Lambda b_{j+1})^2} \\ & = \| \hat{\mathbf W}_j \|^2 - \frac{w_j^T (b_{j+1} b_{j+1}^T) w_j}{b_{j+1}^T \Lambda b_{j+1}} \\ & = \|\hat{\mathbf W}_j \|^2 \left (1 - \frac{\langle \mathbf B_{j+1}, \hat{\mathbf W}_j \rangle^2}{b_{j+1}^T \Lambda b_{j+1} \|\hat{\mathbf W}_j \|^2 } \right ). \end{split} \end{equation} Obviously, it follows that \begin{equation} \| \hat{\mathbf W}_{j+1} \|^2 \leq \|\hat{\mathbf W}_j \|^2 (1-1/(t-\lambda(j,t))), \end{equation} in which $\lambda (j,t) := b_{j+1}^T (I-\Lambda) b_{j+1}$. Since $\lambda (j,t)$ represents the squared Euclidean norm of an orthogonal projection of $b_{j+1}$, it is not difficult to prove that $\lambda(j,t)\geq0$, and the result follows. \end{proof} \subsection{Geometric Interpretation}\label{sec:geo} After expanding the pre-trained filters, we can group identical binary tensors to save some more memory. In this paper, the whole technique is named network sketching, and the generated binary-weight model is straightforwardly called a sketch. Next, we shall interpret the sketching process from a geometric point of view. To start with, note that Equation~(\ref{eq:1}) essentially seeks a linear subspace, spanned by a set of $t$-dimensional binary vectors, that minimizes its Euclidean distance to $\mathrm{vec}(\mathbf W)$. Conceptually, there are two variables to be determined in this problem. Both Algorithms~\ref{alg:1} and~\ref{alg:2} solve it in a heuristic way, and the $j$th binary vector is always estimated by minimizing the distance between itself and the current approximation residue. What makes them different is that Algorithm~\ref{alg:2} takes advantage of the linear span of its previous $j-1$ estimations for better approximation, whereas Algorithm~\ref{alg:1} does not.
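This difference can be made concrete in code. A NumPy sketch of Algorithm~\ref{alg:2} follows (our own illustration; a least-squares solver stands in for the explicit $(B_j^TB_j)^{-1}B_j^T$ product of Equation~(\ref{eq:7})):

```python
import numpy as np

def refined_sketch(W, m):
    """Greedy binary directions with least-squares scale refinement.

    After each new sign direction is added, all scale factors are
    refit jointly against vec(W), as in Equation (7).
    """
    w = np.asarray(W, dtype=np.float64).ravel()
    Bs, a = [], None
    for _ in range(m):
        residue = w if a is None else w - np.column_stack(Bs) @ a
        Bs.append(np.where(residue >= 0, 1.0, -1.0))    # sgn of the residue
        Bmat = np.column_stack(Bs)                      # t x (j+1) matrix B_j
        a, *_ = np.linalg.lstsq(Bmat, w, rcond=None)    # joint refit of all scales
    return Bmat, a
```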
Let us now take a closer look at Theorem~\ref{theo:2}. Compared with Equation~(\ref{eq:4}) in Theorem~\ref{theo:1}, the distinction of Equation~(\ref{eq:8}) mainly lies in the existence of $\lambda(j,t)$. Clearly, Algorithm~\ref{alg:2} will converge faster than Algorithm~\ref{alg:1} as long as $\lambda(j,t)>0$ holds for every $j \in [0,m-1]$. Geometrically speaking, if we consider $B_j(B_j^T B_j)^{-1} B_j^T$ as the matrix of an orthogonal projection onto $\mathcal S_j:=\mathrm{span}(b_0,...,b_j)$, then $\lambda(j,t)$ is equal to the squared Euclidean norm of a vector projection. Therefore, $\lambda(j,t)=0$ holds if and only if vector $b_{j+1}$ is orthogonal to $\mathcal S_j$, or in other words, orthogonal to each element of $\{b_0,...,b_j\}$, which occurs with extremely low probability and only on the condition that $t$ is even, i.e., $t\in\{2k: k\in \mathbb N\}$. That is, Algorithm~\ref{alg:2} will probably prevail over Algorithm~\ref{alg:1} in practice. \section{Speeding-up the Sketch Model}\label{sec:spe} Using Algorithm~\ref{alg:1} or~\ref{alg:2}, one can easily get a set of $mn$ binary tensors on $\mathcal L$, which means the storage requirement for the learnable weights is reduced by a factor of $32t/(32m+tm)\times$. When applying the model, the required number of FMULs is also significantly reduced, by a factor of $(t/m)\times$. The only side effect of sketching is an increase in the number of FADDs, which poses an extra burden on the computing units. In this section, we try to ameliorate this defect and introduce an algorithm to further speed up the binary-weight networks. We start from the observation that, although the required number of FADDs grows monotonically with $m$, the inherent number of addends and augends is fixed for a given input of $\mathcal L$. That is, some repetitive FADDs exist in the direct implementation of binary tensor convolutions.
Let us denote by $\mathbf X\in \mathbb R^{c\times w\times h}$ an input sub-feature map, and see Figure~\ref{fig:3} for a schematic illustration. \begin{figure}[ht] \begin{center} \includegraphics[width=0.92\linewidth]{3.png} \end{center} \caption{As highlighted in the rounded rectangles, with high probability, repetitive FADDs exist in the direct implementation of binary tensor convolutions.} \label{fig:3} \end{figure} \subsection{Associative Implementation}\label{sec:ass} To avoid redundant operations, we first present an associative implementation of the multiple convolutions $\mathbf X\ast \mathbf B_0,...,\mathbf X\ast \mathbf B_{mn-1}$ on $\mathcal L$, in which the connection among different convolutions is fully exploited. To be more specific, our strategy is to perform convolutions in a hierarchical and progressive way, with each convolution result used as a baseline for the following calculations. Suppose the $j_0$th convolution is calculated in advance and produces $\mathbf X \ast \mathbf B_{j_0}=s$; then the convolution of $\mathbf X$ and $\mathbf B_{j_1}$ can be derived by using \begin{equation}\label{eq:12} \mathbf X \ast \mathbf B_{j_1} = s+(\mathbf X \ast (\mathbf B_{j_0} \veebar \mathbf B_{j_1}) )\times2, \end{equation} or alternatively, \begin{equation}\label{eq:13} \mathbf X \ast \mathbf B_{j_1} =-s+(\mathbf X \ast (\neg \mathbf B_{j_0} \veebar \mathbf B_{j_1}) )\times 2, \end{equation} in which $\neg$ denotes the element-wise not operator, and $\veebar$ denotes an element-wise operator whose behavior is in accordance with Table~\ref{tab:1}.
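Equation~(\ref{eq:12}) is easy to verify numerically: the operator of Table~\ref{tab:1} is zero wherever the two binary tensors agree, and elsewhere supplies exactly the $\pm2$ correction that turns one convolution result into the other. A NumPy check, with inner products standing in for full convolutions (all names are ours):

```python
import numpy as np

def vee(b0, b1):
    """The element-wise operator of Table 1: 0 where b0 == b1, else b1."""
    return np.where(b0 == b1, 0.0, b1)

rng = np.random.default_rng(2)
t = 64
x = rng.normal(size=t)                        # a flattened input sub-feature map
b0 = rng.choice([-1.0, 1.0], size=t)
b1 = rng.choice([-1.0, 1.0], size=t)

s = np.dot(x, b0)                             # baseline convolution, computed once
derived = s + 2.0 * np.dot(x, vee(b0, b1))    # Equation (12): reuse the baseline
```

Only the nonzero entries of `vee(b0, b1)` touch `x`, which is where the saving in FADDs comes from.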
\begin{table} \begin{center} \begin{tabular}{|c|c|c|} \hline $\mathbf B_{j_0}$ & $\mathbf B_{j_1}$ & $\mathbf B_{j_0} \veebar \mathbf B_{j_1}$ \\ \hline \hline $+1$ & $-1$ & $-1$\\ $+1$ & $+1$ & $0$ \\ $-1$ & $-1$ & $0$\\ $-1$ & $+1$ & $+1$\\ \hline \end{tabular} \end{center} \caption{Truth table of the element-wise operator $\veebar$.}\label{tab:1} \end{table} Since $\mathbf B_{j_0} \veebar \mathbf B_{j_1}$ produces a ternary output at each index position, we can naturally regard $\mathbf X \ast (\mathbf B_{j_0} \veebar \mathbf B_{j_1})$ as an iteration of \texttt{switch} ... \texttt{case} ... statements. In this manner, only the entries corresponding to $\pm1$ in $\mathbf B_{j_0} \veebar \mathbf B_{j_1}$ need to be operated on in $\mathbf X$, and thus acceleration is gained. Assuming that the inner-product of $\mathbf B_{j_0}$ and $\mathbf B_{j_1}$ equals $r$, then $(t-r)/2+1$ and $(t+r)/2+1$ FADDs are still required for calculating Equation~(\ref{eq:12}) and~(\ref{eq:13}), respectively. Obviously, we expect the integer $r\in [-t,+t]$ to be close to either $t$ or $-t$ for the possibility of fewer FADDs, and thus faster convolutions in our implementation. In particular, if $r\geq0$, Equation~(\ref{eq:12}) is chosen for better efficiency; otherwise, Equation~(\ref{eq:13}) should be chosen. \subsection{Constructing a Dependency Tree}\label{sec:dep} Our implementation works by properly rearranging the binary tensors and implementing binary tensor convolutions in an indirect way. For this reason, along with Equations~(\ref{eq:12}) and~(\ref{eq:13}), a dependency tree is also required to drive it. In particular, dependency is the notion that certain binary tensors are linked to specify which convolution to perform first and which follows up.
For instance, with the depth-first-search strategy, $\mathcal T$ in Figure~\ref{fig:4} shows a dependency tree indicating first to calculate $\mathbf X\ast \mathbf B_1$, then to derive $\mathbf X\ast \mathbf B_0$ from the previous result, then to calculate $\mathbf X\ast \mathbf B_3$ on the basis of $\mathbf X\ast \mathbf B_0$, and so forth. By traversing the whole tree, all $mn$ convolutions will be progressively and efficiently calculated. \begin{figure}[ht] \begin{center} \includegraphics[width=0.72\linewidth]{4.png} \end{center} \caption{A dependency tree for our algorithm. It suggests an order under which the associative convolutions are to be performed.} \label{fig:4} \end{figure} In fact, our algorithm works even with a randomly given tree, but a dedicated $\mathcal T$ is still needed for optimal performance. Let $G=\{V,E\}$ be an undirected weighted graph with vertex set $V$ and weight matrix $E \in \mathbb R^{mn\times mn}$. Each element of $V$ represents a single binary tensor, and each element of $E$ measures the distance between two chosen tensors. To keep in line with the previous subsection, we here define the distance function of the following form \begin{equation} d(\mathbf B_{j_0},\mathbf B_{j_1}):=\min \left(\frac{t+r}{2}, \frac{t-r}{2} \right), \end{equation} in which $r=\langle \mathbf B_{j_0}, \mathbf B_{j_1}\rangle$ indicates the inner-product of $\mathbf B_{j_0}$ and $\mathbf B_{j_1}$. Clearly, the defined function is a metric on $\{-1,+1\}^{c\times w\times h}$ and its range is restricted to $[0,t/2]$. Recall that we expect $r$ to be close to $\pm t$, as discussed in the previous subsection. In consequence, the optimal dependency tree should keep the distances between linked vertices as short as possible, and thus the minimum spanning tree (MST) of $G$ is what we want. From this perspective, we can use some off-the-shelf algorithms to construct such a tree.
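For illustration, such a tree can be grown with a Prim-style pass over the pairwise distances (our own sketch, returning a parent array rather than an explicit tree structure):

```python
import numpy as np

def build_dependency_tree(B):
    """Prim-style MST over k binary tensors (rows of B with entries +/-1),
    under d(Bi, Bj) = min((t + r) / 2, (t - r) / 2), r = <Bi, Bj>.

    Returns parent[i] for each vertex; the root (vertex 0) gets -1.
    """
    k, t = B.shape
    r = B @ B.T                                     # all pairwise inner products
    dist = np.minimum((t + r) / 2.0, (t - r) / 2.0)
    in_tree = np.zeros(k, dtype=bool)
    parent = np.full(k, -1)
    best = np.full(k, np.inf)
    best[0] = 0.0                                   # grow the tree from vertex 0
    for _ in range(k):
        u = int(np.argmin(np.where(in_tree, np.inf, best)))
        in_tree[u] = True
        closer = ~in_tree & (dist[u] < best)        # vertices now reachable more cheaply
        parent[closer] = u
        best[closer] = dist[u][closer]
    return parent
```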
Prim's algorithm~\cite{Prim1957} is chosen in this paper on account of its linear time complexity with respect to $|E|$, i.e., $O(m^2n^2)$ on $\mathcal L$. With the obtained $\mathcal T$, one can implement our algorithm easily, and the whole process is summarized in Algorithm~\ref{alg:3}. Note that, although the fully-connected layers calculate vector-matrix multiplications, these can be considered as a collection of tensor convolutions. Therefore, in the binary case, we can also accelerate the fully-connected layers by using Algorithm~\ref{alg:3}. \begin{algorithm}[tbp] \caption{The associative implementation:} \begin{algorithmic} \STATE {\bfseries Input:} $\{\mathbf {B}_j: 0\leq j<mn\}$: the set of binary tensors, $\mathbf X$: the input sub-feature map, $\mathcal T$: the dependency tree. \\ \STATE{\bfseries Output:} $\{y_j:0\leq j<mn\}$: the results of convolution.\\ \STATE Get $z =\mathcal T.root$ and calculate $y_{z.key} = \mathbf X \ast \mathbf B_{z.key}$.\\ \STATE Initialize the baseline value by $s\leftarrow y_{z.key}$. \REPEAT \STATE Search the next node of $\mathcal T$ and update $z$, $s$. \STATE Calculate $y_{z.key}$ by using Equation~(\ref{eq:12}) or~(\ref{eq:13}). \\ \UNTIL{ search ends. } \end{algorithmic}\label{alg:3} \end{algorithm} \section{Experimental Results}\label{sec:exp} In this section, we empirically analyze the proposed algorithms. For pragmatic reasons, all experiments are conducted on the famous ImageNet ILSVRC-2012 database~\cite{ILSVRC15} with advanced CNNs and the open-source Caffe framework~\cite{Jia2014}. The training set comprises 1.2 million labeled images, and the test set comprises the 50,000 validation images.
In Section~\ref{sec:exp1} and~\ref{sec:exp2}, we will test the performance of the sketching algorithms (i.e., Algorithm~\ref{alg:1} and~\ref{alg:2}) and the associative implementation of convolutions (i.e., Algorithm~\ref{alg:3}) in the sense of filter approximation and computational efficiency, respectively. Then, in Section~\ref{sec:exp3}, we evaluate the whole-net performance of our sketches and compare them with other binary-weight models. \subsection{Efficacy of Sketching Algorithms}\label{sec:exp1} As a starting experiment, we consider sketching the famous AlexNet model~\cite{Krizhevsky2012}. Although it is the champion solution of ILSVRC-2012, AlexNet seems to be very resource-intensive. Therefore, it is indeed appealing to seek its low-precision and efficient counterparts. As claimed in Section~\ref{sec:int}, AlexNet is an 8-layer model with 61M learnable parameters. Layer-wise details are shown in Table~\ref{tab:2}, and the pre-trained reference model is available online~\footnote{\url{https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet}.}. 
\begin{table} \begin{center} \begin{tabular}{|c|c|c|C{0.55in}|} \hline Layer Name & Filters & Params (b) & FLOPs \\ \hline \hline Conv1 & 96 & $\smallsym{\mathrel}{\sim}$1M & $\smallsym{\mathrel}{\sim}$211M\\ Conv2 & 256 & $\smallsym{\mathrel}{\sim}$10M & $\smallsym{\mathrel}{\sim}$448M\\ Conv3 & 384 & $\smallsym{\mathrel}{\sim}$28M & $\smallsym{\mathrel}{\sim}$299M\\ Conv4 & 384 & $\smallsym{\mathrel}{\sim}$21M & $\smallsym{\mathrel}{\sim}$224M\\ Conv5 & 256 & $\smallsym{\mathrel}{\sim}$14M & $\smallsym{\mathrel}{\sim}$150M\\ Fc6 & 1 & $\smallsym{\mathrel}{\sim}$1208M & $\smallsym{\mathrel}{\sim}$75M\\ Fc7 & 1 & $\smallsym{\mathrel}{\sim}$537M & $\smallsym{\mathrel}{\sim}$34M\\ Fc8 & 1 & $\smallsym{\mathrel}{\sim}$131M & $\smallsym{\mathrel}{\sim}$8M\\ \hline \end{tabular} \end{center} \caption{Details of the learnable layers in AlexNet~\cite{Krizhevsky2012}, in which ``Conv2'' is the most computationally expensive one and ``Fc6'' commits the most memory (in bits). In all these layers, FLOPs consist of the same number of FADDs and FMULs.}\label{tab:2} \end{table} Using Algorithm~\ref{alg:1} and~\ref{alg:2}, we are able to generate binary-weight AlexNets with different precisions. Theoretical analyses were given in Section~\ref{sec:ske}, so in this subsection we shall analyze the proposed algorithms empirically. In particular, we demonstrate in Figure~\ref{fig:5} how ``energy'' accumulates with a varying size of memory commitment for different approximators. Defined as $1-\sum e^2/\sum\|W\|^2$, the accumulative energy is negatively correlated with the reconstruction error~\cite{Zhang2015}, so the faster it increases, the better. In the figure, our two algorithms are abbreviated as ``Sketching (direct)'' and ``Sketching (refined)''.
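Since this metric underlies all comparisons in this subsection, a small reference implementation may help (our own code; each filter contributes a pair of original and approximated weight tensors):

```python
import numpy as np

def accumulative_energy(pairs):
    """pairs: iterable of (W, W_approx) arrays, one pair per filter.

    Returns 1 - sum(e^2) / sum(||W||^2), i.e. the fraction of the
    filters' total squared norm captured by the approximation.
    """
    pairs = list(pairs)
    e2 = sum(float(np.sum((w - a) ** 2)) for w, a in pairs)
    norm2 = sum(float(np.sum(w ** 2)) for w, a in pairs)
    return 1.0 - e2 / norm2
```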
To compare with other strategies, we also test the stochastically generated binary basis (named ``Sketching (random)'') as used in~\cite{Juefei-Xu2016}, and the network pruning technique~\cite{Han2015}, which is not naturally orthogonal to our sketching method. The scale factors for ``Sketching (random)'' are calculated by Equation~(\ref{eq:7}) to ensure its optimal performance. Consistent with the theoretical results, Algorithm~\ref{alg:1} converges much more slowly than Algorithm~\ref{alg:2} on all learnable layers, making it less effective for the filter approximation task. On the other hand, Algorithm~\ref{alg:1} compares favorably with ``Sketching (random)'' and the pruning technique: with a small working memory, our direct approximation algorithm approximates better, although as the memory size increases, the pruning technique may converge faster to its optimum. As discussed in Section~\ref{sec:spe}, the parameter $m$ balances model accuracy and efficiency in our algorithms. Figure~\ref{fig:5} shows that a small $m$ (for example 3) should be adequate for AlexNet to attain over 80\% of the accumulative energy in its refined sketch. Let us take layers ``Conv2'' and ``Fc6'' as examples and see Table~\ref{tab:3} for more details. \begin{table}[htb] \begin{center} \begin{tabular}{|C{0.8in}|C{0.65in}|C{0.6in}|C{0.5in}|} \hline Layer Name & Energy (\%) & Params (b) & FMULs \\ \hline \hline Conv2\_sketch & 82.9 & $\smallsym{\mathrel}{\sim}$0.9M & $\smallsym{\mathrel}{\sim}$560K \\ Fc6\_sketch & 94.0 & $\smallsym{\mathrel}{\sim}$114M & $\smallsym{\mathrel}{\sim}$12K \\ \hline \end{tabular} \end{center} \caption{With only 3 bits allocated, the refined sketch of AlexNet attains over 80\% of the energy on ``Conv2'' and ``Fc6'', and more than \textbf{10}$\times$ reduction in the committed memory for network parameters.
Meanwhile, the required number of FMULs is also drastically reduced (by \textbf{400}$\times$ and $\sim$\textbf{3000}$\times$) on the two layers.}\label{tab:3} \end{table} \subsection{Efficiency of Associative Manipulations}\label{sec:exp2} The associative implementation of binary tensor manipulations (i.e., Algorithm~\ref{alg:3}) is directly tested on the 3-bit refined sketch of AlexNet. To begin with, we still focus on ``Conv2\_sketch'' and ``Fc6\_sketch''. To be clear, we produce the results of Algorithm~\ref{alg:3} with both a stochastically generated dependency tree and a delicately calculated MST, while the direct implementation results are compared as a benchmark. All the implementations require the same number of FMULs, as demonstrated in Table~\ref{tab:3}, but significantly different numbers of FADDs, as compared in Table~\ref{tab:4}. Note that, in the associative implementations, some logical evaluations and $\times 2$ operations are additionally involved. Nevertheless, they are much less expensive than the FADDs and FMULs~\cite{Rastegari2016}, by at least an order of magnitude, so their costs are not analyzed in depth in this subsection~\footnote{Since the actual speedup varies dramatically with the architecture of the processing units, we do not measure it in this paper.}.
\begin{table}[ht] \begin{center} \begin{tabular}{|C{1.2in}|C{0.8in}|C{0.68in}|} \hline Implementation & Conv2\_sketch & Fc6\_sketch \\ \hline \hline Direct & $\smallsym{\mathrel}{\sim}$672M & $\smallsym{\mathrel}{\sim}$113M \\ Associative (random) & $\smallsym{\mathrel}{\sim}$328M & $\smallsym{\mathrel}{\sim}$56M \\ Associative (MST) & $\smallsym{\mathrel}{\sim}$265M & $\smallsym{\mathrel}{\sim}$49M \\ \hline \end{tabular} \end{center} \caption{The associative implementation remarkably reduces the required number of FADDs on ``Conv2\_sketch'' and ``Fc6\_sketch''.}\label{tab:4} \end{table} From the above table, we know that our associative implementation largely reduces the required number of FADDs on ``Conv2\_sketch'' and ``Fc6\_sketch''. That is, it properly ameliorates the adverse effect of network sketching and enables us to evaluate the 3-bit sketch of AlexNet without any unbearable increase in the required amount of computation. In addition, the MST helps to further improve performance, finally yielding $\sim$\textbf{2.5}$\times$ and $\sim$\textbf{2.3}$\times$ reductions on the two layers, respectively. Results on all learnable layers are summarized in Figure~\ref{fig:6}.
\begin{figure}[t] \includegraphics[width=0.87\linewidth]{6.png} \caption{The associative implementation of binary tensor convolutions helps to gain 2$\times$ to 3$\times$ reductions in the required number of FADDs on all learnable layers of "AlexNet\_sketch".} \label{fig:6} \vskip -0.1in \end{figure} \subsection{Whole-net Performance}\label{sec:exp3} \begin{figure*}[t] \begin{center} \captionsetup[subfigure]{labelformat=empty} \subfloat[]{ \includegraphics[width=0.225\textwidth]{5a.png}}\hskip 10pt \subfloat[]{ \includegraphics[width=0.225\textwidth]{5b.png}}\hskip 10pt \subfloat[]{ \includegraphics[width=0.225\textwidth]{5c.png}}\hskip 10pt \subfloat[]{ \includegraphics[width=0.225\textwidth]{5d.png}}\hskip 10pt \vskip -10pt \subfloat[]{ \includegraphics[width=0.225\textwidth]{5e.png}}\hskip 10pt \subfloat[]{ \includegraphics[width=0.225\textwidth]{5f.png}}\hskip 10pt \subfloat[]{ \includegraphics[width=0.225\textwidth]{5g.png}}\hskip 10pt \subfloat[]{ \includegraphics[width=0.225\textwidth]{5h.png}}\hskip 10pt \caption{Network sketching approximates AlexNet well with a much smaller amount of committed memory, and the refinement operation helps to achieve better convergence on all of its learnable layers.} \label{fig:5} \end{center} \vskip -0.06in \end{figure*} Having tested Algorithms~\ref{alg:1}, \ref{alg:2} and~\ref{alg:3} on the basis of their own criteria, it is time to compare the whole-net performance of our sketch with that of other binary-weight models~\cite{Courbariaux2015, Rastegari2016}. In light of the previous experimental results, we again use the 3-bit (direct and refined) sketches for evaluation, as they are both very efficient and accurate. Since the fully-connected layers of AlexNet contain more than 95\% of its parameters, we sketch them to the extreme of 1 bit.
Following Rastegari et al.~\cite{Rastegari2016}, we keep the 'fc8' (i.e., output) layer at full precision and report the top-1 and top-5 inference accuracies. Unlike their method, however, we sketch the 'conv1' layer as well, since it is also compute-intensive (as shown in Table~\ref{tab:1}). \begin{table}[ht] \begin{center} \begin{tabular}{|C{0.8in}|c|c|c|} \hline Model & Params (b) & Top-1 (\%) & Top-5 (\%) \\ \hline \hline Reference & $\smallsym{\mathrel}{\sim}$1951M & 57.2 & 80.3 \\ Sketch (ref.) & $\smallsym{\mathrel}{\sim}$193M & \textbf{55.2} & \textbf{78.8} \\ Sketch (dir.) & $\smallsym{\mathrel}{\sim}$193M & 54.4 & 78.1 \\ BWN~\cite{Rastegari2016} & $\smallsym{\mathrel}{\sim}$190M & 53.8 & 77.0 \\ BC~\cite{Courbariaux2015}& $\smallsym{\mathrel}{\sim}$189M & 35.4 & 61.0 \\ \hline \end{tabular} \end{center} \caption{Network sketching generates binary-weight AlexNets that make faithful inference with roughly \textbf{10.1}$\times$ fewer parameters (in bits) than the reference. Test accuracies of the competitors are cited from their papers. An updated version of BWN attains significantly improved accuracies (top-1: 56.8\% and top-5: 79.4\%), but the source of the improvement is unclear.}\label{tab:5} \end{table} To avoid the propagation of reconstruction errors, we fine-tune the generated sketches. Naturally, there are two protocols for accomplishing this task: projected gradient descent, and stochastic gradient descent with full-precision weight update~\cite{Courbariaux2015}. We choose the latter by virtue of its better convergence. The training batch size is set to 256 and the momentum to 0.9. The learning rate starts at 0.001, one tenth of the original value set in Krizhevsky et al.'s paper~\cite{Krizhevsky2012}, and drops every 20,000 iterations; only the center crop is used for accuracy evaluation.
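To illustrate the second protocol, here is a minimal NumPy sketch (ours, with illustrative names) of stochastic gradient descent with full-precision weight update for a 1-bit sketch: the forward/backward pass sees the binarized weights, but updates are accumulated on a persistent full-precision copy, so that gradient contributions smaller than the quantization step are not lost between iterations.

```python
import numpy as np

def sketch_1bit(w):
    # best single-code approximation of w in the least-squares sense:
    # the sign code scaled by the mean absolute value
    s = np.where(w >= 0, 1.0, -1.0)
    return np.abs(w).mean() * s

def sgd_step(w_fp, grad_fn, lr, momentum, velocity):
    # the gradient is evaluated at the *sketched* weights, but the
    # update is applied to the full-precision copy w_fp
    g = grad_fn(sketch_1bit(w_fp))
    velocity = momentum * velocity - lr * g
    return w_fp + velocity, velocity
```

In a real training run `grad_fn` would be the network's backward pass; here it is any callable returning a gradient of the loss with respect to the (sketched) weights.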
After 70,000 iterations in total (i.e., roughly 14 epochs), our sketches make faithful inference on the test set, and the refined model is better than the direct one. As shown in Table~\ref{tab:5}, our refined sketch of AlexNet achieves a top-1 accuracy of 55.2\% and a top-5 accuracy of 78.8\%, outperforming the recently released BinaryConnect (BC)~\cite{Courbariaux2015} and Binary-Weight Network (BWN)~\cite{Rastegari2016} models by large margins, while requiring only slightly more parameters. Network pruning also achieves compelling results on compressing AlexNet. However, it demands a lot of extra space for storing parameter indices, and more importantly, even the best pruning methods perform mediocrely on convolutional layers~\cite{Han2015, Guo2016}. In contrast, network sketching works sufficiently well on both layer types. Here we also verify its efficacy on ResNet~\cite{He2016}. Equipped with many more convolutional layers than AlexNet, ResNet won the ILSVRC-2015 classification competition. There are many instantiations of its architecture; for fair comparison, we choose the 18-layer type B version (as in Rastegari et al.~\cite{Rastegari2016}). A pre-trained Torch model is available online~\footnote{\url{https://github.com/facebook/fb.resnet.torch/tree/master/pretrained}.} and we convert it into an equivalent Caffe model before sketching~\footnote{\url{https://github.com/facebook/fb-caffe-exts}.}. For the fine-tuning process, we set the training batch size to 64 and let the learning rate drop from 0.0001. After 200,000 iterations (i.e., roughly 10 epochs), the generated sketch attains a top-1 accuracy of 67.8\% and a top-5 accuracy of 88.4\% on the ImageNet dataset. Refer to Table~\ref{tab:6} for a comparison of the classification accuracy of different binary-weight models.
\begin{table}[ht] \begin{center} \begin{tabular}{|C{0.8in}|c|c|c|} \hline Model & Params (b) & Top-1 (\%) & Top-5 (\%) \\ \hline \hline Reference & $\smallsym{\mathrel}{\sim}$374M & 68.8 & 89.0 \\ Sketch (ref.) & $\smallsym{\mathrel}{\sim}$51M & \textbf{67.8} & \textbf{88.4} \\ Sketch (dir.) & $\smallsym{\mathrel}{\sim}$51M & 67.3 & 88.2 \\ BWN~\cite{Rastegari2016}& $\smallsym{\mathrel}{\sim}$28M & 60.8 & 83.0 \\ \hline \end{tabular} \end{center} \caption{Network sketching generates binary-weight ResNets that make faithful inference with roughly \textbf{7.4}$\times$ fewer parameters (in bits) than the reference. The test accuracies of BWN are cited directly from its paper.}\label{tab:6} \end{table} \section{Conclusions}\label{sec:con} In this paper, we introduce network sketching as a novel technique for pursuing binary-weight CNNs. It is more flexible than currently available methods, enabling researchers and engineers to regulate the precision of generated sketches and obtain a better trade-off between model efficiency and accuracy. Both theoretical and empirical analyses have been given to validate its efficacy. Moreover, we also propose an associative implementation of binary tensor convolutions to further speed up the sketches. After all these efforts, we are able to generate binary-weight AlexNets and ResNets that make both efficient and faithful inference on the ImageNet classification task. Future work includes exploring the sketching of other CNNs. {\small \bibliographystyle{ieee}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The Koszul sign convention is a rule which plays a fundamental role in graded algebra. This convention was used by Koszul in his thesis~\cite{ko:hom}. Actually, whenever the algebraic framework is based on graded categories as in homology theory~\cite{loday:cychom}, homotopy theory~\cite{fht:rht}, algebraic operads theory~\cite{lv:alop}, this convention is systematically used in the definitions of the new concepts naturally appearing in the theory, and in the statements of the new results as well. In general -- in particular in the cited books -- the convention is just stated, without further explanation. The aim of this paper is to propose a tool encoding precisely the Koszul sign convention. In 1966, without any reference to Koszul, Boardman introduced a Principle of Signs~\cite{board:signs}, in order to make precise the way of inserting the right signs in the identities frequently obtained in algebraic topology. His approach relies on a class of various $n$-ary operations and on an involution, subject to some axioms, allowing him to characterize the identities written in a standard form. He then proved that the set of these identities is stable under natural algebraic transformations. Our approach is different, and it can be viewed as a preliminary step towards Boardman's. Although we fix a set of graded elements as in Boardman, we do not need to define the operations and the identities acting on these elements. For us, only the order of the elements within the result of an operation matters for producing a sign in front of the algebraic expression. \section{A Koszul sign map} The Koszul sign convention is used in various graded contexts. The objects on which the convention is applied are homogeneous, and the nature of the objects -- graded elements, graded maps -- depends on the context. However the convention does not depend on the nature of the objects, but just on their degrees.
In our general setting, the homogeneous objects will be called \emph{symbols}. To each symbol we associate a degree in $\mathbb{Z}$. Roughly speaking, the Koszul sign convention is the following: if in a manipulation of a monomial algebraic expression involving symbols naturally written from left to right, a symbol $f_i$ jumps over a symbol $f_j$ situated on the left or on the right of $f_i$, then the sign $(-1)^{|f_i||f_j|}$ appears in front of the expression, where $|f_i|$ denotes the degree of $f_i$. If the algebraic expression is a sum of monomial algebraic expressions, the convention is applied to each term. Let us note that the convention is independent of the algebraic operations included in the monomial algebraic expression. Although the manipulation passes from a monomial algebraic expression to another one, only the order of the objects in the initial and final expressions is significant. For example, in the definition of a tensor product of graded linear maps $$(f\otimes g)(a\otimes b)= (-1)^{|g||a|} f(a) \otimes g(b),$$ the ordered symbols in the initial -- final -- expression are $f$, $g$, $a$, $b$ -- $f$, $a$, $g$, $b$. In our setting, we are led to permute arbitrarily the symbols from the order of the initial expression. The result of a permutation might be organized in a final monomial algebraic expression respecting the final order, but such an expression would be irrelevant for us. Throughout the paper, we fix an integer $n\geq 2$, a sequence $f=(f_1, \ldots ,f_n)$ of symbols, and a sequence of degrees $$|f|=(|f_1|, \ldots ,|f_n|) \in \mathbb{Z}^n$$ called the degree of $f$. We denote by $\mathcal{E}_f$ the set of the permutations of $f$. If $S_n$ is the group of permutations of $\{1, \ldots , n\}$, we have $$\mathcal{E}_f= \{g=(g_1, \ldots ,g_n) \, ; \, \exists \rho \in S_n,\ g_i=f_{\rho^{-1}(i)}\ \mathrm{for} \ i=1, \ldots, n\},$$ where $g_1, \ldots ,g_n$ are still seen as symbols.
The degree of $g$ is defined by $$|g|=(|g_1|, \ldots ,|g_n|)$$ with $|g_i|=|f_{\rho^{-1}(i)}|$. The group $S_n$ acts on the left on $\mathcal{E}_f$ by defining $$\sigma (g)=(g_{\sigma^{-1}(1)}, \ldots ,g_{\sigma^{-1}(n)}),$$ for $g \in \mathcal{E}_f$ and $\sigma \in S_n$. We now want to define a \emph{Koszul sign map} \begin{eqnarray*} \kappa : & S_n \times \mathcal{E}_f & \rightarrow \ \{\pm 1\}\\ & (\sigma, g) & \mapsto \ \kappa (\sigma ,g), \end{eqnarray*} which respects the Koszul sign convention. For the moment, we state this convention in a rather heuristic form as follows. \emph{If in the permutation $\rho : (g_1, \ldots ,g_n) \rightarrow (f_1, \ldots ,f_n)$ correcting the permuted sequence $(g_1, \ldots ,g_n)$ into the initial sequence $(f_1, \ldots ,f_n)$, $g_i$ jumps over $g_j$, then the sign $(-1)^{|g_i||g_j|}$ appears in $\kappa$.} We begin by remarking that it is not clear how to define a map $$\kappa(-, f): S_n \rightarrow \{\pm 1\}$$ respecting the convention when $n=3$ -- if $n=2$, $\kappa(-, f): S_2 \rightarrow \{\pm 1\}$ is well-defined and is a group morphism. In fact, after letting the transpositions $(1,2)$ and $(2,3)$ act on $f=(f_1,f_2,f_3)$, we obtain $$\kappa ((1,2),f)=(-1)^{|f_1||f_2|},$$ $$\kappa ((2,3),f)=(-1)^{|f_2||f_3|}.$$ Since $(2,3)(1,2)f=(f_2,f_3,f_1)$ is corrected into $(f_1,f_2,f_3)$ by jumping $f_1$ over $f_3$ and $f_2$, we obtain $$\kappa ((2,3)(1,2),f)=(-1)^{|f_1|(|f_2|+|f_3|)},$$ so that $\kappa ((2,3)(1,2),f)\neq \kappa ((2,3),f)\, \kappa ((1,2),f)$ for a certain choice of the degrees. Therefore it is not possible to define a group morphism $\kappa(-, f): S_n \rightarrow \{\pm 1\}$ in great generality.
However, if we set $g=(1,2)f=(f_2,f_1,f_3)$, we have $$\kappa ((2,3),g)=(-1)^{|g_2||g_3|}= (-1)^{|f_1||f_3|}.$$ Thus we are led to the right formula $$\kappa ((2,3)(1,2),f)= \kappa ((2,3),(1,2)(f))\, \kappa ((1,2),f).$$ In order to define $\kappa$ from this formula and its generalizations, we want to be sure that $\kappa (\sigma ,g)$ does not depend on the way to correct $g$ into $f$ by using transpositions. Consequently, we first define $\kappa$ on the free group $F_{n-1}$ generated by the transpositions $s_i=(i,i+1)$ for $i=1, \ldots, n-1$. Let us recall that the group $S_n$ is defined by these generators and the following relations in $F_{n-1}$ \begin{equation} \label{relations} s_i^2=e, \ \ s_is_j=s_js_i \ \mathrm{if} \ |i-j|>1, \ \ s_is_{i+1}s_i=s_{i+1}s_is_{i+1}, \end{equation} where $e$ denotes the unit of the group $F_{n-1}$. For $g$ in $\mathcal{E}_f$ and $1\leq i \leq n-1$, we set \begin{equation} \label{action} s_i(g)= s_i^{-1}(g)= (g_1, \ldots g_{i-1}, g_{i+1}, g_i, g_{i+2}, \ldots ,g_n), \end{equation} which defines an action $x(g)$ of the elements $x$ of the group $F_{n-1}$ on the set $\mathcal{E}_f$. This action induces naturally the action of $S_n$ on $\mathcal{E}_f$ defined above. We define the map \begin{eqnarray} \label{freekappa} \kappa : & F_{n-1} \times \mathcal{E}_f & \rightarrow \ \{\pm 1\} \\ & (x, g) & \mapsto \ \kappa (x , g) \nonumber \end{eqnarray} as follows. For any $g$ in $\mathcal{E}_f$, we set $ \kappa (e, g)=1$, and for $1\leq i \leq n-1$, \begin{equation} \label{transposition} \kappa (s_i, g)=\kappa (s_i^{-1}, g)=(-1)^{|g_i||g_{i+1}|}. \end{equation} Moreover, for any $x$ in $F_{n-1}$ decomposed in a reduced form $$x =t_{i_1} \ldots t_{i_m}$$ where $t_{i_j}=s_{i_j}$ or $t_{i_j}=s_{i_j}^{-1}$, and $i_1, \ldots , i_m$ are in $\{1, \ldots , n-1\}$, we set \begin{equation} \label{deffreekappa} \kappa (x , g)=\kappa (t_{i_1}, t_{i_2} \dots t_{i_m} (g))\, \kappa (t_{i_2}, t_{i_3} \dots t_{i_m} (g)) \ldots \kappa (t_{i_m}, g). 
\end{equation} Reduced form means that two consecutive factors $t_{i_j}$ and $t_{i_{j+1}}$ are never inverse to each other. A reduced form being unique, the map (\ref{freekappa}) is well-defined. \begin{Lm} \label{lemma} For $g$ in $\mathcal{E}_f$ and $1\leq i \leq n-1$, we have \begin{equation} \label{inverses} \kappa (s_i, s_i^{-1}(g))\, \kappa (s_i^{-1}, g)=1= \kappa (s_i^{-1}, s_i(g))\, \kappa (s_i, g). \end{equation} \end{Lm} {\it Proof.}\ Since $\kappa (s_i, g)=\kappa (s_i^{-1}, g)$ and $s_i(g)=s_i^{-1}(g)$, it suffices to verify the first equality. From (\ref{action}), we draw $\kappa (s_i, s_i^{-1}(g))=(-1)^{|g_{i+1}||g_i|}= \kappa (s_i^{-1}, g)$. \rule{2mm}{2mm} \\ The lemma shows that the formula (\ref{deffreekappa}) extends to any decomposition $x =t_{i_1} \ldots t_{i_m}$ reduced or not. Therefore, one has \begin{equation} \label{formula1} \kappa (x \,y , g)=\kappa (x, y (g))\, \kappa (y, g) \end{equation} for $g$ in $\mathcal{E}_f$, $x$, $y$ in $F_{n-1}$, and consequently \begin{equation} \label{formula2} \kappa (x^{-1} , g)=\kappa (x, x^{-1} (g)). \end{equation} \begin{Po} \label{defkappa} Passing through the relations (\ref{relations}), the map $\kappa : F_{n-1} \times \mathcal{E}_f \rightarrow \{\pm 1\}$ induces a map $\kappa : S_n \times \mathcal{E}_f \rightarrow \{\pm 1\}$. \end{Po} {\it Proof.}\ We will prove that, for each relation in (\ref{relations}) and for each fixed $g$, $\kappa (-,g)$ gives the same result on the left-hand side and on the right-hand side of the relation. Firstly, $\kappa (s_i^2 , g)=\kappa (s_i, s_i(g))\, \kappa (s_i, g)$ according to (\ref{formula1}), hence $\kappa (s_i^2 , g)=1$ by the lemma. Secondly, let us suppose that $j>i+1$, so that $$s_j(g)= (g_1, \ldots g_{i}, g_{i+1}, \ldots g_{j+1}, g_{j}, \ldots ,g_n),$$ thus $\kappa (s_i, s_j(g))=(-1)^{|g_i||g_{i+1}|}$ and $\kappa (s_j, g)=(-1)^{|g_j||g_{j+1}|}$. 
Using (\ref{formula1}), we obtain $$\kappa (s_i s_j, g)=(-1)^{|g_i||g_{i+1}|+|g_j||g_{j+1}|}.$$ The same if $j<i-1$. So, if $|i-j|>1$, we obtain the expected equality $$\kappa (s_i s_j, g)= \kappa (s_j s_i, g).$$ Thirdly, from (\ref{formula1}), we draw \begin{equation} \label{braid} \kappa (s_i \,s_{i+1} \, s_i , g)=\kappa (s_i, s_{i+1} \, s_i (g))\, \kappa (s_{i+1}, s_i (g))\,\kappa (s_i, g). \end{equation} Using $s_i(g)= (g_1, \ldots g_{i+1}, g_i, g_{i+2}, \ldots ,g_n)$ and $s_{i+1} s_i(g)= (g_1, \ldots g_{i+1}, g_{i+2}, g_i, \ldots ,g_n)$, the formula (\ref{braid}) implies $$\kappa (s_i \,s_{i+1} \, s_i , g)=(-1)^{|g_{i+1}||g_{i+2}|+|g_i||g_{i+2}|+|g_i||g_{i+1}|}.$$ Using $s_{i+1}(g)= (g_1, \ldots g_i, g_{i+2}, g_{i+1}, \ldots ,g_n)$ and $s_i s_{i+1}(g)= (g_1, \ldots g_{i+2}, g_i, g_{i+1}, \ldots ,g_n)$, the formula (\ref{braid}) in which $i$ and $i+1$ are exchanged gives $$\kappa (s_{i+1} \, s_i \,s_{i+1} , g)=(-1)^{|g_i||g_{i+1}|+ |g_i||g_{i+2}| + |g_{i+1}||g_{i+2}|}.$$ Then we arrive at $\kappa (s_i \,s_{i+1} \, s_i , g)=\kappa (s_{i+1} \, s_i \,s_{i+1} , g)$. An equivalent way to state what we have obtained is the following. Writing each relation (\ref{relations}) as an equality $r=e$ for a certain element $r$ in $F_{n-1}$, we have $\kappa (r,g)=1$ for any $g$ -- it suffices to apply the formula (\ref{formula1}). Now for any $x$ in $F_{n-1}$, we have \begin{eqnarray*} \kappa (x r x^{-1},g) & = & \kappa (x, r (x^{-1} (g)))\, \kappa (r, x^{-1}(g))\, \kappa (x^{-1},g) \\ & = & \kappa (x, x^{-1} (g))\, \kappa (x^{-1},g) \end{eqnarray*} since $r(g')=g'$ for any $g'$ in $\mathcal{E}_f$. Therefore (\ref{formula2}) implies that $\kappa (x r x^{-1},g)=1$ for any $g$ in $\mathcal{E}_f$. Using again (\ref{formula1}), we deduce $\kappa( x r x^{-1} x' r' x'^{-1},g)=1$ for any $g$ in $\mathcal{E}_f$, and any elements $r$, $r'$ of $F_{n-1}$ generating the relations. 
Inductively, we obtain that $\kappa (a,g)=1$ for any $g$ in $\mathcal{E}_f$ and any $a$ in the normal subgroup of $F_{n-1}$ generated by the elements $r$. More generally, $\kappa (a x,g)=\kappa (x,g)$ for any $a$ as above and any $x$ in $F_{n-1}$. \rule{2mm}{2mm} \\ Our construction of the map $\kappa : S_n \times \mathcal{E}_f \rightarrow \{\pm 1\}$ immediately shows the following proposition. This proposition could be used as a definition. \begin{Po} \label{caractkappa} The map $\kappa : S_n \times \mathcal{E}_f \rightarrow \{\pm 1\}$ is the unique map such that \begin{enumerate} \item $\forall \sigma \in S_n,\, \forall \tau \in S_n,\, \forall g \in \mathcal{E}_f, \ \kappa (\sigma \,\tau , g)=\kappa (\sigma, \tau (g))\, \kappa (\tau, g)$, \item $\forall i \in \{1, \ldots , n-1\},\, \forall g \in \mathcal{E}_f, \ \kappa (s_i, g)=(-1)^{|g_i||g_{i+1}|}$. \end{enumerate} \end{Po} From $f$ and its degree $|f|$, we have constructed the map $\kappa$ which should rather be denoted by $\kappa_f$. For any $f'$ in $\mathcal{E}_f$, one has $\mathcal{E}_{f'}=\mathcal{E}_f$. The degree of $f'$ is obviously defined from the degree of $f$. Then Proposition \ref{caractkappa} shows that the map $$\kappa_{f'} : S_n \times \mathcal{E}_{f'} \rightarrow \{\pm 1\}$$ coincides with $\kappa _f$. However, it is possible that $$\kappa _f(-,f) \neq \kappa _{f'}(-,f'),$$ as in the example $f=(f_1,f_2,f_3)$ and $f'=(f_1,f_3,f_2)$, with $|f_1|$ and $|f_2|$ odd, $|f_3|$ even. 
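Proposition \ref{caractkappa} also makes $\kappa$ effectively computable: decompose $\sigma$ into adjacent transpositions -- by Proposition \ref{defkappa}, any decomposition gives the same result -- and multiply the elementary signs $(-1)^{|g_i||g_{i+1}|}$. The following Python sketch (ours, with purely illustrative names) does this via a bubble sort.

```python
def kappa(sigma, degrees):
    """Koszul sign kappa(sigma, g) for a sequence g of graded symbols.

    degrees[i] is |g_{i+1}|; sigma is 0-based, sigma[i] = image of
    position i.  The sign is accumulated while turning g into sigma(g)
    by adjacent swaps, each swap of symbols of degrees d, d'
    contributing (-1)**(d * d')."""
    n = len(degrees)
    target = [0] * n
    for i in range(n):
        target[sigma[i]] = i      # sigma(g) carries g_{i+1} at slot sigma[i]
    labels, exponent = list(range(n)), 0
    for pos in range(n):          # bubble target[pos] into place
        j = labels.index(target[pos])
        while j > pos:
            exponent += degrees[labels[j - 1]] * degrees[labels[j]]
            labels[j - 1], labels[j] = labels[j], labels[j - 1]
            j -= 1
    return (-1) ** exponent
```

With all degrees even the function is constantly $1$, and with all degrees odd it reduces to the signature of $\sigma$, in accordance with the parity discussion of the next section.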
\begin{Ex} \emph{Take $n=5$ and $g=(f_4,f_1,f_3,f_5,f_2)$, so that $g=\rho (f)$ where} $$\rho=\left( \begin{array}{ccccc} 4 & 1 & 3 & 5 & 2 \\ 1 & 2 & 3 & 4 & 5 \end{array} \right).$$ \emph{Using a bubble sort, we find that $\kappa(\rho ,f)=(-1)^Z$ with $z_i=|f_i|$ and} $$Z=z_1z_4 + z_3z_4 + z_2z_5 + z_2z_4 + z_2z_3.$$ \end{Ex} \setcounter{equation}{0} \section{When $\kappa (-,g)$ is a group morphism} An integer $n\geq 2$, a sequence $f=(f_1, \ldots ,f_n)$ of symbols, and a sequence of degrees $|f|=(|f_1|, \ldots ,|f_n|) \in \mathbb{Z}^n$ being given, we have defined the map $\kappa =\kappa_f$ in the previous section. If $n=2$, then $\kappa (-, (f_1,f_2))=\kappa (-, (f_2,f_1))$ is always a group morphism from $S_2$ to $\{\pm 1\}$. Let us suppose that $n\geq 3$. We want to know when $$\kappa (-,g) : S_n \rightarrow \{\pm 1\}$$ is a group morphism. From 1. in Proposition \ref{caractkappa}, it is the case if and only if for any $\sigma$ and $\tau$ in $S_n$, one has $$\kappa (\sigma, \tau (g))= \kappa (\sigma, g),$$ that is, if and only if, for any $\sigma$ in $S_n$ and $h$ in $\mathcal{E}_f$, $$\kappa (\sigma, h)= \kappa (\sigma, g),$$ which implies that $\kappa (- , h)$ is a group morphism as well. So, it suffices to examine when $\kappa (- , f)$ is a group morphism, and we have seen that it is the case if and only if \begin{equation} \label{morphism1} \forall \sigma \in S_n,\, \forall \tau \in S_n,\, \forall g \in \mathcal{E}_f, \ \kappa (\sigma, \tau (g))=\kappa (\sigma, g). \end{equation} From 1. in Proposition \ref{caractkappa}, it is equivalent to \begin{equation} \label{morphism2} \forall i \in \{1, \ldots , n-1\}, \, \forall j \in \{1, \ldots , n-1\}, \, \forall g \in \mathcal{E}_f, \ \kappa (s_i, s_j(g))=\kappa (s_i, g). \end{equation} It is clear that (\ref{morphism2}) holds whenever $i=j$ or $|i-j|>1$. 
Thus (\ref{morphism2}) is equivalent to \begin{equation} \label{morphism3} \forall i \in \{1, \ldots , n-2\}, \, \forall g \in \mathcal{E}_f, \ \kappa (s_i, s_{i+1}(g))=\kappa (s_i, g) \ \mathrm{and} \ \kappa (s_{i+1}, s_i(g))=\kappa (s_{i+1}, g). \end{equation} Some calculations included in the proof of Proposition \ref{defkappa} show that it is equivalent to \begin{equation} \label{morphism4} \forall i \in \{1, \ldots , n-2\}, \, \forall g \in \mathcal{E}_f, \ (-1)^{|g_i||g_{i+2}|} = (-1)^{ |g_i||g_{i+1}|} = (-1)^{|g_{i+1}||g_{i+2}|}, \end{equation} which, in turn, is equivalent to saying that, among the parities of the triplets $(|g_i|,|g_{i+1}|,|g_{i+2}|)$, only the case of one even parity and two odd parities is forbidden. The last condition is satisfied if all the degrees $|f_1|, \ldots ,|f_n|$ have the same parity, or if only one of them is odd. Conversely, if the last condition is satisfied and the degrees $|f_1|, \ldots ,|f_n|$ do not all have the same parity, then only one of them can be odd: otherwise one could find one even parity and two odd parities among them, and a suitable permutation of $f$ would then provide a forbidden triplet. We have obtained the following. \begin{Po} \label{kappamorphism} Suppose that $n\geq 2$, $f=(f_1, \ldots ,f_n)$, and $|f|=(|f_1|, \ldots ,|f_n|) \in \mathbb{Z}^n$ are given. \begin{enumerate} \item The map $\kappa (-,g) : S_n \rightarrow \{\pm 1\}$ is a group morphism for an element $g$ of $\mathcal{E}_f$ if and only if $\kappa (-,f)$ is a group morphism, and in this case, all the group morphisms $\kappa (-,g)$ are equal. \item The map $\kappa (-,f)$ is a group morphism if and only if either all the integers $|f_1|, \ldots ,|f_n|$ have the same parity or only one of them is odd. \item The map $\kappa (-,f)$ is constantly equal to $1$ if and only if all the integers $|f_1|, \ldots ,|f_n|$ are even, with possibly one exception. 
\end{enumerate} \end{Po} \setcounter{equation}{0} \section{A cohomological interpretation of $\kappa_f$} An integer $n\geq 2$, a sequence $f=(f_1, \ldots ,f_n)$ of symbols, and a sequence of degrees $|f|=(|f_1|, \ldots ,|f_n|) \in \mathbb{Z}^n$ are given. For $\sigma$ and $\rho$ in $S_n$, we put $$c_f(\sigma, \rho) = \kappa_f (\sigma, \rho(f)).$$ So we define a map $c_f : S_n \times S_n \rightarrow \{\pm 1\}$. This is the unique map such that \begin{enumerate} \item $\forall \sigma \in S_n,\, \forall \tau \in S_n,\, \forall \rho \in S_n, \ c_f (\sigma \,\tau , \rho)=c_f (\sigma, \tau \, \rho)\, c_f (\tau, \rho)$, \item $\forall i \in \{1, \ldots , n-1\},\, \forall \rho \in S_n, \ c_f (s_i, \rho)=(-1)^{|f_{\rho^{-1}(i)}||f_{\rho^{-1}(i+1)}|}$. \end{enumerate} We want to regard the map $c_f$ as a 2-cochain for the cohomology of the group $S_n$ with coefficients in the multiplicative group $\{\pm 1\}$. The automorphism group of $\{\pm 1\}$ is formed of $+Id$ and $-Id$, and it is identified to the group $\{\pm 1\}$. Then a structure of $S_n$-module on the group $\{\pm 1\}$ is equivalent to the datum of a group morphism $$u : S_n \rightarrow \{\pm 1\},$$ the action of $\sigma \in S_n$ on $+1$ or $-1$ being given by the product by $u(\sigma)$. It is well-known that there are only two such morphisms $u$: the constant morphim $u=1$ and the signature $u=sgn$. Let us choose a structure $u$ of $S_n$-module on $\{\pm 1\}$. Let us calculate the coboundary operator $\delta$ of the 2-cochain $c_f$. For $\sigma$, $\tau$ and $\rho$ in $S_n$, one has $$\delta (c_f)(\sigma, \tau, \rho)= (\sigma . c_f(\tau, \rho))\, c_f(\sigma \tau, \rho)^{-1}\, c_f(\sigma, \tau \rho) \, c_f(\sigma, \tau)^{-1}.$$ Since $\sigma . c_f(\tau, \rho)= u(\sigma)\, c_f(\tau, \rho)$, the relation 1. just above shows that $$\delta (c_f)(\sigma, \tau, \rho)= u(\sigma)\, c_f(\sigma, \tau)^{-1}.$$ Thus $c_f$ is a 2-cocycle if and only if $c_f(\sigma, \tau)= u(\sigma)$ for any $\sigma$ and $\tau$. 
This condition implies that $c_f (-, \tau): S_n \rightarrow \{\pm 1\}$ is a group morphism for any $\tau$, thus either all the integers $|f_1|, \ldots ,|f_n|$ have the same parities or only one is odd among them (Proposition \ref{kappamorphism}). Conversely, if either all the integers $|f_1|, \ldots ,|f_n|$ have the same parities or only one is odd among them, then the numbers $u_i=(-1)^{|f_{i}||f_{i+1}|}$ are all equal. By $u(s_i)= u_i$ for $i = 1, \ldots , n-1$, we define a group morphism $u : S_n \rightarrow \{\pm 1\}$, hence a structure of $S_n$-module on $\{\pm 1\}$ for which $c_f$ is a 2-cocycle. Moreover we have $u=c_f(-, e)$, so that $$\delta (u)(\sigma, \tau)= u(\sigma) c_f(\tau, e)\, c_f(\sigma \tau, e)\, c_f(\sigma, e).$$ Then the relation 1. just above implies that $\delta (u)=c_f$. Let us sum up what we have obtained. \begin{Po} \label{2-cocycle} Let us suppose that $n\geq 2$, $f=(f_1, \ldots ,f_n)$, and $|f|=(|f_1|, \ldots ,|f_n|) \in \mathbb{Z}^n$ are given. Let us endow the group $\{\pm 1\}$ with the $S_n$-module structure defined by a group morphism $u : S_n \rightarrow \{\pm 1\}$. Then the map $$c_f : S_n \times S_n \rightarrow \{\pm 1\}$$ is a 2-cocycle with coefficients in the group $\{\pm 1\}$ if and only if either all the integers $|f_1|, \ldots ,|f_n|$ have the same parities or only one is odd among them. When this condition holds, $u=1$ if and only if all the integers $|f_1|, \ldots ,|f_n|$ are even, with possibly one exception, and $u=sgn$ if and only if all the integers $|f_1|, \ldots ,|f_n|$ are odd. Moreover, in both cases, $c_f$ is equal to the coboundary of $u$. \end{Po} The 2-cochain $c_f$ is not symmetric in general. In fact, $c_f(e,\tau)=1$ for any $\tau$, while $c_f(s_i, e)= (-1)^{|f_{i}||f_{i+1}|}$. \textbf{Question.} Find another cohomological interpretation for which $c_f$ is always a 2-cocycle, and is a 2-coboundary if and only if $c_f(-,e)$ is a group morphism.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{Introduction} In this article we prove new theorems which are higher-dimensional generalizations of the classical theorems of Siegel on integral points on affine curves and of Picard on holomorphic maps from $\mathbb{C}$ to affine curves. In the first section we will give the statements of Siegel's and Picard's theorems, and we will recall why these two theorems from such seemingly different areas of mathematics are related. We will then proceed to give a number of new conjectures describing, from our point of view, how we expect Siegel's and Picard's theorems to optimally generalize to higher dimensions. These include conjectures on integral points over varying number fields of bounded degree and conjectures addressing hyperbolic questions. These conjectures appear to be fundamentally new. However, in some special cases we will be able to relate our conjectures to Vojta's conjectures. In this respect, we are also led to formulate a new conjecture relating the absolute discriminant and height of an algebraic point on a projective variety over a number field (Conjecture \ref{conj4}). We will then summarize our progress on these conjectures. We have been able to get results in all dimensions, with best-possible results in many cases for surfaces. Our technique is based on the new proof of Siegel's theorem given by Corvaja and Zannier in \cite{Co}. They showed how one may use the Schmidt Subspace Theorem to obtain a very simple and elegant proof of Siegel's theorem. More recently, they have used this technique to obtain other results on integral points (see \cite{Co5}, \cite{Co3}, and \cite{Co2}) and Ru has translated the approach to Nevanlinna theory \cite{Ru3}. 
We will use the Schmidt Subspace Theorem approach to get results on integral points on higher-dimensional varieties, and analogously, we will use a version of Cartan's Second Main Theorem due to Vojta to obtain results on holomorphic curves in higher-dimensional complex varieties, generalizing Picard's theorem. As an application of our results, we show how to improve a result of Faltings on integral points on the complements of certain singular plane curves, proving a statement about hyperbolicity as well. We end with a discussion of our conjectures, relating them to previously known results and conjectures, and giving examples limiting any improvement to their hypotheses and conclusions. \section{Theorems of Siegel and Picard} \label{sclassical} It has been observed by Osgood, Vojta, Lang, and others that there is a striking correspondence between statements in Nevanlinna theory and in Diophantine approximation (see \cite{Ru} and \cite{Vo2}). This correspondence has been extremely fruitful, influencing results and conjectures in both subjects considerably. The correspondence can be formulated in both a qualitative and quantitative way. In this section, we will concentrate on the simplest case of the qualitative correspondence, Siegel's and Picard's theorems. Let $V\subset \mathbb{A}^n$ be an affine variety defined over a number field $k$. We will also view $V$ as a complex analytic space. Then it has been noticed that $V(\mathcal{O}_{L,S})$ (the set of points with all coordinates in $\mathcal{O}_{L,S}$, the $S$-integers of $L$) seems to be infinite for sufficiently large number fields $L$ and sets of places $S$ if and only if there exists a non-constant holomorphic map $f:\mathbb{C}\to V$. When $V=C$ is a curve (i.e., a one-dimensional variety), this correspondence has been proven to hold exactly, and it is known precisely for which curves $C$ the two statements hold. 
On the number theory side, Siegel's theorem is the fundamental theorem on integral points on curves. On the analytic side the analogue is a theorem of Picard. We now give the following formulations of these two theorems. \begin{theorema}[Siegel] \label{Siegel2} Let $k$ be a number field. Let $S$ be a finite set of places of $k$ containing the archimedean places. Let $C$ be an affine curve defined over $k$ embedded in affine space $\mathbb{A}^m$. Let $\tilde{C}$ be a projective closure of $C$. If $\# \tilde{C}\backslash C >2$ (over $\overline{k}$) then $C$ has finitely many points in $\mathbb{A}^m(\mathcal{O}_{k,S})$. \end{theorema} \begin{theoremb}[Picard] \label{Picard} Let $\tilde{C}$ be a compact Riemann surface. Let $C \subset \tilde{C}$. If $\# \tilde{C}\backslash C > 2$, then all holomorphic maps $f:\mathbb{C} \to C$ are constant. \end{theoremb} In other words, Siegel's and Picard's theorems state that if $D$ consists of many distinct points on a curve $X$, then any set of integral points on $X\backslash D$ is finite and any holomorphic map $f:\mathbb{C}\to X\backslash D$ is constant. We will thus view as generalizing Siegel's or Picard's theorem any theorem that asserts that if $D$ has ``enough components" then there is some limitation on the integral points on $X\backslash D$ or on the holomorphic maps $f:\mathbb{C}\to X\backslash D$. In Picard's theorem it may also be shown that the curves $C$ in question satisfy the stronger condition of being Kobayashi hyperbolic. We will frequently be able to generalize this fact to higher dimensions as well. Siegel's theorem is usually stated with the extra information that the $\# \tilde{C}\backslash C >2$ hypothesis is unnecessary for nonrational affine curves $C$. However, it may be shown that this stronger version of Siegel's theorem may be derived from Siegel's theorem as we have stated it by using \'etale coverings of the curve $C$ (see \cite{Co}). A similar statement holds for Picard's theorem. 
It is Siegel's and Picard's theorems in the form we have given above that we will generalize. We note that when the geometric genus of $C$ is greater than one, Siegel's theorem follows from the much stronger theorem of Faltings that $C$ has only finitely many $k$-rational points. Similarly, it is a theorem of Picard that there are no nonconstant holomorphic maps $f:\mathbb{C}\to \tilde{C}$ when $\tilde{C}$ is a projective curve of geometric genus greater than one. \section{Some Preliminary Definitions} In order to state our conjectures and results we will need a few definitions. In Vojta's Nevanlinna-Diophantine dictionary, the Diophantine object corresponding to a holomorphic map $f:\mathbb{C}\to X\backslash D$ is a set of $(D,S)$-integral points on $X$. We will now sketch the definition of a set of $(D,S)$-integral points on $X$ in terms of Weil functions. Let $D$ be a Cartier divisor on a projective variety $X$, both defined over a number field $k$. Let $M_k$ denote the set of places of $k$ (see Section \ref{sDio}). Let $v\in M_k$. Extend $|\cdot|_v$ to an absolute value on $\overline{k}_v$. We define a local Weil function for $D$ relative to $v$ to be a function $\lambda_{D,v}:X(\overline{k}_v)\backslash D \to \mathbb{R}$ such that if $D$ is represented locally by $(f)$ on an open set $U$ then \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} \begin{equation*} \lambda_{D,v}(P)=-\log{|f(P)|_v}+\alpha_v(P) \end{equation*} where $\alpha_v$ is a continuous function on $U(\overline{k}_v)$ (in the $v$-topology). By choosing embeddings $k\to \overline{k}_v$ and $\overline{k}\to \overline{k}_v$, we may also think of $\lambda_{D,v}$ as a function on $X(k)\backslash D$ or $X(\overline{k})\backslash D$. A global Weil function consists of a collection of local Weil functions, $\lambda_{D,v}$, for $v\in M_k$, where the $\alpha_v$ above satisfy certain reasonable boundedness conditions as $v$ varies.
We refer the reader to \cite{La} and \cite{Vo2} for a further discussion of this. \begin{definition} Let $D$ be an effective Cartier divisor on a projective variety $X$, both defined over a number field $k$. Let $S$ be a finite set of places in $M_k$ containing the archimedean places. Let $R \subset X(\overline{k})\backslash D$. Then $R$ is defined to be a $(D,S)$-integral set of points if there exists a global Weil function $\lambda_{D,v}$ and constants $c_v$, with $c_v=0$ for all but finitely many $v$, such that for all $v\in M_k\backslash S$ and all embeddings $\overline{k} \to \overline{k}_v$\\ \begin{equation*} \lambda_{D,v}(P) \leq c_v \end{equation*} for all $P$ in $R$. \end{definition} We will frequently just say $D$-integral, omitting the reference to $S$, when $S$ has been fixed or when the statement is true for all possible $S$. Except where explicitly stated, we will also require from now on that a set of $D$-integral points be $k$-rational, i.e. $R\subset X(k)$. For us, the key property of a set of $(D,S)$-integral points is given by the following theorem. \begin{theorem} \label{reg} Let $R\subset X(\overline{k})\backslash D$ be a set of $(D,S)$-integral points on $X$. Then for any regular function $f$ on $X\backslash D$ (defined over $\overline{k}$) there exists a constant $a\in k$ such that $af(P)$ is $S$-integral for all $P$ in $R$, that is $af(P)$ lies in the integral closure of $\mathcal{O}_{k,S}$ in $\overline{k}$ for all $P\in R$. \end{theorem} In fact, in what follows, most of our results hold, and our conjectures should hold, for any set $R$ satisfying the conclusion of Theorem \ref{reg}. We will prefer to work with sets of $D$-integral points because they are better geometrically behaved (e.g. under pullbacks) and because they are the right objects to use so that the Diophantine exceptional set we are about to define matches (conjecturally) the holomorphic exceptional set we will define. 
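A simple standard example may help fix these definitions (the specific bookkeeping below is a routine verification, not taken from the text). Take $X=\mathbb{P}^1$ and $D=[\infty]$, with affine coordinate $x=x_1/x_0$:

```latex
% X = P^1, D = the point at infinity, affine coordinate x = x_1/x_0.
% A global Weil function for D is given by
\lambda_{D,v}(x) \;=\; \log^{+}|x|_v \;:=\; \log\max\{1,|x|_v\}, \qquad v\in M_k.
% The condition \lambda_{D,v}(x) \le c_v for all v \notin S (with c_v = 0 for
% all but finitely many v) says that the denominator of x is supported on S,
% up to a bounded factor.  So, up to multiplication by a fixed constant, a
% k-rational (D,S)-integral set is a subset of
\mathcal{O}_{k,S} \;=\; \{\, x \in k : |x|_v \le 1 \ \text{for all } v \notin S \,\},
% recovering the naive notion of S-integral points on the affine line, in
% accordance with Theorem 3.1 applied to the regular function x.
```

This is the picture to keep in mind: away from $S$, a $(D,S)$-integral point stays a bounded $v$-adic distance from $D$.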
We note that sets of $D$-integral points are also essentially the same as the sets of scheme-theoretic integral points one would get from working with models of $X\backslash D$ over $\mathcal{O}_{k,S}$ (see \cite[Prop. 1.4.1]{Vo2}). It will be necessary to define various exceptional sets of a variety. \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}A} \setcounter{theoremb}{\value{theorem}} \begin{definitiona} Let $X$ be a projective variety and $D$ an effective Cartier divisor on $X$, both defined over a number field $k$. Let $L$ be a number field, $L\supset k$, and $S$ a finite set of places of $L$ containing the archimedean places. We define the Diophantine exceptional set of $X\backslash D$ with respect to $L$ and $S$ to be \begin{equation*} \ExcL(X\backslash D)=\bigcup_R \dim_{>0}(\overline{R}) \end{equation*} where the union runs over all sets $R$ of $L$-rational $(D,S)$-integral points on $X$ and $\dim_{>0}(\overline{R})$ denotes the union of the positive dimensional irreducible components of the Zariski-closure of $R$. We define the absolute Diophantine exceptional set of $X\backslash D$ to be \begin{equation*} \Excd(X\backslash D)=\bigcup_{L \supset k,S} \ExcL(X\backslash D), \end{equation*} with $L$ ranging over all number fields and $S$ ranging over all sets of places of $L$ as above. \end{definitiona} These definitions depend only on $X\backslash D$ and not on the choices of $X$ and $D$. \begin{definitionb} Let $X$ be a complex variety. We define the holomorphic exceptional set of $X$, $\Exch(X)$, to be the union of all images of non-constant holomorphic maps $f:\mathbb{C}\to X$. \end{definitionb} Conjecturally, it is expected that $\Excd(X\backslash D)=\Exch(X\backslash D)$ (it may also be necessary to take the Zariski-closures of both sides first). \begin{definitiona} Let $X$ be a projective variety defined over a number field $k$. Let $D$ be an effective Cartier divisor on $X$. 
Then we define $X\backslash D$ to be Mordellic if $\Excd(X\backslash D)$ is empty. We define $X\backslash D$ to be quasi-Mordellic if $\Excd(X\backslash D)$ is not Zariski-dense in $X$. \end{definitiona} \begin{definitionb} Let $X$ be a complex variety. We define $X$ to be Brody hyperbolic if $\Exch(X)$ is empty. We define $X$ to be quasi-Brody hyperbolic if $\Exch(X)$ is not Zariski-dense in $X$. \end{definitionb} Note that $X$ being quasi-Brody hyperbolic is a stronger condition than the non-existence of holomorphic maps $f:\mathbb{C}\to X$ with Zariski-dense image. Similarly, $X\backslash D$ being quasi-Mordellic is stronger than the non-existence of dense sets of $D$-integral points on $X$. We will also need a convenient measure of the size of a divisor. We will use $\mathcal{O}_X(D)$, or simply $\mathcal{O}(D)$ when there is no ambiguity, to denote the invertible sheaf associated to a Cartier divisor $D$ on $X$, and $h^i(D)$ to denote the dimension of the vector space $H^i(X,\mathcal{O}(D))$. When $h^0(D)>0$, we will also frequently use the notation $\Phi_D$ to denote the rational map (unique up to projective automorphisms) from $X$ to $\mathbb{P}^{h^0(D)-1}$ corresponding to a basis of $H^0(X,\mathcal{O}(D))$. \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} \begin{definition} \label{defk} Let $D$ be a divisor on a nonsingular projective variety $X$. We define the dimension of $D$ to be the integer $\kappa(D)$ for which there exist positive constants $c_1$ and $c_2$ such that \begin{equation*} c_1 n^{\kappa(D)} \leq h^0(nD)\leq c_2 n^{\kappa(D)} \end{equation*} for all sufficiently divisible $n>0$. If $h^0(nD)=0$ for all $n>0$ then we let $\kappa(D)=-\infty$. \end{definition} Alternatively, if $\kappa(D)\geq 0$, one can show that \begin{equation*} \kappa(D)=\max \{\dim \Phi_{nD}(X)|n>0,h^0(nD)>0\}.
\end{equation*} If $D$ is a Cartier divisor on a singular complex projective variety, we define $\kappa(D)=\kappa(\pi^*D)$ where $\pi:X'\to X$ is a desingularization of $X$. It is easy to show that this is independent of the chosen desingularization. For more properties of $\kappa(D)$ we refer the reader to \cite[Ch. 10]{Ii}. \begin{definition} \label{defbig} We define a Cartier divisor $D$ on $X$ to be quasi-ample (or big) if $\kappa(D)=\dim X$. \end{definition} If $D$ is quasi-ample then there exists an $n>0$ such that $\Phi_{nD}$ is birational, justifying the name. \section{General Setup and Notation} \label{gsetup} Throughout this paper we will use the following general setup and notation.\\\\ \textbf {General setup}: Let $X$ be a complex projective variety. Let $D=\sum_{i=1}^r D_i$ be a divisor on $X$ with the $D_i$'s effective Cartier divisors for all $i$. Suppose that at most $m$ $D_i$'s meet at a point, so that the intersection of any $m+1$ distinct $D_i$'s is empty.\\\\ In the Diophantine setting, we will also assume that $X$ and $D$ are defined over a number field $k$ and we let $S$ be a finite set of places of $k$ containing the archimedean places. From now on, we will freely use the notation $X$, $D$, $D_i$, $r$, $m$, $k$, and $S$ as above without further explanation. \section{Siegel and Picard-type Conjectures} In this section we give conjectures generalizing Siegel's theorem and Picard's theorem in various directions. \subsection{Main Conjectures} Some special cases of the conjectures given in this section are related to Vojta's Main Conjecture. Later, we will also give conjectures related to Vojta's General Conjecture, hence our terminology in this section and the next. We remind the reader that throughout we are using the general setup of the last section. 
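To fix ideas, here is how the general setup specializes in the most classical higher-dimensional case (a standard example; the explicit arithmetic is ours):

```latex
% Take X = P^2 and D = D_1 + ... + D_r a sum of r distinct lines in general
% position: any two lines meet, but no three are concurrent, so m = 2.  Each
% line is ample, hence kappa(D_i) = dim X = 2, and we may take kappa_0 = 2.
% The Main Conjectures stated below then require
r \;>\; m + \frac{m}{\kappa_0} \;=\; 2 + \frac{2}{2} \;=\; 3,
% i.e. at least four lines, which is consistent with the classical fact
% (proved via the Subspace Theorem) that integral points on the complement of
% four lines in general position in P^2 are not Zariski-dense.
```

The reader may find it useful to run the other conjectural inequalities below through this same example.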
\renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}A} \setcounter{theoremb}{\value{theorem}} \begin{conjecturea}[Main Siegel-type Conjecture] \label{conjmaina} Suppose that $\kappa(D_i)\geq \kappa_0>0$ for all $i$. If $r>m+\frac{m}{\kappa_0}$ then there does not exist a Zariski-dense set of $k$-rational $(D,S)$-integral points on $X$. \end{conjecturea} \begin{conjectureb}[Main Picard-type Conjecture] \label{conjmainb} Suppose that $\kappa(D_i)\geq \kappa_0>0$ for all $i$. If $r>m+\frac{m}{\kappa_0}$ then there does not exist a holomorphic map $f:\mathbb{C} \to X \backslash D$ with Zariski-dense image. \end{conjectureb} As mentioned earlier, we will usually just say $D$-integral, omitting $k$ and $S$ from the notation. Siegel's theorem (resp. Picard's theorem) is the case $m=\kappa_0=\dim X=1$ of Conjecture \ref{conjmaina} (resp. Conjecture \ref{conjmainb}). We note that the dimension of $X$ does not appear in the conjectures, but $\kappa(D_i)$ is bounded by $\dim X$. We will now discuss some consequences and special cases of these conjectures which seem important enough in their own right to be listed separately as new conjectures, and which will sometimes contain extra conjectures (e.g. on the exceptional sets) which do not follow from the main conjectures above. At the two extremes of $\kappa_0$ we have \begin{conjecturea} \label{conj1a} If $\kappa(D_i)>0$ for all $i$ and $r> 2m$ then there does not exist a Zariski-dense set of $D$-integral points on $X$. \end{conjecturea} \begin{conjectureb} \label{conj1b} If $\kappa(D_i)>0$ for all $i$ and $r>2m$ then there does not exist a holomorphic map $f:\mathbb{C} \to X \backslash D$ with Zariski-dense image. \end{conjectureb} \begin{conjecturea} \label{conj1ab} If $D_i$ is quasi-ample for all $i$ and $r>m+\frac{m}{\dim X}$ then $X\backslash D$ is quasi-Mordellic. 
\end{conjecturea} \begin{conjectureb} \label{conj1bb} If $D_i$ is quasi-ample for all $i$ and $r>m+\frac{m}{\dim X}$ then $X\backslash D$ is quasi-Brody hyperbolic. \end{conjectureb} We note that when the $D_i$'s are in some sort of general position, so that $m=\dim X$, the inequalities in the last two conjectures above take the nicer form $r>\dim X +1$. The statements on quasi-Mordellicity and quasi-Brody hyperbolicity do not follow (directly at least) from the Main Conjectures. Of particular interest is the case where $D_i$ is ample for all $i$. In this case we conjecture very precise bounds on the dimensions of the exceptional sets (see Remark \ref{rbig} for a possible generalization to quasi-ample divisors). \begin{conjecturea}[Main Siegel-type Conjecture for Ample Divisors] \label{conj2a} Suppose that $D_i$ is ample for all $i$.\\\\ (a). If $r>m+\frac{m}{\dim X}$ then $\dim \Excd(X\backslash D) \leq \frac{m}{r-m}$.\\ (b). In particular, if $r>2m$ then $X\backslash D$ is Mordellic. \end{conjecturea} \begin{conjectureb}[Main Picard-type Conjecture for Ample Divisors] \label{conj2b} Suppose that $D_i$ is ample for all $i$.\\\\ (a). If $r>m+\frac{m}{\dim X}$ then $\dim \Exch(X\backslash D) \leq \frac{m}{r-m}$.\\ (b). If $r>2m$ then $X\backslash D$ is complete hyperbolic and hyperbolically imbedded in $X$. In particular, $X\backslash D$ is Brody hyperbolic. \end{conjectureb} It is not hard to show that the Main Conjectures for ample divisors follow from Conjectures \ref{conj1ab} and \ref{conj1bb}. \subsection{General Conjectures} We will also consider the situation where the field that the integral points are defined over is allowed to vary over all fields of degree less than or equal to $d$ over some fixed field $k$. So in this section we do not require that the integral points be $k$-rational. \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} \begin{definition} Let $R\subset X(\overline{k})$. We define the degree of $R$ over $k$ to be $\deg_k R=\sup_{P\in R} [k(P):k]$.
\end{definition} Generalizing the Main Siegel-type Conjecture of the last section, we conjecture \begin{conjecture}[General Siegel-type Conjecture] \label{congen} Suppose that $\kappa(D_i)\geq \kappa_0>0$ for all $i$. Let $d$ be a positive integer. If $r>m+\frac{m(2d-1)}{\kappa_0}$ then there does not exist a Zariski-dense set of $D$-integral points on $X$ of degree $d$ over $k$. \end{conjecture} \noindent We will see later that this conjecture and others in this section are related to Vojta's General Conjecture. We will also want to define a degree $d$ Diophantine exceptional set for a variety $V$. With the notation from our earlier definition of $\Excd$, we define \begin{definition} Let $X$ be a projective variety and $D$ an effective Cartier divisor on $X$, both defined over a number field $k$. Let $L$ be a number field, $L\supset k$, and $S$ a finite set of places of $L$ containing the archimedean places. We define the degree $d$ Diophantine exceptional set of $X\backslash D$ with respect to $L$ and $S$ to be \begin{equation*} \dExcL(X\backslash D)=\bigcup_R \dim_{>0}(\overline{R}) \end{equation*} where the union runs over all sets $R$ of $(D,S)$-integral points on $X$ of degree $d$ over $L$. We define the degree $d$ absolute Diophantine exceptional set of $X\backslash D$ to be \begin{equation*} \dExcd(X\backslash D)=\bigcup_{L \supset k,S} \dExcL(X\backslash D), \end{equation*} with $L$ ranging over all number fields and $S$ ranging over all sets of places of $L$ as above. \end{definition} Similarly we define $X\backslash D$ to be degree $d$ Mordellic (resp. degree $d$ quasi-Mordellic) if $\dExcd(X\backslash D)$ is empty (resp. not Zariski-dense in $X$). At the two extremes of $\kappa_0$ we have \begin{conjecture} Let $d$ be a positive integer. If $\kappa(D_i)>0$ for all $i$ and $r>2dm$ then there does not exist a Zariski-dense set of $D$-integral points on $X$ of degree $d$ over $k$. \end{conjecture} \begin{conjecture} Let $d$ be a positive integer.
If $D_i$ is quasi-ample for all $i$ and $r>m+\frac{m(2d-1)}{\dim X}$ then $X\backslash D$ is degree $d$ quasi-Mordellic. \end{conjecture} We can also give a conjecture for ample divisors giving bounds on the degree $d$ Diophantine exceptional set. \begin{conjecture}[General Siegel-type Conjecture for Ample Divisors] Suppose that $D_i$ is ample for all $i$.\\\\ (a). If $r>m+\frac{m(2d-1)}{\dim X}$ then $\dim \dExcd(X\backslash D)\leq \frac{m(2d-1)}{r-m}$.\\ (b). In particular, if $r>2dm$ then $X\backslash D$ is degree $d$ Mordellic. \end{conjecture} \subsection{Conjectures over $\mathbb{Z}$ and Complex Quadratic Rings of Integers} When $\#S=1$, or equivalently, when $\mathcal{O}_{k,S}$ is $\mathbb{Z}$ or the ring of integers of a complex quadratic field, and $D_i$ is defined over $k$ for all $i$, we conjecture improvements to our previous conjectures. We will refer to these conjectures as ``over $\mathbb{Z}$", though they apply equally well to rings of integers of complex quadratic fields. \begin{conjecture}[Main Siegel-type Conjecture over $\mathbb{Z}$] Let $k=\mathbb{Q}$ or a complex quadratic field and let $S=\{v_\infty\}$ consist of the unique archimedean place of $k$. Suppose that $D_i$ is defined over $k$ for all $i$ and that $\kappa(D_i)>0$ for all $i$. If $r>m$ then there does not exist a Zariski-dense set of $(D,S)$-integral points on $X$. \end{conjecture} We emphasize that in contrast to our previous conjectures, each $D_i$ must be defined over $k$. We also conjecture that in the above if each $D_i$ is quasi-ample, then $\Exck(X)$ is not Zariski-dense in $X$. For ample divisors, as usual, we conjecture something more. \begin{conjecture}[Main Siegel-type Conjecture over $\mathbb{Z}$ for Ample Divisors] \label{conjS} Let $k=\mathbb{Q}$ or a complex quadratic field and let $S=\{v_\infty\}$ consist of the unique archimedean place of $k$. Suppose that $D_i$ is ample and defined over $k$ for all $i$.\\\\ (a). 
All sets $R$ of $(D,S)$-integral points on $X$ have $\dim \overline{R} \leq 1+\dim (\bigcap_i D_i)$.\\ (b). In particular, if $D=D_1+D_2$ is a sum of two ample effective Cartier divisors on $X$, both defined over $k$, with no irreducible components in common, then there does not exist a Zariski-dense set of $(D,S)$-integral points on $X$. \end{conjecture} \begin{conjecture}[General Siegel-type Conjecture over $\mathbb{Z}$] \label{GZ} Let $k=\mathbb{Q}$ or a complex quadratic field and let $S=\{v_\infty\}$ consist of the unique archimedean place of $k$. Suppose that $D_i$ is defined over $k$ for all $i$ and that $\kappa(D_i)\geq \kappa_0>0$ for all $i$. Let $d$ be a positive integer. If $r>m+\frac{m(d-1)}{\kappa_0}$ then there does not exist a Zariski-dense set of $(D,S)$-integral points on $X$ of degree $d$ over $k$. \end{conjecture} If $D_i$ is quasi-ample for all $i$ in the above conjecture, then we also conjecture that $\dExck(X\backslash D)$ is not Zariski-dense in $X$. For ample divisors we have \begin{conjecture}[General Siegel-type Conjecture over $\mathbb{Z}$ for Ample Divisors] Let $k=\mathbb{Q}$ or a complex quadratic field and let $S=\{v_\infty\}$ consist of the unique archimedean place of $k$. Suppose that $D_i$ is ample and defined over $k$ for all $i$. Let $d$ be a positive integer. If $r>m+\frac{m(d-1)}{\dim X}$ then $\dim \dExck(X\backslash D)\leq \frac{m(d-1)}{r-m}$. \end{conjecture} \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}A} \setcounter{theoremb}{\value{theorem}} We will discuss the conjectures in greater detail in Section \ref{Remarks}. \section{Overview of Results} Sections \ref{smain}-\ref{SVgeneral} will be concerned with proving special cases of the above conjectures. In this section we highlight some of our results. Along the lines of the Main Conjectures we have \begin{theorema} Suppose $r>2m\dim X$.\\\\ (a). If $D_i$ is quasi-ample for all $i$ then $X\backslash D$ is quasi-Mordellic.\\ (b).
If $D_i$ is ample for all $i$ then $X\backslash D$ is Mordellic. \end{theorema} \begin{theoremb} Suppose $r>2m\dim X$.\\\\ (a). If $D_i$ is quasi-ample for all $i$ then $X\backslash D$ is quasi-Brody hyperbolic.\\ (b). If $D_i$ is ample for all $i$ then $X\backslash D$ is complete hyperbolic and hyperbolically imbedded in $X$. In particular, $X\backslash D$ is Brody hyperbolic. \end{theoremb} If in addition the $D_i$'s have no irreducible components in common, then in the part (a)'s above we only need $r>2[\frac{m+1}{2}]\dim X$ where $[x]$ denotes the greatest integer in $x$. When $X$ is a surface, $m\leq 2$, and the $D_i$'s have no irreducible components in common, we are able to prove the Main Conjectures, Conjectures \ref{conjmaina},B through \ref{conj2a},B. \begin{theorema} Suppose $X$ is a surface and the $D_i$'s have no irreducible components in common.\\\\ (a). If $m=1$, $\kappa(D_i)>0$ for all $i$, and $r>2$ then there does not exist a Zariski-dense set of $D$-integral points on $X$.\\ (b). If $m=2$, $\kappa(D_i)>0$ for all $i$, and $r>4$ then there does not exist a Zariski-dense set of $D$-integral points on $X$.\\ (c). If $m=2$, $D_i$ is quasi-ample for all $i$, and $r>3$ then $X\backslash D$ is quasi-Mordellic.\\ (d). If $m=2$, $D_i$ is ample for all $i$, and $r>4$ then $X\backslash D$ is Mordellic. \end{theorema} \begin{theoremb} Suppose $X$ is a surface and the $D_i$'s have no irreducible components in common.\\\\ (a). If $m=1$, $\kappa(D_i)>0$ for all $i$, and $r>2$ then there does not exist a holomorphic map $f:\mathbb{C}\to X\backslash D$ with Zariski-dense image.\\ (b). If $m=2$, $\kappa(D_i)>0$ for all $i$, and $r>4$ then there does not exist a holomorphic map $f:\mathbb{C}\to X\backslash D$ with Zariski-dense image.\\ (c). If $m=2$, $D_i$ is quasi-ample for all $i$, and $r>3$ then $X\backslash D$ is quasi-Brody hyperbolic.\\ (d). 
If $m=2$, $D_i$ is ample for all $i$, and $r>4$ then $X\backslash D$ is complete hyperbolic and hyperbolically imbedded in $X$. In particular, $X\backslash D$ is Brody hyperbolic. \end{theoremb} We will see later that if $m=1, r>1,$ and $\kappa(D_i)>0$ for all $i$, then we must necessarily have $\kappa(D_i)=1$ for all $i$. As to the General Conjectures, when the integral points are allowed to vary over fields of a bounded degree, we prove \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} \begin{theorem} Let $d$ be a positive integer. If $D_i$ is ample for all $i$ and $r>2d^2m\dim X$ then $X\backslash D$ is degree $d$ Mordellic (all sets of $D$-integral points on $X$ of degree $d$ over $k$ are finite). \end{theorem} \begin{theorem} Let $k=\mathbb{Q}$ or a complex quadratic field. Let $S=\{v_\infty\}$ consist of the unique archimedean place of $k$. Let $d$ be a positive integer. If $D_i$ is ample and defined over $k$ for all $i$ and $r>dm$ then all sets of $(D,S)$-integral points on $X$ of degree $d$ over $k$ are finite. \end{theorem} As an application of our results, we will discuss an improvement to a result of Faltings. Faltings \cite{Fa} has recently shown how theorems on integral points on the complements of divisors with many components may occasionally be used to prove theorems on the complements of irreducible divisors. He shows how to do this with certain very singular curves on $\mathbb{P}^2$ by reducing the problem to a covering surface and applying the method of \cite{Fa2}. In \cite{Co4}, Zannier uses the subspace theorem approach instead of \cite{Fa2} to prove a result similar to that of Faltings. In Section \ref{Faltings} we will prove a theorem which generalizes both results. As an added bonus, we also prove the theorem in the case of holomorphic curves.
\renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}A} \setcounter{theoremb}{\value{theorem}} \section{Preliminaries} \subsection{Diophantine Approximation} \label{sDio} \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} Let $k$ be a number field. Let $\mathcal{O}_k$ be the ring of integers of $k$. As usual, we have a set $M_k$ of absolute values (or places) of $k$ consisting of one place for each prime ideal $\mathfrak{p}$ of $\mathcal{O}_k$, one place for each real embedding $\sigma:k \to \mathbb{R}$, and one place for each pair of conjugate embeddings $\sigma,\overline{\sigma}:k \to \mathbb{C}$. Let $k_v$ denote the completion of $k$ with respect to $v$. We normalize our absolute values so that $|p|_v=p^{-[k_v:\mathbb{Q}_p]/[k:\mathbb{Q}]}$ if $v$ corresponds to $\mathfrak{p}$ and $\mathfrak{p}|p$, and $|x|_v=|\sigma(x)|^{[k_v:\mathbb{R}]/[k:\mathbb{Q}]}$ if $v$ corresponds to an embedding $\sigma$ (in which case we say that $v$ is archimedean). If $v$ is a place of $k$ and $w$ is a place of a field extension $L$ of $k$, then we say that $w$ lies above $v$, or $w|v$, if $w$ and $v$ define the same topology on $k$. With the above definitions we have the product formula \begin{equation*} \prod_{v \in M_k}|x|_v=1 \quad \text{for all } x\in k^*. \end{equation*} For a point $P=(x_0,\ldots,x_n)\in \mathbb{P}^n(k)$ we define the height to be \begin{equation*} H(P)=\prod_{v\in M_k} \max(|x_0|_v,\ldots,|x_n|_v). \end{equation*} It follows from the product formula that $H(P)$ is independent of the choice of homogeneous coordinates for $P$. It is also easy to see that the height is independent of $k$. We define the logarithmic height to be \begin{equation*} h(P)=\log H(P). \end{equation*} At the core of our Diophantine results is the following version of Schmidt's Subspace Theorem due to Vojta \cite{Vo6}. \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}A} \setcounter{theoremb}{\value{theorem}} \begin{theorema} Let $k$ be a number field. 
Let $S$ be a finite set of places in $M_k$ containing the archimedean places. Let $H_1,\ldots,H_m$ be hyperplanes in $\mathbb{P}^n$ defined over $\overline{k}$ with corresponding Weil functions $\lambda_{H_1},\ldots,\lambda_{H_m}$. Then there exists a finite union of hyperplanes $Z$, depending only on $H_1,\ldots,H_m$ (and not $k$ or $S$), such that for any $\epsilon>0$, \begin{equation} \sum_{v\in S}\max_I \sum_{i \in I} \lambda_{H_i,v}(P) \leq (n+1+\epsilon)h(P) \end{equation} holds for all but finitely many $P$ in $\mathbb{P}^n(k)\backslash Z$, where the max is taken over subsets $I \subset \{1,\ldots,m\}$ such that the linear forms defining $H_i,i \in I$ are linearly independent. \end{theorema} Explicitly, if $H$ is a hyperplane on $\mathbb{P}^n$ defined by the linear form $L(x_0,\ldots,x_n)$ then a Weil function for $H$ is given by \begin{equation} \label{Weila} \lambda_{H,v}(P)=\log \max_i \frac{|x_i|_v}{|L(P)|_v}, \end{equation} where $P=(x_0,\ldots,x_n)$. We will also need the close relative of Schmidt's theorem, the $S$-unit lemma. \begin{theorema} Let $k$ be a number field and let $n>1$ be an integer. Let $\Gamma$ be a finitely generated subgroup of $k^*$. Then all but finitely many solutions of the equation \begin{equation} u_0+u_1+\cdots+u_n=1, \qquad u_i\in \Gamma \end{equation} lie in one of the diagonal hyperplanes $H_I$ defined by the equation $\sum_{i\in I}x_i=0$, where $I$ is a proper subset of $\{0,\ldots,n\}$ with at least two elements. \end{theorema} \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} For the convenience of the reader, we have collected various properties of $D$-integral points that we will use (sometimes implicitly) throughout the paper (see \cite{Vo2}). \begin{lemma} \label{Integral} Let $k$ be a number field and $S$ a finite set of places in $M_k$ containing the archimedean places. Let $D$ be an effective Cartier divisor on a projective variety $X$, both defined over $k$.\\\\ (a).
Let $L$ be a finite extension of $k$ and let $T$ be the set of places of $L$ lying over places in $S$. If $R$ is a set of $(D,S)$-integral points then it is a set of $(D,T)$-integral points.\\ (b). Let $E$ be an effective Cartier divisor on $X$. If $R$ is a set of $(D+E)$-integral points then $R$ is a set of $D$-integral points.\\ (c). The $D$-integrality of a set is independent of the multiplicities of the components of $D$.\\ (d). Let $Y$ be a projective variety defined over $k$. Let $\pi:Y\to X$ be a morphism defined over $k$ with $\pi(Y)\not\subset D$. If $R$ is a set of $(D,S)$-integral points on $X$ then $\pi^{-1}(R)$ is a set of $(\pi^*D,S)$-integral points on $Y$. \end{lemma} Note also in (d) that if in addition $\pi:Y\backslash \pi^*D \to X \backslash D$ is a finite \'etale map, then by the Chevalley-Weil theorem there exists a number field $L$ such that $\pi^{-1}(R)\subset Y(L)$ \cite[Th. 1.4.11]{Vo2}. \subsection{Nevanlinna Theory and Kobayashi Hyperbolicity} We will be interested in Nevanlinna theory as it applies to holomorphic maps $f:\mathbb{C} \to \mathbb{P}^n$ and hyperplanes on $\mathbb{P}^n$. Let $f:\mathbb{C} \to \mathbb{P}^n$ be a holomorphic map. Then we may choose a representation of $f$, $\mathbf{f}=(f_0,\ldots,f_n)$ where $f_0,\ldots,f_n$ are entire functions without common zeros. Let us define $\|\mathbf{f}\|=(|f_0|^2+\cdots +|f_n|^2)^{\frac{1}{2}}$. Then we define a characteristic function $T_f(r)$ of $f$ to be \begin{equation*} T_f(r)=\int_{0}^{2\pi} \log \|\mathbf{f}(re^{i\theta})\|\frac{d\theta}{2\pi}. \end{equation*} Note that by Jensen's formula this function is well-defined up to a constant. Let $H$ be a hyperplane in $\mathbb{P}^n$ defined by a linear form $L$. Then we define a Weil function $\lambda_H(f(z))$ of $f$ with respect to $H$ by \begin{equation} \label{Weilb} \lambda_H(f(z))=-\log \frac{|L(\mathbf{f}(z))|}{\|\mathbf{f}(z)\|}.
\end{equation} We note that this is independent of the choice of $\mathbf{f}$ and depends on the choice of $L$ only up to a constant. The analogue of Schmidt's Subspace Theorem that we will need is the following version of Cartan's Second Main Theorem, due to Vojta \cite{Vo3}. \begin{theoremb} Let $H_1,\ldots,H_m$ be hyperplanes in $\mathbb{P}^n$ with corresponding Weil functions $\lambda_{H_1},\ldots,\lambda_{H_m}$. Then there exists a finite union of hyperplanes $Z$ such that for any $\epsilon >0$ and any holomorphic map $f:\mathbb{C}\to \mathbb{P}^n\backslash Z$ \begin{equation} \int_{0}^{2\pi} \max_I \sum_{i \in I} \lambda_{H_i}(f(re^{i\theta}))\frac{d\theta}{2\pi} \leq (n+1+\epsilon)T_f(r) \end{equation} holds for all $r$ outside a set of finite Lebesgue measure, where the max is taken over subsets $I \subset \{1,\ldots,m\}$ such that the linear forms defining $H_i,i \in I$, are linearly independent. \end{theoremb} The analogue of the $S$-unit lemma is the Borel lemma. \begin{theoremb} Let $f_1,\ldots,f_n$ be entire functions. Suppose that \begin{equation} e^{f_1}+\cdots+e^{f_n}=1. \end{equation} Then $f_i$ is constant for some $i$. \end{theoremb} Closely connected to questions about holomorphic curves is the Kobayashi pseudo-distance and Kobayashi hyperbolicity. We refer the reader to \cite{La2} for the definitions of the Kobayashi pseudo-distance, Kobayashi hyperbolic, complete hyperbolic, and hyperbolically imbedded. It is trivial that Kobayashi hyperbolic implies Brody hyperbolic. We will want a criterion for proving the converse in special cases. On projective varieties, this is given by Brody's theorem. More generally, we will use the following theorem of Green (see \cite{Gr2} and \cite{La2}). \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} \begin{theorem}[Green] \label{hyperbolic} Let $X$ be a complex projective variety. Let $Y=\bigcup_{i \in I} D_i$ be a finite union of Cartier divisors $D_i$ on $X$.
Suppose that for every subset $\emptyset \subseteq J \subseteq I$, \begin{equation*} \bigcap_{j\in J}D_j\backslash \bigcup_{i\in I\backslash J}D_i \end{equation*} is Brody hyperbolic, where $\bigcap_{j\in \emptyset}D_j=X$. Then $X\backslash Y$ is complete hyperbolic and hyperbolically imbedded in $X$. \end{theorem} \subsection{Nef and Quasi-ample Divisors} \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} We now recall some basic definitions and facts regarding nef and quasi-ample divisors. We will use the theory of intersection numbers on projective varieties as presented in, for instance, \cite{Kl}. We will use the notation $D^n$ to denote the intersection number of the $n$-fold intersection of $D$ with itself. In what follows $X$ will be a projective variety over an algebraically closed field of characteristic $0$. \begin{definition} A Cartier divisor $D$ (or invertible sheaf $\mathcal{O}(D)$) on $X$ is said to be numerically effective, or nef, if $D.C\geq 0$ for any closed integral curve $C$ on $X$. \end{definition} The next lemma summarizes some basic properties of nef divisors (see \cite{Kl}). \begin{lemma} Nef divisors satisfy the following:\\\\ (a). Let $n=\dim X$. If $D_1,\ldots,D_n$ are nef divisors on $X$ then $D_1.D_2.\ldots.D_n\geq 0$.\\ (b). Let $D$ be a nef divisor and $A$ an ample divisor on $X$. Then $A+D$ is ample.\\ (c). Let $f:X \to Y$ be a morphism and let $D$ be a nef divisor on $Y$. Then $f^*\mathcal{O}(D)$ is nef on $X$. \end{lemma} Recall that we have defined $\kappa(D)$ and quasi-ampleness for a Cartier divisor (Definitions \ref{defk} and \ref{defbig}). It is always true that $\kappa(D) \leq \dim X$, so $D$ is quasi-ample (or big) if and only if it has the largest possible dimension for a divisor on $X$. For nef divisors it is possible to give a more numerical criterion for a divisor to be quasi-ample. It is also possible in this case to get an asymptotic formula for $h^0(nD)$.
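A simple example (a standard computation, included for orientation) separates nef from quasi-ample:

```latex
% On X = P^1 x P^1, let D = {p} x P^1 be a fiber of the first projection.
% Then D is nef (D.C >= 0 for every curve C) but D^2 = 0, and
h^0(nD) \;=\; n+1, \qquad \text{so}\quad \kappa(D) \;=\; 1 \;<\; 2 \;=\; \dim X,
% so D is nef but not quasi-ample.  By contrast, a divisor D' of type (1,1)
% is ample with (D')^2 = 2, and
h^0(nD') \;=\; (n+1)^2 \;=\; \frac{(D')^2}{2}\,n^2 + O(n),
% in agreement with the asymptotic formula for nef divisors given next.
```

Thus, for nef divisors, quasi-ampleness is detected exactly by the positivity of the top self-intersection number.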
We have the following lemma, due to Sommese, as it appears in \cite{Ka}. \begin{lemma} \label{nefbig} Let $D$ be a nef divisor on a nonsingular projective variety $X$. Let $q=\dim X$. Then $h^0(nD)=\frac{D^q}{q!}n^q+O(n^{q-1})$. In particular, $D^q>0$ if and only if $D$ is quasi-ample. \end{lemma} \begin{proof} Let $K_X$ denote the canonical divisor on $X$. Let $L$ be an ample divisor on $X$ such that $L+K_X$ is very ample. Since $D$ is nef, $nD+L$ is ample, and so by Kodaira's vanishing theorem we have \begin{equation*} H^i(X,\mathcal{O}(nD+L+K_X))=0 \text{ for } i>0. \end{equation*} Therefore, \begin{equation*} h^0(nD+L+K_X)=\chi(\mathcal{O}(nD+L+K_X))=\frac{D^q}{q!}n^q+O(n^{q-1}) \end{equation*} by Riemann-Roch. Let $Y$ be a general member of the linear system $|L+K_X|$, so that $Y$ is nonsingular and irreducible. Then we have an exact sequence \begin{equation*} 0 \to H^0(X,\mathcal{O}(nD)) \to H^0(X,\mathcal{O}(nD+L+K_X))\to H^0(Y,i^*\mathcal{O}(nD+L+K_X)) \end{equation*} where $i:Y\to X$ is the inclusion map. But since $\dim Y=q-1$, we have $\dim H^0(Y,i^*\mathcal{O}(nD+L+K_X))=O(n^{q-1})$. It follows that $h^0(nD)=\frac{D^q}{q!}n^q+O(n^{q-1})$. \end{proof} Since we will use it multiple times, we state the exact sequence used above as a lemma. \begin{lemma} \label{exact} Let $D$ be an effective Cartier divisor on $X$ with inclusion map $i:D \to X$. Let $E$ be any Cartier divisor on $X$. Then we have exact sequences \begin{align} &0 \to \mathcal{O}(E-D) \to \mathcal{O}(E) \to i_{*}(i^*\mathcal{O}(E)) \to 0\\ &0 \to H^0(X,\mathcal{O}(E-D))\to H^0(X,\mathcal{O}(E)) \to H^0(D,i^*\mathcal{O}(E)). \end{align} \end{lemma} \begin{proof} If $D$ is an effective Cartier divisor, then a fundamental exact sequence is \begin{equation*} 0 \to \mathcal{O}(-D) \to \mathcal{O}_X \to i_{*} \mathcal{O}_D \to 0. \end{equation*} Tensoring with $\mathcal{O}(E)$ and using the projection formula, we get the first exact sequence.
Taking global sections then gives the second exact sequence. \end{proof} We can prove a little more for surfaces. \begin{lemma} \label{surfbig} Let $D$ be an effective divisor on a nonsingular projective surface $X$. If $D^2>0$ then $h^0(nD)\geq \frac{n^2D^2}{2}+O(n)$ and $D$ is quasi-ample. \end{lemma} \begin{proof} By Riemann-Roch, \begin{equation} h^0(nD)-h^1(nD)+h^0(K-nD)=\frac{n^2D^2}{2}-\frac{nD.K}{2}+1+p_a. \end{equation} Since $D$ is effective and $D \neq 0$, we have $h^0(K-nD)=0$ for $n\gg 0$ (for example, choose $n>K.H$ where $H$ is an ample divisor). We also have $h^1(nD)\geq 0$, so $h^0(nD)\geq \frac{n^2D^2}{2}+O(n)$ and $D$ is quasi-ample. \end{proof} It is not always true that if $D$ is nef then $h^0(E-D) \leq h^0(E)$. If $h^0(D)=0$ (for example, if $D$ corresponds to a non-zero torsion element of Pic $X$), then when $E=D$ we have $h^0(E-D)=h^0(D-D)=h^0(0)=1 > h^0(E)=0$. We will want some control over $h^0(E-D)$ when $D$ is nef, and so we prove the following weak lemma. \begin{lemma} \label{nef} Let $X$ be a nonsingular projective variety of dimension $q$. Let $D$ be a nef divisor on $X$. Let $E$ be any divisor on $X$. Then \begin{equation*} h^0(nE-mD) \leq h^0(nE)+O(n^{q-1}) \end{equation*} for all $m,n\geq 0$, where the implied constant is independent of $m$. \end{lemma} \begin{proof} We first claim that if $F$ is any nef divisor then there exists a divisor $C$, independent of $F$, such that $h^0(C+F)>0$. Explicitly, we may take $C=(q+2)A+K_X$, where $A$ is a very ample divisor on $X$. We prove this by induction on the dimension $q$. The case $q=1$ is easy. For the inductive step, we have an exact sequence \begin{multline*} 0\to H^0(X,\mathcal{O}((q+1)A+K_X+F))\to H^0(X,\mathcal{O}((q+2)A+K_X+F)) \\ \to H^0(Y,i^*(\mathcal{O}((q+2)A+K_X+F)))\to H^1(X,\mathcal{O}((q+1)A+K_X+F)) \end{multline*} where $Y$ is an irreducible nonsingular element of $|A|$ with inclusion map $i:Y\to X$. Since $(q+1)A+F$ is ample, by Kodaira vanishing, the last term above is $0$.
Since $\omega_Y\cong i^*(\mathcal{O}(A+K_X))$, by induction we get that $\dim H^0(Y,i^*(\mathcal{O}((q+2)A+K_X+F)))>0$. Since the penultimate map in the exact sequence above is surjective, we therefore also have $h^0((q+2)A+K_X+F)=h^0(C+F)>0$, proving our claim. Then we have \begin{equation*} h^0(nE-mD)\leq h^0(nE-mD+(C+mD))= h^0(nE+C) \leq h^0(nE) +O(n^{q-1}) \end{equation*} independently of $m$, where the last inequality follows from Lemma \ref{exact} as in the proof of Lemma \ref{nefbig}. \end{proof} \section{Fundamental Theorems on Large Divisors} \label{smain} In this section we prove a slightly expanded version of a theorem of Corvaja and Zannier and its analogue for holomorphic curves. These theorems will be fundamental to our future results. Let $D$ be a divisor on a nonsingular projective variety $X$ defined over a field $k$. Let $\overline{k}(X)$ denote the function field of $X$ over $\overline{k}$. We will write $D\geq E$ if $D-E$ is effective. Let div$(f)$ denote the principal divisor associated to $f$. Let $L(D)$ be the $\overline{k}$-vector space $L(D)=\{f \in \overline{k}(X)|\text{div}(f)\geq -D\}$, and let $l(D)=\dim L(D)=h^0(D)$. If $E$ is a prime divisor we let $\text{ord}_E f$ denote the coefficient of $E$ in div$(f)$. We make the following definition. \begin{definition} Let $D$ be an effective divisor on a nonsingular projective variety $X$ defined over a field $k$. Then we define $D$ to be a very large divisor on $X$ if for every $P\in D(\overline{k})$ there exists a basis $B$ of $L(D)$ such that $\text{ord}_E\prod_{f \in B}f>0$ for every irreducible component $E$ of $D$ such that $P\in E$. We define $D$ to be a large divisor if some nonnegative integral linear combination of its irreducible components is very large on $X$. \end{definition} \begin{remark} \label{remlarge} Suppose $D$ is very large. Let $P\in D$ and let $\mathcal E$ be the set of irreducible components $E$ of $D$ such that $P\in E$.
If $B$ is a basis of $L(D)$ that has the property in the definition of very large with respect to $P$, then $B$ also works as a basis with respect to any $Q\in \bigcap_{E \in \mathcal{E}}E$. Thus, for a very large divisor $D$, only finitely many bases are needed: there is a finite set $\mathcal{B}$ of bases of $L(D)$ such that for every $P\in D$ the condition in the definition is witnessed by some $B\in \mathcal{B}$. \end{remark} We will see, for example (Theorem \ref{cor3}), that on any nonsingular projective variety $X$ the sum of sufficiently many ample effective divisors in general position is large. On the other hand, it is obvious from the definition that if $D$ is an irreducible effective divisor on $X$ then $D$ cannot be large. Roughly speaking, large divisors have a lot of irreducible components of high $D$-dimension. With this definition we have the following theorems. \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}A} \setcounter{theoremb}{\value{theorem}} \begin{theorema}[Corvaja-Zannier] \label{maina} Let $X$ be a nonsingular projective variety defined over a number field $k$. Let $S\subset M_k$ be a finite set of places of $k$ containing the archimedean places. Let $D$ be a large divisor on $X$ defined over $k$. Then there does not exist a Zariski-dense set of $D$-integral points on $X$. Furthermore, if $D$ is very large and $\Phi_D$ is a rational map to projective space corresponding to $D$, then there exists a proper closed subset $Z\subset X$ depending only on $D$ (and not on $k$ or $S$) such that $\Phi_D(R\backslash Z)$ is finite for any set $R$ of $D$-integral points on $X$. In particular, if $\Phi_D$ is birational, $X\backslash D$ is quasi-Mordellic. \end{theorema} \begin{theoremb} \label{mainb} Let $X$ be a nonsingular complex projective variety. Let $D$ be a large divisor on $X$. Then there does not exist a holomorphic map $f:\mathbb{C} \to X \backslash D$ with Zariski-dense image.
Furthermore, if $D$ is very large and $\Phi_D$ is a rational map to projective space corresponding to $D$, then there exists a proper closed subset $Z\subset X$ depending only on $D$ such that for all holomorphic maps $f:\mathbb{C} \to X \backslash D$, either $f(\mathbb{C})\subset Z$ or $\Phi_D\circ f$ is constant. In particular, if $\Phi_D$ is birational, $X\backslash D$ is quasi-Brody hyperbolic. \end{theoremb} Theorem \ref{maina} appears, essentially, in the proof of the Main Theorem in \cite{Co2}, and for curves in \cite{Co}. We have added the last two statements to the theorem by using Vojta's result on the exceptional hyperplanes in the Schmidt Subspace Theorem. Given these theorems, many of our results mentioned in the introduction reduce to showing that certain divisors are large. We prove Theorem \ref{maina} first. Before doing so, we need a lemma. \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} \begin{lemma} \label{seq} Let $X$ be a projective variety defined over a number field $k$. Let $R\subset X(k)$ be a Zariski-dense subset of $X$. Let $v\in M_k$. Then there exists a point $P$ in $X(k_v)$ and a sequence $\{P_i\}$ in $R$ such that $\{P_i\}\to P$ in the $v$-topology on $X(k_v)$ and $\bigcup \{P_i\}$ is Zariski-dense in $X$. \end{lemma} \begin{proof} We will always be working in the $v$-topology on $X(k_v)$. First we claim that there exists a $P$ in $\overline{R}\subset X(k_v)$ such that for every neighborhood $U$ of $P$ in $X(k_v)$, $U\cap R$ is Zariski-dense in $X$. Indeed, suppose there is no such $P$. Then for each $P$ in $\overline{R}$, let $U_P$ be a neighborhood of $P$ such that $U_P \cap R$ is not Zariski-dense in $X$. Since $X(k_v)$ is compact because $X$ is projective, $\overline{R}$ is compact, so we may cover $\overline{R}$ by finitely many of the open sets, say $U_{P_1},\ldots,U_{P_n}$. But then $R=(U_{P_1}\cap R)\cup\cdots \cup (U_{P_n}\cap R)$ is not Zariski-dense in $X$, a contradiction. Now pick some $P$ as in the claim above.
Embed $X$ in $\mathbb{P}^n_k$ for some $n$. Since $k$ is countable, the set of hypersurfaces in $\mathbb{P}^n_k$ not containing $X$ is countable. Let $\{H_i\}$ be an enumeration of these. There also exists a countable collection of neighborhoods $\{U_i\}$ of $P$ in $X(k_v)$ such that $U_i \subset U_j$ for $i>j$ and $\bigcap U_i=\{P\}$. Since $U_i\cap R$ is Zariski-dense in $X$, for all $i$ there exists a $P_i \in U_i \cap R$ such that $P_i \notin H_i$. Then $\{P_i\}\to P$ in $X(k_v)$ and $\bigcup \{P_i\}$ is Zariski-dense in $X$ since it is not contained in any hypersurface. \end{proof} \begin{proof}[Proof of Theorem \ref{maina}] Let $D$ be a large divisor and $S$ and $X$ as in Theorem \ref{maina}. Clearly, we may reduce to the case where $D$ is very large. Extending $k$ if necessary and enlarging $S$, we may assume without loss of generality that every irreducible component of $D$ is defined over $k$ and that all of the finitely many functions in $L(D)$ we use (see Remark \ref{remlarge}) are defined over $k$. Let $\{\phi_1,\ldots,\phi_{l(D)}\}$ be a basis of $L(D)$ over $k$. Let $R$ be a $(D,S)$-integral set of points on $X$. It suffices to prove the theorem in the case that $\overline{R}$ is irreducible. By repeatedly applying Lemma~\ref{seq}, we see that there exists a sequence $P_i$ in $R$ such that for each $v$ in $S$, $\{P_i\}$ converges to a point $P_v\in X(k_v)$ and $\bigcup \{P_i\}$ is Zariski-dense in $\overline{R}$. Let $S'$ be the set of places $v\in S$ such that $P_v\in D(k_v)$, and let $S''=S\backslash S'$. Since $D$ is very large, for each $v\in S'$ let $L_{iv},i=1,\ldots,l(D)$ be a basis for $L(D)$ such that $\text{ord}_E\prod_{i=1}^{l(D)}L_{iv}>0$ for all irreducible components $E$ of $D$ such that $P_v\in E(k_v)$. Of course each $L_{iv}$ is a linear form in the $\phi_j$'s over $k$. For $v\in S''$, we set $L_{jv}=\phi_j$ for $j=1,\ldots,l(D)$. Let $\phi(P)=(\phi_1(P),\ldots,\phi_{l(D)}(P))$ for $P\in X\backslash D$. 
Let $H_{jv}$ denote the hyperplane in $\mathbb{P}^{l(D)-1}$ determined by $L_{jv}$ with respect to the basis $\phi_1,\ldots,\phi_{l(D)}$. Let $\lambda_{H_{jv},v}$ be the Weil function for $H_{jv}$ given in Equation (\ref{Weila}). We will now show that there exists $\epsilon>0$ and a constant $C$ such that \begin{equation} \label{Schmidt} \sum_{v\in S}\sum_{j =1}^{l(D)} \lambda_{H_{jv},v}(\phi(P_i)) > (l(D)+\epsilon)h(\phi(P_i))+C. \end{equation} Since $R$ is a set of $(D,S)$-integral points, we have \begin{equation*} h(\phi(P_i))<\sum_{v \in S}\log{\max_j|\phi_j(P_i)|_v}+O(1). \end{equation*} Using this, it suffices to prove that \begin{equation*} \sum_{v\in S}\sum_{j =1}^{l(D)}\log \max_{j'} \frac{|\phi_{j'}(P_i)|_v}{|L_{jv}(P_i)|_v}>(l(D)+\epsilon)\sum_{v\in S}\log{\max_{j'}|\phi_{j'}(P_i)|_v}+C' \end{equation*} for some constant $C'$, or, after rearranging, simplifying, and exponentiating, that \begin{equation*} \prod_{v\in S} |\max_{j'}(\phi_{j'}(P_i))^{\epsilon} \prod_{j=1}^{l(D)}L_{jv}(P_i)|_v \end{equation*} is bounded for some $\epsilon>0$. Let \begin{equation*} M=\max\{-\text{ord}_E \phi_j|E \text{ is an irreducible component of } D, j=1,\ldots,l(D)\}. \end{equation*} Let $\epsilon=\frac{1}{M}$. For $v\in S''$, both $|\phi_{j'}(P_i)|_v$ and $|L_{jv}(P_i)|_v$ are bounded for all $i$, since $P_v \notin D(k_v)$ and $\phi_{j'}$ and $L_{jv}$ have poles lying only in the support of $D$. Let $v\in S'$, so that $P_v\in D(k_v)$. It follows from the definition of $M$ and the fact that $\text{ord}_E \prod_{i=1}^{l(D)}L_{iv}>0$ for any irreducible component $E$ of $D$ such that $P_v \in E(k_v)$ that $\text{ord}_E\phi_{j'} (\prod_{i=1}^{l(D)}L_{iv})^M\geq -M+ M \geq 0$ for any such component $E$.
Since the $\phi_{j'}$ and $L_{iv}$ have poles only in the support of $D$, it follows from the previous order computation that $|\max_{j'}(\phi_{j'}(P_i))^{\epsilon} \prod_{j=1}^{l(D)}L_{jv}(P_i)|_v$ is bounded for all $i$ and all $v\in S$ when $\epsilon=\frac{1}{M}>0$. So we have proved Equation (\ref{Schmidt}). Note that either $h(\phi(P_i))\to \infty$ as $i\to \infty$, or $\phi(\overline{R})$ is a point and $\phi(P_i)$ is constant for all $i$. In the latter case the theorem is proved, so we may assume the former. Therefore, making $\epsilon$ smaller, we see that Equation (\ref{Schmidt}) holds with $C=0$ for all but finitely many $i$. So by Schmidt's Subspace Theorem, there exists a finite union of hyperplanes $Z\subset \mathbb{P}^{l(D)-1}$ such that all but finitely many of the points in the set $\{\phi(P_i)=(\phi_1(P_i),\ldots,\phi_{l(D)}(P_i))|i\in \mathbb{N}\}$ lie in $Z$. Using Remark \ref{remlarge}, we see that we may choose the hyperplanes $H_{jv}$ used above from a finite set of hyperplanes independent of $R$. Therefore, using the statement on the exceptional hyperplanes in the Schmidt Subspace Theorem, we see that $Z$ may be chosen to depend only on $D$ and not on $R$, $k$, or $S$. Since it was assumed that $\overline{R}$ is irreducible and $\phi(\overline{R})$ is not a point, it follows that $\phi(R)\subset Z$. Since $\phi_1,\ldots,\phi_{l(D)}$ are linearly independent functions in $\overline{k}(X)$ and $Z$ is a finite union of hyperplanes, it follows that $\phi^{-1}(Z)$ is a finite union of proper closed subvarieties of $X$. So $R\subset \phi^{-1}(Z)$ and the theorem is proved. \end{proof} The proof of Theorem \ref{mainb} is very similar. \begin{proof}[Proof of Theorem \ref{mainb}] Since our assertion depends only on the support of $D$, we may assume without loss of generality that $D$ is very large on $X$. Let $f:\mathbb{C} \to X \backslash D$ be a holomorphic map.
By Remark \ref{remlarge} there exists a finite set $J$ of elements in $L(D)$ such that for any $P\in D$ there exists a subset $I \subset J$ that is a basis of $L(D)$ such that $\text{ord}_E \prod_{g\in I}g>0$ for every irreducible component $E$ of $D$ such that $P\in E$. Let $\phi_1,\ldots,\phi_{l(D)}$ be a basis for $L(D)$. Let $\phi=(\phi_1,\ldots,\phi_{l(D)}):X\backslash D \to \mathbb{P}^{l(D)-1}$. Let $J'$ be the set of linear forms $L$ in $l(D)$ variables over $\mathbb{C}$ such that $L\circ \phi \in J$. If $L$ is a linear form, let $H_L$ be the corresponding hyperplane. We will now show that there exists $\epsilon>0$ and a constant $C$ such that \begin{equation} \label{Cartan} \int_{0}^{2\pi} \max_I \sum_{L \in I} \lambda_{H_L}(\phi \circ f(re^{i\theta}))\frac{d\theta}{2\pi} > (l(D)+\epsilon)T_{\phi \circ f}(r)-C \end{equation} for all $r>0$, where the max is taken over subsets $I \subset J'$ such that $I$ consists of exactly $l(D)$ linearly independent linear forms. Substituting the definition of the Weil function in Equation (\ref{Weilb}) and the definition of $T_{\phi \circ f}$, after some manipulation the inequality in Equation (\ref{Cartan}) becomes \begin{equation*} \int_{0}^{2\pi} \epsilon \log|\phi\circ f(re^{i\theta})|+\min_I \sum_{L \in I} \log |L\circ \phi \circ f(re^{i\theta})| \frac{d\theta}{2\pi}<C \end{equation*} with $I$ as before. Since $|\phi\circ f(re^{i\theta})|\leq \sqrt{l(D)} \max_j |\phi_j \circ f(re^{i\theta})|$ it clearly suffices to show that \begin{equation} \label{Cartan2} \max_j|\phi_j \circ f(re^{i\theta})|^{\epsilon} \min_I \prod_{L \in I} |L\circ \phi \circ f(re^{i\theta})| \end{equation} is bounded independently of $r$ and $\theta$ for some $\epsilon>0$. Let $D_1,\ldots,D_m$ be the irreducible components of $D$. Let \begin{equation*} M=\max\{-\text{ord}_{D_i} \phi_j|i=1,\ldots,m, j=1,\ldots,l(D)\}. \end{equation*} We will work in the classical topology. Let $P\in D$. 
Then there exists a neighborhood $U$ of $P$ such that for all $Q\in \overline{U}$ if $Q \in D_i$ for some $i$ then $P \in D_i$. Let $I\subset J'$ be a subset of $J'$ such that $\text{ord}_{D_i} \prod_{L\in I} L \circ \phi>0$ for all $i$ such that $P\in D_i$. If $P \in D_i$, then by the definition of $M$ we have $\text{ord}_{D_i}\phi_j (\prod_{L\in I} L \circ \phi)^M\geq 0$ for all $j$. By the definition of $U$ we see that $\phi_j (\prod_{L\in I} L \circ \phi)^M$ is bounded for all $j$ on the compact set $\overline{U}$. Since $D$ is compact and may be covered by such sets we see that $\max_j|\phi_j|\min_I \prod_{L \in I} |L\circ \phi|^M$ is bounded on $X\backslash D$ (using also that away from $D$ everything is obviously bounded since the $\phi_j$'s have poles only in $D$). Therefore Equation (\ref{Cartan2}) is bounded independently of $r$ and $\theta$ for $\epsilon=\frac{1}{M}$. If $\phi \circ f$ is constant then there is nothing to prove, so assume otherwise. Then $T_{\phi \circ f}(r)\to \infty$ as $r\to \infty$, and so making $\epsilon$ smaller, we see that we have proven the inequality (\ref{Cartan}) with $C=0$ for all sufficiently large $r$. Therefore by Cartan's Second Main Theorem, there exists a finite union of hyperplanes $Z\subset \mathbb{P}^{l(D)-1}$ depending only on $D$ (the $H_L$'s depended only on $D$) such that $\phi(f(\mathbb{C}))\subset Z$. Since the $\phi_j$'s are linearly independent and $Z$ is a finite union of hyperplanes, $\phi^{-1}(Z)$ is a finite union of closed subvarieties of $X$ and $f(\mathbb{C})\subset \phi^{-1}(Z)$. \end{proof} \begin{remark} If $D$ is very large and one can explicitly compute the map $\phi$ and the hyperplanes used in the above proofs, then one can explicitly compute the closed set $Z$ in the theorems above. This follows from the explicit description of the exceptional hyperplanes in \cite{Vo6} and \cite{Vo3}. 
\end{remark} \section{Large Divisors} For an effective divisor $D=\sum_{i=1}^r D_i$ on $X$ and $P\in D(\overline{k})$, we let $D_P=\sum_{i:P\in D_i}D_i$. \begin{lemma} \label{large} Let $D=\sum_{i=1}^r D_i$ be a divisor on a nonsingular projective variety $X$ with $D_i$ effective for each $i$. Let $P\in D$. Let $f_P(m,n)=l(nD-mD_P)-l(nD-(m+1)D_P)$. If there exists $n>0$ such that $\sum_{m=0}^{\infty}(m-n)f_P(m,n)>0$ for all $P\in D$ then $nD$ is very large. \end{lemma} \begin{proof} Let $n>0$ be such that $\sum_{m=0}^{\infty}(m-n)f_P(m,n)>0$ for all $P\in D$. This sum is clearly finite for all $P\in D$, and we let $M_P(n)$ be the largest integer such that $f_P(M_P(n),n)>0$. Let $P\in D$. Let $M=M_P(n)$. Let $V_j=L(nD- jD_P)$. So $\dim V_j/V_{j+1}=f_P(j,n)$. We have $L(nD)=V_0 \supset V_1 \supset \ldots \supset V_M\neq 0$. Choose a basis of $V_M$ and successively complete it to bases of $V_{M-1},V_{M-2},\ldots,V_0$, to obtain a basis $f_1,\ldots,f_{l(nD)}$. Let $E$ be an irreducible component of $D$ such that $P \in E$. If $f_j \in V_m$ then $\text{ord}_E f_j\geq (m-n)\text{ord}_E D$. So we get that $\text{ord}_E\prod_{i=1}^{l(nD)}f_i\geq (\text{ord}_E D)\sum_{m=0}^{M}(m-n)f_P(m,n)>0$. So $nD$ is very large. \end{proof} \begin{theorem} \label{cor2} Let $X$ be a nonsingular projective variety. Let $q= \dim X$. Let $D=\sum_{i=1}^{r}D_i$ be a divisor on $X$ such that $D_i$ is effective and nef for each $i$. Suppose also that every irreducible component of $D$ is nonsingular. If \begin{equation*} D^q>2q D^{q-1}.D_P, \qquad \forall P\in D, \end{equation*} then $nD$ is very large for $n\gg 0$. In particular, $D$ is large. \end{theorem} \begin{proof} Let $P \in D$. Let $D_P=\sum_{j=1}^{k}a_j E_j$, where each $E_j$ is a distinct prime divisor.
Repeatedly applying Lemma~\ref{exact}, we obtain \begin{multline*} \dim H^0(X,\mathcal{O}(nD-m D_P))-\dim H^0(X,\mathcal{O}(nD-(m+1)D_P)) \\ \leq \sum_{j=1}^k \sum_{l=0}^{a_{j}-1}\dim H^0(E_{j},i^*_{E_{j}}\mathcal{O}(nD-mD_P -\sum_{j'=1}^{j-1}a_{j'}E_{j'}-lE_{j})). \end{multline*} It follows from the fact that $D_P$ is nef, Lemma \ref{exact}, and Lemma \ref{nef} that \begin{multline*} \dim H^0(E_{j},i^*_{E_{j}}\mathcal{O}(nD-mD_P -\sum_{j'=1}^{j-1}a_{j'}E_{j'}-lE_{j}))\\ \leq \dim H^0(E_{j},i^*_{E_{j}}\mathcal{O}(nD))+O(n^{q-2}). \end{multline*} Therefore, \begin{multline*} \dim H^0(X,\mathcal{O}(nD-m D_P))-\dim H^0(X,\mathcal{O}(nD-(m+1)D_P))\\ \leq \sum_{j=1}^k a_j \dim H^0(E_j,i_{E_j}^*\mathcal{O}(nD))+O(n^{q-2}). \end{multline*} Since $D$ is nef, $l(nD)=\frac{n^q}{q!}D^q + O(n^{q-1})$. Since $i_{E_j}^*\mathcal{O}(D)$ is also nef, we have $\dim H^0(E_j,i_{E_j}^* \mathcal{O}(nD))= \frac{n^{q-1}}{(q-1)!} D^{q-1}.E_j+ O(n^{q-2})$. So \begin{equation*} f_P(m,n)\leq \frac{n^{q-1}}{(q-1)!} \sum_{j=1}^k a_j D^{q-1}.E_j+ O(n^{q-2}) =\frac{n^{q-1}}{(q-1)!} D^{q-1}.D_P+ O(n^{q-2}). \end{equation*} To use this estimate, we borrow a lemma from \cite{Co2}. \begin{lemma} \label{CZlemma} Let $h$ and $R$ be integers with $R\leq h$ and let $x_1,\ldots,x_h,U_1,\ldots,U_R$ be real numbers. If $x_i\geq 0$ for $i=1,\ldots,h$, $x_i\leq U_i$ for $i=1,\ldots, R$, and $\sum_{j=1}^RU_j\leq \sum_{j=1}^hx_j$, then $\sum_{j=1}^hjx_j\geq \sum_{j=1}^RjU_j$. \end{lemma} \begin{proof} We have \begin{align*} \sum_{j=1}^RjU_j+\sum_{j=1}^h(R+1-j)x_j&\leq \sum_{j=1}^RjU_j+\sum_{j=1}^R(R+1-j)x_j\\ &\leq \sum_{j=1}^RjU_j+\sum_{j=1}^R(R+1-j)U_j=(R+1)\sum_{j=1}^RU_j. \end{align*} So, rearranging, \begin{equation*} \sum_{j=1}^h jx_j\geq \sum_{j=1}^RjU_j+(R+1)\left(\sum_{j=1}^hx_j-\sum_{j=1}^RU_j\right), \end{equation*} and the last term is nonnegative by assumption. \end{proof} Let $R_n=\frac{n^q}{q!}D^q$ and $S_n= \frac{n^{q-1}}{(q-1)!} D^{q-1}.D_P$.
In the notation of Lemma \ref{large}, we have \begin{equation*} \sum_{m=0}^{M_P(n)}f_P(m,n)=l(nD)=R_n+O(n^{q-1}), \end{equation*} and $f_P(m,n)\leq S_n+O(n^{q-2})$. We will assume from now on that $S_n\neq 0$ (the case $S_n=0$ is similar). Then using our estimate, we have $M_P(n) \geq \frac{R_n}{S_n}+O(1)$ and $\sum_{m=0}^{\frac{R_n}{S_n}+O(1)}(S_n+ O(n^{q-2}))\leq \sum_{m=0}^{M_P(n)}f_P(m,n)$. So using Lemma \ref{CZlemma}, for $n \gg 0$ we get the estimate \begin{align*} \sum_{m=0}^{M_P(n)}(m-n)f_P(m,n) & \geq \sum_{m=0}^{\frac{R_n}{S_n}+O(1)}m(S_n+O(n^{q-2}))-n\sum_{m=0}^{M_P(n)}f_P(m,n)\\ &\geq \frac{R_n^2}{2S_n}-nR_n+O(n^q)\\ &\geq \frac{R_n}{S_n}\left[\frac{n^q}{2q!}\left(D^q-2q D^{q-1}.D_P\right)+O(n^{q-1})\right]. \end{align*} So for $n \gg 0$, $\sum_{m=0}^{M_P(n)}(m-n)f_P(m,n)>0$ if $D^q>2q D^{q-1}.D_P$. Then we are done by Lemma \ref{large}. \end{proof} When $q=1$ we obtain \begin{corollary} Let $D$ be an effective divisor on a nonsingular projective curve $X$. If $D$ is a sum of more than two distinct points on $X$ then $D$ is large. \end{corollary} By Theorems \ref{maina} and \ref{mainb} we then recover \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}A} \setcounter{theoremb}{\value{theorem}} \begin{corollarya} Siegel's theorem (Theorem~\ref{Siegel2}). \end{corollarya} \begin{corollaryb} Picard's theorem (Theorem \ref{Picard}). \end{corollaryb} \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} Strictly speaking, we have only proved these theorems for nonsingular curves. However, the general case follows by passing to the normalization $\tilde{C}$ of the curve $C$ in question. Suppose that we have a divisor $D=\sum_{i=1}^{r}D_i$ satisfying the hypotheses of Theorem~\ref{cor2}. We would like to modify $D$ to a divisor $D'=\sum_{i=1}^{r}a_iD_i$ so that we may optimally apply the theorem. When each $D_i$ is ample, this amounts to choosing the $a_i$'s so that in the embedding given by $nD'$ for $n\gg 0$ the degree of each $a_iD_i$ is the same.
In terms of intersection theory, we would like $a_iD_i.(D')^{q-1}$ to be the same for each $i$. We make the following definition: \begin{definition} Let $X$ be a nonsingular projective variety. Let $q=\dim X$. Let $D=\sum_{i=1}^rD_i$ be a divisor on $X$ with $D_1,\ldots,D_r$ effective. Then $D$ is said to have equidegree with respect to $D_1,\ldots,D_r$ if $D_i.D^{q-1}=\frac{D^q}{r}$ for $i=1,\ldots,r$. We will say that $D$ is equidegreelizable (with respect to $D_1,\ldots,D_r$) if there exist real numbers $a_i>0$ such that if $D'=\sum_{i=1}^ra_iD_i$ then $D'$ has equidegree with respect to $a_1D_1,\ldots,a_r D_r$ (extending intersection numbers to $\Div X\otimes \mathbb{R}$ in the canonical way). \end{definition} We will frequently omit the reference to the $D_i$'s when it is clear what we mean. \begin{lemma} \label{equi} Let $X$ be a nonsingular projective variety. Let $q=\dim X$. Let $D_1,\ldots,D_r$ be divisors on $X$ with $D_i^q>0$ for all $i$. Suppose that all $q$-fold intersections of the $D_i$'s are nonnegative. Then $\sum_{i=1}^r D_i$ is equidegreelizable with respect to $D_1,\ldots,D_r$. \end{lemma} \begin{proof} Consider the function $f(a_1,\ldots,a_r)=(\sum_{i=1}^r e^{a_i}D_i)^q$ subject to the constraint $g(a_1,\ldots,a_r)=\sum_{i=1}^r a_i=0$. Since all $q$-fold intersections of the $D_i$'s are nonnegative, $f(a_1,\ldots,a_r)\geq e^{q a_i}D_i^q$ for any $i$. Since $D_i^q>0$ for all $i$, as $\max_i \{a_i\}\to \infty$ we have $f(a_1,\ldots,a_r) \to \infty$. It follows that $f$ attains a minimum on the plane $\sum_{i=1}^r a_i=0$. Therefore there exists a solution $\lambda,a_1,\ldots,a_r$ to the Lagrange multiplier equations $g=0$, $\frac{\partial f}{\partial a_i}=qe^{a_i}D_i.(\sum_{j=1}^r e^{a_j} D_j)^{q-1}=\lambda \frac{\partial g}{\partial a_i}=\lambda$, $i=1,\ldots,r$. So $D'=\sum_{i=1}^r e^{a_i}D_i$ has equidegree with respect to $e^{a_1}D_1,\ldots,e^{a_r}D_r$, and trivially $e^{a_i}>0$. \end{proof} We give an example to show that not all divisor sums are equidegreelizable.
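For contrast with the example that follows, here is a simple case where equidegree holds on the nose (a direct check of the definition, which we include for illustration; note that the hypotheses of Lemma \ref{equi} are not needed here, and indeed fail since $D_i^2=0$).

```latex
% On X = P^1 x P^1 (so q = 2), take D_1 of type (1,0) and D_2 of
% type (0,1), and set D = D_1 + D_2. Then D_1^2 = D_2^2 = 0 and
% D_1.D_2 = 1, so D^2 = 2 and
%   D_1.D = D_2.D = D_1.D_2 = 1 = D^2 / 2,
% i.e. D has equidegree with respect to D_1, D_2 (all weights a_i = 1).
\[
D_1.D=D_2.D=1=\frac{D^2}{2}.
\]
```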
\begin{example} Let $X=\mathbb{P}^1 \times \mathbb{P}^1$. Let $D_1=P_1 \times \mathbb{P}^1,D_2=P_2 \times \mathbb{P}^1$, and $D_3=\mathbb{P}^1 \times Q$, where $P_1,P_2$, and $Q$ are points in the various $\mathbb{P}^1$'s. So $D_1.D_2=D_1^2=D_2^2=D_3^2=0$ and $D_1.D_3=D_2.D_3=1$. Let $D=a_1D_1+a_2D_2+a_3D_3$. Since $a_3D_3.D=a_1D_1.D+a_2D_2.D$, it is clear that there do not exist $a_1,a_2,a_3>0$ such that $a_iD_i.D=\frac{D^2}{3}$ for $i=1,2,3$. So $D=D_1+D_2+D_3$ is not equidegreelizable with respect to $D_1, D_2$, and $D_3$. \end{example} With the above definition, we have the following theorem. \begin{theorem} \label{cor3} Let $X$ be a nonsingular projective variety. Let $q=\dim X$. Let $D=\sum_{i=1}^r D_i$ be a quasi-ample divisor on $X$ equidegreelizable with respect to $D_1,\ldots, D_r$, with $D_1,\ldots, D_r$ nef and effective. Suppose that every irreducible component of $D$ is nonsingular. Suppose that the intersection of any $m+1$ distinct $D_i$'s is empty. If $r>2mq$ then $D$ is large. Furthermore, there exists a very large divisor $E$ with the same support as $D$ such that $\Phi_E$ is birational. \end{theorem} \begin{proof} Since $D$ is equidegreelizable, we may find positive integers $a_i$ such that if $D'=\sum_{i=1}^{r}a_iD_i$ then $\frac{a_iD_i.(D')^{q-1}}{(D')^q}$ is arbitrarily close to $\frac{1}{r}$ for each $i$. Note that $D'$ is again quasi-ample. Since for any $P\in D(\overline{k})$, $P$ belongs to at most $m$ divisors $D_i$, and $r>2mq$, we have that \begin{equation*} 2q(D')^{q-1}.(D')_P=2q \sum_{i: P\in D_i(\overline{k})}a_i D_i.(D')^{q-1}<(D')^q. \end{equation*} So the hypotheses of Theorem~\ref{cor2} are satisfied and so $nD'$ is very large for $n \gg 0$. The last statement then follows from the fact that $D'$ is quasi-ample. \end{proof} \begin{lemma} \label{reduce} Let $X$ be a complex projective variety. Let $D=\sum_{i=1}^rD_i$ be a sum of effective Cartier divisors on $X$. 
Then there exists a nonsingular projective variety $X'$, a birational morphism $\pi:X'\to X$, and a divisor $D'=\sum_{i=1}^rD_i'$ on $X'$ such that $\supp D_i'\subset \supp \pi^*D_i$ for all $i$, every irreducible component of $D'$ is nonsingular, $|D_i'|$ is base-point free for all $i$ (in particular $D_i'$ is nef), and $\kappa(D_i')=\kappa(D_i)=\dim \Phi_{D_i'}(X')$ for all $i$. \end{lemma} \begin{proof} Taking a resolution of the singularities of $X$ and of the embedded singularities of the irreducible components of $D$ we may assume that $X$ and every irreducible component of $D$ are nonsingular. For each $i$, let $m_i>0$ be such that $\dim \Phi_{m_iD_i}(X)=\kappa(D_i)$. Let $\pi:X'\to X$ be the map obtained by blowing up the base-points of all the linear systems $|m_iD_i|$. Then $\pi^*(m_iD_i)=D_i'+F_i$ for each $i$, where $|D_i'|$ is base-point free and $F_i$ is the fixed part of $|\pi^*(m_iD_i)|$. We have, trivially from the definition, $\kappa(D_i)=\kappa(m_iD_i)$. Further, $\kappa(m_iD_i)=\kappa(\pi^*(m_iD_i))$ (in fact $l(mD_i)=l(\pi^*(mD_i))$ for all $m$ follows easily from $\pi_*\mathcal{O}_{X'}=\mathcal{O}_X$ and the projection formula). Finally, $\kappa(\pi^*(m_iD_i))=\kappa(D_i')$ since by construction $\kappa(D_i')=\max_{n>0}\dim \Phi_{nD_i'}(X')\geq \kappa(D_i)=\kappa(\pi^*(m_iD_i))$ (the other inequality being trivial). So $\kappa(D_i')=\kappa(D_i)$ for all $i$ and therefore $X',\pi$, and $D'=\sum_{i=1}^rD_i'$ satisfy the requirements of the lemma. \end{proof} We now obtain one of our main results. \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}A} \setcounter{theoremb}{\value{theorem}} \begin{theorema} \label{cor4} Let $X$ be a projective variety defined over a number field $k$. Let $q=\dim X$. Let $D=\sum_{i=1}^{r} D_i$ be a divisor on $X$ defined over $k$ such that the $D_i$'s are effective Cartier divisors and the intersection of any $m+1$ distinct $D_i$'s is empty.\\\\ (a). 
If $D_i$ is quasi-ample for each $i$ and $r> 2mq$ then $X\backslash D$ is quasi-Mordellic.\\ (b). If $D_i$ is ample for each $i$ and $r> 2mq$ then $X\backslash D$ is Mordellic. \end{theorema} \begin{theoremb} \label{cor4b} Let $X$ be a complex projective variety. Let $q=\dim X$. Let $D=\sum_{i=1}^{r} D_i$ be a divisor on $X$ such that the $D_i$'s are effective Cartier divisors and the intersection of any $m+1$ distinct $D_i$'s is empty.\\\\ (a). If $D_i$ is quasi-ample for each $i$ and $r>2mq$ then $X\backslash D$ is quasi-Brody hyperbolic.\\ (b). If $D_i$ is ample for each $i$ and $r> 2mq$ then $X\backslash D$ is complete hyperbolic and hyperbolically imbedded in $X$. In particular, $X\backslash D$ is Brody hyperbolic. \end{theoremb} Aside from the statement about being complete hyperbolic and hyperbolically imbedded, the same proof works for both Theorems \ref{cor4} and \ref{cor4b}. \begin{proof} We'll prove part (a) first. Note that if $\pi:X'\to X$ is a birational morphism and the conclusions of part (a) of the theorems hold for $\pi^*D$ on $X'$ then they hold for $D$ on $X$. Therefore, by Lemma \ref{reduce}, we may assume (extending $k$ in the Diophantine case if necessary) that $X$ is nonsingular, every irreducible component of $D$ is nonsingular, and $D_i$ is nef for all $i$. The statement then follows from Lemma \ref{equi}, Theorem \ref{cor3}, and Theorems \ref{maina} and \ref{mainb}. For part (b), we note that by (a) any set of $D$-integral points (resp. the image of any holomorphic map $f:\mathbb{C} \to X \backslash D$) is not Zariski-dense. Let $R$ be a set of $D$-integral points (resp. the image of a holomorphic map $f:\mathbb{C} \to X \backslash D$). Let $Y$ be an irreducible component of the Zariski-closure of $R$. Suppose $\dim Y>0$. Then $D$ pulls back to a sum of $r$ ample (hence quasi-ample) divisors on $Y$ such that the intersection of any $m+1$ of them is empty. But $R\cap Y$ is a dense set of $D|_Y$-integral points on $Y$ (resp. 
the image of a holomorphic map $f:\mathbb{C} \to Y \backslash D$), contradicting part (a) proven above since $r>2mq>2m\dim Y$. Therefore $\dim Y=0$. To prove the extra hyperbolicity statements in (b) in the analytic case, we use Theorem \ref{hyperbolic}. Let $J$ be a nonempty subset of $\{1,\ldots,r\}$. Let $s=\#J$. Let $X'=\bigcap_{j\in J}D_j$. We may clearly assume that $X'\not\subset D_i$ for any $i\in \{1,\ldots,r\}\backslash J$ and that $\dim X'>0$ (note that then $s\leq m$, since $X'$ is nonempty). Let $D'=\sum_{i\in \{1,\ldots,r\}\backslash J}D_i|_{X'}$. Then $D'$ is a sum of $r-s$ ample divisors on $X'$ and the intersection of any $m-s+1$ of the ample divisors is empty since $X'$ is already an intersection of $s$ of the $D_i$'s. But $r>2mq$ implies that $r-s>2(m-s)\dim X'$: indeed, $r-s>2mq-s=2(m-s)q+s(2q-1)\geq 2(m-s)\dim X'$ since $s\leq m$, $q\geq 1$, and $\dim X'\leq q$. Therefore by what we have proven above, $X'\backslash D'$ is Brody hyperbolic. So by Theorem~\ref{hyperbolic}, $X\backslash D$ is complete hyperbolic and hyperbolically imbedded in $X$. \end{proof} We can prove our Main Conjectures in the simple case $m=1$ by reducing to Siegel's and Picard's theorems. We will need the following Bertini theorem (see \cite[Th. 7.19]{Ii}). \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} \begin{theorem} \label{Bertini} Let $|D|$ be a base-point free linear system on a nonsingular projective variety $X$ with $\dim \Phi_{D}(X)\geq 2$. Then every member of $|D|$ is connected and a general member of $|D|$ is nonsingular and irreducible. \end{theorem} \begin{lemma} \label{m1} Suppose $D=D_1+D_2$ is a Cartier divisor on a projective variety $X$ with $\kappa(D_1)>0,\kappa(D_2)>0$ and $D_1\cap D_2=\emptyset$. Then $\kappa(D)=\kappa(D_1)=\kappa(D_2)=1$. \begin{proof} By Lemma \ref{reduce}, we may assume that $X$ is nonsingular and $|D|$ is base-point free. If $\kappa(D)\geq 2$ then $\dim \Phi_{nD}(X)\geq 2$ for some $n>0$. But by Theorem \ref{Bertini}, every divisor in $|nD|$ is connected, contradicting $D_1\cap D_2=\emptyset$.
So $\kappa(D)\leq 1$. On the other hand, since $D-D_i$ is effective for $i=1,2$, multiplication by a nonzero section of $\mathcal{O}(n(D-D_i))$ embeds $L(nD_i)$ in $L(nD)$ for all $n>0$, whence $0<\kappa(D_i)\leq \kappa(D)$. Therefore $\kappa(D)=\kappa(D_1)=\kappa(D_2)=1$.
\end{proof} \end{lemma} \begin{theorem} \label{tm1} The Main Conjectures, Conjectures \ref{conjmaina},B through \ref{conj2a},B, are true if $m=1$ (i.e. $D_i\cap D_j=\emptyset$ for all $i\neq j$). \end{theorem} \begin{proof} By the above lemma, it suffices to prove the conjectures when $D=\sum_{i=1}^rD_i$ with $r>2$ and $\kappa(D)=1$. By Lemma \ref{reduce}, we may assume that $X$ is nonsingular and $|D|$ is base-point free. For $n\gg 0$, $\Phi_{nD}(X)$ is a nonsingular curve $C$ and $\Phi_{nD}$ has connected fibers. Therefore, since $D_i\cap D_j=\emptyset$ for $i\neq j$, we have $\Phi_{nD}(X\backslash D)=C\backslash\{r\text{ points}\}$. Since $r>2$, we are done by Siegel's and Picard's theorems. \end{proof} \section{A Filtration Lemma} We'll now show how some of the results in the last section may be improved by use of a linear algebra lemma on filtrations. The idea of using this lemma, as well as its statement and proof, is taken from the paper \cite{Co2}. Corvaja and Zannier used it to prove a result on integral points on surfaces, and it will be essential for our results on surfaces in the next section also. \begin{lemma} Let $V$ be a vector space of finite dimension $d$ over a field $k$. Let $V=W_1\supset W_2\supset \cdots \supset W_h$ and $V=W_1^*\supset W_2^*\supset \cdots \supset W_{h^*}^*$ be two filtrations on $V$. There exists a basis $v_1,\dots, v_d$ of $V$ which contains a basis of each $W_j$ and $W_j^*$. \end{lemma} \begin{proof} The proof will be by induction on $d$. The case $d=1$ is trivial. By refining the first filtration, we may assume without loss of generality that $W_2$ is a hyperplane in $V$. Let $W_i'=W_i^*\cap W_2$. By the inductive hypothesis there exists a basis $v_1,\ldots,v_{d-1}$ of $W_2$ containing a basis of each of $W_3,\ldots, W_h$ and $W_1',\ldots,W_{h^*}'$. If $W_i^*\subset W_2$ for all $i>1$ then $W_i'=W_i^*$ for all $i>1$. So in this case if we complete $v_1,\ldots,v_{d-1}$ to any basis of $V$ we are done.
Otherwise, let $l$ be the maximal index with $W_l^* \not\subset W_2$ and let $v_d\in W_l^*\backslash W_l'$. We claim that $B=\{v_1,\ldots,v_d\}$ is a basis of $V$ with the required property. It clearly contains a basis of $W_i$ for each $i$. Let $i\in \{1,\ldots,h^*\}$. If $i>l$ then $W_i^*=W_i'$ and so by construction $B$ contains a basis of $W_i^*$. If $i\leq l$ then $v_d\in W_l^*\backslash W_l' \subset W_i^*\backslash W_i'$. Since $B$ contains a basis $B_i'$ of $W_i'$ and $W_i'$ is a hyperplane in $W_i^*$, we see that $B_i'\cup \{v_d\}$ is a basis of $W_i^*$. \end{proof} Using our notation from the last section, suppose that for $P\in D$ we have $D_P=D_{P,1}+D_{P,2}$ where $D_{P,1}$ and $D_{P,2}$ are effective divisors with no irreducible components in common. We may then prove the following versions of Lemma \ref{large} and Theorem \ref{cor2}. \begin{lemma} \label{large2} Let $D=\sum_{i=1}^r D_i$ be a nonzero divisor on a nonsingular variety $X$ with $D_i$ effective for each $i$. For $P\in D$ and $j=1,2$, let $f_{P,j}(m,n)=l(nD-mD_{P,j})-l(nD-(m+1)D_{P,j})$. If there exists $n>0$ such that for all $P\in D$ and $j=1,2$ either $D_{P,j}=0$ or $\sum_{m=0}^{\infty}(m-n)f_{P,j}(m,n)>0$, then $nD$ is very large. \end{lemma} \begin{theorem} \label{cor22} Let $X$ be a nonsingular variety. Let $q= \dim X$. Let $D=\sum_{i=1}^{r}D_i$ be a divisor on $X$ such that $D_{P,j}$ is nef for all $P\in D$ and $j=1,2$. Suppose also that every irreducible component of $D$ is nonsingular. If \begin{equation*} D^q>2q D^{q-1}.D_{P,j}, \qquad \forall P\in D, j=1,2 \end{equation*} then $nD$ is very large for $n\gg 0$. \end{theorem} The proofs are similar to the proofs of Lemma \ref{large} and Theorem \ref{cor2}.
The only difference is that in the proof of Lemma \ref{large2}, we look at the two filtrations of $L(nD)$ given by $W_t=L(nD-tD_{P,1})$ and $W_t^*=L(nD-tD_{P,2})$ and we use the filtration lemma to construct a basis $f_1,\ldots,f_{l(nD)}$ that contains a basis for each $W_t$ and $W_t^*$. Suppose now that $D=\sum_{i=1}^rD_i$ where the $D_i$'s are effective divisors and the intersection of any $m+1$ distinct $D_i$'s is empty. We may then write $D_P=D_{P,1}+D_{P,2}$ where each of $D_{P,1}$ and $D_{P,2}$ is a sum of at most $[\frac{m+1}{2}]$ of the $D_i$'s, where $[x]$ denotes the greatest integer in $x$. Using this, we get the following improvements to the part (a)'s of Theorems \ref{cor4} and \ref{cor4b}. \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}A} \setcounter{theoremb}{\value{theorem}} \begin{theorema} \label{cor42} Let $X$ be a projective variety defined over a number field $k$. Let $q=\dim X$. Let $D=\sum_{i=1}^{r} D_i$ be a divisor on $X$ defined over $k$ such that the $D_i$'s are effective Cartier divisors with no irreducible components in common and the intersection of any $m+1$ distinct $D_i$'s is empty. If $D_i$ is quasi-ample for each $i$ and $r> 2[\frac{m+1}{2}]q$ then $X\backslash D$ is quasi-Mordellic. \end{theorema} \begin{theoremb} \label{cor4b2} Let $X$ be a complex projective variety. Let $q=\dim X$. Let $D=\sum_{i=1}^{r} D_i$ be a divisor on $X$ such that the $D_i$'s are effective Cartier divisors with no irreducible components in common and the intersection of any $m+1$ distinct $D_i$'s is empty. If $D_i$ is quasi-ample for each $i$ and $r>2[\frac{m+1}{2}]q$ then $X \backslash D$ is quasi-Brody hyperbolic. \end{theoremb} Unfortunately, we need the requirement that the $D_i$'s have no irreducible components in common so that we may have $D_{P,1}$ and $D_{P,2}$ with no irreducible components in common (which is necessary in proving Lemma \ref{large2}).
Because of this, we cannot prove a finiteness result about ample divisors as we did in the last section, since the restrictions of the $D_i$'s to a subvariety of $X$ may have irreducible components in common. \section{Surfaces} \label{ssurf} \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} We will now see that we may make the results of the last two sections more precise if we restrict to the case where $X$ is a surface. With regard to integral points, this section builds on some of the work in \cite{Co2}. Corvaja and Zannier prove, essentially, Theorem \ref{surf} \cite[Main Theorem]{Co2} and they prove Theorem \ref{surf3a} when $m=2$ and the $D_i$'s have multiples which are all numerically equivalent. The Nevanlinna-theoretic analogues of the results in \cite{Co2} were proved by Ru and Liu in \cite{Ru4}. Our results overlap with theirs as well. We first prove a consequence of the Hodge index theorem. \begin{lemma} \label{Hodge} Let $D$ be a divisor on a nonsingular surface $X$ with $D^2>0$. Then $(D^2)(E^2)\leq (D.E)^2$ for any divisor $E$ on $X$. \end{lemma} \begin{proof} By the Hodge index theorem, the intersection pairing on $\text{Num}(X)\otimes\mathbb{R}$ can be diagonalized with one $+1$ on the diagonal and all other diagonal entries $-1$. We will identify elements of $\text{Pic}(X)$ with elements of $\text{Num}(X)\otimes\mathbb{R}$ in the canonical way. Since $D^2>0$, we may extend $D$ to an orthogonal basis $B$ of $\text{Num}(X)\otimes\mathbb{R}$; the remaining basis vectors $e_i$ then satisfy $e_i.D=0$ and $e_i^2<0$. Writing $E=\alpha D+\sum_i\beta_ie_i$ in the basis $B$, we obtain $(D^2)(E^2)=(D^2)(\alpha^2D^2+\sum_i\beta_i^2e_i^2)\leq \alpha^2(D^2)^2=(D.E)^2$. \end{proof} For surfaces, the more precise version of Theorem~\ref{cor22} is \begin{theorem}[Corvaja-Zannier] \label{surf} Let $X$ be a nonsingular projective surface. Let $D=\sum_{i=1}^{r}D_i$ be a nef divisor on $X$ with the $D_i$'s effective divisors and $D^2>0$.
For $P\in D$, let $D_P=\sum_{i:P\in D_i}D_i=D_{P,1}+D_{P,2}$ where $D_{P,1}$ and $D_{P,2}$ are effective divisors with no irreducible components in common. Suppose that for all $P\in D$, $j=1,2$ and $m,n>0$ we have either $l(nD-mD_{P,j})=0$ or \begin{equation*} l(nD-mD_{P,j})-l(nD-(m+1)D_{P,j})\leq (nD-mD_{P,j}).D_{P,j}+O(1) \end{equation*} where the constant does not depend on $m$ or $n$. Let $A_{P,j}=D_{P,j}.D_{P,j},B_{P,j}=D.D_{P,j}$, and $C=D.D$ for $j=1,2$. If for all $P \in D$ and $j=1,2$ either we have $D_{P,j}=0$ or we have \begin{align*} &A_{P,j}>0 \Longrightarrow B_{P,j}^2-2A_{P,j}C+3A_{P,j}B_{P,j}+(3A_{P,j}-B_{P,j})\sqrt{B_{P,j}^2-A_{P,j}C}<0\\ &A_{P,j}=0 \Longrightarrow C>4B_{P,j}\\ &A_{P,j}<0 \Longrightarrow B_{P,j}^2-2A_{P,j}C+3A_{P,j}B_{P,j}+(3A_{P,j}-B_{P,j})\sqrt{B_{P,j}^2-A_{P,j}C}>0 \end{align*} then $nD$ is very large for $n\gg 0$ (note that by Lemma \ref{Hodge} $B_{P,j}^2-A_{P,j}C\geq 0$). \end{theorem} \begin{proof} Let $P \in D$ and $j\in \{1,2\}$ with $D_{P,j}\neq 0$. Let $A=A_{P,j}$ and $B=B_{P,j}$. By assumption, we have \begin{align*} f_{P,j}(m,n)&=\dim H^0(X,\mathcal{O}(nD-m D_{P,j}))-\dim H^0(X,\mathcal{O}(nD-(m+1)D_{P,j}))\\ &\leq nB-mA+O(1) \end{align*} where the constant in the $O(1)$ does not depend on $m$ or $n$. We have \begin{equation*} l(nD)=\frac{D^2}{2}n^2+O(n)=\frac{C}{2}n^2+O(n). \end{equation*} Solving \begin{equation*} \sum_{m=0}^{M(n)} (nB-mA+O(1))=\frac{C}{2}n^2+O(n)= l(nD) \end{equation*} for $M(n)$, we get \begin{align*} &M(n)=\frac{B \pm \sqrt{B^2-AC}}{A}n+O(1), &A \neq 0\\ &M(n)=\frac{C}{2B}n+O(1), &A=0,B\neq 0\\ &M(n)=O(n^2), &A=0,B=0. \end{align*} From now on, we will always choose the minus sign in the first expression above. We also have $\sum_{m=0}^{\infty}f_{P,j}(m,n)=l(nD)$. Therefore by Lemma \ref{CZlemma}, \begin{equation} \label{MP} \sum_{m=0}^{\infty}(m-n)f_{P,j}(m,n) \geq \sum_{m=0}^{M(n)}m(nB-mA+O(1))-nl(nD). \end{equation} Let $K=\frac{B - \sqrt{B^2-AC}}{A}$.
If $A \neq 0$ then substituting $K$ into (\ref{MP}) we get \begin{equation*} \sum_{m=0}^{\infty}(m-n)f_{P,j}(m,n)\geq\left(-\frac{A}{3}K^3+\frac{B}{2}K^2-\frac{C}{2}\right)n^3+O(n^2). \end{equation*} So if $-\frac{A}{3}K^3+\frac{B}{2}K^2-\frac{C}{2}>0$ then by Lemma \ref{large2}, $nD$ will be very large for $n\gg 0$. Algebraic simplification then gives the theorem in the case $A \neq 0$. The other cases are similar. \end{proof} \begin{lemma} \label{clemma} Let $X$ be a nonsingular projective surface. Let $C$ be an irreducible curve on $X$ and $D$ any divisor on $X$. Then \begin{equation*} h^0(D)-h^0(D-C)\leq \max\{0,1+C.D\}. \end{equation*} \end{lemma} \begin{proof} The statement depends only on the linear equivalence class of $D$, so replacing $D$ by an appropriate divisor linearly equivalent to $D$, we may assume that the support of $D$ does not contain any singular point of $C$. By Lemma \ref{exact} we have \begin{equation*} h^0(D)-h^0(D-C)\leq \dim H^0(C,\mathcal{O}(D)|_C). \end{equation*} Since the support of $D$ does not contain any singular point of $C$, $\mathcal{O}(D)|_C$ has degree $C.D$ on $C$ and $\dim H^0(C,\mathcal{O}(D)|_C)\leq \max\{0,1+C.D\}$. \end{proof} \begin{lemma} \label{slemma} Let $X$ be a nonsingular projective surface. Let $D$ be a nef divisor on $X$. Let $E$ be an effective divisor on $X$ such that either $E$ is linearly equivalent to an irreducible curve or for every irreducible component $C$ of $E$, $C.E\leq 0$. Then for all $m,n>0$ either $l(nD-mE)=0$ or \begin{equation} \label{ineq} l(nD-mE)-l(nD-(m+1)E)\leq (nD-mE).E+O(1) \end{equation} where the constant is independent of $m$ and $n$. \end{lemma} \begin{proof} In the first case, suppose $E$ is linearly equivalent to an irreducible curve $C$. If $(nD-mE).E\geq 0$ then (\ref{ineq}) holds by Lemma \ref{clemma}. If $(nD-mE).E=nD.C-mC.C<0$ then since $D$ is nef, we must have $C.C>0$.
But if $l(nD-mE)>0$ then $nD-mE$ is linearly equivalent to an effective divisor $F=G+aC$ where $a\geq 0$ and $G$ is an effective divisor not containing $C$. Since clearly $G.C\geq 0$, $F.C=(nD-mE).E<0$ implies $C.C<0$, a contradiction. So either $l(nD-mE)=0$ or (\ref{ineq}) holds in this case. Now suppose we are in the second case, where for every irreducible component $C$ of $E$, $C.E\leq 0$. Let $E=\sum_{j=1}^{k}a_j C_j$, where each $C_j$ is a distinct prime divisor. Then as in the proof of Theorem~\ref{cor2} we have \begin{multline*} l(nD-mE)-l(nD-(m+1)E) \leq \\ \sum_{j=1}^k \sum_{l=0}^{a_{j}-1}\dim H^0(C_{j},i^*_{C_{j}}\mathcal{O}(nD-mE -\sum_{j'=1}^{j-1}a_{j'}C_{j'}-lC_{j})) \end{multline*} But \begin{align*} \dim H^0(C_{j},i^*_{C_{j}}\mathcal{O}(nD-mE -\sum_{j'=1}^{j-1}a_{j'}C_{j'}-lC_{j}))&\leq \dim H^0(C_{j},i^*_{C_{j}}\mathcal{O}(nD-mE))+O(1)\\ &\leq (nD-mE).C_j+O(1), \end{align*} where the constant is independent of $m$ and $n$. The second inequality follows since $(nD-mE).C_j\geq nD.C_j\geq 0$ as $D$ is nef and $E.C_j\leq 0$. Combining the above inequalities, we then see that (\ref{ineq}) always holds in this case. \end{proof} Going back to the General Setup of Section \ref{gsetup}, we have \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}A} \setcounter{theoremb}{\value{theorem}} \begin{theorema} \label{surf3a} Let $X$ be a projective surface. Suppose the $D_i$'s have no irreducible components in common.\\\\ (a). If $D_i$ is quasi-ample for all $i$ and $r\geq 4[\frac{m+1}{2}]$ then $X\backslash D$ is quasi-Mordellic.\\ (b). If $D_i$ is ample for all $i$ and either $m$ is even and $r>2m$ or $m$ is odd and $r>2m+1$ then $X\backslash D$ is Mordellic. \end{theorema} \begin{theoremb} \label{surf3b} Let $X$ be a projective surface. Suppose the $D_i$'s have no irreducible components in common.\\\\ (a). If $D_i$ is quasi-ample for all $i$ and $r\geq 4[\frac{m+1}{2}]$ then $X\backslash D$ is quasi-Brody hyperbolic.\\ (b).
If $D_i$ is ample for all $i$ and either $m$ is even and $r>2m$ or $m$ is odd and $r>2m+1$ then $X\backslash D$ is complete hyperbolic and hyperbolically imbedded in $X$. In particular, $X\backslash D$ is Brody hyperbolic. \end{theoremb} \begin{proof} We'll prove the part (a)'s first. It suffices to prove these in the case $r=4[\frac{m+1}{2}]$. As in the proofs of Theorems \ref{cor4} and \ref{cor4b}, we may use Lemma \ref{reduce} to reduce to the case where $X$ is nonsingular, $|D_i|$ is base-point free for all $i$, and $\dim \Phi_{D_i}(X)=2$ for all $i$. Therefore $D_i^2>0$ and $D_i$ is nef for each $i$. By Lemma \ref{equi}, $D$ is equidegreelizable. So we may find positive integers $a_1,\ldots,a_r$ such that if $D'=\sum_{i=1}^{r}a_iD_i$ then $\frac{a_iD_i.D'}{(D')^2}$ is arbitrarily close to $\frac{1}{r}$ for all $i$. Since at most $m$ $D_i$'s meet at any given point, $D'_P$ is a sum of at most $m$ $a_iD_i$'s for any $P\in D'$. Therefore we may write $D'_P=D'_{P,1}+D'_{P,2}$ where each $D'_{P,j}$ is a sum of at most $[\frac{m+1}{2}]$ $a_iD_i$'s, and $D'_{P,1}$ and $D'_{P,2}$ have no irreducible components in common. Note that when $D'_{P,j}\neq 0$, we have, from our assumptions on the $D_i$'s, that $|D'_{P,j}|$ is base-point free and $\dim \Phi_{D'_{P,j}}(X)=2$. So by Theorem \ref{Bertini}, $D'_{P,j}$ is linearly equivalent to an irreducible curve. Therefore, by Lemma \ref{slemma}, we will be able to apply Theorem \ref{surf} to $D'$. The hardest case is clearly when $D'_{P,j}$ is a sum of the maximum $[\frac{m+1}{2}]$ $a_iD_i$'s. For simplicity, we will now restrict to this case. 
It follows that in the notation of Theorem \ref{surf} we may take, for all such $P$ and $j$, \begin{equation*} \left|\frac{C}{B_{P,j}}-\frac{r}{[\frac{m+1}{2}]}\right|=\left|\frac{C}{B_{P,j}}-4\right|<\epsilon \end{equation*} where by adjusting the $a_i$'s in $D'$, $\epsilon$ may be made arbitrarily close to $0$ while at the same time $\frac{A_{P,j}}{B_{P,j}}$ is positive and bounded away from $0$. Furthermore, by Lemma \ref{Hodge}, $\frac{A_{P,j}}{B_{P,j}}\leq \frac{B_{P,j}}{C}$. Let $a=\frac{A_{P,j}}{B_{P,j}}$ and $c=\frac{C}{B_{P,j}}$. Then by Theorem \ref{surf}, we must show that \begin{equation} \label{sineq} 1-2ac+3a+(3a-1)\sqrt{1-ac}<0 \end{equation} where $0<a\leq\frac{1}{c}$. When $c=4$, we get $1-5a+(3a-1)\sqrt{1-4a}$, which is easily seen to have a root only at $a=0$ for $0\leq a\leq\frac{1}{4}$, and is negative for $0<a\leq\frac{1}{4}$ since putting $a=\frac{1}{4}$ gives $-\frac{1}{4}$. So when $c=4+\epsilon$, since $a$ is bounded away from zero as $\epsilon \to 0$, we see that (\ref{sineq}) holds for small enough $\epsilon$. Therefore by Theorem \ref{surf}, $nD'$ is very large for $n\gg 0$. Since $D'$ is quasi-ample, $\Phi_{nD'}$ maps $X$ birationally onto its image in projective space for arbitrarily large $n$. Therefore by Theorems \ref{maina} and \ref{mainb} we are done, as $D$ and $D'$ have the same support. Assume the hypotheses in the part (b)'s. Let $Y$ be the Zariski-closure of a set of $D$-integral points (resp. $f(\mathbb{C})$). By what we have proven above, $\dim Y\leq 1$. If $\dim Y=1$, let $C$ be an irreducible component of this curve with $\dim C>0$. Since each $D_i$ is ample, $D_i$ must intersect $C$ in a point. Since at most $m$ $D_i$'s meet at a point and $r>2m$, we see that $D|_C$ contains at least $3$ distinct points. Therefore by Siegel's (resp. Picard's) theorem we get a contradiction as the above gives a dense set of $D|_C$-integral points (resp. a dense holomorphic map $\mathbb{C} \to C\backslash D|_C$).
This same argument and Theorem \ref{hyperbolic} show that in the analytic case $X\backslash D$ is complete hyperbolic and hyperbolically imbedded in $X$. \end{proof} It is possible to make minor improvements to this theorem. For example, \begin{theorema} \label{surf4a} Let $X$ be a nonsingular projective surface. Suppose $m=2$, $D=\sum_{i=1}^4D_i$, $D_i.D_j>0$ for $i\neq j$, $D_1^2>0$, $D_i$ is nef for all $i$, and the $D_i$'s have no irreducible components in common. Suppose also that the conclusion of Lemma \ref{slemma} holds with $D$ any positive integral linear combination of the $D_i$'s and $E=D_i$, for $i=1,2,3,4$. Then $X\backslash D$ is quasi-Mordellic. \end{theorema} \begin{theoremb} \label{surf4b} With the same hypotheses as above, in the analytic setting, $X\backslash D$ is quasi-Brody hyperbolic. \end{theoremb} \begin{proof} We first show that for any $\epsilon>0$, $(\sum_{i=1}^4e^{a_i}D_i)^2\geq e^{\frac{2}{3}\max_i\{a_i\}}$ on the plane $(1+\epsilon)a_1+\sum_{i=2}^4a_i=0$. If $\max_i\{a_i\}=a_1$ then $(\sum_{i=1}^4e^{a_i}D_i)^2\geq e^{2a_1}D_1^2\geq e^{2a_1}$. Otherwise, if $\max_i\{a_i\}=a_j$, $j>1$, then clearly we must have $a_k\geq -\frac{a_j}{3}$ for some $k\neq j$. Then $(\sum_{i=1}^4e^{a_i}D_i)^2\geq e^{a_j+a_k}D_j.D_k\geq e^{\frac{2}{3}a_j}$ since $D_j.D_k\geq 1$. Therefore $(\sum_{i=1}^4e^{a_i}D_i)^2$ takes a minimum on the plane $(1+\epsilon)a_1+\sum_{i=2}^4a_i=0$. So looking at the Lagrange multiplier equations as in Lemma \ref{equi}, there exist real numbers $b_i>0,\lambda>0$ (depending on $\epsilon$) such that if $D'=\sum_{i=1}^4b_iD_i$ then $b_1D_1.D'=(1+\epsilon)\lambda$ and $b_iD_i.D'=\lambda$ for $i=2,3,4$, or written differently, $\frac{(D')^2}{b_1D_1.D'}=\frac{4+\epsilon}{1+\epsilon}$ and $\frac{(D')^2}{b_iD_i.D'}=4+\epsilon>4$ for $i=2,3,4$. Note also that it follows from the inequality we proved above that in terms of $a_1,\ldots,a_4$, the region where $(\sum_{i=1}^4e^{a_i}D_i)^2$ takes a minimum may be bounded independently of $\epsilon$.
Therefore there exist positive constants $K,K'$ independent of $\epsilon$, such that we may choose $K<b_i<K'$ for all $i$, and in particular, as $\epsilon\to 0$, $\frac{(b_1D_1)^2}{b_1D_1.D'}$ is bounded away from zero. We now choose positive integers $c_i$ such that $\frac{c_i}{c_j}$ closely approximates $\frac{b_i}{b_j}$, and let $E=\sum_{i=1}^4c_iD_i$. Having chosen $\epsilon$ small enough and the integers $c_i$ correctly, we will then have $E^2>4c_iD_i.E$ for $i=2,3,4$ and we will have $\frac{E^2}{c_1D_1.E}$ close enough to $4$ (see the proof of Theorem \ref{surf3a}, B) so that the inequalities in Theorem \ref{surf} hold for $E_{P,j}=c_iD_i$ for any $i$. Since $m=2$, we may always take $E_{P,j}=0$ or $E_{P,j}=c_iD_i$ for some $i$. By our hypotheses, we may apply Theorem \ref{surf}, so $nE$ is very large for $n\gg 0$. Since $D_1^2>0$, $E$ is quasi-ample. So we are done by Theorems \ref{maina} and \ref{mainb}, as $D$ and $E$ have the same support. \end{proof} \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} \begin{example} Let $X=\mathbb{P}^1\times \mathbb{P}^1$. Let $D_1=\{0\}\times \mathbb{P}^1, D_2=\mathbb{P}^1\times \{0\}$, and let $D_3$ and $D_4$ be ample effective divisors on $X$. Suppose also that the intersection of any three of the $D_i$'s is empty. Let $D=\sum_{i=1}^4D_i$. Then, after reindexing the $D_i$'s so that the ample divisor $D_3$ plays the role of $D_1$ (so that the hypothesis $D_1^2>0$ holds), the hypotheses of Theorems \ref{surf4a}, B are satisfied and $X\backslash D$ is quasi-Mordellic and quasi-Brody hyperbolic. Note also that $X\backslash (D_1\cup D_2)\cong \mathbb{A}^2\cong \mathbb{P}^2\backslash \{\text{a line}\}$. Therefore, we can also prove corresponding results for $\mathbb{P}^2\backslash D$, where $D$ is a sum of three effective divisors on $\mathbb{P}^2$. \end{example} Recently, Corvaja and Zannier \cite{Co6} have shown another way in which their methods yield results on $\mathbb{P}^2\backslash D$ where $D$ is a sum of three effective divisors satisfying certain hypotheses. We have the following general corollary to the above theorems.
\renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}A} \setcounter{theoremb}{\value{theorem}} \begin{corollarya} Let $X$ be a projective surface. Suppose $m=2$, $D=\sum_{i=1}^4D_i$, $D_1,D_2,D_3$ are quasi-ample, $\kappa(D_4)>0$, and the $D_i$'s have no irreducible components in common. Then $X\backslash D$ is quasi-Mordellic. \end{corollarya} \begin{corollaryb} Let $X$ be a projective surface. Suppose $m=2$, $D=\sum_{i=1}^4D_i$, $D_1,D_2,D_3$ are quasi-ample, $\kappa(D_4)>0$, and the $D_i$'s have no irreducible components in common. Then $X\backslash D$ is quasi-Brody hyperbolic. \end{corollaryb} \begin{proof} We first reduce to the situation of Lemma \ref{reduce}. So $X$ is nonsingular, each $D_i$ is nef, and $D_1^2,D_2^2,D_3^2>0, D_4^2\geq 0$. By Lemma \ref{m1}, $D_i.D_j>0$ for $i\neq j$. For $i=1,2,3$ and $n>0$, $nD_i$ is linearly equivalent to an irreducible curve by Theorem \ref{Bertini}, since by our reductions $|nD_i|$ is base-point free and $\dim \Phi_{nD_i}(X)=2$. The same holds for $nD_4$ if $D_4^2>0$. If $D_4^2=0$, then for every irreducible component $C$ of $D_4$ we must have $C.D_4=0$ since $D_4$ is nef. This verifies the hypotheses of Lemma \ref{slemma} with $E=D_i$ for $i=1,2,3,4$. Therefore, we may apply Theorems \ref{surf4a}, B. \end{proof} We note that one can construct examples where $m=2$, $D_1$ and $D_2$ are quasi-ample, $\kappa(D_3)=\kappa(D_4)=1$, the $D_i$'s have no irreducible components in common, and there exist dense sets of $D$-integral points. We now prove a theorem in the case where we only have $\kappa(D_i)>0$ for all $i$. \begin{theorema} Let $X$ be a projective surface. Suppose the $D_i$'s have no irreducible components in common. If $\kappa(D_i)>0$ for all $i$ and $r> 4[\frac{m+1}{2}]$ then there does not exist a Zariski-dense set of $D$-integral points on $X$. \end{theorema} \begin{theoremb} Let $X$ be a projective surface. Suppose the $D_i$'s have no irreducible components in common.
If $\kappa(D_i)>0$ for all $i$ and $r> 4[\frac{m+1}{2}]$ then there does not exist a holomorphic map $f:\mathbb{C}\to X\backslash D$ with Zariski-dense image. \end{theoremb} \begin{proof} We first reduce to the situation of Lemma \ref{reduce}, so in particular $|D_i|$ is base-point free for all $i$. In this case, for any subset $I\subset \{1,\ldots,r\}$, if $D_I=\sum_{i\in I}D_i$ is quasi-ample, there exists $n_I>0$ such that $\dim \Phi_{n_ID_I}(X)=2$. Since the $D_i$'s are nef, this happens if and only if $D_i.D_j>0$ for some $i,j\in I$. Let $N=\prod_{I}n_I$, where $I$ ranges over the subsets such that $D_I$ is quasi-ample. Let $D'=ND$ and $D_i'=ND_i$. Then we see that for any nonnegative integral linear combination, $E$, of the $D_i'$'s, if $E$ is quasi-ample, then $E$ is linearly equivalent to an irreducible divisor since $|E|$ is base-point free and $\dim \Phi_E(X)=2$, and otherwise, for every irreducible component $C$ of $E$ we have $C.E=0$. Therefore, by Lemma \ref{slemma}, replacing $D$ by $D'$, we may assume that we may apply Theorem \ref{surf} to any nonnegative linear combination of the $D_i$'s. By Theorem \ref{tm1}, we are done if any three of the $D_i$'s have pairwise empty intersection. So suppose that this is not the case. Then we have $m\geq 2$ and $r\geq 5$. We now show that $D$ is equidegreelizable. As in the proof of Lemma \ref{equi}, it suffices to show that $(\sum_{i=1}^re^{a_i}D_i)^2$ attains a minimum on the plane $\sum_{i=1}^ra_i=0$. For this, it will suffice to show that $(\sum_{i=1}^re^{a_i}D_i)^2\geq e^{\frac{1}{3}\max_i\{a_i\}}$. Suppose $\max_i\{a_i\}=a_j$ for $j\in \{1,\ldots,r\}$. Let $a_k$ and $a_l$ be some choice of the next largest $a_i$'s. Clearly, since $\sum_{i=1}^ra_i=0$, we must have $a_k,a_l\geq -\frac{2a_j}{r-2}\geq -\frac{2}{3}a_j$ since $r\geq 5$. We now show that either $D_j.D_k\geq 1$ or $D_j.D_l\geq 1$. Suppose otherwise. Then by our assumption, we must have $D_k.D_l\geq 1$.
But then $D_k+D_l$ is quasi-ample, and so we must have $(D_k+D_l).D_j\geq 1$ by Lemma \ref{m1}, a contradiction. So if, say, $D_j.D_k\geq 1$ then $(\sum_{i=1}^re^{a_i}D_i)^2\geq e^{a_j+a_k}D_j.D_k\geq e^{\frac{1}{3}\max_i\{a_i\}}$, as was to be shown. Since $D$ is equidegreelizable, there exist positive integers $c_i$ such that $D'=\sum_{i=1}^rc_iD_i$, and $\frac{c_iD_i.D'}{(D')^2}$ is as close as we like to $\frac{1}{r}$. Since we may choose $D'_{P,j}$ to consist of a sum of at most $[\frac{m+1}{2}]$ $c_iD_i$'s and $r>4[\frac{m+1}{2}]$, we may choose the $c_i$'s so that we always have $C>4B_{P,j}$. We also have $A_{P,j}\geq 0$. But then, as we have seen previously, the inequalities of Theorem \ref{surf} will be satisfied. \end{proof} \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} To summarize some of the results in this section: \begin{theorem} Let $X$ be a projective surface. Suppose that $m\leq 2$ and the $D_i$'s have no irreducible components in common. Then all of the Main Conjectures (Conjectures \ref{conjmaina},B-\ref{conj2a},B) are true. \end{theorem} \section{Small $S$} \label{ssmall} We now prove some theorems in the special case that $\#S$ is small relative to the number of components of $D$. Throughout we use the general Diophantine setup of Section \ref{gsetup}. \begin{theorem} \label{sthm} Suppose that $D_i$ is defined over $k$ for all $i$. Let $s=\#S$.\\\\ (a). If $D_i$ is quasi-ample for all $i$ and $r>ms$ then there exists a proper closed subset $Z\subset X$ such that for any set $R$ of $(D,S)$-integral points on $X$, $R\backslash Z$ is finite.\\ (b). If $D_i$ is ample for all $i$ and $r>ms$ then all sets of $(D,S)$-integral points on $X$ are finite. \end{theorem} \begin{proof} We reduce to the case where $X$ is nonsingular. We prove part (a) first. Our proof is a modification of the proof of Theorem~\ref{maina}. Suppose $R$ is a Zariski-dense set of $(D,S)$-integral points on $X$.
Then as in the proof of Theorem \ref{maina}, there exists a sequence $\{P_t\}$ in $R$ such that for each $v$ in $S$, $\{P_t\}$ converges to a point $P_v\in X(k_v)$ and $\{P_t\}$ is Zariski-dense in $X$. Since $r>ms$, there exists an index $i$ such that $P_v\notin D_i(k_v)$ for all $v\in S$. Since $D_i$ is quasi-ample, it follows from Lemma \ref{exact} and the argument in Lemma \ref{nefbig} that for some $n>0$, $l(nD_i-\sum_{j\neq i}D_j)>0$. Then $(n+1)D_i-\sum_{j\neq i}D_j$ is quasi-ample, and so for some $n'>0$ and $E=n'(n+1)D_i-n'\sum_{j\neq i}D_j$, $\Phi_{E}$ is birational. Now let $S'$ be the set of places $v\in S$ such that $P_v \in D(k_v)$. Let $\phi_1,\ldots,\phi_{l(E)}$ be a basis for $L(E)$ over $k$. Then for any $v\in S'$, $\text{ord}_F\prod_{t=1}^{l(E)}\phi_t>0$ for every irreducible component $F$ of $D$ such that $P_v \in F(k_v)$. This is precisely what we used the largeness hypothesis for in the proof of Theorem \ref{maina}. Let $\phi=(\phi_1,\ldots,\phi_{l(E)})$. Let $L_{jv}=\phi_j$ for $j=1,\ldots,l(E)$ and $v\in S$. Then the same proof as in Theorem \ref{maina} (replacing $D$ by $E$ in appropriate places) proves part (a). For (b), let $R$ be a set of $(D,S)$-integral points on $X$. Let $Y$ be an irreducible component of the Zariski-closure, $\overline{R}$, of $R$. Suppose $\dim Y>0$. Then $D$ pulls back to a sum of $r$ ample effective divisors on $Y$ such that at most $m$ of them meet at a point. But then part (a) applied to $D|_Y$ contradicts the fact that $R\cap Y$ is a dense set of $(D|_Y,S)$-integral points. Therefore $\dim Y=0$. \end{proof} When $\#S=1$ this theorem gives a particularly strong result. \begin{corollary} \label{qs1} Suppose $\#S=1$. If $D_i$ is ample for all $i$ and $r>m$ then all sets of $(D,S)$-integral points on $X$ are finite.
\end{corollary} It follows from the Dirichlet unit theorem that $\#S=1$ if and only if $\mathcal{O}_{k,S}^*$ is finite, which happens if and only if $\mathcal{O}_{k,S}$ is $\mathbb{Z}$ or the ring of integers of a complex quadratic field. The inequality in Corollary \ref{qs1} is sharp, as the next example shows. \begin{example} Let $X=\mathbb{P}^n$. Let $k=\mathbb{Q}$ and let $S$ consist only of the prime at infinity. Let $D_i$ be the divisor on $\mathbb{P}^n$ defined by $x_i=0$, where $x_0,\ldots, x_n$ are homogeneous coordinates on $\mathbb{P}^n$. Let $D=\sum_{i=1}^n a_iD_i$. Let $m=\sum_{i=1}^na_i$. Then the set of points with $x_0\in \mathbb{Z}$ and $x_i=1$, $i=1,\ldots,n$, is an infinite set of $(D,S)$-integral points on $X$ and $D$ is a sum of $m$ ample divisors defined over $\mathbb{Q}$. \end{example} \begin{theorem} \label{Pic} Let $X$ be a nonsingular projective variety. Suppose $\#S=1$. Let $\rho$ denote the Picard number of $X$ and let $n$ be the rank of the group of $k$-rational points of $\text{Pic}^0(X)$. Suppose that the $D_i$'s are defined over $k$ for all $i$ and have no irreducible components in common. If $r>\rho+n$ then there does not exist a dense set of $(D,S)$-integral points on $X$. \end{theorem} Our proof is essentially the first half of the proof of Theorem 2.4.1 in \cite{Vo2}. \begin{proof} It follows from the definitions that the group of divisor classes with a representative defined over $k$ has rank at most $\rho+n$. Since $r>\rho+n$, there exists a nontrivial linear combination of the $D_i$'s that is principal, equal to $(f)$ for some nonconstant rational function $f$ on $X$. Let $R$ be a set of $(D,S)$-integral points on $X$. Since all of the poles of $f$ lie in $D$, there exists a nonzero $a\in k$ such that $af$ takes on integral values on $R$. Since the poles of $\frac{1}{f}$ also lie in $D$, the same reasoning applies to $\frac{1}{f}$. Therefore $f(R)$ lies in only finitely many cosets of the group of units $\mathcal{O}_{k,S}^*$.
But since $\#S=1$, $\mathcal{O}_{k,S}^*$ is finite. Therefore $R$ lies in the finite union of proper subvarieties of $X$ of the form $f=a$ for a finite number of $a\in k$. \end{proof} We note that the requirement in all of these results that not only $D$ be defined over $k$, but that the $D_i$'s be defined over $k$ is absolutely necessary. For example, if $X=\mathbb{P}^1$, $k=\mathbb{Q}$, $S=\{\infty\}$, and $D=P+Q$ where $P$ and $Q$ are conjugate over a real quadratic field, then from Pell's equation there do exist dense sets of $(D,S)$-integral points on $X$. \section{Results on the General Conjectures} \label{SVgeneral} We will now consider the case where the integral points are allowed to vary over number fields of a bounded degree over some number field $k$. As an application of their results on surfaces in \cite{Co2}, Corvaja and Zannier prove \begin{theorem} Let $X$ be a projective curve defined over a number field $k$. Let $S$ be a finite set of places of $k$ containing the archimedean places. Let $D=\sum_{i=1}^r P_i$ be a divisor on $X$ defined over $k$ such that the $P_i$'s are distinct points. If $r>4$ then all sets of $D$-integral points on $X$ quadratic over $k$ are finite. \end{theorem} This theorem can also be obtained as a consequence of a result of Vojta (see Section \ref{sVo}). Using the same technique Corvaja and Zannier used, looking at symmetric powers of $X$, our higher-dimensional results give \begin{theorem} Let $n=\dim X$. If $D_i$ is ample for all $i$ and $r>2d^2mn$ then all sets of $D$-integral points on $X$ of degree $d$ over $k$ are finite. \end{theorem} \begin{proof} Suppose $r>2d^2mn$ and let $R\subset X(\overline{k})$ be a set of $D$-integral points on $X$ of degree $d$ over $k$. It suffices to prove the finiteness of $R$ in the case where for every $P\in R$ we have $[k(P):k]=d$. Let $X^d$ be the $d$-fold product of $X$ with itself, and let $\pi_i:X^d\to X$ be the $i$-th projection map for $i=1,\ldots, d$. 
Let $\Sym^d X$ denote the $d$-fold symmetric product of $X$ with itself and let $\phi:X^d\to \Sym^d X$ be the natural map. Let $E_i=\phi(\pi_1^*D_i)$ and $E=\sum_{i=1}^rE_i$. We have that $\phi^*E_i=\sum_{j=1}^d \pi_j^*D_i$, which is ample on $X^d$. Since $\phi$ is a finite surjective morphism, it follows that $E_i$ is ample. By looking at the corresponding statement on $X^d$ we see that the intersection of any $dm+1$ distinct $E_i$'s is empty. We also have $\dim \Sym^d X=dn$. Since $r>2(dm)(dn)$, by Theorem \ref{cor4}(b) we have that all sets of $k$-rational $E$-integral points on $\Sym^d X$ are finite. For $P\in R$ let $P^{(1)},\ldots,P^{(d)}$ denote the $d$ conjugates of $P$ over $k$. Then $R'=\{(P^{(1)},\ldots,P^{(d)})\in X^d|P\in R\}$ is a set of $\sum_{i=1}^d\pi_i^*D$-integral points on $X^d$. So $\phi(R')$ is a set of $E$-integral points on $\Sym^d X$. Note that $\phi(R')$ is actually a set of $k$-rational points on $\Sym^d X$. Therefore, from above, $\phi(R')$ must be finite, and so clearly $R$ must be finite. \end{proof} When $\#S=1$ we have the stronger theorem \begin{theorem} Let $X$ be a projective variety defined over $k$, where $k=\mathbb{Q}$ or a complex quadratic field. Let $S=\{v_\infty\}$ consist of the unique archimedean place of $k$. If $D_i$ is ample and defined over $k$ for all $i$ and $r>dm$ then all sets of $D$-integral points on $X$ of degree $d$ over $k$ are finite. \end{theorem} \begin{proof} The proof is identical to the proof of the previous theorem, except that instead of using Theorem \ref{cor4}(b) we use Corollary \ref{qs1}. \end{proof} \section{A Result of Faltings} \label{Faltings} In \cite{Fa}, Faltings proves the finiteness of integral points on the complements of certain irreducible singular curves in $\mathbb{P}^2$. Recently a similar result has also been obtained by Zannier in \cite{Co4}.
We show, as simple corollaries of our work on surfaces, how we may improve both results on integral points, and at the same time we will prove the analogous statement for holomorphic curves. Let $X$ be an irreducible nonsingular projective surface over an algebraically closed field $k$ of characteristic $0$. Let $\mathcal{L}=\mathcal{O}_X(L)$ be an ample line bundle on $X$ with $K_X+3L$ ample. Assume that the global sections $\Gamma(X,\mathcal{L})$ generate\\\\ (a). $\mathcal{L}_x/\mathfrak{m}_x^4\mathcal{L}_x$ for all points $x\in X$\\ (b). $\mathcal{L}_x/\mathfrak{m}_x^3\mathcal{L}_x \bigoplus \mathcal{L}_y/\mathfrak{m}_y^3\mathcal{L}_y$ for all pairs $\{x,y\}$ of distinct points\\ (c). $\mathcal{L}_x/\mathfrak{m}_x^2\mathcal{L}_x \bigoplus \mathcal{L}_y/\mathfrak{m}_y^2\mathcal{L}_y \bigoplus \mathcal{L}_z/\mathfrak{m}_z^2\mathcal{L}_z$ for all triples $\{x,y,z\}$ of distinct points.\\ A three-dimensional subspace $E\subset \Gamma(X,\mathcal{L})$ that generates $\mathcal{L}$ gives a morphism $f_E:X \to \mathbb{P}^2$. Faltings studies this map when $E$ is suitably generic. \begin{definition} \label{Fgeneric} Let $E\subset \Gamma(X,\mathcal{L})$ be a three-dimensional subspace. We call $E$ generic if:\\\\ (a). $E$ generates $\mathcal{L}$.\\ (b). The discriminant locus $Z\subset X$ of $f_E$ is nonsingular.\\ (c). The restriction of $f_E$ to $Z$ is birational onto its image $D\subset \mathbb{P}^2$.\\ (d). $D$ has only cusps and nodes as singularities. \end{definition} Three-dimensional subspaces $E\subset \Gamma(X,\mathcal{L})$ are naturally parametrized by a Grassmannian $G$. Let $n=L^2$. It is then proven that \begin{theorem} \label{F2} With notation as above\\ (a). Generic $E$'s form a dense open subset $G'$ of $G$.\\ (b). For generic $E$ let $\pi:Y\to X \to \mathbb{P}^2$ denote the associated normal Galois covering. Then $Y$ is smooth, $Z$ is irreducible, and the covering group $\mathrm{Aut}(Y/\mathbb{P}^2)$ is the full symmetric group $S_n$.
\end{theorem} Faltings also proves \begin{theorem} \label{DP} Let $\pi^*D$ be the pullback of $D$ to $Y$. Then $\pi^*D=2\sum_{1\leq i<j \leq n}Z_{ij}=\sum_{i=1}^nA_i$ where $Z_{ij}$ is effective and nonsingular for every $i$ and $j$, and $A_i=\sum_{j\neq i}Z_{ij}$ is the pullback of $Z$ under the $i$th projection map $Y\to X$. Furthermore, let $P\in \pi^*D$. Then one of the following holds:\\\\ (a). $\pi(P)$ is a smooth point of $D$ and $P\in Z_{ij}$ for exactly one $\{ij\}$.\\ (b). $\pi(P)$ is a node of $D$ and exactly two components $Z_{ij}$ and $Z_{kl}$ of $\pi^*D$ for disjoint $\{ij\}$ and $\{kl\}$ intersect at $P$.\\ (c). $\pi(P)$ is a cusp of $D$ and exactly three components $Z_{ij},Z_{ik},Z_{jk}$ intersect at $P$ for some $i,j,k$. \end{theorem} Let $d=\deg D$ and assume that everything above is defined over a number field. The main result of \cite{Fa} is \begin{theorem}[Faltings] \label{Famain} If $dL-\alpha Z$ is ample on $X$ for some $\alpha>12$ then $\mathbb{P}^2\backslash D$ is Mordellic. \end{theorem} Zannier proves this unconditionally if the Kodaira dimension of $X$ is nonnegative, and more generally he gives a numerical condition replacing the condition on $L$ and $Z$ above. We will prove Theorem \ref{Famain} unconditionally, i.e. without the ampleness condition. We also prove the analogue for holomorphic curves. Under the assumptions discussed above, we prove \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}A} \setcounter{theoremb}{\value{theorem}} \begin{theorema} $\mathbb{P}^2\backslash D$ is Mordellic. \end{theorema} \begin{theoremb} $\mathbb{P}^2\backslash D$ is complete hyperbolic. In particular, $\mathbb{P}^2\backslash D$ is Brody hyperbolic. \end{theoremb} \begin{proof} Since $\pi:Y\backslash \pi^*D\to \mathbb{P}^2\backslash D$ is a finite \'etale covering, the problem is reduced to proving the theorems for $Y\backslash \pi^*D$. The assumption (a) on $L$ given at the beginning of the section implies that $n=L^2\geq 9$.
We have $\pi^*D=\sum_{i=1}^n A_i$ and that $A_i$ is the pullback of $Z$ under the $i$th projection map $Y\to X$. Therefore $A_i$ is ample as the projection is a finite map (recall that we assumed $Z\sim K_X+3L$ was ample). It follows from Theorem \ref{DP} that at most four $A_i$'s meet at a point. Therefore we're done by Theorems \ref{surf3a}(b) and \ref{surf3b}(b) with $r\geq 9$ and $m=4$. That $\mathbb{P}^2\backslash D$ is complete hyperbolic follows from the fact that $Y\backslash \pi^*D$ is complete hyperbolic (see \cite{La2}). \end{proof} \section{Remarks on the Siegel and Picard-type Conjectures} \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}A} \setcounter{theoremb}{\value{theorem}} \label{Remarks} In this section we will show the sharpness of the inequalities and the necessity of certain hypotheses in many of the conjectures, how our conjectures relate to other conjectures that have been made, and what special cases of the conjectures are known by previous work. \subsection{Main Conjectures} \subsubsection{Examples Limiting Improvements to the Conjectures} Our main goal here is to show that the inequalities in all of the main conjectures cannot be improved. We'll start with two fundamental examples on $\mathbb{P}^n$. \begin{examplea} \label{NHypera} Let $X=\mathbb{P}^n$. Let $D=\sum_{i=0}^nD_i$, where $D_i$ is the hyperplane defined by $x_i=0,i=0,\ldots,n$. Let $k$ be a number field with an infinite number of units. Let $S$ be the set of archimedean places. Let $R$ be the set of points in $\mathbb{P}^n$ which have a representation where the coordinates are all units. Then $R$ is a set of $D$-integral points on $X$. It follows from the $S$-unit lemma that $R$ is Zariski-dense in $X$. \end{examplea} \begin{exampleb} \label{NHyper} Let $X$ and $D$ be as above. Let $f_i,i=0,\ldots,n$ be linearly independent entire functions. Let $f:\mathbb{C}\to X$ be defined by $f=(e^{f_0},\ldots,e^{f_n})$. Clearly the image of $f$ does not intersect $D$. 
It follows from Borel's lemma that the image of $f$ is Zariski-dense in $X$. \end{exampleb} We will give two variants of these examples which show that the inequalities in the Main Siegel and Picard-type Conjectures, Conjectures~\ref{conjmaina} and \ref{conjmainb}, are sharp for all values of $m$ and $\kappa_0$. \begin{examplea} \label{var1a} Let $X$, $k$, $S$, $D$, $D_i$, and $R$ be as in Example \ref{NHypera}. Let $Y=X^q$ and let $\pi_j$ be the $j$th projection map from $Y$ to $X$ for $j=1,\ldots,q$. Let $R'=R^q\subset Y$. Let $E_{i,j}=\pi_j^*D_i$ for $0\leq i \leq n,1\leq j \leq q$. Let $1\leq m\leq nq$. Let $r=\left[m+\frac{m}{n}\right]$ and $r'=\left[\frac{r}{n+1}\right]=\left[\frac{m}{n}\right]$. Let \begin{equation*} E=\sum_{j=1}^{r'}\sum_{i=0}^n E_{i,j}+\sum_{i=1}^{r-r'(n+1)}E_{i,r'+1}. \end{equation*} Then $R'$ is a set of $E$-integral points on $Y$ and it follows, again, from the $S$-unit lemma that $R'$ is Zariski-dense in $Y$. Furthermore, there are at most $nr'+r-r'(n+1)=r-r'=m$ of the $E_{i,j}$'s in $E$ meeting at a given point, and $E$ is a sum of $r=\left[m+\frac{m}{n}\right]$ of the $E_{i,j}$'s with $\kappa(E_{i,j})=n$ for all $i$ and $j$. \end{examplea} \begin{exampleb} \label{var1b} Same as the above example, except that instead of $R'$, we use a holomorphic map $f:\mathbb{C}\to Y\backslash E$ given by $f=(e^{f_{0,1}},\ldots,e^{f_{n,1}})\times\cdots \times(e^{f_{0,q}},\ldots,e^{f_{n,q}})$ where the $f_{i,j}$'s are linearly independent entire functions. It follows from Borel's lemma that $f$ has Zariski-dense image in $Y$. \end{exampleb} The second variants of Examples \ref{NHypera} and \ref{NHyper} are \begin{examplea} \label{var2a} Let $m$ and $n$ be positive integers. Let $X$, $k$, $S$, $D_i$, $r$, and $r'$ be as in Example \ref{var1a}. Let $D=\sum_{i=0}^{n}a_iD_i$ where $a_i=r'+1$ for $i=0,\ldots,r-(n+1)r'-1$ and $a_i=r'$ for $i=r-(n+1)r',\ldots,n$.
Then counting the $D_i$'s with their multiplicity in $D$, $D$ is a sum of $\sum_{i=0}^{n}a_i=r$ effective divisors such that the intersection of any $m+1$ of them is empty. We have $\kappa(D_i)=n$ for all $i$. By Example \ref{NHypera} there exist dense sets of $D$-integral points on $X$. \end{examplea} \begin{exampleb} \label{var2b} The same example as above, except we use the holomorphic map from Example \ref{NHyper}. \end{exampleb} The above four examples also show that one cannot improve the inequalities in Conjectures \ref{conj1a},B and \ref{conj1ab},B. We have not yet discussed the $\kappa_0=0$ case. If $D$ is a divisor on a projective variety $X$, then by blowing up subvarieties of $D$ on $X$ we may get a divisor $D'$ on $X'$ with arbitrarily many components and $X\backslash D \cong X'\backslash D'$. In this case, the new components $C$ have $\kappa(C)=0$. So, as is suggested by the $\kappa_0$ in the denominators of the inequalities, there is no possible result of the type in the Main Siegel and Picard-type Conjectures if one allows divisors $D_i$ with $\kappa(D_i)=0$. However, all is not lost in this case. If we are willing to include in the inequalities numerical invariants of the variety such as the Picard number, then it is possible to give theorems for arbitrary effective divisors. We will discuss this in Section \ref{mainknown}. There are also examples showing that the exceptional sets may be dense, even if the hypotheses of the Main Siegel and Picard-type Conjectures are satisfied. For example, let $X=\mathbb{P}^1\times \mathbb{P}^1$ and let $D=\sum_{i\in I} P_i\times \mathbb{P}^1$ be a finite sum with $P_i\in \mathbb{P}^1(k), i \in I$, for some number field $k$. Then it is easy to show that $\Excd(X\backslash D)=\Exch(X\backslash D)=X\backslash D$. For the Main Conjectures for Ample Divisors we have \begin{examplea} Let $D$ be the sum of any $r$ hyperplanes in general position (i.e. 
the intersection of any $n+1$ of them is empty) in $\mathbb{P}^n$ with $n<r\leq 2n$. Assume also that $D$ is defined over a number field. Then one may show that there exists a linear subspace $L\subset \mathbb{P}^n$ with $\dim L=\left[\frac{n}{r-n}\right]$ such that $L$ contains a dense set of $D|_L$-integral points (for some $k$ and $S$) (see \cite{Fu2}, \cite{Gr}, and \cite{No} for the constructions). \end{examplea} \begin{exampleb} In the same situation as above, one may also show that there exists a holomorphic map $f:\mathbb{C}\to L\backslash D$ with Zariski-dense image. \end{exampleb} In the simplest case, where $r=2m=2n$, we may simply take $L$ to be a line that passes through points $P$ and $Q$ where $P$ is the intersection of, say, the first $n$ hyperplanes and $Q$ is the intersection of the last $n$ hyperplanes. Then $L\backslash (L\cap D)$ is a $\mathbb{P}^1$ minus two points, and so we see that we cannot have finiteness or constancy for the objects in question. \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} \begin{remark} \label{rbig} It is quite possible that our Main Conjectures for Ample Divisors may be extended to quasi-ample divisors. Let $D$ be a quasi-ample divisor on a projective variety $X$. Let $n>0$ be large enough such that the map $\Phi=\Phi_{nD}$, corresponding to $nD$, is birational. It is then quite plausible that all of our conclusions that held for ample divisors generalize to quasi-ample divisors if we state things in terms of $\Phi$, that is, replace $\dim \Excd(X\backslash D)$ and $\dim \Exch(X\backslash D)$ by $\dim \Phi(\Excd(X\backslash D))$ and $\dim \Phi(\Exch(X\backslash D))$ in the conjectures. \end{remark} \subsubsection{Relation to Vojta's Main Conjecture} \label{mainrelation} We now show how some special cases of the Main Conjectures are related to Vojta's Main Conjecture.
If $D$ is a divisor on a nonsingular complex variety $X$, we say that $D$ has normal crossings if every point $P\in D$ has an analytic open neighborhood in $X$ with analytic local coordinates $z_1,\ldots,z_n$ such that $D$ is locally defined by $z_1\cdot z_2\cdots z_i=0$ for some $i$. Inspired by results in equi-dimensional Nevanlinna theory, Vojta made the following conjecture in \cite{Vo2}. \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}A} \setcounter{theoremb}{\value{theorem}} \begin{conjecturea}[Vojta's Main Conjecture] \label{Vmain} Let $X$ be a nonsingular projective variety with canonical divisor $K$. Let $D$ be a normal crossings divisor on $X$, and let $k$ be a number field over which $X$ and $D$ are defined. Let $A$ be a quasi-ample divisor on $X$. Let $\epsilon>0$. Then there exists a proper Zariski-closed subset $Z=Z(X,D,\epsilon,A)$ such that \begin{equation*} m(D,P)+h_K(P)\leq \epsilon h_A(P)+O(1) \end{equation*} for all points $P\in X\backslash Z$. \end{conjecturea} Similarly, the analogue is conjectured for holomorphic curves \begin{conjectureb} \label{Vmainb} Let $X$ be a nonsingular complex projective variety with canonical divisor $K$. Let $D$ be a normal crossings divisor on $X$. Let $A$ be a quasi-ample divisor on $X$. Let $\epsilon>0$. Then there exists a proper Zariski-closed subset $Z=Z(X,D,\epsilon,A)$ such that for all holomorphic maps $f:\mathbb{C}\to X$ whose image is not contained in $Z$, \begin{equation*} m(D,r)+T_K(r)\leq \epsilon T_A(r)+O(1) \end{equation*} holds for all $r$ outside a set of finite Lebesgue measure. \end{conjectureb} Qualitatively, these conjectures have the following simple consequences. \begin{conjecturea} \label{conj3} Let $X$ be a nonsingular projective variety, defined over a number field $k$. Let $K$ be the canonical divisor of $X$, and $D$ a normal crossings divisor on $X$, defined over $k$. Suppose that $K+D$ is quasi-ample. Then $X\backslash D$ is quasi-Mordellic. 
\end{conjecturea} \begin{conjectureb} \label{conj3b} Let $X$ be a nonsingular complex projective variety. Let $K$ be the canonical divisor of $X$, and $D$ a normal crossings divisor on $X$. Suppose that $K+D$ is quasi-ample. Then $X\backslash D$ is quasi-Brody hyperbolic. \end{conjectureb} \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} To relate these conjectures to our conjectures we recall the following theorem, which is a consequence of Mori theory \cite[Lemma 1.7]{Mo}. \begin{theorem} Let $X$ be a nonsingular complex projective variety of dimension $n$. If $D_1,\ldots,D_{n+2}$ are ample divisors on $X$ then $K+\sum_{i=1}^{n+2}D_i$ is ample. \end{theorem} So when $X$ is nonsingular, the $D_i$'s are ample, and $D$ has normal crossings, we see that Conjectures \ref{conj1ab} and \ref{conj1bb} are consequences of Conjectures \ref{conj3} and \ref{conj3b}. \subsubsection{Previously Known Results Related to the Conjectures} \label{mainknown} As was discussed earlier, our work builds on previous work of Corvaja and Zannier, who obtained results on surfaces in \cite{Co2}, and initiated the general method we have used in \cite{Co}. The Nevanlinna theoretic analogues of \cite{Co2} were proved by Liu and Ru in \cite{Ru4}. We briefly discussed these previous results in Section \ref{ssurf}. We now discuss what is known for arbitrary divisors. As a consequence of his work on integral points on subvarieties of semi-abelian varieties, Vojta \cite{Vo1} proved \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}A} \setcounter{theoremb}{\value{theorem}} \begin{theorema} \label{Vojtaa} Let $X$ be a projective variety defined over a number field $k$. Let $\rho$ denote the Picard number of $X$. Let $D$ be an effective divisor on $X$ defined over $k$ which has more than $\dim X - h^1(X,\mathcal{O}_X)+\rho$ (geometrically) irreducible components. Then $X\backslash D$ is quasi-Mordellic. 
\end{theorema} Similarly, a special case of work of Noguchi \cite{No2} gives \begin{theoremb} \label{Vojtab} Let $X$ be a complex projective variety. Let $\rho$ denote the Picard number of $X$. Let $D$ be an effective divisor on $X$ which has more than $\dim X - h^1(X,\mathcal{O}_X)+\rho$ irreducible components. Then $X\backslash D$ is quasi-Brody hyperbolic. \end{theoremb} We note that it is easily shown that both theorems are sharp in that there are divisors with $\dim X - h^1(X,\mathcal{O}_X)+\rho$ irreducible components for which the conclusions of the theorems are false. For a weaker, but more elementary theorem along these lines, see also Th. 2.4.1 in~\cite{Vo2}. As consequences of Theorems \ref{Vojtaa},B we see that Conjectures \ref{conj1ab},B are true for $X=\mathbb{P}^n$, and more generally for any projective variety $X$ with Picard number one. From the work of Noguchi and Winkelmann \cite{No} we have the following theorems related to our Main Conjectures for Ample Divisors (some special cases of these results had been obtained previously by various people; see \cite{No} for the history). \begin{theorema} Let $X$ be a projective variety of dimension $n$ defined over a number field $k$. Let $S$ be a finite set of places of $k$ containing the archimedean places. Let $\rho$ be the Picard number of $X$. Let $D=\sum_{i=1}^rD_i$ be a divisor on $X$ defined over $k$ with the $D_i$'s effective reduced ample Cartier divisors such that the intersection of any $n+1$ of them is empty.\\\\ (a). If $r>n+1$ then all sets of $D$-integral points $R$ have $\dim R\leq \frac{n}{r-n}\rho$.\\ (b). If $r>n(\rho+1)$ then $X\backslash D$ is Mordellic.\\ (c). If $X\subset \mathbb{P}^N$, all $D_i$'s are hypersurface cuts of $X$, and $r>2n$ then $X\backslash D$ is Mordellic. \end{theorema} \begin{theoremb} Let $X$ be a complex projective variety of dimension $n$. Let $\rho$ be the Picard number of $X$. 
Let $D=\sum_{i=1}^rD_i$ be a divisor on $X$ with the $D_i$'s effective reduced ample Cartier divisors such that the intersection of any $n+1$ of them is empty.\\\\ (a). If $r>n+1$ then all holomorphic maps $f:\mathbb{C}\to X\backslash D$ have $\dim f(\mathbb{C})\leq \frac{n}{r-n}\rho$.\\ (b). If $r>n(\rho+1)$ then $X\backslash D$ is complete hyperbolic and hyperbolically imbedded in $X$. In particular, $X\backslash D$ is Brody hyperbolic.\\ (c). If $X\subset \mathbb{P}^N$, all $D_i$'s are hypersurface cuts of $X$, and $r>2n$ then $X\backslash D$ is complete hyperbolic and hyperbolically imbedded in $X$. In particular, $X\backslash D$ is Brody hyperbolic. \end{theoremb} Consequently, when $m=\dim X$, the $D_i$'s are reduced divisors, and $\rho(X)=1$, we have that the Main Conjectures for Ample Divisors, Conjectures \ref{conj2a},B, are true modulo the statements on the exceptional sets (i.e. replace $\Excd(X\backslash D)$ by any particular set of integral points $R$ in Conjecture \ref{conj2a}, etc.) Similarly, the part (c)'s of the above theorems give special cases of the part (b)'s of Conjectures \ref{conj2a},B. \subsection{General Conjectures} \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} \subsubsection{Examples Limiting Improvements to the Conjectures} We start off with an example showing that the inequalities in the General Conjectures are best possible when $X$ is a curve. \begin{example} \label{genex} Let $X$ be a projective curve defined over a number field $k$ with $\mathcal{O}_k^*$ infinite. Let $f:X\to \mathbb{P}^1$ be a morphism of degree $d$ defined over $k$. Let $P,Q\in \mathbb{P}^1(k)$ be two distinct points over which $f$ is unramified, and let $D=P+Q$. Then there exists an infinite set $R$ of $k$-rational $D$-integral points on $\mathbb{P}^1\backslash D$. Since $f$ has degree $d$, $f^{-1}(R)$ is a set of $f^*D$-integral points on $X\backslash f^*D$ of degree $d$ over $k$ and $f^*D$ is a sum of $2d$ distinct points on $X$. 
\end{example} Taking products of curves, we then get examples in all dimensions showing that the inequality in the General Siegel-type Conjecture cannot be improved in the case $\kappa_0=1$. \begin{example} \label{genex2} Let $D=\sum_{i=1}^{2md}H_i$ be a sum of hyperplanes on $\mathbb{P}^n$ defined over a number field $k$ such that the intersection of any $m+1$ of the $H_i$'s is empty. Suppose also that $\bigcap_{i=(j-1)m+1}^{jm}H_i=\{P_j\}$ consists of a single point for $j=1,\ldots,2d$ and the $P_j$'s are collinear. Then there exist infinite sets of $D$-integral points of degree $d$ on $\mathbb{P}^n\backslash D$ over large enough number fields. Indeed, the line $L$ through the $P_j$'s intersects $D$ in $2d$ points, and we see from Example \ref{genex} that $L\backslash (L\cap D)$ contains infinite sets of integral points over large enough number fields. \end{example} This shows that the inequality in the finiteness part of the General Siegel-type Conjecture for Ample Divisors cannot be improved. We expect that using only divisors that are sums of hyperplanes on projective space, one may show that the other inequalities in the General Conjectures may not be improved for any set of parameters. For example, it should be true that if $D$ is a sum of $2d+n-1$ hyperplanes in general position on $\mathbb{P}^n$, then for some number field $k$ there exist dense sets of $D$-integral points on $\mathbb{P}^n$ of degree $d$ over $k$. In any case, it is easy to show that $\Excd(\mathbb{P}^n\backslash D)=\mathbb{P}^n\backslash D$. If $P$ is a point where $n$ of the hyperplanes intersect, then any line through $P$ will intersect $D$ in $2d$ points. But as we have seen, over some number field $k$, such lines will contain infinitely many integral points of degree $d$ over $k$.
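The intersection count behind this last claim can be made explicit; as a sketch (using only the general-position hypothesis, i.e. no $n+1$ of the hyperplanes share a point):

```latex
% D is a sum of 2d + n - 1 hyperplanes in general position on P^n, and exactly n
% of them pass through the chosen point P.  A line L through P meets each of
% those n hyperplanes only at P, and meets each of the remaining hyperplanes in
% one further point, so set-theoretically
\[
  \#\big(L\cap D\big) \;=\; 1 + \big((2d+n-1) - n\big) \;=\; 2d .
\]
% By Example \ref{genex}, a line meeting D in 2d points carries, over a large
% enough number field, infinitely many integral points of degree d.
```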
To show the existence of a Zariski-dense set of $D$-integral points, one needs to show that if the lines and their sets of integral points are chosen correctly, then the infinite union of the sets of integral points will still be a set of $D$-integral points (there is no problem for finite unions). \subsubsection{Vojta's General Conjecture and a Conjectural Discriminant-Height Inequality} We will now investigate how the General Siegel-type Conjecture, Conjecture \ref{congen}, is related to Vojta's General Conjecture. In order to make a connection between the two conjectures, we will need to formulate a new conjecture bounding the absolute logarithmic discriminant in terms of heights. We will digress briefly to discuss this new conjecture. Let $X$ be a variety defined over a number field $k$ and let $P\in X(\overline{k})$. Let $d(P)=\frac{1}{[k(P):\mathbb{Q}]}\log |D_{k(P)/\mathbb{Q}}|$ where $D_{k(P)/\mathbb{Q}}$ is the discriminant of $k(P)$ over $\mathbb{Q}$. We call $d(P)$ the absolute logarithmic discriminant of $P$. Let $m(D,P)=\sum_{v\in S}\lambda_{D,v}(P)$. Then Vojta's General Conjecture states \begin{conjecture}[Vojta's General Conjecture] \label{Vgeneral} Let $X$ be a complete nonsingular variety with canonical divisor $K$. Let $D$ be a normal crossings divisor on $X$, and let $k$ be a number field over which $X$ and $D$ are defined. Let $A$ be a quasi-ample divisor on $X$. Let $\epsilon>0$. If $\nu$ is a positive integer then there exists a Zariski-closed subset $Z=Z(\nu,X,D,\epsilon,A)$ such that \begin{equation*} m(D,P)+h_K(P)\leq d(P)+\epsilon h_A(P)+O(1) \end{equation*} for all points $P\in X(\overline{k})\backslash Z$ such that $[k(P):k]\leq \nu$. \end{conjecture} Actually, Vojta's General Conjecture as it appears in \cite{Vo2} has the discriminant term as $(\dim X)d(P)$, but it is now believed that the $\dim X$ term is unnecessary (see \cite[Conjecture 8.7]{Vo5} or the discussion at the end of \cite{Vo6}).
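As a quick worked instance of the absolute logarithmic discriminant defined above (the choice of point and field here is ours, purely for illustration):

```latex
% Take a point P with k(P) = Q(\sqrt{5}).  The discriminant of Q(\sqrt{5}) over Q
% is 5, so
\[
  d(P) \;=\; \frac{1}{[\,\mathbb{Q}(\sqrt{5}):\mathbb{Q}\,]}\,\log|5|
        \;=\; \tfrac{1}{2}\log 5 ,
\]
% while for a point P rational over Q one has k(P) = Q, discriminant 1, and
% hence d(P) = 0.
```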
Vojta's General Conjecture, with $D=0$, can be seen as giving a lower bound on the absolute logarithmic discriminant in terms of heights (outside some Zariski-closed subset). As a companion to this, we give the following conjectural upper bound on the logarithmic discriminant in terms of heights. \begin{conjecture} \label{conj4} Let $X$ be a nonsingular projective variety of dimension $n$ defined over a number field $k$ with canonical divisor $K$. Let $A$ be an ample divisor on $X$. Let $\nu$ be a positive integer. Let $\epsilon>0$. Then \begin{equation*} d(P)\leq h_K(P)+(2[k(P):k]+n-1+\epsilon)h_A(P)+O(1) \end{equation*} for all $P\in X(\overline{k})$ with $[k(P):k]\leq \nu$. \end{conjecture} \begin{remark} It is possible that, with the hypothesis that $A$ is ample weakened to $A$ quasi-ample, the inequality holds outside of some Zariski-closed subset of $X$ (it is not hard to see the necessity of the Zariski-closed subset in this case). It is also possible that the conjecture is true with $\epsilon=0$. As with Vojta's General Conjecture, it is quite plausible that one may take $\nu=\infty$, i.e. the inequality holds for all $P\in X(\overline{k})$. \end{remark} It is a result of Silverman \cite{Si2} that Conjecture \ref{conj4} is true for $X=\mathbb{P}^n$ with $\epsilon=0$ and $\nu=\infty$. For curves, Conjecture \ref{conj4} is true by a result of Song and Tucker \cite[Eq. 2.0.3]{Tu}. They proved the stronger statement \begin{theorem} \label{thTu} Let $X$ be a nonsingular projective curve defined over a number field $k$ with canonical divisor $K$. Let $A$ be an ample divisor on $X$. Let $\nu$ be a positive integer. Let $\epsilon>0$. Then \begin{equation*} d(P)\leq d_a(P)\leq h_K(P)+(2[k(P):k]+\epsilon)h_A(P)+O(1) \end{equation*} for all $P\in X(\overline{k})$ with $[k(P):k]\leq \nu$, where $d_a(P)$ is the arithmetic discriminant of $P$ (see \cite{Vo8} for the definition and properties).
\end{theorem} We now show how Vojta's General Conjecture, combined with our conjectural upper bound on the discriminant, implies a special case of the General Siegel-type Conjecture. \begin{theorem} Assume Vojta's General Conjecture, Conjecture \ref{Vgeneral}, and the conjectural upper bound on the absolute logarithmic discriminant, Conjecture \ref{conj4}. Let $X$ be a nonsingular projective variety defined over a number field $k$. Let $n=\dim X$. Let $D=\sum_{i=1}^r D_i$ be a normal crossings divisor defined over $k$ with $D_i$ ample and effective for all $i$. If $r>2\nu+n-1$ then $X\backslash D$ is degree $\nu$ quasi-Mordellic. In particular, there do not exist Zariski-dense sets of $D$-integral points on $X$ of degree $\nu$ over $k$. \end{theorem} \begin{proof} Let $R$ be a set of $D$-integral points on $X$ of degree $\nu$ over $k$. Then $m(D,P)+h_K(P)=h_D(P)+h_K(P)+O(1)$ for $P\in R$. By Conjecture \ref{conj4} applied with $A=D_i$, for any $\epsilon>0$, $h_{D_i}(P)\geq \frac{d(P)-h_K(P)}{2\nu+n-1+\epsilon}+O(1)$. So since $r>2\nu+n-1$, we have $h_D(P)\geq d(P)-h_K(P)+(1-\epsilon)h_{D_1}(P)+O(1)$. Therefore \begin{equation*} m(D,P)+h_K(P)>d(P)+\epsilon h_A(P)+O(1) \end{equation*} for all $P\in R$, for any ample divisor $A$ on $X$ and small enough $\epsilon$. So we're done by Vojta's General Conjecture. \end{proof} So assuming Vojta's General Conjecture and Conjecture \ref{conj4}, we see that the General Siegel-type Conjecture is true if $D_i$ is ample for all $i$ and $D$ has normal crossings. \subsubsection{Previously Known Results Related to the Conjectures} \label{sVo} In \cite{Vo7}, Vojta proved the following generalization of Faltings' theorem on rational points on curves and the Thue-Siegel-Roth-Wirsing theorem. \begin{theorem} Let $X$ be a nonsingular projective curve defined over a number field $k$ with canonical divisor $K$. Let $D$ be an effective divisor on $X$ defined over $k$ with no multiple components and $A$ an ample divisor on $X$.
Let $\nu$ be a positive integer and let $\epsilon>0$. Then \begin{equation*} m(D,P)+h_K(P)\leq d_a(P)+\epsilon h_A(P)+O(1) \end{equation*} for all $P\in X(\overline{k})\backslash D$ with $[k(P):k]\leq \nu$, where the constant in $O(1)$ depends on $X,D,\nu,A$, and $\epsilon$. \end{theorem} Using Theorem \ref{thTu} we then easily obtain the following corollary. \begin{corollary} Let $X$ be a nonsingular projective curve defined over a number field $k$. Let $D$ be an effective divisor on $X$ that is a sum of more than $2\nu$ distinct points. Then $X\backslash D$ is degree $\nu$ Mordellic. \end{corollary} Therefore our General Siegel-type Conjectures are true for curves. Of course for $\mathbb{P}^1$ this was already known from the Thue-Siegel-Roth-Wirsing theorem. As mentioned earlier, the special case $\nu=2$ was also proven by Corvaja and Zannier using the Schmidt Subspace Theorem technique \cite{Co2}. \subsection{Conjectures over $\mathbb{Z}$ and Complex Quadratic Rings of Integers} I am not aware of any previous results that pertain to these conjectures, or any way to relate them to other known conjectures. An open problem then is to formulate quantitative conjectures explaining the qualitative conjectures I have made over $\mathbb{Z}$ and complex quadratic rings of integers. We now briefly discuss some examples showing that in many cases the inequalities in these conjectures may not be improved. For the Main Siegel-type Conjecture over $\mathbb{Z}$, to show that the inequality in the conjecture may not be improved we may simply take $D=mH$ where $H$ is a hyperplane on $\mathbb{P}^n$ defined over $\mathbb{Q}$. Examples where the $m$ divisors have no components in common are easily obtained from products of projective spaces.
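To unwind the hyperplane example $D=mH$ (a standard identification, recorded here only as a sketch):

```latex
% With D = mH and H = \{x_0 = 0\} on P^n, the complement of the support of D is
\[
  \mathbb{P}^n\setminus H \;\cong\; \mathbb{A}^n ,
  \qquad \mathbb{A}^n(\mathbb{Z}) \;=\; \mathbb{Z}^n ,
\]
% and Z^n is Zariski-dense in P^n.  So a divisor that is a sum of m ample
% divisors (counted with multiplicity) can still admit a Zariski-dense set of
% D-integral points over Z.
```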
For the Main Conjecture on Ample Divisors over $\mathbb{Z}$, if $D=\sum_{i=1}^mH_i$ is a sum of $m<n$ distinct hyperplanes on $\mathbb{P}^n$ defined over $\mathbb{Q}$ then $\dim \cap_{i=1}^mH_i=n-m$ and there is a $Y=\mathbb{P}^{n-m+1}\subset \mathbb{P}^n$ with $D|_Y$ a hyperplane on $Y$ defined over $\mathbb{Q}$. So there are sets of $D$-integral points on $\mathbb{P}^n$ with dimension $n-m+1$. Examples for the General Conjectures over $\mathbb{Z}$ are nearly identical to Examples \ref{genex} and \ref{genex2}, except that we must replace $2d$ by $d$ everywhere, since we are using $\mathbb{A}^1$ as our starting point. Again, we expect that using only divisors that are sums of hyperplanes on projective space, one may show that the inequalities in the General Conjectures over $\mathbb{Z}$ may not be improved for any set of parameters.
\section{Introduction} \label{sec_intro} Analytic modeling relies on describing shape and configuration pointsets as sublevel sets of functions and formulating fundamental operations (e.g., pertaining to detecting collisions, similarity, complementarity, or symmetry) in terms of correlations between those functions. For example, Minkowski operations \cite{Roerdink2000} that are central to mathematical morphology are formalized as convolutions of constituent functions and computed efficiently in the Fourier domain \cite{Lysenko2011a}. Minkowski operations have been used extensively to formulate important problems in robot path planning \cite{Lozano-Perez1983}, mechanism workspace design \cite{Nelaturi2011}, virtual reality (graphics/haptics) \cite{Behandish2015,Behandish2015b}, protein docking \cite{Bajaj2011}, packaging and nesting \cite{Chernov2010}, and more. Unfortunately, their combinatorial computation even for 3D polyhedral objects quickly becomes impractical with increasing number of polygons \cite{Lysenko2011a}. This can be alleviated using classical FFTs \cite{Cooley1965} for numerical implementation of convolutions, which take advantage of uniform spatial sampling. However, this and several related correlation-based problems can be solved more efficiently by using a spherical decomposition of the shape and nonuniform FFTs \cite{Potts2001}. Here we briefly review the roots of the main ideas, with a focus on collision detection (CD) and shape complementarity (SC) literature. 
\begin{figure*} \centering \includegraphics[width=\textwidth]{figure1} \caption{Research in combinatorial methods has shown that CD tests over uniform grids (i.e., `voxmaps') \cite{Sagardia2014} (a) and octrees (special case of OBB-trees) \cite{Gottschalk1996} (b) can be made more efficient if voxels are replaced with spherical primitives, e.g., built around octrees \cite{OSullivan1999} (c), sampled over the MA \cite{Hubbard1996,Bradshaw2004} (d) or packed inside using distance fields \cite{Weller2011} (e). However, the more nascent analytic methods are still mostly reliant on uniform grids for, e.g., CD testing for solids by integrating the intersection \cite{Kavraki1995} (f), and SC scoring for proteins by integrating skin overlaps \cite{Chen2003a} (g). The latter has been outperformed by grid-free correlations of atoms grouped with equal radii \cite{Bajaj2011} (h). We show that constructions in (c--e) with arbitrary radii can also be interpreted analytically as a convolution and solved by nonuniform FFTs after a geometric lifting.} \label{figure1} \end{figure*} \subsection{Background} \label{sec_lit} At one end of the research spectrum are combinatorial techniques that use surface meshes or higher-order algebraic parametrizations to resolve collisions or identify matching features. Examples are polyhedral CD methods based on Voronoi-clipping/marching \cite{Mirtich1998} and oriented bounding box (OBB) trees \cite{Gottschalk1996}, or spatial enumeration-based techniques such as the Voxmap PointShell (VPS) \cite{Sagardia2014}. VPS works by a pairwise test between a shell of vertices for the moving object against a map of voxels that discretizes the stationary obstacle, and is popular in physically-based modeling in virtual environments \cite{Kim2004}. 
Others have identified more efficient techniques for time-critical CD by using hierarchical bounding spheres sampled on octrees \cite{OSullivan1999} or on the medial axes (MA) \cite{Hubbard1996,Bradshaw2004}, and interior sphere packing guided by distance fields \cite{Weller2011} (Fig. \ref{figure1} (a--e)). These `sphere-tree' based methods have been shown to outperform voxmap or OBB-tree based techniques in real-time applications \cite{Weller2011}---particularly because primitive collision predicates are simplified to center-distance tests as a result of the orientational symmetry of balls---and are considered state-of-the-art in practice. For a more complete survey on CD methods, we refer the reader to \cite{Jimenez2001}. On the other end of the spectrum are analytic methods that have been more popular in robotics \cite{Lozano-Perez1983}. Unlike the combinatorial approach that searches for a collision certificate point (or lack thereof) in the intersection of the objects, the analytic approach treats the collision predicate as a Boolean combination of inequalities over some configuration space of the objects \cite{Lysenko2013}. The obstacle avoidance in path planning is, for example, treated as an optimization problem subjected to holonomic collision constraints formulated analytically as a convolution of the robot and its workspace \cite{Kavraki1995}. Most convolution-based methods have so far focused on generating uniformly sampled configuration bitmaps for all spatial positions and orientations simultaneously, which can be cumulatively computed with asymptotically optimal FFTs \cite{Cooley1965}. However, a complete description of the configuration obstacle is overkill for real-time CD. A recent work \cite{Lysenko2013}, also reliant on uniform grid-based sampling, formally reframes the approach for time-critical CD (Fig. \ref{figure1} (f)), but has not yet been compared with sphere-tree methods, nor applied to real-time applications. 
In an independent line of research, numerous analytic methods for molecular surface analysis and SC-based protein docking have been developed, whose outcomes are platforms that use grid-based occupancy enumeration and leverage classical FFTs \cite{Cooley1965} such as ZDock \cite{Chen2003a}, or more recent grid-free techniques that rely on nonuniform FFTs \cite{Potts2001}, such as F$^2$Dock \cite{Bajaj2011} (Fig. \ref{figure1} (g, h)). The latter exploits the spherical shape of the atomic building blocks and implicitly represents the proteins as summations of radial kernels centered around atoms, assigning different weights to core and skin atoms. The SC score is obtained by cross-correlating these functions from different proteins, which turns into a convolution discretized over the center points. It has been shown that grid-free methods outperform uniform grid-based methods \cite{Bajaj2011} by taking advantage of the spherical geometry. A comprehensive survey on advances in protein docking is available in \cite{Ritchie2008a}. Although objects of arbitrary shape, unlike molecules, cannot be represented {\it exactly} as finite unions of balls, the sphere-tree methods for time-critical CD were shown to be more successful in progressively approximating the shape, when compared to uniform grid- or octree-based voxelization, with a faster convergence and a better use of computational resources \cite{Hubbard1996}. Motivated by this observation, we present a generic framework for representing arbitrary shapes with finite (or countably infinite, in the limit) radial kernels, formulated as a convolution of a discrete pointset and the primitive kernel in a higher-dimensional space. The latter is described as a {\it geometric lifting} trick in Section \ref{sec_count}, and is deemed necessary due to the inevitable size difference between primitive balls, unlike the simpler case for the proteins. 
We show that this approach offers `the best of both worlds' by combining the computational efficiency of the sphere-tree techniques for time-critical applications (i.e., with a single configuration query) with that of the analytic methods for cumulative configuration space constructions (i.e., requiring a complete map for all spatial relations), unified under a single paradigm with analytic formalism that applies to a multitude of applications. \subsection{Contributions} The main contributions of this paper are to 1) present an analytic shape correlation paradigm centered around a nonuniform discretization scheme\footnote{What we mean by a `discretization scheme' is not a particular decomposition algorithm or approximation method, but a generic formalism for reconciling such a nonuniform discretization (in contrast to the extensively used uniform sampling) with analytic modeling, using Minkowski sums and convolutions. We do present one new algorithm in \ref{app_sampling}; nevertheless, other methods \cite{OSullivan1999,Hubbard1996,Bradshaw2004,Weller2011} are also applicable under the same scheme.} that relies on progressive spherical approximations with balls of different sizes; and 2) develop a uniform and efficient approach to solving a variety of problems that deal with detecting collisions, shape similarity/complementarity, and shape morphing, examples of which we describe in Section \ref{sec_app}. Moreover, we show that the spherical discretization offers an algebraic structure that is closed under Minkowski sum/product operations, and at the same time offers more appealing properties than uniform grid- or octree-based discretizations. 
As the continuous geometry (of both shapes and configurations) is abstracted away by the balls, the computational implementation solely relies on convolution algebra over discrete sets specified completely by ball center coordinates and radii, allowing the use of the efficient nonequispaced FFT (NFFT) algorithm \cite{Potts2001} on the highly parallel graphics processing units (GPU) \cite{Kunis2012}. Unlike combinatorial methods whose complexities typically depend on the syntactic size of the representation (e.g., polygon count) fixed upfront, our method allows for a choice of complexity in real-time based on the affordable resources, by proceeding deep enough down the sphere-tree---which can be constructed using any algorithm of choice, such as \cite{OSullivan1999,Hubbard1996,Bradshaw2004,Weller2011} or our own presented in \ref{app_sampling}. On the other hand, unlike grid-based analytic methods whose arithmetic complexities scale with object size and grid resolution, our method enables filling large regions with large balls and efficiently allocating more primitives to capture features of smaller size with higher fidelity. Finally, by working in the Fourier domain, aside from converting convolutions to simple pointwise products and differentiations to simple multipliers, the method allows for `graceful' degradation of the accuracy by truncating the frequency domain representations, enabling another trade-off mechanism between running time and precision. \subsection{Outline} In Section \ref{sec_samp} we present the formalism for discretization of an arbitrary shape as a countable union of balls, its interpretation as a Minkowski sum, and its analytic description as a convolution. 
In Section \ref{sec_cor} we formulate correlation predicates in terms of Minkowski sums across multiple shapes relatively positioned and oriented in arbitrary `poses' together with their convolution forms, and use the results from the previous section to carry the discretization into the configuration space. In Section \ref{sec_app} we provide examples of applying these tools to solving fundamental problems, whose efficient GPU implementations are shown to outperform some of the state-of-the-art methods in those areas as demonstrated in Section \ref{sec_res}. A more detailed outline of the subsections is provided at the beginning of each section. \section{Shape Discretization} \label{sec_samp} In this section, we present what we mean by `analytic modeling' in \ref{sec_model}. We define a discretization scheme in \ref{sec_count} as a 3D slice of a 4D Minkowski sum of a countable set of `knots'\footnote{The terminology is borrowed from Potts et al. \cite{Potts2004}, where convolution with radial kernels at `nonequispaced knots' (which underlies our development) is carried out using the nonequispaced FFT \cite{Potts2001}.} and a primitive cone that represents balls of all permissible sizes stacked along the 4th dimension. In \ref{sec_pack}, we briefly introduce the sampling approaches that generate such discretizations from arbitrary representations. We incorporate motions in \ref{sec_conf} and show the advantages of spherical symmetry in the presence of rotations. In \ref{sec_Fourier}, we present the Fourier expansion of the spherical sampling that facilitates development of fast algorithms. 
\subsection{Analytic Solid Modeling} \label{sec_model} As usual, we assume solids to be `r-sets', defined by Requicha \cite{Requicha1977a} as compact (i.e., bounded and closed) regular semianalytic subsets of the Euclidean $3-$space; the collection of all such solids is denoted by $\mathcal{S} \subset \mathcal{P}({\R^3})$.\footnote{The collection $\mathcal{P}(A) = \{ B ~|~ B \subset A \}$ denotes the `power set' of a set $A$, i.e., the set of all subsets of $A$.} The regularity condition (i.e., $S = r S$) ensures {\it continuous homogeneity},\footnote{$r S = \kappa \imath S$ denotes the `regularized' (i.e., closure of interior of) $S$. The `boundary' $\partial S = \kappa S \cap \kappa c S$ is homogeneously 2D for a 3D r-set, separating the open `interior' $\imath S$ from the open `exterior' $cS$ \cite{Requicha1978}.} while the semianalytic requirement guarantees {\it finite describability} of the set \cite{Requicha1977a}, as well as its medial axis (MA) and medial axis transform (MAT) \cite{Chazal2004}. The latter is defined as an embedding of the MA in the 4D space with the radius of the maximal ball conceptualized as a new coordinate, and contains enough information to reconstruct the solid $S$, hence can be used to develop a discretization/sampling scheme in Section \ref{sec_count}. 
\begin{figure} \centering \includegraphics[width=0.48\textwidth]{figure2} \caption{An r-set (a), its indicator function (b) and bump function (c), discretized using grid-based bitmap sampling (d), and grid-free spherical sampling corresponding to $\alpha \rightarrow \infty$ (e) and $\alpha := 2$ (f) in (\ref{eq_psi}).} \label{figure2} \end{figure} To enable an {\it analytic} formulation of the geometric modeling operators in Section \ref{sec_cor}, a solid $S \in \mathcal{S}$ can be implicitly described as a sublevel set of a real-valued function $f_S: {\R^3} \rightarrow \mathds{R}$ (called the `defining function' of $S$), first introduced by Comba \cite{Comba1968} for CD between convex objects, and used later by Ricci \cite{Ricci1973} to implement Boolean combinations. By restricting to compactly supported nonnegative functions $f_S: {\R^3} \rightarrow [0, \infty)$, and using regularized $0-$sublevels, i.e., $S = U^\ast_0(f_S) = r U_0(f_S)$, where \begin{equation} U_t(f_S) := \{ \mathbf{x} \in \text{domain of}~ f_S ~|~ f_S(\mathbf{x}) > t\}, \end{equation} the regularized set operations are supported by the analytic expressions \begin{align} S_1 \cup^\ast S_2 &= U^\ast_0 (f_{S_1} + f_{S_2}), ~\text{i.e.,}~ f_{S_1 \cup^\ast S_2} = f_{S_1} + f_{S_2}, \label{eq_un} \\ S_1 \cap^\ast S_2 &= U^\ast_0 (f_{S_1} f_{S_2}), \quad~~\text{i.e.,}~ f_{S_1 \cap^\ast S_2} = f_{S_1} f_{S_2}. \label{eq_int} \end{align} A popular choice for the defining function is the `indicator function' $f_S := \mathbf{1}_{S} : {\R^3} \rightarrow \{0, 1\}$, where $\mathbf{1}_S(\mathbf{x}) = 1$ if $\mathbf{x} \in S$ and $\mathbf{1}_S(\mathbf{x}) = 0$ otherwise. However, the discontinuity of the indicator function makes it difficult to compute the gradients of correlation functions between solids. 
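The analytic set operations in (\ref{eq_un})--(\ref{eq_int}) are easy to illustrate numerically. The following minimal Python sketch (the two axis-aligned boxes are illustrative assumptions, not from the paper) represents two solids by indicator-style defining functions and answers membership queries in their regularized union and intersection via the $0-$sublevel test $f(\mathbf{x}) > 0$:

```python
# A minimal sketch of analytic Booleans: solids are 0-sublevel sets
# {x : f(x) > 0}; regularized union adds the defining functions and
# regularized intersection multiplies them.  The two boxes are
# illustrative assumptions.

def f_box(lo, hi):
    # indicator-style defining function of an axis-aligned box
    def f(x):
        return 1.0 if all(l <= xi <= h
                          for xi, l, h in zip(x, lo, hi)) else 0.0
    return f

f1 = f_box((0, 0, 0), (2, 2, 2))
f2 = f_box((1, 1, 1), (3, 3, 3))

f_union = lambda x: f1(x) + f2(x)   # defining function of S1 union S2
f_inter = lambda x: f1(x) * f2(x)   # defining function of S1 intersect S2

assert f_union((0.5, 0.5, 0.5)) > 0   # in S1 only: inside the union
assert f_inter((0.5, 0.5, 0.5)) == 0  # not in S2: outside the intersection
assert f_inter((1.5, 1.5, 1.5)) > 0   # in both: inside the intersection
```

Note that the sharp indicator used above is exactly the discontinuous choice whose gradients are problematic for correlation functions.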
Lysenko \cite{Lysenko2013} circumvented this problem in his collision constraint formulation by employing `bump functions' $f_S \in C_0^\infty({\R^3})$ instead, which are compactly supported nonnegative functions that are also smooth, offering more appealing differential properties than the discontinuous indicator functions \cite{Kavraki1995}, the $C^0-$ (but not $C^1-$) distance fields \cite{Ricci1973}, and the rather complex R-functions \cite{Shapiro2007}. Figure \ref{figure2} illustrates indicator and bump functions, and their discretizations introduced in Section \ref{sec_count}. A useful result that underlies the collision predicate formulation in Section \ref{sec_col} is the {\it null-volume lemma} \cite{Lysenko2013}, which relies on the regularity (hence 3D homogeneity): \begin{lemma} For every r-set $S = U^\ast_0(f_S)$ with a nonnegative defining function $f_S : {\R^3} \rightarrow [0, \infty)$, % \begin{align} S \neq \emptyset ~~\rightleftharpoons~~ v(f_S) := \int_{{\R^3}} f_S(\mathbf{x}) ~d\mathbf{x} > 0. \label{eq_null} \end{align} % \end{lemma} In particular, if $f_S := \mathbf{1}_S$ then $v(f_S)$ exactly gives the volume of $S$. Letting $S:= S_1 \cap^\ast S_2$ in (\ref{eq_null}) and applying (\ref{eq_int}), i.e., $f_S = f_{S_1} f_{S_2}$, yields the collision predicate in terms of the inner product of functions \begin{align} S_1 \cap^\ast S_2 \neq \emptyset ~~\rightleftharpoons~~ v(f_{S}) = \langle f_{S_1}, f_{S_2} \rangle > 0, \label{eq_pred} \end{align} which underlies generalized correlation predicates for a range of applications discussed in Section \ref{sec_app}. It is rarely the case in practical applications with arbitrarily complex shapes to have the defining function in closed form. 
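The collision predicate in (\ref{eq_pred}) can be sketched by brute-force quadrature. The snippet below (a minimal illustration with assumed ball shapes and grid resolution, not the paper's implementation) approximates the inner product $\langle f_{S_1}, f_{S_2} \rangle$ by a midpoint rule over a uniform grid and reads off collision from its sign:

```python
# A minimal sketch of the collision predicate: S1 and S2 collide iff the
# inner product <f_S1, f_S2> (the volume of their intersection, for
# indicator functions) is positive.  The balls and the grid resolution
# are illustrative assumptions.

def ball_indicator(c, r):
    # indicator-style defining function of the L2-ball B(c, r)
    return lambda x: 1.0 if sum((xi - ci) ** 2
                                for xi, ci in zip(x, c)) <= r * r else 0.0

def inner_product(f1, f2, lo=-4.0, hi=4.0, n=24):
    # brute-force midpoint-rule approximation of the volume integral
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x = (lo + (i + 0.5) * h,
                     lo + (j + 0.5) * h,
                     lo + (k + 0.5) * h)
                total += f1(x) * f2(x)
    return total * h ** 3

fA = ball_indicator((0.0, 0.0, 0.0), 1.0)
fB = ball_indicator((1.0, 0.0, 0.0), 1.0)   # overlaps fA
fC = ball_indicator((3.0, 0.0, 0.0), 1.0)   # disjoint from fA

assert inner_product(fA, fB) > 0.0   # overlap volume > 0: collision
assert inner_product(fA, fC) == 0.0  # null volume: no collision
```

With bump functions in place of the sharp indicators, the same inner product also becomes differentiable with respect to the relative pose of the two solids.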
However, one can always decompose the solid into a finite number of simpler primitives, as an immediate consequence of its finite describability \cite{Requicha1977a}, and apply (\ref{eq_un}) to combine the primitive defining functions---to which we refer as `discretization'. However, obtaining exact discretizations (e.g., curvilinear cell decompositions) from the popular constructive solid geometry (CSG) or boundary representation (B-rep) schemes \cite{Requicha1980a} is not trivial.\footnote{An r-set $S \in \mathcal{S}$ is a `tame' embedding of a polyhedron $\Delta = \bigcup_{1 \leq i \leq n} \Delta_i$ under a homeomorphism $\gamma: {\R^3} \rightarrow {\R^3}$ \cite{Requicha1977a}, i.e., $S = \gamma(\Delta) = \bigcup_{1 \leq i \leq n} \gamma(\Delta_i)$, and $f_S(\mathbf{x}) = \sum_{1 \leq i \leq n} (f_{\Delta_i} \circ \gamma^{-1})(\mathbf{x})$ due to (\ref{eq_un}). Although coming up with a closed form for $f_{\Delta_i}$ for a tetrahedron $\Delta_i$ is trivial, finding $\gamma$ from CSG or B-rep is not.} An alternative is to use approximate discretizations (e.g., spatial enumerations via uniform grids or octrees) which converge to the r-set in the limit. Next, we introduce a more general discretization scheme that subsumes these enumeration methods with non-intersecting cubic primitives (i.e., {voxelization}) as special cases, and enables other types of (possibly intersecting) primitives. 
\begin{figure*} \centering \includegraphics[width=\textwidth]{figure3} \caption{An equiradius sampling of a 3D r-set is a 3D Minkowski sum $S_n(P) = P \oplus B_0 = U^\ast_0(\rho_P \ast f_{B_0})$ (a, b), while its nonequiradius sampling is a 3D slice through a 4D Minkowski sum $K_n(A) = A \oplus D_0 = U^\ast_0(\rho_A \ast f_{D_0})$ of lifted primitives (c--f), illustrated here for 2D r-sets.} \label{figure3} \end{figure*} \subsection{Discrete Constructions} \label{sec_count} Consider the case when an r-set can be decomposed as a finite union $S = \bigcup_{1 \leq i \leq n} B_i$ hence $f_{S} (\mathbf{x}) = \sum_{1 \leq i \leq n} f_{B_i} (\mathbf{x})$ due to (\ref{eq_un}). The following two cases are of prime significance for our purposes: \paragraph{\bf Equiradius Decomposition} First, let $B_i~(1 \leq i \leq n)$ be relocated instances of the same base shape $B_0 \in \mathcal{S}$ instantiated by the different translations $\mathbf{x} \mapsto \mathbf{x}_i + \mathbf{x}$, i.e., $B_i = \{ \mathbf{x}_i + \mathbf{x} ~|~ \mathbf{x} \in B_0 \}$. Each such translation can be abstracted by a 3D point $\mathbf{x}_i \in {\R^3}$, hence the discrete pointset $P := \{\mathbf{x}_i\}_{1 \leq i \leq n} \subset {\R^3}$ (of cardinality $|P| = n$) and the base primitive $B_0$ contain all the information to reconstruct the solid. We can use the notation $S = S_n(P)$, as if $S_n: \mathds{R}^{3n} \rightarrow \mathcal{P}({\R^3})$ is a mapping from the discrete space to the shape space, illustrated in Fig. \ref{figure3} (a, b). A crucial observation is that this mapping can be viewed as a Minkowski sum $S_n(P) = P \oplus B_0$. To construct an analytic model, let $B_0 = U^\ast_0(f_{B_0})$ hence a defining function for each primitive instance can be obtained as $f_{B_i}(\mathbf{x}) := c_i f_{B_0}(\mathbf{x} - \mathbf{x}_i)$, where $c_i > 0$ are weight coefficients (arbitrarily assigned, for now). 
The Boolean summation of the balls from (\ref{eq_un}) takes the form \begin{align} f_{S_n(P)}(\mathbf{x}) &= \sum_{1 \leq i \leq n} f_{B_i} (\mathbf{x}) = \sum_{1 \leq i \leq n} c_i f_{B_0}(\mathbf{x} - \mathbf{x}_i), \label{eq_Psum} \end{align} which can be viewed as a discrete convolution. To make it compatible with continuous convolutions to be introduced in Section \ref{sec_cor}, let us rewrite (\ref{eq_Psum}) as an integral \begin{align} f_{S_n(P)}(\mathbf{x}) = \sum_{1 \leq i \leq n} \int_{{\R^3}} \delta^3(\mathbf{x}' - \mathbf{x}_i) \big[ c_i f_{B_0}(\mathbf{x} - \mathbf{x}') \big] d\mathbf{x}', \nonumber \end{align} in which the integration variable is $\mathbf{x}' = (x_1', x_2', x_3') \in {\R^3}$, and $d\mathbf{x}' = dx_1' dx_2' dx_3'$ is the infinitesimal volume element. $\delta^3(\mathbf{x}') = \delta(x_1')\delta(x_2')\delta(x_3')$ is the 3D Dirac delta function. If we assume a density function of the form \begin{align} \rho_P (\mathbf{x}) := \sum_{1 \leq i \leq n} c_i \varsigma_{\mathbf{x}_i}(\mathbf{x}) = \sum_{1 \leq i \leq n} c_i \delta^3(\mathbf{x} - \mathbf{x}_i), \label{eq_rhoP} \end{align} where $\varsigma_{\mathbf{x}_0}(\mathbf{x}) = \delta^3(\mathbf{x} - \mathbf{x}_0)$, to represent the discrete pointset $P$ as a collection of spatial impulses of intensities $c_i$ at positions $\mathbf{x} = \mathbf{x}_i$ (i.e., each point carrying a `lumped mass' of $c_i$), the Minkowski sum can be analytically expressed as a convolution \begin{align} f_{S_n(P)} = \rho_P \ast f_{B_0}, ~\text{i.e.,}~ P \oplus B_0 = U^\ast_0(\rho_P \ast f_{B_0}). \label{eq_ballconv} \end{align} A particularly favorable choice for the primitive shape (for reasons to be explained in Section \ref{sec_conf}) is a closed $L^2-$ball, i.e., \begin{align} B_i := B(\mathbf{x}_i, r) = \{ \mathbf{x} \in {\R^3} ~|~ \|\mathbf{x} - \mathbf{x}_i \|_2 \leq r \} \label{eq_L2ball} \end{align} are balls of constant radius $r > 0$ centered at $\mathbf{x}_i \in P$. 
Let $f_{B_0} (\mathbf{x}) := \psi_\alpha (\| \mathbf{x} \|_2 / r)$ where the function $\psi_\alpha : \mathds{R} \rightarrow [0, 1]$ is a generic cut-off kernel (also referred to as the `mollifier' or the `bump') with the closed form \begin{equation} \psi_\alpha(x) = \left\{ \begin{array}{ll} e^{(1 - |x|^{-\alpha})^{-1}} &\text{if}~ |x| < 1,\\ 0 &\text{otherwise}, \end{array} \right. \label{eq_psi} \end{equation} which can be thought of as a smoothed extension of the discontinuous cut-off function $\mathbf{1}_{(-1, +1)} (x) = \lim_{\alpha \rightarrow \infty} \psi_\alpha(x)$, and the resulting $f_{B_0} (\mathbf{x}) = \psi_\alpha (\| \mathbf{x} \|_p / r)$ is a mollified extension of the binary indicator $\mathbf{1}_{\imath B_0}(\mathbf{x}) = \lim_{\alpha \rightarrow \infty} f_{B_0} (\mathbf{x})$. The spatial enumeration schemes over uniform grids---ranging from bitmap encoding for path planning \cite{Kavraki1995} to rasterized density functions for protein docking \cite{Chen2003a}, illustrated in Fig. \ref{figure1} (f, g)---can be viewed as special cases of this scheme with cubic primitives (i.e., $L^\infty-$ instead of $L^2-$balls in (\ref{eq_L2ball})) with an additional disjointness condition that is unnecessary for our purposes. Grid-free molecular modeling based on Gaussian densities for protein surface reconstruction \cite{Duncan1993} and protein docking \cite{Bajaj2011} (Fig. 
\ref{figure1} (h)) are more closely related to the scheme proposed here, as they use spherical primitives.\footnote{Except that these methods use Gaussian kernels rather than compactly supported cut-off kernels such as the one in (\ref{eq_psi}), and use the $1-$ (instead of the $0-$) isolevel to define the molecular surface.} Generalizing this grid-free discretization to solid objects of arbitrary geometric complexity would enable more efficient use of the computational resources by adaptively approximating the shape, filling large interior regions with fewer primitives, and allocating resources to capture the details of surface features. A simple solution is to use a recursive decomposition (e.g., an octree) and take the leaf cells (or balls enclosed by their bounding spheres) as primitives (Fig. \ref{figure1} (c)), collected into groups of constant radii according to their level in the tree, for (\ref{eq_ballconv}) to apply. Hubbard \cite{Hubbard1996} showed that octree-based spherical approximation is nonoptimal in terms of Hausdorff convergence, and compares poorly with sampling the centers of primitive balls over the MA (Fig. \ref{figure1} (d)). However, the latter requires a generalization of (\ref{eq_ballconv}) that supports different sizes for the balls, which we address next. \paragraph{\bf Nonequiradius Decomposition} The generalization is enabled by a simple {\it geometric lifting} trick. This time, let $B_i~(1 \leq i \leq n)$ be translated and scaled instances of the base shape $B_0 \in \mathcal{S}$, i.e., instantiated by the affine transformation $\mathbf{x} \mapsto \mathbf{x}_i + (r_i \mathbf{x})$, i.e., $B_i = \{ \mathbf{x}_i + (r_i \mathbf{x}) ~|~ \mathbf{x} \in B_0 \}$. Each such transformation can be abstracted by a 4D point $\mathbf{a}_i := (\mathbf{x}_i , r_i) \in {\R^4}$. 
The discrete pointset $A := \{ \mathbf{a}_i \}_{1 \leq i \leq n} \subset {\R^4}$ (of cardinality $|A| = n$) contains all the information to reconstruct the solid, and $S = S_n(A)$ can be viewed as a Minkowski product \cite{Roerdink2000} of the form $S_n(A) = A \otimes \gamma_0(B_0)$, defined over the group of the aforementioned instance transformations $G \cong {\R^4}$, where $\gamma_r: {\R^3} \hookrightarrow {\R^4}$, $\gamma_r(\mathbf{x}) = (\mathbf{x}, r)$, and $\gamma_0(B_0) = B_0 \times \{0\}$ is a trivial embedding in $G$. A more helpful way of looking at this formulation is to think of each primitive $B_i \subset {\R^3}$ as a cross-section (i.e., a 3D `slice' orthogonal to the $r-$axis at $r = 0$) through a hypothetical hypercone $C_i \subset {\R^4}$ whose apex is located at $(\mathbf{x}_i, r_i) \in A$, illustrated in Fig \ref{figure3} (c, d). To ensure compactness of the 4D objects, let us replace the unbounded cones $C_i$ with {\it trimmed} half-cones $D_i \subset {\R^4}$: \begin{align} C_i := C(\mathbf{x}_i, r_i) &= \{ (\mathbf{x}, r) \in {\R^4} ~|~ \|\mathbf{x} - \mathbf{x}_i \|_2 \leq |r - r_i| \}, \nonumber \\ D_i := D(\mathbf{x}_i, r_i) &= C(\mathbf{x}_i, r_i) \cap ({\R^3} \times [r_i - L, r_i]), \label{eq_L2cone} \end{align} where $L > \max_{1 \leq i \leq n} r_i$ to guarantee that all displaced half-cones will intersect the $r = 0$ hyperplane. If we define $K_n := \bigcup_{1 \leq i \leq n} D_i$, the 3D solid $S_n (A)$ becomes a slice of the 4D solid $K_n(A)$ at $r = 0$, where $K_n: \mathds{R}^{4n} \rightarrow \mathcal{P}({\R^4})$ is the discretization mapping illustrated in Fig. \ref{figure3} (d, e). Unlike the scaled primitives $B_i$ that have different sizes, their cones $C_i$ and $D_i$ are all translated instances of the same base shapes $C_0$ and $D_0$, respectively, whose apexes are at the origin. Their union can thus be viewed as a Minkowski sum $K_n(A) = A \oplus D_0$. 
The 3D solid is then obtained as a 3D slice $S_n(A) = K_n(A) |_{r=0} := \gamma_0^{-1}(K_n(A) \cap \gamma_0({\R^3}))$,\footnote{For a set $K \subset {\R^4}$, we use the simplified notation $K|_{r=r_0}$ for its $r=r_0$ slice projected to ${\R^3}$, i.e., $K|_{r=r_0} := \gamma_{r_0}^{-1}(K \cap \gamma_{r_0}({\R^3}))$, where $\gamma_r: {\R^3} \hookrightarrow {\R^4}$ is defined as $\gamma_r(\mathbf{x}) = (\mathbf{x}, r)$ hence $\gamma_r^{-1}(\mathbf{x}, r) = \mathbf{x}$.} illustrated in Fig. \ref{figure3} (e, f). To obtain an analytic form for the 4D geometry similar to (\ref{eq_ballconv}), let $D_0 = U^\ast_0(f_{D_0})$, hence a defining function for each compact cone can be obtained as $f_{D_i} = f_{D_0}(\mathbf{a} - \mathbf{a}_i)$, where $\mathbf{a} = (\mathbf{x}, r) \in {\R^4}$. If we form an impulsive density function similar to (\ref{eq_rhoP}) \begin{align} \rho_A (\mathbf{a}) :&= \sum_{1 \leq i \leq n} c_i \varsigma_{\mathbf{a}_i}(\mathbf{a}) = \sum_{1 \leq i \leq n} c_i \delta^4(\mathbf{a} - \mathbf{a}_i), \label{eq_rhoA} \end{align} where $\delta^4(\mathbf{a}) = \delta^3(\mathbf{x}) \delta(r)$, the convolution in (\ref{eq_ballconv}) generalizes to the $4-$space as \begin{align} f_{K_n(A)} = \rho_A \ast f_{D_0}, ~\text{i.e.,}~ A \oplus D_0 = U^\ast_0(\rho_A \ast f_{D_0}), \label{eq_coneconv} \end{align} whose domain restriction to the $r = 0$ hyperplane gives a bump function for the 3D solid as $f_{S_n(A)} = f_{K_n(A)}|_{r=0}$,\footnote{For a function $f_{K}: {\R^4} \rightarrow \mathds{R}$, we use the notation $f_K|_{r=r_0}: {\R^3} \rightarrow \mathds{R}$ to denote the restriction (and trivial projection) of its domain to the $r = r_0$ hyperplane, i.e., $f_K|_{r=r_0}(\mathbf{x}) = f_K(\mathbf{x}, r_0)$.} therefore $S_n(A) = U^\ast_0(f_{K_n(A)}|_{r=0})$. 
As before, we can let $f_{C_0} (\mathbf{x}, r) := \psi_\alpha (\| \mathbf{x} \|_p / r)$ whose unbounded support is smoothly trimmed along the 4th dimension to $r \in (-L, 0)$ as $f_{D_0} (\mathbf{x}, r) := \psi_\alpha (\| \mathbf{x} \|_p / r) \psi_\alpha(1 - 2r/L).$ \subsection{Sampling Strategies} \label{sec_pack} Clearly, a solid $S \in \mathcal{S}$ of arbitrary shape cannot in general be exactly constructed as a finite union of balls $S_n(A) = A \otimes \gamma_0(B_0)$, i.e., a 3D slice of $K_n(A) = A \oplus D_0$. However, a similar construction is possible by replacing the finite set of ball centers $P$ and its radius-lift $A$, with the MA (denoted by $\mathcal{M}[\imath S]$) and the MAT (denoted by $\mathcal{T}[\imath S]$), respectively. In fact $S = \mathcal{T}[\imath S] \otimes^\ast \gamma_0(B_0)$, which is a 3D slice of $K = \mathcal{T}[\imath S] \oplus^\ast D_0$.\footnote{The MA/MAT of an r-set are not necessarily closed, and neither are their Minkowski sums with a closed ball/cone, which is why the regularized Minkowski sum/product (denoted by $\oplus^\ast/\otimes^\ast$) need to be used.} This construction can be thought of as sweeping a resizeable ball along the MA (with prespecified scaling for the balls along the MA trajectory), or equivalently, sweeping a rigid cone along the MAT in 4D followed by a 3D slicing. Unfortunately, the convolution formulation is not as simple in this case, because MA and MAT are not homogeneous, but are in general made of $2-$, $1-$ or $0-$dimensional subanalytic components for 3D solids \cite{Chazal2004}. We conjecture that it is possible to generalize the density function $\rho_A$ defined in (\ref{eq_rhoA}) to $\rho_{\mathcal{T}[\imath S]}$, by using Dirac delta functions of various orders over different regions of $\mathcal{T}[\imath S]$ depending on their dimensionalities, whose formal treatment we postpone to a follow-up study on skeletal density functions (SDF). 
Here we take a simpler approach, by assuming sequences of finite samples $A \subset {\R^4}$ of increasing size $|A| = 1, 2, \cdots$ that progressively approximate the shape. The set $S_n(A)$ is called an $\epsilon-$approximation of $S$ if $d_H(S_n(A), S) \leq \epsilon$, where $d_H$ denotes the Hausdorff $L^2-$metric. It is important to emphasize that our formulation in Section \ref{sec_count} does not impose any theoretical restriction on the sampling algorithm, as long as it guarantees that as $n \rightarrow \infty$, $S_n(A)$ converges to $S$ (i.e., $\lim_{|A| \rightarrow \infty} d_H(S_n(A), S) = 0$), and the convolution in (\ref{eq_coneconv}) holds in the limit for the {\it countably} infinite set of knots $A$. A variety of methods in use in the CD literature \cite{OSullivan1999,Hubbard1996,Bradshaw2004,Weller2011} (Figs. \ref{figure1} (c--e)) can be employed, two of which are briefly reviewed here due to their theoretical significance and computational relationship with our algorithm in \ref{app_sampling}. Hubbard \cite{Hubbard1996} proposed an algorithm that populates the maximal balls over the MA (Fig. \ref{figure1} (d)), obtained from pruning the Voronoi diagram of a dense sampling over the boundary, and follows a principle of conservative coverage to create a bounding sphere-tree. In terms of our formulation, this is equivalent to selecting $A \subset \mathcal{T}[\imath S]$ and has been shown in \cite{Hubbard1996} to converge to the shape faster than octree-based sampling (Fig. \ref{figure1} (c)). However, the MA and MAT are unstable with respect to $C^0-$ and $C^1-$perturbations of the boundary \cite{Chazal2004}, making their computation extremely difficult in the presence of noise/errors. Weller and Zachmann \cite{Weller2011} proposed the inner-sphere tree (IST) method, which precomputes the distance function over a uniform grid and uses a greedy algorithm to pack the interior of the solid, giving priority to the largest ball that fits at each step.
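The greedy distance-guided strategy can be sketched as follows. This is an illustrative toy, not the IST or SDF-guided algorithms themselves: clearance values are taken in closed form for a rectangle, whereas in practice they would be read off a precomputed distance field.

```python
import numpy as np

def greedy_packing(pts, clearance, n_balls):
    """Greedy packing in the spirit of the inner-sphere-tree idea: at each
    step, seat the largest ball centered at a still-uncovered interior
    sample. `clearance[i]` is the distance from pts[i] to the boundary."""
    uncovered = np.ones(len(pts), dtype=bool)
    balls = []
    for _ in range(n_balls):
        if not uncovered.any():
            break
        idx = np.flatnonzero(uncovered)
        i = idx[np.argmax(clearance[idx])]       # largest ball that fits
        c, r = pts[i], clearance[i]
        balls.append((c, r))
        # samples swallowed by the new ball no longer seed future balls
        uncovered &= np.linalg.norm(pts - c, axis=1) > r
    return balls

# toy run: a 4 x 2 rectangle, whose clearance is known in closed form
xs, ys = np.linspace(-2, 2, 41), np.linspace(-1, 1, 21)
pts = np.array([(x, y) for x in xs for y in ys])
clearance = np.minimum(2 - np.abs(pts[:, 0]), 1 - np.abs(pts[:, 1]))
balls = greedy_packing(pts, clearance, 8)
```

Every seated ball fits inside the solid by construction, since its radius equals the clearance of its center.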
This approach has proven effective for real-time applications \cite{Weller2011}, but leaves void spaces in the interior of the set, which are undesirable for analytic modeling, and is nonoptimal for thin objects. We use a similar greedy algorithm that is guided by the SDF field \cite{Behandish2014b}, which creates spherical samples similar to the outcomes of MA-based algorithms \cite{Hubbard1996,Bradshaw2004}, without the need to compute and prune the MA, and is capable of producing better approximations than distance-based sphere packing \cite{Weller2011} with fewer balls. Our algorithm guarantees bounds on the Hausdorff distance from the original shape that are proportional to the SDF grid resolution. This property is used in Section \ref{sec_res} as a basis for time complexity comparisons between operations on uniform samples versus spherical samples generated with the same input grid resolution. To avoid distraction from this article's main focus on Minkowski discretizations and their Fourier reconciliations, the details of our spherical decomposition algorithm, along with its topological properties and approximation error bounds, are postponed to \ref{app_sampling}. \subsection{Why Spherical Primitives?} \label{sec_conf} The motion of rigid bodies can be abstracted as trajectories of points in the so-called `configuration space' (commonly abbreviated as the $\mathcal{C}-$space), first introduced to the field of robotics by Lozano-Perez \cite{Lozano-Perez1983}. Every rigid motion in 3D can be parameterized by a tuple \begin{equation} M = (R, \mathbf{t}) \in SE(3), \quad SE(3) \cong SO(3) \ltimes {\R^3}, \end{equation} where $R \in SO(3)$ is a rotation described by a $3 \times 3$ special orthogonal matrix (i.e., $R^\mathrm{T} = R^{-1}$ and $\mathrm{det}(R) = +1$) and $\mathbf{t} \in {\R^3}$ is a translation described by a $3-$vector.
See \ref{app_SE3} for the definition, properties, and terminology of the motion group and its action on r-sets. The advantage of using primitives with spherical symmetry becomes evident in the light of the isometric property of $SE(3)$. A 3D ball $B_0 := B(\mathbf{0}, r)$ is invariant under 3D rotations, i.e., $RB_0 = B_0$ hence $(R, \mathbf{t}) B_0 = B(\mathbf{t}, r)$. The same invariance property can be asserted for 4D cones $C_0$ and $D_0$ whose axes stay parallel to the $r-$axis after 3D translations and rotations. Accordingly, the transformation of $S_n(A)$ and $K_n(A)$ amounts only to a relocation of the center and apex positions. For the equiradius case, the Minkowski sum in (\ref{eq_ballconv}) for the transformed solid is given by \begin{align} &(R, \mathbf{t}) S_n(P) ~= S_n((R, \mathbf{t})P) = ((R, \mathbf{t})P) \oplus B_0, \label{eq_Mink0} \end{align} whose analytic expression is given by the bump function \begin{align} f_{(R, \mathbf{t}) S_n(P)} = \rho_{(R, \mathbf{t})P} \ast f_{B_0} = \left[ \rho_{P} \circ (R, \mathbf{t})^{-1} \right] \ast f_{B_0}, \label{eq_RPconv} \end{align} where the lumped density \begin{align} \rho_{(R, \mathbf{t})P} (\mathbf{x}) = \sum_{1 \leq i \leq n} c_i \delta^3(\mathbf{x} - (R, \mathbf{t})\mathbf{x}_i), \end{align} is an implicit representation of the transformed set of 3D knots $(R, \mathbf{t})P = \{(R, \mathbf{t})\mathbf{x} ~|~ \mathbf{x} \in P \}$. 
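A quick numerical check of this knot-relocation property (all names illustrative; a random proper rotation is generated via QR factorization): membership of a query point in the moved union of balls agrees with membership of the pulled-back point in the original union, with the radii untouched.

```python
import numpy as np

def in_union(x, centers, radii):
    """Membership in S_n(A): is x inside any ball B(centers[i], radii[i])?"""
    return bool(np.any(np.linalg.norm(centers - x, axis=1) <= radii))

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q          # proper rotation, det = +1
t = rng.standard_normal(3)

centers = rng.standard_normal((5, 3))          # knot centers x_i
radii = rng.uniform(0.2, 0.6, 5)               # knot radii r_i
x = rng.standard_normal(3)                     # a query point

# x in (R, t)S_n(A)  <=>  pulled-back point R^T (x - t) in S_n(A) ...
lhs = in_union(R.T @ (x - t), centers, radii)
# ... <=>  x in S_n((R, t)A): only the knot centers move, radii are invariant
rhs = in_union(x, centers @ R.T + t, radii)
```

The two tests agree because $\|R\mathbf{x}_i + \mathbf{t} - \mathbf{x}\| = \|\mathbf{x}_i - R^\mathrm{T}(\mathbf{x} - \mathbf{t})\|$, which is exactly the isometric property exploited in the text.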
In a similar fashion, for the nonequiradius case the Minkowski sum in (\ref{eq_coneconv}) for the transformed lifted geometry is given by \begin{align} &(R, \mathbf{t}) K_n(A) = K_n((R, \mathbf{t})A) = ((R, \mathbf{t})A) \oplus D_0, \label{eq_Mink2} \end{align} whose analytic expression is given by the bump function \begin{align} f_{(R, \mathbf{t}) K_n(A)} = \rho_{(R, \mathbf{t})A} \ast f_{D_0} = \left[ \rho_{A} \circ (R, \mathbf{t})^{-1} \right] \ast f_{D_0}, \label{eq_RAconv} \end{align} where the lumped density \begin{align} \rho_{(R, \mathbf{t})A} (\mathbf{a}) = \sum_{1 \leq i \leq n} c_i \delta^4(\mathbf{a} - (R, \mathbf{t})\mathbf{a}_i), \label{eq_rhoAtrans} \end{align} is an implicit representation of the transformed set of 4D knots $(R, \mathbf{t})A = \{(R, \mathbf{t})\mathbf{a} ~|~ \mathbf{a} \in A \}$, using the trivial extension $(R, \mathbf{t})\mathbf{a} = ((R, \mathbf{t})\mathbf{x}, r)$ for $\mathbf{a} = (\mathbf{x}, r) \in {\R^4}$. The strength of the discretization schemes that transform according to (\ref{eq_Mink0}) through (\ref{eq_rhoAtrans}) lies in the rotation invariance of the primitive sets $B_0$ or $D_0$ and their radial kernels $f_{B_0}$ or $f_{D_0}$, which appear in the same form in all equations before and after motion. We show in Section \ref{sec_cor} that the same form is conserved across Minkowski sums and related cross-correlations between pairs of discretized objects. The practical implication is that the primitives need not play an explicit role in the computations, and numerical algorithms deal only with the discrete sets $P$ or $A$ or their density functions $\rho_P$ or $\rho_A$. \subsection{Fourier Expansions} \label{sec_Fourier} A major motivation behind using analytic methods is the efficient tractability of convolutions and differentiations via Fourier transforms.
Using the orthonormal basis of the form $e^{2\pi \bm{\mathsf{i}} (\bm{\upomega} \cdot \mathbf{x})}$ (where $\bm{\mathsf{i}}^2 = -1$), every r-set's bump function $f_S \in L^2({\R^3})$ can be expanded into its components given by $\hat{f}_S \in L^2({\R^3})$, using the continuous Fourier transform (CFT):\footnote{Bump functions are square-integrable, i.e., $C^\infty_0({\R^3}) \subset L^2({\R^3})$.} \begin{equation} \hat{f}_S = \langle f_S, e^{ +2\pi \bm{\mathsf{i}} (\bm{\upomega} \cdot \mathbf{x})} \rangle ~\rightleftharpoons~ f_S = \langle \hat{f}_S, e^{ -2\pi \bm{\mathsf{i}} (\bm{\upomega} \cdot \mathbf{x})} \rangle. \label{eq_CFT_def} \end{equation} For most applications such as the ones discussed in Section \ref{sec_app}, the current analytic methods rely on discretizing the defining functions over a dense sample of points in the interior \cite{Kavraki1995,Chen2003a,Lysenko2013}, which turns the CFT into a discrete Fourier transform (DFT). If the sampling is over a uniform grid, the DFT can be implemented very efficiently using the well-known fast Fourier transform (FFT) algorithm, first discovered by Cooley and Tukey \cite{Cooley1965}. The rotations are handled by an interpolation over the frequency grid and translations are embedded into convolutions. However, when the objects are discretized with spherical primitives whose centers form a nonuniform set of points, the CFT interpolates a nonequispaced DFT (NDFT), to which the classical FFT algorithms do not apply. Potts et al. \cite{Potts2001} developed a nonequispaced FFT (NFFT) algorithm for efficient implementation of NDFT sums in asymptotically similar running times with the classical FFT. See \ref{app_CFT} for the definition, properties, and complexity of Fourier transforms. 
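A direct (quadratic-cost) evaluation of such an NDFT sum takes only a few lines; this brute-force sketch, with illustrative names, is precisely what an NFFT library accelerates:

```python
import numpy as np

def ndft(omegas, knots, weights):
    """Direct O(nm) evaluation of a one-sided NDFT: for each frequency
    sample omega, compute sum_i c_i exp(-2 pi i omega . x_i). An NFFT
    library approximates the same sums in roughly FFT-like running time."""
    return np.exp(-2j * np.pi * (omegas @ knots.T)) @ weights

rng = np.random.default_rng(0)
knots = rng.standard_normal((6, 3))      # nonuniform 3D knots x_i
weights = rng.uniform(0.5, 1.5, 6)       # weights c_i
omegas = rng.standard_normal((10, 3))    # arbitrary frequency samples
rho_hat = ndft(omegas, knots, weights)   # density spectrum at those samples
```

At the zero frequency the sum reduces to $\sum_i c_i$, a convenient sanity check.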
If an r-set $S \in \mathcal{S}$ moves via $(R, \mathbf{t}) \in SE(3)$, its transformed bump function changes in the frequency domain as $\hat{f}_{(R, \mathbf{t})S} = \hat{\varsigma}_\mathbf{t} \hat{f}_{RS} = \hat{\varsigma}_\mathbf{t} (\hat{f}_{S} \circ R^\mathrm{T})$, where $\varsigma_\mathbf{t}(\mathbf{x}) := \delta(\mathbf{x} - \mathbf{t})$ denotes a shifted Dirac delta function that transfers to $\hat{\varsigma}_\mathbf{t}(\bm{\upomega}) = e^{-2\pi \bm{\mathsf{i}} (\bm{\upomega} \cdot \mathbf{t})}$. For an equiradius discretization $S_n(P) = P \oplus B_0$ the Fourier expansion of $f_{S_n(P)} = \rho_{P} \ast f_{B_0}$ is a simple product $\hat{f}_{S_n(P)} = \hat{\rho}_{P} \hat{f}_{B_0}$. Applying the CFT to (\ref{eq_RPconv}) gives \begin{align} \hat{f}_{(R, \mathbf{t})S_n(P)} = \hat{\varsigma}_\mathbf{t} \hat{\rho}_{RP} \hat{f}_{B_0} = \hat{\varsigma}_\mathbf{t} \left( \hat{\rho}_{P} \circ R^\mathrm{T} \right) \hat{f}_{B_0}, \label{eq_fballhat} \end{align} where the density function given in (\ref{eq_rhoP}) is transferred to \begin{align} \hat{\rho}_{P} (\bm{\upomega}) = \sum_{1 \leq i \leq n} c_i \hat{\varsigma}_{\mathbf{x}_i}(\bm{\upomega}) = \sum_{1 \leq i \leq n} c_i e^{ -2\pi \bm{\mathsf{i}} (\bm{\upomega} \cdot \mathbf{x}_i)}. \label{eq_rhoPhat} \end{align} The evaluation of (\ref{eq_rhoPhat}) from a nonuniform set of 3D knots to a uniform 3D frequency grid amounts to a one-sided 3D NDFT computation. For nonequiradius discretization $S_n(A) = K_n (A)|_{r=0}$ where $K_n (A) = A \oplus D_0$, we have $f_{K_n(A)} = \rho_{A} \ast f_{D_0}$ hence $\hat{f}_{K_n(A)} = \hat{\rho}_{A} \hat{f}_{D_0}$. 
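The factorization $\hat{f}_{S_n(P)} = \hat{\rho}_{P} \hat{f}_{B_0}$ can be made concrete on a 1D circular analogue (hypothetical knot positions and a Gaussian stand-in for the radial kernel, not the paper's exact kernels):

```python
import numpy as np

n = 64
knots, weights = [5, 20, 33], [1.0, 2.0, 1.5]    # hypothetical 1D knots, c_i
rho = np.zeros(n)
rho[knots] = weights                              # impulsive density rho_P
bump = np.exp(-np.linspace(-4.0, 4.0, n) ** 2)    # stand-in for f_{B_0}
# physical domain: the field is a superposition of (circularly) shifted kernels
f = sum(c * np.roll(bump, k) for k, c in zip(knots, weights))
# frequency domain: the very same field is a plain pointwise product
f_hat = np.fft.fft(rho) * np.fft.fft(bump)
```

The spectrum of the superposition equals the product of the knot spectrum and the kernel spectrum, which is the discrete shadow of $\hat{f}_{S_n(P)} = \hat{\rho}_{P} \hat{f}_{B_0}$.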
Applying the CFT to (\ref{eq_RAconv}) gives \begin{align} \hat{f}_{(R, \mathbf{t})K_n(A)} = \hat{\varsigma}_{\mathbf{a}} \hat{\rho}_{RA} \hat{f}_{D_0} = \hat{\varsigma}_{\mathbf{a}}\left( \hat{\rho}_{A} \circ R^\mathrm{T} \right) \hat{f}_{D_0}, \label{eq_fconehat} \end{align} where $\mathbf{a} = (\mathbf{t}, r) \in {\R^4}$ represents a lifted translation, and the density function given in (\ref{eq_rhoA}) is transferred to \begin{align} \hat{\rho}_{A} (\bm{\upupsilon}) = \sum_{1 \leq i \leq n} c_i \hat{\varsigma}_{\mathbf{a}_i}(\bm{\upupsilon}) = \sum_{1 \leq i \leq n} c_i e^{-2\pi \bm{\mathsf{i}} (\bm{\upupsilon} \cdot \mathbf{a}_i)}, \label{eq_rhoAhat} \end{align} in which $\hat{\varsigma}_{\mathbf{a}}(\bm{\upupsilon}) = \hat{\varsigma}_{\mathbf{t}}(\bm{\upomega}) e^{-2\pi \bm{\mathsf{i}} (\eta r)}$ for the lifted physical domain $\mathbf{a}_i = (\mathbf{x}_i, r_i) \in A$ is obtained in a similar fashion to (\ref{eq_rhoPhat}) using NDFTs, except that the frequency domain is also lifted to 4D as $\bm{\upupsilon} = (\bm{\upomega}, \eta) \in {\R^4}$. \begin{figure*} \centering \includegraphics[width=\textwidth]{figure4} \caption{The $\mathcal{C}-$obstacle is obtained as a Minkowski sum. The discretization scheme is closed under Minkowski sums for both equiradius and nonequiradius samples. For the latter, the summands must have the same cone orientation along the $r-$axis, requiring a pre-reflection.} \label{figure4} \end{figure*} \section{Correlation Functions} \label{sec_cor} Having defined a spherical sampling in terms of a Minkowski sum of discrete knots and balls/cones alongside their analytic formulation in Section \ref{sec_samp}, we now investigate how they embed into cross-correlations between pairs of objects. In \ref{sec_cross}, we show that spherical discretization structure is preserved and carried into configuration pointsets, whose Fourier formulation is given in \ref{sec_CFT}. 
\subsection{Discrete Correlations} \label{sec_cross} Given two r-sets $S_1, S_2 \in \mathcal{S}$, we define their `correlation function' as $g_{S_1, S_2}: SE(3) \times SE(3) \rightarrow \mathds{R}$ where \begin{align} g_{S_1, S_2}(M_1, M_2) &= \langle f_{M_1 S_1}, f_{M_2 S_2} \rangle, \label{eq_cor_M1M2} \end{align} accumulates the pointwise multiplication of the overlapped shape descriptor functions of $S_1$ and $S_2$ moved using $M_1 = (R_1, \mathbf{t}_1)$ and $M_2 = (R_2, \mathbf{t}_2)$, respectively. The function in (\ref{eq_cor_M1M2}) can formulate, for example, a holonomic collision constraint (Section \ref{sec_col}), a shape complementarity metric (Section \ref{sec_comp}), or a morphological operator (Section \ref{sec_prod}). For instance, comparing (\ref{eq_cor_M1M2}) with (\ref{eq_null}) shows that if $S_1 = U^\ast_0(f_{S_1})$ and $S_2 = U^\ast_0(f_{S_2})$, then $g_{S_1, S_2}(M_1, M_2)$ defines a collision predicate for the moved solids, i.e., $M_1 S_1 \cap^\ast M_2 S_2 \neq \emptyset$ iff $g_{S_1, S_2}(M_1, M_2) > 0$. It is easy to show that the correlation function only depends on the {\it relative} configuration $M := M_1^{-1} M_2$, i.e., the motion of $S_2$ observed from a coordinate frame attached to $S_1$. Letting $M = (R, \mathbf{t}) = (R_1, \mathbf{t}_1)^{-1}(R_2, \mathbf{t}_2) = (R^\mathrm{T}_1 R_2, R^\mathrm{T}_1(\mathbf{t}_2 - \mathbf{t}_1))$, the alternative formulation $g_{S_1,S_2}: SE(3) \rightarrow \mathds{R}$ becomes \begin{align} g_{S_1, S_2}(R, \mathbf{t}) &= \langle f_{S_1}, (f_{R S_2} \circ T^{-1}) \rangle, \label{eq_relg_0} \end{align} noting that $f_{(R, \mathbf{t})S} = f_{RS} \circ T^{-1} = (f_S \circ R^\mathrm{T}) \circ T^{-1}$ where $T: {\R^3} \rightarrow {\R^3}$, $T\mathbf{x} = \mathbf{x} + \mathbf{t}$. The inner product in (\ref{eq_relg_0}) can be viewed as a 6D noncommutative convolution over $SE(3)$ \cite{Lysenko2011a}.
For numerical tractability, we decompose the motion into rotational and translational parts, and view the latter as a 3D commutative convolution: \begin{align} g_{S_1, S_2}(R, \mathbf{t}) &= \left( f_{RS_2} \star f_{S_1} \right) (\mathbf{t}) = \left( f_{S_1} \ast f_{-RS_2} \right) (\mathbf{t}), \label{eq_relg} \end{align} where $-S = \{-\mathbf{x} ~|~ \mathbf{x} \in S\}$ denotes a reflection with respect to the origin. This defines a {\it relative} collision predicate, i.e., $S_1 \cap^\ast (R, \mathbf{t}) S_2 \neq \emptyset$ iff $g_{S_1, S_2}(R, \mathbf{t}) > 0$, and brings us to the important concept of a `configuration obstacle' (or the $\mathcal{C}-$obstacle for short) in robotics \cite{Lozano-Perez1983}, defined as \begin{equation} O_{S_1, S_2} = \{ (R, \mathbf{t}) \in SE(3) ~|~ g_{S_1, S_2}(R, \mathbf{t}) > 0 \}, \label{eq_obs} \end{equation} whose complement $cO_{S_1, S_2}$ is the `free space'. The obstacle space does not contain the contact space by definition, i.e., is an open set, whose closure $O^\ast_{S_1, S_2} := \kappa O_{S_1, S_2}$ is a 6D r-set \cite{Lysenko2013}. For a fixed rotation $R \in SO(3)$ the translational obstacle defined by \begin{equation} O_{S_1, S_2}|_{R} := \{ \mathbf{t} \in {\R^3} ~|~ g_{S_1, S_2}(R, \mathbf{t}) > 0 \} \end{equation} is a 3D slice through the 6D obstacle, whose closure is obtained by offsetting $S_1$ with $-RS_2$ given by a Minkowski sum \begin{equation} O^\ast_{S_1, S_2}|_{R} = S_1 \oplus (-R S_2) = U^\ast_0 (f_{S_1} \ast f_{-RS_2}). \label{eq_Tobs} \end{equation} As a result of the definition in (\ref{eq_obs}), $g_{S_1, S_2}(R, \mathbf{t})$ serves as a defining function of the regularized $\mathcal{C}-$obstacle as a $0-$sublevel set $O^\ast_{S_1, S_2} = U^\ast_0(g_{S_1, S_2})$, referred to as the `gap function' \cite{Lozano-Perez1983}. 
Similarly, the restriction of the gap function to a fixed $R \in SO(3)$ is a defining function for the 3D slice $O^\ast_{S_1, S_2}|_{R} = U^\ast_0(g_{S_1, S_2}|_{R})$, where $g_{S_1, S_2}|_{R} = f_{S_1} \ast f_{-RS_2}$. Next we investigate the discretization of the $\mathcal{C}-$obstacle (and the Minkowski sums in general) when spherical sampling is used for the constituent parts. \paragraph{\bf Equiradius Correlations} First, let $S_1 = S_{n_1}(P_1)$ and $S_2 = S_{n_2}(P_2)$ be composed of instances of balls denoted by $B_1 := B(\mathbf{0}, r_1)$ and $B_2 := B(\mathbf{0}, r_2)$, respectively. Substituting from (\ref{eq_Mink0}) in (\ref{eq_Tobs}), and noting the invariance of the balls to reflection and rotation (i.e., $-RB_2 = B_2$) and the commutativity of Minkowski sums, we obtain \begin{align} O^\ast_{S_1, S_2}|_{R} &= (P_1 \oplus B_1) \oplus ((-R P_2) \oplus B_2) \label{eq_ORP_1} \\ &= (P_1 \oplus (-R P_2)) \oplus (B_1 \oplus B_2) \label{eq_ORP_2}\\ &= P_O|_{R} \oplus B_O. \label{eq_ORP} \end{align} The first term $P_O|_{R} := P_1 \oplus (-R P_2)$ is a finite set of $n_1 n_2$ points in the 3D translation space obtained from pairwise summations of ball centers in $P_1$ and $-R P_2$. It represents the discrete translational obstacle $O_{P_1, P_2}|_{R}$, a 3D slice through $P_O := O_{P_1, P_2} \subset SE(3)$ defining a collection of $n_1 n_2$ curves. The second term $B_O := B_1 \oplus B_2 = B(\mathbf{0}, r_O)$ is a ball of radius $r_O = r_1 + r_2$ in the translational space, representing the `primitive obstacle' $O_{B_1, B_2}|_{R}$, which is a 3D slice of $O_{B_1, B_2} \subset SE(3)$. Therefore, the total obstacle itself is a finite union of balls, i.e., discretized with the same scheme as the original objects: $O^\ast_{S_1, S_2}|_{R} = S_{n_1 n_2} (P_O|_R)$. An illustration is given in Fig. \ref{figure4} (a--c). 
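The obstacle-knot construction reduces to pairwise sums of centers plus a single enlarged radius; a minimal sketch (illustrative function names) for single-ball "solids":

```python
import numpy as np

def obstacle_knots(P1, r1, P2, r2, R):
    """Knots and radius of the translational obstacle slice
    O*|_R = P_O ⊕ B_O: pairwise sums P_O = P1 ⊕ (-R P2), r_O = r1 + r2."""
    mRP2 = -(P2 @ R.T)                                   # the set -R P2
    PO = (P1[:, None, :] + mRP2[None, :, :]).reshape(-1, 3)
    return PO, r1 + r2

def collides(t, PO, rO):
    """S1 and (R, t)S2 overlap iff the translation t lies inside some
    obstacle ball B(p, r_O) with p in P_O|_R."""
    return bool(np.any(np.linalg.norm(PO - t, axis=1) < rO))

# two single-ball solids of radius 0.5 at the origin: the obstacle is B(0, 1)
PO, rO = obstacle_knots(np.zeros((1, 3)), 0.5, np.zeros((1, 3)), 0.5, np.eye(3))
```

For $n_1$ and $n_2$ knots the same two functions produce all $n_1 n_2$ obstacle balls at once; no geometry beyond the center lists ever enters the computation.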
The analytic formulation in (\ref{eq_relg}) develops in parallel as \begin{align} g_{S_1, S_2}|_{R} &= (\rho_{P_1} \ast f_{B_1}) \ast (\rho_{-RP_2} \ast f_{B_2}) \\ &= (\rho_{P_1} \ast \rho_{-RP_2}) \ast (f_{B_1} \ast f_{B_2}), \\ &= \rho_{P_O|_{R}} \ast f_{B_O}, \label{eq_convgP} \end{align} where $\rho_{P_O|_{R}} := (\rho_{P_1} \ast \rho_{-RP_2})$ is the impulsive density function of the discrete pointset $P_O|_{R}$ made of $n_1n_2$ impulses corresponding to pairwise cross-correlation of Dirac delta terms from the constituents,\footnote{Note that if $\rho_1(\mathbf{x}) := \delta^3(\mathbf{x} - \mathbf{x}_1)$, $\rho_2(\mathbf{x}) := \delta^3(\mathbf{x} - \mathbf{x}_2)$, and $\tilde{\rho}(\mathbf{x}) := \rho(-\mathbf{x})$, then $(\rho_2 \star \rho_1) (\mathbf{t}) = (\rho_1 \ast \tilde{\rho}_2) (\mathbf{t}) = \delta^3(\mathbf{t} - (\mathbf{x}_1 + \mathbf{x}_2))$.} while $f_{B_O} := f_{B_1} \ast f_{B_2}$ is a cross-correlation of two radial bumps of radii $r_1$ and $r_2$, leading to another radial bump of radius $r_O = r_1 + r_2$ that defines the obstacle ball $B_O$. Note that if we choose $f_{B_1} (\mathbf{x}) := \psi_\alpha(\|\mathbf{x}\|_2/r_1)$ and $f_{B_2} (\mathbf{x}) := \psi_\alpha(\|\mathbf{x}\|_2/r_2)$ using the form in (\ref{eq_psi}), their convolution does not take the same form, i.e., $f_{B_O} (\mathbf{x}) \neq \psi_\alpha(\|\mathbf{x}\|_2/r_O)$. However, the latter can be safely replaced for the last term in (\ref{eq_convgP}) without changing the obstacle $O^\ast_{S_1, S_2}|_{R} = U^\ast_0(g_{S_1, S_2}|_{R})$, since the bump form choice is arbitrary as long as $B_O = U^\ast_0(f_{B_O})$. \paragraph{\bf Nonequiradius Correlations} The generalization to $S_1 = S_{n_1}(A_1)$ and $S_2 = S_{n_2}(A_2)$ made of primitive balls of different sizes is not straightforward. This is because the commutativity of the Minkowski sum $S_n(P) = P \oplus B_0$ that led from (\ref{eq_ORP_1}) to (\ref{eq_ORP_2}) does not hold for the product $S_n(A) = A \otimes \gamma_0(B_0)$. 
In terms of the lifted geometry, this manifests as the observation that the 3D Minkowski sum of the cross-sections is not equal to the cross-section of the 4D Minkowski sum; or in other words, a collision between $K_1 = K_{n_1}(A_1)$ and $K_2 = K_{n_2}(A_2)$ does not necessarily imply a collision between the cross-sections $S_1 = K_{n_1}(A_1)|_{r=0}$ and $S_2 = K_{n_2}(A_2)|_{r=0}$. At the primitive level, this is because the 4D half-cones, despite being invariant under 3D rotations and reflections, are {\it not} invariant under 4D reflections, hence $D_0 \neq -D_0$ and the sum $D_0 \oplus (-D_0)$ (that appears in $K_1 \oplus (-RK_2)$) does {\it not} give a half-cone in the $\mathcal{C}-$space. Fortunately, this can be solved by a pre-reflection with respect to the $r = 0$ hyperplane of one of the two lifted shapes. If we let \begin{equation} \breve{K} = \{ (\mathbf{x}, -r) ~|~ (\mathbf{x}, r) \in K \}, \quad (K, \breve{K} \subset {\R^4}) \end{equation} denote the $r-$mirror image of the 4D set $K$, the 3D set can be retrieved from both of them as $S = K|_{r=0} = \breve{K}|_{r=0}$. Then the nonequiradius discretization scheme in (\ref{eq_Mink2}) gives $\breve{K}_n(\breve{A}) = \breve{A} \oplus \breve{D}_0$ and $-\breve{K}_n(\breve{A}) = (-\breve{A}) \oplus D_0$, noting that $D_0 = -\breve{D}_0$ and the sum $D_0 \oplus (-\breve{D}_0)$ (that appears in $K_1 \oplus (-R\breve{K}_2)$) is a half-cone of double the size in the $\mathcal{C}-$space. Furthermore, it is easy to prove that in this case, for two collections of half-cones of {\it opposite} directions that intersect the $r = 0$ hyperplane, a collision between the 4D solids does in fact imply a collision between the 3D slices: \begin{lemma} \label{lemma_2} $S_1 \cap^\ast (R, \mathbf{t})S_2 \neq \emptyset ~\rightleftharpoons~ K_1 \cap^\ast (R, \mathbf{t})\breve{K}_2 \neq \emptyset$. 
\end{lemma} \begin{proof} The proof is straightforward, by noting that for every pair of 4D cones $D_1 := D(\mathbf{x}_1, r_1)$ and $D_2 := D(\mathbf{x}_2, r_2)$ corresponding to $(\mathbf{x}_1, r_1) \in A_1$ and $(\mathbf{x}_2, r_2) \in A_2$, respectively, they intersect after flipping one of them upside down (i.e., $D_1 \cap^\ast \breve{D}_2 \neq \emptyset$) if and only if their 3D slices intersect (i.e., $D_1|_{r=0} \cap^\ast \breve{D}_2|_{r=0} \neq \emptyset$). The assertion is very easy to picture for 3D cones whose slices are 2D disks. \end{proof} As a direct corollary, we can define a 4D translational $\mathcal{C}-$obstacle that is discretized with the same scheme as \begin{align} O^\ast_{K_1, \breve{K}_2}|_{R} &= (A_1 \oplus D_0) \oplus ((-R \breve{A}_2) \oplus D_0) \\ &= (A_1 \oplus (-R \breve{A}_2)) \oplus (D_0 \oplus D_0) \\ &= A_O|_{R} \oplus D_O, \label{eq_ORA} \end{align} and the 3D obstacle is a slice $O^\ast_{S_1, S_2}|_{R} = \big[O^\ast_{K_1, \breve{K}_2}|_{R}\big]_{r=0}$. The first term $A_O|_{R} := A_1 \oplus (-R \breve{A}_2)$ is a finite set of $n_1 n_2$ points in the 4D translation space obtained from pairwise summations of cone apexes in $A_1$ and $-R \breve{A}_2$, which is the same as $O^\ast_{A_1, \breve{A}_2}|_{R}$. In this case, the primitive obstacle $D_O := D_0 \oplus D_0$ is a larger half-cone with a height of $2L$, which is equal to $O^\ast_{D_0, D_0}|_{R}$. Therefore, the $\mathcal{C}-$obstacle is discretized with the same scheme as $O^\ast_{S_1, S_2}|_{R} = S_{n_1 n_2} (A_O|_R) = \big[ K_{n_1 n_2} (A_O|_R) \big]_{r=0}$. An illustration is given in Fig. \ref{figure4} (d--f). 
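The height-independence that drives Lemma \ref{lemma_2} is easy to verify numerically for 45-degree cones: with opposite orientations, the slice radii $r_1 - r$ and $r_2 + r$ always add up to $r_1 + r_2$, so the overlap verdict cannot depend on the height. This is an illustrative sketch (ample cone heights assumed):

```python
import numpy as np

def slice_overlap(x1, r1, x2, r2, r):
    """At height r, the down-cone from knot (x1, r1) slices to a ball of
    radius r1 - r, while the r-mirrored up-cone from (x2, r2) slices to one
    of radius r2 + r (45-degree cones, heights large enough to cover r)."""
    rad1, rad2 = r1 - r, r2 + r
    if rad1 < 0 or rad2 < 0:              # the height misses one of the cones
        return False
    return bool(np.linalg.norm(x1 - x2) <= rad1 + rad2)

x1, x2 = np.zeros(3), np.array([1.2, 0.0, 0.0])
heights = np.linspace(-0.4, 0.4, 9)
# rad1 + rad2 = r1 + r2 at every height, so the verdict is constant in r:
miss = [slice_overlap(x1, 0.5, x2, 0.6, r) for r in heights]  # 1.1 < 1.2
hit = [slice_overlap(x1, 0.5, x2, 0.9, r) for r in heights]   # 1.4 >= 1.2
```

Without the pre-reflection both cones would slice to radii $r_1 - r$ and $r_2 - r$, whose sum does depend on $r$, which is exactly why $D_0 \oplus (-D_0)$ fails to be a half-cone.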
The analytic formulation in (\ref{eq_relg}) develops in parallel as \begin{align} g_{K_1, \breve{K}_2}|_{R} &= (\rho_{A_1} \ast f_{D_0}) \ast (\rho_{-R \breve{A}_2} \ast f_{D_0}) \\ &= (\rho_{A_1} \ast \rho_{-R \breve{A}_2}) \ast (f_{D_0} \ast f_{D_0}) \\ &= \rho_{A_O|_{R}} \ast f_{D_O}, \label{eq_convgA} \end{align} whose restriction to $r = 0$ gives $g_{S_1, S_2}|_{R} = \big[ g_{K_1, \breve{K}_2}|_{R} \big]_{r=0}$. Similar to the equiradius case, $\rho_{A_O|_{R}} := (\rho_{A_1} \ast \rho_{-R \breve{A}_2})$ is the impulsive density function of the discrete pointset $A_O|_{R}$ made of $n_1n_2$ impulses corresponding to cross-correlation of pairs of shifted Dirac delta functions, while $f_{D_O} := (f_{D_0} \ast f_{D_0})$ is an auto-correlation, which can be arbitrarily modified from the original convolved form to $f_{D_O}(\mathbf{t}, r) := \psi_\alpha(\|\mathbf{t}\|_2 / r) \psi_\alpha(1-r/L)$ without changing the obstacle $O^\ast_{S_1, S_2}|_R = U^\ast_0(g_{K_1, \breve{K}_2}|_{r = 0})$. Although one only needs the $r = 0$ slice to retrieve the $\mathcal{C}-$obstacle, the other slices carry useful information. In fact, any $r \neq 0$ slice corresponding to $r \in (-L, +L)$ gives the obstacle for a pair of {\it offset} 3D solids defined as $r_1-$slice of $K_1$ (i.e., shrinking $S_1$'s primitives by $r_1$) and $r_2-$slice of $\breve{K}_2$ (i.e., expanding $S_2$'s primitives by $r_2$) giving a total offset of $-r = -(r_1 + r_2)$. These `offset obstacles' can be used, for example, to incorporate tolerances for machine tooling, guarantee safety margins for path planning, or construct skin layers for protein shape complementarity modeling (Section \ref{sec_app}). \subsection{Fourier Correlations} \label{sec_CFT} To take advantage of the convolution theorem for a significantly faster computation of correlation functions, we present an analysis of the gap function in the Fourier domain. 
For two r-sets $S_1, S_2 \in \mathcal{S}$, applying CFT to (\ref{eq_relg}) yields the Fourier correlation function as \begin{align} \hat{g}_{S_1,S_2}|_R = \hat{f}_{S_1} \bar{\hat{f}}_{RS_2} = \hat{f}_{S_1} \left( \bar{\hat{f}}_{S_2} \circ R^\mathrm{T} \right), \label{eq_ccghat} \end{align} noting that $\hat{f}_{-RS} = \bar{\hat{f}}_{RS}$ for real defining functions, i.e., CFT converts reflection (in both physical and frequency domains) to conjugation in the frequency domain, a property known as Hermitian symmetry (see \ref{app_CFT}). $\hat{g}_{S_1,S_2}|_R = \mathcal{F} \{ g_{S_1,S_2}|_R \}$ is the CFT of $g_{S_1,S_2}(R, \mathbf{t})$ only with respect to translation at a fixed $R \in SO(3)$. For the equiradius discretizations $S_{n_1}(P_1) = P_1 \oplus B_1$ and $S_{n_2}(P_2) = P_2 \oplus B_2$ with Fourier representations $\hat{f}_{S_{n_1}(P_1)} = \hat{\rho}_{P_1} \hat{f}_{B_1}$ and $\hat{f}_{S_{n_2}(P_2)} = \hat{\rho}_{P_2} \hat{f}_{B_2}$, respectively, their Fourier correlation is obtained by substituting (\ref{eq_fballhat}) in (\ref{eq_ccghat}), or directly applying the CFT to (\ref{eq_convgP}) as \begin{align} \hat{g}_{S_1,S_2}|_R &= \big( \hat{\rho}_{P_1} \hat{f}_{B_1} \big) \big( \bar{\hat{\rho}}_{RP_2} \bar{\hat{f}}_{B_2} \big) \\ &= \big( \hat{\rho}_{P_1} \bar{\hat{\rho}}_{RP_2} \big) \big( \hat{f}_{B_1} \hat{f}_{B_2}\big) = \hat{\rho}_{P_O|_{R}} \hat{f}_{B_O}, \end{align} noting that $f_{B_{1,2}} = f_{-B_{1,2}}$ hence $\hat{f}_{B_{1,2}} = \bar{\hat{f}}_{B_{1,2}}$ (i.e., are both real-valued), and so is $\hat{f}_{B_O} = \hat{f}_{B_1} \hat{f}_{B_2}$. As expected, $\hat{\rho}_{P_O|_{R}} = \hat{\rho}_{P_1} \bar{\hat{\rho}}_{RP_2}$ is computed from a pointwise multiplication of the NDFTs over the knots $P_1$ and $P_2$. 
Analogously, for the nonequiradius discretizations $S_{n_1}(A_1) = K_{n_1} (A_1)|_{r=0}$ and $S_{n_2}(A_2) = \breve{K}_{n_2} (\breve{A}_2)|_{r=0}$, where $K_{n_1}(A_1) = A_1 \oplus D_0$ and $\breve{K}_{n_2}(\breve{A}_2) = \breve{A}_2 \oplus \breve{D}_0$ with $\hat{f}_{K_{n_1}(A_1)} = \hat{\rho}_{A_1} \hat{f}_{D_0}$ and $\hat{f}_{\breve{K}_{n_2}(\breve{A}_2)} = \hat{\rho}_{\breve{A}_2} \hat{f}_{\breve{D}_0}$, respectively, their Fourier correlation is obtained by substituting (\ref{eq_fconehat}) in (\ref{eq_ccghat}), or directly applying the CFT to (\ref{eq_convgA}) as \begin{align} \hat{g}_{K_1, \breve{K}_2}|_R &= \big( \hat{\rho}_{A_1} \hat{f}_{D_0} \big) \big( \bar{\hat{\rho}}_{R \breve{A}_2} \bar{\hat{f}}_{\breve{D}_0} \big) \\ &= \big( \hat{\rho}_{A_1} \bar{\hat{\rho}}_{R \breve{A}_2} \big) \big( \hat{f}_{D_0} \hat{f}_{D_0}\big) = \hat{\rho}_{A_O|_{R}} \hat{f}_{D_O}, \end{align} where $\hat{f}_{D_O} = \hat{f}_{D_0}^2 = \bar{\hat{f}}_{\breve{D}_0}^2$, noting that $\hat{f}_{D_0} = \bar{\hat{f}}_{\breve{D}_0}$ as a result of the reflective duality $f_{D_0} = f_{-\breve{D}_0}$. In a similar fashion, $\hat{\rho}_{A_O|_{R}} = \hat{\rho}_{A_1} \bar{\hat{\rho}}_{R \breve{A}_2}$ is computed from a pointwise multiplication of the NDFTs over the knots $A_1$ and $\breve{A}_2$. A critical observation is that the computational implementation relies only on the discrete knots $P$ and $A$ (or $\breve{A}$), expressed in the Fourier domain by the NDFTs $\hat{\rho}_P$ in (\ref{eq_rhoPhat}) and $\hat{\rho}_{A}$ (or $\hat{\rho}_{\breve{A}}$) in (\ref{eq_rhoAhat}), respectively. The continuous geometry is completely embodied by the primitives implicit in $\hat{f}_{B_O} = \hat{f}_{B_1} \hat{f}_{B_2}$ or $\hat{f}_{D_O} = \hat{f}_{D_0}^2 = \bar{\hat{f}}_{\breve{D}_0}^2$. However, despite appearing in equations, they do {\it not} explicitly participate in the numerical algorithms, which reflects the true power of this particular discretization scheme.
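The frequency-domain factorization $\hat{\rho}_{P_O|_{R}} = \hat{\rho}_{P_1} \bar{\hat{\rho}}_{RP_2}$ can be verified directly from the NDFT definition. A sketch for the equiradius case with $R$ the identity for brevity (illustrative names throughout):

```python
import numpy as np

def ndft(omegas, pts, c):
    """Direct one-sided NDFT of an impulsive density (illustrative)."""
    return np.exp(-2j * np.pi * (omegas @ pts.T)) @ c

rng = np.random.default_rng(1)
P1, P2 = rng.standard_normal((3, 3)), rng.standard_normal((4, 3))
c1, c2 = rng.uniform(0.5, 1.5, 3), rng.uniform(0.5, 1.5, 4)
# obstacle knots P_O = P1 ⊕ (-P2) (R = identity) with product weights
PO = (P1[:, None, :] - P2[None, :, :]).reshape(-1, 3)
cO = np.outer(c1, c2).ravel()
omegas = rng.standard_normal((8, 3))
lhs = ndft(omegas, PO, cO)                       # NDFT of the obstacle knots
rhs = ndft(omegas, P1, c1) * np.conj(ndft(omegas, P2, c2))
```

The $n_1 n_2$ obstacle impulses never need to be enumerated: their spectrum is just the pointwise product of two much smaller NDFTs.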
\section{Applications} \label{sec_app} Next, we show how the tools developed in Sections \ref{sec_samp} and \ref{sec_cor} can be applied to collision detection in \ref{sec_col}, shape complementarity in \ref{sec_comp}, and configuration products in \ref{sec_prod}, and identify future research opportunities in each area. \subsection{Collision Detection} \label{sec_col} Analytic collision detection (CD) can be traced to the work by Comba \cite{Comba1968} on convex sets. Kavraki \cite{Kavraki1995} discovered the interpretation of the translational $\mathcal{C}-$obstacle as a convolution of the objects---the robot and its workspace in the context of path planning \cite{Lozano-Perez1983}---along with the application of the FFT. Both objects $S_1, S_2 \in \mathcal{S}$ are represented by binary indicators $\mathbf{1}_{S_1}, \mathbf{1}_{S_2} : {\R^3} \rightarrow \{0, 1\}$, discretized as bitmaps, and the integer map of the translational $\mathcal{C}-$obstacle obtained as $g_{S_1, S_2}(\mathbf{t}) = (\mathbf{1}_{S_1} \ast \mathbf{1}_{-S_2})(\mathbf{t})$ simply counts the number of grid cells that overlap at a relative translation $\mathbf{t} \in {\R^3}$. The algorithm performs two forward FFTs to obtain $\hat{\mathbf{1}}_{S_1}$ and $\hat{\mathbf{1}}_{S_2}$, a pairwise multiplication to obtain $\hat{g}_{S_1, S_2} = \hat{\mathbf{1}}_{S_1} \bar{\hat{\mathbf{1}}}_{S_2}$, and an inverse FFT to retrieve the obstacle map in $O(m \log m)$ time, where $m$ is the grid size. Although the algorithm is asymptotically optimal to obtain a complete description of the obstacle for all possible translations in a given discretized domain, it is rarely useful for time-critical CD (e.g., in real-time simulations and physically-based modeling \cite{Weller2011}) where a {\it single} configuration is queried. 
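A compact sketch of this FFT pipeline on toy bitmaps follows (circular convolution for brevity; in practice the grids are zero-padded to avoid wrap-around):

```python
import numpy as np

def c_obstacle_map(S1, S2):
    """g[t] = number of cells on which S1 overlaps S2 shifted by t, for all
    translations t at once: two forward FFTs, a pointwise product with a
    conjugate, and one inverse FFT, i.e., O(m log m) for grid size m."""
    g = np.fft.ifft2(np.fft.fft2(S1) * np.conj(np.fft.fft2(S2)))
    return np.rint(np.real(g)).astype(int)

S1 = np.zeros((16, 16), dtype=int); S1[2:5, 2:5] = 1   # a 3 x 3 block
S2 = np.zeros((16, 16), dtype=int); S2[2:5, 2:5] = 1   # an identical block
g = c_obstacle_map(S1, S2)
```

The positive entries of `g` are exactly the colliding translations; summing over all translations recovers the product of the two areas, a useful consistency check.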
Lysenko \cite{Lysenko2013} recently generalized the approach by using bump functions to facilitate differentiation, and proposed techniques to enable time-critical CD for a single-configuration query via truncated Fourier expansions, along with an analytic groundwork for early-hit/miss tests. Noting that the inner product structure is preserved by the CFT according to Parseval's theorem (see \ref{app_CFT}), the collision predicate in (\ref{eq_relg}) for a single relative configuration $(R, \mathbf{t}) \in SE(3)$ can be obtained as \begin{align} g_{S_1,S_2}|_R(\mathbf{t}) = \Big\langle f_{S_1}, (f_{RS_2} \circ T^{-1}) \Big\rangle = \Big\langle \hat{f}_{S_1}, \hat{\varsigma}_\mathbf{t} \hat{f}_{RS_2} \Big\rangle, \label{eq_single} \end{align} noting that $f_{(R, \mathbf{t})S} = f_{RS} \circ T^{-1} = (f_S \circ R^\mathrm{T}) \circ T^{-1}$ which transforms to $\hat{f}_{(R, \mathbf{t})S} = \hat{\varsigma}_\mathbf{t} \hat{f}_{RS} = \hat{\varsigma}_\mathbf{t} (\hat{f}_{S} \circ R^\mathrm{T})$. As mentioned earlier, $T^{-1} \mathbf{x} := \mathbf{x} - \mathbf{t}$ is the shift function whose Fourier operator $\hat{\varsigma}_\mathbf{t}(\bm{\upomega}) = e^{-2\pi \bm{\mathsf{i}}(\bm{\upomega} \cdot \mathbf{t})}$ is the CFT of the shifted Dirac delta $\varsigma_\mathbf{t}(\mathbf{x}) = (\delta^3 \circ T^{-1})(\mathbf{x}) = \delta^3(\mathbf{x} - \mathbf{t})$. If a grid-based discretization is used, the rotation can be incorporated by a trilinear interpolation in either domain. Although a brute-force computation of the physical domain inner product over a grid of size $m$ takes $O(m)$---without a simple way of reducing the complexity once the grid resolution is fixed upfront---the frequency domain integral can be computed in $O(m')$ over a truncated grid of much smaller size $m' \ll m$ specified on-the-fly. This provides a mechanism for trading off accuracy with time, by a spiral traversal of the frequency grid starting from the dominant modes until the available time is over.
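The truncation trade-off can be sketched on a 1D circular analogue (hypothetical segment "solids"; the `modes` parameter, an illustrative name, caps the retained frequencies):

```python
import numpy as np

n = 256
x = np.arange(n)
f1 = ((x > 40) & (x < 90)).astype(float)   # segment "solid" S1 (hypothetical)
f2 = ((x > 10) & (x < 30)).astype(float)   # segment "solid" S2 (hypothetical)
F1, F2 = np.fft.fft(f1), np.fft.fft(f2)
k = np.fft.fftfreq(n) * n                  # integer frequency indices

def overlap(t, modes):
    """<f1, f2 shifted by t> via Parseval's identity, keeping only the
    frequencies with |k| <= modes; modes = n reproduces the exact inner
    product, smaller values trade accuracy for a cheaper truncated sum."""
    keep = np.abs(k) <= modes
    shift = np.exp(-2j * np.pi * k[keep] * t / n)   # spectrum of shifted delta
    return float(np.real(np.sum(F1[keep] * np.conj(shift * F2[keep]))) / n)

exact = float(np.dot(f1, np.roll(f2, 30)))          # direct physical overlap
```

The full-spectrum sum matches the physical-domain inner product, while a heavily truncated sum still lands close enough to decide the sign-based collision predicate.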
On the other hand, the numerous combinatorial CD methods developed over the years (reviewed in \cite{Jimenez2001}) exploit a variety of data structures to avoid brute-force testing in the physical domain, the likes of which are not available in the frequency domain. The sphere-tree methods \cite{OSullivan1999,Hubbard1996,Bradshaw2004,Weller2011} are among the most efficient, which enable another trade-off mechanism by descending down the tree until the time allocated to CD is consumed. Our framework enables exploiting the existing combinatorial techniques alongside the recent analytic methods in both domains. The details pertaining to the following are beyond the scope of this article and will be presented elsewhere: 1) {\it early-hit test} by limiting the integration of (\ref{eq_single}) to an intersection with a ball, which is a simple multiplication in the physical domain via (\ref{eq_int}); 2) {\it early-miss test} by offsetting (i.e., Minkowski sum) with a single ball, which is a simple multiplication in the frequency domain; and 3) {\it differentiation} of (\ref{eq_single}) for contact force/torque computation, using pairwise spherical primitive interactions. \subsection{Shape Complementarity} \label{sec_comp} Surface shape complementarity (SC) is a predominant determinant of successful binding of protein molecules, and is critical in early-stage lead compound generation for rational drug design. The numerous FFT-based correlation techniques developed over the years (reviewed in \cite{Ritchie2008a}) use the same principles as analytic CD or path planning, except that they quantify SC by overlapping skin-layers. 
The so-called `double-skin layer' approach \cite{Bajaj2011} integrates the skin-skin intersections to obtain a SC `score function' and subtracts core-core collisions as penalty, which add up to a convolution of `affinity functions' of individual molecules (analogous to defining functions for CD, except with different complex-valued weights for skin/core atoms). Chen and Weng \cite{Chen2003a} described successful heuristics for weight assignment rasterized on a uniform grid along with the use of FFT. Bajaj et al. \cite{Bajaj2011} proposed a faster grid-free method along with the use of NFFT, which has been highly influential in the development of our ideas. For SC analysis of arbitrary shapes with important applications in assembly automation, packaging and nesting, and path planning in narrow environments, in addition to protein docking, we propose a reformulation of the double-skin layer approach by defining the complex affinity function $F_S: {\R^3} \rightarrow \mathds{C}$ for an arbitrary r-set $S \in \mathcal{S}$ as \begin{align} F_S(\mathbf{x}) :&= \bm{\mathsf{i}} \left(f_{S \oplus B_0}(\mathbf{x}) - \lambda f_S(\mathbf{x}) \right), \end{align} where $f_S, f_{S \oplus B_0} \in C^\infty_0({\R^3})$ are bump functions of the shape and its offset by the ball $B_0 = B(\mathbf{0}, r_0)$. The offset $r_0 > 0$ is decided depending on the feature size (e.g., set to the size of a water molecule ($1.4~\AA$) for protein docking \cite{Bajaj2011}, or to MA-based local/weak feature size \cite{Chazal2004} for other applications), and $\lambda > 0$ defines the `penalty factor'. For two shapes $S_1, S_2 \in \mathcal{S}$, the cross-correlation of their affinity functions gives the SC score as \begin{align} G_{S_1,S_2} (R, \mathbf{t}) = (F_{RS_2} \star F_{S_1})(\mathbf{t}) = (F_{S_1} \ast \bar{F}_{-RS_2})(\mathbf{t}). 
\label{eq_score} \end{align} Substituting for $F_S$ and noting that $f_{S \oplus B_0} = f_S \ast f_{B_0}$: \begin{align} G_{S_1,S_2}|_R = \lambda^2 g_{S_1,S_2}|_R - 2\lambda &g_{S_1,S_2}|_R \ast f_{B_0} \nonumber \\ + &g_{S_1,S_2}|_R \ast f_{B_O}, \label{eq_score_exp} \end{align} where the terms $g_{S_1,S_2}|_R = f_{RS_2} \star f_{S_1} = f_{S_1} \ast f_{-RS_2}$ and $f_{B_O} = f_{B_0} \ast f_{B_0}$ were studied earlier. If $\lambda > 1$ is large enough, the first term on the right-hand side of (\ref{eq_score_exp}) takes interior-interior collisions into account with a penalty of $\propto -O(\lambda^2)$, the second term includes offset-interior overlaps with a reward of $\propto +O(\lambda)$, while the third term adds a smaller penalty of $\propto -O(1)$ for offset-offset overlaps. An important observation is that for spherical sampling, the offsets correspond to an increase of the radii of the primitive balls, which in turn corresponds to a 3D slice of {\it elevated} cones in 4D (i.e., a translation of the knots $A \in {\R^4}$ along the $r-$axis). The different terms in (\ref{eq_score_exp}) thus become 3D slices of $g_{K_1, \breve{K}_2}|_R$ corresponding to $r=0, r_0,$ and $2r_0$, all of which can be cumulatively obtained from a single 4D NDFT. The time-critical query in (\ref{eq_single}) for CD can also be extended to the SC formulation, useful in real-time energy computations for interactive assembly or protein docking. More elaboration on these and other possibilities is left to a follow-up publication, including a study of {\it offset sensitivity}, defined as the differentiation of $g_{K_1, \breve{K}_2}|_R$ with respect to $r$, and its implications for SC analysis---think of the infinitesimal overlaps in (\ref{eq_score_exp}) when $\lambda = 1$ and $r_0 \rightarrow 0^+$.
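The three-term expansion of the SC score can be checked numerically. The sketch below uses 1D periodic toy signals and an even `ball' kernel, with circular convolutions and correlations via the FFT; the factors of $\bm{\mathsf{i}}$ cancel in the cross-correlation, so real arrays suffice. All data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 64, 3.0
f1, f2 = rng.random(n), rng.random(n)        # toy stand-ins for bump functions
b = np.zeros(n); b[[0, 1, n - 1]] = 1.0      # even kernel: b(x) = b(-x)

def conv(a, c):   # circular convolution via FFT
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(c)).real

def corr(a, c):   # circular cross-correlation: corr(a, c)(t) = sum_x a(x+t) c(x)
    return np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(c))).real

g = corr(f1, f2)                              # the term g studied earlier
u1, u2 = conv(f1, b), conv(f2, b)             # offsets: f_{S + B0} = f_S * f_{B0}

# Score directly from the affinity functions (the i's cancel in F1 conj(F2)):
G_direct = corr(u1 - lam * f1, u2 - lam * f2)

# Three-term expansion: lam^2 g - 2 lam (g * b) + g * (b * b).
G_expanded = lam**2 * g - 2 * lam * conv(g, b) + conv(g, conv(b, b))
```

The two arrays agree to floating-point precision, which is exactly the algebra behind reading the terms off as slices of a single 4D transform.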
\subsection{Morphological Operations} \label{sec_prod} Roerdink \cite{Roerdink2000} generalized the concept of Minkowski sums/differences to Minkowski products/quotients over general groups, whose noncommutative convolutional formulation was presented by Lysenko et al. \cite{Lysenko2011a}. Nelaturi and Shapiro \cite{Nelaturi2011} applied the concept to $SE(3)$ (in this context referred to as `configuration products/quotients') and showed its applicability to direct and inverse $\mathcal{C}-$space problems ranging from computing general sweeps to solving for maximal shapes and motions subject to containment constraints. The method embeds the solids in $SE(3)$, and uses a uniform sampling over translations and rotations followed by pairwise matrix multiplications across the two samples to compute the $\mathcal{C}-$products and quotients. Our discretization scheme can readily be applied to more efficiently sample the translation space for different rotational sections through the 6D domain, as will be demonstrated in Section \ref{sec_res1}. An interesting extension of the method would be to formulate 6D spherical sampling of the subsets of the Riemannian manifold $SE(3)$ based on geodesic distances (see \ref{app_SE3} for more details). One possible application is in machine tool path planning in the presence of tolerances \cite{Behandish2015d}, where the embedded workpiece complements the configuration product of the motion trajectory and tool profile. The tolerances can be introduced into either set by Minkowski operations between the `nominal' geometry and primitive tolerance sets, e.g., Euclidean (for translational tolerances) and geodesic (for rotational tolerances) disks, cylinders, balls, or tori, all of which can be more efficiently discretized via spherical primitives than uniform samples. 
In a similar fashion to the constructions in Section \ref{sec_cross}, the tool's swept volume is then given by a 3D projection of the 6D Minkowski product of the embedded tool profile and its motion, each of which is described by a Minkowski product of sample points on their nominal sets with the primitive tolerance sets. Rearranging the terms (similar to (\ref{eq_ORP_1}) through (\ref{eq_ORP})) abstracts the tolerances away into a 6D configuration space tolerance set, and allows working with lower-dimensional nominal sets only. Unfortunately, the corresponding Fourier analysis in this case becomes quite tedious, and its potential benefits are unclear at this stage. See the example in \cite{Behandish2015d} for an elaborate discussion. A more important research question remains regarding the extension of spherical sampling to dual operations, i.e., Minkowski differences or quotients, which would open up the opportunity to extend the benefits of this approach to inverse problems in configuration modeling. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figure5} \caption{The Minkowski sum of two spherical samples (a, b) for two sampled rotations, which are sections through the 6D Minkowski product, obtained from pairwise Minkowski sums of primitives (c, d).} \label{figure5} \end{figure} \section{Numerical Results} \label{sec_res} In this section we demonstrate how spherical sampling outperforms uniform sampling for Minkowski computations that are central to the range of applications discussed in Section \ref{sec_app}, and validate the additional performance improvement by using FFT algorithms. We implemented the method as a C++ API that reads triangular meshes, generates spherical decompositions (using Algorithm \ref{alg_sampling} in \ref{app_sampling}), converts the geometry to an analytic representation in the physical and/or frequency domains, and computes the correlations in either domain.
We report on both CPU- and GPU-parallel computing, implemented using C++ Boost \cite{Schling2011} and CUDA-C libraries, respectively. Our numerical experiments were conducted on a desktop computer with Intel Xeon E5-2687W CPU (32 cores, 3.10 GHz clock-rate, 64GB host memory) and NVIDIA Tesla K20c GPU (2,496 CUDA cores, 5GB device memory). \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figure6} \caption{Nonuniform spherical sampling significantly outperforms uniform sampling by efficient use of memory and time resources.} \label{figure6} \end{figure} \begin{table} \caption{Comparison of the sample size between grid-based uniform and grid-free spherical samplings. The ratio scales rapidly with size.} \label{tab_comp} \vspace{-0.2cm} \scalebox{0.72}{ \begin{tabular}{| c | r r r | r r r | r |} \hline & \multicolumn{3}{|c|}{Uniform Sampling} & \multicolumn{3}{|c|}{Spherical Sampling} & {Ratio} \\ \hline $m$ & $n_1'$ & $n_2'$ & $n' = n_1' n_2'$ & $n_1$ & $n_2$ & $n = n_1 n_2$ & $n'/n$\\ \hline $2^{12}$ & $666$ & $44$ & $29,304$ & $49$ & $26$ & $1,274$ & $23.0$\\ $2^{15}$ & $5,921$ & $689$ & $4.08 \times 10^6$ & $159$ & $83$ & $13,197$ & $309.1$\\ $2^{18}$ & $49,981$ & $3,867$ & $1.93 \times 10^8$ & $1,024$ & $340$ & $3.48 \times 10^5$ & $764.5$\\ $2^{21}$ & $409,058$ & $36,874$ & $1.5 \times 10^{10}$ & $3,686$ & $1,081$ & $3.98 \times 10^6$ & $3,785.5$\\ \hline \end{tabular} } \end{table} \begin{figure*} \centering \includegraphics[width=\textwidth]{figure7} \caption{A section through the gap function representation (a) of the Minkowski sum in Fig. \ref{figure5}; its approximations (b--f) with truncated Fourier expansions (top), and the residual error (bottom).
Uniform grid-based FFT implementation of the convolution outperforms the pairwise primitive multiplication method by two orders of magnitude (g).} \label{figure7} \end{figure*} \subsection{Combinatorial Advantage} \label{sec_res1} Figure \ref{figure5} illustrates the $\mathcal{C}-$obstacle construction discussed in Section \ref{sec_cross} for a pair of objects discretized via the nonequiradius spherical sampling developed in Section \ref{sec_count}. We repeat Algorithm \ref{alg_sampling} with different grid sizes $m$, and compare the arithmetic complexity of the result with that of the uniform sampling of the same grid dimensions used in \cite{Nelaturi2011}. As described in \ref{app_sampling}, this guarantees that the Hausdorff metric-based approximation error of the spherical discretization is upper-bounded by that of the uniform sample, which is $\epsilon = \sqrt{3} (L/m^{\frac{1}{3}})$. Therefore, the `initial grid size' $m$ will be used as a measure of resolution for both methods for the purpose of comparison. As reported in Table \ref{tab_comp}, our method offers a clear advantage, decreasing the complexity by several orders of magnitude. Figure \ref{figure6} (a, b) plots the memory requirement and running times, respectively, for Minkowski sum computation (viewed as a translational cross-section of the configuration product, as described in Section \ref{sec_prod}) by pairwise summations in the nonuniform 4D sample space, compared to pairwise summations in the uniform 3D sample space used in \cite{Nelaturi2011}.
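The pairwise-summation kernel being compared here is very small; a sketch (a Python stand-in for the C++ implementation, with illustrative primitives) exploits the closure of ball unions under Minkowski sums, $B(\mathbf{c}_1, r_1) \oplus B(\mathbf{c}_2, r_2) = B(\mathbf{c}_1 + \mathbf{c}_2, r_1 + r_2)$:

```python
import numpy as np

def minkowski_sum_balls(balls1, balls2):
    # Pairwise sums of primitives: the n1 x n2 result balls cover S1 + S2.
    return [(c1 + c2, r1 + r2) for (c1, r1) in balls1 for (c2, r2) in balls2]

def contains(balls, p):
    # Point membership in a union of balls.
    p = np.asarray(p, dtype=float)
    return any(np.linalg.norm(p - c) <= r for (c, r) in balls)

A = [(np.array([0.0, 0.0, 0.0]), 1.0), (np.array([2.0, 0.0, 0.0]), 0.5)]
B = [(np.array([0.0, 3.0, 0.0]), 1.0)]
S = minkowski_sum_balls(A, B)   # 2 x 1 = 2 primitive balls
```

The arithmetic advantage in Table \ref{tab_comp} comes entirely from the factor $n_1 n_2$ being orders of magnitude smaller for spherical samples than for uniform ones.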
The speed-ups of our method scale significantly with resolution, and reach the range $400$--$600\times$ (on both CPU and GPU)\footnote{In each individual scenario, the GPU runs are only slightly faster than CPU runs ($1$--$3 \times$) due to extensive global memory references, but can be improved in future versions by memory optimization.} for a grid size of $m := 2^{18} =$ 262,144, decreasing the CPU/GPU running time from $28/48$ seconds to $104/47$ milliseconds. For larger initial grid sizes such as $m := 2^{21} =$ 2,097,152, the memory cannot accommodate the Minkowski sum of uniform samples, while our method succeeds and carries out the sum in less than a second. \subsection{Analytic Advantage} \label{sec_res2} We next consider the computational performance of the analytic method, using both uniform and nonuniform sampling. Given the spherical decomposition of the two solids, their bump functions, as sums of radial kernels, can be rasterized on uniform grids in the physical domain. The gap function representation of the Minkowski sum can then be computed by two forward FFTs, a pointwise multiplication over the frequency grids, and an inverse FFT to retrieve the result, whose running times are separately plotted in Fig. \ref{figure7} (g). The GPU implementation in this case offers significant speed-ups of $400$--$800\times$ over its CPU counterpart.\footnote{The FFT is implemented using FFTW \cite{Frigo2005} on the CPU and using cuFFT(W) on the GPU. With the exception of FFTW, all other CPU and GPU routines were written in parallel.} Comparing the results with Fig. \ref{figure6} (b) shows an improvement of $30$--$80\times$ over the pairwise computations in the physical domain. For a grid size of $m := 2^{21} =$ 2,097,152, accurate computation of the convolution takes less than $80$ milliseconds on the GPU. It appears that the pointwise multiplication step is the bottleneck in the FFT-based convolutions.
However, by performing this step (and the following inverse FFT) over a small subset of size $m' \ll m$ of the frequency grid in the neighborhood of the dominant modes, one could decide on the amount of computation time to spend in a trade-off with accuracy, as depicted in Fig. \ref{figure7} (a--f). It is clear that small gap function errors do not necessarily imply small geometric discrepancies of the $0-$sublevel set in terms of the Hausdorff metric. However, it was shown by Lysenko \cite{Lysenko2013} that it is also possible to impose upper bounds on the Hausdorff distance-based error as a function of the number of retained frequencies. As described in Sections \ref{sec_Fourier} and \ref{sec_CFT}, the uniform 3D grid-based FFT can be replaced with a nonuniform 4D grid-free NFFT. As the difference between the number of sample points in each method grows according to Table \ref{tab_comp}, even a cascade 4D NDFT over the spherical discretization can be faster, with the additional flexibility it offers in choosing the frequency domain grid size on-the-fly independently of the physical domain sample size. The NDFT does not require the additional step of bump function rasterization over the uniform grid, which is basically a cascade computation of the convolution of the knots and the conical kernel in (\ref{eq_coneconv}). It rather incorporates that step as a pointwise multiplication with the kernel's frequency domain representation in (\ref{eq_fconehat}), which can be precomputed to full precision. The comparison is shown in Fig. \ref{figure8} for different numbers of modes $m'$ over the 4D frequency grid, which demonstrates an advantage for the NDFT for $m > 2^{15} =$ 32,768 and $m' < 2^{16} =$ 65,536 on the CPU, and for $m > 2^{18} =$ 262,144 and $m' < 2^{12} =$ 4,096 on the GPU. This can be further improved using the optimal NFFT \cite{Potts2001}.
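The dominant-mode truncation can be sketched in a few lines (1D toy data; the setting in the text is the 4D frequency grid). By Parseval's theorem, the $L^2$ reconstruction error is governed by the energy of the dropped modes, so it shrinks as $m'$ grows:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
f1, f2 = rng.random(n), rng.random(n)
spectrum = np.fft.fft(f1) * np.conj(np.fft.fft(f2))  # modes of the correlation
g_full = np.fft.ifft(spectrum).real

def truncated(m_prime):
    # Retain only the m' largest-magnitude ("dominant") modes.
    keep = np.argsort(-np.abs(spectrum))[:m_prime]
    s = np.zeros_like(spectrum)
    s[keep] = spectrum[keep]
    return np.fft.ifft(s).real

errors = [np.linalg.norm(truncated(m) - g_full) for m in (4, 16, 64)]
```

With all $m' = m$ modes retained the reconstruction is exact up to floating point, and the error decreases monotonically in the energy of the dropped modes as $m'$ grows.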
Unfortunately, GPU implementations of the NFFT \cite{Kunis2012} exist at present only for 1, 2, and 3 dimensions, while the 4D NFFT is available only on the CPU \cite{Keiner2009}. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figure8} \caption{A comparison of 1) bump function rasterization (i.e., cascade convolution of knots with conical kernels) + 3D FFT; and 2) 4D NDFT + pointwise multiplication with the kernel's frequency domain representation, for different numbers of needed frequencies $m' \leq m$.} \label{figure8} \end{figure} Lastly, we test the performance for time-critical computation of the correlation predicate for a single configuration, using the method presented in Section \ref{sec_col} based on \cite{Lysenko2013} via truncated frequency grid integration in (\ref{eq_single}). Figure \ref{figure9} shows the sequential integration time on the CPU for different choices of the number of retained modes $m' \leq m$. An almost linear speed-up of $m/m'$ (as expected from the theory) is achieved, and the collision predicate is computed in less than a millisecond for $m' < 2^{12} =$ 4,096. This enables fast physically-based modeling and multibody dynamics simulations in real-time applications that require a refresh rate of $1$ kHz for graphics and haptics feedback \cite{Weller2011}. An important research question concerns the development of a {\it hybrid} method that further improves the performance by using this method alongside the sphere-tree traversals used in \cite{OSullivan1999,Hubbard1996,Bradshaw2004,Weller2011}, limiting the integration to the NDFT of fewer primitives at the tree leaves. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figure9} \caption{Time-critical collision detection for a single configuration query by integrating over the $m' \leq m$ dominant modes in the frequency domain.
The sub-millisecond region is considered `real-time'.} \label{figure9} \end{figure} \section{Conclusion} Analytic methods for geometric modeling have shown great promise in formulating and solving important problems in terms of convolutions, typically implemented by uniform grid-based sampling to enable the use of the efficient FFT. We proposed a versatile analytic paradigm built around a nonuniform discretization scheme that approximates the shape with a union of 3D balls, equivalent to a nonuniform sample of 4D knots in an elevated space. We showed that this can be conceptualized as a 4D Minkowski sum, analytically represented by a convolution of the knots with a conical kernel. As a result of the spherical symmetry, the discretization structure is closed under rigid motions and Minkowski sums, hence the ball/cone geometry and its defining kernel abstract the continuous geometry away in a consistent manner across configuration space correlations, allowing the numerical algorithms to operate on the discrete set alone. Among the important applications, we discussed collision detection, shape complementarity, and morphological operations, which are central to problems in motion/path planning, physically-based modeling, manufacturing automation, protein docking, and more. We showed that the spherical discretization allows for an efficient allocation of time and memory resources to capture the details of geometric features by rapidly filling the large interior regions of both shape and configuration pointsets. CPU and GPU implementations were presented, demonstrating speed-ups of up to several orders of magnitude. This research opens up promising theoretical and computational directions for future studies on analytic methods that are emerging in solid and physical modeling. \section{Acknowledgement} The authors would like to thank Saigopal Nelaturi from Palo Alto Research Center for helpful discussions and constructive feedback.
This work was supported in part by the National Science Foundation grants CMMI-1200089, CMMI-0927105, and CNS-0927105. The responsibility for any errors and omissions lies solely with the authors. \section{References} \bibliographystyle{elsarticle-harv}
\section{Introduction} \label{sec:intro} Water is a crucial constituent for the formation of life, and therefore how water is delivered to a protoplanet is of great interest. Water ice from the natal cloud is thought to accrete onto a protoplanetary core, forming icy planetesimals and comets that are possible carriers of the water delivery. While the evolution of the water ice in protoplanetary environments is important, astronomical detections of water ice in protoplanetary disks are still too limited to develop a unified view of the water ice evolution in the disks \citep[][]{pon05,ter07,ter12a,ter12b,hon09,hon16}. Edge-on disks are suitable for the investigation of protoplanetary disks because the disk geometry is well defined, with the central star occulted by the circumstellar disk. Recent sub-arcsecond resolution imaging with the $Hubble$ $Space$ $Telescope$ and ground-based adaptive optics (AO) facilities enables us to spatially resolve disk formation sites, and the number of objects associated with a circumstellar disk morphology has been increasing. \citet{per06} discovered an edge-on disk around the intermediate-mass young stellar object, the Herbig Ae star PDS 144N. The disk around PDS 144N exhibits an almost perfect edge-on morphology, and its inclination angle is estimated to be 83$\arcdeg$ \citep[][]{per06}. PDS 144N has a widely separated companion object, PDS 144S, which is also a Herbig Ae star \citep[][]{hor12}. PDS 453 is another Herbig Ae object showing an edge-on disk morphology, with a smaller inclination angle of 79$\arcdeg$ \citep[][]{per10}. These two edge-on disks are unique objects for investigating protoplanetary disks around higher-mass YSOs, since most of the detected edge-on disks are around low-mass YSOs. Water ice shows a strong absorption at 3\,$\micron$ with rich features that provide information about its grain size, crystallinity, and mixtures with the other ices \citep[][]{boo15}.
Ground-based spectroscopy at 2.8--4.2\,$\micron$ is a powerful tool to reveal the water ice in protoplanetary disks. In particular, wavefront correction at $\ge$3\,$\micron$ is relatively easy, and AO-assisted spectroscopy is very beneficial not only for high-spatial-resolution observations but also for obtaining stable and reliable spectra. In this wavelength region, the thermal background from the ambient environment is still acceptably low, which allows a high signal-to-noise ratio to be achieved. In Section \ref{sec:obs}, the methods for observations and data analysis are described. Section \ref{sec:res} shows the results of the 3\,$\micron$ water ice absorption features for PDS 144N, PDS 144S, and PDS 453. Characteristics of the detected water ice absorption profiles are discussed in Section \ref{sec:dis}, including their possible variability in time and similarity to the profile of the edge-on disk object d216-0939. The conclusion is summarized in Section \ref{sec:sum}. \section{Observation \& Data Reduction} \label{sec:obs} All the observations were performed using the Infrared Camera and Spectrograph \citep[IRCS;][]{tok98,kob00} mounted on the Nasmyth platform of the Subaru Telescope. The Subaru AO system \citep[AO188;][]{hay10} was utilized only for the observations of PDS 453. The object itself served as a wavefront correction reference, since it is relatively bright in the optical ($R$$\sim$12.5). Imaging of PDS 453 at $K^{\prime}$($\lambda$=2.12\,$\micron$) and $L^{\prime}$($\lambda$=3.77\,$\micron$) was conducted with the 20 mas camera of the IRCS in a single epoch on 2009 June 3, and spectroscopy at 2.8--4.2\,$\micron$ was carried out in two and six epochs from 2006 to 2014 for PDS 144N and PDS 453, respectively. The airmass mismatch between spectroscopy of the object and the standard star was minimized to avoid any artificial features in the spectra due to mismatch of the telluric absorption; the resultant airmass difference was $\sim$0.01.
In the second epoch of spectroscopy for PDS 144N, no standard star was observed, and PDS 144S served as the spectral standard star for the exact cancellation of the telluric absorption in the PDS 144N spectrum. Sky conditions were excellent for all the observing epochs in terms of extinction, seeing, and precipitable water level. The observation details are summarized in Table~\ref{tbl-obslog}. Five- and nine-point box-shaped dithering patterns were used for imaging of PDS 453 at $K^{\prime}$ and $L^{\prime}$, respectively, with a separation of 3\arcsec. For spectroscopy of PDS 453, the slit was set at a position angle of 133$\arcdeg$, aligned to the scattered light disk, only for the first epoch observation on 2009 August 17. In the other observing epochs, the slit position angle was 0$\arcdeg$. The slit position angle for PDS 144N was 119$\arcdeg$ in order to locate PDS 144S on the same slit. An A-BB-A nodding operation was performed for all the spectroscopy. The nodding separation for PDS 453 was 3$\arcsec$ for the first epoch and 1.5$\arcsec$ for the other epochs. In the case of PDS 144N, nodding separations of 2.5$\arcsec$ and 3.0$\arcsec$ were chosen for the first and second epochs, respectively, to avoid interference with PDS 144S, located at a separation of 5\farcs40. In spectroscopy of PDS 144N under natural seeing conditions, the slit widths were selected to be 0\farcs6 (corresponding spectral resolving power R$\sim$190) in the first epoch and 0\farcs45 (R$\sim$260) in the second epoch. For AO spectroscopy of PDS 453, it was 0\farcs225 (R$\sim$510) throughout all the epochs. Data reduction was performed with IRAF software packages through a standard procedure that consists of flat fielding, sky subtraction, telluric correction, and wavelength calibration. HIP 79229 \citep[A0V, $V$=6.64 mag;][]{hog00} was used, with an assumption of $V$$-$$L^{\prime}$=0, for a photometric estimate of PDS 453 at $L^{\prime}$.
No $K^{\prime}$ photometric calibration was performed, because the peak signal in the image of PDS 453 at $K^{\prime}$ was saturated. For spectroscopic reference, four A0 stars, HR 5197, HR 6061, HR 6354, and HR 6490, were observed for correction of the telluric absorption. The hydrogen absorption features of the A0 stars were removed using the method of \citet{vac03}. The telluric absorption lines in the spectrum were used for the wavelength calibration. \begin{deluxetable*}{l l c c c c c c} \tabletypesize{\scriptsize} \tablecolumns{8} \tablewidth{0pt} \tablecaption{Observing log\label{tbl-obslog}} \tablehead{ \colhead{}&\colhead{Observing}&\colhead{}&\colhead{Exposure}&\colhead{Spectral}&\colhead{Average}&\colhead{Standard}&\colhead{$\Delta$Airmass}\\ \colhead{Object}&\colhead{Date}&\colhead{Mode}&\colhead{Time}&\colhead{Resolution}&\colhead{Airmass}&\colhead{Star}&\colhead{Obj.-Std.}\\ \colhead{}&\colhead{(UT)}&\colhead{}&\colhead{(s)}&\colhead{($\lambda$/$\Delta\lambda$)}&\colhead{}&\colhead{}&\colhead{}} \startdata PDS 144S \& N&2006 Feb 7&$L$ Spectroscopy&1080&190&1.497&HR 5197&$-$0.025\\ &2008 Feb 18&$L$ Spectroscopy &840&260&1.470&\nodata&\nodata\\ PDS 453&2009 Jun 3&$K^{\prime}$ Imaging&75&\nodata&\nodata&\nodata&\nodata\\ & &$L^{\prime}$ Imaging&180&\nodata&1.495&HIP 79229&$-$0.007\\ &2009 Aug 17&$L$ Spectroscopy &480&510&1.467&HR 6061&$+$0.001\\ &2011 Aug 14&$L$ Spectroscopy &360&510&1.531&HR 6490&$-$0.010\\ &2011 Aug 19&$L$ Spectroscopy &480&510&1.572&HR 6490&$-$0.014\\ &2012 Sep 17&$L$ Spectroscopy &480&510&1.594&HR 6354&$+$0.0002\\ &2014 Mar 20&$L$ Spectroscopy &720&510&1.440&HR 6490&$+$0.008\\ &2014 Mar 21&$L$ Spectroscopy &720&510&1.439&HR 6490&$-$0.001\\ \enddata \end{deluxetable*} \section{Results} \label{sec:res} We present the 2.8--4.2\,$\micron$ spectra for the two Herbig Ae stars (PDS 144N and PDS 453) with an edge-on morphology of the surrounding disks and extract the water ice absorption feature from each spectrum.
For extraction of the feature, the continuum of the spectrum was estimated using a second-order polynomial fit over the wavelength regions of 2.875--2.89\,$\micron$ and 3.7--4.0\,\micron. Since the water ice profile is known to continue in the wavelength region of $<$2.88\,\micron, it is noted that this continuum determination could cause an underestimate of the water ice absorption by $\sim$25\% at the peak. \subsection{PDS 144N} PDS 144N exhibits a clear edge-on morphology in high-spatial-resolution imaging, while its binary companion, PDS 144S, shows no apparent disk structure \citep{per06}. Spectroscopy of PDS 144N and PDS 144S was performed simultaneously with an appropriate position angle (119$\arcdeg$) of the slit. The spectra of both objects are shown in Figure~\ref{fig-pds144-1st} for the first epoch (2006-02-07-UT) observation. The absolute flux is calibrated with the $K$ and $L^{\prime}$ magnitudes derived by \citet{per06}. The most prominent feature in the spectrum of PDS 144N is a PAH emission feature ranging from 3.2\,$\micron$ to 3.6\,\micron, and the shallow water ice absorption feature is seen in the wavelength region of 2.9--3.2\,\micron. On the other hand, the spectrum of PDS 144S is featureless ($\tau_{ice}$$\le$0.002) and flat at this low resolution, except for the hydrogen recombination lines at 2.873\,\micron, 3.039\,\micron, 3.297\,\micron, 3.741\,\micron, and 4.052\,\micron, which means that PDS 144S can act as an atmospheric calibrator to obtain nearly perfect cancellation of the telluric absorption. After extraction of the water ice absorption from the spectrum of PDS 144N in the first and second (2008-02-18-UT) epochs using PDS 144S for the telluric absorption cancellation, their optical depths are plotted in Figure~\ref{fig-pds144-tau}. There is no significant change of the optical depth ($\tau_{ice}$=0.09$\pm$0.01) or the wavelength of maximum optical depth ($\sim$3.09\,$\micron$) between the two epochs.
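A sketch of this continuum and optical-depth extraction on a synthetic spectrum (all numbers are illustrative, not observed data; the band center, depth, and fitting windows merely echo the text). The small amount of absorption leaking into the short-wavelength window biases the fitted continuum low, the same effect, on a smaller scale, as the $\sim$25\% underestimate noted above:

```python
import numpy as np

# Synthetic spectrum: smooth continuum times a Gaussian "ice band" at 3.1 um.
wl = np.linspace(2.8, 4.2, 700)                      # wavelength grid [um]
continuum_true = 1.0 + 0.3 * (wl - 2.8)              # arbitrary smooth continuum
tau_true = 0.19 * np.exp(-0.5 * ((wl - 3.1) / 0.1) ** 2)
flux = continuum_true * np.exp(-tau_true)

# Second-order polynomial continuum fit over the (nearly) ice-free windows.
window = ((wl >= 2.875) & (wl <= 2.89)) | ((wl >= 3.7) & (wl <= 4.0))
coef = np.polyfit(wl[window], flux[window], 2)
continuum_fit = np.polyval(coef, wl)

# Optical depth relative to the fitted continuum.
tau = -np.log(flux / continuum_fit)
peak_tau = tau.max()
```

The recovered peak depth comes out slightly below the injected 0.19 because the short-wavelength window sits on the wing of the band, pulling the fitted continuum down.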
\begin{figure} \begin{center} \plotone{f1.eps} \caption{\label{fig-pds144-1st}Spectra of PDS 144N and PDS 144S. The inset shows a spectrum of PDS 144N simply divided by PDS 144S, which is normalized at 4.1\,$\micron$. The estimated continuum is shown by the grey solid lines. While PDS 144S exhibits no absorption feature, a shallow absorption around 3.1\,$\micron$ is seen in the spectrum of PDS 144N.} \end{center} \end{figure} \begin{figure} \begin{center} \plotone{f2.eps} \caption{\label{fig-pds144-tau}Water ice optical depth of PDS 144N in the first (2006-02-07-UT) and second (2008-02-18-UT) epochs, showing good agreement. } \end{center} \end{figure} \subsection{PDS 453} Figure~\ref{fig-pds453-img} shows the AO images taken at $K^{\prime}$ and $L^{\prime}$. Aperture photometry was applied to the $L^{\prime}$ image with a radius of 1\farcs5, yielding $L^{\prime}= 8.10$ mag. The photometric error is $\sim$0.1 mag. After subtracting the normalized images of the nearby star (2MASS J17205612-2603307) from the object images, the residual images are displayed in the bottom panels. The scattered light disk around the object can be seen in the point-spread-function (PSF) subtracted image at $K^{\prime}$, which is consistent with the discovery result of \citet{per10}. At $L^{\prime}$, no significant structure is found in the PSF subtracted image. The normalization factor at $L^{\prime}$ is determined by taking into account the brightness of the two objects at $L^{\prime}$. For the $K^{\prime}$ image, the factor was adjusted to obtain the best subtraction of the speckle pattern around the object. Since the nearby star is fainter than the object, the signal-to-noise ratio of the PSF is limited by the nearby star. \begin{figure} \begin{center} \plotone{f3.eps} \caption{\label{fig-pds453-img}Adaptive Optics Images of PDS 453 at $K^{\prime}$ and $L^{\prime}$. The object and the nearby star as PSF reference are shown in the top and middle panels, respectively.
The residual signals of the PSF subtracted images are seen in the bottom panels. The PSF subtracted image at $K^{\prime}$ clearly shows the nearly edge-on morphology of the circumstellar disk around PDS 453 at a position angle of 133$\arcdeg$.} \end{center} \end{figure} The 2.8--4.2\,$\micron$ spectra are presented in Figure~\ref{fig-pds453-spec} for six epochs from 2009-08-17-UT to 2014-03-21-UT, after being normalized to the observed $L^{\prime}$ magnitude of 8.10. The slope of the continuum changed with time, and a shallow water ice absorption was clearly detected in all the spectra. In all the epochs, the water ice absorption has a depth of 0.19$\pm$0.01 with a wide absorption band ranging from 3.10\,$\micron$ to 3.23\,$\micron$. To examine the water ice profile, the normalized optical depths are presented in Figure~\ref{fig-pds453-var}. The normalization is applied to the averaged optical depth in the wavelength range of 3.15--3.18\,\micron, where it is almost free from the telluric absorption. While the overall profile is quite consistent throughout the six epochs, an apparent change of the water ice absorption profile can be seen at 3.20--3.25\,$\micron$. More specifically, the data on 2009-08-17-UT and 2012-09-17-UT show deviations from the other spectra. This change of the water ice absorption profile is discussed in Section \ref{sec:dis}. \begin{figure} \begin{center} \plotone{f4.eps} \caption{\label{fig-pds453-spec}Spectra of PDS 453 at six epochs. Each spectrum is offset for a better presentation. Solid grey lines are the estimated continuum with a second-order polynomial function. } \end{center} \end{figure} \section{Discussion} \label{sec:dis} Physical parameters for PDS 144N, PDS 144S, and PDS 453 are summarized in Table~\ref{tbl-tarpara}. Although the distance to PDS 144N was originally suggested to be around 1000\,pc \citep{per06}, a more recent investigation favors the smaller value of 145\,pc \citep{hor12}.
The distance to PDS 453 is more uncertain, but the value of 140\,pc assumed by \citet{per10} is adopted here. \begin{deluxetable*}{l c c c c c c} \tabletypesize{\scriptsize} \tablecolumns{7} \tablewidth{0pt} \tablecaption{Target Parameters\label{tbl-tarpara}} \tablehead{ \colhead{}&\colhead{Spectral}&\colhead{Inclination}&\colhead{Disk}&\colhead{Possible}&\colhead{}&\colhead{}\\ \colhead{Object}&\colhead{Type}&\colhead{Angle}&\colhead{Diameter}&\colhead{Association}&\colhead{Distance}&\colhead{Reference}\\ \colhead{}&\colhead{}&\colhead{($\arcdeg$)}&\colhead{($\arcsec$)}&\colhead{}&\colhead{(pc)}&\colhead{}} \startdata PDS 144N&A2IV&83$\pm$1&0.8&Upper Scorpius&145$\pm$2&\citet{per06}, \citet{hor12}\\ PDS 144S&A5V&73$\pm$7&0.8&Upper Scorpius&145$\pm$2&\citet{per06}, \citet{hor12}\\ PDS 453&F2V&79$\pm$3&3.1&Scorpius-Centaurus&140&\citet{per10}\\ \enddata \end{deluxetable*} \begin{figure} \begin{center} \plotone{f5.eps} \caption{\label{fig-pds453-var} Normalized optical depth of the detected water ice in PDS 453 in six observing epochs. The inset shows a close-up view around the peak depths, showing the possible variation of the profile at 3.2--3.25\,$\micron$. } \end{center} \end{figure} \subsection{Location of Detected Water Ice} Water ice absorption towards young stellar objects is often attributed to foreground cloud material residing in front of the targets, and that possibility is investigated here. While PDS 144S shows no water ice absorption, its binary companion, the Herbig Ae star PDS 144N, exhibits a shallow water ice absorption around 3.1\,$\micron$. Therefore, the water ice detected toward PDS 144N is confirmed to be localized within a radius of 5\farcs40 (783\,au) around PDS 144N, and it is most likely attributed to the circumstellar protoplanetary disk of PDS 144N. Regarding PDS 453, there is no bright nearby star around the object to use as a comparison.
To probe the environment around PDS 453 within 71\farcs4 $\times$ 71\farcs4 (corresponding to 10000\,au $\times$ 10000\,au), Figure~\ref{fig-colcol} shows a two-color diagram using $J$, $H$, and $K_{s}$ photometry from the 2MASS catalog. In this figure, PDS 453 is significantly separated from the other stars in the $J$-$H$ and $H$-$K_{s}$ colors, and therefore the absorbing material is localized around PDS 453. \begin{figure} \begin{center} \plotone{f6.eps} \caption{\label{fig-colcol}Two-color diagram for the PDS 453 field (71\farcs4 $\times$ 71\farcs4). The gray lines indicate the loci of dwarfs and giants. The arrow shows an extinction vector of $A_{V}=5$ as a reference. Only PDS 453 exhibits a red color, and it is significantly separated from the others. This indicates that the circumstellar material is localized around PDS 453. For reference, the PDS 144 field (69\farcs0 $\times$ 69\farcs0; 10000\,au $\times$ 10000\,au) is also presented with gray open circles, in which PDS 144N and PDS 144S are found to be much redder than PDS 453. } \end{center} \end{figure} According to \citet{hor12}, the inclination angle of the circumstellar disk around PDS 144S is very high (73$\pm$7\arcdeg). On the other hand, no signature of scattered-light morphology of the disk is seen in the infrared image of PDS 144S, which suggests a smaller inclination angle for the circumstellar disk. \citet{ter12b} found a threshold inclination angle of 65--75$\arcdeg$ for protoplanetary disks exhibiting water ice absorption among the low-mass young stellar objects in the Orion Nebula Cluster and M43 regions. By analogy with that result for low-mass YSOs, the non-detection of water ice absorption towards PDS 144S suggests a critical inclination angle of more than 73$\arcdeg$ for the detection of water ice absorption, on the assumption that the circumstellar disk of PDS 144S has a water ice abundance similar to those of PDS 144N and PDS 453.
Figure~\ref{fig-inctau} shows the optical depth of the water ice detected for PDS 144S, PDS 144N, and PDS 453, together with data from \citet{ter12b}. Here, the critical inclination angle for showing the water ice absorption appears to be 73--79$\arcdeg$ for the disks around intermediate-mass YSOs. \begin{figure} \begin{center} \plotone{f7.eps} \caption{\label{fig-inctau} Water ice optical depth as a function of the inclination angle of the disks. Black squares are for the edge-on disks around Herbig Ae stars. Gray squares show optical depths of the water ice detected in the silhouette disks of the Orion Nebula Cluster and M43, taken from Figure 9 of \citet{ter12b}. For clarity, data points for d121-1925 and d053-717 are not shown, because the detection toward d121-1925 is attributed to foreground ice and d053-717 is suspected not to be a silhouette disk. The shaded area corresponds to the critical inclination angle range suggested by \citet{ter12b} for disks around the low-mass YSOs. In this figure, the critical inclination angle to show the water ice in the Herbig Ae disks is found to be 73--79$\arcdeg$, which is shown with the diagonal lines. The yellow area shows the key inclination angle range (76--80$\arcdeg$) for exhibiting the wider water ice absorption in protoplanetary disks shown in Figure~\ref{fig-pds453-comp} and discussed in Section \ref{sec:sim}. } \end{center} \end{figure} \subsection{Effect of UV Radiation from the Herbig Ae Stars on Water Ice in the Disks} Water ice in circumstellar disks is known to be affected by photodesorption driven by far-UV (FUV) radiation \citep[e.g.,][]{oka12}. Since the FUV radiation from the central star is harsher in a Herbig Ae star system than in a low-mass young stellar object system, the water ice distribution, especially at the disk surface, could differ substantially between these two systems.
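For reference, the water ice optical depths quoted below relate the observed flux to the estimated continuum through the standard definition (a general relation, not specific to the reduction in this work):
\[
\tau_{\lambda} = -\ln\left(\frac{F_{\lambda}^{\mathrm{obs}}}{F_{\lambda}^{\mathrm{cont}}}\right),
\]
so that a depth of $\tau \sim 0.2$ corresponds to absorption of roughly $1-e^{-0.2} \approx 18\%$ of the continuum flux.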
Thanks to the high-quality spectra, with a signal-to-noise ratio of $\ge$100, very shallow water ice absorption could be detected, with $\tau$$\sim$0.1 and 0.2 for PDS 144N and PDS 453, respectively. These values are significantly smaller than those for the low-mass young stellar objects associated with edge-on disks ($\tau$$=$0.7--1.7): HK Tau B, HV Tau C, and d216-0939 \citep{ter07,ter12b}. This can be qualitatively explained by the stronger photodesorption process. The photodesorption effect of the stronger UV radiation may also be the primary cause of the possibly larger critical inclination angle (73--79\arcdeg) for producing the water ice absorption. The stronger FUV radiation pushes the snow line at the disk surface further toward the disk mid-plane, and as a result the opening angle of the ice region in the disk will be smaller. \subsection{Water Ice Absorption Profile and Similarity between PDS 453 and d216-0939} \label{sec:sim} The strong PAH emission features seen in PDS 144N prevent an accurate evaluation of the entire absorption profile of the water ice. However, PAH emission is considered to exhibit its feature typically from 3.2\,$\micron$ to 3.6\,$\micron$ \citep[e.g.,][]{tok91,vandie04,tie08}, and we assume a negligible contribution of the PAH emission to the optical depth of the water ice at wavelengths $\le$3.2\,\micron. The water ice profile of PDS 144N presented in Figure~\ref{fig-pds144-tau} shows a water ice absorption with a peak wavelength of $\sim$3.08\,$\micron$, similar to that seen in the edge-on disks of the low-mass young stellar objects \citep{ter07}. Regarding PDS 453, the overall profile of the detected water ice absorption exhibits an enhanced optical depth around 3.2\,$\micron$ (see Figure~\ref{fig-pds453-var}).
It resembles the feature reported for the silhouette disk object d216-0939 in the M43 region \citep{ter12a}, in which the feature is interpreted as absorption by large ($\sim$0.8\,$\micron$), crystallized water ice particles. The same procedure for extraction of the water ice feature was applied to the PDS 453 and d216-0939 data. The two optical depths are plotted in Figure~\ref{fig-pds453-comp} for the best quality spectra of PDS 453 (2014-03-21-UT) and d216-0939 (center position on 2009-10-02-UT), with a normalization around 3.15--3.18\,$\micron$. The comparison clearly shows very similar water ice absorption in these objects. This absorption feature is unique among the water ice absorption profiles detected so far in various kinds of astronomical targets \citep{boo15}. Taking into account that both PDS 453 and d216-0939 have similar disk inclination angles of around 78$\pm$2$\arcdeg$, with a line of sight passing through the disk surface (see Figure~\ref{fig-inctau}), this suggests that a unique grain growth and crystallization process at the icy disk surface produces this peculiar feature. \begin{figure} \begin{center} \plotone{f8.eps} \caption{\label{fig-pds453-comp}Comparison between the water ice absorption profiles of PDS 453 and d216-0939. The best quality data are chosen from the multi-epoch data sets. The two profiles are nearly identical around the peak of the optical depth. } \end{center} \end{figure} \subsection{Possible Time Variability of Water Ice Absorption} As described in Section \ref{sec:res}, the absorption profile of the water ice detected for PDS 453 at multiple epochs exhibits variability in the wavelength region of 3.2--3.25\,\micron, whereas the absorption feature of PDS 144N is found to be unchanged at the achieved signal-to-noise ratio. It is noted that the behavior of the absorption features in the 3.2--3.25\,$\micron$ region is correlated with a feature at 2.97\,\micron.
As seen in Figure~\ref{fig-pds453-spec}, the small absorption features in the 3.2--3.25\,$\micron$ wavelength region in the epochs of 2009-08-17-UT and 2012-09-17-UT are associated with the emission feature at 2.97\,\micron. In both wavelength regions, strong telluric water vapor features exist, and there is a possibility that this apparent variability is due to an inaccurate estimate of the telluric absorption. Despite the difficulty in identifying the source of the variation, it is still very interesting to note the large optical photometric variation ($\Delta$$V$$\sim$1 mag) of PDS 453 found in the ASAS3 survey \citep{poj02}, which may indicate variable extinction due to changes of the absorbing material in the sightline to the disk \citep{per10}. In fact, the optical signals obtained with an avalanche photodiode (APD) on the AO188 system during wavefront sensing on PDS 453 show a variation through these observation epochs. In addition, a change of the continuum slope is seen in the 3\,$\micron$ spectra. We define the $L$ continuum slope index as $\alpha=(\log I_{\lambda_{2}}-\log I_{\lambda_{1}})/(\log\lambda_{2}-\log\lambda_{1})$, where the intensities are averaged over $\lambda_{1}$=2.875--2.89\,$\micron$ and $\lambda_{2}$=3.7--4.0\,$\micron$. The obtained APD counts and continuum index ($\alpha$) are plotted in Figure~\ref{fig-pds453-crr}. As shown with a dashed circle, both the APD count and the $\alpha$ index are located in the lower-left area for the two epochs 2009-08-17-UT and 2012-09-17-UT, in which the water ice absorption profile differs in the wavelength region of 3.20--3.25\,$\micron$. Although systematic errors are not taken into account for the APD count and the slope index, this correlation may imply a real change of the water ice absorption feature in PDS 453. \begin{figure} \begin{center} \plotone{f9.eps} \caption{\label{fig-pds453-crr} APD counts of the AO188 system vs. $L$ continuum slope index of the PDS 453 spectra.
} \end{center} \end{figure} \section{Summary} \label{sec:sum} We summarize the results of this study as follows. \begin{itemize} \item[1.]{Shallow 3\,$\micron$ water ice absorption features of two Herbig Ae stars with edge-on disks, PDS 144N and PDS 453, are detected. The absorption originates from their protoplanetary disks. } \item[2.]{No water ice absorption is detected towards PDS 144S. This indicates that the critical inclination angle to show the water ice absorption is larger in Herbig Ae disks than in low-mass young stellar disks. The larger critical inclination angle and shallower water ice absorption could be due to photodesorption of ice by the harsher FUV radiation from the Herbig Ae stars.} \item[3.]{The unusual profile of the water ice absorption detected in PDS 453 is very similar to the one found in d216-0939. The observations suggest that an inclination angle of 76--80$\arcdeg$ is needed to show this feature, which is attributed to large ice grains with high crystallinity.} \item[4.]{Water ice absorption features detected in the multi-epoch 2.8--4.2\,$\micron$ spectra of PDS 453 show a possible variation correlated with the $L$ continuum slope and the optical brightness. It may be caused by variable absorption at the disk surface.} \end{itemize} \acknowledgements We thank the entire support staff at the Subaru telescope for their efforts in keeping this very complicated facility operational, in particular the instrument maintenance staff whose efforts kept the instrument stable and allowed us to obtain reliable monitoring observations over a long period of time. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. \facility{Subaru(IRCS, AO188)}
\section{Introduction} This paper is a contribution to the study of singular stationarity. We prove two consistency results concerning the notion of tightly stationary sets, which was introduced by Foreman and Magidor in the 1990s. Two notions of singular stationarity were introduced and studied in \cite{ForMag-MS}: mutual stationarity and tight stationarity. Both notions concern properties of sequences $\langle S_i \mid i <\tau\rangle$ of sets $S_i \subseteq \kappa_i$ for an increasing sequence of regular cardinals $\langle \kappa_i \mid i < \tau\rangle$. It is shown there that tight stationarity is a strengthening of mutual stationarity which satisfies analogs of the well-known Fodor's lemma and Solovay's splitting theorem for stationary sets of regular cardinals. Foreman and Magidor raised the question of whether every mutually stationary sequence is tightly stationary. Models containing mutually stationary sequences on the cardinals $\langle \omega_n \mid n < \omega\rangle$ which are not tight were obtained by Cummings, Foreman, and Magidor, and by Steprans and Foreman (\cite{CFM-CanStrII}). Chen and Neeman (\cite{CheNee-TS}) obtained a strong global result: a model in which there are mutually stationary, non-tight stationary sequences on every increasing $\omega$-sequence of regular cardinals. Moreover, they show that this property of their model is immune to further forcing by a wide class of natural posets. In \cite{BN-MSI}, several consistency results regarding mutual stationarity were obtained by the author. The goal of this paper is to introduce two positive results concerning tightly stationary sets. We construct models which contain a sequence of regular cardinals $\vec{\kappa} = \langle \kappa_n \mid n < \omega\rangle$ with the property that every sequence $\vec{S} = \langle S_n \mid n < \omega\rangle$ of stationary sets of some fixed cofinality, $S_n \subseteq \kappa_n$, is tightly stationary in a forcing extension.
This shows that there is no natural indestructible obstruction to constructing models in which the notions of mutual stationarity and tight stationarity coincide. The methods we apply to obtain the results are extender-based forcing methods. They are known for their ability to generically add scales to products of cardinals $\prod_n \kappa_n$, which are closely connected with tightly stationary sequences. In \cite{CFM-CanI}, the authors established many connections between scales and tight structures. These connections were further studied and extended in \cite{chen-treelikeTS}, where strong failure results for tight stationarity were obtained. Building on the known connections between tight stationarity and scales, we are able to reduce the problem of forcing a sequence $\langle S_n \mid n < \omega\rangle$ to be tightly stationary to forcing a scale $\vec{f} = \langle f_\alpha \mid \alpha < \lambda\rangle$ in $\prod_n\kappa_n$ with certain properties (i.e., scales with stationarily many good continuous points $\delta < \lambda$ for which $f_\delta(n) \in S_n$ for all but finitely many $n < \omega$). This approach leads us to the following two results. \begin{theorem}\label{thm1} Suppose that $\langle \kappa_n \mid n < \omega\rangle$ is an increasing sequence of $(+1)$-extendible cardinals. Then for every sequence of fixed-cofinality stationary sets $\vec{S} = \langle S_n \mid n < \omega\rangle$ with $S_n \subseteq \kappa_n$, there exists a generic extension in which $\vec{S}$ is tightly stationary. \end{theorem} \begin{theorem}\label{thm2} It is consistent relative to the existence of a sequence $\langle \kappa_n \mid n < \omega\rangle$ of cardinals $\kappa_n$ which are $\kappa_n^{+n+3}$-strong, that there is a model with a subset $\langle \omega_{s_n} \mid n < \omega\rangle$ of the $\omega_n$'s such that every fixed-cofinality sequence of stationary sets $S_n \subseteq \omega_{s_n}$ is tightly stationary in a generic extension.
\end{theorem} Although it might seem, at first glance, that the second theorem is superior to the first in every parameter, the second theorem provides a result which is less canonical in the following sense: as opposed to the first theorem, where the sequence of cardinals $\langle \kappa_n \mid n < \omega\rangle$ is given in advance, the sequence of cardinals $\langle \omega_{s_n}\mid n < \omega\rangle$ in the second theorem does not exist in the minimal ground model (i.e., in the core model $K$ or the mantle of the final generic extension) but is rather obtained as a Prikry-generic sequence in some intermediate extension. Both theorems are obtained using variants of the short-extenders forcing method of Gitik (\cite{Gitik-EBF1} and \cite{GitUng-SEF}). To prove Theorem \ref{thm1}, we apply the short-extenders method to a sequence of long extenders\footnote{i.e., a $(\kappa_n,j_n(\kappa_n))$-extender derived from an embedding $j_n$ with critical point $\kappa_n$.} and argue that if the extenders are derived from $(+1)$-extendible embeddings, then every stationary sequence in the ground model is tight in the generic extension. For Theorem \ref{thm2}, we force with variants of the short-extenders forcing in which the $n$th assignment function $a_n$ depends on the generic information of the previous Prikry points of the normal generators. \\ \textbf{Organization of the paper - } In Section \ref{section-pre}, we review relevant results which connect the notions of approachable sets, tight structures, and scales. The description follows the work of Cummings, Foreman, and Magidor from \cite{CFM-CanI}. We conclude Section \ref{section-pre} with a result (Proposition \ref{prop-IAmain}) which reduces the problem of obtaining tightly stationary sequences to the existence of a certain scale. In Section \ref{section-thm1}, we describe Gitik's extender-based forcing and its main properties.
We then show that a long-extender variant of this forcing produces a desirable scale, and prove Theorem \ref{thm1}. Finally, in Section \ref{section-thm2}, we introduce a variant of the short-extenders forcing which allows us to obtain a similar construction below $\aleph_\omega$, and prove Theorem \ref{thm2}.${}$\\ Our notation is (hopefully) standard, with the exception that we follow the Jerusalem forcing convention. This means that for two conditions $p,q$ in a poset $(\mathbb{P},\leq)$, the fact that $p$ is stronger than $q$ (i.e., more informative) will be denoted by $p \geq q$. \section{Preliminaries}\label{section-pre} The purpose of this section is to define the notions of tight structures and tightly stationary sequences and to review their connections with the existence of scales with many good points. The connections go through the notion of internally approachable structures and results of Cummings, Foreman, Magidor, and Shelah. Our presentation follows the description of \cite{CFM-CanI}, and we refer the reader to that paper for an extensive treatment of the subject. We commence by defining the notions of tight structures and tightly stationary sequences from \cite{ForMag-MS}. \begin{definition} \begin{enumerate} \item Let $\vec{\kappa} = \langle \kappa_n \mid n < \omega\rangle$ be an increasing sequence of regular cardinals and $\mathfrak{A}$ an algebra expanding $\langle H_\theta,\in,<_\theta\rangle$ for some regular cardinal $\theta > \cup_n \kappa_n$. A subalgebra $M \prec \mathfrak{A}$ is called tight for $\vec{\kappa}$ if $\vec{\kappa} \in M$ and for every $g \in \prod_n (M \cap \kappa_n)$ there exists $f \in M \cap \prod_n \kappa_n$ such that $g(n) < f(n)$ for all $n < \omega$. \item A stationary sequence in $\vec{\kappa}$ is a sequence $\vec{S} = \langle S_n \mid n < \omega\rangle$ such that $S_n \subseteq \kappa_n$ is stationary for every $n < \omega$.
$\vec{S}$ is tightly stationary if for every algebra $\mathfrak{A}$ there exists a tight substructure $M \prec \mathfrak{A}$ such that $\sup(M \cap \kappa_n) \in S_n$ for every $n< \omega$ (we say that $M$ meets $S_n$). \end{enumerate} \end{definition} It is not difficult to see that in the definition of a tight structure, we can replace the last requirement with a slightly weaker one, which demands that for every $g \in \prod_n (M \cap \kappa_n)$ there exists $f \in M \cap \prod_n \kappa_n$ such that $g(n) < f(n)$ for all but finitely many $n < \omega$ (also denoted by $g<^*f$). We will focus on the case where the sets $S_n$ in $\vec{S}$ consist of ordinals of some fixed cofinality; that is, sequences $\vec{S}$ for which there exists some regular $\mu < \cup_n \kappa_n$ such that $S_n$ is defined for every $n$ with $\mu < \kappa_n$ and $S_n \subseteq \kappa_n \cap \cof(\mu)$. To show that a certain sequence $\vec{S}$ is tightly stationary, we will show that every algebra $\mathfrak{A}$ has a tight subalgebra $M \prec \mathfrak{A}$ of size $|M| = \mu$ such that $\sup(M \cap \kappa_n) \in S_n$ for all but finitely many $n < \omega$. Obtaining this suffices to show that $\vec{S}$ is tightly stationary since, by a well-known argument of Baumgartner (\cite{Baum}), adding ordinals to $M$ below a cardinal $\kappa_i$ does not change its supremum below any regular cardinal $\kappa > \kappa_i$ in $M$. The same argument shows that this addition does not add a new function $f \in \prod_n \kappa_n$ which dominates every function in $M \cap \prod_n \kappa_n$ in the order $<^*$ on $\prod_n \kappa_n$ (where $\kappa_{\omega} = \cup_n\kappa_n$), which is defined by $f <^* g$ if and only if $f(n) < g(n)$ for all but finitely many $n < \omega$.
It follows that for every finite sequence of stationary sets $S_m, S_{m+1},\dots,S_k$ with $S_i \subseteq \kappa_i \cap \cof(\mu)$, there exists an elementary extension $M'$ of $M$ which meets $S_n$ for all $n$ with $\kappa_n > \mu$, and further satisfies: \begin{itemize} \item $\sup(M' \cap \kappa_n) =\sup(M \cap \kappa_n)$ for almost all $n < \omega$. \item Every function in $M' \cap \prod_n \kappa_n$ is dominated in $<^*$ by a function in $M \cap \prod_n \kappa_n$. \end{itemize} The latter, combined with the fact that $M$ is tight, guarantees that $M'$ is also tight. \begin{remark} The same considerations show that the ideal-based methods which were used in \cite{BN-MSI} to obtain mutually stationary sequences are not useful in the context of tight stationarity. The reason is that the substructures $M \prec \mathfrak{A}$ constructed in \cite{BN-MSI} are limits of $\omega$-chains of structures, $M = \cup_n M_n$, where $M_0$ is a tight structure and, for each $n < \omega$, $M_{n+1}$ is obtained from $M_n$ by adding a family $s(\alpha)$ of sets of ordinals below $\kappa_{n+1}$ (i.e., $M_{n+1} = \SK^{\mathfrak{A}}(M_n \cup s(\alpha))$) so that $M_{n+1} \cap \kappa_n = M_n \cap \kappa_n$ and $\sup(M_{n+1} \cap \kappa_{n+1}) \in S_{n+1}$. While it is clear that $\sup(M_n \cap \kappa_n) > \sup(M_0 \cap \kappa_n)$ for almost all $n$, it is possible to show by induction on $n$, using Baumgartner's argument, that $M_0 \cap \prod_n \kappa_n$ is $<^*$-cofinal in the product $M_k \cap \prod_n \kappa_n$ for all $k$, and thus also in $M \cap \prod_n \kappa_n$. Consequently, the functions in $M \cap \prod_n \kappa_n$, which are dominated by the functions in $M_0 \cap \prod_n \kappa_n$, cannot dominate all functions in the product $\prod_n (M \cap \kappa_n)$, which is strictly bigger than the product $\prod_n (M_0 \cap \kappa_n)$. \end{remark} We proceed to describe the connection between tight structures, approachable ordinals, and scales.
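Before doing so, we make the final step of the preceding remark explicit (a routine verification; the sketch below is ours and uses only the properties stated in the remark). Since $\sup(M_{n+1} \cap \kappa_n) \geq \sup(M_n \cap \kappa_n) > \sup(M_0 \cap \kappa_n)$ for almost all $n$, one may pick $g \in \prod_n (M \cap \kappa_n)$ with
\[
g(n) \in M_{n+1} \cap \kappa_n \mbox{\quad and \quad} g(n) > \sup(M_0 \cap \kappa_n)
\]
for almost all $n < \omega$. If some $f \in M \cap \prod_n \kappa_n$ satisfied $g <^* f$, then by the $<^*$-cofinality of $M_0 \cap \prod_n \kappa_n$ there would be $f' \in M_0 \cap \prod_n \kappa_n$ with $f <^* f'$; but $f'(n) \leq \sup(M_0 \cap \kappa_n) < g(n)$ for almost all $n$, contradicting $g <^* f <^* f'$. Therefore $M$ is not tight.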
\subsection{Internally approachable structures} \begin{definition} Let $\mathfrak{A}$ be an algebra expanding $\langle H_\theta,\in,<_\theta\rangle$ for some regular cardinal $\theta$. \begin{enumerate} \item A sequence $\vec{M} = \langle M_i \mid i < \rho\rangle$ of substructures $M_i \prec \mathfrak{A}$ is called an internally approachable chain if it is $\subseteq$-increasing and continuous\footnote{i.e., $M_i \subseteq M_j$ for all $i < j$, and $M_\gamma = \bigcup_{i<\gamma}M_i$ if $\gamma < \rho$ is a limit ordinal.}, and for every successor ordinal $j < \rho$, $\vec{M}\restriction j = \langle M_i \mid i < j\rangle$ belongs to $M_j$. \item A substructure $M \prec \mathfrak{A}$ is called internally approachable (IA) if there exists an IA chain $\langle M_i \mid i < \rho\rangle$ such that $M = \bigcup_{i < \rho}M_i$. \end{enumerate} \end{definition} We refer to $\rho$ in the definition as the length of the IA chain. Internally approachable structures satisfy many natural properties; we list three. \begin{theorem}[\cite{CFM-CanI}] Suppose that $\delta$ is a regular uncountable cardinal and that $M$ is the limit of an IA chain $\vec{M} = \langle M_i \mid i < \delta\rangle$. Then \begin{enumerate} \item $\delta \subseteq M$. \item For every regular cardinal $\kappa > \delta$ in $M$, $\cf(M \cap \kappa) = \delta$. \item Suppose that $\vec{\kappa} = \langle \kappa_n \mid n < \omega\rangle$ is an increasing sequence of regular cardinals. If $\vec{\kappa} \in M$ and $|M| < \kappa_0$ then every function in $\prod_n (M \cap \kappa_n)$ is pointwise dominated by a function in $M \cap \prod_n \kappa_n$. Therefore, $M$ is tight for $\vec{\kappa}$. \end{enumerate} \end{theorem} Regarding the third statement, we note that since $M$ is the limit of an IA chain $\vec{M} = \langle M_i \mid i < \delta\rangle$ of uncountable length $\delta$, the range of every function $f \in \prod_n (M \cap \kappa_n)$ is contained in $M_i$ for some $i <\delta$.
Thus $f$ is dominated by the characteristic function $\chi_{M_i}^{\vec{\kappa}}$ of $M_i$, defined by $\chi_{M_i}^{\vec{\kappa}}(n) = \sup(M_i \cap \kappa_n)$. This function clearly belongs to $M \cap \prod_n \kappa_n$ since $M_i$ does. \subsection{Existence of IA structures} We state two results, by Shelah and by Foreman and Magidor, which together guarantee the existence of many ordinals $\delta$ which are of the form $\sup(N \cap \lambda)$ for some IA structure $N$ of size $|N| = \cf(\delta)$. \begin{definition}\label{def-IApoints} Let $\lambda$ be a regular cardinal and $\vec{a} = \langle a_\nu \mid \nu < \lambda\rangle$ be a sequence of bounded subsets of $\lambda$. A limit ordinal $\delta < \lambda$ is said to be approachable with respect to $\vec{a}$ if there is a cofinal subset $D \subseteq \delta$ of minimal ordertype $\otp(D) = \cf(\delta)$, such that for every $\beta < \delta$, $D \cap \beta \in \vec{a}\restriction\delta = \langle a_\nu \mid \nu < \delta\rangle$. We denote the set of approachable ordinals with respect to $\vec{a}$ by $S_{\vec{a}}$. \end{definition} \begin{theorem}[Shelah,\cite{She-CarAri}]\label{thm-ShelahIA} Suppose that $\lambda = \eta^+$ is a successor cardinal. Then for every regular cardinal $\mu < \eta$ there exists a sequence $\vec{a} \subseteq [\lambda]^{<\lambda}$ such that $S_{\vec{a}} \cap \cof(\mu)$ is stationary in $\lambda$. \end{theorem} Foreman and Magidor established the connection between approachable ordinals and internally approachable structures. \begin{theorem}[Foreman-Magidor \cite{ForMag-veryweak}]\label{thm-IAmodels} Let $\lambda$ be a regular cardinal and $\vec{a} = \langle a_\alpha \mid \alpha < \lambda\rangle$ be a sequence of bounded subsets of $\lambda$. Suppose that $\mathfrak{A}$ is an algebra which expands $\langle H_\theta,\in,<_\theta, \vec{a} \rangle$ for some regular $\theta > \lambda$.
Then there exists a closed unbounded set $C \subseteq \lambda$ such that for every $\delta \in C \cap S_{\vec{a}}$ there is an IA chain of length $\mu = \cf(\delta)$ whose limit $M \prec \mathfrak{A}$ has cardinality $\mu$ and satisfies $\sup(M \cap \lambda) = \delta$. \end{theorem} The idea is to start with a long IA chain $\langle M_i \mid i < \lambda\rangle$ of substructures of $\mathfrak{A}$ and take $C$ to be the club of $\delta < \lambda$ such that $M_\delta \cap \lambda = \delta$. Then, for $\delta \in C \cap S_{\vec{a}}$, $M_\delta$ contains $\vec{a}\restriction \delta$, and we can therefore approximate within $M_\delta$ some cofinal $D \subseteq \delta$ of ordertype $\mu= \cf(\delta)$. With this, one can create an IA chain of $\mu$-sized structures $\langle N_\nu \mid \nu < \mu\rangle$ within $M_\delta$, each of which is the Skolem hull in some $M_i$ of an initial segment of $D$. The union of these substructures is an IA structure $N \subseteq M_\delta$ of size $\mu$ which contains $D$ and thus satisfies $\sup(N \cap \lambda) = \delta$. \begin{corollary}\label{cor-IAsum} Suppose that $\lambda = \eta^+$ is a successor cardinal and $\mu < \eta$ is regular. Then for every regular cardinal $\theta >\lambda$ and every algebra $\mathfrak{A}$ expanding $\langle H_\theta,\in,<_\theta\rangle$ there is a sequence $\vec{a}$ such that $S_{\vec{a}} \cap \cof(\mu)$ is stationary in $\lambda$ and for every $\delta \in S_{\vec{a}} \cap \cof(\mu)$ there is an IA substructure $M \prec \mathfrak{A}$ of length and cardinality $\mu$ such that $\sup(M \cap \lambda) = \delta$. \end{corollary} \subsection{Scales and IA structures} Let $<^*$ be the order relation on functions $f$ from $\omega$ to the ordinals, defined by $f<^* g$ if and only if $f(n) < g(n)$ for all but finitely many $n < \omega$. \begin{definition} Let $\vec{\kappa} = \langle \kappa_n \mid n < \omega\rangle$ be an increasing sequence of regular cardinals.
A sequence of functions $\vec{f} = \langle f_\alpha \mid \alpha < \lambda\rangle$ is called a scale on $\prod_n \kappa_n$ if it satisfies the following conditions: \begin{enumerate} \item $\vec{f}$ is increasing in the order $<^*$; \item $\vec{f}$ is cofinal in the structure $(\prod_n \kappa_n,<^*)$, in the sense that for every $g \in \prod_n \kappa_n$ there exists some $\alpha < \lambda$ such that $g<^* f_\alpha$; and \item for every $\alpha < \lambda$, $f_\alpha(n) < \kappa_n$ for all but finitely many $n < \omega$. \end{enumerate} \end{definition} Our definition of a scale is a slight relaxation of the usual one, which further requires $\vec{f}$ to be contained in $\prod_n \kappa_n$ (namely, that $f_\alpha(n) < \kappa_n$ for all $n < \omega$). The two versions are equivalent for all of our purposes, and it is not difficult to transform a sequence $\vec{f}$ which is a scale according to our definition into a scale according to the standard definition. We proceed to define exact upper bounds and continuity points of scales. \begin{definition} Let $\vec{f} = \langle f_\alpha \mid \alpha< \lambda\rangle$ be a scale on some product $\prod_n \kappa_n$. \begin{enumerate} \item Let $\delta < \lambda$ and $\vec{f}\restriction \delta = \langle f_\alpha \mid \alpha < \delta\rangle$. We say that a function $g \in \prod_n \kappa_n$ is an exact upper bound (eub) of $\vec{f}\restriction \delta$ if $\vec{f}\restriction \delta$ is a scale on $\prod_n g(n)$. \item We say that an ordinal $\delta < \lambda$ is a continuity point of $\vec{f}$ if either $\vec{f}\restriction\delta$ does not have an eub, or $f_\delta$ is such a bound. \end{enumerate} \end{definition} It is not difficult to verify that if $g_1$ and $g_2$ are two eubs of $\vec{f}\restriction\delta$ then $g_1(n) = g_2(n)$ for almost all $n < \omega$.
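The verification is routine, and we include a sketch for the reader's convenience (the sketch is ours; the argument is standard in PCF theory). Suppose otherwise; by symmetry we may assume that the set $A = \{n < \omega \mid g_1(n) < g_2(n)\}$ is infinite, and we may assume $g_2(n) > 0$ for all $n$ (this holds for almost all $n$, and finitely many coordinates do not matter). Define $h \in \prod_n g_2(n)$ by letting $h(n) = g_1(n)$ for $n \in A$ and $h(n) = 0$ otherwise. Since $\vec{f}\restriction\delta$ is a scale on $\prod_n g_2(n)$, there is some $\alpha < \delta$ with $h <^* f_\alpha$; since it is also a scale on $\prod_n g_1(n)$, we have $f_\alpha(n) < g_1(n)$ for almost all $n < \omega$. For all sufficiently large $n \in A$ this yields $g_1(n) = h(n) < f_\alpha(n) < g_1(n)$, a contradiction.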
Let $\mathfrak{A}$ be an algebra expanding $\langle H_\theta,\in,<_\theta\rangle$ for some regular cardinal $\theta$, and $M \prec \mathfrak{A}$ be a tight substructure which contains a scale $\vec{f} = \langle f_\alpha \mid \alpha < \lambda\rangle$ on $\prod_n \kappa_n$. Denote $\sup(M \cap \lambda)$ by $\delta$. We make a few observations concerning $\vec{f}\restriction \delta$ and $M$. \begin{enumerate} \item Every function in $M \cap \vec{f}$ is $<^*$-dominated by the characteristic function $\chi_M^{\vec{\kappa}}$ of $M$, defined by $\chi_{M}^{\vec{\kappa}}(n) = \sup(M \cap \kappa_n)$. \item Suppose $h$ is a function in $\prod_n \chi_M^{\vec{\kappa}}(n)$. Then $h$ is pointwise dominated by a function $g \in \prod_n (M \cap \kappa_n)$. Now, since $M$ is tight, $g$ is $<^*$-dominated by some $f \in M \cap \prod_n \kappa_n$, which, in turn, is $<^*$-dominated by some $f_\alpha \in M \cap \vec{f}$. \item It follows from the last two observations that $\chi_M^{\vec{\kappa}}$ is an eub of $\vec{f} \cap M$. Since $\vec{f} \cap M$ is cofinally interleaved in the ordering $<^*$ with $\vec{f}\restriction \delta$, we conclude that $\chi_{M}^{\vec{\kappa}}$ is an eub of $\vec{f}\restriction \delta$. \end{enumerate} By combining the last observation with Corollary \ref{cor-IAsum}, and the fact that every IA structure is tight, we obtain the following conclusion. \begin{proposition}\label{prop-IAmain} Let $\vec{\kappa} = \langle \kappa_n \mid n < \omega\rangle$ be an increasing sequence of regular cardinals whose limit is $\kappa_{\omega} = \cup_n \kappa_n$. Suppose that $\vec{f} = \langle f_\alpha \mid \alpha < \lambda\rangle$ is a scale on $\prod_n \kappa_n$ of a successor length $\lambda \geq \kappa_{\omega}^+$ and that $\mathfrak{A}$ is an algebra expanding $\langle H_\theta, \in,<_\theta, \vec{f}\rangle$ for some regular cardinal $\theta > \lambda$. 
Then for every regular cardinal $\mu < \kappa_\omega$ there is a sequence $\vec{a} \subseteq [\lambda]^{<\lambda}$ and a closed unbounded set $C \subseteq \lambda$ such that $S_{\vec{a}} \cap \cof(\mu)$ is stationary in $\lambda$ and for every $\delta \in S_{\vec{a}} \cap C$ there is a tight substructure $M \prec \mathfrak{A}$ which satisfies that $\sup(M \cap \lambda) = \delta$ and $\chi_M^{\vec{\kappa}}$ is an eub of $\vec{f}\restriction \delta$. If, moreover, $\delta$ is a continuity point of $\vec{f}$ then $\sup(M \cap \kappa_n) = f_\delta(n)$ for almost every $n < \omega$. \end{proposition} \section{Short extenders forcing and tight stationarity}\label{section-thm1} The purpose of this section is to prove Theorem \ref{thm1}. The proof is obtained by forcing with a version of Gitik's short-extenders-based forcing, built from certain long extenders. Our presentation follows \cite{GitUng-SEF} for the most part and omits most of the technical proofs. The only exception to this is that we will replace the notion of $k$-good ordinals in \cite{GitUng-SEF} with the more recent one from \cite{Gitik-EBF1}. We commence by describing the large cardinal framework which is used to construct the forcing. \subsection{Ground model assumptions and related forcing preliminaries} Let $\kappa$ be a measurable cardinal and $j: V_{\kappa+1} \to N$ be an elementary embedding of transitive sets with critical point $\kappa$ such that ${}^\kappa N \subseteq N$. We say \begin{enumerate} \item $j$ is $\lambda$-strong for some $\lambda \leq j(\kappa)$ if $V_{\lambda} \subseteq N$. \item $j$ is $(+1)$-extendible if $N = V_{j(\kappa)+1}$. \end{enumerate} Correspondingly, we say \begin{itemize} \item $\kappa$ is $\lambda$-strong if there exists a $\lambda$-strong embedding $j$ as above, with $\lambda < j(\kappa)$. \item $\kappa$ is superstrong if there exists an embedding $j$ as above which is $j(\kappa)$-strong.
\item $\kappa$ is $(+1)$-extendible if there exists a $(+1)$-extendible embedding $j$ as above. \end{itemize} These notions are consistency-wise increasing in the large cardinal hierarchy: A $(+1)$-extendible cardinal is superstrong, and the consistency of a superstrong cardinal implies the consistency of a cardinal $\kappa$ which is $\lambda$-strong for every $\lambda$. Furthermore, if $\kappa$ is a $\kappa^+$-supercompact cardinal, then it is a limit of $(+1)$-extendible cardinals (see \cite{kanamori}). For the rest of the section, we assume $V$ is a model of $\GCH$ which contains an increasing sequence of cardinals $\vec{\kappa} = \langle \kappa_n \mid n < \omega\rangle$ such that each $\kappa_n$ is the critical point of an elementary embedding $j_n : V_{\kappa_n+1} \to N_n$ which is $\lambda_n$-strong for some regular $\lambda_n$ with $\kappa_n^{+n+2} \leq \lambda_n \leq j_n(\kappa_n)$. We also fix a regular cardinal $\chi \gg \kappa_{\omega} = \cup_n \kappa_n$ and a structure $(H_{\chi}, \in,<_\chi)$. For each $n < \omega$, we derive a $(\kappa_n,\lambda_n)$-extender $E_n$ from $j_n$ as follows. For every ordinal $\alpha \in [\kappa_n,\lambda_n)$ let $E_n(\alpha)$ be the $\kappa_n$-complete measure on $\kappa_n$ defined by $X \in E_n(\alpha) \iff \alpha \in j_n(X)$. We define a Rudin-Keisler order on the indices of $E_n$ by writing $\alpha \leq_{E_n} \beta$ if and only if $\alpha \leq \beta$ and there exists a function $f : \kappa_n \to \kappa_n$ so that $j_n(f)(\beta) = \alpha$. For each $\alpha \leq_{E_n} \beta$ we denote the $<_\chi$-first function $f$ with the above property by $\pi_{\beta,\alpha}$, with the possible exception when $\alpha = \beta$, in which case we take $\pi_{\beta,\beta}$ to be the identity function. It turns out that the ordering $\leq_{E_n}$ is $\kappa_n$-directed (\cite{Gitik-HB}). We proceed to define the notion of $k$-good indices.
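Before doing so, we record a simple example of the objects just defined; this is standard and is included only for orientation. The index $\alpha = \kappa_n$ recovers the normal measure on $\kappa_n$ derived from $j_n$, as $X \in E_n(\kappa_n) \iff \kappa_n \in j_n(X)$. Moreover, if $\kappa_n \leq_{E_n} \beta$, then for every $X \subseteq \kappa_n$, \[ X \in E_n(\kappa_n) \iff \pi_{\beta,\kappa_n}^{-1}(X) \in E_n(\beta), \] since $\pi_{\beta,\kappa_n}^{-1}(X) \in E_n(\beta)$ holds exactly when $j_n(\pi_{\beta,\kappa_n})(\beta) = \kappa_n$ belongs to $j_n(X)$. Thus $\pi_{\beta,\kappa_n}$ projects the measure $E_n(\beta)$ onto the normal measure $E_n(\kappa_n)$.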
Our notion here deviates from the description of \cite{GitUng-SEF}, and follows \cite{Gitik-EBF1}. \begin{definition}\label{def-goodpoints} For any two integer values $1 < k \leq n$, let $\mathfrak{A}_{n,k}$ be the structure $( H_{\chi^{+k}}, \in, <_{\chi^{+k}}, \chi,E_n, \langle \alpha \mid \alpha \leq \kappa_n^{+k}\rangle)$\footnote{Namely, $\mathfrak{A}_{n,k}$ is the expansion of $( H_{\chi^{+k}}, \in, <_{\chi^{+k}}, \chi,E_n)$ in a language which contains additional constant symbols $c_\alpha$, $\alpha \leq \kappa_n^{+k}$, so that each $c_\alpha$ is interpreted in the model $\mathfrak{A}_{n,k}$ as $\alpha$.}. We assume that the well-ordering $<_{\chi^{+k}}$ of $H_{\chi^{+k}}$ extends the given order $<_\chi$ of $H_{\chi}$. An ordinal $\delta < \lambda_n$ is called $k$-good if there exists an elementary substructure $M_{n,k}(\delta) \prec \mathfrak{A}_{n,k}$ so that $M_{n,k}(\delta) \cap \lambda_n = \delta$. $\delta$ is said to be good if it is $k$-good for every $k$ with $1 < k \leq n$. \end{definition} It is easy to see that the set of good ordinals $\alpha$ is closed unbounded in $\lambda_n$ for each $n < \omega$. We end this part with a simple but important observation. \begin{lemma}\label{lem-Observationsgoodness} $\mathfrak{A}_{n,l} \in \mathfrak{A}_{n,k}$ for every $n < \omega$ and $l < k < \omega$. We therefore have \begin{enumerate} \item An ordinal $\gamma < \lambda_n$ is $l$-good iff $\mathfrak{A}_{n,k} \models \gamma \text{ is } l\text{-good}$. \item Suppose that $\gamma$ is $k$-good and $x \in \mathfrak{A}_{n,k}$ is a set of ordinals with $\min(x) \geq \gamma$. For every formula $\phi(v)$ in parameters from $M_{n,k}(\gamma)$, if $\mathfrak{A}_{n,k} \models \phi(x)$ then for every $\gamma' < \gamma$ there exists a set of ordinals $x' \in M_{n,k}(\gamma)$ (in particular, $x' \subseteq \gamma$) such that $\min(x') > \gamma'$ and $M_{n,k}(\gamma) \models \phi(x')$.
\end{enumerate} \end{lemma} Assuming $\GCH$, there are only $\kappa_n^{++}$ ultrafilters on $\kappa_n$, and if $k \geq 2$ they are all definable in $\mathfrak{A}_{n,k}$. We can therefore apply Lemma \ref{lem-Observationsgoodness} to statements which involve ultrafilters and their Rudin-Keisler projections. For example, if $\gamma$ is a good ordinal and $\delta \in [\gamma,\lambda_n)$ satisfies that $E_n(\delta) = U$ for some ultrafilter $U$ (which must belong to $M_{n,k}(\gamma)$) then for every $\gamma' < \gamma$ there exists some $\delta' \in (\gamma',\gamma)$ such that $E_n(\delta') = U$ as well. This ability to move indices of the $E_n$ measures around without changing their essential ultrafilter information plays a major role in the proof that the extenders-based Prikry-type poset $\mathbb{P}$ satisfies the ${\kappa_\omega^{++}}$-c.c. \subsection{The forcing $(\mathbb{P},\leq,\leq^*)$} Let $\kappa_\omega = \cup_n \kappa_n$. Before we proceed to define the main poset $\mathbb{P}$, we introduce some relevant terminology involving partial functions from ${\kappa_\omega^{++}}$ to $\lambda_n$ and subsets of $\kappa_n$. \begin{definition}[Relevant components]\label{def-relevant} ${}$ \begin{enumerate} \item A set $r_n \in [\lambda_n]^{<\kappa_n}$ is called $k$-relevant for some $k \leq n$ if it consists of $k$-good ordinals and has a maximal ordinal in the $\leq_{E_n}$ ordering. \item A pair $(r_n,A_n)$ of a set $r_n \in [\lambda_n]^{<\kappa_n}$ and a set $A_n \subseteq \kappa_n$ is $k$-relevant if $r_n$ is $k$-relevant with a maximal ordinal $\gamma_n = \max(r_n)$, $A_n \in E_n(\gamma_n)$, and the following conditions hold. \begin{itemize} \item For every two ordinals $\alpha < \beta$ in $r_n$ and $\nu \in A_n$, $\pi_{\gamma_n,\alpha}(\nu) < \pi_{\gamma_n,\beta}(\nu)$. \item Suppose that $\alpha \leq_{E_n} \beta \leq_{E_n} \gamma$ are three ordinals in $r_n$ and $\nu \in \pi_{\gamma_n,\gamma}``A_n$.
Then \[ \pi_{\gamma,\alpha}(\nu) = \pi_{\beta,\alpha}\circ \pi_{\gamma,\beta}(\nu) . \] \end{itemize} \item A pair $(a_n,A_n)$ of a partial function $a_n : \kappa_{\omega}^{++} \to \lambda_n$ and a subset $A_n \subseteq \kappa_n$, is called $k$-relevant if $a_n$ is order preserving and $(\rng(a_n),A_n)$ is $k$-relevant in the above sense. \end{enumerate} \end{definition} We turn to define the forcing $\mathbb{P}$ which adds $\kappa_{\omega}^{++}$-many new $\omega$-sequences below $\kappa_{\omega}$. Conditions in $\mathbb{P}$ are sequences $p = \langle p_n \mid n < \omega\rangle$ which satisfy the following conditions: \begin{enumerate} \item There exists some $\ell < \omega$ such that for every $n < \ell$, $p_n = f_n$ is a partial function from $\kappa_{\omega}^{++}$ to $\kappa_n$, of size $|f_n| \leq \kappa_\omega$. \item For every $n \geq \ell$, $p_n = \langle a_n,A_n,f_n\rangle$ where \begin{itemize} \item $f_n$ is a partial function from ${\kappa_\omega^{++}}$ to $\kappa_n$ of size $|f_n| \leq \kappa_\omega$, \item $(a_n,A_n)$ is a $k_n$-relevant pair for some $k_n \geq 2$, where $a_n$ is a partial function from ${\kappa_\omega^{++}}$ to $\lambda_n$, and $\dom(a_n) \cap \dom(f_n) = \emptyset$. \end{itemize} \item $\dom(a_n) \subseteq \dom(a_m)$ for every $n \leq m$. \item $\kappa_n \in \rng(a_n)$ for all $n \geq \ell$. \item The sequence $\langle k_n \mid n < \omega\rangle$ is nondecreasing and unbounded in $\omega$. \end{enumerate} We will frequently use the following conventions when referring to conditions $p \in \mathbb{P}$: The integer $\ell$ in the definition of $p$ will be denoted by $\ell^p$. The functions $f_n$ in the definition will be denoted by $f_n^p$, and similarly, for every $n \geq \ell^p$, we will denote $a_n$ and $A_n$ by $a_n^p$ and $A_n^p$ respectively. The order relation $\leq$ of the poset $\mathbb{P}$ is the closure of the following two basic operations. 
\begin{enumerate} \item Given a condition $p \in \mathbb{P}$, a \textbf{direct extension} of $p$ is a condition $q$ which satisfies the following conditions: \begin{itemize} \item $\ell^q = \ell^p$; \item $f_n^p \subseteq f_n^q$ for all $n < \omega$; \item $a_n^p \subseteq a_n^q$ for all $n \geq \ell^q$; and \item for every $n \geq \ell^q$, if $\gamma^q_n = \max(\rng(a_n^q))$ and $\gamma^p_n = \max(\rng(a_n^p))$, then $A_n^q \subseteq \pi_{\gamma^q_n,\gamma^p_n}^{-1}(A_n^p)$. \end{itemize} The fact that $q$ is a direct extension of $p$ is denoted by $p \leq^* q$. \item Given a condition $p \in \mathbb{P}$, a \textbf{one-point extension} of $p$ is a condition ${p'}$ with the following properties: \begin{itemize} \item $\ell^{{p'}} = \ell^{p} + 1$; \item $p_n = {p'}_n$ for all $n \neq \ell^p$; and \item denoting $\max(\dom(a_{\ell^p}^p))$ by $\eta$, there exists some $\nu\in A_{\ell^p}^p$ such that \[{p'}_{\ell^p} = f^p_{\ell^p} \cup \{ \langle \tau, \pi_{a^p_{\ell^p}(\eta),a^p_{\ell^p}(\tau)}(\nu)\rangle \mid \tau \in \dom(a^p_{\ell^p})\}\] \end{itemize} The fact that ${p'}$ is obtained as a one-point extension of $p$ by $\nu \in A^p_{\ell^p}$ is denoted by writing ${p'} = p {}^\frown \langle \nu\rangle$. \end{enumerate} As mentioned above, the order $\leq$ of $\mathbb{P}$ is the one which is generated by the two given operations. Therefore, for two conditions $p,q \in \mathbb{P}$, $q$ extends $p$ (denoted $p \leq q$) if it is obtained from $p$ by finitely many applications of one-point extensions and direct extensions.
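To illustrate the one-point extension, consider a toy instance (included for orientation): suppose that $\ell^p = 0$ and $\dom(a_0^p) = \{\tau_0 < \tau_1 < \eta\}$, and let $\gamma_0 = a^p_0(\eta) = \max(\rng(a_0^p))$. For $\nu \in A_0^p$, the one-point extension $p {}^\frown \langle \nu \rangle$ replaces the component $p_0 = \langle a_0^p, A_0^p, f_0^p\rangle$ with the partial function \[ f^p_0 \cup \{ \langle \tau_0, \pi_{\gamma_0,a^p_0(\tau_0)}(\nu)\rangle, \langle \tau_1, \pi_{\gamma_0,a^p_0(\tau_1)}(\nu)\rangle, \langle \eta, \nu\rangle\}, \] where the value assigned to $\eta$ is $\nu$ itself, since $\pi_{\gamma_0,\gamma_0}$ is the identity. Note that the first clause of Definition \ref{def-relevant} guarantees $\pi_{\gamma_0,a^p_0(\tau_0)}(\nu) < \pi_{\gamma_0,a^p_0(\tau_1)}(\nu) < \nu$, so the new function respects the order of $\dom(a^p_0)$.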
It is routine to verify that if $q$ extends $p$ then $q$ is a direct extension of a condition of the form \[p {}^\frown \langle \nu_{\ell^p}, \nu_{\ell^p+1}, \dots, \nu_t\rangle = (\dots((p {}^\frown \langle \nu_{\ell^p}\rangle) {}^\frown \langle\nu_{\ell^p+1}\rangle) \dots ) {}^\frown \langle\nu_t\rangle, \] which is the condition obtained from $p$ by taking $(t+1-\ell^p)$ many one-point extensions with ordinals $\nu_n \in A^p_n$ for every $n$, $\ell^p \leq n \leq t$. Let $p = \langle p_n \mid n < \omega\rangle$ be a condition in $\mathbb{P}$. For every $m < \omega$ we decompose $p$ into the two parts, $p\restriction m = \langle p_n \mid n < m\rangle$ and $p\downharpoonright m = \langle p_n \mid n \geq m\rangle$. With this, we define $\mathbb{P}_{< m} = \{ p\restriction m \mid p \in \mathbb{P}\}$ and $\mathbb{P}_{\geq m} = \{ p\downharpoonright m \mid p \in \mathbb{P}\}$. It is not difficult to see that the orders $\leq$ and $\leq^*$ on $\mathbb{P}$ naturally induce order relations on $\mathbb{P}_{< m}$ and $\mathbb{P}_{\geq m}$ for every $m < \omega$. Moreover, for every $p \in \mathbb{P}$ and $m \leq \ell^p$, the poset $(\mathbb{P}/p,\leq)$ naturally breaks into the product $(\mathbb{P}_{< m}/p\restriction m, \leq) \times (\mathbb{P}_{\geq m}/ p\downharpoonright m, \leq)$. The same holds if we replace $\leq$ by $\leq^*$. Finally, we note that if $m \leq \ell^p$ then the restrictions of $\leq$ and $\leq^*$ to $\mathbb{P}_{< m}/ p\restriction m$ coincide. We list several basic properties of $\mathbb{P}$ which are immediate consequences of the definitions. \begin{lemma}\label{Lem-PObasicproperties}${}$ \begin{enumerate} \item $\mathbb{P}$ satisfies the Prikry condition. That is, for every statement $\sigma$ of the forcing language $(\mathbb{P},\leq)$ and every condition $p \in \mathbb{P}$ there exists a direct extension $p^* \geq^* p$ such that $p^*$ decides $\sigma$. The same is true for $\mathbb{P}_{< m}$ and $\mathbb{P}_{\geq m}$ for every $m < \omega$.
\item For every $m < \omega$, the direct extension order $\leq^*$ of $\mathbb{P}_{\geq m}$ is $\kappa_m$-closed. \item For every condition $p \in \mathbb{P}$ and $m \leq \ell^p$, the order $\leq$ of $\mathbb{P}_{< m}$ is $\kappa_{\omega}^+$-closed. \item For every condition $p \in \mathbb{P}$, the direct extension order of $\mathbb{P}/p$ is $\kappa_{\ell^p}$-closed. \end{enumerate} \end{lemma} The last property implies that the forcing $\mathbb{P}$ does not add new bounded subsets to $\kappa_{\omega}$. Next, we state a technical strengthening of the Prikry Lemma which follows from the argument of its proof. \begin{lemma}\label{Lem-meetdense} Let $D \subseteq \mathbb{P}$ be an open dense set (in the usual order $\leq$). For every condition $p \in \mathbb{P}$ there are $k \geq \ell^p$ and $p^* \geq^* p$ so that for every $\vec{\nu} = \langle \nu_{\ell^p},\dots, \nu_{k-1}\rangle \in \prod_{\ell^p \leq n < k} A_n^{p^*}$, $p^* {}^\frown \vec{\nu}$ belongs to $D$\footnote{Note that when $k = \ell^p$, the product $\prod_{\ell^p \leq n < k} A_n^{p^*}$ is taken over an empty index set, and the statement amounts to $p^* \in D$.}. \end{lemma} A standard application of Lemma \ref{Lem-meetdense} is that the forcing $\mathbb{P}$ preserves $\kappa_\omega^+$. We sketch the argument. \begin{corollary} $\mathbb{P}$ does not collapse $\kappa_\omega^+$. \end{corollary} \begin{proof}[Proof Sketch.] The fact that $\kappa_{\omega}$ is singular in $V$ implies that if $\kappa_\omega^+$ is collapsed then $\mathbb{P}$ introduces a cofinal function $f : \rho \to \kappa_{\omega}^+$ for some $\rho < \kappa_{\omega}$. Let $\dot{f}$ be a $\mathbb{P}$-name for a function from $\rho$ to $\kappa_{\omega}^+$, and $p$ be a condition in $\mathbb{P}$ with $\kappa_{\ell^p} > \rho$. For every $i < \rho$, let $D_i$ be the dense open subset of $\mathbb{P}$ of conditions $q \in \mathbb{P}$ which decide the ordinal value of $\dot{f}(\check{i})$.
Since the direct extension order of $\mathbb{P}/p$ is $\kappa_{\ell^p}$-closed, we can repeatedly use Lemma \ref{Lem-meetdense} and construct a $\leq^*$-increasing sequence of conditions $\langle p^i \mid i \leq \rho\rangle$ such that for every $i < \rho$ there exists some $n_i \geq \ell^p$ so that $p^i {}^\frown \vec{\nu}$ belongs to $D_i$ for all $\vec{\nu}\in \prod_{\ell^p\leq n < n_i}A^{p^i}_n$. Let $p^* = p^\rho$. It follows that there are functions $F_i$, $i < \rho$, with $F_i : \prod_{\ell^p \leq n < n_i} A^{p^*}_n \to \kappa_\omega^+$ for all $i$, such that for each $i < \rho$ and $\vec{\nu} \in \prod_{\ell^p \leq n < n_i} A^{p^*}_n$, $p^* {}^\frown \vec{\nu} \Vdash \dot{f}(\check{i}) = \check{F_i}(\vec{\nu})$. It follows that $p^*$ forces that $\rng(\dot{f})$ is a subset of $X = \bigcup_{i<\rho}\rng(F_i)$, which has size $|X| \leq \kappa_{\omega}$. Consequently, $p^*$ forces that $\dot{f}$ is bounded in $\kappa_\omega^+$. \end{proof} \subsection{The essential generic information} Let $G \subseteq \mathbb{P}$ be a generic filter and denote ${\kappa_\omega^{++}}^V$ by $\lambda$. Without loss of generality, we assume $G$ contains a condition $p$ with $\ell^p = 0$. A standard density argument shows that for every $\alpha<\lambda$ and $n < \omega$ there is a condition $p \in G$ with $\ell^p > n$, so that $\alpha \in \dom(f_n^p)$\footnote{Note that if $\alpha \in \dom(a_n^p)$ then $\alpha \in \dom(f_n^q)$ for every extension $q$ of $p$ which involves at least $n$ one-point extensions.}. It is easy to see that the value $f_n^p(\alpha) < \kappa_n$ does not depend on the choice of the condition $p \in G$, and we denote it by $t_\alpha(n)$. It follows that $t_\alpha \in \prod_n \kappa_n$. Also, recall that by our definition of conditions $p \in \mathbb{P}$, $\kappa_n \in \rng(a_n^p)$ for some $p \in G$. Let $\alpha^0_n < {\kappa_\omega^{++}}$ be the unique value for which $\kappa_n = a_n^p(\alpha^0_n)$ and define $\rho_n = t_{\alpha^0_n}(n)$.
The sequence $\vec{\rho} = \langle \rho_n \mid n < \omega\rangle$ is generic for the diagonal Prikry forcing (\cite{Gitik-HB}) defined from the normal measures $\langle E_n(\kappa_n) \mid n < \omega\rangle$. For the proof of Theorem \ref{thm1}, we will only care about functions $t_\alpha$ which originate in the ``extender components'' of $G$, namely, for values $\alpha$ which belong to $\dom(a_m^p)$ for some $p \in G$ (and thus, also to $\dom(a_n^p)$ for every $n \geq m$). The following definition makes this notion precise. \begin{definition} We define the set $\mathcal{A}_G \subseteq \lambda$ of active points in $V[G]$ by $\mathcal{A}_G = \{ \alpha < \lambda \mid \alpha \in \dom(a_n^p) \text{ for some } p \in G \text{ and } n < \omega\}$. \end{definition} A simple density argument shows that the set $\mathcal{A}_G$ is unbounded in $\lambda$. Let $\vec{t} = \langle t_\alpha \mid \alpha \in \mathcal{A}_G\rangle$. It is easy to see that $\vec{t}$ is increasing in the ordering $<^*$. As is typical for extender-based forcings, $\vec{t}$ forms a scale on a product $\prod_n \tau_n$ of cardinals $\tau_n > \rho_n$ such that, loosely speaking, each $\tau_n$ is to $\rho_n$ what $\lambda_n$ is to $\kappa_n$. An example of such a result involving different extender-based posets can be found in \cite{Gitik-HB}. For an argument which involves short extenders forcings, we refer the reader to \cite{Gitik-EBF1}. We state two relevant results. \begin{lemma}\label{lem-genericscaleproduct} ${}$ \begin{enumerate} \item Suppose that for each $n < \omega$, $\lambda_n = j_n(h_n)(\kappa_n)$ for some function $h_n : \kappa_n \to \kappa_n$. Then in $V[G]$, $\vec{t}$ is a scale on the product $\prod_n h_n(\rho_n)$. For example, if $\lambda_n = \kappa_n^{+n+2}$ then $\vec{t}$ is cofinal in $\prod_n \rho_n^{+n+2}$. \item If $\lambda_n = j_n(\kappa_n)$ for each $n <\omega$, then $\vec{t}$ is a scale on $\prod_n \kappa_n$.
\end{enumerate} \end{lemma} As will be shown below, the generic sequence $\vec{t}$ has some appealing properties which fit the results established in Section \ref{section-pre}. Two apparent issues need to be taken care of before we can apply the results of Section \ref{section-pre} to $\vec{t}$ in $V[G]$. The first one is that the indices of the sequence $\vec{t}$ are not all the ordinals below $\lambda$, but only an unbounded subset. This issue is merely cosmetic, and it is straightforward to verify that all the results of Section \ref{section-pre} apply to sequences $\vec{f}$ with domain $A \subseteq \lambda$, as long as we restrict the argument to domain points $\delta \in A$ (for example, the statement of Proposition \ref{prop-IAmain} applies to all points $\delta \in S_{\vec{a}} \cap C \cap A$ which are continuity points of $\vec{f}$). The second issue, which is much more substantial and demands a revision of the forcing $(\mathbb{P},\leq)$, is that $\lambda = {\kappa_\omega^{++}}^V$ need not be a cardinal in $V[G]$. Indeed, the forcing $\mathbb{P}$ fails to preserve ${\kappa_\omega^{++}}^V$ and does not generate a model in which $\SCH$ fails. Gitik resolved this by identifying a quotient order of $(\mathbb{P},\leq)$, induced by an equivalence relation $\leftrightarrow$ on $\mathbb{P}$, which satisfies the ${\kappa_\omega^{++}}$-c.c.\ but does not affect the essential generic information $\vec{t}$. Namely, every two conditions $p,{p'}$ which are $\leftrightarrow$-equivalent force exactly the same statements about $\vec{t}$. We proceed to review the details. \subsection{The order $\rightarrow$} Fix integers $1 < k \leq n$, and let $\mathcal{L}_{n,k}$ be the language of the structure $\mathfrak{A}_{n,k}$. We define the $(n,k)$-type of an element $x \in \mathfrak{A}_{n,k}$ to be the $\mathcal{L}_{n,k}$-type which is realized by $x$ in the model $\mathfrak{A}_{n,k}$.
We denote the type by $\tp_{n,k}(x)$ and identify it with a subset of $\kappa_n^{+k} = |\mathcal{L}_{n,k}|$. We will also need a relativized version of these types. For every element $r \in \mathfrak{A}_{n,k}$ let $\mathfrak{A}^r_{n,k}$ be the model of the expanded language $\mathcal{L}_{n,k}^c$ in which a new constant symbol $c$ is interpreted as $r$, and define the $(n,k)$-type of $x \in \mathfrak{A}_{n,k}$ relative to $r$ to be the $\mathcal{L}_{n,k}^c$-type realized by $x$ in the model $\mathfrak{A}^r_{n,k}$. We denote the $r$-relativized type by $\tp^r_{n,k}(x)$. Since we assume $V$ satisfies the $\GCH$, for each $n < \omega$ there are only $\kappa_n^+$ many functions $\pi : \kappa_n \to \kappa_n$, and only $\kappa_n^{++}$ many ultrafilters $U$ on $\kappa_n$. Therefore, if $k \geq 2$, every such function $\pi$ and ultrafilter $U$ is definable in the language of $\mathfrak{A}_{n,k}$, which contains constants for every $\tau < \kappa_n^{++}$. The following is an immediate consequence. \begin{lemma}\label{lemma-typequiv} Fix $n< \omega$ and $k \geq 2$. Let $x$ be a set in $[\lambda_n]^{<\kappa_n}$. The following features of $x$ are completely determined by its type $\tp_{n,k}(x)$: \begin{enumerate} \item $\otp(x) < \kappa_n$; \item the ultrafilter $E_n(\alpha)$ for every $\alpha \in x$; \item the projection maps $\pi_{\beta,\alpha}$ for every two ordinals $\alpha,\beta \in x$ with $\beta \geq_{E_n} \alpha$. \end{enumerate} Similarly, the relative type $\tp_{n,k}^r(x)$ determines the same for $x\cup r$, because it determines the type $\tp_{n,k}(r \cup x)$. \end{lemma} \begin{definition}\label{def-equiv} ${}$ \begin{enumerate} \item Fix $n < \omega$ and let $r,r'$ be two sets in $[\lambda_n]^{<\kappa_n}$. We say that $r,r'$ are $k$-equivalent if $\tp_{n,k}(r) = \tp_{n,k}(r')$. \item Let $p_n = \langle a_n,A_n,f_n\rangle$ and ${p'}_n = \langle a'_n,A'_n,f'_n\rangle$ be two $k$-relevant components for some $k < \omega$.
We write $p_n \leftrightarrow_{n,k} {p'}_n$ if and only if $\rng(a_n)$ and $\rng(a_n')$ are $k$-equivalent sets in $[\lambda_n]^{<\kappa_n}$, $A_n = A_n'$, and $f_n = f'_n$. \item For every two conditions $p,{p'} \in \mathbb{P}$, we write $p \leftrightarrow {p'}$ if and only if $\ell^p = \ell^{{p'}}$ and there is a nondecreasing unbounded sequence $\langle k_n^* \mid n < \omega\rangle$ of integers $k_n^* \geq 2$ such that ${p'}_n = p_n$ for every $n < \ell^p$, and $p_n \leftrightarrow_{n,k^*_n} {p'}_n$ for every $n \geq \ell^p$. \end{enumerate} \end{definition} It is straightforward to verify that $\leftrightarrow$ is an equivalence relation. We also note that if $r,r'\in [\lambda_n]^{<\kappa_n}$ are $k$-equivalent then they are $l$-equivalent for every $l < k$. Therefore, if $p_n \leftrightarrow_{n,k} {p'}_n$ then $p_n \leftrightarrow_{n,l} {p'}_n$. \begin{definition} Let $p,q$ be two conditions of $\mathbb{P}$. We write $p \rightarrow q$ to mean that $q$ is obtained from $p$ by finitely many $\leq$-extensions and $\leftrightarrow$-transitions. \end{definition} Therefore, if $p \rightarrow q$ then $q$ is stronger (more informative) than $p$. It is clear that every two conditions $p\leftrightarrow {p'}$ in $\mathbb{P}$ are forcing equivalent in the poset $(\mathbb{P},\rightarrow)$, and, by Lemma \ref{lemma-typequiv}, that they force exactly the same statements about $\vec{t}$. The following two results are crucial to the success of the forcing construction. \begin{theorem}[Gitik, see \cite{GitUng-SEF}]\label{thm-SEFchain} ${}$ \begin{enumerate} \item If $p \leftrightarrow {p'}$ are two equivalent conditions and $q'$ extends ${p'}$ in $\leq$, then there are conditions $q'' \geq {q'}$ and $p'' \geq p$ such that $p'' \leftrightarrow q''$. Consequently, for every dense open set $D$ in the poset $(\mathbb{P},\rightarrow)$ and a condition $p \in \mathbb{P}$ there exists some $p'' \geq p$ in $D$. \item $(\mathbb{P},\rightarrow)$ satisfies the ${\kappa_\omega^{++}}$-c.c.
\end{enumerate} \end{theorem} We note that the first statement of Theorem \ref{thm-SEFchain} implies that the identity function forms a forcing projection of $(\mathbb{P},\leq)$ onto $(\mathbb{P},\rightarrow)$, and therefore allows us to use the Prikry forcing machinery of $(\mathbb{P},\leq)$ to analyze $(\mathbb{P},\rightarrow)$. In particular, $(\mathbb{P},\rightarrow)$ does not introduce new bounded subsets to $\kappa_{\omega}$ and does not collapse $\kappa_{\omega}^+$. The second statement asserts that $(\mathbb{P},\rightarrow)$ does not collapse cardinals $\lambda \geq {\kappa_\omega^{++}}$ and allows us to apply the results of Section \ref{section-pre} to the generic scale $\vec{t}$.\\ We sketch the argument for the ${\kappa_\omega^{++}}$-c.c.\ to justify Definitions \ref{def-relevant}, \ref{def-goodpoints}, and the use of $k$-good ordinals. Suppose that $\{p_\alpha \mid \alpha < {\kappa_\omega^{++}}\}$ is a family of conditions of $\mathbb{P}$. By applying standard $\Delta$-system and pressing down arguments, it is possible to find a subfamily of the same size such that every two conditions $p_\alpha,p_\beta$ in the subfamily with $\alpha < \beta$ satisfy $\ell^{p_\alpha} = \ell^{p_\beta} = \ell$, and the following hold for each $n< \omega$: \begin{enumerate} \item $f^{p_\alpha}_n$ and $f^{p_\beta}_n$ are compatible functions (i.e., they agree on the values of common domain ordinals); \item $A_n^{p_\alpha} = A_n^{p_\beta} = A_n$ and $\rng(a_n^{p_\alpha}) = \rng(a_n^{p_\beta}) = r_n$ for all $n \geq \ell$; \item $\dom(a_n^{p_\alpha}) \cap \alpha = \dom(a_n^{p_\beta}) \cap \beta = d_n$ for some $d_n \in [{\kappa_\omega^{++}}]^{<\kappa_n}$; \item $\dom(a_n^{p_\alpha})\setminus \alpha \subseteq \beta$; and \item $k_n^{p_\alpha} = k_n^{p_\beta} = k_n$.
\end{enumerate} The only obstruction to $p_\alpha$ and $p_\beta$ having a common extension is that the disjoint sets $\dom(a_n^{p_\alpha}) \setminus \alpha$ and $\dom(a_n^{p_\beta}) \setminus \beta$ are mapped by the order preserving functions $a_n^{p_\alpha}$ and $a_n^{p_\beta}$, respectively, to the same ordinals in $r_n$. This makes it impossible for $a_n^{p_\alpha} \cup a_n^{p_\beta}$ to be order preserving. To circumvent this, we use the equivalence relation $\leftrightarrow$ to replace $p_\alpha$ with an equivalent $p_\alpha'$ so that $a_n^{p_\alpha'}$ is compatible with $a_n^{p_\beta}$. Let $x = a_n^{p_\alpha}``(\dom(a_n^{p_\alpha}) \setminus \alpha) = a_n^{p_\beta}``(\dom(a_n^{p_\beta}) \setminus \beta)$, $\gamma = \min(x)$ and $\gamma' = \sup(r_n \setminus x)$. Recall that since $\gamma$ is $k_n$-good there is a substructure $M_{n,k_n}(\gamma) \prec \mathfrak{A}_{n,k_n}$ such that $M_{n,k_n}(\gamma) \cap \lambda_n = \gamma$. The language $\mathcal{L}_{n,k_n}$ includes a constant for each $\tau < \kappa_{n}^{+k_n}$, and under $\GCH$ there are only $\kappa_n^{+k_n}$ many $(n,k_n-1)$-types. Therefore $M_{n,k_n}(\gamma)$ contains all $(n,k_n-1)$-types, and in particular the relative type $t = \tp^{r_n \setminus x}_{n,k_n-1}(x)$. By Lemma \ref{lem-Observationsgoodness} there exists a set of ordinals $x' \subseteq \gamma \setminus (\gamma'+1)$ which realizes the same type $t$. This, and Lemma \ref{lemma-typequiv} in turn, imply that $x'$ consists of $(k_n-1)$-good ordinals and that $\tp_{n,k_n-1}((r_n \setminus x) \cup x') = \tp_{n,k_n-1}((r_n \setminus x) \cup x)$. Let $a_n'$ be the partial and order preserving function obtained from $a_n^{p_\alpha}$ by replacing the range $r_n = (r_n \setminus x) \cup x$ with $(r_n \setminus x) \cup x'$. By our choice of $x'$ we have that $(a_n', A_n^{p_\alpha})$ is $(k_{n}-1)$-relevant. If $p' = \langle p'_n \mid n < \omega\rangle$ is the sequence obtained from $p_\alpha$ by defining $a_n^{p'} = a_n'$ and $A_n^{p'} = A_n^{p_\alpha}$, then $p'$ is a condition in $\mathbb{P}$ which is $\leftrightarrow$-equivalent to $p_\alpha$.
Finally, it is clear from the construction that $a_n^{p'} \cup a_n^{p_\beta}$ is order preserving and $(k_{n}-1)$-relevant. We conclude that $p_\beta$ and $p' \leftrightarrow p_\alpha$ are compatible in $\leq$, and thus $p_\beta$ and $p_\alpha$ are compatible in $\rightarrow$. \subsection{Proof of Theorem \ref{thm1}} The last argument justifies the restriction in the definition of conditions in $\mathbb{P}$ to $k$-good ordinals. This restriction is mild since the set of $n$-good ordinals is closed unbounded in $\lambda_n$, which leaves plenty of room to choose extender indices from $E_n$ to construct the generic scale $\vec{t}$. Our situation requires more caution, as we would like to control the extender indices $\gamma \in \rng(a_n^p)$ to the point where we can guarantee that $\gamma \in j_n(S_n)$ for a prescribed stationary subset $S_n$ of $\kappa_n$. By the elementarity of $j_n$, it is clear that the set $T_n = j_n(S_n)$ is stationary in the codomain of $j_n$. However, $T_n$ need not be stationary in $V$ and thus might not contain good ordinals. It is for this reason that we require that $j_n$ possess a stronger (large cardinal) property than the one presented by $E_n$. For example, while requiring that each $j_n$ be superstrong suffices for obtaining a generic scale on $\prod_n \kappa_n$, we will further assume each $j_n$ is $(+1)$-extendible; a property which is not reflected in its derived extender $E_n$. We proceed to the proof of Theorem \ref{thm1}. Suppose that $\langle \kappa_n \mid n < \omega\rangle$ is an increasing sequence of $(+1)$-extendible cardinals in a model $V$ of $\GCH$. For each $n < \omega$, let $j_n : V_{\kappa_n+1} \to V_{\lambda_n+1}$ be a $(+1)$-extendible embedding (i.e., $\lambda_n = j_n(\kappa_n)$) and $E_n$ be the $(\kappa_n,\lambda_n)$-extender derived from $j_n$. Denote ${\kappa_\omega^{++}}$ by $\lambda$.
By Theorem \ref{thm-ShelahIA}, for every regular uncountable cardinal $\mu < \kappa_{\omega}$ there exists a sequence $\vec{a}^\mu = \langle a^\mu_\alpha \mid \alpha < \lambda\rangle$ of bounded subsets of $\lambda$ such that $S_{\vec{a}^\mu}^V \cap \cof(\mu)$ is stationary in $\lambda$. We force over $V$ with the short extenders poset $(\mathbb{P},\rightarrow)$ defined by the extenders $\langle E_n \mid n < \omega\rangle$. By Theorem \ref{thm-SEFchain}, $(\mathbb{P},\rightarrow)$ satisfies the $\lambda$-c.c.\ and therefore $S_{\vec{a}^\mu}^V \cap \cof(\mu)$ remains stationary in $\lambda$ for all regular uncountable $\mu <\kappa_{\omega}$. \begin{remark} It is clear from Definition \ref{def-IApoints} that if $\gamma$ is an approachable ordinal with respect to $\vec{a}^\mu$ in $V$, then it is such in every generic extension $V[G]$. On its face, $V[G]$ may contain new ordinals which are approachable with respect to a sequence $\vec{a}^\mu$; however, using Lemma \ref{Lem-meetdense}, it is possible to show that $S_{\vec{a}}^V = S_{\vec{a}}^{V[G]}$. This fact will not be used in the proof of Theorem \ref{thm1} below, which only requires that the set $S_{\vec{a}}^V \cap \cof(\mu)$ is stationary in $V[G]$ and contains ordinals which are approachable with respect to $\vec{a}^\mu$. \end{remark} By Lemma \ref{lem-genericscaleproduct}, $G$ introduces a scale $\vec{t} = \langle t_\alpha \mid \alpha \in \mathcal{A}_G\rangle$ in the product $\prod_n \kappa_n$. Fix a regular uncountable cardinal $\mu < \kappa_\omega$, suppose that $m < \omega$ is the first integer such that $\mu < \kappa_m$, and let $\vec{S} = \langle S_n \mid m \leq n < \omega\rangle$ be a sequence of stationary sets $S_n \subseteq \kappa_n \cap \cof(\mu)$ in $V$. We claim that $\vec{S}$ is tightly stationary in $V[G]$.
It is sufficient to show that for every algebra $\mathfrak{A}$ which expands $\langle H_\theta^{V[G]}, \in ,<_\theta, \vec{t}, \vec{a}^\mu\rangle$ there is a tight substructure $M \prec \mathfrak{A}$ so that $\sup(M \cap \kappa_n) \in S_n$ for almost all $n < \omega$. Moreover, Proposition \ref{prop-IAmain} guarantees that in $V[G]$, for every algebra $\mathfrak{A}$ which expands $\langle H_\theta, \in,<_\theta,\vec{t},\vec{a}^\mu\rangle$ for some regular cardinal $\theta > \lambda$ there is a closed unbounded set $C \subseteq \lambda$ with the property that for every $\delta \in S_{\vec{a}}^{V[G]} \cap C$, if $\delta$ is a continuity point of $\vec{t}$ then there is a tight substructure $M \prec \mathfrak{A}$ such that $\sup(M \cap \kappa_n) = t_\delta(n)$ for almost all $n < \omega$. It is therefore sufficient to verify that $\vec{t}$ satisfies the following property. \begin{proposition}\label{proposition-mainthm1} For every closed unbounded subset $C \subseteq \lambda$ there exists an ordinal $\delta \in C \cap S_{\vec{a}} \cap \cof(\mu)$ which is a continuity point of $\vec{t}$ and $t_\delta(n) \in S_n$ for almost all $n < \omega$. \end{proposition} \noindent\emph{proof (Proposition \ref{proposition-mainthm1}).}\\ Since $(\mathbb{P},\rightarrow)$ satisfies the $\lambda$-c.c., every closed unbounded subset of $\lambda$ in $V[G]$ contains a closed unbounded set in $V$. It is therefore sufficient to provide a density argument and show that for every closed unbounded set $C \subseteq \lambda$ in $V$ and every condition $p \in \mathbb{P}$, there are $\delta \in S_{\vec{a}} \cap C \cap \cof(\mu)$ and an extension $p^*$ of $p$ which forces that $\delta$ is a continuity point of $\vec{t}$ and that $t_\delta(n) \in S_n$ for all $n \geq \max(m,\ell^p)$. To this end, fix a condition $p \in \mathbb{P}$ and a club $C \subseteq \lambda$. We may assume that $\ell^p \geq m$. Let $a = \bigcup_n (\dom(a_n^p) \cup \dom(f_n^p)) \in [\lambda]^{\kappa_\omega}$.
We can pick some $\delta \in S_{\vec{a}} \cap C \cap \cof(\mu)$ which is strictly above $\sup(a)$, and a continuous, increasing, and cofinal sequence $d = \langle \delta(i) \mid i < \cf(\delta)\rangle$ in $\delta \setminus (\sup(a)+1)$. For each $n \geq \ell^p$ let $T_n = j_n(S_n) \subseteq \lambda_n$. By the elementarity of $j_n$, $T_n$ is a stationary subset of $\lambda_n$ in $V_{\lambda_n+1}$, and thus also stationary in $V$. Furthermore, the fact that $\mu < \kappa_n$ implies that $T_n \subseteq \cof(\mu)$. It follows that $T_n$ contains an $n$-good ordinal $\delta_n > \sup(\rng(a_n^p))$ of cofinality $\mu$ which is also a limit of an increasing continuous sequence of $n$-good ordinals, $d_n = \langle \delta_n(i) \mid i < \mu\rangle$. Extend the partial function $a_n^p$ to a function $a_n'$ which is defined by $a_n' = a_n^p \cup \{ \langle \delta,\delta_n\rangle\} \cup \{\langle \delta(i),\delta_n(i)\rangle \mid i < \mu\}$. Next, we choose an ordinal $\rho \in \lambda \setminus (\delta+1)$, and for each $n \geq \ell^p$, pick an $n$-good ordinal $\rho_n > \delta_n$ which is a $\leq_{E_n}$-upper bound of $\rng(a_n')$ (recall that the order $\leq_{E_n}$ is $\kappa_n$-directed). Define $a_n^* = a_n' \cup \{ \langle \rho,\rho_n\rangle\}$, and let $A_n^* \subseteq \pi^{-1}_{\rho_n,\max(\rng(a^p_n))}(A_n^p)$ be the set of all ordinals $\nu$ which satisfy the following two conditions: \begin{enumerate} \item $\pi_{\rho_n,\delta_n}(\nu) \in S_n$; and \item $\langle \pi_{\rho_n,\delta_n(i)}(\nu) \mid i < \mu\rangle$ is increasing, continuous, and cofinal in $\pi_{\rho_n,\delta_n}(\nu)$.
\end{enumerate} Finally, let $p^* = \langle p_n^* \mid n < \omega\rangle$ be defined by \[ p_n^* = \begin{cases} p_n &\mbox{ if } n < \ell^p \\ \langle a_n^*,A_n^*,f_n^p\rangle &\mbox{ if } n \geq \ell^p \end{cases} \] It is straightforward to verify that $p^*$ is a direct extension of $p$ in $\mathbb{P}$, and that \[p^* \Vdash \dot{t_\delta} \text{ is an eub of } \dot{\vec{t}}\restriction d \text{ and } \dot{t_\delta}(n) \in \check{S_n} \text{ for all } n \geq \ell^p. \] The fact that $d= \langle \delta(i) \mid i < \cf(\delta)\rangle$ is cofinal in $\delta$ implies that $\vec{t}\restriction\delta$ is cofinally interleaved with $\vec{t}\restriction d$. Hence $p^*$ forces that $\dot{t_\delta}$ is an eub of $\vec{t}\restriction\delta$, and thus that $\delta$ is a continuity point of $\vec{t}$. \qed{Proposition \ref{proposition-mainthm1}}\\ \qed{Theorem \ref{thm1}} \section{Down to $\aleph_\omega$}\label{section-thm2} In this section we prove Theorem \ref{thm2}, which is similar to Theorem \ref{thm1} with two major differences. \begin{enumerate} \item The sequence of regular cardinals to which the result applies is $\langle \omega_{s_n} \mid n < \omega\rangle$ for some subsequence $\langle s_n \mid n < \omega\rangle$ of $\omega$. \item This sequence is Prikry generic over a ground model and therefore does not exist in the core model or the mantle. \end{enumerate} The last property allows us to reduce the large cardinal assumption from the level of extendibility to the hypermeasurability assumption of an increasing sequence $\langle \kappa_n \mid n < \omega\rangle$ such that each $\kappa_n$ is $\kappa_n^{+n+3}$ strong. Fix for each $n$ a $\kappa_n^{+n+3}$-strong embedding $j_n : V_{\kappa_n+1} \to N_n$ with critical point $\kappa_n$, and let $E_n$ be the $(\kappa_n,\kappa_n^{+n+2})$-extender derived from $j_n$.
The gap between the strength of the extender $E_n$ (which is $\kappa_n^{+n+2}$) and the strength of the embedding $j_n$ ($\kappa_n^{+n+3}$) is analogous to the gap between the superstrong extenders $E_n$ and the $(+1)$-extendible embeddings $j_n$ in the proof of Theorem \ref{thm1}. It will be used to ensure that a name for a stationary subset $T_n$ of $\kappa_n^{+n+2}$ in $N_n$ is also a name for a stationary set in $V$, and thus must contain many good points $\delta < \kappa_n^{+n+2}$ in the sense of Definition \ref{def-goodpoints}. To prove Theorem \ref{thm2} we will modify the short extenders forcing $\mathbb{P}$ from the previous section. A key feature of the revised version of $\mathbb{P}$ is that it subsumes a ``vanilla'' diagonal Prikry forcing with interleaved collapses, which will be denoted here by $\bar{\mathbb{P}}$. The forcing $\bar{\mathbb{P}}$ introduces a single Prikry sequence $\vec{\rho} = \langle \rho_n \mid n < \omega\rangle$ which is associated with the sequence of normal measures $\langle E_n(\kappa_n) \mid n < \omega\rangle$. Besides adding the diagonal Prikry sequence $\vec{\rho}$, the poset $\bar{\mathbb{P}}$ incorporates Levy posets which further collapse the cardinals in the intervals $(\rho_n^{+n+3},\kappa_n)$ and $(\kappa_n^{+n+3},\rho_{n+1})$ for every $n$. Therefore, in a $\bar{\mathbb{P}}$ generic extension $V[\bar{G}]$, the sequence of cardinals $\langle \rho_n^{+n+2} \mid n < \omega\rangle$ forms a subsequence $\langle \omega_{s_n} \mid n < \omega\rangle$ of the $\omega_n$s. $V[\bar{G}]$ will be the ground model that is specified in the statement of Theorem \ref{thm2}. We will argue that every fixed-cofinality sequence $\vec{S} = \langle S_n \mid n < \omega\rangle$ of stationary sets $S_n \subseteq \rho_n^{+n+2}$ is tightly stationary in a further forcing extension over $V[\bar{G}]$.
This will be done by proving that $\vec{S}$ is tight in the (full) $\mathbb{P}$ generic extension $V[G]$, which can be seen as a $\mathbb{P}/\bar{\mathbb{P}}$ forcing extension of $V[\bar{G}]$. Let us explain why this description dictates an additional revision of $\mathbb{P}$ (besides adding interleaved collapse posets). A standard analysis of Prikry type forcings shows that $\bar{\mathbb{P}}$ satisfies a version of Lemma \ref{Lem-meetdense} which implies that if $\dot{S_n}$ is a $\bar{\mathbb{P}}$-name of a subset of $\rho_n^{+n+2}$ then for every condition $q \in \bar{\mathbb{P}}$ there exists a direct extension $q^*$ of $q$ such that every choice of the first $(n+1)$ generic Prikry points $\vec{\rho}_{n+1} = \langle \rho_0,\dots,\rho_n\rangle$ reduces $\dot{S_n}$ to a name $\bar{S}_n(\vec{\rho}_{n+1})$ which depends only on the collapse product of cardinals below $\rho_n$. Obtaining this substitution of names brings us sufficiently close to the assumptions of Theorem \ref{thm1} and allows us to apply a similar argument, and show that there are sufficiently many good IA ordinals $\delta < {\kappa_\omega^{++}}$ that can be generically mapped to some $n$-good ordinal $\delta_n < \kappa_n^{+n+2}$, such that $\delta_n$ is forced to belong to the stationary name $T_n(\vec{\rho}_{n+1}) = j_n({\bar{S}_n}(\vec{\rho}_{n+1}))$ by some suitable collapse conditions. The caveat in this description is that the choice of $\delta_n$ assumes the knowledge of the first diagonal Prikry points $\vec{\rho}_{n+1}$. To circumvent this issue, we modify the construction of $\mathbb{P}$ by requiring that in conditions $p \in \mathbb{P}$, the extender indices maps $a_n = a_n^p$ depend on the preceding diagonal Prikry points $\vec{\rho}_{n} = \langle \rho_0,\dots,\rho_{n-1}\rangle$ below $\kappa_{n-1}$\footnote{We will be able to avoid knowing the value of the next point $\rho_n$ by a standard integration manipulation.}.
Namely, $a_n$ will be a function which maps every potential Prikry initial segment $\vec{\rho}_n$ to a partial function, $a_n^{\vec{\rho}_n} : {\kappa_\omega^{++}} \to \lambda_n$, with similar properties to the functions $a_n$ which were used in the previous section. Accordingly, we will also make the measure one set component $A_n = A_n^p$ depend on the same information. Therefore $A_n$ will be a function which maps every relevant $\vec{\rho}_n$ to a set $A_n^{\vec{\rho}_n} \in E_n(\max(\rng(a_n^{\vec{\rho}_n})))$. We proceed to define $\bar{\mathbb{P}}$ and $\mathbb{P}$. \subsection{The poset $\bar{\mathbb{P}}$} Suppose that $V$ is a model which contains an increasing sequence $\vec{\kappa} = \langle \kappa_n \mid n < \omega\rangle$ of cardinals so that each $\kappa_n$ is $\kappa_{n}^{+n+3}$ strong. For each $n < \omega$ we fix a $\kappa_{n}^{+n+3}$-strong embedding $j_n : V_{\kappa_n+1} \to N_n$ and let $E_n$ be the $(\kappa_n,\kappa_n^{+n+2})$-extender derived from $j_n$. We denote $\kappa_n^{+n+2}$ by $\lambda_n$. For notational simplicity, we define $\kappa_{-1} = \omega_1$. The poset $\bar{\mathbb{P}}$ is a diagonal Prikry forcing with interleaved Levy collapse posets. Conditions $\bar{p} \in \bar{\mathbb{P}}$ are of the form $\bar{p} = \langle \bar{p}_n \mid n < \omega\rangle$ and satisfy the following conditions: \begin{enumerate} \item There exists some $\ell < \omega$ such that $\bar{p}_n = \langle \rho_n, g_n,h_n\rangle$ for every $n < \ell$, where $\rho_n \in (\kappa_{n-1},\kappa_n)$, $g_n \in \col(\kappa_{n-1}^{+(n-1)+3}, <\rho_n)$, and $h_n \in \col(\rho_n^{+n+3},<\kappa_n)$.
\item For every $n \geq \ell$, $\bar{p}_n = \langle \np{A_n}, g_n,H_n\rangle$, where $g_n \in \col(\kappa_{n-1}^{+(n-1)+3},<\kappa_n)$, $\np{A_n} \in E_n(\kappa_n)$ consists of regular cardinals $\rho$ such that $g_n \in \col(\kappa_{n-1}^{+(n-1)+3},<\rho)$, and $H_n$ is a function with $\dom(H_n) = \np{A}_n$ and $H_n(\rho) \in \col(\rho^{+n+3},<\kappa_n)$ for each $\rho$ in its domain. \end{enumerate} As usual, we denote $\ell,g_n,h_n, \np{A_n}, H_n, \rho_n$ by $\ell^{\bar{p}},g_n^{\bar{p}},h_n^{\bar{p}}, \np{A_n}^{\bar{p}}, H_n^{\bar{p}}, \rho_n^{\bar{p}}$, respectively. Furthermore, we denote the sequence $\langle \rho_0^{\bar{p}},\dots, \rho_{\ell^{\bar{p}}-1}^{\bar{p}}\rangle$ by $\vec{\rho}_{\bar{p}}$.\\ A condition $\bar{q} \in \bar{\mathbb{P}}$ is a direct extension of $\bar{p}$ (denoted $\bar{q} \geq^* \bar{p}$) if the following conditions hold: \begin{itemize} \item $\ell^{\bar{q}} = \ell^{\bar{p}}$; \item for every $n < \ell^{\bar{p}}$, $g_n^{\bar{q}} \geq g_n^{\bar{p}}$ and $h_n^{\bar{q}} \geq h_n^{\bar{p}}$; \item for every $n \geq \ell^{\bar{p}}$, $\np{A_n}^{\bar{q}}\subseteq \np{A_n}^{\bar{p}}$, $g_n^{\bar{q}} \geq g_n^{\bar{p}}$, and $H_n^{\bar{q}}(\rho) \geq H_n^{\bar{p}}(\rho)$ for every $\rho \in \np{A_n}^{\bar{q}}$. \end{itemize} A condition $\bar{q}$ is a one-point extension of $\bar{p}$ if $\ell^{\bar{q}} = \ell^{\bar{p}} + 1$, $\bar{p}_n = \bar{q}_n$ for every $n \neq \ell^{\bar{p}}$, and $\bar{q}_{\ell^{\bar{p}}} = \langle \rho, g_{\ell^{\bar{p}}}^{\bar{p}}, H_{\ell^{\bar{p}}}^{\bar{p}}(\rho)\rangle$ for some $\rho \in \np{A}_{\ell^{\bar{p}}}^{\bar{p}}$. We denote $\bar{q}$ by $\bar{p} {}^\frown \langle \rho\rangle$. Similarly, for a sequence of ordinals $\vec{\rho} = \langle \rho_{\ell^{\bar{p}}}, \rho_{\ell^{\bar{p}}+1}, \dots , \rho_{m-1} \rangle \in \prod_{\ell^{\bar{p}}\leq k < m} \np{A_k}^{\bar{p}}$, we define $\bar{p} {}^\frown \vec{\rho}$ to be the condition obtained by taking $m - \ell^{\bar{p}}$ consecutive one-point extensions by the ordinals in $\vec{\rho}$.
The ordering $\leq$ of $\bar{\mathbb{P}}$ is defined by setting $\bar{q} \geq \bar{p}$ if and only if $\bar{q}$ is obtained from $\bar{p}$ by finitely many direct extensions and one-point extensions. Equivalently, $\bar{q}$ is a direct extension of $\bar{p} {}^\frown \vec{\rho}$ for some $m \geq \ell^{\bar{p}}$ and some finite sequence $\vec{\rho} \in \prod_{\ell^{\bar{p}}\leq k < m} \np{A_k}^{\bar{p}}$. The following notational conventions and terminology will be useful for our treatment of $\bar{\mathbb{P}}$ and the revised extenders-based poset $\mathbb{P}$. \begin{enumerate} \item For every $\bar{p} \in \bar{\po}$ and $n \geq \ell^{\bar{p}}$, we define $\np{A}_{\bar{p}\restriction n} = \{\langle \rho_0^{\bar{p}},\dots,\rho^{\bar{p}}_{\ell^{\bar{p}}-1} \rangle\} \times \prod_{\ell^{\bar{p}} \leq k < n}\np{A}^{\bar{p}}_k$. \item For every $\vec{\rho}_{n+1} = \langle \rho_0,\dots,\rho_{n}\rangle \in \np{A}_{\bar{p}\restriction (n+1)}$, we define $\mathbb{Q}(\vec{\rho}_{n+1})$ to be the product of the Levy collapse posets which are determined by the sequence $\vec{\rho}_{n+1}$, namely, \[\col(\kappa_{-1}^{+2}, <\rho_0) \times \col(\rho_0^{+3},<\kappa_0) \times \dots \times \col(\kappa_{n-1}^{+(n-1)+3}, <\rho_{n}) \times \col(\rho_{n}^{+n+3},<\kappa_{n}).\] Therefore, conditions in $\mathbb{Q}(\vec{\rho}_{n+1})$ are sequences of Levy collapse functions, of the form $\langle g_0,h_0,\dots,g_n,h_n\rangle$, where for each $i$, $g_i \in \col(\kappa_{i-1}^{+(i-1)+3}, <\rho_{i})$ and $h_i \in \col(\rho_{i}^{+i+3}, <\kappa_i)$.
\item We also define the restricted collapse product to be the poset $\mathbb{Q}'(\vec{\rho}_{n+1})$ which is obtained by removing the top collapse poset $\col(\rho_{n}^{+n+3},<\kappa_{n})$ from $\mathbb{Q}(\vec{\rho}_{n+1})$; \[\mathbb{Q}'(\vec{\rho}_{n+1}) = \col(\kappa_{-1}^{+2}, <\rho_0) \times \col(\rho_0^{+3},<\kappa_0) \times \dots \times \col(\kappa_{n-1}^{+(n-1)+3}, <\rho_{n}). \] Clearly, $\mathbb{Q}(\vec{\rho}_{n+1}) = \mathbb{Q}'(\vec{\rho}_{n+1}) \times \col(\rho_{n}^{+n+3},<\kappa_{n})$. \end{enumerate} Like the short extenders forcing $\mathbb{P}$, $\bar{\po}$ is a Prikry type forcing which admits some natural decomposition properties. We adopt the relevant notational conventions which were used to analyze $\mathbb{P}$. Therefore, for a condition ${\bar{p}} = \langle {\bar{p}}_n \mid n < \omega\rangle$ and $m < \omega$ we define ${\bar{p}}\restriction m = \langle {\bar{p}}_n \mid n < m\rangle$ and ${\bar{p}}\downharpoonright m =\langle {\bar{p}}_n \mid n \geq m\rangle$. We also define $\bar{\po}_{< m} = \{ \bar{p}\restriction m \mid \bar{p} \in \bar{\po}\}$ and $\bar{\po}_{\geq m} = \{ \bar{p}\downharpoonright m \mid \bar{p} \in \bar{\po}\}$. The forcing $\bar{\po}/\bar{p}$ breaks into the product $\bar{\po}_{<\ell^{\bar{p}}}/\bar{p}\restriction \ell^{\bar{p}} \times \bar{\po}_{\geq \ell^{\bar{p}}}/\bar{p}\downharpoonright \ell^{\bar{p}}$. We note that $\bar{\po}_{<\ell^{\bar{p}}}/\bar{p}\restriction \ell^{\bar{p}} \cong \mathbb{Q}(\vec{\rho}_{\bar{p}})$ and that the direct extension order of $\bar{\po}_{\geq \ell^{\bar{p}}}/\bar{p}\downharpoonright \ell^{\bar{p}}$ is $\kappa_{\ell^{\bar{p}}}$-closed. A crucial component in the proof of the Prikry Lemma for $\bar{\po}$ is the ability to collect and amalgamate information from the different collapse posets $\mathbb{Q}(\vec{\rho}_m)$ (or $\mathbb{Q}'(\vec{\rho}_m)$) without deciding on the initial segment $\vec{\rho}_m$ of the generic Prikry sequence (e.g.
see the proof of the Prikry Lemma in \cite{Unger-SEBFcol,SinUng-CombAleph}). Isolating this part of the argument gives rise to the following assertion. \begin{lemma}\label{lem-meetCollapse} Let $\bar{p} \in \bar{\po}$ be a condition in $\bar{\po}$ and $n \geq \ell^{\bar{p}}$. Suppose that $\{ D(\vec{\rho}_n) \mid \vec{\rho}_n \in \np{A}_{\bar{p}\restriction n}\}$ is a family of sets so that each $D(\vec{\rho}_n)$ is a dense open subset of $\mathbb{Q}(\vec{\rho}_n)$. Then there exists a direct extension $\bar{q} \geq^* \bar{p}$ such that for every $\vec{\rho}_* \in \prod_{\ell^{\bar{p}}\leq k < n}\np{A}^{\bar{q}}_k$, the condition $(\bar{q}\restriction n) {}^\frown \vec{\rho}_*$ belongs to $D(\vec{\rho}_{\bar{p}}{}^\frown \vec{\rho}_*)$. \end{lemma} An important consequence of Lemma \ref{lem-meetCollapse} is that it is possible to reduce any $\bar{\po}$ name of an $\omega$-sequence of bounded sets in $\kappa_\omega$ to a family of names which depend on the posets $\mathbb{Q}(\vec{\rho}_m)$ or $\mathbb{Q}'(\vec{\rho}_m)$ for suitable initial segments $\vec{\rho}_m$ of the generic Prikry sequence. For example, suppose that $\langle \dot{S_n} \mid n < \omega\rangle$ is a $\bar{\po}$ name for a sequence of sets so that $\dot{S_n} \subseteq \rho_n^{+n+2}$. Let $\bar{p} \in \bar{\po}$, and note that for every $n \geq \ell^{\bar{p}}$ and $\vec{\rho}_* \in \prod_{\ell^{\bar{p}} \leq k \leq n}\np{A}^{\bar{p}}_k$, the direct extension order of the poset $\bar{\po}_{\geq (n+1)}$, above the condition $(\bar{p} {}^\frown \vec{\rho}_*)\downharpoonright (n+1)$, is $\kappa_n$-closed, and thus does not add new subsets to $\rho_n^{+n+2}$. We can therefore assume that for every $n < \omega$, $\dot{S_n}$ depends only on $\bar{\po}_{<(n+1)}$. This name reduction can be further improved since the poset $\col(\rho_n^{+n+3},<\kappa_n)$ (which is the top collapse component of $\mathbb{Q}(\vec{\rho}_{n+1})$) is also sufficiently closed to decide all names of subsets of $\rho_n^{+n+2}$.
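The two-stage name reduction just described can be summarized schematically (a sketch in our notation; each arrow is labeled by the closure property that justifies it):

```latex
\[
\dot{S_n}
\;\xrightarrow{\;(\bar{\po}_{\geq (n+1)},\,\leq^*) \text{ is } \kappa_n\text{-closed}\;}\;
\text{a } \bar{\po}_{<(n+1)}\text{-name}
\;\xrightarrow{\;\col(\rho_n^{+n+3},<\kappa_n) \text{ is } \rho_n^{+n+3}\text{-closed}\;}\;
\text{a } \mathbb{Q}'(\vec{\rho}_{n+1})\text{-name.}
\]
```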
It follows that for every $n \geq \ell^{\bar{p}}$ and $\vec{\rho}_{n+1} \in \np{A}_{\bar{p}\restriction (n+1)}$, there exists a dense open set $D(\vec{\rho}_{n+1})$ of conditions in $\mathbb{Q}(\vec{\rho}_{n+1})$ which force $\dot{S_n}$ to be equal to another name $\bar{S_n}$ which depends only on the restricted collapse product $\mathbb{Q}'(\vec{\rho}_{n+1})$. This allows us to apply Lemma \ref{lem-meetCollapse} and consequently, obtain the following result. \begin{corollary}\label{cor-PbarSn} Let $\langle \dot{S_n}\mid n < \omega\rangle$ be a sequence of $\bar{\mathbb{P}}$-names so that each $\dot{S_n}$ is a name for a subset of $\rho_n^{+n+2}$. For every $\bar{p} \in \bar{\mathbb{P}}$ there exist $\bar{q} \geq^* \bar{p}$ and a sequence of functions $\langle \bar{S_n} \mid \ell^{\bar{p}} \leq n < \omega\rangle$ so that for every $m \geq \ell^{\bar{p}}$ and $\vec{\rho}_* = \langle \rho_{\ell^{\bar{p}}}, \dots, \rho_m \rangle \in \prod_{\ell^{\bar{p}}\leq k \leq m} \np{A_k}^{\bar{q}}$, \[\bar{q} {}^\frown \vec{\rho}_* \Vdash \dot{S_m} = \bar{S_m}(\vec{\rho}_{\bar{p}}{}^\frown \vec{\rho}_*),\] where $\bar{S_m}(\vec{\rho})$ is a name in the restricted product $\mathbb{Q}'(\vec{\rho})$ for every $\vec{\rho} \in \np{A}_{\bar{q}\restriction (m+1)}$. \end{corollary} \subsection{A modified short extenders forcing} Before we turn to define our modified version $\mathbb{P}$ of the short extenders poset, let us point out a notational convention which we will use frequently throughout this section. We will be working here with measure one sets with respect to measures $E_n(\alpha)$ for some $\alpha \in [\kappa_n,\kappa_n^{+n+2})$. It is clear that for every such $\alpha$, the measure $E_n(\alpha)$ contains the set $X_n$ of all ordinals $\nu < \kappa_n$ for which there exists a unique inaccessible cardinal $\rho$ such that $\nu \in [\rho, \rho^{+n+2})$.
Moreover, it is routine to verify that the map $\bar{\pi} : X_n \to \kappa_n$ which sends each $\nu \in X_n$ to $\rho$ as above is a Rudin-Keisler projection from $E_n(\alpha)$ to $E_n(\kappa_n)$. We will write $\bar{\pi} = \pi_{\alpha,\kappa_n}$ for every $\alpha < \kappa_n^{+n+2}$. The function $\bar{\pi}$ naturally extends to a map whose domain consists of sequences of ordinals $\vec{\nu} = \langle \nu_0,\dots, \nu_m\rangle$ in $\prod_{k \leq m}X_k$ and, as defined below, to a forcing projection from $\mathbb{P}$ to $\bar{\po}$. We will frequently abuse the notation of $\bar{\pi}$, and further use it to denote its resulting natural extensions. \begin{definition} Conditions in the revised $\mathbb{P}$ are sequences of the form $p = \langle p_n \mid n < \omega\rangle$ which satisfy the following conditions: \begin{enumerate} \item There exists some $\ell < \omega$ such that for every $n < \ell$, $p_n = \langle f_n,\rho_n,g_n,h_n\rangle$ where $\langle \rho_n,g_n,h_n\rangle$ satisfies condition (1) of the definition of $\bar{\po}$, and $f_n$ is a partial function from $\kappa_{\omega}^{++}$ to $\kappa_n$ of size $|f_n| \leq \kappa_\omega$. \item For $n \geq \ell^p$, the components $p_n = \langle a_n,\np{A_n}, A_n,f_n,g_n,H_n\rangle$ are defined by induction on $n$. Suppose that $p\restriction n = \langle p_k \mid k < n\rangle$ has been defined and that $A_k \subseteq X_k$ for every $k \geq \ell$. Let \[ \np{A}_{p\restriction n} = \{\rho_0\} \times \{\rho_1\} \times \dots \times \{\rho_{\ell-1}\} \times \prod_{\ell \leq k < n} \np{A_k}. \] $p_n = \langle a_n,\np{A_n},A_n,f_n,g_n,H_n\rangle$ is defined as follows. \begin{itemize} \item $\langle \np{A_n}, g_n,H_n\rangle$ satisfies condition (2) of the definition of $\bar{\po}$. \item $f_n$ is a partial function from ${\kappa_\omega^{++}}$ to $\kappa_n$ of size $|f_n| \leq \kappa_\omega$. \item $a_n,A_n$ are functions.
Their common domain is $\np{A}_{p\restriction n}$ and for every $\vec{\rho} \in \np{A}_{p\restriction n}$, the results of applying $a_n$ and $A_n$ to $\vec{\rho}$ are denoted by $a_n^{\vec{\rho}}$ and $A_n^{\vec{\rho}}$ respectively. Also, we require that there exists some integer $k_n \geq 2$ so that for every $\vec{\rho} \in \np{A}_{p\restriction n}$, $(a_n^{\vec{\rho}},A_n^{\vec{\rho}})$ is a $k_n$-relevant pair in the sense of Definition \ref{def-relevant}. \item $\kappa_n \in \rng(a_n^{\vec{\rho}})$ for every ${\vec{\rho}} \in \np{A}_{p\restriction n}$ and $\pi_{\max(\rng(a_n^{{\vec{\rho}}})), \kappa_n}``A_n^{\vec{\rho}} = \np{A_n}$. \end{itemize} \item $\dom(a_n^{\vec{\rho}_n}) \subseteq \dom(a_m^{\vec{\rho}_m})$ whenever $m \geq n \geq \ell$, $\vec{\rho}_m \in \np{A}_{p\restriction m}$, $\vec{\rho}_n \in \np{A}_{p\restriction n}$, and $\vec{\rho}_n = \vec{\rho}_m \restriction n$. \item The sequence $\langle k_n \mid n < \omega\rangle$ is nondecreasing and unbounded in $\omega$. \end{enumerate} We denote $a_n,A_n,\np{A_n},f_n,g_n,h_n$ of $p$ by $a_n^p,A_n^p,\np{A_n}^p,f_n^p,g_n^p,h_n^p$ respectively. Also, for $\vec{\rho} \in \dom(a_n^p) = \dom(A_n^p)$, we denote for ease of notation $(a_n^p)^{\vec{\rho}}$ and $(A_n^p)^{\vec{\rho}}$ by $a_n^{p,\vec{\rho}}$ and $A_n^{p,\vec{\rho}}$ respectively.\\ As before, the order relation $\leq$ of $\mathbb{P}$ is derived from two basic operations of direct extension and one-point extension.
\begin{enumerate} \item Given two conditions $p,q \in \mathbb{P}$, $q$ is a direct extension of $p$ if it satisfies the following conditions: \begin{itemize} \item $\ell^q = \ell^p$; \item $\rho_n^p = \rho_n^q$ and $h_n^p \leq h_n^q$ for every $n < \ell^p$; \item $f_n^p \subseteq f_n^q$ and $g_n^p \leq g_n^q$ for all $n < \omega$; \item for every $n \geq \ell^p$, $\np{A_n}^q \subseteq \np{A_n}^p$ and $H^p_n(\rho) \leq H^q_n(\rho)$ for all $\rho \in \np{A_n}^q$; \item for every $n \geq \ell^q$ and $\vec{\rho} \in \np{A}_{q\restriction n}$, $(a_n^p)^{\vec{\rho}} \subseteq (a_n^q)^{\vec{\rho}}$. Furthermore, if $\gamma^{q,\vec{\rho}}_n = \max(\rng(a_n^{q,\vec{\rho}}))$ and $\gamma^{p,\vec{\rho}}_n = \max(\rng(a_n^{p,\vec{\rho}}))$, then we require that $A_n^{q,\vec{\rho}} \subseteq \pi_{\gamma^{q,\vec{\rho}}_n,\gamma^{p,\vec{\rho}}_n}^{-1}(A_n^{p,\vec{\rho}})$. \end{itemize} \item Given a condition $p = \langle p_n \mid n < \omega\rangle \in \mathbb{P}$, a one-point extension $p'$ of $p$ is a sequence $p' = \langle p'_n \mid n < \omega\rangle$ satisfying \begin{itemize} \item $\ell^{p'} = \ell^{p} + 1$; \item $p_n = p'_n$ for all $n \neq \ell^p$; \item denoting $\max(\dom(a^{p,\vec{\rho}_p}_{\ell^p}))$ by $\eta$, there exists some $\nu\in A_{\ell^p}^{p,\vec{\rho}_p}$ such that $p'_{\ell^p} = \langle f_{\ell^p},\rho_{\ell^p},g_{\ell^p},h_{\ell^p}\rangle$ where \begin{enumerate} \item $f^{p'}_{\ell^p} = f^{p}_{\ell^p} \cup \{ \langle \tau, \pi_{a^{p,\vec{\rho}_p}_{\ell^p}(\eta),a^{p,\vec{\rho}_p}_{\ell^p}(\tau)}(\nu)\rangle \mid \tau \in \dom(a^{p,\vec{\rho}_p}_{\ell^p})\}$. \item $g^{p'}_{\ell^p} = g^p_{\ell^p}$. \item $\rho_{\ell^p} = \pi_{a^{p,\vec{\rho}_p}_{\ell^p}(\eta),\kappa_{\ell^p}}(\nu) = \bar{\pi}(\nu)$. \item $h_{\ell^p}^{p'} = H^{p}_{\ell^p}(\rho_{\ell^p})$. \end{enumerate} \item For every $n > \ell^p$ we have that $f_n^{p'} = f_n^p$, $g_n^{p'} = g_n^p$, $H_n^{p'} = H_n^p$, $a_n^{p'} = a_n^p\restriction\np{A}_{p'\restriction n}$, and $A_n^{p'} = A_n^p\restriction \np{A}_{p'\restriction n}$.
\end{itemize} As usual, $p'$ is denoted by $p {}^\frown \langle \nu\rangle$. \end{enumerate} \end{definition} We note that for every $p \in \mathbb{P}$ the common domain of $a^p_{\ell^p}$ and $A^p_{\ell^p}$ is the singleton $\np{A}_{p\restriction \ell^p} = \{ \vec{\rho}_{p} \}$. \begin{definition} For a condition $p \in \mathbb{P}$, we define $\bar{\pi}(p)$ to be the sequence $\bar{p} = \langle \bar{p}_n \mid n < \omega\rangle$ defined by $\bar{p}_n = \langle \rho^p_n,g^p_n,h^p_n\rangle$ for every $n < \ell^p$, and $\bar{p}_n = \langle \np{A_n}^p,g^p_n,H^p_n\rangle$ otherwise. \end{definition} \begin{remark}\label{remark-PprojPbar} The following facts are straightforward to derive from the definition of $\mathbb{P}$. \begin{itemize} \item For every one-point extension $p {}^\frown \langle \nu\rangle$ of $p$, $\bar{\pi}(p {}^\frown \langle \nu\rangle)$ is the one-point extension $\bar{\pi}(p) {}^\frown \langle \bar{\pi}(\nu)\rangle$ of $\bar{\pi}(p)$ in $\bar{\mathbb{P}}$. Conversely, for every ordinal $\rho$, if $\bar{\pi}(p) {}^\frown \langle \rho\rangle$ is a one-point extension of $\bar{\pi}(p)$ then there is $\nu \in A^{p,\vec{\rho}_p}_{\ell^p}$ such that $\bar{\pi}(\nu) = \rho$, and thus $\bar{\pi}(p {}^\frown \langle \nu\rangle) = \bar{\pi}(p) {}^\frown \langle \rho\rangle$. \item For every direct extension $p^*$ of $p$, $\bar{\pi}(p^*)$ is a direct extension of $\bar{\pi}(p)$ in $\bar{\mathbb{P}}$. Conversely, every direct extension of $\bar{\pi}(p)$ in $\bar{\mathbb{P}}$ is the $\bar{\pi}$ projection of some direct extension of $p$ in $\mathbb{P}$. \end{itemize} It follows that $\bar{\pi} : \mathbb{P} \to \bar{\mathbb{P}}$ is a projection of Prikry type forcings from $\mathbb{P}$ onto $\bar{\po}$. \end{remark} Finally, we modify the equivalence relation $\iff$ and the resulting ordering $\rightarrow$.
Given $p,q \in \mathbb{P}$ we write $p \iff q$ if and only if the following conditions hold: \begin{enumerate} \item $\ell^p = \ell^q$; \item $p\restriction \ell^p = q\restriction \ell^q$; \item $(g_n^p,H_n^p, \np{A}_{p\restriction n}) = (g_n^q,H_n^q,\np{A}_{q\restriction n})$ for all $n \geq \ell^p$; and \item there exists a nondecreasing sequence of integers $\langle k_n^* \mid n < \omega\rangle$ which is unbounded in $\omega$, such that for every $n \geq \ell^p$ and $\vec{\rho} \in \np{A}_{p\restriction n}$, $(a_n^{p,\vec{\rho}},A_n^{p,\vec{\rho}},f^p_n) \iff_{n,k^*_n} (a_n^{q,\vec{\rho}},A_n^{q,\vec{\rho}},f^q_n)$ in the sense of Definition \ref{def-equiv}. \end{enumerate} As before, we define $\rightarrow$ on $\mathbb{P}$ as the closure of the operations $\leq$ and $\iff$. The modified short extenders forcing $\mathbb{P}$ satisfies all key properties which are satisfied by the poset $\mathbb{P}$ defined in the previous section. \begin{lemma}${}$ \begin{enumerate} \item $(\mathbb{P},\leq,\leq^*)$ is a Prikry type forcing. \item The identity function is a forcing projection from $(\mathbb{P},\leq)$ to $(\mathbb{P},\rightarrow)$. \item $(\mathbb{P},\rightarrow)$ satisfies the ${\kappa_\omega^{++}}$-c.c. \end{enumerate} \end{lemma} The incorporation of collapse posets interleaved between the Prikry points and the $\kappa_n$s is standard. The fact that the collapse poset between $\kappa_n$ and $\kappa_{n+1}$ is $\lambda_n^+$-closed (i.e., it is closed beyond the length of the extender $E_n$) is crucial for the proof of the Prikry Lemma. We refer the reader to \cite{Unger-EBFcollapse} for a proof of the Prikry Lemma for the short extenders forcing with collapses.
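For the reader's convenience, the closure computation behind the last remark can be spelled out (a sketch in our notation). The collapse component lying between $\kappa_n$ and $\kappa_{n+1}$ is

```latex
\[
\col(\kappa_n^{+n+3}, <\rho_{n+1}) \times \col(\rho_{n+1}^{+(n+1)+3}, <\kappa_{n+1}),
\]
```

both factors of which are $\kappa_n^{+n+3}$-closed; since $\lambda_n = \kappa_n^{+n+2}$, this is exactly $\lambda_n^{+}$-closure, i.e., closure beyond the length of $E_n$.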
The fact that $(\mathbb{P},\rightarrow)$ satisfies the ${\kappa_\omega^{++}}$-c.c.\ follows from an argument similar to the one sketched in Section \ref{section-thm1}, where instead of using the type to replace $a_n= a_n^{p_\alpha}$ with an equivalent $a_n'$ once, we do it for $a_n^{\vec{\rho}}$ for every $\vec{\rho} \in \np{A}_{p\restriction n}$. \subsection{Proof of Theorem \ref{thm2}} Suppose that $\langle \kappa_n \mid n < \omega\rangle$ is an increasing sequence of cardinals in $V$ and that for each $n < \omega$, $j_n :V_{\kappa_n+1} \to N_n$ is a $\kappa_n^{+n+3}$-strong embedding. Let $E_n$ be the $(\kappa_n,\kappa_{n}^{+n+2})$ extender derived from $j_n$. Force with the modified short extenders forcing $(\mathbb{P},\rightarrow)$, and let $G \subseteq \mathbb{P}$ be a generic filter over $V$. By Remark \ref{remark-PprojPbar}, the map $\bar{\pi} : \mathbb{P} \to \bar{\mathbb{P}}$ is a forcing projection. Therefore the set $\bar{G} = \bar{\pi}``G \in V[G]$ is a $\bar{\po}$ generic filter over $V$. The intermediate generic extension $V[\bar{G}]$ contains the diagonal Prikry generic sequence $\vec{\rho} = \langle \rho_n \mid n < \omega\rangle$ and collapse generic filters for the intervals $(\kappa_{n-1}^{+(n-1)+3}, \rho_n)$ and $(\rho_n^{+n+3},\kappa_n)$, for every $n < \omega$. It is therefore clear that in $V[\bar{G}]$, the sequence $\langle \rho_n^{+n+2} \mid n < \omega\rangle$ forms an infinite subset $\{ \omega_{s_n} \mid n < \omega\}$ of the set $\{ \omega_n \mid n < \omega\}$. Fix a regular cardinal $\mu < \kappa_{\omega} = \aleph_\omega^{V[\bar{G}]}$ and $k < \omega$ so that $\rho_k > \mu$. We claim that every sequence $\vec{S} = \langle S_n \mid k \leq n < \omega\rangle$ of stationary sets $S_n \subseteq \rho_n^{+n+2} \cap \cof(\mu)$ in $V[\bar{G}]$ is tightly stationary in $V[G]$.
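The cardinal arithmetic behind the identification of $\langle \rho_n^{+n+2} \mid n < \omega\rangle$ with a subsequence of the $\omega_n$s can be spelled out as follows (a sketch, assuming the collapses act as described above). The infinite cardinals of $V[\bar{G}]$ below $\kappa_\omega$ are

```latex
\[
\omega,\ \kappa_{-1} = \omega_1,\ \kappa_{-1}^{+},\ \kappa_{-1}^{++},\
\rho_0,\ \rho_0^{+},\ \dots,\ \rho_0^{+3},\
\kappa_0,\ \kappa_0^{+},\ \dots,\ \kappa_0^{+3},\
\rho_1,\ \dots,\ \rho_1^{+4},\ \kappa_1,\ \dots
\]
```

and in general, the cardinals surviving in the interval $[\rho_n,\rho_{n+1}]$ are $\rho_n,\dots,\rho_n^{+n+3},\kappa_n,\dots,\kappa_n^{+n+3},\rho_{n+1}$. In particular each $\rho_n^{+n+2}$ remains a cardinal and is enumerated as some $\omega_{s_n}$.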
We follow the steps of the proof of Theorem \ref{thm1} and fix, in $V$, a sequence $\vec{a}^\mu$ of length $\kappa_\omega^{++}$, such that $S_{\vec{a}^\mu} \cap \cof(\mu)$ is stationary in $\kappa_\omega^{++}$. Since $(\mathbb{P},\rightarrow)$ satisfies the $\kappa_\omega^{++}$-c.c., $S_{\vec{a}^\mu} \cap \cof(\mu)$ remains stationary in $V[G]$. As explained in the proof of Theorem \ref{thm1}, the last property allows us to reduce the tight stationarity assertion concerning $\vec{S}$ in $V[G]$ to proving the following result: For every closed unbounded set $C \subseteq \kappa_\omega^{++}$, in $V$, and every condition $p \in \mathbb{P}$, there exist some $q^* \geq p$ and $\delta \in C \cap S_{\vec{a}^\mu} \cap \cof(\mu)$ such that $q^* \Vdash \delta \text{ is a continuity point of } \dot{\vec{t}}, \text{ and } \dot{t_\delta}(n) \in \dot{S_n} \text{ for almost all } n < \omega.$ To this end, we work in $V$ and fix a condition $p \in \mathbb{P}$ and a closed unbounded set $C \subseteq \kappa_\omega^{++}$. By extending $p$ if necessary, we may assume that $\ell^p \geq k+1$, $\rho_k^p > \mu$, and that $p \Vdash \forall n \geq k. \dot{S_n} \subseteq \cof(\mu)$. By Corollary \ref{cor-PbarSn}, and the fact that $\pi$ projects the poset $(\mathbb{P},\leq^*)$ onto $(\bar{\mathbb{P}},\leq^*)$ (Remark \ref{remark-PprojPbar}), there exist a direct extension $q$ of $p$ and a sequence of functions $\langle \bar{S}_n \mid \ell^{p} \leq n < \omega\rangle$ so that for every $m \geq \ell^{p}$ and $\vec{\rho}_* = \langle \rho_{\ell^{p}}, \dots, \rho_m \rangle \in \prod_{\ell^{p} \leq n \leq m} \np{A_n}^{q}$, $\bar{q} {}^\frown \vec{\rho}_* \Vdash_{\bar{\mathbb{P}}} \dot{S_m} = \bar{S}_m(\vec{\rho}_p {}^\frown \vec{\rho}_*)$, where $\bar{S}_m(\bar{\rho})$ is a name, in the restricted finite product $\mathbb{Q}'(\bar{\rho})$, for a stationary subset of $\rho_m^{+m+2}$.
We point out that for each $m \geq \ell^p$, the domain of the function $\bar{S}_m$ coincides with the product $\np{A}_{q\restriction (m+1)} = \np{A}_{q\restriction m} \times \np{A}^{q}_m$. We proceed to define sets of ordinals which will be needed for the construction of a desired extension $q^*$ of $q$. We first define a subset $a$ of $\kappa_\omega^{++}$ by \[a = \left(\bigcup_{n<\omega}\dom(f_n^q)\right) \cup \left(\bigcup_{n<\omega} \bigcup_{\bar{\rho}\in \np{A}_{q\restriction n}} \dom(a^{q,\bar{\rho}}_n)\right).\] The set $a$ has size $\kappa_\omega$ and is therefore bounded in $\kappa_\omega^{++}$. Similarly, for every $n \geq \ell^p$ we define a subset $r_n$ of $\kappa_n^{+n+2}$ by \[r_n = \bigcup_{\bar{\rho} \in \np{A}_{q\restriction n}} \rng(a_n^{q,\bar{\rho}}).\] We have $|r_n| < \kappa_n$ because $|\np{A}_{q\restriction n}| = \kappa_{n-1}$. In particular, $r_n$ is bounded in $\kappa_n^{+n+2}$. Next, we fix an increasing and continuous sequence $d = \langle \delta(i) \mid i \leq \mu\rangle$ of ordinals in $\kappa_\omega^{++} \setminus (\sup(a)+1)$, so that $\delta(\mu) \in S_{\vec{a}^\mu} \cap C$. We also fix some $\tau \in \kappa_\omega^{++}$ above $\delta(\mu)$. In constructing the final condition $q^*$ below, we will add the collection $d \cup \{\tau\}$ to $\dom(a_n^{q^*,\vec{\rho}})$ for every relevant $\vec{\rho}$. The condition $q^*$ is constructed in $\omega$ many steps. We will define for each $k \geq \ell^p$ a condition segment $q^k \in \mathbb{P}_{< k}$ and guarantee that the following conditions hold: (1) $q^k \geq^* q\restriction k$ for all $k \geq \ell^p$; and (2) $q^{k_2}\restriction k_1 \geq^* q^{k_1}\restriction k_1$ for every $k_1 < k_2$. We start by taking $q^{\ell^p} = q\restriction \ell^p$. Suppose that $q^n$ has been defined. Note that for every $\vec{\rho}_n \in \np{A}_{q^n}$ and $\rho \in \np{A^q_n}$ the name $\bar{S}_n(\vec{\rho}_n{}^\frown \langle \rho\rangle)$ is defined.
Apply $j_n$, and consider the function $T_n$ defined by $\dom(T_n) = \np{A}_{q^n}$ and $T_n(\vec{\rho}_n) = j_n(\bar{S}_n)(\vec{\rho}_n {}^\frown \langle \kappa_n\rangle)$. By the elementarity of $j_n$, $T_n(\vec{\rho}_n)$ is a $j_n(\mathbb{Q}')(\vec{\rho}_n {}^\frown \langle \kappa_n\rangle)$-name for a stationary subset of $\kappa_n^{+n+2}$. Furthermore, the fact that $j_n : V_{\kappa_n+1} \to N_n$ is $\kappa_n^{+n+3}$-strong implies that $N_n$ contains every closed unbounded subset of $\kappa_n^{+n+2}$, and in particular, the set of all $n$-good ordinals below $\kappa_n^{+n+2}$. It follows that for every $\vec{\rho}_n \in \np{A}_{q^n}$, there is a dense open set $D(\vec{\rho}_n) \subseteq \mathbb{Q}(\vec{\rho}_n)$ of conditions $z \in \mathbb{Q}(\vec{\rho}_n)$ for which there exist $g_z \in \col(\kappa_{n-1}^{+n+2},<\kappa_n)$ and an increasing and continuous sequence $d_n^{\vec{\rho}_n} = \langle \delta^{\vec{\rho}_n}_n(i) \mid i \leq \mu\rangle \subseteq \kappa_n^{+n+2} \setminus (\sup(r_n)+1)$ of $n$-good ordinals, such that \[z {}^\frown g_z \Vdash_{\mathbb{Q}'(\vec{\rho}_n{}^\frown \langle \kappa_n\rangle)} \delta_n^{\vec{\rho}_n}(\mu) \in T_n(\vec{\rho}_n).\] Moreover, note that there are only $\kappa_{n-1}$ many relevant sequences $\vec{\rho}_n$ and conditions $z \in \mathbb{Q}'(\vec{\rho}_n)$, and the collapse poset $\col(\kappa_{n-1}^{+n+2},<\kappa_n)$ is $\kappa_{n-1}^{+n+2}$-closed. It is therefore routine to form a single condition $g_n^* \in \col(\kappa_{n-1}^{+n+2},<\kappa_n)$, extending $g^{q}_n$ (i.e., the collapse part of the $n$-th component of the condition $q$), such that for all $\vec{\rho}_n \in \np{A}_{q^n}$ and $z \in D(\vec{\rho}_n)$, \[z {}^\frown g^*_n \Vdash_{\mathbb{Q}'(\vec{\rho}_n{}^\frown \langle \kappa_n\rangle)} \delta_n^{\vec{\rho}_n}(\mu) \in T_n(\vec{\rho}_n).\] Let $\bar{q}^n = \bar{\pi}(q^n) \in \bar{\mathbb{P}}_{<n}$.
By Lemma \ref{lem-meetCollapse} there exists a direct extension $\bar{t} \geq^* \bar{q}^n$ such that for every $\vec{\rho}_n \in \np{A}_{\bar{t}}$, $\bar{t} {}^\frown \vec{\rho}_n \in D(\vec{\rho}_n)$. Let $t$ be a direct extension of $q^n$ in $\mathbb{P}_{<n}$ so that $\bar{\pi}(t) = \bar{t}$, and define $q^{n+1}\restriction n = t$. It remains to define $q^{n+1}_n$. Before that, we define two auxiliary components $a_n^*$ and $A_n^*$ as follows: \begin{itemize} \item $\dom(a_n^*) = \np{A}_{t}$, and for every $\vec{\rho}_n \in \dom(a^*_n)$, let $a_n^{*,\vec{\rho}_n} = a_n^{q,\vec{\rho}_n} \cup \{ \langle \delta(i),\delta_n^{\vec{\rho}_n}(i) \rangle \mid i \leq \mu \} \cup \{ \langle \tau,\tau_n^{\vec{\rho}_n}\rangle\}$, where $\tau_n^{\vec{\rho}_n}$ is some $n$-good ordinal which is an upper bound of the set $\rng(a_n^{q,\vec{\rho}_n}) \cup \{ \delta^{\vec{\rho}_n}_n(i) \mid i \leq \mu\}$ in the Rudin-Keisler ordering $\leq_{E_n}$. \item $\dom(A^*_n) = \np{A}_{t}$. For every $\vec{\rho}_n \in \dom(A^*_n)$, $A_n^{*,\vec{\rho}_n}$ is defined to be the set of all $\nu \in \pi_{\tau_n^{\vec{\rho}_n},\max(a_n^{q,\vec{\rho}_n})}^{-1}(A_n^{q,\vec{\rho}_n})$ which satisfy the following conditions: \begin{enumerate} \item The condition $g^*_n \in \col(\kappa_{n-1}^{+n+2},<\kappa_n)$ belongs to $\col(\kappa_{n-1}^{+n+2},<\rho_\nu)$, where $\rho_\nu = \bar{\pi}(\nu)$; \item For every $\bar{\rho} \in \np{A}_{t}$, the condition $(t {}^\frown \bar{\rho}) * g^*_n$ of $\mathbb{Q}(\bar{\rho}) \times \col(\kappa_{n-1}^{+n+2},<\rho_\nu) = \mathbb{Q}'(\bar{\rho}{}^\frown \langle \rho_\nu\rangle)$ forces the statements \[``\pi_{\tau_n^{\bar{\rho}},\delta_n^{\bar{\rho}}(\mu)}(\nu) \in \bar{S_n}(\bar{\rho} {}^\frown \langle \rho_\nu\rangle)'' \] and \[`` \langle \pi_{\tau_n^{\bar{\rho}},\delta_n^{\bar{\rho}}(i)}(\nu) \mid i \leq \mu\rangle \text{ is an increasing and continuous sequence} '' \] \end{enumerate} \end{itemize} It is straightforward to verify that $a_n^*$ and $A_n^*$ are
well-defined components, using our choice of $\bar{t}$ and the sequences $\langle \delta_n^{\vec{\rho}_n}(i) \mid i \leq \mu\rangle$, $\vec{\rho}_n \in \np{A}_{\bar{t}}$. We turn to define $q^{n+1}_n = \langle a_n,\np{A_n},A_n,f_n,g_n,H_n \rangle$. We define each component in turn: \begin{enumerate} \item $a_n = a_n^*$; \item $\np{A_n} = \np{A_n}^q \cap \left(\bigcap_{\vec{\rho}_n \in \np{A}_{t}}\bar{\pi}``A_n^{*,\vec{\rho}_n}\right)$; \item $\dom(A_n) = \np{A}_{t}$ and for every $\vec{\rho}_n \in \dom(A_n)$, $A_n^{\vec{\rho}_n} = A_n^{*,\vec{\rho}_n} \cap \bar{\pi}^{-1}(\np{A_n})$; \item $f_n = f_n^q$; \item $g_n = g_n^*$; \item $H_n = H_n^q$. \end{enumerate} Finally, using the coherence condition (2), let $q^*$ be a condition satisfying $q^*\restriction k \geq^* q^k$ for every $k \geq \ell^p$. It is straightforward to verify that $q^*$ is a direct extension of $q$ in $\mathbb{P}$. Our choice of the sequences $\langle \delta_n^{\vec{\rho}_n}(i) \mid i \leq \mu\rangle$, for every $n \geq \ell^{p}$ and $\vec{\rho}_n \in \np{A}_{q^*\restriction n}$, guarantees that for every $n \geq \ell^{q}$ and sequence $\vec{\nu} = \langle \nu_{\ell^q}, \dots, \nu_n\rangle$, if $q^* {}^\frown \vec{\nu}$ is a valid extension of $q^*$ then it must force that $t_{\delta(\mu)}(n) \in \dot{S_n}$ and that $t_{\delta(\mu)}(n)$ is a limit point of the increasing sequence $\langle t_{\delta(i)}(n) \mid i < \mu\rangle$. We conclude that $q^*$ forces that $\delta(\mu)$ is a continuity point of $\dot{\vec{t}}$ and that $t_{\delta(\mu)}(n) \in \dot{S_n}$ for almost all $n < \omega$. \qed{Theorem \ref{thm2}}
\section{Introduction} Intuitively, two objects are independent if they do not affect each other. The concept is well-understood in classical information theory. There, the objects are random variables, the information in a random variable is its Shannon entropy, and two random variables $X$ and $Y$ are declared to be independent if the information in the join $(X,Y)$ is equal to the sum of the information in $X$ and the information in $Y$. This is equivalent to saying that the information in $X$ conditioned by $Y$ is equal to the information in $X$, with the interpretation that, on average, knowing a particular value of $Y$ does not affect the information in $X$. The notion of independence has been defined in algorithmic information theory as well for finite strings~\cite{Cha82}. The approach is very similar. This time the information in a string $x$ is the complexity (plain or prefix-free) of $x$, and two strings $x$ and $y$ are independent if the information in the join string $\langle x, y \rangle$ is equal to the sum of the information in $x$ and the information in $y$, up to logarithmic (or, in some cases, constant) precision. The case of infinite sequences (in short, sequences) has been less studied. An inspection of the literature reveals that for this setting, independence has been considered to be synonymous with pairwise relative randomness, $\mbox{i.e.}$, two sequences $x$ and $y$ are said to be independent if they are (Martin-L\"{o}f) random relative to each other (see~\cite{vlam:j:randomness, dow-hir:b:algrandom}). The effect of this approach is that the notion of independence is confined to the situation where the sequences are random. The main objective of this paper is to put forward a concept of independence that applies to \emph {all} sequences. One can envision various ways for doing this. 
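The classical criterion recalled above (independence as additivity of Shannon entropy over the join) can be checked numerically on toy empirical distributions. The following sketch is purely illustrative and not part of the formal development; the function name is ours:

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (in bits) of the empirical distribution of samples."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

# Joint samples of two fair, independent bits: all four pairs equally likely.
indep = [(a, b) for a in (0, 1) for b in (0, 1)]
hx = entropy([a for a, _ in indep])     # H(X) = 1 bit
hy = entropy([b for _, b in indep])     # H(Y) = 1 bit
hxy = entropy(indep)                    # H(X,Y) = 2 bits
assert hxy == hx + hy                   # independent: joint entropy is the sum

# Fully dependent pair: Y is a copy of X, so H(X,Y) collapses to H(X).
dep = [(a, a) for a in (0, 1)]
assert entropy(dep) == entropy([a for a, _ in dep])
```

Equivalently, $H(X \mid Y) = H(X,Y) - H(Y)$ equals $H(X)$ exactly when the two variables are independent, which is the conditional formulation used in the text.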
One possibility is to use Levin's notion of mutual information for sequences~\cite{lev:j:mutualinformation} (see also the survey paper~\cite{gru-vit:t:shankolm}) and declare two sequences to be independent if their mutual information is small. If one pursues this direction, the main issue is to determine the right definition for ``small.'' We take another approach, which consists in extending in the natural way the notion of independence from finite strings to sequences. This leads us to two concepts: \emph{independence} and \emph{finitary-independence}. We say that (1) two sequences $x$ and $y$ are independent if, for all $n$, the complexity of $x {\upharpoonright} n$ (the prefix of $x$ of length $n$) and the complexity of $x {\upharpoonright} n$ relativized with $y$ are within $O(\log n)$ (and the same relation holds if we swap the roles of $x$ and $y$), and (2) two sequences $x$ and $y$ are finitary-independent if, for all $n$ and $m$, the complexity of $x {\upharpoonright} n$ and the complexity of $x {\upharpoonright} n$ given $y{\upharpoonright} m$ are within $O(\log n + \log m)$ (and the same relation holds if we swap the roles of $x$ and $y$). We have settled on the additive logarithmic term of precision (rather than some higher accuracy) since this provides robustness with respect to the type of complexity (plain or prefix-free) and other technical advantages. We establish a series of basic facts regarding the proposed notions of independence. We show that independence is strictly stronger than finitary-independence. The two notions of independence apply to a larger category of sequences than the family of random sequences, as intended. However, they are too coarse to be relevant for computable sequences. It is not hard to see that a computable sequence $x$ is independent with any other sequence $y$, simply because the information in $x$ can be obtained directly.
In fact, this type of trivial independence holds for a larger type of sequences, namely for any $H$-trivial sequence, and trivial finitary-independence holds for any sequence $x$ whose prefixes have logarithmic complexity. It seems that for this type of sequences (computable or with very low complexity) a more refined definition of independence is needed (perhaps, based on resource-bounded complexity). We show that the two proposed notions of independence have some of the intuitive properties that one naturally expects. For example, for every sequence $x$, the set of sequences that are finitary-independent with $x$ has measure one. The same issue for independence remains open. We next investigate to what extent pairs of independent, or finitary-independent sequences, can be effectively constructed via Turing reductions. For example, is there a Turing reduction $f$ that given oracle access to an arbitrary sequence $x$ produces a sequence that is finitary-independent with $x$? Clearly, if we allow the output of $f$ to be a computable sequence, then the answer is positive by the type of trivial finitary-independence that we have noted above. We show that if we insist that the output of $f$ has super-logarithmic complexity whenever $x$ has positive constructive Hausdorff dimension, then the answer is negative. In the same vein, it is shown that there is no effective way of producing from an arbitrary sequence $x$ with positive constructive Hausdorff dimension two sequences that are finitary-independent and have super-logarithmic complexity. Similar questions are considered for the situation when we are given two (finitary-) independent sequences. It is shown that there are independent sequences $x$ and $y$ and a Turing reduction $g$ such that $x$ and $g(y)$ are not independent. This appears to be a bad artifact of the notion of independence proposed in this paper. We consider that this is the only counter-intuitive effect of our definitions. 
We do not know if a similar phenomenon holds for finitary-independence. On the other hand, for any independent sequences $x$ and $y$ and for any Turing reduction $g$, $x$ and $g(y)$ are finitary-independent. We also raise the question of whether, given as input several (finitary-) independent sequences $x$ and $y$, it is possible to effectively build a new sequence that is (finitary-) independent (not in the trivial way) with each sequence in the input. It is observed that the answer is positive if the sequences in the input are random, but for other types of sequences the question remains open. The same issue can be raised regarding finite strings and for this case a positive answer is obtained. Namely, it is shown that given three independent finite strings $x$, $y$ and $z$ with linear complexity, one can effectively construct a new string that is independent with each of $x, y$ and $z$, and has high complexity and length a constant fraction of the length of $x, y$ and $z$. \subsection{Preliminaries} ${\mathbb N}$ denotes the set of non-negative integers; the size of a finite set $A$ is denoted $\| A\|$. Unless stated otherwise, all numbers are in ${\mathbb N}$ and all logs are in base 2. We work over the binary alphabet $\{0,1\}$. A string is an element of $\{0,1\}^*$ and a sequence is an element of $\{0,1\}^{\infty}$. If $x$ is a string, $|x|$ denotes its length; $xy$ denotes the concatenation of the strings $x$ and $y$. If $x$ is a string or a sequence, $x(i)$ denotes the $i$-th bit of $x$ and $x{\upharpoonright} n$ is the substring $x(1) x(2) \cdots x(n)$. For two sequences $x$ and $y$, $x \oplus y$ denotes the sequence $x(1) y(1) x(2) y(2) x(3) y(3) \cdots$ and $x ~{\mathrm{XOR}}~ y$ denotes the sequence $(x(1) ~{\mathrm{XOR}}~ y(1)) (x(2) ~{\mathrm{XOR}}~ y(2)) (x(3) ~{\mathrm{XOR}}~ y(3)) \cdots$, where $(x(i) ~{\mathrm{XOR}}~ y(i))$ is the sum modulo $2$ of the bits $x(i)$ and $y(i)$.
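The join $x \oplus y$ and the bitwise ${\mathrm{XOR}}$ defined above act position by position, so they restrict naturally to finite prefixes. A minimal illustrative sketch (the function names are ours):

```python
def interleave(x, y):
    """The join x (+) y: alternate the bits x(1) y(1) x(2) y(2) ...
    Operates on finite prefixes of equal length, given as bit strings."""
    return "".join(a + b for a, b in zip(x, y))

def bitwise_xor(x, y):
    """x XOR y: sum modulo 2 at each position, on finite prefixes."""
    return "".join(str(int(a) ^ int(b)) for a, b in zip(x, y))

# Prefixes of length 4 of two sequences.
x, y = "0110", "1011"
assert interleave(x, y) == "01101101"
assert bitwise_xor(x, y) == "1101"
```

Note that a prefix of length $2n$ of $x \oplus y$ determines exactly $x {\upharpoonright} n$ and $y {\upharpoonright} n$, which is why the join is the standard way of packaging two sequences into one.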
We identify a sequence $x$ with the set $\{n \in \mathbb{N} \mid x(n) = 1\}$. We say that a sequence $x$ is computable (computably enumerable, or c.e.) if the corresponding set is computable (respectively, computably enumerable, or c.e.). If $x$ is c.e., then for every $s \in \mathbb{N}$, $x_s$ is the sequence corresponding to the set of elements enumerated within $s$ steps by some machine $M$ that enumerates $x$ (the machine $M$ is given in the context). We also identify a sequence $x$ with the real number in the interval $[0,1]$ whose binary writing is $0.x(1) x(2) \cdots$. A sequence $x$ is said to be left c.e. if the corresponding real number $x$ is the limit of a computable increasing sequence of rational numbers. The plain and the prefix-free complexities of a string are defined in the standard way; however, we need to provide a few details regarding the computational models. The machines that we consider process information given in three forms: (1) the input, (2) the oracle set, (3) the conditional string. Correspondingly, a universal machine has 3 tapes: \begin{itemize} \item one tape for the input and work, \item one tape for storing the conditional string, \item one tape (called the oracle-query tape) for formulating queries to the oracle. \end{itemize} The oracle is a string or a sequence. If the machine enters the query state and the value written in binary on the oracle-query tape is $n$, then the machine gets the $n$-th bit in the oracle, or, if $n$ is larger than the length of the oracle, the machine enters an infinite loop. We fix such a universal machine $U$. The notation $U^w(u \mid v)$ means that the input is $u$, the conditional string is $v$, and the oracle is given by $w$, which is a string or a sequence. The plain complexity of a string $x$ given the oracle $w$ and the conditional string $v$ is $C^w(x \mid v) = \min\{|u| \mid U^w(u \mid v) = x \}$. There exists a constant $c$ such that for every $x, v$, and $w$, $C^w(x \mid v) < |x| + c$.
A machine is prefix-free (self-delimiting) if its domain is a prefix-free set. There exist universal prefix-free machines; we fix such a machine $U$; the prefix-free complexity of a string $x$ given the oracle $w$ and the conditional string $v$ is $H^w(x \mid v) = \min\{|u| \mid U^w(u \mid v) = x \}$. In case $w$ or $v$ are the empty strings, we omit them in $C(\cdot)$ and $H(\cdot)$. Throughout this paper we use the $O(\cdot)$ notation to hide constants that depend only on the choice of the universal machine underlying the definitions of the complexities $C$ and $H$. Since the prefix-free universal machine is a particular type of machine, it follows that $C^w(x \mid v) < H^w(x \mid v) + O(1)$, for every $x, v$ and $w$. The reverse inequality between $C(\cdot)$ and $H(\cdot)$ also holds true, within an additive logarithmic term, and is obtained as follows. For example, a string $x = x(1) x(2) \cdots x(n)$ can be coded in a self-delimiting way by $x \mapsto code(x)= \underbrace{11 \cdots 1}_{|\mbox{\rm bin}(n)|} 0 \mbox{\rm bin}(n) x(1) x(2) \cdots x(n),$ where $\mbox{\rm bin}( n)$ is the binary representation of $n \in \mathbb{N}$. Note that $|code(x)| = |x| + 2 \log |x| + O(1)$. This implies that for every $x,v$, and $w$, \begin{equation} \label{e:plain-prefix} C^w(x \mid v) > H^w(x \mid v) - 2 \log|x| - O(1). \end{equation} The following inequalities hold for all strings $x$ and $y$: \begin{equation} \label{e:lifting} C^y(x) \leq C(x \mid y) + 2 \log |y| + O(1), \end{equation} \begin{equation} \label{e:symmetry} | C(xy) - (C(x|y) + C(y)) | \leq O(\log C(x) + \log C(y)). \end{equation} The first inequality is easy to derive directly; the second one is called the Symmetry of Information Theorem, see \cite{zvo-lev:j:kol}. There are various equivalent definitions for (algorithmic) random sequences as defined by Martin-L\"of \cite{mar-lof:j:mltests} (see \cite{Cris}). 
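The self-delimiting encoding $x \mapsto code(x)$ above, together with its length bound $|code(x)| = |x| + 2\log|x| + O(1)$, can be sketched directly. This is an illustration only; the helper names are ours:

```python
def code(x):
    """Self-delimiting code: 1^{|bin(n)|} 0 bin(n) x, where n = |x|."""
    n_bits = format(len(x), "b")            # bin(n)
    return "1" * len(n_bits) + "0" + n_bits + x

def decode(stream):
    """Recover x from code(x) followed by arbitrary bits; return (x, rest).
    The unary prefix tells us |bin(n)|, so no end-marker is needed."""
    k = stream.index("0")                   # k = |bin(n)|
    n = int(stream[k + 1 : 2 * k + 1], 2)   # read bin(n)
    return stream[2 * k + 1 : 2 * k + 1 + n], stream[2 * k + 1 + n :]

x = "010011"
enc = code(x)
assert decode(enc + "0101") == (x, "0101")          # prefix-free: junk ignored
assert len(enc) == len(x) + 2 * len(format(len(x), "b")) + 1
```

Because the decoder knows where $code(x)$ ends, no codeword is a proper prefix of another, which is exactly the prefix-free property used to compare $C(\cdot)$ and $H(\cdot)$.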
In what follows we will use the (weak) complexity-theoretic one \cite{cha:j:ait} using the prefix-free complexity: A sequence $x$ is Martin-L\"{o}f random (in short, random) if there is a constant $c$ such that for every $n$, $H(x {\upharpoonright} n) \geq n-c$. The set of random sequences has constructive (Lebesgue) measure one \cite{mar-lof:j:mltests}. The sequence $x$ is random relative to the sequence $y$ if there is a constant $c$ such that for every $n$, $H^y(x {\upharpoonright} n) \geq n-c$. Note that if $x$ is random, then for every $n$, $C(x {\upharpoonright} n) \geq n - 2\log n -O(1)$ (by inequality (\ref{e:plain-prefix})). A similar inequality also holds for the relativized complexities, i.e. for all $ x$ that are random relative to $y$ and for all $n$, $C^y(x {\upharpoonright} n ) > n - 2 \log n - O(1)$. These results will be repeatedly used throughout the paper. In \cite{vlam:j:randomness} van Lambalgen proves that $x \oplus y$ is random iff $x$ is random and $y$ is random relative to $x$. This implies that if $x$ is random and $y$ is random relative to $x$ then $x$ is random relative to $y$. The constructive Hausdorff dimension of a sequence $x$---which is the direct effectivization of ``classical Hausdorff dimension''---defined by $\mathrm{dim}(x)=\lim \inf_{n \rightarrow \infty} C(x {\upharpoonright} n)/n\left( = \lim \inf_{n \rightarrow \infty} H(x {\upharpoonright} n)/n\right)$, measures intermediate levels of randomness (see \cite{rya:j:dimension, sta:93, tad:j:partialrand, may:j:dimension-kol, lut:j:dimension, rei:t:thesis,sta:j:dimension, cal-sta-ter:j:partialrand, dow-hir-nie-ter:j:calibrating}). \if01Originally it has been defined using measure-theoretical tools but we give an alternative equivalent definition (for the proof and history of this equivalence see~\cite{may:j:dimension-kol, rya:j:dimension, sta:j:dimension} ) that involves plain (or prefix-free) complexity. 
Thus, the constructive Hausdorff dimension of a sequence $x$, denoted $\mathrm{dim}(x)$ is $\lim \inf_{n \rightarrow \infty} \frac{C(x {\upharpoonright} n)}{n} $ ($ = \lim \inf_{n \rightarrow \infty} \frac{H(x {\upharpoonright} n)}{n}$). \fi A Turing reduction $f$ is an oracle Turing machine; $f(x)$ is the language computed by $f$ with oracle $x$, assuming that $f$ halts on all inputs when working with oracle $x$ (otherwise we say that $f(x)$ does not exist). In other words, if $n \in f(x)$ then the machine $f$ on input $n$ and with oracle $x$ halts and outputs $1$ and if $n \not\in f(x)$ then the machine $f$ on input $n$ and with oracle $x$ halts and outputs $0$. The function \emph{use} is defined as follows: $use_f^x(n)$ is the index of the rightmost position on the tape of $f$ accessed during the computation of $f$ with oracle $x$ on input $n$. The Turing reduction $f$ is a \emph{wtt-reduction} if there is a computable function $q$ such that $use_f^x(n) \leq q(n)$, for all $n$. The Turing reduction $f$ is a \emph{truth-table reduction} if $f$ halts on all inputs for every oracle. A truth-table reduction is a wtt-reduction. \section{Defining independence} The basic idea is to declare that two objects are independent if none of them contains significant information about the other one. Thus, if in some formalization, $I(x)$ denotes the information in $x$ and $I(x \mid y)$ denotes the information in $x$ given $y$, $x$ and $y$ are independent if $I(x) - I(x \mid y)$ and $I(y) - I(y \mid x)$ are both small. In this paper we work in the framework of algorithmic information theory. In this setting, in case $x$ is a string, $I(x)$ is the complexity of $x$ (where for the ``complexity of $x$" there are several possibilities, the main ones being the plain complexity or the prefix-free complexity). The independence of strings was studied in~\cite{Cha82}: two strings are independent if $I(xy) \approx I(x) + I(y)$. 
This approach motivates our Definition~\ref{d:strongindep} and Definition~\ref{d:weakindep}. In case $x$ is an infinite sequence, the information in $x$ is characterized by the sequence $(I(x {\upharpoonright} n))_{n \in \mathbb{N}}$ of information in the initial segments of $x$. In the infinite case, for the information upon which we condition ($\mbox{e.g.}$, the $y$ in $I(x \mid y)$), there are two possibilities: either the entire sequence is available in the form of an oracle, or we consider initial segments of it. Accordingly, we propose two notions of independence. \begin{definition} {\rm(}{\bf The ``integral'' type of independence}{\rm )} \label{d:strongindep} Two sequences $x$ and $y$ are {\rm independent} if, for all $n$, $C^x(y{\upharpoonright} n) \geq C(y{\upharpoonright} n) - O(\log n )$ and $C^y(x{\upharpoonright} n) \geq C(x{\upharpoonright} n) - O(\log n)$. \end{definition} \medskip \begin{definition} {\rm(}{\bf The finitary type of independence}{\rm )} \label{d:weakindep} Two sequences $x, y$ are {\rm finitary-independent} if for all natural numbers $n$ and $m$, \[ C(x {\upharpoonright} n ~y {\upharpoonright} m) \geq C(x{\upharpoonright} n) + C(y{\upharpoonright} m) - O(\log(n) + \log(m)). \] \end{definition} \begin{remark} {\rm We will show in Proposition~\ref{p:cond} that the inequality in Definition~\ref{d:weakindep} is equivalent to saying that for all $n$ and $m$, $C(x{\upharpoonright} n \mid y{\upharpoonright} m) \geq C(x{\upharpoonright} n) - O(\log n + \log m)$, which is the finite analogue of the property in Definition~\ref{d:strongindep} and is in line with our discussion above. } \end{remark} \begin{remark} \label{r:indepimplieswindep} {\rm If $x$ and $y$ are independent, then they are also finitary-independent (Proposition~\ref{p:indepimplieswindep}). The converse is not true (Corollary~\ref{c:windepnotindep}).
} \end{remark} \begin{remark} \label{r:plain-prefix} {\rm The proposed definitions use the plain complexity $C(\cdot)$, but we could have used the prefix-free complexity as well, because the two types of complexity are within an additive logarithmic term. Also, in Definition~\ref{d:weakindep} (and throughout this paper), we use concatenation to represent the joining of two strings. However, since any reasonable pairing function $\langle x, y \rangle$ satisfies $|~|\langle x, y \rangle| - |xy|~| < O(\log|x| + \log|y|)$, it follows that $|C(\langle x, y\rangle) - C(xy)| < O(\log|x| + \log|y|)$, and thus any reasonable pairing function could have been used instead. } \end{remark} \begin{remark} {\rm A debatable issue is the subtraction of the logarithmic term. Indeed, there are other natural possibilities. We argue that our choice has certain advantages over other possibilities that come to mind. Let us focus on the definition of finitary-independence. We want $C(x{\upharpoonright} n ~y{\upharpoonright} m) \geq C(x{\upharpoonright} n) + C(y{\upharpoonright} m) - O(f(x{\upharpoonright} n) + f(y{\upharpoonright} m))$, for all $n, m$, where $f$ should be some ``small'' function. We would like the following two properties to hold: \begin{itemize} \item[(A)] the sequences $x$ and $y$ are finitary-independent iff $C(x{\upharpoonright} n \mid y{\upharpoonright} m) > C(x{\upharpoonright} n) - O(f(x{\upharpoonright} n) + f(y{\upharpoonright} m))$, for all $n$ and $m$, \item[(B)] if $x$ is ``somewhat'' random and $y = 0^{\omega}$, then $x$ and $y$ are finitary-independent.
\end{itemize} Other natural possibilities for the definition could be: (i) if $f(x) = C(|x|)$, the definition of finitary-independence--(i) would now be: \[ C(x{\upharpoonright} n ~y{\upharpoonright} m) \geq C(x{\upharpoonright} n) + C(y{\upharpoonright} m) - O(C(n) + C(m)), \] or (ii) if $f(x) = \log C(x)$, the definition of finitary-independence--(ii) would now be: \[ C(x{\upharpoonright} n ~y{\upharpoonright} m) \geq C(x{\upharpoonright} n) + C(y{\upharpoonright} m) - O(\log C(x{\upharpoonright} n) + \log C(y{\upharpoonright} m)). \] If sequences $x$ and $y$ satisfy (i) or (ii), then they also satisfy Definition~\ref{d:weakindep}. Variant (i) implies (B), but not (A) (for example, consider sequences $x$ and $y$ with $C(n) \ll \log C(x {\upharpoonright} n)$ and $C(m) \ll \log C(y{\upharpoonright} m)$, for infinitely many $n$ and $m$). Variant (ii) implies (A), but does not imply (B) (for example, if for infinitely many $n$, $C(x{\upharpoonright} n) = O(\log^3 n)$; take such a value $n$, let $p$ be a shortest description of $x {\upharpoonright} n$, and let $m$ be the integer whose binary representation is $1p$. Then $x{\upharpoonright} n$ and $0^\omega {\upharpoonright} m$ do not satisfy (B)). The proposed definition implies both (A) and (B). Another advantage is the robustness properties from Remark~\ref{r:plain-prefix}. } \end{remark} \begin{remark} \label{r:trivial} {\rm If the sequence $x$ is computable, then $x$ is independent with every sequence $y$. In fact, a stronger statement holds. A sequence is called $H$-trivial if, for all $n$, $H(x {\upharpoonright} n) \leq H(n) + O(1)$. This is a notion that has been intensively studied recently (see \cite{dow-hir-nie-ter:j:calibrating}). Clearly every computable sequence is $H$-trivial, but the converse does not hold~\cite{zam:t:kolmog, sol:t:kolmog}. If $x$ is $H$-trivial, then it is independent with every sequence $y$.
Indeed, $H^y(x {\upharpoonright} n) \geq H(x{\upharpoonright} n) - O(\log n)$, because $H(x {\upharpoonright} n) \leq H(n) + O(1) \leq \log n + O(1)$, and $H^x(y {\upharpoonright} n) \geq H(y {\upharpoonright} n) - O(\log n)$, because, in fact, $H^x(y {\upharpoonright} n)$ and $H(y {\upharpoonright} n)$ are within a constant of each other~\cite{nie:j:ktrivial}. The same inequalities hold if we use the $C(\cdot)$ complexity (see Remark~\ref{r:plain-prefix}). For the case of finitary-independence, a similar phenomenon holds for a (seemingly) even larger class. \begin{definition} A sequence $x$ is called C-logarithmic if $C(x{\upharpoonright} n) = O(\log n)$. \end{definition} It can be shown (for example, using Proposition~\ref{p:cond}, (a)) that if $x$ is C-logarithmic, then it is finitary-independent with every sequence $y$. Note that every sequence $x$ that is the characteristic sequence of a $\mbox{c.e.}$ set is C-logarithmic. This follows from the observation that, for every $n$, the initial segment $x {\upharpoonright} n$ can be constructed given the number of $1$'s in $x {\upharpoonright} n$ (information which can be written with $\log n$ bits) and the finite description of the enumerator of the set represented by $x$. If a sequence is $H$-trivial then it is C-logarithmic, but the converse probably does not hold. In brief, the notions of independence and finitary-independence are relevant for sequences having complexity above that of $H$-trivial sequences, respectively C-logarithmic sequences. The cases of independent (finitary-independent) pairs $(x,y)$, where at least one of $x$ and $y$ is $H$-trivial (respectively, C-logarithmic) will be referred to as \emph{trivial independence}. } \end{remark} \begin{remark} \label{r:properties} {\rm Some desirable properties of the independence relation are: \begin{itemize} \item[P1.] Symmetry: $x$ is independent with $y$ iff $y$ is independent with $x$. \item[P2.]
Robustness under the type of complexity (plain or prefix-free). \item[P3.] Except for some special cases, if $f$ is a Turing reduction, then $x$ and $f(x)$ are dependent (``independence cannot be created''). \item[P4.] For every $x$, the set of sequences that are dependent with $x$ is small ($\mbox{i.e.}$, it has measure zero). \end{itemize} Clearly both the independence and the finitary-independence relations satisfy P1. They also satisfy P2, as we noted in Remark~\ref{r:plain-prefix}. It is easy to see that the independence relation satisfies P3, whenever we require that the initial segments of $x$ and $f(x)$ have plain complexity $\omega (\log n)$ (because $C^x(f(x) {\upharpoonright} n) = O(\log n)$, while $C(f(x) {\upharpoonright} n) = \omega(\log n)$). We shall see that the finitary-independence relation satisfies P3 under some stronger assumptions on $f$ and $f(x)$ (see Section~\ref{s:onesource} and in particular Theorem~\ref{t:wtt-1}). We do not know whether the independence relation satisfies P4. Theorem~\ref{t:measone} shows that the finitary-independence relation satisfies P4. } \end{remark} \subsection{Properties of independent and finitary-independent sequences} \label{s:wealindep} The following simple properties of finitary-independent sequences are technically useful in several of the proofs that follow. \begin{proposition} \label{p:cond} \begin{itemize} \item[\rm (a)] Two sequences $x$ and $y$ are finitary-independent $\Leftrightarrow$ for all $n$ and $m$, $C(x{\upharpoonright} n \mid y{\upharpoonright} m) \geq C(x{\upharpoonright} n) - O(\log n + \log m)$. \item [\rm (b)] Two sequences $x$ and $y$ are finitary-independent if and only if for all $n$, $C(x{\upharpoonright} n ~y{\upharpoonright} n) \geq C(x{\upharpoonright} n) + C(y{\upharpoonright} n) - O(\log n)$. \item [\rm (c)] Two sequences $x$ and $y$ are finitary-independent if and only if for all $n$, $C(x{\upharpoonright} n \mid y{\upharpoonright} n) \geq C(x{\upharpoonright} n) - O(\log n)$.
\item [\rm (d)] If $x$ and $y$ are not finitary-independent, then for every constant $c$ there are infinitely many $n$ such that $C(x{\upharpoonright} n ~y{\upharpoonright} n) < C(x {\upharpoonright} n) + C(y {\upharpoonright} n) - c \log n$. \item [\rm (e)] If $x$ and $y$ are not finitary-independent, then for every constant $c$ there are infinitely many $n$ such that $C(x{\upharpoonright} n \mid y{\upharpoonright} n) < C(x {\upharpoonright} n) - c \log n$. \end{itemize} \end{proposition} {\em Proof}. We use the following inequalities, which hold for all strings $x$ and $y$ (they follow from the Symmetry of Information Equation~(\ref{e:symmetry})): \begin{equation} \label{e:condgeq} C(xy) \geq C(x) + C(y \mid x) - O(\log|x| + \log |y|), \end{equation} and \begin{equation} \label{e:condleq} C(xy) \leq C(x) + C(y \mid x) + O(\log|x| + \log |y|). \end{equation} \phantom{xx.} (a)``$\Rightarrow$" \begin{equation*} \begin{array}{ll} \quad\quad\quad C(x{\upharpoonright} n \mid y{\upharpoonright} m) & \geq C(x{\upharpoonright} n ~y{\upharpoonright} m) - C(y{\upharpoonright} m) - O(\log n + \log m) \quad \quad (\mbox{by~(\ref{e:condleq})}) \\ & \geq C(x{\upharpoonright} n) + C(y{\upharpoonright} m) - C(y{\upharpoonright} m) - O(\log n + \log m) \quad \quad (\mbox{by finitary-independence}) \\ & = C(x{\upharpoonright} n) - O(\log n + \log m). \end{array} \end{equation*} \phantom{xx. (a)}``$ \Leftarrow$" \begin{equation*} \begin{array}{ll} C(x{\upharpoonright} n ~y{\upharpoonright} m) & \geq C(y{\upharpoonright} m) + C(x{\upharpoonright} n \mid y{\upharpoonright} m) - O(\log n + \log m) \quad \quad (\mbox{by~(\ref{e:condgeq})}) \\ & \geq C(y{\upharpoonright} m) + C(x{\upharpoonright} n) - O(\log n + \log m) \quad \quad (\mbox{by hypothesis}). \end{array} \end{equation*} \phantom{xx.} (b) ``$\Rightarrow$" Take $n=m$. \phantom{xx. (b)\quad}``$\Leftarrow$" Suppose $n \geq m$ (the other case can be handled similarly).
\begin{equation*} \begin{array}{ll} C(x{\upharpoonright} n ~y{\upharpoonright} m) & \geq C(y{\upharpoonright} m) + C(x{\upharpoonright} n \mid y{\upharpoonright} m) - O(\log n + \log m) \quad \quad (\mbox{by~(\ref{e:condgeq})}) \\ & \geq C(y{\upharpoonright} m) + C(x{\upharpoonright} n \mid y{\upharpoonright} n) - O(\log n + \log m) \quad \quad (\mbox{since } y{\upharpoonright} m \mbox{ is computable from } y{\upharpoonright} n \mbox{ and } m) \\ & \geq C(y{\upharpoonright} m) + C(x{\upharpoonright} n) - O(\log n + \log m) \quad \quad (\mbox{by (a)}). \end{array} \end{equation*} \phantom{xx.} (c) This follows from (b), with a proof similar to that of (a). \smallskip (d) Suppose that for some constant $c$ the inequality holds only for finitely many $n$. Then one can choose a constant $c' > c$ for which the opposite inequality holds for every $n$, which by (b) would imply the finitary-independence of $x$ and $y$. \smallskip (e) Follows from (c), in a similar way as (d) follows from (b).~\QE \medskip \begin{proposition} \label{p:indepimplieswindep} If the sequences $x$ and $y$ are independent, then they are also finitary-independent. \end{proposition} {\em Proof}. Suppose $x$ and $y$ are not finitary-independent. By Proposition~\ref{p:cond} (e), for every constant $c$ there are infinitely many $n$ such that $C(x{\upharpoonright} n \mid y {\upharpoonright} n) < C(x {\upharpoonright} n)- c \cdot \log n$. Taking into account inequality~(\ref{e:lifting}), we obtain $C^y(x{\upharpoonright} n) < C(x {\upharpoonright} n) - (c-3) \log n$, for infinitely many $n$, which contradicts that $x$ and $y$ are independent.~\QE \medskip \begin{proposition} \label{p:dimXOR} If $\mathrm{dim}(x) = \sigma$ and $(x,y)$ are finitary-independent, then $\mathrm{dim}(x ~\mathrm{XOR}~ y) \geq \sigma$. \end{proposition} {\em Proof}. Note that $C(x {\upharpoonright} n \mid y {\upharpoonright} n) \leq C((x ~\mathrm{XOR}~ y) {\upharpoonright} n) + O(1)$, for all $n$ (this holds for all sequences $x$ and $y$).
Suppose there exists $\epsilon > 0$ such that $\mathrm{dim}(x ~\mathrm{XOR}~ y) \leq \sigma - \epsilon$. It follows that, for infinitely many $n$, $C((x ~\mathrm{XOR}~ y) {\upharpoonright} n) \leq (\sigma - \epsilon) n$. Then \[ \begin{array}{ll} C(x {\upharpoonright} n \mid y {\upharpoonright} n) & < C((x ~\mathrm{XOR}~ y) {\upharpoonright} n ) + O(1) \\ & < (\sigma-\epsilon) n + O(1) \quad\quad \mbox{for infinitely many $n$}. \end{array} \] By the finitary-independence of $(x,y)$, $C(x {\upharpoonright} n) \leq C(x {\upharpoonright} n \mid y {\upharpoonright} n) + O(\log n) \leq (\sigma - \epsilon/2)n + O(1)$, $\mbox{i.o.}$ $n$, which contradicts the fact that $\mathrm{dim}(x) = \sigma$.~\QE \medskip \begin{proposition} \label{p:weakindepXOR} \begin{itemize} \item[(a)] If $x$ is random and $(x,y)$ are finitary-independent, then $(y, x ~\mathrm{XOR}~ y)$ are finitary-independent. \item[(b)] If $x$ is random and $(x,y)$ are independent, then $(y, x ~\mathrm{XOR}~ y)$ are independent. \end{itemize} \end{proposition} {\em Proof}. We prove (a) ((b) is similar). Suppose that $y$ and $x ~{\mathrm{XOR}}~ y$ are not finitary-independent. Then for every constant $c$, there are infinitely many $n$, such that $C((x ~{\mathrm{XOR}}~ y){\upharpoonright} n \mid y{\upharpoonright} n) < C((x ~{\mathrm{XOR}}~ y){\upharpoonright} n) - c\log n$. Note that if a program can produce $(x ~{\mathrm{XOR}}~ y){\upharpoonright} n$ given $y{\upharpoonright} n$, then by doing an extra bitwise XOR with $y{\upharpoonright} n$ it will produce $x{\upharpoonright} n$. Thus, $C(x{\upharpoonright} n \mid y{\upharpoonright} n) < C((x ~{\mathrm{XOR}}~ y){\upharpoonright} n \mid y{\upharpoonright} n) + O(1)$ for all $n$. 
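The bitwise recovery step just used — a program producing $(x ~{\mathrm{XOR}}~ y){\upharpoonright} n$ from $y{\upharpoonright} n$ is converted, at constant extra cost, into one producing $x{\upharpoonright} n$ — can be checked concretely on finite prefixes. The following Python sketch is purely illustrative (the bit strings are arbitrary stand-ins, not objects from the proof):

```python
# Illustrative sketch: XOR is an involution on bit strings, so from the
# prefix (x XOR y)|n together with y|n a single extra XOR recovers x|n.
# The bit strings below are arbitrary stand-ins.

def xor_bits(a, b):
    """Bitwise XOR of two equal-length 0/1 lists."""
    return [u ^ v for u, v in zip(a, b)]

x = [1, 0, 1, 1, 0, 0, 1, 0]   # stands in for the prefix x|n
y = [0, 1, 1, 0, 1, 0, 0, 1]   # stands in for the prefix y|n

xy = xor_bits(x, y)            # the prefix (x XOR y)|n

# Any program that outputs xy given y becomes a program that outputs x
# after appending one bitwise XOR with the conditioning string y:
recovered_x = xor_bits(xy, y)
assert recovered_x == x        # (x XOR y) XOR y = x
```

This involution property is exactly why $C(x{\upharpoonright} n \mid y{\upharpoonright} n)$ exceeds $C((x ~{\mathrm{XOR}}~ y){\upharpoonright} n \mid y{\upharpoonright} n)$ by at most an additive constant.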
Combining with the first inequality, for every constant $c$ and for infinitely many $n$ we have: \[ \begin{array}{ll} C(x{\upharpoonright} n \mid y{\upharpoonright} n) & < C((x ~{\mathrm{XOR}}~ y){\upharpoonright} n) - c\log n + O(1) \\ & < n - c \log n + O(1) \\ & < C(x{\upharpoonright} n) + 2 \log n - c\log n +O(1) \\ & = C(x {\upharpoonright} n) - (c-2) \log n +O(1). \end{array} \] This contradicts the fact that $x$ and $y$ are finitary-independent.~\QE \medskip \begin{proposition} There are sequences $x, y$, and $z$ such that $(x,y)$ are independent, $(x,z)$ are independent, but $(x,y \oplus z)$ are not finitary-independent. \end{proposition} {\em Proof}. Take $y$ and $z$ two sequences that are random relative to each other, and let $x = y ~\mathrm{XOR}~ z$. Then $(x,y)$ are independent, and $(x,z)$ are independent, by Proposition~\ref{p:weakindepXOR}. On the other hand, note that $\mathrm{dim}(y ~\mathrm{XOR}~ z) = 1$ (by Proposition~\ref{p:dimXOR}) and $C((y ~\mathrm{XOR}~ z) {\upharpoonright} n \mid (y \oplus z) {\upharpoonright} 2n) < O(1)$. Consequently, for every constant $c$ and for almost every $n$, $C((y ~\mathrm{XOR}~ z) {\upharpoonright} n \mid (y \oplus z){\upharpoonright} 2n) < C((y ~\mathrm{XOR}~ z) {\upharpoonright} n) - c(\log n + \log 2n)$, and thus, $(y ~\mathrm{XOR}~ z, y \oplus z)$ are not finitary-independent.~\QE \medskip In Remark~\ref{r:trivial}, we have listed several types of sequences that are independent or finitary-independent with any other sequence. The next result goes in the opposite direction: it exhibits a pair of sequences that cannot be finitary-independent (and thus not independent). \begin{proposition}{\rm \cite{frank}} \label{p:ce} If $x$ and $y$ are left $\mbox{c.e.}$ sequences, ${\rm dim}(x) > 0$, and ${\rm dim}(y) > 0$, then $x$ and $y$ are not finitary-independent. \end{proposition} {\em Proof}.
For each $n$, let ${\rm cm}_x(n) = \min\{ s \mid x_s {\upharpoonright} n = x {\upharpoonright} n \}$ and ${\rm cm}_y(n) = \min \{ s \mid y_s {\upharpoonright} n = y {\upharpoonright} n \}$ (the convergence moduli of $x$ and, respectively, $y$). Without loss of generality we can assume that ${\rm cm}_x(n) > {\rm cm}_y(n)$, for infinitely many $n$. For each $n$ satisfying the inequality, $y {\upharpoonright} n$ can be computed from $x {\upharpoonright} n$ as follows. First compute $s = {\rm cm}_x(n)$ (which can be done because $x {\upharpoonright} n$ is known) and output $y_s {\upharpoonright} n$. Consequently, for infinitely many $n$, $C( y {\upharpoonright} n \mid x {\upharpoonright} n) < O(1)$. On the other hand, since $\dim(y)>0$, there exists a constant $c$ such that $C(y {\upharpoonright} n) \geq c \cdot n$, for almost every $n$. Consequently, $x$ and $y$ are not finitary-independent.~\QE \section{Examples of independent and finitary-independent sequences} We give examples of pairs of sequences that are independent or finitary-independent (other than the trivial examples from Remark~\ref{r:trivial}). \begin{theorem} \label{t:indepex} Let $x$ be a random sequence and let $y$ be a sequence that is random relative to $x$. Then $x$ and $y$ are independent. \end{theorem} {\em Proof}. Since $y$ is random relative to $x$, for all $n$, $C^x(y {\upharpoonright} n) > n - 2 \log n - O(1) \geq C(y {\upharpoonright} n) - 2 \log n - O(1)$. The van Lambalgen Theorem \cite{vlam:j:randomness} implies that $x$ is random relative to $y$ as well. Therefore, in the same way, $C^y(x {\upharpoonright} n) > n - 2 \log n - O(1) \geq C(x {\upharpoonright} n) - O(\log n)$.~\QE \medskip From Theorem~\ref{t:indepex} we can easily derive examples of pairs $(x,y)$ that are independent and which have constructive Hausdorff dimension $\epsilon$, for every rational $\epsilon > 0$. 
For example, if we start with $x$ and $y$ that are random with respect to each other and build $x' = x(1)~0 x(2)~0 \ldots$ ($\mbox{i.e.,}$ we insert $0$s in the even positions) and similarly build $y'$ from $y$, then $x'$ and $y'$ have constructive Hausdorff dimension equal to $1/2$ and are independent (because $C^{x'}(y' {\upharpoonright} n)$ and $C^{x}(y {\upharpoonright} (n/2))$ are within a constant of each other, as are $C(y' {\upharpoonright} n)$ and $C(y {\upharpoonright} (n/2))$). The pairs of sequences from Theorem~\ref{t:indepex} (plus those derived from them as above) and those from Remark~\ref{r:trivial} are the only examples of independent sequences that we know. Thus, currently, we have examples of independent pairs $(x,y)$ only for the case when $x$ has maximal prefix-free complexity ($\mbox{i.e.}$, $x$ is random) or $x$ is obtained via a straightforward transformation as above from a random sequence, and for the case when $x$ has minimal prefix-free complexity ($\mbox{i.e.}$, $x$ is $H$-trivial). We believe that for every $x$, there are sequences $y$ independent with it, and moreover we believe that the set of sequences independent with $x$ has measure one. For finitary-independence, these facts are true. \begin{theorem} Let $x$ be an arbitrary sequence and let $y$ be a sequence that is random conditioned by $x$. Then $x$ and $y$ are finitary-independent. \end{theorem} {\em Proof}. Suppose $x$ and $y$ are not finitary-independent. Then there are infinitely many $n$ with $C(y{\upharpoonright} n \mid x{\upharpoonright} n) < C(y{\upharpoonright} n) - 5 \log n$. Consider a constant $c_1$ satisfying $C(y{\upharpoonright} n) < n+ c_1$, for all $n$. We get (under our assumption) that, for infinitely many $n$, $C(y{\upharpoonright} n \mid x{\upharpoonright} n) < n - 5 \log n + c_1$. Then, by inequality~(\ref{e:lifting}), for infinitely many $n$, $C^{x{\upharpoonright} n}(y{\upharpoonright} n) < n - 3 \log n + c + c_1$.
Note that for every $n$ and every $m \geq n$, $C^{x{\upharpoonright} m}(y{\upharpoonright} n ) \leq C^{x{\upharpoonright} n}(y{\upharpoonright} n )$. Thus, for infinitely many $n$ and for all $m \geq n$, \begin{equation} \label{e:ineq1} C^{x{\upharpoonright} m}(y{\upharpoonright} n ) < n - 3 \log n + (c + c_1). \end{equation} On the other hand, $y$ is random conditioned by $x$. Therefore, for all $n$, $H^x (y{\upharpoonright} n)> n - O(1)$. Let $U'$ be the universal machine underlying the complexity $H(\cdot)$ and let $p^*$ be the shortest program such that $U'^x (p^*) = y{\upharpoonright} n$ (if there are ties, take $p^*$ to be the lexicographically smallest among the tying programs). Let $m(n) = \max (n, \mbox{use} (U'^x(p^*)))$. Note that, for all $n$, $H^x (y{\upharpoonright} n) = H^{x{\upharpoonright} m(n)} (y{\upharpoonright} n)$. It follows that, for every $n$, $H^{x{\upharpoonright} m(n)}(y{\upharpoonright} n )= H^x (y{\upharpoonright} n)> n - O(1)$. Recall that for all strings $u$ and $v$, $C^{v}(u) > H^{v}(u ) - 2 \log |u| - O(1)$. Thus, for every $n$, \begin{equation} \label{e:ineq2} C^{x{\upharpoonright} m(n)}(y{\upharpoonright} n ) > n - 2 \log n - O(1). \end{equation} Inequalities~(\ref{e:ineq1}) and~(\ref{e:ineq2}) are contradictory.~\QE \begin{theorem} \label{t:measone} For every $x$, the set $\{y \mid y \mbox{ finitary-independent with } x \}$ has measure one. \end{theorem} {\em Proof}. By the previous result, the set in the statement of the theorem contains the set $\{y \mid y \mbox{ random conditioned by } x \}$, which has measure one.~\QE \medskip Thus there are many (in the measure-theoretic sense) pairs of sequences that are finitary-independent. But is it possible to have such pairs satisfying a given constraint? We consider one instance of this general issue. \begin{proposition} \label{p:constraint} If $x$ is a random sequence, then there are $y$ and $z$ such that $(y,z)$ are finitary-independent and $x = y~\mathrm{XOR}~z$.
\end{proposition} {\em Proof}. Take a sequence $y$ finitary-independent with $x$. Then, by Proposition~\ref{p:weakindepXOR}, $y$ and $(x ~{\mathrm{XOR}}~ y)$ are finitary-independent. By taking $z = x ~{\mathrm{XOR}}~ y$, it follows that $x = y ~{\mathrm{XOR}}~ z$, with $y$ and $z$ finitary-independent.~\QE \begin{comment} \subsection{More elaborated example: $\Omega$ numbers that are finitary-independent} (PANIC - THIS SECTION IS FAR FROM COMPLETE AND VERY ROUGH.) We saw that pairs of finitary-independent sequences abound. Now we seek examples of finitary-independent sequences that have additional properties. \medskip {\bf Open question:} Is it true that for every $\epsilon_1 \in [0,1]$ and $\epsilon_2 \in [0,1]$, there are $x$ and $y$ c.e. sequences, with $x$ strictly-$\epsilon_1$-random and $y$ strictly-$\epsilon_2$-random, that are finitary-independent. ($\epsilon$-random means having constructive Hausdorff dimension $\epsilon$; strictly-$\epsilon$-random means $\epsilon$-random but not $\delta$-random, for every $\delta > \epsilon$.) \\ We do not know how to do the above, but the sketch below provides an example of two $\Omega$ numbers that are independent - which is interesting - but only one of them is c.e. The first one $x = \Omega_U$ is c.e., but the other one is an $\Omega$ relativized with $x$, and that is not c.e. Let $\sigma \in (0,1]$ be a computable real. A prefix-free Turing machine $U$ is {\em $\sigma$-universal} if for all prefix-free Turing machines $T$ there exists a constant $c_T$ such that for all programs $p\in \Sigma^*$, there exists $q\in \Sigma^*$ such that \[ U(q) = T(p) \mbox{ and } \sigma |q| < |p| + c_T. \] A Turing machine $U$ is {\em strictly $\sigma$-universal} if it is $\sigma$-universal but not $\delta$-universal for every $\delta > \sigma$. Theorem. For every $0 < \sigma \le 1$ computable, there effectively exists a strictly $\sigma$-universal prefix-free Turing machine. Theorem. Let $\sigma \in (0,1]$ be a computable real. 
If $U$ is a strictly $\sigma$-universal prefix-free Turing machine, then there exists a constant $c$ (depending on $U$) such that $H_U (\Omega_U [m]) \ge \sigma m -c$, and for every $\sigma < \delta \le 1$, computable, and every constant $a$, $H_U (\Omega_U [m]) < \delta m -a$, for infinitely many $m$. That is, $\Omega_U$ is $\sigma$-random, but not $\delta$-random. Comment: $\Omega_U$ is c.e. Next we do your trick: Start with a computable real $\sigma \in (0,1]$ and a strictly $\sigma$-universal prefix-free Turing machine $U$ and produce the c.e. $\sigma$-random, but not $\delta$-random $\Omega_U$. Construct the relativized $\sigma$-universal prefix-free Turing machine $W^{\Omega_{U}}$ and prove that it is {\bf $\Omega_U$-c.e.\ and $\sigma$-random but not $\delta$-random, hence the pair $(\Omega_U, \Omega_{W^{\Omega_{U}}})$ is independent.} Cris {\bf More details: July 21} Let $\Sigma = \{0,1\}$. Let $\varepsilon \in (0,1]$ be computable. A prefix-free Turing machine $U$ is {\em $\varepsilon$-universal} if for all prefix-free Turing machines $T$ there exists a constant $c=c_{U,T}$ such that for all programs $p\in \Sigma^*$, there exists $q\in \Sigma^*$ such that \[ U(q) = T(p) \mbox{ and } \varepsilon \cdot |q| < |p| + c. \] The {\em program-size complexity} of the string induced by a prefix-free Turing machine $W$, $H_{W}(x)$, is \[ H_W(x) = \min \{|p| \; : \; W(p) = x\}. \] \noindent {\bf Comment}. The prefix-free Turing machine $U$ is $\varepsilon$-universal iff for every (there exists a) universal prefix-free Turing machine $V$ there exists a constant $c = c_{U,V}$ such that for all $x\in \Sigma^{*}$: \[\varepsilon \cdot H_{U}(x) \le H_{V}(x) + c.\] {\thm \label{euniv} For every computable $\varepsilon \in (0,1]$, there effectively exists a $\varepsilon$-universal prefix-free Turing machine.} \medskip {\em Proof.} Let $V$ be a prefix-free universal Turing machine. 
Then define \[U_{\varepsilon}(0^{\lfloor (1/\varepsilon -1) |p|\rfloor} 1p) = V(p).\] It is straightforward to check that the machine $U_{\varepsilon}$ is prefix-free and, indeed, $\varepsilon$-universal (the constant is 1). \hfill $\Box$ {\thm The machine $U_{\varepsilon}$ has the following properties: \begin{enumerate} \item For all $x \in \Sigma^{*}, H_{U_{\varepsilon}} (x) = \lfloor H_{V}(x)/ \varepsilon \rfloor +1.$ \item The machine $U_{\varepsilon}$ is strictly $\varepsilon$-universal, that is, $U_{\varepsilon}$ is not $\delta$-universal for every $\varepsilon < \delta \le 1.$ \end{enumerate}} \medskip {\em Proof.} Indeed, if there were a constant $c$ such that for all $x \in \Sigma^{*}$ we would have $\delta \cdot H_{U_{\varepsilon}} (x) \le H_{V}(x) +c$, then in view of the previous property we would have: \[(\delta / \varepsilon -1) \cdot H_{V}(x) \le c,\] for all $x \in \Sigma^{*}$, a contradiction. \hfill $\Box$ \medskip The {\em halting probability}, given the measure $\mu(p)=2^{-|p|}$, of a prefix-free Turing machine $W$ is \[ \Omega_W = \sum_{W(p) \mbox{ halts}} 2^{-|p|}. \] The prefix of length $m$ of the real ${\bf x} = 0.x_{1}x_{2}\cdots x_{n}\cdots $ is denoted by ${\bf x}[m]$. Accordingly, the prefix of length $m$ of bits of the binary expansion of $ \Omega_W= 0.x_{1}x_{2}\cdots x_{n}\cdots$ is $ \Omega_W[m]= x_{1}x_{2}\cdots x_{m}$. \medskip {\thm {\bf [Chaitin]} \label{chaitinproof} The halting probability of a universal prefix-free Turing machine $V$ is (algorithmically) random, i.e.\ there exists a constant $c = c_{V}$ such that for all $m\ge1$, \[H_{V}(\Omega_{V}[m]) \ge m-c.\]} Let $\varepsilon \in (0,1]$ be computable and let $V$ be a universal prefix-free Turing machine. A real ${\bf x} =0. 
x_{1}x_{2}\cdots x_{n}\cdots $ is {\em Chaitin (algorithmically) $\varepsilon$-random} if there is a constant $c=c_{V}$ such that for all $m\ge1$, \[H_{V}({\bf x} [m]) \ge \varepsilon \cdot m-c.\] {\thm \label{euniv} Fix computable $\varepsilon \in (0,1]$. Then, the halting probability of $U_{\varepsilon}$ is Chaitin (algorithmically) $\varepsilon$-random.} \medskip {\em Proof.} By definition: \[\Omega_{U_{\varepsilon}} = \sum_{U_{\varepsilon}(p)<\infty} 2^{-|p|}.\] Given the construction of $U_{\varepsilon}$ (see the proof of Theorem~\ref{euniv}) we have \[\Omega_{U_{\varepsilon}} = \sum_{V(p)<\infty} 2^{-f(p)} = 0.\omega_{1}\omega_{2} \cdots, \] where $f(p) = \lfloor |p|/ \varepsilon \rfloor $ is a function from strings to positive integers, and the infinite binary sequence $\omega_{1}\omega_{2} \cdots$ contains infinitely many ones. Using Chaitin's proof of Theorem~\ref{chaitinproof} we consider a c.e.\ enumeration $\{p_{1}, p_{2}, \ldots \}$ of the domain of $U_{\varepsilon}$. Since $\varepsilon$ is c.e., there exists a p.c.\ function $\gamma$ (from binary strings to positive integers) such that \begin{equation} \label{approx} 0.\omega_{1}\omega_{2} \cdots \omega_{m} < \sum_{i=1}^{\gamma (\omega_{1}\omega_{2} \cdots \omega_{m})} 2^{-f(p_{i})}. \end{equation} For every $m>0$, \[\{p \in \Sigma^{*}\,:\, f(p) < m, V(p) < \infty\} \subseteq \{p_{1},p_{2}, \ldots, p_{\gamma (\omega_{1}\omega_{2} \cdots \omega_{m})}\},\] \noindent hence for all $m>0, i> \gamma (\omega_{1}\omega_{2} \cdots \omega_{m})$ we have: $f(p_{i})>m$. Take now $s\not\in \{V(p_{i})\,:\, i\le \gamma (\omega_{1}\omega_{2} \cdots \omega_{m})$. 
Then, there exists a $p_{k}$ such that $s = V(p_{k})$ for some $ k> \gamma (\omega_{1}\omega_{2} \cdots \omega_{m})$, so $f(p_{k})>m$, hence \[\varepsilon \cdot m < \varepsilon \cdot f(p_{k}) \le |p_{k}|.\] Consequently, $H_{V}(s) > \varepsilon \cdot m$, so in view of (\ref{approx}) we can construct a p.c.\ function $\Gamma$ (from strings to strings) such that \begin{equation} \label{em} H_{V}(\Gamma(\omega_{1}\omega_{2} \cdots \omega_{m})) > \varepsilon \cdot m. \end{equation} Next define the prefix-free Turing machine $C$ by $C(w) = \Gamma(V(w))$, and note that for all $x\in \Sigma^{*}$ we have: \begin{equation} \label{translation} H_{C}(\Gamma (x)) = H_{V}(x). \end{equation} Using the universality of $V$ (for $C$) we get a constant $c=c_{V,C}$ such that \[H_{V}(x) \le H_{C}(x) +c,\] \noindent hence by (\ref{translation}) \[H_{V}(\Gamma (x)) \le H_{C}(\Gamma (x)) +c \le H_{V}(x) +c,\] \noindent and finally by (\ref{em}) \[ \varepsilon \cdot m < H_{V}(\Gamma (\omega_{1}\omega_{2} \cdots \omega_{m}))) \le H_{V}(\omega_{1}\omega_{2} \cdots \omega_{m}) + c = H_{V}(\Omega_{U_{\varepsilon}}[m])+c,\] proving that $U_{\varepsilon}$ is $\varepsilon$-universal. \hfill $\Box$ \medskip The real ${\bf 0.x}$ is {\em strictly Chaitin $\varepsilon$-random} if it is Chaitin $\varepsilon$-random, but not Chaitin $\delta$-random, for every $1\ge \delta > \varepsilon$. \medskip {\sch Fix computable $\varepsilon \in (0,1]$. Then, the halting probability of $U_{\varepsilon}$ is strictly Chaitin (algorithmically) $\varepsilon$-random.} \medskip {\em Proof.} From Theorem~\ref{euniv} $U_{\varepsilon}$ is strictly Chaitin $\varepsilon$-random. The proof of the second part of Tadaki Theorem~3.2. can be easily adapted to show that \[H_{V}(\Omega_{U_{\varepsilon}}[m]) \le \varepsilon \cdot m + o(m),\] hence $U_{\varepsilon}$ cannot be $\delta$-random, for every $1\ge \delta > \varepsilon$. 
\hfill $\Box$ {\bf End -- More details: July 21} \end{comment} \begin{comment} {\bf More details - May 23} Let $\Sigma = \{0,1\}$. Let $\varepsilon \in (0,1]$ be computable. A prefix-free Turing machine $U$ is {\em $\varepsilon$-universal} if for all prefix-free Turing machines $T$ there exists a constant $c_{U,T}$ such that for all programs $p\in \Sigma^*$, there exists $q\in \Sigma^*$ such that \[ U(q) = T(p) \mbox{ and } \varepsilon \cdot |q| < |p| + c_{U,T}. \] The {\em program-size complexity} of the string induced by a prefix-free Turing machine $W$, $H_{W}(x)$, is \[ H_W(x) = \min \{|p| \; : \; W(p) = x\}. \] \noindent {\bf Comment}. The prefix-free Turing machine $U$ is $\varepsilon$-universal iff for every (there exists a) universal prefix-free Turing machine $V$ there exists a constant $c = c_{U,V}$ such that for all $x\in \Sigma^{*}$: \[\varepsilon \cdot H_{U}(x) \le H_{V}(x) + c.\] A prefix-free Turing machine $U$ is {\em strictly $\varepsilon$-universal} if it is $\varepsilon$-universal but not $\delta$-universal for every $1 \ge \delta > \varepsilon$. \medskip {\theorem \label{euniv} For every computable $\varepsilon \in (0,1)$, there effectively exists a strictly $\varepsilon$-universal prefix-free Turing machine.} \medskip {\em Proof.} Let $V$ be a universal universal prefix-free Turing machine and for every program $p \in {\rm dom} (V)$ define $\iota_{\varepsilon} = \lfloor \frac{1}{\varepsilon} -1\rfloor$ and \begin{equation} \label{U}U_{\iota_{\varepsilon}} (0^{\iota_{\varepsilon} (|p|-1)}1p) = V(p).\end{equation} Note that $|p|>0$, $\iota_{\varepsilon}\ge 1$, so $|0^{\iota_{\varepsilon} |p|}1p|> |p|.$ Moreover, the set \[\{0^{\iota_{\varepsilon} (|p|-1)}1p \,:\, p\in {\rm dom}(V)\}\] is prefix-free, so $U_{\iota_{\varepsilon}}$ is a prefix-free Turing machine. 
Furthermore, for all $x\in \Sigma^*$ \begin{equation} \label{uo} H_{U_{\iota_{\varepsilon}}(x)} = (\iota_{\varepsilon} +1)\cdot H_{V}(x) +(1-\iota_{\varepsilon}), \end{equation} hence, $U_{\iota_{\varepsilon}}$ is $\frac{1}{\iota_{\varepsilon}+1}$-universal; since, $\iota_{\varepsilon}+1 \le \varepsilon$, $U_{\iota_{\varepsilon}}$ is also $\varepsilon$-universal. Next we show that $U_{\iota_{\varepsilon}}$ cannot be $\delta$-universal for every $1\ge \delta > \frac{1}{\iota_{\varepsilon}+1}\raisebox{0.5ex}{.}$ \if01 Assume, by contradiction, that $U_{\iota_{\varepsilon}}$ is $\delta$-universal, that is, there exists a constant $c=c_{U_{\iota_{\varepsilon}},V}$ such that for each $p\in \Sigma^*$ there is a $q\in \Sigma^*$ with \[\delta \cdot |q| \le |p| + c, \mbox{ and } U_{\iota_{\varepsilon}}(q) = V(p).\] Because of the construction of $U$, $q = 0^{\iota_{\varepsilon} |r|}1r$, for some $r \in {\rm dom} (V)$, so the inequality above can be re-written as: \[\delta \cdot |0^{\iota_{\varepsilon} |r|}1r| \le |p| + c,\] \noindent and implies \begin{equation} \label{1} \frac{\delta \cdot |r|}{\varepsilon} \le |p|+c. \end{equation} Take now $x\in \Sigma^{*}$ and put $p=x^{*}$, where $x^{*}$ is the elegant (canonical) program for $x$ via $V$, that is, $x^{*} = \min \{y\in \Sigma^{*}\,:\, V(y)=c\}$, $\min$ being taken in quasi-lexicographical order. As $V(x^{*}) = V(r)$, $|x^{*}| \le |r|$, so using (\ref{1}) we get: \[|x^{*}| \le |r| \le \frac{\varepsilon}{\delta} (|x^{*}| +c),\] \noindent hence \[|x^{*}| \le \frac{\varepsilon c}{\delta -\varepsilon}\raisebox{0.3ex},\] which can hold only for finitely many elegant programs, a contradiction. {\bf Second} (better?) 
proof for $U_{\iota_{\varepsilon}}$ not being $\delta$-universal: \fi Assume, by contradiction, that there exists a constant $c=c_{U_{\iota_{\varepsilon}},V}$ such that for each $x\in \Sigma^*$ \[\delta \cdot H_{U_{\iota_{\varepsilon}}}(x) \le H_{V}(x) + c.\] For all $x\in \Sigma^{*}$ we have: \[ \delta \cdot H_{U_{\iota_{\varepsilon}}}(x) = \delta (\iota_{\varepsilon} +1) \cdot H_{V}(x) + \delta (1-\iota_{\varepsilon}) \le H_{V}(x) + c,\] \noindent hence \[(\delta (\iota_{\varepsilon} +1)-1) \cdot H_{V}(x) \le c-\delta (1-\iota_{\varepsilon}),\] and since $\delta (\iota_{\varepsilon} +1)>1$, \[H_{V} (x) \le \frac{c-\delta(1-\iota_{\varepsilon})}{\delta (\iota_{\varepsilon} +1)-1}\raisebox{0.5ex}{,}\] an impossibility. So far we have proved that $U_{\iota_{\varepsilon}}$ is strictly $\frac{1}{\iota_{\varepsilon}+1}$-universal. As it is possible that $\delta > \varepsilon$ but $\delta < \frac{1}{\iota_{\varepsilon}+1} $ the argument above does not directly produce a strictly $\varepsilon$-universal machine. However, from the computable $\varepsilon$ we can effectively compute the computable number \[f(\varepsilon) = \frac{1}{\lceil 1/\varepsilon \rceil}\raisebox{0.5ex}{,}\] such that \[\frac{1}{\iota_{f(\varepsilon)}+1} = \varepsilon,\] which finishes the proof. \hfill $\Box$ \medskip \medskip The {\em halting probability}, given the measure $\mu(p)=2^{-|p|}$, of a prefix-free Turing machine $W$ is \[ \Omega_W = \sum_{W(p) \mbox{ halts}} 2^{-|p|}. \] The prefix of length $m$ of the real ${\bf x} = 0.x_{1}x_{2}\cdots x_{n}\cdots $ is denoted by ${\bf x}[m]$. Accordingly, the prefix of length $m$ of bits of the binary expansion of $ \Omega_W= 0.x_{1}x_{2}\cdots x_{n}\cdots$ is $ \Omega_W[m]= x_{1}x_{2}\cdots x_{m}$. 
\smallskip {\theorem {\bf [Chaitin]} The halting probability of a universal prefix-free Turing machine $V$ is algorithmically random, i.e.\ there exists a constant $c = c_{V}$ such that for all $m\ge1$, \[H_{V}(\Omega_{V}[m]) \ge m-c.\]} Let $\varepsilon \in (0,1]$ be computable and let $W$ be a $\varepsilon$-universal prefix-free Turing machine $V$. A real ${\bf x} =0. x_{1}x_{2}\cdots x_{n}\cdots $ is {\em Chaitin $(\varepsilon,W)$-random} if there is a constant $c=c_{W}$ such that for all $m\ge1$, \[H_{W}({\bf x} [m]) \ge \varepsilon \cdot m-c.\] The real ${\bf 0.x}$ is {\em strictly Chaitin $(\varepsilon,W)$-random} if it is Chaitin $(\varepsilon,W)$-random, but not Chaitin $(\delta,W)$-random, for every $1\ge \delta > \varepsilon$. If $s>1$ is computable and $W$ is a prefix-free Turing machine, then \[\Omega_{W}(s) = \sum_{W(p) \mbox{ halts}} 2^{-s|p|}. \] {\theorem {\bf [Tadaki]} \label{Tadaki} If $s>1$ is computable, then $\Omega_V(s)$ is Chaitin $(1/s, V)$-random.} \if01 {\corollary Let $U$ be the machine constructed in Theorem~\ref{euniv} from the universal prefix-free Turing machine $V$. Then, $\Omega_{V}$ is Chaitin $(1/\varepsilon,U)$-random.} \medskip {\em Proof.} There is a constant $c$ such that for all $m\ge 1$: \[H_{U}(\Omega_{V}[m]) \ge \frac{1}{\varepsilon}\cdot H_{V}(\Omega_{V}[m]) \ge \frac{m}{\varepsilon}-c.\] \hfill$\Box$ \fi {\theorem \label{erand} Let $U_{\iota_{\varepsilon}}$ be the machine constructed in Theorem~\ref{euniv} from the universal prefix-free Turing machine $V$ via (\ref{U}). 
Then, $\Omega_{V}$ is Chaitin $(1,U_{\iota_{\varepsilon}})$-random.} \medskip {\em Proof.} In view of (\ref{uo}) we have \begin{eqnarray} \nonumber \Omega_{U_{\iota_{\varepsilon}}} & = & \sum_{U_{\iota_{\varepsilon}}(p) \mbox{ halts}} 2^{-|p|} \\ \nonumber & = & \sum_{V(q) \mbox{ halts}} 2^{-( 1- \iota_{\varepsilon}+ (1 +\iota_{\varepsilon})|q|)} \\ \nonumber & = & 2^{\iota_{\varepsilon} -1}\cdot \sum_{V(q) \mbox{ halts}} 2^{ -(1 +\iota_{\varepsilon})|q|} \\ \label{conver} & = & 2^{\iota_{\varepsilon} -1} \cdot \Omega_{V}(1 +\iota_{\varepsilon}) \le 1. \end{eqnarray} In view of Tadaki's Theorem, $ \Omega_{V}(1 +\iota_{\varepsilon}) $ is Chaitin $(\frac{1}{1 +\iota_{\varepsilon}},V)$-random, $2^{\iota_{\varepsilon} -1} \cdot \Omega_{V}(1 +\iota_{\varepsilon}) $ is Chaitin $(\frac{1}{1 +\iota_{\varepsilon}},V$)-random, hence $\Omega_{U_{\iota_{\varepsilon}}}$ is Chaitin $(\frac{1}{1 +\iota_{\varepsilon}},V)$-random. This means that there exists a constant $c$ such that for all $m\ge 1$ \begin{equation} \label{vu} H_{V}(\Omega_{U_{\iota_{\varepsilon}}}[m]) \ge \frac{m}{1 +\iota_{\varepsilon}}-c. \end{equation} Using the equation (\ref{uo}) we have: \[H_{V}(\Omega_{U_{\iota_{\varepsilon}}}[m]) = \frac{H_{U_{\iota_{\varepsilon}}}(\Omega_{U_{\iota_{\varepsilon}}}[m]) +(\iota_{\varepsilon}-1)} {1 +\iota_{\varepsilon}}\raisebox{0.5ex}{,}\] \noindent so by (\ref{vu}) we get \[H_{U_{\iota_{\varepsilon}}}(\Omega_{U_{\iota_{\varepsilon}}}[m]) \ge m - (1-\iota_{\varepsilon}+c(1 +\iota_{\varepsilon})),\] which shows that $\Omega_{U_{\iota_{\varepsilon}}}$ is Chaitin $(1,U_{\iota_{\varepsilon}})$-random. \if01 Next we prove that $\Omega_{U_{\iota_{\varepsilon}}}$ is strictly Chaitin $(\frac{1}{1+\iota_{\varepsilon}},V)$-random. In view of (\ref{vu}) we need to prove that for every $\delta > \frac{1}{1+\iota_{\varepsilon}}$, $\Omega_{U_{\iota_{\varepsilon}}}$ is not Chaitin $(\delta,V)$-random. 
Assume, by contradiction, that there is a constant $c>0$ such that for all $m\ge 1$, $H_{U_{\iota_{\varepsilon}}}(\Omega_{U_{\iota_{\varepsilon}}}[m]) \ge \delta m -c$. In view of (\ref{uo}) there exists a constant $c'>0$ such that for all $m\ge 1$: \begin{equation} \label{lbound} H_{U_{\iota_{\varepsilon}}}(\Omega_{U_{\iota_{\varepsilon}}}[m]) \ge (1+\iota_{\varepsilon})\delta m -c'.\end{equation} From the inequality \[\max_{|x|=m} H_{V}(x) \le m + \mbox{O}(\log m),\] \noindent using (\ref{uo}) we deduce \[\max_{|x|=m} H_{U_{\iota_{\varepsilon}}}(x)\le (1+\iota_{\varepsilon})\cdot \max_{|x|=m} H_{V}(x) + (1-\iota_{\varepsilon}) \le (1+\iota_{\varepsilon})m + \mbox{O}(\log m),\] \noindent which contradicts (\ref{lbound}). \fi \hfill$\Box$ \bigskip \noindent {\bf Comment}. As $U_{\iota_{\varepsilon}}$ is weaker than $V$, $\Omega_{U_{\iota_{\varepsilon}}}$ is more random for $U_{\iota_{\varepsilon}}$ than for $V$. \medskip \if01 {\theorem \label{erand} Let $\varepsilon \in (0,1)$ be computable. The halting probability of a $\varepsilon$-universal prefix-free Turing machine $U$ is Chaitin $(\varepsilon,U)$-random.} \medskip {\em Proof.} Let $f$ be a computable one-to-one function which enumerates ${\rm dom}(U)$. Let $\omega_k= \sum_{j=1}^{k} 2^{-|f(j)|}$. Clearly, $(\omega_k)$ is a computable, increasing sequence of rationals converging to $\Omega_U$. Consider the binary expansion of $\Omega_U = 0. \Omega_0 \Omega_1 \cdots$ We define a prefix-free Turing machine $\psi$ as follows: on input $x \in \Sigma^*,$ $\psi$ first ``tries to compute" $y=U(x)$ and the smallest number $t$ with $\omega_t\geq 0.y$. If successful, $\psi(x)$ is the first (in quasi-lexicographical order) string not belonging to the set $\{U(f(1)),U(f(2)),\ldots,U(f(t))\}$; otherwise, $C(x)=\infty$ if $U(x)=\infty$ or $t$ does not exist. If $x \in {\rm dom}(\psi)$ and $x^\prime$ is a string with $U(x)=U(x^\prime)$, then $\psi(x)=\psi(x^\prime)$. 
Applying this to $x\in {\rm dom}(\psi)$ and the elegant program $x^\prime=(U(x))^*$ of $U(x)$ yields \begin{equation} \label{psiU} H_\psi(\psi(x)) \leq |x^\prime| = H_U(U(x)). \end{equation} Furthermore, by $\varepsilon$-universality of $U$ and (\ref{psiU}), there is a constant $c=c_{U,\psi}$ such that for all $x\in {\rm dom}(\psi)$: \begin{equation} \label{equa:omegar2} \varepsilon \cdot H_U(\psi(x)) \leq H_\psi(\psi(x)) + c \leq H_U (U(x)) + c. \end{equation} Fix a number $m\ge 1$ and assume that $x$ is a program such that $U(x)=\Omega_{U}[m]$. Then $\psi$ is defined in $x$. Let $t$ be the smallest number (computed in the second step of the computation of $\psi$) with $\omega_t \geq 0.\Omega_U [m]$. We have \[ 0.\Omega_U [m] \leq \omega_t < \omega_t + \sum_{s=t+1}^\infty 2^{-|f(s)|} = \Omega_U \leq 0.\Omega_U[m] + 2^{-m} \,.\] Hence, $\sum_{s=t+1}^\infty 2^{-|f(s)|} \leq 2^{-m}$, which implies $|f(s)|\geq m$, for every $s \geq t+1$. \medskip From the construction of $\psi$ we conclude that $H_U(\psi(x)) \geq m$. Using $\varepsilon$-universality of $U$, and (\ref{equa:omegar2}) we obtain \begin{eqnarray*} n &\leq & H_U(\psi(x)) \\ &\leq & \frac{1}{\varepsilon} \cdot \left(H_\psi(\psi(x)) + c\right) \\ & = & \frac{1}{\varepsilon} \cdot ( H_U(\Omega_U[m]) + c). \end{eqnarray*} which proves that the sequence $\Omega_U$ is Chaitin $(\varepsilon,U)$-random. \\ Next we prove that $\Omega_U$ is not Chaitin $(\delta,U)$-random, for every $1\ge \delta > \varepsilon$. 
To this aim let $V$ be a universal prefix-free Turing machine and note that because \[\max_{|x|=m} H_{V}(x) \le m + \mbox{O}(\log m),\] and the $\varepsilon$-universality of $U$, for all $x\in \Sigma^{*}$, $\varepsilon \cdot H_{U}(x) \le H_{V}(x) + c$, for some fixed $c>0$ we have \[\max_{|x|=m} H_{U}(x) \le \max_{|x|=m} (H_{V}(x) + c) \le m + \mbox{O}(\log m).\] Assume, by contradiction, that $\Omega_{U}$ is $\delta$-universal, i.e.\ there is a constant $a>0$ such that $H_{U}(\Omega_{U}[m]) \ge \delta \cdot m -a$, for all $m\ge 1$. In view of the above inequality we get \[\delta \cdot m -a \le H_{U}(\Omega_{U}[m]) \le \frac{1}{\varepsilon} \cdot (m + \mbox{O}(\log m)) \] \hfill$\Box$ \fi {\corollary Let $\varepsilon \in (0,1)$ be computable. Let $U_{\iota_{\varepsilon}}$ be the machine constructed in Theorem~\ref{euniv} from the universal prefix-free Turing machine $V$ via (\ref{U}). Then, $\Omega_{U_{\iota_{\varepsilon}}}$ is Chaitin $(\frac{1}{1+\iota_{\varepsilon}},V)$-random, hence Chaitin $(\varepsilon,V)$-random.} \medskip {\em Proof. } The result follows from the above Theorem~\ref{erand} and (\ref{vu}). \hfill$\Box$ \bigskip {\theorem Let $\varepsilon \in (0,1)$ be computable. Let $U_{\iota_{\varepsilon}}$ be the machine constructed in Theorem~\ref{euniv} from the universal prefix-free Turing machine $V$ via (\ref{U}). Then, $\Omega_{V}$ is Chaitin $(1+\varepsilon,U_{\iota_{\varepsilon}})$-random. } \medskip {\em Proof. } For $x\in \Sigma^{*}$ let the elegant (canonical) program for $x$ via $V$ and $U_{\iota_{\varepsilon}}$ be $x^{*}, x^{\dagger}$, respectively: $x^{*} = \min \{y\in \Sigma^{*}\,:\, V(y)=x\}$, $x^{\dagger} = \min \{y\in \Sigma^{*}\,:\, U_{\iota_{\varepsilon}}(y)=x\}$, $\min$ being taken in quasi-lexicographical order. Let $p = (\Omega_{V}[m])^{*}$ and note that $0^{\varepsilon (|p|-1)}1p = (\Omega_{V}[m])^{\dagger}$. 
Using Chaitin's Theorem we get: \[H_{U_{\iota_{\varepsilon}}}(\Omega_{V}[m]) = (1+\varepsilon)H_{V}(\Omega_{V}[m]) + (1-\varepsilon) \ge (1+\varepsilon) m -c.\] \hfill$\Box$ \medskip \noindent {\bf Comment}. Again, as $U_{\iota_{\varepsilon}}$ is weaker than $V$, $\Omega_{V} $ looks more random to $U_{\iota_{\varepsilon}}$ than to $V$. \bigskip {\theorem {\bf [Tadaki]} \label{strictTadaki} Let $s > 1$ be computable and let $V$ be a universal prefix-free Turing machine. Then, \[H_{V}(\Omega_{V}(s)[m]) \le \frac{m}{s} + \mbox{o}(m).\] } {\corollary \label{cstrictTadaki} Let $s > 1$ be computable and let $V$ be a universal prefix-free Turing machine. Then, $\Omega_{V}(s)$ is strictly Chaitin $(1/s, V)$-random.} \medskip {\em Proof.} By Theorem~\ref{Tadaki}, $\Omega_{V}(s)$ is Chaitin $(1/s, V)$-random. For every $\delta > 1/s$, $\Omega_{V}(s)$ cannot be Chaitin $(\delta, V)$-random: otherwise, it would contradict Theorem~\ref{strictTadaki}. \hfill$\Box$ \medskip {\theorem \label{strictu0} Let $\varepsilon \in (0,1)$ be computable. Let $U_{\iota_{\varepsilon}}$ be the machine constructed in Theorem~\ref{euniv} from the universal prefix-free Turing machine $V$. Then, $\Omega_{U_{\iota_{\varepsilon}}}$ is strictly Chaitin $(\frac{1}{1+\iota_{\varepsilon}},V)$-random.} \medskip {\em Proof. } In view of (\ref{vu}) we need to prove that for every $\delta > \frac{1}{1+\iota_{\varepsilon}}$, $\Omega_{U_{\iota_{\varepsilon}}}$ is not Chaitin $(\delta,V)$-random: this follows from the equality (\ref{conver}) and Corollary~\ref{cstrictTadaki}.\hfill$\Box$ {\bf End - More details May 23} \end{comment} \section{Effective constructions of finitary-independent sequences} The examples of (finitary-) independent sequences that we have provided so far are existential ($\mbox{i.e.}$, non-constructive). In this section we investigate to what extent it is possible to effectively construct such sequences. 
We show some impossibility results and therefore we focus on the weaker type of independence, finitary-independence (clearly, if it is not possible to produce a pair of sequences that are finitary-independent, then it is also not possible to produce a pair of sequences that are independent). Since a C-logarithmic sequence is finitary-independent with any other sequence, the issue of constructibility is interesting if we also require that the sequences have complexity above that of C-logarithmic sequences (see Remark~\ref{r:trivial}). Such sequences are of course non-computable, and therefore the whole issue of constructibility appears to be a moot point. However, this is not so if we assume that we already have in hand one (or several) non-computable sequence(s), and we want to build additional sequences that are finitary-independent. Informally speaking, we investigate the following questions: \smallskip {\bf Question (a)} Is it possible to effectively construct from a sequence $x$ another sequence $y$ that is $\mbox{(finitary-)}$independent with $x$, where the independence is not trivial (recall Remark~\ref{r:trivial})? This question has two variants depending on whether we seek a uniform procedure ($\mbox{i.e.}$, one procedure that works for all $x$), or whether we allow the procedure to depend on $x$. \smallskip {\bf Question (b)} Is it possible to effectively construct from a sequence $x$ two sequences $y$ and $z$ that are $\mbox{(finitary-)}$independent, where the independence is not trivial? Again, there are uniform and non-uniform variants of this question. \smallskip We analyze these questions in Section~\ref{s:onesource}. Similar questions for the case when the input consists of two sequences $x_1$ and $x_2$ are tackled in Section~\ref{s:twosources}. 
\subsection{If we have one source} \label{s:onesource} We first consider the uniform variant of Question~(a): Is there a Turing reduction $f$ such that for all $x \in \{0,1\}^\infty$, $(x, f(x))$ are (finitary-)independent? We even relax the requirement and demand that $f$ should achieve this objective only if $x$ has positive constructive Hausdorff dimension (this only makes the following impossibility results stronger). As discussed above, we first eliminate some trivial instances of this question. Without any requirement on the algorithmic complexity of the desired $f(x)$, the answer is trivially YES because we can take $f(x) = 0^\omega$ (or any other computable sequence). Even if we only require that $f(x)$ is not computable, then the answer is still trivially YES because we can take $f(x)$ to be C-logarithmic. For example, consider $$f(x) =x(1) ~x(2)0 ~x(3)000 \ldots ~x(k) \underbrace{0 \ldots 0}_{2^{k-1}-1} \ldots .$$ Then $f(x)$ is C-logarithmic, but not computable provided $x$ is not computable, and $(x, f(x))$ are finitary-independent simply because $f(x)$ is C-logarithmic. As noted above, the question is interesting if we require $f(x)$ to have some ``significant'' amount of randomness whenever $x$ has some ``significant'' amount of randomness. We expect that in this case the answer should be negative, because, intuitively, one should not be able to produce independence (this is property P3 in Remark~\ref{r:properties}). We consider two situations depending on two different meanings of the concept of ``significant'' amount of randomness. \medskip {\bf Case 1:} We require that $f(x)$ is not C-logarithmic. We do not solve the question, but we show that every reduction $f$ that potentially does the job must have non-polynomial use. \begin{proposition} \label{p:notindep} Let $f$ be a Turing reduction. For every sequence $x$, if the function $use_f^x(n)$ is polynomially bounded, then $x$ and $f(x)$ are not finitary-independent, unless one of them is C-logarithmic. 
\end{proposition} {\em Proof}. Let $y$ be $f(x)$. For every $n$, let $m(n) = \max_{k \leq n} use_f^x(1^k)$. Then $y{\upharpoonright} n$ depends only on $x{\upharpoonright} m(n)$ and $m(n)$ is polynomial in $n$. Thus $C(y{\upharpoonright} n \mid x{\upharpoonright} m(n)) \leq O(\log n)$. If $x$ and $y$ were finitary-independent, then $C(y{\upharpoonright} n) \leq C(y {\upharpoonright} n \mid x {\upharpoonright} m(n)) + O(\log n + \log m(n)) \leq O(\log n) + O(\log m(n)) \leq O(\log n)$, for all $n$, $\mbox{i.e.}$, $y$ would be C-logarithmic.~\QE \medskip {\bf Case 2:} We require that $f(x)$ has complexity just above that of C-logarithmic sequences (in the sense below). We show that in this case, the answer to the uniform variant of Question (a) is negative: there is no such $f$. The following definition introduces a class of sequences having complexity just above that of C-logarithmic sequences. \begin{definition} \label{d:simpleio} A sequence $x$ is C-superlogarithmic if for every constant $c > 0$, $C(x {\upharpoonright} n) > c \log n$, for almost every $n$. \end{definition} The next proofs use the following facts. \begin{fact} {\rm (Variant of Theorem 3.1 in~\cite{nie-rei:c:wtt-Kolm-increase})} \label{f:NiesReimann} For all rationals $0 \leq \alpha < \beta < 1$, and for every set $S$ that is infinite and computable, there exists a sequence $x$ such that $\dim(x) = \alpha$ and for all wtt-reductions $f$, either $f(x)$ does not exist or $C(f(x){\upharpoonright} n) \leq \beta n$, for infinitely many $n$ in $S$. \end{fact} \begin{fact} \label{f:bds} {\rm (Variant of Theorem 3.1 in~\cite{bie-dot-ste:c:haussdimension})} For every Turing reduction $h$, for all rationals $0 < \alpha < \beta < 1$, and for every set $S$ that is infinite and computable, there is a sequence $x$ with $\dim(x) \geq \alpha$ such that either $h(x)$ does not exist or $C(h(x){\upharpoonright} n) < \beta n$, for infinitely many $n$ in $S$. 
\end{fact} \begin{fact} \label{f:one-two-sources} {\rm (Theorem 4.15 in~\cite{zim:t:extractKolm})} For any $\delta > 0$, there exist a constant $c$, a set $S$ that is infinite and computable, and a truth-table reduction $g: \{0,1\}^\infty \times \{0,1\}^\infty \rightarrow \{0,1\}^\infty$ ($\mbox{i.e.}$, $g$ is a Turing machine with two oracles) with the following property: If the input sequences $x$ and $y$ are finitary-independent and satisfy $C(x{\upharpoonright} n) > c \cdot \log n$ and $C(y{\upharpoonright} n) > c \cdot \log n$, for almost every $n$, then the output $z = g(x,y)$ satisfies $C(g(x,y) {\upharpoonright} n) > (1- \delta) \cdot n$, for almost every $n$ in $S$. \end{fact} Theorem 3.1 in~\cite{nie-rei:c:wtt-Kolm-increase} is for $S = {\mathbb N}$ (and is stronger in that $\alpha = \beta$) but its proof can be modified in a straightforward manner to yield Fact~\ref{f:NiesReimann}. Theorem 3.1 in~\cite{bie-dot-ste:c:haussdimension} is also for $S = {\mathbb N}$ and can also be modified in a simple manner -- using Fact~\ref{f:NiesReimann} -- to yield Fact~\ref{f:bds}. We can now state the impossibility results related to {\bf Case 2}. To simplify the structure of quantifiers in the statement of the following result, we posit here the following task for a function $f$ mapping sequences to sequences: \smallskip TASK A: for every $x \in \{0,1\}^\infty$ with ${\rm dim}(x) > 0$, the following should hold: \begin{itemize} \item[(a)] $f(x)$ exists. \item[(b)] $f(x)$ is C-superlogarithmic. \item[(c)] $x$ and $f(x)$ are finitary-independent. \end{itemize} \begin{theorem} \label{t:impossible-one-source} There is no Turing reduction $f$ that satisfies TASK A. \end{theorem} {\em Proof}. Suppose there exists $f$ satisfying (a), (b) and (c) in TASK A. Let $S$ be the infinite, computable set and let $g$ be the truth-table reduction promised by Fact~\ref{f:one-two-sources} for $\delta = 0.3$. Let $h$ be the Turing reduction $h(x) = g(x, f(x))$. 
Let $x^*$ be the sequence promised by Fact~\ref{f:bds} for $\alpha = 0.5$, $\beta = 0.6$, and the above set $S$ and Turing reduction $h$. On the one hand, by Fact~\ref{f:bds}, $C(h(x^*) {\upharpoonright} n) < 0.6n$, for infinitely many $n \in S$. On the other hand, by Fact~\ref{f:one-two-sources}, $C(h(x^*) {\upharpoonright} n) > 0.7n$, for almost every $n \in S$. We have reached a contradiction.~$\hspace*{\fill}\Box$ \medskip We next consider the uniform variant of Question~(b). \smallskip First we remark that, by van Lambalgen's Theorem \cite{vlam:j:randomness}, if the sequence $x$ is random, then $x_{even}$ and $x_{odd}$ are random relative to each other (where $x_{odd}$ is $x(1) x(3) x(5) \ldots $ and $x_{even}$ is $x(2) x(4) x(6) \ldots$). Thus, $x_{even}$ and $x_{odd}$ are certainly independent. Kautz~\cite{kau:t:kolmog} has shown a much more general result by examining the splittings of sequences obtained using bounded Kolmogorov-Loveland selection rules.\footnote{A Kolmogorov-Loveland selection rule is an effective process for selecting bits from a sequence. Informally, it is an iterative process and at each step, based on the bits that have been already read, a new bit from the sequence is chosen to be read and (before that bit is actually read) the decision on whether that bit is selected or not is taken. A \emph{bounded} Kolmogorov-Loveland selection rule satisfies a certain requirement of monotonicity for deciding the selected bits, see~\cite{kau:t:kolmog}.} He showed that if $x$ is a random sequence, $x_0$ is the subsequence of $x$ obtained by concatenating the bits of $x$ chosen by an arbitrary bounded Kolmogorov-Loveland selection rule, and $x_1$ consists of the bits of $x$ that were not selected by the selection rule, then $x_0$ and $x_1$ are random with respect to each other (and thus independent). \begin{comment} {\em Proof} (sketch). 
If we assume that $x_{even}$ and $x_{odd}$ are not independent, then $C(x_{odd} {\upharpoonright} n \mid x_{even}{\upharpoonright} n) < C(x_{odd} {\upharpoonright} n ) - 3 \log n$, for infinitely many $n$. Since, for all $n$, $C(x{\upharpoonright} 2n) \leq C(x_{even} {\upharpoonright} n) + C(x_{odd} {\upharpoonright} n \mid x_{even}{\upharpoonright} n )+ O(1)$, it follows that $C(x{\upharpoonright} 2n) \leq 2n - 3\log n + O(1)$ for infinitely many $n$, and thus, $H(x{\upharpoonright} 2n) \leq 2n - \log n + O(1)$ for infinitely many $n$, which contradicts that $x$ is random.~\QE \end{comment} We show that the analogous result for sequences with constructive Hausdorff dimension $\sigma \in (0,1)$ does not hold. In fact, our next result is stronger, and essentially gives a negative answer to the uniform variant of Question~(b). We posit the following task for two functions $f_1$ and $f_2$ mapping sequences to sequences: \smallskip TASK B: for every $x \in \{0,1\}^\infty$ with ${\rm dim}(x) > 0$, the following should hold: \begin{itemize} \item[(a)] $f_1(x)$ and $f_2(x)$ exist, \item[(b)] $f_1(x)$ and $f_2(x)$ are C-superlogarithmic, \item[(c)] $f_1(x)$ and $f_2(x)$ are finitary-independent. \end{itemize} \begin{theorem} \label{t:impossible-two-sources} There are no Turing reductions $f_1$ and $f_2$ satisfying TASK B. \end{theorem} {\em Proof}. Similar to the proof of Theorem~\ref{t:impossible-one-source}.~\QE \medskip The non-uniform variants of Questions~(a) and~(b) remain open. In the particular case when the reductions involved are wtt-reductions, we present impossibility results analogous to those in Theorem~\ref{t:impossible-one-source} and Theorem~\ref{t:impossible-two-sources}. The proofs are similar, with the difference that we use Fact~\ref{f:NiesReimann} instead of Fact~\ref{f:bds}. 
\begin{theorem} \label{t:wtt-1} For all rational $\sigma \in (0,1)$, there exists $x$ with $\dim (x) = \sigma$ such that for every wtt-reduction $f$, at least one of the following holds true: \begin{itemize} \item[(a)] $f(x)$ does not exist, \item[(b)] $f(x)$ is not finitary-independent with $x$, \item[(c)] $f(x)$ is not C-superlogarithmic. \end{itemize} \end{theorem} \begin{theorem} \label{t:unif-impossible-two-sources} For all rational $\sigma \in (0,1)$, there exists $x$ with $\dim (x) = \sigma$ such that for all wtt-reductions $f_1$ and $f_2$, at least one of the following holds true: \begin{itemize} \item[(a)] $f_1(x)$ does not exist or $f_2(x)$ does not exist, \item[(b)] $f_1(x)$ and $f_2(x)$ are not finitary-independent, \item[(c)] $f_1(x)$ is not C-superlogarithmic or $f_2(x)$ is not C-superlogarithmic. \end{itemize} \end{theorem} Theorem~\ref{t:unif-impossible-two-sources} has an interesting implication regarding sequences with constructive Hausdorff dimension in the interval $(0,1)$. Suppose, for example, that we want to construct a sequence with constructive Hausdorff dimension 1/2. The first idea that comes to mind is to take a random sequence $x = x(1) x(2) \ldots$ and either consider the sequence $y = x(1) 0 x(2) 0 \ldots$ (we insert $0$s in all even positions) or the sequence $z= x(1)x(1) x(2) x(2) \ldots$ (we double every bit). The sequences $y$ and $z$ have constructive Hausdorff dimension 1/2. Theorem~\ref{t:unif-impossible-two-sources} shows, roughly speaking, that there are sequences with dimension strictly between $0$ and $1$ whose partial randomness cannot be due to either of the two methods stated above. Formally, for every rational $\sigma \in (0,1)$, there is a sequence $x$ with $\mathrm{dim}(x) = \sigma$ so that no matter what wtt method we use for selecting from $x$ two subsequences, either one of the resulting subsequences has low complexity or the two resulting subsequences are not independent. 
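The two dilution methods mentioned above (inserting a $0$ after every bit, and doubling every bit) are simple computable transformations acting prefix-by-prefix; a minimal Python sketch on finite prefixes (the function names are ours, introduced only for illustration):

```python
def insert_zeros(prefix):
    """y = x(1) 0 x(2) 0 ...: pad a 0 after every bit of x."""
    out = []
    for bit in prefix:
        out.extend([bit, 0])
    return out

def double_bits(prefix):
    """z = x(1) x(1) x(2) x(2) ...: repeat every bit of x."""
    out = []
    for bit in prefix:
        out.extend([bit, bit])
    return out
```

In both cases a prefix of $n$ bits of $x$ determines $2n$ bits of the output, so only about half of the output bits carry independent information, which is the intuition behind the resulting constructive Hausdorff dimension 1/2.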
\subsection{If we have two sources} \label{s:twosources} We have seen some limits on the possibility of constructing finitary-independent sequences starting from one sequence. What if we are given two finitary-independent sequences: is it possible to construct from them more finitary-independent sequences? First we observe that if $x$ and $y$ are two independent sequences and $g$ is an arbitrary Turing reduction, then it does not necessarily follow that $x$ and $g(y)$ are independent (as one may expect). On the other hand, it does follow that $x$ and $g(y)$ are finitary-independent. \begin{proposition}{\rm \cite{frank}}\label{p:createdep} There are two independent sequences $x$ and $y$ and a Turing reduction $g$ such that $x$ and $g(y)$ are not independent. \end{proposition} {\em Proof}. Let $z$ be a random sequence and let $u, v$, and $w$ be sequences such that $z = u \oplus v \oplus w$. By van Lambalgen's Theorem \cite{vlam:j:randomness}, each of the sequences $u, v$, and $w$ is random relative to the join of the other two. We define the sequences $x$ and $y$ as follows: \[ \begin{array}{ll} x(2^n) & = u(n), \mbox{ for all $n \in \mathbb{N}$} \\ x(m) &= v(m), \mbox{ for every $m$ that is not a power of $2$}\\ y(2^n) & = u(n), \mbox{ for all $n \in \mathbb{N}$} \\ y(m) &= w(m), \mbox{ for every $m$ that is not a power of $2$} \end{array} \] \begin{claim} The sequences $x$ and $y$ are independent. \end{claim} {\em Proof}. Suppose $x$ and $y$ are not independent. Then, similarly to Proposition~\ref{p:cond} (e), for infinitely many $n$, $C^x (y {\upharpoonright} n) < C(y {\upharpoonright} n) - 7 \log n$. 
Then \[ \begin{array}{ll} C^{u \oplus v}(w {\upharpoonright} n) & \leq C^{u \oplus v}(y {\upharpoonright} n) + 2 \log n + O(1) \\ & \quad\quad \mbox{(because $w{\upharpoonright} n$ and $y {\upharpoonright} n$ differ in only $\log n$ bits)}\\ & \leq C^x (y {\upharpoonright} n) + 2 \log n + O(1) \\ & \quad \quad \mbox{(because queries to $x$ can be replaced by queries to $u$ and $v$)} \\ & \leq C(y {\upharpoonright} n) - 7 \log n + 2 \log n +O(1), \\ & \quad \quad \mbox{for infinitely many $n$} \\ & \leq C(w {\upharpoonright} n) + 2 \log n - 7 \log n + 2 \log n +O(1) \\ & = C(w {\upharpoonright} n) - 2 \log n +O(1) \\ & \leq n - 3 \log n + O(1). \end{array} \] This contradicts that $w$ is random with respect to $u \oplus v$.~\QE \medskip It is easy to define a Turing reduction $g$ such that $g(y) = u$. Notice that $C^x(u {\upharpoonright} n) = O(\log n)$, because $u$ is many-one reducible to $x$. On the other hand $C(u {\upharpoonright} n) \geq n - 2\log n +O(1)$, for every $n$, because $u$ is random. Therefore $x$ and $g(y)$ are not independent.~\QE We do not know if the facts that $x$ and $y$ are finitary-independent and $g$ is a Turing reduction imply that $x$ and $g(y)$ are finitary-independent. This would show that finitary-dependency cannot be created. The following weaker result holds. \begin{proposition} \label{p:no-dependency} If $x$ and $y$ are independent, and $g$ is a Turing reduction, then $x$ and $g(y)$ are finitary-independent (provided $g(y)$ exists). \end{proposition} {\em Proof}. Since $x$ and $y$ are independent, there exists a constant $c$ such that for all $n$, \[ C^y(x {\upharpoonright} n) \geq C(x {\upharpoonright} n) - c \log n. \] Suppose that $x$ and $g(y)$ are not finitary-independent. Then there are infinitely many $n$ such that $C(x{\upharpoonright} n \mid g(y){\upharpoonright} n) < C(x {\upharpoonright} n) - (c+4) \log n$. Since $C^y(x {\upharpoonright} n) \leq C(x {\upharpoonright} n \mid g(y){\upharpoonright} n) + 2 \log n + O(1)$, it would follow that, for infinitely many $n$, \[ C^y(x {\upharpoonright} n) \leq C(x {\upharpoonright} n) - (c+1)\log n, \] which contradicts the first inequality.~\QE \medskip \begin{corollary} \label{c:windepnotindep} There are sequences that are finitary-independent but not independent. \end{corollary} {\em Proof}. 
The sequences $x$ and $g(y)$ from Proposition~\ref{p:createdep} are not independent, but they are finitary-independent by Proposition~\ref{p:no-dependency}.~\QE \medskip As we mentioned, we do not know if Proposition~\ref{p:no-dependency} can be strengthened to hold if $x$ and $y$ are finitary-independent. However, for such $x$ and $y$, there exists a simple procedure that, starting with the pair $(x,y)$, produces a new pair of finitary-independent sequences. Namely, we take the pair $(x, y_{odd})$. \begin{proposition} If $x$ and $y$ are finitary-independent, then $x$ and $y_{odd}$ are finitary-independent. \end{proposition} {\em Proof}. Suppose that for every constant $c$ there are infinitely many $n$ such that $C(x {\upharpoonright} n \mid y_{odd} {\upharpoonright} n) < C(x {\upharpoonright} n) - c \cdot \log n$. Note that, for all $n$, $C(x {\upharpoonright} n \mid y {\upharpoonright} 2n) \leq C(x {\upharpoonright} n \mid y_{odd} {\upharpoonright} n) +O(1)$. Our assumption implies that for every constant $c$ there are infinitely many $n$ such that $C(x {\upharpoonright} n \mid y {\upharpoonright} 2n) < C(x {\upharpoonright} n) - c \log n +O(1)$. By Proposition~\ref{p:cond}(a), this contradicts the fact that $x$ and $y$ are finitary-independent.~\QE \medskip The next issue that we study is whether, given a pair of $\mbox{(finitary-)independent}$ sequences $(x,y)$, it is possible to effectively produce another sequence that is $\mbox{(finitary-)independent}$ with both $x$ and $y$. We give a positive answer for the case when $x$ and $y$ are both random. The analogous question for non-random $x$ and $y$ remains open (but see Section~\ref{s:finiteindep}). 
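The passage from $y$ to $y_{odd}$ used in the proposition above, and the even/odd splitting invoked earlier with van Lambalgen's theorem, are straightforward computable selections; a minimal Python sketch on finite prefixes (the function names are ours, introduced only for illustration):

```python
def odd_part(prefix):
    """x_odd = x(1) x(3) x(5) ...: bits at odd positions (1-based)."""
    return prefix[0::2]

def even_part(prefix):
    """x_even = x(2) x(4) x(6) ...: bits at even positions (1-based)."""
    return prefix[1::2]

def interleave(a, b):
    """Recombine: interleave(odd_part(x), even_part(x)) restores x."""
    out = []
    for u, v in zip(a, b):
        out.extend([u, v])
    return out
```

Note that $n$ bits of $y_{odd}$ are determined by the first $2n$ bits of $y$, which is exactly the inequality $C(x {\upharpoonright} n \mid y {\upharpoonright} 2n) \leq C(x {\upharpoonright} n \mid y_{odd} {\upharpoonright} n) + O(1)$ used in the proof.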
\begin{theorem} \label{t:doubleindep} There exists an effective transformation $f$ with polynomially-bounded use such that if $x$ and $y$ are random and independent (respectively finitary-independent), then $f(x,y)$ is independent (respectively, finitary-independent) with both $x$ and $y$, and the independence is not trivial (recall Remark~\ref{r:trivial}). \end{theorem} {\bf Remark:} Contrast with Proposition~\ref{p:notindep}, where we have shown that for every $x$ and every effective transformation $f$ with polynomially-bounded use, $x$ and $f(x)$ are not finitary-independent (unless one of them is C-logarithmic). \smallskip {\em Proof}. We take $f(x,y) = x ~{\mathrm{XOR}}~ y$ and take into account Proposition~\ref{p:weakindepXOR}.~\QE \subsection{Producing independence: the finite case} \label{s:finiteindep} An interesting issue is whether, given as input several sequences that are $\mbox{(finitary-)}$independent, there is an effective way to construct a sequence that is $\mbox{(finitary-)}$independent with each sequence in the input (and the independence is not trivial). A result of this type is obtained for the case when the input consists of two random sequences $x$ and $y$ in Theorem~\ref{t:doubleindep}. We do not know if in Theorem~\ref{t:doubleindep} we can remove the assumption that $x$ and $y$ are random. In what follows we will consider the simpler case of strings. In this setting we are able to give a positive answer for the situation when we start with three\footnote{The case when the input consists of two independent strings remains open.} input strings that are independent (and not necessarily random). First we define the analogue of independence for strings. \begin{definition} Let $c \in \mathbb{R}^+$ and $k \in \mathbb{N}$. We say that strings $x_1, x_2, \ldots, x_k$ in $\{0,1\}^*$ are $c$-independent if \[ C(x_1 x_2 \ldots x_k) \geq C(x_1) + C(x_2) + \ldots + C(x_k) - c (\log|x_1| + \log|x_2| + \ldots + \log|x_k|). 
\] \end{definition} \if01 {\bf $C(x_k) - c (\log|x_1|x_2| \cdots |x_k|)$ is not better than $C(x_k) - c (\log|x_1| + \log|x_2| + \ldots + \log|x_k|)$??} \fi The main result of this section is the following theorem, whose proof draws from the techniques of~\cite{zim:t:extractKolm}. \begin{theorem} \label{t:stringindep} For all constants $\sigma > 0$ and $\sigma_1 \in (0, \sigma)$, there exists a computable function $f: \{0,1\}^* \times \{0,1\}^* \times \{0,1\}^* \rightarrow \{0,1\}^*$ with the following property: For every $c \in \mathbb{R}^+$ there exists $c \in \mathbb{R}^+$ such that if the input consists of a triplet of $c$-independent strings having sufficiently large length $n$ and plain complexity at least $\sigma \cdot n$, then the output is $c$-independent with each element in the input triplet and has length $\lfloor \sigma_1 n \rfloor$. More precisely, if \begin{itemize} \item[\rm (i)]$(x,y,z)$ are $c$-independent, \item[\rm (ii)] $|x| = |y| = |z| = n$, and \item[\rm (iii)]$C(x) \geq \sigma \cdot n$, $C(y) \geq \sigma \cdot n$, $C(z) \geq \sigma \cdot n$, \end{itemize} then, provided $n$ is large enough, the following pairs of strings $(f(x,y,z), x)$, $(f(x,y,z), y)$, $(f(x,y,z), z)$ are $c$-independent, $|f(x,y,z)| = \lfloor \sigma_1 n\rfloor$, and $C(f(x,y,z)) \geq \lfloor \sigma_1 n\rfloor - O(\log n)$. \end{theorem} Before we delve into the proof, we establish several preliminary facts. \begin{lemma} \label{l:threeindep} If $x_1, x_2, x_3$ are three strings that are $c$-independent, then \[ C(x_1 \mid x_2 x_3) \geq C(x_1) - (c+2)(\log|x_1| + \log |x_2| + \log |x_3|) - O(1). \] \end{lemma} {\em Proof}. The following inequalities hold for every three strings and in particular for the strings $x_1$, $x_2$, and $x_3$: \[ C(x_1 x_2 x_3) \leq C(x_2 x_3) + C(x_1 \mid x_2 x_3) + 2 \log |x_1| +O(1), \] and \[ C(x_2 x_3) \leq C(x_2) + C(x_3) + 2 \log|x_2| + O(1). 
\] Then \[ \begin{array}{ll} C(x_1 \mid x_2 x_3) & \geq C(x_1 x_2 x_3) - C(x_2 x_3) - 2 \log |x_1| - O(1) \\ & \geq C(x_1) + C(x_2) + C(x_3) - c (\log |x_1| + \log |x_2| + \log |x_3|) \\ &\phantom{x} - (C(x_2) + C(x_3) + 2 \log |x_2| + O(1)) - 2 \log |x_1| - O(1) \\ & \geq C(x_1) - (c+2) (\log|x_1| + \log |x_2| + \log |x_3|) - O(1). \end{array} \] ~\QE \medskip The next lemma establishes a combinatorial fact about the possibility of coloring the cube $[N] \times [N] \times [N]$ with $M$ colors such that every planar rectangle contains all the colors in about the same proportion. Here $N$ and $M$ are natural numbers, $[N]$ denotes the set $\{1, 2, \ldots, N\}$, $[M]$ denotes the set $\{1, 2, \ldots, M\}$, and a planar rectangle is a subset of $[N] \times [N] \times [N]$ having one of the following three forms: $B_1 \times B_2 \times \{k\}$, $B_1 \times \{k\} \times B_2$, or $\{k\} \times B_1 \times B_2$, where $k \in [N]$, $B_1 \subseteq [N]$ and $B_2 \subseteq [N]$. \begin{lemma} \label{l:coloring} Let $0 < \sigma_1 < \sigma_2 < 1$. For every $n$ sufficiently large, it is possible to color the cube $[2^n] \times [2^n] \times [2^n]$ with $M = 2^{\lfloor \sigma_1 n \rfloor}$ colors in such a way that every planar rectangle satisfying $\lVert B_1 \rVert = a 2^{\lceil \sigma_2 n \rceil}$ and $\lVert B_2 \rVert = b 2^{\lceil \sigma_2 n \rceil}$ for some natural numbers $a$ and $b$ contains at most $(2/M) \lVert B_1 \rVert \lVert B_2 \rVert$ occurrences of color $c$, for every color $c \in [M]$. \end{lemma} {\em Proof}. We use the probabilistic method. Let $N = 2^n$. We color each cell of the $[N] \times [N] \times [N]$ cube with one color chosen independently and uniformly at random from $[M]$. For $i,j,k \in [N]$, let $T(i,j,k)$ be the random variable that designates the color of the cell $(i,j,k)$ in the cube.
For every fixed cell $(i,j,k)$ and for every fixed color $c \in [M]$, ${\rm Prob}(T(i,j,k) = c) = 1/M$, because the colors are assigned independently and uniformly at random. Let us first consider some fixed subsets $B_1$ and $B_2$ of $[N]$ having size $2^{\lceil \sigma_2 n \rceil}$, a fixed $k \in [N]$, and a fixed color $c \in [M]$. Let $A$ be the event ``the fraction of occurrences of $c$ in the planar rectangle $B_1 \times B_2 \times \{k\}$ is greater than $2/M$." Using the Chernoff bounds, it follows that $${\rm Prob}(A) < e^{-(1/3) (1/M) N^{2\sigma_2}}.$$ The same upper bounds hold for the probabilities of the similar events regarding the planar rectangles $B_1 \times \{k\} \times B_2$ and $\{k\} \times B_1 \times B_2$. Thus, if we consider the event $B$ ``there is some color with a fraction of appearances in one of the three planar rectangles mentioned above greater than $(2/M)$", then, by the union bound, \begin{equation} \label{e:union} {\rm Prob}(B) < 3M e^{-(1/3) (1/M) N^{2\sigma_2}}. \end{equation} The number of ways to choose $B_1 \subseteq [N]$ with $\lVert B_1 \rVert = 2^{\lceil \sigma_2 n \rceil}$, $B_2 \subseteq [N]$ with $\lVert B_2 \rVert = 2^{\lceil \sigma_2 n \rceil}$ and $k \in [N]$ is approximately (ignoring the truncation) ${ N \choose N^{\sigma_2}} \cdot{ N \choose N^{\sigma_2}} \cdot N$, which is bounded by \begin{equation} \label{e:comb} e^{2N^{\sigma_2}} \cdot e^{2N^{\sigma_2} (1-\sigma_2) \ln(N)} \cdot e^{\ln N}, \end{equation} (we have used the inequality ${n \choose k} < (en/k)^k$). Clearly, for our choice of $M$, the right hand side in~(\ref{e:comb}) times the right hand side in~(\ref{e:union}) is less than $1$. It means that there exists a coloring where no color appears a fraction larger than $(2/M)$ in any planar rectangle with $B_1$ and $B_2$ having size exactly $2^{\lceil \sigma_2 n \rceil}$.
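The final counting step can be checked numerically for concrete parameters. The following sketch (the specific values $n=100$, $\sigma_1=0.3$, $\sigma_2=0.5$ are our own illustration, not taken from the text) compares the logarithm of the product of the bounds in~(\ref{e:comb}) and~(\ref{e:union}) with $0$, working with logarithms to avoid overflow.

```python
import math

def log_failure_bound(n, sigma1, sigma2):
    """Natural log of (number of rectangle choices) * (per-rectangle
    failure probability), i.e. of the product of the two bounds."""
    N_s2 = 2.0 ** (sigma2 * n)        # N^{sigma_2}, with N = 2^n
    ln_N = n * math.log(2.0)
    M = 2.0 ** (sigma1 * n)           # number of colors, M ~ 2^{sigma_1 n}
    # ln of e^{2 N^{s2}} * e^{2 N^{s2} (1 - s2) ln N} * e^{ln N}  (choices of B1, B2, k)
    ln_count = 2 * N_s2 + 2 * N_s2 * (1 - sigma2) * ln_N + ln_N
    # ln of 3 M e^{-(1/3)(1/M) N^{2 s2}}  (union bound over colors and orientations)
    ln_prob = math.log(3.0 * M) - (2.0 ** (2 * sigma2 * n)) / (3.0 * M)
    return ln_count + ln_prob

# The negative Chernoff exponent scales as N^{2 sigma_2}/M = 2^{(2 sigma_2 - sigma_1) n},
# which dominates the counting term ~ 2^{sigma_2 n} ln N whenever sigma_1 < sigma_2.
print(log_failure_bound(100, 0.3, 0.5) < 0)   # True: the product is below 1
```

For small $n$ the product exceeds $1$, which is why the lemma is stated only for $n$ sufficiently large.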
For planar rectangles having the sizes of $B_1$ and $B_2$ an integer multiple of $2^{\lceil \sigma_2 n \rceil}$, the assertion holds as well because such rectangles can be partitioned into subrectangles having the size exactly $2^{\lceil \sigma_2 n \rceil}$.~\QE \medskip {\em Proof} of {\bf Theorem~\ref{t:stringindep}.} We take $n$ sufficiently large so that all the following inequalities hold. Let $x^*$, $y^*$ and $z^*$ be a triplet of strings of length $n$ satisfying the assumptions in the statement. Let $N = 2^n$ and let us consider a constant $\sigma_2 \in (\sigma_1, \sigma)$. By exhaustive search we find a coloring $T: [N] \times [N] \times [N] \rightarrow [M]$ satisfying the properties in Lemma~\ref{l:coloring}. Identifying the strings $x^*$, $y^*$ and $z^*$ with their indices in the lexicographical ordering of $\Sigma^n$, we define $w^* = T(x^*, y^*, z^*)$. Note that the length of $w^*$ is $\log M = \lfloor \sigma_1 n \rfloor$, which we denote $m$. We will show that $C(w^* \mid z^*) \geq m - c' \log m$, for $c' = 3c + d + 13$, for a constant $d$ that will be specified later. Since $C(w^*) \leq m + O(1)$, it follows that $w^*$ and $z^*$ are independent. In a similar way, it can be shown that $w^*$ and $x^*$ are independent, and $w^*$ and $y^*$ are independent. For the sake of obtaining a contradiction, suppose that $C(w^* \mid z^*) < m - c' \log m$. The set $A = \{w \mid C(w \mid z^*) < m - c' \log m \}$ has size $< 2^{m - c' \log m}$ and, by our assumption, contains $w^*$. Let $t_1$ be such that $C(x^*) = t_1$ and $t_2$ be such that $C(y^* \mid z^*) = t_2$. Note that $t_1 > \sigma_2 n$. The integer $t_2$ is also larger than $\sigma_2 n$, because $C(y^* \mid z^*) \geq C(y^* \mid z^* x^*) - 2 \log n - O(1) \geq C(y^*) - (c+4) (3 \log n) - O(1) \geq \sigma n - (3c+12) \log n - O(1)> \sigma_2 n$. For the second inequality we have used Lemma~\ref{l:threeindep}. Let $B_1 = \{x \in \Sigma^n \mid C(x) \leq t_1\}$. Note that $B_1$ has size bounded by $2^{t_1 + 1}$.
We take a set $B_1'$ including $B_1$ having size exactly $2^{t_1 + 1}$. Similarly, let $B_2 = \{y \in \Sigma^n \mid C(y \mid z^*) \leq t_2\}$ and let $B_2'$ be a set that includes $B_2$ and has size exactly $2^{t_2+1}$. Let $k$ be the index of $z^*$ in the lexicographical ordering of $\Sigma^n$. By Lemma~\ref{l:coloring}, it follows that for every $a \in [M]$, $$\lVert T^{-1} (a) \cap (B_1' \times B_2' \times \{k\}) \rVert \leq (2/M) \lVert B_1' \rVert \lVert B_2' \rVert .$$ Consequently, \[ \begin{array}{ll} \lVert T^{-1}(A) \cap (B_1 \times B_2 \times \{k\}) \rVert & \leq \lVert T^{-1}(A) \cap (B_1' \times B_2' \times \{k\}) \rVert \\[1ex] & = \sum_{a \in A} \lVert T^{-1}(a) \cap (B_1' \times B_2' \times \{k\}) \rVert \\[1ex] & < 2^{m - c' \log m} \cdot (2/2^m) \lVert B_1' \rVert \lVert B_2' \rVert = 2^{t_1 + t_2 + 3 - c' \log m}. \end{array} \] Note that given $z^*$, $m - c' \log m$, $t_1$ and $t_2$, we can enumerate $T^{-1}(A) \cap (B_1 \times B_2 \times \{k\})$. Since $(x^*, y^*, z^*)$ is in this set, it follows that the complexity of $x^*y^*$ given $z^*$ is bounded by the rank of the triplet $(x^*, y^*, z^*)$ in a fixed enumeration of the set and the information needed to perform the enumeration. Thus, \[ \begin{array}{ll} C(x^* y^* \mid z^*) & \leq t_1 + t_2 + 3 - c' \log m + 2 \log(m -c'\log m) + 2 \log t_1 + 2 \log t_2 +O(1) \\ & \leq t_1 + t_2 - (c'-2) \log m + 2 \log t_1 + 2 \log t_2 +O(1). \end{array} \] On the other hand, by the conditional version of the Symmetry of Information Equation~(\ref{e:symmetry}), there exists a constant $d$ such that for all strings $u,v,w$, $C(uv \mid w) \geq C(v \mid w) + C(u \mid vw) - d (\log |uv|)$. It follows that \[ \begin{array}{ll} C(x^* y^* \mid z^*) & \geq C(y^* \mid z^*) + C(x^* \mid y^* z^*) - d \log n -O(1) \\ & \geq t_2 + t_1 - (c+2) (3 \log n) - d \log n - O(1)\\ & = t_1 + t_2 - (3c + d + 6) \log n - O(1). \end{array} \] For the second inequality we have used Lemma~\ref{l:threeindep}.
Note that $t_1 < n + O(1)$ and $t_2 < n + O(1)$ and $m = \lfloor \sigma_1 n \rfloor$. Combining the above inequalities, we obtain $(c'-2) \log (\sigma_1 n) \leq (3c+d+ 10) \log n +O(1)$. Since $c' = 3c + d + 13$, we have obtained a contradiction.~\QE \section{Acknowledgments} We are grateful to Andr\'{e} Nies and Frank Stephan for their insightful comments. In particular, Definition~\ref{d:strongindep} has emerged after several discussions with Andr\'{e}, and Proposition~\ref{p:ce} and Proposition~\ref{p:createdep} are due to Frank \cite{frank}. We also thank Jan Reimann for his assistance with establishing Fact~\ref{f:NiesReimann}.
\section{Introduction} Precise timing of pulsars has shown that they spin down steadily because of magnetic braking \citep{Richards:1969}. The braking index $n=\Omega \ddot{\Omega}/\dot{\Omega}^{2}$, with $\Omega$ the angular frequency and dots denoting time derivatives, is a characteristic quantity of the spin-down evolution of a pulsar. According to the vacuum dipole model by \cite{Deutsch:1955} a pulsar radiates energy at a rate $\dot{E}=-(B_{d}^{2}R_{*}^{6}\Omega^{4}/6 c^{3})\sin^{2}\alpha$, where $B_{d}$ is the dipole field intensity at its magnetic pole, $R_{*}$ the radius, $\alpha$ is the inclination angle between the rotational axis and the magnetic moment and $c$ is the speed of light. Numerical simulations of the magnetosphere find the same dependence on $\Omega$ and $B_{d}$, with somewhat different dependence on $\alpha$ \citep{Spitkovsky:2006}. The loss of kinetic energy is therefore described by \begin{equation}\label{eq:energy} {d(I_{m} \Omega^{2})\over dt}=-K B_{d}^{2}\Omega^{4}, \end{equation} where $I_{m}$ is the moment of inertia onto which the torque acts and $K$ an appropriate numerical factor. Assuming constant $I_{m}$, $K$ and $B_{d}$ gives $\dot{\Omega} \propto \Omega^{3}$, yielding the well-known braking index for magnetic dipole spin-down, $n=3$ \citep{Pacini:1968, GunnOstriker:1969}. Phase coherent timing of 8 young pulsars has allowed the measurement of their $\ddot{\Omega}$ (Table 1), giving values of $n<3$ \citep{Manchester:1985, Lyne:1993, Camilo:2000, Livingstone:2007, Espinoza:2011}, smaller than expected in the simple magnetic dipole braking model. A braking index smaller than 3 can arise from either an increase in the torque acting on the pulsar with time (right hand side of eq.~[\ref{eq:energy}]), or a decrease in the moment of inertia with time (left hand side of eq.~[\ref{eq:energy}]). Several interpretations of the observed braking indices have been put forward. 
One suggestion is that magnetospheric effects change the spin dependence of the torque acting on the pulsar \citep{Contopoulos:2006}, with $n<3$ resulting when the corotating region of the magnetosphere closes within the light cylinder. A fallback disc around a pulsar can provide extra torque \citep{Menou:2001}; however, it has been shown recently that the formation of such discs is not very likely \citep{Perna:2014}, and only one example, a passive debris disc, has been found in observational searches \citep{Wang:2006}. Alternatively, it has been proposed that the effective moment of inertia of a NS can decrease as normal matter turns into superfluid and decouples from the spin down \citep{Sedrakian:1998, Ho:2012}, or because of glitch activity \citep{Alpar:2006}. A promising avenue is magnetic field evolution \citep{Blandford:1988}. Previous studies considered the reemergence of an assumed buried field by Ohmic diffusion \citep{Muslimov:1996, Ho:2011} or field evolution in the core \citep{Ruderman:1998} to drive a changing dipole moment and deviation from $n=3$. It has also been suggested that magnetic field variations contribute to timing noise in pulsars \citep{Pons:2012, Xie:2013,Tsang:2013}. An evolving magnetic field yields a braking index \begin{eqnarray} n= 3- 4\tau_{c} \dot{B}_{d}/B_{d}\,, \label{EQN} \end{eqnarray} where $\tau_{c}=-\Omega/(2\dot{\Omega})$ is the characteristic spin-down age and $\left|B_{d}/\dot{B}_{d}\right|$ is the timescale on which the dipole component of the magnetic field evolves. To set the scale, the measured braking index $n=2.51$ for the Crab pulsar requires that its magnetic dipole moment is increasing at a rate of 1\% per century, $B_d/\dot B_d\approx 10^4\ {\rm years}$. A similar order of magnitude is inferred for other sources (see Figure 1). \begin{figure} \includegraphics[width=\columnwidth]{dn.pdf} \caption{The observed deviation from $n=3$ as a function of the characteristic age of the pulsar.
The dashed lines show the expected deviation $3-n = 4\tau_c/t_B$ for magnetic field evolution timescales $t_B=10^3$, $10^4$, and $10^5\ {\rm years}$. The increase in the observed values of $3-n$ with $\tau_c$ favours models where $3-n$ depends on the characteristic age. The observations are consistent with a range of $t_B$ of about a factor of 10 around a value $t_B\sim 10^4\ {\rm years}$. The error bars in $3-n$ are not shown, but are $\lesssim 10\%$ (see Table 1) and much smaller than the range of values. \label{Figure:dn}} \end{figure} \begin{table*} \resizebox{2.0\columnwidth}{!}{ \label{Table} \begin{tabular}{ l| r| r| r | r |r | r| r| r } Name & $P$~(s) & $\dot{P}$ & $B_{d}$ & $\tau_{c}$ & n & Age & $B_{d,i}$ & $B_{\phi ,14}$ \\ & & $(10^{-12}$s/s) & ($10^{12}$G) & (yr) & & (yr) & ($10^{12}$G) & \\ \hline J1734-3333 & 1.17 & 2.28& $52.3 $ & 8120 & 0.9(2) & $>1,300$ & 45.9 & 13 \\ J1846-0258 & 0.325& 7.08& 48.5 & 729 & 2.65(1) & 1000$^{+3300}_{-100}$ & 41.3 & 32 \\ J1119-6127 & 0.408 & 4.02 & 41.0 & 1,611 &2.684(2)& 7,100$^{+500}_{-2900}$ & 34.5 & 12 \\ B1509-58 &0.151 & 1.54& 15.4 & 1,556 & 2.839(3)& $<$21,000 & 14.2 & 6.8 \\ B0540-69 & 0.0505 & 0.479 & 4.98 & 1,673 & 2.140(9) & 1,000$^{+660}_{-240}$ & 4.16 & 9\\ B0531+21 (Crab) & 0.0331 & 0.423 & 3.79 & 1,242 & 2.51(1) & 960 & 3.30 & 5.6\\ B0833-45 (Vela) & 0.0893 & 0.125 & 3.38 & 11,300 &1.4(2) &11,000$^{+5000}_{-5600}$ & 1.80 & 12 \\ J0537-6910 & 0.0161 & 0.0518 & 0.93 &4,930 & -1.5(1) & 2,000$^{+3000}_{-1000}$ & 0.46 & 40 \\ \end{tabular} } \caption{The parameters of the pulsars with measured braking indices (from \citep{Ho:2012}) and model fits. The last two columns give details of our numerical models that fit each object (section 3). The models have $c_{3}=-c_{1}$, $B_{d,i}$ is the intensity of the dipole component at birth, and $B_{\phi, 14}$ is the maximum intensity of the toroidal component in units of $10^{14}\ {\rm G}$. 
We could not accommodate the long observed ages of J1119-6127 and B1509-58 in the models presented here: these pulsars reach their observed magnetic fields at ages of $\sim 3000$ and $\sim 2000$ years after their birth, respectively.} \end{table*} The neutron star crust is a natural place to look for such short timescale evolution. The crust, with a thickness of $\sim 1$ km, consists of a highly conducting crystal lattice of positively charged nuclei and free electrons. Provided that Lorentz forces can be balanced by the elastic forces of the crust, the evolution of the magnetic field is dominated by two main processes: Hall drift and Ohmic dissipation \citep{Jones:1988, Goldreich:1992,Pons:2013}. Hall drift advects the magnetic field lines with the electron fluid and conserves the energy of the magnetic field, while Ohmic dissipation, arising from the finite conductivity, converts some of the magnetic energy into heat. The typical timescale for the Hall drift is $t_{Hall}= 4 \pi {\rm e} n_{\rm e} L^{2}/(c B)$, with $n_{\rm e}$ the electron number density, $L$ the scale height for the magnetic field, $B$ the intensity of the magnetic field and ${\rm e}$ the elementary charge. This timescale can be short in the outer parts of the crust where $n_{\rm e}$ is small. The Hall timescale cannot be made arbitrarily small, however, because at very low electron densities shear stresses in the crust are no longer able to balance the Lorentz forces that develop as the magnetic field evolves. In the very outermost layers, at densities $n_{\rm e}<n_{\rm e,melt}=2.3\times 10^{31}\ {\rm cm^{-3}}\ T_8^3 (Z/26)^{-5}$, the crust melts and cannot provide shear stress. However, even in the solid layers the magnetic stresses can grow to be large enough that the distortion of the crystal lattice exceeds the breaking strain of the crust.
For a breaking strain of $\epsilon=0.1$ \citep{Horowitz:2009}, setting $B^2/8\pi = \epsilon\mu_s$ where $\mu_s\approx 10^{-2}P$ is the shear modulus of the crust and $P$ the pressure, gives an estimate of the electron density below which the crust will break, \begin{equation}\label{eq:nb} n_{\rm e,break}=1.5\times 10^{33}\ {\rm cm^{-3}}\ \left({B\over 10^{13}\ {\rm G}}\right)^{3/2}\left({\epsilon\over 0.1}\right)^{-3/4}. \end{equation} The Hall timescale at that density is \begin{equation}\label{eq:tHall0} t_{\rm Hall}\approx 6400\ {\rm years}\ \left({n_{\rm e}\over 10^{33}\ {\rm cm^{-3}}}\right)\left({L^2\over 10^{10}\ {\rm cm^2}}\right)\left({B\over 10^{13}\ {\rm G}}\right)^{-1}, \end{equation} an interesting match to the $B_d/\dot B_d\approx 10^4\ {\rm years}$ timescale needed to explain the braking index of the Crab pulsar\footnote{We have arbitrarily used a lengthscale of $1\ {\rm km}$, of order the crust thickness in this estimate. We discuss in detail in section 2 the appropriate definition of the Hall timescale. For now, equation (\ref{eq:tHall0}) motivates us to investigate Hall drift in more detail as the source of pulsar braking indices.}. Recently, we showed that the initial configuration of the magnetic field in the neutron star crust qualitatively affects the subsequent evolution by Hall drift \citep{Gourgouliatos:2014a}. If the magnetic field is in an MHD equilibrium at the time of crust formation (likely since the crust takes many Alfven crossing times to form; \cite{Gourgouliatos:2013}), the dipole field lines are initially advected towards the magnetic pole, increasing the magnetic dipole moment. The opposite behaviour had been seen in previous simulations that chose a different initial condition based on the lowest order Ohmic diffusion eigenmode (e.g.~\cite{Pons:2007}). 
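The two timescales just compared can be reproduced with straightforward cgs arithmetic. The sketch below evaluates equation~(\ref{eq:tHall0}) at the fiducial values and the field-growth timescale $t_B = 4\tau_c/(3-n)$ implied by equation~(\ref{EQN}) for the Crab parameters of Table~1 (the constants and the script itself are our own illustration).

```python
import math

E_CGS = 4.803e-10   # elementary charge (esu)
C_CGS = 3.0e10      # speed of light (cm/s)
YEAR = 3.156e7      # seconds per year

def t_hall(n_e, L2, B):
    """Hall timescale t_Hall = 4 pi e n_e L^2 / (c B), in years."""
    return 4 * math.pi * E_CGS * n_e * L2 / (C_CGS * B) / YEAR

def t_b(tau_c, n):
    """Field evolution timescale t_B = 4 tau_c / (3 - n) implied by the braking index."""
    return 4 * tau_c / (3.0 - n)

print(round(t_hall(1e33, 1e10, 1e13)))   # ~6400 years
print(round(t_b(1242, 2.51)))            # ~10^4 years for the Crab
```

Both numbers agree with the estimates quoted in the text to within rounding of the physical constants.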
However, once a quadrupolar toroidal magnetic field of appropriate polarity and intensity is included in the initial state, it temporarily increases the intensity of the dipole component [cf. model B, Fig.~3 \citep{Pons:2007}]. In this paper, we investigate whether growth of the dipole moment driven by the Hall effect could be responsible for the low values of braking index observed in young pulsars. We show that, given a strong subsurface toroidal field component at the time the crust forms, Hall drift can indeed restructure the magnetic field inside the crust of young neutron stars, enhancing the dipole component of the magnetic field and increasing the torque sufficiently to account for the measured braking indices. In section 2, we calculate the growth rate of the dipole moment using numerical simulations and with a semi-analytical model. We compare to observed braking indices in section 3. We discuss these results and conclude in section 4. \begin{figure*} \includegraphics[width=1.8 \columnwidth]{Field_Br_Cr.pdf} \caption{The evolution of the poloidal magnetic field under the influence of the toroidal field. The toroidal field is supported by poloidal currents, plotted in red, with the arrows indicating the direction of the motion of the electrons. The moving electrons advect the poloidal field lines, plotted in black, towards the poles, enhancing the dipole moment of the field. The evolution is significant near the surface of the crust, while the field near the base of the crust remains unaffected.} \label{Figure:1} \end{figure*} \section{Growth of the Dipole Moment due to Hall Drift} In this section, we calculate the time evolution of the dipole moment. We start with numerical simulations of crust field evolution (section 2.1), and then develop a semi-analytic model that captures the main physics and reproduces the behavior observed in our numerical simulations (section 2.2).
Of particular importance is the evolution of the toroidal field near the surface of the star, and we discuss this in detail in section 2.3. \subsection{Numerical simulation of magnetic field evolution in the crust}\label{sec:numerical} To study the effect of Hall drift on the braking index we simulated the evolution of an axially symmetric field in a NS crust using a finite-difference, forward-time integrating, 2-D axisymmetric scheme presented in \cite{Gourgouliatos:2014a}. In the simulations we performed we assumed a NS radius $R_{*}=10^{6}$cm, and a crust thickness of $8\times10^4$cm. We used two electron number density $n_{\rm e}$ profiles for the NS crusts depending on the $B_{d}$ field of the NS we simulated, to approximately take into account the fact that the crust will break at low density as the field evolves. For the pulsars with $B_{d}<10^{13}$G we assumed that $n_{\rm e}=2.8\times10^{40} (1.017-r/R_{*})^{4}$cm$^{-3}$, so that the minimum density at the outermost point of our simulation ($r=R_{*}$) is $2.5\times 10^{33}$cm$^{-3}$ and the $n_{\rm e}$ has a range of three orders of magnitude. For the pulsars with $B_{d}>10^{13}$G, we assumed $n_{\rm e}=1.3\times 10^{40}(1.037 -r/R_{*})^{4}$cm$^{-3}$, so that the minimum density is $2.5\times 10^{34}$cm$^{-3}$ and has a range of two orders of magnitude. In both models the highest electron density at the base of the crust is $2.5\times 10^{36}$cm$^{-3}$. These profiles are good approximations of more precise crust models by \cite{Cumming:2004} with temperatures $\approx 10^8\ {\rm K}$ and for which the electron density closely follows $n_{\rm e} \propto z^{4}$ where $z$ is the depth from the surface of the crust. The choice of initial condition for the magnetic field is of crucial importance. While the late time evolution is towards a particular ``Hall attractor" state \citep{Gourgouliatos:2014b}, the early evolution is a response to the initial structure of the magnetic field. 
Gourgouliatos \& Cumming (2014b) found that a pure poloidal dipole will, under the action of the Hall effect, first generate a toroidal field in the star; then the poloidal currents associated with the toroidal field advect the poloidal field lines towards the magnetic poles, increasing the dipole moment. We find that to achieve an increase in dipole moment at the rate needed to explain pulsar braking indices, we must short-circuit this process by putting in a strong toroidal field at the beginning of the simulation, when the crust forms. A sub-surface toroidal field has previously been suggested to be present in neutron stars. Spin-down measurements of neutron stars determine the dipole moment of the star, but higher order multipoles or toroidal field components are not well-constrained. Recent modelling and observations suggest that NSs with relatively weak dipole moments may have substantially larger surface and internal fields \citep{Geppert:2006,Shabaltas:2012,Tiengo:2013, Geppert:2013}. From a theoretical perspective, a newborn NS may have significant multipolar structure in its poloidal field resulting from convection prior to crust formation \citep{Thompson:2001}, and is likely to host a strong toroidal component as differential rotation following collapse winds poloidal fields \citep{Thompson:1993}. Here we consider a quadrupolar toroidal field as would be generated by radial differential rotation acting on the poloidal dipole. We considered three different models for the initial poloidal field structure: a dipole $(\ell=1)$, an octupole $(\ell=3)$, and an equal and opposite mixture of a dipole and octupole. We decompose the fields on the surface of the NS in multipoles with coefficients $c_{\ell}=(2\ell +1)/(2\ell +2) \int_{-1}^{1} B_{r} P_{\ell} (\mu) d\mu$, where $P_{\ell}$ is the $\ell^{\rm th}$ order Legendre polynomial, $B_{r}$ is the radial component of the magnetic field on the surface of the NS and $\mu=\cos\theta$ is the cosine of the polar angle.
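As a consistency check of this decomposition, note that under the quoted convention a $P_\ell$ pattern of unit amplitude maps to $c_\ell = \frac{2\ell+1}{2\ell+2}\cdot\frac{2}{2\ell+1} = \frac{1}{\ell+1}$. A minimal numerical sketch (the trapezoidal quadrature and the dipole-plus-octupole test field are our own illustration):

```python
def legendre(l, mu):
    """Legendre polynomial P_l(mu) via the standard three-term recurrence."""
    p0, p1 = 1.0, mu
    if l == 0:
        return p0
    for k in range(1, l):
        p0, p1 = p1, ((2 * k + 1) * mu * p1 - k * p0) / (k + 1)
    return p1

def c_ell(B_r, l, n_pts=20001):
    """c_l = (2l+1)/(2l+2) * integral_{-1}^{1} B_r(mu) P_l(mu) dmu (trapezoid rule)."""
    h = 2.0 / (n_pts - 1)
    total = 0.0
    for i in range(n_pts):
        mu = -1.0 + i * h
        w = 0.5 if i in (0, n_pts - 1) else 1.0
        total += w * B_r(mu) * legendre(l, mu)
    return (2 * l + 1) / (2 * l + 2) * total * h

# surface field: dipole of amplitude 1 plus octupole of amplitude -1 ("equal and opposite")
B_r = lambda mu: legendre(1, mu) - legendre(3, mu)
print(round(c_ell(B_r, 1), 4))  # 0.5  (= 1/(l+1) per unit amplitude under this convention)
print(round(c_ell(B_r, 3), 4))  # -0.25
```

Orthogonality of the Legendre polynomials guarantees that the two components are recovered independently of each other.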
We used a quadrupole $(\ell=2)$ toroidal field, with the same polarity as the dipole. The profile of the toroidal field used was $B_{\phi}(r,\mu)=B_{\phi, 14}~2.5\times10^{18}~(1-r/R_{*})^{2}(r-R_{c})\sin\theta\cos\theta/r~$G, where $R_{c}$ is the crust inner radius, $\theta$ is the angle from the magnetic pole and $B_{\phi,14}$ is the maximum value of the toroidal field in the crust (reached at $\theta=\pi/4$ and $r=9.46\times 10^{5}$cm), in units of $10^{14}$G. We find that a quadrupole toroidal field with the same polarity as the poloidal dipole field winds the poloidal field and stretches the field lines towards the poles (see Figure \ref{Figure:1}). This distortion, caused by advection of the poloidal field lines by the poloidal current loops that support the toroidal field, enhances the dipole moment, and at the same time the field develops an octupole poloidal component of the same polarity. If instead the initial field is of octupole structure, the toroidal field generates a dipole component on the surface of the NS through the same process. Figure \ref{Figure:dipole} shows the evolution of the dipole moment with time for models with $B_{\phi,14}=7$, and the three different initial poloidal field structures. In all three models, the dipole moment changes significantly on timescales of thousands of years. Note that while the redistribution of the poloidal field lines is occurring, Hall drift does not generate new magnetic flux \citep{Reisenegger:2007}. The total magnetic energy does slowly decrease as it is converted to heat through Ohmic dissipation. \begin{figure} \includegraphics[width=\columnwidth]{dipole.pdf} \caption{The evolution of the dipole moment for three different initial poloidal field configurations, starting with a dipole only, equal and opposite mixture of dipole and octupole, or octupole only. The models shown have $B_{\phi,14}=7$.
\label{Figure:dipole}} \end{figure} \subsection{Semianalytical model of the growth of the dipole moment} The growth of the dipole moment in the simulations can be understood by considering the passive advection of the poloidal field by the currents associated with the much stronger toroidal field. To do this, we write the axially symmetric magnetic field in terms of scalars $\Psi(r,\mu)$ and $I(r,\mu)$ as $\bm{B}=\nabla \Psi \times \nabla \phi + I \nabla \phi$. The poloidal flux function $\Psi$ evolves according to equation~[7] of \cite{Gourgouliatos:2014a}, \begin{equation} {\partial\Psi\over\partial t} + r^2\sin^2\theta \chi \left(\nabla I\times\nabla\phi\right)\cdot\nabla\Psi = 0, \end{equation} where the second term describes the advection of the poloidal field lines by poloidal currents associated with the toroidal magnetic field. The radial gradient of the toroidal field, $\partial I/\partial r$, dominates, giving an advection equation for $\Psi$, \begin{equation}\label{eq:advect_psi} {\partial\Psi\over\partial t}=-v_\mu{\partial\Psi\over\partial \mu}\hspace{1cm} v_\mu = -\left(1-\mu^2\right)\chi{\partial I\over \partial r}. \end{equation} Near the surface, where the Hall time is short, $I$ decreases outwards giving $v_\mu>0$ and the poloidal field lines are transported towards the pole. Equation (\ref{eq:advect_psi}) allows us to derive evolution equations for the multipole moments. 
We expand $\Psi$ at the surface of the star as $\Psi = \sum_\ell a_\ell P_\ell^1\left(\mu\right) \sin\theta$, and assuming that the angular dependence of $I$ does not change as it evolves, $\partial I/\partial r = I^\prime(r,t)\mu (1-\mu^2)$, an integral of equation (\ref{eq:advect_psi}) over the surface gives \begin{equation}\label{eq:dadt} {d a_\ell\over dt} = \sum_m {a_mf_{\ell m}\over t_{\rm Hall}(t)} \end{equation} where the coefficients \begin{equation} f_{\ell m} = \int^1_{-1}d\mu\ {P_\ell^1\left(\mu\right)\mu\sqrt{1-\mu^2}\over 2}{\partial\over\partial\mu}\left(\sqrt{1-\mu^2}P^1_m\left(\mu\right)\right) \end{equation} give the coupling between different $\ell$s, and we define the Hall timescale $t_{\rm Hall}$ by \begin{equation}\label{eq:tHall} t_{\rm Hall}^{-1}(t) = 2I^\prime(t) \chi (1-\mu^2)=I^\prime(t) {c\over 2\pi n_{\rm e} e R_{*}^2}. \end{equation} The first few coefficients are $f_{11}=1/5$, $f_{13}=-18/35$, $f_{31}=f_{33}=2/15$. Equations (\ref{eq:dadt}) are a set of evolution equations for the multipole moments given the time evolution of the background toroidal field. Considering the dipole term only as a first approximation, we find that the dipole moment will grow on a timescale \begin{equation}\label{eq:tB_dipole} t_B={a_1\over da_1/dt} = 5 t_{\rm Hall}. \end{equation} Equation (\ref{eq:tB_dipole}) matches well our simulations that start with a dipole field; when the field has an octupole component initially, the evolution is faster, closer to $t_B\approx t_{\rm Hall}$. In the dipole case, achieving $t_B\approx 10^4\ {\rm years}$, as needed to match the observed braking index of the Crab for example, requires a Hall time at the surface of $t_{\rm Hall}\approx 2000\ {\rm years}$. To solve equations (\ref{eq:dadt}) for the poloidal field evolution, we need the Hall timescale, which depends on $\partial I/\partial r$ (eq.~[\ref{eq:tHall}]).
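The quoted values of $f_{\ell m}$ can be recovered by exact integration. In our reading, they follow from the integral above after normalizing the projection onto $P_\ell^1$ by $\int_{-1}^{1}(P_\ell^1)^2\,d\mu = 2\ell(\ell+1)/(2\ell+1)$ and absorbing an overall sign into the sign conventions of the drift term; the sketch below verifies this for $\ell, m \in \{1,3\}$ using exact rational arithmetic (the normalization step is our interpretation, not stated explicitly in the text).

```python
from fractions import Fraction

# g_l(mu) = sqrt(1-mu^2) * P_l^1(mu), a polynomial in mu; coefficient lists
# are ordered [mu^0, mu^1, ...] and use the Condon-Shortley phase.
G = {1: [-1, 0, 1],                                   # -(1 - mu^2)
     3: [Fraction(3, 2), 0, -9, 0, Fraction(15, 2)]}  # -(3/2)(5 mu^2 - 1)(1 - mu^2)

def deriv(p):
    return [k * Fraction(p[k]) for k in range(1, len(p))]

def polymul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += Fraction(a) * Fraction(b)
    return r

def integral(p):
    """Exact integral of a polynomial over [-1, 1] (odd powers vanish)."""
    return sum(2 * Fraction(c) / (k + 1) for k, c in enumerate(p) if k % 2 == 0)

def f(l, m):
    """Coupling coefficient: raw integrand g_l * (mu/2) * g_m', normalized by
    int (P_l^1)^2 dmu = 2l(l+1)/(2l+1), with the overall sign flipped."""
    raw = integral(polymul(polymul(G[l], [0, Fraction(1, 2)]), deriv(G[m])))
    return -raw * Fraction(2 * l + 1, 2 * l * (l + 1))

print(f(1, 1), f(1, 3), f(3, 1), f(3, 3))  # 1/5 -18/35 2/15 2/15
```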
To evaluate the Hall time as a function of time, we have solved the evolution of $I$ with depth in detail by integrating the evolution equation for $I$ including Hall and Ohmic terms \begin{equation}\label{eq:dIdt} {\partial I\over \partial t} = - v_I {\partial I\over \partial r}+{\partial\over \partial r}\left(\eta{\partial I\over\partial r}\right) \end{equation} where the advection velocity is $v_I=I\sin\theta\,{\partial \chi/\partial \theta} = -2I\chi\cos\theta=-2I\chi\mu$ (see eq.~[8] of \citealt{Gourgouliatos:2014a}). We follow $I$ in time on a radial grid using the method of lines including the detailed hydrostatic structure of the crust, equation of state, and electrical conductivity as described in \cite{Cumming:2004} (we assume an isothermal crust with $T=10^8\ {\rm K}$). We include the Hall term only for $n_{\rm e}>n_{\rm e,break}$, where $n_{\rm e,break}$ is given by equation~(\ref{eq:nb}). The value of $\partial I/\partial r$ at $n_{\rm e}=n_{\rm e,break}$ was used in equation (\ref{eq:dadt}) to simultaneously solve for the evolution of the multipole moments (we follow odd $\ell$s up to $\ell=9$). We find that this approach reproduces the results from the numerical simulations presented in section \ref{sec:numerical}, including the cases where the initial state has a significant octupole component. This is a useful check on the numerical simulations, as we can include a much larger range of $n_e$ in these 1D calculations of $I$ than in the full 2D numerical simulations. \begin{figure} \includegraphics[width=\columnwidth]{Bprof.pdf} \caption{The evolution of the radial profile of $B_\phi$ with time for a model that has $B_\phi\propto n_e^{1/2}\propto z^2$ initially, shown as the red dashed line. The profiles are shown at times of 1, 10, 100, 1000, and 10000 years. The kinks in the curves occur at electron capture boundaries where the composition changes as a result of the changing equilibrium nucleus with density. 
The vertical blue dotted line shows the estimated depth at which crust breaking will occur given the initial field profile (eq.~[\ref{eq:nb}]). Also shown is the scaling $B_\phi\propto n_e$ expected for constant Ohmic flux.\label{Figure:Bprof}} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{tHall.pdf} \caption{Hall timescale as a function of the poloidal and toroidal field, eqs.~(\ref{eq:tHall1}) and (\ref{eq:tHall2}). For a given poloidal field, there is a minimum Hall timescale possible which is when $B_\phi=B_P$ and toroidal stresses begin to dominate. The red circles show the ages and $t_B=4t_c/(3-n)$ for the pulsars with measured braking indices (see Table 1).\label{Figure:tHall}} \end{figure} \subsection{Evolution of the radial profile of $B_\phi$ and crust breaking} Given the crucial role of the toroidal field gradient, $\partial B_\phi/\partial r$, we now discuss its evolution in more detail. At low densities, the radial gradient of $I$ is set by Ohmic diffusion, which causes the radial profile of the toroidal field to adjust on an Ohmic timescale so that the Ohmic flux $\propto\eta\,dB_\phi/dr$ is constant with depth. In our simulations, we assume an electrical conductivity $\sigma=2.1\times 10^{22}\ n_{{\rm e},33}^{2/3}$ (a close match to the conductivity profile of an isothermal crust at a temperature of $10^8\ {\rm K}$), giving an Ohmic timescale of \begin{equation} t_{\rm Ohm} = {4H^2\over \eta} = 300\ {\rm years}\ n_{{\rm e},33}^{4/3}\left({Y_e\over 0.4}\right)^2, \end{equation} where $H=28\ {\rm m}\ n_{{\rm e},33}^{1/3}(Y_e/0.4)$ is the pressure scale height in the outer crust, and $Y_e$ is the electron fraction. At the densities in our simulations $n_{{\rm e},33}=2.5$ and $25$, the Ohmic time is 1000 and 30,000 years respectively. In the outer crust, degenerate electrons set the pressure giving $P\propto n_e^{4/3}$. 
Hydrostatic balance then implies \begin{equation} {dP\over dr} = -\rho g\Rightarrow {1\over n_e^{2/3}}{d n_e\over dr} \approx {\rm constant}, \end{equation} where $g$ is the local gravity. Constant Ohmic flux therefore implies $dB_\phi/d n_e\approx $ constant, giving $B_\phi\propto n_{\rm e}\propto P^{3/4}$, and \begin{equation}\label{eq:Iprime} I^\prime\approx R_* {dB_\phi\over dr}={3\over 4}{R_* B_\phi\over H}. \end{equation} Figure \ref{Figure:Bprof} shows $B_\phi$ as a function of $n_e$ at different times, showing that a region of constant Ohmic flux grows from the surface inwards over time. The numerical solutions closely match the expected $B_\phi\propto n_e$ scaling. The fact that $\partial I/\partial r$ evolves to a particular value simplifies the calculation of the Hall time. Substituting this value of $I^\prime$ from equation (\ref{eq:Iprime}) into equation (\ref{eq:tHall}) for the Hall time gives the strength of the toroidal field needed to achieve a given Hall timescale, \begin{equation}\label{eq:Bphi} B_\phi = 6.3\times 10^{12}\ {\rm G}\ n_{{\rm e},33}^{4/3}\ \left({2000\ {\rm years}\over t_{\rm Hall}}\right) \end{equation} where we set $Y_e=0.4$ and $R_*=10\ {\rm km}$. The evolution of $B_\phi$ as a function of time depends on its initial radial profile. If $B_\phi$ increases more slowly with density than $B_\phi\propto n_{\rm e}$, Ohmic relaxation causes $B_\phi$ to decrease with time at a given $n_{\rm e}$; if $B_\phi$ increases more rapidly with density than $B_\phi\propto n_{\rm e}$, Ohmic relaxation will cause $B_\phi$ to increase with time at a given $n_{\rm e}$. Therefore $t_B$ can either increase or decrease with time; the models shown have $t_B$ increasing with time. We can estimate the shortest possible Hall time by substituting equation (\ref{eq:nb}) for the density at which the crust breaks. In the simple estimate leading to equation (\ref{eq:nb}), we wrote the magnetic stress as $\propto B^2$. In reality, the magnetic stress is not isotropic.
When $B_\phi<B_P$ at $n_{\rm e}=n_{\rm e,break}$ ($B_P$ is the poloidal field strength), the appropriate component of the stress to consider is the $B_\theta B_r$ component, which is $\approx B_P^2$. Equations (\ref{eq:Bphi}) and (\ref{eq:nb}) then give the Hall timescale as a function of the poloidal and toroidal field strength \begin{equation}\label{eq:tHall1} t_{\rm Hall} = 2200\ {\rm years}\ \left({B_\phi\over 10^{13}\ {\rm G}}\right)^{-1}\left({B_P\over 10^{13}\ {\rm G}}\right)^{2}\left({\epsilon\over 0.1}\right)^{-1}. \end{equation} However, if $B_\phi>B_P$, then the appropriate magnetic stress is $\propto B_PB_\phi$, and so we should set $B^2\sim B_\phi B_P$ when estimating the stress that will break the crust. The toroidal field strength then drops out, and we find a minimum Hall timescale \begin{equation}\label{eq:tHall2} t_{{\rm Hall},{\rm min}} = 2200\ {\rm years}\ \left({B_P\over 10^{13}\ {\rm G}}\right)\left({\epsilon\over 0.1}\right)^{-1}. \end{equation} The Hall time is shown as a function of $B_P$ in Figure \ref{Figure:tHall}. The pulsars with poloidal fields $B_d<10^{13}\ {\rm G}$ have $t_B$ much longer than $t_{\rm Hall,min}$ and are easily accommodated without breaking the crust. However, the pulsars with $B_d>10^{13}\ {\rm G}$ have $t_B$ close to $t_{\rm Hall,min}$ and our simulations exceed the breaking strain at low densities as we describe in the next section. \section{Comparison with Observations} Having shown that Hall drift associated with a strong toroidal field in the crust leads to growth of the dipole moment on interesting timescales, we now compare the simulations with observed braking indices. The magnetic fields of the pulsars with measured braking indices are in the range $10^{12}$--$5\times 10^{13}$G, and their estimated ages are $\approx 10^{3}$--$10^{4}$ years (see Table 1). 
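The crossover between the two stress regimes can be checked arithmetically: eqs.~(\ref{eq:tHall1}) and (\ref{eq:tHall2}) agree at $B_\phi=B_P$, where toroidal stresses take over. A minimal numerical sketch (normalizations as quoted above; the function names are ours):

```python
# Hall timescales from eqs. (tHall1)-(tHall2); fields in G, eps is the
# breaking strain (0.1 assumed, as in the text).
def t_hall(B_phi, B_P, eps=0.1):
    """Hall time [yr] when B_phi < B_P (relevant stress ~ B_P^2)."""
    return 2200.0 * (1e13 / B_phi) * (B_P / 1e13) ** 2 * (0.1 / eps)

def t_hall_min(B_P, eps=0.1):
    """Minimum Hall time [yr], once B_phi >= B_P (stress ~ B_phi * B_P)."""
    return 2200.0 * (B_P / 1e13) * (0.1 / eps)

# The two expressions match at B_phi = B_P, for any poloidal field:
for B_P in (1e12, 1e13, 5e13):
    assert abs(t_hall(B_P, B_P) - t_hall_min(B_P)) < 1e-9
print(t_hall_min(1e13))   # 2200.0 yr
```

A weaker toroidal field only lengthens the time: `t_hall(1e13, 5e13)` gives 55,000 yr versus the 11,000 yr floor at $B_P=5\times10^{13}\ {\rm G}$.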
To remove the dependence on $\tau_{c}$ (see Figure \ref{Figure:dn}), we solve equation~(\ref{EQN}) to obtain a measured $B_{d}/\dot{B}_{d}$ for each pulsar, and plot it as a function of the pulsar's age in Figure \ref{Figure:2}. A similar diagram was shown by \cite{Ho:2012}, but for moment of inertia evolution rather than magnetic field evolution. The curves in Figure \ref{Figure:2} show the calculated evolution timescale for different choices of toroidal field strength and poloidal field geometry and strength. The evolution time depends mainly on the toroidal magnetic field (see eq.~[\ref{eq:tHall1}]) and the multipole decomposition of the poloidal field (eq.~[\ref{eq:tB_dipole}] and related discussion). In particular, a stronger toroidal field leads to faster evolution, and the presence of higher order poloidal multipoles provides a reservoir of magnetic flux that can be transferred to the dipole component. Figure \ref{Figure:2} shows that a toroidal field strength in the range $10^{13}$--$10^{14}$G in the outer $\approx 100\ {\rm m}$ of the crust gives a good match to the observed braking indices. An initial dipole field is able to reproduce the observations of the Crab, PSR~J1734-3333, PSR~J1846-0258, and PSR~B0549-69. The remaining 4 pulsars require an octupole component to be present to boost the dipole growth rate. We also find that crust breaking occurs in models of the pulsars with higher poloidal fields $\gtrsim 10^{13}\ {\rm G}$. As we described in section 2, our simulations take into account crust breaking in an approximate way by considering two crust models with differing minimum densities. But in addition, we calculate the magnetic stresses that develop during each simulation following \cite{Perna:2011}, and compare them to the maximum stress that can be accommodated by the solid, to check whether the crust exceeds the breaking strain. 
For pulsars with stronger fields $\gtrsim 10^{13}\ {\rm G}$, models which match the observed braking indices at the right ages exceed the breaking strain of the crust at low densities by factors of several. In Figure \ref{Figure:3} we show models of the evolution in the $P$--$\dot P$ diagram. For each pulsar, we choose the initial poloidal and toroidal magnetic fields (see Table 1) so that the subsequent Hall evolution leads to the observed magnetic field and braking index at the age of the pulsar. Two pulsars, B1509-58 and J1119-6127 (dashed lines in Fig.~\ref{Figure:3}), have estimated ages much longer than can be accommodated by steady spin-down over the pulsar's lifetime. In those two cases, we show models in which the current age of the pulsar is $\approx$ 2000 and 3000 years, respectively, for which we are able to match the observed values of $P$, $\dot P$, and $n$. \begin{figure} \includegraphics[width=\columnwidth]{B_Bdot.pdf} \caption{The observed $B/\dot{B}$ as a function of pulsar age, for a variety of combinations of poloidal and toroidal fields, and the values of $4\tau_{c}/(3-n)$ for the eight pulsars whose braking index is known through phase-coherent timing, shown in Table 1. The blue lines correspond to a dipole ($\ell=1$) initial field, the green to a mixed dipole and octupole ($\ell=3$), and the red to an octupole-only initial field. The solid lines correspond to models with minimum electron density $n_{\rm e}$ of $2.5\times 10^{33}$cm$^{-3}$, and $B_{\phi,14}=7$, while the dashed lines have $B_{\phi,14}=14$. The dotted lines correspond to a crust with minimum $n_{\rm e}$ of $2.5\times10^{34}$cm$^{-3}$ and $B_{\phi,14}=28$.
We have used the latter crust models in pulsars with dipole fields above $10^{13}$G, to account for the fact that stronger magnetic fields cannot take advantage of the full density range of the crust, as they may deform the outer layers because of strong Lorentz forces.} \label{Figure:2} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{P_Pdot.pdf} \caption{$P$-$\dot{P}$ diagram for the eight pulsars for which the braking index is measured, taking into account the magnetic field evolution. The points correspond to the observed $P$ and $\dot{P}$. The direction of the arrow is related to the braking index \citep{Espinoza:2011} through $\tan\omega=2-n$, where $\omega$ is the angle with the horizontal axis. Examples of evolutionary tracks are plotted, which evolve to the correct values of magnetic field and braking index at the pulsar's current age, assuming an initial magnetic field in the ranges shown in Figure \ref{Figure:2}. $P_{0}$ is solved for, given the evolution of the magnetic field and the pulsar's position in the $P$-$\dot{P}$ diagram. The tracks of B1509-58 and J1119-6127 are shown as dashed lines, as their estimated ages are much longer than can be accommodated by steady spin-down over the pulsar's lifetime.} \label{Figure:3} \end{figure} \section{Discussion and Conclusions} We have shown that an internal quadrupolar toroidal magnetic field of strength $\approx 10^{14}\ {\rm G}$ in young neutron stars leads to an increase of the dipole moment, driven by the Hall effect. The relevant timescale is the Hall timescale at the lowest density in the crust at which the Hall effect can operate, most likely set by yielding of the solid under the magnetic stresses that develop as the field evolves.
This depth sets the minimum timescale for field evolution, and is quickly populated with toroidal field by Ohmic diffusion, so that the growth of the dipole moment occurs on a characteristic timescale associated with this depth (eq.~[\ref{eq:tHall1}]). The observed braking indices of pulsars with inferred dipole fields of $\lesssim 10^{13}\ {\rm G}$ can be accommodated in these models, although with a significant octupole component needed in some cases. For the stronger field pulsars, those with $B_d\gtrsim 10^{13}\ {\rm G}$, we find that the magnetic stresses in the crust exceed the maximum shear stress before the pulsar reaches its current age. Therefore, it is not clear whether Hall drift can explain the braking indices of the higher field pulsars: in the limit where the crust cannot support shear stress, i.e.\ behaves as a liquid, the Hall effect would be expected to effectively switch off, with hydrodynamic motions shorting out the Hall electric fields. It is worth noting that we have assumed a breaking strain $\epsilon\sim 0.1$ \citep{Horowitz:2009}. If the breaking strain is significantly smaller than this, crust yielding would play a role even in the weaker poloidal field pulsars. Further modelling of crust yielding and its back-reaction on the magnetic field evolution is needed to accurately follow the evolution in higher field pulsars. The fact that the crust exceeds its breaking strain due to Hall evolution could explain the magnetar activity observed in PSR~J1846-0258 and other high magnetic field radio pulsars \citep{Gavriil:2008}. This calculation focuses on the early evolution of the magnetic field structure of neutron stars ($\sim10^{4}$yr), during which thermal feedback does not have an important effect, since the relevant timescale is much longer \citep{Aguilera:2008, Vigano:2013}.
Following the early evolution, the dipole component of the magnetic field has an overall trend to decrease as the crustal magnetic field is Ohmically dissipated, which is consistent with the magnetic field decay suggested by population synthesis studies \citep{Igoshev:2014, Gullon:2014}. While the magnetic field decays, the dipole component of the magnetic field may temporarily increase because of whistler wave oscillations \citep{Gourgouliatos:2014a}, which has been used to explain the second period derivative measurements inferred from timing noise \citep{Zhang:2012}. Magnetic field studies in neutron stars, including the one presented in this work, are restricted to axially symmetric models. The extension to non-axially symmetric calculations may open different paths in the magnetic field evolution and shed light on the role of instabilities and turbulent cascades. These issues stress the importance of developing a 3-D code that can address such questions. Even if Hall evolution is not enough to explain the entire deviation of the braking index from $n=3$, a mild toroidal field can still have a significant impact on the spin evolution, and should be taken into account in other interpretations of braking indices. For example, \cite{Ho:2012} suggested that braking index measurements could be used to determine the rate at which the mass of superfluid in the neutron star core is increasing as the star cools. The fact that magnetic field evolution may be changing the dipole moment, even at a low level, will impact constraints on the neutron superfluid component, and therefore on the neutron star equation of state, derived in this way. A larger sample of observed braking indices is needed to test models. Unfortunately, the sample is not likely to increase substantially in size in the near future. An alternative way to test models is a measurement of the second braking index $p=(\Omega^{2}/\dot{\Omega}^{3})\dddot{\Omega}$ \citep{Blandford:1988}.
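For reference (our check, not part of the original analysis): pure power-law braking with a constant field, $\dot{\Omega}=-k\Omega^{3}$, gives $n=3$ and $p=n(2n-1)=15$, so a measured $p$ away from this baseline probes the field evolution. A sketch using the analytic chain-rule derivatives (the values of $k$ and $\Omega$ are arbitrary and drop out):

```python
# Pure magnetic dipole braking with constant field: dOmega/dt = -k * Omega^3.
# Higher derivatives follow from the chain rule.
def braking_indices(Omega, k):
    Od = -k * Omega**3
    Odd = -3.0 * k * Omega**2 * Od
    Oddd = -k * (6.0 * Omega * Od**2 + 3.0 * Omega**2 * Odd)
    n = Omega * Odd / Od**2          # braking index
    p = Omega**2 * Oddd / Od**3      # second braking index
    return n, p

n, p = braking_indices(Omega=1.0, k=0.5)
print(n, p)   # 3.0 15.0, independent of Omega and k
```

The measured $p=18.3\pm2.9$ for B1509-58 discussed below should therefore be compared against 15, not 0, when interpreting the sign of $\ddot B$.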
In the context of magnetic field evolution models, the sign of $p$ depends on whether the magnetic field growth is accelerating or decelerating. \cite{Livingstone:2005} were able to measure $p$ for PSR B1509-58, finding $p=18.3\pm2.9$, the central value of which implies that $\ddot{B}/{B}>0$, but which is also consistent with $\ddot{B}/B<0$ within $2\sigma$. Similar conclusions can be drawn from the measurement of the second braking index of the Crab pulsar \citep{Lyne:1988, Lyne:1993}. In the models presented here, measurements of $p$ constrain the initial profile of the toroidal field, specifically the dependence of $B_\phi$ on electron density. We have made a particular choice for the toroidal field geometry, a quadrupole, in order to achieve growth of the magnetic dipole moment. A quadrupolar toroidal field arises naturally in Hall evolution of a poloidal dipole field; however, in order to grow the dipole quickly enough, we introduce the toroidal field at the beginning of the simulation. There have been several suggestions that young neutron stars host internal toroidal fields (e.g.~\citealt{Geppert:2006, Thompson:1993, Shabaltas:2012, Tiengo:2013}), and the particular choice we made here would arise from radial differential rotation acting on a poloidal dipole field before the crust solidifies, but the unknown state of the magnetic field at the time of crust formation remains a major uncertainty. The idea that Hall drift may be operating, and have observable consequences, in pulsars is intriguing. The Hall effect is usually discussed as the mechanism underlying observed activity in magnetars \citep{Perna:2011}, but the observed braking indices of young pulsars may be telling us that the Hall effect plays a role in a much wider range of systems, including neutron stars with significantly smaller dipole fields than magnetars. \section*{Acknowledgements} We thank Dave Tsang, Vicky Kaspi, Hongjun An and Rob Ferdman for insightful discussions.
KNG was supported by the Centre de Recherche en Astrophysique du Qu\'ebec. AC is supported by an NSERC Discovery Grant and is an Associate Member of the CIFAR Cosmology and Gravity program. \bibliographystyle{mnras}
\section{Introduction} The compressible Euler equations form the oldest system of nonlinear PDEs modeling the motion of gases. In Lagrangian coordinates, the compressible Euler equations in one space dimension take the following form \begin{align} \tau_t-u_x&=0\,,\label{lagrangian1}\\ u_t+p_x&=0\,,\label{lagrangian2}\\ \Big(\frac{1}{2}u^2+ \mathcal{E} \Big)_t+(u\,p)_x&=0\,, \label{lagrangian3} \end{align} where $x$ is the Lagrangian spatial variable and $t\in\mathbb{R}^+$ is the time. $\tau=\rho^{-1}$ denotes the specific volume for the density $\rho$. $p$, $u$ and $\mathcal{E}$ stand for the pressure, the velocity, and the specific internal energy, respectively. For polytropic ideal gases, it holds that \begin{equation} p=K\,e^{\frac{S}{c_v}}\,\tau^{-\gamma},\ \ \mathcal{E}=\frac{p\tau}{\gamma-1}\, , \ \ \gamma>1\,, \label{introduction 3} \end{equation} where $S$ is the entropy, and $K$ and $c_v$ are positive constants, see~\cite{courant} or \cite{smoller}. For most gases, the adiabatic exponent $\gamma$ lies between $1$ and $3$, that is, $1<\gamma<3$. For $C^1$ solutions, (\ref{lagrangian3}) is equivalent to the ``entropy equation'': \begin{equation} S_t=0\,. \label{s con} \end{equation} Therefore, when the entropy is constant, the flow is called isentropic, and (\ref{lagrangian1}) and (\ref{lagrangian2}) become a closed system, known as the $p$-system (or isentropic Euler equations) \begin{align} \tau_t-u_x&=0\,,\label{p1}\\ u_t+p_x&=0\,,\label{p2} \end{align} with \begin{equation}\label{p3} p=K_1\,\tau^{-\gamma}\,, \end{equation} where $K_1>0$ is a constant. The compressible Euler equations \eqref{lagrangian1}--\eqref{lagrangian3} and the p-system \eqref{p1}--\eqref{p3} are two of the most important physical models for hyperbolic conservation laws \begin{equation}\label{CL} {\bf u}_t+{\bf f}({\bf u})_x=0\,, \end{equation} where ${\bf u}={\bf u}(x,t)\in \mathbb{R}^n$ is the unknown vector and ${\bf f}:\mathbb{R}^n\rightarrow\mathbb{R}^n$ is the nonlinear flux.
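As a scalar illustration of \eqref{CL} (Burgers' flux $f(u)=u^2/2$; our example, not one of the systems above), smooth data steepen until the characteristics $x(t)=x_0+u_0(x_0)\,t$, along which $u$ is constant, first cross at $t_*=-1/\min_x u_0'(x)$:

```python
import numpy as np

# Scalar Burgers equation u_t + (u^2/2)_x = 0: u is constant along the
# characteristics x(t) = x0 + u0(x0)*t, so the gradient blows up when
# characteristics first cross, at t* = -1/min_x u0'(x).
# Illustrative datum (our choice): u0(x) = -sin(x) on [0, 2*pi].
x0 = np.linspace(0.0, 2.0 * np.pi, 2001)
u0 = -np.sin(x0)
du0 = -np.cos(x0)                    # exact derivative of u0
t_star = -1.0 / du0.min()            # min u0' = -1 at x = 0, so t* = 1
print(t_star)                        # 1.0

# Just before t*, the characteristic map x0 -> x0 + u0(x0)*t is still
# monotone, i.e. no crossing yet:
t = 0.99 * t_star
assert np.all(np.diff(x0 + u0 * t) > 0.0)
```

The same mechanism, gradient blow-up along characteristics, is what the Riccati-type analysis below tracks for systems of two equations.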
It is a general belief that system (\ref{CL}) typically develops discontinuities, i.e.\ shock waves, no matter how small and smooth the data are. This belief has been justified in a series of beautiful works by Lax \cite{lax2} in 1964 for general systems with two unknowns, and by \cite{john, lizhoukong0,lizhoukong,Liu1} for general $n\times n$ systems. These results confirm that for general strictly hyperbolic systems, if the initial datum is a generic small smooth perturbation near a constant equilibrium, then an initial compression in any truly nonlinear (not weakly linearly degenerate \cite{lizhoukong0}) characteristic field develops a singularity in finite time. Such lack of regularity is the major difficulty in analyzing these systems. With enormous efforts, the well-posedness theories of small total variation solutions for \eqref{CL}, including the compressible Euler equations and the p-system, are fairly well understood \cite{Bressan,Dafermos, Glimm}. The next natural question is the theory of large data, which is, however, widely open. Even for some important physical systems, such as the compressible Euler equations and the p-system, basic questions, such as whether a shock will form in finite time, are not completely understood when the smallness condition on the initial data is missing. We will address this open problem in this paper for the p-system and the full compressible Euler equations. The beautiful result of Lax \cite{lax2}, along with some expositions such as \cite{Evans}, left readers with the impression that, at least for the p-system, for $C^1$ initial data away from vacuum, a singularity will form in finite time if and only if there is some compression initially, without any smallness assumption. This, however, is not quite accurate. When adapting \cite{lax2} to the p-system, the control of the crucial term $\frac{1}{c(v)}$ for the sound speed $c(v)=\sqrt{-p_v}$ is very important. This term is singular if the density tends to zero.
On the other hand, the $L^\infty$ estimate through Riemann invariants offers an upper bound on the density, without control on the lower bound. For small solutions, the smallness condition comes into play. For large solutions, this becomes a serious issue. In general, it is not possible to have a positive constant lower bound for the density. Indeed, a Riemann problem connecting the two extreme sides of two interacting strong rarefaction waves generates vacuum instantaneously when $t>0$, \cite{smoller}. Smoothing out this data implies the existence of a $C^1$-solution such that $\inf_{x}\rho(x,t)\rightarrow0$ as $t\rightarrow+\infty$. An example of Lipschitz continuous solutions can be found in \cite{Riemann} and \cite{courant}, where the density decays in time at a rate of $\frac{1}{t}$. If one looks into this problem more carefully, the argument in \cite{lax2} is valid for the p-system with large initial data and with pressure law (\ref{p3}) only when $\gamma\ge 3$, which does not include the most practical case $1<\gamma<3$ in gas dynamics. In fact, when $\gamma\ge 3$, control of the lower bound of the density is not needed; see also a generalization to the full Euler equations by Chen, Young and Zhang \cite{G6}. Therefore, the real matter of the open problem is to establish finite time singularity formation for both the p-system and the full compressible Euler equations in the most physical case $1<\gamma<3$. A more in-depth discussion of Lax's result \cite{lax2} will be presented in Section 2 of this paper. The main purpose of this paper is to establish the finite time shock formation result for both the p-system and the full Euler equations without smallness assumptions on the initial data, when the gas is in the physical regime, i.e., $1<\gamma<3$. We introduce a new, elementary approach to establish a time-dependent density lower bound, which suffices for the characteristic analysis leading to finite time shock formation results even when the initial data are large.
For isentropic flow with a $\gamma$-law pressure, our result shows that if the initial datum is smooth with a positive lower bound for the density, then the classical solution of the Cauchy problem for the p-system breaks down in finite time {\bf if and only if} there is an initial compression. The precise statement is in Theorem \ref{p_sing_thm}. We emphasize that the approach introduced in the proof of this theorem is elementary but powerful. This approach can be generalized to deal with the p-system with a general pressure law, giving a complete picture of finite time singularity formation for large solutions. Such a generalization is presented in Section 2.4 of this paper. Furthermore, this approach is applicable to the full Euler equations for non-isentropic flows. We see from the above that for isentropic flow, the singularity formation theory is the same whether the data are small or large. However, the situation is much more complicated for non-isentropic flow. When the initial data are small, \cite{lizhoukong0,lizhoukong,Liu1} showed that if the initial data are a generic small smooth perturbation near a constant equilibrium, then an initial compression in any truly nonlinear (not weakly linearly degenerate \cite{lizhoukong0}) characteristic field develops a singularity in finite time, as in the p-system. When the data are large, this expectation is no longer true for the full Euler equations. In Section 3.5, we provide an explicit example showing that for a certain class of non-trivial initial data, even data periodic in the space variable with non-zero derivatives, global classical solutions exist. This is in sharp contrast to the small data theory, where non-trivial periodic initial data lead to finite time breakdown of classical solutions. Therefore, in order to prove finite time singularity formation results for the full Euler equations with large data, it is natural to impose some conditions to exclude this class of initial data.
In Section 3, we identify such conditions and establish finite time singularity formation results when the initial compression is slightly stronger than a critical value, which can be attained by the global classical solutions constructed in our example. A more detailed discussion is provided at the end of this paper. When some restrictions, such as compactness of the support of the initial data near a constant equilibrium, are imposed, there are some remarkable results on finite time singularity formation for the compressible Euler equations in higher space dimensions. We refer the readers to \cite{tms1,rammaha,sideris} for the classical compressible Euler equations, and to \cite{Chris, ps} for the relativistic Euler equations. The results in this paper in one space dimension offer a more complete and clear picture of the mechanism, occurrence, and type of singularity formation. \section{Shock formation for p-system}\label{section2} In this section, we study singularity formation for the p-system \eqref{p1}$\sim$\eqref{p3}. The proof of our main theorem (Theorem \ref{p_sing_thm}) is based on the study of Lax's characteristic decomposition, established first for general hyperbolic systems with two unknowns in \cite{lax2}. For the readers' convenience, in Subsection \ref{subsection2.1} we first review this well-known result of Lax \cite{lax2}. Then in Subsection \ref{subsection2.2}, we present a careful adaptation of Lax's method in \cite{lax2} to the p-system with $\gamma$-law pressure. We then explain why Lax's result \cite{lax2} for small smooth initial data for general $2\times 2$ systems actually offers the singularity formation result for the p-system without smallness restrictions on the initial data for $\gamma$-law pressure, provided that $\gamma\geq 3$. We also spell out why his result does not include the most physical cases when $1<\gamma<3$.
In the latter case, a careful study of the lower bound of the density is needed, which is achieved in Subsection \ref{subsection2.3}, leading to the first main result of this paper, Theorem \ref{p_sing_thm}. Finally, in Subsection \ref{general_p}, we extend the result for the p-system with $\gamma$-law pressure to the p-system with a general pressure law. \subsection{Lax's result for $2\times 2$ systems\label{subsection2.1}} This part is basically taken from Lax's paper \cite{lax2} in 1964. Consider a system of two first-order partial differential equations \begin{equation}\label{22_1} \begin{split} u_t+f_x=0\,,\\ v_t+g_x=0\,, \end{split} \end{equation} where $f$ and $g$ are functions of $u$ and $v$. Carrying out the differentiation in \eqref{22_1}, we obtain \begin{equation}\label{22_2} {\bf u}_t +A\, {\bf u}_x=0\,, \end{equation} where \[{\bf u}= \left( \begin{array}{l} u\\ v \end{array}\right) \qquad \text{and} \qquad A=\left( \begin{array}{lr} f_u&f_v\\ g_u&g_v \end{array}\right)\,. \] Suppose that this system is strictly hyperbolic, i.e.\ the matrix $A$ has real and distinct eigenvalues $\lambda<\mu$ for the relevant values of $u$ and $v$. Use ${\bf l}_{\lambda}$ and ${\bf l}_{\mu}$ to denote the left eigenvectors of $A$ corresponding to the eigenvalues $\lambda$ and $\mu$, respectively. Multiplying \eqref{22_2} by ${\bf l}_{\lambda}$ and ${\bf l}_{\mu}$ respectively, we have \[ {\bf l}_{\lambda}\cdot {\bf u}'=0\,,\qquad {\bf l}_{\mu}\cdot {\bf u}^\backprime=0\,, \] where we denote \[ \prime=\partial_t+\lambda\partial_x\,,\qquad \backprime=\partial_t+\mu\partial_x\,. \] Suppose there exist integrating factors $\phi$ and $\psi$ such that $\phi\, {\bf l}_{\lambda} =\nabla_{(u,v)} w(u, v)$ and $\psi\, {\bf l}_{\mu} =\nabla_{(u,v)} z(u, v)$, and therefore \begin{equation}\label{22_3} w'=\phi\,{\bf l}_{\lambda}\cdot {\bf u}'=0\,,\qquad z^\backprime=\psi\,{\bf l}_{\mu}\cdot {\bf u}^\backprime=0\,,
\end{equation} for some functions $w(u,v)$ and $z(u,v)$, which are called Riemann invariants along the characteristics with speeds $\lambda$ and $\mu$, respectively. Therefore, the $L^\infty$ norms of $w$ and $z$ are bounded by the initial data. We remark that such $\phi$ and $\psi$ always exist, at least locally. Thus, for general hyperbolic systems with two unknowns, there always exist two Riemann invariants for the two families, if we restrict the initial data to be a small perturbation near a constant equilibrium. This is also one of the reasons that Lax's result in \cite{lax2} is a small data theory. For many systems, such as the p-system, the Riemann invariants are naturally well-defined globally; therefore, the smallness restriction is not an issue for this step. We focus on $w$; the case of $z$ can be treated in a similar manner. Differentiating $w'=0$ in \eqref{22_3} with respect to $x$, we have \begin{equation}\label{22-4} w_{tx}+\lambda w_{xx}+\lambda_{w} w_x^2+ \lambda_{z} w_x z_x=0\,. \end{equation} Also by \eqref{22_3}, we observe from \[ 0=z^{\backprime}=z'-(\lambda-\mu)z_x\,, \] that \begin{equation}\label{22-5} z_x=\frac{z'}{\lambda-\mu}\,. \end{equation} Substituting \eqref{22-5} into \eqref{22-4} and denoting \[\alpha:=w_x\,,\] one finds \begin{equation}\label{22-6} \alpha'+\lambda_{w} \alpha^2+\frac{\lambda_{z}}{\lambda-\mu} z' \alpha=0\,. \end{equation} Let $h$ be a function of $w$ and $z$ satisfying \[ h_z=\frac{\lambda_{z}}{\lambda-\mu} \,. \] Using $w'=0$ in \eqref{22_3}, we have \[ h'=h_w w'+h_z z'=\frac{\lambda_{z}}{\lambda-\mu} z'\,. \] This, together with (\ref{22-6}), gives \begin{equation}\label{al_lax} \alpha'+\lambda_{w} \alpha^2+ h' \alpha=0\,. \end{equation} Multiplying \eqref{al_lax} by $e^h$ and denoting \[ \widetilde{\alpha}:=e^h \alpha\,, \] we finally obtain \begin{equation}\label{22_8} \widetilde{\alpha}'=-a {\widetilde{\alpha}}^2\, \end{equation} with \[ a:=e^{-h}\lambda_{w}\,.
\] This Riccati-type equation gives us a clear passage to study singularity formation and/or global existence of classical solutions for hyperbolic systems with two unknowns. In fact, we can formally solve for the gradient variable $\widetilde{\alpha}$ along a characteristic $x(t)$ defined by $$\frac{d x(t)}{dt}=\lambda,\ x(0)=x_0,$$ to obtain \[ \frac{1}{\widetilde{\alpha}(x(t), t)}=\frac{1}{\widetilde{\alpha}(x_0,0)}+\int_0^t a(x(\sigma), \sigma) ~d\sigma \] where the integral is taken along the characteristic curve $x(t)$. Note that $a\neq 0$ if $\lambda_w\neq 0$, which corresponds to the nonlinearity of the system. One does not expect singularity formation for linearly degenerate fields \cite{Liu1}. For simplicity, suppose that $a$ is always non-zero, which is also satisfied by the solution of the p-system if initially $a\neq0$. To fix the idea, we only consider the case $a>0$. If $\widetilde{\alpha}(0)<0$, i.e.\ the initial solution is compressive somewhere in the $\lambda$ direction, then $\widetilde{\alpha}(t)$ breaks down if there exists a time $t_*>0$ such that \begin{equation}\label{22_7} \int_0^{t_*} a(x(\sigma), \sigma) ~d\sigma= -\frac{1}{\widetilde{\alpha}(x_0,0)} \,, \end{equation} a condition which is guaranteed, for any $\widetilde{\alpha}(x_0,0)<0$, whenever \begin{equation}\label{22_9} \int_0^{\infty} a(x(\sigma), \sigma) ~d\sigma= \infty \,. \end{equation} In \cite{lax2}, Lax considered hyperbolic systems with uniform strict hyperbolicity, i.e.\ the characteristic speeds $\lambda$ and $\mu$ are uniformly separated from each other. With the help of the smallness condition on the initial data, there is a positive constant ${\bar a}$ such that $a\ge {\bar a}>0$, hence \eqref{22_9} is automatically justified. When the smallness condition on the initial data is lacking, in principle one expects similar results following Lax \cite{lax2}, provided the Riemann invariants are defined globally and \eqref{22_9} is satisfied.
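The blow-up mechanism in \eqref{22_8} can be made explicit for constant $a>0$ (our simplification, for illustration): the exact solution is $\widetilde{\alpha}(t)=\widetilde{\alpha}_0/(1+a\,\widetilde{\alpha}_0\,t)$, which diverges at $t_*=-1/(a\,\widetilde{\alpha}_0)$ precisely when $\widetilde{\alpha}_0<0$. A numerical sketch (the constants are illustrative):

```python
# alpha' = -a*alpha^2 with constant a > 0: the exact solution is
# alpha(t) = alpha0 / (1 + a*alpha0*t), blowing up at t* = -1/(a*alpha0)
# when alpha0 < 0 (compression); for alpha0 > 0 it decays globally.
a, alpha0 = 2.0, -0.5
t_star = -1.0 / (a * alpha0)                  # = 1.0

# Crude forward-Euler integration up to 0.9*t_star tracks the exact solution:
dt, t, alpha = 1e-5, 0.0, alpha0
while t < 0.9 * t_star - 1e-12:
    alpha += dt * (-a * alpha**2)
    t += dt
exact = alpha0 / (1.0 + a * alpha0 * t)
print(t_star, alpha, exact)                   # alpha and exact both near -5
```

As $t\to t_*$ the numerical solution diverges, mirroring the finite time gradient blow-up that condition \eqref{22_9} guarantees for variable $a$.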
For the p-system \eqref{p1}$\sim$\eqref{p3}, the Riemann invariants are defined globally, so it remains to check \eqref{22_9}. We will explain how far Lax's theory can reach in the next subsection. \subsection{Lax's large data theory on p-system: $\gamma\ge 3$.\label{subsection2.2}} We adapt Lax's theory on singularity formation to the following Cauchy problem for the p-system \eqref{p1}$\sim$\eqref{p3}, i.e., \begin{equation}\label{cp} \begin{cases} &\tau_t-u_x=0\,,\\ &u_t+p_x=0\,, \ p=K_1\,\tau^{-\gamma}, \\ &\tau(x,0)=\tau_0(x),\ u(x, 0)=u_0(x), \end{cases} \end{equation} where $K_1>0$ and $\gamma>1$ are constants. If the initial data are chosen to be a small smooth perturbation near a constant state $({\bar \tau}, {\bar u})$ with ${\bar \tau}>0$, then Lax's theory in \cite{lax2} applies directly. Our main concern in this subsection is how far it can reach when the initial data are not small. From now on, we make the following assumption throughout the rest of Section \ref{section2}: \begin{assumption}\label{p-assumption} Assume that $(\tau_0(x), u_0(x))$ are $C^1$ functions, and there are uniform positive constants $M_1$ and $M_2$ such that $$\|(\tau_0, u_0)(x)\|_{C^1}\le M_1,\ \tau_0\ge M_2.$$ \end{assumption} A direct calculation shows that \eqref{cp} has two characteristic speeds $$\lambda=-\mu=-c,$$ where $c$ is the Lagrangian sound speed \begin{equation} c:=\sqrt{-p_\tau}= \sqrt{K_1\,\gamma}\,{\tau}^{-\frac{\gamma+1}{2}}\,. \label{c def} \end{equation} The forward and backward characteristics are defined by \[ \frac{dx^{+}}{dt}=c \com{and} \frac{dx^{-}}{dt}=-c\,, \] respectively. We denote the corresponding directional derivatives along them by \[ \partial_+ := \dbyd t+c\,\dbyd x \com{and} \partial_- := \dbyd t-c\,\dbyd x\,, \] respectively. Furthermore, introducing the following useful quantity, cf.
\cite{G3}, \begin{equation}\label{3.00} \eta := \int^\infty_\tau{c\,d\tau} =\textstyle\frac{2\sqrt{K_1\gamma}}{\gamma-1}\, \tau^{-\frac{\gamma-1}{2}}>0\,, \end{equation} the globally defined Riemann invariants of \eqref{cp} are \begin{equation}\label{3.0} r:=u-\,\eta \com{and} s:=u+\,\eta\,, \end{equation} which satisfy \begin{equation}\label{3.1} \partial_+ s=0 \com{and} \partial_- r=0\,, \end{equation} respectively. Since $\eta$, $p$ and $c$ are all functions of $\tau$, their relations are as follows: \begin{equation} \tau=K_{\tau}\,\eta^{-\frac{2}{\gamma-1}}\,,\quad p=K_p\, \eta^{\frac{2\gamma}{\gamma-1}}\,,\ \quad c=\sqrt{-p_\tau}=K_c\, \eta^{\frac{\gamma+1}{\gamma-1}}\,, \end{equation} where $K_\tau$, $K_p$ and $K_c$ are positive constants given by \begin{equation} K_\tau:=\Big(\frac{2\sqrt{K_1\gamma}}{\gamma-1}\Big)^\frac{2}{\gamma-1}\,, \quad K_p:=K_1\,K_\tau^{-\gamma},\com{and} K_c:=\textstyle\sqrt{K_1\gamma}\,K_\tau^{-\frac{\gamma+1}{2}}\,. \label{Kdefs} \end{equation} Clearly, one has \begin{equation} K_p=\textstyle\frac{\gamma-1}{2\gamma}K_c \com{and} K_\tau K_c=\frac{\gamma-1}{2}\,. \label{KpKcRela} \end{equation} In this paper, we always use $K$ with subscripts to denote positive constants; we will not remind the reader of this again when there is no ambiguity. We observe from \eqref{3.1} that the $L^\infty$ norms of $(r, s)$ are bounded by those of the initial data, which leads to uniform $L^\infty$ bounds on $u$ and $\eta(\tau)$ with the help of \eqref{3.0}. From \eqref{3.00}, one finds a uniform positive lower bound on the specific volume $\tau$, or equivalently, a uniform upper bound on the density $\rho=\frac{1}{\tau}$. However, we remark that such estimates do not offer any control on the lower bound of the density $\rho$ (or, the upper bound of $\tau$). Following the procedure of the last subsection in deriving \eqref{22_8}, c.f.
\cite{G3}, the good gradient variables are \[ y := \eta^{\frac{\gamma+1}{2(\gamma-1)}}\,s_x \com{and} q := \eta^{\frac{\gamma+1}{2(\gamma-1)}}\,r_x\,, \] which satisfy the following Riccati type equations: \begin{align} \partial_+ y &= - a_2 \, y^2\,, \label{p_y_eq}\\ \partial_- q &= - a_2 \, q^2\,, \label{p_q_eq} \end{align} where \begin{align}\label{a2} {a}_2 &:= K_c\,{\textstyle\frac{\gamma+1}{2(\gamma-1)}}\, \eta^{\frac{3-\gamma}{2(\gamma-1)}}\,. \end{align} We note that the behavior of ${a}_2$ is purely determined by $\eta$. Since $\eta$ has a uniform upper bound, when $\gamma\ge 3$, there exists a uniform constant ${\bar a}_2>0$ such that $a_2\ge {\bar a}_2$. In this case \eqref{22_9} is justified, and Lax's theory applies without any smallness condition. {\begin{proposition}\label{prop_p_1}(A corollary from \cite{lax2}) Assume that $(\tau_0(x), u_0(x))$ satisfy the conditions in Assumption \ref{p-assumption}. When $\gamma\geq3$, the classical solution of (\ref{cp}) breaks down if there is a point $x_*\in\mathbb{R}$ such that \begin{equation}\label{laxcompress} s_x(x_*,0)<0 \com{or} r_x(x_*,0)< 0\,. \end{equation} \end{proposition}} \begin{proof} We will show that if $s_x(x,0)<0$ or $r_x(x,0)< 0$ for some $x$, then a singularity forms in finite time. Without loss of generality, assume that $s_x(x^*,0)<0$ for some $x^*$; then $y(x^*,0)<0$. Denote the forward characteristic passing through $(x^*,0)$ as $x^+(t)$. By \eqref{p_y_eq}, \begin{equation}\label{p_prop_2_proof} \frac{1}{y(x^+(t), t)}=\frac{1}{y(x^*, 0)}+\int_0^t \, {a_2}(x^+(\sigma), \sigma)\;d\sigma\,, \end{equation} where $a_2\ge {\bar a}_2$ for some uniform constant ${\bar a}_2>0$. Therefore, the right-hand side of \eqref{p_prop_2_proof}, which is initially negative, reaches zero in finite time, so $y$ blows up in finite time. \end{proof} However, for most physical gases, where $1<\gamma<3$, a positive lower bound for the function ${a}_2$ requires a positive lower bound on the density.
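The algebraic relations \eqref{Kdefs}$\sim$\eqref{KpKcRela}, as well as the identity $\eta^{\frac{\gamma+1}{2(\gamma-1)}}=\sqrt{c/K_c}$ implicit in the definition of the gradient variables, are easy to confirm numerically. A minimal Python sketch (the values of $K_1$, $\gamma$ and $\tau$ are hypothetical samples; any $K_1>0$, $\gamma>1$, $\tau>0$ will do):

```python
K1, gamma = 1.4, 5.0 / 3.0   # sample values of the constants in (cp)

# the constants of (Kdefs)
K_tau = (2.0 * (K1 * gamma) ** 0.5 / (gamma - 1.0)) ** (2.0 / (gamma - 1.0))
K_p = K1 * K_tau ** (-gamma)
K_c = (K1 * gamma) ** 0.5 * K_tau ** (-(gamma + 1.0) / 2.0)

# the two identities in (KpKcRela)
assert abs(K_p - (gamma - 1.0) / (2.0 * gamma) * K_c) < 1e-12
assert abs(K_tau * K_c - (gamma - 1.0) / 2.0) < 1e-12

# weight in the gradient variables: eta^{(gamma+1)/(2(gamma-1))} = sqrt(c/K_c)
tau = 0.73
c = (K1 * gamma) ** 0.5 * tau ** (-(gamma + 1.0) / 2.0)
eta = 2.0 * (K1 * gamma) ** 0.5 / (gamma - 1.0) * tau ** (-(gamma - 1.0) / 2.0)
assert abs(eta ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) - (c / K_c) ** 0.5) < 1e-9
```

The last identity is the one used later to rewrite $y$ and $q$ as $\sqrt{c/K_c}\,s_x$ and $\sqrt{c/K_c}\,r_x$ in the proof of the density bound.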
We remark that, for generic smooth initial data without initial vacuum, even for global smooth solutions of \eqref{cp}, the density does not in general have a constant positive lower bound. An example of Lipschitz continuous solutions can be found in \cite{Riemann} and \cite{courant}, where the density decays in time at a rate of $\frac{1}{t}$. Therefore, a new idea is needed to obtain a sufficiently sharp time-dependent positive lower bound on the density. This is one of the main contributions of this paper, and it will be addressed in the next subsection. \subsection{Shock formation in p-system: $\gamma>1$\label{subsection2.3}} In this subsection, for all $\gamma>1$, we prove singularity formation for the Cauchy problem of the p-system when the initial data contain some compression, and global existence of smooth solutions otherwise. This is achieved by establishing a sharp enough time-dependent positive lower bound on the density. The following theorem is the first main result of this paper. \begin{theorem}\label{p_sing_thm} For $\gamma>1$, if $(\tau_0(x), u_0(x))$ satisfy the conditions in Assumption \ref{p-assumption}, then the Cauchy problem (\ref{cp}) has a unique global-in-time classical solution if and only if \begin{equation}\label{p_lemma_con} s_x(x,0)\geq0 \com{and} r_x(x,0)\geq 0,\com{for all} x\in\mathbb{R}\,. \end{equation} \end{theorem} \begin{remark} At a point $(x,t)$, the solution of \eqref{cp} is said to be forward rarefactive (resp. compressive) if $s_x(x,t)\geq0$ (resp. $s_x(x,t)<0$); the solution is said to be backward rarefactive (resp. compressive) if $r_x(x,t)\geq0$ (resp. $r_x(x,t)<0$). Hence this theorem can be understood as saying that a global-in-time classical solution of the p-system exists if and only if the initial data are nowhere compressive. If \eqref{p_lemma_con} fails at some point, that is, if the initial data contain some compression, then gradient blowup happens in finite time.
\end{remark} In order to prove Theorem \ref{p_sing_thm}, the following observation plays an important role. From \eqref{p_y_eq} and \eqref{p_q_eq}, using the comparison principle for ODEs, with the help of the two non-negative constants $Y$ and $Q$ defined by \begin{equation} Y=\max\Big\{0,\ \sup_x\{y(x,0)\}\Big\}, \quad Q= \max\Big\{0,\ \sup_x\{q(x,0)\}\Big\}, \end{equation} it is easy to see that the following lemma holds. \begin{lemma}\label{lemma_p_2} If $(\tau_0(x), u_0(x))$ satisfy Assumption \ref{p-assumption}, it holds for any $C^1$ solution $(\tau, u)(x,t)$ of (\ref{cp}) that \[y(x,t)\leq Y, \quad \text{and} \quad q(x,t)\leq Q\,.\] \end{lemma} With the help of Lemma \ref{lemma_p_2}, we are able to prove the following key estimate on the lower bound of the density (equivalently, the upper bound of $\tau$) for $1<\gamma<3$, covering most physical cases for polytropic gases. \begin{lemma}\label{density_low_bound_1-3} Let $(\tau, u)(x,t)$ be a $C^1$ solution of (\ref{cp}) defined on a time interval $[0, T]$ for some $T>0$, with initial data $(\tau_0(x), u_0(x))$ satisfying the conditions in Assumption \ref{p-assumption}. If $1<\gamma<3$, then there is a positive constant $K_0$ depending only on $\gamma$, such that for any $x\in \mathbb{R} $ and $t\in [0, T)$, \[ \tau(x, t)\le \big[{\tau_0}^{\frac{3-\gamma}{4}}(x)+K_0(Y+Q) t\big]^{\frac{4}{3-\gamma}}. \] \end{lemma} \begin{proof} From the definitions of $y$ and $q$, it is clear that $$y=s_x\sqrt{\frac{c}{K_c}},\ \quad q=r_x \sqrt{\frac{c}{K_c}} ,$$ which implies that $$(y+q)=\sqrt{\frac{c}{K_c}} (r_x+s_x)=2u_x\sqrt{\frac{c}{K_c}}.$$ Therefore, we read from the mass equation that $$\sqrt{c}\,\tau_t=\frac12 \sqrt{K_c}(y+q).$$ Using the formula \eqref{c def} for the sound speed and Lemma \ref{lemma_p_2}, one finds \begin{equation} \tau^{-\frac{\gamma+1}{4}}\tau_t\le \frac12(K_1\gamma)^{-\frac14}\sqrt{K_c}(Y+Q).
\end{equation} When $1<\gamma<3$, we have $\frac{\gamma+1}{4}<1$; then for any $x\in \mathbb{R}$ and $t\in [0, T)$, a simple time integration shows that $$\tau(x, t)\le \big[{\tau_0}^{\frac{3-\gamma}{4}}(x)+K_0(Y+Q) t\big]^{\frac{4}{3-\gamma}},$$ where $K_0=\frac{3-\gamma}{8}(K_1\gamma)^{-\frac14}\sqrt{K_c}$. This completes the proof of this lemma. \end{proof} \begin{remark} We remark that, for purely rarefactive initial data, i.e. initial data satisfying the conditions of Assumption 2.1 and \eqref{p_lemma_con}, Lin \cite{lin2} proved that the density of any Lipschitz solution of \eqref{cp} has a positive lower bound of order $\frac{1}{1+t}$, through a relatively complicated approximation generated by a polygonal scheme. Lemma \ref{density_low_bound_1-3} works for generic data as long as $\gamma\in (1, 3)$. Although the time-dependent bound is not as sharp as that in \cite{lin2}, the proof is much simpler and more elementary. For $\gamma=3$, the same idea gives an upper bound of $\tau$ with exponential growth in time. A generalization of \cite{lin2} to generic initial data for all $\gamma>1$ has been carried out in our work \cite{CPZ}. \end{remark} We now give a proof of Theorem \ref{p_sing_thm}. \begin{proof}[\bf Proof of Theorem \ref{p_sing_thm}]\underline{\bf 1) Sufficiency}. In this part, we prove that under Assumption 2.1, if the initial data satisfy condition \eqref{p_lemma_con}, then problem \eqref{cp} admits a unique global $C^1$ solution. The proof is based on the local existence results and a continuity argument, with the help of a priori $C^1$ estimates and the lower bound of density provided in \cite{lin2}. First of all, the local-in-time existence of $C^1$ solutions for \eqref{cp} can be proved by classical methods, c.f. the theory of symmetric hyperbolic systems \cite{LiBook}, where the life-span depends on the $C^1$-norm of the initial data and the positive lower bound of $\tau_0$.
Second, the uniform $L^\infty$ bounds of $(\tau, u)(x,t)$ follow from those of $(r, s)(x,t)$, which are constant along their respective characteristics; see \eqref{3.1}. For the Lipschitz norms, we know from \eqref{p_lemma_con} that $y(x,0)\ge 0$ and $q(x, 0)\ge 0$, and thus $\|y(x,t)\|_{L^\infty}\le Y$ and $\|q(x,t)\|_{L^\infty}\le Q$. Now, the result of \cite{lin2} shows that, for initial data satisfying Assumption 2.1 and \eqref{p_lemma_con}, for any Lipschitz continuous solution of \eqref{cp}, there is a positive constant ${\bar K}_0$, independent of time, such that $$\tau\le {\bar K}_0 (1+t).$$ We know this is true for our $C^1$ solution as well, in view of the weak-strong uniqueness for \eqref{cp}, c.f. \cite{Dafermos}. Therefore, we deduce from the definitions of $y$, $q$, and $\eta$ that there exists a function ${\widetilde C}(t)$ satisfying $1\le {\widetilde C}(t)<\infty$ for any positive finite time $t$, such that $$\|(r_x, s_x, \tau_x, u_x)(x, t)\|_{L^\infty}\le {\widetilde C}(t).$$ This estimate enables us to extend the local $C^1$ solution to a global one. \noindent\underline{\bf 2) Necessity}. In this part, we shall prove that under Assumption 2.1, if the initial data fail to satisfy condition \eqref{p_lemma_con} at some point ${x^*}\in \mathbb{R}$, then the derivatives of the $C^1$ solution of \eqref{cp} must blow up in finite time. Without loss of generality, we assume that $s_x(x^*,0)<0$, so that $y(x^*, 0)<0$. When $\gamma\ge 3$, this was shown in the last subsection. Hence we only have to consider the case $1<\gamma<3$, in which $ {a}_2$ vanishes when the density goes to zero. We denote the forward characteristic passing through $(x^*,0)$ as $x^+(t)$. In view of \eqref{p_prop_2_proof}, $$ \frac{1}{y(x^+(t), t)}=\frac{1}{y(x^*, 0)}+\int_0^t \, {a_2}(x^+(\sigma), \sigma)\;d\sigma\, . $$ To show that $y$ blows up in finite time, it is enough to show that \[ \int_0^\infty \, {a_2(x^+(t), t)}\;dt=\infty\,, \] where the integral is along the characteristic $x^+(t)$.
We read from Lemma \ref{density_low_bound_1-3} and the definition of $a_2$ that \[ a_2(x^+(t), t)\geq \frac{\gamma +1}{4} K_{\tau}^{-\frac{\gamma +1}{4}}\big[\tau_0^{\frac{3-\gamma}{4}}(x^*)+K_0(Y+Q)t\big]^{-1}. \] Hence, \[\int_0^\infty \, {a_2(x^+(t), t)}\;dt=\infty\,.\] Therefore, $y$ and $s_x$ blow up in finite time. The proof of the theorem is completed. \end{proof} \begin{remark}\label{remark_shock} Finally, we explain why the singularity in Theorem \ref{p_sing_thm} is in fact a shock wave satisfying the Lax entropy condition. First, at the point $(x^*,t^*)$ where the first singularity forms, some characteristics of the same family must intersect each other. We prove this by contradiction: if no characteristics of the same family intersect at or before time $t^*$, then the $C^1$ solution exists for $t\in[0,t^*]$, which contradicts the singularity formation at time $t^*$. \begin{figure}[htp] \centering \includegraphics[width=.5\textwidth]{proof1} \caption{Shock formation}\label{fig0} \end{figure} Hence, without loss of generality, we can find two forward characteristics $l_1$ and $l_2$ intersecting each other at $(x^*,t^*)$ (see Figure \ref{fig0}), with $s_x<0$ in the region in the upper $(x,t)$-plane bounded by $l_1$, $l_2$ and the $x$-axis, because $s_x\rightarrow -\infty$ as $(x,t)$ approaches $(x^*,t^*)$. We note that $s$ is constant along $l_i$, $i=1,2$. Hence the solution is discontinuous when the singularity forms, because \[ \lim_{x\rightarrow x^*-}s(x,t^*)> \lim_{x\rightarrow x^*+}s(x,t^*).\] Finally, we check the Lax entropy condition.
For smooth solutions before blowup, by \eqref{cp} and \eqref{3.00}$\sim$\eqref{3.1}, \begin{eqnarray} -cs_x&=&s_t\nonumber\\ &=&u_t+\eta_t\nonumber\\ &=&-p_x+\eta_t\nonumber\\ &=&\partial_-\eta\,.\nonumber \end{eqnarray} Since $c$ is an increasing function of $\eta$, this implies that $\partial_-c\rightarrow+\infty$ as $(x,t)$ approaches $(x^*,t^*)$, and hence the left limit of the sound speed is greater than the right limit at $(x^*,t^*)$. Therefore, the Lax entropy condition is satisfied for this discontinuity. If one extends the solution beyond the time when the singularity happens by solving the Riemann problem of \eqref{cp} at $(x^*,t^*)$, a shock appears in the admissible weak solution. \end{remark} \subsection{p-system with general pressure law\label{general_p}} In this subsection, we generalize the method developed in the previous subsections to the following Cauchy problem for the p-system, \begin{equation}\label{gp} \begin{cases} &\tau_t-u_x=0\, ,\\ &u_t+p_x=0\, , \\ &\tau(x,0)=\tau_0(x),\ u(x, 0)=u_0(x), \end{cases} \end{equation} with a general pressure law $p(\tau)\in C^3(0,\infty)$ satisfying \begin{equation}\label{p_gen} p_\tau<0,\ \ p_{\tau\tau}>0 \end{equation} and \begin{equation}\label{p_gen2} \lim_{\tau\rightarrow 0}p(\tau)=\infty,\ \ \lim_{\tau\rightarrow\infty}p(\tau)=0 \com{and} \int_1^\infty \sqrt{-p_\tau}~d\tau<\infty. \end{equation} Here condition \eqref{p_gen} is dictated by physics when one uses the p-system to model gas dynamics, c.f. \cite{menikoff}. Furthermore, we assume that \begin{equation}\label{c_con_gen} \int_0^1 \sqrt{-p_\tau}~d\tau=\infty \end{equation} which includes the $\gamma$-law pressure case. We also identify the following condition: \begin{assumption} There exists some positive constant $A$ such that for any $\tau>0$, \begin{equation}\label{condition1} (5+A)(p_{\tau\tau})^2-4p_\tau p_{\tau\tau\tau}\geq 0\,. \end{equation} \end{assumption} \begin{remark} Condition \eqref{condition1} is fairly mild because the constant $A$ can be arbitrarily large.
For example, the $\gamma$-law pressure $p=k\tau^{-\gamma}$ with $\gamma>0$ satisfies conditions \eqref{condition1} and \eqref{p_gen}, and the pressure $p=k\tau^{-\gamma}$ with $\gamma>1$ satisfies conditions \eqref{condition1} and \eqref{p_gen}$\sim$\eqref{c_con_gen}. \end{remark} Applying Lax's method of Subsections 2.1--2.2 to this case (the detailed calculations can be found in \cite{G4}), it is not hard to find that the Lagrangian sound speed is \[ c\equiv c(\tau)=\sqrt{-p_\tau}, \] and the Riemann invariants are \[ s:=u+\displaystyle\int_\tau^{1} c(\tau) d\tau \com{and} r:=u-\displaystyle\int_\tau^1 c(\tau) d\tau, \] which satisfy \begin{equation}\label{sr_con_gen} \partial_+ s=0\com{and} \partial_-r=0\,, \quad \partial_{\pm}=\partial_t\pm c \partial_x. \end{equation} If we define \begin{equation} y:=\sqrt{c}\,s_x,\qquad q:=\sqrt{c}\,r_x,\label{GE yq def} \end{equation} then \begin{eqnarray} \partial_+y&=&-a({\tau}) y^2, \label{new ode1}\\ \partial_-q&=&-a({\tau}) q^2, \label{new ode2} \end{eqnarray} where \begin{eqnarray} a({\tau}):=\frac{p_{{\tau}{\tau}}}{4(-p_{\tau})^{\frac{5}{4}}}>0\label{GE a2}. \end{eqnarray} Similarly to Lemma \ref{lemma_p_2}, defining the non-negative constants $$Y=\max\Big\{0,\ \sup_x\{y(x,0)\}\Big\}, \quad Q= \max\Big\{0,\ \sup_x\{q(x,0)\}\Big\},$$ we have {\begin{lemma}\label{lemma_p_2_gen} If $(\tau_0(x), u_0(x))$ satisfy Assumption \ref{p-assumption}, then it holds for any $C^1$ solution $(\tau, u)(x,t)$ of (\ref{gp}) that \[y(x,t)\leq Y, \quad \text{and} \quad q(x,t)\leq Q\,.\] \end{lemma}} We can now state our theorem for the general pressure law case. \begin{theorem}\label{p_sing_gen} Assume that the initial data $(\tau_0(x), u_0(x))$ satisfy Assumption 2.1, and that the pressure satisfies \eqref{p_gen}$\sim$\eqref{c_con_gen} and Assumption 2.9. Then a global-in-time classical solution of (\ref{gp}) exists if and only if \begin{equation}\label{p_lemma_con_gen} s_x(x,0)\geq0 \com{and} r_x(x,0)\geq 0,\com{for all} x\in \mathbb{R}\,.
\end{equation} \end{theorem} \begin{remark} It is clear from our proof below that conditions \eqref{p_gen2}$\sim$\eqref{c_con_gen} are not necessary for the singularity formation. \end{remark} \begin{proof} As usual, one reads from \eqref{sr_con_gen} that $\|(r, s)(x,t)\|_{L^{\infty}}\le \|(r, s)(x, 0)\|_{L^\infty}$. Therefore, one finds uniform $L^\infty$ bounds for $u(x,t)$ and $\displaystyle\int_\tau^{1} c(\tau) d\tau$. It then follows from \eqref{c_con_gen} that there are positive constants $\tau_{min}$ and $c_{max}$, depending only on the initial data, such that \[ \tau(x, t)\ge \tau_{min},\ \quad c(x,t)\le c_{max}. \] If condition \eqref{p_lemma_con_gen} holds, the global existence can be proved in exactly the same way as in the first part of the proof of Theorem \ref{p_sing_thm}, together with the positive lower bound on the density provided in \cite{lin2}. If condition \eqref{p_lemma_con_gen} fails, by a similar argument as in the second part of the proof of Theorem \ref{p_sing_thm}, in order to prove singularity formation in finite time, it is sufficient to show \begin{equation}\label{int_a_infty_gen} \int_0^\infty a({\tau}(x(t),t))~dt=\infty\, , \end{equation} which is true if we can prove \begin{equation}\label{a_reci} \frac{1}{a(\tau(x,t))}= \frac{4(-p_{\tau})^{\frac{5}{4}}}{p_{{\tau}{\tau}}}\leq K_2+K_3 t \end{equation} for some positive constants $K_2$ and $K_3$. Indeed, a direct computation gives \[ \frac{1}{2}(y+q)=\frac{1}{2}\sqrt{c}(s_x+r_x)=\sqrt{c}\,u_x=\sqrt{c}\,\tau_t. \] Then by Lemma \ref{lemma_p_2_gen}, we have \[ \Big(\int_{\tau_{min}}^{\tau}(-p_\tau(\tau))^{\frac{1}{4}}~d\tau\Big)_t=\Big(\int_{\tau_{min}}^{\tau}\sqrt{c(\tau)}~d\tau\Big)_t=\frac{1}{2}(y+q)\leq\frac{1}{2}(Y+Q). \] Hence \begin{equation}\label{gen_final} \int_{\tau_{min}}^{\tau(x,t)}(-p_\tau(\tau))^{\frac{1}{4}}~d\tau\leq \int_{\tau_{min}}^{\tau(x,0)}(-p_\tau(\tau))^{\frac{1}{4}}~d\tau +\frac{1}{2}(Y+Q)t\leq K_4+K_5 t \end{equation} for some positive constants $K_4$ and $K_5$.
Using the fact that $\tau>\tau_{min}>0$ and \eqref{gen_final}, \eqref{a_reci} follows if we can show that \begin{equation}\label{gen_final2} \Big(\frac{4(-p_{\tau})^{\frac{5}{4}}}{p_{{\tau}{\tau}}}\Big)_\tau\leq A(-p_\tau(\tau))^{\frac{1}{4}}, \end{equation} for some positive constant $A$. Since \eqref{gen_final2} follows from \eqref{condition1} in Assumption 2.9, we complete the proof of this theorem. \end{proof} \section{Full compressible Euler equations} In this section, we consider the following Cauchy problem for the full compressible Euler equations \begin{equation}\label{fulleuler0} \begin{cases} \tau_t-u_x=0\,,\\ u_t+p_x=0\,,\ p=Ke^{\frac{S}{c_v}}\tau^{-\gamma},\ \gamma>1,\\ S_t=0\,,\\ (\tau, u, S)(x, 0)=(\tau_0, u_0, S_0)(x). \end{cases} \end{equation} Here, we have replaced the energy equation with the entropy equation. For smooth solutions, we see that $S(x,t)=S_0(x):=S(x)$. Throughout this section, we require that the initial data $(\tau_0, u_0, S_0)(x)$ satisfy the conditions in the following assumption. \begin{assumption}\label{assu} Assume that $(\tau_0(x), u_0(x))\in C^1(\mathbb{R})$, $S_0(x)\in C^2(\mathbb{R})$, and there are uniform positive constants $M_1$ and $M_2$ such that $$\|(\tau_0, u_0)(x)\|_{C^1}+\|S_0(x)\|_{C^2}\le M_1,\ \tau_0\ge M_2.$$ \end{assumption} For smooth solutions, it is often convenient to choose some new variables. Define \begin{equation} m:=e^{\frac{S}{2c_v}}>0\label{m def}, \quad c:=\sqrt{-p_\tau}= \sqrt{K\,\gamma}\,{\tau}^{-\frac{\gamma+1}{2}}\,e^{\frac{S}{2c_v}}\, , \end{equation} and \begin{equation} \eta := \int^\infty_\tau{\frac{c}{m}\,d\tau} = \textstyle\frac{2\sqrt{K\gamma}}{\gamma-1}\, \tau^{-\frac{\gamma-1}{2}}>0\,,\label{z def} \end{equation} where $c$ is the nonlinear Lagrangian sound speed. Direct calculations show that (c.f.
\cite{G3, G6}) \begin{align} \tau&=K_{\tau}\,\eta^{-\frac{2}{\gamma-1}}\,,\nonumber\\ p&=K_p\, m^2\, \eta^{\frac{2\gamma}{\gamma-1}}\,,\label{tau p c}\\ c&=c(\eta,m)=K_c\, m\, \eta^{\frac{\gamma+1}{\gamma-1}}\,.\nonumber \end{align} We remark that we still use $\eta$, $c$, and many other functions that appeared in Section 2 for the full Euler equations. These functions are natural extensions from isentropic flows to adiabatic ones, in the sense that they differ only by a positive constant multiple when $S$ is chosen as a constant. For $C^1$ solutions, the problem \eqref{fulleuler0} is equivalent to (c.f. \cite{Dafermos,smoller}) \begin{equation}\label{fulleuler1} \begin{cases} \eta_t+\frac{c}{m}\,u_x=0\,, \\ u_t+m\,c\,\eta_x+2\frac{p}{m}\,m_x=0\,,\\ m_t=0\,,\\ (\eta, u, m)(x, 0)=(\eta_0, u_0, m_0)(x)=(\eta(\tau_0(x)), u_0(x), m(S_0(x))). \end{cases} \end{equation} Due to the linear degeneracy, $m$ is independent of time in the regime of smooth solutions; we thus fix $m=m(x)=m_0(x)$ in the rest of this paper. Therefore, formally, one can still treat \eqref{fulleuler1} as a system of two (significant) equations, with fluxes (pressure) depending on $x$ explicitly. As in the case of isentropic flows, the two genuinely nonlinear characteristic fields are \begin{equation}\label{pmc_full} \frac{dx^+}{dt}=c \com{and} \frac{dx^-}{dt}=-c\,, \end{equation} and we denote the corresponding directional derivatives along these by \[ \partial_+ := \dbyd t+c\,\dbyd x \com{and} \partial_- := \dbyd t-c\,\dbyd x\,, \] respectively. Compared with the p-system, one significant difference for the full Euler system is the absence of Riemann invariants; instead, the Riemann variables are \begin{equation} r:=u-m\,\eta\,,\qquad s:=u+m\,\eta\,.
\label{r_s_def} \end{equation} which vary along the characteristics: \begin{align} \partial_+ s&=\frac{1}{2\gamma}\,\frac{c\, m_x}{m}\,(s-r)\,,\label{s_eqn}\\ \partial_- r&=\frac{1}{2\gamma}\,\frac{c\, m_x}{m}\,(s-r)\,.\label{r_eqn} \end{align} Therefore, without a smallness assumption on the solutions, the first non-trivial question one encounters is how to obtain $L^\infty$ estimates on the solutions. We remark that this question is trivial in the isentropic case, since $r$ and $s$ are then invariant along their characteristics. Fortunately, this question was answered recently by G. Chen, R. Young and Q. Zhang in \cite{G6} under the following additional condition: \begin{assumption}\label{BV} Assume that the initial entropy $S_0(x)$ has finite total variation, so that \begin{equation} V := \frac{1}{2c_v}\int_{-\infty}^{+\infty}|S'(x)|\;dx = \int_{-\infty}^{+\infty}\frac{|m'(x)|}{m(x)}\;dx<\infty\,. \label{Vdef} \end{equation} \end{assumption} From Assumption 3.1, it is clear that there are positive constants $M_L$, $M_U$, $M_s$ and $M_r$ such that \begin{equation} 0 < M_L < m(x) < M_U\,, \quad |s_0(x)|<M_s,\ \quad |r_0(x)|<M_r\,. \label{m_bounds} \end{equation} For ${\overline V}=\frac{V}{2\gamma}$, we now define \begin{align*} N_1 &:= M_s+\overline V\,M_r+\overline V\,(\overline V\,M_s+{\overline V}^2\,M_r) \,e^{{\overline V}^2},\\ N_2 &:= M_r+\overline V\,M_s+\overline V\,(\overline V\,M_r+{\overline V}^2\,M_s) \,e^{{\overline V}^2}. \end{align*} The following proposition is proved in \cite{G6} by a highly non-trivial characteristic method. \begin{proposition}{\em \cite{G6}} \label{Thm_upper} Assume that the initial data $(\tau_0, u_0, S_0)(x)$ satisfy the conditions in Assumptions 3.1 and 3.2.
If $(\tau(x, t), u(x,t), S(x))$ is a $C^1$ solution of \eqref{fulleuler0} for $t\in[0,T)$ for some positive $T$, then it holds that \begin{equation} |s(x,t)|\le N_1{M_U}^{\frac{1}{2\gamma}}, \quad |r(x,t)|\le N_2{M_U}^{\frac{1}{2\gamma}}, \end{equation} \begin{equation}\label{u_rho_bounds} |u(x,t)|\leq\frac{N_1+N_2}{2}{M_U}^{\frac{1}{2\gamma}},\ \eta(x,t)\leq\frac{N_1+N_2}{2}{M_L}^{\frac{1}{2\gamma}-1}:=E_U. \end{equation} Therefore, there is a positive constant $M_{\rho}$ such that \begin{equation} \rho\le M_{\rho},\quad \tau\ge \frac{1}{M_{\rho}}.\end{equation} \end{proposition} The second major obstacle appears in the equations for the gradient variables. As in the p-system, following the wisdom of many previous works, c.f. \cite{G3, lax2, linliuyang}, a good choice is \begin{align} y &:= m^{-\frac{3(3-\gamma)}{2(3\gamma-1)}}\, \eta^{\frac{\gamma+1}{2(\gamma-1)}}\, (s_x - {\textstyle\frac{2}{3\gamma-1}}\,m_x\,\eta), \nonumber\\ q &:= m^{-\frac{3(3-\gamma)}{2(3\gamma-1)}}\, \eta^{\frac{\gamma+1}{2(\gamma-1)}}\, (r_x + {\textstyle\frac{2}{3\gamma-1}}\,m_x\,\eta)\,, \label{intr main} \end{align} which satisfy \begin{align} \partial_+ y &= a_0- a_2 \, y^2, \nonumber\\ \partial_- q &= a_0- a_2 \, q^2, \label{yq odes} \end{align} where \begin{align} {a}_0 &:= {\textstyle\frac{K_c}{\gamma}}\, \big[{\textstyle\frac{\gamma-1}{3\gamma-1}}\,m\,m_{xx} - {\textstyle\frac{(3\gamma+1) (\gamma-1)}{(3\gamma-1)^2}}\,m_x^2\big]\, m^{-\frac{3(3-\gamma)}{2(3\gamma-1)}}\, \eta^{\frac{3(\gamma+1)}{2(\gamma-1)}+1},\nonumber\\ {a}_2 &:= K_c\,{\textstyle\frac{\gamma+1}{2(\gamma-1)}}\, m^{\frac{3(3-\gamma)}{2(3\gamma-1)}}\, \eta^{\frac{3-\gamma}{2(\gamma-1)}}. \label{adefs} \end{align} Clearly, $a_0=0$ if $S_0(x)$ (and thus $m(x)$) is a constant. For general adiabatic flows, $a_0$ is not identically zero, so equations \eqref{yq odes} are not of Riccati type; this different ODE structure leads to different behaviors of solutions when the initial data are not uniformly small around a constant state.
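To preview the role of the ratio $a_0/a_2$, consider a toy model of \eqref{yq odes} in which the coefficients are frozen to constants, say $a_2\equiv 1$ and $a_0/a_2\equiv N^2$: data above the threshold $-N$ stay bounded and relax toward the equilibrium $+N$, while data below $-N$ blow up in finite time. This is only a sketch under the frozen-coefficient assumption; in the actual system $a_0$ and $a_2$ vary along characteristics. A small Python illustration:

```python
def integrate(y0, N, a2=1.0, t_end=6.0, n=600_000):
    """Forward-Euler integrate dy/dt = a2*(N^2 - y^2); stop early and
    return once y falls below a large negative cutoff (numerical blowup)."""
    dt = t_end / n
    y = y0
    for _ in range(n):
        y += dt * a2 * (N * N - y * y)
        if y < -1e6:
            return y
    return y

N = 1.0
# data above the threshold -N: the solution stays bounded, tends to +N
y_safe = integrate(y0=-0.5, N=N)
assert abs(y_safe - N) < 1e-2
# data below the threshold -N: finite-time blowup toward -infinity
y_blow = integrate(y0=-1.5, N=N)
assert y_blow < -1e6
```

The theorem below makes this threshold picture precise for the genuinely variable-coefficient system, with $N$ built from the a priori bounds of Proposition 3.3.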
Indeed, the classical theory of \cite{lizhoukong0,lizhoukong,Liu1} confirms that, when the initial data are uniformly arbitrarily small near a constant state away from vacuum, $y$ and/or $q$ blow up in finite time if and only if $y(x, 0)<0$ or $q(x, 0)<0$ at some point $x\in\mathbb{R}$. We first present an example to show that this is not the case for large initial data. \begin{example}\label{keyex} For any $C^1$ functions $S(x)$ and $\tau(x)>0$, \[ u=0,\quad S=S(x)\quad \text{and}\quad \tau=\tau(x) \] is a global $C^1$ (stationary) solution of \eqref{fulleuler0} if the initial data $S$ and $\tau$ are chosen such that \begin{equation}\label{pxx0} p_x(\tau(x), S(x))=0. \end{equation} Therefore, if we choose a smooth non-constant function $S(x)$ and then choose \begin{equation} \tau(x)=K_{\tau, S}\ \exp\left\{\frac{S(x)}{\gamma c_v}\right\}, \end{equation} for any positive constant $K_{\tau, S}$, then $(\tau(x), 0, S(x))$ is a smooth stationary solution of \eqref{fulleuler0}. In particular, if one chooses $S(x)$ to be a non-constant periodic function, then so is $\tau$; this gives a non-constant solution of \eqref{fulleuler0} which is periodic in both space and time. In order to fulfill the condition in Assumption \ref{BV}, a choice of $S(x)$ is $\frac{1}{x^2+1}$. For this class of solutions $(\tau(x), 0, S(x))$, a direct calculation shows \begin{equation} \label{excom} -q(x)=y(x)=\textstyle\frac{\gamma-1}{\gamma(3\gamma-1)}m_x m^{\frac{3(\gamma-3)}{2(3\gamma-1)}} \eta^{\frac{3\gamma-1}{2(\gamma-1)}}\,, \end{equation} which is non-zero at any point $x$ where $S'(x)\neq 0$. Therefore, either $q(x)<0$ or $y(x)<0$ somewhere, but no singularity forms in the solution. \end{example} This example shows that weak nonlinear compression in the initial data does not necessarily lead to finite time singularity formation when the data are large.
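The mechanism of Example \ref{keyex} can be checked numerically: with the choice above, $p=Ke^{S/c_v}\tau^{-\gamma}=K\,K_{\tau,S}^{-\gamma}$ is constant in $x$, so $p_x=0$ and the stationary profile is indeed an exact solution. A Python sketch, with hypothetical sample values of the constants $K$, $\gamma$, $c_v$ and $K_{\tau,S}$:

```python
import math

K, gamma, c_v = 1.0, 1.4, 1.0        # sample physical constants
K_tau_S = 2.0                        # any positive constant works

def S(x):                            # entropy profile with finite total variation
    return 1.0 / (x * x + 1.0)

def tau(x):                          # tau chosen so that p_x = 0
    return K_tau_S * math.exp(S(x) / (gamma * c_v))

def p(x):                            # gamma-law pressure p = K e^{S/c_v} tau^{-gamma}
    return K * math.exp(S(x) / c_v) * tau(x) ** (-gamma)

xs = [-3.0 + 0.1 * i for i in range(61)]
p0 = p(xs[0])
# the entropy dependence cancels exactly: pressure is spatially constant
assert all(abs(p(x) - p0) < 1e-12 for x in xs)
```

Since $u\equiv 0$ and $p_x\equiv 0$, all three equations of \eqref{fulleuler0} are satisfied for all time, even though $y$ or $q$ is negative wherever $S'(x)\neq 0$.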
This motivates our search for a critical strength of the nonlinear compression beyond which finite time gradient blowup occurs, which will be carried out in the next two subsections. In particular, Subsection 3.1 treats the case when the initial entropy has finite total variation, c.f. Assumption 3.2, while Subsection 3.2 contains results without this condition. In addition to these obstacles, as in the case of isentropic flows, we still need to generalize our method for the p-system to the non-isentropic case in order to find a sharp enough time-dependent lower bound on the density. \subsection{Singularity formation: $\|S_0(x)\|_{BV}<\infty$} In this subsection, we assume that the initial data $(\tau_0(x), u_0(x), S_0(x))$ satisfy the conditions in Assumptions 3.1 and 3.2, so the estimates in Proposition 3.3 hold. The structure of \eqref{yq odes} leads us to study the ratio $\frac{a_0}{a_2}$, which dominates the behavior of solutions to \eqref{yq odes}. A direct calculation gives \begin{align}\label{a0overa2} {\frac{a_0}{a_2}} = {{\textstyle\frac{2(\gamma-1)^2}{\gamma(\gamma+1)(3\gamma-1)}}\, \big(m\,m_{xx}-{\textstyle\frac{3\gamma+1}{3\gamma-1}}\,m_x^2\big)}\, \eta^{\frac{3\gamma-1}{(\gamma-1)}}\,m^{-\frac{3(3-\gamma)}{(3\gamma-1)}}. \end{align} We define \begin{equation} b(x)=S_{xx}-\frac{1}{c_v(3\gamma-1)} S_x^2, \label{bdef} \end{equation} and it is easy to see that \begin{equation} \label{bm} m^2b(x)={2c_v}\big(m\,m_{xx}-{\textstyle\frac{3\gamma+1}{3\gamma-1}}\,m_x^2\big). \end{equation} Therefore, $b(x)$ has the same sign as $a_0$. Also, we note from the definition of $m$ that there is a positive constant $M_3$ such that \begin{equation}\label{m_xx} |m\,m_{xx}-{\textstyle\frac{3\gamma+1}{3\gamma-1}}\,m_x^2|\le M_3.
\end{equation} If we define a positive constant $N$ by \begin{equation} N := \begin{cases} \sqrt{\frac{2(\gamma-1)^2}{\gamma(\gamma+1)(3\gamma-1)}\,M_3} \ E_U^{\frac{3\gamma-1}{2(\gamma-1)}}\ M_L^{-\frac{3(3-\gamma)}{2(3\gamma-1)}}, & 1<\gamma<3\,,\\ \sqrt{\frac{2(\gamma-1)^2}{\gamma(\gamma+1)(3\gamma-1)}\,M_3} \,E_U^{\frac{3\gamma-1}{2(\gamma-1)}}\, M_U^{-\frac{3(3-\gamma)}{2(3\gamma-1)}}, & \gamma\ge 3\,, \end{cases} \label{Ndef} \end{equation} we see that \begin{equation} \frac{a_0}{a_2}\le N^2. \end{equation} \begin{remark} One may wonder why we need $N$, a global positive constant, here. A natural thought would be to replace $N$ by \begin{equation} {\hat N}:= \begin{cases} N, &\quad \text{if}\ b(x)> 0,\ \text{for some}\ x, \\ 0, &\quad \text{if} \ b(x)\le 0,\ \text{for all}\ x. \end{cases} \label{Nhatdef} \end{equation} However, under the present conditions on $S(x)$, which are based on physical considerations, the case $b\le 0$ for all $x\in \mathbb{R}$ cannot happen. Actually, from the relation \eqref{bm}, one finds that $b\le 0$ is equivalent to $m\,m_{xx}-{\textstyle\frac{3\gamma+1}{3\gamma-1}}m_x^2\le 0$, which is equivalent to $$\big(m^{-\frac{2}{3\gamma-1}}\big)_{xx}\ge 0.$$ Therefore, $m^{-\frac{2}{3\gamma-1}}$ would be a convex function over $\mathbb{R}$ if $b\le 0$ for all $x\in \mathbb{R}$; since a convex function that is bounded on $\mathbb{R}$ must be constant, this contradicts the bounds $0<M_L\le m(x)\le M_U$ for non-constant entropy. Therefore, in this subsection, ${\hat N}=N$. However, since singularity formation is a local behavior, we will relax the constraints on $S(x)$ in the next subsection. \end{remark} As in Lemma \ref{lemma_p_2}, we are able to find uniform upper bounds for $y$ and $q$. Since \eqref{yq odes} is a little more complicated than a Riccati equation, we simply compare it with the following equations: \begin{equation} \partial_+ {\widetilde y}=a_2(N^2-{\widetilde y}^2),\ \quad \partial_- {\widetilde q}=a_2(N^2-{\widetilde q}^2). \end{equation} It is then easy to see the following lemma.
\begin{lemma}\label{full_lemma1} If $(\tau_0, u_0, S_0)(x)$ satisfy the conditions in Assumptions 3.1 and 3.2, it holds for any $C^1$ solution $(\tau(x, t), u(x, t), S(x))$ of \eqref{fulleuler0} that \[y(x,t)\leq\max\Big\{N,\ \sup_x\{y(x,0)\}\Big\}=:\bar Y\,,\] \[ q(x,t)\leq\max\Big\{N,\ \sup_x\{q(x,0)\}\Big\}=:\bar Q\,.\] \end{lemma} The following lemma contains the density lower bound estimate. \begin{lemma}\label{density_low_bound_3.1} Let $(\tau(x,t), u(x,t), S(x))$ be a $C^1$ solution of (\ref{fulleuler0}) defined on a time interval $[0, T]$ for some $T>0$, with initial data $(\tau_0(x), u_0(x), S_0(x))$ satisfying the conditions in Assumptions 3.1 and 3.2. If $1<\gamma<3$, then for any $x\in \mathbb{R} $ and $t\in [0, T)$, there is a positive constant $K_6$ depending only on $\gamma$ and $M_U$, such that \[ \tau(x, t)\le \big[{\tau_0}^{\frac{3-\gamma}{4}}(x)+K_6({\bar Y}+{\bar Q}) t\big]^{\frac{4}{3-\gamma}}. \] \end{lemma} \begin{proof} From the mass equation in \eqref{fulleuler0}, \eqref{r_s_def}, and \eqref{intr main}, it is clear that \begin{equation} \begin{split} \tau_t=u_x &=\frac12 (r_x+s_x)\\ &=\frac12\, m^{\frac{3(3-\gamma)}{2(3\gamma-1)}}\, \eta^{-\frac{\gamma+1}{2(\gamma-1)}}(q+y)\\ &\le M_U^{\frac{3(3-\gamma)}{2(3\gamma-1)}}\, \eta^{-\frac{\gamma+1}{2(\gamma-1)}}({\bar Y}+{\bar Q}), \end{split} \end{equation} where the $m_x\,\eta$ terms in \eqref{intr main} cancel in the sum $r_x+s_x$, and we have used Lemma \ref{full_lemma1}.
With the help of \eqref{z def}, we thus have \begin{equation} \tau^{-\frac{\gamma+1}{4}}\tau_t\le M_U^{\frac{3(3-\gamma)}{2(3\gamma-1)}}\, (\textstyle\frac{2\sqrt{K\gamma}}{\gamma-1})^{-\frac{\gamma+1}{2(\gamma-1)}}({\bar Y}+{\bar Q}), \end{equation} which implies that \begin{equation} \tau(x,t)\le \big[{\tau_0}^{\frac{3-\gamma}{4}}(x)+ K_6 ({\bar Y}+{\bar Q}) t\big]^{\frac{4}{3-\gamma}}, \end{equation} where $$K_6=\frac{3-\gamma}{4}M_U^{\frac{3(3-\gamma)}{2(3\gamma-1)}}\, (\textstyle\frac{2\sqrt{K\gamma}}{\gamma-1})^{-\frac{\gamma+1}{2(\gamma-1)}}.$$ \end{proof} In the following theorem, we show that $N$ is a critical threshold for the strength of the initial nonlinear compression that leads to finite time gradient blowup of solutions. \begin{theorem} \label{Thm singularity2} For $\gamma>1$, if $(\tau_0(x), u_0(x), S_0(x))$ satisfy the conditions in Assumptions 3.1 and 3.2, and if, for $N$ defined in \eqref{Ndef}, it holds that \begin{equation} \inf_x\;\Big\{\ y(x,0),\ q(x,0)\ \Big\} < -N\,, \label{yq-N} \end{equation} then for the $C^1$ solutions $(\tau(x,t), u(x,t), S(x))$ of \eqref{fulleuler0}, $|u_x|$ and/or $|\tau_x|$ blow up in finite time. \end{theorem} \begin{remark}\label{re_3.5} The case $\gamma\geq3$ was carried out in \cite{G6}, where the density lower bound is not needed. \end{remark} \begin{proof} Suppose that \eqref{yq-N} holds. Without loss of generality, we can assume that $\inf_x y(x, 0)<-N$; the case when $\inf_x q(x, 0)<-N$ is similar. Then there exist $\varepsilon>0$ and $x_0\in \mathbb{R}$ such that \begin{equation}\label{SS90} y(x_0,0) <- (1+\varepsilon)\,N\,. \end{equation} We denote the forward characteristic passing through $(x_0,0)$ by $x^+(t)$. Along this characteristic $x^+(t)$, from the definition of $N$, we have for any $t\ge 0$ such that $x^+(t)$ is well-defined, \[ \partial_+ y(x^+(t), t) =a_2(\frac{a_0}{a_2}-y^2)<0, \com{and} y(x^+(t), t)\le y(x_0,0) < -(1+\varepsilon)\,N.
\] Therefore, $$\frac{y^2(x^+(t), t)}{(1+\varepsilon)^2}> N^2\ge \frac{a_0}{a_2},$$ which implies that \[ \partial_+ y (x^+(t), t)= a_2(\frac{a_0}{a_2}-y^2(x^+(t), t)) <-{\textstyle\frac{\varepsilon(2+\varepsilon)}{(1+\varepsilon)^2}}\, a_2\,y^2(x^+(t),t)\,. \] Integrating in time, we get \begin{equation} \frac{1}{y(x^+(t), t)} \ge {\frac{1}{y(x_0,0)} + {\textstyle\frac{\varepsilon(2+\varepsilon)}{(1+\varepsilon)^2}} \int_0^t {a_2}(x^+(\sigma), \sigma)\;d\sigma}\,, \label{SS9 1} \end{equation} where the integral is along the forward characteristic. To show that $y$ blows up in finite time, it is enough to show that \begin{equation}\label{blowupa2} \int_0^\infty \, {a_2(x^+(t), t)}\;dt=\infty\,. \end{equation} When $\gamma\geq 3$, from the definition of $a_2$ in \eqref{adefs}, we see that $$a_2\ge K_c\,{\textstyle\frac{\gamma+1}{2(\gamma-1)}}\, M_L^{\frac{3(3-\gamma)}{2(3\gamma-1)}}\, E_U^{\frac{3-\gamma}{2(\gamma-1)}},$$ so \eqref{blowupa2} follows. When $1<\gamma<3$, we read from Lemma \ref{density_low_bound_3.1} and the definition of $a_2$ in \eqref{adefs} that \[ a_2(x^+(t), t)\geq K_c\,{\textstyle\frac{\gamma+1}{2(\gamma-1)}}\, M_L^{\frac{3(3-\gamma)}{2(3\gamma-1)}} (\textstyle\frac{2\sqrt{K\gamma}}{\gamma-1})^{\frac{3-\gamma}{2(\gamma-1)}}\big[\tau_0^{\frac{3-\gamma}{4}}(x^*)+K_6({\bar Y}+{\bar Q})t\big]^{-1}, \] so \eqref{blowupa2} also follows. Therefore, for any $\gamma>1$, $y$ and $s_x$ blow up in finite time. The proof of the theorem is completed. \end{proof} \subsection{Singularity formation for general entropy function} As explained earlier, singularity formation in the Euler equations is a local behavior. We will remove several global constraints on the entropy function imposed in the last subsection, so as to include physically interesting cases such as spatially periodic solutions. For this purpose, in this subsection, we only impose the conditions in Assumption 3.1 on the initial data, but not those in Assumption 3.2.
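Before proceeding, we note that the blowup dichotomy behind Theorem \ref{Thm singularity2} is easy to check numerically. The sketch below is an illustration only (not part of any proof; the constants $c_1$, $c_2$ and all numerical values are hypothetical): it integrates the comparison equation $y'=a_2(t)(N^2-y^2)$ with a decaying coefficient $a_2(t)=c_1(1+c_2 t)^{-1}$ of the type obtained above for $1<\gamma<3$. Solutions with $y(0)>-N$ stay bounded by $\max\{N, y(0)\}$, while solutions with $y(0)<-N$ blow up in finite time, because $\int_0^\infty a_2\,dt$ diverges logarithmically.

```python
# Numerical sketch of the Riccati-type comparison equation
#     y'(t) = a2(t) * (N**2 - y(t)**2),   a2(t) = c1 / (1 + c2*t).
# All constants here are hypothetical illustration values.
def blowup_time(y0, N=1.0, c1=1.0, c2=1.0, dt=1e-3, t_max=50.0):
    """Forward-Euler integration; returns an approximate blowup time,
    or None if the solution stays bounded up to t_max."""
    y, t = y0, 0.0
    while t < t_max:
        a2 = c1 / (1.0 + c2 * t)
        y += dt * a2 * (N**2 - y**2)
        t += dt
        if y < -1e6:          # numerical proxy for y -> -infinity
            return t
    return None

bounded = blowup_time(-0.9)   # |y(0)| < N: stays bounded
blown = blowup_time(-1.5)     # y(0) < -N: finite time blowup
```

In this toy run the subcritical datum never blows up, while the supercritical one reaches the blowup proxy well before $t_{\max}$, mirroring the dichotomy in the analysis.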
Without Assumption 3.2, we do not have the global uniform $L^\infty$ estimates in Proposition \ref{Thm_upper}. However, since this result is proved by the method of characteristics, we can follow the same argument as in \cite{G6} to establish a local version. For this purpose, we fix two initial points $\alpha<\beta\in \mathbb{R}$, denote the forward characteristic starting from $(\alpha, 0)$ by $x_\alpha^+(t)$ and the backward characteristic starting from $(\beta, 0)$ by $x_\beta^-(t)$. Assume that $(\tau(x,t), u(x,t), S(x))$ is a $C^1$ solution of \eqref{fulleuler0} on the time interval $[0, T_1]$ for some positive $T_1$. In the trapezoid shown in Figure \ref{fig1} below, the top edge $t=T\le T_1$ can shrink to a point if $x_\alpha^+(T)=x_\beta^-(T)$. We denote this trapezoidal domain, determined by the initial interval $[\alpha, \beta]$, $x_\alpha^+(t)$, $x_\beta^-(t)$, and $t=T$, by $\Omega_{\alpha, \beta, T}$. \begin{figure}[htp] \centering \includegraphics[width=.5\textwidth]{proof2} \caption{A domain of determination $\Omega_{\alpha,\beta,T}$}\label{fig1} \end{figure} With the help of Assumption 3.1, on the interval $[\alpha, \beta]$, one has \begin{equation}\label{newv} V_{\alpha,\beta}:= \frac{1}{2c_v}\int_{\alpha}^{\beta} |S'(\xi)|\;d\xi \le \frac{1}{2c_v}M_1|\beta-\alpha|. \end{equation} Therefore, if we define $\bar V_{\alpha,\beta} := \frac{ V_{\alpha,\beta}}{2\gamma}$, and \begin{align*} {N_1}_{\alpha,\beta} &:= M_s+\bar V_{\alpha,\beta}\,M_r+\bar V_{\alpha,\beta}\,(\bar V_{\alpha,\beta}\,M_s+{\bar V_{\alpha,\beta}}^2\,M_r) \,e^{{\bar V_{\alpha,\beta}}^2},\\ {N_2}_{\alpha,\beta} &:= M_r+\bar V_{\alpha,\beta}\,M_s+\bar V_{\alpha,\beta}\,(\bar V_{\alpha,\beta}\,M_r+{\bar V_{\alpha,\beta}}^2\,M_s) \,e^{{\bar V_{\alpha,\beta}}^2}, \end{align*} the same proof as in \cite{G6} gives \begin{proposition} \label{Thm_upperlocal} Assume the initial data $(\tau_0, u_0, S_0)(x)$ satisfy the conditions in Assumption 3.1.
If $(\tau(x, t), u(x,t), S(x))$ is a $C^1$ solution of \eqref{fulleuler0} for $t\in[0,T_1)$ for some positive $T_1$, then it holds, for every point $(x,t)\in \Omega_{\alpha,\beta, T}$ and $T\le T_1$, that \begin{equation}\label{u_rho_bounds2} \begin{split} &|s(x,t)|\le {N_1}_{\alpha,\beta} {M_U}^{\frac{1}{2\gamma}}, \quad |r(x,t)|\le {N_2}_{\alpha,\beta}{M_U}^{\frac{1}{2\gamma}},\\ & |u(x,t)|\leq\frac{ {N_1}_{\alpha,\beta} + {N_2}_{\alpha,\beta} }{2}{M_U}^{\frac{1}{2\gamma}},\\ &\eta(x,t)\leq\frac{ {N_1}_{\alpha,\beta} + {N_2}_{\alpha,\beta} }{2}{M_L}^{\frac{1}{2\gamma}-1}=:{\widetilde E}_U. \end{split} \end{equation} Therefore, there is a positive constant ${\widetilde M}_{\rho}$ such that \begin{equation} \rho\le {\widetilde M}_{\rho},\quad \tau\ge \frac{1}{{\widetilde M}_{\rho}}.\end{equation} \end{proposition} For later use, we give an estimate on the time $T_{\alpha,\beta}$ at which $x_\alpha^+(t)$ and $x_\beta^-(t)$ would intersect if no singularity develops before this time. Using \eqref{pmc_full}, a simple calculation shows that \[T_{\alpha,\beta}\ge \frac{\beta-\alpha}{2}M_U^{-1}{\widetilde E}_U^{-\frac{\gamma+1}{\gamma-1}}. \] Using the definition of $M_3$ in \eqref{m_xx}, we define \begin{equation} N_{\alpha,\beta}:= \begin{cases} \sqrt{\frac{2(\gamma-1)^2}{\gamma(\gamma+1)(3\gamma-1)}\,M_3} \ {{\widetilde E}_U}^{\frac{3\gamma-1}{2(\gamma-1)}}\ M_L^{-\frac{3(3-\gamma)}{2(3\gamma-1)}}, & 1<\gamma<3\,,\\ \sqrt{\frac{2(\gamma-1)^2}{\gamma(\gamma+1)(3\gamma-1)}\,M_3} \,{{\widetilde E}_U}^{\frac{3\gamma-1}{2(\gamma-1)}}\, M_U^{-\frac{3(3-\gamma)}{2(3\gamma-1)}}, & \gamma\ge 3\ . \end{cases} \label{Nxzdef} \end{equation} Therefore, \begin{equation} \frac{a_0}{a_2}(x, t)\le { N}_{\alpha, \beta}^2,\ \forall (x, t)\in\Omega_{\alpha, \beta, T}.
\end{equation} Furthermore, we define \begin{equation} \begin{split} &{\widetilde Y}:=\max\Big\{{ N}_{\alpha, \beta},\ \sup_{x\in[\alpha, \beta]} \{y(x,0)\}\Big\},\\ &{\widetilde Q}:=\max\Big\{{N}_{\alpha, \beta},\ \sup_{x\in[\alpha, \beta]}\{q(x,0)\}\Big\}. \end{split}\end{equation} It is now clear that the same method used in the proof of Lemma \ref{density_low_bound_3.1} gives \begin{lemma}\label{density_low_bound_3.2} Let $(\tau(x,t), u(x,t), S(x))$ be a $C^1$ solution of (\ref{fulleuler0}) defined in $\Omega_{\alpha, \beta, T}$, with initial data $(\tau_0(x), u_0(x), S_0(x))$ satisfying the conditions in Assumption 3.1. If $1<\gamma<3$, then for any $(x, t)\in \Omega_{\alpha, \beta, T}$, there is a positive constant ${\widetilde K}_6$ depending only on $\gamma$ and $M_U$, such that \[ \tau(x, t)\le \big[{\tau_0}^{\frac{3-\gamma}{4}}(x)+{\widetilde K}_6({\widetilde Y}+{\widetilde Q}) t\big]^{\frac{4}{3-\gamma}}. \] \end{lemma} Therefore, we have the following estimate on $a_2$ for any $(x, t)\in \Omega_{\alpha, \beta, T}$: \begin{equation}\label{a2new} a_2(x,t)\ge \begin{cases} K_c\,{\textstyle\frac{\gamma+1}{2(\gamma-1)}}\, M_L^{\frac{3(3-\gamma)}{2(3\gamma-1)}}\, {\widetilde E}_U^{\frac{3-\gamma}{2(\gamma-1)}}=:K_8,\quad \text{if}\ \gamma\ge 3,\\ K_7\big[M_2^{\frac{3-\gamma}{4}}+{\widetilde K}_6({\widetilde Y}+{\widetilde Q})t\big]^{-1}, \ \text{if}\ 1<\gamma<3, \end{cases} \end{equation} where $$K_7=K_c\,{\textstyle\frac{\gamma+1}{2(\gamma-1)}}\, M_L^{\frac{3(3-\gamma)}{2(3\gamma-1)}} (\textstyle\frac{2\sqrt{K\gamma}}{\gamma-1})^{\frac{3-\gamma}{2(\gamma-1)}}.$$ We further introduce the constants $K_9$ and $K_{10}$ by \begin{equation} \label{final constant} K_{9}={\widetilde K}_6({\widetilde Y}+{\widetilde Q})M_2^{\frac{\gamma-3}{4}}, \quad K_{10}=K_7 M_2^{\frac{\gamma-3}{4}}, \end{equation} so that \begin{equation} \label{a213} a_2\ge K_{10}[1+K_9t]^{-1}, \ \text{if}\ 1<\gamma<3.
\end{equation} We introduce one more constant below to measure the strength of the nonlinear compression. Let the positive constant $B_{\alpha,\beta}$ be a solution of \begin{equation}\label{thm3_con} \frac{B_{\alpha,\beta}(2+B_{\alpha,\beta})}{{(1+B_{\alpha,\beta})}}\ge \begin{cases} \left(K_8\,{N_{\alpha,\beta}}{T_{\alpha,\beta}}\right)^{-1},\quad\quad\ \ \gamma\geq3\,,\vspace{.2cm}\\ \left(\frac{K_{10}}{K_9}\,N_{\alpha,\beta}\,\ln(1+{K_9} T_{\alpha,\beta}) \right )^{-1},\ 1<\gamma<3\,. \end{cases} \end{equation} \begin{theorem} \label{Thm singularity3} Assume the initial data $(\tau_0, u_0, S_0)(x)$ satisfy the conditions in Assumption 3.1. If there exists some interval $(\alpha,\beta)$ such that the initial data satisfy \begin{equation}\label{thm3_1} \inf_{x\in [\alpha, \beta]} \{y(x,0), q(x, 0)\} < -{ N}_{\alpha,\beta}(1+B_{\alpha,\beta})\,, \end{equation} then $|u_x|$ and/or $|\tau_x|$ blow up in finite time. \end{theorem} \begin{remark} The right hand side of \eqref{thm3_con} depends only on the initial data. For any given entropy function satisfying the conditions in Assumption 3.1, condition \eqref{thm3_con} is satisfied when $B_{\alpha,\beta}$ is large enough, i.e.\ when $y(x,0)$ or $q(x,0)$ is negative enough. This means that a singularity forms in finite time when the initial compression is strong enough somewhere. One good choice of $B_{\alpha,\beta}$ is \begin{equation}\label{thm3_con2} B_{\alpha,\beta} =\begin{cases} \left(K_8\,{N_{\alpha,\beta}}{T_{\alpha,\beta}}\right)^{-1},\ \gamma\geq3\,,\vspace{.2cm}\\ \left(\frac{K_{10}}{K_9}\,N_{\alpha,\beta}\,\ln(1+{K_9} T_{\alpha,\beta}) \right )^{-1},\ 1<\gamma<3\,. \end{cases} \end{equation} This result is consistent with Theorem \ref{Thm singularity2}. In fact, when the initial entropy has finite total variation, $T_{x,\infty}=\infty$ while $N_{x,\infty}$, $K_8$, $K_9$, and $K_{10}$ are all finite, so $B_{x,\infty}$ can be arbitrarily small.
Hence, if $y(x,0)< -N_{x,\infty}$ or $q(x, 0)<-N_{x, \infty}$ for some $x$, then blowup happens in finite time. \end{remark} \begin{proof} We only consider the solution in $\Omega_{\alpha,\beta, T_{\alpha, \beta}}$, and prove that singularity formation happens in this region. Without loss of generality, we assume that there is a point $x_*\in [\alpha, \beta]$ such that $y(x_*, 0)<-{ N}_{\alpha, \beta} (1+B_{\alpha, \beta})$; the case for $q$ is similar. Denote the forward characteristic starting from $(x_*, 0)$ by $x^+(t)$. We will show that $y$ goes to negative infinity along $x^+(t)$ before time $T_{\alpha,\beta}$. From \eqref{yq odes} and the definition of ${ N}_{\alpha,\beta}$ in \eqref{Nxzdef}, it is clear that, along $x^+(t)$, for $t\in[0, T_{\alpha, \beta}]$ and as long as the solution is $C^1$, it holds that \[ \partial_+ y(x^+(t), t) =a_2(\frac{a_0}{a_2}-y^2)<0,\com{and} y(x^+(t), t)< -{ N}_{\alpha,\beta}(1+B_{\alpha,\beta}). \] Therefore, $$\frac{y^2(x^+(t), t)}{(1+B_{\alpha, \beta})^2}> {N}_{\alpha, \beta}^2\ge \frac{a_0}{a_2},$$ which implies that \[ \partial_+ y (x^+(t), t)= a_2(\frac{a_0}{a_2}-y^2(x^+(t), t)) <-{\textstyle\frac{B_{\alpha, \beta}(2+B_{\alpha, \beta})}{(1+B_{\alpha,\beta})^2}}\, a_2\,y^2(x^+(t), t)\,. \] Integrating in time, we get \begin{equation}\label{thm3_2} \frac{1}{y(x^+(t), t)} \ge {\frac{1}{y(x_*,0)} + \frac{B_{\alpha,\beta}(2+B_{\alpha,\beta})}{(1+B_{\alpha,\beta})^2}\int_0^t\, {a_2}(x^+(\sigma), \sigma)\;d\sigma}\,, \end{equation} where the integral is along the forward characteristic. Hence the blowup happens at a time $t_1$ when the right hand side of (\ref{thm3_2}) equals zero, i.e.\ when \begin{equation} -\frac{1}{y(x_*, 0)}=\frac{B_{\alpha,\beta}(2+B_{\alpha,\beta})}{{(1+B_{\alpha,\beta})^2}}\int_0^{t_1}\, {a_2}(x^+(\sigma), \sigma)\;d\sigma\,. \end{equation} It is clear from the estimates on $a_2$ in \eqref{a2new} that such a finite $t_1$ exists. However, we still need to show that $t_1<T_{\alpha,\beta}$.
From \eqref{thm3_1}, we only need to show that \begin{equation}\label{thm3_3} \frac{1}{N_{\alpha,\beta}}\leq\frac{B_{\alpha,\beta}(2+B_{\alpha,\beta})}{{(1+B_{\alpha,\beta})}}\int_0^{T_{\alpha,\beta}}\, {a_2(x^+(t), t)}\;dt\,. \end{equation} When $\gamma\geq 3$, we read from \eqref{a2new} that $a_2\ge K_8$, so \eqref{thm3_3} follows directly from \eqref{thm3_con}. When $1<\gamma< 3$, we read from \eqref{a213} that $$a_2\ge K_{10}[1+{K}_9 t]^{-1},$$ therefore \begin{equation}\begin{split} &\frac{B_{\alpha,\beta}(2+B_{\alpha,\beta})}{{(1+B_{\alpha,\beta})}}\int_0^{T_{\alpha,\beta}}\, {a_2(x^+(t), t)}\;dt\\ &\ge \frac{B_{\alpha,\beta}(2+B_{\alpha,\beta})}{{(1+B_{\alpha,\beta})}}\int_0^{T_{\alpha,\beta}} K_{10}[1+K_9t]^{-1}\ dt\\ &=\frac{B_{\alpha,\beta}(2+B_{\alpha,\beta})}{{(1+B_{\alpha,\beta})}}\frac{K_{10}}{K_9}\ln (1+K_9T_{\alpha,\beta} ), \end{split} \end{equation} which, together with \eqref{thm3_con}, implies \eqref{thm3_3}. This completes the proof of the theorem. \end{proof} \subsection{Further discussion\label{section_3.5}} In Subsections 3.1-3.2, we showed that if the initial compression is strong enough, a singularity develops in finite time for solutions of \eqref{fulleuler0}. It is also evident from Example \ref{keyex} that relatively strong compression is necessary to guarantee that finite time blowup occurs. A natural question is whether, say in Theorem \ref{Thm singularity2}, the constant $N$ measures the critical strength sharply. We now show that this $N$ is the best possible. To this end, we revisit the stationary solutions \begin{equation}\label{ss}(\tau(x), u(x), S(x))=(K_{\tau, S}\ \exp\left\{\frac{S(x)}{\gamma c_v}\right\}, 0, S(x)), \end{equation} for any smooth function $S(x)$ satisfying Assumptions 3.1-3.2 and any positive constant $K_{\tau, S}$, constructed in Example \ref{keyex}.
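For the reader's convenience, here is a quick check that \eqref{ss} is indeed stationary, assuming the standard pressure law $p=K e^{S/c_v}\tau^{-\gamma}$ in Lagrangian coordinates (with $K$ the constant appearing in the factor $\sqrt{K\gamma}$ above): with $u\equiv 0$, the mass equation gives $\tau_t=u_x=0$, while \begin{equation*} p(x)=K e^{S(x)/c_v}\,\big(K_{\tau, S}\,e^{S(x)/(\gamma c_v)}\big)^{-\gamma} =K\,K_{\tau, S}^{-\gamma}, \end{equation*} so $p$ is constant in $x$ and the momentum equation $u_t+p_x=0$ also holds.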
We also recall \eqref{excom}: \begin{equation}\label{final_remark1} -q(x)=y(x)=\textstyle\frac{\gamma-1}{\gamma(3\gamma-1)}m_x m^{\frac{3(\gamma-3)}{2(3\gamma-1)}} \eta^{\frac{3\gamma-1}{2(\gamma-1)}}\,. \end{equation} We remark that everything is now fixed except the choice of $S(x)$; both $m$ and $\eta$ are functions of $S(x)$. For convenience, we will use $m(x)=e^{\frac{S}{2c_v}}$ in the argument below. Note from \eqref{a0overa2} that if \begin{equation}\label{a0>0} m\,m_{xx}-{\textstyle\frac{3\gamma+1}{3\gamma-1}}\,m_x^2\geq 0, \end{equation} then $N$ is the best possible upper bound of \begin{equation}\label{final_remark2} \textstyle\sqrt{\frac{a_0}{a_2}} = \sqrt{{\frac{2(\gamma-1)^2}{\gamma(\gamma+1)(3\gamma-1)}}\, \big(m\,m_{xx}-{\textstyle\frac{3\gamma+1}{3\gamma-1}}\,m_x^2\big)}\, \eta^{\frac{3\gamma-1}{2(\gamma-1)}}\,m^{-\frac{3(3-\gamma)}{2(3\gamma-1)}}\,. \end{equation} We now show that there exists some $S(x)$ (or equivalently $m(x)$) such that initially $|y(x)|=|q(x)|=\textstyle\sqrt{\frac{a_0}{a_2}}$ at some $x$. Comparing \eqref{final_remark1} with \eqref{final_remark2}, we see this happens when \[\textstyle(\frac{\gamma-1}{\gamma(3\gamma-1)})^2m_x^2={\textstyle\frac{2(\gamma-1)^2}{\gamma(\gamma+1)(3\gamma-1)}}\, \big(m\,m_{xx}-{\textstyle\frac{3\gamma+1}{3\gamma-1}}\,m_x^2\big),\] which is equivalent to \begin{equation}\label{final_remark3} mm_{xx}-{\textstyle\frac{3\gamma+1}{3\gamma-1}}\,m_x^2-{\textstyle\frac{\gamma+1}{2\gamma(3\gamma-1)}} m_x^2 =0. \end{equation} It is clear that if $m(x)$ satisfies \eqref{final_remark3}, then it satisfies \eqref{a0>0}. A direct calculation shows that for positive $m$ (or equivalently a bounded $S(x)$), \eqref{final_remark3} is equivalent to \begin{equation}\label{final_remark4} (m^\theta)_{xx}=0,\ \text{for}\quad \theta=1-\frac{6\gamma^2+3\gamma+1}{2\gamma(3\gamma-1)}.
\end{equation} Clearly, for any point ${\bar x}\in \mathbb{R}$, we are able to choose a smooth function $m(x)$ such that $m^\theta(x)$ reaches its inflection point at ${\bar x}$ and $m'({\bar x})\neq 0$. Indeed, using the formula $\tau=K_{\tau, S}\exp\left\{\frac{S}{\gamma c_v}\right\}$ along with \eqref{final_remark1}, $$-q(x)=y(x)=K_{\tau, S}^{-\frac{3\gamma-1}{4}}\frac{\gamma-1}{\theta \gamma(3\gamma-1)}(m^{\theta})_x.$$ This confirms that ${\bar x}$ is exactly the (local) maximum point of $|q(x)|=|y(x)|$, which can easily be chosen as the global maximum for a class of $m(x)$. From the analysis above, it is clear that the constant $N$ is an optimal measure of the strength of compression for finite time singularity formation in general, because, if the condition \eqref{yq-N} fails in Theorem \ref{Thm singularity2}, then there exists a class of initial data admitting global stationary solutions of the form \eqref{ss}, such that \begin{equation} \inf_x\;\Big\{\ y(x,0),\ q(x,0)\ \Big\} = -N\, . \end{equation} In these examples, the maximum strength of compression $N$ is attained. \section*{Acknowledgement} { We sincerely thank Professor Alberto Bressan for his very helpful suggestions and discussions during the writing of this paper. The research of R. Pan was supported in part by NSF under grant DMS-1108994. The research of S. Zhu was partially supported by National Natural Science Foundation of China under grant 11231006, Natural Science Foundation of Shanghai under grant 14ZR1423100, and the China Scholarship Council.}
\section{Introduction} Biological sequences are statistically heterogeneous, in the sense that local compositions and correlations in different regions of the sequences can be very different from one another. They must therefore be treated as collections of statistically stationary segments (or \emph{domains}), to be discovered by the various segmentation schemes found in the literature (see the review by Braun and M\"uller \cite{Braun1998StatisticalScience13p142}, and the list of references in Ref.~\citeonline{Cheong2007IRJSSS}). Typically, these segmentation schemes are tested on (i) artificial sequences composed of a small number of segments, (ii) control sequences obtained by concatenating known coding and noncoding regions, or (iii) control sequences obtained by concatenating sequences from chromosomes known to be statistically distinct. They are then applied to a few better characterized genomic sequences, and compared against each other, to show general agreement, but also to demonstrate better sensitivity in delineating certain genomic features. To the best of our knowledge, there are no studies reporting a full and detailed comparison of the segmentation of a sequence against its distribution of carefully curated gene calls. There are also no studies comparing the segmentations of closely related genomes. In such sequences, there are homologous stretches, interrupted by lineage-specific regions, and the natural question is whether homologous regions in different genomes will be segmented in exactly the same way by the same segmentation scheme. In this paper, we answer this question without comparing the segmentations of homologous regions. Instead, through careful observations of how segment boundaries, or \emph{domain walls}, are discovered by two different entropic segmentation schemes, we realized that a subsequence can be segmented differently by the same scheme if it is part of two different full sequences.
We call this dependence of a segmentation on the detailed arrangement of segments the \emph{context sensitivity problem}. In Sec.~\ref{section:contextsensitivityprobleminrealgenomes}, we will describe how the context sensitivity problem manifests itself in real genomes, when these are segmented using a sliding-window entropic segmentation scheme, which examines local contexts in the sequences, versus segmentation using a recursive entropic segmentation scheme, which examines the global contexts of the sequences. We then show how the context sensitivity problem prevents us from coarse graining by using larger window sizes, stopping recursive segmentation earlier, or simply removing weak domain walls from a fine-scale segmentation. We follow up in Sec.~\ref{section:meanfieldanalysis} with a mean-field analysis of the local and global context sensitivity problems, showing how the positions and strengths of domain walls, and the order in which these are discovered, are affected by these contexts. In particular, we identify repetitive sequences as the worst case scenario to encounter during segmentation. Finally, in Sec.~\ref{section:conclusions}, we summarize and discuss the impacts of our findings, and explain why we believe the context sensitivity problem plagues \emph{all} segmentation schemes. \section{Context Sensitivity Problem in Real Bacterial Genomes} \label{section:contextsensitivityprobleminrealgenomes} In this section, we investigate the manifestations of the context sensitivity problem in two real bacterial genomes, those of \emph{Escherichia coli} K-12 MG1655 and \emph{Pseudomonas syringae} DC3000, when these are segmented using two entropic segmentation schemes.
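Both schemes reduce, at bottom, to comparing the empirical symbol statistics of neighbouring subsequences. As a minimal illustration of the sliding-window idea (a hypothetical sketch using the Jensen-Shannon divergence between window base compositions, not the square deviation statistic used in this paper), the spectrum computed below peaks at the composition boundary of a toy two-domain sequence.

```python
import numpy as np

# Hypothetical sketch of a paired sliding-window spectrum: at each cut
# point i, compare the base compositions of the length-n windows to the
# left and right of i using the Jensen-Shannon divergence (in bits).
# Illustration only; not the square deviation statistic of this paper.
def jsd_spectrum(seq, n):
    symbols = sorted(set(seq))
    index = {s: k for k, s in enumerate(symbols)}
    x = np.array([index[s] for s in seq])

    def entropy(counts):
        p = counts[counts > 0] / counts.sum()
        return -(p * np.log2(p)).sum()

    spectrum = []
    for i in range(n, len(x) - n + 1):
        left = np.bincount(x[i - n:i], minlength=len(symbols))
        right = np.bincount(x[i:i + n], minlength=len(symbols))
        # equal-size windows: JSD = H(mixture) - (H(left) + H(right))/2
        jsd = entropy(left + right) - 0.5 * (entropy(left) + entropy(right))
        spectrum.append(jsd)
    return np.array(spectrum)

# toy sequence: an AT-only domain followed by a GC-only domain
rng = np.random.default_rng(0)
seq = "".join(rng.choice(list("AT"), 3000)) + \
      "".join(rng.choice(list("GC"), 3000))
spec = jsd_spectrum(seq, n=1000)
wall = int(spec.argmax()) + 1000   # strongest domain wall position
```

In this toy setting the strongest peak sits exactly at the domain boundary; the context sensitivity discussed below arises once several boundaries fall within a window length of one another.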
The first entropic segmentation scheme, based on comparing the statistics of a pair of sliding windows, is sensitive to the local context of segments within the pair of sliding windows, and we shall show in Sec.~\ref{subsection:pairedslidingwindows} that the positions and strengths of domain walls discovered by the scheme depend sensitively on the window size. The second entropic segmentation scheme is recursive in nature, adding new domain walls at each stage of the recursion. We shall show in Sec.~\ref{subsection:recursivegenome} that this scheme is sensitive to the global context of segments within the sequence, and that domain walls are not discovered according to their true strengths. In Sec.~\ref{subsection:bottomupsegmentationhistory}, we show that there is no statistically consistent way to coarse grain a segmentation by removing the weakest domain walls and agglomerating adjacent segments. \subsection{Paired Sliding Windows Segmentation Scheme} \label{subsection:pairedslidingwindows} Using the paired sliding windows segmentation scheme described in App.~\ref{section:pairedslidingwindowssegmentationscheme}, the number $M$ of order-$K$ Markov-chain segments discovered depends on the size $n$ of the windows used, as shown in Table \ref{table:K0n} for \emph{E. coli} K-12 MG1655. Because $M$ decreases as $n$ is increased, we are tempted to think that we can change the granularity of the segmental description of a sequence by tuning $n$, such that there are more and shorter segments when $n$ is made smaller, and fewer and longer segments when $n$ is made larger. Thus, as $n$ is increased, we expect groups of closely spaced domain walls to be merged as the short segments they demarcate are agglomerated, and to be replaced by a single peak close to the position of the strongest of the original peaks. \begin{table}[hbtp] \centering \caption{Number of $K = 0$ domain walls in the \emph{E.
coli} K-12 MG1655 genome ($N = 4639675$ bp), obtained using the paired sliding window segmentation scheme for different window sizes $1000 \leq n \leq 5000$.} \label{table:K0n} \vskip .5\baselineskip \begin{tabular}{|c|c|c|c|c|c|} \hline $n$ & 1000 & 2000 & 3000 & 4000 & 5000 \\ \hline $M$ & 2781 & 1414 & 952 & 721 & 577 \\ \hline \end{tabular} \end{table} Indeed, we do find this expected merging of proximal domain walls in Fig.~\ref{figure:EcoliK12qrK0n1kn2kn3kn4kn5ki0i40k} and Fig.~\ref{figure:PsyringaeqrK0n1kn2kn3kn4kn5ki25ki75k}, which show the square deviation spectra for the $(0, 40000)$ region of the \emph{E. coli} K-12 MG1655 genome and the $(25000, 75000)$ region of the \emph{P. syringae} DC3000 genome respectively. In the $(0, 40000)$ region of the \emph{E. coli} K-12 MG1655 genome shown in Fig.~\ref{figure:EcoliK12qrK0n1kn2kn3kn4kn5ki0i40k}, we find the group of domain walls, $i_a \approx 16500$, $i_b \approx 17500$, and $i_c \approx 18700$, and the pair of domain walls, $i_g \approx 33800$ and $i_h \approx 35000$, which are distinct in the $n = 1000$ square deviation spectrum, merging into the domain walls $i_{abc}$ and $i_{gh}$ in the $n \geq 3000$ square deviation spectra. In the $(25000, 75000)$ region of the \emph{P. syringae} DC3000 genome shown in Fig.~\ref{figure:PsyringaeqrK0n1kn2kn3kn4kn5ki25ki75k}, we find the pair of domain walls, $j_a \approx 45000$ and $j_b \approx 46600$, and the pair of domain walls, $j_c \approx 50400$ and $j_d \approx 51800$, which are distinct in the $n = 1000$ square deviation spectrum, merging into the domain walls $j_{ab}$ and $j_{cd}$ in the $n \geq 3000$ and $n = 5000$ square deviation spectra respectively. \begin{figure}[hbtp] \centering \includegraphics[scale=0.5,clip=true]{EcoliK12.q.rK0n1kn2kn3kn4kn5k.0.40k.eps} \caption{The $K = 0$ square deviation spectra in the region $(0, 40000)$ of the \emph{E.
coli} K-12 MG1655 genome, obtained using the paired sliding window segmentation scheme with window sizes (top to bottom) $n = 1000$, 2000, 3000, 4000, and 5000.} \label{figure:EcoliK12qrK0n1kn2kn3kn4kn5ki0i40k} \end{figure} \begin{figure}[hbtp] \centering \includegraphics[scale=0.5,clip=true]{Psyringae.q.rK0n1kn2kn3kn4kn5k.25k.75k.eps} \caption{The $K = 0$ square deviation spectra in the region $(25000, 75000)$ of the \emph{P. syringae} DC3000 genome, obtained using the paired sliding window segmentation scheme with window sizes (top to bottom) $n = 1000$, 2000, 3000, 4000, and 5000.} \label{figure:PsyringaeqrK0n1kn2kn3kn4kn5ki25ki75k} \end{figure} However, we also find unexpected changes in the relative strengths of the domain walls as $n$ is increased. In the $(0, 40000)$ region of the \emph{E. coli} K-12 MG1655 genome shown in Fig.~\ref{figure:EcoliK12qrK0n1kn2kn3kn4kn5ki0i40k}, we find that $i_d \approx 21800$, which appears as a broad, weak, and noisy bump in the $n = 1000$ square deviation spectrum, becomes stronger and more defined as $n$ is increased, finally becoming as strong as the domain wall $i_{abc}$ in the $n = 5000$ square deviation spectrum. In this region of the \emph{E. coli} K-12 MG1655 genome, we also find that the domain walls $i_b \approx 17500$ and $i_f \approx 30000$ are equally strong in the $n = 1000$ square deviation spectrum, but as $n$ is increased, $i_b$ becomes stronger while $i_f$ becomes weaker. In the $(25000, 75000)$ region of the \emph{P. syringae} DC3000 genome shown in Fig.~\ref{figure:PsyringaeqrK0n1kn2kn3kn4kn5ki25ki75k}, we find that the domain walls $j_c \approx 50400$ and $j_f \approx 58200$ are equally strong, and also the domain walls $j_d \approx 51800$ and $j_e \approx 57300$ are equally strong, in the $n = 1000$ square deviation spectrum. However, as $n$ is increased, $j_c$ becomes stronger than $j_f$, while $j_d$ becomes stronger than $j_e$.
More importantly, all these domain walls --- the strongest in this $(25000, 75000)$ region of the $n = 1000$ square deviation spectrum --- become weaker as $n$ is increased, to be superseded by the domain walls $j_{ab} \approx 45000$, $j_g \approx 65400$ and $j_h \approx 72400$, which become stronger as $n$ is increased. As it turns out, $(j_c, j_f)$ overlaps significantly with the interval $(50000, 59000)$, which incorporates three lineage-specific regions (LSRs 5, 6, and 7, all of which are virulence related) identified by Joardar \emph{et al} \cite{Joardar2005MolPlantPathol6p53}. It is therefore biologically significant that $j_c$ and $j_f$ are strong domain walls in the $n = 1000$ square deviation spectrum. On the other hand, it is not clear what kind of biological meaning we can attach to $j_{ab}$, $j_g$, and $j_h$ being the strongest domain walls in the $n = 5000$ square deviation spectrum. \begin{table}[htbp] \centering \caption{Positions of strong domain walls in the $(0, 40000)$ region of the \emph{E. coli} K-12 MG1655 genome and the $(25000, 75000)$ region of the \emph{P. syringae} DC3000 genome, determined after match filtering the square deviation spectra obtained using the paired sliding window segmentation scheme with window sizes $n = 3000, 4000, 5000$.} \label{table:slidingwindowshifts} \vskip .5\baselineskip \begin{tabular}{|c|c|c|c|c|c|c|}\hline & \multicolumn{3}{c|}{\emph{E. coli} K-12 MG1655} & \multicolumn{3}{c|}{\emph{P. syringae} DC3000} \\ \hline $n$ & $i_{abc}$ & $i_d$ & $i_h$ & $j_{ab}$ & $j_g$ & $j_h$ \\ \hline 3000 & 16200 & 21800 & 34100 & 46600 & 66600 & 71500 \\ 4000 & 16300 & 21700 & 34400 & 45900 & 65900 & 72500 \\ 5000 & 16100 & 22100 & 34700 & 45700 & 65500 & 72500 \\ \hline \end{tabular} \end{table} There is another, more subtle, effect that increasing the size of the sliding windows has on the domain walls: their positions, as determined from peaks in the square deviation spectrum after match filtering, are shifted.
The shifting positions of some of the strong domain walls in the $(0, 40000)$ region of the \emph{E. coli} K-12 MG1655 genome and the $(25000, 75000)$ region of the \emph{P. syringae} DC3000 genome are shown in Table \ref{table:slidingwindowshifts}. In general, the positions and strengths of domain walls can change when the window size used in the paired sliding windows segmentation scheme is changed, because windows of different sizes examine different local contexts. As a result of this local context sensitivity, whose nature we will illustrate using a mean-field picture in Sec.~\ref{subsection:meanfieldwindowedspectrum}, the sets of strong domain walls determined using two different window sizes $n$ and $n' > n$ are different. If $n$ and $n'$ are sufficiently different, the sets of strong domain walls, i.e.~those stronger than a specified cutoff, may have very little in common. Therefore, we cannot think of the segmentation obtained at window size $n'$ as the coarse grained version of the segmentation obtained at window size $n$. \subsection{Optimized Recursive Jensen-Shannon Segmentation Scheme} \label{subsection:recursivegenome} Using the optimized recursive Jensen-Shannon segmentation scheme described in Ref.~\citeonline{Cheong2007IRJSSS}, we obtained one series of segmentations each for \emph{E. coli} K-12 MG1655 and \emph{P. syringae} DC3000, shown in Fig.~\ref{figure:hierarchyofrecursivesegmentations} and Fig.~\ref{figure:pstm2m50} respectively. Two features are particularly striking about these plots. First, there exist domain walls stable with respect to segmentation optimization. These \emph{stable domain walls} remain close to where they were first discovered by the optimized recursive segmentation scheme. Second, there are \emph{unstable domain walls} that get shifted by as much as 10\% of the total length of the genome when a new domain wall is introduced. For example, in Fig.~\ref{figure:hierarchyofrecursivesegmentations} for the \emph{E. 
coli} K-12 MG1655 genome, we find the domain wall $i_{10} = 4051637$ in the optimized segmentation with $M = 10$ domain walls shifted to $i_{10} = 4469701$ in the optimized segmentation with $M = 11$ domain walls ($\delta i_{10} = +418064$), and also the domain wall $i_7 = 2135183$ in the optimized segmentation with $M = 15$ domain walls shifted to $i_7 = 2629043$ in the optimized segmentation with $M = 16$ domain walls ($\delta i_7 = +493860$). Based on the observation that some unstable domain walls are discovered, lost, and later rediscovered to become stable, we suggested in Ref.~\citeonline{Cheong2007IRJSSS} that for a given segmentation with $M$ domain walls, stable domain walls are statistically more significant than unstable domain walls, while stable domain walls discovered earlier are more significant than stable domain walls discovered later in the optimized recursive segmentation. \begin{figure}[htbp] \centering \includegraphics[scale=0.8]{ecolik12m2m50.eps} \caption{Series of optimized recursive Jensen-Shannon segmentations of the \emph{E. coli} K-12 MG1655 genome, for (top to bottom) $2 \leq M \leq 50$ domain walls. The two stable domain walls that appear in the $M = 2$ optimized segmentation are close to the replication origin and replication terminus.} \label{figure:hierarchyofrecursivesegmentations} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.8]{pstm2m50.eps} \caption{Series of optimized recursive Jensen-Shannon segmentations of the \emph{P. syringae} DC3000 genome, for (top to bottom) $2 \leq M \leq 55$ domain walls. Compared to the \emph{E. coli} K-12 MG1655 genome, there are perceptibly more unstable domain walls in the \emph{P. syringae} DC3000 genome.} \label{figure:pstm2m50} \end{figure} From Fig.~\ref{figure:hierarchyofrecursivesegmentations} and Fig.~\ref{figure:pstm2m50}, we also find that the \emph{E. coli} K-12 MG1655 and \emph{P. syringae} DC3000 genomes have very different segmental textures.
At this coarse scale ($M \sim 50$ segments), we find many short segments, many long segments, but few segments of intermediate lengths in the \emph{E. coli} K-12 MG1655 genome. In contrast, at the same granularity, the \emph{P. syringae} DC3000 genome contains many short segments, many segments of intermediate lengths, but few long segments. We believe these segmental textures are consistent with the different evolutionary trajectories of the two bacteria. \emph{E. coli} K-12 MG1655, which resides in the highly stable human gut environment, has a more stable genome containing fewer large-scale rearrangements, which appear to be confined to hotspots within the $(2600000, 3600000)$ region. The genome of \emph{P. syringae} DC3000, on the other hand, has apparently undergone many more large-scale rearrangements as its lineage responded to the multiple evolutionary challenges of living in the hostile soil environment. We find many more large shifts in the optimized domain wall positions in \emph{P. syringae} DC3000 compared to \emph{E. coli} K-12 MG1655, because of the more varied context of the \emph{P. syringae} DC3000 genome. However, large shifts in the optimized domain wall positions arise generically in all bacterial genomes, because of the sensitivity of optimized domain wall positions to the contexts they are restricted to. In Sec.~\ref{subsection:meanfieldwindowlessspectrum}, we will illustrate using a mean-field picture how the recursive segmentation scheme decides where to subdivide a segment, i.e.~add a new domain wall, after examining the global context within the segment. We then show how this global context changes when the segment is reduced or enlarged during segmentation optimization, which can then cause a large shift in the position of the new domain wall.
Because of this \emph{global context sensitivity}, we find in Fig.~\ref{figure:pstm2m50} a large shift of the domain wall $j_9 = 1723734$, which is stable when there are $36 \leq M \leq 51$ optimized domain walls in the segmentation, to its new position $j_9 = 1818461$ ($\delta j_9 = +94727$) when one more optimized domain wall is added. We say that a domain wall is \emph{stable at scale $M$} if it is only slightly shifted, or not at all, within the optimized segmentations with between $M - \delta M$ and $M + \delta M$ domain walls, where $\delta M \ll M$. Given a series of recursively determined optimized segmentations, we know which domain walls in an optimized segmentation containing $M$ domain walls are stable at scale $M$, and which domain walls in an optimized segmentation containing $M' > M$ domain walls are stable at scale $M'$. However, these two sets of stable domain walls can disagree significantly because of the recursive segmentation scheme's sensitivity to global contexts. Again, we cannot think of the optimized segmentation containing $M$ domain walls as a coarse grained version of the optimized segmentation containing $M'$ domain walls. \subsection{Coarse-Graining by Removing Domain Walls} \label{subsection:bottomupsegmentationhistory} In Sec.~\ref{subsection:pairedslidingwindows}, we saw the difficulties in coarse graining the segmental description of a bacterial genome by using larger window sizes, due to the paired sliding windows segmentation scheme's sensitivity to local context. We have also seen in Sec.~\ref{subsection:recursivegenome} a different set of problems associated with coarse graining by stopping the optimized recursive Jensen-Shannon segmentation earlier, due this time to the scheme's sensitivity to global context. 
Another way to do coarse graining would be to start from a fine segmentation, determined using a paired sliding windows segmentation scheme with a small window size, or a properly terminated recursive segmentation scheme, and then remove the weakest domain walls. Our goal is to agglomerate shorter, weakly distinct segments into longer, more strongly distinct segments. Although this sounds like the recursive segmentation scheme played back in reverse, there is a subtle difference: in the recursive segmentation scheme, strong domain walls may be discovered after weak ones, whereas our hope with this coarse graining scheme is that we target weak domain walls only after `all' domain walls have been discovered. \begin{figure}[htbp] \centering \includegraphics[scale=0.45,clip=true]{EcoliK12.q.topsegK.K0n1000.eps} \caption{Bottom-up segmentation history for \emph{E. coli} K-12 MG1655 derived from the initial $(K = 0, n = 1000)$ paired sliding windows segmentation containing $M = 2781$ domain walls. (Inset) Bottom-up segmentation history from $M = 1600$ domain walls remaining to $M = 1400$ domain walls remaining, showing the fine structure of dips below the smooth envelope.} \label{figure:EcoliK12qtopsegKK0n1000} \end{figure} As with recursive segmentation, there are many variations in the implementation details of such a coarse graining scheme. The first thing we do is to select a cutoff strength $\Delta^*$, which we can think of as a knob we tune to get a desired granularity for our description of the genome: we keep a large number of domain walls if $\Delta^*$ is small, and keep a small number of domain walls if $\Delta^*$ is large. After selecting $\Delta^*$, we can then remove all domain walls weaker than $\Delta^*$ in one fell swoop, or remove them progressively, starting from the weakest domain walls.
However we decide to remove domain walls weaker than $\Delta^*$, the strengths of the remaining domain walls must be re-evaluated after some have been removed from the segmentation. This is done by re-estimating the maximum-likelihood transition probabilities, and using them to compute the Jensen-Shannon divergences between successive coarse-grained segments, which are the strengths of our remaining domain walls. For the purpose of benchmarking, we start from the $(K = 0, n = 1000)$ paired sliding windows segmentation containing $M = 2781$ domain walls for the \emph{E. coli} K-12 MG1655 genome, and remove the weakest domain wall each time to generate a \emph{bottom-up segmentation history}, shown in Fig.~\ref{figure:EcoliK12qtopsegKK0n1000}. As we can see, the strength of the weakest domain wall as a function of the number of domain walls remaining consists of a smooth envelope, and dips below this envelope. We distinguish between sharp dips, which are the signatures of what we called \emph{tunneling events}, and broad dips, which are the signatures of what we called \emph{cascade events}. \begin{figure}[hbt] \centering \includegraphics[scale=0.8,clip=true]{NC_000913.topsegK.L1586.eps} \caption{A tunneling event occurring between $M = 1586$ and $M = 1584$ domain walls remaining in the bottom-up segmentation history of \emph{E. coli} K-12 MG1655 ($N = 4639675$ bp), starting from the $(K = 0, n = 1000)$ initial segmentation containing $M = 2791$ domain walls. Three segments in the $(4604497, 4632896)$ region of the genome are shown. The short segment involved in this tunneling event consists of the single gene \emph{yjjX} on the negative strand (green), flanked by two segments consisting of genes found predominantly on the positive strand (red).
At each stage of the bottom-up segmentation history, the domain wall removed is highlighted in red.} \label{figure:NC000913topsegKL1586} \end{figure} \begin{figure}[hbt] \centering \includegraphics[scale=0.8,clip=true]{NC_000913.topsegK.L1846.eps} \caption{A cascade event occurring between $M = 1846$ and $M = 1841$ domain walls remaining in the bottom-up segmentation history of \emph{E. coli} K-12 MG1655 ($N = 4639675$ bp), starting from the $(K = 0, n = 1000)$ initial segmentation containing $M = 2791$ domain walls. Six segments in the $(1142115, 1157158)$ region of the genome are shown. The first domain wall to be removed in this cascade event lies close to the boundary between the gene \emph{rne}, believed to be RNase E, on the negative strand (green), and the gene \emph{yceQ}, coding for a hypothetical protein, on the positive strand (red). The second domain wall to be removed in the cascade is in the middle of the gene \emph{rpmF} on the positive strand, the third is close to the boundary between \emph{fabF} and \emph{pabC}, the fourth is close to the boundary between \emph{pabC} and \emph{yceG}, and the last is close to the boundary between \emph{holB} and \emph{ycfH}. At each stage of the bottom-up segmentation history, the domain wall removed is highlighted in red.} \label{figure:NC000913topsegKL1846} \end{figure} Looking more closely at the segment statistics, we realized that a tunneling event involves a short segment flanked by two long segments which are statistically similar to one another, but different from the short segment. This statistical dissimilarity between the short segment and its long flanking segments is reflected in the moderate strengths $\Delta_L$ and $\Delta_R$ of the left and right domain walls of the short segment. Let us say the right domain wall is slightly weaker than the left domain wall, i.e. $\Delta_R \lesssim \Delta_L$.
As the bottom-up segmentation history progresses, we will reach a stage where we remove the right domain wall. When this happens, the short segment will be assimilated by its right flanking segment. Because the right flanking segment is long, absorbing the short segment represents only a small perturbation in its segment statistics. The longer right segment that results is still statistically similar to the left segment. Therefore, when we recompute the strength $\Delta_L$ of the remaining domain wall, we find that it is now smaller than the strength $\Delta_R$ of the domain wall that was just removed. This remaining domain wall therefore becomes the next to be removed in the bottom-up segmentation history, after which the next domain wall to be removed occurs somewhere else in the sequence, and has strength slightly larger than $\Delta_R$. The signature of a tunneling event is therefore a sharp dip in the bottom-up segmentation history. Biologically, a short segment with a tunneling event signature is likely to represent an insertion sometime in the evolutionary past of the organism. A tunneling event in the $(K = 0, n = 1000)$ bottom-up segmentation history is shown in Fig. \ref{figure:NC000913topsegKL1586}. In contrast, a cascade event involves a cluster of short segments of varying statistics flanked by two long segments that are statistically similar. The domain walls separating the short segments from each other and from the long flanking segments are then removed in succession. This sequential removal of domain walls gives rise to an extended dip in the bottom-up segmentation history, with a complex internal structure that depends on the actual distribution of short segments. Biologically, a cluster of short segments participating in a cascade event points to a possible recombination hotspot on the genome of the organism. A cascade event in the $(K = 0, n = 1000)$ bottom-up segmentation history is shown in Fig. \ref{figure:NC000913topsegKL1846}.
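The bottom-up removal procedure just described can be sketched compactly. The following Python fragment is a minimal illustration for $K = 0$ (single-nucleotide) statistics, operating on per-segment symbol counts; the function names and the merge-then-recompute loop are our own illustrative choices, not the benchmarked implementation:

```python
from math import log

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    return -sum(x * log(x) for x in p if x > 0.0)

def js_strength(counts_l, counts_r):
    """Length-weighted Jensen-Shannon divergence between two adjacent
    segments, given their symbol counts (K = 0 statistics)."""
    n_l, n_r = sum(counts_l), sum(counts_r)
    n = n_l + n_r
    p_l = [c / n_l for c in counts_l]
    p_r = [c / n_r for c in counts_r]
    p_m = [(a + b) / n for a, b in zip(counts_l, counts_r)]
    return entropy(p_m) - (n_l / n) * entropy(p_l) - (n_r / n) * entropy(p_r)

def bottom_up_history(segments):
    """Repeatedly remove the weakest domain wall, merging its two
    flanking segments, and record the strength removed at each step.
    `segments` is a list of per-segment symbol-count lists."""
    segs = [list(s) for s in segments]
    history = []
    while len(segs) > 1:
        strengths = [js_strength(segs[i], segs[i + 1])
                     for i in range(len(segs) - 1)]
        i = min(range(len(strengths)), key=strengths.__getitem__)
        history.append(strengths[i])
        # merge the two segments flanking the removed wall; the
        # neighbouring wall strengths are recomputed on the next
        # pass, which is where tunneling/cascade dips arise
        segs[i] = [a + b for a, b in zip(segs[i], segs[i + 1])]
        del segs[i + 1]
    return history
```

Running this on a short, distinct segment flanked by two long, mutually similar segments reproduces the tunneling signature: the strength removed in the second step drops sharply below the strength removed in the first.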
Clearly, by removing more and more domain walls, we construct a proper hierarchy of segmentations containing fewer and fewer domain walls, which agrees intuitively with our notion of what coarse graining is about. We also expected to obtain a unique coarse-grained segmentation, containing only domain walls stronger than $\Delta^*$, by removing all domain walls weaker than $\Delta^*$. It turns out that the picture emerging from this coarse graining procedure is more complicated; from it we identified three main problems. First, let us start with a segmentation containing domain walls weaker than $\Delta^*$, and decide to remove these domain walls in a single step. Recomputing the strengths of the remaining domain walls, we would find that some of these will be weaker than $\Delta^*$, and so we cannot claim to have found the desired coarse-grained segmentation. Naturally, we iterate the process, removing all domain walls weaker than $\Delta^*$, and recomputing the strengths of the remaining domain walls, until all remaining domain walls are stronger than $\Delta^*$. Next, we try removing domain walls weaker than $\Delta^*$ one at a time, starting from the weakest, and recompute domain wall strengths after every removal. The strengths of a few of the remaining domain walls will change each time the weakest domain wall is removed, sometimes becoming stronger, and sometimes becoming weaker, but we continue removing the weakest domain wall until all remaining domain walls are stronger than $\Delta^*$. Comparing the segmentations obtained using the two coarse-graining procedures, we will find that they can be very different. This difficulty occurs for all averaging problems, so we are not overly concerned, but argue instead that removing the weakest domain wall each time is like a renormalization-group procedure, and should therefore be more reliable than removing many weak domain walls all at once.
Once we accept this decremental procedure for coarse graining, we arrive at the second problem. Suppose we do not stop coarse graining after arriving at the first segmentation with all domain walls stronger than $\Delta^*$, but switch strategy to targeting and removing segments associated with tunneling and cascade events. The segmentations obtained after all domain walls associated with such segments are removed will contain only domain walls stronger than $\Delta^*$, but the segmentations in the intermediate steps will contain domain walls weaker than $\Delta^*$. If we keep coarse graining until no tunneling or cascade events weaken domain walls below $\Delta^*$, we would end up with a series of coarse-grained segmentations containing different numbers of domain walls. These segmentations do not have the same minimum domain wall strengths, but are related to each other through stages in which some domain walls are weaker than $\Delta^*$. We worry about this series of segmentations when there exist domain walls with equal or nearly equal strengths. If at any stage of the coarse graining, these domain walls become the weakest overall, and we stick to removing one domain wall at a time, we can remove any one of these equally weak domain walls. If we track the different bottom-up segmentation histories associated with each choice, we will find that the coarse-grained segmentations for which all domain walls first become stronger than $\Delta^*$ can be very different. However, if we coarse grain further by targeting tunneling and cascading segments, we would end up with the same coarse-grained segmentation for which no domain walls ever become weaker than $\Delta^*$. Another way to think of this coarsest segmentation is that it is the one for which no domain wall stronger than $\Delta^*$ can be added without first adding a domain wall weaker than $\Delta^*$.
Third, we know from the bottom-up segmentation history that short segments participating in tunneling events can be absorbed into their long flanking segments without appreciably changing the strengths of the latter's other domain walls. Clearly, absorbing statistically very distinct short segments increases the heterogeneity of the coarse-grained segment. This is something we have to accept in coarse graining, but ultimately, what we really want at each stage of the coarse graining is for segments to be no more heterogeneous than some prescribed segment variance. Unfortunately, the segment variances are not related to the domain wall strengths in a simple fashion, and even if we know how to compute these segment variances, there is no guarantee that a coarse graining scheme based on these will be less problematic. The bottom line is that all these problems arise because domain wall strengths change wildly as segments are agglomerated in the coarse graining process, due again to the context sensitivity of the Jensen-Shannon divergence (or any other entropic measure, for that matter). \section{Mean-Field Analyses of Segmentation Schemes} \label{section:meanfieldanalysis} From our segmentation and coarse graining analyses of real genomes in Sec.~\ref{section:contextsensitivityprobleminrealgenomes}, we realized that these genomes cannot be thought of as consisting of long segments that are strongly dissimilar to their neighboring long segments, within which we find short segments that are weakly dissimilar to their neighboring short segments. In fact, the results suggest that there are short segments that are strongly dissimilar to their neighboring long segments, which are in turn frequently only weakly dissimilar to one another. This mosaic and non-hierarchical structure of segments is the root of the context sensitivity problem, which we will seek to better understand in this section.
\begin{figure}[htbp] \centering \includegraphics[width=.7\linewidth]{meanfieldpicture.eps} \caption{Going from a discrete description to a continuum description of a nucleotide sequence.} \label{figure:meanfieldpicture} \end{figure} To do this, we go first to a continuum description of discrete genomic sequences, as shown in Fig.~\ref{figure:meanfieldpicture}, where we allow the sequence positions and the various $K$-mer frequencies to vary continuously. To eliminate spatial inhomogeneities in the statistics of the interval $[i, j > i)$, which we want to model as a statistically stationary segment in the \emph{mean-field limit}, we distribute its $K$-mer statistics uniformly along the segment. More precisely, if $f_{\mathbf{t} s}^{[i, j)}$ is the number of times the $(K+1)$-mer $\alpha_{t_K}\cdots\alpha_{t_1}\alpha_s$, which we also refer to as the \emph{transition} $\mathbf{t} \to s$, appears in $[i, j)$, we define the mean-field count $f_{\mathbf{t} s}^{[i', j')}$ of the transition $\mathbf{t} \to s$ within the subinterval $[i', j' > i') \subseteq [i, j)$ to be \begin{equation} f_{\mathbf{t}s}^{[i', j')} \equiv \frac{j' - i'}{j - i}\, f_{\mathbf{t}s}^{[i, j)}. \end{equation} Within this mean-field picture, we discuss in Sec.~\ref{subsection:meanfieldwindowedspectrum} how the paired sliding-window scheme's ability to detect domain walls depends on the size $n$ of the pair of sliding windows. We show that, while the positions and strengths of domain walls between segments both longer than $n$ are determined exactly by this segmentation scheme, domain walls between segments, one or both of which are shorter than $n$, are weakened and shifted in the mean-field limit. Following this, we show in Sec.~\ref{subsection:meanfieldwindowlessspectrum} that the strengths of the domain walls obtained from the recursive segmentation scheme are context sensitive, and approach the exact strengths only as we approach the terminal segmentation.
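In code, the mean-field proration of counts above is a one-liner; a trivial sketch, with names of our own choosing:

```python
def mean_field_count(f_seg, i, j, i_p, j_p):
    """Mean-field count of a transition inside the subinterval
    [i', j') of the mean-field segment [i, j), given f_seg counts
    of that transition in the whole segment."""
    assert i <= i_p <= j_p <= j and j > i
    return (j_p - i_p) / (j - i) * f_seg
```

By construction the mean-field counts are additive over adjacent subintervals, which is exactly the statistical stationarity we want in the mean-field limit.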
We explain why optimization is desirable at every step of the recursive segmentation, before going on to explain why repetitive sequences are the worst kind of sequences to segment in Sec.~\ref{subsection:oscillatorysequence}. In this section, we present numerical examples for $K = 0$ Markov chains, but all qualitative conclusions are valid for Markov chains of order $K > 0$. \subsection{Paired Sliding Windows Segmentation Scheme} \label{subsection:meanfieldwindowedspectrum} For a pair of windows of length $n$ sliding across a mean-field sequence, there are three possibilities (see Fig.~\ref{figure:windowedcases}): \begin{enumerate} \item both windows lie entirely within a single mean-field segment; \item the two windows straddle two mean-field segments, i.e.~there is a single domain wall within one of the windows; \item the two windows straddle multiple mean-field segments. \end{enumerate} The first situation is trivial, as the left and right windowed counts are identical, \begin{equation} f_{\mathbf{t}s}^L = f_{\mathbf{t}s}^R = \frac{n}{N_{\text{seg}}}\, f_{\mathbf{t}s}^{\text{seg}}, \end{equation} $N_{\text{seg}}$ being the length of the mean-field segment, and $f_{\mathbf{t}s}^{\text{seg}}$ being the transition counts within the mean-field segment. The Jensen-Shannon divergence, or the square deviation, between the two windows therefore vanishes identically. The second situation, which is what the paired sliding windows segmentation scheme is designed to handle, is analyzed in App.~\ref{subsection:meanfieldlineshape}. Based on that analysis, we showed that the position and strength of the domain wall between the two mean-field segments can be determined exactly. We also derived the mean-field lineshape for match filtering.
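These windowed quantities are easy to reproduce numerically. Below is a sketch of the windowed Jensen-Shannon divergence spectrum for a binary $K = 0$ mean-field sequence, stored as a list of (length, $P(0)$) pairs; the data layout and function names are our own illustrative choices:

```python
from math import log

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    return -sum(x * log(x) for x in p if x > 0.0)

def jsd(cl, cr):
    """Length-weighted Jensen-Shannon divergence between two windows,
    given their (mean-field) symbol counts."""
    nl, nr = sum(cl), sum(cr)
    n = nl + nr
    hm = entropy([(a + b) / n for a, b in zip(cl, cr)])
    return hm - (nl / n) * entropy([c / nl for c in cl]) \
              - (nr / n) * entropy([c / nr for c in cr])

def window_counts(segments, lo, hi):
    """Mean-field K = 0 symbol counts inside [lo, hi) of a binary
    sequence given as a list of (length, P(0)) mean-field segments."""
    c0 = c1 = 0.0
    pos = 0.0
    for length, p0 in segments:
        a, b = max(lo, pos), min(hi, pos + length)
        if b > a:                      # window overlaps this segment
            c0 += (b - a) * p0
            c1 += (b - a) * (1.0 - p0)
        pos += length
    return [c0, c1]

def divergence_spectrum(segments, n, step):
    """Slide a symmetric pair of length-n windows across the
    mean-field sequence; return cursor positions and divergences."""
    total = sum(length for length, _ in segments)
    zs, spec = [], []
    z = n
    while z <= total - n:
        zs.append(z)
        spec.append(jsd(window_counts(segments, z - n, z),
                        window_counts(segments, z, z + n)))
        z += step
    return zs, spec
```

In the first situation the spectrum vanishes identically, and in the second it peaks exactly at the domain wall, as the analysis referred to above leads us to expect.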
\begin{figure}[htbp] \centering \includegraphics{windowedcases.eps} \caption{The three possible situations that we encounter when we slide a symmetric pair of windows across a sequence composed of many mean-field segments: (1) both windows lie entirely within a single mean-field segment; (2) the two windows straddle two mean-field segments; and (3) the two windows straddle multiple mean-field segments.} \label{figure:windowedcases} \end{figure} In this subsection, our interest is in understanding how the paired sliding windows segmentation scheme behaves in the third situation. Clearly, the precise structure of the mean-field divergence spectrum will depend on the local context the pair of windows is sliding across, so we look at an important special case: that of a pair of length-$n$ windows sliding across a segment shorter than $n$. In Fig. \ref{figure:wJSshortsegment}, we show two lineshapes which are expected to be generic: (i) when the long segments flanking the short segment are themselves statistically dissimilar (top plot); and (ii) when the long segments flanking the short segment are themselves statistically similar (bottom plot). In case (i), the mean-field lineshape obtained as the pair of windows slides across the short segment consists of a single peak at one of its ends. This peak is broader than that of a simple domain wall by the width of the short segment, and therefore, if we perform match filtering using the quadratic mean-field lineshape in Eq. \eqref{equation:meanfieldJS}, the center of the match-filtered peak would occur not at either end of the short segment, but somewhere in the interior. \begin{figure}[htbp] \centering \includegraphics[scale=0.45,clip=true]{wJSshortsegment.eps} \caption{The Jensen-Shannon divergence $\Delta(z)$ (solid curves) of a pair of sliding windows of length $n = 1$ as it slides across the binary mean-field segments (left to right) $a$, $b$, and $c$, with lengths $N_a > 1$, $N_b < 1$, and $N_c > 1$ respectively.
On the above plots, the left and right ends of segment $b$ are highlighted by the dashed vertical lines at the normalized sequence positions $z = 0$ and $z = 0.5$ respectively. For the top plot, the probabilities associated with the mean-field segments are $P_a(0) = 1 - P_a(1) = 0.30$, $P_b(0) = 1 - P_b(1) = 0.50$, and $P_c(0) = 1 - P_c(1) = 0.60$. For the bottom plot, the probabilities associated with the mean-field segments are $P_a(0) = 1 - P_a(1) = 0.20$, $P_b(0) = 1 - P_b(1) = 0.70$, and $P_c(0) = 1 - P_c(1) = 0.22$.} \label{figure:wJSshortsegment} \end{figure} In case (ii), the mean-field lineshape obtained as the pair of windows slides across the short segment consists of a pair of peaks, both of which are narrower than the mean-field lineshape of a single domain wall. After we perform match filtering, the center of the match-filtered left peak would lie to the left of the true left domain wall, while the center of the match-filtered right peak would lie to the right of the true right domain wall. Case (ii) is of special interest to us, as it is the context that gives rise to tunneling events in the bottom-up segmentation history. Both contexts give rise to shifts in the domain wall positions, as well as to changes in the strengths of the unresolved domain walls, and thus may be able to explain some of the observations made in Sec.~\ref{subsection:pairedslidingwindows}. In case (i), the domain wall strength can increase or decrease, depending on how different the two long flanking segments are compared to the short segment. In case (ii), the domain wall strengths always decrease. \subsection{Optimized Recursive Jensen-Shannon Segmentation Scheme} \label{subsection:meanfieldwindowlessspectrum} To understand how the optimized recursive Jensen-Shannon segmentation is sensitive to global context, let us first understand what happens when the segments discovered recursively are not optimized, and then consider the effects of segmentation optimization.
In Fig.~\ref{figure:JS10}, we show the Jensen-Shannon divergence spectrum for a sequence consisting of ten mean-field segments. As we can see, the mean-field Jensen-Shannon divergence is everywhere convex, except at the domain walls. These are associated with peaks or kinks in the divergence spectrum, depending on the global context within the sequence. Under special distributions of the segment statistics, domain walls may even have vanishing divergences. \begin{figure}[htbp] \centering \includegraphics[scale=0.45,clip=true]{JS10.eps} \caption{The Jensen-Shannon divergence $\Delta(z)$ (red solid curve) as a function of the normalized cursor position $z$ within an artificial binary sequence composed of ten mean-field segments, characterized by the probabilities (left to right) $\mathbf{P}(0) = (0.55, 0.05, 0.20, 0.60, 0.65, 0.30, 0.45, 0.05, 0.45, 0.15)$. The blue bars indicate the true strengths of each of the nine domain walls, at $z_1 = 0.15$, $z_2 = 0.25$, $z_3 = 0.35$, $z_4 = 0.50$, $z_5 = 0.65$, $z_6 = 0.70$, $z_7 = 0.85$, $z_8 = 0.90$, and $z_9 = 0.95$, while the number at each domain wall indicates the recursion step at which it is discovered. (Inset) The Jensen-Shannon divergence $\Delta(z)$ (red solid curve) as a function of the normalized cursor position $z$ within an artificial binary sequence composed of two mean-field segments, characterized by the probabilities $P_L(0) = 0.10$ and $P_R(0) = 0.90$. The domain wall at $z = 0.60$ is indicated by the blue dashed vertical line.} \label{figure:JS10} \end{figure} All nine domain walls in the ten-segment sequence are recovered if we allow the recursive Jensen-Shannon segmentation without segmentation optimization to go to completion. However, as shown in Fig.~\ref{figure:JS10}, these domain walls are not discovered in the order of their true strengths (heights of the blue bars), given by the Jensen-Shannon divergence between the pairs of segments they separate.
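To make the recursion concrete, here is a sketch of the unoptimized recursive scheme acting on a discrete binary ($K = 0$) sequence; the cutoff value and all names are illustrative choices of our own:

```python
from math import log

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    return -sum(x * log(x) for x in p if x > 0.0)

def jsd(cl, cr):
    """Length-weighted Jensen-Shannon divergence for symbol counts."""
    nl, nr = sum(cl), sum(cr)
    n = nl + nr
    hm = entropy([(a + b) / n for a, b in zip(cl, cr)])
    return hm - (nl / n) * entropy([c / nl for c in cl]) \
              - (nr / n) * entropy([c / nr for c in cr])

def counts(seq, lo, hi):
    """Binary symbol counts in seq[lo:hi]."""
    c = [0, 0]
    for s in seq[lo:hi]:
        c[s] += 1
    return c

def best_cut(seq, lo, hi):
    """Cursor position in (lo, hi) maximizing the divergence between
    the left part [lo, i) and the right part [i, hi)."""
    best_i, best_d = None, -1.0
    for i in range(lo + 1, hi):
        d = jsd(counts(seq, lo, i), counts(seq, i, hi))
        if d > best_d:
            best_i, best_d = i, d
    return best_i, best_d

def recursive_segmentation(seq, lo=0, hi=None, cutoff=0.05):
    """Unoptimized recursive Jensen-Shannon segmentation: split at the
    divergence maximum, recurse while the maximum exceeds the cutoff.
    Returns the sorted list of domain-wall positions found."""
    if hi is None:
        hi = len(seq)
    if hi - lo < 2:
        return []
    i, d = best_cut(seq, lo, hi)
    if d < cutoff:
        return []
    return (recursive_segmentation(seq, lo, i, cutoff) + [i]
            + recursive_segmentation(seq, i, hi, cutoff))
```

For a sequence of well-separated homogeneous segments the true walls are recovered, but for sequences like the ten-segment example above, the order of discovery and the divergence seen at each wall depend on the evolving global context.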
In fact, just like in the coarse graining procedure described in Sec.~\ref{subsection:bottomupsegmentationhistory}, the Jensen-Shannon divergence at each domain wall changes as the recursion proceeds, as the context it is found in gets refined. For this ten-segment sequence, the recursive segmentation scheme's sensitivity to global context results in the third strongest domain wall being discovered in the first recursion step, the second and fourth strongest domain walls being discovered in the second recursion step, and the strongest domain wall being discovered only in the third recursion step. To see the extent to which optimization ameliorates the global context sensitivity of the recursive segmentation scheme, let us imagine the ten-segment sequence to be part of a longer sequence being recursively segmented. Let us further suppose that under segmentation optimization, the segment $(0.95, 1.00)$ gets incorporated into the sequence to the right of $(0.00, 1.00)$. With this, we now examine in detail a nine-segment sequence $(0.00, 0.95)$, whose mean-field divergence spectrum is shown in Fig.~\ref{figure:JS10c9}, instead of the original ten-segment sequence $(0.00, 1.00)$. From Fig.~\ref{figure:JS10c9}, we find the divergence maximum of the nine-segment sequence is at $z_3 = 0.35$, the second strongest of the nine domain walls, instead of the third strongest domain wall at $z_7 = 0.85$ for the ten-segment sequence. In proportion to the length of the ten-segment sequence, this shift from the third strongest domain wall to the second strongest domain wall is huge, by about half the length of the sequence, when the change in context involves a loss of only 5\% of the total length. In Sec.~\ref{subsection:recursivegenome}, we saw instances of such large shifts in optimized domain wall positions when we recursively add one new domain wall each time to a real genome.
\begin{figure}[htbp] \centering \includegraphics[scale=0.45,clip=true]{JS10c9.eps} \caption{The windowless Jensen-Shannon divergence spectrum $\Delta(z)$ (red solid curve) of the nine-segment binary sequence, after losing the short segment at its right end. The blue bars indicate the strength of each of the nine domain walls.} \label{figure:JS10c9} \end{figure} In this example of the ten-segment sequence, we saw that segmentation optimization has the potential to move an existing domain wall, from a weaker (the third strongest overall), to a stronger (the second strongest overall, and if the global context is different, perhaps even to the strongest overall) position. However, the nature of the context sensitivity problem is such that no guarantee can be offered that the segmentation optimization algorithm will always move a domain wall from a weaker to a stronger position. Nevertheless, segmentation optimization frequently does move a domain wall from a weaker position to a stronger position, and it always makes successive segments as statistically distinct from each other as possible. This is reason enough to justify the use of segmentation optimization. \subsection{Repetitive Sequences} \label{subsection:oscillatorysequence} In this last subsection of Sec.~\ref{section:meanfieldanalysis}, let us look at repetitive sequences, for which the context sensitivity problem is the most severe. Such sequences, which are composed of periodically repeating motifs, are of biological interest because they arise from a variety of recombination processes, and are fairly common in real genomic sequences. In general, a motif $a_1a_2\cdots a_r$ that is repeated in a repetitive sequence can consist of $r$ statistically distinct subunits, but for simplicity, let us look only at $ab$-repeats, and highlight statistical signatures common to all repetitive sequences.
\begin{figure}[htbp] \centering \includegraphics[scale=0.45,clip=true]{wabx8mf.eps} \caption{The Jensen-Shannon divergence spectrum (top, red solid curve) before, and (bottom, red solid curve) after match filtering and quality enhancement, for a pair of windows of size $n = 1$ sliding across a repetitive binary $K = 0$ sequence $cababababababababc$, where the subunits $a$ (light green) and $b$ (light yellow) both have lengths $n_a = n_b = 0.7$, and are characterized by the probabilities $P_a(0) = 1 - P_a(1) = 0.1$ and $P_b(0) = 1 - P_b(1) = 0.9$. The terminal $c$ segments (white), assumed to have lengths much larger than $n = 1$, are characterized by the probability $P_c(0) = 1 - P_c(1) = 0.5$.} \label{figure:wabx8mf} \end{figure} When we segment the repetitive sequence $abababababababab$ using the paired sliding windows segmentation scheme with window size $n$, we obtain the mean-field Jensen-Shannon divergence spectrum shown in the top plot of Fig.~\ref{figure:wabx8mf}. In this figure, sequence positions are normalized such that $n = 1$, while the lengths of the repeating segments $a$ and $b$ are chosen to be both less than the window size, i.e.~$n_a = n_b = 0.7 < n$. To understand contextual effects at the ends of the repetitive sequence, we include the terminal segments $c$ in our analysis. These terminal $c$ segments are assumed to have lengths $n_c \gg n$, and statistics intermediate between those of $a$ and $b$. As we can see from the top plot of Fig.~\ref{figure:wabx8mf}, all domain walls between $a$ and $b$ segments ($ab$ \emph{domain walls}) correspond to peaks in the mean-field divergence spectrum. The two $ab$ domain walls near the ends of the repetitive sequence are the strongest, while the rest have the same diminished strength (compared to the Jensen-Shannon divergence between the $a$ and $b$ segments). From the top plot of Fig.~\ref{figure:wabx8mf}, we also see that no peaks are associated with the $ca$ and $bc$ domain walls.
Instead, we find a spurious peak left of the $ca$ domain wall, and another spurious peak right of the $bc$ domain wall. As discussed in App.~\ref{section:pairedslidingwindowssegmentationscheme}, the mean-field lineshape of a simple domain wall is very nearly piecewise quadratic, with a total width of $2n$. This observation is extremely helpful when we deal with real divergence spectra, where statistical fluctuations produce spurious peaks with various shapes and widths. By insisting that only peaks that are (i) approximately piecewise quadratic, with (ii) widths close to $2n$, are statistically significant, we can determine a smaller, and more reliable set of domain walls through match filtering. In the top plot of Fig.~\ref{figure:wabx8mf}, all our peaks have widths smaller than $2n$. In the mean-field limit, these are certainly not spurious, but if we imagine putting statistical fluctuations back into the divergence spectrum, and suppose we did not know beforehand that there are segments shorter than $n$ in this sequence, it would be reasonable to accept at face value whatever picture emerges from the match filtering procedure. For $cababababababababc$, the match-filtered, quality enhanced divergence spectrum is shown as the bottom plot of Fig.~\ref{figure:wabx8mf}, where we find the two spurious peaks shifted deeper into the $c$ segments by the match filtering procedure. In this plot, the two strong $ab$ domain walls near the ends of the repetitive sequence continue to stand out, but the rest of the $ab$ domain walls are now washed out by match filtering. If we put statistical noise back into the picture, the fine structures marking these remaining $ab$ domain walls will disappear, and we end up with a featureless plateau in the interior of the repetitive sequence.
We might then be misled into thinking that this $cababababababababc$ sequence consists of only five segments $ca'c'b'c$, where $a'$ is $a$ contaminated by a small piece of $c$, $b'$ is $b$ contaminated by a small piece of $c$, and $c'$, which lies between the two strong $ab$ domain walls, will be mistaken for a segment with $K = 0$ statistics similar to $c$, even though it is not statistically stationary. Next, let us analyze the recursive Jensen-Shannon segmentation of $abababababababab$, where we cut the repetitive sequence first into two segments, then each of these into two subsegments, and so on and so forth, until all the segments are discovered. In the top plot of Fig.~\ref{figure:abx8rJS}, we show the top-level Jensen-Shannon divergence spectrum, based on which we will cut $abababababababab$ into two segments. In this figure, we find \begin{enumerate} \item a series of $k$ peaks of unequal strengths, with stronger peaks near the ends, and weaker peaks in the middle of the repetitive sequence; \item $k - 1$ domain walls having vanishing divergences; \item the ratio of the strength of the strongest peak to that of the weakest peak is roughly $k/2$, \end{enumerate} where $k$ is the number of repeated motifs. These statistical signatures are shared by all repetitive sequences, with the detailed distribution and statistical characteristics of the subunits within the repeated motif affecting only the shape and strength of the peaks. Here we see extreme context sensitivity reflected in the fact that domain walls with the same true strength can have very different, and even vanishing, strengths when the segment structure of the sequence is examined recursively.
\begin{figure}[htbp] \centering \includegraphics[scale=0.45,clip=true]{abx8rJS.eps} \caption{(Top) The top-level Jensen-Shannon divergence spectrum (red solid curve) obtained in the recursive segmentation of a repetitive binary sequence consisting of subunits $a$ (light green, $P_a(0) = 1 - P_a(1) = 0.1$) and $b$ (light yellow, $P_b(0) = 1 - P_b(1) = 0.9$) repeated eight times. (Bottom) The Jensen-Shannon divergence spectra obtained when $abababababababab$ is recursively segmented from the right end.} \label{figure:abx8rJS} \end{figure} From the bottom plot of Fig.~\ref{figure:abx8rJS}, we find that one or both of the peaks near the ends of the repetitive sequence are always the strongest, as recursion progresses. This is true when the repetitive sequence consists of repeating motifs with more complex internal structure, and also true when we attach terminal segments to the repetitive sequence. Therefore, successive cuts are always made at one end or the other of the repetitive sequence. For $ab$-repeats, the peaks near both ends are equally strong in the mean-field limit, so we can choose to always cut at the right end of $abababababababab$, as shown in the bottom plot of Fig.~\ref{figure:abx8rJS}. As the repetitive sequence loses its rightmost segment at every step, and the global context alternates between being dominated by $a$ segments and being dominated by $b$ segments, we find oscillations in the strengths of the remaining domain walls. This oscillation, which is a generic behaviour of all repetitive sequences under recursive segmentation, can be seen more clearly for the $ab$-repetitive sequence in Figure \ref{figure:abx8osc}, where instead of cutting off one segment at a time, we move the cut continuously inwards from the right end.
\begin{figure}[htbp] \centering \includegraphics[scale=0.45,clip=true]{abx8osc.eps} \caption{The windowless Jensen-Shannon divergences at $z = 10.0$ (at a domain wall) and $z = 9.5$ (away from a domain wall) of the repetitive binary sequence $abababababababab$, with $P_a(0) = 0.1$ and $P_b(0) = 0.9$, as functions of the cut $10 \leq z \leq 16$.} \label{figure:abx8osc} \end{figure} \section{Summary and Discussions} \label{section:conclusions} In this paper, we defined the \emph{context sensitivity problem}, in which the \emph{same} group of statistically stationary segments are segmented \emph{differently} by the \emph{same} segmentation scheme, when it is encapsulated within \emph{different larger contexts} of segments. We then described in Sec.~\ref{section:contextsensitivityprobleminrealgenomes} the various manifestations of context sensitivity when real bacterial genomes are segmented using the paired sliding windows and optimized recursive Jensen-Shannon segmentation schemes, which are sensitive to local and global contexts respectively. For the single-pass paired sliding windows segmentation scheme, we found that the positions and relative strengths of domain walls can change dramatically when we change the window size, and hence the local contexts examined. For the optimized recursive segmentation scheme, we found that there can be large shifts in the optimized domain wall positions as recursion progresses, due to the change in global context when we go from examining a sequence to examining its subsequence, and \emph{vice versa}. In Sec.~\ref{section:contextsensitivityprobleminrealgenomes}, we also looked into the issue of coarse graining the segmental description of a bacterial genome.
We argued that coarse graining by using larger window sizes, or stopping recursive segmentation earlier, can be biologically misleading, because of the context sensitivity problem, and explored an alternative coarse graining procedure which involves removing the weakest domain walls and agglomerating the segments they separate. This coarse graining procedure was found to be fraught with difficulties, arising again from the context sensitivity of domain wall strengths. Ultimately, the goal of coarse graining is to reduce the complexity of the segmented models of real genomes. This can be achieved by reducing the number of segments, or by reducing the number of segment \emph{types} or \emph{classes} (see, for example, the work by Azad \emph{et al}. \cite{Azad2002PhysicalReviewE66a031913}). We realized in this paper that the former is unattainable, and proposed to accomplish the latter through statistical clustering of the segments. Based on what we understand about the context sensitivity problem, we realized that it would be necessary to segment a given genomic sequence as far as possible, to the point before genes are cut into multiple segments (unless they are known to contain multiple domains). We are in the process of writing up the results of our investigations into this manner of coarse graining, in which no domain walls are removed, but statistically similar segments are clustered into a small number of segment classes. In Sec.~\ref{section:meanfieldanalysis}, we analyzed the paired sliding windows and optimized recursive segmentation schemes within a mean-field picture. For the former, we explained how the presence of segments shorter than the window size leads to shifts in the positions, and changes in the strengths of domain walls.
For the latter, we illustrated the context dependence of domain wall strengths, and showed how this leads to large shifts in the optimized domain wall positions, as well as to domain walls being discovered out of order relative to their true strengths. We showed that all domain walls in a sequence will be recovered in the mean-field limit, if we allow the recursive segmentation to go to completion, but realized that for real sequences subject to statistical fluctuations, there is a danger of stopping the recursion too early. When this happens, we will generically pick up weak domain walls, but miss stronger ones --- a problem that can be partly alleviated through segmentation optimization, in which domain walls are moved from weaker to stronger positions. We devoted one subsection to explaining why the context sensitivity problem is especially severe in repetitive sequences. Finally, let us say that while we have examined only two entropic segmentation schemes in detail, we believe the context sensitivity problem plagues all segmentation schemes. The manifestations of the context sensitivity problem will of course be different for different segmentation schemes, but will involve (i) getting the domain wall positions wrong; (ii) getting the domain wall strengths wrong; or (iii) missing strong domain walls. A proper analysis of the context sensitivity of the various segmentation schemes is beyond the scope of this paper, but let us offer some thoughts on segmentation schemes based on hidden Markov models (HMMs), which are very popular in the bioinformatics literature. In HMM segmentation, model parameters are typically estimated using the Baum-Welch algorithm, which first computes the forward and backward probabilities of each hidden state, uses these to estimate the transition frequencies, and then uses the transition frequencies to update the model parameters.
Computation of the forward and backward probabilities is sensitive to local context, in that the hidden states assigned to a given collection of segments will be different if the sequences immediately flanking the segments are different. Updating of the model parameters, on the other hand, is sensitive to global context, because very different arrangements of segments and segment classes can give rise to the same summary of transition frequencies. The signatures of this dual local-global context sensitivity are buried within the sequence of posterior probabilities obtained from iterations of the Baum-Welch algorithm. Ultimately, the context sensitivity problem is a very special case of the problem of mixed data, which is an active area of statistical research. We hope that through the results presented in this paper, the bioinformatics community will come to better recognize the nuances that sequence context poses for proper segmentation. \begin{appendix} \subsection{Generalized Jensen-Shannon Divergences} \label{section:generalizedJensenShannondivergences} In Ref.~\citeonline{Cheong2007IRJSSS} we explained that dinucleotide correlations and codon biases in biological sequences \cite{Grantham1981NucleicAcidsResearch9pR43, Shepherd1981ProcNatlAcadSciUSA78p1596, Staden1982NucleicAcidsResearch10p141, Fickett1982NucleicAcidsResearch10p5303, Herzel1995PhysicaA216p518} are better modeled by Markov chains of order $K > 0$ over the quaternary alphabet $\mathcal{S} = \rm\{A, C, G, T\}$ \cite{Thakur2007PhysicalReviewE75a011915}, rather than Bernoulli chains over $\mathcal{S}$ \cite{BernaolaGalvan1996PhysicalReviewE53p5181, RomanRoldan1998PhysicalReviewLetters80p1344}, or Bernoulli chains over the extended alphabet $\mathcal{S}^K$ \cite{BernaolaGalvan2000PhysicalReviewLetters85p1342, Nicorici2003FINSIG03, Nicorici2004EURASIPJournalonAppliedSignalProcessing1p81}.
In the sequence segmentation problem, our task is to decide whether there is a domain wall at sequence position $i$ within a given sequence $\mathbf{x} = x_1 x_2 \cdots x_{i-1} x_i x_{i+1} \cdots x_N$, where $x_j \in \mathcal{S}, 1 \leq j \leq N$. The simplest model selection scheme that would address this problem would involve the comparison of the one-segment sequence likelihood $P_1$, whereby the sequence $\mathbf{x}$ is treated as generated by a single Markov process, against the two-segment sequence likelihood $P_2$, whereby the subsequences $\mathbf{x}_L = x_1 x_2 \cdots x_{i-1}$ and $\mathbf{x}_R = x_i x_{i+1} \cdots x_N$ are treated as generated by two different Markov processes. To model $\mathbf{x}$, $\mathbf{x}_L$, and $\mathbf{x}_R$ as Markov chains of order $K$, we determine the order-$K$ \emph{transition counts} $f_{\mathbf{t} s}$, $f_{\mathbf{t} s}^L$, $f_{\mathbf{t} s}^R$, subject to the normalizations \begin{equation} f_{\mathbf{t} s} = f_{\mathbf{t} s}^L + f_{\mathbf{t} s}^R, \quad \sum_{\mathbf{t}\in\mathcal{S}^K}\sum_{s=1}^S f_{\mathbf{t} s} = N. \end{equation} Here $S = 4$ is the size of the quaternary alphabet $\mathcal{S}$, and $\mathbf{t}$ is a shorthand notation for the $K$-tuple of indices $(t_1, t_2, \dots, t_K), 1 \leq t_k \leq S$. The transition counts $f_{\mathbf{t} s}$, $f_{\mathbf{t} s}^L$, and $f_{\mathbf{t} s}^R$ are the number of times the $(K+1)$-mer $\alpha_{t_K} \cdots \alpha_{t_1} \alpha_s$ appear in the sequences $\mathbf{x}$, $\mathbf{x}_L$, and $\mathbf{x}_R$ respectively.
The sequences $\mathbf{x}$, $\mathbf{x}_L$, and $\mathbf{x}_R$ are then assumed to be generated by the Markov processes with \emph{maximum-likelihood transition probabilities} \begin{equation} \hat{p}_{\mathbf{t} s} = \frac{f_{\mathbf{t} s}}{\sum_{s'=1}^S f_{\mathbf{t} s'}}, \quad \hat{p}_{\mathbf{t} s}^L = \frac{f_{\mathbf{t} s}^L}{\sum_{s'=1}^S f_{\mathbf{t} s'}^L}, \quad \hat{p}_{\mathbf{t} s}^R = \frac{f_{\mathbf{t} s}^R}{\sum_{s'=1}^S f_{\mathbf{t} s'}^R}, \end{equation} respectively. Within these maximum-likelihood Markov-chain models, the one- and two-segment sequence likelihoods are given by \begin{equation} \begin{aligned} P_1 &= \prod_{\mathbf{t}\in\mathcal{S}^K}\prod_{s=1}^S \left(\hat{p}_{\mathbf{t} s}\right)^{f_{\mathbf{t} s}}, \\ P_2 &= \prod_{\mathbf{t}\in\mathcal{S}^K}\prod_{s=1}^S \left(\hat{p}_{\mathbf{t} s}^L\right)^{f_{\mathbf{t} s}^L} \left(\hat{p}_{\mathbf{t} s}^R\right)^{f_{\mathbf{t} s}^R}, \end{aligned} \end{equation} respectively. Because we have more free parameters to fit the observed sequence statistics in the two-segment model, $P_2 \geq P_1$. The generalized Jensen-Shannon divergence, a symmetric variant of the relative entropy known more commonly as the \emph{Kullback-Leibler divergence}, is then given by \begin{equation}\label{equation:JensenShannondivergence} \Delta(i) = \log\frac{P_2}{P_1} = \sum_{\mathbf{t}\in\mathcal{S}^K}\sum_{s = 1}^S \left[ -f_{\mathbf{t}s} \log \hat{p}_{\mathbf{t}s} + f_{\mathbf{t}s}^L \log \hat{p}_{\mathbf{t}s}^L + f_{\mathbf{t}s}^R \log \hat{p}_{\mathbf{t}s}^R \right]. \end{equation} This test statistic, generalized from the Jensen-Shannon divergence described in Ref.~\citeonline{Lin1991IEEETransactionsonInformationTheory37p145}, measures quantitatively how much better the two-segment model fits $\mathbf{x}$ compared to the one-segment model. 
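For concreteness, Eq.~\eqref{equation:JensenShannondivergence} can be evaluated directly from the order-$K$ transition counts. The sketch below is a simplified illustration: it ignores the $K$ transitions straddling the cut (a boundary effect that vanishes for long sequences), and the function names are our own:

```python
from collections import Counter
import math

def transition_counts(x, K):
    # order-K transition counts f_{t,s}: occurrences of each (K+1)-mer in x
    return Counter(tuple(x[j:j + K + 1]) for j in range(len(x) - K))

def log_likelihood(f):
    # sum over (t, s) of f_{t,s} * log p_{t,s}, with maximum-likelihood
    # transition probabilities p_{t,s} = f_{t,s} / sum_{s'} f_{t,s'}
    context_totals = Counter()
    for kmer, c in f.items():
        context_totals[kmer[:-1]] += c
    return sum(c * math.log(c / context_totals[kmer[:-1]])
               for kmer, c in f.items())

def js_divergence(x, i, K):
    # Delta(i) = log P2 - log P1 for a cut between x[i-1] and x[i];
    # (K+1)-mers straddling the cut are dropped, so f != fL + fR exactly
    fL = transition_counts(x[:i], K)
    fR = transition_counts(x[i:], K)
    f = transition_counts(x, K)
    return log_likelihood(fL) + log_likelihood(fR) - log_likelihood(f)
```

For $K = 0$ this reduces to comparing symbol frequencies on the two sides of the cut; for $K = 1$ it captures dinucleotide statistics.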
\subsection{Paired Sliding Windows Segmentation Scheme} \label{section:pairedslidingwindowssegmentationscheme} A standard criticism of using sliding windows to detect segment structure within a heterogeneous sequence is the compromise between precision and statistical significance. For the comparison between two windowed statistics to be significant, we want the window size $n$ to be large. On the other hand, to be able to determine a change point precisely, we want the window size $n$ to be small. There is therefore no way, with a single window of length $n$, to independently select both a desired statistical significance and a desired precision. In this appendix, we devise a sliding window segmentation scheme in which, instead of one window, we use a pair of adjoining windows, each of length $n$. By comparing the left windowed statistics to the right windowed statistics, a change point is detected at the center of the pair of windows \emph{when} the two windowed statistics are most different. A given difference between the two windowed statistics becomes more significant as the window size $n$ is increased. A larger window size also suppresses statistical fluctuations, making it easier to locate the change point. Therefore, increasing the window size $n$ improves both statistical significance and precision, even though they cannot be adjusted independently. In App.~\ref{subsection:statisticswithinapairofslidingwindows}, we describe the proper test statistic to use for change point detection within the model selection framework. Then in App.~\ref{subsection:hypothesistestingwithapairofslidingwindows}, we show how a similar test statistic spectrum can be obtained within the hypothesis testing framework. In App.~\ref{subsection:performanceonrealgenomicsequences}, we show some examples of the scheme being applied to real genomic sequences.
In App.~\ref{subsection:meanfieldlineshape}, we derive the mean-field lineshape of a domain wall in this paired sliding window segmentation scheme, and use it to perform match filtering. \subsubsection{Model Selection Within a Pair of Sliding Windows} \label{subsection:statisticswithinapairofslidingwindows} To detect domain walls between different segments within a heterogeneous sequence, we can slide a pair of adjoining windows each of length $n$ across the sequence, and monitor the left and right windowed statistics at different sequence positions, as shown in Figure \ref{figure:pairedslidingwindow}. \begin{figure}[htbp] \centering \includegraphics[scale=0.8]{pairedslidingwindow.eps} \caption{A pair of sliding windows, each of length $n$. A change point at the center of the pair of sliding windows can be detected by comparing the statistics within the left and right windows.} \label{figure:pairedslidingwindow} \end{figure} If we model the different segments by Markov chains of order $K$, the left and right windowed statistics are summarized by the transition count matrices \begin{equation} \textsf{\textbf{F}}^L = \left[ f_{\mathbf{t}s}^L \right], \quad \textsf{\textbf{F}}^R = \left[ f_{\mathbf{t}s}^R \right] \end{equation} respectively, where the transition counts sum to the window size, \begin{equation} \sum_{\mathbf{t}}\sum_s f_{\mathbf{t}s}^L = \sum_{\mathbf{t}}\sum_s f_{\mathbf{t}s}^R = n. \end{equation} From these transition count matrices, we can determine the maximum-likelihood estimates \begin{equation} \hat{\textsf{\textbf{P}}}^L = \left[ p_{\mathbf{t}s}^L \right], \quad p_{\mathbf{t}s}^L = \frac{f_{\mathbf{t}s}^L}{\sum_{s'} f_{\mathbf{t}s'}^L}; \quad \hat{\textsf{\textbf{P}}}^R = \left[ p_{\mathbf{t}s}^R \right], \quad p_{\mathbf{t}s}^R = \frac{f_{\mathbf{t}s}^R}{\sum_{s'} f_{\mathbf{t}s'}^R} \end{equation} of the transition matrices for the left and right windows.
We then compute the transition count matrix \begin{equation} \textsf{\textbf{F}} = \left[ f_{\mathbf{t}s} = f_{\mathbf{t}s}^L + f_{\mathbf{t}s}^R \right], \end{equation} and therefrom the transition matrix \begin{equation} \hat{\textsf{\textbf{P}}} = \left[ p_{\mathbf{t}s} \right], \quad p_{\mathbf{t}s} = \frac{f_{\mathbf{t}s}}{\sum_{s'} f_{\mathbf{t}s'}}, \end{equation} assuming a one-segment model for the combined window of length $2n$, before calculating the windowed Jensen-Shannon divergence using Eq. \eqref{equation:JensenShannondivergence} in App.~\ref{section:generalizedJensenShannondivergences}. By sliding the pair of windows along the sequence, we obtain a windowed Jensen-Shannon divergence spectrum $\Delta(i)$, which tells us where along the sequence the most statistically significant change points are located. \subsubsection{Hypothesis Testing With a Pair of Sliding Windows} \label{subsection:hypothesistestingwithapairofslidingwindows} Change point detection using statistics within the pair of sliding windows can also be done within the hypothesis testing framework. Within this framework, we ask how likely it is to find maximum-likelihood estimates $\hat{\textsf{\textbf{P}}}^L$ for the left window, and $\hat{\textsf{\textbf{P}}}^R$ for the right window, when the pair of windows straddles a statistically stationary region generated by the transition matrix $\textsf{\textbf{P}}$. 
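The paired-windows computation of App.~\ref{subsection:statisticswithinapairofslidingwindows} can be sketched for a binary $K = 0$ sequence as follows; this is a minimal illustration with our own helper names:

```python
import math

def log_likelihood(counts):
    # sum of f * log(f / n) over symbol counts (K = 0)
    n = sum(counts)
    return sum(f * math.log(f / n) for f in counts if f > 0)

def counts(seq):
    # binary symbol counts within a window
    return [seq.count(0), seq.count(1)]

def windowed_js_spectrum(x, n):
    # Delta(i): compare the left window x[i-n:i] against the right
    # window x[i:i+n], relative to the combined window of length 2n
    spectrum = {}
    for i in range(n, len(x) - n + 1):
        L, R = counts(x[i - n:i]), counts(x[i:i + n])
        combined = [l + r for l, r in zip(L, R)]
        spectrum[i] = (log_likelihood(L) + log_likelihood(R)
                       - log_likelihood(combined))
    return spectrum

# a single domain wall at position 60 appears as the spectrum maximum
x = [0] * 60 + [1] * 60
spectrum = windowed_js_spectrum(x, 20)
```

The position of the spectrum maximum locates the change point, as described above; real genomic spectra differ only in using the quaternary alphabet and order-$K$ transition counts.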
In the central limit regime, Whittle showed that the probability of obtaining a maximum-likelihood estimate $\hat{\textsf{\textbf{P}}}$ from a finite sequence generated by the transition matrix $\textsf{\textbf{P}}$ is given by \cite{Whittle1955JRoyalStatSoc17p235} \begin{equation}\label{equation:whittleformula} P(\hat{\textsf{\textbf{P}}}|\textsf{\textbf{P}}) = C \exp\left[ \frac{1}{2} \sum_{\mathbf{t}}\sum_s\sum_{s'} \frac{n}{P_{\mathbf{t}}} \left(1 - \frac{\delta_{ss'}}{p_{\mathbf{t}s}}\right) \left(\hat{p}_{\mathbf{t}s} - p_{\mathbf{t}s}\right) \left(\hat{p}_{\mathbf{t}s'} - p_{\mathbf{t}s'}\right) \right], \end{equation} where $C$ is a normalization constant, $n$ the length of the sequence, and $P_{\mathbf{t}}$ is the equilibrium distribution of $K$-mers in the Markov chain. For $n \gg K$, the left and right window statistics are essentially independent, and so the probability of finding $\hat{\textsf{\textbf{P}}}^L$ in the left window and finding $\hat{\textsf{\textbf{P}}}^R$ in the right window, when the true transition matrix is $\textsf{\textbf{P}}$, is $P(\hat{\textsf{\textbf{P}}}^L | \textsf{\textbf{P}}) P(\hat{\textsf{\textbf{P}}}^R | \textsf{\textbf{P}})$. In principle we do not know what $\textsf{\textbf{P}}$ is, so we replace it by $\hat{\textsf{\textbf{P}}}$, the maximum-likelihood transition matrix estimated from the combined statistics in the left and right windows. Based on Eq. 
\eqref{equation:whittleformula}, the test statistic that we compute as we slide the pair of windows along the sequence is the \emph{square deviation} \begin{equation}\label{equation:centrallimitsquaredeviation} r = -\sum_{\mathbf{t}}\sum_s\sum_{s'} \frac{n}{\hat{P}_{\mathbf{t}}} \left(1 - \frac{\delta_{ss'}}{\hat{p}_{\mathbf{t}s}}\right) \left[ \left(\hat{p}_{\mathbf{t}s}^L - \hat{p}_{\mathbf{t}s}\right) \left(\hat{p}_{\mathbf{t}s'}^L - \hat{p}_{\mathbf{t}s'}\right) + \left(\hat{p}_{\mathbf{t}s}^R - \hat{p}_{\mathbf{t}s}\right) \left(\hat{p}_{\mathbf{t}s'}^R - \hat{p}_{\mathbf{t}s'}\right) \right], \end{equation} which is more or less the negative logarithm of $P(\hat{\textsf{\textbf{P}}}^L | \textsf{\textbf{P}}) P(\hat{\textsf{\textbf{P}}}^R | \textsf{\textbf{P}})$. To compare the square deviation spectrum $r(i)$ obtained for different window sizes, we simply divide $r(i)$ by the window size $n$. From Eq. \eqref{equation:centrallimitsquaredeviation}, we find that $r$ receives disproportionate contributions from rare states ($\hat{P}_{\mathbf{t}}$ small) as well as rare transitions ($\hat{p}_{\mathbf{t}s}$ small). \subsubsection{Application to Real Genomic Sequences} \label{subsection:performanceonrealgenomicsequences} The average length of coding genes in \emph{Escherichia coli} K-12 MG1655 is 948.9 bp. This sets a `natural' window size to use for our sliding window analysis. In Figure \ref{figure:EcoliK12qrwJSK0n1000i0i40k}, we show the windowed $K = 0$ Jensen-Shannon divergence and square deviation spectra for \emph{Escherichia coli} K-12 MG1655, obtained for a window size of $n = 1000$ bp, overlaid onto the distribution of genes. As we can see from the figure, the two spectra are qualitatively very similar, with peak positions that are strongly correlated with gene and operon boundaries \cite{Salgado2006NucleicAcidsResearch34pD394}.
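For a binary $K = 0$ (Bernoulli) model, where there is a single trivial context with $\hat{P}_{\mathbf{t}} = 1$, Eq.~\eqref{equation:centrallimitsquaredeviation} reduces to a double sum over symbols. The following is a minimal sketch under that simplification, with a function name of our own choosing:

```python
def square_deviation(L, R):
    # K = 0 square deviation r for left/right window symbol counts L and R,
    # with the trivial context weight P_t = 1 of a Bernoulli model
    n = sum(L)  # window size (assumed equal for both windows)
    combined = [l + r for l, r in zip(L, R)]
    p = [f / sum(combined) for f in combined]   # pooled estimate p_s
    pL = [f / n for f in L]
    pR = [f / n for f in R]
    r = 0.0
    for s in range(len(p)):
        for t in range(len(p)):
            coef = 1.0 - (1.0 / p[s] if s == t else 0.0)
            r -= n * coef * ((pL[s] - p[s]) * (pL[t] - p[t])
                             + (pR[s] - p[s]) * (pR[t] - p[t]))
    return r
```

Identical windows give $r = 0$, and $r$ grows as the two windowed distributions separate; dividing $r$ by $n$ then allows spectra from different window sizes to be compared, as described above.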
\begin{figure}[hbtp] \centering \includegraphics[scale=0.5,clip=true]{EcoliK12.q.rwJSK0n1000.0.40k.eps} \includegraphics[scale=0.5,clip=true]{NC_000913.gene.layout.0.40k.eps} \caption{The windowed $K = 0$ Jensen-Shannon divergence (magenta) and square deviation (black) spectra in the interval $(0, 40000)$ of the \emph{Escherichia coli} K-12 MG1655 genome, which has a length $N = 4639675$ bp. Annotated genes on the positive (red) and negative (green) strands are shown below the graph.} \label{figure:EcoliK12qrwJSK0n1000i0i40k} \end{figure} For example, we see that the strongest peak in the $n = 1000$ windowed spectrum is at $i \sim 30000$. The gene \emph{dapB}, believed to code for an enzyme involved in the biosynthesis of lysine (whose codons consist solely of purines), lies upstream of this peak, while the \emph{carAB} operon, believed to code for enzymes involved in pyrimidine ribonucleotide biosynthesis, lies downstream of the peak. Another strong peak marks the end of the \emph{carAB} operon, distinguishing it statistically from the gene \emph{caiF}, and yet another strong peak distinguishes \emph{caiF} from the \emph{caiTABCDE} operon, whose products are involved in the central intermediary metabolic pathways, further downstream. In Figure \ref{figure:EcoliK12qrK0K1K2n1000i0i40k}, we show the square deviation spectra for the same $(0, 40000)$ interval of the \emph{E. coli} K-12 MG1655 genome, but for different Markov-chain orders $K = 0, 1, 2$. As we can see, these square deviation spectra share many qualitative features, but there are also important differences. For example, the genes \emph{talB} and \emph{mogA}, which lie within the interval $(8200, 9900)$, are not strongly distinguished from the genes \emph{yaaJ} upstream and \emph{yaaH} downstream at the 1-mer ($K = 0$) level. They are, however, strongly distinguished from the flanking genes at the 2-mer ($K = 1$) and 3-mer ($K = 2$) levels.
\begin{figure}[hbtp] \centering \includegraphics[scale=0.5,clip=true]{EcoliK12.q.rK0K1K2n1000.0.40k.eps} \includegraphics[scale=0.5,clip=true]{NC_000913.gene.layout.0.40k.eps} \caption{The windowed $K = 0$ (top), $K = 1$ (middle), and $K = 2$ (bottom) square deviation spectra in the interval $(0, 40000)$ of the \emph{E. coli} K-12 MG1655 genome, which has a length of $N = 4639675$ bp. Annotated genes on the positive (red) and negative (green) strands are shown below the graph.} \label{figure:EcoliK12qrK0K1K2n1000i0i40k} \end{figure} \subsubsection{Mean-Field Lineshape and Match Filtering} \label{subsection:meanfieldlineshape} In the second situation shown in Fig.~\ref{figure:windowedcases}, let us label the two mean-field segments $a$ and $b$, with lengths $N_a$ and $N_b$. Suppose it is the left window that straddles both $a$ and $b$, while the right window lies entirely within $b$. The right-window counts are then simply \begin{equation} f_{\mathbf{t}s}^R = \frac{n}{N_b}\, f_{\mathbf{t}s}^b, \end{equation} while the left-window counts contain contributions from both $a$ and $b$, i.e. \begin{equation} f_{\mathbf{t}s}^L = \frac{n - z}{N_a}\, f_{\mathbf{t}s}^a + \frac{z}{N_b}\, f_{\mathbf{t}s}^b, \end{equation} where $z$ is the distance of the domain wall from the center of the pair of windows. The total counts from both windows are then \begin{equation} f_{\mathbf{t}s} = \frac{n - z}{N_a}\, f_{\mathbf{t}s}^a + \frac{z}{N_b}\, f_{\mathbf{t}s}^b + \frac{n}{N_b}\, f_{\mathbf{t}s}^b. \end{equation} Using the transition counts $f_{\mathbf{t}s}^L$, $f_{\mathbf{t}s}^R$, and $f_{\mathbf{t}s}$, we then compute the maximum-likelihood transition probabilities $\hat{p}_{\mathbf{t}s}^L$, $\hat{p}_{\mathbf{t}s}^R$, and $\hat{p}_{\mathbf{t}s}$, before substituting the transition counts and transition probabilities into Eq.~\eqref{equation:JensenShannondivergence} for the Jensen-Shannon divergence. 
Because of the logarithms in the definition for the Jensen-Shannon divergence, we get a complicated function in terms of the observed statistics $f_{\mathbf{t}s}^a$, $f_{\mathbf{t}s}^b$, $N_a$ and $N_b$, and the distance $z$ between the domain wall and the center of the pair of windows. Different observed statistics $f_{\mathbf{t}s}^a$, $f_{\mathbf{t}s}^b$, $N_a$ and $N_b$ give mean-field divergence functions of $z$ that are not related by a simple scaling. However, these mean-field divergence functions $\Delta(z)$ do have qualitative features in common: \begin{enumerate} \item $\Delta(z) = 0$ for $|z| \geq n$, where the pair of windows is entirely within $a$ or entirely within $b$; \item $\Delta(z)$ is maximum at $z = 0$, when the center of the pair of windows coincides with the domain wall; \item $\Delta(z)$ is convex everywhere within $|z| < n$, except at $z = 0$. \end{enumerate} This tells us that the position and strength of the domain wall between two mean-field segments both longer than the window size $n$ can be determined exactly. In Figure \ref{figure:wJSlineshape} we show $\Delta(z)$ for two binary $K = 0$ mean-field segments, where $P_a(0) = 1 - P_a(1) = 0.9$, and $P_b(0) = 1 - P_b(1) = 0.1$. We call the peak function $\Delta(z)$ the \emph{mean-field lineshape} of the domain wall. As we can see from Figure \ref{figure:wJSlineshape}, this mean-field lineshape can be very well approximated by the piecewise quadratic function \begin{equation}\label{equation:meanfieldJS} \tilde{\Delta}(z) = \begin{cases} \left(1 + \frac{z}{n}\right)^2 \bar{\Delta}(0), & -n < z < 0; \\ \left(1 - \frac{z}{n}\right)^2 \bar{\Delta}(0), & 0 \leq z < n; \\ 0, & \text{everywhere else}, \end{cases} \end{equation} where $\bar{\Delta}(0)$ is the mean-field Jensen-Shannon divergence of the domain wall at $z = 0$.
If instead of the windowed Jensen-Shannon divergence $\Delta(z)$, we compute the windowed square deviation $r(z)$ in the vicinity of a domain wall, we will obtain a mean-field lineshape that is strictly piecewise quadratic, i.e. \begin{equation}\label{equation:meanfieldr} \tilde{r}(z) = \begin{cases} \left(1 + \frac{z}{n}\right)^2 \bar{r}(0), & -n < z < 0; \\ \left(1 - \frac{z}{n}\right)^2 \bar{r}(0), & 0 \leq z < n; \\ 0, & \text{everywhere else}, \end{cases} \end{equation} where $\bar{r}(0)$ is the mean-field square deviation of the domain wall at $z = 0$. \begin{figure}[htbp] \centering \includegraphics[scale=0.45,clip=true]{wJSlineshape.eps} \caption{The Jensen-Shannon divergence $\Delta(z)$ (solid curve) of a pair of sliding windows of length $n = 1$ as a function of the distance $z$ between the domain wall separating a mean-field binary segment $a$ with $P_a(0) = 1 - P_a(1) = 0.9$ and a mean-field binary segment $b$ with $P_b(0) = 1 - P_b(1) = 0.1$, and the center of the pair of windows. Also shown as the dashed curve is a piecewise quadratic function which rises from $z = \pm 1$ to the same maximum at $z = 0$, but vanishes everywhere else.} \label{figure:wJSlineshape} \end{figure} Going back to a real sequence composed of two nearly stationary segments of discrete bases, we expect to find statistical fluctuations masking the mean-field lineshape. But now that we know the mean-field lineshape is piecewise quadratic for the square deviation $r(z)$ (or very nearly so, in the case of the windowed Jensen-Shannon divergence $\Delta(z)$), we can make use of this piecewise quadratic mean-field lineshape to match filter the raw square deviation spectrum. We do this by assuming that there is a mean-field square-deviation peak at each sequence position $i$, fit the spectrum within $(i - n, i + n)$ to the mean-field lineshape in Eq. \eqref{equation:meanfieldr}, and determine the smoothed spectrum $\bar{r}(i)$. In Fig.
\ref{figure:EcoliK12qrrmrmRK0n1000i0i40k}, we show the match-filtered square deviation spectrum $\bar{r}(i)$ in the interval $0 \leq i \leq 40000$ of the \emph{E. coli} K-12 MG1655 genome. As we can see, $\bar{r}(i)$ is smoother than $r(i)$, but the peaks in $\bar{r}(i)$ are also so broad that distinct peaks in $r(i)$ are not properly resolved. \begin{figure}[hbtp] \centering \includegraphics[scale=0.5,clip=true]{EcoliK12.q.rrmrmRK0n1000.0.40k.eps} \includegraphics[scale=0.5,clip=true]{NC_000913.gene.layout.0.40k.eps} \caption{The interval $0 \leq i \leq 40000$ of the \emph{E. coli} K-12 MG1655 genome ($N = 4639675$ bp), showing (top to bottom) the windowed $K = 0$ square deviation spectrum $r(i)$, the match-filtered square deviation spectrum $\bar{r}(i)$, the residue spectrum $R(i)$, and the quality-enhanced square deviation spectrum $\bar{r}(i)/R(i)$. Annotated genes on the positive (red) and negative (green) strands are shown below the graph.} \label{figure:EcoliK12qrrmrmRK0n1000i0i40k} \end{figure} Fortunately, more information is available from the match filtering. We can also compute how well the raw spectrum $r(j)$ in the interval $i - n \leq j \leq i + n$ matches the mean-field lineshape $\tilde{r}(j)$ by computing the residue \begin{equation} R(i) = \sum_{j = i - n}^{i + n} \left[r(j) - \tilde{r}(j)\right]^2. \end{equation} In Fig. \ref{figure:EcoliK12qrrmrmRK0n1000i0i40k}, we show the residue spectrum $R(i)$ for the $0 \leq i \leq 40000$ region of the \emph{E. coli} K-12 MG1655 genome. In the residue spectrum, we see a series of dips at the positions of peaks in the square deviation spectrum. Since $R(i)$ is small when the match is good, and large when the match is poor, $1/R(i)$ can be thought of as the quality factor of a square deviation peak. A smoothed and accentuated spectrum is obtained when we divide the smoothed square deviation by the residue at each point.
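The fit of the raw spectrum to the piecewise quadratic lineshape, together with the residue, can be sketched as follows. This is a minimal illustration under our own assumptions: the amplitude is obtained by linear least squares against a unit-peak template, clipped to non-negative values, and the first and last $n$ positions are left at zero.

```python
import numpy as np

def match_filter(r, n):
    """Fit r(j) within (i - n, i + n) around each position i to the unit-peak
    piecewise quadratic lineshape t(j) = (1 - |j|/n)^2 by least squares.
    Returns the smoothed amplitude r_bar(i) and the residue R(i)."""
    r = np.asarray(r, dtype=float)
    N = len(r)
    j = np.arange(-n, n + 1)
    t = (1.0 - np.abs(j) / n) ** 2          # mean-field template
    r_bar = np.zeros(N)
    R = np.zeros(N)
    for i in range(n, N - n):
        window = r[i - n : i + n + 1]
        # least-squares amplitude: <window, t> / <t, t>, clipped at zero
        amp = max(window @ t / (t @ t), 0.0)
        r_bar[i] = amp
        R[i] = np.sum((window - amp * t) ** 2)
    return r_bar, R
```

The quality-enhanced spectrum of the text is then `r_bar / R` wherever `R` is nonzero; a noiseless synthetic peak is recovered with zero residue at its true position.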
The quality-enhanced square deviation spectrum $\bar{r}(i)/R(i)$ is also shown in Fig. \ref{figure:EcoliK12qrrmrmRK0n1000i0i40k}. It is much more convenient to determine the positions of significant domain walls from such a spectrum. \end{appendix}
\section{Introduction} Studying the thermal evolution of isolated neutron stars in X-rays is of great importance for better understanding the evolution of such objects and provides a possibility to investigate their composition and structure (see e.g., \cite{P04,P09,YP04}). The thermal X-ray radiation from the neutron star (NS) at the center of the Cassiopeia A (Cas A) supernova remnant\footnote{The supernova remnant in Cassiopeia A contains a young ($\approx 330$ yr old \cite{F06}) neutron star which was discovered by the Chandra satellite \cite{T99,H00} in 1999.} attracts much attention nowadays. A few years ago Heinke \& Ho \cite{H09,H10} analyzed 10 years of Chandra observational data and reported an anomalous steady decline of the surface temperature, $T_{s}$. The authors interpreted these data as a direct observation of Cas A NS cooling, a phenomenon which had never been observed before for any isolated NS. We shall discuss the current state of these observations later; for the moment we note that although the real cooling rate is under debate, one cannot exclude that the Cas A NS cooling is extraordinarily fast. Even a $1\%$ decline of the cooling curve in 10 years would signal very fast cooling. Such a rapid drop in surface temperature (if it occurs) is in conflict with standard cooling scenarios based on the efficient modified Urca process. If the NS in Cas A underwent standard cooling (through neutrino emission from the core due to the modified Urca process) its surface temperature decline in 10 years would be $0.2\%-0.3\%$ \cite{Y01,p06}. The rapid decline but relatively high surface temperature (about $2.12\times 10^{6}$ K) require a dramatic change in the neutrino emission properties of the NS. Some exotic scenarios of cooling have been suggested that employ nonstandard assumptions on NS physics and evolution, involving softened pion modes \cite{B12}, quarks \cite{s13,n13}, axions \cite{L14} or cooling after an r-mode heating process \cite{y11}.
The existence of softened pions or quarks in the NS core depends mostly on the matter density but not on the temperature. If this rapid cooling had been constant from the birth of the NS, the current temperature would have to be much smaller than is currently measured. It is reasonable to suggest \cite{P,St} that the cooling was initially slow but greatly accelerated later. In this case the rapid temperature decline could be naturally explained within the frame of the minimal cooling paradigm \cite{P04,P09}, which assumes that rapid cooling of the neutron star is triggered by neutron superfluidity in the core. This scenario implies that neutrons have recently become superfluid (in the $^{3}$P$_{2}$ triplet state) in the NS core, triggering a huge neutrino flux from pair breaking and formation (PBF) processes that accelerates the cooling \cite{P,St}, while protons were already in a superconducting $^{1}$S$_{0}$ singlet state with a larger critical temperature. Although the above mechanism is consistent with the commonly accepted cooling paradigm, theoretical simulations have shown \cite{St,E13} that the PBF processes in the neutron triplet condensate are not effective enough to explain the rapid temperature decline. This has stimulated the present work. It is commonly believed \cite{T70,H70,Baldo,Elg} that the pair condensation in superdense neutron matter occurs into the $^{3}$P$_{2}$ state (with a small admixture of $^{3}$F$_{2}$) with a preferred magnetic quantum number $m_{j}=0$. This model has been conventionally used for estimates of the PBF neutrino energy losses in minimal cooling scenarios.
Let us recall that, in the case of $^{3}$P$_{2}\left( m_{j}=0\right) $ pairing, the PBF $\bar{\nu}\nu $ emissivity is evaluated as \cite{L10} (we use natural units, $\hbar =c=k_{B}=1$): \begin{equation} Q(m_{j}=0)\simeq \frac{2}{5\pi ^{5}}G_{F}^{2}C_{\mathsf{A}}^{2}p_{F}M^{\ast }T^{7}F\left( T/T_{c}\right) ~, \label{Qnu} \end{equation} where $G_{F}=1.166\times 10^{-5}$ GeV$^{-2}$ is the Fermi coupling constant, $C_{\mathsf{A}}$ is the axial-vector coupling constant of neutrons, $p_{F}$ is the Fermi momentum of neutrons, and $M^{\ast }\equiv p_{F}/V_{F}$ is the neutron effective mass; the function $F$ is given by \begin{equation} F\left( T/T_{c}\right) =\int \frac{d\mathbf{n}}{4\pi }\frac{\Delta _{\mathbf{n}}^{2}}{T^{2}}\int_{0}^{\infty }dx\frac{z^{4}}{\left( \exp z+1\right) ^{2}}, \label{F0} \end{equation} where $z=\sqrt{x^{2}+\Delta _{\mathbf{n}}^{2}/T^{2}}$, and the superfluid energy gap, \begin{equation} \Delta _{\mathbf{n}}\left( \theta ,T\right) =\sqrt{\frac{1}{2}\left( 1+3\cos ^{2}\theta \right) }\ \Delta \left( T\right) , \label{Dn} \end{equation} is anisotropic. It depends on the polar angle $\theta $ of the quasiparticle momentum and on temperature\footnote{Notice that our definition of the gap amplitude differs from the gap definition used in Ref. \cite{YKL} by a factor of $\sqrt{2}$.}. In the present letter I argue that the enlarged neutrino energy losses can be explained in terms of the conventional minimal cooling paradigm, assuming that the enhanced neutrino radiation is a natural consequence of a phase transition of the $^{3}$P$_{2}$ condensate into a multicomponent state.
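For orientation, the control function of Eq. (\ref{F0}) can be evaluated by direct quadrature once the ratio $y = \Delta(T)/T$ is specified. The sketch below uses our own simple midpoint sums (not the fit of Ref. \cite{YKL}); for an azimuthally symmetric gap the angular average $\int d\mathbf{n}/4\pi$ reduces to $\tfrac{1}{2}\int_0^\pi \sin\theta\, d\theta$:

```python
import numpy as np

def control_function_mj0(y, n_theta=400, n_x=800, x_max=40.0):
    """Evaluate Eq. (F0) for 3P2(mj=0) pairing by midpoint quadrature.
    y = Delta(T)/T; Delta_n^2/T^2 = (1 + 3 cos^2(theta))/2 * y^2."""
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta   # midpoints in theta
    x = (np.arange(n_x) + 0.5) * x_max / n_x               # midpoints in x
    dth, dx = np.pi / n_theta, x_max / n_x
    dn2 = 0.5 * (1.0 + 3.0 * np.cos(theta) ** 2) * y ** 2  # (Delta_n / T)^2
    z = np.sqrt(x[None, :] ** 2 + dn2[:, None])            # z = sqrt(x^2 + Dn^2/T^2)
    inner = np.sum(z ** 4 / (np.exp(z) + 1.0) ** 2, axis=1) * dx
    return 0.5 * np.sum(dn2 * inner * np.sin(theta)) * dth
```

As $y \to 0$ (i.e. $T \to T_{c}$) the function vanishes, so the PBF emissivity of Eq. (\ref{Qnu}) switches off above the critical temperature, while for large gaps it is exponentially suppressed.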
Modern calculations \cite{Clark,Khodel} have shown that, besides the one-component state with $m_{j}=0$, there are also multicomponent $^{3}$P$_{2}$ states involving several magnetic quantum numbers $m_{j}=0,\pm 1,\pm 2$ that compete in energy and represent various phases of the condensate in equilibrium\footnote{Not to be confused with ``angulons'', which represent Goldstone bosons associated with broken rotational symmetry in a $^{3}$P$_{2}\left( m_{j}=0\right) $ condensed neutron superfluid \cite{Bed13}. These collective excitations represent small angular oscillations of the condensate. The complete set of the oscillation modes of the $^{3}$P$_{2}\left( m_{j}=0\right) $ condensate in the superfluid neutron liquid is analyzed in \cite{L10b}. Neutrino emission due to decay of these collective oscillations produces a negligibly small contribution to the NS cooling \cite{L13}.}. The general form of a unitary $^{3}$P$_{2}$ state includes $m_{j}=0,\pm 1,\pm 2$, and the superfluid energy gap can be defined by the relation \cite{L10a} \begin{equation} D^{2}\left( \mathbf{n},\tau \right) =\mathbf{\bar{b}}^{2}\left( \mathbf{n}\right) \ \Delta ^{2}\left( \tau \right) , \label{D} \end{equation} where $\tau \equiv T/T_{c}$ is the relative temperature; the (temperature-dependent) gap amplitude is of the form \begin{equation} \Delta ^{2}=\Delta _{0}^{2}+2\Delta _{1}^{2}+2\Delta _{2}^{2}~, \label{del} \end{equation} and $\mathbf{\bar{b}}\left( \mathbf{n}\right) $ is a real vector normalized by the condition \begin{equation} \left\langle \bar{b}^{2}\left( \mathbf{n}\right) \right\rangle \equiv \left( 4\pi \right) ^{-1}\int \bar{b}^{2}\left( \mathbf{n}\right) d\mathbf{n}=1~.
\label{Norm} \end{equation} Its angular dependence is represented by the unit vector $\mathbf{n=p}/p$ which defines the polar angles $\left( \theta ,\varphi \right) $ on the Fermi surface: \begin{equation} \mathbf{n}=\left( \sin \theta \cos \varphi ,\sin \theta \sin \varphi ,\cos \theta \right) \equiv \left( n_{1},n_{2},n_{3}\right) . \label{n} \end{equation} The properly normalized vector $\mathbf{\bar{b}}$ can be written by utilizing the notation adopted in Refs. \cite{Clark,Khodel}, where $\lambda _{1}\equiv \sqrt{6}\Delta _{1}/\Delta _{0}$ and $\lambda _{2}\equiv \sqrt{6}\Delta _{2}/\Delta _{0}$: \begin{equation} \mathbf{\bar{b}}=\sqrt{\frac{1}{2}}\frac{\Delta _{0}}{\Delta }\left( \begin{array}{ccc} -n_{1}+n_{1}\lambda _{2}-n_{3}\lambda _{1}~, & -n_{2}-n_{2}\lambda _{2}~, & 2n_{3}-n_{1}\lambda _{1} \end{array}\right) ~. \label{burb} \end{equation} According to modern theories, there are several multicomponent states that compete in energy depending on the temperature. Accordingly, phase transitions can occur between these states when the temperature goes down. The possible phase states of the $^{3}$PF$_{2}$ condensate are cataloged in Ref. \cite{Clark}. In Table 1 we have collected the nodeless states, which are especially interesting. Immediately below the critical temperature, the superfluid condensate can appear in either the one-component phase $O_{0}$, corresponding to $m_{j}=0$, or in one of the two two-component phases, $O_{\pm 3}$. These lowest-energy states are nearly degenerate. The higher nearly degenerate group is composed of the phases $O_{1}$ and $O_{2}$. The energy split between the two groups shrinks as the temperature decreases \cite{Clark} and can result in a phase transition at some temperature\footnote{The authors predict the transition temperature $T\simeq 0.7T_{c}$ at $p_{F}\simeq 2.1$ fm$^{-1}$.} $T<T_{c}$, depending on the matter density.
The small difference in the gap amplitudes, $\sim 2\%$, inherent to the various phases of the condensate, is crucial for the phase transitions, but this small inequality can be disregarded in the evaluation of the neutrino energy losses. \begin{table}[tbp] \caption{Various phases of the $^{3}$P$_{2}$ condensate and their relative neutrino emissivity $Z$.} \begin{tabular}{lcccccc} & & & & & & \\ {phase} & {$\Delta_0 /\Delta$} & {$\lambda_1$} & {$\lambda_2$} & {$Z$} & & \\ \hline\hline $O_0$ & 1 & 0 & 0 & 1 & & \\ $O_{\pm 3}$ & $\frac{1}{2}$ & 0 & $\pm 3$ & $3.25$ & & \\ $O_{1}$ & $\frac{5}{\sqrt{14}\sqrt{17-3\sqrt{21}}}$ & $\frac{3}{5}\sqrt{2\left( 17-3\sqrt{21}\right) }$ & $\frac{3}{5}\left( \sqrt{21}-4\right)$ & $2.3528$ & & \\ $O_{2}$ & $\frac{5}{\sqrt{14}\sqrt{17+3\sqrt{21}}}$ & $\frac{3}{5}\sqrt{2\left( 17+3\sqrt{21}\right) }$ & $-\frac{3}{5}\left( \sqrt{21}+4\right)$ & $3.8258$ & & \\ & & & & & & \end{tabular} \end{table} \section{Neutrino emission from a multicomponent phase} The neutrino emissivities of the multicomponent phase states were analyzed in Ref. \cite{L10a} in the approximation of an averaged gap. The calculation technique developed in that work allows us to derive a more accurate expression taking into account the gap anisotropy. To this end we use Eq.~(68) of Ref. \cite{L10a} and the polarization tensor, as given just below Eq.~(65). Starting from these expressions we consider the case $\omega ^{2}>2\mathbf{\bar{b}}^{2}\Delta ^{2}$, which is fulfilled for the PBF processes. Then, after performing the integrations over $d^{3}q$, one can obtain the neutrino energy losses per unit volume and time in the $\Lambda $ state (we abbreviate the set of numbers $\Delta _{0}/\Delta ,\lambda _{1},\lambda _{2}$ as $\Lambda $).
\begin{equation} Q_{\Lambda }=\frac{2}{5\pi ^{5}}C_{A}^{2}G_{F}^{2}p_{F}M^{\ast }T^{7}F_{\Lambda }\left( \tau \right) ~, \label{Q} \end{equation} where \begin{equation} F_{\Lambda }\left( \tau \right) =\left( 4-3\frac{\Delta _{0}^{2}}{\Delta ^{2}}\right) y^{2}\int \frac{d\mathbf{n}}{4\pi }\bar{b}^{2}\left( \mathbf{n}\right) \int_{0}^{\infty }dx\frac{z^{4}}{\left( 1+\exp z\right) ^{2}}, \label{F} \end{equation} with $z=\sqrt{x^{2}+\bar{b}^{2}\left( \mathbf{n}\right) y^{2}}$, $y\left( \tau \right) =\Delta \left( T\right) /T$, and the function $\bar{b}^{2}\left( \mathbf{n}\right) $ given by \begin{eqnarray} \bar{b}^{2}\left( \mathbf{n}\right) &=&\frac{1}{4}\frac{\Delta _{0}^{2}}{\Delta ^{2}}\left[ 2+\lambda _{1}^{2}+2\lambda _{2}^{2}+\left( 6+\lambda _{1}^{2}-2\lambda _{2}^{2}\right) \cos ^{2}\theta \right. \notag \\ &&\left. -2\lambda _{1}\left( 1+\lambda _{2}\right) \sin 2\theta \,\cos \varphi +\left( \lambda _{1}^{2}-4\lambda _{2}\right) \sin ^{2}\theta \,\cos 2\varphi \right] . \label{b2} \end{eqnarray} At $\lambda _{1}=\lambda _{2}=0$ and $\Delta =\Delta _{0}$, the expression (\ref{Q}) recovers Eq. (\ref{Qnu}). For a numerical evaluation of the neutrino losses, as given in Eq. (\ref{Q}), it is necessary to know the function $y\left( \tau \right) =\Delta \left( T\right) /T$, which in general is to be found with the aid of gap equations. However, as mentioned above, the difference in the gap amplitudes for the various phases can be neglected in the evaluation of the neutrino energy losses. This substantially simplifies the problem, because for the case $m_{j}=0$ the function is well investigated\footnote{We use the simple fit $\sqrt{2}\mathsf{v}_{B}\left( \tau \right) $ suggested in Ref. \cite{YKL}.}. \section{Modeling of the cooling process} To get an idea of how the phase state of the superfluid condensate can influence the NS surface temperature, let us consider a simple model of cooling of a superfluid neutron core enclosed in a thin envelope.
We assume that the bulk matter consists mostly of $^{3}$P$_{2}$ superfluid neutrons. The neutrino emission due to $^{1}$S$_{0}$ proton pairing is strongly suppressed in the non-relativistic system \cite{FRS76,LP06}, but the energy gap arising in the quasiparticle spectrum below the condensation temperature suppresses most of the mechanisms of neutrino emission which are efficient in normal (nonsuperfluid) nucleon matter ($\nu \bar{\nu}$ bremsstrahlung, modified Urca processes, etc.) \cite{YLS}. As was found in Refs. \cite{St,P}, this scenario puts stringent constraints on the temperature for the onset of neutron superfluidity in the Cas A NS. Namely, the dependence of the transition temperature on density should have a wide peak with maximum $T_{c}(\rho )\approx (5-8)\times 10^{8}$~K. In the temperature range which we are interested in, the thermal luminosity of the surface is negligible in comparison with the neutrino luminosity of PBF processes in the NS core. In this case the equation of global thermal balance \cite{gs80} reduces to \begin{equation} C(\widetilde{T})\,{\frac{d\widetilde{T}}{dt}}=-L(\widetilde{T}). \label{aa} \end{equation} Here $L(\widetilde{T})$ is the total PBF luminosity of the star (redshifted to a distant observer), while $C(\widetilde{T})$ is the stellar heat capacity. These quantities are given by (see details in Ref. \cite{Y}): \begin{eqnarray} L(\widetilde{T}) &=&\int dV\,Q_{\Lambda }(T,\rho )\exp (2\Phi (r)), \label{eq:Lnu} \\ C(\widetilde{T}) &=&\int dV\,C_{V}(T,\rho ), \label{C} \end{eqnarray} where $C_{V}(T,\rho )$ is the specific heat capacity, \begin{equation*} dV=4\pi r^{2}\left( 1-\frac{2Gm(r)}{r}\right) ^{-1/2}dr, \end{equation*} where $G$ stands for the gravitational constant, $m\left( r\right) $ is the gravitational mass enclosed within radius $r$, and $\Phi (r)$ is the metric function that determines the gravitational redshift. A thermally relaxed star has an isothermal interior which extends from the center to the heat-blanketing envelope.
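The structure of Eq. (\ref{aa}) can be illustrated with a dimensionless toy integration: taking $C \propto T$ (the linear temperature dependence of the neutron heat capacity, at fixed reduction factor) and $L \propto T^{7}$ (the PBF scaling of Eq. (\ref{Qnu})), the balance reduces to $dT/dt = -kT^{6}$, which has the closed-form solution $T(t) = (T_0^{-5} + 5kt)^{-1/5}$. A forward-Euler sketch, with all constants set to arbitrary illustrative values:

```python
def cool(T0=1.0, k=1.0, t_end=1.0, dt=1e-4):
    """Toy global thermal balance C dT/dt = -L with C ~ T and L ~ T^7,
    i.e. dT/dt = -k T^6, integrated by forward Euler (dimensionless units)."""
    T, t = T0, 0.0
    while t < t_end:
        T -= dt * k * T ** 6
        t += dt
    return T

def cool_exact(T0=1.0, k=1.0, t=1.0):
    """Closed-form solution of dT/dt = -k T^6."""
    return (T0 ** -5 + 5.0 * k * t) ** (-0.2)
```

The steep $T^{6}$ dependence is what makes the cooling rate so sensitive to any enhancement of the PBF emissivity, anticipating the phase dependence discussed below.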
Following \cite{gs80} we have assumed that the isothermal region is restricted by the condition $\rho >\rho \left( r_{\mathsf{b}}\right) =10^{10}$ \textrm{g cm}$^{-3}$. Taking into account the effects of General Relativity (e.g., \cite{thorne77}), isothermality at $r<r_{\mathsf{b}}$ means a spatially constant redshifted internal temperature $\widetilde{T}(t)$, while the local internal temperature \begin{equation} T(r,t)=\widetilde{T}(t)\exp \left( -\Phi (r)\right) , \label{bb} \end{equation} depends on the radial coordinate $r$. Generally, the redshift factor has to be calculated using the Tolman-Oppenheimer-Volkoff equation. In vacuum, outside the star and at the stellar surface, this factor is of the form \begin{equation} \exp \Phi (r)=\left( 1-\frac{2Gm(r)}{r}\right) ^{1/2}. \label{ex} \end{equation} For simplicity we shall use this expression in the crust of the star, as a model. The main temperature gradient is formed in the thermally insulating outer envelope at $r>r_{\mathsf{b}}$. Since the envelope is thin one can set $r_{\mathsf{b}}\simeq R$ and $m\left( r_{\mathsf{b}}\right) \simeq M$, where $R$ and $M$ are the radius and mass of the NS, respectively. Then the temperature $T_{\mathsf{b}}=T\left( r_{\mathsf{b}}\right) $ at the bottom of the thermally insulating envelope of the star can be written as \begin{equation} T_{\mathsf{b}}=\left( 1-\frac{R_{g}}{R}\right) ^{-1/2}\widetilde{T}, \label{TbT} \end{equation} where \begin{equation} R_{g}\equiv 2GM\simeq 2.953\frac{M}{M_{\odot }}~\mathrm{km} \label{x} \end{equation} is the Schwarzschild radius. One can convert the internal $T_{\mathsf{b}}$ to the observed effective surface temperature $T_{\mathsf{s}}$ using the simple analytical relationship found by Gudmundsson, Pethick \& Epstein \cite{GPE}: \begin{equation} T_{\mathsf{s}}/10^{6}\mathrm{K}\simeq 0.87g_{s14}^{1/4}(T_{\mathsf{b}}/10^{8}\mathrm{K})^{0.55}.
\label{TeTb} \end{equation} Here $g_{s14}=g_{s}/10^{14}\mathrm{cm~s}^{-2}$, where \begin{equation} g_{s}=\frac{GM}{R^{2}\sqrt{1-R_{g}/R}}\simeq \frac{1.328\times 10^{14}}{\sqrt{1-R_{g}/R}}\frac{M/M_{\odot }}{R_{6}^{2}}~\mathrm{cm~s}^{-2}, \label{gs} \end{equation} with $R_{6}\equiv R/\left( 10^{6}\mathrm{cm}\right) $, is the acceleration of gravity as measured at the surface. Given the strong dependence of the PBF processes on the temperature $T$ and density $\rho $, the overall effect of the emission of neutrino pairs can only be assessed by complete calculations of the neutron star cooling, which are beyond the scope of this paper. We do not aim to carry out exact calculations. Our goal is to demonstrate that the NS cooling rate substantially depends on the phase state of the $^{3}$P$_{2}$ condensate of superfluid neutrons. A rough estimate can be made in a simplified model, where both the superfluid transition temperature, $T_{c}$, and the real temperature, $T=T_{\mathsf{core}}$, are constant over the core. In the temperature range of our interest, the specific heat is governed by the neutron component (the contribution of electrons and strongly superfluid protons is negligibly small) and can be described as \begin{equation} C\simeq \frac{1}{3}T_{\mathsf{core}}R_{B}(T_{\mathsf{core}}/T_{c})\int dVp_{F}M^{\ast }, \label{cc} \end{equation} where $R_{B}(\tau )$ is the superfluid reduction factor, as given in Eq. (18) of Ref. \cite{YLS}. Making use of Eq. (\ref{Q}) we obtain the PBF luminosity in the form \begin{equation} L=\frac{2}{5\pi ^{5}}G_{F}^{2}C_{\mathsf{A}}^{2}T_{\mathsf{core}}^{7}F_{\Lambda }(T_{\mathsf{core}}/T_{c})\int dVp_{F}M^{\ast }e^{2\Phi (r)}, \label{dd} \end{equation} where $F_{\Lambda }(\tau )$ is given by Eq. (\ref{F}). Insertion of Eqs. (\ref{bb}), (\ref{cc}) and (\ref{dd}) into Eq.
(\ref{aa}) allows us to obtain the following equation for the non-redshifted temperature $T(r_{\mathsf{core}},t)\equiv T_{\mathsf{core}}(t)$ at the edge of the core, at $r=r_{\mathsf{core}}$: \begin{equation} \frac{dT_{\mathsf{core}}}{dt}=-\frac{3\alpha }{R_{B}\left( T_{\mathsf{core}}/T_{c}\right) }\frac{2}{5\pi ^{5}}G_{F}^{2}C_{\mathsf{A}}^{2}T_{\mathsf{core}}^{6}F_{\Lambda }\left( T_{\mathsf{core}}/T_{c}\right) . \label{Tbeq} \end{equation} Here the constant $\alpha \equiv \alpha (r_{\mathsf{core}})$ is defined as \begin{equation} \alpha \equiv \frac{\int dVp_{F}M^{\ast }e^{2\Phi \left( r\right) }}{\exp \Phi \left( r_{\mathsf{core}}\right) \int dVp_{F}M^{\ast }}, \label{alpha} \end{equation} where the integration is over the core volume, $r\leq r_{\mathsf{core}}$. In Eq. (\ref{Tbeq}) $T_{\mathsf{core}}$ is the real temperature in the core, particularly at the crust-core interface, which corresponds to a density of about $1.5\times 10^{14}$ \textrm{g/cm}$^{3}$ at $r=r_{\mathsf{core}}$. One can convert it to the redshifted internal temperature $\widetilde{T}(t)$ as \begin{equation} \widetilde{T}=\left( 1-\frac{2Gm\left( r_{\mathsf{core}}\right) }{r_{\mathsf{core}}}\right) ^{1/2}T_{\mathsf{core}}\simeq \left( 1-\frac{R_{g}}{r_{\mathsf{core}}}\right) ^{1/2}T_{\mathsf{core}}. \label{Tt} \end{equation} When obtaining the second equality we have neglected the mass of the crust, which is small ($\sim 1\%$) in comparison with the mass of the core \cite{PR95}. This allows us to set $m\left( r_{\mathsf{core}}\right) \simeq M$. From Eqs. (\ref{TbT}) and (\ref{Tt}) one can find the temperature at the bottom of the thermally insulating envelope: \begin{equation} T_{\mathsf{b}}=\left( 1-\frac{R_{g}}{R}\right) ^{-1/2}\left( 1-\frac{R_{g}}{r_{\mathsf{core}}}\right) ^{1/2}T_{\mathsf{core}}. \label{TbTc} \end{equation} Insertion of this expression into Eq.
(\ref{TeTb}) allows one to find the observed (non-redshifted) surface temperature $T_{\mathsf{s}}$: \begin{equation} T_{\mathsf{s}}/10^{6}\mathrm{K}\simeq 0.87g_{s14}^{1/4}\left( \frac{1-R_{g}/r_{\mathsf{core}}}{1-R_{g}/R}\right) ^{\frac{0.55}{2}}(T_{\mathsf{core}}/10^{8}\mathrm{K})^{0.55}. \label{Ts} \end{equation} Assuming that the crust thickness is about $0.1R$ \cite{PR95}, one can set $r_{\mathsf{core}}\simeq 0.9R$. We adopt $R=10.3~\mathrm{km}$ and $M=1.65M_{\odot }$. In this case \begin{equation} 0.87g_{s14}^{1/4}\left( \frac{1-R_{g}/r_{\mathsf{core}}}{1-R_{g}/R}\right) ^{\frac{0.55}{2}}\simeq 1.098, \label{factor} \end{equation} which yields \begin{equation} T_{\mathsf{s}}/10^{6}\mathrm{K}\simeq 1.098(T_{\mathsf{core}}/10^{8}\mathrm{K})^{0.55}. \label{Tb} \end{equation} Thus our simulation of the NS cooling reduces to the numerical solution of Eqs. (\ref{Tbeq}) and (\ref{Tb}). \section{Simulation results} In Fig. 1 we demonstrate the cooling curves of a superfluid neutron star with a constant $T_{c}$ over the core. The curves are obtained for the superfluid phases listed in Table 1 and are labeled respectively. \begin{figure}[h] \includegraphics{fig1.eps} \caption{(Color online) \textit{Left:} Cooling curves for the Cas A NS, which has a superfluid neutron core and a low-mass heat-blanketing envelope. $T_{c}=7\times 10^{8}$~K is taken constant over the core. Four curves correspond to different phases of triplet pairing. $O_{0}$ is the cooling curve of the one-component phase $m_{j}=0$. The remaining curves correspond to the $O_{1}$, $O_{2}$, and $O_{\pm 3}$ phases. Calculated temperature declines over 10 years are given near the curves (in percent). \textit{Right:} Same, but with $T_{c}=5\times 10^{8}$~K.} \label{fig:fig1} \end{figure} The case $O_{0}$ corresponds to the one-component state of the neutron superfluid with $m_{j}=0$. The remaining three curves correspond to the phases $O_{1}$, $O_{2}$ and $O_{\pm 3}$.
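The numerical prefactor of Eq. (\ref{factor}) can be checked directly from Eqs. (\ref{x}), (\ref{gs}) and (\ref{Ts}) with the adopted parameters $R = 10.3$ km, $M = 1.65\,M_{\odot}$ and $r_{\mathsf{core}} = 0.9R$ (a quick verification sketch; the variable names are ours):

```python
import math

# Stellar parameters adopted in the text
M_over_Msun = 1.65
R_km = 10.3
Rg_km = 2.953 * M_over_Msun            # Schwarzschild radius, Eq. (x)
r_core_km = 0.9 * R_km                 # crust thickness ~ 0.1 R
R6 = R_km / 10.0                       # radius in units of 10^6 cm

# Surface gravity of Eq. (gs), in units of 10^14 cm s^-2
gs14 = 1.328 * (M_over_Msun / R6 ** 2) / math.sqrt(1.0 - Rg_km / R_km)

# Prefactor of Eq. (factor): T_s / 10^6 K = factor * (T_core / 10^8 K)^0.55
factor = 0.87 * gs14 ** 0.25 * (
    (1.0 - Rg_km / r_core_km) / (1.0 - Rg_km / R_km)
) ** (0.55 / 2.0)

def surface_temperature_K(T_core_1e8K):
    """Observed effective surface temperature, Eq. (Tb), in kelvin."""
    return 1.0e6 * factor * T_core_1e8K ** 0.55
```

With these parameters the prefactor indeed comes out close to $1.098$, reproducing Eq. (\ref{Tb}).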
Two panels of Figure 1 demonstrate the corresponding simulated cooling curves for the cases $T_{c}=7\times 10^{8}$ K and $T_{c}=5\times 10^{8}$ K. We show the cooling curves over a period of about 25 years, including the 10 years of observations. Note that we show the non-redshifted effective surface temperature. Calculated temperature declines over 10 years are given near the curves (in percent). As seen from these curves, satisfactory agreement with the observed temperature declines can easily be obtained by a proper choice of the phase state of the $^{3}$P$_{2}$ condensate and by adjusting the parameters of superfluidity. Certainly the approximation of a constant superfluid transition temperature over the neutron star core is too crude, and simulations with a realistic $T_{c}\left( \rho \right) $ profile would be more persuasive. Such a numerical simulation is beyond the scope of this work. It is necessary to note, however, that similar simulations were done in \cite{E13}, where five phenomenological $T_{c}\left( \rho \right) $ profiles over the NS core were considered, but a free parameter was used to artificially increase the PBF neutrino emissivity from the $^{3}$P$_{2}\left( m_{j}=0\right) $ pairing. These more realistic calculations are in agreement with our qualitative estimates. Our primary goal is to clarify the possible origin of the increased neutrino losses. One can make a simple estimate of the relative efficiency of the PBF processes for the various phases of the superfluid neutron matter. To this end we evaluate Eq. (\ref{Q}) in the approximation of an averaged gap, which reduces to the replacement $\bar{b}^{2}\rightarrow \left\langle \bar{b}^{2}\right\rangle =1$. We then recover the result obtained in Eq. (74) of Ref. \cite{L10a}: \begin{equation} \bar{Q}_{\Lambda }\simeq Z\left( \Lambda \right) \bar{Q}(m_{j}=0)~, \label{Qm} \end{equation} where $\bar{Q}(m_{j}=0)$ is given by Eq.
(\ref{Qnu}) but with the replacement $\Delta _{\mathbf{n}}^{2}\rightarrow \Delta ^{2}$, and \begin{equation} Z\left( \Lambda \right) =\left( 4-3\frac{\Delta _{0}^{2}}{\Delta ^{2}}\right) ~. \label{r} \end{equation} These factors, representing the relative efficiency of the PBF processes for the various phases of the $^{3}$P$_{2}$ superfluid neutron matter, are shown in Table 1. \section{Discussion and conclusion} Our simple analytic expression (\ref{Q}) for the PBF neutrino emissivity from the multicomponent phases of the $^{3}$P$_{2}$ superfluid neutron liquid shows that the PBF neutrino losses from a multicomponent condensate can be a few times larger than the corresponding neutrino losses from the one-component condensate with $m_{j}=0$. We have employed Eq. (\ref{Q}) in a simple cooling model of a superfluid neutron core enclosed in a thin envelope, assuming that the superfluid transition temperature $T_{c}$ is constant over the core. In this simple model we have demonstrated that the NS surface temperature is sensitive to the phase state of the superfluid condensate of neutrons, and this allows one to qualitatively explain the anomalously rapid cooling of the Cas A NS (if it occurs). In other words, we have demonstrated the possibility, in principle, of simulating the rapid cooling within the frame of the minimal cooling paradigm without any artificial change of the PBF neutrino emissivity from the $^{3}$P$_{2}(m_{j}=0)$ pairing, as was suggested in Refs. \cite{St,E13}. In a realistic case the superfluid transition temperature $T_{c}$ as well as the phase state of the condensate depend on the matter density, and therefore the phase state of the superfluid liquid can vary with the distance from the core center. However, the qualitative effects will not be modified by the inclusion of more realistic physics. All the effects discussed above make it possible to explain the anomalously rapid cooling of NSs in considerable detail.
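The $Z$ factors in Table 1 follow directly from Eq. (\ref{r}) and the $\Delta_0/\Delta$ column; a quick numerical check (our notation):

```python
import math

def Z(delta0_over_delta):
    """Relative PBF efficiency, Eq. (r): Z = 4 - 3 (Delta_0 / Delta)^2."""
    return 4.0 - 3.0 * delta0_over_delta ** 2

s21 = math.sqrt(21.0)
# Delta_0 / Delta for each nodeless phase, from Table 1
phases = {
    "O_0":   1.0,
    "O_pm3": 0.5,
    "O_1":   5.0 / (math.sqrt(14.0) * math.sqrt(17.0 - 3.0 * s21)),
    "O_2":   5.0 / (math.sqrt(14.0) * math.sqrt(17.0 + 3.0 * s21)),
}
Z_values = {name: Z(d) for name, d in phases.items()}
```

This reproduces the tabulated values $Z = 1$, $3.25$, $2.3528$ and $3.8258$, so the multicomponent phases radiate up to about four times more strongly than $O_{0}$.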
The relevance of the multicomponent condensation of neutrons to simulations of the Cas A NS cooling depends on its actual cooling rate, which is controversial at the moment. Heinke \& Ho \cite{H09,H10} analyzed the archival data from the Chandra X-ray Observatory ACIS-S detector in Graded mode between 2000 and 2009 and reported a steady decline of the surface temperature, $T_{s}$, by about $4\%$. New observational work on Cas A has shown, however, that the above-mentioned rapid cooling of the Cas A NS is not so evident, owing to systematic uncertainties inherent in the observations and associated with calibration problems of the Chandra detectors \cite{E13,P13}. Elshamouty et al. \cite{E13} compared the results from all the Chandra detectors and found a weighted mean temperature decline rate of $2.9\pm 0.5_{\mathsf{stat}}\pm 1_{\mathsf{sys}}\%$ over 10 years of observations using the data of all detectors, and a weaker decline of $1.4\pm 0.6_{\mathsf{stat}}\pm 1_{\mathsf{sys}}\%$ excluding the data from the ACIS-S detector in Graded mode, which suffers from grade migration. In contrast, Posselt et al. \cite{P13} do not confirm the existence of a statistically significant temperature decline and attribute the observed effect to the degradation of the Chandra ACIS-S detector in the soft channels. The authors state that the previously reported rapid cooling of the Cas A NS is likely a systematic artifact, and they cannot exclude the standard slow cooling for this NS. Their results (2006-2012) are consistent with no temperature decline at all, or with a smaller temperature decline than that reported before, although the involved uncertainties are too large to firmly exclude the previously reported fast cooling. Further observations are necessary to assess the rate of temperature drop with higher accuracy.
Let us notice, however, that the discussed problem of the multicomponent condensation of neutrons can be of interest not only for the Cas A NS cooling but can also be relevant for other superfluid NSs.
\section{Introduction} Fast radio bursts (FRBs) are a new class of ms-long radio flashes of unknown extragalactic origin, for which the host galaxy identification and redshift determination have become feasible in several cases in recent years \citep{Tendulkar17,Bannister19,Prochaska19,Ravi19b,Marcote20,Macquart20}. The combination of short duration, specific luminosity ($\lesssim 10^{34}$~erg\,s$^{-1}$\,Hz$^{-1}$), and brightness temperature ($T_b\gtrsim 10^{35}$~K) suggests that the progenitor is a compact source emitting through a coherent process (see \citealt{CordesChatterjee19,Petroff19_rev} for recent reviews). Especially after the discovery of repeating FRB sources \citep{Spitler16,CHIME19b,CHIME19c,Kumar19,Fonseca20}, magnetars have attracted further attention as one of the most promising candidates \citep{Popov10,Lyubarsky14,Beloborodov17,Metzger19}. From the theoretical side, connections with sources of gamma--ray bursts (GRBs) are not ruled out (see \citealt{Platts19} for a review of theoretical models), as young ms-magnetars could be the endpoint of GRB progenitors \citep{Usov92,Thompson94,Bucciantini07,Metzger11}, and cataclysmic models cannot be ruled out as long as one-off FRBs are observed. Nonetheless, observations carried out so far seem to exclude a systematic link between FRBs and standard cosmological GRBs \citep{Tendulkar16,Guidorzi19,Guidorzi20,Martone19a,Cunningham19,Anumarlapudi20}. The possibility, suggested by \citet{RaviLasky14}, that an FRB could result from the final collapse of a newborn supramassive neutron star some $10$ to $10^4$ s after a binary neutron star merger, which would be signalled by a short GRB, found no confirmation through the search for FRB counterparts in the case of four promptly localised short GRBs \citep{Bouwhuis20}.
The extreme magnetic field ($B\sim 10^{14}$--$10^{15}$~G) of magnetars is thought to power their high-energy emission, which is characterised by periods of quiescence interspersed with active intervals of sporadic short X--ray bursts (typical duration of $\sim 0.1$~s) with luminosities in the range $10^{36}$--$10^{43}$~erg\,s$^{-1}$, and rarely by giant flares (GFs). The latter consist of an initial short spike with a peak luminosity in the range $10^{44}$--$10^{47}$~erg\,s$^{-1}$, followed by a several-hundred-second-long fainter tail modulated by the star's spin period. Only three GFs from magnetars, two in the Galaxy and one in the Large Magellanic Cloud, have been observed so far, although a few extragalactic candidates have also been reported \citep{Frederiks07,Mazets08,Svinkin20,Frederiks20,Yang20}. See \citet{Turolla15,Mereghetti15,KaspiBeloborodov17} for reviews on magnetars. For most FRBs the possibility of a simultaneous magnetar GF could not be discarded, mainly due to the limited sensitivity of past or currently flying $\gamma$--ray detectors, combined with FRB distances (e.g., \citealt{Martone19a,Guidorzi19,Guidorzi20}). Yet, ignoring possible different beaming factors between radio and high-energy emission, a one-to-one correspondence between FRBs and magnetar GFs has to be excluded because of the constraining lack of a radio detection associated with the most luminous Galactic GF yet observed \citep{Tendulkar16}. A turning point recently came with the discovery of Galactic FRB\,200428 simultaneously with a hard X--ray burst from the recently reactivated magnetar SGR\,1935+2154 \citep{LiHXMT20,Mereghetti20,Ridnaia20}: as seen with the Canadian Hydrogen Intensity Mapping Experiment (CHIME; \citealt{CHIME19b}), the FRB consists of two peaks 30~ms apart, with an energy of $3\times10^{34}$~erg and peak luminosity of $7\times10^{36}$~erg\,s$^{-1}$ in the 400--800-MHz band \citep{CHIME20b}.
It was also detected with the Survey for Transient Astronomical Radio Emission 2 (STARE2; \citealt{BochenekSTARE2}) in the 1281--1468~MHz band with a burst energy of $2\times10^{35}$~erg and a luminosity of $4\times 10^{38}$~erg\,s$^{-1}$ \citep{Bochenek20}. This FRB is $\sim40$ times less energetic than the least energetic extragalactic one measured so far. The X/$\gamma$-ray counterpart also consists of two main peaks temporally coincident with their radio analogues, once the delay expected from the dispersion measure is accounted for. Interestingly, while the released energy is typical of magnetar bursts, this event exhibits an unusually structured, slowly rising time profile compared with that of a typical short burst. In addition, its spectrum, unusually hard as well, is fitted with a cutoff power-law with photon index between $0.4$ and $1.6$ and cutoff energy in the range $65$--$85$~keV, corresponding to a released energy of $1\times10^{40}$~erg and peak luminosity of $1\times10^{41}$~erg\,s$^{-1}$ \citep{LiHXMT20,Ridnaia20}. This FRB followed an intense hard X--ray burst activity which culminated in a burst forest on April 27, 2020 \citep{Palmer20,Younes20}. On the one hand, this event finally provides direct evidence that magnetars can produce FRBs along with hard X--ray bursts; on the other hand, the lack of radio counterparts to many other hard X--ray bursts from the same source, with upper limits $10^8$ times fainter than FRB\,200428, shows the rarity of this kind of joint emission \citep{Lin20}. The discovery of \object{FRB\,180916.J0158$+$65}, the nearest extragalactic FRB source with a measured redshift known to date, at a luminosity distance of $149.0\pm 0.9$~Mpc \citep{Marcote20}, along with its being a repeater, made it a desirable target for multi-wavelength surveys.
The subsequent discovery of a periodic modulation in its radio burst activity with $P=16.35\pm0.15$~days, with an FRB rate of up to $\sim 1$~hr$^{-1}$ for a $\pm2.7$-day window around peak \citep{CHIME20a}, opened up the possibility of planning multi-wavelength campaigns around the expected peaks. In a 33-ks {\em Chandra X-ray Observatory} observation which covered one FRB detected with CHIME, \citet{Scholz20} detected no X--ray source, with upper limits on the released energy of $1.6\times10^{45}$~erg and $4\times10^{45}$~erg at the FRB time and at any time, respectively, in the $0.5$--$10$-keV energy band. They also derived an upper limit of $6\times10^{46}$~erg in the $10$--$100$~keV band for 12 bursts from \object{FRB\,180916.J0158$+$65}\ that were visible with {\em Fermi}/Gamma-ray Burst Monitor (GBM; \citealt{Meegan09}). {\em XMM-Newton} detected no source down to $E<10^{45}$~erg in the $0.3$--$10$-keV energy band at the times of three radio bursts discovered at 328~MHz with the Sardinia Radio Telescope \citep{Pilia20}. Comparable upper limits of $3\times10^{46}$~erg on the energy released in the optical band simultaneously with FRBs from \object{FRB\,180916.J0158$+$65}\ were also derived in a statistical framework using survey data of the Zwicky Transient Facility \citep{Andreoni20}. The Hard X--ray Modulation Telescope (HXMT), named ``Insight'' after launch on June 15, 2017, is the first Chinese X--ray astronomy satellite \citep{Li07,Zhang20_HXMT}. It carries on board three main instruments: the Low Energy X--ray telescope (LE; 1--15~keV; \citealt{Chen20_HXMT}), the Medium Energy X--ray telescope (ME; 5--30~keV; \citealt{Cao20_HXMT}), and the High Energy X--ray telescope (HE; \citealt{Liu20_HXMT}). The HE consists of 18 NaI/CsI detectors covering the 20--250~keV energy band for pointing observations. It also works as an all-sky monitor in the $0.2$--3~MeV energy range.
The unique combination of a very large geometric area ($\sim5100$\,cm$^{2}$) and of continuous event tagging with timing accuracy $<10\,\mu$s was already exploited in the search for possible $\gamma$--ray counterparts to a sample of 39~FRBs down to ms or sub-ms scales in the keV--MeV energy range. As a result, the association with cosmological GRBs was excluded on a systematic basis \citep[hereafter G20]{Guidorzi20}. In this work we report the results of follow-up observations of \object{FRB\,180916.J0158$+$65}\ that were carried out with \emph{Insight--\/}HXMT\ around one of the expected peaks of radio activity, from February 4 to 7, 2020, for which no observations have been reported to date. Section~\ref{sec:dataset} describes the data set and its reduction, whereas the analysis is reported in Section~\ref{sec:data_an}. Results are reported in Section~\ref{sec:res} and discussed in Section~\ref{sec:disc}. We conclude in Section~\ref{sec:conc}. \section{Data set} \label{sec:dataset} \emph{Insight--\/}HXMT\ observed \object{FRB\,180916.J0158$+$65}\ from 2020-02-04 12:40:19 to 2020-02-07 07:39:44 UT as a Target of Opportunity observation, requested in correspondence with a predicted maximum of the radio activity. The detailed log of the \emph{Insight--\/}HXMT\ observation is listed in Table~\ref{tab:log}. The different net exposures of the instruments are due to the filtering criteria adopted for the generation of the good time intervals. \begin{table*} \caption{Log of the \emph{Insight--\/}HXMT\ observation of \object{FRB\,180916.J0158$+$65}.
The observation lasted 241142~s, from 2020-02-04 12:40:19 to 2020-02-07 07:39:44 UT.} \label{tab:log} \centering\begin{tabular}{ccrrr} \hline\hline Inst & Energy & Net Exp$^{\rm (a)}$ & Net Exp$^{\rm (b)}$ & Net Count Rate$^{\rm (c)}$ \\ & (keV) & (ks) & (ks)& (c/s) \\ \hline LE & 1--10 & $29.7$ & $29.2$ & $0.452 \pm 0.022$ \\ ME & 10--30 & $67.3$ & $67.1$ & $-0.015 \pm 0.027$ \\ HE & 25--80 & $47.0$ & $46.5$ & $-2.516 \pm 0.374$ \\ LE \& ME \& HE & 1--80 & -- & $19.2$ & --\\ LE \& ME$^{\rm (d)}$ & 1--30 & -- & $9.9$ & --\\ ME \& HE$^{\rm (d)}$ & 25--80 & -- & $23.9$ & --\\ HE alone$^{\rm (e)}$ & 25--80 & -- & $3.3$ & --\\ \hline \end{tabular} \begin{list}{}{} \item[$^{\rm (a)}$]{Net exposure resulting from the standard filtering described in the text.} \item[$^{\rm (b)}$]{Net exposure resulting after applying the background interpolation procedure.} \item[$^{\rm (c)}$]{Derived with HXMTDAS tasks (see Section~\ref{sec:dataset}).} \item[$^{\rm (d)}$]{Filtered data are available only for two instruments.} \item[$^{\rm (e)}$]{Neither LE nor ME have simultaneous filtered data.} \end{list} \end{table*} For our analysis we used the software package HXMTDAS version 2.02.1\footnote{\url{http://enghxmt.ihep.ac.cn/software.jhtml}}. The screening of the raw events was performed by means of the \texttt{legtigen, megtigen,} and \texttt{hegtigen} tasks\footnote{\url{http://enghxmt.ihep.ac.cn/SoftDoc.jhtml}}. The standard filtering criteria were adopted, namely: the Earth elevation angle $\mathrm{ELV}>10\deg$; the cutoff rigidity $\mathrm{COR}>8$~GeV; the pointing offset angle $\mathrm{ANG\_DIST}<0.04\deg$. We also excluded data taken close to the South Atlantic Anomaly (SAA) by selecting T\_SAA and TN\_SAA both greater than 300~s. For the LE instrument, we selected only data for which the Bright Earth Angle was greater than $30\deg$. From the cleaned event files we then extracted the light curves with 1~ms time resolution.
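For illustration, the standard screening criteria above can be expressed as a simple per-bin selection. This is a minimal sketch with hypothetical housekeeping field names (the actual selection is performed by the HXMTDAS \texttt{legtigen}/\texttt{megtigen}/\texttt{hegtigen} tasks on the housekeeping data):

```python
def passes_standard_screening(hk: dict, is_le: bool = False) -> bool:
    """Standard Insight-HXMT good-time selection quoted in the text.

    `hk` is a hypothetical dict of housekeeping values for one time bin;
    the field names are illustrative, not the actual FITS column names.
    """
    ok = (
        hk["ELV"] > 10.0             # Earth elevation angle (deg)
        and hk["COR"] > 8.0          # cutoff rigidity (GeV)
        and hk["ANG_DIST"] < 0.04    # pointing offset angle (deg)
        and hk["T_SAA"] > 300.0      # time since last SAA passage (s)
        and hk["TN_SAA"] > 300.0     # time to next SAA passage (s)
    )
    if is_le:                        # LE only: bright Earth angle cut
        ok = ok and hk["BRIGHT_EARTH"] > 30.0
    return ok
```

Contiguous runs of bins passing this selection would then form the good time intervals from which the 1-ms light curves are extracted.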
For the HE instrument we also extracted the light curves of each HE unit, in order to improve the efficiency of the Multi-Detector-Search algorithm applied to the LE+ME+HE light curves. Because of the uncertainties in the background evaluation (as is evident from the net count rates listed in Table~\ref{tab:log}), performed by the tasks \texttt{lebkgmap} \citep{LEBCKG}, \texttt{mebkgmap} \citep{MEBCKG}, and \texttt{hebkgmap} \citep{HEBCKG}, the background was instead estimated by interpolating a 10-s binned light curve: polynomials of increasing order were fitted to individual orbits until both a $\chi^2$ test and a two-tailed runs test yielded P-values $>0.01$, so as to avoid both under- and over-fitting. The time bins for which this procedure could not achieve the required P-values were discarded; the resulting net exposure is also reported in Table~\ref{tab:log}. Consequently, this procedure can only detect relatively short bursts, whereas a relatively faint persistent source, or one varying over timescales $>10$~s, cannot be detected. Table~\ref{tab:log} also reports the net exposure for each combination of instruments with simultaneously available data: only the LE and HE combination without ME lacks a significant exposure. We included the time intervals for which only HE data are available, because the multiplicity of its independent detectors still enables an effective search for transients. Concerning the HE, in the following we ignored the blocked collimator detector, which was devised to measure the local background of HE \citep{Liu20_HXMT}, therefore using the data from the remaining 17 detectors. Hereafter, only the filtered time bins for each instrument are considered in the analysis.
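The order-selection logic of this background interpolation can be sketched as follows. This is a simplified illustration: only the $\chi^2$ goodness-of-fit test is shown, whereas the actual procedure also applies a two-tailed runs test before accepting a fit.

```python
import numpy as np
from scipy.stats import chi2

def interpolate_orbit_background(t, counts, max_order=6, p_min=0.01):
    """Fit polynomials of increasing order to a 10-s binned orbit light
    curve until the chi^2 P-value exceeds p_min (no under-fitting);
    returns the background model at t, or None (bins to be discarded)."""
    sigma = np.sqrt(np.maximum(counts, 1.0))      # approximate Poisson errors
    for order in range(1, max_order + 1):
        coeff = np.polyfit(t, counts, order, w=1.0 / sigma)
        model = np.polyval(coeff, t)
        chisq = np.sum(((counts - model) / sigma) ** 2)
        dof = len(t) - (order + 1)
        if dof > 0 and chi2.sf(chisq, dof) > p_min:
            return model
    return None
```

Stopping at the lowest acceptable order, rather than always fitting a high-order polynomial, is what prevents the interpolant from absorbing genuine short bursts into the background model.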
No FRB was reported during our observations: in particular, on 2020-02-04 CHIME reported four bursts, the last of which was at 01:17:21.37~UT, i.e., more than 11 hours before the beginning of the \emph{Insight--\/}HXMT\ observations. The next FRB reported by CHIME from this source came 15 days later\footnote{\url{https://www.chime-frb.ca/repeaters/180916.J0158+65}.}. Assuming the period of $16.35$~days, the expected peak of radio activity considered by us was at 2020-02-05 04:55~UT: the \emph{Insight--\/}HXMT\ observing window spans the time interval from $-0.7$ to $2.1$~days around it, thus lying entirely within the $\pm 2.7$-day interval characterised by the expected peak burst rate of $1.0\pm0.5$~hr$^{-1}$ \citep{CHIME20a}. The total net exposure used in the present work is $0.65$~days, which corresponds to 23\% of the overall observing window. \section{Data analysis} \label{sec:data_an} A transient increase of the count rate of the detectors can be caused by two different kinds of phenomena: (i) an electromagnetic wave associated with a transient event, whose photons interact with a number of detectors; (ii) high-energy charged particles interacting with individual detectors. The main distinctive property of the e.m. wave is a common spectral and temporal evolution as recorded by the different detectors, whereas a particle-induced event deposits its energy in one or a few detectors in an uncorrelated way, resulting in spikes in the count rates significantly in excess of what is expected from counting statistics. In particular, when dim astrophysical transients and short integration times are considered, the very few expected counts can more easily be confused with particle spikes. Consequently, searching for simultaneous excesses over a number of detectors is the most effective way of discriminating between them.
To this aim, G20 developed the so-called multi-detector search (hereafter MDS) method, which exploits the segmented nature of the \emph{Insight--\/}HXMT/HE instrument to search for transient candidates possibly associated with FRBs. While in that case only CsI events were considered (the collimators being transparent to them), here the data to be analysed come from a pointed observation and mainly differ in two aspects: 1) concerning the HE, NaI instead of CsI events are considered; 2) the data of the other two instruments operating in the corresponding softer energy bands are included. In light of this, we adapted the original MDS algorithms as described below. In order to find the optimal compromise between sensitivity and false positive rate, we preliminarily characterised the background statistical properties of each detector. \subsection{Background statistical properties} \label{sec:statnoise} Prior to investigating the nature of possible candidates, we assumed that their signal does not affect the overall count distribution, since the vast majority of the recorded counts is expected to be background. For each of the 19 detectors (LE, ME, and 17 HE-NaI units) we accumulated the overall 1-ms count distribution. In order to test whether this is compatible with being a statistical realisation of a variable Poisson process, whose expected value for each time bin is given by the locally estimated background, we simulated 100 realisations for each bin. For each detector we thus ended up with a distribution of expected counts having 100 times as many bins as the corresponding real one. We then compared each of the 19 real count distributions with their corresponding synthetic ones. As a result, the total recorded counts are slightly, but significantly, in excess of pure Poisson noise by the following amounts: $1.2$\%, $0.6$\%, and $0.5$\% for the LE, ME, and average HE, respectively.
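The spirit of this Poisson-consistency check can be sketched as follows. This is an illustrative reduction of the comparison described above: the recorded counts are compared bin by bin with Poisson realisations of the locally estimated background, and the excess tail fraction beyond pure counting noise is measured.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_excess_fraction(counts, bkg, n_sim=100, q=0.999):
    """Excess fraction of bins above the q-quantile of Poisson
    realisations of the local background, relative to what pure
    Poisson noise would give. Positive values flag extra variance."""
    sims = rng.poisson(np.repeat(bkg, n_sim))   # n_sim realisations per bin
    threshold = np.quantile(sims, q)
    return np.mean(counts > threshold) - np.mean(sims > threshold)
```

An excess of a fraction of a percent, as found in the text, is invisible in the bulk of the distribution but matters precisely in the far tail that a burst search samples.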
These excesses are caused by the occasional presence of spikes that are visible in individual detectors or, in any case, are incompatible with the signal expected from a plane wave. We also found that a possible way to reject most of them is to increase the lower threshold on the photon energy, at least for the HE units. Although this excess component accounts for $\lesssim 1$\% of the total background variance, it can affect the estimate of the statistical significance of peaks in the light curves, especially at short ($\sim$~few ms) integration times. It must therefore be taken into account in calculating the expected false positive rate. More details are reported in Appendix~\ref{sec:app_A}. \subsection{Multi-detector search} \label{sec:MDS} The diversity of the three instruments and energy bands, coupled with the different combinations of available data shown in Table~\ref{tab:log}, forced us to conceive a set of three complementary trigger criteria that address the several alternative cases in which a candidate can in principle be found. For each case, the philosophy is the same as that of the MDS conceived in G20: for a given criterion, the threshold that must be exceeded by the counts of a generic bin depends on (a) the local interpolated background; (b) the integration time; (c) the minimum number of detectors to be triggered simultaneously, so as to end up with a desired combined probability. In addition, in the present work the threshold must also depend on the kind of detector as well as on the data available at any given time bin. Following these guidelines, for each case we derived a set of thresholds, expressed in units of Gaussian $\sigma$'s following the same convention as in G20. Overall, a candidate must fulfil at least one trigger criterion. A detailed description is reported in Appendix~\ref{sec:app_B}.
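The rationale behind requiring a minimum number of simultaneously triggered detectors can be illustrated with a toy calculation, assuming independent detectors and one-sided Gaussian-equivalent thresholds (the actual thresholds also depend on the instrument, background, and integration time, and are calibrated empirically as described in Appendix~\ref{sec:app_B}):

```python
from scipy.stats import binom, norm

def combined_chance_prob(sigma_thresh: float, n_det: int, n_min: int) -> float:
    """Chance probability that at least n_min of n_det independent
    detectors each exceed a sigma_thresh-sigma upward fluctuation in
    the same time bin, under the pure-noise hypothesis."""
    p_single = norm.sf(sigma_thresh)          # one-sided Gaussian tail
    return binom.sf(n_min - 1, n_det, p_single)

# e.g. requiring >=5 of the 17 HE units above 2 sigma in the same bin
p_coinc = combined_chance_prob(2.0, 17, 5)
```

The coincidence requirement is what makes a modest per-detector threshold (here 2$\sigma$) yield a very small combined false-alarm probability per bin, which is the working principle of the MDS.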
Because of the presence of a small, but significant, extra-Poissonian variance in the background counts (Sect.~\ref{sec:statnoise}), we had to ensure that the false positive rate was not underestimated, or, equivalently, that the confidence level of any possible candidate was not overestimated. Therefore, we further calibrated the thresholds by running the MDS on 100 synthetic samples, obtained by shuffling all the 1-ms bins along with their associated counts and expected background counts for each detector independently. This procedure preserves the properties of the count distribution of each detector, while at the same time offering a way of calculating the probability of any possible combination of simultaneous excesses in different detectors. In this way we ended up with a robust procedure for estimating the related multivariate probability distribution, having relaxed any assumption on the nature of the statistical noise of each individual detector. More details can be found in Appendix~\ref{sec:app_B}. Not only can the MDS be used for other FRB sources that will be targeted by \emph{Insight--\/}HXMT, but it may also help identify bursts from other sources not necessarily related to FRBs, such as weak short GRBs possibly associated with gravitational wave sources. \section{Results} \label{sec:res} Table~\ref{tab:FPrate} reports the number of candidates as a function of the integration time along with the corresponding number of expected false positives, which already accounts for the multi-trials related to the total number of time bins that were screened. We found only one candidate from the screening of 10-ms time bins, centred at 2020-02-06 23:54:55.793 UT: with reference to the three MDS criteria (Appendix~\ref{sec:app_B}), this event triggered criterion 2, that is, both LE and ME exceeded their thresholds, while only one of the HE units did. Should this be real, it would be a relatively spectrally soft event.
The number of expected false positives for the 10-ms integration time is $0.10$, so that the chance probability of having at least one fake candidate is $9.5$\%. When the trials related to all the explored integration times are taken into account together, the total number of expected false positives rises to $1.06$, i.e. fully consistent with the single candidate found. In order to better evaluate its nature, we also inspected its counts in both LE and ME, and found that they were just above the respective thresholds. We found no simultaneous events reported by {\em Fermi}/GBM, {\em INTEGRAL} SPI-ACS, {\em Swift}/BAT, and Konus/WIND. The search for coincident subthreshold triggers in the case of {\em Fermi}/GBM\footnote{\url{https://gcn.gsfc.nasa.gov/fermi_gbm_subthresh_archive.html}} and of {\em Swift}/BAT\footnote{\url{https://gcn.gsfc.nasa.gov/gcn/swift_sub_sub_threshold.html}} gave no results either. Overall, we conclude that, based on \emph{Insight--\/}HXMT\ data alone, we cannot exclude that the candidate is non-astrophysical.
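The chance probabilities quoted above follow from Poisson counting of independent spurious triggers, using the expected numbers of false positives in Table~\ref{tab:FPrate}:

```python
import math

def p_at_least_one(n_expected: float) -> float:
    """Probability of at least one false positive when n_expected
    spurious triggers are expected (Poisson counting)."""
    return 1.0 - math.exp(-n_expected)

p_10ms = p_at_least_one(0.10)                       # 10-ms bins alone
n_total = 0.42 + 0.03 + 0.10 + 0.07 + 0.26 + 0.18   # all integration times
print(f"{p_10ms:.3f} {n_total:.2f}")                # 0.095 1.06
```

With one candidate found against $1.06$ expected false positives, the detection is statistically unremarkable, which is why the text refrains from claiming an astrophysical origin.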
\begin{table} \caption{Number of candidates and expected false positives as a function of the integration time.} \label{tab:FPrate} \centering\begin{tabular}{rcc} \hline\hline $\Delta\,t^{\rm (a)}$ & $N_{\rm exp}^{\rm (b)}$ & $N_{\rm cand}^{\rm (c)}$ \\ (ms) & & \\ \hline 1 & $0.42$ & 0\\ 4 & $0.03$ & 0\\ 10 & $0.10$ & 1\\ 64 & $0.07$ & 0\\ 256 & $0.26$ & 0\\ 1024 & $0.18$ & 0\\ \hline \end{tabular} \begin{list}{}{} \item[$^{\rm (a)}$]{Integration time of a single bin.} \item[$^{\rm (b)}$]{Expected number of false positives.} \item[$^{\rm (c)}$]{Number of candidates.} \end{list} \end{table} \subsection{Upper limits and technique sensitivity} \label{sec:ULsens} Given the lack of a confident detection of transient candidates with durations in the range $10^{-3}$--$1$~s, we derived the corresponding upper limits on fluence, as a function of duration, by assuming three different energy spectra: a non-thermal power-law with photon index $\Gamma=2$, which is often found to adequately describe the photon spectrum of high-energy transient events, and an optically thin thermal bremsstrahlung (hereafter, {\sc ottb}) $dN/dE\propto E^{-\Gamma}\,\exp{(-E/E_0)}$, with index $\Gamma=1$ and two different values for the cutoff energy $E_0$: $200$ and $50$~keV. Among cutoff power-laws, we opted for the {\sc ottb} because it was adopted to fit the initial spikes of the few Galactic magnetar giant flares \citep{Mazets99,Feroci99,Hurley99,Hurley05,Palmer05,Frederiks07b}, some extragalactic magnetar giant flare candidates \citep{Frederiks07,Mazets08,Frederiks20}, as well as the hard X--ray burst from SGR\,1935+2154 associated with FRB\,200428, although for the latter $\Gamma$ deviates from 1 when left free to vary \citep{LiHXMT20,Ridnaia20,Mereghetti20}. We then characterised the sensitivity of the MDS as follows: we defined a grid of points in the $F$--$\Delta\,t$ plane, where $F$ is the 1--100-keV fluence and $\Delta\,t$ is the duration of a hypothetical transient.
For each spectral model and for each point of this grid we simulated 200 synthetic transients, which were added to the real counts at a given set of times uniformly distributed along the entire observing window, and counted how many of them were identified by the MDS. The results for the {\sc ottb} with $E_0=50$~keV and for the power-law are shown in the contour plots of Figure~\ref{fig:sensitivity}. The results for the {\sc ottb} with $E_0=200$~keV are omitted for the sake of clarity, because they are intermediate between the other two. \begin{figure*} \centering \includegraphics[width=\linewidth]{deteff_2D_multi_hor_just_OTTB50keV_PL2.eps} \caption{Detection probability for a flare as a function of duration $\Delta\,t$, fluence $F$ (left-hand $y$ axis) and isotropic--equivalent released energy $E_{\rm iso}$ (right-hand $y$ axis) in the 1--100~keV energy band for two different energy spectra: an {\sc ottb} with $E_0=50$~keV ({\em left}), and a power-law with photon index $\Gamma=2$ ({\em right}). The different solid lines correspond to constant luminosity values.} \label{fig:sensitivity} \end{figure*} We also calculated the corresponding isotropic-equivalent released energy in the same energy band, $E_{\rm iso}$ (right-hand vertical axes in Fig.~\ref{fig:sensitivity}), at the distance of \object{FRB\,180916.J0158$+$65}\ and, in addition, we show the lines corresponding to a constant luminosity for three different values: $10^{46}$, $10^{47}$, and $10^{48}$~erg\,s$^{-1}$. Looking at the regions with a 90\% probability for a transient to be detected, for events as short as a few ms the minimum detectable energies are $\sim 10^{45.6}= 4\times10^{45}$~erg. In terms of luminosity, the corresponding minimum values are a few $\times10^{48}$~erg\,s$^{-1}$. Considering longer transients, up to $\sim 0.1$~s, the minimum detectable energy and luminosity values become $\sim10^{46}$~erg and $10^{47}$~erg\,s$^{-1}$, respectively, in the worst case.
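The right-hand axes of Fig.~\ref{fig:sensitivity} follow from the standard fluence-to-energy conversion at the 149-Mpc luminosity distance of the source (redshift corrections are negligible at this distance); a minimal sketch:

```python
import math

MPC_CM = 3.0857e24                  # cm per Mpc
D_L = 149.0 * MPC_CM                # luminosity distance of FRB 180916.J0158+65

def e_iso(fluence: float) -> float:
    """Isotropic-equivalent energy (erg) for a 1-100 keV fluence (erg/cm^2)."""
    return 4.0 * math.pi * D_L**2 * fluence

# fluence corresponding to the ~4e45 erg minimum detectable energy
f_min = 4e45 / (4.0 * math.pi * D_L**2)   # ~1.5e-9 erg/cm^2
```

Dividing $E_{\rm iso}$ by the transient duration $\Delta\,t$ gives the constant-luminosity lines overplotted in the figure.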
\subsection{Any radio bursts during \emph{Insight--\/}HXMT\ observations?} \label{sec:simultFRB} Although no radio burst has been reported to date during the \emph{Insight--\/}HXMT\ observing window, it is worth estimating the probability that \object{FRB\,180916.J0158$+$65}\ emitted no FRBs over that window. Ignoring the complex dependence on frequency \citep{CHIME20a,Pilia20,Chawla20}, here we focus on the homogeneous sample detected with CHIME. Considering the $\pm0.9$-d interval centred on the peak of radio activity, which has a burst rate of $1.8_{-0.8}^{+1.3}$~hr$^{-1}$ \citep{CHIME20a}, the net exposure of \emph{Insight--\/}HXMT\ is $\sim 8.5$~hr. The probability of no FRBs occurring while \emph{Insight--\/}HXMT\ was observing is $2\times10^{-4}$ at most. This would suggest that the \emph{Insight--\/}HXMT\ observations covered at least one FRB, and more probably a few of them (the probability of $\le 2$ FRBs is $<1$\%). This holds true as long as a constant burst rate is assumed for the same window around all peaks. While it was already shown that the burst rate changes significantly for windows with different durations around the peak times \citep{CHIME20a}, nothing is said about whether the constant-rate assumption for a given window over different peak times is compatible with observations. We therefore tested this possibility by taking the CHIME exposures from 28 August 2018 to 30 September 2019, for which data are available\footnote{\url{https://chime-frb-open-data.github.io}.}. There are 19 radio bursts detected with CHIME within the $\pm 0.9$-d window around as many peaks\footnote{The number of bursts is equal to the number of peaks by accident.}. To each peak $i$ we assigned the probability $p_i=E_i/E$ for a burst to occur within its $\pm 0.9$-d window, where $E_i$ is the exposure of that window and $E=\sum_i E_i$ is the total exposure.
From the multinomial distribution we then calculated the information $I$ of the real sample $\{N_i\}$, where $N_i$ is the number of FRBs observed in window $i$, as follows: \begin{equation} I = -\ln{\Big(P_{\rm multi}(\{N_i\})\Big)} = -\ln{(N!)} + \sum_{i}\Big(\ln{(N_i!)} - N_i\,\ln{p_i}\Big)\;, \end{equation} where $N = \sum_i N_i = 19$. We then generated $10^5$~samples with $N$ FRBs distributed over the same exposures and compared the distribution of the simulated information content with the real value. As a result, for only $1.7$\% of the simulated samples was the information higher than the real one. In other words, under the assumption of a constant rate around all peaks, the probability of obtaining a distribution as improbable as, or less probable than, the observed one is $1.7$\%, equivalent to $2.4\,\sigma$ (Gaussian). In conclusion, although this assumption cannot be rejected with the present data, it suggests that different periods could be characterised by different radio activity at peak. Very recently, the upgraded Giant Metrewave Radio Telescope (uGMRT; \citealt{uGMRT}) detected 15 bursts from \object{FRB\,180916.J0158$+$65}\ in three successive cycles and found extreme variability during the active phase around peak \citep{Marthi20}. Should this be strengthened by future data, we cannot reject the possibility that \object{FRB\,180916.J0158$+$65}\ emitted no FRB during these \emph{Insight--\/}HXMT\ observations. \section{Discussion} \label{sec:disc} The discovery of FRB\,200428, a sub-energetic FRB from the recently reactivated Galactic source SGR\,1935+2154, provided the first compelling evidence that magnetars are occasionally FRB sources. The question as to whether they are also responsible for the more energetic extragalactic siblings of FRB\,200428 is still open.
In particular, some extragalactic FRBs could be due to the most energetic subset of the magnetar population, which in principle could be far more energetic than the so-far known Galactic sources \citep{Margalit20}. A possibility is offered by young (age $\lesssim 10^9$~s) hyperactive magnetars with internal fields $B\sim B_{16}\times 10^{16}$~G and a magnetic energy reservoir of $E\sim 2\times 10^{49}\,B_{16}^2$~erg \citep{Beloborodov17}, especially like the ones that can be formed in compact mergers and whose magnetic activity would be enhanced by the large differential rotation at birth \citep{Beloborodov20}. Thus, looking for magnetar burst activity from known extragalactic FRB sources has gained prominence. The initial spikes of the giant flares from Galactic magnetars observed so far have $L\lesssim 10^{47}$~erg\,s$^{-1}$, $E\lesssim 10^{46}$~erg, and durations $\Delta\,t\lesssim 0.1$~s \citep{Mazets79,Feroci99,Hurley99,Palmer05,Hurley05,Mazets99}. When giant flare candidates of extragalactic origin are considered, the observed released energies and luminosities can be larger by more than one order of magnitude (e.g., \citealt{Mazets08}). In this context, our upper limits exclude the occurrence of giant flares similar to, or more energetic than, the brightest ones observed from known Galactic magnetars, at least during 23\% of the 3-day window centred on one of the peaks of the expected radio burst activity of \object{FRB\,180916.J0158$+$65}\,. This holds true regardless of the possible simultaneous occurrence of radio bursts. Hard X-ray bursts as energetic as the one associated with Galactic FRB\,200428 are well below our sensitivity limits and could not have been detected in any case.
Even assuming the same $\gamma$-to-radio fluence ratio, which lies in the range $5\times10^4$--$3\times10^5$, and rescaling to the energy range of \object{FRB\,180916.J0158$+$65}\ radio bursts ($10^{37}$--$4\times10^{38}$~erg), each potentially associated high-energy burst should have $E\sim 3\times10^{42}$--$10^{44}$~erg, which is below our limits. The fine temporal coincidence between radio and hard X-rays in the case of SGR\,1935+2154 points toward a causal link between the two. That the hard X-ray burst also exhibits some unusual features, like the temporal profile and the spectral hardness, is likely connected with the rarity of the joint manifestation. High-energy bursts are thought to originate in the magnetar magnetosphere as a result of magnetic twisting and the buildup of free magnetic energy, which is suddenly released through reconnection and consequent pair plasma acceleration. These processes could either be induced by crustal fractures or have a magnetospheric origin \citep{Lyutikov03b}, where the former seems to be favoured in the case of GFs \citep{Feroci01,Hurley05}. In this context, FRBs could be coherent curvature radiation from pairs \citep{Katz14,Kumar17,YangZhang19} or due to a yet-unidentified process \citep{LyutikovPopov20}, but in either case taking place in the magnetosphere, a scenario which would account for the simultaneity of radio and hard X-rays \citep{LiHXMT20}. Alternatively, FRBs could be synchrotron maser radiation caused by relativistic magnetised shocks driven by plasmoids launched by flares at outer radii ($10^{14}$--$10^{16}$~cm) \citep{Lyubarsky14,Beloborodov17,Beloborodov20,Metzger19}. In this scenario FRBs would be much more collimated than the high-energy emission due to the relativistic beaming of the plasmoids: this would explain both the negligible time delay between radio and hard X--rays, and the rarity of the FRB emission associated with magnetar bursts.
Besides, the energy ratio between the flare and the associated FRB, $\sim 10^5$, is compatible with the expected radiative efficiency \citep{PlotnikovSironi19,Beloborodov20}. While FRB\,200428 could belong to the low-energy tail of the extragalactic FRB energy distribution, which cannot be explored at cosmological distances with the current instrumentation \citep{Bochenek20}, the question as to whether the same mechanism at play for SGR\,1935+2154 can be scaled up by 4--6 decades in the case of GFs remains open, and therefore justifies the search for them from FRB sources. The origin of the periodicity found in the radio activity of \object{FRB\,180916.J0158$+$65}, which might also be the case for FRB\,121102 \citep{Rajwade20}, and its implications for the possible high-energy flaring activity of the putative magnetar, depend on the model. Although the possibility that the periodicity is due to the star's rotation period is not ruled out \citep{Beniamini20}, one can identify two main alternative scenarios: 1) a tight binary system, where classical magnetar emission is modulated by the phase-dependent absorption conditions related to the massive companion's wind \citep{Lyutikov20}, or where the modulation reflects the orbit-induced spin precession \citep{YangZou20}, or other variants \citep{Gu20,IokaZhang20}; 2) an isolated precessing magnetar \citep{Levin20,ZanazziLai20,Sobyanin20}, where, among the different possibilities, Lense-Thirring precession due to a tilted disc could be an option \citep{Chen20}. In the context of a systematic monitoring of the high-energy activity of \object{FRB\,180916.J0158$+$65}, our observations help constrain the rate of possible GFs with a unique broadband sensitivity from 1 to 100~keV, with upper limits on the released energy of possible bursts of $E\lesssim10^{46}$~erg, or even less for durations $\Delta\,t\lesssim0.1$~s.
Considering the different energy bands, these values are comparable with those obtained in the X-rays with {\em Chandra} and {\em XMM-Newton} \citep{Scholz20,Pilia20} and significantly more sensitive than those obtained with all-sky monitors such as {\em Fermi}/GBM \citep{Scholz20}. Under the assumption that the frequency of radio bursts of \object{FRB\,180916.J0158$+$65}\ around peak is the same for all periods, our observations almost certainly covered one or more of them. However, the analysis of the CHIME data suggests that different cycles could be characterised by different radio activity around peak. \section{Conclusions} \label{sec:conc} We followed up the periodic repeating FRB source \object{FRB\,180916.J0158$+$65}, which also happens to be the closest extragalactic FRB source to date ($149$~Mpc), during the activity peak expected from February 4 to 7, 2020, with the three instruments aboard \emph{Insight--\/}HXMT, exploiting their unique combination of sensitivity and broad energy coverage. We searched for burst activity with a duty cycle of $\sim 1/4$ and found none down to $E\lesssim10^{46}$~erg (1--100~keV energy band), or even less for durations $\Delta\,t\lesssim0.1$~s. This rules out the occurrence of the most energetic giant flares yet observed from Galactic magnetars. No other observations of \object{FRB\,180916.J0158$+$65}\ and, especially, no radio bursts around that peak have been reported to date. Assuming that its radio burst activity around peak is the same for all periods, our observations almost certainly covered some bursts. Nevertheless, presently available CHIME data suggest that the source likely experiences different degrees of radio activity at peak across different cycles, leaving open the possibility that \emph{Insight--\/}HXMT\ monitored the source during a burst-free interval. 
Yet, the search for magnetar flaring activity is motivated by two main reasons: 1) at least a sizeable fraction of extragalactic FRB sources are likely to be magnetars, which are possibly more active and have stronger magnetic fields than their Galactic siblings; 2) the complex relation between FRBs and simultaneous hard X-ray bursts revealed by SGR\,1935+2154 is still to be understood. Only through systematic multi-wavelength campaigns will the nature and the role of extragalactic magnetars as FRB sources be clarified. In this respect, the \emph{Insight--\/}HXMT\ observations reported in the present work also served as a test bed and calibration of the search methods expressly devised for this purpose, with regard to the future joint campaigns that are planned for \object{FRB\,180916.J0158$+$65}\ as well as for other suitable repeating FRBs. \begin{acknowledgements} We thank the referee Kevin Hurley for the swift and detailed comments that improved the manuscript. This work is supported by the National Program on Key Research and Development Project (2016YFA0400800) and the National Natural Science Foundation of China under grants 11733009, U1838201 and U1838202. This work made use of data from the {\em Insight}-HXMT mission, a project funded by the China National Space Administration (CNSA) and the Chinese Academy of Sciences (CAS). We acknowledge financial contribution from the agreement ASI-INAF n.2017-14-H.0. We acknowledge use of the CHIME/FRB Public Database, provided at \url{https://www.chime-frb.ca/} by the CHIME/FRB Collaboration. \end{acknowledgements}
\section{Introduction} \IEEEPARstart{T}{he} latest video coding standard, High Efficiency Video Coding (HEVC) \cite{6316136}, developed by the Joint Collaborative Team on Video Coding (JCT-VC) in 2012, improves coding performance significantly. Compared with the previous video coding standard, H.264/AVC \cite{1218189}, HEVC saves about 50\% of the bits at the same perceptual video quality using several elaborate methods. Specifically, for intra coding \cite{6317153}, the coding unit (CU) is recursively divided in a quadtree-based structure from the largest CU of 64$\times$64 to the smallest CU of 8$\times$8. In addition, up to 35 intra-prediction modes are allowed for each prediction unit (PU). These two techniques are beneficial to the coding performance, at the expense of an enormous complexity increase, which makes HEVC intractable for some real-time applications. Therefore, there is a need for complexity reduction of HEVC intra coding. The past few years have witnessed many fast algorithms for HEVC intra coding, which can be classified into two main categories: fast CU size decision and fast intra-mode decision. For the CU size decision, due to the flexibility of CU sizes in HEVC, the recursive process of the quadtree CU partition brings tremendous complexity. Many approaches try to predict the CU partition pattern in advance; thus, the brute-force recursive RD optimization (RDO) search can be avoided. Some heuristic methods \cite{6983560,6470665,zhao2014fast,7538911,7521923,zhang2019statistical,6738325,6890319, chiang2019fast} exploit the intermediate characteristics of the current CU and the spatial correlations with the neighboring CUs to determine the coding tree unit (CTU) partition result early. Specifically, the approaches in \cite{zhang2019statistical,6738325,6890319, chiang2019fast} extract texture features of the CU to determine the CU size. 
The Hadamard (HAD) cost and RD cost are utilized in \cite{6983560,6470665,zhao2014fast} to execute early CU splitting and early CU termination. Kim \textit{et al.} \cite{7521923} proposed splitting a CU based on the number of high-frequency key points. Recently, several machine learning approaches \cite{7457241,7024895,kuanar2019adaptive,7588907,duanmu2016fast,westland2019decision,kuang2019online,7547305,8019316,chung2017hevc,8384310} have been proposed for fast CTU partition. In \cite{duanmu2016fast, westland2019decision}, decision trees are trained for early termination decisions. Zhang \textit{et al.} \cite{7457241} proposed a CU depth decision method based on a support vector machine (SVM). Bayesian decision rules are utilized in \cite{7024895,kuang2019online} to decide the CU size. Since 2016, convolutional neural networks (CNNs) have been studied for CTU partition due to their ability to automatically extract features for CTU structure determination. Liu \textit{et al.} \cite{7547305} proposed a VLSI-friendly algorithm with a shallow CNN structure for CTU partition, and Xu \textit{et al.} \cite{8384310} developed a deep CNN-based approach to predict the CTU partition. Kuanar \textit{et al.} \cite{kuanar2019adaptive} proposed using a CNN to detect textures and object shapes in CUs and then classify the spatial patterns into four classes for CU depth prediction. As for the intra-mode decision, performing an RDO calculation for every intra mode would lead to much higher complexity. The current HEVC encoder has already adopted a three-step algorithm to expedite the intra-mode decision \cite{piao2010encoder}. In the first step, rough mode decision (RMD) is used to select several candidate modes with the least Hadamard transform-based costs (HAD costs). Second, the most probable modes (MPMs) are added to the candidate list. Finally, all the modes in the candidate list go through RDO, which requires higher complexity, to find the best mode. 
In the past few years, many approaches \cite{6201851,6466835,7169266,7149261,7024895,6662471,7805540,shen2013fast,7362704,7457241,7051618,8412615, kuanar2019adaptive, chiang2019fast} have been proposed for fast intra-mode decision to further reduce the complexity. Specifically, texture and edge information was extracted in \cite{6466835} and \cite{7149261} to predict the possible best intra-prediction mode. Hu \textit{et al.} \cite{7024895} applied a fast intra-mode decision algorithm based on outliers, with entropy coding refinement. For the mode searching procedure, a progressive rough mode searching technique was proposed in \cite{6662471}. Similarly, Liao \textit{et al.} \cite{7805540} adjusted the order of MPM and RMD and then used the depth of the current PU to choose the most probable modes. Jaballah \textit{et al.} \cite{7362704} proposed clustering the set of intra modes into groups and selectively choosing the candidates for RDO calculations. Laude \textit{et al.} \cite{laude2016deep} used CNN models to directly predict the optimal mode to avoid extra RDO calculations. The authors of \cite{7457241} also proposed a gradient-based fast algorithm that calculates the average gradients in the vertical and horizontal directions for every PU to reduce the number of candidate modes. Ryu \textit{et al.} \cite{8412615} adopted the random forest algorithm to estimate the possible intra-prediction modes. Most of these methods tend to discover the relationship between the mode and the content attributes and then heuristically detect special content statistical features (such as homogeneous blocks or vertical texture blocks) to simplify the RMD or RDO procedures. In general, the aforementioned methods have explored fast intra coding from various specific perspectives and have achieved good performance improvements. 
However, there are still many comprehensive factors of fast intra coding to be considered, such as the efficiency of feature extraction, the validity of the best candidate list, the tradeoff between complexity and coding performance, the impact of different quantizations, and the parallelism for easy hardware implementation. Therefore, with comprehensive consideration of the above factors, we propose a learned fast HEVC intra coding (LFHI) framework in this paper. A special low-complexity asymmetric-kernel CNN (AK-CNN) is designed to efficiently extract near-horizontal and near-vertical textures, which are important patterns for intra-mode prediction. For the fast CU/PU size decision, our AK-CNN performs the decisions of early splitting and early termination. For the fast intra-mode decision, we introduce the concept of the minimum number of RDO candidates (MNRC) and use the AK-CNN to predict the number of valid best RDO candidates for every PU adaptively. To provide configurable trade-offs between complexity reduction and coding performance, we adopt a confidence threshold scheme and then use an evolutionary algorithm to explore the optimal combination of threshold values. We also propose a novel interpolation-based prediction scheme for the LFHI framework to achieve generalization to variant quantization cases. Finally, it is important to note that neighboring reconstructed pixels are not required by the LFHI framework, which guarantees the parallelism of LFHI for friendly hardware implementation. Our experimental results demonstrate that the LFHI framework has a high degree of parallelism and a better complexity-efficiency tradeoff, reducing the encoding time of HEVC by 75.2\%, with a negligible 2.09\% Bj{\o}ntegaard delta bit-rate (BD-BR) increase, over the JCT-VC test sequences. This performance significantly outperforms that of other state-of-the-art approaches. 
In brief, the main contributions of this paper are summarized as follows: \begin{itemize} \item We design a novel AK-CNN structure for effective texture feature extraction, which can be used for both the fast CU/PU size decision and the fast intra-mode decision. \item We propose a solution for the fast intra-mode decision from a new perspective. A significant attribute of every PU, the MNRC, is introduced. Based on the prediction of the MNRC, we can remove redundant RDO candidates in a safer manner. \item We propose an evolution optimized threshold decision (EOTD) scheme to explore the optimal configurable complexity-efficiency tradeoffs for HEVC intra coding. \item With the proposed interpolation-based prediction scheme, our framework generalizes well to all quantization parameters (QPs). \end{itemize} The remainder of this paper is organized as follows. Section II gives an overview of the HEVC intra-coding scheme. In Section III, we introduce the proposed LFHI framework. The detailed schemes for the fast CU/PU size decision and the fast intra-mode decision are described in Section IV and Section V, respectively. We present the experimental results and analysis in Section VI and then conclude the paper in Section VII. We will release the code and dataset online at \url{http://staff.ustc.edu.cn/~chenzhibo/resources.html} for public research usage. \section{Overview of HEVC Intra Coding} In HEVC intra coding, the CU sizes can be 64$\times$64, 32$\times$32, 16$\times$16 and 8$\times$8, which correspond to depths 0$\sim$3. For the smallest CU of 8$\times$8, two different PU sizes (8$\times$8 and 4$\times$4) are possible. The PU size 4$\times$4 indicates that there are four PUs of equal size in that CU. In this paper, the 4$\times$4 PU is viewed as depth 4. For every PU, up to 35 intra-prediction modes, including planar, DC and 33 angular modes, are allowed. Fig. \ref{intra} shows the prediction directions associated with the angular modes. 
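As a quick illustration of the quadtree geometry above, the block edge length simply halves at every split, so each depth maps to a fixed CU/PU size. A minimal Python sketch (the helper name is ours, not part of the HM software):

```python
def block_size(depth: int) -> int:
    """Edge length in pixels of a CU/PU at the given quadtree depth:
    depth 0 is the 64x64 CTU, depth 3 the 8x8 CU, and depth 4 the
    4x4 PU inside the smallest CU."""
    if not 0 <= depth <= 4:
        raise ValueError("HEVC intra depths range from 0 to 4")
    return 64 >> depth  # edge halves at every split

sizes = [block_size(d) for d in range(5)]  # 64, 32, 16, 8, 4
```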
It requires extremely high computational complexity to execute the RDO calculation for all modes. \begin{figure}[h] \centerline{\includegraphics[scale=0.28]{direction.png}} \caption{Angular intra-prediction modes. H and V indicate the horizontal and vertical directionalities, respectively\cite{6317153}.} \label{intra} \end{figure} The flowchart of the encoder of HEVC reference software HM 16.9 \cite{HM} is shown in Fig. \ref{flow1}. As we can see, the encoder already adopts a three-step intra-mode decision fast algorithm. In the first step, RMD, a low-complexity procedure, is designed to construct a candidate list for RDO based on the Hadamard transform-based cost ($J_{HAD}$), which is calculated by \begin{equation} J_{HAD}=SATD+\lambda\cdot R_{mode} \end{equation} where $SATD$ represents the sum of absolute transformed difference. $\lambda$ is the Lagrangian multiplier decided by the QP, and $R_{mode}$ is the number of bits to encode the information of intra-prediction mode. After the above calculations, three best modes with the least $J_{HAD}$ are selected for PUs with the sizes 64$\times$64, 32$\times$32 and 16$\times$16, while eight optimal modes are chosen for PUs with the sizes 8$\times$8 and 4$\times$4. In the second step, the three modes generated from the modes of neighboring PUs, \textit{i.e.}, the MPMs, are added to the candidate list. In the last step, all the modes in the list go through RDO to find the best intra-prediction mode based on the RD-cost ($J_{RDO}$), which is computed by \begin{equation} J_{RDO}=SSE+\lambda\cdot R_{total} \end{equation} where $SSE$ denotes the sum of the squared errors and $R_{total}$ is the number of total bits used to encode the CU. If the depth of current CU does not reach the largest depth, the depth is increased by 1, and four sub-CUs continue to perform the above process. For the smallest CU, two different PU sizes (8$\times$8 and 4$\times$4) are tested. 
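To make the Hadamard cost $J_{HAD}$ concrete, the following is a minimal sketch of the SATD computation for a 4$\times$4 residual block using the standard (unnormalised) 4$\times$4 Hadamard matrix; the HM encoder uses an optimised integer implementation, and the function names and inputs here are our own illustration:

```python
import numpy as np

# Unnormalised 4x4 Hadamard transform matrix.
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])

def satd(residual: np.ndarray) -> float:
    """Sum of absolute transformed differences of a 4x4 residual."""
    return float(np.abs(H4 @ residual @ H4.T).sum())

def j_had(residual: np.ndarray, lam: float, r_mode: int) -> float:
    """Hadamard cost used by RMD to rank candidate intra modes:
    J_HAD = SATD + lambda * R_mode."""
    return satd(residual) + lam * r_mode
```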
This three-step algorithm eliminates a large number of intra-coding calculations in the HEVC encoder. However, as we mentioned in the introduction, there is still much room for further improvement. \begin{figure}[] \centerline{\includegraphics[scale=0.9]{flowchart-HEVC.png}} \caption{HEVC intra-coding flowchart.} \label{flow1} \end{figure} \section{Proposed Framework} \begin{figure}[h] \centering \centerline{\includegraphics[scale=0.9]{flowchart-proposed.png}} \setlength{\belowcaptionskip}{0pt} \caption{Flowchart of the proposed LFHI framework. A: fast CU/PU size decision. B: fast mode decision.} \label{flow2} \vspace{-0.1cm} \end{figure} Fig. \ref{flow2} shows the flowchart of our proposed LFHI framework for fast intra coding, which aims at reducing the computational complexity of the CU/PU size decision and the intra-mode decision while maintaining the RD performance. Unlike other approaches that make use of reconstructed pixels or intermediate variables such as the RD cost, our framework only uses the pixels in the current CU; thus, it can be implemented as preprocessing before encoding. As a consequence, our scheme has a high degree of parallelism. For the fast CU/PU size decision (Block A in Fig. \ref{flow2}), we employ AK-CNNs as classifiers at four depths. The classifiers output the splitting decision, and the CU/PU is split unless the decision is nonsplitting or the maximal depth is reached. As for the fast intra-mode decision (Block B in Fig. \ref{flow2}), for the PU at every depth, the AK-CNNs adaptively decide the number of best candidate modes for the RDO calculation, which enables the RMD to select a flexible number of modes. After the above preprocessing operations, the encoder determines the final CU/PU size and the intra-mode RDO candidates according to this information. The details of the fast CU/PU size decision and the fast intra-mode decision are described in the following sections. 
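The preprocessing pass just described can be sketched as a recursive walk over the quadtree. In the hedged sketch below, `split_fn` and `modes_fn` are stand-ins for the trained AK-CNN classifiers of Blocks A and B, and all names are our own:

```python
def lfhi_preprocess(block, depth, split_fn, modes_fn, max_depth=4):
    """Pre-decide the CU/PU structure before encoding: split_fn(block,
    depth) mimics the per-depth splitting decision (Block A) and
    modes_fn(block, depth) the adaptive RDO-candidate count (Block B).
    The encoder then follows the returned description instead of a
    brute-force recursive RDO search."""
    if depth == max_depth or not split_fn(block, depth):
        return {"depth": depth, "rdo_modes": modes_fn(block, depth)}
    half = len(block) // 2
    quads = [[row[:half] for row in block[:half]],   # top-left
             [row[half:] for row in block[:half]],   # top-right
             [row[:half] for row in block[half:]],   # bottom-left
             [row[half:] for row in block[half:]]]   # bottom-right
    return {"depth": depth,
            "children": [lfhi_preprocess(q, depth + 1, split_fn,
                                         modes_fn, max_depth)
                         for q in quads]}
```

Because the decisions depend only on the pixels of the current block, every subtree call is independent, which is the source of the framework's parallelism.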
\section{Fast CU/PU-Size Decision} \subsection{Four-Level Classifier} In HEVC intra coding, the best CU/PU size is decided by searching all the CU/PU partition patterns with RDO calculations, which means that every CU/PU is encoded several times, resulting in redundancy and enormous computational complexity. If we can construct a classifier that determines the CU/PU size in advance, a large amount of encoding complexity can be saved. \begin{figure*}[h] \centerline{\includegraphics[scale=0.98]{network.png}} \caption{Structure of AK-CNN. Taking the luminance of the current CU as input, the AK-CNN will output the splitting decision. Closer attention will be paid to the near-horizontal and near-vertical textures.} \label{ak} \vspace{-0.1cm} \end{figure*} Note that the entire CTU partition classification problem can be viewed as a combination of four levels of binary classifications. Due to the large number of modes of the CTU partition, direct prediction of the partition mode is impractical and wasteful of resources. Therefore, we design a scheme with separate classifiers at four decision levels. \subsection{AK-CNN Structure} Some previous approaches employed traditional machine learning tools (such as SVMs) to implement such classifiers, extracting several manual features, such as texture strength and local variance. However, since the results are heavily dependent on the design of the features, these methods introduce limitations and lack generalization in certain specific situations. Recently, CNNs have achieved dominant performance in many vision tasks, such as classification and segmentation. CNNs' outstanding feature extraction and learning capabilities make them extremely competitive. In this paper, we propose a special AK-CNN as the classifier to take full advantage of the power of neural networks. Taking the luminance of the current CU as input, the AK-CNN outputs a splitting decision. 
Since we need four classifiers to decide the CU/PU size, we build four CNNs to implement the above classifiers, one at each decision level, for the selected QP. Due to the low-complexity requirement, a relatively shallow and light architecture is adopted in this paper. For this particular task, we design a novel asymmetric-kernel structure. As shown in Fig. \ref{intra}, to reflect the statistical prevalence of angles and the effectiveness of signal prediction processing, the modes are intentionally designed to provide denser coverage of near-vertical and near-horizontal angles \cite{6316136}. As a result, near-vertical and near-horizontal textures are more important for intra prediction, which is also significant for the partition result. Therefore, it is necessary and meaningful for the CNN to pay more attention to such texture patterns. To this end, our neural network extends the first convolutional layer to three different branches. The first branch is a traditional convolutional layer with normal square kernels, while the remaining two branches have convolutional kernels of asymmetric shape that target near-vertical or near-horizontal textures. This structure enables the neural network to detect the texture features more effectively and efficiently. All three branches output equal-sized feature maps. Then, we concatenate them in depth and feed them into two convolutional layers with small kernels to learn the correlation between these features. The combination of these features helps the CNN better understand the content characteristics. Next, the extracted features flow through three fully connected layers to produce the final prediction. In our neural network, the activation function of all convolutional layers and hidden fully connected layers is the leaky rectified linear unit (LeakyReLU) with $\alpha = 0.25$, while the output layer is activated with the Softmax function. Fig. 
\ref{ak} presents the details of AK-CNN for block 64$\times$64, and the information about other AK-CNNs is given in Table \ref{tab-ak}. \begin{table}[!h] \renewcommand{\arraystretch}{1.25} \centering \caption{Proposed AK-CNN structure} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{AK-CNN Configuration} \\ \hline \multicolumn{3}{|c|}{A} & \multicolumn{3}{c|}{B} \\ \hline \begin{tabular}[c]{@{}c@{}}conv\\ 7x3 -4\\ stride:2\end{tabular} & \begin{tabular}[c]{@{}c@{}}conv\\ 5x5 -8\\ stride:2\end{tabular} & \begin{tabular}[c]{@{}c@{}}conv\\ 3x7 -4\\ stride:2\end{tabular} & \begin{tabular}[c]{@{}c@{}}conv\\ 5x1 -4\\ stride:1\end{tabular} & \begin{tabular}[c]{@{}c@{}}conv\\ 3x3 -8\\ stride:1\end{tabular} & \begin{tabular}[c]{@{}c@{}}conv\\ 1x5 -4\\ stride:1\end{tabular} \\ \hline \multicolumn{3}{|c|}{concatenate} & \multicolumn{3}{c|}{concatenate} \\ \hline \multicolumn{3}{|c|}{\begin{tabular}[c]{@{}c@{}}conv 3x3 -32\\ stride:2\end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}conv 3x3 -32\\ stride:2 (1) \end{tabular}} \\ \hline \multicolumn{3}{|c|}{\begin{tabular}[c]{@{}c@{}}conv 3x3 -32\\ stride:2\end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}conv 3x3 -32\\ stride:2\end{tabular}} \\ \hline \multicolumn{3}{|c|}{$FC_1$: 96} & \multicolumn{3}{c|}{$FC_1$: 96} \\ \hline \multicolumn{3}{|c|}{$FC_2$: 16} & \multicolumn{3}{c|}{$FC_2$: 16} \\ \hline \multicolumn{3}{|c|}{$FC_3$: 2} & \multicolumn{3}{c|}{$FC_3$: 2} \\ \hline \end{tabular} \begin{tablenotes} \item Network A is for block 32$\times$32, while network B is for 16$\times$16 and 8$\times$8. \end{tablenotes} \vspace{-0.cm} \label{tab-ak} \end{table} Based on the AK-CNN, redundant RDO calculations are avoided, which could lead to a large computational complexity reduction. \subsection{Evolution Optimized Threshold Decision} For any classifier, it could not be absolutely accurate under all circumstances, and a false prediction will result in coding performance degradation. 
Therefore, we adopt a confidence threshold scheme to ameliorate this issue. For the AK-CNN, the output Softmax value of the chosen action (splitting or nonsplitting) can be considered the confidence of a classification trial. In general, the larger the value, the more confident the classifier is. Therefore, by setting a threshold on the Softmax value, we can decrease the false prediction rate within the subset with large Softmax values, which we define as the confident classifications. Uncertain classification results (those that do not pass the threshold) are not adopted; those blocks are checked with RDO calculations as normal. Fig. \ref{th} shows the prediction accuracy and the ratio of confident classifications as the threshold varies for four QPs at depth 3 (from the 8$\times$8 PU to the 4$\times$4 PU) in our validation set. \begin{figure}[!h] \centerline{\includegraphics[scale=.168]{th.pdf}} \caption{Relationship between threshold and accuracy/ratio in depth 3 in our validation set.} \label{th} \end{figure} The setting of the threshold value is important: a large threshold leads to high prediction accuracy (small coding performance degradation) but a small complexity reduction rate, since many blocks must still be checked. By setting different thresholds, we can achieve a scalable complexity reduction scheme. However, how to achieve the best RD-complexity curve remains an open question. As we need to set one threshold for each of the four depths, this is a multiple-variable optimization problem. The complexity reduction rate depends on the ratio of fast decisions, and the RD performance depends on the prediction accuracy, both of which rely on the thresholds. 
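The confidence gating described above is simple to state in code. A sketch under our own naming, where returning `None` means the block falls back to the normal RDO check:

```python
def confident_decision(softmax_probs, threshold):
    """Adopt the AK-CNN's decision only if the Softmax value of the
    chosen action passes the confidence threshold; otherwise return
    None so the encoder checks the block with RDO as normal."""
    confidence = max(softmax_probs)
    if confidence >= threshold:
        return softmax_probs.index(confidence)  # e.g. 0: nonsplit, 1: split
    return None
```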
Therefore, we can assume the complexity reduction rate $C$ and RD performance degradation $R$ satisfy the following: \begin{equation} \begin{cases} & C = f_1(th_1,th_2,th_3,th_4) \\ \\ & R = f_2(th_1,th_2,th_3,th_4)\ \end{cases} \end{equation} where $th_{i}$ denotes the threshold value at depth $i$. Our goal is to achieve higher complexity reduction rate $C$ and lower performance degradation $R$. To reach this target, we need to find a group of combinations of the threshold values. Each combination of thresholds $th$ in this group should satisfy the following expression for all other $th'$: \begin{equation} C(th)>C(th')\quad \quad \quad if\ R(th)\geq R(th'). \end{equation} Such $th$ point is called a \textit{Pareto optimal point}\cite{hochman1969pareto}. Any improvement in a Pareto optimal point in one objective has to result in deterioration in the other objective \cite{miettinen2012nonlinear}. The aforementioned joint optimization of the complexity reduction rate and RD performance by adjusting four threshold values is a really challenging task. To solve such a problem, we introduce the evolutionary algorithm (EA), which is naturally good at dealing with noncontinuous and nonconvex optimization problems. Specifically, we adopt the multi-objective evolutionary algorithm based on decomposition (MOEA/D\cite{zhang2007moea}). It decomposes an optimization problem into several scalar subproblems, which could save the computation and be used to deal with disparately scaled objectives. In this paper, multiple chromosomes, representing distinct threshold values, are evolved simultaneously subject to complexity reduction rate $C$ and RD performance degradation $R$ as two adversarial objectives, which are the two evaluation metrics of this task. 
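Two numerical ingredients of this search are easy to sketch: the dominance test behind the Pareto set (with $C$ maximized and $R$ minimized) and the Tchebycheff scalarisation commonly used by MOEA/D. Both functions and the toy points below are our own illustration:

```python
def pareto_front(points):
    """Keep the (C, R) pairs not dominated by any other pair, where a
    higher complexity reduction C and a lower RD degradation R are
    both preferred."""
    return [(c, r) for c, r in points
            if not any(c2 >= c and r2 <= r and (c2 > c or r2 < r)
                       for c2, r2 in points)]

def tchebycheff(f, weights, z):
    """Tchebycheff aggregation for one scalar subproblem:
    g^te(x | lambda, z) = max_j lambda_j * |f_j(x) - z_j|,
    with lambda the weight vector and z the reference point."""
    return max(w * abs(fj - zj) for w, fj, zj in zip(weights, f, z))
```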
During the iterations of evolutionary algorithm, we set the range of threshold values to satisfy the following condition to obtain a solid result based on accuracy $acc$: \begin{equation} 80\% \leq acc(th_i) \leq 98\% \quad \quad for\ 0\leq i \leq 3. \end{equation} The initial population of our evolutionary algorithm is generated from uniform sampling, which denotes several combinations of threshold values. Then, we allocate them with weight vectors $\lambda_j$ to set up $N_{sub}$ subproblems. In the crossover phase, we adopt the differential evolution operator to generate new individuals. We also perform point mutation to modify the newly generated individuals. After that, the complexity reduction rate $C$ and RD performance degradation $R$ are calculated as the evaluation indicator of each individual. If the individual among $m_z$ points is not a dominated variable, then we update both the object and output. Tchebycheff aggregation function \cite{miettinen2012nonlinear} is used as the scalar optimization function $g^{te}$. Algorithm \ref{alg} shows the details of the whole iterations of EOTD. \begin{algorithm}[h] \caption{Evolution-based Approach for Threshold Values} \begin{algorithmic}[1] \Require \begin{itemize} \item $f(x)$: multi-objective functions; \item $SC$: stopping criterion; \item $V$: validation set; \item $g^{te}$: scalar optimization function. \end{itemize} \Ensure Pareto set of threshold values. \State \textbf{Step 1 Initialization:} \State Initialize the population $x$, compute $f(x)$ in $V$, and obtain the reference point $z$. \State \textbf{Step 2 Update:} \For{$i = 1,2,$...$,N_{sub}$} \State \textbf{Step 2.1 Reproduction and mutation: } \State Using the differential evolution operator to generate a \State new combination of threshold values $x'$, and then, \State apply point mutation on it \State \textbf{Step 2.2 Update of $z$:} \For{$j = 1,2,$...$,m_{z}$} \State if $f_j(x')<z_j$, then set $z_j=f_j(x')$. 
\EndFor \State \textbf{Step 2.3 Update of neighborhood solutions:} \For{each neighborhood index $j$} \State if $g^{te}(x|{\lambda}_{j},z)\leq g^{te}(x_j|{\lambda}_{j},z)$, \State then set $x_j = x$ and $f(x_j) = f(x)$. \EndFor \EndFor \State \textbf{Step 3 Stopping condition:} \State If $SC$ is satisfied or the boundary is reached, then stop and output $x$. Otherwise, go to \textbf{Step 2}. \end{algorithmic} \label{alg} \end{algorithm} \subsection{Variant QP Adaption} Due to the requirement of variant QP settings in the video encoding configuration, our fast algorithm also needs to adapt to the variant QPs. Previous CNN-based approaches (such as \cite{8384310}) always take the QP as an input of the neural network and use one model to adapt to all QPs. However, this leads to two main problems: \begin{enumerate} \item \textbf{Large complexity}: using one CNN to learn the relationship between the partition patterns and the blocks for all QPs precisely needs a higher number of parameters in the network (high learning ability), which brings high complexity in every single classification trial. \item \textbf{Low precision}: it is difficult for such a single model to learn the partition patterns for every QP precisely. Instead, the model tends to predict the common results among different QPs. Therefore, the prediction precision declines, resulting in large coding performance degradation. Furthermore, most of the existing methods only use the training data of QPs in common settings, like \{22,27,32,37\}, as input. As a consequence, for the data not in this set (\textit{e.g.}, QP = 25), the CNN cannot predict it well. \end{enumerate} In this paper, we introduce a novel scheme to employ four light CNNs to cover the variant QP range, instead of one large model. The basic idea is to use the models of the two nearest QPs to cooperatively predict the partition patterns for the chosen QP. 
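This two-anchor blending can be sketched as follows, with the combination coefficients obtained by linear interpolation of the splitting rates of the two nearest anchor QPs (function names are ours, not from the paper):

```python
def blend_coeffs(p_q, p_m, p_n):
    """Combination coefficients for target QP q given the splitting
    rates p_m and p_n of its two nearest anchor QPs:
    a = (p_q - p_n) / (p_m - p_n), and b = 1 - a."""
    a = (p_q - p_n) / (p_m - p_n)
    return a, 1.0 - a

def interpolate_prediction(y_m, y_n, a, b):
    """Blend the Softmax vectors of the two anchor-QP AK-CNNs to
    obtain the prediction vector for the target QP."""
    return [a * vm + b * vn for vm, vn in zip(y_m, y_n)]
```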
First, we train four light AK-CNNs at the anchor QPs, \textit{i.e.}, QPs in \{22,27,32,37\}. For these QPs, the AK-CNNs output the partition result precisely with low computational complexity. Then, we use these four AK-CNNs to generalize to other QPs based on the relationship among the splitting rates of different QPs. We denote the splitting rate of QP $q$ in depth $i$ as $p_{i}^{q}$, which indicates the probability that a block in depth $i$ is further split. It is calculated as follows: \begin{equation} p_{i}^{q}= \frac{\sum_{k=i+1}^{4}S_k}{\sum_{k=i}^{4}S_k} \quad when\ QP=q \end{equation} where $S_k$ is the sum of pixel numbers of all blocks in depth $k$. The splitting rate in our validation set is shown in Fig. \ref{split}, from which we can observe that the splitting rate of every QP can be closely approximated by a linear combination of the splitting rates of the two nearest QPs. \begin{figure}[] \vspace{-0.2cm} \centerline{\includegraphics[scale=.14]{prob.png}} \caption{Splitting rate among different QPs.} \label{split} \end{figure} Furthermore, the prediction probability vector (Softmax value) yielded by the AK-CNNs can also be considered a special form of splitting rate for a specific block. Therefore, we can obtain the prediction probability vectors for other QPs by combining the vectors of the anchor QPs. The combination coefficients can be calculated from the equation set of the splitting rate: \begin{equation} \begin{cases} & p_{i}^{q} = a_{i}^{q}\cdot p_{i}^{m}+b_{i}^{q}\cdot p_{i}^{n} \\ \\ & a_{i}^{q} + b_{i}^{q} = 1 \end{cases} \end{equation} where $m$ and $n$ are the two nearest QPs of $q$. By solving this equation set, we can obtain the combination coefficients $a_{i}^{q}$ and $b_{i}^{q}$ for QP $q$ in depth $i$: \begin{equation} \begin{cases} & a_{i}^{q} = (p_{i}^{q} - p_{i}^{n})/(p_{i}^{m} - p_{i}^{n}) \\ \\ & b_{i}^{q} = (p_{i}^{m} - p_{i}^{q})/(p_{i}^{m} - p_{i}^{n}). 
\end{cases} \end{equation} Such coefficients are stored as prior information for further usage. We can obtain the prediction vectors $y_{i}^{q}$ of QP $q$ by interpolation as follows: \begin{equation} y_{i}^{q} = a_{i}^{q}\cdot y_{i}^{m}+b_{i}^{q}\cdot y_{i}^{n}. \end{equation} \section{Fast Intra-Mode Decision} \subsection{Minimum Number of RDO Candidates} For every PU, although the three-step fast intra-mode decision scheme reduces the number of modes for RDO, there is still room for further improvement. The final number of candidate modes (3 for PUs from 64$\times$64 to 16$\times$16 and 8 for 8$\times$8 and 4$\times$4) guarantees coding performance since all these candidate modes need to go through RDO calculation to get the best mode. However, it is unreasonable to indiscriminately perform the same RDO calculation procedure on all PUs in one depth, which results in many redundant computations. In this paper, we define the position (rank) of the best mode in the candidate list generated from the RMD procedure as the minimum number of RDO candidates (MNRC), which can be considered as a flexible threshold to maintain the RD performance. To investigate the characteristics of MNRC, we obtain its distribution from sequence BQMall, which is shown in Fig. \ref{mnrc}. It clearly illustrates that there are still lots of redundancies (for MNRC \textless \ 3) based on the existing three-step scheme. \begin{figure}[h] \vspace{-0.3cm} \centerline{\includegraphics[scale=.4]{dis_mnrc-eps-converted-to.pdf}} \caption{The MNRC distribution of PUs in depth 0$\sim$2 for sequence BQMall in class C when QP=32.} \label{mnrc} \end{figure} \begin{figure*}[!h] \centerline{\includegraphics[scale=.42]{MNRC.pdf}} \vspace{-0.3cm} \centering \caption{A specific case of MNRC. 
Complex blocks are likely to be difficult for intra prediction; thus, their MNRCs tend to be larger.} \label{mnrc-sample} \end{figure*} If the MNRC of every PU could be obtained in advance, then we could adaptively set the number of RDO candidate modes for the PU accordingly; thus, the encoding complexity could be reduced without degrading the RD performance. The question then becomes how to obtain the MNRC accurately. Fig. \ref{mnrc-sample} gives an example of the MNRCs corresponding to PUs. We can observe that complex blocks (difficult for intra prediction) tend to have larger MNRCs, while simple, flat blocks have smaller MNRCs. Therefore, in this paper, we assume that the MNRC is strongly related to the content of the block. Thus, it is possible and reasonable to predict the MNRC from the block content itself in advance. \subsection{Expectation Regression Model} We now need to model the relationship between the MNRC and the block itself. As in the fast CU/PU size decision above, we employ a CNN as the predictor. \begin{table}[] \renewcommand{\arraystretch}{1.5} \caption{Designed number of candidate modes for RDO in three categories} \centering \footnotesize \begin{tabular}{cccccc} \toprule & 64$\times$64 & 32$\times$32 & 16$\times$16 & 8$\times$8 & 4$\times$4 \\\midrule \multicolumn{1}{c|}{Class 1} & 1 & 1 & 1 & 2 & 2 \\ \multicolumn{1}{c|}{Class 2} & 2 & 2 & 2 & 5 & 5 \\ \multicolumn{1}{c|}{Class 3} & 3 & 3 & 3 & 8 & 8 \\\bottomrule \end{tabular} \label{tab-num} \end{table} For the construction of the model, the intuitive approach is to treat it as a traditional Softmax-based classification problem or to use a network with a single output node to regress the MNRC value. However, these methods make training extremely difficult and lack the ability to perform fine-grained refinement. Instead, we propose an expectation regression model to solve this problem.
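Before detailing the model, note how a predicted MNRC would be used at encoding time: the RMD candidate list is simply truncated at the predicted rank, so the RDO rounds beyond that rank are skipped. A minimal sketch (the function and variable names are ours, not part of the HM encoder):

```python
def rdo_candidates(rmd_list, predicted_mnrc):
    """Truncate the RMD candidate list to the predicted MNRC.

    rmd_list is assumed to be ordered by ascending RMD cost, and
    predicted_mnrc is the predicted rank of the eventual best mode.
    Keeping only the first predicted_mnrc entries removes redundant
    RDO rounds while (ideally) retaining the best mode.
    """
    n = max(1, min(predicted_mnrc, len(rmd_list)))  # keep at least one mode
    return rmd_list[:n]
```

With a perfect MNRC predictor, the best mode is always retained and every RDO round beyond its rank is saved; an underestimate is what harms RD performance.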
First, we reduce the number of alternative actions to 3 at all depths by grouping the 8 RDO candidate modes of 8$\times$8 and 4$\times$4 PUs into 3 categories, as reported in Table \ref{tab-num}, which alleviates the training difficulty. Then, CNNs are employed to regress the expectation vector of these three actions in different situations, which can be deemed a special case of fitting the action-value function (\textit{i.e.}, the Q-function \cite{sutton1998reinforcement}) in reinforcement learning. During the test phase, we choose the action with the maximal expectation value. The transformation from the MNRC label to the expectation value is reported in Table \ref{tab-expect}, for the following reasons: \begin{enumerate} \item For the correct action, complexity reduction and coding performance are both optimal, as we expect, so the expectation value is set to $1$. \item If the chosen action is lower than the ground-truth MNRC gear, it is detrimental to the RD performance, since too few modes go through the RDO calculation. This should be avoided, so we set the expectation value to 0 in this situation. \item In the last case, \textit{i.e.}, when the chosen action is higher than the ground-truth MNRC gear, there is no RD performance degradation but some extra complexity, so we set the value to $p$ or $q$, depending on the distance between the chosen action and the ground truth.
\end{enumerate} \begin{table}[h] \renewcommand{\arraystretch}{1.5} \caption{Expectation values according to the MNRC label and corresponding prediction action} \centering \footnotesize \begin{tabular}{cccc} \toprule & Pred 1 & Pred 2 & Pred 3 \\\midrule \multicolumn{1}{c|}{Class 1} & 1 & $p$ & $q$ \\ \multicolumn{1}{c|}{Class 2} & 0 & 1 & $p$ \\ \multicolumn{1}{c|}{Class 3} & 0 & 0 & 1 \\\bottomrule \end{tabular} \begin{tablenotes} \item \quad \quad \quad \quad \quad \, 0 $<$ $q$ $<$ $p$ $<$ 1 \end{tablenotes} \label{tab-expect} \end{table} By adjusting the parameters $p$ and $q$, we can obtain models that achieve different tradeoffs between complexity reduction and coding performance: low coding performance degradation can be achieved with a relatively low complexity reduction rate, while more complexity can be reduced at the expense of relatively worse coding performance. To validate its effectiveness, we train two types of CNNs for the fast intra-mode decision, a conservative setting and an aggressive setting, which focus on the coding performance and the complexity reduction, respectively. For the CNN structure, we adopt the AK-CNN from the fast CU/PU size decision. The network is identical except for the output layer, which has three output nodes (for 4$\times$4 PUs, given that the input carries very little information, we adopt a shallower, simpler network instead of the AK-CNN). In our work, the CNNs are trained in batches of size $N$. The expectation value vectors of the ground-truth labels and the prediction outputs are denoted as $y_{i}^{'}$ and $y_{i}$, respectively. The training loss of the fast intra-mode decision is defined as \begin{equation} L_{\text{intra-mode}} = \frac{1}{N}\sum_{i=1}^{N}(y_{i}^{'}-y_{i})^{2}.
\end{equation} During the test phase, the final action $a_{i}^{*}$ is picked as the position of the maximal value in the prediction output vector: \begin{equation} a_{i}^{*} = \mathop{\arg\max}_{a_{i}} y_{i}(a_{i}). \end{equation} \section{Experimental Results and Analysis} In this section, we describe the details of our experiments. Part A describes the dataset we established. In Parts B and C, we present the configuration of our experiments and the training details of our CNN models. The results and analysis of the fast CU/PU size decision and the fast intra-mode decision are detailed in Parts D and E, respectively. Then, we present the results of our overall framework in Part F. Next, we analyze the effectiveness of the proposed structure in Part G. Finally, the complexity analysis of our framework is given in Part H. \subsection{Extended HEVC Intra Coding (EHIC) Dataset} \begin{table*}[] \centering \renewcommand{\arraystretch}{1.33} \caption{Results for the JCT-VC test set of fast CU/PU size decision} \setlength{\tabcolsep}{2.3mm} \begin{tabular}{clcccccccccccc} \hline \multirow{2}{*}{Class} & \multicolumn{1}{c}{\multirow{2}{*}{Sequence}} & \multicolumn{2}{c}{TIP-16 \cite{7547305}} & \multicolumn{2}{c}{TCSVT-17 \cite{7457241}} & \multicolumn{2}{c}{TIP-18 \cite{8384310}} & \multicolumn{2}{c}{LFSD-1} & \multicolumn{2}{c}{LFSD-2} & \multicolumn{2}{c}{LFSD-3} \\ \cline{3-14} & \multicolumn{1}{c}{} & \begin{tabular}[c]{@{}c@{}}BD-BR\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\Delta$T\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}BD-BR\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\Delta$T\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}BD-BR\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\Delta$T\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}BD-BR\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\Delta$T\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}BD-BR\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\Delta$T\\ (\%)\end{tabular} &
\begin{tabular}[c]{@{}c@{}}BD-BR\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\Delta$T\\ (\%)\end{tabular} \\ \hline \multirow{2}{*}{A} & PeopleOnStreet & 2.27 & 61.8 & 0.63 & 40.9 & 2.37 & 61.0 & 0.06 & 41.6 & 2.10 & 71.6 & 3.61 & 77.8 \\ & Traffic & 2.35 & 60.7 & 0.61 & 42.4 & 2.55 & 70.8 & 0.05 & 42.7 & 1.95 & 73.0 & 3.31 & 79.9 \\ \rowcolor{mygray} \multicolumn{2}{c}{Class A Average} & 2.31 & 61.3 & 0.62 & 41.7 & 2.46 & 65.9 & 0.06 & 42.2 & 2.03 & 72.3 & 3.46 & 78.9 \\ \hline \multirow{5}{*}{B} & BasketballDrive & 3.49 & 70.9 & 0.49 & 54.5 & 4.27 & 76.3 & 0.09 & 47.8 & 2.56 & 76.3 & 4.26 & 81.6 \\ & BQTerrace & 2.26 & 62.4 & 0.43 & 39.1 & 1.84 & 64.7 & 0.12 & 43.4 & 1.97 & 69.4 & 3.36 & 75.9 \\ & Cactus & 2.59 & 60.4 & 0.54 & 40.7 & 2.27 & 61.0 & 0.07 & 42.9 & 1.95 & 72.8 & 3.33 & 78.8 \\ & Kimono & 2.47 & 62.6 & 0.63 & 58.9 & 2.59 & 83.5 & 0.07 & 54.6 & 1.55 & 83.6 & 2.30 & 86.1 \\ & ParkScene & 1.86 & 60.3 & 0.57 & 45.2 & 1.96 & 67.5 & 0.03 & 38.2 & 1.54 & 73.6 & 2.56 & 79.7 \\ \rowcolor{mygray} \multicolumn{2}{c}{Class B Average} & 2.53 & 63.3 & 0.53 & 47.7 & 2.59 & 70.6 & 0.08 & 45.4 & 1.91 & 75.1 & 2.89 & 80.4 \\ \hline \multirow{4}{*}{C} & BasketballDrill & 4.26 & 56.9 & 0.59 & 36.6 & 2.86 & 53.0 & 0.29 & 41.2 & 2.97 & 67.3 & 4.82 & 73.5 \\ & BQMall & 2.93 & 57.8 & 0.33 & 36.5 & 2.09 & 58.4 & 0.09 & 46.9 & 1.72 & 70.7 & 2.96 & 76.1 \\ & PartyScene & 2.19 & 51.1 & 0.40 & 28.1 & 0.66 & 44.5 & 0.04 & 42.5 & 0.82 & 63.3 & 1.52 & 68.9 \\ & RaceHorses & 2.08 & 56.2 & 0.53 & 42.7 & 1.97 & 57.1 & 0.13 & 44.0 & 1.95 & 70.7 & 3.13 & 76.3 \\ \rowcolor{mygray} \multicolumn{2}{c}{Class C Average} & 2.87 & 55.5 & 0.46 & 36.0 & 1.90 & 53.3 & 0.14 & 43.7 & 1.87 & 68.0 & 3.11 & 73.7 \\ \hline \multirow{4}{*}{D} & BasketballPass & 2.89 & 59.3 & 0.43 & 37.0 & 1.84 & 56.4 & 0.06 & 43.9 & 1.63 & 70.3 & 2.73 & 76.3 \\ & BlowingBubbles & 2.54 & 53.6 & 0.27 & 28.7 & 0.62 & 40.5 & 0.04 & 40.3 & 0.95 & 61.5 & 1.73 & 67.8 \\ & BQSquare & 1.48 & 52.6 & 0.32 & 33.2 & 0.91 & 45.8 
& 0.08 & 47.6 & 0.91 & 64.3 & 1.56 & 69.1 \\ & RaceHorses & 2.43 & 53.6 & 0.29 & 34.3 & 1.32 & 55.8 & 0.07 & 41.7 & 1.47 & 67.7 & 2.53 & 73.8 \\ \rowcolor{mygray} \multicolumn{2}{c}{Class D Average} & 2.34 & 54.8 & 0.33 & 33.3 & 1.17 & 49.6 & 0.06 & 43.4 & 1.24 & 66.0 & 2.14 & 71.8 \\ \hline \multirow{3}{*}{E} & FourPeople & 3.05 & 60.0 & 0.70 & 46.2 & 3.11 & 71.3 & 0.08 & 50.6 & 2.39 & 75.1 & 4.03 & 80.0 \\ & Johnny & 4.42 & 72.2 & 1.01 & 56.8 & 3.82 & 70.7 & 0.17 & 59.6 & 2.73 & 81.1 & 4.19 & 84.0 \\ & KristenAndSara & 3.12 & 69.3 & 0.70 & 53.4 & 3.46 & 74.8 & 0.11 & 56.8 & 2.28 & 79.1 & 3.78 & 82.6 \\ \rowcolor{mygray} \multicolumn{2}{c}{Class E Average} & 3.53 & 67.2 & 0.80 & 52.1 & 3.46 & 72.3 & 0.12 & 55.7 & 2.47 & 78.4 & 4.00 & 82.2 \\ \bottomrule[1.5pt] \rowcolor{mygray} \multicolumn{2}{c}{\textbf{Average}} & \textbf{2.54} & \textbf{60.1} & \textbf{0.53} & \textbf{42.0} & \textbf{2.25} & \textbf{61.8} & \textbf{0.09} & \textbf{45.9} & \textbf{1.86} & \textbf{71.7} & \textbf{3.10} & \textbf{77.1} \\ \bottomrule[1.5pt] \end{tabular} \begin{tablenotes} \item \ \ \ \ LFSD-1, LFSD-3 indicate the leftmost and rightmost points, i.e., the LR and HR modes, while LFSD-2 is for the optimal tradeoff (OT) point. \end{tablenotes} \label{tab-size} \end{table*} As CNNs are employed as classifiers in our work, a large amount of training data is required. Although the CU partition dataset CPH \cite{8384310} already exists, it does not provide labels for the PU partition or the intra-mode decision. Thus, we build a complete dataset for extended HEVC intra coding, namely, the EHIC dataset, covering both the fast CU/PU size decision and the intra-mode decision.
Both high-definition and low-definition raw images are collected from the Uncompressed Color Image Database (UCID) \cite{schaefer2003ucid}, the Raw Images Dataset (RAISE) \cite{dang2015raise} and the DIVerse 2K Resolution Image Dataset (DIV2K) \cite{agustsson2017ntire}, corresponding to the resolution range of the JCT-VC standard test set \cite{bossen2013test}. These images are randomly divided into a training set (90\%) and a validation set (10\%). Then, all the images are encoded by the HEVC reference software HM 16.9 with four QPs in \{22, 27, 32, 37\}. During encoding, we collect the ranking position of the final intra-prediction mode in the candidate list generated by the RMD procedure for every PU as the MNRC label. After encoding, we collect labels indicating the splitting or nonsplitting of all CUs for the training of the size decision. Next, the CU/PUs cropped from the images are combined with the corresponding labels to form the samples of our dataset. In addition, the RD-cost of each block is also collected during encoding, which can be used to further optimize the training process for the fast CU/PU size decision. Generally, the coding performance degradation caused by misclassifying different blocks also differs; therefore, it is not reasonable to treat the splitting decision as an indiscriminate classification problem. We define the coding performance loss $Loss_{RD}$ as follows: \begin{equation} \begin{cases} & Loss_{RD}=\frac{|J_{before} - J_{after}|}{J_{before} + J_{after}} \\ \\ & J=SSE+\lambda\cdot R_{total}\ \end{cases} \end{equation} Here, $J_{before}$ and $J_{after}$ are the RD-cost values of the block before and after splitting, respectively. Based on this indicator, we introduce a new loss function to optimize the training for the fast CU/PU size decision.
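For concreteness, $Loss_{RD}$ can be computed directly from the two RD-costs; a minimal sketch (the function name is ours):

```python
def rd_loss(j_before, j_after):
    """Relative RD-cost difference of a block before/after splitting.

    Each J is the usual RD-cost J = SSE + lambda * R_total.  The value
    lies in [0, 1): near 0 when splitting hardly matters, larger when a
    wrong split/nonsplit decision would be costly.
    """
    return abs(j_before - j_after) / (j_before + j_after)
```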
Similar to \textit{focal loss} \cite{lin2018focal}, the loss of a block with small $Loss_{RD}$ is assigned a small weight $w$. The optimized training loss function is designed as follows: \begin{equation} Loss_{size} = \begin{cases} & -w\sum\limits_{i}y_{i}^{'}\log(y_{i})\qquad \text{if}\ Loss_{RD}<th_{RD}\\ \\ & -\sum\limits_{i}y_{i}^{'}\log(y_{i}) \qquad \ \ \ \ \text{otherwise.} \end{cases} \end{equation} Here, $th_{RD}$ is a preset threshold that modifies the loss for RD performance, and $y_{i}^{'}$ and $y_{i}$ are the ground-truth labels and the prediction outputs, respectively. As a consequence, the AK-CNNs pay more attention to the blocks with large $Loss_{RD}$. The intuition behind this design is that the fast decision for blocks with small $Loss_{RD}$ is of little importance: either splitting or nonsplitting will cause little coding performance degradation. This method yields better performance. \subsection{Configuration of Experiments} In our experiments, all the schemes, including both the fast CU/PU size decision and the fast mode decision, are implemented in the HEVC reference software HM 16.9. In HM 16.9, the all-intra main configuration is used with the default configuration file \textit{encoder\_intra\_main.cfg} \cite{bossen2013test}, and four QP values in \{22,27,32,37\} are chosen to compress the frames. Our experiments are conducted on 18 video sequences of the JCT-VC standard test set. The coding performance is measured by the Bj{\o}ntegaard delta bit-rate (BD-BR) \cite{bjontegaard2001calculation} relative to the original HM 16.9, and the encoding time-saving rate $\Delta T$ is calculated as \begin{equation} \Delta T = \frac{T_{HM}-T_{test}}{T_{HM}} \end{equation} where $T_{test}$ is the encoding time of the proposed method in the test mode and $T_{HM}$ is the encoding time of the HM 16.9 anchor. All experiments are conducted on a computer with an Intel (R) Core (TM) i7-7700K CPU and 16 GB RAM.
An NVIDIA GeForce GTX 1080Ti GPU was used to accelerate the training process but was disabled when measuring the HEVC complexity reduction. The \textit{TensorFlow} \cite{abadi2016tensorflow} platform is used for our CNN models. \subsection{CNN Training Details} Here, we present the details of the CNN training. All the CNN models in our work are trained on the training set of the EHIC dataset, while the hyper-parameters are fine-tuned on the validation set. The convolutional layers and fully connected layers are all initialized from Gaussian distributions. To train these models, we use the Adam optimizer \cite{kingma2014adam} with an initial learning rate of 5$\times$$10^{-3}$ for 150 epochs. The learning rate is divided by 10 every 50 epochs. Dropout \cite{srivastava2014dropout} is added after all hidden fully connected layers to avoid overfitting, with the drop ratio set to 0.5. The batch size is 64 for the models of large CU/PUs (depth = 0, 1) and 256 for the models of small CU/PUs (depth = 2, 3, 4). \begin{figure}[h] \vspace{-0.2cm} \centerline{\includegraphics[scale=.35]{configurable.png}} \caption{Configurable tradeoffs arise from EOTD.
Hexagon points will be used for further comparison.} \vspace{-0.4cm} \label{config} \end{figure} \begin{table}[!h] \renewcommand{\arraystretch}{1.12} \caption{Ratio of fast skip in every depth for three modes} \centering \setlength{\tabcolsep}{2.2mm} \begin{tabular}{lllccc}\toprule & & & RD Check & Early Term & Early Split \\\midrule \multirow{20}{*}{LR} & \multirow{4}{*}{Class A} & Depth 0 & 17.2\% & 0.0\% & 82.8\% \\ & & Depth 1 & 55.2\% & 3.1\% & 41.7\% \\ & & Depth 2 & 76.4\% & 13.2\% & 10.4\% \\ & & Depth 3 & 69.1\% & 29.0\% & 1.9\% \\ \cline{2-6} & \multirow{4}{*}{Class B} & Depth 0 & 46.8\% & 0.2\% & 53.0\% \\ & & Depth 1 & 61.6\% & 8.8\% & 29.6\% \\ & & Depth 2 & 70.9\% & 16.3\% & 12.8\% \\ & & Depth 3 & 75.1\% & 22.4\% & 2.5\% \\ \cline{2-6} & \multirow{4}{*}{Class C} & Depth 0 & 17.0\% & 0.0\% & 83.0\% \\ & & Depth 1 & 29.3\% & 1.4\% & 69.3\% \\ & & Depth 2 & 60.0\% & 8.1\% & 31.9\% \\ & & Depth 3 & 73.8\% & 19.6\% & 6.6\% \\ \cline{2-6} & \multirow{4}{*}{Class D} & Depth 0 & 11.8\% & 0.0\% & 88.2\% \\ & & Depth 1 & 27.7\% & 0.8\% & 71.5\% \\ & & Depth 2 & 52.7\% & 6.2\% & 41.1\% \\ & & Depth 3 & 71.3\% & 17.0\% & 11.7\% \\ \cline{2-6} & \multirow{4}{*}{Class E} & Depth 0 & 42.5\% & 0.6\% & 56.9\% \\ & & Depth 1 & 54.2\% & 12.4\% & 33.4\% \\ & & Depth 2 & 63.5\% & 25.0\% & 11.5\% \\ & & Depth 3 & 63.4\% & 33.3\% & 3.3\% \\\midrule \multirow{20}{*}{OT} & \multirow{4}{*}{Class A} & Depth 0 & 0.0\% & 2.4\% & 97.6\% \\ & & Depth 1 & 5.4\% & 21.9\% & 72.7\% \\ & & Depth 2 & 18.4\% & 39.7\% & 41.9\% \\ & & Depth 3 & 23.8\% & 59.9\% & 16.3\% \\ \cline{2-6} & \multirow{4}{*}{Class B} & Depth 0 & 0.0\% & 6.1\% & 93.9\% \\ & & Depth 1 & 6.7\% & 40.5\% & 52.8\% \\ & & Depth 2 & 17.9\% & 41.3\% & 40.8\% \\ & & Depth 3 & 29.9\% & 51.3\% & 18.8\% \\ \cline{2-6} & \multirow{4}{*}{Class C} & Depth 0 & 0.0\% & 0.6\% & 99.4\% \\ & & Depth 1 & 2.9\% & 8.6\% & 88.5\% \\ & & Depth 2 & 13.0\% & 22.6\% & 64.4\% \\ & & Depth 3 & 26.9\% & 41.8\% & 31.3\% \\ \cline{2-6} & 
\multirow{4}{*}{Class D} & Depth 0 & 0.0\% & 0.1\% & 99.9\% \\ & & Depth 1 & 3.0\% & 8.2\% & 88.8\% \\ & & Depth 2 & 11.2\% & 19.0\% & 69.8\% \\ & & Depth 3 & 24.6\% & 36.5\% & 38.9\% \\ \cline{2-6} & \multirow{4}{*}{Class E} & Depth 0 & 0.0\% & 18.7\% & 81.3\% \\ & & Depth 1 & 5.1\% & 39.9\%& 55.0\% \\ & & Depth 2 & 16.4\% & 47.5\% & 36.1\% \\ & & Depth 3 & 22.6\% & 62.1\% & 15.3\% \\\midrule \multirow{20}{*}{HR} & \multirow{4}{*}{Class A} & Depth 0 & 0.0\% & 2.4\% & 97.6\% \\ & & Depth 1 & 0.0\% & 24.4\% & 75.6\% \\ & & Depth 2 & 0.9\% & 47.1\% & 52.0\% \\ & & Depth 3 & 10.1\% & 67.8\% & 22.1\% \\ \cline{2-6} & \multirow{4}{*}{Class B} & Depth 0 & 0.0\% & 6.1\% & 93.9\% \\ & & Depth 1 & 0.0\% & 43.9\% & 56.1\% \\ & & Depth 2 & 0.8\% & 48.3\% & 50.9\% \\ & & Depth 3 & 13.7\% & 60.9\% & 25.4\% \\ \cline{2-6} & \multirow{4}{*}{Class C} & Depth 0 & 0.0\% & 0.6\% & 99.4\% \\ & & Depth 1 & 0.0\% & 10.0\% & 90.0\% \\ & & Depth 2 & 0.7\% & 27.6\% & 71.7\% \\ & & Depth 3 & 11.1\% & 49.5\% & 39.3\% \\ \cline{2-6} & \multirow{4}{*}{Class D} & Depth 0 & 0.0\% & 0.1\% & 99.9\% \\ & & Depth 1 & 0.0\% & 9.7\% & 90.3\% \\ & & Depth 2 & 0.6\% & 23.0\% & 76.4\% \\ & & Depth 3 & 9.9\% & 43.5\% & 46.6\% \\ \cline{2-6} & \multirow{4}{*}{Class E} & Depth 0 & 0.0\% & 18.7\% & 81.3\% \\ & & Depth 1 & 0.0\% & 42.2\% & 57.8\% \\ & & Depth 2 & 0.8\% & 54.2\% & 45.0\% \\ & & Depth 3 & 9.8\% & 69.7\% & 20.5\% \\\bottomrule \end{tabular} \begin{tablenotes} \item Early Termination is abbreviated as Early Term in this table. \end{tablenotes} \vspace{-0.6cm} \label{tab-ratio} \end{table} \subsection{Performance Evaluation of CU/PU size decision} \textbf{Scalable complexity reduction.} First, we validate the scalable complexity reduction ability of our approach. As we use the confidence thresholds in four depths, different tradeoffs between RD performance and complexity reduction are achieved. 
Then, we obtain the best combinations of threshold values (\textit{Pareto optimal points}) from the evolutionary algorithm. Through these different combinations of thresholds, our framework offers fine-grained scalability in complexity reduction. Fig. \ref{config} shows the RD performance degradation and the complexity reduction, averaged over the four QPs in \{22,27,32,37\} and the 18 JCT-VC test sequences. From Fig. \ref{config}, we can observe that the tradeoff between RD performance and complexity reduction is well maintained: we can achieve almost perfect RD performance preservation with low complexity reduction, or remove a large percentage of the encoding complexity at the cost of a relatively larger BD-BR increase. This scalability enhances the usability of our approach: we can achieve different levels of coding complexity reduction according to different requirements on RD performance degradation. \begin{figure*}[!h] \centerline{\includegraphics[scale=.27]{QP.pdf}} \caption{Generalization capability among a large range of QPs.} \vspace{-0.2cm} \label{qp} \end{figure*} \textbf{Evaluation in terms of complexity reduction.} Next, we evaluate the performance of our learned fast size decision (LFSD) in terms of complexity reduction. We pick three points in Fig. \ref{config} (the hexagon marks) to represent our scalable fast approach. The leftmost and rightmost points achieve extreme RD performance preservation and extreme complexity reduction, and we call them the low RD-degradation (LR) and high RD-degradation (HR) modes, respectively. In addition, we pick a middle point to represent a balanced tradeoff between RD performance and complexity reduction, which we call the optimal tradeoff (OT) mode. We then compare them with three state-of-the-art methods \cite{7547305}, \cite{7457241} and \cite{8384310}, as reported in Table \ref{tab-size}.
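The \textit{Pareto optimal points} mentioned above can be extracted from any set of measured (BD-BR increase, time saving) pairs with a standard dominance filter; a simplified sketch (in our framework the candidate points come from the evolutionary search, and the names here are illustrative):

```python
def pareto_front(points):
    """Keep (BD-BR increase, time saving) pairs not dominated by another.

    A point dominates another if it has a lower-or-equal BD-BR increase
    and a higher-or-equal time saving, with at least one strict.
    """
    front = []
    # Sort by BD-BR ascending; break ties by larger time saving first.
    for bdbr, dt in sorted(points, key=lambda p: (p[0], -p[1])):
        if not front or dt > front[-1][1]:  # strictly better time saving
            front.append((bdbr, dt))
    return front
```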
In the LR mode, the fast algorithm is executed only when the AK-CNNs are exceedingly confident in their decisions, so it reduces the encoding complexity by merely 45.9\%, which is still better than \cite{7457241} ($\Delta T = 42.0\%$). In the OT and HR modes, the ratio of fast decisions is much higher; therefore, these two schemes reduce the encoding complexity by 71.7\% and 77.1\%, respectively. These results significantly outperform \cite{7547305} ($\Delta T = 60.1\%$) and \cite{8384310} ($\Delta T = 61.8\%$); the HR mode in particular saves an additional 17.0\% and 15.3\% of the encoding time. Since our approach implements the complete fast CTU partition operation, including the fast PU size decision, it achieves a lower HEVC intra coding complexity than the others. Here, we also analyze where and how our approach reduces the encoding complexity. During encoding, we count the ratio of blocks on which we apply a fast skip (either early split or early termination) or an RD check at the four depths for the above three modes, as reported in Table \ref{tab-ratio}; the results are averaged over the four QPs in \{22,27,32,37\}. For the LR mode, since our algorithm is very conservative, only 48.1\% of RD checks are omitted on average. Most of the fast skips occur at depth 0, as over 80\% of 64$\times$64 CUs are safely early-split, except in classes B and E. The sequences in these two classes contain plenty of flat background areas, so the probability of a large CU size is quite high; as a result, RD checks are needed for such blocks (although the sequences in class A are also high-definition, they contain very few large smooth blocks). In the aggressive HR mode, almost all the RD checks are skipped, with only some remaining for the small 8$\times$8 CUs owing to the difficulty of predicting them. Consequently, the HR mode reduces the encoding complexity dramatically.
To strike a balance, the OT mode skips most of the RD checks at high levels (large CU sizes) while performing the necessary RD checks for small blocks. Thus, it greatly reduces the complexity without incurring too much RD performance degradation. Clearly, our algorithm intelligently executes check or skip actions according to the characteristics of the content. \textbf{Evaluation on RD performance.} Next, we compare the RD performance with the other three approaches in terms of BD-BR. From Table \ref{tab-size}, we can observe that our LR mode incurs a mere 0.09\% BD-BR increase, almost the same as the HM 16.9 anchor and much better than \cite{7457241} (0.53\% increase). For the OT mode, the BD-BR increase is 1.86\% on average, which significantly outperforms \cite{7547305} (2.54\% increase) and \cite{8384310} (2.25\% increase). This promising result is owed to the high accuracy of the AK-CNNs and the confidence threshold control. For the HR mode, the BD-BR increase reaches 3.10\%: to meet the requirement of high complexity reduction, many nonconfident decisions are made to accelerate the encoding, so the RD performance degradation is relatively high.
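The check-or-skip behavior analyzed above amounts to a three-way rule per CU: early split, early termination, or a fall-back RD check. A minimal sketch (the threshold names are illustrative; in our framework they are tuned per depth by the evolutionary search):

```python
def size_decision(split_prob, th_split, th_term):
    """Three-way decision from the CNN splitting probability.

    Higher thresholds make the scheme more conservative, falling back
    to the full RD check more often (the LR mode); lower thresholds
    skip more RD checks (the HR mode).
    """
    if split_prob >= th_split:
        return "early_split"        # confident: split without RD check
    if 1.0 - split_prob >= th_term:
        return "early_termination"  # confident: stop without RD check
    return "rd_check"               # not confident: full RDO comparison
```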
\begin{table*}[!!h] \centering \renewcommand{\arraystretch}{1.35} \caption{Results for the JCT-VC test set of fast intra-mode decision and overall performance} \setlength{\tabcolsep}{2.3mm} \begin{tabular}{clcccccccccc} \hline \multirow{2}{*}{Class} & \multicolumn{1}{c}{\multirow{2}{*}{Sequence}} & \multicolumn{2}{c}{TCSVT-17 \cite{7457241}} & \multicolumn{2}{c}{TIP-18 \cite{8412615}} & \multicolumn{2}{c}{LFMD*} & \multicolumn{2}{c}{LFMD} & \multicolumn{2}{c}{LFHI} \\ \cline{3-12} & \multicolumn{1}{c}{} & \begin{tabular}[c]{@{}c@{}}BD-BR\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\Delta$T\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}BD-BR\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\Delta$T\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}BD-BR\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\Delta$T\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}BD-BR\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\Delta$T\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}BD-BR\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\Delta$T\\ (\%)\end{tabular} \\ \hline \multirow{2}{*}{A} & PeopleOnStreet & 0.29 & 16.5 & 0.50 & 19.3 & 0.38 & 25.1 & 0.19 & 20.9 & 2.35 & 75.5 \\ & Traffic & 0.15 & 17.6 & 0.50 & 16.5 & 0.37 & 24.6 & 0.19 & 21.2 & 2.20 & 76.8 \\ \rowcolor{mygray} \multicolumn{2}{c}{Class A Average} & 0.22 & 17.1 & 0.50 & 17.9 & 0.38 & 24.9 & 0.19 & 21.1 & 2.28 & 76.2 \\ \hline \multirow{5}{*}{B} & BasketballDrive & 0.12 & 16.3 & 0.50 & 17.3 & 0.36 & 26.2 & 0.19 & 22.9 & 2.78 & 79.0 \\ & BQTerrace & 0.15 & 17.4 & 0.40 & 16.8 & 0.17 & 26.7 & 0.08 & 21.7 & 2.08 & 73.3 \\ & Cactus & 0.13 & 12.3 & 0.50 & 18.5 & 0.32 & 25.1 & 0.16 & 21.2 & 2.14 & 75.7 \\ & Kimono & -0.01 & 15.6 & 0.10 & 19.8 & 0.21 & 26.4 & 0.12 & 23.2 & 1.67 & 84.5 \\ & ParkScene & 0.09 & 12.7 & 0.30 & 18.3 & 0.25 & 24.1 & 0.13 & 20.4 & 1.70 & 75.9 \\ \rowcolor{mygray} \multicolumn{2}{c}{Class B Average} & 0.10 & 14.9 & 0.36 & 18.1 & 0.26 & 25.7 & 0.14 & 21.9 & 2.07 & 77.7 \\ \hline 
\multirow{4}{*}{C} & BasketballDrill & 0.36 & 11.0 & 0.50 & 16.1 & 0.32 & 25.6 & 0.16 & 22.9 & 3.18 & 71.6 \\ & BQMall & 0.23 & 17.5 & 0.60 & 16.3 & 0.44 & 25.8 & 0.22 & 23.4 & 1.99 & 74.8 \\ & PartyScene & 0.23 & 12.3 & 0.80 & 17.3 & 0.42 & 24.3 & 0.20 & 20.6 & 1.07 & 68.5 \\ & RaceHorses & 0.10 & 15.0 & 0.40 & 18.0 & 0.26 & 24.0 & 0.12 & 21.3 & 2.16 & 73.0 \\ \rowcolor{mygray} \multicolumn{2}{c}{Class C Average} & 0.23 & 14.0 & 0.58 & 16.9 & 0.36 & 24.9 & 0.18 & 22.1 & 2.10 & 72.0 \\ \hline \multirow{4}{*}{D} & BasketballPass & 0.36 & 18.7 & 0.60 & 15.2 & 0.46 & 24.8 & 0.23 & 21.3 & 1.90 & 74.7 \\ & BlowingBubbles & 0.21 & 13.7 & 0.70 & 19.5 & 0.41 & 22.4 & 0.19 & 19.0 & 1.17 & 66.6 \\ & BQSquare & 0.52 & 12.6 & 0.80 & 18.9 & 0.45 & 23.5 & 0.21 & 20.5 & 1.20 & 69.4 \\ & RaceHorses & 0.37 & 13.7 & 0.60 & 18.5 & 0.41 & 22.4 & 0.19 & 19.7 & 1.69 & 71.1 \\ \rowcolor{mygray} \multicolumn{2}{c}{Class D Average} & 0.37 & 14.7 & 0.68 & 18.0 & 0.43 & 23.3 & 0.21 & 20.1 & 1.49 & 70.5 \\ \hline \multirow{3}{*}{E} & FourPeople & 0.11 & 22.6 & 0.50 & 17.1 & 0.43 & 27.1 & 0.21 & 25.2 & 2.65 & 78.2 \\ & Johnny & 0.28 & 21.6 & 0.50 & 17.5 & 0.46 & 26.5 & 0.26 & 25.1 & 3.05 & 83.0 \\ & KristenAndSara & 0.17 & 23.5 & 0.50 & 17.4 & 0.44 & 25.8 & 0.25 & 24.7 & 2.58 & 81.6 \\ \rowcolor{mygray} \multicolumn{2}{c}{Class E Average} & 0.19 & 22.6 & 0.50 & 17.3 & 0.44 & 26.5 & 0.24 & 25.0 & 2.76 & 80.9 \\ \bottomrule[1.5pt] \rowcolor{mygray} \multicolumn{2}{c}{\textbf{Average}} & \textbf{0.21} & \textbf{16.1} & \textbf{0.52} & \textbf{17.7} & \textbf{0.36} & \textbf{25.0} & \textbf{0.18} & \textbf{22.0} & \textbf{2.09} & \textbf{75.2} \\ \bottomrule[1.5pt] \end{tabular} \begin{tablenotes} \item LFMD* indicates our aggressive scheme, while LFMD is the conservative scheme. LFHI is our overall framework, which is the assembled system of LFSD and LFMD.
\end{tablenotes} \vspace{-0.3cm} \label{tab-mode} \end{table*} \textbf{Generalization capability at different QPs.} In addition to the four QPs evaluated above (QP = 22, 27, 32, 37), we further test our approach for reducing the complexity of HEVC intra coding at the other QPs in [22, 37]. To verify the generalization capability of our approach, we use the proposed interpolation-based scheme. Fig. \ref{qp} illustrates the bit-rate difference, PSNR loss and encoding time reduction of our approach at different QPs; the results are averaged over the 18 JCT-VC test video sequences. In this figure, the hexagon marks denote the test results at the four anchor QPs, whereas the circle marks represent the test results at the 12 interpolated QPs. We can see that the performance at the interpolated QPs is very stable: the three curves exhibit excellent monotonicity with minuscule fluctuations, and the interpolated QPs are difficult to distinguish from the anchors. This performance is much better than that of works using one large CNN targeting all QPs, such as \cite{8384310}. As for the CNN feed-forward time, although we have to run the CNN twice for the interpolated QPs, our work still performs well in terms of complexity reduction owing to our extremely light network structure (the inference time accounts for less than 1\% of the total encoding time of the HM 16.9 anchor). Clearly, our interpolation-based design better satisfies the requirements of different QPs and is thus more effective. Furthermore, we provide the RD curves of our approach and the HM 16.9 anchor for two JCT-VC sequences: the high-definition sequence PeopleOnStreet in Class A and the low-definition sequence BQMall in Class C. Fig. \ref{curve} shows a comparison of the curves. From this figure, we can observe that the difference in RD performance between the HM 16.9 anchor and our approach is very small at all bit-rate points.
This shows that our approach adapts admirably to different bit-rate points, for both high-definition and low-definition sequences. \begin{figure}[!!h] \vspace{-0.4cm} \centerline{\includegraphics[scale=.2]{comp.pdf}} \caption{RD curve comparison. (a) PeopleOnStreet in Class A. (b) BQMall in Class C.} \vspace{-0.2cm} \label{curve} \end{figure} \subsection{Performance Evaluation of Mode Decision} \textbf{Evaluation on RDO rounds.} First, we examine the number of modes for RDO in our approach. As we train the AK-CNNs for the fast intra-mode decision in two settings, we apply both the conservative and the aggressive scheme to two JCT-VC test sequences, Traffic in Class A and RaceHorses in Class D, and then compare them with the original HM 16.9. The average number of candidate modes for RDO (MPMs are not included) is shown in Fig. \ref{rdot}. We find that the final number of RDO modes is curtailed by a large percentage, which leads to a significant reduction in complexity. Furthermore, the number of RDO modes at every depth differs among sequences and is highly correlated with the content, showing the adaptiveness of our scheme. \begin{figure*}[!!h] \centerline{\includegraphics[scale=.16]{rdot.pdf}} \vspace{-0cm} \caption{Comparison of the number of RDO modes between HM and our scheme. (a) Traffic in class A. (b) RaceHorses in class D. Slanted lines indicate HM, while dots and horizontal lines are for the conservative and aggressive schemes, respectively.} \label{rdot} \vspace{-0.3cm} \end{figure*} \textbf{Evaluation in terms of complexity reduction.} Next, we evaluate the performance of our learned fast mode decision (LFMD) in terms of complexity reduction. We implement both the conservative and aggressive schemes on the 18 JCT-VC test sequences and then compare them with two state-of-the-art methods \cite{7457241} and \cite{8412615}, as reported in Table \ref{tab-mode}.
Our conservative scheme reduces the encoding time by 22.0\%, better than both \cite{7457241} ($\Delta T = 16.1\%$) and \cite{8412615} ($\Delta T = 17.7\%$). As for the aggressive scheme, it reduces the number of modes for RDO by a larger percentage, so its encoding time saving is higher, reaching 25.0\%. This performance significantly exceeds that of the other two methods, saving an additional 8.9\% and 7.3\% of encoding time, respectively. Our approaches reduce complexity more than the other two methods because the heuristic approaches \cite{7457241} and \cite{8412615} only consider certain situations, such as homogeneous, vertical-texture and horizontal-texture blocks. In contrast, our data-driven approach classifies blocks more comprehensively and thus removes more modes from RDO. Therefore, our approach improves the efficiency of HEVC intra coding. \textbf{Evaluation on RD performance.} Then, we evaluate our approach in terms of RD performance, measured by BD-BR. Table \ref{tab-mode} tabulates the BD-BR results, with the original HM 16.9 as the anchor. Our conservative scheme incurs a BD-BR increase of only 0.18\%, better than \cite{8412615} (0.52\% increase) and \cite{7457241} (0.21\% increase), owing to the high precision of the MNRC mechanism and the AK-CNN predictors. Our aggressive scheme removes more candidates from RDO; thus, its BD-BR increase reaches 0.36\%. More importantly, we calculate the standard deviations of the BD-BR increases of the four approaches. Both our conservative scheme ($std = 0.05\%$) and aggressive scheme ($std = 0.09\%$) show smaller fluctuations than \cite{8412615} ($std = 0.16\%$) and \cite{7457241} ($std = 0.13\%$) over the 18 test sequences, which indicates that our CNN-based approach has better generalization capability than the two heuristic methods.
It is also very interesting to see how our algorithm preserves the coding performance with so many RDO candidate modes removed. Here, we record the final chosen modes at all depths under three settings: 1. the original HM with the default configuration, 2. brute-force full search without the HM fast intra-mode algorithm, and 3. the proposed method with the conservative scheme. Taking the full search result as the reference, we report the similarities of the other two results in Table \ref{tab-simi}. \begin{table}[] \renewcommand{\arraystretch}{1.3} \caption{Similarity of chosen modes with the full search scheme} \centering \begin{tabular}{lllll}\toprule & \multicolumn{2}{c}{HM \textit{vs} FS} & \multicolumn{2}{c}{Ours \textit{vs} FS} \\ \hline & Traffic & RacHor & Traffic & RacHor \\\midrule \multicolumn{1}{l|}{Depth 0} & 64.5\% & \multicolumn{1}{l|}{69.5\%} & 59.6\% & 63.5\% \\ \multicolumn{1}{l|}{Depth 1} & 50.5\% & \multicolumn{1}{l|}{46.4\%} & 48.6\% & 44.5\% \\ \multicolumn{1}{l|}{Depth 2} & 47.6\% & \multicolumn{1}{l|}{46.5\%} & 46.7\% & 44.8\% \\ \multicolumn{1}{l|}{Depth 3} & 43.9\% & \multicolumn{1}{l|}{44.4\%} & 43.6\% & 43.6\% \\ \multicolumn{1}{l|}{Depth 4} & 39.9\% & \multicolumn{1}{l|}{38.5\%} & 39.5\% & 37.7\% \\\bottomrule \end{tabular} \begin{tablenotes} \item RaceHorses is abbreviated as RacHor, while FS stands for full search. \end{tablenotes} \vspace{-0.3cm} \label{tab-simi} \end{table} From this table, we can see that the similarities of both the original HM and our method are not very high, especially for small blocks. The main reason, we believe, is that different reconstructed pixels are used as reference, which is a very important factor for intra prediction. In other words, for a given block, the optimal intra mode can differ under different reconstructed pixels. Consequently, directly comparing the final chosen modes may not be an effective measure.
Here, we use the \textit{cover rate} to measure the prediction effectiveness, defined as the ratio of PUs for which the number of RDO rounds is not smaller than the MNRC. This value reveals the ability to find the optimal mode as effectively as the original HM under different reconstructed pixels. The cover rates of these two sequences are reported in Table \ref{tab-cover}. We can see that the cover rate of our scheme is rather high, which safeguards the RD performance. \begin{table}[h] \renewcommand{\arraystretch}{1.3} \caption{Cover rate of two sequences in all depths} \centering \begin{tabular}{@{}ccc@{}} \toprule \multicolumn{1}{l}{} & Traffic & RaceHorses \\ \midrule \multicolumn{1}{c|}{Depth 0} & 77.5\% & 78.4\% \\ \multicolumn{1}{c|}{Depth 1} & 87.6\% & 85.8\% \\ \multicolumn{1}{c|}{Depth 2} & 90.7\% & 90.1\% \\ \multicolumn{1}{c|}{Depth 3} & 96.4\% & 95.1\% \\ \multicolumn{1}{c|}{Depth 4} & 96.1\% & 95.4\% \\ \bottomrule \end{tabular} \vspace{-0.4cm} \label{tab-cover} \end{table} \subsection{Performance Evaluation of the Overall Approach} We assemble the fast CU/PU size decision (OT mode) and the fast mode decision (conservative scheme) into an overall system, since these two settings achieve a balanced trade-off between complexity reduction and RD performance. Then, we implement this system on the 18 JCT-VC test sequences. Table \ref{tab-mode} reports the BD-BR and time saving rate of the proposed overall system. For the RD performance, the overall system incurs a 2.09\% BD-BR increase, which is negligible and still better than \cite{7547305} and \cite{8384310}. The time saving rate reaches 75.2\%, corresponding to roughly a 4$\times$ encoder speed-up. \subsection{Analysis for Asymmetric Kernel} Previous studies such as \cite{dai2017deformable} explore convolutional kernels of different shapes for better recognition.
In this paper, we design special asymmetric kernels targeting the near-vertical and near-horizontal textures that are essential for intra coding. To demonstrate their effectiveness, we replace the asymmetric kernels with normal square kernels (\textit{e.g.}, 5$\times$9 with 7$\times$7). Using the same training strategy, we obtain a group of CNN models as reference. Then, we compare the performance on our validation set for fast size decision. Table \ref{tab-pred} reports the result, averaged over the four QPs. \begin{table}[!!h] \renewcommand{\arraystretch}{1.3} \caption{Prediction accuracy comparison} \centering \begin{tabular}{@{}ccc@{}} \toprule \multicolumn{1}{l}{} & Asymmetric kernel & Square kernel \\ \midrule \multicolumn{1}{c|}{Depth 0} & 92.51\% & 92.42\% \\ \multicolumn{1}{c|}{Depth 1} & 86.03\% & 85.74\% \\ \multicolumn{1}{c|}{Depth 2} & 81.87\% & 81.53\% \\ \multicolumn{1}{c|}{Depth 3} & 77.43\% & 77.06\% \\ \bottomrule \end{tabular} \label{tab-pred} \end{table} From Table \ref{tab-pred}, we can observe that the asymmetric kernels achieve better prediction accuracy than the normal square kernels. With the diversified features from the multibranch combination of asymmetric and square kernels, the AK-CNN models can better recognize the characteristics of the current block. As a consequence, the prediction performance is improved. \subsection{Model Complexity Analysis} \textbf{Computational complexity.} Employing CNNs in our algorithm introduces additional computational complexity. In fact, our CNN models for both tasks consume less than 1\% of the encoding time required by the anchor HM 16.9, and this overhead is included in the reported encoding time reduction. Here, we give a detailed analysis of the additional computational complexity of our CNN models by counting the number of floating-point operations, including additions and multiplications.
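Such a count can be sketched with generic per-layer formulas; the layer shapes in the example below are illustrative toy values, not those of the actual AK-CNN.

```python
def conv2d_flops(h_out, w_out, c_in, c_out, k_h, k_w):
    """FLOPs of a dense 2-D convolution layer, counting each
    multiply-accumulate as two operations (one mul plus one add)."""
    return 2 * h_out * w_out * c_out * c_in * k_h * k_w

def fc_flops(n_in, n_out):
    """FLOPs of a fully connected layer."""
    return 2 * n_in * n_out

# Toy branch: an 8x8 output map from a 3x3 conv (1 -> 16 channels),
# followed by a small 64 -> 2 classifier head.
flops = conv2d_flops(8, 8, 1, 16, 3, 3) + fc_flops(64, 2)
```

Summing such terms over every layer of a model yields the totals reported below.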
We calculate that the number of floating-point operations of our CNN models ranges from 1.28$\times$$10^{4}$ (depth = 4) to 6.35$\times$$10^{5}$ (depth = 0) in single precision (32-bit). This shows that our models perform at least four orders of magnitude fewer floating-point operations than common CNN models such as AlexNet \cite{krizhevsky2012imagenet} ($\sim$3$\times$$10^{9}$ floating-point operations) or VGG-16 \cite{simonyan2014very} ($\sim$4$\times$$10^{10}$ floating-point operations). For the fast CU/PU size decision part, an early termination mechanism is adopted: if the current CU/PU is not split, the inference of the CNNs for its sub-CU/PUs is skipped. For the fast mode decision part, CNN inference is only performed at the depth decided by the size decision part. As a result, a large proportion of the computational complexity of the CNNs is avoided. The inference ratio is defined as the percentage of blocks on which we need to perform prediction. Here, we provide the inference ratios of our CNN models at all depths for two sequences, the high-definition sequence BasketballDrive in Class B and the low-definition sequence RaceHorses in Class D, in Table \ref{tab-infer}.
\begin{table}[h] \renewcommand{\arraystretch}{1.3} \caption{Inference ratios of CNN models at all depths} \centering \begin{tabular}{ccccc} \toprule & \multicolumn{2}{c}{Size Decision} & \multicolumn{2}{c}{Mode Decision} \\ \hline & BasDri & RacHor & BasDri & RacHor \\ \midrule \multicolumn{1}{c|}{Depth 0} & 100.0\% & \multicolumn{1}{c|}{100.0\%} & 13.5\% & 3.1\% \\ \multicolumn{1}{c|}{Depth 1} & 86.5\% & \multicolumn{1}{c|}{96.9\%} & 39.6\% & 13.5\% \\ \multicolumn{1}{c|}{Depth 2} & 46.9\% & \multicolumn{1}{c|}{83.4\%} & 29.8\% & 28.1\% \\ \multicolumn{1}{c|}{Depth 3} & 17.1\% & \multicolumn{1}{c|}{55.3\%} & 13.4\% & 34.6\% \\ \multicolumn{1}{c|}{Depth 4} & - & \multicolumn{1}{c|}{-} & 3.7\% & 20.7\% \\ \bottomrule \end{tabular} \begin{tablenotes} \item BasketballDrive and RaceHorses are abbreviated as BasDri and RacHor, respectively. \end{tablenotes} \vspace{-0.1cm} \label{tab-infer} \end{table} More importantly, since our CNN models only make use of the pixels in the current CU/PU, without the need for any intermediate features during encoding, the prediction inference can be highly parallelized. The CPU test platform of our experiments, an Intel (R) Core (TM) i7-7700K CPU, supports 5.75$\times$$10^{11}$ single-precision floating-point operations per second \cite{intel}, which exceeds the computational demand of our CNN models by a large margin. Thus, we are able to set a large batch size for the input blocks. Based on the highly efficient deep learning framework TensorFlow, such a calculation can be accelerated with general matrix multiplication (GEMM) \cite{lawson1979basic}. As a result, the inference time of our CNN models is greatly reduced, to less than 1\% of the encoding time of the anchor HM 16.9. Compared with the large encoding time reduction it enables, our approach introduces very little time overhead in alleviating the complexity of HEVC intra coding.
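The batching idea can be illustrated with a toy fully connected layer: stacking flattened blocks into one matrix turns many small products into a single GEMM. The naive pure-Python matmul below only mirrors the shape of the computation that an optimized GEMM kernel would perform; all values are toy data.

```python
def matmul(a, b):
    """Naive GEMM: multiply an (n x k) matrix by a (k x m) matrix."""
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][t] * b[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

# Stacking a batch of B flattened blocks into one (B x k) matrix turns B
# small products into a single (B x k) @ (k x m) GEMM call -- the layout a
# framework maps to one optimized kernel. Shapes here are toy values.
batch = [[1.0, 2.0], [3.0, 4.0]]       # B = 2 blocks, k = 2 features each
weights = [[0.5, -1.0], [0.25, 1.0]]   # k x m fully connected weights
out = matmul(batch, weights)           # one (2 x 2) @ (2 x 2) product
```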
In addition, with dedicated hardware support, such as GPUs or field-programmable gate arrays (FPGAs), the inference time of our CNN models can be reduced even further. When running on our GPU test platform, the inference is accelerated by hundreds of times. With increasingly many terminals equipped with such hardware, we believe that our CNN-based approach can be truly competitive in the future. \textbf{Space complexity.} The space complexity is also very important, which is why we use shallow and thin CNNs. Indeed, the network structure in our work is very lightweight. For the fast CU/PU size decision, the number of parameters of the AK-CNNs ranges from 4.33$\times$$10^{4}$ to 4.40$\times$$10^{4}$, far fewer than AlexNet (6.10$\times$$10^{7}$) and VGG-16 (1.38$\times$$10^{8}$). Table \ref{tab-param} presents a comparison between our AK-CNNs and a similar work \cite{8019316} in terms of the number of parameters. \begin{table}[!h] \vspace{-0.1cm} \renewcommand{\arraystretch}{1.3} \caption{Comparison of the number of parameters} \centering \begin{tabular}{@{}ccc@{}} \toprule \multicolumn{1}{l}{} & Ours & \cite{8019316} \\ \midrule \multicolumn{1}{c|}{Depth 0} & 43,986 & 1,384,464 \\ \multicolumn{1}{c|}{Depth 1} & 43,602 & 1,160,208 \\ \multicolumn{1}{c|}{Depth 2} & 43,346 & 1,104,144 \\ \multicolumn{1}{c|}{Depth 3} & 43,346 & None \\ \bottomrule \end{tabular} \label{tab-param} \end{table} \section{Conclusions} In this paper, we propose the LFHI framework to reduce the complexity of HEVC intra coding while maintaining the RD performance. A novel network structure with multibranch asymmetric convolutional kernels (\textit{i.e.}, AK-CNN) is proposed to better extract texture features from blocks without excessive complexity. Then, we introduce the MNRC as a new concept for fast intra-mode decision, so that the candidates for RDO can be reduced.
The EOTD scheme is used to achieve configurable complexity-efficiency trade-offs to meet different needs. In addition, we design an interpolation-based prediction method to deal with the problem of varying QPs. To meet the need for training data, we establish the EHIC dataset, with which offline training can be conducted efficiently. Compared with the original HM 16.9, our approach reduces the encoding time by 75.2\% with a negligible 2.09\% BD-BR increase over the JCT-VC standard test sequences. In future work, we will consider extending our scheme to H.266/VVC for further optimization. The partition pattern in VVC is much more complicated than in HEVC, so directly applying the separate prediction scheme is not suitable; we will attempt to find an appropriate solution to this problem. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} The past two decades have shown an increasing research effort in networked dynamical systems. To a large extent this increase has been caused by technological developments such as the emergence of the internet and the growing relevance of smart power grids. The spreading interest in social networks and biological systems has also contributed to this surge \cite{Newman2003,ME2010,liu2016control,Ruths2014}. A fundamental issue in networked systems is that of controllability. This issue deals with the question whether all parts of the global network can be adequately influenced or manipulated by applying control inputs only locally to the network. A vast amount of literature has been devoted to several variations on this issue, see \cite{tanner2004,LCWX2008,RJME2009,LYB2011,EMCCB2012,WCT2017} and the references therein. In most of the literature, a networked system is a collection of input-state-output systems, called agents, together with an interconnection structure between them. Some of these systems can also receive input from outside the network, and are called leaders. The remaining systems are called followers. At a higher level of abstraction, a networked system can be described by a directed graph, called the network graph, where the vertices represent the input-state-output systems and the edges represent the interactions between them. Controllability of the networked system then deals with the question whether the states of all agents can be steered from any initial state to any final state in finite time by applying suitable input signals to the network through the leaders. Using the observation that the underlying graph plays an essential role in the controllability properties of the networked system \cite{EMCCB2012}, an increasing amount of literature has been devoted to uncovering this connection, see \cite{GP2013,MM2013,CM2014} and the references therein.
In order to be able to zoom in on the role of the network graph, it is common to proceed with the simplest possible dynamics at the vertices of the graph, and to take the agents to be single integrators, with a one-dimensional state space. These single integrators are interconnected through the network graph, and the interconnection strengths are given by the weights on the edges. Based on this, the overall networked system can obviously be represented by a linear input-state-output system of the form \[ \dot{x} = Ax + Bu, \] where the system matrix $A \in \mathbb{R}^{n \times n}$ represents the network structure with the given edge weights, and the matrix $B \in \mathbb{R}^{n \times m}$ encodes which $m$ vertices are the leaders. The $n$-dimensional state vector $x$ consists of the states of the $n$ agents, and the $m$-dimensional vector $u$ collects the input signals to the $m$ leader vertices. Roughly speaking, the research on network controllability based on the above model can be subdivided into three directions. The first direction is based on the assumption that the edge weights in the network are known exactly. In this case the matrix $A$ is a given constant matrix, and specific dynamics is considered for the network. For example, the system matrix can be defined as the adjacency matrix of the graph \cite{CS2010}, or the graph Laplacian matrix \cite{tanner2004,RJME2009,EMCCB2012,MH2016,ZCC2014,ZCC2011}. Furthermore, a framework for controllability was also introduced in \cite{YZDW2013}, offering tools to treat controllability of complex networks with arbitrary structure and edge weights. Related results can be found in \cite{YZ2014,nie2015}. We also refer to \cite{TT2018,TR2014}. A second research direction deals with the situation that the exact values of the edge weights are not known, but only information on whether these weights are zero or nonzero is available.
In this case, the system matrix is not a known, given, matrix, but rather a matrix with a certain zero/nonzero pattern: some of the entries are known to be equal to zero, the other entries are unknown. This framework deals with the concept of {\em structural controllability}. Up to now, two types of structural controllability have been studied, namely {\em weak} structural controllability and {\em strong} structural controllability. A networked system of the form above is called weakly structurally controllable if there exists at least one choice of values for the nonzero entries in the system matrices such that the corresponding matrix pair $(A,B)$ is controllable. The networked system is called strongly structurally controllable if, roughly speaking, for {\em all} choices of values for the nonzero entries the matrix pair $(A,B)$ is controllable. Conditions for weak and strong structural controllability can be expressed entirely in terms of the underlying network graph, using concepts like cactus graphs, maximal matchings, and zero forcing sets, see \cite{Lin74,MY1979,LYB2011,CM2013,MZC2014,WRS2014,TD2015}. A third, more recent, research direction again deals with weak and strong structural controllability. However, the nonzero entries in the pattern matrices defining the networked system can no longer take arbitrary nonzero real values, independently of each other. Instead, in this framework the situation is considered that there are certain constraints on some of the nonzero entries. These constraints can require that some of the nonzero entries have given values, see e.g. \cite{MHM2018}, or that there are given linear dependencies between some of the nonzero entries, see \cite{LM2017}. In particular, in \cite{LM2017} necessary and sufficient conditions for weak structural controllability were established in terms of {\em colored graphs.} The present paper contributes to this third research direction. 
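The distinction between weak and strong structural controllability can be made concrete numerically: sampling random weights that respect the zero/nonzero pattern and checking the Kalman rank condition certifies weak structural controllability as soon as one sample turns out controllable (strong structural controllability, by contrast, cannot be certified by finitely many samples). A minimal pure-Python sketch, with an illustrative 3-node path graph and leader choice of our own:

```python
import random

def ctrb(A, B):
    """Columns of the Kalman controllability matrix [B, AB, ..., A^{n-1}B]."""
    n = len(A)
    cols = [list(c) for c in zip(*B)]
    out = []
    for _ in range(n):
        out.extend(cols)
        cols = [[sum(A[i][k] * c[k] for k in range(n)) for i in range(n)]
                for c in cols]
    return [[c[i] for c in out] for i in range(n)]

def rank(M, tol=1e-8):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    M = [r[:] for r in M]
    rk = 0
    for c in range(len(M[0])):
        p = max(range(rk, len(M)), key=lambda i: abs(M[i][c]))
        if abs(M[p][c]) < tol:
            continue
        M[rk], M[p] = M[p], M[rk]
        for i in range(len(M)):
            if i != rk:
                f = M[i][c] / M[rk][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[rk])]
        rk += 1
        if rk == len(M):
            break
    return rk

def weakly_controllable_certificate(n, E, leaders, trials=20):
    """Sample random weights respecting the pattern (A_ij nonzero iff
    (j, i) in E); one controllable sample certifies weak structural
    controllability."""
    B = [[1.0 if v == l else 0.0 for l in leaders] for v in range(n)]
    rng = random.Random(0)
    for _ in range(trials):
        A = [[rng.uniform(0.5, 2.0) if (j, i) in E else 0.0
              for j in range(n)] for i in range(n)]
        if rank(ctrb(A, B)) == n:
            return True
    return False

# Path graph 0 -> 1 -> 2 with vertex 0 as the single leader.
found = weakly_controllable_certificate(3, {(0, 1), (1, 2)}, [0])
```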
In the context of strong structural controllability, the present paper deals with the situation that the nonzero entries in the system matrix can no longer take arbitrary values. Instead, the values of certain a priori specified nonzero entries in the system matrix are constrained to be identical. Obviously, in real-world networks it is indeed a typical situation that certain edge weights are equal, either by symmetry considerations or by the physics of the underlying problem. An example is provided by the case of undirected networks, in which the network graph has to be symmetric. Another example is provided by networked systems defined in terms of so-called networks-of-networks \cite{CAM2014}, which are obtained by taking the Cartesian product of smaller factor networks. For each factor network, the internal edge weights are independent. However, by applying the Cartesian product, some edge weights in the overall network will become identical. In the present paper we will establish conditions for this new notion of strong structural controllability in terms of colored graphs. The main contributions of this paper are as follows: \begin{enumerate} \item We introduce a new color change rule and define the corresponding notion of zero forcing set. To do this, we consider colored bipartite graphs and establish a necessary and sufficient graph-theoretic condition for nonsingularity of the pattern class associated with this bipartite graph. \item We provide a sufficient graph-theoretic condition for our new notion of strong structural controllability in terms of zero forcing sets. \item We introduce so-called elementary edge operations that can be applied to the original network graph and that preserve the property of strong structural controllability.
\item A sufficient graph-theoretic condition for strong structural controllability is developed, based on the notion of the edge-operations-color-change derived set, which is obtained by applying elementary edge operations and the color change rule iteratively. \end{enumerate} The organization of this paper is as follows. In Section 2, some preliminaries are presented. In Section 3, we give a formal definition of the main problem treated in this paper in terms of systems defined on colored graphs. In Section 4, we establish our main result, giving a sufficient graph-theoretic condition for strong structural controllability of systems defined on colored graphs. Section 5 provides two additional sufficient graph-theoretic conditions. To establish these conditions, we introduce the concept of elementary edge operations and the associated notion of edge-operations-color-change derived set. This set is obtained from the initial coloring set by an iterative procedure involving successive and alternating applications of elementary edge operations and the color change rule. Finally, Section 6 formulates the conclusions of this paper. We note that a preliminary version \cite{JTBC2018} of this paper has appeared in the proceedings of NecSys 2018. In that note, the condition for strong structural controllability in terms of our new concept of zero forcing set was stated without giving any of the proofs. The present paper provides these proofs, and in addition provides new conditions for strong structural controllability in terms of elementary edge operations and the concept of edge-operations-color-change derived set that were not yet given in \cite{JTBC2018}. \section{Preliminaries} In this paper, we will use standard notation. Let $\mathbb{C}$ and $\mathbb{R}$ denote the fields of complex and real numbers, respectively. The spaces of $n$-dimensional real and complex vectors are denoted by $\mathbb{R}^{n}$ and $\mathbb{C}^{n}$, respectively.
Likewise, the spaces of $n \times m$ real and complex matrices are denoted by $\mathbb{R}^{n \times m}$ and $\mathbb{C}^{n \times m}$, respectively. For a given $n \times m$ matrix $A$, the entry in the $i$th row and $j$th column is denoted by $A_{ij}$. For a given $m \times n$ matrix $A$ and for given subsets $S = \{s_1,s_2,\ldots,s_k\} \subseteq \{1,2,\ldots,m\}$ and $T = \{t_1,t_2,\ldots,t_l\} \subseteq \{1,2,\ldots,n\}$ we define the $k \times l$ submatrix of $A$ associated with $S$ and $T$ as the matrix $A_{S,T}$ with $(A_{S,T})_{ij} := A_{s_i t_j}$. Similarly, for a given $n$-dimensional vector $x$, we denote by $x_{T}$ the subvector of $x$ consisting of the entries of $x$ corresponding to $T$. For a given square matrix $A$, we denote its determinant by $\det(A)$. Finally, $I$ and $\mathbf{0}$ will denote the identity and zero matrix of appropriate dimensions, respectively. \subsection{Elements of Graph Theory} Let $\mathcal{G} = (V,E)$ be a directed graph, with vertex set $V = \{1,2,\ldots,n\}$, and the edge set $E$ a subset of $V \times V$. In this paper, we will only consider simple graphs, that is, the edge set $E$ does not contain edges of the form $(i,i)$; the phrase `directed graph' will always refer to a simple directed graph. We call vertex $j$ an out-neighbor of vertex $i$ if $(i,j) \in E$. We denote by $N(i) := \{j \in V \mid (i,j) \in E \}$ the set of all out-neighbors of $i$. Given a subset $S$ of the vertex set $V$ and a subset $X \subseteq S$, we denote by \[ N_{V \setminus S}(X) = \{j \in V \setminus S \mid \exists~ i \in X ~\mbox{such that} ~(i,j) \in E \}, \] the set of all vertices outside $S$ that are an out-neighbor of some vertex in $X$. A directed graph $\mathcal{G}_{1} = (V_{1},E_{1})$ is called a subgraph of $\mathcal{G}$ if $V_{1} \subseteq V$ and $E_{1} \subseteq E$.
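The neighbor-set definitions above translate directly into code; a small sketch over an illustrative edge list (the example graph is our own):

```python
def out_neighbors(E, i):
    """N(i): all out-neighbors j of vertex i, i.e. all j with (i, j) in E."""
    return {j for (a, j) in E if a == i}

def n_outside(E, V, S, X):
    """N_{V \\ S}(X): vertices outside S that are out-neighbors of some
    vertex in X (X is assumed to be a subset of S)."""
    return {j for (i, j) in E if i in X and j in set(V) - set(S)}

# Illustrative directed graph on four vertices.
V = {1, 2, 3, 4}
E = {(1, 2), (1, 3), (2, 3), (3, 4)}
nbrs = out_neighbors(E, 1)                 # out-neighbors of vertex 1
outside = n_outside(E, V, {1, 2}, {1, 2})  # neighbors of S reached outside S
```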
Associated with a given directed graph $\mathcal{G} = (V,E)$ we consider the set of matrices \[ \mathcal{W}(\mathcal{G}) := \{ W \in \mathbb{R}^{n \times n} \mid W_{ij} \neq 0 \mbox{ iff } (j,i) \in E \}. \] For any such $W$ and $(j,i) \in E$, the entry $W_{ij}$ is called the weight of the edge $(j,i)$. Any such matrix $W$ is called a {\em weighted adjacency matrix} of the graph. For a given directed graph $\mathcal{G} = (V,E)$, we denote the associated graph with weighted adjacency matrix $W$ by $\mathcal{G}(W) = (V,E,W)$. This is then called the {\em weighted graph} associated with the graph $\mathcal{G}= (V,E)$ and weighted adjacency matrix $W$. Finally, we define the graph $\mathcal{G} = (V,E)$ to be an {\em undirected graph} if $(i,j) \in E$ whenever $(j,i) \in E$. In that case the order of $i$ and $j$ in $(i,j)$ does not matter and we interpret the edge set $E$ as the set of unordered pairs $\{i,j\}$ where $(i,j) \in E$. An undirected graph $\mathcal{G} = (V,E)$ is called {\em bipartite} if there exist nonempty disjoint subsets $X$ and $Y$ of $V$ such that $X \cup Y = V$ and $\{i,j\} \in E$ only if $i \in X$ and $j \in Y$. Such a {\em bipartite graph} is denoted by $G = (X,Y,E_{XY})$ where we denote the edge set by $E_{XY}$ to stress that it contains edges $\{i,j\}$ with $i \in X$ and $j \in Y$. In this paper we will use the symbol $\mathcal{G}$ for arbitrary directed graphs and $G$ for bipartite graphs. A set $m \subseteq E_{XY}$ of $t$ edges is called a {\em $t$-matching} in $G$ if no two distinct edges in $m$ share a vertex. In the special case that $|X| = |Y| =t$, such a $t$-matching is called a {\em perfect matching}.
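Whether a bipartite graph admits a perfect matching can be decided with the classical augmenting-path algorithm; a compact sketch, with an illustrative example graph of our own:

```python
def max_matching(X, Y, E):
    """Maximum matching size in a bipartite graph G = (X, Y, E_XY),
    computed with the standard augmenting-path method."""
    adj = {x: [y for (a, y) in E if a == x] for x in X}
    match = {}  # y -> the x currently matched to y

    def augment(x, seen):
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                # y is free, or its partner can be re-matched elsewhere
                if y not in match or augment(match[y], seen):
                    match[y] = x
                    return True
        return False

    return sum(augment(x, set()) for x in X)

def has_perfect_matching(X, Y, E):
    return len(X) == len(Y) == max_matching(X, Y, E)

# Illustrative bipartite graph with edges stored as (x, y) pairs.
X, Y = {'x1', 'x2'}, {'y1', 'y2'}
E = {('x1', 'y1'), ('x1', 'y2'), ('x2', 'y1')}
m_size = max_matching(X, Y, E)
perfect = has_perfect_matching(X, Y, E)
```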
For a bipartite graph $G = (X,Y,E_{XY})$, with vertex sets $X$ and $Y$ given by $X = \{x_{1},x_{2}, \ldots, x_{s}\}$ and $Y = \{y_{1},y_{2}, \ldots, y_{t}\}$, we define {\em the pattern class} of $G$ by \[\mathcal{P}(G) = \{M \in \mathbb{C}^{t \times s}\mid M_{ji} \neq 0 \mbox{ iff } \{x_{i},y_{j}\} \in E_{XY}\}.\] Note that matrices $M \in \mathcal{P}(G)$ may not be square since the cardinalities of $X$ and $Y$ can differ. Also note that, in the context of pattern classes for undirected bipartite graphs, we allow complex matrices. \subsection{Controllability of Systems Defined on Graphs} For a directed graph $\mathcal{G} = (V,E)$ with vertex set $V = \{1,2,\ldots,n\}$, the {\em qualitative class} of $\mathcal{G}$ is defined as the family of matrices \[ \mathcal{Q}(\mathcal{G}) = \{A \in \mathbb{R}^{n \times n} \mid \mbox{for } i \neq j: A_{ij} \neq 0 \mbox{ iff } (j,i) \in E \}. \] Note that the diagonal entries of $A \in \mathcal{Q}(\mathcal{G})$ do not depend on the structure of $\mathcal{G}$ and can take arbitrary real values. Next, we specify a subset $V_{L} = \{v_{1},v_{2}, \ldots, v_{m}\}$ of $V$, called the {\em leader set}, and consider the following family of leader/follower systems defined on the graph $\mathcal{G}$ with dynamics \begin{equation}\label{e:system} \dot{x} = Ax + Bu , \end{equation} where $x \in \mathbb{R}^{n}$ is the state and $u \in \mathbb{R}^{m}$ is the input. The systems \eqref{e:system} have the distinguishing feature that the matrix $A$ belongs to $\mathcal{Q}(\mathcal{G})$ and $B = B(V;V_L)$ is defined as the $n \times m$ matrix given by \begin{equation}\label{e:inputmatrix} B_{ij} = \left\{ \begin{array}{lcl} 1 ~ \mbox{if} ~ i = v_{j},\\ 0 ~\mbox{otherwise}.\end{array}\right. \end{equation} An important notion associated with systems defined on a graph $\mathcal{G}$ as in \eqref{e:system} is the notion of strong structural controllability. \begin{defn}\label{d:cos} Let $\mathcal{Q}' \subseteq \mathcal{Q}(\mathcal{G})$.
The system defined on the directed graph $\mathcal{G}=(V,E)$ with dynamics \eqref{e:system} and leader set $V_{L} \subseteq V$ is called {\em strongly structurally controllable with respect to $\mathcal{Q}'$} if the pair $(A,B)$ is controllable for all $A \in \mathcal{Q}'$. In that case we will simply say that $(\mathcal{G};V_{L})$ is controllable with respect to $\mathcal{Q}'$. \end{defn} One special case of the above notion is that $(\mathcal{G};V_{L})$ is controllable with respect to $\mathcal{Q}(\mathcal{G})$. In that case, we will simply say that $(\mathcal{G};V_{L})$ is controllable. Another special case is that $(\mathcal{G};V_{L})$ is controllable with respect to $\mathcal{Q}' $ where, for a given weighted adjacency matrix $W \in \mathcal{W}(\mathcal{G})$, $\mathcal{Q}'$ is the subclass of $\mathcal{Q}(\mathcal{G})$ defined by \[\mathcal{Q}_{W}(\mathcal{G}) = \{A \in \mathcal{Q}(\mathcal{G}) \mid \mbox{for}~ i \neq j: A_{ij} = W_{ij} \}.\] This subclass is called the {\em weighted qualitative class} associated with $W$. Note that the off-diagonal elements of $A \in \mathcal{Q}_{W}(\mathcal{G})$ are fixed by those of the given adjacency matrix, while, again, the diagonal entries of $A \in \mathcal{Q}_{W}(\mathcal{G})$ can take arbitrary real values. Obviously \[\mathcal{Q}(\mathcal{G}) = \bigcup_{W \in \mathcal{W}(\mathcal{G})} \mathcal{Q}_{W}(\mathcal{G}).\] Since there is a unique weighted graph $\mathcal{G}(W) = (V,E,W)$ associated with the graph $\mathcal{G}= (V,E)$ and weighted adjacency matrix $W$, we will simply say that {\em $(\mathcal{G}(W);V_{L})$ is controllable} if $(\mathcal{G};V_{L})$ is controllable with respect to $\mathcal{Q}_{W}(\mathcal{G})$. \subsection{Zero Forcing Set and Controllability of $(\mathcal{G};V_{L})$}\label{s:ZFS} Let $\mathcal{G} = (V,E)$ be a directed graph with vertices colored either black or white. 
We now introduce the following color change rule \cite{A2008}: if $v$ is a black vertex in $\mathcal{G}$ with exactly one white out-neighbor $u$, then we change the color of $u$ to black, and write $v \xrightarrow{c} u$. Such a color change is called a {\em force}. A subset $C$ of $V$ is called a {\em coloring set} if the vertices in $C$ are initially colored black and those in $V \setminus C$ initially colored white. Given a coloring set $C \subseteq V$, the derived set $\mathcal{D}(C)$ is the set of black vertices obtained after repeated application of the color change rule, until no more changes are possible. It was shown in \cite{A2008} that the derived set is indeed uniquely defined, in the sense that it does not depend on the order in which the color changes are applied to the original coloring set $C$. A coloring set $C \subseteq V$ is called a {\em zero forcing set for} $\mathcal{G}$ if $\mathcal{D}(C) = V$. Given a zero forcing set for $\mathcal{G}$, we can list the forces in the order in which they were performed to color all vertices in the graph black. Such a list is called a \emph{chronological list of forces}. It was shown in \cite{MZC2014} that controllability of $(\mathcal{G};V_{L})$ can be characterized in terms of zero forcing sets. \begin{prp} \label{p:MCT} Let $\mathcal{G} = (V,E)$ be a directed graph and $V_{L} \subseteq V$ be the leader set. Then, $(\mathcal{G};V_{L})$ is controllable if and only if $V_{L}$ is a zero forcing set. \end{prp} \subsection{Balancing Set and Controllability of $(\mathcal{G}(W);V_{L})$} Consider the weighted graph $\mathcal{G}(W) = (V,E,W)$ associated with the directed graph $\mathcal{G} = (V,E)$ and weighted adjacency matrix $W \in \mathcal{W}(\mathcal{G})$. For $i = 1, \ldots, n$, let $x_{i}$ be a variable assigned to vertex $i$. Assume that for a given subset of vertices $C \subseteq V$, $x_{j} = 0$ for all $ j \in C$. We call $C$ {\em the set of zero vertices}. 
The values of the other vertices of $\mathcal{G}(W)$ are initially undetermined. To every vertex $j \in C$, we assign a so-called {\em balance equation}: \begin{equation}\label{e:BE} \sum_{k \in N_{V \setminus C}(\{j\})} x_{k}W_{kj} = 0. \end{equation} Note that for weighted undirected graphs, in which case $W = W^{T}$, the balance equation \eqref{e:BE} coincides with the one introduced in \cite{MHM2018}. Assume that there is a subset of zero vertices $X \subseteq C$ such that the system of $|X|$ balance equations corresponding to the vertices in $X$ implies that $x_{k} = 0$ for all $k \in Y$ with $C \cap Y = \emptyset$. The updated set of zero vertices is now defined as $C' = C \cup Y$. In this case, we say that {\em zeros extend from $X$ to $Y$}, written as $X \xrightarrow{z} Y$. This one-step procedure of setting the values of possibly additional vertices to zero is called {\em the zero extension rule\/}. Define the {\em derived set\/} $\mathcal{D}_{z}(C)$ to be the set of zero vertices obtained after repeated application of the zero extension rule until no more zero vertices can be added. Although not explicitly stated in \cite{MHM2018}, it can be shown that the derived set is uniquely defined, in the sense that it does not depend on the particular zero extensions that are applied to the original set of zero vertices $C$. An initial zero vertex set $C \subseteq V$ is called a {\em balancing set} if the derived set $\mathcal{D}_{z}(C)$ is $V$. Given a balancing set, one can list the zero extensions in the order in which they were performed.
Such a list is called a \emph{chronological list of zero extensions}. A necessary and sufficient condition for strong structural controllability with respect to $\mathcal{Q}_{W}(\mathcal{G})$ for the special case that $W = W^T$ was given in \cite{MHM2018}: \begin{prp} \label{BSC1} Let $\mathcal{G}$ be a simple undirected graph, $V_{L} \subseteq V$ be the leader set and $W \in \mathcal{W}(\mathcal{G})$ be a weighted adjacency matrix with $W = W^{T}$. Then $(\mathcal{G}(W);V_{L})$ is controllable if and only if $V_{L}$ is a balancing set. \end{prp} \section{Problem formulation} In this section we will introduce the main problem that is considered in this paper. At the end of the section, we will also formulate two preliminary results that will be needed in the sequel. In order to proceed, we first formalize the constraint that the weights of certain a priori given edges in the network graph are equal. This can be expressed as the condition that some of the off-diagonal entries in the matrices belonging to the qualitative class $\mathcal{Q}(\mathcal{G})$ are equal. To do this, we introduce a partition \[ \pi = \{E_{1},E_{2},\ldots,E_{k}\} \] of the edge set $E$ into disjoint subsets $E_r$ whose union is the entire edge set $E$. The edges in a given cell $E_r$ are constrained to have identical weights. We then define the {\em colored qualitative class} associated with $\pi$ by \[\begin{split} \mathcal{Q}_{\pi}(\mathcal{G}) = &\{A \in \mathcal{Q}(\mathcal{G}) \mid A_{ij} = A_{kl} \\ & \mbox{ if } (j,i), (l,k) \in E_{r} \mbox{ for some } r\}. \end{split} \] In order to visualize the partition $\pi$ of the edge set in the graph, two edges in the same cell $E_{r}$ are said to have the same color. The colors will be denoted by the symbols $c_1, c_2, \ldots, c_k$ and the edges in cell $E_r$ are said to have color $c_r$. This leads to the notion of {\em colored graph}.
A {\em colored graph} is a directed graph together with a partition $\pi$ of the edge set, which is denoted by $\mathcal{G}(\pi) = (V,E,\pi)$. In the sequel, sometimes the symbols $c_i$ will also be used to denote independent nonzero variables. A set of real values obtained by assigning to each of these variables $c_i$ a particular real value is called a \emph{realization} of the color set. \begin{ex} \label{ex:coloredgraph} Consider the colored graph $\mathcal{G}(\pi) = (V,E,\pi)$ associated with the directed graph $\mathcal{G} = (V,E)$ and edge partition $\pi =\{E_{1},E_{2},E_{3}\}$, where $E_{1}= \{(1,4), (1,6), (4,5)\}$, $E_{2}=\{(2,4),(2,5)\}$ and $E_{3} = \{(3,5), (3,6), (6,5)\}$, as depicted in Figure \ref{g:SDCG}. \begin{figure}[h!] \centering \begin{tikzpicture}[scale=0.5] \tikzset{VertexStyle1/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 2pt, outer sep = 0pt, minimum size = 10 pt}, edge/.style={->,> = latex', text = black} } \tikzset{VertexStyle2/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 2pt, outer sep = 0pt, minimum size = 10 pt}} \node[VertexStyle2](1) at (0,3) {$1$}; \node[VertexStyle2](2) at (-2,0) {$2$}; \node[VertexStyle2](3) at (2,0) {$3$}; \node[VertexStyle1](4) at (-5,-3) {$4$}; \node[VertexStyle1](5) at (0,-3) {$5$}; \node[VertexStyle1](6) at (5,-3) {$6$}; \Edge[ style = {->,> = latex'},color=green, label = $c_{2}$,labelstyle={inner sep=0pt}](2)(4); \Edge[ style = {->,> = latex',pos = 0.5},color=green, label = $c_{2}$,labelstyle={inner sep=0pt}](2)(5); \Edge[style = {->,> = latex',pos = 0.5},color= blue, label = $c_{3}$, labelstyle={inner sep=0pt}](3)(5); \Edge[ style = {->,> = latex'},color= blue , label = $c_{3}$,labelstyle={inner sep=0pt}](3)(6); \Edge[style={bend left,->,> = latex'},color = red, label = $c_{1}$,labelstyle={inner sep=0pt}](1)(6) \Edge[style={bend right,->,> = latex'},color = red, label = $c_{1}$,labelstyle={inner sep=0pt}](1)(4) \Edge[style={bend right,->,> =
latex'},color = red, label = $c_{1}$,labelstyle={inner sep=0pt}](4)(5) \Edge[style={bend left,->,> = latex'},color = blue, label = $c_{3}$,labelstyle={inner sep=0pt}](6)(5) \end{tikzpicture} \caption{A colored directed graph with leader set $\{1,2,3\}$.} \label{g:SDCG} \end{figure} Edges having the same color means that the weights of these edges are constrained to be equal. In this example, the edges in $E_1$ have color $c_1$ (red), those in $E_2$ have color $c_2$ (green), and those in $E_3$ have color $c_3$ (blue). The corresponding colored qualitative class consists of all matrices of the form \[\begin{bmatrix}\lambda_1& 0 & 0 & 0 & 0 & 0\\ 0 & \lambda_2& 0 & 0 & 0 & 0\\ 0 & 0& \lambda_3& 0 & 0 & 0 \\ c_1 & c_2 & 0 & \lambda_4& 0 & 0 \\ 0 & c_2 & c_3 & c_1 &\lambda_5& c_3 \\ c_1&0 & c_3 &0& 0 & \lambda_6 \end{bmatrix}\] where $\lambda_i$ is an arbitrary real number for $i=1,2,\ldots,6$ and $c_i$ is an arbitrary {\em nonzero} real number for $i=1,2,3$. \end{ex} \smallskip Given a colored directed graph $\mathcal{G}(\pi) = (V,E,\pi)$ with edge partition $\pi = \{E_{1}, E_{2}, \ldots, E_{k}\}$, we define the corresponding family of weighted adjacency matrices \[\begin{split} \mathcal{W}_{\pi}(\mathcal{G}) := &\{ W \in \mathcal{W}(\mathcal{G}) \mid W_{ij} = W_{kl} \\ & \mbox{ if } (j,i),(l,k) \in E_{r} \mbox{ for some } r\}. \end{split} \] Note that any weighted adjacency matrix $W \in \mathcal{W}_{\pi}(\mathcal{G})$ is associated with a unique realization of the color set. Obviously, the colored qualitative class $\mathcal{Q}_{\pi}(\mathcal{G})$ is equal to the union of all the subclasses $\mathcal{Q}_{W}(\mathcal{G})$ with $W \in \mathcal{W}_{\pi}(\mathcal{G})$, i.e., \begin{equation} \label{e:union} \mathcal{Q}_{\pi}(\mathcal{G}) = \bigcup_{W \in \mathcal{W}_{\pi}(\mathcal{G})} \mathcal{Q}_{W}(\mathcal{G}).
\end{equation} If $(\mathcal{G};V_{L})$ is controllable with respect to $\mathcal{Q}' = \mathcal{Q}_{\pi}(\mathcal{G})$ (see Definition \ref{d:cos}) we will simply say that {\em $(\mathcal{G}(\pi);V_{L})$ is controllable}. In that case, we call the system {\em colored strongly structurally controllable}. For example, the system with graph depicted in Figure \ref{g:SDCG} is colored strongly structurally controllable as will be shown later. The aim of this paper is to establish graph-theoretic tests for colored strong structural controllability of a given graph. In order to obtain such conditions, we now first make the observation that conditions for strong structural controllability can be expressed in terms of balancing sets. Generalizing Proposition \ref{BSC1} to the case of weighted directed graphs, we have the following lemma: \begin{lem} \label{l:BSC} Let $\mathcal{G} = (V,E)$ be a directed graph with leader set $V_{L}$ and let $W \in \mathcal{W}(\mathcal{G})$. Then $(\mathcal{G}(W); V_{L})$ is controllable if and only if $V_{L}$ is a balancing set. \end{lem} The proof can be found in the Appendix. The following lemma follows immediately from Lemma \ref{l:BSC} by noting that \eqref{e:union} holds. \begin{lem}\label{l:CSSCBS} Let $\mathcal{G} = (V,E)$ be a directed graph with leader set $V_{L}$ and let $\pi$ be a partition of the edge set. Then $(\mathcal{G}(\pi);V_{L})$ is controllable if and only if $V_{L}$ is a balancing set for all weighted graphs $\mathcal{G}(W) = (V,E,W)$ with $W \in \mathcal{W}_{\pi}(\mathcal{G})$. \end{lem} Obviously, the necessary and sufficient conditions presented in Lemma~\ref{l:CSSCBS} cannot be verified easily, as the set $\mathcal{W}_{\pi}(\mathcal{G})$ contains infinitely many elements. Therefore, we aim at establishing graph-theoretic conditions under which $(\mathcal{G}(\pi);V_{L})$ is controllable. 
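Lemma~\ref{l:CSSCBS} quantifies over the infinite family $\mathcal{W}_{\pi}(\mathcal{G})$, so controllability cannot be certified by sampling; a finite numerical experiment can, however, falsify a controllability claim or lend it credibility. The following Python sketch (an illustration under our own encoding, not part of the formal development; the names \texttt{sample\_A} and \texttt{is\_controllable} are ours) draws random realizations of the colored qualitative class of Example~\ref{ex:coloredgraph} and checks the Kalman rank condition for the leader set $\{1,2,3\}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
leaders = [0, 1, 2]  # vertices 1, 2, 3 in 0-indexed form

def sample_A():
    # One random realization of the colored qualitative class of the example:
    # equal colors force equal off-diagonal entries, diagonal is free.
    c1, c2, c3 = rng.uniform(0.5, 2.0, 3) * rng.choice([-1, 1], 3)
    A = np.diag(rng.uniform(-2.0, 2.0, n))
    # A[i, j] is the weight of edge (j+1, i+1), following A_{ij} for (j, i) in E
    A[3, 0] = c1; A[3, 1] = c2
    A[4, 1] = c2; A[4, 2] = c3; A[4, 3] = c1; A[4, 5] = c3
    A[5, 0] = c1; A[5, 2] = c3
    return A

# B places a standard basis vector at each leader
B = np.zeros((n, len(leaders)))
for k, v in enumerate(leaders):
    B[v, k] = 1.0

def is_controllable(A, B):
    # Kalman rank test: rank [B, AB, ..., A^{n-1}B] == n
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

# every sampled realization passes the rank test
assert all(is_controllable(sample_A(), B) for _ in range(100))
```

For this particular example one can check by hand that $[B \;\, AB]$ already has full row rank for every nonzero realization, which is consistent with the claim that the system is colored strongly structurally controllable.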
\section{Zero Forcing Sets For Colored Graphs} In order to provide a graph-theoretic condition for colored strong structural controllability, in this section we introduce a new color change rule and then define the corresponding notion of zero forcing set. To do this, we first consider colored bipartite graphs and establish a necessary and sufficient graph-theoretic condition for nonsingularity of the associated pattern class. \subsection{Colored Bipartite Graphs} Consider the bipartite graph $G = (X,Y,E_{XY})$, where the vertex sets $X$ and $Y$ are given by $X = \{x_{1},x_{2}, \ldots, x_{s}\}$ and $Y = \{y_{1},y_{2}, \ldots, y_{t}\}$. We will now introduce the notion of {\em colored} bipartite graph. Let $\pi_{XY} = \{E_{XY}^{1}, E_{XY}^{2}, \ldots, E_{XY}^{\ell}\}$ be a partition of the edge set $E_{XY}$ with associated colors $c_{1}, c_{2}, \ldots, c_{\ell}$. This partition is now used to formalize that certain entries in the pattern class $\mathcal{P}(G)$ are constrained to be equal. Again, the edges in a given cell $E^{r}_{XY}$ are said to have the same color. The pattern class of the colored bipartite graph $G(\pi) = (X,Y,E_{XY},\pi_{XY})$ is then defined as the following set of complex $t \times s$ matrices \[ \begin{split} & \mathcal{P}_{\pi}(G) = \big\{M \in \mathcal{P}(G) \mid M_{ji} = M_{hg} \\ & \mbox{ if } \{x_{i},y_{j}\}, \{x_{g},y_{h} \}\in E^{r}_{XY} \mbox{ for some } r\big\}. \end{split} \] Assume now that $|X| = |Y|$ and let $t=|X|$. Suppose that $p$ is a perfect matching of $G(\pi)$. The {\em spectrum} of $p$ is defined to be the set of colors (counting multiplicity) of the edges in $p$. 
More specifically, if the perfect matching $p$ is given by $p = \big\{\{x_{1},y_{\gamma(1)}\}, \ldots, \{x_{t},y_{\gamma(t)}\}\big\}$, where $\gamma$ denotes a permutation of $(1,2,\ldots,t)$, and $c_{i_1}, c_{i_2}, \ldots, c_{i_t}$ are the respective colors of the edges in $p$, then the spectrum of $p$ is $\{c_{i_1}, c_{i_2}, \ldots, c_{i_t}\}$, where the same color can appear multiple times. In addition, we define the {\em sign} of the perfect matching $p$ as $\sign(p) = (-1)^{m}$, where $m$ is the number of swaps needed to obtain $(\gamma(1), \gamma(2), \ldots , \gamma(t))$ from $(1,2, \ldots, t)$. Since every perfect matching is associated with a unique permutation, with a slight abuse of notation, we sometimes use the perfect matching $p$ to represent its corresponding permutation. Two perfect matchings are called {\em equivalent} if they have the same spectrum. Obviously, this yields a partition of the set of all perfect matchings of $G(\pi)$ into {\em equivalence classes} of perfect matchings. We denote these equivalence classes of perfect matchings by $\mathbb{P}_{1},\mathbb{P}_{2},\ldots,\mathbb{P}_{l}$, where perfect matchings in the same class $\mathbb{P}_{i}$ are equivalent. Clearly, $\mathbb{P}_{i} \cap \mathbb{P}_{j} = \emptyset$ for $i \neq j$. Correspondingly, we then define the {\em spectrum of the equivalence class $\mathbb{P}_{i}$} to be the (common) spectrum of the perfect matchings in this class, and denote it by $\spec(\mathbb{P}_{i})$. Finally, we define the {\em signature of the equivalence class $\mathbb{P}_{i}$} to be the sum of the signs of all perfect matchings in this class, which is given by \[\sgn(\mathbb{P}_{i}) = \sum_{p \in \mathbb{P}_{i}} \sign(p).\] \begin{ex} \label{ex:coloredbg} Consider the colored bipartite graph $G(\pi)$ depicted in Figure \ref{g:CBG1}. It contains three perfect matchings, $p_{1}$, $p_{2}$ and $p_{3}$, depicted in Figure \ref{g:CBG}(b)-(d), respectively. Clearly, $p_{1}$ and $p_{3}$ are equivalent.
The equivalence classes of perfect matchings are then $\mathbb{P}_{1} = \{p_{1},p_{3}\}$ and $\mathbb{P}_{2} = \{p_{2}\}$. Clearly, $\sgn(\mathbb{P}_{1}) = 0$ and $\sgn(\mathbb{P}_{2}) = -1$. \end{ex} \begin{figure}[h!] \centering \begin{subfigure}{0.4\textwidth}\label{f:1} \centering \begin{tikzpicture}[scale=0.7] \tikzset{VertexStyle1/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 1.5pt, outer sep = 0pt, minimum size = 6 pt}, edge/.style={->,> = latex', text = black} } \tikzset{VertexStyle2/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 2pt, outer sep = 0pt, minimum size = 10 pt}} \node[VertexStyle2](1) at (-1.5,2) {$1$}; \node[VertexStyle2](2) at (-1.5,0) {$2$}; \node[VertexStyle2](3) at (-1.5,-2) {$3$}; \node[VertexStyle1](4) at (1.5,2) {$4$}; \node[VertexStyle1](5) at (1.5,0) {$5$}; \node[VertexStyle1](6) at (1.5,-2) {$6$}; \node[](7) at (-1.5,3){$X$}; \node[](8) at (1.5,3){$Y$}; \Edge[style = {,> = latex',pos = 0.5},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](1)(4); \Edge[ style = {,> = latex',pos = 0.2},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](1)(5); \Edge[style = {,> = latex',pos = 0.7},color=blue , label = $c_{3}$,labelstyle={inner sep=0pt}](1)(6); \Edge[ style = {,> = latex',pos = 0.7},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](2)(4); \Edge[ style = {,> = latex',pos = 0.3},color=red , label = $c_{1}$,labelstyle={inner sep=0pt}](2)(5); \Edge[style = {,> = latex',pos = 0.3},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](3)(4); \Edge[style = {,> = latex',pos = 0.5},color=blue , label = $c_{3}$,labelstyle={inner sep=0pt}](3)(6); \end{tikzpicture} \caption{Colored bipartite graph $G(\pi)$.} \label{g:CBG1} \end{subfigure} \vspace{.2cm} \begin{subfigure}{0.4\textwidth}\label{f:2} \centering \begin{tikzpicture}[scale=0.7] \tikzset{VertexStyle1/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 1.5pt, outer 
sep = 0pt, minimum size = 8 pt}, edge/.style={->,> = latex', text = black} } \tikzset{VertexStyle2/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 2pt, outer sep = 0pt, minimum size = 10 pt}} \node[VertexStyle2](1) at (-1.5,2) {$1$}; \node[VertexStyle2](2) at (-1.5,0.5) {$2$}; \node[VertexStyle2](3) at (-1.5,-1) {$3$}; \node[VertexStyle1](4) at (1.5,2) {$4$}; \node[VertexStyle1](5) at (1.5,0.5) {$5$}; \node[VertexStyle1](6) at (1.5,-1) {$6$}; \node[](7) at (-1.5,3){{$X$}}; \node[](8) at (1.5,3){{$Y$}}; \Edge[ style = {,> = latex',pos = 0.5},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](1)(4); \Edge[ style = {,> = latex',pos = 0.5},color=red , label = $c_{1}$,labelstyle={inner sep=0pt}](2)(5); \Edge[style = {,> = latex',pos = 0.5},color=blue , label = $c_{3}$,labelstyle={inner sep=0pt}](3)(6); \fill[fill=green,draw=white] (-1.5,-2) rectangle +(6mm,4mm); \fill[fill=red,draw=white] (-0.25,-2) rectangle +(6mm,4mm); \fill[fill=blue,draw=white] (1.0,-2) rectangle +(6mm,4mm); \end{tikzpicture} \caption{Perfect matching $p_{1}$ with $\sign(p_{1}) = 1$.} \label{g:CBG2} \end{subfigure} \vspace{.2cm} \begin{subfigure}{0.4\textwidth}\label{f:3} \centering \begin{tikzpicture}[scale=0.7] \tikzset{VertexStyle1/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 1.5pt, outer sep = 0pt, minimum size = 8 pt}, edge/.style={->,> = latex', text = black} } \tikzset{VertexStyle2/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 2pt, outer sep = 0pt, minimum size = 10 pt}} \node[VertexStyle2](1) at (-1.5,2) {$1$}; \node[VertexStyle2](2) at (-1.5,0.5) {$2$}; \node[VertexStyle2](3) at (-1.5,-1) {$3$}; \node[VertexStyle1](4) at (1.5,2) {$4$}; \node[VertexStyle1](5) at (1.5,0.5) {$5$}; \node[VertexStyle1](6) at (1.5,-1) {$6$}; \node[](7) at (-1.5,3){{$X$}}; \node[](8) at (1.5,3){{$Y$}}; \Edge[style = {,> = latex',pos = 0.3},color=green , label = $c_{2}$,labelstyle={inner 
sep=0pt}](1)(5); \Edge[ style = {,> = latex',pos = 0.3 },color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](2)(4); \Edge[style = {,> = latex',pos = 0.5},color=blue , label = $c_{3}$,labelstyle={inner sep=0pt}](3)(6); \fill[fill=green,draw=white] (-1.5,-2) rectangle +(6mm,4mm); \fill[fill=green,draw=white] (-0.25,-2) rectangle +(6mm,4mm); \fill[fill=blue,draw=white] (1.0,-2) rectangle +(6mm,4mm); \end{tikzpicture} \caption{Perfect matching $p_{2}$ with $\sign(p_{2}) = -1$.} \label{g:CBG3} \end{subfigure} \vspace{.2cm} \begin{subfigure}{0.4\textwidth}\label{f:4} \centering \begin{tikzpicture}[scale=0.7] \tikzset{VertexStyle1/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 1.5pt, outer sep = 0pt, minimum size = 8 pt}, edge/.style={->,> = latex', text = black} } \tikzset{VertexStyle2/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 2pt, outer sep = 0pt, minimum size = 10 pt}} \node[VertexStyle2](1) at (-1.5,2) {$1$}; \node[VertexStyle2](2) at (-1.5,0.5) {$2$}; \node[VertexStyle2](3) at (-1.5,-1) {$3$}; \node[VertexStyle1](4) at (1.5,2) {$4$}; \node[VertexStyle1](5) at (1.5,0.5) {$5$}; \node[VertexStyle1](6) at (1.5,-1) {$6$}; \node[](7) at (-1.5,3){$X$}; \node[](8) at (1.5,3){$Y$}; \Edge[ style = {,> = latex',pos = 0.2},color=blue , label = $c_{3}$,labelstyle={inner sep=0pt}](1)(6); \Edge[ style = {,> = latex',pos = 0.3},color=red , label = $c_{1}$,labelstyle={inner sep=0pt}](2)(5); \Edge[style = {,> = latex',pos = 0.2},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](3)(4); \fill[fill=blue,draw=white] (-1.5,-2) rectangle +(6mm,4mm); \fill[fill=red,draw=white] (-0.25,-2) rectangle +(6mm,4mm); \fill[fill=green,draw=white] (1.0,-2) rectangle +(6mm,4mm); \end{tikzpicture} \caption{Perfect matching $p_{3}$ with $\sign(p_{3}) = -1$.} \label{g:CBG4} \end{subfigure} \caption{Example of a colored bipartite graph and its perfect matchings.} \label{g:CBG} \end{figure} We are now ready to state a 
necessary and sufficient condition for nonsingularity of all matrices in the colored pattern class $\mathcal{P}_{\pi}(G)$. \begin{thm}\label{t:NoP} Let $G(\pi) = (X,Y,E_{XY},\pi_{XY})$ be a colored bipartite graph and $|X| = |Y|$. Then, all matrices in $\mathcal{P}_{\pi}(G)$ are nonsingular if and only if there exists at least one perfect matching and exactly one equivalence class of perfect matchings has nonzero signature. \end{thm} \begin{IEEEproof} Denote the cardinality of $X$ and $Y$ by $t$. Let $A \in \mathcal{P}_{\pi}(G)$. By the Leibniz formula for the determinant, we have \[\det(A) = \sum_{\gamma} \sign(\gamma) \prod_{i = 1}^{t}A_{i \gamma(i)},\] where the sum ranges over all permutations $\gamma$ of $(1,2,\ldots,t)$ and where $\sign(\gamma) = (-1)^m$ with $m$ the number of swaps needed to obtain $(\gamma(1), \gamma(2), \ldots , \gamma(t))$ from $(1,2, \ldots, t)$. Note that $\prod_{i = 1}^{t}A_{i \gamma(i)} \neq 0$ if and only if $p =\big\{\{x_{1},y_{\gamma(1)}\}, \ldots, \{x_{t},y_{\gamma(t)}\}\big\}$ is a perfect matching in $G(\pi)$. In that case, we have \[ \det(A) = \sum_{p} \sign(p) \prod_{i = 1}^{t}A_{i p(i)} , \] where $p$ ranges over all perfect matchings and $\sign(p)$ denotes the sign of the perfect matching (we now identify perfect matchings with their permutations). Suppose now there are $l$ equivalence classes of perfect matchings $\mathbb{P}_{1},\mathbb{P}_{2},\ldots,\mathbb{P}_{l}$. Then we obtain \begin{equation} \label{e:det} \det(A) = \sum_{j=1}^l \left( \sgn(\mathbb{P}_{j}) \prod_{i=1}^t A_{i p(i)} \right), \end{equation} where, for $j = 1,2, \ldots, l$, in the product appearing in the $j$th term, $p$ is an arbitrary perfect matching in $\mathbb{P}_j$; this product depends only on $\spec(\mathbb{P}_{j})$, hence not on the particular choice of $p$. We will now prove the `if' part. Assume that there exists at least one perfect matching, and exactly one equivalence class of perfect matchings has nonzero signature. Without loss of generality, assume that the equivalence class $\mathbb{P}_{1}$ has nonzero signature.
Obviously, for every $A \in \mathcal{P}_{\pi}(G)$, we then have \[ \det(A) = \sgn(\mathbb{P}_{1}) \prod_{i=1}^t A_{i p(i)} \neq 0 , \] where $p \in \mathbb{P}_1$ is arbitrary. In other words, every $A \in \mathcal{P}_{\pi}(G)$ is nonsingular. Next, we prove the `only if' part. For this, assume that all $A \in \mathcal{P}_{\pi}(G)$ are nonsingular, but one of the following holds: \begin{itemize} \item[(i)] there does not exist any perfect matching, \item[(ii)] no equivalence class of perfect matchings with nonzero signature exists, or \item[(iii)] there exist at least two equivalence classes of perfect matchings with nonzero signature. \end{itemize} We will show that all these cases lead to a contradiction. In case (i), we must obviously have $\det(A) = 0$ for any $A \in \mathcal{P}_{\pi}(G)$, which gives a contradiction. For case (ii), it follows from \eqref{e:det} that $\det(A) = 0$ since all equivalence classes have zero signature. Therefore, we reach a contradiction again. Finally, consider case (iii). Without loss of generality, assume $\mathbb{P}_{1}$ and $\mathbb{P}_{2}$ have nonzero signature. The signatures of the remaining equivalence classes can be either zero or nonzero. In the sequel we associate the colors $c_1, c_2, \ldots, c_{\ell}$ of the cells $E_{XY}^1, E_{XY}^2, \ldots, E_{XY}^{\ell}$ with independent nonzero variables $c_1, c_2, \ldots, c_{\ell}$ that can take values in $\mathbb{C}$. The spectrum of an equivalence class $\mathbb{P}_{j}$ then uniquely determines a monomial $c_1^{i_1}c_2^{i_2} \ldots c_{\ell}^{i_{\ell}}$, where the powers $i_1, i_2, \ldots, i_{\ell}$ correspond to the multiplicities of the colors $c_1, c_2, \ldots ,c_{\ell}$ in the perfect matchings in $\mathbb{P}_{j}$. We also identify each entry of a matrix $A$ in $\mathcal{P}_{\pi}(G)$ with the color of its corresponding edge.
In particular, for such $A$ we have \[A_{ij} = \left\{ \begin{split} & c_{r} & \mbox{ if } \{x_{j},y_{i}\} \in E^{r}_{XY} \mbox{ for some } r,\\ & 0 & \mbox{ otherwise.} \end{split} \right. \] From the expression \eqref{e:det} for the determinant of $A$ it can be seen that the perfect matchings in the equivalence class $\mathbb{P}_{j}$ yield a contribution $\sgn(\mathbb{P}_{j}) c_1^{i_1}c_2^{i_2} \ldots c_{\ell}^{i_\ell}$, where the degrees correspond to the multiplicities of the colors of the perfect matchings in $\mathbb{P}_{j}$. Since $\mathbb{P}_{1}$ and $\mathbb{P}_{2}$ are distinct equivalence classes, $\spec(\mathbb{P}_{1})$ and $\spec(\mathbb{P}_{2})$ are not equal. Without loss of generality, we assume that the multiplicity of $c_{1}$ as an element of $\spec(\mathbb{P}_{1})$ is unequal to the multiplicity of $c_{1}$ as an element of $\spec(\mathbb{P}_{2})$. Denote these multiplicities by $j_{1}$ and $j_{2}$, respectively, with $j_1 \neq j_2$. Then for all values of $c_{2}, \ldots, c_\ell$, the determinant of $A$ has the form \begin{equation} \label{e:det1} \det(A) = \sgn(\mathbb{P}_{1})a_{1}c_{1}^{j_{1}}+\sgn(\mathbb{P}_{2})a_{2}c_{1}^{j_{2}}+f(c_{1}), \end{equation} where $a_{1}$ and $a_{2}$ depend on $c_{2}, \ldots, c_\ell$ and $f(c_{1})$ is a polynomial in $c_{1}$. The polynomial $f(c_1)$ corresponds to the remaining equivalence classes. It can happen that some of these equivalence classes also contain the color $c_1$ in their spectrum with multiplicity $j_1$ or $j_2$. By moving the corresponding monomials to the first two terms in \eqref{e:det1} we obtain \begin{equation} \label{e:det2} \det(A) = b_{1}c_{1}^{j_{1}} + b_{2}c_{1}^{j_{2}} + f'(c_{1}), \end{equation} with $b_1$ and $b_2$ depending on $c_{2}, \ldots, c_\ell$. Note that the first term in \eqref{e:det2} corresponds to the equivalence classes containing $c_1$ in their spectrum with multiplicity $j_1$, and likewise the second term with multiplicity $j_2$. The remaining polynomial $f'(c_1)$ does not contain monomials with $c_{1}^{j_{1}}$ and $c_{1}^{j_{2}}$.
It is now easily verified that nonzero $c_{2}, \ldots, c_\ell$ can be chosen such that $b_1 \neq 0$ and $b_2 \neq 0$. By the fundamental theorem of algebra we then have that the polynomial equation $b_{1}c_{1}^{j_{1}}+ b_{2}c_{1}^{j_{2}} + f'(c_{1})=0$ has at least one nonzero root, since both $b_{1}$ and $b_{2}$ are nonzero. This implies that for some choice of nonzero complex values $c_{1}, c_{2}, \ldots, c_\ell$ we have $\det(A) =0$. In other words, not all $A \in \mathcal{P}_{\pi}(G)$ are nonsingular. This is a contradiction. \end{IEEEproof} \begin{ex} For the colored bipartite graph in Figure \ref{g:CBG1}, the pattern class consists of all matrices of the form \[\begin{bmatrix} c_2 & c_2 & c_2 \\ c_2 & c_1 & 0 \\ c_3 & 0 & c_3 \\ \end{bmatrix}\] where $c_1,c_2$ and $c_3$ are arbitrary nonzero complex numbers. In Example \ref{ex:coloredbg} we saw that there is exactly one equivalence class of perfect matchings with nonzero signature. By Theorem \ref{t:NoP} we thus conclude that all these matrices are nonsingular. \end{ex} \subsection{Color Change Rule and Zero Forcing Sets} In this subsection, we will introduce a tailor-made zero forcing notion for colored graphs. Let $\mathcal{G}(\pi) = (V,E,\pi)$ be a colored directed graph with $\pi = \{E_{1}, E_{2}, \ldots, E_{k}\}$ the partition of $E$. For given disjoint subsets $X = \{x_{1},x_{2}, \ldots, x_{s}\}$ and $Y = \{y_{1},y_{2}, \ldots, y_{t}\}$ of $V$, we define an associated colored bipartite graph $G(\pi) = (X,Y,E_{XY}, \pi_{XY})$ as follows: \[E_{XY} := \{ \{x_i,y_j\} \mid (x_i,y_j) \in E, ~x_i \in X,~ y_j \in Y \}.\] Obviously, the partition $\pi$ induces a partition $\pi_{XY}$ of $E_{XY}$ by defining \[E_{XY}^{r} := \{ \{x_i,y_j\} \in E_{XY} \mid (x_i,y_j) \in E_r \}, ~r = 1,2, \ldots, k.\] Note that for some $r$, this set might be empty.
Removing these, we get a partition \[\pi_{XY} = \{E_{XY}^{i_{1}}, E_{XY}^{i_{2}}, \ldots, E_{XY}^{i_{\ell}}\}\] of $E_{XY}$, with associated colors $c_{i_{1}}, c_{i_{2}}, \ldots, c_{i_{\ell}}$, with $\ell \leq k$. Without loss of generality we renumber $c_{i_{1}}, c_{i_{2}}, \ldots, c_{i_{\ell}}$ as $c_{1}, c_{2}, \ldots, c_{\ell}$ and the edges in cell $E^{r}_{XY}$ are said to have color $c_r$. As before, a subset $C$ of $V$ is called a {\em coloring set} if the vertices in $C$ are initially colored black and those in $V \setminus C$ initially colored white. We will now define the notion of {\em color-perfect white neighbor}. \begin{defn}\label{d:CPN} Let $X \subseteq C$ and $Y \subseteq V$ with $|Y| = |X|$. We call $Y$ a {\em color-perfect white neighbor\/} of $X$ if \begin{enumerate} \item $Y = N_{V \setminus C}(X)$, i.e., $Y$ is equal to the set of white out-neighbors of $X$, and \item in the associated colored bipartite graph $G=(X,Y,E_{XY},\pi_{XY})$ there exists a perfect matching and exactly one equivalence class of perfect matchings has nonzero signature. \end{enumerate} \end{defn} Based on the notion of color-perfect white neighbor, we now introduce the following color change rule: if $X\subseteq C$ and $Y$ is a color-perfect white neighbor of $X$, then we change the color of all vertices in $Y$ to black, and write $X {\xrightarrow{c}} Y$. Such a color change is called a {\em force}. We define {\em a derived set\/} $\mathcal{D}_c(C)$ as a set of black vertices obtained after repeated application of the color change rule, until no more changes are possible. In contrast with the original color change rule (see Section \ref{s:ZFS}), under our new color change rule derived sets will no longer be uniquely defined, and may depend on the particular list of forces that is applied to the original coloring set $C$. This is illustrated by Example \ref{ex:counterexample} in the Appendix.
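For small graphs, condition 2) of Definition~\ref{d:CPN} can be checked by brute force: enumerate the perfect matchings of the colored bipartite graph, group them by spectrum, and sum the signs within each group. The following Python sketch does this for the bipartite graph of Example~\ref{ex:coloredbg}; the function names (\texttt{signatures}, \texttt{is\_color\_perfect}) and the dictionary encoding of colored edges are our own illustrative choices.

```python
from itertools import permutations
from collections import Counter, defaultdict

def sign(perm):
    # parity of a permutation via its number of inversions
    inv = sum(perm[i] > perm[j]
              for i in range(len(perm)) for j in range(i + 1, len(perm)))
    return -1 if inv % 2 else 1

def signatures(color, t):
    """Group the perfect matchings of a colored bipartite graph by spectrum
    and sum their signs.  `color[(x, y)]` is the color of edge {x, y};
    missing pairs are non-edges; X and Y are both indexed by 0, ..., t-1."""
    sgn = defaultdict(int)
    for gamma in permutations(range(t)):
        if all((i, gamma[i]) in color for i in range(t)):  # perfect matching
            # the spectrum is a multiset of colors, encoded as frozen counts
            spec = frozenset(Counter(color[(i, gamma[i])]
                                     for i in range(t)).items())
            sgn[spec] += sign(gamma)
    return sgn

def is_color_perfect(color, t):
    # condition 2) of the definition: at least one perfect matching exists
    # and exactly one equivalence class has nonzero signature
    sgn = signatures(color, t)
    return len(sgn) > 0 and sum(s != 0 for s in sgn.values()) == 1

# bipartite graph of the example: X = {1,2,3}, Y = {4,5,6}, 0-indexed
color = {(0, 0): 'c2', (0, 1): 'c2', (0, 2): 'c3',
         (1, 0): 'c2', (1, 1): 'c1',
         (2, 0): 'c2', (2, 2): 'c3'}
assert is_color_perfect(color, 3)
```

The enumeration is exponential in $|X|$, so this is only a verification aid for small examples, not an efficient decision procedure.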
A coloring set $C \subseteq V$ is called a {\em zero forcing set for} $\mathcal{G}(\pi)$ if there exists a derived set $\mathcal{D}_c(C)$ such that $\mathcal{D}_c(C) = V$. Before illustrating the new color change rule, we remark on its relation to the one defined earlier. \begin{remark} Given a directed graph $\mathcal{G} = (V,E)$, one can obtain a colored graph $\mathcal{G}(\pi) = (V,E,\pi)$ by assigning to every edge a different color, i.e., $|\pi| = |E|$. Clearly, the colored qualitative class $\mathcal{Q}_{\pi}(\mathcal{G})$ coincides with the qualitative class $\mathcal{Q}(\mathcal{G})$. In addition, the original color change rule for $\mathcal{G}$ introduced in Section \ref{s:ZFS} can be seen to be a special case of the new one for $\mathcal{G}(\pi)$. With this observation in mind, we will use the same terminology for these two color change rules; it will be clear from the context which one is employed. \end{remark} We now illustrate the new color change rule by means of an example. \begin{ex} \label{ex:ZFS} \begin{figure}[h!]
\centering \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture}[scale=0.4] \tikzset{VertexStyle1/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 1.5pt, outer sep = 0pt, minimum size = 8 pt}, edge/.style={->,> = latex', text = black} } \tikzset{VertexStyle2/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 2pt, outer sep = 0pt, minimum size = 10 pt}} \node[VertexStyle2](1) at (-3,4) {$1$}; \node[VertexStyle2](2) at (-3,0) {$2$}; \node[VertexStyle2](3) at (-3,-4) {$3$}; \node[VertexStyle1](4) at (3,4) {$4$}; \node[VertexStyle1](5) at (3,0) {$5$}; \node[VertexStyle1](6) at (3,-4) {$6$}; \node[VertexStyle1](7) at (9,4) {$7$}; \node[VertexStyle1](8) at (9,0) {$8$}; \node[VertexStyle1](9) at (9,-4) {$9$}; \Edge[style = {->,> = latex',pos = 0.5},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](1)(4); \Edge[ style = {->,> = latex',pos = 0.2},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](1)(5); \Edge[style = {->,> = latex',pos = 0.7},color=blue , label = $c_{3}$,labelstyle={inner sep=0pt}](1)(6); \Edge[ style = {->,> = latex',pos = 0.7},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](2)(4); \Edge[ style = {->,> = latex',pos = 0.3},color=red , label = $c_{1}$,labelstyle={inner sep=0pt}](2)(5); \Edge[style = {->,> = latex',pos = 0.3},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](3)(4); \Edge[style = {->,> = latex',pos = 0.5},color=blue , label = $c_{3}$,labelstyle={inner sep=0pt}](3)(6); \Edge[style = {->,> = latex',pos = 0.5},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](4)(7); \Edge[ style = {->,> = latex',pos = 0.2},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](4)(8); \Edge[style = {->,> = latex',pos = 0.3},color=blue , label = $c_{3}$,labelstyle={inner sep=0pt}](5)(8); \Edge[ style = {->,> = latex',pos = 0.7},color=blue , label = $c_{3}$,labelstyle={inner sep=0pt}](5)(9); \Edge[ style = {->,> = latex',pos = 0.2},color=red , label 
= $c_{1}$,labelstyle={inner sep=0pt}](6)(7); \Edge[style = {->,> = latex',pos = 0.3},color=red , label = $c_{1}$,labelstyle={inner sep=0pt}](6)(9); \Edge[style={bend right,->,> = latex'},color = blue, label = $c_{3}$,labelstyle={inner sep=0pt}](1)(3) \Edge[style={bend left,->,> = latex'},color = blue, label = $c_{3}$,labelstyle={inner sep=0pt}](8)(9) \Edge[style={->,> = latex'},color = red, label = $c_{1}$,labelstyle={inner sep=0pt}](4)(5) \end{tikzpicture} \caption{Initial.} \label{g:CZFS1} \end{subfigure} \medskip \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture}[scale=0.4] \tikzset{VertexStyle1/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 1.5pt, outer sep = 0pt, minimum size = 8 pt}, edge/.style={->,> = latex', text = black} } \tikzset{VertexStyle2/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 2pt, outer sep = 0pt, minimum size = 10 pt}} \node[VertexStyle2](1) at (-3,4) {$1$}; \node[VertexStyle2](2) at (-3,0) {$2$}; \node[VertexStyle2](3) at (-3,-4) {$3$}; \node[VertexStyle2](4) at (3,4) {$4$}; \node[VertexStyle2](5) at (3,0) {$5$}; \node[VertexStyle2](6) at (3,-4) {$6$}; \node[VertexStyle1](7) at (9,4) {$7$}; \node[VertexStyle1](8) at (9,0) {$8$}; \node[VertexStyle1](9) at (9,-4) {$9$}; \Edge[style = {->,> = latex',pos = 0.5},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](1)(4); \Edge[ style = {->,> = latex',pos = 0.2},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](1)(5); \Edge[style = {->,> = latex',pos = 0.7},color=blue , label = $c_{3}$,labelstyle={inner sep=0pt}](1)(6); \Edge[ style = {->,> = latex',pos = 0.7},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](2)(4); \Edge[ style = {->,> = latex',pos = 0.3},color=red , label = $c_{1}$,labelstyle={inner sep=0pt}](2)(5); \Edge[style = {->,> = latex',pos = 0.3},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](3)(4); \Edge[style = {->,> = latex',pos = 0.5},color=blue , label = 
$c_{3}$,labelstyle={inner sep=0pt}](3)(6); \Edge[style = {->,> = latex',pos = 0.5},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](4)(7); \Edge[ style = {->,> = latex',pos = 0.2},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](4)(8); \Edge[style = {->,> = latex',pos = 0.3},color=blue , label = $c_{3}$,labelstyle={inner sep=0pt}](5)(8); \Edge[ style = {->,> = latex',pos = 0.7},color=blue , label = $c_{3}$,labelstyle={inner sep=0pt}](5)(9); \Edge[ style = {->,> = latex',pos = 0.2},color=red , label = $c_{1}$,labelstyle={inner sep=0pt}](6)(7); \Edge[style = {->,> = latex',pos = 0.3},color=red , label = $c_{1}$,labelstyle={inner sep=0pt}](6)(9); \Edge[style={bend right,->,> = latex'},color = blue, label = $c_{3}$,labelstyle={inner sep=0pt}](1)(3) \Edge[style={bend left,->,> = latex'},color = blue, label = $c_{3}$,labelstyle={inner sep=0pt}](8)(9) \Edge[style={->,> = latex'},color = red, label = $c_{1}$,labelstyle={inner sep=0pt}](4)(5) \end{tikzpicture} \caption{Step $1$.} \label{g:CZFS2} \end{subfigure} \medskip \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture}[scale=0.4] \tikzset{VertexStyle1/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 1.5pt, outer sep = 0pt, minimum size = 8 pt}, edge/.style={->,> = latex', text = black} } \tikzset{VertexStyle2/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 2pt, outer sep = 0pt, minimum size = 10 pt}} \node[VertexStyle2](1) at (-3,4) {$1$}; \node[VertexStyle2](2) at (-3,0) {$2$}; \node[VertexStyle2](3) at (-3,-4) {$3$}; \node[VertexStyle2](4) at (3,4) {$4$}; \node[VertexStyle2](5) at (3,0) {$5$}; \node[VertexStyle2](6) at (3,-4) {$6$}; \node[VertexStyle2](7) at (9,4) {$7$}; \node[VertexStyle2](8) at (9,0) {$8$}; \node[VertexStyle2](9) at (9,-4) {$9$}; \Edge[style = {->,> = latex',pos = 0.5},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](1)(4); \Edge[ style = {->,> = latex',pos = 0.2},color=green , label = 
$c_{2}$,labelstyle={inner sep=0pt}](1)(5); \Edge[style = {->,> = latex',pos = 0.7},color=blue , label = $c_{3}$,labelstyle={inner sep=0pt}](1)(6); \Edge[ style = {->,> = latex',pos = 0.7},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](2)(4); \Edge[ style = {->,> = latex',pos = 0.3},color=red , label = $c_{1}$,labelstyle={inner sep=0pt}](2)(5); \Edge[style = {->,> = latex',pos = 0.3},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](3)(4); \Edge[style = {->,> = latex',pos = 0.5},color=blue , label = $c_{3}$,labelstyle={inner sep=0pt}](3)(6); \Edge[style = {->,> = latex',pos = 0.5},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](4)(7); \Edge[ style = {->,> = latex',pos = 0.2},color=green , label = $c_{2}$,labelstyle={inner sep=0pt}](4)(8); \Edge[style = {->,> = latex',pos = 0.3},color=blue , label = $c_{3}$,labelstyle={inner sep=0pt}](5)(8); \Edge[ style = {->,> = latex',pos = 0.7},color=blue , label = $c_{3}$,labelstyle={inner sep=0pt}](5)(9); \Edge[ style = {->,> = latex',pos = 0.2},color=red , label = $c_{1}$,labelstyle={inner sep=0pt}](6)(7); \Edge[style = {->,> = latex',pos = 0.3},color=red , label = $c_{1}$,labelstyle={inner sep=0pt}](6)(9); \Edge[style={bend right,->,> = latex'},color = blue, label = $c_{3}$,labelstyle={inner sep=0pt}](1)(3) \Edge[style={bend left,->,> = latex'},color = blue, label = $c_{3}$,labelstyle={inner sep=0pt}](8)(9) \Edge[style={->,> = latex'},color = red, label = $c_{1}$,labelstyle={inner sep=0pt}](4)(5) \end{tikzpicture} \caption{Step $2$.} \label{g:CZFS3} \end{subfigure} \caption{An example of a zero forcing set.} \label{g:CZFS} \end{figure} Figure \ref{g:CZFS} illustrates the repeated application of zero forcing in the context of colored graphs. In Figure \ref{g:CZFS1}, initially, vertices $\{1,2,3\}$ are black and the remaining vertices are white. As shown in Example~\ref{ex:coloredbg}, $\{4,5,6\}$ is a color-perfect white neighbor of $\{1,2,3\}$. 
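These color change steps admit a quick numerical cross-check. The sketch below is ours, not part of the formal development; it adopts the convention that the entry in row $i$, column $j$ is the weight of the edge from the $i$-th vertex of $X$ to the $j$-th vertex of $Y$, with the weights read off Figure \ref{g:CZFS1}. It confirms that both bipartite blocks arising in the zero forcing chain are nonsingular for every nonzero realization of the colors $c_{1}, c_{2}, c_{3}$:

```python
# Sanity check of the two bipartite blocks in Figure g:CZFS1 (convention:
# entry (i, j) = weight of the edge from the i-th vertex of X to the j-th
# vertex of Y). Both determinants are nonzero whenever c1, c2, c3 are.

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def blocks(c1, c2, c3):
    M1 = [[c2, c2, c3], [c2, c1, 0], [c2, 0, c3]]   # X = {1,2,3}, Y = {4,5,6}
    M2 = [[c2, c2, 0], [0, c3, c3], [c1, 0, c1]]    # X = {4,5,6}, Y = {7,8,9}
    return M1, M2

# Closed forms: det M1 = -c2**2 * c3 and det M2 = 2*c1*c2*c3; the two
# matchings of M2 contribute with the same sign, matching signature 2.
for c1, c2, c3 in [(1, 1, 1), (2, -3, 5), (-1, 4, -7)]:
    M1, M2 = blocks(c1, c2, c3)
    assert det3(M1) == -c2**2 * c3 != 0
    assert det3(M2) == 2 * c1 * c2 * c3 != 0
```

The matchings of the second block add rather than cancel, which is exactly the nonzero-signature condition of the color change rule.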
Therefore, we have $\{1,2,3\} \xrightarrow{c} \{4,5,6\}$. Next, observe that the colored bipartite graph associated with $X= \{4,5,6\}$ and $Y= \{7,8,9\}$ has two perfect matchings, with identical spectra and the same sign $1$. Hence the single equivalence class has signature $2$. As such, $\{7,8,9\}$ is a color-perfect white neighbor of $\{4,5,6\}$. Therefore, we have $\{4,5,6\} \xrightarrow{c} \{7,8,9\}$. Consequently, we conclude that the vertex set $\{1,2,3\}$ is a zero forcing set for $\mathcal{G}(\pi)$. \end{ex} Next, we explore the relationship between zero forcing sets and controllability of $(\mathcal{G}(\pi);V_{L})$. First, we show that applying the color change rule does not affect controllability. This is stated in the following theorem. \begin{thm}\label{t:cec} Let $\mathcal{G}(\pi)$ be a colored directed graph and let $C \subseteq V$ be a coloring set. Suppose that $X \xrightarrow{c} Y$ with $X \subseteq C$ and $Y \subseteq V \setminus C$. Then, $(\mathcal{G}(\pi);C)$ is controllable if and only if $(\mathcal{G}(\pi);C\cup Y)$ is controllable. \end{thm} \begin{IEEEproof} Due to Lemma~\ref{l:CSSCBS}, it suffices to show that $\mathcal{D}_{z}(C) = V$ if and only if $\mathcal{D}_{z}(C \cup Y) = V$ for all weighted graphs $\mathcal{G}(W) = (V,E,W)$ with $W \in \mathcal{W}_{\pi}(\mathcal{G})$. Here, $C$ and $C \cup Y$ are taken as zero vertex sets. Let $W \in \mathcal{W}_{\pi}(\mathcal{G})$ and $\mathcal{G}(W) = (V,E,W)$. By definition of the color change rule, $X \xrightarrow{c} Y$ means that $Y = N_{V \setminus C}(X)$ and there exists exactly one equivalence class of perfect matchings with nonzero signature in the colored bipartite graph $G = (X, Y, E_{XY}, \pi_{XY})$. By applying Theorem \ref{t:NoP}, we find that all matrices in the pattern class of $G$ are nonsingular. Now, let $x_{1},x_2,\ldots,x_{n}$ be variables assigned to the vertices in $V$, with $x_{j} = 0$ for $j \in C$ and $x_j$ undetermined for the remaining vertices.
For the vertices $j \in C$, consider the balance equations \eqref{e:BE}. By the fact that $W_{kj} = 0$ for all $k \in V \setminus C$ with $k \notin N_{V \setminus C}(\{j\})$, the system of balance equations \eqref{e:BE} for the vertices $j \in X$ can be written as \begin{equation}\label{e:balancing} x_{Y}^T W_{Y,X} = 0. \end{equation} We now observe that the submatrix $W_{Y,X}$ of $W$ belongs to the pattern class of $G$. Using the fact that all matrices in this pattern class are nonsingular, we obtain that $x^T_{Y} = 0$. By the definition of the zero extension rule, we have that $X \xrightarrow{z} Y$ for $\mathcal{G}(W)$ with the set of zero vertices $C$. It then follows immediately that $C \cup Y \subseteq \mathcal{D}_{z}(C)$ and thus $\mathcal{D}_{z}(C \cup Y) = \mathcal{D}_{z}(C)$. As a consequence, $C$ is a balancing set for $\mathcal{G}(W)$ if and only if $C \cup Y$ is a balancing set for $\mathcal{G}(W)$. Since this holds for an arbitrary choice of $W$ in $\mathcal{W}_{\pi}(\mathcal{G})$, the result follows immediately from Lemma~\ref{l:CSSCBS}. \end{IEEEproof} By Theorem \ref{t:cec}, colored strong structural controllability is invariant under application of the color change rule. We then obtain the following corollary. \begin{col}\label{c:cec} Let $\mathcal{G}(\pi)$ be a colored directed graph, let $V_L \subseteq V$ be a leader set and let $\mathcal{D}_{c}(V_{L})$ be a derived set. Then $(\mathcal{G}(\pi);V_L)$ is controllable if and only if $(\mathcal{G}(\pi);\mathcal{D}_{c}(V_{L}))$ is controllable. \end{col} As an immediate consequence of Corollary \ref{c:cec} we arrive at the main result of this section, which provides a sufficient graph-theoretic condition for controllability of $(\mathcal{G}(\pi);V_{L})$. \begin{thm}\label{t:coloredzfs} Let $\mathcal{G}(\pi)=(V,E,\pi)$ be a colored directed graph with leader set $V_{L} \subseteq V$. If $V_{L}$ is a zero forcing set, then $(\mathcal{G}(\pi);V_{L})$ is controllable.
\end{thm} \begin{IEEEproof} The proof follows immediately from Corollary \ref{c:cec} and the fact that, trivially, $(\mathcal{G}(\pi);V)$ is controllable. \end{IEEEproof} To conclude this section, we will provide a counterexample showing that the condition in Theorem \ref{t:coloredzfs} is not necessary. \begin{ex} \label{e:counterexample1} \begin{figure}[h!] \centering \begin{tikzpicture}[scale=0.5] \tikzset{VertexStyle1/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 2pt, outer sep = 0pt, minimum size = 10 pt}, edge/.style={->,> = latex', text = black} } \tikzset{VertexStyle2/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 2pt, outer sep = 0pt, minimum size = 10 pt}} \node[VertexStyle2](1) at (-2,2) {$1$}; \node[VertexStyle2](2) at (2,2) {$2$}; \node[VertexStyle1](3) at (-5,-2) {$3$}; \node[VertexStyle1](4) at (0,-2) {$4$}; \node[VertexStyle1](5) at (5,-2) {$5$}; \Edge[style={bend left,->,> = latex'},label = $c_{2}$,color = red,labelstyle={inner sep=0pt}](1)(2); \Edge[style={bend right,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](1)(3); \Edge[ label =$c_{1}$, style = {->,> = latex',pos = 0.7},color = blue,labelstyle={inner sep=0pt}](1)(4); \Edge[style = {->,> = latex',pos = 0.7},color=red,label = $c_{2}$,labelstyle={inner sep=0pt}](2)(3); \Edge[label =$c_{2}$, style = {->,> = latex'},color=red ,labelstyle={inner sep=0pt}](2)(4); \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](2)(5) \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](5)(4) \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](4)(3) \end{tikzpicture} \caption{An example to show that $V_{L}$ being a zero forcing set is not a necessary condition for controllability of $(\mathcal{G}(\pi);V_{L})$.} \label{g:CSDCG} \end{figure} Consider the colored graph
$\mathcal{G}(\pi)$ depicted in Figure \ref{g:CSDCG} with leader set $V_{L} = \{1,2\}$. Clearly, since none of the subsets $\{1,2\}$, $\{1\}$ and $\{2\}$ have color-perfect white neighbors, there does not exist a derived set $\mathcal{D}_{c}(V_{L})$ that equals $V$. Hence $V_{L}$ is not a zero forcing set. We will show, however, that $(\mathcal{G}(\pi);V_{L})$ is controllable. Due to Lemma~\ref{l:CSSCBS}, it is sufficient to show that $V_{L}$ is a balancing set for all weighted graphs $\mathcal{G}(W)$ with $W \in \mathcal{W}_{\pi}(\mathcal{G})$. To do this, let $W \in \mathcal{W}_{\pi}(\mathcal{G})$ correspond to a realization $\{c_{1},c_{2}\}$ of the color set, with $c_{1}$ and $c_{2}$ nonzero real numbers. Assign variables $x_{1},\ldots,x_{5}$ to the vertices in $V$. Let $x_{1} = x_{2}= 0$ and let $x_{3}$, $x_{4}$ and $x_5$ be undetermined. The system of balance equations \eqref{e:BE} for the vertices $1$ and $2$ in $V_L$ is then given by \begin{align}\label{e:hsbe} \begin{split} c_{1}x_{3} + c_{1}x_{4}&=0,\\ c_{2}x_{3} + c_{2}x_{4} +c_{1}x_{5}&=0. \end{split} \end{align} Since $c_{1} \neq 0$ and $c_{2} \neq 0$, the homogeneous system \eqref{e:hsbe} is equivalent to the system \begin{align}\label{e:hsbe1} \begin{split} c_{1}x_{3} + c_{1}x_{4}&=0,\\ c_{1}x_{5}&=0, \end{split} \end{align} which yields $x_{5} = 0$. By the definition of the zero extension rule, we therefore have $\{1,2\} \xrightarrow{z} \{5\}$. Repeated application of the zero extension rule then shows that $V_{L}$ is a balancing set: the balance equation of vertex $5$ yields $x_{4} = 0$, after which the balance equation of vertex $4$ yields $x_{3} = 0$. Since the matrix $W \in \mathcal{W}_{\pi}(\mathcal{G})$ was chosen arbitrarily, we conclude that $V_{L}$ is a balancing set for all weighted graphs $\mathcal{G}(W)$ with $W \in \mathcal{W}_{\pi}(\mathcal{G})$. Thus we have found a counterexample to the necessity of the condition in Theorem \ref{t:coloredzfs}.
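This conclusion can also be corroborated numerically. The following sketch is ours and rests on an assumed convention (not fixed by the text above): dynamics $\dot{x} = Wx$ with $W_{kj}$ the weight of edge $(j,k)$, and the leaders $1$ and $2$ entering through the indicator columns $e_{1}, e_{2}$ of $B$. It tests the Kalman rank condition for random nonzero realizations of $c_{1}$ and $c_{2}$:

```python
# Numerical sanity check: for random nonzero realizations c1, c2 the leader
# set {1, 2} renders the system controllable, even though it is not a zero
# forcing set. Assumed convention: x' = W x, W[k][j] = weight of edge (j, k).
import random

def rank(rows, tol=1e-9):
    """Rank of a matrix (list of rows) via Gaussian elimination."""
    m = [r[:] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = max(range(r, len(m)), key=lambda i: abs(m[i][c]), default=None)
        if piv is None or abs(m[piv][c]) < tol:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def controllable(c1, c2):
    n = 5
    edges = {(1, 2): c2, (1, 3): c1, (1, 4): c1, (2, 3): c2,
             (2, 4): c2, (2, 5): c1, (5, 4): c1, (4, 3): c1}
    W = [[0.0] * n for _ in range(n)]
    for (j, k), w in edges.items():
        W[k - 1][j - 1] = w
    cols = [[1.0 if i == l else 0.0 for i in range(n)] for l in (0, 1)]  # B
    ctrb = []
    for _ in range(n):  # collect the columns of [B, WB, ..., W^{n-1}B]
        ctrb.extend(col[:] for col in cols)
        cols = [[sum(W[i][j] * col[j] for j in range(n)) for i in range(n)]
                for col in cols]
    return rank(ctrb) == n  # rank of the transpose equals rank of ctrb

random.seed(0)
assert all(controllable(random.uniform(0.5, 2), random.uniform(0.5, 2))
           for _ in range(20))
```

Since strong structural controllability quantifies over all nonzero realizations, a randomized check of this kind can only corroborate the claim, not prove it.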
\end{ex} \section{Elementary Edge Operations and Derived Colored Graphs} In the previous section, in Theorem \ref{t:coloredzfs}, we established a sufficient condition for colored strong structural controllability. In the present section we will establish another sufficient graph-theoretic condition. This new condition is based on so-called {\em elementary edge operations}. These are operations that can be performed on the given colored graph and that preserve colored strong structural controllability. These edge operations are motivated by the observation that elementary operations on the systems of balance equations appearing in the zero extension rule do not modify the set of solutions to these linear equations. Indeed, in Example~\ref{e:counterexample1}, we verified that $\{1,2\} \xrightarrow{z} \{5\}$ for all weighted graphs $\mathcal{G}(W)$ with $W \in \mathcal{W}_{\pi}(\mathcal{G})$. This is due to the fact that the system of balance equations \eqref{e:hsbe} is equivalent to \eqref{e:hsbe1}, implying that $x_{5} = 0$ for all nonzero values $c_{1}$ and $c_{2}$. To generalize and visualize this idea on the level of the colored graph, we now introduce two types of elementary edge operations. Let $C \subseteq V$ be a coloring set, i.e., the set of vertices initially colored black. The complement $V \setminus C$ is the set of white vertices. For two vertices $u, v \in C$ (where $u$ and $v$ can be the same vertex), we define \[\mathcal{E}_{u}(v) := \{(v,j) \in E \mid j \in N_{V \setminus C}(u)\},\] the subset of edges from $v$ to the white out-neighbors of $u$. We now introduce the following two elementary edge operations: \begin{enumerate} \item{({\em Turn color}) If all edges in $\mathcal{E}_{u}(u)$ have the same color, say $c_{i}$, then change the color of these edges to any other color in the color set.} \item{({\em Remove edges}) Assume $N_{V \setminus C}(u) \subseteq N_{V \setminus C}(v)$.
If, for every $k \in N_{V \setminus C}(u)$, the two edges $(u,k)$ and $(v,k)$ have the same color, then remove all edges in $\mathcal{E}_{u}(v)$.} \end{enumerate} The above elementary edge operations can be applied sequentially and, obviously, will not introduce new colors or add new edges. In the sequel, we will denote an edge operation by the symbol $o$. Applying the edge operation $o$ to $\mathcal{G}(\pi)$, we obtain a new colored graph $\mathcal{G}'(\pi') = (V,E',\pi')$. We then call $\mathcal{G}'(\pi')$ a derived graph of $\mathcal{G}(\pi)$ associated with $C$ and $o$. We denote such a derived graph by $\mathcal{G}(\pi,C,o)$. The application of a sequence of elementary edge operations is illustrated in the following example. \begin{ex}\label{ex:eeo} For the colored graph $\mathcal{G}(\pi) = (V,E,\pi)$ depicted in Figure~\ref{g:exeop1}, let $C = \{1,2\}$ be the coloring set. For the vertex $1 \in C$, we have $\mathcal{E}_{1}(1) = \{(1,3),(1,4)\}$, in which both edges have the same color $c_{1}$. We apply the turn color operation to change the colors of $(1,3)$ and $(1,4)$ to $c_{2}$. Denote this operation by $o_{1}$. We then obtain the derived colored graph $\mathcal{G}(\pi,C,o_1)$ of $\mathcal{G}(\pi)$ with respect to $C$ and $o_1$, which is denoted by $\mathcal{G}_{1}(\pi_{1})$ and shown in Figure~\ref{g:exeop2}. In addition, for the vertices $1$ and $2$ in $\mathcal{G}_{1}(\pi_{1})$, we have $N_{V \setminus C}(1) \subseteq N_{V \setminus C}(2)$, where $N_{V \setminus C}(1) = \{3, 4\}$ and $N_{V \setminus C}(2) = \{3,4,5\}$. Moreover, for every $k \in N_{V \setminus C}(1)$, the two edges $(1,k)$ and $(2,k)$ have the same color. Performing the edge removal operation denoted by $o_{2}$, we then remove all the edges in $\mathcal{E}_{1}(2) = \{(2,3),(2,4)\}$. Thus we obtain the derived colored graph $\mathcal{G}_1(\pi_1, C, o_2)$ of $\mathcal{G}_1(\pi_1)$ with respect to $C$ and $o_2$, which is denoted by $\mathcal{G}_{2}(\pi_{2})$ and depicted in Figure~\ref{g:exeop3}.
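The two operations in this example can be replayed in a few lines of code. The sketch below is our own illustration (the dictionary representation \texttt{\{(tail, head): color\}} and the function names are not from the text); it checks the preconditions of each operation before applying it:

```python
# Minimal sketch of the two elementary edge operations, replayed on the
# graph of this example with coloring set C = {1, 2}.

def white_out_neighbors(edges, C, u):
    return {k for (j, k) in edges if j == u and k not in C}

def E(edges, C, u, v):
    """E_u(v): edges from v to the white out-neighbors of u."""
    return {(v, k) for k in white_out_neighbors(edges, C, u) if (v, k) in edges}

def turn_color(edges, C, u, new_color):
    """Operation 1: recolor E_u(u), provided all its edges share one color."""
    target = E(edges, C, u, u)
    assert len({edges[e] for e in target}) == 1, "edges must share a color"
    return {e: (new_color if e in target else c) for e, c in edges.items()}

def remove_edges(edges, C, u, v):
    """Operation 2: delete E_u(v) when N_white(u) is contained in N_white(v)
    and (u, k), (v, k) agree in color for every white out-neighbor k of u."""
    Nu, Nv = white_out_neighbors(edges, C, u), white_out_neighbors(edges, C, v)
    assert Nu <= Nv and all(edges[(u, k)] == edges[(v, k)] for k in Nu)
    return {e: c for e, c in edges.items() if e not in E(edges, C, u, v)}

C = {1, 2}
G = {(1, 2): 'c2', (1, 3): 'c1', (1, 4): 'c1', (2, 3): 'c2',
     (2, 4): 'c2', (2, 5): 'c1', (5, 4): 'c1', (4, 3): 'c1'}
G1 = turn_color(G, C, 1, 'c2')      # o1: (1,3), (1,4) become c2
G2 = remove_edges(G1, C, 1, 2)      # o2: drop E_1(2) = {(2,3), (2,4)}
assert (2, 3) not in G2 and (2, 4) not in G2 and G2[(1, 3)] == 'c2'
```

Applying $o_{1}$ and then $o_{2}$ reproduces exactly the edge set of the derived graph $\mathcal{G}_{2}(\pi_{2})$.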
\end{ex} \begin{figure}[h!] \centering \begin{subfigure}{0.5\textwidth} \centering \begin{tikzpicture}[scale=0.45] \tikzset{VertexStyle1/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 2pt, outer sep = 0pt, minimum size = 10 pt}, edge/.style={->,> = latex', text = black} } \tikzset{VertexStyle2/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 2pt, outer sep = 0pt, minimum size = 10 pt}} \node[VertexStyle2](1) at (-2,2) {$1$}; \node[VertexStyle2](2) at (2,2) {$2$}; \node[VertexStyle1](3) at (-5,-2) {$3$}; \node[VertexStyle1](4) at (0,-2) {$4$}; \node[VertexStyle1](5) at (5,-2) {$5$}; \Edge[style={bend left,->,> = latex'},label = $c_{2}$,color = red,labelstyle={inner sep=0pt}](1)(2); \Edge[style={bend right,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](1)(3); \Edge[ label =$c_{1}$, style = {->,> = latex',pos = 0.7},color = blue,labelstyle={inner sep=0pt}](1)(4); \Edge[style = {->,> = latex',pos = 0.7},color=red,label = $c_{2}$,labelstyle={inner sep=0pt}](2)(3); \Edge[label =$c_{2}$, style = {->,> = latex'},color=red ,labelstyle={inner sep=0pt}](2)(4); \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](2)(5) \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](5)(4) \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](4)(3) \end{tikzpicture} \caption{Initial colored graph $\mathcal{G}(\pi) = (V,E,\pi)$.} \label{g:exeop1} \end{subfigure} \medskip \begin{subfigure}{0.5\textwidth} \centering \begin{tikzpicture}[scale=0.45] \tikzset{VertexStyle1/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 2pt, outer sep = 0pt, minimum size = 10 pt}, edge/.style={->,> = latex', text = black} } \tikzset{VertexStyle2/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 2pt, outer sep = 0pt, minimum size = 10 
pt}} \node[VertexStyle2](1) at (-2,2) {$1$}; \node[VertexStyle2](2) at (2,2) {$2$}; \node[VertexStyle1](3) at (-5,-2) {$3$}; \node[VertexStyle1](4) at (0,-2) {$4$}; \node[VertexStyle1](5) at (5,-2) {$5$}; \Edge[style={bend left,->,> = latex'},label = $c_{2}$,color = red,labelstyle={inner sep=0pt}](1)(2); \Edge[style={bend right,->,> = latex'},label = $c_{2}$,color = red,labelstyle={inner sep=0pt}](1)(3); \Edge[ label =$c_{2}$, style = {->,> = latex',pos = 0.7},color = red,labelstyle={inner sep=0pt}](1)(4); \Edge[style = {->,> = latex',pos = 0.7},color = red,label = $c_{2}$,labelstyle={inner sep=0pt}](2)(3); \Edge[label =$c_{2}$, style = {->,> = latex'},color = red ,labelstyle={inner sep=0pt}](2)(4); \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](2)(5) \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](5)(4) \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](4)(3) \end{tikzpicture} \caption{Derived colored graph $\mathcal{G}_{1}(\pi_{1}) = \mathcal{G}(\pi,C, o_1)$ where $o_1$ represents `turning the colors of $(1,3)$ and $(1,4)$ to $c_{2}$'.} \label{g:exeop2} \end{subfigure} \medskip \begin{subfigure}{0.5\textwidth} \centering \begin{tikzpicture}[scale=0.45] \tikzset{VertexStyle1/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 2pt, outer sep = 0pt, minimum size = 10 pt}, edge/.style={->,> = latex', text = black}} \tikzset{VertexStyle2/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 2pt, outer sep = 0pt, minimum size = 10 pt}} \node[VertexStyle2](1) at (-2,2) {$1$}; \node[VertexStyle2](2) at (2,2) {$2$}; \node[VertexStyle1](3) at (-5,-2) {$3$}; \node[VertexStyle1](4) at (0,-2) {$4$}; \node[VertexStyle1](5) at (5,-2) {$5$}; \Edge[style={bend right,->,> = latex'},label = $c_{2}$,color = red,labelstyle={inner sep=0pt}](1)(3); \Edge[style={bend left,->,> = latex'},label 
= $c_{2}$,color = red,labelstyle={inner sep=0pt}](1)(2); \Edge[ label =$c_{2}$, style = {->,> = latex',pos = 0.5},color = red,labelstyle={inner sep=0pt}](1)(4); \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](2)(5) \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](5)(4) \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](4)(3) \end{tikzpicture} \caption{Derived colored graph $\mathcal{G}_{2}(\pi_{2}) = \mathcal{G}_1(\pi_1,C,o_2)$ where $o_2$ represents `removing all the edges in $\mathcal{E}_{1}(2) = \{(2,3),(2,4)\}$'.} \label{g:exeop3} \end{subfigure} \caption{Example of performing elementary edge operations. } \label{g:exeop} \end{figure} Each elementary edge operation $o$ corresponds to a single vertex $u \in C$ or a pair of vertices $u,v \in C$. In the sequel we will denote this subset of $C$ corresponding to $o$ by $C(o)$. Thus, $C (o)$ is either a singleton or a set consisting of two elements. Next, we study the relationship between elementary edge operations and controllability of $(\mathcal{G}(\pi);V_{L})$. First we show that elementary edge operations preserve zero extension. This issue is addressed in the following lemma. \begin{lem}\label{l:eoet2} Let $\mathcal{G}(\pi)$ be a colored directed graph and $C$ be a coloring set. Let $o$ represent an edge operation and let $\mathcal{G'}(\pi') = \mathcal{G}(\pi,C,o)$ be a derived graph with respect to $C$ and $o$. Let $W \in \mathcal{W}_{\pi}(\mathcal{G})$ be a weighted adjacency matrix and let $W' \in \mathcal{W}_{\pi'}(\mathcal{G}')$ be the corresponding matrix associated with the same realization of the colors. Let $X \subseteq C \setminus C(o)$ and define $X' := C(o) \cup X$. 
Then, interpreting $C$ as the set of zero vertices, for any $Y \subseteq V$ we have $X' \xrightarrow{z} Y$ in the weighted graph $\mathcal{G}(W)$ if and only if $X' \xrightarrow{z} Y$ in the weighted graph $\mathcal{G'}(W')$. \end{lem} \begin{IEEEproof} By suitably relabeling the vertices, we may assume that $W$ has the form \[ W = \begin{bmatrix} W_{1,1} & W_{1,2} & \ldots & W_{1,6}\\ W_{2,1} & W_{2,2} & \ldots & W_{2,6}\\ W_{3,1} & W_{3,2} & \ldots & W_{3,6}\\ W_{4,1} & W_{4,2} & \ldots & W_{4,6}\\ W_{5,1} & W_{5,2} & \ldots & W_{5,6}\\ W_{6,1} & W_{6,2} & \ldots & W_{6,6}\\ \end{bmatrix}, \] where the row blocks correspond, in order, to the vertices indexed by $C(o)$, $X$, $C \setminus X'$, $N_{V \setminus C}(C(o))$, $N_{V \setminus C}(X') \setminus N_{V \setminus C}(C(o))$, and the remaining white vertices. The column blocks of $W$ result from the same labeling. Correspondingly, the matrix $W'$ must then be equal to \[ W' = \begin{bmatrix} W_{1,1} & W_{1,2} & \ldots & W_{1,6}\\ W_{2,1} & W_{2,2} & \ldots & W_{2,6}\\ W_{3,1} & W_{3,2} & \ldots & W_{3,6}\\ W'_{4,1} & W_{4,2} & \ldots & W_{4,6}\\ W_{5,1} & W_{5,2} & \ldots & W_{5,6}\\ W_{6,1} & W_{6,2} & \ldots & W_{6,6}\\ \end{bmatrix} \] for some matrix $W'_{4,1}$. Since the fourth and fifth row blocks correspond to the vertices indexed by $N_{V \setminus C}(C(o))$ and $N_{V \setminus C}(X') \setminus N_{V \setminus C}(C(o))$, respectively, it follows easily that $W_{5,1} = \mathbf{0}$, $W_{6,1} = \mathbf{0}$ and $W_{6,2} = \mathbf{0}$.
Consider the submatrices $W_{N_{V \setminus C}(X'), X'} = \begin{bmatrix} W_{4,1} & W_{4,2} \\ \mathbf{0} & W_{5,2} \\ \end{bmatrix}$ and $W'_{N_{V \setminus C}(X'), X'}= \begin{bmatrix} W'_{4,1} & W_{4,2} \\ \mathbf{0} & W_{5,2} \\ \end{bmatrix}$ of $W$ and $W'$, respectively. We then distinguish two cases: \begin{enumerate} \item Suppose the edge operation $o$ represents a color turn operation. In that case, $C(o)$ only contains one vertex, in other words, both $W_{4,1}$ and $W'_{4,1}$ consist of only one column. Hence, it follows that $W'_{4,1} = \alpha W_{4,1}$ for a suitable nonzero real number $\alpha$. \item Suppose the edge operation $o$ represents an edge removal operation. In that case $C(o)$ contains two vertices, say $u$ and $v$, and both $W_{4,1}$ and $W'_{4,1}$ consist of two columns. We may assume that $u$ and $v$ correspond to the first and second column of these matrices, respectively, and the edges in $\mathcal{E}_u(v)$ are removed. This implies that $$W'_{4,1} = W_{4,1} \begin{bmatrix} 1 & -1 \\ 0 & 1 \\ \end{bmatrix}.$$ \end{enumerate} Clearly, $W_{N_{V \setminus C}(X'), X'}$ and $W'_{N_{V \setminus C}(X'), X'}$ are column equivalent. Next, again assign variables $x_{1},\ldots,x_{n}$ to every vertex in $V$, where $x_{i}$ is equal to $0$ if $i \in C$ and otherwise undetermined. For the vertex $j \in C$ we consider the balance equation \eqref{e:BE}. By the fact that $W_{kj} = 0$ for all $k \in V \setminus C$ with $k \notin N_{V \setminus C}(\{j\})$ and $N_{V \setminus C}(\{j\}) \subseteq N_{V \setminus C}(X')$, equation \eqref{e:BE} is equivalent to \begin{equation}\label{e:BE2} \sum_{k \in N_{V \setminus C}(X')} x_{k}W_{kj} = 0. \end{equation} Again using the notation for the submatrix $W_{N_{V \setminus C}(X'), X'}$ and subvector $x_{N_{V \setminus C}(X')}$, we can rewrite the system of balance equations \eqref{e:BE2} for $j \in X'$ as \begin{equation}\label{e:ghe1} x_{N_{V \setminus C}(X')}^{T}W_{N_{V \setminus C}(X'), X'} = 0. 
\end{equation} Similarly, for the graph $\mathcal{G}'(W')$, we obtain the following system of balance equations for $j \in X'$: \begin{equation}\label{e:ghe2} x_{N_{V \setminus C}(X')}^{T}W'_{N_{V \setminus C}(X'), X'} = 0. \end{equation} Since $W'_{N_{V \setminus C}(X'), X'}$ and $W_{N_{V \setminus C}(X'), X'}$ are column equivalent, the solution sets of \eqref{e:ghe1} and \eqref{e:ghe2} coincide. By definition of the zero extension rule, we therefore have that, for any vertex set $Y$, $X' \xrightarrow{z} Y$ in $\mathcal{G}(W)$ if and only if $X' \xrightarrow{z} Y$ in $\mathcal{G}'(W')$. This completes the proof. \end{IEEEproof} It follows from the previous lemma that colored strong structural controllability is preserved under elementary edge operations. Indeed, we have the following. \begin{thm}\label{t:eeo} Let $\mathcal{G}(\pi)$ be a colored directed graph, let $V_L \subseteq V$ be a leader set, and let $o$ be an elementary edge operation. Let $\mathcal{G}'(\pi') = \mathcal{G}(\pi,V_L,o)$ be a derived colored graph of $\mathcal{G}(\pi)$ with respect to $V_L$ and $o$. Then $(\mathcal{G}(\pi);V_{L})$ is controllable if and only if $(\mathcal{G'}(\pi');V_L)$ is controllable. \end{thm} \begin{IEEEproof} The proof follows from Lemma \ref{l:CSSCBS} and Lemma \ref{l:eoet2}. \end{IEEEproof} As an immediate consequence of Theorem \ref{t:eeo} and Theorem \ref{t:coloredzfs}, we see that if the leader set $V_L$ of the original colored graph $\mathcal{G}(\pi)$ is a zero forcing set for the derived graph $\mathcal{G}'(\pi') = \mathcal{G}(\pi,V_L,o)$, then $(\mathcal{G}(\pi);V_L)$ is controllable. Obviously, this result immediately extends to derived graphs obtained by applying a finite sequence of edge operations. This leads to the following sufficient graph-theoretic condition for controllability of $(\mathcal{G}(\pi);V_{L})$. \begin{col} \label{c:ecs} Let $\mathcal{G}(\pi)$ be a colored directed graph and let $V_L$ be a leader set.
Let $\mathcal{G}'(\pi')$ be a colored graph obtained from $\mathcal{G}(\pi)$ by applying finitely many elementary edge operations. Then $(\mathcal{G}(\pi);V_L)$ is controllable if $V_L$ is a zero forcing set for $\mathcal{G}'(\pi')$. \end{col} \begin{ex} Again consider the colored graph in Example \ref{e:counterexample1}. We already saw that $V_L =\{1,2\}$ is not a zero forcing set, but that we nevertheless have strong structural controllability for this colored graph. This can now also be shown graph-theoretically by means of Corollary \ref{c:ecs}: the leader set $V_L$ is a zero forcing set for the derived graph in Figure \ref{g:exeop3}, so the original colored graph in Figure \ref{g:exeop1} yields a controllable system. \end{ex} By combining Theorem \ref{t:eeo} and Corollary \ref{c:cec} we are now in a position to establish yet another procedure for checking controllability of a given colored graph $(\mathcal{G}(\pi);V_L)$. Consider the following two steps: \begin{enumerate} \item First, apply the color change rule to compute a derived set $\mathcal{D}_{c}(V_{L})$. If this derived set equals $V$, we have controllability. If not, we cannot yet decide whether we have controllability. \item Next, apply an edge operation $o$ to $\mathcal{G}(\pi)$ to obtain $\mathcal{G}_{1}(\pi_{1})$, where $\mathcal{G}_{1}(\pi_{1}) = \mathcal{G}(\pi,\mathcal{D}_{c}(V_{L}), o)$ is a derived graph of $\mathcal{G}(\pi)$ with coloring set $\mathcal{D}_{c}(V_{L})$ and edge operation $o$. \end{enumerate} By Theorem \ref{t:eeo} and Corollary \ref{c:cec}, it is straightforward to verify that $(\mathcal{G}(\pi);V_L)$ is controllable if and only if $(\mathcal{G}_{1}(\pi_{1});\mathcal{D}_{c}(V_{L}))$ is controllable. We can now repeat steps 1 and 2, applying them to $\mathcal{G}_{1}(\pi_{1})$.
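The equivalence invoked in step 2 rests on the fact, established in the proof of Lemma \ref{l:eoet2}, that each elementary edge operation acts on the relevant block of the weight matrix as an invertible column operation, leaving the balance-equation solution set unchanged. A minimal numerical sketch on Example \ref{ex:eeo} (our convention: block rows indexed by the white vertices $3,4,5$, columns by $C = \{1,2\}$):

```python
# Each elementary edge operation is an invertible column operation on the
# weight block W_{N(X'), X'} (rows: white vertices 3, 4, 5; columns: C = {1, 2}).
# The values below are one realization; the identities hold for any nonzero
# c1, c2 (we pick numbers for which the floating-point products are exact).

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

c1, c2 = 2.0, 5.0
M  = [[c1, c2], [c1, c2], [0.0, c1]]    # block for the initial graph
M1 = [[c2, c2], [c2, c2], [0.0, c1]]    # after o1: recolor (1,3), (1,4) to c2
M2 = [[c2, 0.0], [c2, 0.0], [0.0, c1]]  # after o2: remove (2,3), (2,4)

# o1 scales column 1 by c2/c1; o2 subtracts column 1 from column 2.
assert matmul(M, [[c2 / c1, 0.0], [0.0, 1.0]]) == M1
assert matmul(M1, [[1.0, -1.0], [0.0, 1.0]]) == M2
```

Since both transformation matrices are invertible, the left solution sets $\{x : x^{T}M = 0\}$ coincide for all three blocks, which is precisely why controllability is preserved along the procedure.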
Successive and alternating application of these two steps enlarges the original leader set $V_L$ via the color change operations associated with the successive derived graphs appearing in the process. After finitely many iterations we thus arrive at a so-called {\em edge-operations-color-change derived set} of $V_L$, which will be denoted by $\mathcal{D}_{ec}(V_{L})$. This set remains unchanged if we apply step 1 or step 2 once more. Since controllability is preserved, we arrive at the following theorem. \begin{thm}\label{t:eocd} Let $\mathcal{G}(\pi)$ be a colored directed graph and let $V_L \subseteq V$ be a leader set. Let $\mathcal{D}_{ec}(V_{L})$ be an edge-operations-color-change derived set of $V_{L}$. We then have that $(\mathcal{G}(\pi);V_L)$ is controllable if $\mathcal{D}_{ec}(V_{L}) = V$. \end{thm} \begin{remark} Obviously, a derived set $\mathcal{D}_{c}(V_{L})$ of $V_{L}$ in $\mathcal{G}(\pi)$ is always contained in an edge-operations-color-change derived set $\mathcal{D}_{ec}(V_{L})$ of $V_{L}$. Hence the condition in Theorem \ref{t:eocd} is weaker than the conditions in Theorem \ref{t:coloredzfs} and Corollary \ref{c:ecs}. \end{remark} In the following example we illustrate the application of Theorem \ref{t:eocd} to check controllability of a given colored graph and leader set.
\begin{figure}[h!]\label{fi4al1} \centering \begin{subfigure}{0.45\textwidth} \centering \begin{tikzpicture}[scale=0.16] \tikzset{VertexStyle1/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 2pt, outer sep = 0pt, minimum size = 8 pt}} \tikzset{VertexStyle2/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 2pt, outer sep = 0pt, minimum size = 8 pt}} \tikzset{VertexStyle3/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 0.8pt, outer sep = 0pt, minimum size = 8 pt}} \node[VertexStyle2](1) at (8.840,2.3005) {$1$}; \node[VertexStyle2](2) at (10.4707,-6.6488) {$2$}; \node[VertexStyle1](3) at (-8.6887,-1.2771) {$3$}; \node[VertexStyle1](4) at (-5.0950,5.9327) {$4$}; \node[VertexStyle2](5) at (-15.4428,8.0927) {$5$}; \node[VertexStyle2](6) at (-14.3838,-8.6365) {$6$}; \node[VertexStyle2](7) at (-2.2519 , -6.7736) {$7$}; \node[VertexStyle1](8) at (-7.5493,-8.1412) {$8$}; \node[VertexStyle1](9) at (18.7030,-7.1350) {$9$}; \node[VertexStyle3](10) at (-16.1152, -1.0685) {$10$}; \node[VertexStyle3](11) at (5.0224, 8.3263) {$11$}; \node[VertexStyle3](12) at (2.4145, -2.6967) {$12$}; \Edge[ style = {->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](6)(8); \Edge[ style = {->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](6)(10); \Edge[ style = {->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](5)(10); \Edge[ style = {bend right,->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](10)(3); \Edge[ style = {->,> = latex',pos = 0.2},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](5)(3); \Edge[ style = {bend right,->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](10)(3); \Edge[ style = ->,color=red ,label = $c_{1}$](4)(11); \Edge[style={bend left,->,> = latex'},label = $c_{2}$,color = red,labelstyle={inner sep=0pt}](1)(12); \Edge[style={bend left,->,> = latex'},label = 
$c_{1}$,color = blue,labelstyle={inner sep=0pt}](1)(2); \Edge[ label =$c_{2}$, style = {->,> = latex',pos = 0.2},color = red,labelstyle={inner sep=0pt}](1)(3); \Edge[style = {->,> = latex',pos = 0.4},color=red,label = $c_{2}$,labelstyle={inner sep=0pt}](7)(3); \Edge[style = {->,> = latex',pos = 0.4},color=red,label = $c_{2}$,labelstyle={inner sep=0pt}](7)(12); \Edge[label =$c_{2}$, style = {->,> = latex',pos = 0.4},color=red ,labelstyle={inner sep=0pt}](2)(12); \Edge[style={bend right,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](2)(9) \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](5)(4) \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](11)(9) \Edge[style={bend right,->,> = latex',pos = 0.2},label = $c_{2}$,color = red,labelstyle={inner sep=0pt}](11)(12) \Edge[style={->,> = latex',pos = 0.6},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](10)(4) \Edge[ style = {->,> = latex',pos = 0.5},color=green,label = $c_{3}$ ,labelstyle={inner sep=0pt}](5)(12) \Edge[ label =$c_{3}$, style = {->,> = latex',pos = 0.7},color = green,labelstyle={inner sep=0pt}](1)(4) \Edge[style={bend left,->,> = latex'},label = $c_{3}$,color = green,labelstyle={inner sep=0pt}](2)(8) \end{tikzpicture} \caption{Initial colored graph $\mathcal{G}(\pi) = (V,E,\pi)$ with coloring set $V_L = \{1,2,5,6,7\}$. 
Let $\mathcal{G}_{0}(\pi_{0}) = \mathcal{G}(\pi).$ Compute a derived set $\mathcal{D}_{c}(V_L) = \{1,2,5,6,7\}$ of $V_L$ in $\mathcal{G}_{0}(\pi_{0})$ and set $\mathcal{D}_{0} = \mathcal{D}_{c}(V_L)$.} \label{g:exa1} \end{subfigure} \smallskip \begin{subfigure}{0.45\textwidth} \centering \begin{tikzpicture}[scale=0.16] \tikzset{VertexStyle1/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 2pt, outer sep = 0pt, minimum size = 8 pt}} \tikzset{VertexStyle2/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 2pt, outer sep = 0pt, minimum size = 8 pt}} \tikzset{VertexStyle3/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 0.8pt, outer sep = 0pt, minimum size = 8 pt}} \tikzset{VertexStyle4/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 0.8pt, outer sep = 0pt, minimum size = 8 pt}} \node[VertexStyle2](1) at (8.840,2.3005) {$1$}; \node[VertexStyle2](2) at (10.4707,-6.6488) {$2$}; \node[VertexStyle1](3) at (-8.6887,-1.2771) {$3$}; \node[VertexStyle1](4) at (-5.0950,5.9327) {$4$}; \node[VertexStyle2](5) at (-15.4428,8.0927) {$5$}; \node[VertexStyle2](6) at (-14.3838,-8.6365) {$6$}; \node[VertexStyle2](7) at (-2.2519 , -6.7736) {$7$}; \node[VertexStyle1](8) at (-7.5493,-7.1412) {$8$}; \node[VertexStyle1](9) at (18.7030,-7.1350) {$9$}; \node[VertexStyle3](10) at (-16.1152, -1.0685) {$10$}; \node[VertexStyle3](11) at (5.0224, 8.3263) {$11$}; \node[VertexStyle3](12) at (2.4145, -2.6967) {$12$}; \Edge[ style = {->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](6)(8); \Edge[ style = {->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](6)(10); \Edge[ style = {->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](5)(10); \Edge[ style = {bend right,->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](10)(3); \Edge[ style = {->,> = latex',pos = 0.2},color=blue,label = $c_{1}$ 
,labelstyle={inner sep=0pt}](5)(3); \Edge[ style = ->,color=red ,label = $c_{1}$](4)(11); \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](1)(2); \Edge[style = {->,> = latex',pos = 0.4},color=red,label = $c_{2}$,labelstyle={inner sep=0pt}](7)(3); \Edge[style = {->,> = latex',pos = 0.4},color=red,label = $c_{2}$,labelstyle={inner sep=0pt}](7)(12); \Edge[label =$c_{2}$, style = {->,> = latex',pos = 0.4},color=red ,labelstyle={inner sep=0pt}](2)(12); \Edge[style={bend right,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](2)(9) \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](5)(4) \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](11)(9) \Edge[style={bend right,->,> = latex',pos = 0.2},label = $c_{2}$,color = red,labelstyle={inner sep=0pt}](11)(12) \Edge[style={->,> = latex',pos = 0.6},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](10)(4) \Edge[ style = {->,> = latex',pos = 0.5},color=green,label = $c_{3}$ ,labelstyle={inner sep=0pt}](5)(12) \Edge[ label =$c_{3}$, style = {->,> = latex',pos = 0.7},color = green,labelstyle={inner sep=0pt}](1)(4) \Edge[style={bend left,->,> = latex'},label = $c_{3}$,color = green,labelstyle={inner sep=0pt}](2)(8) \end{tikzpicture} \caption{ Derived colored graph $\mathcal{G}_{1}(\pi_{1}) = \mathcal{G}(\pi,\mathcal{D}_{0},o_0)$ of $\mathcal{G}(\pi)$ with respect to $\mathcal{D}_{0}$ and $o_0$ such that $o_0$ represents `removing edges $(1,12) \mbox{ and } (1,3)$'.} \label{g:exa2} \end{subfigure} \smallskip \begin{subfigure}{0.45\textwidth} \centering \begin{tikzpicture}[scale=0.16] \tikzset{VertexStyle1/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 2pt, outer sep = 0pt, minimum size = 8 pt}} \tikzset{VertexStyle2/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 2pt, outer sep = 0pt, minimum size = 8 
pt}} \tikzset{VertexStyle3/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 0.8pt, outer sep = 0pt, minimum size = 8 pt}} \tikzset{VertexStyle4/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 0.8pt, outer sep = 0pt, minimum size = 8 pt}} \node[VertexStyle2](1) at (8.840,2.3005) {$1$}; \node[VertexStyle2](2) at (10.4707,-6.6488) {$2$}; \node[VertexStyle1](3) at (-8.6887,-1.2771) {$3$}; \node[VertexStyle2](4) at (-5.0950,5.9327) {$4$}; \node[VertexStyle2](5) at (-15.4428,8.0927) {$5$}; \node[VertexStyle2](6) at (-14.3838,-8.6365) {$6$}; \node[VertexStyle2](7) at (-2.2519 , -6.7736) {$7$}; \node[VertexStyle1](8) at (-7.5493,-7.1412) {$8$}; \node[VertexStyle1](9) at (18.7030,-7.1350) {$9$}; \node[VertexStyle3](10) at (-16.1152, -1.0685) {$10$}; \node[VertexStyle4](11) at (5.0224, 8.3263) {$11$}; \node[VertexStyle3](12) at (2.4145, -2.6967) {$12$}; \Edge[ style = {->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](6)(8); \Edge[ style = {->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](6)(10); \Edge[ style = {->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](5)(10); \Edge[ style = {bend right,->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](10)(3); \Edge[ style = {->,> = latex',pos = 0.2},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](5)(3); \Edge[ style = ->,color=red ,label = $c_{1}$](4)(11); \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](1)(2); \Edge[style = {->,> = latex',pos = 0.4},color=red,label = $c_{2}$,labelstyle={inner sep=0pt}](7)(3); \Edge[style = {->,> = latex',pos = 0.4},color=red,label = $c_{2}$,labelstyle={inner sep=0pt}](7)(12); \Edge[label =$c_{2}$, style = {->,> = latex',pos = 0.4},color=red ,labelstyle={inner sep=0pt}](2)(12); \Edge[style={bend right,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](2)(9) \Edge[style={bend 
left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](5)(4) \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](11)(9) \Edge[style={bend right,->,> = latex',pos = 0.2},label = $c_{2}$,color = red,labelstyle={inner sep=0pt}](11)(12) \Edge[style={->,> = latex',pos = 0.6},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](10)(4) \Edge[ style = {->,> = latex',pos = 0.5},color=green,label = $c_{3}$ ,labelstyle={inner sep=0pt}](5)(12) \Edge[ label =$c_{3}$, style = {->,> = latex',pos = 0.7},color = green,labelstyle={inner sep=0pt}](1)(4) \Edge[style={bend left,->,> = latex'},label = $c_{3}$,color = green,labelstyle={inner sep=0pt}](2)(8) \end{tikzpicture} \caption{ Derived set $\mathcal{D}_{1} = \{1,2,4,5,6,7,11\}$ of $\mathcal{D}_{0}$ in the colored graph $\mathcal{G}_{1}(\pi_{1})$.} \label{g:exa3} \end{subfigure} \smallskip \begin{subfigure}{0.45\textwidth} \centering \begin{tikzpicture}[scale=0.16] \tikzset{VertexStyle1/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 2pt, outer sep = 0pt, minimum size = 8 pt}} \tikzset{VertexStyle2/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 2pt, outer sep = 0pt, minimum size = 8 pt}} \tikzset{VertexStyle3/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 0.8pt, outer sep = 0pt, minimum size = 8 pt}} \tikzset{VertexStyle4/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 0.8pt, outer sep = 0pt, minimum size = 8 pt}} \node[VertexStyle2](1) at (8.840,2.3005) {$1$}; \node[VertexStyle2](2) at (10.4707,-6.6488) {$2$}; \node[VertexStyle1](3) at (-8.6887,-1.2771) {$3$}; \node[VertexStyle2](4) at (-5.0950,5.9327) {$4$}; \node[VertexStyle2](5) at (-15.4428,8.0927) {$5$}; \node[VertexStyle2](6) at (-14.3838,-8.6365) {$6$}; \node[VertexStyle2](7) at (-2.2519 , -6.7736) {$7$}; \node[VertexStyle1](8) at (-7.5493,-7.1412) {$8$}; 
\node[VertexStyle1](9) at (18.7030,-7.1350) {$9$}; \node[VertexStyle3](10) at (-16.1152, -1.0685) {$10$}; \node[VertexStyle4](11) at (5.0224, 8.3263) {$11$}; \node[VertexStyle3](12) at (2.4145, -2.6967) {$12$}; \Edge[ style = {->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](6)(8); \Edge[ style = {->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](6)(10); \Edge[ style = {->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](5)(10); \Edge[ style = {bend right,->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](10)(3); \Edge[ style = {->,> = latex',pos = 0.2},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](5)(3); \Edge[ style = ->,color=red ,label = $c_{1}$](4)(11); \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](1)(2); \Edge[style = {->,> = latex',pos = 0.4},color=red,label = $c_{2}$,labelstyle={inner sep=0pt}](7)(3); \Edge[style = {->,> = latex',pos = 0.4},color=red,label = $c_{2}$,labelstyle={inner sep=0pt}](7)(12); \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](5)(4) \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](11)(9) \Edge[style={bend right,->,> = latex',pos = 0.2},label = $c_{2}$,color = red,labelstyle={inner sep=0pt}](11)(12) \Edge[style={->,> = latex',pos = 0.6},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](10)(4) \Edge[ style = {->,> = latex',pos = 0.5},color=green,label = $c_{3}$ ,labelstyle={inner sep=0pt}](5)(12) \Edge[ label =$c_{3}$, style = {->,> = latex',pos = 0.7},color = green,labelstyle={inner sep=0pt}](1)(4) \Edge[style={bend left,->,> = latex'},label = $c_{3}$,color = green,labelstyle={inner sep=0pt}](2)(8) \end{tikzpicture} \caption{Derived colored graph $\mathcal{G}_{2}(\pi_{2}) = \mathcal{G}_{1}(\pi_{1},\mathcal{D}_{1},o_1)$ with $\mathcal{D}_{1} = \{1,2,4,5,6,7,11\}$ and $o_1$ such that $o_1$ represents 
`removing edges $(2,12) \mbox{ and } (2,9)$'.} \label{g:exa4} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \begin{tikzpicture}[scale=0.16] \tikzset{VertexStyle1/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 2pt, outer sep = 0pt, minimum size = 8 pt}} \tikzset{VertexStyle2/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 2pt, outer sep = 0pt, minimum size = 8 pt}} \tikzset{VertexStyle3/.style = {shape = circle, ball color = white!100!black, text = black, inner sep = 0.8pt, outer sep = 0pt, minimum size = 8 pt}} \tikzset{VertexStyle4/.style = {shape = circle, ball color = black!80!yellow, text = white, inner sep = 0.8pt, outer sep = 0pt, minimum size = 8 pt}} \node[VertexStyle2](1) at (8.840,2.3005) {$1$}; \node[VertexStyle2](2) at (10.4707,-6.6488) {$2$}; \node[VertexStyle2](3) at (-8.6887,-1.2771) {$3$}; \node[VertexStyle2](4) at (-5.0950,5.9327) {$4$}; \node[VertexStyle2](5) at (-15.4428,8.0927) {$5$}; \node[VertexStyle2](6) at (-14.3838,-8.6365) {$6$}; \node[VertexStyle2](7) at (-2.2519 , -6.7736) {$7$}; \node[VertexStyle2](8) at (-7.5493,-7.1412) {$8$}; \node[VertexStyle2](9) at (18.7030,-7.1350) {$9$}; \node[VertexStyle4](10) at (-16.1152, -1.0685) {$10$}; \node[VertexStyle4](11) at (5.0224, 8.3263) {$11$}; \node[VertexStyle4](12) at (2.4145, -2.6967) {$12$}; \Edge[ style = {->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](6)(8); \Edge[ style = {->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](6)(10); \Edge[ style = {->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](5)(10); \Edge[ style = {bend right,->,> = latex'},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](10)(3); \Edge[ style = {->,> = latex',pos = 0.2},color=blue,label = $c_{1}$ ,labelstyle={inner sep=0pt}](5)(3); \Edge[ style = ->,color=red ,label = $c_{1}$](4)(11); \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = 
blue,labelstyle={inner sep=0pt}](1)(2); \Edge[style = {->,> = latex',pos = 0.4},color=red,label = $c_{2}$,labelstyle={inner sep=0pt}](7)(3); \Edge[style = {->,> = latex',pos = 0.4},color=red,label = $c_{2}$,labelstyle={inner sep=0pt}](7)(12); \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](5)(4) \Edge[style={bend left,->,> = latex'},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](11)(9) \Edge[style={bend right,->,> = latex',pos = 0.2},label = $c_{2}$,color = red,labelstyle={inner sep=0pt}](11)(12) \Edge[style={->,> = latex',pos = 0.6},label = $c_{1}$,color = blue,labelstyle={inner sep=0pt}](10)(4) \Edge[ style = {->,> = latex',pos = 0.5},color=green,label = $c_{3}$ ,labelstyle={inner sep=0pt}](5)(12) \Edge[ label =$c_{3}$, style = {->,> = latex',pos = 0.7},color = green,labelstyle={inner sep=0pt}](1)(4) \Edge[style={bend left,->,> = latex'},label = $c_{3}$,color = green,labelstyle={inner sep=0pt}](2)(8) \end{tikzpicture} \caption{Derived set $\mathcal{D}_{2} = V$ of $\mathcal{D}_{1}$ in the colored graph $\mathcal{G}_{2}(\pi_{2})$. Return that $(\mathcal{G}(\pi);V_{L})$ is controllable.} \label{g:exa5} \end{subfigure} \caption{An example of application of Theorem \ref{t:eocd}} \label{g:exa} \end{figure} \begin{ex}\label{ex:algorithm} Consider the colored graph $\mathcal{G}(\pi) = (V,E,\pi)$ depicted in Figure \ref{g:exa1} with $V_L = \{1,2,5,6,7\}$ the leader set. To start with, we compute a derived set $\mathcal{D}_{c}(V_L) = \{1,2,5,6,7\}$ of $V_L$ in $\mathcal{G}(\pi)$, and denote it by $\mathcal{D}_{0}$. For the vertices $1 , 7 \in \mathcal{D}_{0}$, in $\mathcal{G}(\pi)$ we have $ N_{V \setminus \mathcal{D}_{0}}(7) \subseteq N_{V \setminus \mathcal{D}_{0}}(1)$, and for any $k \in N_{V \setminus \mathcal{D}_{0}}(7)$, the two edges $(1,k)$ and $(7,k)$ have the same color. Thus we remove all edges in $\mathcal{E}_{7}(1) = \{(1,3),(1,12)\}$ and denote this edge operation by $o_{0}$. 
In this way we obtain a derived colored graph $\mathcal{G}_{1}(\pi_{1})= \mathcal{G}(\pi,\mathcal{D}_{0},o_0)$ of $\mathcal{G}(\pi)$ with respect to $\mathcal{D}_{0}$ and $o_0$, which is depicted in Figure \ref{g:exa2}. We proceed to compute a derived set $\mathcal{D}_{c}(\mathcal{D}_{0}) = \{1,2,4,5,6,7,11\}$ of $\mathcal{D}_{0}$ in $\mathcal{G}_{1}(\pi_{1})$ as shown in Figure \ref{g:exa3}, and denote this derived set by $\mathcal{D}_{1}$. Since $\mathcal{D}_{1} \neq V$ and $\mathcal{D}_{1} \neq \mathcal{D}_{0}$, the procedure continues. For the vertices $2, 11 \in \mathcal{D}_{1}$ in the graph $\mathcal{G}_{1}(\pi_{1})$, we have $N_{V \setminus \mathcal{D}_{1}}(11) \subseteq N_{V \setminus \mathcal{D}_{1}}(2)$, and for any $k \in N_{V \setminus \mathcal{D}_{1}}(11)$, the two edges $(2,k)$ and $(11,k)$ have the same color. Thus we remove all edges in $\mathcal{E}_{11}(2) = \{(2,12),(2,9)\}$ and denote this edge operation by $o_1$. We then obtain a derived colored graph $\mathcal{G}_{2}(\pi_{2}) = \mathcal{G}_{1}(\pi_{1},\mathcal{D}_{1},o_1)$ of $\mathcal{G}_{1}(\pi_{1})$ with respect to $\mathcal{D}_{1}$ and $o_1$, which is depicted in Figure \ref{g:exa4}. Next, we compute a derived set $\mathcal{D}_{c}(\mathcal{D}_{1})$ of $\mathcal{D}_{1}$ in $\mathcal{G}_{2}(\pi_{2})$ as shown in Figure \ref{g:exa5}. This derived set, denoted by $\mathcal{D}_{2}$, turns out to be equal to the entire vertex set $V$. Thus the edge-operations-color-change derived set $\mathcal{D}_{ec}(V_L)$ is equal to $V$, and we conclude that $(\mathcal{G}(\pi);V_L)$ is controllable. \end{ex} \section{Conclusion} In this paper we have studied strong structural controllability of leader/follower networks.
In contrast to existing work, in which the nonzero off-diagonal entries of the matrices in the qualitative class are completely independent, in this paper we have studied the general case in which there are equality constraints among these entries, in the sense that a priori given entries in the system matrix are restricted to take arbitrary but identical nonzero values. This has been formalized using the concept of a colored graph and by introducing the new concept of colored strong structural controllability. In order to obtain conditions under which colored strong structural controllability holds for a given leader-follower system, we have introduced a new color change rule and a new concept of zero forcing set. These have been used to formulate a sufficient condition for controllability of a colored graph with a given leader set. We have shown that this condition is not necessary by giving an example of a colored graph and leader set that is colored strongly structurally controllable although our sufficient condition is not satisfied. Motivated by this example, we have introduced the concept of elementary edge operations on colored graphs, and have shown that these edge operations preserve colored strong structural controllability. Based on these elementary edge operations and the color change rule, a second sufficient graph-theoretic condition for colored strong structural controllability has been provided. Finally, we have established a condition for colored strong structural controllability in terms of the new notion of edge-operations-color-change derived set. This derived set is obtained from the original leader set by applying edge operations and the color change rule sequentially in an alternating manner. This iterative procedure has been illustrated by means of a concrete example. The main new ideas of this paper are the new color change rule and the concept of elementary edge operations for colored directed graphs.
We have established several conditions for colored strong structural controllability using these new concepts. The conditions that we have provided are not necessary, and finding necessary and sufficient conditions is still an open problem. Another open problem is to establish methods to characterize strong structural controllability in the case that given entries in the system matrices satisfy linear relations (instead of requiring them to take identical values). For {\em weak} structural controllability this was studied in \cite{LM2017}. In this paper we have focused on finding graph-theoretic conditions rather than on providing suitable algorithms, see e.g.\ \cite{WRS2014}. Establishing an efficient algorithm to check colored strong structural controllability is another topic for future research. Finally, other system-theoretic concepts such as strong targeted controllability \cite{MCT2015, WCT2017} and identifiability \cite{WTC2018} for systems defined on colored graphs are possible directions for future research.
\section{Introduction} Substitutive dynamical systems for substitutions with dominant Pisot eigenvalue are widely known to yield pure discrete spectrum in the symbolic setting as well as for tiling spaces, cf.\ \cite{Rauzy:82,Fog02,Barge-Kwapisz:06,CANTBST,AkiBBLS}. The aim of this paper is to extend substitutive dynamical systems to the non-stationary (i.e., time inhomogeneous) framework. The iteration of a single transformation is replaced by a sequence of transformations, along a sequence of spaces; see e.g.\ \cite{Arnoux-Fisher:01,Arnoux-Fisher:05,Fisher:09} for sequences of substitutions and Anosov maps as well as for relations to Vershik's adic systems. In this setting, the Pisot condition is replaced by the requirement that the second Lyapunov exponent of the dynamical system is negative, leading to hyperbolic dynamics with a one-dimensional unstable foliation. This requirement has an arithmetical meaning, as it assures a.e.\ strong convergence of continued fraction algorithms associated with these dynamical systems; see \cite{SCHWEIGER,Berthe:11,Berthe-Delecroix,AD13}. We consider \emph{$S$-adic} symbolic dynamical systems, where the letter~$S$ refers to ``substitution''. These shift spaces are obtained by iterating different substitutions in a prescribed order, generalizing the substitutive case where a single substitution is iterated. An $S$-adic expansion of an infinite word~$\omega$ is given by a sequence $(\sigma_n,i_n)_{n \in \mathbb{N}}$, where the $\sigma_n$ are substitutions and the $i_n$ are letters, such that $\omega = \lim_{n\to\infty} \sigma_0 \sigma_1 \cdots \sigma_{n-1}(i_n)$. Under mild assumptions (needed in order to exclude degenerate constructions), the orbit closure under the action of the shift~$\Sigma$ on the infinite word~$\omega$ is a minimal symbolic dynamical system equipped with an $S$-adic substitutive structure, and has zero entropy~\cite{Berthe-Delecroix}. 
The $S$-adic shifts are closely related to Vershik's adic systems~\cite{Vershik:81}, which have provided the terminology ``$S$-adic''. More generally, they belong to the family of fusion systems (see \cite{PriebeFrank-Sadun11,PriebeFrank-Sadun14}), which also includes Bratteli-Vershik systems and multidimensional cut-and-stack transformations, and pertain to arithmetic dynamics~\cite{Sidorov}. The connections with continued fractions are natural in this framework: they had a strong influence on the set-up of the $S$-adic formalism, inspired by the Sturmian dynamics which is thoroughly described by regular continued fractions; see e.g.\ \cite{Arnoux-Fisher:01,Berthe-Ferenczi-Zamboni:05}. In the classical Pisot substitutive setting, the basic object is a single Pisot substitution, i.e., a substitution~$\sigma$ whose incidence matrix~$M_{\sigma}$ has a Pisot number as dominant eigenvalue. When the characteristic polynomial of~$M_{\sigma}$ is furthermore assumed to be irreducible, the associated symbolic dynamical system $(X_{\sigma}, \Sigma)$ is conjectured to have pure discrete spectrum. This is the {\em Pisot substitution conjecture}. For more details and partial results on this conjecture, see \cite{Fog02,ST09,CANTBST,AkiBBLS}. One now classical approach for exhibiting the translation on a compact abelian group to which $(X_{\sigma}, \Sigma)$ is conjectured to be isomorphic relies on the associated {\it Rauzy fractal}. This explicitly constructible set (with fractal boundary) forms a fundamental domain for the $\mathbb{Z}$-action provided by the Kronecker group translation (or at least for a Kronecker factor). We extend classical notions, results, and problems studied in the Pisot substitutive case to the $S$-adic framework. We are able to define Rauzy fractals associated with $S$-adic symbolic dynamical systems, with the Pisot assumption being extended to the $S$-adic framework by requiring the second Lyapunov exponent to be negative.
In other words, we work with $S$-adic shifts whose associated cocycles (provided by the incidence matrices of the substitutions) display strong convergence properties analogous to the Pisot case. Combinatorially, this is reflected in certain balancedness properties of the associated language. This also allows us to define analogs of the stable/unstable splitting in the Pisot substitution case. In order to prove discrete spectrum, we associate with any Pisot $S$-adic shift a Rauzy fractal that lives in the analog of the stable space. We then introduce a family of coverings and multiple tilings, including periodic and aperiodic ones, that comes together with set equations playing the role of the graph-directed iterated function system in the Pisot substitutive case. A~particular choice of a periodic tiling yields number-theoretic applications and the isomorphism with a toral translation, whereas other (aperiodic) choices allow the study of the associated coverings. We then formulate a criterion for the multiple tilings to be indeed tilings, which yields pure discrete spectrum. This criterion is a coincidence-type condition in the same vein as the various coincidence conditions (algebraic, combinatorial, overlap, etc.) introduced in the substitutive framework (first in~\cite{Dekking} for substitutions of constant length and then extended to the most general substitutive framework, see e.g.\ \cite{Solomyak:97,AL11}). The idea of constructing Rauzy fractals associated with multidimensional continued fractions is already present in \cite{Ito:89,Ito:95b}, but the problem of proving tiling properties remained open; even the question whether the subpieces of the Rauzy fractal overlap could not be answered.
Furthermore, although there exist results for the generation of discrete hyperplanes in connection with continued fraction algorithms \cite{Ito-Ohtsuki:93,Ito-Ohtsuki:94,Arnoux-Berthe-Ito:02,BBJS13,BBJS14}, more information on convergence and renormalization properties is needed in order to deduce spectral properties. In~\cite{Arnoux-Mizutani-Sellami}, $S$-adic sequences are considered where the substitutions all have the same Pisot irreducible unimodular matrix; in our case, the matrices are allowed to be different at each step. \subsection*{Main results} In our first result we describe geometric and dynamical properties of an $S$-adic shift $(X,\Sigma)$ under very general combinatorial conditions. In particular, we are able to associate Rauzy fractals with $(X,\Sigma)$ that are compact, equal to the closure of their interior, and have a boundary of measure zero. We deduce covering and (multiple) tiling properties of these Rauzy fractals and, subject to a combinatorial condition (a coincidence-type condition), we are able to show that they form a periodic tiling. Due to the lack of a dominant eigenvector and the fact that we lose the self-similarity properties present for substitutive systems, these proofs require new ideas and do not run along the lines of the substitutive setting. In particular, a crucial point is to prove that the boundary of the Rauzy fractals has measure zero (see Proposition \ref{p:boundary}). The tiling property of the Rauzy fractals is then used to prove that $(X,\Sigma)$ is conjugate to a translation on a torus of suitable dimension. In this case, the subpieces of the Rauzy fractal turn out to be bounded remainder sets, and the elements of~$X$ are natural codings of this translation.
Since the assumptions on the shift are very mild, this result can be used to establish a metric statement: within certain families of $S$-adic shifts (under the above-mentioned Pisot condition in terms of Lyapunov exponents), almost all shifts have the above properties. We apply these constructions to two multidimensional continued fraction algorithms, the Arnoux-Rauzy and the Brun algorithm, which are proved to satisfy our Pisot assumptions as well as the combinatorial coincidence condition. Arnoux-Rauzy substitutions are known to be Pisot~\cite{Arnoux-Ito:01}. Purely substitutive Arnoux-Rauzy words are even natural codings of toral translations \cite{Berthe-Jolivet-Siegel:12,Barge-Stimac-Williams:13}. This is not true for arbitrary non-substitutive Arnoux-Rauzy words (see \cite{Cassaigne-Ferenczi-Zamboni:00,Cassaigne-Ferenczi-Messaoudi:08}), but we are able to show this property for large classes of them; to our knowledge, no such examples (on more than two letters) were known before. Moreover, we deduce from a recent result by Avila and Delecroix~\cite{AD13} that almost every Arnoux-Rauzy word is a natural coding of a toral translation. This proves, in a metric sense, a conjecture of Arnoux and Rauzy that goes back to the early nineties (see e.g.\ \cite{Cassaigne-Ferenczi-Zamboni:00,Berthe-Ferenczi-Zamboni:05}). We also prove that any linearly recurrent Arnoux-Rauzy shift with recurrent directive sequence has pure discrete spectrum. Brun's algorithm~\cite{BRUN} is one of the most classical multidimensional generalizations of the regular continued fraction expansion \cite{BRENTJES,SCHWEIGER}. This algorithm generates a sequence of simultaneous rational approximations to a given pair of points (each of these approximations is a pair of points having the same denominator). It is also closely related to the modified Jacobi-Perron algorithm introduced by Podsypanin in~\cite{POD77}, which is a two-point extension of the Brun algorithm.
It is shown to be strongly convergent almost everywhere with exponential rate \cite{FUKE96,Schratzberger:98,Meester,Broise} and has an invariant ergodic probability measure equivalent to the Lebesgue measure which is known explicitly~\cite{ArnouxNogueira93}. The substitutive case has been handled in \cite{Barge14,BBJS14}: Brun substitutions have pure discrete spectrum. Applying our theory, we prove that for almost all $(x_1,x_2) \in [0,1)^2$, there is an $S$-adic shift associated with a certain (explicitly given) Brun expansion which is measurably conjugate to the translation by $(x_1,x_2)$ on the torus~$\mathbb{T}^2$. This implies that Brun substitutions yield natural codings of almost all rotations on the two-dimensional torus. The subpieces of the associated Rauzy fractals provide (measurable) bounded remainder sets for this rotation. \subsection*{Motivation} Our motivation comes on the one hand from number theory. Indeed, Rauzy fractals are known to provide fundamental domains for Kronecker translations on the torus~$\mathbb{T}^d$ (together with Markov partitions for the corresponding toral automorphisms). They are also used to obtain best approximation results for cubic fields~\cite{Hubert-Messaoudi:06}, and serve as limit sets for simultaneous Diophantine approximation for cubic extensions in terms of self-similar ellipses provided by Brun's algorithm~\cite{ito:03,Ito:07}. Using our new theory, it is now possible to reach Kronecker translations with non-algebraic parameters, which extends the usual (Pisot) algebraic framework and the scope of potential number-theoretic applications considerably. On the other hand, the results of the present paper extend discrete spectrum results to a much wider framework. Furthermore, our theory enables us to give explicit constructions for higher dimensional non-stationary Markov partitions for ``non-stationary hyperbolic toral automorphisms'', in the sense of~\cite{Arnoux-Fisher:05}. 
In~\cite{Arnoux-Fisher:05} such non-stationary Markov partitions are defined and $2$-dimen\-sio\-nal examples are given. Our new results (including the tilings by $S$-adic Rauzy fractals) might also help in the quest for a convenient symbolic representation of the Weyl chamber flow (see e.g.~\cite[Section~6]{Gorodnik}); in the case of two letters this is performed in~\cite{Arnoux-Fisher:01}. Another direction of research will be the investigation of the spectrum of $S$-adic shifts. In particular, it would be interesting to explore how Host's result~\cite{Host:86} on the continuity of eigenvalues extends to this more general setting. We will come back to these subjects in a forthcoming paper. \section{Mise en sc\`ene}\label{sec:miseenscene} \subsection{Substitutions} A~\emph{substitution} $\sigma$ over a finite alphabet $\mathcal{A} = \{1,2,\ldots,d\}$ is an endomorphism of the free monoid~$\mathcal{A}^*$ (which is endowed with the operation of concatenation). We assume here that all our substitutions are non-erasing, i.e., they send non-empty words to non-empty words. The \emph{incidence matrix} (or abelianization) of~$\sigma$ is the square matrix $M_\sigma = (|\sigma(j)|_i)_{i,j\in\mathcal{A}} \in \mathbb{N}^{d\times d}$. Here, the notation $|w|_i$ stands for the number of occurrences of the letter~$i$ in $w \in \mathcal{A}^*$, and $|w|$ denotes the length of~$w$. We say that $\sigma$ is \emph{unimodular} if $|\!\det M_\sigma| = 1$. The map \[ \mathbf{l}:\ \mathcal{A}^* \to\mathbb{N}^d, \ w \mapsto \tr{(|w|_{1},|w|_2,\ldots, |w|_{d})} \] is called the \emph{abelianization map}. Note that $\mathbf{l}(\sigma(w)) = M_\sigma \mathbf{l}(w)$ for all $w\in \mathcal{A}^*$. A~substitution is called \emph{Pisot irreducible} if the characteristic polynomial of its incidence matrix is the minimal polynomial of a Pisot number.
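As a concrete illustration of the incidence matrix and the abelianization map, the following Python sketch computes $M_\sigma$ and checks the relation $\mathbf{l}(\sigma(w)) = M_\sigma \mathbf{l}(w)$ on the Tribonacci substitution $1 \mapsto 12$, $2 \mapsto 13$, $3 \mapsto 1$ (a standard example; the encoding of substitutions as string-valued dictionaries is our own convention for illustration).

```python
from collections import Counter

def abelianize(w, alphabet):
    """Abelianization map l: w -> (|w|_1, ..., |w|_d)."""
    counts = Counter(w)
    return [counts[a] for a in alphabet]

def incidence_matrix(sigma, alphabet):
    """Incidence matrix M_sigma = (|sigma(j)|_i)_{i,j}: column j holds
    the letter counts of the image of letter j."""
    return [[sigma[j].count(i) for j in alphabet] for i in alphabet]

def apply_sub(sigma, w):
    """Apply the substitution letter by letter and concatenate."""
    return ''.join(sigma[a] for a in w)

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Tribonacci substitution: 1 -> 12, 2 -> 13, 3 -> 1
tribo = {'1': '12', '2': '13', '3': '1'}
alphabet = ['1', '2', '3']
M = incidence_matrix(tribo, alphabet)  # [[1,1,1],[1,0,0],[0,1,0]]
```

For any word $w$, `abelianize(apply_sub(tribo, w), alphabet)` agrees with `mat_vec(M, abelianize(w, alphabet))`, which is exactly the commutation relation stated above.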
\subsection{$S$-adic words and languages} Let $\boldsymbol{\sigma} = (\sigma_n)_{n\in\mathbb{N}}$ be a sequence of substitutions over the alphabet~$\mathcal{A}$. To keep notation concise, we set $M_n = M_{\sigma_n}$ for $n \in \mathbb{N}$, and we abbreviate products of consecutive substitutions and their incidence matrices by \[ \sigma_{[k,\ell)} = \sigma_k \sigma_{k+1} \cdots \sigma_{\ell-1} \quad \mbox{and} \quad M_{[k,\ell)} = M_k M_{k+1} \cdots M_{\ell-1} \quad (0 \le k \le \ell). \] The language associated with~$\boldsymbol{\sigma}$ is defined by $\mathcal{L}_{\boldsymbol{\sigma}} = \mathcal{L}_{\boldsymbol{\sigma}}^{(0)}$, where \[ \mathcal{L}_{\boldsymbol{\sigma}}^{(m)} = \big\{w \in \mathcal{A}^*:\, \mbox{$w$ is a factor of $\sigma_{[m,n)}(i)$ for some $i \in\mathcal{A}$, $n\in\mathbb{N}$}\big\} \qquad (m \in \mathbb{N}). \] Here, $w$~is a \emph{factor} of $v \in \mathcal{A}^*$ if $v \in \mathcal{A}^* w \mathcal{A}^*$. Furthermore, $w$~is a \emph{prefix} of~$v$ if $v \in w \mathcal{A}^*$. Similarly, $w$~is a factor and a prefix of an infinite word $\omega \in \mathcal{A}^\mathbb{N}$ if $\omega \in \mathcal{A}^* w \mathcal{A}^\mathbb{N}$ and $\omega \in w \mathcal{A}^\mathbb{N}$, respectively. The sequence~$\boldsymbol{\sigma}$ is said to be \emph{algebraically irreducible} if, for each $k \in \mathbb{N}$, the characteristic polynomial of $M_{[k,\ell)}$ is irreducible for all sufficiently large~$\ell$. The sequence~$\boldsymbol{\sigma}$ is said to be \emph{primitive} if, for each $k \in \mathbb{N}$, $M_{[k,\ell)}$ is a positive matrix for \emph{some} $\ell > k$. This notion extends primitivity of a single substitution~$\sigma$, where $M_\sigma^\ell$ is required to be positive for some $\ell > 0$, to sequences. Note that \cite{Durand:00a,Durand:00b,Durand-Leroy-Richomme:13} use a more restrictive definition of primitive sequences of substitutions. 
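As a hedged sketch of these notions (ours, not from the paper; the Arnoux-Rauzy substitutions of Section~\ref{sec:arnoux-rauzy-words} over $\{1,2,3\}$ serve as a toy example), the products $\sigma_{[0,\ell)}$ and the positivity test underlying primitivity can be implemented as follows.

```python
def apply(sub, word):
    return ''.join(sub[c] for c in word)

def compose(s1, s2):
    """(s1 s2)(w) = s1(s2(w)), matching the product sigma_k sigma_{k+1}."""
    return {c: apply(s1, s2[c]) for c in s2}

def product(subs):
    """sigma_[0,n) = sigma_0 sigma_1 ... sigma_{n-1}."""
    out = {c: c for c in subs[0]}
    for s in subs:
        out = compose(out, s)
    return out

def is_positive(sub):
    """M positive  <=>  every image sigma(j) contains every letter."""
    return all(set(sub) <= set(sub[j]) for j in sub)

# Arnoux-Rauzy substitutions alpha_i: i -> i, j -> ji (our toy example)
alpha1 = {'1': '1', '2': '21', '3': '31'}
alpha2 = {'1': '12', '2': '2', '3': '32'}
alpha3 = {'1': '13', '2': '23', '3': '3'}

assert product([alpha1, alpha2, alpha3])['1'] == '1213121'
assert is_positive(product([alpha1, alpha2, alpha3]))   # M_[0,3) is positive
assert not is_positive(product([alpha1, alpha1]))       # no positivity yet
```

A directive sequence in which every letter index keeps reappearing is primitive, since the product $\alpha_1\alpha_2\alpha_3$ already has positive incidence matrix.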
Following \cite{Arnoux-Mizutani-Sellami}, we say that an infinite word $\omega \in \mathcal{A}^\mathbb{N}$ is a \emph{limit word} of $\boldsymbol{\sigma} = (\sigma_n)_{n\in\mathbb{N}}$ if there is a sequence of infinite words $(\omega^{(n)})_{n\in\mathbb{N}}$ with \[ \omega^{(0)} = \omega, \quad \omega^{(n)} = \sigma_n\big(\omega^{(n+1)}\big) \quad \mbox{for all}\ n \in \mathbb{N}, \] where the substitutions~$\sigma_n$ are naturally extended to infinite words. We also say that $\omega$ is an \emph{$S$-adic limit word} with \emph{directive sequence} $\boldsymbol{\sigma}$ and $S = \{\sigma_n:\, n \in \mathbb{N}\}$. We can write \[ \omega = \lim_{n\to\infty} \sigma_{[0,n)}(i_n), \] where $i_n$ denotes the first letter of~$\omega^{(n)}$, provided that $\lim_{n\to\infty} |\sigma_{[0,n)}(i_n)| = \infty$ (which holds in particular when $\boldsymbol{\sigma}$ is primitive). When~$\boldsymbol{\sigma}$ is a periodic sequence, there exists a limit word~$\omega$ such that $\omega^{(n)} = \omega$ for some $n \ge 1$, i.e., $\omega$~is a fixed point of the substitution~$\sigma_{[0,n)}$. We will refer to this case as the \emph{periodic case}. Note that we do not require $S$ to be finite since we want to include $S$-adic shifts issued from (multiplicative) multidimensional continued fraction expansions. For more on $S$-adic sequences, see e.g.\ \cite{Berthe-Delecroix,Durand-Leroy-Richomme:13,Arnoux-Mizutani-Sellami}. \subsection{Symbolic dynamics, $S$-adic shifts, and $S$-adic graphs} An infinite word~$\omega$ is said to be \emph{recurrent} if each factor of~$\omega$ occurs infinitely often in~$\omega$. It is said to be \emph{uniformly recurrent} if each factor occurs at an infinite number of positions with bounded gaps. The recurrence function $R(n)$ of a uniformly recurrent word~$\omega$ is defined for any~$n$ as the smallest positive integer~$k$ for which every factor of size~$k$ of~$\omega$ contains every factor of size~$n$.
An infinite word~$\omega$ is said to be \emph{linearly recurrent} if there exists a constant~$C$ such that $R(n) \leq Cn$ for all~$n$. The \emph{shift operator}~$\Sigma$ maps $(\omega_n)_{n\in\mathbb{N}}$ to $(\omega_{n+1})_{n\in\mathbb{N}}$. A~dynamical system $(X,\Sigma)$ is a \emph{shift space} if $X$ is a closed shift-invariant set of infinite words over a finite alphabet, with the product topology of the discrete topology. The system $(X,\Sigma)$ is \emph{minimal} if every non-empty closed shift-invariant subset equals the whole set; it is called \emph{uniquely ergodic} if there exists a unique shift-invariant probability measure on~$X$. The \emph{symbolic dynamical system generated by an infinite word~$\omega$} is defined as~$(X_\omega,\Sigma)$, where $X_\omega = \overline{\{\Sigma^n(\omega) :\, n \in \mathbb{N}\}}$ is the closure of the $\Sigma$-orbit of~$\omega$. This system is \emph{minimal} if and only if $\omega$ is uniformly recurrent \cite[Proposition~4.7]{Queffelec:10}. Let $\mu$ be a shift-invariant measure defined on $(X,\Sigma)$. A~measurable eigenfunction of the system $(X,\Sigma,\mu)$ with associated eigenvalue $\lambda \in \mathbb{R}$ is an $L^2(X,\mu)$ function~$f$ that satisfies $f(\Sigma^n(\omega)) = e^{2\pi i\lambda n} f(\omega)$ for all $n \in \mathbb{N}$ and $\omega \in X$. The system $(X,\Sigma)$ is said to be \emph{weakly mixing} if it has no nontrivial measurable eigenfunctions. It has \emph{pure discrete spectrum} if $L^2(X,\mu)$ is spanned by the measurable eigenfunctions. In the present paper we consider two types of symbolic dynamical systems in which the previous definitions make sense, namely \emph{$S$-adic shifts} and edge shifts associated with \emph{$S$-adic graphs}; we define them now. Let $S$ be a set of substitutions; $S$ can be finite or infinite.
The \emph{$S$-adic shift} or \emph{$S$-adic system} with directive sequence~$\boldsymbol{\sigma}\in S^{\mathbb{N}}$ is $(X_{\boldsymbol{\sigma}}, \Sigma)$, where $X_{\boldsymbol{\sigma}}$ denotes the set of infinite words~$\omega$ such that each factor of~$\omega$ is an element of~$\mathcal{L}_{\boldsymbol{\sigma}}$. If $\boldsymbol{\sigma}$ is primitive, then one checks that $(X_{\boldsymbol{\sigma}}, \Sigma) = (X_\omega,\Sigma)$ for any limit word~$\omega$ of~$\boldsymbol{\sigma}$; see e.g. \cite[Theorem 5.2]{Berthe-Delecroix}. Let $S$ be a set of substitutions and let $G=(V,E)$ be a strongly connected directed graph with set of vertices $V$ and set of edges $E$. The graph $G$ may be an infinite graph with multiple edges. Let $\tau: E \to S$ be a map that associates a substitution $\tau(e)=\sigma_e$ with each edge $e\in E$ and call $(G,\tau)$ an \emph{$S$-adic graph} (see~\cite[Section~3.3]{Berthe-Delecroix}). Let $s(e)$ and $r(e)$ be the source and the range of an edge $e\in E$. Then with $G$ we associate the edge shift $(E_G,\Sigma)$ with \[ E_G = \{ (\gamma_n) \in E^\mathbb{N} \;:\;r(\gamma_n)=s(\gamma_{n+1}) \hbox{ for each }n\in \mathbb{N} \}. \] The set $D_G = \{ (\sigma_n)=(\tau(\gamma_n)) \;:\; (\gamma_n) \in E_G \} $ consists of all directive sequences corresponding to labellings of infinite walks in $G$. In what follows, we will allow $E$ to be infinite (which allows $S$-adic shifts with infinitely many different substitutions). However, we will always assume that $V$ is finite. We will then speak of an $S$-adic graph with finitely many vertices. Very often we will deal with the full shift $E_G\cong D_G=S^\mathbb{N}$ corresponding to a graph with one vertex and finitely or countably infinitely many self loops (each of which is identified with a different substitution). Note that we use the same notation for the shift map $\Sigma$ acting on~$\mathcal{A}^\mathbb{N}$ and on~$E_G$. 
The \emph{cylinder} of a finite sequence $(\gamma_0, \gamma_1, \ldots, \gamma_{\ell-1}) \in E^\ell$ is \[ \mathcal{Z}(\gamma_0, \gamma_1, \ldots, \gamma_{\ell-1}) = \big\{(\tau_n)_{n\in\mathbb{N}} \in E_G:\, (\tau_0, \tau_1, \ldots, \tau_{\ell-1}) = (\gamma_0, \gamma_1, \ldots, \gamma_{\ell-1})\big\}. \] In what follows we will often identify an edge $e\in E$ with the substitution $\tau(e)$, a finite walk with the associated product of substitutions, and an element of $(\gamma_n)\in E_G$ with the associated directive sequence $(\tau(\gamma_n))$. In particular, we will talk about the incidence matrix of an edge or a walk, and about primitivity, algebraic irreducibility, and the language $\mathcal{L}_{\boldsymbol{\gamma}}$ of an element $\boldsymbol{\gamma}\in E_G$. \subsection{Balance and letter frequencies} \label{subsec:letterfreq} A~pair of words $u, v \in \mathcal{A}^*$ with $|u| = |v|$ is \emph{$C$-balanced} if \[ -C \le |u|_j - |v|_j \le C \quad \mbox{for all}\ j \in \mathcal{A}. \] A~language~$\mathcal{L}$ is $C$-balanced if each pair of words $u, v \in \mathcal{L}$ with $|u| = |v|$ is $C$-balanced. The language~$\mathcal{L}$ is said to be \emph{balanced} if there exists~$C$ such that $\mathcal{L}$ is $C$-balanced. (In previous works, this property was sometimes called \emph{finitely balanced}, and balancedness referred to the case $C = 1$.) A~(finite or infinite) word is $C$-balanced or balanced if the language of its factors has this property. Note that the language of a Pisot irreducible substitution is balanced; see e.g.~\cite{Adamczewski:03,Adamdis}. The \emph{frequency} of a letter $i \in \mathcal{A}$ in $\omega \in \mathcal{A}^\mathbb{N}$ is defined as $f_i = \lim_{|p|\to\infty} |p|_i/|p|$, where the limit is taken over the prefixes~$p$ of~$\omega$, if the limit exists. The vector $\tr{(f_1,f_2,\ldots,f_d)}$ is then called the \emph{letter frequency vector}. Balancedness implies the existence of letter frequencies; see~\cite{Berthe-Tijdeman:02}. 
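The balance of a finite word can be computed by brute force over its factors. The following sketch (ours; the Fibonacci substitution $1 \mapsto 12$, $2 \mapsto 1$ is an illustrative choice) confirms on a long prefix the classical fact that Sturmian words are $1$-balanced. Note that for a prefix of an infinite word this computes the balance of the factors of that prefix only, which bounds the balance of the full language from below.

```python
def apply(sub, word):
    return ''.join(sub[c] for c in word)

def balance(word, alphabet):
    """Smallest C such that the factors of `word` are C-balanced."""
    C = 0
    for n in range(1, len(word)):
        counts = [[word[i:i + n].count(j) for j in alphabet]
                  for i in range(len(word) - n + 1)]
        for j in range(len(alphabet)):
            col = [row[j] for row in counts]
            C = max(C, max(col) - min(col))
    return C

assert balance('1122', '12') == 2          # the factors 11 and 22 force C = 2

fib = {'1': '12', '2': '1'}                # Fibonacci substitution (our example)
w = '1'
for _ in range(10):
    w = apply(fib, w)
assert balance(w, '12') == 1               # Sturmian words are 1-balanced
```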
\subsection{Generalized Perron-Frobenius eigenvectors} \label{sec:gener-perr-frob} A~natural way to endow a shift space with a shift-invariant measure is to consider its factor frequencies (defined analogously as for letters). In the primitive substitutive case, letter frequencies are given by the Perron-Frobenius eigenvector. More generally, for a sequence of matrices $(M_n)_{n\in\mathbb{N}}$, we have by \cite[pp.~91--95]{Furstenberg:60} that \begin{equation} \label{e:topPF} \bigcap_{n\in\mathbb{N}} M_{[0,n)}\, \mathbb{R}^d_+ = \mathbb{R}_+ \mathbf{u} \quad \mbox{for some positive vector}\ \mathbf{u} \in \mathbb{R}_+^d, \end{equation} provided there are indices $k_1 < \ell_1 \le k_2 < \ell_2 \le \cdots$ and a positive matrix~$B$ such that $B = M_{[k_1,\ell_1)} = M_{[k_2,\ell_2)} = \cdots$. In particular, \eqref{e:topPF} holds for the sequence of incidence matrices of a primitive and recurrent sequence of substitutions $\boldsymbol{\sigma} = (\sigma_n)_{n\in\mathbb{N}}$ (even if $S$ is infinite). We call~$\mathbf{u}$ a \emph{generalized right eigenvector} of~$\boldsymbol{\sigma}$. Note that \eqref{e:topPF} is called \emph{topological Perron-Frobenius condition} in~\cite{Fisher:09}. In particular, the letter frequency vector $\mathbf{u}=\tr{(f_1,f_2,\ldots,f_d)}$ is a generalized right eigenvector when $\omega$ is a limit word of a primitive and recurrent sequence of substitutions. \subsection{Lyapunov exponents and Pisot condition} \label{sec:lyap-expon-pisot} Let $S$ be a finite or infinite set of substitutions with invertible incidence matrices and let $(G,\tau)$ be an $S$-adic graph with associated edge shift $(E_G, \Sigma, \nu)$, where $\nu$ is a (ergodic) probability measure. 
With each $\boldsymbol{\gamma} = (\gamma_n)_{n\in\mathbb{N}} \in E_G$, associate the linear \emph{cocycle} operator $A(\boldsymbol{\gamma}) = \tr{\!M}_0$ (where $M_0$ is the incidence matrix of the substitution~$\sigma_0=\tau(\gamma_0)$ associated with the first edge $\gamma_0$ of the walk $\boldsymbol{\gamma}$). Assume that this cocycle is \emph{log-integrable} in the sense that \[ \int_{E_G} \log\max\{ \|A(x) \|, \|A(x)^{-1} \| \} d\nu(x) < \infty \] (this condition is always satisfied if $G$ is a finite graph, that is, when $G$ has finitely many edges). Then the \emph{Lyapunov exponents} $\theta_1, \theta_2, \ldots, \theta_d$ of $(E_G, \Sigma, \nu)$ are recursively defined by \begin{align} \theta_1 + \theta_2 + \cdots + \theta_k & = \lim_{n\to\infty} \frac{1}{n} \int_{E_G} \log \|\wedge^k \big(A(\Sigma^{n-1}(x)) \cdots A(\Sigma(x)) A(x)\big)\|\, d\nu(x) \nonumber \\ & = \lim_{n\to\infty} \frac{1}{n} \int_{E_G} \log \|\wedge^k (\tr{M}_{[0,n)})\|\, d\nu = \lim_{n\to\infty} \frac{1}{n} \int_{E_G} \log \|\wedge^k M_{[0,n)}\|\, d\nu \label{eq:transposeequal} \end{align} for $1 \le k \le d$, where $\wedge^k$ denotes the $k$-fold wedge product. Here and in the following, $\|\cdot\|$ denotes the maximum norm~$\|\cdot\|_\infty$. Following \cite[\S 6.3]{Berthe-Delecroix}, we say that $(E_G, \Sigma, \nu)$ satisfies the \emph{Pisot condition} if \[ \theta_1 > 0 > \theta_2 \ge \theta_3 \ge \cdots \ge \theta_d. \] \subsection{Natural codings and bounded remainder sets}\label{sec:coding} Let $\Lambda$ be a full-rank lattice in~$\mathbb{R}^d$ and $T_\mathbf{t}: \mathbb{R}^d/\Lambda \to \mathbb{R}^d/\Lambda$, $\mathbf{x} \mapsto \mathbf{x} + \mathbf{t}$ a given toral translation. Let $R\subset\mathbb{R}^d$ be a fundamental domain for $\Lambda$ and $\tilde T_\mathbf{t}:R\to R$ the mapping induced by $T_\mathbf{t}$ on $R$. 
If $R = R_1 \cup \cdots \cup R_k$ is a partition of $R$ (up to measure zero) such that for each $1\le i\le k$ the restriction~$\tilde T_\mathbf{t}|_{R_i}$ is given by the translation ${\mathbf x}\mapsto{\mathbf x}+{\mathbf t}_i$ for some~${\mathbf t}_i\in\mathbb{R}^d$, and $\omega$ is the coding of a point $\mathbf{x} \in R$ with respect to this partition, we call~$\omega$ a \emph{natural coding} of $T_\mathbf{t}$. A~symbolic dynamical system $(X,\Sigma)$ is a \emph{natural coding} of $(\mathbb{R}^d/\Lambda, T_\mathbf{t})$ if $(X,\Sigma)$ and $(\mathbb{R}^d/\Lambda, T_\mathbf{t})$ are measurably conjugate and every element of~$X$ is a natural coding of the orbit of some point of the $d$-dimensional torus $\mathbb{R}^d/\Lambda$ (with respect to some fixed partition). A~subset~$A$ of $\mathbb{R}^d/\Lambda$ with Lebesgue measure~$\lambda(A)$ is said to be a~\emph{bounded remainder set} for the translation~$T_\mathbf{t}$ if there exists $C > 0$ such that, for a.e.\ $x \in \mathbb{R}^d/\Lambda$, \[ |\#\{n < N:\, T _\mathbf{t}^n(x) \in A \} - N \lambda (A)/\lambda(R) | < C \qquad \mbox{for all}\ N \in \mathbb{N}. \] Observe that if $(X,\Sigma)$ is a natural coding of a minimal translation $(\mathbb{R}^d/\Lambda, T _\mathbf{t})$ with balanced language, then the elements of its associated partition are bounded remainder sets \cite[Proposition~7]{Adamczewski:03}. Moreover, $A$~is a bounded remainder set if it is an atom of a partition that gives rise to a natural coding of a translation whose induced mapping on~$A$ is again a translation; see \cite{Rauzy:84} (we also refer to \cite{Ferenczi92} for an analogous characterization of bounded remainder sets). 
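A one-dimensional illustration of these notions (our sketch, not from the paper): for the golden rotation $T_\mathbf{t}$ with $t = 1/\varphi$ on $\mathbb{R}/\mathbb{Z}$, the interval $A = [0, 1-t)$ is a bounded remainder set, and coding orbits by the partition $\{[0,1-t),\, [1-t,1)\}$ yields Sturmian words, which are natural codings of this rotation.

```python
phi = (1 + 5 ** 0.5) / 2
t = 1 / phi                       # golden rotation angle; 1 - t = 1/phi**2

x, count, max_dev, word = 0.0, 0, 0.0, []
for n in range(1, 5001):
    if x < 1 - t:                 # orbit point lies in A = [0, 1 - t)
        count += 1
        word.append('a')
    else:                         # orbit point lies in [1 - t, 1)
        word.append('b')
    x = (x + t) % 1.0
    max_dev = max(max_dev, abs(count - n * (1 - t)))

# bounded remainder: the counting deviation stays below 1 for this interval
assert max_dev < 1.0

# the coding is Sturmian: exactly k + 1 distinct factors of each length k
s = ''.join(word)
for k in range(1, 7):
    assert len({s[i:i + k] for i in range(len(s) - k + 1)}) == k + 1
```

For this particular interval one can check directly that the deviation after $n$ steps equals the fractional part $\{nt\}$, which explains the bound $1$ observed numerically.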
\subsection{(Multiple) tilings} \label{sec:multiple-tilings} We call a collection~$\mathcal{K}$ of compact subsets of a Euclidean space~$\mathcal{E}$ a \emph{multiple tiling} of~$\mathcal{E}$ if each element of~$\mathcal{K}$ is the closure of its interior and if there exists a positive integer~$m$ such that almost every point of~$\mathcal{E}$ (with respect to the Lebesgue measure) is contained in exactly $m$ elements of~$\mathcal{K}$. The integer~$m$ is called the \emph{covering degree} of the multiple tiling~$\mathcal{K}$. If $m=1$, then $\mathcal{K}$ is called a \emph{tiling} of~$\mathcal{E}$. A~point in~$\mathcal{E}$ is called \emph{$m$-exclusive} if it is contained in the interior of exactly $m$ tiles of~$\mathcal{K}$; it is called \emph{exclusive} if $m = 1$. \subsection{Rauzy fractals} \label{subsec:FR} For a vector $\mathbf{w} \in \mathbb{R}^d \setminus \{\mathbf{0}\}$, let \[ \mathbf{w}^\bot = \{\mathbf{x} \in \mathbb{R}^d:\, \langle \mathbf{w}, \mathbf{x} \rangle = 0\} \] be the hyperplane orthogonal to~$\mathbf{w}$ containing the origin, equipped with the $({d\!-\!1})$-dimensional Lebesgue measure~$\lambda_\mathbf{w}$. In particular, for $\mathbf{1} = \tr{(1,\ldots,1)}$, $\mathbf{1}^\bot$~is the hyperplane of vectors whose entries sum up to~$0$. The \emph{Rauzy fractal} (in the representation space~$\mathbf{w}^\bot$, $\mathbf{w} \in \mathbb{R}_{\ge0}^d \setminus \{\mathbf{0}\}$) associated with a sequence of substitutions $\boldsymbol{\sigma} = (\sigma_n)_{n\in\mathbb{N}}$ over the alphabet~$\mathcal{A}$ with generalized right eigenvector~$\mathbf{u}$ is \[ \mathcal{R}_\mathbf{w} = \overline{\{\pi_{\mathbf{u},\mathbf{w}}\, \mathbf{l}(p):\, p \in \mathcal{A}^*,\ \mbox{$p$ is a prefix of a limit word of $\boldsymbol{\sigma}$}\}}, \] where $\pi_{\mathbf{u},\mathbf{w}}$ denotes the projection along the direction of~$\mathbf{u}$ onto~$\mathbf{w}^\bot$. 
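The definition of $\mathcal{R}_\mathbf{w}$ can be explored numerically. The following sketch (ours; the Tribonacci substitution with periodic directive sequence is again the illustrative example) approximates the generalized right eigenvector $\mathbf{u}$, projects the abelianized prefixes of a long limit-word prefix along $\mathbf{u}$ onto $\mathbf{1}^\bot$, and checks that the resulting point set, whose closure is $\mathcal{R}$, stays bounded, reflecting balancedness of the language.

```python
sigma = {'1': '12', '2': '13', '3': '1'}   # Tribonacci substitution (our example)

def apply(sub, word):
    return ''.join(sub[c] for c in word)

# generalized right eigenvector u from the nested cones M_[0,n) R^3_+,
# obtained here by iterating M_sigma and normalizing
M = [[1, 1, 1], [1, 0, 0], [0, 1, 0]]
u = [1.0, 1.0, 1.0]
for _ in range(60):
    v = [sum(M[i][j] * u[j] for j in range(3)) for i in range(3)]
    u = [x / sum(v) for x in v]

def project(x):
    """pi_{u,1}: projection along u onto 1^perp (sum(u) = 1 after normalizing)."""
    return tuple(xi - sum(x) * ui for xi, ui in zip(x, u))

w = '1'
for _ in range(14):
    w = apply(sigma, w)            # a long prefix of the limit word

pts, counts = [], [0, 0, 0]
for c in w:                        # project l(p) for every prefix p
    pts.append(project(counts))
    counts[int(c) - 1] += 1

assert pts[0] == (0.0, 0.0, 0.0)
assert all(abs(coord) < 3 for pt in pts for coord in pt)
```

Plotting the first two coordinates of `pts` (in a basis of $\mathbf{1}^\bot$) reproduces the familiar picture of the Tribonacci Rauzy fractal.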
The Rauzy fractal has natural \emph{subpieces} (or \emph{subtiles}) defined by \[ \mathcal{R}_\mathbf{w}(i) = \overline{\{\pi_{\mathbf{u},\mathbf{w}}\, \mathbf{l}(p):\, p \in \mathcal{A}^*,\ \mbox{$p\hspace{.1em}i$ is a prefix of a limit word of $\boldsymbol{\sigma}$}\}}. \] We set $\mathcal{R} = \mathcal{R}_\mathbf{1}$ and $\mathcal{R}(i) = \mathcal{R}_\mathbf{1}(i)$. If $\omega\in\mathcal{A}^\mathbb{N}$, then $\{\mathbf{l}(p):\, \mbox{$p$ is a prefix of $\omega$}\}$ can be regarded as the set of vertex points of the \emph{broken line} corresponding to $\omega$ (see e.g.\ \cite[Section~5.2.2]{CANTBST}). The Rauzy fractal $\mathcal{R}_\mathbf{w}$ is the closure of the projection of the vertices of all broken lines corresponding to a limit word. When $\boldsymbol{\sigma}$ is a primitive, algebraically irreducible, and recurrent sequence of substitutions with balanced language~$\mathcal{L}_{\boldsymbol{\sigma}}$, it follows from Proposition~\ref{p:strongconvergence} below that it is sufficient to take a single (arbitrary) limit word in the definition of the Rauzy fractal. The \emph{Rauzy boxes} (or suspensions of the Rauzy fractals) are \[ \widehat{\mathcal{R}}_\mathbf{w}(i) = \big\{x\, (\mathbf{e}_i - \pi_{\mathbf{u},\mathbf{w}}\, \mathbf{e}_i) - \mathbf{y}:\, x \in [0,1),\ \mathbf{y} \in \mathcal{R}_\mathbf{w}(i)\big\}, \] where $\mathbf{e}_i = \mathbf{l}(i)$ denotes the $i$-th standard unit vector in~$\mathbb{R}^d$. \subsection{Discrete hyperplanes and collections of tiles} \label{sec:discr-hyperpl-coll} Let $\boldsymbol{\sigma}$ be a sequence of substitutions over the alphabet~$\mathcal{A}$ with generalized right eigenvector~$\mathbf{u}$.
For any vector $\mathbf{w} \in \mathbb{R}_{\ge0}^d \setminus \{\mathbf{0}\}$, we consider the collections of tiles \[ \mathcal{C}_\mathbf{w} = \big\{\pi_{\mathbf{u},\mathbf{w}}\, \mathbf{x} + \mathcal{R}_\mathbf{w}(i):\, [\mathbf{x},i] \in \Gamma(\mathbf{w})\big\} \quad \mbox{and} \quad \widehat{\mathcal{C}}_{\mathbf{w}} = \big\{\mathbf{z} + \widehat{\mathcal{R}}_{\mathbf{w}}(i):\, i\in\mathcal{A},\, \mathbf{z}\in \mathbb{Z}^d\big\}, \] where \[ \Gamma(\mathbf{w}) = \big\{[\mathbf{x}, i] \in \mathbb{Z}^d \times \mathcal{A}:\, 0 \le \langle \mathbf{w}, \mathbf{x}\rangle < \langle \mathbf{w}, \mathbf{e}_i \rangle\big\} \] is the \emph{discrete hyperplane}\footnote{A~geometric interpretation can be given to the notation $[\mathbf{x},i] \in \mathbb{Z}^d \times \mathcal{A}$ by setting $[\mathbf{x},i] = \{\mathbf{x} + \sum_{j\in\mathcal{A},\, j \neq i} \lambda_j \mathbf{e}_j:\, \lambda_j \in [0,1],\ j \in \mathcal{A}\}$, which turns $\Gamma(\mathbf{w})$ into a \emph{stepped hyperplane}.} approximating~$\mathbf{w}^\bot$. We endow $\Gamma(\mathbf{w})$ with a product metric of the distance induced by $||\cdot||=||\cdot||_\infty$ on $\mathbb{Z}^d$ and some metric on~$\mathcal{A}$. This notion of discrete hyperplane corresponds to the notion of standard discrete hyperplane in discrete geometry; see~\cite{Reveilles:91}. In the particular case $\mathbf{w} = \mathbf{1}$, the collection \[ \mathcal{C}_\mathbf{1} = \{\mathbf{x} + \mathcal{R}(i):\, \mathbf{x} \in \mathbb{Z}^d \cap \mathbf{1}^\bot,\, i \in \mathcal{A}\} \] consists of the translations of (the subtiles of) the Rauzy fractal by vectors in the lattice $\mathbb{Z}^d \cap \mathbf{1}^\bot$. The collection~$\mathcal{C}_\mathbf{1} $ generalizes the periodic tiling introduced for unimodular Pisot (irreducible) substitutions. 
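Membership in the discrete hyperplane $\Gamma(\mathbf{w})$ is a simple inequality test. The following sketch (ours; the positive direction $\mathbf{w} = \tr{(\beta^2, \beta, 1)}$ built from the Tribonacci constant $\beta$ is merely an illustrative choice) enumerates a small patch of $\Gamma(\mathbf{w})$.

```python
beta = 1.8392867552141612          # Tribonacci constant (numerical value)
w = [beta ** 2, beta, 1.0]         # a sample positive direction (our choice)

def in_gamma(x, i):
    """Membership of the face [x, i] in Gamma(w): 0 <= <w, x> < <w, e_i>."""
    s = sum(wj * xj for wj, xj in zip(w, x))
    return 0 <= s < w[i]

# enumerate a small patch of the discrete hyperplane
patch = [(x, i)
         for x in [(a, b, c) for a in range(-3, 4)
                   for b in range(-3, 4) for c in range(-3, 4)]
         for i in range(3) if in_gamma(x, i)]

assert all(in_gamma((0, 0, 0), i) for i in range(3))   # the origin faces
assert in_gamma((1, -1, 0), 0) and not in_gamma((1, -1, 0), 2)
```

Rendering each face $[\mathbf{x},i]$ of `patch` as the unit square of the footnote's geometric interpretation produces the corresponding stepped hyperplane.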
For particular vectors~$\mathbf{v}$ that will be specified in Section~\ref{subsec:choicev}, the collection~$\mathcal{C}_\mathbf{v}$ generalizes the corresponding aperiodic tiling that is obtained in the Pisot case by taking for $\mathbf{v}$ a left Perron-Frobenius eigenvector of~$M_\sigma$; see e.g.~\cite{Ito-Rao:06}. We also recall the formalism of \emph{dual substitutions} introduced in~\cite{Arnoux-Ito:01}. For $[\mathbf{x}, i] \in \mathbb{Z}^d \times \mathcal{A}$ and a unimodular substitution $\sigma$ on~$\mathcal{A}$, let \begin{equation}\label{eq:dualsubst} E_1^*(\sigma)[\mathbf{x}, i] = \big\{[M_\sigma^{-1} (\mathbf{x} + \mathbf{l}(p)), j]:\, \mbox{$j \in \mathcal{A}$, $p\in\mathcal{A}^*$ such that $p\hspace{.1em}i$ is a prefix of $\sigma(j)$}\big\}. \end{equation} We will recall basic properties of $E_1^*$ in Section~\ref{sec:dualsubst}. In order to make this formalism work, we assume that our substitutions are unimodular. Observe that a non-unimodular theory in the Pisot substitutive case has also been developed; see e.g.\ \cite{MineThus} and the references therein. \subsection{Coincidences and geometric finiteness} A~sequence of substitutions $\boldsymbol{\sigma} = (\sigma_n)_{n\in\mathbb{N}}$ satisfies the \emph{strong coincidence condition} if there is $\ell \in \mathbb{N}$ such that, for each pair $(j_1,j_2) \in \mathcal{A} \times \mathcal{A}$, there are $i \in \mathcal{A}$ and $p_1, p_2 \in \mathcal{A}^*$ with $\mathbf{l}(p_1) = \mathbf{l}(p_2)$ such that $\sigma_{[0,\ell)}(j_1) \in p_1\hspace{.1em}i\hspace{.1em}\mathcal{A}^*$ and $\sigma_{[0,\ell)}(j_2) \in p_2\hspace{.1em}i\hspace{.1em}\mathcal{A}^*$. As in the periodic case, this condition will ensure that the subtiles~$\mathcal{R}(i)$ are disjoint in measure and, hence, define an exchange of domains on~$\mathcal{R}$ (see Proposition \ref{p:strongcoincidence}; the same conclusion is true for a suffix version of strong coincidence, see Remark~\ref{rem:-}). 
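The strong coincidence condition is algorithmically checkable on any finite prefix $\sigma_{[0,\ell)}$. The following sketch (ours) verifies it for the Tribonacci substitution, where $\ell = 1$ already suffices because every image begins with the letter $1$, and shows that a single Arnoux-Rauzy substitution fails this prefix version (it does satisfy the suffix version mentioned in Remark~\ref{rem:-}, since all its images end with the same letter).

```python
from itertools import combinations

def apply(sub, word):
    return ''.join(sub[c] for c in word)

def strong_coincidence(subs, alphabet='123'):
    """Check the strong coincidence condition for sigma_[0,l), l = len(subs)."""
    sig = {c: c for c in alphabet}
    for s in subs:                         # build the product sigma_[0,l)
        sig = {c: apply(sig, s[c]) for c in s}
    ab = lambda p: tuple(p.count(c) for c in alphabet)
    for j1, j2 in combinations(alphabet, 2):
        w1, w2 = sig[j1], sig[j2]
        # look for a common letter at abelian-equivalent prefixes
        if not any(w1[k1] == w2[k2] and ab(w1[:k1]) == ab(w2[:k2])
                   for k1 in range(len(w1)) for k2 in range(len(w2))):
            return False
    return True

tribo = {'1': '12', '2': '13', '3': '1'}
alpha1 = {'1': '1', '2': '21', '3': '31'}    # an Arnoux-Rauzy substitution

assert strong_coincidence([tribo])           # l = 1 works for Tribonacci
assert not strong_coincidence([alpha1])      # prefix version fails for alpha_1
```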
We say that $\boldsymbol{\sigma} = (\sigma_n)_{n\in\mathbb{N}}$ satisfies the \emph{geometric coincidence condition} if for each $R > 0$ there is $\ell \in \mathbb{N}$ such that, for all $n \ge \ell$, $E_1^*(\sigma_{[0,n)})[\mathbf{0},i_n]$ contains a ball of radius~$R$ of the discrete hyperplane $\Gamma(\tr{(M_{[0,n)})}\, \mathbf{1})$ for some $i_n \in \mathcal{A}$. This condition can be seen as an $S$-adic dual analogue to the geometric coincidence condition (or super-coincidence condition) in \cite{Barge-Kwapisz:06,Ito-Rao:06,CANTBST}, which provides a tiling criterion. Recall that the periodic tiling yields the isomorphism with a toral translation and thus pure discrete spectrum. This criterion is a coincidence type condition in the same vein as the various coincidence conditions introduced in the usual Pisot framework; see e.g.\ \cite{Solomyak:97,AL11}. In Proposition~\ref{p:gcc}, we give a variant of the geometric coincidence condition that can be checked algorithmically; see also Proposition~\ref{p:gccvariant}. A~more restrictive condition is the \emph{geometric finiteness property} stating that for each $R > 0$ there is $\ell \in \mathbb{N}$ such that $\bigcup_{i\in\mathcal{A}} E_1^*(\sigma_{[0,n)})[\mathbf{0},i]$ contains the ball $\{[\mathbf{x},i] \in \Gamma(\tr{(M_{[0,n)})}\, \mathbf{1}):\, \|\mathbf{x}\| \le R\}$ for all $n \ge \ell$. This implies that $\bigcup_{i\in\mathcal{A}} E_1^*(\sigma_{[0,n)})[\mathbf{0},i]$ generates a whole discrete plane as $n\to\infty$, and that $\mathbf{0}$ is an inner point of the Rauzy fractal; see Proposition~\ref{p:gccvariant}. This condition is a geometric variant of the finiteness property in the framework of beta-numeration~\cite{FS92}.
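The operator $E_1^*$ of \eqref{eq:dualsubst} is likewise easy to iterate. The sketch below (ours, for the Tribonacci substitution, with $M_\sigma^{-1}$ hard-coded) generates growing patches from the faces at the origin and verifies that after $n$ steps they lie in the discrete hyperplane $\Gamma(\tr{(M_{[0,n)})}\,\mathbf{1})$, in the spirit of the geometric coincidence and finiteness conditions.

```python
sigma = {'1': '12', '2': '13', '3': '1'}      # Tribonacci (our example)
M_inv = [[0, 1, 0], [0, 0, 1], [1, -1, -1]]   # M_sigma^{-1}; det M_sigma = 1
M_tr = [[1, 1, 0], [1, 0, 1], [1, 0, 0]]      # transpose of M_sigma

def ab(p):
    return [p.count(c) for c in '123']

def mat_vec(A, v):
    return tuple(sum(a * x for a, x in zip(row, v)) for row in A)

def E1star(face):
    """E_1^*(sigma)[x,i]: all [M^{-1}(x + l(p)), j] with p i a prefix of sigma(j)."""
    x, i = face
    out = set()
    for j, image in sigma.items():
        for k, c in enumerate(image):
            if c == i:
                y = [xa + pa for xa, pa in zip(x, ab(image[:k]))]
                out.add((mat_vec(M_inv, y), j))
    return out

patch = {((0, 0, 0), i) for i in '123'}       # the three faces at the origin
w, sizes = (1, 1, 1), [len(patch)]
for _ in range(6):
    patch = set().union(*(E1star(f) for f in patch))
    w = mat_vec(M_tr, w)
    # every generated face lies in the discrete hyperplane Gamma((M^T)^n 1)
    for x, i in patch:
        s = sum(wa * xa for wa, xa in zip(w, x))
        assert 0 <= s < w['123'.index(i)]
    sizes.append(len(patch))

assert sizes[:2] == [3, 5] and sizes == sorted(sizes)   # growing patches
```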
\section{Main results}\label{sec:mainresults} \subsection{General results on $S$-adic shifts} Our first result, Theorem~\ref{t:1}, sets the stage for all subsequent results by establishing a variety of properties of $S$-adic shifts $(X_{\boldsymbol{\sigma}},\Sigma)$ under general conditions. Indeed, the set $S$ of unimodular substitutions from which the directive sequence $\boldsymbol{\sigma}$ is formed may be finite or infinite in this theorem. Primitivity and algebraic irreducibility are the analogs of primitivity and irreducibility (of the characteristic polynomial of the incidence matrix) of a substitution $\sigma$ in the periodic case. To guarantee minimality of $(X_{\boldsymbol{\sigma}},\Sigma)$ in the $S$-adic setting, we require the directive sequence~$\boldsymbol{\sigma}$ to be primitive; to guarantee unique ergodicity, we additionally assume recurrence (see the proof of Lemma~\ref{lem:uniquelyergodic}). Moreover, we need balancedness of the language~$\mathcal{L}_{\boldsymbol{\sigma}}$ to ensure that the associated Rauzy fractal~$\mathcal{R}$ is bounded. To endow~$\mathcal{R}$ with a convenient subdivision structure (replacing the graph directed self-affine structure of the periodic case), uniform balancedness properties of the ``desubstituted'' languages~$\mathcal{L}_{\boldsymbol{\sigma}}^{(n)}$ are needed for infinitely many (but not all)~$n$. These assumptions are not very restrictive in the sense that they enable us to prove metric results valid for almost all directive sequences under the Pisot condition, as specified in Theorem~\ref{t:3}. \begin{theorem} \label{t:1} Let $S$ be a finite or infinite set of unimodular substitutions over the finite alphabet~$\mathcal{A}$ and let $\boldsymbol{\sigma} = (\sigma_n)_{n\in\mathbb{N}} \in S^{\mathbb{N}}$ be a primitive and algebraically irreducible directive sequence.
Assume that there is $C > 0$ such that for each $\ell \in \mathbb{N}$, there is $n \ge 1$ with $(\sigma_{n},\ldots,\sigma_{n+\ell-1}) = (\sigma_{0},\ldots,\sigma_{\ell-1})$ and the language $\mathcal{L}_{\boldsymbol{\sigma}}^{(n+\ell)}$ is $C$-balanced. Then the following results are true. \renewcommand{\theenumi}{\roman{enumi}} \begin{enumerate} \itemsep1ex \item \label{i:11} The $S$-adic shift $(X_{\boldsymbol{\sigma}},\Sigma)$ is minimal and uniquely ergodic with unique invariant measure~$\mu$. \item \label{i:12} Each subtile~$\mathcal{R}(i)$, $i \in \mathcal{A}$, of the Rauzy fractal~$\mathcal{R}$ is a compact set that is the closure of its interior; its boundary has zero Lebesgue measure~$\lambda_{\mathbf{1}}$. \item \label{i:13} The collection~$\mathcal{C}_\mathbf{1}$ forms a multiple tiling of~$\mathbf{1}^\bot$, and the $S$-adic shift $(X_{\boldsymbol{\sigma}},\Sigma,\mu)$ admits as a factor (with finite fiber) a translation on the torus~$\mathbb{T}^{d-1}$. As a consequence, it is not weakly mixing. \item \label{i:14} If $\boldsymbol{\sigma}$ satisfies the strong coincidence condition, then the subtiles~$\mathcal{R}(i)$, $i \in \mathcal{A}$, are mutually disjoint in measure, and the $S$-adic shift $(X_{\boldsymbol{\sigma}},\Sigma,\mu)$ is measurably conjugate to an exchange of domains on~$\mathcal{R}$. \item\label{i:15} The collection~$\mathcal{C}_\mathbf{1}$ forms a tiling of~$\mathbf{1}^\bot$ if and only if $\boldsymbol{\sigma}$ satisfies the geometric coincidence condition. \end{enumerate} If moreover $\mathcal{C}_\mathbf{1}$ forms a tiling of~$\mathbf{1}^\bot$, then also the following results hold. \begin{enumerate} \setcounter{enumi}{5} \itemsep1ex \item \label{i:16} The $S$-adic shift $(X_{\boldsymbol{\sigma}},\Sigma,\mu)$ is measurably conjugate to a translation $T$ on the torus~$\mathbb{T}^{d-1}$; in particular, its measure-theoretic spectrum is purely discrete. 
\item\label{i:17} Each $\omega\in X_{\boldsymbol{\sigma}}$ is a natural coding of the toral translation~$T$ with respect to the partition $\{\mathcal{R}(i):\,i \in \mathcal{A}\}$. \item\label{i:18} The set $\mathcal{R}(i)$ is a bounded remainder set for the toral translation~$T$ for each $i\in\mathcal{A}$. \end{enumerate} \end{theorem} Note that the assumptions in Theorem~\ref{t:1} obviously imply that the sequence~$\boldsymbol{\sigma}$ is recurrent. \begin{remark}\label{rem:w} We will prove in Propositions~\ref{p:independentmultiple} and~\ref{p:tilingRd} that, under the conditions of Theorem~\ref{t:1}, for each $\mathbf{w} \in \mathbb{R}^d_{\ge0} \setminus \{\mathbf{0}\}$ the collection~$\mathcal{C}_\mathbf{w}$ forms a multiple tiling of~$\mathbf{w}^\bot$ with covering degree~$m$ not depending on~$\mathbf{w}$, and $\widehat{\mathcal{C}}_\mathbf{w}$ forms a multiple (lattice) tiling of~$\mathbb{R}^d$ with the same covering degree~$m$. In particular, if $m = 1$, then $\bigcup_{i\in\mathcal{A}} \widehat{\mathcal{R}}_\mathbf{w}(i)$ is a fundamental domain of $\mathbb{R}^d / \mathbb{Z}^d$. This will be the key result for defining non-stationary Markov partitions associated with two-sided Pisot $S$-adic systems (e.g., two-sided directive sequences in the framework of natural extensions of continued fraction algorithms), which we plan to investigate in a forthcoming paper. The vector~$\mathbf{w}$ is then given by a sequence $(\sigma_n)_{n<0}$. Moreover, taking $\mathbf{w} = \mathbf{e}_i$, we obtain that each subtile $\mathcal{R}(i)$ tiles periodically. This result seems to be new even in the periodic case. \end{remark} \begin{theorem} \label{t:3} Let $S$ be a finite or infinite set of unimodular substitutions and let $(G,\tau)$ be an $S$-adic graph with finitely many vertices and associated edge shift $(E_G, \Sigma, \nu)$. Assume that this shift is ergodic, that the cocycle $A$ is log-integrable, and that the Pisot condition holds.
Assume further that $\nu$ assigns positive measure to each (non-empty) cylinder, and that there exists a cylinder corresponding to a substitution with positive incidence matrix. Then, for the directive sequence $\boldsymbol{\sigma}=(\tau(\gamma_n))$ of $\nu$-almost every walk $\boldsymbol{\gamma}=(\gamma_n) \in E_G$, \renewcommand{\theenumi}{\roman{enumi}} \begin{enumerate} \itemsep1ex \item Assertions (\ref{i:11})--(\ref{i:15}) of Theorem~\ref{t:1} hold; \item Assertions (\ref{i:16})--(\ref{i:18}) of Theorem~\ref{t:1} hold provided that the collection $\mathcal{C}_\mathbf{1}$ associated with~$\boldsymbol{\sigma}$ forms a tiling of $\mathbf{1}^\bot$. \end{enumerate} \end{theorem} \begin{remark}\label{rem:t:3} The setting of Theorem~\ref{t:3} covers the (additive) Arnoux-Rauzy and Brun algorithms (see Sections~\ref{sec:arnoux-rauzy-words} and~\ref{sec:brun-words-natural}; recall that the assumption that the cocycle $A$ is log-integrable is always satisfied when the $S$-adic graph is finite), but also includes many multiplicative continued fraction algorithms (which correspond to infinite sets~$S$). Most prominently, according to \cite{Perron:07} (see also \cite[Proposition~8]{SCHWEIGER}) the admissible sequences of the Jacobi-Perron algorithm can be represented by an $S$-adic graph with finitely many vertices and log-integrability of the associated cocycle is proved in~\cite{Lagarias}. For the two-dimensional case an associated (infinite) set of substitutions can be found for instance in~\cite{BBJS14} (this can easily be generalized to higher dimensions). Also, the acceleration of the Arnoux-Rauzy algorithm together with the invariant measure proposed in~\cite{AvilaHubSkrip} fits into the framework of Theorem~\ref{t:3}. \end{remark} We think that the conditions of Theorem~\ref{t:1} are enough to get a tiling of~$\mathbf{1}^\bot$ by~$\mathcal{C}_\mathbf{1}$ and, hence, measurable conjugacy of $(X_{\boldsymbol{\sigma}},\Sigma)$ to a toral translation. 
This extension of the well-known \emph{Pisot substitution conjecture} to the $S$-adic setting is made precise in the following statement. (Here, we also replace uniform balancedness of~$\mathcal{L}_{\boldsymbol{\sigma}}^{(n+\ell)}$ by the weaker condition that $\mathcal{L}_{\boldsymbol{\sigma}}$ is balanced.) Note that the word ``Pisot'' does not occur in the statement of the conjecture, but the generalization of the Pisot hypothesis is provided by the balancedness assumption. \begin{conjecture}[$S$-adic Pisot conjecture] \label{c:1} Let $S$ be a finite or infinite set of unimodular substitutions over the finite alphabet~$\mathcal{A}$ and let $\boldsymbol{\sigma}\in S^\mathbb{N}$ be a primitive, algebraically irreducible, and recurrent directive sequence with balanced language~$\mathcal{L}_{\boldsymbol{\sigma}}$. Then $\mathcal{C}_\mathbf{1}$ forms a tiling of~$\mathbf{1}^\bot$, and the $S$-adic shift $(X_{\boldsymbol{\sigma}},\Sigma,\mu)$ is measurably conjugate to a translation on the torus~$\mathbb{T}^{d-1}$; in particular, its measure-theoretic spectrum is purely discrete. \end{conjecture} \begin{remark}\label{rem:LR} It would already be interesting to prove this conjecture for sequences $\boldsymbol{\sigma}$ with linearly recurrent limit word $\omega$. In view of \cite[Proposition~1.1]{Durand:00b} this would amount to proving Conjecture~\ref{c:1} for $S$-adic shifts that are conjugate to \emph{proper $S$-adic shifts}. A \emph{proper $S$-adic shift} is an $S$-adic shift for which $S$ is a set of \emph{proper} substitutions. Recall that a substitution $\sigma$ is \emph{proper} if there are letters $r,l\in \mathcal{A}$ such that the word $\sigma(a)$ starts with $l$ and ends with $r$ for each $a\in \mathcal{A}$. It is shown in \cite[Proposition~25]{DHS:99} that in the substitutive case we always have linear recurrence (indeed, a substitutive dynamical system is always conjugate to a proper substitutive system, see~\cite[Section~5]{DHS:99}).
Thus even this special case contains the classical Pisot substitution conjecture. We are able to prove that linearly recurrent Arnoux-Rauzy words with recurrent directive sequence give rise to $S$-adic shifts that have pure discrete spectrum (see Corollary~\ref{cor:AR}). In this context it would also be of interest to generalize Barge's result~\cite{Barge15,Barge14}, where pure discrete spectrum is proved for a large class of substitutive systems characterized by certain combinatorial properties (including beta substitutions), to the $S$-adic setting. We have provided a proof of the two-letter alphabet version of Conjecture~\ref{c:1}, under the additional uniform balancedness assumptions of Theorem~\ref{t:1}, in \cite{BMST:15}. \end{remark} We work here with the $\mathbb{Z}$-action provided by the $S$-adic shift. However, under the assumptions of Theorem \ref{t:1} (with the balancedness assumption playing a crucial role), our results also apply to the $\mathbb{R}$-action of the associated tiling space (as investigated e.g.\ in~\cite{CS:03}), according to~\cite{Sadun:15}. \subsection{Arnoux-Rauzy words and the conjecture of Arnoux and Rauzy}\label{sec:arnoux-rauzy-words} For certain sets $S$ of substitutions, we get the assertions of Theorems~\ref{t:1} and~\ref{t:3} unconditionally for a large collection of directive sequences in~$S^\mathbb{N}$. Arnoux and Rauzy~\cite{Arnoux-Rauzy:91} proposed a generalization of Sturmian words to three letters (which initiated an extensive literature on so-called episturmian words; see e.g.\ \cite{Berstel:07}). They proved that these \emph{Arnoux-Rauzy words} can be expressed as $S$-adic words if $S = \{\alpha_i:\, i \in \mathcal{A}\}$ is the set of \emph{Arnoux-Rauzy substitutions} over $\mathcal{A}=\{1,2,3\}$ defined by \begin{equation}\label{eq:AR} \alpha_i:\ i \mapsto i,\ j \mapsto ji\ \mbox{for}\ j \in \mathcal{A} \setminus \{i\}\qquad (i\in\mathcal{A})\,.
\end{equation} It had been conjectured since the early nineties (see e.g.~\cite[p.~1267]{Cassaigne-Ferenczi-Zamboni:00} or \cite[Section~3.3]{Berthe-Ferenczi-Zamboni:05}) that each Arnoux-Rauzy word is a natural coding of a translation on the torus. Cassaigne et al.~\cite{Cassaigne-Ferenczi-Zamboni:00} provided a counterexample to this conjecture by constructing unbalanced Arnoux-Rauzy words (unbalanced words cannot come from natural codings by a result of Rauzy~\cite{Rauzy:84}). Moreover, Cassaigne et al.~\cite{Cassaigne-Ferenczi-Messaoudi:08} even showed that there exist Arnoux-Rauzy words $\omega$ on three letters such that $(X_\omega,\Sigma)$ is weakly mixing (w.r.t.\ the unique $\Sigma$-invariant probability measure on~$X_\omega$). To our knowledge, positive examples for this conjecture so far existed only in the periodic case; cf.~\cite{Berthe-Jolivet-Siegel:12,Barge-Stimac-Williams:13}. The metric result in Theorem~\ref{t:3} allows us to prove the following theorem, which confirms the conjecture of Arnoux and Rauzy almost everywhere. \begin{theorem} \label{t:5} Let $S$ be the set of Arnoux-Rauzy substitutions over three letters and consider the shift $(S^\mathbb{N},\Sigma,\nu)$ for some shift invariant ergodic probability measure~$\nu$ which assigns positive measure to each cylinder. Then $(S^\mathbb{N},\Sigma,\nu)$ satisfies the Pisot condition. Moreover, for $\nu$-almost all sequences $\boldsymbol{\sigma} \in S^\mathbb{N}$ the collection~$\mathcal{C}_\mathbf{1}$ forms a tiling, the $S$-adic shift $(X_{\boldsymbol{\sigma}},\Sigma)$ is measurably conjugate to a translation on the torus~$\mathbb{T}^2$, and the words in~$X_{\boldsymbol{\sigma}}$ form natural codings of this translation. \end{theorem} As an example of a measure satisfying the assumptions of Theorem \ref{t:5}, consider the measure of maximal entropy for the suspension flow of the Rauzy gasket constructed in \cite{AvilaHubSkripbis} (see also \cite{AvilaHubSkrip}).
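To make the definition in~\eqref{eq:AR} concrete, the following short Python sketch (our illustration, not part of the formal development) applies the Arnoux-Rauzy substitutions to words over $\{1,2,3\}$ and computes prefixes of $S$-adic limit words; the periodic directive sequence $(\alpha_1,\alpha_2,\alpha_3,\alpha_1,\dots)$ recovers the Tribonacci word. The helper names are ours.

```python
# Illustration (ours): the Arnoux-Rauzy substitutions alpha_i of (eq:AR)
# over A = {1, 2, 3}, with words encoded as strings of digits.

def alpha(i):
    # alpha_i fixes the letter i and maps every other letter j to ji.
    return {j: (j if j == i else j + i) for j in "123"}

def apply_subst(subst, word):
    # Apply a substitution (letter -> word dictionary) letterwise.
    return "".join(subst[c] for c in word)

def sadic_prefix(directive, seed="1"):
    """Compute sigma_0 sigma_1 ... sigma_{n-1}(seed) for a finite directive word."""
    word = seed
    for i in reversed(directive):  # the innermost substitution acts first
        word = apply_subst(alpha(i), word)
    return word

# The periodic directive sequence (1, 2, 3, 1, 2, 3, ...) has the
# Tribonacci word as limit word.
print(sadic_prefix("123"))  # -> 1213121
```

Iterating a longer directive word only extends this prefix, illustrating how the limit word $\omega = \lim_n \sigma_0\cdots\sigma_{n-1}(1)$ arises.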
Using Theorem~\ref{t:1} we are also able to provide an (uncountable) class of non-substitutive Arnoux-Rauzy words that give rise to translations on the torus $\mathbb{T}^{2}$. To this end we introduce some terminology coming from the associated Arnoux-Rauzy continued fraction algorithm (which was also defined in \cite{Arnoux-Rauzy:91}). A directive sequence $\boldsymbol{\sigma}=(\sigma_n)\in S^\mathbb{N}$ that contains each $\alpha_i$ ($i=1,2,3$) infinitely often is said to have {\it bounded weak partial quotients} if there is $h \in \mathbb{N}$ such that $\sigma_n = \sigma_{n+1} = \cdots = \sigma_{n+h}$ does not hold for any $n \in \mathbb{N}$, and {\it bounded strong partial quotients} if every substitution in the directive sequence $\boldsymbol{\sigma}$ occurs with bounded gap. \begin{theorem} \label{t:4} Let $S=\{\alpha_1,\alpha_2,\alpha_3\}$ be the set of Arnoux-Rauzy substitutions over three letters. If $\boldsymbol{\sigma} \in S^\mathbb{N}$ is recurrent, contains each $\alpha_i$ ($i=1,2,3$) infinitely often and has bounded weak partial quotients, then the collection~$\mathcal{C}_\mathbf{1}$ forms a tiling, the $S$-adic shift $(X_{\boldsymbol{\sigma}},\Sigma)$ is measurably conjugate to a translation on the torus~$\mathbb{T}^2$, and the words in~$X_{\boldsymbol{\sigma}}$ form natural codings of this translation. \end{theorem} Note that examples of uniformly balanced words (for which $\omega^{(n)}$ is $C$-balanced for each~$n$) for the $S$-adic shifts generated by Arnoux-Rauzy substitutions are provided in \cite{Berthe-Cassaigne-Steiner}. In particular, boundedness of the strong partial quotients provides a nice characterization of linear recurrence for Arnoux-Rauzy words (see Proposition~\ref{prop:LR} below). This syndeticity condition is expressed in terms of letters. With the extra assumption of recurrence (not only on letters but on any factor) of the directive sequence, we obtain pure discrete spectrum.
We expect Corollary \ref{cor:AR} below to hold without this extra assumption of recurrence. \begin{corollary}\label{cor:AR} Any linearly recurrent Arnoux-Rauzy word~$\omega$ with recurrent directive sequence generates a symbolic dynamical system $(X_{\omega}, \Sigma)$ that has pure discrete spectrum. \end{corollary} It is well-known that Arnoux-Rauzy words can also be defined for $d>3$ letters (see e.g.~\cite{Berthe-Cassaigne-Steiner}). To apply our theory to these classes of words and prove the results of this section in this more general setting, it would be necessary to extend the combinatorial results from \cite{Berthe-Jolivet-Siegel:12} to higher dimensions. Although this should be possible, we expect it to be very tedious already for four letters. \subsection{Brun words and natural codings of rotations with linear complexity}\label{sec:brun-words-natural} Let $\Delta_2:=\{(x_1,x_2)\in\mathbb{R}^2\,:\, 0\le x_1\le x_2 \le 1\}$ be equipped with the Lebesgue measure $\lambda_{2}$. Brun~\cite{BRUN} devised a generalized continued fraction algorithm for vectors $(x_1,x_2)\in\Delta_2$. This algorithm (in its additive form) is defined by the mapping $T_{\rm Brun}: \Delta_2\to\Delta_2$, \begin{equation}\label{eq:brunmap} T_{\rm Brun}: (x_1,x_2) \mapsto \begin{cases} \left(\frac{x_1}{1-x_2},\frac{x_2}{1-x_2}\right), & \hbox{for } x_2 \le \frac12, \\ \left(\frac{x_1}{x_2},\frac{1-x_2}{x_2}\right), & \hbox{for } \frac12 \le x_2 \le 1-x_1,\\ \left(\frac{1-x_2}{x_2},\frac{x_1}{x_2}\right), & \hbox{for }1-x_1 \le x_2 \leq 1; \end{cases} \end{equation} for later use, we define $B(i)$ to be the set of $(x_1,x_2)\in\Delta_2$ meeting the restriction in the $i$-th line of \eqref{eq:brunmap}, for $1\le i\le 3$.
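The additive Brun map of~\eqref{eq:brunmap} and the branch sets $B(i)$ can be sketched in a few lines of Python (our illustration; on the measure-zero overlaps of the branches we simply take the first applicable case, and the function names are ours):

```python
# Sketch (ours): one step of the additive Brun algorithm (eq:brunmap) on
# Delta_2 = {(x1, x2) : 0 <= x1 <= x2 <= 1}.

def brun_branch(x1, x2):
    """Index i in {1, 2, 3} with (x1, x2) in B(i); ties go to the first case."""
    if x2 <= 0.5:
        return 1
    return 2 if x2 <= 1 - x1 else 3

def T_brun(x1, x2):
    i = brun_branch(x1, x2)
    if i == 1:
        return (x1 / (1 - x2), x2 / (1 - x2))
    if i == 2:
        return (x1 / x2, (1 - x2) / x2)
    return ((1 - x2) / x2, x1 / x2)

# The orbit of a point of Delta_2 stays in Delta_2.
x = (0.2, 0.3)
for _ in range(5):
    x = T_brun(*x)
    assert 0.0 <= x[0] <= x[1] <= 1.0
```

Recording the branch index $i_n$ along an orbit produces exactly the directive data used below to associate $S$-adic words with the algorithm.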
An easy computation shows that the linear (or ``projectivized'') version of this algorithm is defined for vectors $\mathbf{w}^{(0)}=(w_1^{(0)},w_2^{(0)},w_3^{(0)})$ with $0\le w_1^{(0)} \le w_2^{(0)} \le w_3^{(0)}$ by the recurrence $M_{i_n}\mathbf{w}^{(n)}=\mathbf{w}^{(n-1)}$, where $M_{i_n}$ is chosen among the matrices \begin{equation}\label{eq:brunmatrices} \begin{pmatrix}1&0&0\\0&1&0\\0&1&1 \end{pmatrix},\quad \begin{pmatrix}1&0&0\\0&0&1\\0&1&1 \end{pmatrix},\quad \begin{pmatrix}0&1&0\\0&0&1\\1&0&1 \end{pmatrix} \end{equation} according to the magnitude of $w_3^{(n-1)}-w_2^{(n-1)}$ compared to $w_1^{(n-1)}$ and~$w_2^{(n-1)}$. More precisely, we have $T_{\rm Brun}\big(w_1^{(n-1)}/w_3^{(n-1)},w_2^{(n-1)}/w_3^{(n-1)}\big) = \big(w_1^{(n)}/w_3^{(n)},w_2^{(n)}/w_3^{(n)}\big)$. We associate $S$-adic words with this algorithm by defining the \emph{Brun substitutions} \begin{equation}\label{eq:brun} \beta_1 : \begin{cases} 1 \mapsto 1 \\ 2 \mapsto 23 \\ 3 \mapsto 3 \end{cases} \quad \beta_2 : \begin{cases} 1 \mapsto 1 \\ 2 \mapsto 3 \\ 3 \mapsto 23 \end{cases} \quad \beta_3 : \begin{cases} 1 \mapsto 3 \\ 2 \mapsto 1 \\ 3 \mapsto 23 \end{cases} \end{equation} whose incidence matrices coincide with the three matrices in \eqref{eq:brunmatrices} associated with Brun's algorithm. Examples of uniformly balanced words (for which $\omega^{(n)}$ is $C$-balanced for each~$n$) for the $S$-adic shifts generated by Brun substitutions are provided in \cite{Delecroix-Hejda-Steiner}. We prove the following result on the related $S$-adic words. \begin{theorem} \label{t:6} Let $S = \{\beta_1,\beta_2,\beta_3\}$ be the set of Brun substitutions over three letters, and consider the shift $(S^\mathbb{N},\Sigma,\nu)$ for some shift invariant ergodic probability measure~$\nu$ that assigns positive measure to each cylinder. Then $(S^\mathbb{N},\Sigma,\nu)$ satisfies the Pisot condition. 
Moreover, for $\nu$-almost all sequences $\boldsymbol{\sigma} \in S^\mathbb{N}$ the collection~$\mathcal{C}_\mathbf{1}$ forms a tiling, the $S$-adic shift $(X_{\boldsymbol{\sigma}},\Sigma)$ is measurably conjugate to a translation on the torus~$\mathbb{T}^2$, and the words in~$X_{\boldsymbol{\sigma}}$ form natural codings of this translation.\end{theorem} We will now show that this result implies that the $S$-adic shifts associated with Brun's algorithm provide a natural coding of almost all rotations on the torus~$\mathbb{T}^2$. Indeed, by the (weak) convergence of Brun's algorithm for almost all $(x_1,x_2)\in\Delta_2$ (w.r.t.\ the two-dimensional Lebesgue measure; see e.g.\ \cite{BRUN}), there is a bijection $\Phi$ defined for almost all $(x_1,x_2)\in\Delta_2$ that makes the diagram \begin{equation}\label{CDPhi} \begin{CD} \Delta_2 @>T_{\rm Brun}>> \Delta_2 \\ @VV\Phi V @VV\Phi V \\ S^{\mathbb{N}} @>\Sigma>> S^\mathbb{N} \end{CD} \end{equation} commutative and that provides a measurable conjugacy between $(\Delta_2,T_{\rm Brun},\lambda_2)$ and $(S^{\mathbb{N}} ,\Sigma,\nu)$; the measure $\nu$ is specified in the proof of Theorem~\ref{t:7}. \begin{theorem}\label{t:7} For almost all $(x_1,x_2) \in \Delta_2$, the $S$-adic shift $(X_{\boldsymbol{\sigma}},\Sigma)$ with $\boldsymbol{\sigma} = \Phi(x_1,x_2)$ is measurably conjugate to the translation by $\big(\frac{x_1}{1+x_1+x_2},\frac{x_2}{1+x_1+x_2}\big)$ on~$\mathbb{T}^2$; moreover, each $\omega\in X_{\boldsymbol{\sigma}}$ is a natural coding of this translation, $\mathcal{L}_{\boldsymbol{\sigma}}$~is balanced, and the subpieces of the Rauzy fractal provide bounded remainder sets for this translation. \end{theorem} This result has the following consequence.
\begin{corollary}\label{cor:8} For almost all $\mathbf{t} \in \mathbb{T}^2$, there is $(x_1,x_2) \in \Delta_2$ such that the $S$-adic shift $(X_{\boldsymbol{\sigma}},\Sigma)$ with $\boldsymbol{\sigma} = \Phi(x_1,x_2)$ is measurably conjugate to the translation by~$\mathbf{t}$ on~$\mathbb{T}^2$. Moreover, the words in~$X_{\boldsymbol{\sigma}}$ form natural codings of the translation by~$\mathbf{t}$. \end{corollary} We believe that the codings mentioned in Theorem~\ref{t:7} have linear factor complexity, that is, for each such coding, there is $C>0$ such that the number of its factors of length~$n$ is less than~$Cn$. Indeed, S.~Labb\'e and J.~Leroy informed us that they are currently working on a proof of the fact that $S$-adic words with $S = \{\beta_1,\beta_2,\beta_3\}$ have linear factor complexity \cite{LabLer}. We thus get bounded remainder sets whose characteristic infinite words have linear factor complexity, contrary to the examples provided e.g.\ in~\cite{Chevallier,Grepstad-Lev}. \section{Convergence properties}\label{sec:conv} In this section, we show that the Rauzy fractal~$\mathcal{R}$ corresponding to a sequence~$\boldsymbol{\sigma}$ is bounded if $\mathcal{L}_{\boldsymbol{\sigma}}$ is balanced. Moreover, we prove that under certain conditions the letter frequency vector of an $S$-adic word has rationally independent entries and give a criterion that ensures the strong convergence of the matrix products $M_{[0,n)}$ to a single direction (defined by a generalized right eigenvector provided by the letter frequency vector). All these results will be needed in the sequel. Throughout this section $S$ is a (finite or infinite) set of substitutions over the finite alphabet~$\mathcal{A}$ and $\boldsymbol{\sigma} \in S^{\mathbb{N}}$ is a directive sequence.
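Since balancedness is the key hypothesis throughout this section, we illustrate it with a small Python check (ours, not part of the formal development): it computes the best balancedness constant witnessed by the factors of a finite word, using the classical fact that the Fibonacci word, being Sturmian, is $1$-balanced.

```python
# Sketch (ours): measure C-balancedness on the factors of a finite word.
# A language is C-balanced if any two factors u, v with |u| = |v| satisfy
# ||l(u) - l(v)|| <= C, where l(w) counts the occurrences of each letter.

def balance_constant(word, max_len):
    """max of ||l(u) - l(v)|| over factors u, v of word with |u| = |v| <= max_len."""
    C = 0
    for n in range(1, max_len + 1):
        for a in set(word):
            occ = [word[i:i + n].count(a) for i in range(len(word) - n + 1)]
            C = max(C, max(occ) - min(occ))
    return C

# The Fibonacci word (fixed point of 1 -> 12, 2 -> 1) is Sturmian,
# hence 1-balanced.
w = "1"
for _ in range(10):
    w = "".join({"1": "12", "2": "1"}[c] for c in w)
print(balance_constant(w, 8))  # -> 1
```

By contrast, an unbalanced language (such as that of the Arnoux-Rauzy counterexamples mentioned above) would make this constant grow without bound as longer and longer factors are inspected.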
\subsection{Boundedness of the Rauzy fractal} Recall that the Rauzy fractal~$\mathcal{R}$ is the closure of the projection of the vertices of the broken lines defined by limit words of~$\boldsymbol{\sigma}$; see Section~\ref{subsec:FR}. Therefore, $\mathcal{R}$~is compact if and only if the broken lines remain at bounded distance from the generalized right eigendirection~$\mathbb{R} \mathbf{u}$. The following result shows that this is equivalent to balancedness and establishes a connection between the degree of balancedness and the diameter of~$\mathcal{R}$; see also \cite[Proposition~7]{Adamczewski:03} and~\cite[Lemma~3]{Delecroix-Hejda-Steiner}. Recall that $\|\cdot\|$ denotes the maximum norm. \begin{lemma} \label{l:bounded} Let $\boldsymbol{\sigma}$ be a primitive sequence of substitutions with generalized right eigenvector~$\mathbf{u}$. Then $\mathcal{R}$ is bounded if and only if $\mathcal{L}_{\boldsymbol{\sigma}}$ is balanced. If $\mathcal{L}_{\boldsymbol{\sigma}}$ is $C$-balanced, then $\mathcal{R} \subset [-C,C]^d \cap \mathbf{1}^\bot$. \end{lemma} \begin{proof} Assume first that $\mathcal{R}$ is bounded. Then there exists $C$ such that $\|\pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(p)\| \le C$ for all prefixes $p$ of limit words of~$\boldsymbol{\sigma}$. Let $u, v \in \mathcal{L}_{\boldsymbol{\sigma}}$ with $|u| = |v|$. By the primitivity of $\boldsymbol{\sigma}$, $u$~and $v$ are factors of a limit word, hence, $\|\pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(u)\|,\, \|\pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(v)\| \le 2C$. As $\mathbf{l}(u) - \mathbf{l}(v) \in \mathbf{1}^\bot$, we obtain \[ \|\mathbf{l}(u) - \mathbf{l}(v)\| = \|\pi_{\mathbf{u},\mathbf{1}}\, (\mathbf{l}(u) - \mathbf{l}(v))\| \le \|\pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(u)\| + \|\pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(v)\| \le 4C, \] i.e., $\mathcal{L}_{\boldsymbol{\sigma}}$ is $4C$-balanced.
Assume now that $\mathcal{L}_{\boldsymbol{\sigma}}$ is $C$-balanced and let $p$ be a prefix of a limit word~$\omega$. Write $\omega$ as a concatenation of words~$v_k$, $k \in \mathbb{N}$, with $|v_k| = |p|$. Then $C$-balancedness yields $\|\pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(v_k) - \pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(p)\| \le C$ for all $k \in \mathbb{N}$, thus $\|\frac1n \sum_{k=0}^{n-1} \pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(v_k) - \pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(p)\| \le C$ for all $n \in \mathbb{N}$. As $M_{[0,n)}\, \mathbf{e}_i = \mathbf{l}(\sigma_{[0,n)}(i)) \in \mathbf{l}(\mathcal{L}_{\boldsymbol{\sigma}})$ for all $n \in \mathbb{N}$, $i \in \mathcal{A}$, the letter frequency vector of~$\omega$ (which exists because of balancedness~\cite{Berthe-Tijdeman:02}) is in~$\mathbb{R} \mathbf{u}$. Therefore, we have $\lim_{n\to\infty} \frac1n \sum_{k=0}^{n-1} \mathbf{l}(v_k) \in \mathbb{R} \mathbf{u}$, hence $\lim_{n\to\infty} \frac1n \sum_{k=0}^{n-1} \pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(v_k) = \mathbf{0}$, and consequently \begin{equation}\label{lem41end} \|\pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(p)\| = \|\lim_{n\to\infty} \frac1n \sum_{k=0}^{n-1} \pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(v_k) - \pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(p)\| \le C. \qedhere \end{equation} \end{proof} \subsection{Irrationality and strong convergence} In the periodic case with a unimodular irreducible Pisot substitution~$\sigma$, the incidence matrix~$M_\sigma$ has an expanding right eigenline and a contractive right hyperplane (that is orthogonal to an expanding left eigenvector), i.e., the matrix~$M_\sigma$ contracts the space~$\mathbb{R}^d$ towards the expanding eigenline. Moreover, irreducibility implies that the coordinates of the expanding eigenvector are rationally independent. These properties are crucial for proving that the Rauzy fractal~$\mathcal{R}$ has positive measure and induces a (multiple) tiling of the representation space~$\mathbf{1}^\bot$.
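In the periodic case these spectral properties are easy to verify numerically. The following pure-Python sketch (our illustration, using only standard arithmetic) does so for the Tribonacci substitution $1 \mapsto 12$, $2 \mapsto 13$, $3 \mapsto 1$, a standard example of a unimodular irreducible Pisot substitution:

```python
# Numerical sketch (ours): the Pisot property for the Tribonacci substitution
# 1 -> 12, 2 -> 13, 3 -> 1.  The columns of M are the abelianized images
# l(sigma(i)); one checks det M = 1, so the substitution is unimodular.

M = [[1, 1, 1],
     [1, 0, 0],
     [0, 1, 0]]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# Power iteration for the Perron-Frobenius eigenvalue of M.
v = [1.0, 1.0, 1.0]
for _ in range(200):
    w = mat_vec(M, v)
    lam = max(w)
    v = [x / lam for x in w]

# lam is the real root of x^3 = x^2 + x + 1 (the Tribonacci number).
assert abs(lam**3 - lam**2 - lam - 1) < 1e-9
# Since det M = 1, the remaining pair of complex conjugate eigenvalues has
# modulus lam**-0.5 < 1: M contracts a plane towards the expanding eigenline.
assert 0 < lam**-0.5 < 1
print(round(lam, 4))  # -> 1.8393
```

The single expanding eigenvalue together with the contracted complementary plane is exactly the Pisot picture that the $S$-adic results below replace by a strong convergence property of the products $M_{[0,n)}$.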
In the $S$-adic setting, the cones $M_{[0,n)}\, \mathbb{R}_+^d$ converge ``weakly'' to the direction of the generalized right eigenvector~$\mathbf{u}$; see Section~\ref{sec:gener-perr-frob}. We give a criterion for $\mathbf{u}$ to have rationally independent coordinates in Lemma~\ref{l:independent}. As the weak convergence of $M_{[0,n)}\, \mathbb{R}_+^d$ to~$\mathbf{u}$ is not sufficient for our purposes, in Proposition~\ref{p:strongconvergence} we will provide a strong convergence property. \begin{lemma} \label{l:independent} Let $\boldsymbol{\sigma}$ be an algebraically irreducible sequence of substitutions with generalized right eigenvector~$\mathbf{u}$ and balanced language~$\mathcal{L}_{\boldsymbol{\sigma}}$. Then the coordinates of~$\mathbf{u}$ are rationally independent. \end{lemma} \begin{proof} Suppose that $\mathbf{u}$ has rationally dependent coordinates, i.e., there is $\mathbf{x} \in \mathbb{Z}^d \setminus \{\mathbf{0}\}$ such that $\langle \mathbf{x}, \mathbf{u} \rangle= 0$. Then $\langle \tr{(M_{[0,n)})}\, \mathbf{x}, \mathbf{e}_i \rangle = \langle \mathbf{x}, M_{[0,n)}\, \mathbf{e}_i \rangle = \langle \mathbf{x}, \mathbf{l}(\sigma_{[0,n)}(i)) \rangle$ is bounded (uniformly in~$n$) for each $i \in \mathcal{A}$, by the balancedness of~$\mathcal{L}_{\boldsymbol{\sigma}}$; cf.\ the proof of Lemma~\ref{l:bounded}. Therefore, $\tr{(M_{[0,n)})}\, \mathbf{x} \in \mathbb{Z}^d$ takes only finitely many values, and there is $k \in \mathbb{N}$ such that $\tr{(M_{[0,\ell)})}\, \mathbf{x} = \tr{(M_{[0,k)})}\, \mathbf{x}$ for infinitely many $\ell > k$. The matrix $M_{[0,k)}$ is regular since otherwise $M_{[0,\ell)}$ would have the eigenvalue~$0$ for all $\ell \ge k$, contradicting algebraic irreducibility. Thus $\tr{(M_{[0,k)})}\, \mathbf{x} \not= \mathbf{0}$ is an eigenvector of~$\tr{(M_{[k,\ell)})}$ for the eigenvalue~$1$, contradicting that $M_{[k,\ell)}$ has irreducible characteristic polynomial for large~$\ell$.
\end{proof} \begin{proposition} \label{p:strongconvergence} Let $\boldsymbol{\sigma} = (\sigma_n)_{n\in\mathbb{N}}$ be a primitive, algebraically irreducible, and recurrent sequence of substitutions with balanced language~$\mathcal{L}_{\boldsymbol{\sigma}}$. Then \begin{equation} \label{e:Lsn} \lim_{n\to\infty} \sup\big\{ \|\pi_{\mathbf{u},\mathbf{1}} M_{[0,n)}\, \mathbf{l}(v)\|:\, v \in \mathcal{L}_{\boldsymbol{\sigma}}^{(n)}\big\} = 0. \end{equation} In particular, \begin{equation} \label{e:contraction} \lim_{n\to\infty} \pi_{\mathbf{u},\mathbf{1}} M_{[0,n)}\, \mathbf{e}_i = \mathbf{0} \quad \mbox{for all}\ i \in \mathcal{A}. \end{equation} \end{proposition} Note that \eqref{e:contraction} is the \emph{strong convergence} property used in the theory of multidimensional continued fraction algorithms; see e.g.\ \cite[Definition~19]{SCHWEIGER}. \begin{proof} First note that \eqref{e:contraction} follows from~\eqref{e:Lsn} since $i \in \mathcal{L}_{\boldsymbol{\sigma}}^{(n)}$ for all $i \in \mathcal{A}$, $n \in \mathbb{N}$, by primitivity. Let $\omega$ be a limit word of~$\boldsymbol{\sigma}$. Then, again by primitivity, for each $v \in \mathcal{L}_{\boldsymbol{\sigma}}^{(n)}$ we have $\mathbf{l}(v) = \mathbf{l}(p) - \mathbf{l}(q)$ for some prefixes $p, q$ of~$\omega^{(n)}$. Therefore, it is sufficient to prove that \begin{equation} \label{e:Lsn2} \lim_{n\to\infty} \sup\big\{ \|\pi_{\mathbf{u},\mathbf{1}} M_{[0,n)}\, \mathbf{l}(p)\|:\, \mbox{$p$ is a prefix of $\omega^{(n)}$}\big\} = 0. \end{equation} Choose $\varepsilon >0$ arbitrary but fixed. For all $n \in \mathbb{N}$, let $i_n$ be the first letter of $\omega^{(n)}$ and set \begin{align*} \mathcal{S}_n & = \{\pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(p):\, \mbox{$p$ is a prefix of $\sigma_{[0,n)}(i_n)$}\} \\ \tilde{\mathcal{R}} & = \overline{\{\pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(p):\, \mbox{$p$ is a prefix of $\omega$}\}}. 
\end{align*} Then $\lim_{n\to\infty} \mathcal{S}_n = \tilde{\mathcal{R}}$ (in Hausdorff metric) and $\pi_{\mathbf{u},\mathbf{1}} M_{[0,n)}\, \mathbf{l}(p) + \mathcal{S}_n \subset \tilde{\mathcal{R}}$ for all $p \in \mathcal{A}^*$ such that $p\, i_n$ is a prefix of~$\omega^{(n)}$. These two facts yield that \begin{equation}\label{uniform} \|\pi_{\mathbf{u},\mathbf{1}} M_{[0,n)}\, \mathbf{l}(p)\| < \varepsilon \end{equation} for all sufficiently large~$n$ and all $p \in \mathcal{A}^*$ such that $p\hspace{.1em}i_n$ is a prefix of~$\omega^{(n)}$. For $p \in \mathcal{A}^*$, let $N(p) = \{{n \in \mathbb{N}}:\, \mbox{$p\hspace{.1em}i_n$ is a prefix of $\omega^{(n)}$}\}$. If $N(p)$ is infinite, then \eqref{uniform} immediately implies that \begin{equation}\label{Npinfinite} \lim_{n\in N(p),\,n\to\infty} \pi_{\mathbf{u},\mathbf{1}} M_{[0,n)}\, \mathbf{l}(p) =\mathbf{0}. \end{equation} Our next aim is to find a set of prefixes~$p$ spanning~$\mathbb{R}^d$ that all yield an infinite set~$N(p)$. By recurrence of $(\sigma_n)_{n\in\mathbb{N}}$, there is an increasing sequence of integers $(n_k)_{k\in\mathbb{N}}$ such that \begin{equation}\label{sigmank} (\sigma_{n_k}, \sigma_{n_k+1}, \ldots, \sigma_{n_k+k-1}) = (\sigma_0, \sigma_1, \ldots, \sigma_{k-1}) \end{equation} for all $k \in \mathbb{N}$. Using a Cantor diagonal argument we can choose a sequence of letters $(j_\ell)_{\ell\in\mathbb{N}}$ such that, for each $\ell \in \mathbb{N}$, we have that \begin{equation}\label{jell1} (i_{n_k}, i_{n_k+1}, i_{n_k+2}, \ldots, i_{n_k+\ell}) = (j_0, j_1, j_2, \ldots, j_\ell) \end{equation} holds for infinitely many $k \in \mathbb{N}$; denote the set of these~$k$ by~$K_\ell$. By the definition of~$i_n$, we have that $\sigma_{n-1}(i_n) \in i_{n-1} \mathcal{A}^*$. For $k \in K_\ell$, we thus obtain \begin{equation}\label{jell2} \sigma_{\ell-1}(j_\ell) = \sigma_{n_k+\ell-1}(j_\ell) = \sigma_{n_k+\ell-1}(i_{n_k+\ell}) \in i_{n_k+\ell-1} \mathcal{A}^* = j_{\ell-1} \mathcal{A}^*.
\end{equation} Let $P_\ell$ be the set of all $p \in \mathcal{A}^*$ such that $p\hspace{.1em}j_0$ is a prefix of $\sigma_{[0,\ell)}(j_\ell)$. Then, \eqref{jell2} implies that $P_0 \subset P_1 \subset \cdots$. Consider the lattice $L \subset \mathbb{Z}^d$ generated by $\bigcup_{\ell\in\mathbb{N}} \mathbf{l}(P_\ell)$. The set $\bigcup_{\ell\in\mathbb{N}} \mathbf{l}(P_\ell)$ contains arbitrarily large vectors. Therefore, if the lattice~$L$ does not have full rank, then the rational independence of the coordinates of~$\mathbf{u}$ (Lemma~\ref{l:independent}) implies that the maximal distance of elements of $\bigcup_{\ell\in\mathbb{N}} \mathbf{l}(P_\ell)$ from the line $\mathbb{R}\mathbf{u}$ is unbounded. Since $P_\ell \subset \mathcal{L}_{\boldsymbol{\sigma}}$, this contradicts the fact that $\mathcal{L}_{\boldsymbol{\sigma}}$ is balanced; cf.\ \eqref{lem41end} in the proof of Lemma~\ref{l:bounded}. Hence, there is $\ell \in \mathbb{N}$ such that $\mathbf{l}(P_\ell)$ contains a basis of~$\mathbb{R}^d$; we fix such an~$\ell$ in the following. If $p \in P_\ell$, i.e., if $p\hspace{.1em}j_0$ is a prefix of~$\sigma_{[0,\ell)}(j_\ell)$, then \eqref{sigmank} and~\eqref{jell1} imply that $p\hspace{.1em}j_0\, ({=}p\hspace{.1em}i_{n_k})$ is a prefix of~$\omega^{(n_k)}$ for all $k \in K_\ell$, thus ${\{n_k:\, k \in K_\ell\}} \subset N(p)$, which shows that $N(p)$ is infinite. Therefore we may apply \eqref{Npinfinite} to obtain that \[ \lim_{k\in K_\ell,\,k\to\infty} \pi_{\mathbf{u},\mathbf{1}} M_{[0,n_k)}\, \mathbf{l}(p) = \lim_{n\in N(p),\,n\to\infty} \pi_{\mathbf{u},\mathbf{1}} M_{[0,n)}\, \mathbf{l}(p) =\mathbf{0}. \] Since $\mathbf{l}(P_\ell)$ contains a basis of~$\mathbb{R}^d$, this yields that \begin{equation} \label{eq:st2} \lim_{k\in K_\ell,\,k\to\infty} \pi_{\mathbf{u},\mathbf{1}} M_{[0,n_k)}\, \mathbf{x} = \mathbf{0} \mbox{ for all } \mathbf{x} \in \mathbb{R}^d.
\end{equation} Let $h \in \mathbb{N}$ be such that $M_{[0,h)}$ is a positive matrix. Then there is a finite set $Q \subset \mathcal{A}^*$ such that, for each $i \in \mathcal{A}$, $q\hspace{.1em}j_0$ is a prefix of $\sigma_{[0,h)}(i)$ for some $q \in Q$. Thus, for all sufficiently large $k \in K_\ell$, \begin{itemize} \item[(i)] $\|\pi_{\mathbf{u},\mathbf{1}} M_{[0,n_k)}\, \mathbf{l}(p)\| < \varepsilon$ for all $p \in \mathcal{A}^*$ such that $p\hspace{.1em}j_0 = p\hspace{.1em}i_{n_k}$ is a prefix of~$\omega^{(n_k)}$, using~\eqref{uniform}, \item[(ii)] and $\|\pi_{\mathbf{u},\mathbf{1}} M_{[0,n_k)}\, \mathbf{l}(q)\| < \varepsilon$ for all $q \in Q$, using \eqref{eq:st2} and the fact that $Q$ is finite. \end{itemize} Finally, let $p$ be a prefix of~$\omega^{(n)}$, $n \ge n_k+h$. Choose $i \in \mathcal{A}$ such that $\sigma_{[n_k,n)}(p)\sigma_{[n_k,n_k+h)}(i) = \sigma_{[n_k,n)}(p)\sigma_{[0,h)}(i)$ is a prefix of~$\omega^{(n_k)}$. Then $\sigma_{[n_k,n)}(p)\hspace{.1em}q\hspace{.1em}j_0 = \sigma_{[n_k,n)}(p)\hspace{.1em}q\hspace{.1em}i_{n_k}$ is a prefix of~$\omega^{(n_k)}$ for some $q \in Q$. Therefore, by (ii) we have $\|\pi_{\mathbf{u},\mathbf{1}} M_{[0,n_k)}\, \mathbf{l}({q})\|< \varepsilon$, and (i) implies that \[ \|\pi_{\mathbf{u},\mathbf{1}} M_{[0,n)}\, \mathbf{l}(p) + \pi_{\mathbf{u},\mathbf{1}} M_{[0,n_k)}\, \mathbf{l}(q)\| = \|\pi_{\mathbf{u},\mathbf{1}} M_{[0,n_k)} \, \mathbf{l}(\sigma_{[n_k,n)}(p)\hspace{.1em}q)\| < \varepsilon, \] if $k \in K_\ell$ is sufficiently large. Combining these two inequalities yields that $\|\pi_{\mathbf{u},\mathbf{1}} M_{[0,n)}\, \mathbf{l}(p)\| < 2\varepsilon$ for all prefixes~$p$ of~$\omega^{(n)}$, if $n \in \mathbb{N}$ is sufficiently large. As $\varepsilon$ was chosen arbitrarily, this proves~\eqref{e:Lsn2} and thus the proposition. \end{proof} \begin{remark} The assumption of algebraic irreducibility cannot be omitted in Proposition~\ref{p:strongconvergence}.
E.g., taking the primitive substitution $\sigma_n(1) = 121$, $\sigma_n(2) = 212$ for all~$n$, we have $M_{[0,n)} = \begin{pmatrix}2&1\\1&2\end{pmatrix}^n$, $\mathbf{u} = \tr(1,1)$, thus $\pi_{\mathbf{u},\mathbf{1}} M_{[0,n)} \mathbf{l}(1) = \tr(1/2,-1/2)$ and $\pi_{\mathbf{u},\mathbf{1}} M_{[0,n)} \mathbf{l}(2) = \tr(-1/2,1/2)$ for all~$n$; the limit words are the periodic words $1212\cdots$ and $2121\cdots$, hence, $\mathcal{L}_{\boldsymbol{\sigma}}$ is clearly balanced. \end{remark} \section{Set equations for Rauzy fractals and the recurrent left eigenvector}\label{sec:dual} The classical Rauzy fractal associated with a unimodular Pisot substitution $\sigma$ can be defined in terms of the dual substitution $E_1^*(\sigma)$ given in~\eqref{eq:dualsubst}. This dual substitution acts on the discrete hyperplane~$\Gamma(\mathbf{v})$ of the stable hyperplane~$\mathbf{v}^\bot$ of~$\sigma$; cf.\ e.g.~\cite{Arnoux-Ito:01}. Carrying this over to a sequence~$\boldsymbol{\sigma}\in S^{\mathbb{N}}$ requires considering an infinite sequence of hyperplanes~$(\mathbf{w}^{(n)})^\bot$, where, for each $n \in \mathbb{N}$, the dual substitution $E_1^*(\sigma_n)$ of $\sigma_n$ maps $\Gamma(\mathbf{w}^{(n)})$ to~$\Gamma(\mathbf{w}^{(n+1)})$. In Section~\ref{sec:dualsubst}, we formalize these concepts and relate them to the Rauzy fractals defined in Section~\ref{subsec:FR}. We first define Rauzy fractals on any hyperplane~$\mathbf{w}^\bot$, $\mathbf{w} \in \mathbb{R}_{\ge0}^d \setminus \{\mathbf{0}\}$, in order to obtain set equations that reflect the combinatorial properties of $S$-adic words geometrically. In Section \ref{subsec:choicev}, we specify the vector~$\mathbf{w}$ by defining a ``recurrent left eigenvector''~$\mathbf{v}$. This vector will allow us to obtain an associated sequence of hyperplanes $(\mathbf{v}^{(n)})^\bot$ such that the Rauzy fractals defined on these hyperplanes converge w.r.t.\ the Hausdorff metric; see Proposition~\ref{p:close}. 
It is this convergence property that will later enable us to derive topological as well as tiling properties of our ``$S$-adic Rauzy fractals''. Throughout this section we assume that $S$ is a finite or infinite set of unimodular substitutions over the finite alphabet~$\mathcal{A}$ and $\boldsymbol{\sigma} \in S^{\mathbb{N}}$ is a directive sequence. \subsection{The dual substitution and set equations}\label{sec:dualsubst} We now give some properties of the dual substitution $E_1^*(\sigma)$ defined in \eqref{eq:dualsubst}. Let $\mathbf{u}$ be a generalized right eigenvector, $\mathbf{w} \in \mathbb{R}_{\ge0}^d \setminus \{\mathbf{0}\}$. To simplify notation, we use the abbreviations \begin{equation} \label{e:projabb} \pi_{\mathbf{u},\mathbf{w}}^{(n)} = \pi_{(M_{[0,n)})^{-1}\mathbf{u},\tr{(M_{[0,n)})}\,\mathbf{w}} \qquad (n \in \mathbb{N}). \end{equation} Note that $\pi_{\mathbf{u},\mathbf{w}}^{(0)} =\pi_{\mathbf{u},\mathbf{w}}$. Moreover, we set \[ \mathbf{w}^{(n)} = \tr(M_{[0,n)})\, \mathbf{w} \qquad (n \in \mathbb{N}). \] The dual substitution~$E_1^*(\sigma)$ can be extended to subsets of discrete hyperplanes in the obvious way. Moreover, by direct calculation, one obtains that $E_1^*(\sigma \tau) = E_1^*(\tau) E_1^*(\sigma)$; cf.~\cite{Arnoux-Ito:01}. The following lemma contains further relevant properties of~$E_1^*$. \begin{lemma} \label{l:e1star} Let $\boldsymbol{\sigma} = (\sigma_n)\in S^{\mathbb{N}}$ be a sequence of unimodular substitutions. Then for all $k < \ell$, we~have \renewcommand{\theenumi}{\roman{enumi}} \begin{enumerate} \itemsep1ex \item \label{62i} $M_{[k,\ell)}\, (\mathbf{w}^{(\ell)})^\bot = (\mathbf{w}^{(k)})^\bot$, \item \label{62ii} $E_1^*(\sigma_{[k,\ell)})\, \Gamma(\mathbf{w}^{(k)}) = \Gamma(\mathbf{w}^{(\ell)})$, \item \label{62iii} for distinct $[\mathbf{x},i], [\mathbf{x}',i'] \in \Gamma(\mathbf{w}^{(k)})$, the sets $E_1^*(\sigma_{[k,\ell)})[\mathbf{x},i]$ and $E_1^*(\sigma_{[k,\ell)})[\mathbf{x}',i']$ are disjoint. 
\end{enumerate} \end{lemma} \begin{proof} The first assertion follows directly from the fact that $\mathbf{w}^{(\ell)} = \tr{(M_{[k,\ell)})}\, \mathbf{w}^{(k)}$. By the same fact, the other assertions are special cases of \cite[Theorem~1]{Fernique:06}. \end{proof} We need the following auxiliary result on the projections~$\pi_{\mathbf{u},\mathbf{w}}^{(n)}$. \begin{lemma}\label{lem:projections} Let $\boldsymbol{\sigma} = (\sigma_n)\in S^{\mathbb{N}}$ be a sequence of unimodular substitutions. Then for all $n\in \mathbb{N}$, we~have \begin{equation*} \pi_{\mathbf{u},\mathbf{w}}^{(n)}\, M_n = M_n\, \pi_{\mathbf{u},\mathbf{w}}^{(n+1)}. \end{equation*} \end{lemma} \begin{proof} Consider the linear mapping $M_n^{-1} \pi_{\mathbf{u},\mathbf{w}}^{(n)}\, M_n$. This mapping is idempotent, its kernel is $M_n^{-1} \mathbb{R}\, (M_{[0,n)})^{-1} \mathbf{u} = \mathbb{R}\, (M_{[0,n+1)})^{-1} \mathbf{u}$, and by Lemma~\ref{l:e1star}~(\ref{62i}) its image is~$(\mathbf{w}^{(n+1)})^\bot$. Thus $M_n^{-1} \pi_{\mathbf{u},\mathbf{w}}^{(n)}\, M_n$ is the projection to~$(\mathbf{w}^{(n+1)})^\bot$ along $(M_{[0,n+1)})^{-1} \mathbf{u}$. \end{proof} The following lemma gives an alternative definition of~$\mathcal{R}(i)$. \begin{lemma}\label{ref:alterdef} Let $\boldsymbol{\sigma}=(\sigma_n)_{n\in\mathbb{N}}\in S^{\mathbb{N}}$ be a primitive, algebraically irreducible, and recurrent sequence of unimodular substitutions with balanced language~$\mathcal{L}_{\boldsymbol{\sigma}}$. For each $i \in \mathcal{A}$ we have \[ \mathcal{R}(i) = \lim_{n\to\infty} \pi_{\mathbf{u},\mathbf{1}} M_{[0,n)}\, E_1^*(\sigma_{[0,n)})[\mathbf{0},i], \] where each $[\mathbf{y},j] \in E_1^*(\sigma_{[0,n)})[\mathbf{0},i]$ is identified with its first component $\mathbf{y} \in \mathbb{Z}^d$ and the limit is taken with respect to the Hausdorff metric. 
\end{lemma} \begin{proof} By the definition of $E_1^*(\sigma_{[0,n)})$ in~\eqref{eq:dualsubst}, we have \[ \pi_{\mathbf{u},\mathbf{1}} M_{[0,n)}\, E_1^*(\sigma_{[0,n)})[\mathbf{0},i] = \{\pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(p):\, p \in \mathcal{A}^*,\ \mbox{$p\hspace{.1em}i$ is a prefix of $\sigma_{[0,n)}(j)$ for some $j \in \mathcal{A}$}\}. \] Thus, if $p\hspace{.1em}i$ is a prefix of a limit word, we have $\pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(p) \in \pi_{\mathbf{u},\mathbf{1}} M_{[0,n)}\, E_1^*(\sigma_{[0,n)})[\mathbf{0},i]$ for all sufficiently large~$n$, hence $\mathcal{R}(i) \subset \lim_{n\to\infty} \pi_{\mathbf{u},\mathbf{1}} M_{[0,n)}\, E_1^*(\sigma_{[0,n)})[\mathbf{0},i]$. On the other hand, choose a limit word~$\omega$. Then for each~$n$ and for each $j \in \mathcal{A}$, there is a prefix~$p$ of $\omega^{(n)}$ such that $\omega$ starts with $\sigma_{[0,n)}({p}\hspace{.1em}j)$. Since $\|\pi_{\mathbf{u},\mathbf{1}} \mathbf{l}(\sigma_{[0,n)}({p}))\|$ is small for large~$n$ by Proposition~\ref{p:strongconvergence}, we obtain that $\pi_{\mathbf{u},\mathbf{1}} M_{[0,n)}\, E_1^*(\sigma_{[0,n)})[\mathbf{0},i]$ is close to $\mathcal{R}(i)$ for large~$n$. \end{proof} We now associate with a directive sequence $\boldsymbol{\sigma} = (\sigma_n)$ a sequence of Rauzy fractals~$\mathcal{R}^{(n)}_\mathbf{w}$ obtained by taking projections of each ``desubstituted'' limit word~$\omega^{(n)}$ to $(\tr{(M_{[0,n)})}\, \mathbf{w})^\bot$ along the direction $(M_{[0,n)})^{-1} \mathbf{u}$, which is the generalized right eigenvector of the shifted sequence $(\sigma_{m+n})_{m\in\mathbb{N}}$.
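To see the sets $M_{[0,n)}\, E_1^*(\sigma_{[0,n)})[\mathbf{0},i]$ at work in a concrete (purely illustrative) example, consider the classical Tribonacci substitution $\sigma\colon 1 \mapsto 12$, $2 \mapsto 13$, $3 \mapsto 1$, whose incidence matrix \[ M_\sigma = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \] is unimodular. Every image word $\sigma(j)$ begins with the letter~$1$, while the letters $2$ and~$3$ occur only after the prefix $p = 1$ (in $\sigma(1)$ and~$\sigma(2)$, respectively). Hence, applying $M_\sigma^{-1}$ to the abelianized prefixes as in the above proof (with $n = 1$), we get \[ E_1^*(\sigma)[\mathbf{0},1] = \big\{[\mathbf{0},1], [\mathbf{0},2], [\mathbf{0},3]\big\}, \qquad E_1^*(\sigma)[\mathbf{0},2] = \big\{[\mathbf{e}_3,1]\big\}, \qquad E_1^*(\sigma)[\mathbf{0},3] = \big\{[\mathbf{e}_3,2]\big\}, \] where we have used that $M_\sigma^{-1}\, \mathbf{l}(1) = M_\sigma^{-1}\, \mathbf{e}_1 = \mathbf{e}_3$. Note that the five elements obtained in this way correspond bijectively to the five occurrences of letters in the words $\sigma(1)$, $\sigma(2)$, $\sigma(3)$.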
For $\mathbf{w} \in \mathbb{R}_{\ge0}^d \setminus \{\mathbf{0}\}$, let $\mathcal{R}^{(n)}_\mathbf{w} = \bigcup_{i\in\mathcal{A}} \mathcal{R}^{(n)}_\mathbf{w}(i)$ with \begin{equation} \label{e:defRn} \mathcal{R}^{(n)}_\mathbf{w}(i) = \overline{\{\pi_{\mathbf{u},\mathbf{w}}^{(n)}\, \mathbf{l}(p):\, p \in \mathcal{A}^*,\ \mbox{$p\hspace{.1em}i$ is a prefix of $\omega^{(n)}$},\ \mbox{$\sigma_{[0,n)}(\omega^{(n)})$ is a limit word of $\boldsymbol{\sigma}$}\}}. \end{equation} Note that $\mathcal{R}^{(0)}_\mathbf{w}(i) = \mathcal{R}_\mathbf{w}(i)$. With the above notation, $\mathcal{R}^{(n)}_\mathbf{w}$ lives on the hyperplane~$(\mathbf{w}^{(n)})^\bot$. Similarly to Lemma~\ref{l:bounded}, we can give explicit bounds for these subtiles. \begin{lemma} \label{l:convhull2} Let $\mathbf{w} \in \mathbb{R}_{\ge 0}^d \setminus \{\mathbf{0}\}$. If $\mathcal{L}_{\boldsymbol{\sigma}}^{(n)}$ is $C$-balanced, then $\mathcal{R}^{(n)}_\mathbf{w} \subset \pi_{\mathbf{u},\mathbf{w}}^{(n)} \big([-C,C]^d \cap \mathbf{1}^\bot\big)$. \end{lemma} \begin{proof} By Lemma~\ref{l:bounded}, we have $\pi_{(M_{[0,n)})^{-1}\mathbf{u},\mathbf{1}}\, \mathcal{R}^{(n)}_\mathbf{w} \subset [-C,C]^d \cap \mathbf{1}^\bot$. Projecting by~$\pi_{\mathbf{u},\mathbf{w}}^{(n)}$, we obtain the result. \end{proof} The following lemma shows that the Rauzy fractals~$\mathcal{R}^{(n)}_\mathbf{w}$, mapped back to the representation space~$\mathbf{w}^\bot$ via $M_{[0,n)}$, become arbitrarily small. \begin{lemma} \label{l:smallsubtiles} Let $\boldsymbol{\sigma} = (\sigma_n)\in S^{\mathbb{N}}$ be a primitive, algebraically irreducible, and recurrent sequence of unimodular substitutions with balanced language $\mathcal{L}_{\boldsymbol{\sigma}}$, and let $\mathbf{w} \in \mathbb{R}_{\ge 0}^d \setminus \{\mathbf{0}\}$. Then \[ \lim_{n\to\infty} M_{[0,n)} \mathcal{R}^{(n)}_\mathbf{w} = \{\mathbf{0}\}.
\] \end{lemma} \begin{proof} As $M_{[0,n)} \pi_{\mathbf{u},\mathbf{w}}^{(n)} = \pi_{\mathbf{u},\mathbf{w}}\, M_{[0,n)}$ by Lemma~\ref{lem:projections} and $\pi_{\mathbf{u},\mathbf{w}} = \pi_{\mathbf{u},\mathbf{w}}\, \pi_{\mathbf{u},\mathbf{1}}$, we have $M_{[0,n)} \pi_{\mathbf{u},\mathbf{w}}^{(n)}\, \mathbf{l}(p) = \pi_{\mathbf{u},\mathbf{w}}\, \pi_{\mathbf{u},\mathbf{1}}\, M_{[0,n)}\, \mathbf{l}({p})$ for all prefixes~$p$ of~$\omega^{(n)}$. Now, the result follows from Proposition~\ref{p:strongconvergence}. \end{proof} For the Rauzy fractals $\mathcal{R}^{(n)}_\mathbf{w}$, we obtain a hierarchy of set equations, which replaces the self-affine structure present in the periodic case. As $\mathcal{R}^{(n)}_\mathbf{w}$ lives on the hyperplane~$(\mathbf{w}^{(n)})^\bot$, the decomposition below involves Rauzy fractals living in different hyperplanes. \begin{proposition} \label{p:setequation} Let $\boldsymbol{\sigma} = (\sigma_n)\in S^{\mathbb{N}}$ be a sequence of unimodular substitutions with generalized right eigenvector~$\mathbf{u}$. Then for each $[\mathbf{x},i]\in\mathbb{Z}^d\times \mathcal{A}$ and all $k < \ell$, we have the set equation \begin{equation}\label{e:setequationkl} \pi_{\mathbf{u},\mathbf{w}}^{(k)}\, \mathbf{x}+ \mathcal{R}^{(k)}_\mathbf{w}(i) = \bigcup_{[\mathbf{y},j] \in E_1^*(\sigma_{[k,\ell)})[\mathbf{x},i]} M_{[k,\ell)}\big(\pi_{\mathbf{u},\mathbf{w}}^{(\ell)}\, \mathbf{y} + \mathcal{R}^{(\ell)}_\mathbf{w}(j)\big). \end{equation} \end{proposition} \begin{proof} Let $\omega$ be a limit word. Each prefix~$p$ of~$\omega^{(k)}$ has a unique decomposition $p = \sigma_{[k,\ell)}(\tilde{p})\, q$ with $\tilde{p}\hspace{.1em}j$ a prefix of~$\omega^{(\ell)}$ and $q$ a proper prefix of $\sigma_{[k,\ell)}(j)$. 
Since $\mathbf{l}(\sigma_{[k,\ell)}(\tilde{p})) = M_{[k,\ell)}\, \mathbf{l}(\tilde{p})$, Lemma~\ref{lem:projections} implies that $\pi_{\mathbf{u},\mathbf{w}}^{(k)}\, \mathbf{l}(p) = \pi_{\mathbf{u},\mathbf{w}}^{(k)}\, \mathbf{l}(q) + M_{[k,\ell)}\, \pi_{\mathbf{u},\mathbf{w}}^{(\ell)}\, \mathbf{l}(\tilde{p})$. We obtain that \[ \big\{ \pi_{\mathbf{u},\mathbf{w}}^{(k)}\, \mathbf{l}(p):\, \mbox{$p\hspace{.1em}i$ is a prefix of $\omega^{(k)}$}\big\} = \hspace{-1em} \bigcup_{\substack{q\in\mathcal{A}^*,\, j\in \mathcal{A}:\\[.2ex] \sigma_{[k,\ell)}(j)\in qi\mathcal{A}^*}} \hspace{-1em} \pi_{\mathbf{u},\mathbf{w}}^{(k)}\, \mathbf{l}(q) + M_{[k,\ell)}\, \big\{ \pi_{\mathbf{u},\mathbf{w}}^{(\ell)}\, \mathbf{l}(\tilde{p}):\, \mbox{$\tilde{p}\hspace{.1em}j$ is a prefix of $\omega^{(\ell)}$}\big\}. \] By the definition of $E_1^*(\sigma_{[k,\ell)})$, taking closures and translating by $\pi_{\mathbf{u},\mathbf{w}}^{(k)}\, \mathbf{x}$ yields the result. \end{proof} \subsection{Recurrent left eigenvector} \label{subsec:choicev} In the case of a single substitution~$\sigma$, choosing $\mathbf{w} = \mathbf{v}$, where $\tr{\mathbf{v}}$ is the Perron--Frobenius left eigenvector of~$M_\sigma$, the set equations give a graph-directed iterated function system for the subtiles $\mathcal{R}_\mathbf{v}(i)$; see~\cite{CANTBST}. For $\boldsymbol{\sigma} = (\sigma_n)\in S^{\mathbb{N}}$, the Rauzy fractals~$\mathcal{R}^{(n)}_\mathbf{w}$ are different from~$\mathcal{R}^{(0)}_\mathbf{w}$ and even live on different hyperplanes~$(\mathbf{w}^{(n)})^\bot$. Thus, in general \eqref{e:setequationkl} is an infinite system of set equations. Also, the construction of an analog of the left Perron--Frobenius eigenvector needs some work. In contrast to the cones $M_{[0,n)}\, \mathbb{R}_+^d$, there is no reason for the cones $\tr{(M_{[0,n)})}\, \mathbb{R}_+^d$ to be nested.
Therefore, the intersection of these cones does not define a generalized left eigenvector of~$\boldsymbol{\sigma}$ and cannot be used to get a stable space. However, for a suitable choice of~$\mathbf{v}$, we have a subsequence $(n_k)_{k\in\mathbb{N}}$ such that the directions of $\mathbf{v}^{(n_k)} = \tr{(M_{[0,n_k)})}\, \mathbf{v}$ tend to that of~$\mathbf{v}$; in this case $\mathbf{v}$ is called a \emph{recurrent left eigenvector}. Using the assumptions of Theorem~\ref{t:1}, we can even guarantee that $\mathcal{R}^{(n_k)}_\mathbf{v}$ converges to~$\mathcal{R}_\mathbf{v}$ in the Hausdorff metric for a suitable choice of $(n_k)$. The following lemma shows that, under the assumptions of primitivity and recurrence, one can easily exhibit recurrent left eigenvectors~$\mathbf{v}$. The precise statement involving a subsequence of a given sequence $(n_k)_{k\in\mathbb{N}}$ will be useful in the proof of Lemma~\ref{lem:th1star}. \begin{lemma} \label{l:findv} Let $\boldsymbol{\sigma} = (\sigma_n)\in S^{\mathbb{N}}$ be a primitive and recurrent sequence of substitutions and $(n_k)$ a strictly increasing sequence of non-negative integers. Then there is $\mathbf{v} \in \mathbb{R}_{\ge0}^d \setminus \{\mathbf{0}\}$ such that \begin{equation}\label{eq:recurrentcandidate} \lim_{k\in K,\,k\to\infty} \frac{\mathbf{v}^{(n_k)}}{\|\mathbf{v}^{(n_k)}\|} = \lim_{k\in K,\,k\to\infty}\frac{\tr{(M_{[0,n_k)})}\mathbf{v}}{\|\tr{(M_{[0,n_k)})}\mathbf{v}\|}= \mathbf{v} \end{equation} for some infinite set $K \subset \mathbb{N}$. Such a vector $\mathbf{v}$ is called a \emph{recurrent left eigenvector}. \end{lemma} \begin{proof} As $\boldsymbol{\sigma}$ is primitive, $M_{[0,h)}$ is a positive matrix for some $h \in \mathbb{N}$. By recurrence, we can find inductively an increasing sequence of integers $(\tilde{n}_j)_{j\in\mathbb{N}}$ with $\tilde{n}_0 = h$ and $M_{[0,\tilde{n}_j)}=M_{[\tilde{n}_{j+1}-\tilde{n}_j,\tilde{n}_{j+1})}$ for all $j \in \mathbb{N}$.
This allows us to define a sequence $(M_{-k})_{k\in\mathbb{N}}$ satisfying $M_{-k} = M_{\tilde{n}_j-k}$ for all $j$ such that $\tilde{n}_j \ge k$. As we have infinitely many indices $k > 0$ such that $M_{[-k,h-k)} = M_{[0,h)}$, the cones $\tr{(M_{[-k,0)})}\, \mathbb{R}^d_+$ converge to a single line as $k \to \infty$; see Section~\ref{sec:gener-perr-frob}. For $\tilde{n}_j \le n_k$, we have $\tr{(M_{[0,n_k)})}\, \mathbb{R}^d_{\ge 0}= \tr{(M_{[\tilde{n}_j,n_k)})}\, \tr{(M_{[-\tilde{n}_j,0)})}\, \mathbb{R}^d_{\ge 0}$. By \cite[Lemma~15.1]{Furstenberg:60}, this implies that the diameter of the cone $\tr{(M_{[0,n_k)})}\, \mathbb{R}^d_{\ge 0}$ is smaller than that of $\tr{(M_{[-\tilde{n}_j,0)})}\, \mathbb{R}^d_{\ge 0}$ in the projective Hilbert metric; see also~\cite{Birkhoff}. Hence, the diameter of $\tr{(M_{[0,n_k)})}\, \mathbb{R}_{\ge0}^d$ converges to zero. By the compactness of the projective space~$\mathbb{P}(\mathbb{R}^{d})$, we can now choose an infinite set $K \subset \mathbb{N}$ such that $\bigcap_{k\in K} \tr{(M_{[0,n_k)})}\, \mathbb{R}^d_{\ge 0} = \mathbb{R}_{\ge0} \mathbf{v}$ for some $\mathbf{v} \in \mathbb{R}_{\ge0}^d \setminus \{\mathbf{0}\}$. For this choice of $\mathbf{v}$, \eqref{eq:recurrentcandidate} obviously holds. \end{proof} In the sequel, we will work with directive sequences that satisfy a list of conditions gathered in the following Property PRICE (which stands for Primitivity, Recurrence, algebraic Irreducibility, $C$-balancedness, and recurrent left Eigenvector). By Lemma~\ref{lem:th1star} below, this property is a consequence of the assumptions of Theorem~\ref{t:1}. Nevertheless, we prefer referring to Property PRICE because we will frequently use the sequences $(n_k)$, $(\ell_k)$, and the recurrent left eigenvector~$\mathbf{v}$ involved in the definition. \begin{definition}[Property PRICE]\label{def:star} Assume that $S$ is a finite or infinite set of substitutions over the finite alphabet~$\mathcal{A}$.
We say that a directive sequence $\boldsymbol{\sigma} = (\sigma_n)\in S^{\mathbb{N}}$ has Property \emph{PRICE} w.r.t.\ the strictly increasing sequences $(n_k)_{k\in\mathbb{N}}$ and $(\ell_k)_{k\in\mathbb{N}}$ and a vector $\mathbf{v} \in \mathbb{R}_{\ge0}^d \setminus \{\mathbf{0}\}$ if the following conditions hold\footnote{Note that (P) does not merely mean that the matrix $B$ occurs infinitely often but that it has to occur at the end of the recurring blocks defined in (R). So (P) does not follow from (R).}. \begin{itemize} \labitem{(P)}{defP} There exist $h \in \mathbb{N}$ and a positive matrix~$B$ such that $M_{[\ell_k-h,\ell_k)} = B$ for all $k \in \mathbb{N}$. \labitem{(R)}{defR} We have $(\sigma_{n_k}, \sigma_{n_k+1}, \ldots,\sigma_{n_k+\ell_k-1}) = (\sigma_0, \sigma_1, \ldots,\sigma_{\ell_k-1})$ for all $k\in\mathbb{N}$. \labitem{(I)}{defI} The directive sequence~$\boldsymbol{\sigma}$ is algebraically irreducible. \labitem{(C)}{defC} There is $C > 0$ such that $\mathcal{L}_{\boldsymbol{\sigma}}^{(n_k+\ell_k)}$ is $C$-balanced for all $k\in\mathbb{N}$. \labitem{(E)}{defE} We have $\lim_{k\to\infty} \mathbf{v}^{(n_k)}/\|\mathbf{v}^{(n_k)}\|= \mathbf{v}$. \end{itemize} We also simply say that $\boldsymbol{\sigma}$ satisfies Property PRICE if the five conditions hold for some not explicitly specified strictly increasing sequences $(n_k)_{k\in\mathbb{N}}$ and $(\ell_k)_{k\in\mathbb{N}}$ and some $\mathbf{v} \in \mathbb{R}_{\ge0}^d \setminus \{\mathbf{0}\}$. \end{definition} Note that Properties~\ref{defP}, \ref{defR} and~\ref{defC} in Definition~\ref{def:star} imply that $\boldsymbol{\sigma}$ is a primitive and recurrent directive sequence with balanced language~$\mathcal{L}_{\boldsymbol{\sigma}}$, and \ref{defE} means that $\mathbf{v}$ is a recurrent left eigenvector. The conditions of the following lemma are (apart from unimodularity, which we do not need here) those of Theorem~\ref{t:1}.
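For instance, in the periodic case $\sigma_n = \sigma$ for all $n \in \mathbb{N}$, with $\sigma$ primitive, these conditions take a particularly simple form (we sketch this only as an illustration): \ref{defR} holds trivially for arbitrary strictly increasing sequences $(n_k)$ and $(\ell_k)$, and \ref{defP} holds with $B = M_\sigma^h$ for sufficiently large~$h$. Since every shifted sequence $(\sigma_{m+n})_{m\in\mathbb{N}}$ coincides with~$\boldsymbol{\sigma}$, condition \ref{defC} amounts to the $C$-balancedness of~$\mathcal{L}_{\boldsymbol{\sigma}}$ itself. Moreover, choosing $\tr{\mathbf{v}}$ to be the left Perron--Frobenius eigenvector of~$M_\sigma$ with dominant eigenvalue~$\theta$, condition \ref{defE} holds exactly because \[ \mathbf{v}^{(n_k)} = \tr{(M_\sigma^{n_k})}\, \mathbf{v} = \theta^{n_k}\, \mathbf{v}. \] Thus, in the periodic case, Property PRICE essentially reduces to algebraic irreducibility~\ref{defI} together with balancedness.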
\begin{lemma} \label{lem:th1star} Let $\boldsymbol{\sigma} = (\sigma_n)\in S^{\mathbb{N}}$ be a primitive and algebraically irreducible sequence of substitutions over the finite alphabet~$\mathcal{A}$. Assume that there is $C > 0$ such that for each $\ell \in \mathbb{N}$, there is $n \ge 1$ with $(\sigma_n, \ldots, \sigma_{n+\ell-1}) = (\sigma_0, \ldots, \sigma_{\ell-1})$ and $\mathcal{L}_{\boldsymbol{\sigma}}^{(n+\ell)}$ is $C$-balanced. Then Property PRICE holds. \end{lemma} \begin{proof} First observe that~\ref{defI} holds by assumption. By primitivity of~$\boldsymbol{\sigma}$, we can choose $\ell_0$ and~$h$ such that $M_{[\ell_0-h,\ell_0)}$ is positive. As the assumptions of the lemma imply that $\boldsymbol{\sigma}$ is recurrent, there exists a strictly increasing sequence $(\ell_k)$ of non-negative integers such that \ref{defP} holds. By assumption, there is an associated sequence $(n_k)$ of non-negative integers such that \ref{defR} and \ref{defC} hold. In view of Lemma~\ref{l:findv}, we can choose appropriate subsequences of $(\ell_k)$ and $(n_k)$, again called $(\ell_k)$ and~$(n_k)$, such that \ref{defE} holds. As taking subsequences does not affect \ref{defP}, \ref{defR}, \ref{defI}, and~\ref{defC}, this proves the lemma. \end{proof} We will use the following simple observation. \begin{lemma} \label{l:shiftedPRICE} Assume that the directive sequence $\boldsymbol{\sigma} = (\sigma_n)_{n\in\mathbb{N}}\in S^{\mathbb{N}}$ has Property PRICE w.r.t.\ the sequences $(n_k)_{k\in\mathbb{N}}$ and $(\ell_k)_{k\in\mathbb{N}}$ and the vector~$\mathbf{v}$. Then for each $h \in \mathbb{N}$ there is $k_0 \in \mathbb{N}$ such that the shifted sequence $(\sigma_{n+h})_{n\in\mathbb{N}}$ has Property PRICE w.r.t.\ the sequences $(n_{k+k_0})_{k\in\mathbb{N}}$ and $(\ell_{k+k_0}{-}h)_{k\in\mathbb{N}}$, and the vector~$\mathbf{v}^{(h)}$. \end{lemma} Property PRICE implies the following uniform convergence result for the projections~$\pi_{\mathbf{u},\mathbf{v}}^{(n_k)}$.
\begin{lemma}\label{lem:projectionconvergence} Assume that the directive sequence $\boldsymbol{\sigma}\in S^{\mathbb{N}}$ has Property PRICE w.r.t.\ the sequences $(n_k)$ and $(\ell_k)$ and the vector~$\mathbf{v}$. Then \[ \lim_{k\to\infty} \max \big\{\|\pi_{\mathbf{u},\mathbf{v}}^{(n_k)}\, \mathbf{x} - \pi_{\mathbf{u},\mathbf{v}}\, \mathbf{x}\|:\, \|\mathbf{x}\| \le \max _{i\in \mathcal{A}} \|M_{[0,\ell_k)} \mathbf{e}_i\|,\, \|\pi_{\mathbf{u},\mathbf{v}}\, \mathbf{x}\| \le 1\big\} = 0. \] In particular, $\pi_{\mathbf{u},\mathbf{v}}^{(n_k)} \to \pi_{\mathbf{u},\mathbf{v}}$ as $k\to\infty$ in the compact-open topology. \end{lemma} \begin{proof} Since \ref{defP}, \ref{defR}, \ref{defI}, and \ref{defC} hold, we obtain from Proposition~\ref{p:strongconvergence} that $\|\pi_{\mathbf{u},\mathbf{1}} M_{[0,\ell_k)} \mathbf{e}_i\| \to 0$ for each $i \in \mathcal{A}$ when $k\to\infty$. Since $\pi_{\mathbf{u},\mathbf{v}} = \pi_{\mathbf{u},\mathbf{v}} \pi_{\mathbf{u},\mathbf{1}}$, this implies that $\|\pi_{\mathbf{u},\mathbf{v}} M_{[0,\ell_k)} \mathbf{e}_i\| \to 0$. As $M_{[\ell_k-h,\ell_k)} = B$ is a positive matrix (that does not depend on~$k$), there is $c > 0$ such that $\max_{i\in \mathcal{A}} \|M_{[0,\ell_k)} \mathbf{e}_i\| \le c\, \min_{i\in\mathcal{A}} \|M_{[0,\ell_k)} \mathbf{e}_i\|$ for all $k \in \mathbb{N}$. Thus the cone $M_{[0,\ell_k)} \mathbb{R}_+^d$ has small diameter at ``height'' $\max_{i\in \mathcal{A}} \|M_{[0,\ell_k)}\mathbf{e}_i\|$, hence, $\pi_{\tilde{\mathbf{u}}, \mathbf{v}}\, \mathbf{x}$ is close to $\pi_{\mathbf{u},\mathbf{v}}\, \mathbf{x}$ for all $\tilde{\mathbf{u}} \in M_{[0,\ell_k)}\mathbb{R}_+^d$ and $\mathbf{x}$ in the cylinder $\|\mathbf{x}\| \le \max _{i\in \mathcal{A}} \|M_{[0,\ell_k)} \mathbf{e}_i\|$, $\|\pi_{\mathbf{u},\mathbf{v}}\, \mathbf{x}\| \le 1$.
More precisely, \[ \pi_{\mathbf{u},\mathbf{v}}\, \mathbf{x} - \pi_{\tilde{\mathbf{u}},\mathbf{v}}\, \mathbf{x} = \pi_{\mathbf{u},\mathbf{v}}(\mathbf{x} - \pi_{\tilde{\mathbf{u}},\mathbf{v}}\, \mathbf{x}) = \pi_{\mathbf{u},\mathbf{v}}\bigg( \frac{\|\mathbf{x} - \pi_{\tilde{\mathbf{u}},\mathbf{v}}\, \mathbf{x}\|}{\|\tilde{\mathbf{u}}\|}\, \tilde{\mathbf{u}} \bigg) = \frac{\|\mathbf{x} - \pi_{\tilde{\mathbf{u}},\mathbf{v}}\, \mathbf{x}\|}{\|\tilde{\mathbf{u}}\|}\, \pi_{\mathbf{u},\mathbf{v}}\, \tilde{\mathbf{u}} \] gives that \[ \|\pi_{\mathbf{u},\mathbf{v}}\, \mathbf{x} - \pi_{\tilde{\mathbf{u}},\mathbf{v}}\, \mathbf{x}\| \le \frac{\|\mathbf{x} - \pi_{\mathbf{u},\mathbf{v}}\, \mathbf{x}\| + \|\pi_{\mathbf{u},\mathbf{v}}\, \mathbf{x} - \pi_{\tilde{\mathbf{u}},\mathbf{v}}\, \mathbf{x}\|}{\|\tilde{\mathbf{u}}\|}\, \|\pi_{\mathbf{u},\mathbf{v}}\, \tilde{\mathbf{u}}\| \] and, hence, \[ \big\|\pi_{\mathbf{u},\mathbf{v}}\, \mathbf{x} - \pi_{\tilde{\mathbf{u}},\mathbf{v}}\, \mathbf{x}\big\| \le \frac{\|\mathbf{x} - \pi_{\mathbf{u},\mathbf{v}}\, \mathbf{x}\|}{\|\tilde{\mathbf{u}}\| - \|\pi_{\mathbf{u},\mathbf{v}}\, \tilde{\mathbf{u}}\|}\, \|\pi_{\mathbf{u},\mathbf{v}}\, \tilde{\mathbf{u}}\|. \] Thus we obtain for $\tilde{\mathbf{u}}$ and $\mathbf{x}$ with the above properties that \begin{align*} \big\|\pi_{\mathbf{u},\mathbf{v}}\, \mathbf{x} - \pi_{\tilde{\mathbf{u}},\mathbf{v}}\, \mathbf{x}\big\| & \le \frac{\max _{i\in \mathcal{A}} \|M_{[0,\ell_k)} \mathbf{e}_i\|+1}{\min _{i\in \mathcal{A}} \|M_{[0,\ell_k)} \mathbf{e}_i\|-\max _{i\in \mathcal{A}} \|\pi_{\mathbf{u},\mathbf{v}} M_{[0,\ell_k)} \mathbf{e}_i\|}\, \max _{i\in \mathcal{A}} \|\pi_{\mathbf{u},\mathbf{v}} M_{[0,\ell_k)} \mathbf{e}_i\| \\ & \le 2c\, \max _{i\in \mathcal{A}} \|\pi_{\mathbf{u},\mathbf{v}} M_{[0,\ell_k)} \mathbf{e}_i\| < \varepsilon, \end{align*} for sufficiently large~$k$.
Moreover, the facts that $\lim_{k\to\infty} \mathbf{v}^{(n_k)}/\|\mathbf{v}^{(n_k)}\|= \mathbf{v}$, that $\|\pi_{\tilde{\mathbf{u}},\mathbf{v}}\, \mathbf{x}\|$ is bounded (by $1+\varepsilon$), and that $\langle \tilde{\mathbf{u}}, \mathbf{v} \rangle$ is bounded away from~$0$, yield that \[ \big\|\pi_{\tilde{\mathbf{u}},\mathbf{v}^{(n_k)}}\, \mathbf{x} - \pi_{\tilde{\mathbf{u}},\mathbf{v}}\, \mathbf{x}\big\| < \varepsilon \] for sufficiently large~$k$, thus $\|\pi_{\tilde{\mathbf{u}},\mathbf{v}^{(n_k)}}\, \mathbf{x} - \pi_{\mathbf{u},\mathbf{v}}\, \mathbf{x}\| < 2 \varepsilon$. We can choose $\tilde{\mathbf{u}} = (M_{[0,n_k)})^{-1} \mathbf{u}$ because the recurrence assertion~\ref{defR} gives $(M_{[0,n_k)})^{-1} \mathbf{u} \in M_{[0,\ell_k)} \mathbb{R}_+^d$. As $\pi_{\mathbf{u},\mathbf{v}}^{(n_k)} = \pi_{(M_{[0,n_k)})^{-1} \mathbf{u},\mathbf{v}^{(n_k)}}$, this proves the lemma. \end{proof} We are now able to prove the following convergence result for Rauzy fractals. \begin{proposition} \label{p:close} Let $S$ be a finite or infinite set of unimodular substitutions over a finite alphabet $\mathcal{A}$. Assume that the sequence $\boldsymbol{\sigma} = (\sigma_n)\in S^{\mathbb{N}}$ of unimodular substitutions has Property PRICE w.r.t.\ the sequences $(n_k)$ and $(\ell_k)$ and the vector~$\mathbf{v}$. Then, for each $i \in \mathcal{A}$ and each $\ell\in\mathbb{N}$, \begin{equation} \label{e:tilesconvergence} \lim_{k\to\infty} \mathcal{R}^{(n_k+\ell)}_\mathbf{v}(i) = \mathcal{R}^{(\ell)}_\mathbf{v}(i), \end{equation} where the limit is taken w.r.t.\ the Hausdorff metric. \end{proposition} \begin{proof} We first prove the result for $\ell=0$. 
For each $\varepsilon > 0$ and each sufficiently large $k \in \mathbb{N}$, the following inequalities hold: \renewcommand{\theenumi}{\roman{enumi}} \begin{enumerate} \itemsep1ex \item \label{i:close1} $\mathrm{diam} \big(M_{[0,\ell_k)} \mathcal{R}^{(\ell_k)}_\mathbf{v}(j)\big) < \varepsilon$ for each $j \in \mathcal{A}$, \item \label{i:close2} $\mathrm{diam} \big(M_{[0,\ell_k)} \mathcal{R}^{(n_k+\ell_k)}_\mathbf{v}(j)\big) < \varepsilon$ for each $j \in \mathcal{A}$, \item \label{i:close3} $\|\pi_{\mathbf{u},\mathbf{v}}^{(n_k)}\, M_{[0,\ell_k)}\, \mathbf{x} - \pi_{\mathbf{u},\mathbf{v}}\, M_{[0,\ell_k)}\, \mathbf{x}\| < \varepsilon$ for each $[\mathbf{x},j] \in E_1^*(\sigma_{[0,\ell_k)})[\mathbf{0},i]$. \end{enumerate} Inequality~(\ref{i:close1}) follows from Lemma~\ref{l:smallsubtiles}. To prove~(\ref{i:close2}), note first that, as $\mathcal{L}_{\boldsymbol{\sigma}}^{(n_k+\ell_k)}$ is $C$-balanced, \[ M_{[0,\ell_k)} \mathcal{R}^{(n_k+\ell_k)}_\mathbf{v}(j) \subset M_{[0,\ell_k)}\, \pi_{\mathbf{u},\mathbf{v}}^{(n_k+\ell_k)} \big([-C,C]^d \cap \mathbf{1}^\bot\big) = \pi_{\mathbf{u},\mathbf{v}}^{(n_k)} M_{[0,\ell_k)} \big([-C,C]^d \cap \mathbf{1}^\bot\big) \] by Lemmas~\ref{lem:projections} and~\ref{l:convhull2}. For $\mathbf{y} \in M_{[0,\ell_k)}\, [-C,C]^d$ with sufficiently large~$k$, we have $\|\pi_{\mathbf{u},\mathbf{v}}\, \mathbf{y}\| < \varepsilon/2$ by Proposition~\ref{p:strongconvergence} and $\|\pi_{\mathbf{u},\mathbf{v}}^{(n_k)}\, \mathbf{y} - \pi_{\mathbf{u},\mathbf{v}}\, \mathbf{y}\| < \varepsilon/2$ by Lemma~\ref{lem:projectionconvergence}, where we have used that $\|\mathbf{y}\| \le C \sum_{j\in\mathcal{A}} \|M_{[0,\ell_k)}\, \mathbf{e}_j\|$. This implies that $\|\pi_{\mathbf{u},\mathbf{v}}^{(n_k)}\, \mathbf{y}\| < \varepsilon$, and (\ref{i:close2}) follows. 
Finally, (\ref{i:close3}) is a consequence of Lemma~\ref{lem:projectionconvergence} because the definition of $E_1^*$ in~(\ref{eq:dualsubst}) yields for $[\mathbf{x},j] \in E_1^*(\sigma_{[0,\ell_k)})[\mathbf{0},i]$ that $M_{[0,\ell_k)}\, \mathbf{x} = \mathbf{l}(p)$ for some prefix~$p$ of $\sigma_{[0,\ell_k)}(j)$, $j \in \mathcal{A}$, hence $\|M_{[0,\ell_k)}\, \mathbf{x}\| \le \max_{j\in\mathcal{A}} \|M_{[0,\ell_k)}\, \mathbf{e}_j\|$ and $\|\pi_{\mathbf{u},\mathbf{v}}\, M_{[0,\ell_k)}\, \mathbf{x}\|$ is bounded by the balancedness of~$\mathcal{L}_{\boldsymbol{\sigma}}$. By~\eqref{e:setequationkl}, we have \[ \mathcal{R}_\mathbf{v}(i) = \bigcup_{[\mathbf{x},j] \in E_1^*(\sigma_{[0,\ell_k)})[\mathbf{0},i]} \big(\pi_{\mathbf{u},\mathbf{v}} M_{[0,\ell_k)}\, \mathbf{x} + M_{[0,\ell_k)} \mathcal{R}^{(\ell_k)}_\mathbf{v}(j)\big) \] and \[ \mathcal{R}^{(n_k)}_\mathbf{v}(i) = \bigcup_{[\mathbf{x},j] \in E_1^*(\sigma_{[n_k,n_k+\ell_k)})[\mathbf{0},i]} \big(\pi_{\mathbf{u},\mathbf{v}}^{(n_k)}\, M_{[n_k,n_k+\ell_k)}\, \mathbf{x} + M_{[n_k,n_k+\ell_k)} \mathcal{R}^{(n_k+\ell_k)}_\mathbf{v}(j)\big). \] As $\sigma_{[n_k,n_k+\ell_k)} = \sigma_{[0,\ell_k)}$ and $M_{[n_k,n_k+\ell_k)} = M_{[0,\ell_k)}$, the result for the case $\ell=0$ now follows from (\ref{i:close1})--(\ref{i:close3}) by an obvious application of the triangle inequality. The case of $\ell > 0$ is equivalent to proving that $\lim_{k\to\infty} \mathcal{R}^{(n_k)}_{\mathbf{v}^{(\ell)}}(i) = \mathcal{R}_{\mathbf{v}^{(\ell)}}(i)$ for the Rauzy fractals defined by the shifted sequence $(\sigma_{n+\ell})_{n\in\mathbb{N}}$. It is thus an immediate consequence of Lemma~\ref{l:shiftedPRICE}. \end{proof} \section{Some properties of Rauzy fractals} \label{sec:results} In this section, we introduce the collections~$\mathcal{C}^{(n)}_\mathbf{w}$ of translates of $\mathcal{R}_\mathbf{w}^{(n)}(i)$, $i \in \mathcal{A}$, and prove their covering properties. 
Moreover, we show that under certain conditions the set~$\mathcal{R}(i)$ is the closure of its interior and $\partial\mathcal{R}(i)$ has measure zero for each $i\in \mathcal{A}$; the proof of the latter property is the main task of this section. In the substitutive case, the proofs of the analogous results are based on the graph-directed iterated function system satisfied by the subtiles of the Rauzy fractal; see e.g.~\cite{CANTBST}. Since we do not have a graph-directed structure in our case, we rely on the infinite family of set equations in~\eqref{e:setequationkl}. Again, throughout this section we assume that $S$ is a finite or infinite set of unimodular substitutions over the finite alphabet~$\mathcal{A}$ and $\boldsymbol{\sigma} \in S^{\mathbb{N}}$ is a directive sequence. \subsection{Covering properties} For $\mathbf{w} \in \mathbb{R}^d_{\ge 0}\setminus\{\mathbf{0}\}$ and $n \in \mathbb{N}$, define the collection of tiles in $(\mathbf{w}^{(n)})^\bot$ \[ \mathcal{C}^{(n)}_\mathbf{w} = \{\pi_{\mathbf{u},\mathbf{w}}^{(n)}\, \mathbf{x} + \mathcal{R}^{(n)}_\mathbf{w}(i):\, [\mathbf{x},i] \in \Gamma(\mathbf{w}^{(n)})\}\,, \] where $\mathcal{R}^{(n)}_\mathbf{w}(i)$ are the Rauzy fractals defined in~\eqref{e:defRn} and $\pi_{\mathbf{u},\mathbf{w}}^{(n)}$ is as in~\eqref{e:projabb}. Note that $\mathcal{C}^{(0)}_\mathbf{w} = \mathcal{C}_\mathbf{w}$. The following simple lemma will be used frequently in the sequel. \begin{lemma} \label{l:covering} Let $\boldsymbol{\sigma} = (\sigma_n)\in S^{\mathbb{N}}$ be a sequence of unimodular substitutions with generalized right eigenvector~$\mathbf{u}$, $\mathbf{w} \in \mathbb{R}_{\ge0}^d \setminus \{\mathbf{0}\}$, and $k < \ell$. If $\mathbf{z} \in (\mathbf{w}^{(k)})^\bot$ lies in $m$ distinct tiles of~$\mathcal{C}^{(k)}_\mathbf{w}$, then $(M_{[k,\ell)})^{-1} \mathbf{z}$ lies in at least~$m$ distinct tiles of~$\mathcal{C}^{(\ell)}_\mathbf{w}$. 
If moreover there are distinct $[\mathbf{y},j], [\mathbf{y}',j'] \in E_1^*(\sigma_{[k,\ell)})[\mathbf{x},i]$, with $[\mathbf{x},i] \in \Gamma(\mathbf{w}^{(k)})$, such that $(M_{[k,\ell)})^{-1} \mathbf{z} \in {\big(\pi_{\mathbf{u},\mathbf{w}}^{(\ell)}\, \mathbf{y} + \mathcal{R}^{(\ell)}_\mathbf{w}(j)\big)} \cap {\big(\pi_{\mathbf{u},\mathbf{w}}^{(\ell)}\, \mathbf{y}' + \mathcal{R}^{(\ell)}_\mathbf{w}(j')\big)}$, then $(M_{[k,\ell)})^{-1} \mathbf{z}$ lies in at least $m+1$ distinct tiles of~$\mathcal{C}^{(\ell)}_\mathbf{w}$. \end{lemma} \begin{proof} This is an immediate consequence of the set equations~\eqref{e:setequationkl}, the fact that $E_1^*(\sigma_{[k,\ell)})[\mathbf{x},i] \subset \Gamma(\mathbf{w}^{(\ell)})$ for $[\mathbf{x},i] \in \Gamma(\mathbf{w}^{(k)})$ by Lemma~\ref{l:e1star}~(\ref{62ii}) and that $E_1^*(\sigma_{[k,\ell)})[\mathbf{x},i] \cap E_1^*(\sigma_{[k,\ell)})[\mathbf{x}',i'] = \emptyset$ for distinct $[\mathbf{x},i], [\mathbf{x}',i'] \in \Gamma(\mathbf{w}^{(k)})$ by Lemma~\ref{l:e1star}~(\ref{62iii}). \end{proof} In particular, Lemma~\ref{l:covering} implies that the covering degree of~$\mathcal{C}^{(n)}_\mathbf{w}$ is less than or equal to that of~$\mathcal{C}^{(n+1)}_\mathbf{w}$, where the \emph{covering degree} of a collection of sets~$\mathcal{K}$ in a Euclidean space~$\mathcal{E}$ is the maximal number~$m$ such that each point of~$\mathcal{E}$ lies in at least $m$ distinct elements of~$\mathcal{K}$. (For locally finite multiple tilings, this agrees with the definition of the covering degree in Section~\ref{sec:multiple-tilings}.) \begin{proposition} \label{p:covering} Let $S$ be a finite or infinite set of unimodular substitutions over a finite alphabet and let $\boldsymbol{\sigma} = (\sigma_n)_{n\in\mathbb{N}}\in S^{\mathbb{N}}$ be a primitive, algebraically irreducible, and recurrent directive sequence with balanced language~$\mathcal{L}_{\boldsymbol{\sigma}}$. 
Then for each $n \in \mathbb{N}$ and $\mathbf{w} \in \mathbb{R}_{\ge0}^d \setminus \{\mathbf{0}\}$, the collection of tiles~$\mathcal{C}^{(n)}_\mathbf{w}$ covers~$(\mathbf{w}^{(n)})^\bot$ with finite covering degree. For fixed~$\mathbf{w}$, the covering degree of~$\mathcal{C}^{(n)}_\mathbf{w}$ increases monotonically with~$n$. \end{proposition} \begin{proof} By the set equations~\eqref{e:setequationkl} and Lemma~\ref{l:e1star}~(\ref{62ii}), we have \begin{equation} \label{e:Cv} \bigcup_{\mathcal{T}\in\mathcal{C}_\mathbf{w}} \mathcal{T} = \bigcup_{[\mathbf{x},i] \in \Gamma(\mathbf{w})} \big(\pi_{\mathbf{u},\mathbf{w}}\, \mathbf{x} + \mathcal{R}_\mathbf{w}(i)\big) = \bigcup_{[\mathbf{x},i] \in \Gamma(\mathbf{w}^{(n)})} M_{[0,n)}\, \big(\pi_{\mathbf{u},\mathbf{w}}^{(n)}\, \mathbf{x} + \mathcal{R}^{(n)}_\mathbf{w}(i)\big) \end{equation} for each $n \in \mathbb{N}$. Moreover, $\mathbf{w}^{(n)} = \tr{(M_{[0,n)})}\, \mathbf{w}$ and $M_{[0,n)}\, \mathbb{Z}^d = \mathbb{Z}^d$ (by unimodularity) imply that \begin{align*} \{M_{[0,n)}\, \pi_{\mathbf{u},\mathbf{w}}^{(n)}\, \mathbf{x}:\, [\mathbf{x},i] \in \Gamma(\mathbf{w}^{(n)})\} & = \{\pi_{\mathbf{u},\mathbf{w}}\, M_{[0,n)}\, \mathbf{x}:\, \mathbf{x} \in \mathbb{Z}^d,\, 0 \le \langle \mathbf{w}^{(n)}, \mathbf{x} \rangle < \max_{i\in\mathcal{A}} \langle \mathbf{w}^{(n)}, \mathbf{e}_i \rangle\} \\ & = \{\pi_{\mathbf{u},\mathbf{w}}\, \mathbf{y}:\, \mathbf{y} \in \mathbb{Z}^d,\, 0 \le \langle \mathbf{w}, \mathbf{y} \rangle < \max_{i\in\mathcal{A}} \langle \mathbf{w}, M_{[0,n)}\, \mathbf{e}_i \rangle\}. \end{align*} As $\mathbf{u}$ has rationally independent coordinates by Lemma~\ref{l:independent}, the set $\{\pi_{\mathbf{u},\mathbf{w}}\, \mathbf{y}:\, \mathbf{y} \in \mathbb{Z}^d,\, 0 \le \langle \mathbf{w}, \mathbf{y} \rangle\}$ is dense in $\mathbf{w}^\bot$. 
Observing that $\lim_{n\to\infty} \max_{i\in\mathcal{A}} \langle \mathbf{w}, M_{[0,n)}\, \mathbf{e}_i \rangle = \infty$ by the primitivity of $(\sigma_n)_{n\in\mathbb{N}}$, we obtain that \[ \lim_{n\to\infty} \{M_{[0,n)}\, \pi_{\mathbf{u},\mathbf{w}}^{(n)}\, \mathbf{x}:\, [\mathbf{x},i] \in \Gamma(\mathbf{w}^{(n)})\} = \overline{\{\pi_{\mathbf{u},\mathbf{w}}\, \mathbf{y}:\, \mathbf{y} \in \mathbb{Z}^d,\, 0 \le \langle \mathbf{w}, \mathbf{y} \rangle\}} = \mathbf{w}^\bot, \] where the limit is taken with respect to the Hausdorff metric. Since $\lim_{n\to\infty} M_{[0,n)} \mathcal{R}^{(n)}_\mathbf{w}(i) = \{\mathbf{0}\}$ by Lemma~\ref{l:smallsubtiles}, this, together with \eqref{e:Cv}, implies that $\overline{\bigcup_{\mathcal{T}\in\mathcal{C}_\mathbf{w}} \mathcal{T}} = \mathbf{w}^\bot$. As $\mathcal{C}_\mathbf{w}$ is a locally finite collection of compact sets, this proves that $\mathcal{C}_\mathbf{w}$ covers~$\mathbf{w}^\bot$ and, hence, $\mathcal{C}^{(n)}_\mathbf{w}$ covers~$(\mathbf{w}^{(n)})^\bot$. As $\pi_{\mathbf{u},\mathbf{w}}\, \Gamma(\mathbf{w})$ is uniformly discrete in~$\mathbf{w}^\bot$ and the elements of~$\mathcal{C}_\mathbf{w}$ are translates of the subtiles~$\mathcal{R}_\mathbf{w}(i)$, which are compact by Lemma~\ref{l:bounded}, $\mathcal{C}^{(0)}_\mathbf{w}$~has finite covering degree. By Lemma~\ref{l:covering}, the covering degree of~$\mathcal{C}^{(n)}_\mathbf{w}$ is a monotonically increasing function in~$n$. By the set equations~\eqref{e:setequationkl}, Lemma~\ref{l:e1star}~(\ref{62ii}) and the definition of~$E_1^*$ in~\eqref{eq:dualsubst}, we also see that the covering degree of~$\mathcal{C}^{(n+1)}_\mathbf{w}$ is bounded by $\max_{i\in\mathcal{A}} \sum_{j\in\mathcal{A}} |\sigma_n(j)|_i$ times the covering degree of~$\mathcal{C}^{(n)}_\mathbf{w}$. \end{proof} We also need the following result about locally finite compact coverings (its proof is easy).
\begin{lemma}\label{lem:intcovering} Let $\mathcal{K}$ be a locally finite covering of~$\mathbb{R}^k$ by compact sets. If $\mathcal{K}$ has covering degree~$m$ and $\mathbf{z} \in \mathbb{R}^k$ is contained in exactly $m$ elements of~$\mathcal{K}$, then $\mathbf{z}$ is contained in the interior of each of these $m$ elements. \end{lemma} \subsection{Interior of Rauzy fractals} We are now in a position to show that the Rauzy fractals are the closure of their interior. \begin{proposition} \label{p:closint} Let $S$ be a finite or infinite set of unimodular substitutions over a finite alphabet $\mathcal{A}$ and let $\boldsymbol{\sigma}\in S^{\mathbb{N}}$ be a primitive, algebraically irreducible, and recurrent directive sequence with balanced language~$\mathcal{L}_{\boldsymbol{\sigma}}$. Then each~$\mathcal{R}(i)$, $i \in \mathcal{A}$, is the closure of its interior. \end{proposition} \begin{proof} By Proposition~\ref{p:covering} and Baire's theorem, for each $n \in \mathbb{N}$, we have $\mathrm{int}(\mathcal{R}^{(n)}(i)) \ne \emptyset$ for some $i \in \mathcal{A}$. By the set equation in \eqref{e:setequationkl} and primitivity, we get that $\mathrm{int}(\mathcal{R}^{(n)}(i)) \ne \emptyset$ for all $i \in \mathcal{A}$, $n \in \mathbb{N}$. Therefore, again the set equation~\eqref{e:setequationkl} yields subdivisions of $\mathcal{R}_\mathbf{w}(i)$, $i \in \mathcal{A}$, into tiles with non-empty interior whose diameters tend to~$0$ by Lemma~\ref{l:smallsubtiles}. This proves the result. \end{proof} \subsection{Boundary of Rauzy fractals} Our next task is to show that the boundary of~$\mathcal{R}(i)$ has zero measure for each $i \in \mathcal{A}$. The proof of this result is quite technical and requires several preparatory lemmas. First, we show that each ``patch'' of $\Gamma(\mathbf{w})$ occurs relatively densely in each discrete hyperplane $\Gamma(\tilde{\mathbf{w}})$ with $\tilde{\mathbf{w}}$ sufficiently close to~$\mathbf{w}$. 
Note that the following statement is a crucial step and requires new ideas compared with the substitutive case, since we lose here the possibility of using a classical Perron--Frobenius argument (see e.g.~\cite{ST09,CANTBST}). \begin{lemma}\label{lemma0b}\label{l:relativelydense} Let $r > 0$, $\mathbf{w} \in \mathbb{R}_{\ge0}^d\setminus\{\mathbf{0}\}$, and define the patch \[ P = \{[\mathbf{x},i] \in \Gamma(\mathbf{w}):\, \|\mathbf{x}\|\le r \}. \] There exist $\delta, R > 0$ such that, for each $\tilde{\mathbf{w}} \in \mathbb{R}_{\ge0}^d \setminus \{\mathbf{0}\}$ with $\|\tilde{\mathbf{w}} - \mathbf{w}\| \le \delta$ and each $[\mathbf{z},j] \in \Gamma(\tilde{\mathbf{w}})$, \begin{equation}\label{eq:samepatch} \{[\mathbf{x},i] \in \Gamma(\tilde{\mathbf{w}}):\, \|\mathbf{x} - \mathbf{y}\| \le r\} = P + \mathbf{y} \end{equation} for some $\mathbf{y} \in \mathbb{Z}^d$ with $\|\mathbf{y} - \mathbf{z}\| \le R$. \end{lemma} \begin{proof} The set $\{[\mathbf{x},i] \in \mathbb{Z}^d \times \mathcal{A}:\, \|\mathbf{x}\| \le r\}$ admits the partition $\{P, P^+, P^-\}$, with \begin{align*} P^+ & = \{[\mathbf{x},i] \in \mathbb{Z}^d \times \mathcal{A}:\, \|\mathbf{x}\| \le r,\, \langle \mathbf{w},\mathbf{x} \rangle \ge \langle \mathbf{w},\mathbf{e}_i \rangle\}, \\ P^- & = \{[\mathbf{x},i] \in \mathbb{Z}^d \times \mathcal{A}:\, \|\mathbf{x}\| \le r,\, \langle \mathbf{w},\mathbf{x} \rangle < 0\}. \end{align*} Let $\eta_1 = \min_{[\mathbf{x},i] \in P} \langle \mathbf{w}, \mathbf{e}_i - \mathbf{x} \rangle > 0$, $\eta_2 = \min_{[\mathbf{x},i] \in P^-} \langle \mathbf{w}, - \mathbf{x} \rangle > 0$, and set $\eta=\min\{\eta_1,\eta_2\}$.
Choose $\delta > 0$ such that for all $\tilde{\mathbf{w}} \in \mathbb{R}_{\ge0}^d$ with $\|\tilde{\mathbf{w}} - \mathbf{w}\| \le \delta$ we have \begin{equation} \label{e:eta} \min_{[\mathbf{x},i] \in P} \langle \tilde{\mathbf{w}}, \mathbf{e}_i - \mathbf{x} \rangle \ge 2\eta/3 \qquad \mbox{and} \qquad \min_{[\mathbf{x},i] \in P^-} \langle \tilde{\mathbf{w}}, -\mathbf{x} \rangle \ge 2\eta/3, \end{equation} as well as \begin{equation} \label{e:eta_b} \min_{[\mathbf{x},i] \in P} \langle \tilde{\mathbf{w}}, \mathbf{x} \rangle \ge -\eta/3 \qquad \mbox{and} \qquad \min_{[\mathbf{x},i] \in P^+} \langle \tilde{\mathbf{w}}, \mathbf{x} -\mathbf{e}_i\rangle \ge - \eta/3, \end{equation} and set $R = 6\, (r+1)\, (\|\mathbf{w}\|+\delta) / \eta$. Let now $[\mathbf{z},j] \in \Gamma(\tilde{\mathbf{w}})$ with $\|\tilde{\mathbf{w}} - \mathbf{w}\| \le \delta$. To find $\mathbf{y} \in \mathbb{Z}^d$ satisfying $\|\mathbf{y} - \mathbf{z}\| \le R$ and~\eqref{eq:samepatch}, choose $\mathbf{x}', \mathbf{x}'' \in\mathbb{Z}^d$ with $\|\mathbf{x}'\|, \|\mathbf{x}''\| \le r+1$ such that $\langle \tilde{\mathbf{w}}, \mathbf{x}' \rangle$ is equal to the smaller of the two minima in~\eqref{e:eta}, and $\langle \tilde{\mathbf{w}}, \mathbf{x}'' \rangle$ is equal to the smaller of the two minima in~\eqref{e:eta_b}; this choice is possible by the definition of the minima. Let $\mathbf{y} = \mathbf{z} - h\, (\mathbf{x}' + \mathbf{x}'')$ with $h \in \mathbb{Z}$ such that \begin{equation} \label{e:yh} - \langle \tilde{\mathbf{w}}, \mathbf{x}'' \rangle \le \langle \tilde{\mathbf{w}}, \mathbf{z} - h\, (\mathbf{x}' + \mathbf{x}'') \rangle < \langle \tilde{\mathbf{w}}, \mathbf{x}' \rangle\,; \end{equation} such an $h$ exists (uniquely) since $\langle \tilde{\mathbf{w}}, \mathbf{x}' + \mathbf{x}'' \rangle \ge \eta/3 > 0$ by \eqref{e:eta} and~\eqref{e:eta_b}. Let $[\mathbf{x},i] \in \mathbb{Z}^d \times \mathcal{A}$ with $\|\mathbf{x}\| \le r$. 
By \eqref{e:yh} and the definition of $\mathbf{x}'$ and~$\mathbf{x}''$, we have \begin{align*} \langle \tilde{\mathbf{w}}, \mathbf{x} + \mathbf{y} \rangle < \langle \tilde{\mathbf{w}}, \mathbf{x} + \mathbf{x}' \rangle & \le \begin{cases}\langle \tilde{\mathbf{w}}, \mathbf{e}_i \rangle & \mbox{if}\ [\mathbf{x},i] \in P, \\ 0 & \mbox{if}\ [\mathbf{x},i] \in P^-,\end{cases} \\ \langle \tilde{\mathbf{w}}, \mathbf{x} + \mathbf{y} \rangle \ge \langle \tilde{\mathbf{w}}, \mathbf{x} - \mathbf{x}'' \rangle & \ge \begin{cases}0 & \mbox{if}\ [\mathbf{x},i] \in P, \\ \langle \tilde{\mathbf{w}}, \mathbf{e}_i \rangle & \mbox{if}\ [\mathbf{x},i] \in P^+,\end{cases} \end{align*} thus $[\mathbf{x}+\mathbf{y},i] \in \Gamma(\tilde{\mathbf{w}})$ if $[\mathbf{x},i] \in P$ and $[\mathbf{x}+\mathbf{y},i] \notin \Gamma(\tilde{\mathbf{w}})$ if $[\mathbf{x},i] \in P^- \cup P^+$, i.e., \eqref{eq:samepatch} holds. To show that $\|\mathbf{y} - \mathbf{z}\| \le R$, note that $\frac{\langle\tilde{\mathbf{w}},\mathbf{z}+\mathbf{x}''\rangle}{\langle\tilde{\mathbf{w}},\mathbf{x}'+\mathbf{x}''\rangle} - 1 < h \le \frac{\langle\tilde{\mathbf{w}},\mathbf{z}+\mathbf{x}''\rangle}{\langle\tilde{\mathbf{w}},\mathbf{x}'+\mathbf{x}''\rangle}$. Using the inequalities $-\eta/3 \le \langle \tilde{\mathbf{w}}, \mathbf{x}'' \rangle \le 0$ (given by~\eqref{e:eta_b} and since $[\mathbf{0},i] \in P$), $0 \le \langle \tilde{\mathbf{w}}, \mathbf{z} \rangle \le \langle \tilde{\mathbf{w}}, \mathbf{e}_j \rangle \le \|\tilde{\mathbf{w}}\| \le \|\mathbf{w}\| + \delta$, and $\langle \tilde{\mathbf{w}}, \mathbf{x}' + \mathbf{x}'' \rangle \ge \eta/3$, we obtain that $-2 < h \le 3\,(\|\mathbf{w}\| + \delta)/\eta$, thus $|h| \le 3\,(\|\mathbf{w}\| + \delta)/\eta$ and \[ \|\mathbf{y} - \mathbf{z}\| \le |h|\, (\|\mathbf{x}'\| + \|\mathbf{x}''\|) \le 6\, (r+1)\, (\|\mathbf{w}\|+\delta) / \eta = R.
\qedhere \] \end{proof} \begin{lemma} \label{l:interiornk} Assume that the sequence $\boldsymbol{\sigma} = (\sigma_n)\in S^{\mathbb{N}}$ of unimodular substitutions has Property PRICE w.r.t.\ the sequences $(n_k)$ and $(\ell_k)$ and the vector~$\mathbf{v}$. Then there exists $\ell \in \mathbb{N}$ such that for each pair $i, j \in \mathcal{A}$, there is $[\mathbf{y},j] \in E_1^*(\sigma_{[0,\ell)})[\mathbf{0},i]$ such that \renewcommand{\theenumi}{\roman{enumi}} \begin{enumerate} \itemsep1ex \item \label{i:66i} $M_{[0,\ell)}\, \big(\pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \mathbf{y} + \mathcal{R}^{(\ell)}_\mathbf{v}(j)\big) \subset \mathrm{int}\big(\mathcal{R}_\mathbf{v}(i)\big)$ and \item \label{i:66ii} $M_{[0,\ell)}\, \big(\pi_{\mathbf{u},\mathbf{v}}^{(n_k+\ell)}\, \mathbf{y} + \mathcal{R}^{(n_k+\ell)}_\mathbf{v}(j)\big) \subset \mathrm{int}\big(\mathcal{R}^{(n_k)}_\mathbf{v}(i)\big)$ for all sufficiently large $k \in \mathbb{N}$. \end{enumerate} Moreover, the covering degree of~$\mathcal{C}_\mathbf{v}^{(n)}$ is equal to that of~$\mathcal{C}_\mathbf{v}$ for all $n \in \mathbb{N}$. \end{lemma} \begin{proof} We first show that (\ref{i:66i}) and~(\ref{i:66ii}) hold for some $i \in \mathcal{A}$, $\ell \in \mathbb{N}$, $[\mathbf{y},j] \in E_1^*(\sigma_{[0,\ell)})[\mathbf{0},i]$. Let $m$ be the covering degree of~$\mathcal{C}_{\mathbf{v}}$, which is positive and finite according to Proposition~\ref{p:covering}. Let $\mathbf{z} \in \mathbf{v}^\bot$ be a point lying in exactly $m$ tiles of~$\mathcal{C}_{\mathbf{v}}$. By Lemma~\ref{lem:intcovering}, $\mathbf{z}$~lies in the interior of each of these tiles, and the same is true for some open neighborhood~$U$ of~$\mathbf{z}$. Let $\pi_{\mathbf{u},\mathbf{v}}\, \tilde{\mathbf{x}} + \mathcal{R}(i)$ be one of these tiles. 
By the set equation~\eqref{e:setequationkl} and Lemma~\ref{l:smallsubtiles}, there is $\ell \in \mathbb{N}$ and $[\tilde{\mathbf{y}},j] \in E_1^*(\sigma_{[0,\ell)})[\tilde{\mathbf{x}},i]$ such that \[ M_{[0,\ell)}\, \big(\pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \tilde{\mathbf{y}} + \mathcal{R}^{(\ell)}_\mathbf{v}(j)\big) \subset U \subset \mathrm{int}\big(\pi_{\mathbf{u},\mathbf{v}}\, \tilde{\mathbf{x}} + \mathcal{R}_\mathbf{v}(i)\big). \] Shifting by $-\pi_{\mathbf{u},\mathbf{v}}\, \tilde{\mathbf{x}}$, we see that (\ref{i:66i}) holds for $\mathbf{y} = \tilde{\mathbf{y}} - M_{[0,\ell)}^{-1}\, \tilde{\mathbf{x}}$. By Lemma~\ref{lem:projectionconvergence}, Proposition~\ref{p:close} and since $\mathbf{u} \in \mathbb{R}_+^d$, $\mathbf{v} \in \mathbb{R}_{\ge0}^d \setminus \{\mathbf{0}\}$, we may choose $r > 0$ such that, for all $k \in \mathbb{N}$, $\pi_{\mathbf{u},\mathbf{v}}^{(n_k)}\, \mathbf{x} \in \pi_{\mathbf{u},\mathbf{v}}^{(n_k)}\, U - \mathcal{R}^{(n_k)}_\mathbf{v}$ with $|\langle \mathbf{v}^{(n_k)}, \mathbf{x} \rangle| < \|\mathbf{v}^{(n_k)}\|$ implies $\|\mathbf{x}\| \le r$. In the following, assume that $k$ is sufficiently large. Setting $P = \{[\mathbf{x},i] \in \Gamma(\mathbf{v}):\, \|\mathbf{x}\|\le r\big\}$, Lemma~\ref{l:relativelydense} yields that there is $\mathbf{y}_k \in \mathbb{Z}^d$ such that $\{[\mathbf{x}+\mathbf{y}_k,i] \in \Gamma(\mathbf{v}^{(n_k)}):\, \|\mathbf{x}\| \le r\} = P + \mathbf{y}_k$. Let $[\mathbf{x}+\mathbf{y}_k,i] \in \Gamma(\mathbf{v}^{(n_k)})$ be such that \begin{equation} \label{e:Unk} \pi_{\mathbf{u},\mathbf{v}}^{(n_k)} (\mathbf{y}_k + U) \cap \big(\pi_{\mathbf{u},\mathbf{v}}^{(n_k)} (\mathbf{x} + \mathbf{y}_k) + \mathcal{R}^{(n_k)}_\mathbf{v}(i)\big) \ne \emptyset. 
\end{equation} Then we have $\pi_{\mathbf{u},\mathbf{v}}^{(n_k)}\, \mathbf{x} \in \pi_{\mathbf{u},\mathbf{v}}^{(n_k)}\, U - \mathcal{R}^{(n_k)}_\mathbf{v}$ and $|\langle \mathbf{v}^{(n_k)}, \mathbf{x} \rangle| < \|\mathbf{v}^{(n_k)}\|$ because both $\langle \mathbf{v}^{(n_k)}, \mathbf{x}+\mathbf{y}_k \rangle$ and $\langle \mathbf{v}^{(n_k)}, \mathbf{y}_k \rangle$ are in $[0,\|\mathbf{v}^{(n_k)}\|)$, hence, $\|\mathbf{x}\| \le r$. This gives that $[\mathbf{x}+\mathbf{y}_k,i] \in P + \mathbf{y}_k$, i.e., $[\mathbf{x},i] \in P$. By~\eqref{e:Unk} and Proposition~\ref{p:close}, $\pi_{\mathbf{u},\mathbf{v}}\, \mathbf{x} + \mathcal{R}_\mathbf{v}(i)$ must be one of the $m$ tiles of~$\mathcal{C}_\mathbf{v}$ that contain~$U$. In particular, the covering degree of~$\mathcal{C}_\mathbf{v}^{(n_k)}$ is at most~$m$. By Proposition~\ref{p:covering}, the covering degree is at least $m$ and, hence, equal to~$m$. Therefore, we have $\pi_{\mathbf{u},\mathbf{v}}^{(n_k)} (\mathbf{y}_k + U) \subset \pi_{\mathbf{u},\mathbf{v}}^{(n_k)} (\mathbf{x} + \mathbf{y}_k) + \mathcal{R}^{(n_k)}_\mathbf{v}(i)$ for all $[\mathbf{x},i] \in \Gamma(\mathbf{v})$ satisfying $U \subset \pi_{\mathbf{u},\mathbf{v}}\, \mathbf{x} + \mathcal{R}_\mathbf{v}(i)$. By Lemma~\ref{lem:projectionconvergence} and Proposition~\ref{p:close}, we get that \[ M_{[0,\ell)}\, \big(\pi_{\mathbf{u},\mathbf{v}}^{(n_k+\ell)}\, \tilde{\mathbf{y}} + \mathcal{R}^{(n_k+\ell)}_\mathbf{v}(j)\big) \subset \pi_{\mathbf{u},\mathbf{v}}^{(n_k)}\, U \subset \mathrm{int}\big(\pi_{\mathbf{u},\mathbf{v}}^{(n_k)}\, \tilde{\mathbf{x}} + \mathcal{R}_\mathbf{v}^{(n_k)}(i)\big) \] with $\ell, [\tilde{\mathbf{x}},i], [\tilde{\mathbf{y}},j]$ as in the preceding paragraph, hence, (\ref{i:66ii}) holds for $\mathbf{y} = \tilde{\mathbf{y}} - M_{[0,\ell)}^{-1}\, \tilde{\mathbf{x}}$. To prove the statements for arbitrary $i,j\in \mathcal{A}$, choose $h \in \mathbb{N}$ such that $M_{[0,h)}$ is positive.
Applying the results from the preceding paragraphs and using Lemma~\ref{l:shiftedPRICE}, there are $i' \in \mathcal{A}$, $\ell' \in \mathbb{N}$, and $[\mathbf{y}',j'] \in E_1^*(\sigma_{[h,h+\ell')})[\mathbf{0},i']$ such that \begin{equation} \label{e:62i'} M_{[h,h+\ell')}\, \big(\pi_{\mathbf{u},\mathbf{v}}^{(h+\ell')}\, \mathbf{y}' + \mathcal{R}^{(h+\ell')}_\mathbf{v}(j')\big) \subset \mathrm{int}\big(\mathcal{R}^{(h)}_\mathbf{v}(i')\big) \end{equation} and, for sufficiently large~$k$, \begin{equation} \label{e:62i''} M_{[h,h+\ell')}\, \big(\pi_{\mathbf{u},\mathbf{v}}^{(n_k+h+\ell')}\, \mathbf{y}' + \mathcal{R}^{(n_k+h+\ell')}_\mathbf{v}(j')\big) \subset \mathrm{int}\big(\mathcal{R}^{(n_k+h)}_\mathbf{v}(i')\big). \end{equation} Choose $\ell > h+\ell'$ such that $M_{[h+\ell',\ell)}$ is positive. Then for each pair $i,j \in \mathcal{A}$, there are $\mathbf{x}', \mathbf{y} \in \mathbb{Z}^d$ such that $[\mathbf{x}',i'] \in E_1^*(\sigma_{[0,h)})[\mathbf{0},i]$ and $[\mathbf{y},j] \in E_1^*(\sigma_{[h+\ell',\ell)})[\mathbf{y}' + (M_{[h,h+\ell')})^{-1} \mathbf{x}',j']$. We get that \[ [\mathbf{y}, j] \in E_1^*(\sigma_{[h+\ell',\ell)})[\mathbf{y}' + (M_{[h,h+\ell')})^{-1} \mathbf{x}', j'] \subset E_1^*(\sigma_{[h,\ell)})[\mathbf{x}', i'] \subset E_1^*(\sigma_{[0,\ell)})[\mathbf{0},i], \] and (\ref{i:66i}) and~(\ref{i:66ii}) are true by \eqref{e:62i'} and~\eqref{e:62i''}, respectively. We have seen that the covering degree of~$\mathcal{C}_\mathbf{v}^{(n_k)}$ is equal to that of~$\mathcal{C}_\mathbf{v}$ for all sufficiently large~$k$. As the covering degree increases monotonically by Proposition~\ref{p:covering}, this holds also for all~$\mathcal{C}_\mathbf{v}^{(n)}$. \end{proof} We can now prove that the boundary of $\mathcal{R}(i)$ has zero measure for each $i \in \mathcal{A}$. 
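Before giving the proof, it may help to recall on a concrete example (which is purely illustrative and not used below) how faces are counted by~$E_1^*$. For the Tribonacci substitution $\sigma\colon 1 \mapsto 12$, $2 \mapsto 13$, $3 \mapsto 1$, each occurrence of the letter~$i$ in $\sigma(j)$ contributes exactly one face $[\mathbf{y},j] \in E_1^*(\sigma)[\mathbf{0},i]$ with $\mathbf{y} = M_\sigma^{-1}\, \mathbf{l}(p)$, where $\sigma(j) = p\hspace{.1em}i\hspace{.1em}s$ and $M_\sigma$ denotes the incidence matrix of~$\sigma$. Hence \[ E_1^*(\sigma)[\mathbf{0},1] = \big\{[\mathbf{0},1], [\mathbf{0},2], [\mathbf{0},3]\big\}, \qquad E_1^*(\sigma)[\mathbf{0},2] = \big\{[\mathbf{e}_3,1]\big\}, \qquad E_1^*(\sigma)[\mathbf{0},3] = \big\{[\mathbf{e}_3,2]\big\}, \] using $M_\sigma^{-1}\, \mathbf{e}_1 = \mathbf{e}_3$; in particular, the number of faces of type~$j$ in $E_1^*(\sigma)[\mathbf{0},i]$ equals $|\sigma(j)|_i$.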
\begin{proposition} \label{p:boundary} Let $S$ be a finite or infinite set of unimodular substitutions over a finite alphabet $\mathcal{A}$ and let $\boldsymbol{\sigma}\in S^{\mathbb{N}}$ be a directive sequence with Property PRICE. Then $\lambda_{\mathbf{1}}(\partial \mathcal{R}(i)) = 0$ for each $i \in \mathcal{A}$. \end{proposition} \begin{proof} Let the sequence~$(n_k)$ and the vector~$\mathbf{v}$ be as in Definition~\ref{def:star}, and set \begin{align*} C_{m,n}(i,j) & = \#\big\{\mathbf{y} \in \mathbb{Z}^d:\, [\mathbf{y},j] \in E_1^*(\sigma_{[m,n)}) [\mathbf{0},i]\big\}, \\ D_{m,n}(i,j) & = \#\big\{\mathbf{y} \in \mathbb{Z}^d:\, [\mathbf{y},j] \in E_1^*(\sigma_{[m,n)}) [\mathbf{0},i],\, M_{[m,n)} \big(\pi_{\mathbf{u},\mathbf{v}}^{(n)}\, \mathbf{y} + \mathcal{R}^{(n)}_\mathbf{v}(j)\big) \cap \partial \mathcal{R}^{(m)}_\mathbf{v}(i) \ne \emptyset\big\}, \end{align*} for $i,j \in \mathcal{A}$, $m \le n$. Our main task is to show that \begin{equation} \label{e:limDC} \lim_{n\to\infty} \frac{D_{0,n}(i,j)}{C_{0,n}(i,j)} = 0 \qquad \mbox{for all}\ i,j \in \mathcal{A}. \end{equation} Let $\ell\in \mathbb{N}$ be as in the statement of Lemma~\ref{l:interiornk}. We thus have, for each pair $i,j \in \mathcal{A}$, at least one~$\mathbf{y}$ such that $[\mathbf{y},j] \in E_1^*(\sigma_{[0,\ell)}) [\mathbf{0},i]$ and $M_{[0,\ell)} \big(\pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \mathbf{y} + \mathcal{R}^{(\ell)}_\mathbf{v}(j)\big) \cap \partial \mathcal{R}_\mathbf{v}(i) = \emptyset$, i.e., $D_{0,\ell}(i,j) \le C_{0,\ell}(i,j) - 1$. Set $c = 1 - 1/\max_{i,j\in\mathcal{A}} C_{0,\ell}(i,j) < 1$. Since all subtiles of $M_{[0,\ell)}\, (\pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \mathbf{y} + \mathcal{R}^{(\ell)}_\mathbf{v}(j))$ are also contained in $\mathrm{int}(\mathcal{R}_\mathbf{v}(i))$, we obtain for each $n \ge \ell$ that \[ D_{0,n}(i,j) \le \sum_{j'\in\mathcal{A}} D_{0,\ell}(i,j')\, C_{\ell,n}(j',j) \le c \sum_{j'\in\mathcal{A}} C_{0,\ell}(i,j')\, C_{\ell,n}(j',j) = c\, C_{0,n}(i,j).
\] Let us refine this inequality using Lemma~\ref{l:interiornk}~(\ref{i:66ii}). For sufficiently large~$k$, we have $D_{n_k,n_k+\ell}(i,j) \le C_{n_k,n_k+\ell}(i,j) - 1 = C_{0,\ell}(i,j) - 1$, and each subtile $\pi_{\mathbf{u},\mathbf{v}}^{(n_k+\ell)}\, \mathbf{y} + \mathcal{R}^{(n_k+\ell)}_\mathbf{v}(j)$ that is in the interior of a subtile $\pi_{\mathbf{u},\mathbf{v}}^{(n_k)}\, \mathbf{x} + \mathcal{R}^{(n_k)}_\mathbf{v}(i')$ of~$\mathcal{R}_\mathbf{v}(i)$ is clearly also in the interior of~$\mathcal{R}_\mathbf{v}(i)$. Thus we have \begin{align*} D_{0,n}(i,j) & \le \sum_{j',i',j''\in\mathcal{A}} D_{0,\ell}(i,j')\, C_{\ell,n_k}(j',i') D_{n_k,n_k+\ell}(i',j'')\, C_{n_k+\ell,n}(j'',j) \\ & \le c^2 \sum_{j',i',j''\in\mathcal{A}} C_{0,\ell}(i,j')\, C_{\ell,n_k}(j',i')\, C_{0,\ell}(i',j'')\, C_{n_k+\ell,n}(j'',j) = c^2\, C_{0,n}(i,j) \end{align*} for $n \ge n_k+\ell$. A~similar argument with $h$ different values of~$n_k$ yields for each $h \in \mathbb{N}$ that $D_{0,n}(i,j) \le c^{h+1}\, C_{0,n}(i,j)$ for sufficiently large~$n$, thus \eqref{e:limDC} is true. By Lemma~\ref{lem:projectionconvergence}, Proposition~\ref{p:close} and since $\pi_{\mathbf{u},\mathbf{v}}\, \Gamma(\mathbf{v})$ is uniformly discrete, there exists $m \in \mathbb{N}$ such that, for all $k \in \mathbb{N}$, each point of~$(\mathbf{v}^{(n_k)})^\bot$ lies in at most $m$ tiles of~$\mathcal{C}^{(n_k)}_\mathbf{v}$. 
Then \begin{align*} \lambda_\mathbf{v} \big(\partial \mathcal{R}_\mathbf{v}(i)\big) & \le \sum_{j\in\mathcal{A}} D_{0,n}(i,j)\, \lambda_\mathbf{v} \big(M_{[0,n)}\, \mathcal{R}^{(n)}_\mathbf{v}(j)\big) \qquad \mbox{for all}\ n \in\mathbb{N}, \\ \lambda_\mathbf{v} \big(\mathcal{R}_\mathbf{v}(i)\big) & \ge \frac{1}{m} \sum_{j\in\mathcal{A}} C_{0,n_k}(i,j)\, \lambda_\mathbf{v} \big(M_{[0,n_k)}\, \mathcal{R}^{(n_k)}_\mathbf{v}(j)\big) \qquad \mbox{for all}\ k \in\mathbb{N}, \end{align*} by the set equations~\eqref{e:setequationkl}, thus \[ \frac{\lambda_\mathbf{v}\big(\partial\mathcal{R}_\mathbf{v}(i)\big)}{\lambda_\mathbf{v}\big(\mathcal{R}_\mathbf{v}(i)\big)} \le \frac{m\, \sum_{j\in\mathcal{A}} D_{0,n_k}(i,j)}{\sum_{j\in\mathcal{A}} C_{0,n_k}(i,j)}\, \frac{\max_{j\in\mathcal{A}} \lambda_\mathbf{v} (M_{[0,n_k)}\, \mathcal{R}^{(n_k)}_\mathbf{v}(j))}{\min_{j\in\mathcal{A}} \lambda_\mathbf{v} (M_{[0,n_k)}\, \mathcal{R}^{(n_k)}_\mathbf{v}(j))} \qquad \mbox{for all}\ k \in\mathbb{N}. \] It remains to show that the latter fraction is bounded. Let $h \in \mathbb{N}$ be such that $M_{[0,h)}$ is a positive matrix. For sufficiently large~$k$, we have $M_{[n_k,n_k+h)} = M_{[0,h)}$ and thus \begin{align*} \frac{\max_{i\in\mathcal{A}} \lambda_\mathbf{v} (M_{[0,n_k)}\, \mathcal{R}^{(n_k)}_\mathbf{v}(i))}{\min_{i\in\mathcal{A}} \lambda_\mathbf{v} (M_{[0,n_k)}\, \mathcal{R}^{(n_k)}_\mathbf{v}(i))} & \le \frac{\max_{i\in\mathcal{A}} \sum_{j\in\mathcal{A}} C_{0,h}(i,j) \max_{j\in\mathcal{A}} \lambda_\mathbf{v} (M_{[0,n_k+h)}\, \mathcal{R}^{(n_k+h)}_\mathbf{v}(j))}{\max_{j\in\mathcal{A}} \lambda_\mathbf{v} (M_{[0,n_k+h)}\, \mathcal{R}^{(n_k+h)}_\mathbf{v}(j))} \\ & = \max_{i\in\mathcal{A}} \sum_{j\in\mathcal{A}} C_{0,h}(i,j). \end{align*} Together with~\eqref{e:limDC}, we obtain that $\lambda_\mathbf{v}(\partial \mathcal{R}_\mathbf{v}(i)) = 0$ and, hence, $\lambda_\mathbf{1}(\partial \mathcal{R}(i)) = 0$.
\end{proof} We also get the following strengthening of Proposition~\ref{p:close} for the difference between~$\mathcal{R}_\mathbf{v}^{(\ell)}$ and~$\pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \mathcal{R}_\mathbf{v}^{(n_k+\ell)}$. One can prove in a similar way that $\lim_{k\to\infty} \lambda_{\mathbf{v}^{(\ell)}} \big(\pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \mathcal{R}^{(n_k+\ell)}_\mathbf{v}(i) \setminus \mathcal{R}_\mathbf{v}^{(\ell)}(i)\big) = 0$, but we will not need this result. \begin{lemma} \label{l:close2} Assume that the sequence $\boldsymbol{\sigma} = (\sigma_n)\in S^{\mathbb{N}}$ of unimodular substitutions has Property PRICE w.r.t.\ the sequences $(n_k)$ and $(\ell_k)$ and the vector~$\mathbf{v}$. Then, for each $i \in \mathcal{A}$ and $\ell \in \mathbb{N}$, \begin{equation} \label{e:close2} \lim_{k\to\infty} \lambda_{\mathbf{v}^{(\ell)}} \big(\mathcal{R}_\mathbf{v}^{(\ell)}(i) \setminus \pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \mathcal{R}^{(n_k+\ell)}_\mathbf{v}(i)\big) = 0. \end{equation} \end{lemma} \begin{proof} Let $\ell = 0$, the case $\ell > 0$ then being a consequence of Lemma~\ref{l:shiftedPRICE}. For $\varepsilon > 0$ and $X \subset \mathbf{v}^\bot$, let $X_\varepsilon = \{\mathbf{x} \in \mathbf{v}^\bot:\, \|\mathbf{x} - \mathbf{y}\| \le \varepsilon\ \mbox{for some}\ \mathbf{y} \in X\}$. With the notation of the proof of Proposition~\ref{p:boundary}, we obtain that \[ \lambda_\mathbf{v} \Big((\mathcal{R}_\mathbf{v}(i))_\varepsilon \setminus \mathcal{R}_\mathbf{v}(i)\Big) \le \sum_{j\in\mathcal{A}} D_{0,n}(i,j)\, \lambda_\mathbf{v} \Big(\big(M_{[0,n)} \mathcal{R}^{(n)}_\mathbf{v}(j)\big)_\varepsilon\Big). \] Let $\varepsilon' > 0$ be arbitrary but fixed. By the proof of Proposition~\ref{p:boundary}, we have some $n \in \mathbb{N}$ such that $\sum_{j\in\mathcal{A}} D_{0,n}(i,j)\, \lambda_\mathbf{v} \big(M_{[0,n)} \mathcal{R}^{(n)}_\mathbf{v}(j)\big) < \varepsilon'$. 
Choose $\varepsilon > 0$ such that \[ \sum_{j\in\mathcal{A}} D_{0,n}(i,j)\, \lambda_\mathbf{v} \big(\big(M_{[0,n)} \mathcal{R}^{(n)}_\mathbf{v}(j)\big)_\varepsilon\big) < \varepsilon'. \] This is possible since, for compact $X \subset \mathbf{v}^\bot$, we have $\bigcap_{\varepsilon>0} X_\varepsilon = X$, thus ${\lim_{\varepsilon\to0} \lambda_\mathbf{v}(X_\varepsilon) = \lambda_\mathbf{v}(X)}$. For sufficiently large~$k$, we have $\pi_{\mathbf{u},\mathbf{v}} \mathcal{R}^{(n_k)}_\mathbf{v}(i) \subset (\mathcal{R}_\mathbf{v}(i))_\varepsilon$ by Proposition~\ref{p:close}, which implies that $\lambda_\mathbf{v} \big(\pi_{\mathbf{u},\mathbf{v}}\, \mathcal{R}^{(n_k)}_\mathbf{v}(i) \setminus \mathcal{R}_\mathbf{v}(i)\big) < \varepsilon'$. As the choice of~$\varepsilon'$ was arbitrary, this yields~\eqref{e:close2}. \end{proof} \section{Tilings and coincidences} \label{sec:tilings} Let $S$ be a finite or infinite set of unimodular substitutions over the finite alphabet~$\mathcal{A}$ and let $\boldsymbol{\sigma}\in S^{\mathbb{N}}$ be a directive sequence. In this section, we prove several tiling results for Rauzy fractals associated with $\boldsymbol{\sigma}$. First we show that the collections $\mathcal{C}_\mathbf{w}$ form multiple tilings under general conditions and prove that the subdivision of the Rauzy fractals induced by the set equation consists of measure disjoint pieces. In the second part we deal with various coincidence conditions that imply further measure disjointness properties of Rauzy fractals and lead to criteria for $\mathcal{C}_\mathbf{w}$ to be a tiling. \subsection{Tiling properties}\label{sec:tilings2} We start this section by giving a general criterion for the collection $\mathcal{C}_\mathbf{v}$ to be a multiple tiling. \begin{lemma} \label{l:mtiling} Assume that the sequence~$\boldsymbol{\sigma}\in S^{\mathbb{N}}$ of unimodular substitutions has Property PRICE with recurrent left eigenvector~$\mathbf{v}$. 
Then the collection~$\mathcal{C}_\mathbf{v}$ forms a multiple tiling of~$\mathbf{v}^\bot$. \end{lemma} \begin{proof} Let $(n_k)$ be the strictly increasing sequence associated with~$\boldsymbol{\sigma}$ according to Definition~\ref{def:star}, let $m$ be the covering degree of~$\mathcal{C}_\mathbf{v}$, which is positive and finite by Proposition~\ref{p:covering}, and let $X$ be the set of points lying in at least $m+1$ tiles of~$\mathcal{C}_\mathbf{v}$. We have to show that $X$ has zero measure. By Lemma~\ref{l:interiornk}, each $(\mathbf{v}^{(n_k)})^\bot$ with sufficiently large~$k$ contains points lying in exactly $m$ tiles of~$\mathcal{C}^{(n_k)}_\mathbf{v}$. Moreover, by Lemma~\ref{l:relativelydense}, there exists a constant $R > 0$ such that each ball of radius~$R$ in~$\Gamma(\mathbf{v}^{(n_k)})$ contains $\mathbf{y}_k$ as in the proof of Lemma~\ref{l:interiornk}. Since $\|\mathbf{x} - \pi_{\mathbf{u},\mathbf{v}}^{(n_k)}\, \mathbf{x}\|$, with $[\mathbf{x},i] \in \Gamma(\mathbf{v}^{(n_k)})$, is bounded, we obtain that there exists $R' > 0$ such that each ball of radius $R'$ in~$(\mathbf{v}^{(n_k)})^\bot$ contains a point lying in exactly $m$ tiles of~$\mathcal{C}^{(n_k)}_\mathbf{v}$, for all sufficiently large~$k$. On the other hand, by Lemma~\ref{l:covering}, each point in $(M_{[0,n_k)})^{-1} X \subset (\mathbf{v}^{(n_k)})^\bot$ is covered at least $m+1$ times by elements of~$\mathcal{C}_\mathbf{v}^{(n_k)}$. Assume that $X$ has positive measure. Then, as the boundaries of~$\mathcal{R}(i)$ and thus of~$\mathcal{R}_\mathbf{v}(i)$ have zero measure by Proposition~\ref{p:boundary}, there are points in~$X$ that are not contained in the boundary of any element of~$\mathcal{C}_\mathbf{v}$. Thus $X$ contains a ball of positive diameter, and, by Proposition~\ref{p:strongconvergence}, $(M_{[0,n_k)})^{-1} X$ contains a ball of radius~$R'$ for all sufficiently large~$k$. 
This contradicts the fact that each ball of radius~$R'$ in $(\mathbf{v}^{(n_k)})^\bot$ contains a point that is covered at most $m$ times. Therefore, $X$~has zero measure, i.e., $\mathcal{C}_\mathbf{v}$~forms a multiple tiling with covering degree~$m$. \end{proof} \begin{lemma} \label{l:mtilingPvn} Assume that the sequence~$\boldsymbol{\sigma}\in S^{\mathbb{N}}$ of unimodular substitutions has Property PRICE with recurrent left eigenvector~$\mathbf{v}$. Then, for each $n \in \mathbb{N}$, $\mathcal{C}^{(n)}_\mathbf{v}$~is a multiple tiling of~$(\mathbf{v}^{(n)})^\bot$, with covering degree equal to that of~$\mathcal{C}_\mathbf{v}$. \end{lemma} \begin{proof} If $(\sigma_n)_{n\in\mathbb{N}}$ has Property PRICE w.r.t.\ the sequences $(n_k)$ and $(\ell_k)$ and the vector~$\mathbf{v}$, then there is $k_0\in\mathbb{N}$ such that $(\sigma_{m+n})_{m\in\mathbb{N}}$ has Property PRICE w.r.t.\ the sequences $(n_{k+k_0})$ and $(\ell_{k+k_0}{-}n)$ and the vector~$\mathbf{v}^{(n)}$ by Lemma~\ref{l:shiftedPRICE}, thus $\mathcal{C}^{(n)}_\mathbf{v}$~is a multiple tiling of~$(\mathbf{v}^{(n)})^\bot$ by Lemma~\ref{l:mtiling}. By Lemma~\ref{l:interiornk}, the covering degree of~$\mathcal{C}^{(n)}_\mathbf{v}$ is equal to that of~$\mathcal{C}_\mathbf{v}$. \end{proof} \begin{proposition} \label{p:disjoint} Let $S$ be a finite or infinite set of unimodular substitutions and let $\boldsymbol{\sigma}\in S^{\mathbb{N}}$ be a directive sequence with Property PRICE. Then the unions in the set equations \eqref{e:setequationkl} of Proposition~\ref{p:setequation} are disjoint in measure. \end{proposition} \begin{proof} Let $\mathbf{v}$ be a recurrent left eigenvector as in Definition~\ref{def:star}, let $m$ be the covering degree of the multiple tilings~$\mathcal{C}^{(n)}_\mathbf{v}$, according to Lemma~\ref{l:mtilingPvn}, and $k < \ell$. 
Then the set of points in~$(\mathbf{v}^{(\ell)})^\bot$ lying in at least $m+1$ tiles of~$\mathcal{C}^{(\ell)}_\mathbf{v}$ has zero measure and each point in $(\mathbf{v}^{(k)})^\bot$ lies in at least $m$ tiles of~$\mathcal{C}^{(k)}_\mathbf{v}$. Therefore, Lemma~\ref{l:covering} implies that the intersection of $\pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \mathbf{y} + \mathcal{R}^{(\ell)}_\mathbf{v}(j)$ and $\pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \mathbf{y}' + \mathcal{R}^{(\ell)}_\mathbf{v}(j')$ has zero measure for distinct $[\mathbf{y},j], [\mathbf{y}',j'] \in E_1^*(\sigma_{[k,\ell)})[\mathbf{x},i]$, with $[\mathbf{x},i] \in \Gamma(\mathbf{v}^{(k)})$. By translation, this also holds for all $[\mathbf{x},i] \in \mathbb{Z}^d \times \mathcal{A}$ such that $\langle \mathbf{v}^{(k)}, \mathbf{e}_i \rangle > 0$. Projecting by~$\pi_{\mathbf{u},\mathbf{w}}^{(\ell)}$, we obtain that $\pi_{\mathbf{u},\mathbf{w}}^{(\ell)}\, \mathbf{y} + \mathcal{R}^{(\ell)}_\mathbf{w}(j)$ and $\pi_{\mathbf{u},\mathbf{w}}^{(\ell)}\, \mathbf{y}' + \mathcal{R}^{(\ell)}_\mathbf{w}(j')$ are disjoint in measure for all $\mathbf{w} \in \mathbb{R}_{\ge0}^d \setminus \{\mathbf{0}\}$. It remains to consider the case that $\langle \mathbf{v}^{(k)}, \mathbf{e}_i \rangle = 0$. By primitivity of~$\boldsymbol{\sigma}$, there is $h \in \mathbb{N}$ such that $\mathbf{v}^{(h)} \in \mathbb{R}_+^d$. For sufficiently large~$\kappa$, we have thus $\mathbf{v}^{(n_\kappa+k)} \in \mathbb{R}_+^d$ and the previous paragraph implies that the intersection of $\pi_{\mathbf{u},\mathbf{v}}^{(n_\kappa+\ell)}\, \mathbf{y} + \mathcal{R}^{(n_\kappa+\ell)}_\mathbf{v}(j)$ and $\pi_{\mathbf{u},\mathbf{v}}^{(n_\kappa+\ell)}\, \mathbf{y}' + \mathcal{R}^{(n_\kappa+\ell)}_\mathbf{v}(j')$ has zero measure for distinct $[\mathbf{y},j], [\mathbf{y}',j'] \in E_1^*(\sigma_{[k,\ell)})[\mathbf{0},i]$. 
As $\lim_{\kappa\to\infty} \pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \pi_{\mathbf{u},\mathbf{v}}^{(n_\kappa+\ell)}\, \mathbf{y} = \pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \mathbf{y}$ by Lemma~\ref{lem:projectionconvergence} and $\lim_{\kappa\to\infty} \lambda_{\mathbf{v}^{(\ell)}} \big(\mathcal{R}_\mathbf{v}^{(\ell)}(j) \setminus \pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \mathcal{R}^{(n_\kappa+\ell)}_\mathbf{v}(j)\big) = 0$ by Lemma~\ref{l:close2}, we obtain that the intersection of $\pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \mathbf{y} + \mathcal{R}^{(\ell)}_\mathbf{v}(j)$ and $\pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \mathbf{y}' + \mathcal{R}^{(\ell)}_\mathbf{v}(j')$ also has zero measure. \end{proof} \begin{lemma} \label{l:measure} Assume that the sequence $\boldsymbol{\sigma}=(\sigma_n)\in S^{\mathbb{N}}$ of unimodular substitutions has Property PRICE with recurrent left eigenvector~$\mathbf{v}$. Let $m$ be the covering degree of the multiple tiling~$\mathcal{C}_\mathbf{v}$, and identify $[\mathbf{0},i]$ with a face of the unit hypercube orthogonal to~$\mathbf{e}_i$. Then \begin{equation} \label{e:lambdav} \tr{\big(\lambda_\mathbf{v}(\mathcal{R}_\mathbf{v}(1)), \ldots, \lambda_\mathbf{v}(\mathcal{R}_\mathbf{v}(d))\big)} = m\, \tr{\big(\lambda_\mathbf{v}(\pi_{\mathbf{u},\mathbf{v}}\, [\mathbf{0},1]), \ldots, \lambda_\mathbf{v}(\pi_{\mathbf{u},\mathbf{v}}\, [\mathbf{0},d])\big)} \in \mathbb{R} \mathbf{u}\,. \end{equation} \end{lemma} \begin{proof} As in the proof of \cite[Lemma~2.3]{Ito-Rao:06}, we see that $\tr{\big(\lambda_\mathbf{v}(\pi_{\mathbf{u},\mathbf{v}}\, [\mathbf{0},1]), \ldots, \lambda_\mathbf{v} (\pi_{\mathbf{u},\mathbf{v}}\, [\mathbf{0},d])\big)} \in \mathbb{R} \mathbf{u}$. 
Using the set equations~\eqref{e:setequationkl} and Proposition~\ref{p:disjoint}, we obtain that \[ \begin{pmatrix} \lambda_\mathbf{v}(\mathcal{R}_\mathbf{v}(1)) \\ \vdots \\ \lambda_\mathbf{v}(\mathcal{R}_\mathbf{v}(d)) \end{pmatrix} = M_{[0,n)} \begin{pmatrix} \lambda_\mathbf{v}(M_{[0,n)} \mathcal{R}^{(n)}_\mathbf{v}(1)) \\ \vdots \\ \lambda_\mathbf{v}(M_{[0,n)} \mathcal{R}^{(n)}_\mathbf{v}(d)) \end{pmatrix} \] for all $n \in \mathbb{N}$. Then~\eqref{e:topPF} implies that $\tr{\big(\lambda_\mathbf{v}(\mathcal{R}_\mathbf{v}(1)), \ldots, \lambda_\mathbf{v}(\mathcal{R}_\mathbf{v}(d))\big)} \in \mathbb{R} \mathbf{u}$, hence, \[ \big(\lambda_\mathbf{v}(\mathcal{R}_\mathbf{v}(1)), \ldots, \lambda_\mathbf{v}(\mathcal{R}_\mathbf{v}(d))\big) = r\, \big(\lambda_\mathbf{v}(\pi_{\mathbf{u},\mathbf{v}}\, [\mathbf{0},1]), \ldots, \lambda_\mathbf{v}(\pi_{\mathbf{u},\mathbf{v}}\, [\mathbf{0},d])\big) \] for some $r \in \mathbb{R}$. Now, as $\{\pi_{\mathbf{u},\mathbf{v}} (\mathbf{x} + [\mathbf{0},i]):\, [\mathbf{x},i] \in \Gamma(\mathbf{v})\}$ forms a tiling of~$\mathbf{v}^\bot$, and $\mathcal{C}_\mathbf{v}$ has covering degree~$m$, we have $r = m$. \end{proof} The following result seems to be new even in the periodic case: Rauzy fractals induce tilings on any given hyperplane; in particular, $\mathcal{R}_{\mathbf{e}_i}(i)$ tiles ${\mathbf{e}_i}^\bot$ periodically for each $i \in \mathcal{A}$. \begin{proposition}\label{p:independentmultiple} Let $S$ be a finite or infinite set of unimodular substitutions over a finite alphabet and assume that $\boldsymbol{\sigma}\in S^{\mathbb{N}}$ has Property PRICE. Then, for each $\mathbf{w} \in \mathbb{R}^d_{\ge0} \setminus \{\mathbf{0}\}$, the collection $\mathcal{C}_\mathbf{w}$ forms a multiple tiling of~$\mathbf{w}^\bot$, with covering degree not depending on~$\mathbf{w}$. 
\end{proposition} \begin{proof} Let $\mathbf{v}$ be a recurrent left eigenvector as in Definition~\ref{def:star} and $\mathbf{w} \in \mathbb{R}^d_{\ge0} \setminus \{\mathbf{0}\}$. Consider the collections $\mathcal{D}_\mathbf{w}^{(n)} = \{\mathcal{S}_\mathbf{w}^{(n)}(\mathbf{x},i):\, [\mathbf{x},i] \in \Gamma(\mathbf{w})\}$, $n \in \mathbb{N}$, with \[ \mathcal{S}_\mathbf{w}^{(n)}(\mathbf{x},i) = \bigcup_{[\mathbf{y},j] \in E_1^*(\sigma_{[0,n)})[\mathbf{x},i] \cap \Gamma(\mathbf{v}^{(n)})} M_{[0,n)} \big(\pi_{\mathbf{u},\mathbf{w}}^{(n)}\, \mathbf{y} + \mathcal{R}_\mathbf{w}^{(n)}(j)\big). \] By Lemma~\ref{l:mtilingPvn}, the collections $\pi_{\mathbf{u},\mathbf{w}}^{(n)}\, \mathcal{C}^{(n)}_\mathbf{v} = \{\pi_{\mathbf{u},\mathbf{w}}^{(n)}\, \mathbf{y} + \mathcal{R}_\mathbf{w}^{(n)}(j):\, [\mathbf{y},j] \in \Gamma(\mathbf{v}^{(n)})\}$ are multiple tilings with covering degree~$m$ not depending on~$n$. Therefore, for each $n \in \mathbb{N}$ by Lemma~\ref{l:e1star}~(\ref{62iii}), almost all points in~$\mathbf{w}^\bot$ lie in at most~$m$ sets of~$\mathcal{D}_\mathbf{w}^{(n)}$. Next we show that $\mathcal{S}_\mathbf{w}^{(n)}(\mathbf{x},i)$ tends to $\pi_{\mathbf{u},\mathbf{w}}\, \mathbf{x} + \mathcal{R}_\mathbf{w}(i)$ in measure. For any $[\mathbf{y},j] \in E_1^*(\sigma_{[0,n)})[\mathbf{x},i]$, we have $p,s \in \mathcal{A}^*$ such that $\mathbf{y} = (M_{[0,n)})^{-1} (\mathbf{x} + \mathbf{l}(p))$, $\sigma_{[0,n)}(j) = p\hspace{.1em}i\hspace{.1em}s$. 
Since \[ \langle \mathbf{v}^{(n)}, \mathbf{y} \rangle = \langle \mathbf{v}, \mathbf{x} + \mathbf{l}(p) \rangle = \langle \mathbf{v}, \mathbf{x} - \mathbf{l}(i\hspace{.1em}s) \rangle + \langle \mathbf{v}^{(n)}, \mathbf{e}_j \rangle \] and $[\mathbf{y},j] \in \Gamma(\mathbf{v}^{(n)})$ if and only if $0 \le \langle \mathbf{v}^{(n)}, \mathbf{y} \rangle < \langle \mathbf{v}^{(n)}, \mathbf{e}_j \rangle$, we have $[\mathbf{y},j] \not\in \Gamma(\mathbf{v}^{(n)})$ if and only if \[ \langle \mathbf{v}, \mathbf{l}(p) \rangle < - \langle \mathbf{v}, \mathbf{x} \rangle \quad \mbox{or} \quad \langle \mathbf{v}, \mathbf{l}(i\hspace{.1em}s) \rangle \le \langle \mathbf{v}, \mathbf{x} \rangle. \] As $\mathbf{v} \in \mathbb{R}_{\ge0}^d \setminus \{\mathbf{0}\}$ and each letter in~$\mathcal{A}$ occurs in $\sigma_{[0,n)}(j)$ with bounded gaps (by primitivity of~$\boldsymbol{\sigma}$), there is only a bounded number of faces $[\mathbf{y},j] \in E_1^*(\sigma_{[0,n)})[\mathbf{x},i] \setminus \Gamma(\mathbf{v}^{(n)})$ for each~$n$ (with the bound depending on~$\mathbf{x}$). By \eqref{e:setequationkl} and Lemma~\ref{l:smallsubtiles}, we obtain that \begin{align*} & \lim_{n\to\infty} \lambda_\mathbf{w}\big(\big(\pi_{\mathbf{u},\mathbf{w}}\, \mathbf{x} + \mathcal{R}_\mathbf{w}(i)\big) \setminus \mathcal{S}_\mathbf{w}^{(n)}(\mathbf{x},i)\big) \\ & \qquad = \lim_{n\to\infty} \lambda_\mathbf{w} \Bigg( \bigcup_{[\mathbf{y},j] \in E_1^*(\sigma_{[0,n)})[\mathbf{x},i] \setminus \Gamma(\mathbf{v}^{(n)})} M_{[0,n)} \big(\pi_{\mathbf{u},\mathbf{w}}^{(n)}\, \mathbf{y} + \mathcal{R}_\mathbf{w}^{(n)}(j)\big) \Bigg) = 0 \end{align*} for all $[\mathbf{x},i] \in \mathbb{Z}^d \times \mathcal{A}$. Therefore, almost all points in~$\mathbf{w}^\bot$ lie in at most~$m$ sets of~$\mathcal{C}_\mathbf{w}$. 
Projecting the sets in~\eqref{e:lambdav} to~$\mathbf{w}^\bot$, we obtain that \[ \big(\lambda_\mathbf{w}(\mathcal{R}_\mathbf{w}(1)), \ldots, \lambda_\mathbf{w}(\mathcal{R}_\mathbf{w}(d))\big) = m\, \big(\lambda_\mathbf{w}(\pi_{\mathbf{u},\mathbf{w}}([\mathbf{0},1])), \ldots, \lambda_\mathbf{w}(\pi_{\mathbf{u},\mathbf{w}}([\mathbf{0},d]))\big)\,. \] As almost all points in~$\mathbf{w}^\bot$ lie in at most $m$ different sets $\pi_{\mathbf{u},\mathbf{w}}\, \mathbf{x} + \mathcal{R}_\mathbf{w}(i)$, this implies that $\mathcal{C}_\mathbf{w}$ forms a multiple tiling of~$\mathbf{w}^\bot$ with covering degree~$m$. \end{proof} The following proposition generalizes a result of \cite{Ito-Rao:06}. \begin{proposition} \label{p:tilingRd} Let $\mathbf{w}\in \mathbb{R}^d_{\ge 0} \setminus \{\mathbf{0}\}$. Then $\widehat{\mathcal{C}}_{\mathbf{w}}$ forms a multiple tiling of~$\mathbb{R}^d$ with covering degree~$m$ if and only if $\mathcal{C}_{\mathbf{w}}$ forms a multiple tiling of~$\mathbf{w}^\bot$ with covering degree~$m$. \end{proposition} \begin{proof} For $\mathbf{x} \in \mathbb{Z}^d$, we have \[ \big({-}\mathbf{x} + \widehat{\mathcal{R}}_{\mathbf{w}}(i)\big) \cap \mathbf{w}^\bot = \begin{cases} -(\pi_{\mathbf{u},\mathbf{w}}\, \mathbf{x} + \mathcal{R}_{\mathbf{w}}(i)) & \mbox{if}\ [\mathbf{x},i] \in \Gamma(\mathbf{w}), \\ \emptyset & \mbox{otherwise}, \end{cases} \] since $\langle \mathbf{w}, x\, (\mathbf{e}_i - \pi_{\mathbf{u},\mathbf{w}}\, \mathbf{e}_i)\rangle = x \langle \mathbf{w}, \mathbf{e}_i\rangle$ and $\pi_{\mathbf{u},\mathbf{w}} (\mathbf{e}_i - \pi_{\mathbf{u},\mathbf{w}}\, \mathbf{e}_i) = \mathbf{0}$. 
This implies that for $\mathbf{x}, \mathbf{y} \in \mathbb{Z}^d$ we have \[ \big({-}\mathbf{x} + \widehat{\mathcal{R}}_{\mathbf{w}}(i)\big) \cap \big(\mathbf{y} + \mathbf{w}^\bot\big) = \begin{cases}\mathbf{y} - \big(\pi_{\mathbf{u},\mathbf{w}} (\mathbf{x} + \mathbf{y}) + \mathcal{R}_{\mathbf{w}}(i)\big) & \mbox{if}\ [\mathbf{x}+\mathbf{y},i] \in \Gamma(\mathbf{w}), \\ \emptyset & \mbox{otherwise}, \end{cases} \] i.e., the intersection of $\widehat{\mathcal{C}}_{\mathbf{w}}$ with $\mathbf{y} + \mathbf{w}^\bot$ is a translation of~$\mathcal{C}_{\mathbf{w}}$. Moreover, we have \begin{equation} \label{e:yzu} \big({-}\mathbf{x} + \widehat{\mathcal{R}}_{\mathbf{w}}(i)\big) \cap \big(\mathbf{y} + z\, \mathbf{u} + \mathbf{w}^\bot\big) = \big({-}\mathbf{x} + \widehat{\mathcal{R}}_{\mathbf{w}}(i)\big) \cap \big(\mathbf{y} + \mathbf{w}^\bot\big) + z\, \mathbf{u} \end{equation} for all $0 \le z < \langle \mathbf{w}, \mathbf{e}_i - \mathbf{x}-\mathbf{y}\rangle$. This proves the statement of the proposition when $\{\langle\mathbf{w},\mathbf{y}\rangle:\, \mathbf{y}\in\mathbb{Z}^d\}$ is dense in~$\mathbb{R}$, i.e., when $\mathbf{w}$ is not a multiple of a rational vector. If $\mathbf{w}$ is a multiple of a rational vector, then $\{\langle\mathbf{w},\mathbf{y}\rangle:\, \mathbf{y}\in\mathbb{Z}^d\} = c\, \langle \mathbf{w},\mathbf{u}\rangle\, \mathbb{Z}$ for some $c > 0$. Now, \eqref{e:yzu} holds for all $\mathbf{x}, \mathbf{y} \in \mathbb{Z}^d$, $0 \le z < c$, hence the statement of the proposition holds in this case as well. \end{proof} \subsection{Coincidences} In this subsection, we show that strong coincidence implies non-overlapping of the pieces~$\mathcal{R}(i)$. Moreover, we prove that geometric coincidence is equivalent to tiling. We also give variants of the geometric coincidence condition that can be checked algorithmically in certain cases. 
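Before turning to the formal statements, we illustrate the strong coincidence condition in the simplest, purely periodic setting; this example serves only as an illustration and is not used in the sequel. Let every~$\sigma_n$ be the Fibonacci substitution $\sigma\colon 1 \mapsto 12$, $2 \mapsto 1$. Then strong coincidence holds already with $\ell = 1$: decomposing \[ \sigma(1) = \varepsilon\hspace{.1em}1\hspace{.1em}2, \qquad \sigma(2) = \varepsilon\hspace{.1em}1\hspace{.1em}\varepsilon, \] both images contain the letter $i = 1$ preceded by the empty prefix, whose abelianization $\mathbf{l}(\varepsilon) = \mathbf{0}$ trivially agrees for the pair $(1,2)$. In terms of~$E_1^*$, this means that $[\mathbf{0},1], [\mathbf{0},2] \in E_1^*(\sigma)[\mathbf{0},1]$, which is exactly the reformulation used in the proof of Proposition~\ref{p:strongcoincidence} below.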
\begin{proposition} \label{p:strongcoincidence} Let $S$ be a finite or infinite set of unimodular substitutions over the finite alphabet $\mathcal{A}$ and assume that $\boldsymbol{\sigma}\in S^{\mathbb{N}}$ has Property PRICE and satisfies the strong coincidence condition. Then the subtiles~$\mathcal{R}(i)$, $i \in \mathcal{A}$, are pairwise disjoint in measure. \end{proposition} \begin{proof} Let the sequence~$(n_k)$ and the vector~$\mathbf{v}$ be as in Definition~\ref{def:star}. By the definition of~$E_1^*$, strong coincidence can be reformulated by saying that there is $\ell \in \mathbb{N}$ such that, for each pair of distinct $j_1,j_2 \in \mathcal{A}$, there are $i \in \mathcal{A}$ and $\mathbf{y} \in \mathbb{Z}^d$ such that $[\mathbf{y},j_1], [\mathbf{y}, j_2] \in E_1^*(\sigma_{[0,\ell)})[\mathbf{0},i]$. Thus Proposition~\ref{p:disjoint} yields that \begin{equation} \label{e:j1j2disjoint} \lambda_{\mathbf{v}^{(\ell)}}\big(\mathcal{R}^{(\ell)}_\mathbf{v}(j_1) \cap \mathcal{R}^{(\ell)}_\mathbf{v}(j_2)\big) = \lambda_{\mathbf{v}^{(\ell)}}\big(\big(\pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \mathbf{y} + \mathcal{R}^{(\ell)}_\mathbf{v}(j_1)\big) \cap \big(\pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \mathbf{y} + \mathcal{R}^{(\ell)}_\mathbf{v}(j_2)\big)\big)= 0. \end{equation} We can replace~$\ell$ by any $n \ge \ell$ since, for distinct $j_1,j_2 \in \mathcal{A}$, we have $[\mathbf{0},j_1] \in E_1^*(\sigma_{[\ell,n)})[\mathbf{0},j_1']$ and $[\mathbf{0},j_2] \in E_1^*(\sigma_{[\ell,n)})[\mathbf{0},j_2']$, where $j_1'$ and $j_2'$ are the first letters of $\sigma_{[\ell,n)}(j_1)$ and~$\sigma_{[\ell,n)}(j_2)$, respectively, thus $\lambda_{\mathbf{v}^{(n)}}(\mathcal{R}^{(n)}_\mathbf{v}(j_1) \cap \mathcal{R}^{(n)}_\mathbf{v}(j_2)) = 0$ by Proposition~\ref{p:disjoint} and~\eqref{e:j1j2disjoint}. By Lemma~\ref{l:close2}, this implies that $\lambda_\mathbf{v}(\mathcal{R}_\mathbf{v}(j_1) \cap \mathcal{R}_\mathbf{v}(j_2)) = 0$.
\end{proof} \begin{remark}[Negative strong coincidence]\label{rem:-} It is sometimes convenient (see Section \ref{sec:examples}) to use the following variant of the strong coincidence condition for suffixes: a~sequence of substitutions $\boldsymbol{\sigma} = (\sigma_n)_{n\in\mathbb{N}}\in S^{\mathbb{N}}$ satisfies the \emph{negative strong coincidence condition} if there is $\ell \in \mathbb{N}$ such that, for each pair $(j_1,j_2) \in \mathcal{A} \times \mathcal{A}$, there are $i \in \mathcal{A}$ and $s_1, s_2 \in \mathcal{A}^*$ with $\mathbf{l}(s_1) = \mathbf{l}(s_2)$ such that $i\hspace{.1em}s_1$ is a suffix of~$\sigma_{[0,\ell)}(j_1)$ and $i\hspace{.1em}s_2$ is a suffix of~$\sigma_{[0,\ell)}(j_2)$, where $v$ is a \emph{suffix} of $w \in \mathcal{A}^*$ if $w \in \mathcal{A}^* v$. Assume that $\boldsymbol{\sigma}$ has Property PRICE. Then negative strong coincidence also allows us to conclude that the sets $\mathcal{R}(i)$, $i \in \mathcal{A}$, are pairwise disjoint in measure. Indeed, negative strong coincidence implies that $[\mathbf{l}(j_1) - \mathbf{y}, j_1], [\mathbf{l}(j_2) - \mathbf{y}, j_2]\in E_1^*(\sigma_{[0,\ell)})[\mathbf{0},i]$, with $\mathbf{y} = (M_{[0,\ell)})^{-1}\, \mathbf{l}(i\hspace{.1em}s_1)$, thus \[ \lambda_{\mathbf{v}^{(\ell)}} \big( \big(\pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \mathbf{l}(j_1) + \mathcal{R}^{(\ell)}_\mathbf{v}(j_1)\big) \cap \big(\pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \mathbf{l}(j_2) + \mathcal{R}^{(\ell)}_\mathbf{v}(j_2)\big) \big) = 0. \] By the definition of~$\mathcal{R}^{(\ell)}_\mathbf{v}$ and its subtiles, we have \[ \bigcup_{j\in\mathcal{A}} \mathcal{R}^{(\ell)}_\mathbf{v}(j) = \mathcal{R}^{(\ell)}_\mathbf{v} = \bigcup_{j\in\mathcal{A}} \big(\pi_{\mathbf{u},\mathbf{v}}^{(\ell)}\, \mathbf{l}(j) + \mathcal{R}^{(\ell)}_\mathbf{v}(j)\big).
\] From disjointness in the union on the right, we get that $\lambda_{\mathbf{v}^{(\ell)}}\big(\mathcal{R}^{(\ell)}_\mathbf{v}\big) = \sum_{j\in\mathcal{A}} \lambda_{\mathbf{v}^{(\ell)}}\big(\mathcal{R}^{(\ell)}_\mathbf{v}(j)\big)$, hence, the union on the left is also disjoint in measure. The remainder of the proof is now exactly the same as in Proposition~\ref{p:strongcoincidence}. \end{remark} \begin{proposition} \label{p:gcc} Let $S$ be a finite or infinite set of unimodular substitutions over a finite alphabet and assume that $\boldsymbol{\sigma}\in S^{\mathbb{N}}$ has Property PRICE. Then the following assertions are equivalent. \renewcommand{\theenumi}{\roman{enumi}} \begin{enumerate} \item \label{i:gcc1} The collection $\mathcal{C}_\mathbf{w}$ forms a tiling of~$\mathbf{w}^\bot$ for some $\mathbf{w} \in \mathbb{R}_{\ge0}^d \setminus \{\mathbf{0}\}$. \item \label{i:gcc2} The collection $\mathcal{C}_\mathbf{w}$ forms a tiling of~$\mathbf{w}^\bot$ for all $\mathbf{w} \in \mathbb{R}_{\ge0}^d \setminus \{\mathbf{0}\}$. \item \label{i:gcc3} The sequence $\boldsymbol{\sigma}$ satisfies the geometric coincidence condition, that is, for each $R > 0$ there is $\ell \in \mathbb{N}$, such that, for all $n \ge \ell$, \begin{equation} \label{e:gcc3} \big\{[\mathbf{y},j] \in \Gamma(\tr{(M_{[0,n)})}\, \mathbf{1}):\, \|\mathbf{y} - \mathbf{z}_n\| \le R\big\} \subset E_1^*(\sigma_{[0,n)})[\mathbf{0},i_n] \end{equation} for some $i_n \in \mathcal{A}$, $\mathbf{z}_n \in (M_{[0,n)})^{-1} \mathbf{1}^\bot$. \item \label{i:gcc4} There are $n \in \mathbb{N}$, $i \in \mathcal{A}$, $\mathbf{z} \in \mathbb{R}^d$, such that \[ \big\{[\mathbf{y},j] \in \Gamma(\tr{(M_{[0,n)})}\, \mathbf{1}):\, \|\pi_{(M_{[0,n)})^{-1}\mathbf{u},\mathbf{1}} (\mathbf{y} - \mathbf{z})\| \le C\big\} \subset E_1^*(\sigma_{[0,n)})[\mathbf{0},i], \] with $C \in \mathbb{N}$ chosen in such a way that $\mathcal{L}_{\boldsymbol{\sigma}}^{(n)}$ is $C$-balanced.
\end{enumerate} \end{proposition} \begin{proof} We show the implications (\ref{i:gcc1}) $\Leftrightarrow$ (\ref{i:gcc2}) $\Rightarrow$ (\ref{i:gcc3}) $\Rightarrow$ (\ref{i:gcc4}) $\Rightarrow$ (\ref{i:gcc1}). \medskip \noindent (\ref{i:gcc1}) $\Leftrightarrow$ (\ref{i:gcc2}). This is a special case of Proposition~\ref{p:independentmultiple}. \medskip \noindent (\ref{i:gcc2}) $\Rightarrow$ (\ref{i:gcc3}). By the tiling property for $\mathbf{w} = \mathbf{1}$, $\mathcal{R}(i)$~contains an exclusive open ball~$\mathcal{B}(i)$ for each $i \in\mathcal{A}$. For $[\mathbf{y},j] \in \Gamma(\tr{(M_{[0,n)})}\, \mathbf{1})$, we thus have $[\mathbf{y},j] \in E_1^*(\sigma_{[0,n)})[\mathbf{0},i]$ if $M_{[0,n)} (\pi_{\mathbf{u},\mathbf{1}}^{(n)}\, \mathbf{y} + \mathcal{R}^{(n)}_\mathbf{1}(j)) \cap \mathcal{B}(i) \neq \emptyset$. Let $i \in \mathcal{A}$ and $\tilde{\mathbf{z}} \in \mathcal{B}(i)$. By Proposition~\ref{p:strongconvergence} and Lemma~\ref{l:smallsubtiles}, we obtain that \eqref{e:gcc3} holds for $i_n = i$ and $\mathbf{z}_n = (M_{[0,n)})^{-1} \tilde{\mathbf{z}}$, provided that $n$ is sufficiently large. \medskip \noindent (\ref{i:gcc3}) $\Rightarrow$ (\ref{i:gcc4}). Let the sequences $(\ell_k)$ and $(n_k)$, the positive matrix~$B$, and~$C$ be as in Definition~\ref{def:star}. Then there are constants $c_1, c_2 > 0$ such that $\|\mathbf{x}\| \le c_1 \|\pi_{\tilde{\mathbf{u}},\mathbf{1}}\, \mathbf{x}\| + c_2$ for all $\tilde{\mathbf{u}} \in \mathbb{R}_+^d$, $\mathbf{x} \in \mathbb{R}^d$ with $0 \le \langle \mathbf{x}, \mathbf{w} \rangle < \|\mathbf{w}\|$ for some $\mathbf{w} \in \tr B\, \mathbb{R}_+^d$. Let $k$ be such that \eqref{e:gcc3} holds for $R = c_1 C + c_2$, $n = n_k + \ell_k$ and some $i_n \in \mathcal{A}$, $\mathbf{z}_n \in (M_{[0,n)})^{-1} \mathbf{1}^\bot$.
Let $\tilde{\mathbf{u}} = (M_{[0,n_k+\ell_k)})^{-1} \mathbf{u}$, $\mathbf{w} = \tr{(M_{[0,n_k+\ell_k)})}\, \mathbf{1}$, and consider $[\mathbf{y},j] \in \Gamma(\mathbf{w})$ with $\|\pi_{\tilde{\mathbf{u}},\mathbf{1}} (\mathbf{y} - \mathbf{z}_n)\| \le C$. Since $\mathbf{w} \in \tr B\, \mathbb{R}_+^d$, $0 \le \langle \mathbf{y}, \mathbf{w} \rangle < \|\mathbf{w}\|$, and $\langle \mathbf{z}_n, \mathbf{w}\rangle = 0$, we have $\|\mathbf{y} - \mathbf{z}_n\| \le c_1C+c_2$, thus \eqref{e:gcc3} implies that $[\mathbf{y}, j] \in E_1^*(\sigma_{[0,n)})[\mathbf{0},i_n]$. As $\mathcal{L}_{\boldsymbol{\sigma}}^{(n_k+\ell_k)}$ is $C$-balanced, we get (\ref{i:gcc4}) with $i = i_n$, $\mathbf{z} = \mathbf{z}_n$. \medskip \noindent (\ref{i:gcc4}) $\Rightarrow$ (\ref{i:gcc1}). Let $n, i, \mathbf{z}, C$ be as in~(\ref{i:gcc4}). By Lemmas~\ref{l:bounded} and~\ref{l:e1star} and Proposition~\ref{p:setequation}, there is a neighborhood~$U$ of~$\pi_{\mathbf{u},\mathbf{1}}^{(n)}\, \mathbf{z}$ such that $M_{[0,n)}\, U$ lies in~$\mathcal{R}(i)$ and intersects no other tile of~$\mathcal{C}_\mathbf{1}$. By Proposition~\ref{p:independentmultiple}, this implies that~$\mathcal{C}_\mathbf{1}$ is a tiling. \end{proof} \begin{proposition}\label{p:gccvariant} Let $S$ be a finite or infinite set of unimodular substitutions over the finite alphabet $\mathcal{A}$ and assume that $\boldsymbol{\sigma}\in S^{\mathbb{N}}$ has Property PRICE. The collection $\mathcal{C}_\mathbf{1}$ forms a tiling of~$\mathbf{1}^\bot$ if and only if $\boldsymbol{\sigma}$ satisfies the strong coincidence condition and for each $R > 0$ there exists $\ell \in \mathbb{N}$ such that $\bigcup_{i\in\mathcal{A}} E_1^*(\sigma_{[0,n)})[\mathbf{0},i]$ contains a ball of radius~$R$ of $\Gamma(\tr{(M_{[0,n)})}\, \mathbf{1})$ for all $n \ge \ell$. 
If $\boldsymbol{\sigma}$ satisfies the geometric finiteness property, then $\mathbf{0}$ is an inner point of~$\mathcal{R}$ and $\mathbf{0} \not\in \pi_{\mathbf{u},\mathbf{1}}\, \mathbf{x} + \mathcal{R}(i)$ for all $[\mathbf{x},i] \in \Gamma(\mathbf{1})$ with $\mathbf{x} \ne \mathbf{0}$. \end{proposition} \begin{proof} Assume first that $\mathcal{C}_\mathbf{1}$ forms a tiling. Then $\boldsymbol{\sigma}$ satisfies the geometric coincidence condition by Proposition~\ref{p:gcc}. Thus, for each $R > 0$ and sufficiently large~$n$, $E_1^*(\sigma_{[0,n)})[\mathbf{0},i_n]$ contains a ball of radius~$R$ of $\Gamma(\tr{(M_{[0,n)})}\, \mathbf{1})$ for some $i_n \in \mathcal{A}$. By Lemma~\ref{l:relativelydense}, there is $R > 0$ such that, for $k$ large enough, each ball of radius $R$ in $\Gamma(\tr{(M_{[0,n_k)})}\, \mathbf{1})$ contains a translate of the patch~$\mathcal{U} = \{[\mathbf{0},i]:\, i\in \mathcal{A}\}$. Therefore, we have some $k \in \mathbb{N}$, $i \in \mathcal{A}$, and $\mathbf{x} \in \mathbb{Z}^d$ such that $\mathbf{x} + \mathcal{U} \subset E_1^*(\sigma_{[0,n_k)})[\mathbf{0},i]$. This shows that the strong coincidence condition holds. The proof of the converse direction runs along the same lines as the corresponding part of the proof of Proposition~\ref{p:gcc}, that is, (\ref{i:gcc3}) $\Rightarrow$ (\ref{i:gcc4}) $\Rightarrow$ (\ref{i:gcc1}). We have to replace $E_1^*(\sigma_{[0,n)})[\mathbf{0},i_n]$ and $E_1^*(\sigma_{[0,n)})[\mathbf{0},i]$ by $\bigcup_{i\in\mathcal{A}} E_1^*(\sigma_{[0,n)})[\mathbf{0},i]$ and use Proposition~\ref{p:strongcoincidence}.
If $\boldsymbol{\sigma}$ satisfies the geometric finiteness property, then we obtain as in Proposition~\ref{p:gcc} (\ref{i:gcc3}) $\Rightarrow$ (\ref{i:gcc4}) that $\big\{[\mathbf{y},j] \in \Gamma(\tr{(M_{[0,n)})}\, \mathbf{1}):\, \|\pi_{(M_{[0,n)})^{-1}\mathbf{u},\mathbf{1}}\, \mathbf{y}\| \le C\big\} \subset \bigcup_{i\in\mathcal{A}} E_1^*(\sigma_{[0,n)})[\mathbf{0},i]$ for some $n \in \mathbb{N}$, with $C$ such that $\mathcal{L}_{\boldsymbol{\sigma}}^{(n)}$ is $C$-balanced, thus $\mathbf{0} \not\in \pi_{\mathbf{u},\mathbf{1}}\, \mathbf{x} + \mathcal{R}(i)$ for all $[\mathbf{x},i] \in \Gamma(\mathbf{1})$ with $\mathbf{x} \ne \mathbf{0}$. As $\mathcal{C}_\mathbf{1}$ is a covering of~$\mathbf{1}^\bot$ by Proposition~\ref{p:covering}, we get that $\mathbf{0}$ is an inner point of~$\mathcal{R}$. \end{proof} \begin{remark}\label{rem:-2} Proposition~\ref{p:gccvariant} remains true with an analogous proof if strong coincidence is replaced by negative strong coincidence in its statement. Also, Proposition~\ref{p:gccvariant} admits an effective version analogous to Proposition~\ref{p:gcc}~(\ref{i:gcc4}). \end{remark} \section{Dynamical properties of $S$-adic shifts}\label{sec:UE} We now use the results of the previous sections to investigate the dynamics of $S$-adic shifts. At the end of this section we will have collected all the necessary preparations to finish the proofs of Theorems~\ref{t:1} and~\ref{t:3}. \subsection{Minimality and unique ergodicity} First we observe that \cite[Theorem~5.2]{Berthe-Delecroix} implies the following result. \begin{lemma}\label{lem:minimal} Let $S$ be a finite or infinite set of unimodular substitutions over a finite alphabet and let $\boldsymbol{\sigma}\in S^{\mathbb{N}}$ be a primitive directive sequence. Then the $S$-adic shift $(X_{\boldsymbol{\sigma}}, \Sigma)$ is minimal. Thus each infinite word in~$X_{\boldsymbol{\sigma}}$ is uniformly recurrent. \end{lemma} To obtain unique ergodicity we need slightly stronger assumptions.
\begin{lemma}\label{lem:uniquelyergodic} Let $S$ be a finite or infinite set of unimodular substitutions over a finite alphabet and let $\boldsymbol{\sigma}\in S^{\mathbb{N}}$ be a primitive, recurrent directive sequence. Then the $S$-adic shift $(X_{\boldsymbol{\sigma}}, \Sigma)$ is uniquely ergodic. \end{lemma} \begin{proof} Primitivity and recurrence of~$\boldsymbol{\sigma}$ imply that there are indices $k_1 < \ell_1 \le k_2 < \ell_2 \le \cdots$ and a positive matrix~$B$ such that $B = M_{[k_1,\ell_1)} = M_{[k_2,\ell_2)} = \cdots$. From~\eqref{e:topPF} we therefore obtain that $\bigcap_{n\ge k} M_{[k,n)}\, \mathbb{R}^d_+$ is one-dimensional for each $k\in\mathbb{N}$ and, hence, \cite[Theorem~5.7]{Berthe-Delecroix} yields the result (the fact that $\boldsymbol{\sigma}$ is ``everywhere growing'' in the sense stated in that theorem is an immediate consequence of primitivity and recurrence). \end{proof} \subsection{Representation map} In order to set up a representation map from~$X_{\boldsymbol{\sigma}}$ to~$\mathcal{R}$, we define refinements of the subtiles of~$\mathcal{R}$ by \[ \mathcal{R}(w) = \overline{\{\pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(p):\, p \in \mathcal{A}^*,\ \mbox{$p\hspace{.1em}w$ is a prefix of a limit word of $\boldsymbol{\sigma}$}\}} \quad (w \in \mathcal{A}^*). \] \begin{lemma} \label{l:convrefinement} Let $S$ be a finite or infinite set of unimodular substitutions over a finite alphabet and let $\boldsymbol{\sigma}\in S^{\mathbb{N}}$ be a primitive, algebraically irreducible, and recurrent directive sequence with balanced language~$\mathcal{L}_{\boldsymbol{\sigma}}$. Then $\bigcap_{n\in\mathbb{N}} \mathcal{R}(\zeta_0 \zeta_1 \cdots \zeta_{n-1})$ is a single point in~$\mathcal{R}$ for each infinite word $\zeta_0 \zeta_1 \cdots \in X_{\boldsymbol{\sigma}}$.
Therefore, the representation map \[ \varphi:\, X_{\boldsymbol{\sigma}} \to \mathcal{R},\ \zeta_0 \zeta_1 \cdots \mapsto \bigcap_{n\in\mathbb{N}} \mathcal{R}(\zeta_0 \zeta_1 \cdots \zeta_{n-1}), \] is well-defined, continuous and surjective. \end{lemma} \begin{proof} Let $\zeta = \zeta_0 \zeta_1 \cdots \in X_{\boldsymbol{\sigma}}$ and let $\omega$ be a limit word of~$\boldsymbol{\sigma}$. Then $\mathcal{R} = \mathcal{R}(\zeta_{[0,0)}) \supset \mathcal{R}(\zeta_{[0,1)}) \supset \cdots$, and $\mathcal{R}(\zeta_{[0,n)}) \ne \emptyset$ for all $n\in\mathbb{N}$, where we use the abbreviation $\zeta_{[k,\ell)} = \zeta_k \zeta_{k+1} \cdots \zeta_{\ell-1}$. As $(X_{\boldsymbol{\sigma}}, \Sigma)$ is minimal by Lemma~\ref{lem:minimal}, we have a sequence $(n_k)_{k\in\mathbb{N}}$ such that $\zeta_{[n_k,n_k+k)} = \omega_{[0,k)}$ for all $k \in \mathbb{N}$. Since $\mathcal{R}(\zeta_{[0,n_k+k)}) \subset \mathcal{R}(\zeta_{[n_k,n_k+k)}) - \pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(\zeta_{[0,n_k)})$, it only remains to show that the diameter of $\mathcal{R}(\zeta_{[n_k,n_k+k)}) = \mathcal{R}(\omega_{[0,k)})$ converges to zero. In fact, we show that $\bigcap_{k\in\mathbb{N}} \mathcal{R}(\omega_{[0,k)}) = \{\mathbf{0}\}$. Let $\mathcal{S}_k = \{\pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(\omega_{[0,n)}):\, 0 \le n \le k\}$. Then we clearly have $\mathcal{R}(\omega_{[0,k)}) + \mathcal{S}_k \subset \mathcal{R}$ for all $k \in \mathbb{N}$. We also have $\lim_{k\to\infty} \mathcal{S}_k = \mathcal{R}$ (in Hausdorff metric) because, for each prefix $\tilde{p}$ of a limit word~$\tilde{\omega}$, $\pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(\tilde{p})$ can be approximated arbitrarily well by $\pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(p)$ with a prefix $p$ of~$\omega$, by primitivity and Proposition~\ref{p:strongconvergence}. This implies that $\lim_{k\to\infty} \mathcal{R}(\omega_{[0,k)}) = \{\mathbf{0}\}$, which proves that $\varphi$ is well-defined.
Since the sequence $(\mathcal{R}(\zeta_{[0,n)}))_{n\in\mathbb{N}}$ is nested and converges to a single point, $\varphi$ is continuous. The surjectivity follows from a Cantor diagonal argument. \end{proof} \subsection{Domain exchange} Suppose that the strong coincidence condition\footnote{All the results of this subsection remain true if strong coincidence is replaced by negative strong coincidence.} holds. Then, by Proposition~\ref{p:strongcoincidence}, the \emph{domain exchange} \begin{equation} \label{e:T} E:\ \mathcal{R} \to \mathcal{R}, \quad \mathbf{x} \mapsto \mathbf{x} + \pi_{\mathbf{u},\mathbf{1}}\, \mathbf{e}_i \quad \mbox{if $\mathbf{x} \in \mathcal{R}(i) \setminus \bigcup_{j\ne i} \mathcal{R}(j)$}, \end{equation} is well-defined almost everywhere on~$\mathcal{R}$. This map induces a dynamical system $(\mathcal{R}, E, \lambda_\mathbf{1})$. \begin{proposition}\label{p:domainexchange} Let $S$ be a finite or infinite set of unimodular substitutions over a finite alphabet $\mathcal{A}$. If $\boldsymbol{\sigma}\in S^{\mathbb{N}}$ has Property PRICE and satisfies the strong coincidence condition, then the following results hold. \renewcommand{\theenumi}{\roman{enumi}} \begin{enumerate} \itemsep1ex \item \label{i:de1} The domain exchange map $E$ is $\lambda_\mathbf{1}$-almost everywhere bijective. \item \label{i:de2} Each collection $\mathcal{K}_n = \{\mathcal{R}(w): w \in \mathcal{L}_{\boldsymbol{\sigma}} \cap \mathcal{A}^n\}$, $n \in \mathbb{N}$, is a measure-theoretic partition of~$\mathcal{R}$. \item \label{i:de3} The representation map~$\varphi$ is $\mu$-almost everywhere bijective, where $\mu$ is the unique $\Sigma$-invariant probability measure on $(X_{\boldsymbol{\sigma}},\Sigma)$. \item \label{i:de4} The dynamical system $(X_{\boldsymbol{\sigma}}, \Sigma, \mu)$ is measurably conjugate to the domain exchange $(\mathcal{R}, E, \lambda_\mathbf{1})$.
More precisely, the following diagram commutes: \[ \begin{CD} X_{\boldsymbol{\sigma}} @> \Sigma >> X_{\boldsymbol{\sigma}} \\ @VV\varphi V @VV\varphi V\\ \mathcal{R} @> E >> \mathcal{R} \end{CD} \] \end{enumerate} \end{proposition} \begin{proof} All the following statements are to be understood up to measure zero. Since $\boldsymbol{\sigma}$ satisfies the strong coincidence condition, Proposition~\ref{p:strongcoincidence} implies that the map~$E$ is a well-defined isometry on~$\mathcal{R}(i)$, with \[ E(\mathcal{R}(i)) = \overline{\{\pi_{\mathbf{u},\mathbf{1}}\, \mathbf{l}(p\hspace{.1em}i):\, p \in \mathcal{A}^*,\ \mbox{$p\hspace{.1em}i$ is a prefix of a limit word of $\boldsymbol{\sigma}$}\}} \quad (i \in\mathcal{A}). \] Therefore, we have $\bigcup_{i\in\mathcal{A}} E(\mathcal{R}(i)) = \mathcal{R}$. Thus $E$ is a surjective piecewise isometry, hence, it is also injective, which proves Assertion~(\ref{i:de1}). As \begin{equation} \label{e:R0n} \mathcal{R}(w_0 w_1 \cdots w_{n-1}) = \bigcap_{\ell=0}^{n-1} E^{-\ell} \mathcal{R}(w_\ell), \end{equation} Assertion~(\ref{i:de2}) is again a consequence of Proposition~\ref{p:strongcoincidence} together with the injectivity of~$E$. Since \begin{equation}\label{eq:comm} E \circ \varphi = \varphi \circ \Sigma \end{equation} follows easily by direct calculation, the measure $\lambda_\mathbf{1} \circ \varphi$ is a shift invariant probability measure on~$X_{\boldsymbol{\sigma}}$. Thus, by unique ergodicity of $(X_{\boldsymbol{\sigma}}, \Sigma, \mu)$, we have $\mu = \lambda_\mathbf{1} \circ \varphi$. Now, Assertion~(\ref{i:de2}) implies that $\varphi(\mathbf{x}) \ne \varphi(\mathbf{y})$ for all distinct $\mathbf{x}, \mathbf{y} $ satisfying $\varphi(\mathbf{x}),\varphi(\mathbf{y} ) \in \mathcal{R} \setminus \bigcup_{n\in\mathbb{N}, K\in\mathcal{K}_n} \partial(K)$. 
As, by \eqref{e:R0n} and Proposition~\ref{p:boundary}, $\lambda_\mathbf{1}(\partial K)=\mu(\varphi^{-1}(\partial K)) = 0$ for all $K\in\mathcal{K}_n$, $n\in\mathbb{N}$, the map~$\varphi$ is a.e.\ injective, which, together with Lemma~\ref{l:convrefinement}, proves Assertion~(\ref{i:de3}). Finally, using~\eqref{eq:comm}, Assertion~(\ref{i:de4}) follows immediately from Assertion~(\ref{i:de3}). \end{proof} \subsection{Group translations} Fix some $j\in\mathcal{A}$. If $\mathcal{C}_\mathbf{1}$ forms a tiling of~$\mathbf{1}^\bot$, then $\mathcal{R}$ is a fundamental domain of the lattice $\Lambda = \mathbf{1}^\bot \cap \mathbb{Z}^d$ (which is spanned by $\mathbf{e}_j - \mathbf{e}_i$, $i \in \mathcal{A} \setminus \{j\}$). Since $\pi_{\mathbf{u},\mathbf{1}}\, \mathbf{e}_i \equiv \pi_{\mathbf{u},\mathbf{1}}\, \mathbf{e}_j \pmod \Lambda$ holds for each $i\in\mathcal{A}$, the canonical projection of~$E$ onto the torus $\mathbf{1}^\bot / \Lambda \simeq \mathbb{T}^{d-1}$ is equal to the translation $\mathbf{x} \mapsto \mathbf{x} + \pi_{\mathbf{u},\mathbf{1}}\, \mathbf{e}_{j}$. In general, even if the strong coincidence condition is not satisfied, the following proposition holds. \begin{proposition}\label{p:rotate} Let $S$ be a finite or infinite set of unimodular substitutions over the finite alphabet $\mathcal{A}$ and let $\boldsymbol{\sigma}\in S^{\mathbb{N}}$ be a primitive, algebraically irreducible, and recurrent directive sequence with balanced language~$\mathcal{L}_{\boldsymbol{\sigma}}$. Fix $j\in\mathcal{A}$. If $\mathcal{C}_\mathbf{1}$ forms a multiple tiling of~$\mathbf{1}^\bot$, then the translation $(\mathbf{1}^\bot/\Lambda, + \pi_{\mathbf{u},\mathbf{1}}\, \mathbf{e}_j, \overline{\lambda_\mathbf{1}})$, where $\overline{\lambda_\mathbf{1}}$ denotes the Haar measure on the torus $\mathbf{1}^\bot/\Lambda$, is a topological factor of the dynamical system $(X_{\boldsymbol{\sigma}}, \Sigma, \mu)$. 
If furthermore $\mathcal{C}_\mathbf{1}$ forms a tiling of~$\mathbf{1}^\bot$, then $(X_{\boldsymbol{\sigma}}, \Sigma, \mu)$ is measurably conjugate to the translation $(\mathbf{1}^\bot/\Lambda,+ \pi_{\mathbf{u},\mathbf{1}}\, \mathbf{e}_j, \overline{\lambda_\mathbf{1}})$. More precisely, the following diagram commutes: \[ \begin{CD} X_{\boldsymbol{\sigma}} @> \Sigma >> X_{\boldsymbol{\sigma}} \\ @VV\overline{\varphi} V @VV\overline{\varphi} V\\ \mathbf{1}^\bot/\Lambda @> + \pi_{\mathbf{u},\mathbf{1}}\, \mathbf{e}_{j} >> \mathbf{1}^\bot/\Lambda \end{CD} \] Here, $\overline{\varphi}$ is the canonical projection of the representation mapping~$\varphi$ onto $\mathbf{1}^\bot/\Lambda$. \end{proposition} \begin{proof} If $\zeta = \zeta_0\zeta_1\cdots \in X_{\boldsymbol{\sigma}}$, then $\varphi \circ \Sigma(\zeta) = \varphi(\zeta) + \pi_{\mathbf{u},\mathbf{1}}\, \mathbf{e}_{\zeta_0}$. Applying the canonical projection onto $\mathbf{1}^\bot / \Lambda$, this identity becomes $\overline{\varphi}\circ\Sigma(\zeta) = \overline{\varphi}(\zeta) + \pi_{\mathbf{u},\mathbf{1}}\, \mathbf{e}_{j}$. The result now follows by noting that $\overline{\varphi}$ is an $m$-to-one surjection, where $m$ is the covering degree of~$\mathcal{C}_\mathbf{1}$, and, hence, a bijection if $\mathcal{C}_\mathbf{1}$ forms a tiling. \end{proof} \subsection{Proof of Theorem~\ref{t:1}} Let $S$ be a finite or infinite set of unimodular substitutions over the finite alphabet $\mathcal{A}$ and let $\boldsymbol{\sigma}\in S^{\mathbb{N}}$. We are now in a position to finish the proof of Theorem~\ref{t:1} by collecting the results proved so far. Throughout the proof, observe that in view of Lemma~\ref{lem:th1star} the conditions of Theorem~\ref{t:1} imply that~$\boldsymbol{\sigma}$ has Property PRICE. Concerning~(\ref{i:11}), we see that $(X_{\boldsymbol{\sigma}},\Sigma)$ is minimal by Lemma~\ref{lem:minimal} and uniquely ergodic by Lemma~\ref{lem:uniquelyergodic}.
The unique $\Sigma$-invariant measure on~$X_{\boldsymbol{\sigma}}$ is denoted by~$\mu$. As for~(\ref{i:12}), first observe that $\mathcal{R}(i)$ is closed by definition ($i \in \mathcal{A}$). Thus compactness of~$\mathcal{R}(i)$ follows from Lemma~\ref{l:bounded}. The fact that $\lambda_\mathbf{1}(\partial \mathcal{R}(i)) = 0$ is contained in Proposition~\ref{p:boundary}. The multiple tiling property of the collection~$\mathcal{C}_\mathbf{1}$ in~(\ref{i:13}) follows from Proposition~\ref{p:independentmultiple} by taking $\mathbf{w} = \mathbf{1}$. The finite-to-one covering property comes from Proposition~\ref{p:rotate}, and it implies that $(X_{\boldsymbol{\sigma}},\Sigma,\mu)$ is not weakly mixing; see also \cite[Theorem~2.4]{Furstenberg}. To prove~(\ref{i:14}), first observe that strong coincidence implies that the sets~$\mathcal{R}(i)$, $i\in \mathcal{A}$, are measurably disjoint by Proposition~\ref{p:strongcoincidence}. Thus Proposition~\ref{p:domainexchange}~(\ref{i:de4}) implies that $(X_{\boldsymbol{\sigma}},\Sigma, \mu)$ is measurably conjugate to an exchange of domains on~$\mathcal{R}$. To prove~(\ref{i:15}), we combine Propositions~\ref{p:gcc} and~\ref{p:independentmultiple}. This yields that the geometric coincidence condition is equivalent to the fact that $\mathcal{C}_\mathbf{1}$ forms a tiling. We now turn to the results that are valid under the assumption that $\mathcal{C}_\mathbf{1}$ forms a tiling. To prove~(\ref{i:16}), we use Proposition~\ref{p:rotate}, which implies that $(X_{\boldsymbol{\sigma}}, \Sigma, \mu)$ is measurably conjugate to a translation~$T$ on the torus~$\mathbb{T}^{d-1}$. This implies that $(X_{\boldsymbol{\sigma}}, \Sigma, \mu)$ has purely discrete measure-theoretic spectrum by classical results. Assertion~(\ref{i:17}) follows from the definition of a natural coding (see Section~\ref{sec:coding}), as the translation~$T$ was defined in terms of an exchange of domains. 
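For a concrete, purely illustrative instance of this conjugacy (not needed for the proof), take every~$\sigma_n$ equal to the Fibonacci substitution $1 \mapsto 12$, $2 \mapsto 1$. Then $d = 2$, the quotient $\mathbf{1}^\bot/\Lambda$ is a circle, and the translation~$T$ is (up to measurable conjugacy) the classical rotation by the golden mean \[ \varphi = \frac{1+\sqrt{5}}{2} \pmod 1, \] of which the Fibonacci word is a natural coding by two intervals.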
Finally, due to \cite[Proposition~7]{Adamczewski:03}, the $C$-balancedness of~$\mathcal{L}_{\boldsymbol{\sigma}}$ implies that~$\mathcal{R}(i)$ is a bounded remainder set for each $i \in \mathcal{A}$, which proves~(\ref{i:18}). \subsection{Proof of Theorem~\ref{t:3}}\label{sec:t:3proof} Let $S$ be a finite or infinite set of unimodular substitutions over the finite alphabet~$\mathcal{A}$, and let $(G,\tau)$ be an $S$-adic graph. Let $(E_G, \Sigma, \nu)$ be the associated edge shift equipped with an ergodic probability measure $\nu$. We assume that this shift has log-integrable cocycle~$A$ and satisfies the Pisot condition stated in Section~\ref{sec:lyap-expon-pisot}, and that there exists a cylinder of positive measure in~$E_G$ corresponding to a substitution with positive incidence matrix. For $C > 0$, let \[ E_{G,C} = \{\boldsymbol{\gamma}\in E_G:\, \mathcal{L}_{\boldsymbol{\gamma}}\ \mbox{is $C$-balanced}\}. \] We will use the following statement from \cite{Berthe-Delecroix}, see also \cite{DelHL}. \begin{lemma}[{\cite[Theorem~6.4]{Berthe-Delecroix}}] \label{l:DC} Let $S$ be a finite or infinite set of unimodular substitutions over the finite alphabet~$\mathcal{A}$ and let $(G,\tau)$ be an $S$-adic graph with associated edge shift $(E_G, \Sigma, \nu)$. We assume that this shift is ergodic, has log-integrable cocycle~$A$, and satisfies the Pisot condition, and that there exists a cylinder of positive measure in~$E_G$ corresponding to a substitution with positive incidence matrix. Then \[ \lim_{C\to\infty} \nu(E_{G,C}) = 1. \] \end{lemma} \begin{lemma} \label{l:Pisot} Let $S$ be a finite or infinite set of unimodular substitutions over the finite alphabet~$\mathcal{A}$, and let $(G,\tau)$ be an $S$-adic graph with associated edge shift $(E_G, \Sigma, \nu)$. We assume that this shift is ergodic, has log-integrable cocycle~$A$, and satisfies the Pisot condition, and that $\nu$-almost all sequences $\boldsymbol{\gamma} \in E_G$ are primitive. 
Then, for $\nu$-almost every sequence $\boldsymbol{\gamma} \in E_G$, for each $k \in \mathbb{N}$, $M_{[k,\ell)}$ is a Pisot irreducible matrix for all sufficiently large $\ell \in \mathbb{N}$. \end{lemma} \begin{proof} Let $k \in \mathbb{N}$ and choose $\eta$ with $\theta_2 < \eta < 0$. Then, for $\nu$-almost all sequences $\boldsymbol{\gamma} \in E_G$, all but the largest singular values of~$M_{[k,\ell)}$ tend to zero for $\ell \to \infty$ with order~$\mathcal{O}(e^{\ell\eta})$. Thus the image of the unit sphere by~$M_{[k,\ell)}$ is an ellipsoid~$\mathcal{E}$ with largest semi-axis close to $\mathbb{R}\, (M_{[0,k)})^{-1} \mathbf{u}$, and length of all other semi-axes tending to zero with order~$\mathcal{O}(e^{\ell\eta})$. Let $\lambda$ be an eigenvalue of~$M_{[k,\ell)}$ with $|\lambda| \ge 1$, and let $\mathbf{w}$ be an associated eigenvector (which depends on~$\ell$), with $\|\mathbf{w}\| = 1$. We have to show that in this case $\lambda$ is equal to the Perron-Frobenius eigenvalue of~$M_{[k,\ell)}$ for $\ell$ large enough (to make $M_{[k,\ell)}$ a positive matrix). If $\lambda$ is real with $|\lambda| \ge 1$, then the image $M_{[k,\ell)}\, \mathbf{w}$ can lie in~$\mathcal{E}$ only if its direction is close to that of $(M_{[0,k)})^{-1} \mathbf{u}$. Therefore, if $\ell$ is sufficiently large, the coordinates of~$\mathbf{w}$ all have the same sign, i.e., $\lambda$~is the Perron-Frobenius eigenvalue of~$M_{[k,\ell)}$. This shows that $\lambda$ is the only real eigenvalue with $|\lambda| \ge 1$. If $\lambda$ is non-real with $|\lambda| \ge 1$, then $\mathbf{w} = \mathbf{w}_1 + i \mathbf{w}_2$ for two non-zero real vectors $\mathbf{w}_1,\mathbf{w}_2$. Since $\mathbf{w}$ is determined up to multiplication by a complex number, we may assume that $\|\mathbf{w}_1\| = \|\mathbf{w}_2\| = 1$ with $\mathbf{w}_1\bot \mathbf{w}_2$. 
Easy calculations now yield that $\|M_{[k,\ell)}\, \mathbf{w}_1\| = \|M_{[k,\ell)}\, \mathbf{w}_2\| = |\lambda|^{\frac12} \ge 1$ with $M_{[k,\ell)}\, \mathbf{w}_1 \bot M_{[k,\ell)}\, \mathbf{w}_2$. This contradicts the fact that $M_{[k,\ell)}\mathbf{w}_1, M_{[k,\ell)}\mathbf{w}_2 \in \mathcal{E}$ for large values of~$\ell$. Thus such an eigenvalue cannot exist. We then deduce the irreducibility of the characteristic polynomial of~$M_{[k,\ell)}$ by noticing that these integer matrices have no zero eigenvalue by unimodularity. \end{proof} \begin{proof}[Proof of Theorem~\ref{t:3}] Our goal is to apply Theorem~\ref{t:1}. By assumption, there exists a cylinder $\mathcal{Z}(\tau_0, \ldots, \tau_{\ell-1})$ with $\nu(\mathcal{Z}(\tau_0, \ldots, \tau_{\ell-1})) > 0$ such that the incidence matrix of the walk $\tau_{[0,\ell)}$ is positive. This implies primitivity for $\nu$-almost all sequences, by ergodicity of the shift $(E_G,\Sigma,\nu)$ together with the Poincar\'e Recurrence Theorem. Algebraic irreducibility for almost all sequences $\boldsymbol{\tau} \in E_G$ is now a consequence of Lemma~\ref{l:Pisot}. We claim that there exists $C \in\mathbb{N}$ large enough such that \begin{equation}\label{recurrenceclaim} \nu\big(\mathcal{Z}(\gamma_0, \ldots, \gamma_{\ell-1}) \cap \Sigma^{-\ell}(E_{G,C})\big) > 0 \hbox{ holds for all }\boldsymbol{\gamma} = (\gamma_n)_{n\in\mathbb{N}} \in E_G \hbox{ and all } \ell \ge 0. \end{equation} To see this, note that the sets $\Sigma^\ell(\mathcal{Z}(\gamma_0,\ldots,\gamma_{\ell-1}))$ and $\Sigma^\ell(\mathcal{Z}(\gamma_0,\ldots,\gamma_{\ell-1})) \cap E_{G,C}$ depend only on the vertex of the graph~$G$ at which a path labelled by $\gamma_0, \ldots, \gamma_{\ell-1}$ arrives. Since $G$ has finitely many vertices and we have $\nu(\mathcal{Z}(\gamma_0, \ldots, \gamma_{\ell-1})) > 0$ by assumption, by Lemma~\ref{l:DC} there exists $C$ large enough such that \eqref{recurrenceclaim} holds. 
By another application of Poincar\'e's Recurrence Theorem, \eqref{recurrenceclaim} implies that for $\nu$-almost all sequences $\boldsymbol{\gamma} \in E_G$ and for all $\ell \in \mathbb{N}$, there is a positive integer~$n$ such that $\Sigma^n(\boldsymbol{\gamma}) \in \mathcal{Z}(\gamma_0, \ldots, \gamma_{\ell-1})$ and $\Sigma^{n+\ell}(\boldsymbol{\gamma}) \in E_{G,C}$. \end{proof} \section{$S$-adic shifts associated with continued fraction algorithms} \label{sec:examples} \subsection{Arnoux-Rauzy words} In this subsection, we prove our results on Arnoux-Rauzy words. To this end, we consider $S$-adic words with $S = \{\alpha_1,\alpha_2,\alpha_3\}$. Recall that the $\alpha_i$ are the Arnoux-Rauzy substitutions defined in~\eqref{eq:AR}. We begin by proving that the conditions of Proposition~\ref{p:gccvariant} (with negative strong coincidence, see Remarks~\ref{rem:-} and~\ref{rem:-2}) hold. \begin{lemma}\label{lem:strongAR} Let $\boldsymbol{\sigma} \in S^\mathbb{N}$ be a directive sequence of Arnoux-Rauzy substitutions over three letters. Then $\boldsymbol{\sigma}$ satisfies the negative strong coincidence condition. \end{lemma} \begin{proof} Just observe that for each $i\in\mathcal{A}$ the image $\alpha_i(j)$ ends with the letter $i$ for each $j\in\mathcal{A}$. \end{proof} We mention that (positive) strong coincidence for sequences of Arnoux-Rauzy substitutions is (essentially) proved in~\cite[Proposition~4]{Barge-Stimac-Williams:13}. \begin{proposition}\label{lem:superAR} Let $(\sigma_n)_{n\in\mathbb{N}} \in S^\mathbb{N}$\ with $S=\{\alpha_1,\alpha_2,\alpha_3\}$ be a directive sequence of Arnoux-Rauzy substitutions such that, for each $i \in \{1,2,3\}$, we have $\sigma_n = \alpha_i$ for infinitely many values of~$n$. Then the geometric finiteness property holds. \end{proposition} \begin{proof} Let $(n_k)_{k\in\mathbb{N}}$ be an increasing sequence of integers such that $\{\sigma_\ell:\, n_k \le \ell < n_{k+1}\} = S$ for each $k \in \mathbb{N}$. 
It is shown in the proof of \cite[Theorem~4.7]{Berthe-Jolivet-Siegel:12} that the ``combinatorial radius'' of $\bigcup_{i\in\mathcal{A}} E_1^*(\sigma_{[0,n_k)})[\mathbf{0},i]$ is at least~$k$, i.e., $\bigcup_{i\in\mathcal{A}} E_1^*(\sigma_{[0,n_k)})[\mathbf{0},i]$ contains larger and larger balls in $\Gamma(\tr{(M_{[0,n_k)})}\, \mathbf{1})$ around~$\mathbf{0}$. \end{proof} \begin{proof}[Proof of Theorem~\ref{t:5}] By~\cite[Theorem~1]{AD13}\footnote{Let $N_i$ be the incidence matrix of~$\alpha_i$. In~\cite{AD13}, the authors deal with products of the transposes~$\tr{N}_{i}$. However, as indicated in \eqref{eq:transposeequal}, the Lyapunov exponents do not change under transposition.}, the shift $(S^{\mathbb N},\Sigma,\nu)$ satisfies the Pisot condition. Furthermore, any product of substitutions in~$S$ that contains each of the three Arnoux-Rauzy substitutions has a positive incidence matrix. Therefore, in order to apply Theorem~\ref{t:3}, it remains to prove that the collection~$\mathcal{C}_\mathbf{1}$ forms a tiling. However, in view of Lemma~\ref{lem:strongAR} and Proposition~\ref{lem:superAR}, this follows from Proposition~\ref{p:gccvariant}; see Remark~\ref{rem:-2}. Now all assertions of Theorem~\ref{t:5} directly follow from Theorem~\ref{t:3}. \end{proof} \begin{proposition}[{\cite[Theorem~7 and its proof]{Berthe-Cassaigne-Steiner}}] \label{p:1} Let $\boldsymbol{\sigma}=(\sigma_n) \in \{\alpha_1,\alpha_2,\alpha_3\}^\mathbb{N}$. If each $\alpha_i$ occurs infinitely often in~$\boldsymbol{\sigma}$ and if we do not have $\sigma_n = \sigma_{n+1} = \cdots = \sigma_{n+h}$ for any $n \in \mathbb{N}$, then $\mathcal{L}_{\boldsymbol{\sigma}}^{(n)}$ is $(2h{+}1)$-balanced for each $n \in \mathbb{N}$. \end{proposition} \begin{proof}[Proof of Theorem~\ref{t:4}] Let $\boldsymbol{\sigma}$ be as in Theorem~\ref{t:4}. 
As $\alpha_i$ occurs infinitely often in~$\boldsymbol{\sigma}$ for each $i \in \mathcal{A}$, \cite[Lemma~13]{Arnoux-Ito:01} implies that for each~$k$ and each sufficiently large $\ell>k$ the matrix~$M_{[k,\ell)}$ has a characteristic polynomial that is the minimal polynomial of a cubic Pisot unit and, hence, irreducible. Thus $\boldsymbol{\sigma}$ is algebraically irreducible. The primitivity of~$\boldsymbol{\sigma}$ follows from the same fact, as any product~$M_{[k,\ell)}$ containing the incidence matrix of each of the three Arnoux-Rauzy substitutions is positive. Since $\boldsymbol{\sigma}$ is recurrent by assumption, Proposition~\ref{p:1} implies that there is $C>0$ such that for each~$n$ there is~$\ell$ such that $(\sigma_0,\ldots,\sigma_{\ell-1}) = (\sigma_n,\ldots,\sigma_{n+\ell-1})$ and $\mathcal{L}_{\boldsymbol{\sigma}}^{(n+\ell)}$ is $C$-balanced. As in the proof of Theorem~\ref{t:5}, in view of Lemma~\ref{lem:strongAR} and Proposition~\ref{lem:superAR}, it follows from Proposition~\ref{p:gccvariant} that $\mathcal{C}_\mathbf{1}$ induces a tiling. Thus all the assertions of Theorem~\ref{t:1} hold for~$\boldsymbol{\sigma}$, and the proof is finished. \end{proof} \begin{proposition}\label{prop:LR} An Arnoux-Rauzy word is linearly recurrent if and only if it has bounded strong partial quotients, that is, each substitution of~$S$ occurs in its directive sequence with bounded gaps. 
\end{proposition} \begin{proof} It is easy to check that strong partial quotients have to be bounded for an Arnoux-Rauzy word~$\omega$ to be linearly recurrent; see also \cite{RisleyZamboni}.\footnote{This characterization is already given in \cite[Corollary 3.9]{RisleyZamboni} but it relies on \cite{Durand:00a} and it needs the extra argument of \cite[Lemma 3.1]{Durand:00b}.} The converse is a direct consequence of \cite[Lemma 3.1]{Durand:00b} by noticing that the largest difference between two consecutive occurrences of a word of length~$2$ in~$\omega^{(n)}$ is bounded (with respect to~$n$). \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:AR}] This is a direct consequence of Proposition~\ref{prop:LR} together with Theorem~\ref{t:4}. \end{proof} \subsection{Brun words} In this subsection, we prove our results on $S$-adic words defined in terms of the Brun substitutions $\beta_1,\beta_2,\beta_3$ defined in~\eqref{eq:brun}. Consider $S$-adic words, where $S = \{\beta_1,\beta_2,\beta_3\}$. Again we begin by proving that the conditions of Proposition~\ref{p:gccvariant} hold for negative strong coincidences (see Remarks~\ref{rem:-} and \ref{rem:-2}). \begin{lemma}\label{lem:strongB} Let $S = \{\beta_1,\beta_2,\beta_3\}$. If $\boldsymbol{\sigma} \in S^\mathbb{N}$ contains $\beta_3$, then it has negative strong coincidences. \end{lemma} \begin{proof} This follows from the fact that $\beta_3\beta_i(j)$ ends with the letter~$3$ for all $i, j\in\mathcal{A}$. \end{proof} Next we use a result from \cite{BBJS14}, where a slightly different set of Brun substitutions is considered, namely \[ \sigma_1^\mathrm{Br} : \begin{cases} 1 \mapsto 1 \\ 2 \mapsto 2 \\ 3 \mapsto 32 \end{cases} \quad \sigma_2^\mathrm{Br} : \begin{cases} 1 \mapsto 1 \\ 2 \mapsto 3 \\ 3 \mapsto 23 \end{cases} \quad \sigma_3^\mathrm{Br} : \begin{cases} 1 \mapsto 2 \\ 2 \mapsto 3 \\ 3 \mapsto 13 \end{cases} \] Note that the incidence matrix of~$\sigma_i^\mathrm{Br}$ is the transpose of that of~$\beta_i$. 
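The product relation between the two families (Lemma~\ref{l:relBrun} below) can be checked mechanically for short products. The following sketch (a convenience for the reader, not part of the proofs) encodes substitutions as letter-to-word maps, with the $\beta_i$ reconstructed from the identities $\beta_1 = \sigma_2^\mathrm{Br}\, \pi_{(23)}$, $\beta_2 = \sigma_2^\mathrm{Br}$, $\beta_3 = \sigma_2^\mathrm{Br}\, \pi_{(12)}$ used in its proof:

```python
from itertools import product

def compose(s, t):
    # (s t)(j) = s applied letterwise to t(j), matching substitution products
    return {j: ''.join(s[int(a)] for a in w) for j, w in t.items()}

# sigma_i^Br as displayed above; pi_(ij) exchanges the letters i and j
s1 = {1: '1', 2: '2', 3: '32'}
s2 = {1: '1', 2: '3', 3: '23'}
s3 = {1: '2', 2: '3', 3: '13'}
p12 = {1: '2', 2: '1', 3: '3'}
p23 = {1: '1', 2: '3', 3: '2'}
ident = {1: '1', 2: '2', 3: '3'}

# beta_i reconstructed from the identities stated in the proof of the lemma
beta = {1: compose(s2, p23), 2: s2, 3: compose(s2, p12)}
sig = {1: s1, 2: s2, 3: s3}
tail = {1: p23, 2: ident, 3: p12}

def prod(subs):
    out = ident
    for s in subs:
        out = compose(out, s)
    return out

# pi_(23) sigma_2^Br = sigma_1^Br and pi_(12) sigma_2^Br = sigma_3^Br
assert compose(p23, s2) == s1 and compose(p12, s2) == s3

# beta_{i_0} ... beta_{i_n} = sigma_2^Br sigma_{i_0}^Br ... sigma_{i_{n-1}}^Br pi
for word in product([1, 2, 3], repeat=3):
    lhs = prod([beta[i] for i in word])
    rhs = prod([s2] + [sig[i] for i in word[:-1]] + [tail[word[-1]]])
    assert lhs == rhs
```

Running the checks over all $27$ triples confirms the relation for $n = 2$; the general case follows by the same bookkeeping.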
We have the following relation between products of substitutions from the two sets. \begin{lemma} \label{l:relBrun} Let $i_0, i_1, \ldots, i_n \in \{1,2,3\}$, $n \in \mathbb{N}$. Then \[ \beta_{i_0} \beta_{i_1} \cdots \beta_{i_n} = \begin{cases}\sigma_2^\mathrm{Br} \sigma_{i_0}^\mathrm{Br} \sigma_{i_1}^\mathrm{Br} \cdots \sigma_{i_{n-1}}^\mathrm{Br} \pi_{(23)} & \mbox{if}\ i_n = 1, \\ \sigma_2^\mathrm{Br} \sigma_{i_0}^\mathrm{Br} \sigma_{i_1}^\mathrm{Br} \cdots \sigma_{i_{n-1}}^\mathrm{Br} & \mbox{if}\ i_n = 2, \\ \sigma_2^\mathrm{Br} \sigma_{i_0}^\mathrm{Br} \sigma_{i_1}^\mathrm{Br} \cdots \sigma_{i_{n-1}}^\mathrm{Br} \pi_{(12)} & \mbox{if}\ i_n = 3,\end{cases} \] where $\pi_{(ij)}$ denotes the transposition that exchanges the letters $i$ and~$j$. \end{lemma} \begin{proof} We have $\beta_1 = \sigma_2^\mathrm{Br}\, \pi_{(23)}$, $\beta_2 = \sigma_2^\mathrm{Br}$, $\beta_3 = \sigma_2^\mathrm{Br}\, \pi_{(12)}$, and $\pi_{(23)}\, \sigma_2^\mathrm{Br} = \sigma_1^\mathrm{Br}$, $\pi_{(12)}\, \sigma_2^\mathrm{Br} = \sigma_3^\mathrm{Br}$. The asserted formula follows by writing each factor~$\beta_{i_k}$ as $\sigma_2^\mathrm{Br}$ times a permutation and repeatedly moving the permutations to the right using these identities. \end{proof} \begin{proposition}\label{lem:superB} Let $(\sigma_n)_{n\in\mathbb{N}} \in S^\mathbb{N}$\ with $S = \{\beta_1,\beta_2,\beta_3\}$ be a directive sequence of Brun substitutions with infinitely many occurrences of~$\beta_3$. Then, for each $R > 0$, $\bigcup_{i\in\mathcal{A}} E_1^*(\sigma_{[0,n)})[\mathbf{0},i]$ contains a ball of radius~$R$ of $\Gamma(\tr{(M_{[0,n)})}\, \mathbf{1})$ for all sufficiently large $n \in \mathbb{N}$. \end{proposition} \begin{proof} This follows from \cite[Theorem~5.4~(1)]{BBJS14} via Lemma~\ref{l:relBrun}, together with Lemma~\ref{lem:strongB}. \end{proof} \begin{proof}[Proof of Theorem~\ref{t:6}] By~\cite[Theorem~1]{AD13}\footnote{Again, in \cite{AD13} the authors deal with products of the transposes of the incidence matrices of the substitutions.} (see also \cite{FUKE96,Meester,Schratzberger:98,Broise}), the shift $(S^{\mathbb N},\Sigma,\nu)$ satisfies the Pisot condition. 
Moreover, it is easy to see that the product $\beta_3\beta_2$ has positive incidence matrix. Thus, in order to apply Theorem~\ref{t:3}, we need to prove that the collection~$\mathcal{C}_\mathbf{1}$ forms a tiling. Using Lemma~\ref{lem:strongB} and Proposition~\ref{lem:superB}, this follows for $\nu$-almost every $\boldsymbol{\sigma} \in S^\mathbb{N}$ from Proposition~\ref{p:gccvariant} (see Remark~\ref{rem:-2}). Now, all assertions of Theorem~\ref{t:6} follow directly from Theorem~\ref{t:3}. \end{proof} \begin{proof}[Proof of Theorem~\ref{t:7}] In view of Proposition~\ref{p:rotate}, Theorem~\ref{t:6} states that almost all $\boldsymbol{\sigma} \in S^\mathbb{N}$ (w.r.t.\ any ergodic shift invariant probability measure~$\nu$ that assigns positive measure to each cylinder) give rise to an $S$-adic shift $(X_{\boldsymbol{\sigma}}, \Sigma)$ that is measurably conjugate to the translation \[ \pi_{\mathbf{u},\mathbf{1}}(\mathbf{e}_3) = u_1 (\mathbf{e}_3-\mathbf{e}_1) + u_2 (\mathbf{e}_3-\mathbf{e}_2) \] on the torus $\mathbf{1}^\bot/(\mathbb{Z}(\mathbf{e}_3-\mathbf{e}_1) + \mathbb{Z}(\mathbf{e}_3-\mathbf{e}_2))$. Here, $(u_1,u_2,u_3)$ is the frequency vector of a word in~$X_{\boldsymbol{\sigma}}$. Of course, this translation is conjugate to the translation $(u_1,u_2)$ on the standard torus~$\mathbb{T}^2$. Note that the vector $(x_1,x_2) \in \Delta_2$ corresponds to $(u_1,u_2,u_3) = \big(\frac{x_1}{1+x_1+x_2},\frac{x_2}{1+x_1+x_2},\frac{1}{1+x_1+x_2}\big)$ in the projectivized version of Brun's algorithm. Recall the definition of the conjugacy map~$\Phi$ in~\eqref{CDPhi}. According to \cite[Th\'eor\`eme]{ArnouxNogueira93} (see also \cite[Section~3.1]{Schweiger:91}), the invariant probability measure~$m$ of the map $T_\mathrm{Brun}$ defined in~\eqref{eq:brunmap} has density $h(x_1,x_2) = \frac{12}{\pi^2x_1(1+x_2)}$ and is therefore equivalent to the Lebesgue measure. We now define the measure $\nu = m \Phi^{-1}$ on~$S^\mathbb{N}$. 
It is an ergodic shift invariant probability measure on~$S^\mathbb{N}$. By~\eqref{CDPhi}, the mapping~$T_\mathrm{Brun}$ is measurably conjugate to the shift $(S^\mathbb{N}, \Sigma,\nu)$ via~$\Phi$. Moreover, $\nu(C)$ is positive for each cylinder $C\subset S^\mathbb{N}$, since each cylinder in~$\Delta_2$ also has positive Lebesgue measure and, hence, positive measure~$m$ (the map has non-vanishing Jacobian, see e.g.~\cite{SCHWEIGER}). Let now $Y \subset \Delta_2$ be a set with the property that for each $(x_1,x_2) \in Y$ the $S$-adic shift $X_{\Phi(x_1,x_2)}$ is \emph{not} measurably conjugate to the translation $(u_1,u_2)$ on~$\mathbb{T}^2$. Theorem~\ref{t:6} (together with Proposition~\ref{p:rotate}) implies that $\nu(\Phi(Y)) = m(Y) = 0$. As $m$ is equivalent to the Lebesgue measure, this proves the result. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:8}] Arguing as in the proof of Theorem~\ref{t:7} and choosing $j=1$ and $j=2$, respectively, in Proposition~\ref{p:rotate}, we obtain that, for almost all $(x_1,x_2) \in \Delta_2$, the $S$-adic shift $(X_{\boldsymbol{\sigma}},\Sigma)$ with $\boldsymbol{\sigma} = \Phi(x_1,x_2)$ is measurably conjugate to the translation by~$\mathbf{t}$ on the torus~$\mathbb{T}^2$, for each \begin{equation}\label{eq:tvals} \mathbf{t}\in \Big\{ \Big(\frac{x_1}{1+x_1+x_2},\frac{x_2}{1+x_1+x_2}\Big),\Big(\frac{x_1}{1+x_1+x_2},\frac{1}{1+x_1+x_2}\Big),\Big(\frac{x_2}{1+x_1+x_2},\frac{1}{1+x_1+x_2}\Big) \Big\} . \end{equation} It is easy to see that the set of all $\mathbf{t} \in \mathbb{R}^2$ satisfying~\eqref{eq:tvals} for some pair $(x_1,x_2) \in \Delta_2$ is equal to $\{\mathbf{t} = (t_1,t_2):\, 0\le t_2\le 1,\, t_2 \le t_1 \le 1-t_2\}$. Since the translations $(t_1,t_2)$, $(t_2,t_1)$, $(1-t_1,1-t_2)$, and $(1-t_2,1-t_1)$ on $\mathbb{T}^2$ are pairwise (measurably) conjugate, this implies the result. 
\end{proof} \bigskip \noindent {\bf Acknowledgements.} We are grateful to Pascal Hubert and Sasha Skripchenko for valuable discussions on the subject of this paper. \bibliographystyle{amsalpha}
\section{Introduction} \label{sec:intro} Deep learning has gained great success in computer vision and natural language processing, but conventional deep learning paradigms mostly follow a centralised learning manner where data from different sources are collected to create a central database for model learning. With an increasing awareness of data privacy, decentralised deep learning~\cite{mcmahan2017communication,wu2021decentralised} is more desirable. To this end, federated learning~\cite{mcmahan2017communication,li2020fedbn} has recently been introduced to optimise local models (clients) with non-shared local data while learning a global generalised central model (server) by transferring knowledge across the clients and the server. This protects data privacy and reduces transmission costs, as local data are only used for training local models and only model parameters are transmitted between the clients and the server. There have been a variety of federated learning paradigms for computer vision applications, such as image classification~\cite{chen2021bridging}, person re-identification~\cite{sun2021decentralised} and object detection~\cite{liu2020fedvision}. However, existing federated learning paradigms~\cite{mcmahan2017communication,li2020fedbn,wu2021decentralised,chen2021bridging} mostly focus on encoding holistic high-level knowledge into models for communication across the clients and the server. Since high-level knowledge is closely related to the objects of interest, this may pose a threat to data privacy. In contrast, mid-level semantic knowledge (such as attributes) is usually generic, containing semantically meaningful properties for visual recognition~\cite{lampert2013attribute}, so it is not sensitive to the objects of interest. Besides, since the number of attributes is finite in compositional learning~\cite{yuille2011towards} while the number of classes can be unbounded, mid-level knowledge is also expected to be more scalable. 
Therefore, learning mid-level semantic knowledge transfer in federated learning is important and desirable for protecting privacy and improving model scalability. On the other hand, zero-shot learning (ZSL) is a well-established paradigm for learning mid-level knowledge. It aims to learn a mid-level semantic mapping between image features and text labels (typically attributes) using seen object categories, and then to transfer this knowledge to recognise unseen object categories via the composition of attributes shared between seen and unseen categories. However, existing ZSL methods~\cite{pourpanah2020review,chen2021knowledge,chen2021free} mostly consider centralised learning scenarios, which require training data from different label spaces to be shared with a central data collection. \begin{figure*}[t] \centering \includegraphics[width=0.85\textwidth]{./image/fzsl.pdf} \caption{An overview of federated zero-shot learning with mid-level semantic knowledge transfer. (1) Local model training process. (2) Local clients upload model parameters to the server and the server constructs a global model by aggregating local model parameters. (3) Local models are reinitialised with the central server model. The Semantic Knowledge Augmentation (SKA) employs external knowledge to further improve the model's discriminative ability.} \label{fig:local_client} \end{figure*} In this work, we formulate a new Federated Zero-Shot Learning (FZSL) paradigm, which aims to learn mid-level semantic knowledge in federated learning for zero-shot learning in a decentralised learning manner. An overview of FZSL is depicted in Fig.~\ref{fig:overview}. Specifically, we consider multiple local clients, where each client has an independent non-overlapping class label space whilst all clients share a common mid-level attribute space. 
Then, we optimise local models (clients) with non-shared local data and learn a central generalised model (server) by transferring knowledge (model parameters) between the clients and the server. With this paradigm, FZSL unifies federated learning and zero-shot learning to learn mid-level semantic knowledge in a decentralised manner with data privacy protection. It cumulatively optimises a generic mid-level attribute space from non-sharable distributed local data of different object categories. Instead of aggregating holistic models as in traditional federated learning~\cite{mcmahan2017communication} or separating domain-specific classifiers as in recent decentralised learning~\cite{wu2021collaborative,wu2021decentralised}, we only aggregate generators across the clients and the server, while discriminators are retained locally. This facilitates learning more generalised knowledge and reduces the number of model parameters to be communicated. Furthermore, to improve model discriminative ability, we employ a vision-language foundation model (e.g., CLIP~\cite{radford2021learning}) to explore semantic knowledge augmentation, enriching the mid-level semantic space in FZSL. With the help of a pre-trained richer knowledge space, this semantic knowledge augmentation makes it possible to learn more generic knowledge that encodes sample diversity and improves model scalability. Our \textbf{contributions} are threefold: (1) we introduce a new Federated Zero-Shot Learning paradigm to transfer mid-level knowledge from independent non-overlapping class label spaces for federated learning; (2) with the formulated baseline model, we propose to explore semantic knowledge augmentation from external knowledge to learn a richer mid-level semantic space in FZSL; (3) we conduct extensive experiments on five zero-shot learning benchmark datasets and demonstrate that our approach is capable of learning a generalised federated learning model with mid-level semantic knowledge transfer. 
\section{Related Work} \paragraph{Federated Learning.} Federated learning~\cite{mcmahan2017communication,li2020fedbn,li2020federated} is a recently introduced learning paradigm that aims to learn a central model (server) with the collaboration of multiple local models (clients) under data privacy protection. It has been explored in many computer vision tasks, such as medical image segmentation~\cite{liu2021feddg}, person re-identification~\cite{wu2021decentralised} and object detection~\cite{liu2020fedvision}. Conventional federated learning approaches, e.g., FedAvg~\cite{mcmahan2017communication}, learn a sharable central model by aggregating holistic model parameters among different local models. To disentangle generic and specific knowledge, recent approaches~\cite{wu2021collaborative,zhang2021fedzkt,wu2021fedcg,sun2021decentralised} propose to optimise generic feature extractors or generators by decoupling discriminators or domain-specific classifiers, but they still learn holistic class-level knowledge. Different from existing works, we propose to learn mid-level semantic knowledge (i.e., attributes) for federated zero-shot learning. Although there have been several seemingly similar federated zero-shot learning studies~\cite{gudur2021zero,hao2021towards,zhang2021fedzkt}, none of these methods aims at bridging the gap between seen and unseen classes by learning mid-level semantic knowledge. ZSDG~\cite{hao2021towards} generates existing categories by gathering statistics through the server. FedZKT~\cite{zhang2021fedzkt} and~\cite{gudur2021zero} are based on zero-shot knowledge distillation~\cite{nayak2019zero}, with the purpose of transferring knowledge between clients and server without extracting prior information. Unlike them, our FZSL learns from multiple independent \emph{non-overlapping} class label spaces, while ZSDG~\cite{hao2021towards} and~\cite{gudur2021zero} study knowledge sharing within a shared class space. 
Furthermore, our FZSL generalises stably to \emph{unseen} classes, while FedZKT and ZSDG are only evaluated on existing classes. More importantly, all of these methods are based on class-level knowledge, while our FZSL learns to transfer mid-level semantic knowledge. Besides, we propose semantic knowledge augmentation from external knowledge to improve model discriminative ability for FZSL. \paragraph{Zero-Shot Learning.} Zero-shot learning (ZSL) aims to recognise unseen object categories by leveraging seen categories to learn consistent semantic information that bridges seen and unseen categories. Current ZSL methods can broadly be divided into embedding-based methods~\cite{fu2015transductive} and generative-based methods~\cite{xian2018feature}. Embedding-based methods map from a visual space to a semantic space and classify unseen categories based on semantic similarity, without any training data for those categories. In contrast, generative-based methods learn a projection from a semantic space to a visual space, which turns the zero-shot learning task into a supervised learning task on pseudo features, alleviating overfitting~\cite{xian2018feature}. Existing ZSL methods follow a centralised learning manner, while our work proposes a new federated zero-shot learning paradigm to transfer mid-level knowledge across different non-overlapping class label spaces with data privacy protection. \paragraph{Foundation Models.} Foundation models refer to models trained on a vast quantity of data that can be further used for various downstream tasks, such as BERT~\cite{devlin2018bert}, RoBERTa~\cite{liu2019roberta}, CLIP~\cite{radford2021learning}, etc. These models are usually learned by self-supervised learning on unlabelled data and are able to predict underlying properties such as attributes, so they are scalable and potentially more useful than models trained on a limited label space~\cite{bommasani2021opportunities}. 
In this work, we employ a vision-language foundation model (e.g., CLIP~\cite{radford2021learning}) to explore semantic knowledge augmentation, enriching the mid-level semantic space in FZSL. \section{Methodology} \subsection{Problem Definition} In this work, we study Federated Zero-Shot Learning (FZSL), where each client contains an independent non-overlapping class label space with non-shared local data, while a central model is aggregated for deployment. Suppose there are $N$ local clients, where the $i$-th client contains a training set $\mathcal{S}_{i} = \left\{\bm{x}, y\right\}$ with $y \in \mathcal{Y}_{i}$, and $\mathcal{Y}_{i}$ contains $N_{i}$ classes. Since the clients have non-overlapping class spaces, i.e., $\mathcal{Y}_{i}\cap \mathcal{Y}_{j}{=}\emptyset$ for all $i \neq j$, we have $\mathcal{Y}_{1} \cup\mathcal{Y}_{2} \cup \ldots \cup \mathcal{Y}_{N}=\mathcal{Y}_{s}$. Meanwhile, each class can be described by an attribute vector $\bm{a}=\left\{a_1,a_2,\ldots, a_m\right\}$, and these $m$ attributes are consistent among the classes in all clients, i.e., the mid-level attribute space is shared across clients. The goal of the federated zero-shot learning task is to construct a classifier $F:\mathcal{X}\rightarrow \mathcal{Y}$ for $\mathcal{Y}_{u} \subset \mathcal{Y}$, where $\mathcal{Y}_{u}$ is the unseen class set with $\mathcal{Y}_{i}\cap\mathcal{Y}_{u}=\emptyset$ for all $i$. \subsection{FZSL by Mid-Level Semantic Knowledge Transfer} \label{cha:fzsl_baseline} \subsubsection{A Baseline Model.} To learn mid-level semantic knowledge transfer for federated learning, we formulate a baseline model which unifies federated learning and zero-shot learning in a decentralised learning paradigm. 
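To make the problem definition above concrete, the label-space constraints can be expressed in a minimal sketch (the class names and sizes here are invented for illustration; the 85-dimensional attribute space follows the AwA-style setting):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 85  # size of the shared attribute space (e.g., 85 attributes in AwA2)

# Illustrative, non-overlapping class label spaces Y_1, ..., Y_N and an
# unseen set Y_u; these class names are made up for the sketch.
client_classes = [{'zebra', 'horse'}, {'tiger', 'lion'}, {'panda', 'bear'}]
unseen_classes = {'okapi', 'liger'}

# every class, seen or unseen, gets an m-dimensional attribute vector
all_classes = set().union(*client_classes) | unseen_classes
attributes = {c: rng.uniform(size=m) for c in all_classes}

# Y_i ∩ Y_j = ∅ for all i ≠ j
for i in range(len(client_classes)):
    for j in range(i + 1, len(client_classes)):
        assert client_classes[i].isdisjoint(client_classes[j])

# Y_i ∩ Y_u = ∅ for all i, while the attribute space is shared
assert all(cs.isdisjoint(unseen_classes) for cs in client_classes)
assert all(v.shape == (m,) for v in attributes.values())
```

The shared attribute space is what later allows a generator trained on seen classes to synthesise features for unseen ones.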
Since generative-based zero-shot learning is capable of generating pseudo image features according to a consistent and generic mid-level attribute space, in this work we employ the representative f-CLSWGAN~\cite{xian2018feature} as the backbone (in practice, our approach is compatible with various ZSL backbones, such as VAEGAN~\cite{xian2019f} and FREE~\cite{chen2021free}). As for federated learning, we use the commonly used FedAvg~\cite{mcmahan2017communication}. As shown in Fig.~\ref{fig:local_client}, the learning process of the baseline model consists of three iterative steps, namely local model learning, central model aggregation and local model reinitialisation with the central model. In each local client, with the non-shared local data $\mathcal{S}_{i}{=}\left\{\bm{x}, y\right\}$, the model learning process follows f-CLSWGAN~\cite{xian2018feature}. A generator $G(\bm{z}, \bm{a}_g)$ learns to generate a CNN feature $\hat{\bm{x}}$ in the input feature space $\mathcal{X}$ from random noise $\bm{z}$ and a ground-truth condition $\bm{a}_g$, where each value in $\bm{a}_g$ corresponds to one specific attribute, e.g., stripes, while a discriminator $D(\bm{x}, \bm{a}_g)$ takes a pair of an input feature $\bm{x}$ and a ground-truth condition $\bm{a}_g$ as input and outputs a real value. Thus, the training objective of each local client model is defined as: \begin{equation} \label{eq:baseline_local_model} \min _{G} \max _{D} \mathcal{L}_{W G A N}+\beta \mathcal{L}_{C L S}, \end{equation} where $\beta$ is a hyper-parameter weighting the classifier loss. After optimising each local client model for $E$ local epochs, the local model parameters $\bm{w}_{i}$ are transmitted to a central server to aggregate a global model. 
Following FedAvg~\cite{mcmahan2017communication}, the aggregating process is formulated as: \begin{equation} \label{eq:baseline_agg} \bm{w}_{t}=\frac{1}{N \cdot S} \sum_{i \in N_{S}} \bm{w}_{i,t}, \end{equation} where $N$ denotes the number of local clients and $t$ denotes the $t$-th global model update round. $S$ denotes the fraction of clients randomly selected in each round ($S\in[0.0,1.0]$) and $N_S$ is the set of selected clients. Note that the central server only aggregates local model parameters without accessing local data, so as to protect local data privacy. Then, each local model is reinitialised with the central model as follows: \begin{equation} \label{eq:baseline_reinit} \bm{w}_{i,t+1} = \bm{w}_{t}. \end{equation} This iterative learning process (Eqs.~(\ref{eq:baseline_local_model})-(\ref{eq:baseline_reinit})) is repeated for $T$ global model update rounds. Since the attribute space is consistent among local clients, the learned global generator encodes mid-level semantic knowledge. Finally, based on the attributes of the unseen classes, the learned generator from the global server is used to generate $M$ pseudo image features for each unseen class in $\mathcal{Y}_{u}$. A softmax classifier is then trained with supervision from the pseudo features and evaluated on image classification over the unseen classes. \subsubsection{Improved Baseline With Selective Module Aggregation.} Although aggregating holistic model parameters following FedAvg~\cite{mcmahan2017communication} is simple, it is inefficient for FZSL because the generic mid-level semantic knowledge is mainly encoded in the generator, while the discriminator may contain knowledge specific to the classes in each client. Inspired by recent approaches~\cite{wu2021collaborative,zhang2021fedzkt} in federated learning, we improve the baseline by decoupling the discriminator from the central model aggregation process, i.e., only aggregating the generator in the central server. 
This not only reduces the cost of transmitting model parameters but also facilitates learning more generalisable mid-level knowledge. Thus, the central aggregation in Eq.~(\ref{eq:baseline_agg}) and the local client reinitialisation in Eq.~(\ref{eq:baseline_reinit}) are reformulated as: \begin{equation} \label{eq:opt_agg} \bm{w}_{G,t}=\frac{1}{N \cdot S} \sum_{i \in N_{S}} \bm{w}_{G_{i},t}, \end{equation} \begin{equation} \label{eq:opt_reinit} \bm{w}_{G_{i},t+1} = \bm{w}_{G,t},~~~ \bm{w}_{D_{i},t+1} = \bm{w}_{D_{i},t}, \end{equation} where $\bm{w}_{G,t}$ and $\bm{w}_{D,t}$ denote the model parameters of a generator and a discriminator, respectively. \subsection{Semantic Knowledge Augmentation for FZSL} Although the formulated baseline with selective module aggregation is able to transfer mid-level generic knowledge in a decentralised learning manner, it still suffers from sparse attributes and ambiguous attribute separability due to the limited data diversity in each client. To resolve this problem, we propose to exploit a vision-language foundation model (CLIP~\cite{radford2021learning} in this work) for semantic knowledge augmentation (SKA) to enrich the mid-level semantic space in FZSL. Since a foundation model like CLIP contains word embedding knowledge that can supply information about hierarchical relationships among classes, it can help FZSL learn richer external knowledge alongside the sharable common attribute space. In this work, we introduce class-level semantic knowledge augmentation, which greatly facilitates the diversification of generated features in both the training and testing stages. Empirically, we observe that directly concatenating a noise-enhanced CLIP text embedding and an attribute vector is effective: it does not require extra learnable parameters and alleviates overfitting on seen classes.
In our semantic knowledge augmentation, as shown in Fig.~\ref{fig:local_client}, we simply combine the default prompt `a photo of a' with class names and use this sentence as the input to a CLIP text encoder~\cite{radford2021learning}. We then add Gaussian noise $\bm{z}_c\sim N(0,\gamma)$ to the output text embedding $\bm{a}_c$ so as to enrich the semantic space and to better align with the instance-wise diversified visual space, where each class-level semantic corresponds to different samples with various poses and appearances in the visual space. The semantically augmented attribute is the concatenation of the noise-enhanced text embedding and the ground-truth manually annotated attribute label $\bm{a}_g$. This semantic augmentation process can be formulated as follows: \begin{equation} \label{eq:SKA} \bm{a}_{SKA} = [\bm{a}_c \oplus \bm{z}_c,\bm{a}_g ], \end{equation} where $\oplus$ is element-wise summation. During FZSL model training, the CLIP text embedding of each seen class name is utilised as external knowledge to construct the semantically augmented attribute $\bm{a}_{SKA}$ and further generate image features in each local client. The discriminator condition remains $\bm{a}_g$ to distinguish between the real distribution and the pseudo distribution. Most importantly, in the testing stage, instead of generating pseudo image features based on the same attribute $\bm{a}_g$ for each class as in conventional ZSL~\cite{xian2018feature,xian2019f,chen2021free}, the SKA module supplies diversified attributes $\bm{a}_{SKA}$ for each class. The Gaussian noise $\bm{z}_c$ in $\bm{a}_{SKA}$ helps explore the rich information in the CLIP text encoder so as to enrich the attribute space. Overall, our semantic knowledge augmentation increases inter-class separability and supplies a diversified attribute space using only the text information of the class name.
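The augmentation of Eq.~(\ref{eq:SKA}) reduces to a single tensor operation. The sketch below uses illustrative dimensions (512-d CLIP ViT-B/16 text embeddings, 85-d AWA attributes) and treats $\gamma$ as the noise scale; both choices are our assumptions for illustration.

```python
import torch

def ska_attribute(a_c, a_g, gamma=0.1):
    """Semantic knowledge augmentation: concatenate the noise-enhanced
    CLIP text embedding (a_c + z_c) with the ground-truth attribute a_g."""
    z_c = gamma * torch.randn_like(a_c)   # Gaussian noise z_c ~ N(0, gamma)
    return torch.cat([a_c + z_c, a_g], dim=-1)
```

At test time, sampling a fresh $\bm{z}_c$ for every draw yields a different augmented attribute per generated feature, which is what diversifies the pseudo features of each unseen class.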
\begin{table*}[t] \centering \begin{tabular}{l | l | c c c c c } \hline & Method & \textbf{AWA2} & \textbf{AWA1} & \textbf{aPY} & \textbf{CUB} & \textbf{SUN} \\ \hline \multirow{3}{*}{$Centralised$} &CLSWGAN~~\cite{xian2018feature} & $67.4$ & $66.6$ & $37.7$ & $56.8$ & $60.3$ \\ &VAEGAN~~\cite{xian2019f} & $60.0$ & $53.8$ & $17.8$ & $46.4$ & $58.2$ \\ &FREE~~\cite{chen2021free} & $67.7$ & $68.9$ & $42.2$ & $60.9$ & $61.3$ \\ \hline \multirow{11}{*}{$Decentralised$} &CLSWGAN+FedProx~~\cite{li2020federated} & $61.3$ & $58.4$ & $34.0$ & $53.1$ & $59.3$ \\ &CLSWGAN+MOON~~\cite{li2021model} & $61.0$ & $58.6$ & $33.2$ & $55.1$ & $59.5$ \\ \cdashline{2-7} &FL-VAEGAN & $48.9$ & $44.0$ & $16.4$ & $43.6$ & $56.2$ \\ &FL-VAEGAN+SMA & \underline{$50.4$} & \underline{$44.6$} & $\textbf{25.9}$ & \underline{$46.0$} & \underline{$59.4$} \\ &FL-VAEGAN+SMA+SKA & $\textbf{60.1}$ & $\textbf{58.2}$ & \underline{$19.6$} & $\textbf{52.6}$ & $\textbf{61.2}$ \\ \cdashline{2-7} &FL-FREE & $60.9$ & $59.8$ & $25.9$ & $54.5$ & $56.4$ \\ &FL-FREE+SMA & \underline{$61.4$} & \underline{$61.1$} & \underline{$27.4$} & \underline{$55.4$} & \underline{$57.0$} \\ &FL-FREE+SMA+SKA & $\textbf{68.4}$ & $\textbf{68.4}$ & $\textbf{32.0}$ & $\textbf{60.7}$ & $\textbf{60.5}$ \\ \cdashline{2-7} &FL-CLSWGAN & $61.6$ & $58.5$ & $33.8$ & $53.8$ & $59.5$ \\ &FL-CLSWGAN+SMA & \underline{$62.8$} & \underline{$61.7$} & \underline{$38.4$} & \underline{$55.5$} & \underline{$59.4$} \\ &FL-CLSWGAN+SMA+SKA & $\textbf{69.0}$ & $\textbf{70.6}$ & $\textbf{47.1}$ & $\textbf{59.4}$ & $\textbf{66.5}$ \\ \hline \end{tabular} \caption{Comparing our approach with other methods on AWA2, AWA1, aPY, CUB and SUN for federated zero-shot learning. Top-1 accuracy is reported for all experiments. SMA denotes selective module aggregation and SKA denotes semantic knowledge augmentation.
\textbf{Bold} and \underline{underline} represent the best and the second-best performance in each baseline. } \label{table:FZSL} \end{table*} \section{Experiments} \paragraph{Datasets.} To evaluate the effectiveness of our approach, we conduct extensive experiments on five zero-shot benchmark datasets, including three coarse-grained datasets (Animals with Attributes (AWA1)~\cite{lampert2013attribute}, Animals with Attributes 2 (AWA2)~\cite{xian2018zero} and Attribute Pascal and Yahoo (aPY)~\cite{farhadi2009describing}) and two fine-grained datasets (Caltech-UCSD-Birds 200-2011 (CUB)~\cite{wah2011caltech} and SUN Attribute (SUN)~\cite{patterson2012sun}). AWA1 is a coarse-grained dataset with 30475 images, 50 classes and 85 attributes, while AWA2 shares the same numbers of classes and attributes as AWA1 but contains 37322 images in total. The aPY dataset is a relatively small coarse-grained dataset with 15339 images, 32 classes and 64 attributes. CUB contains 11788 images from 200 different types of birds annotated with 312 attributes, while SUN contains 14340 images from 717 scenes annotated with 102 attributes. We use the zero-shot splits proposed by~\cite{xian2018zero} for AWA1, AWA2, aPY, CUB and SUN, which ensure that none of the test (unseen) classes are present in ImageNet~\cite{russakovsky2015imagenet}. All five datasets are composed of a seen-class set and an unseen-class set. In the decentralised learning experiments, we randomly and evenly split the seen-class set across four clients. Note that both seen and unseen classes share the same attribute space in each dataset. \paragraph{Evaluation Metrics.} In FZSL, the goal is to learn a generalisable server model which can assign unseen class labels $\mathcal{Y}_{u}$ to test images.
Following the commonly used zero-shot learning evaluation protocol~\cite{xian2018zero}, the accuracy of each unseen class is calculated independently and then averaged over the unseen classes, i.e., we report the average per-class top-1 accuracy on the unseen classes. \paragraph{Implementation Details.} In our approach, we employed a frozen ResNet-101~\cite{he2016deep} pretrained on ImageNet~\cite{russakovsky2015imagenet} as the feature extractor and constructed our baseline model with a generator and a discriminator for each client, following the representative generative zero-shot learning work~\cite{xian2018feature}. Further, we employed a frozen pretrained CLIP~\cite{radford2021learning} text encoder, a ViT-Base/16 transformer, to supply class-name-based text embeddings for each client. All clients share the same model structure, while the server aggregates local model parameters to construct a global model. For the improved baseline with selective module aggregation (SMA), only the generators from the local clients are aggregated. When further improved with semantic knowledge augmentation (SKA), both the generator and the text-enhanced module are aggregated at the server. Each client contains local non-overlapping classes from the seen-class set, and the aggregated server model is tested on the unseen-class set. By default, we set the number of local clients $N{=}4$ and the client selection fraction $S{=}1$. The generated feature number $M$ and the classifier weight $\beta$ follow the original ZSL work~\cite{xian2018feature}. We empirically set the batch size to 64, the maximum number of global rounds $T{=}100$ and the maximum number of local epochs $E{=}1$. For each local client, we used the Adam optimizer with a learning rate of $1e{-}3$ for CUB, $2e{-}4$ for SUN and $1e{-}5$ for the others. The noise augmentation $\gamma$ is set to 0.1 empirically. Our models were implemented with Python (3.6) and PyTorch (1.7), and trained on NVIDIA A100 GPUs.
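For reference, the evaluation metric above (average per-class top-1 accuracy) can be computed as follows; this is a small helper of our own, not taken from the benchmark code.

```python
import numpy as np

def per_class_top1(y_true, y_pred):
    """Average per-class top-1 accuracy: the accuracy of each class is
    computed independently, then averaged over classes, so that classes
    with many test images do not dominate the score."""
    classes = np.unique(y_true)
    per_class = [(y_pred[y_true == c] == c).mean() for c in classes]
    return float(np.mean(per_class))
```

Note the difference from overall (per-sample) accuracy: a classifier that only gets the largest class right scores high per-sample but low per-class.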
\subsection{Federated Zero-Shot Learning Analysis} There are no existing works discussing mid-level semantic knowledge transfer in federated learning. Therefore, besides our baseline model (CLSWGAN~\cite{xian2018feature} with FedAvg~\cite{mcmahan2017communication}), denoted as FL-CLSWGAN, we also implemented a traditional ZSL method, VAEGAN~\cite{xian2019f}, and a recent ZSL method, FREE~\cite{chen2021free}, with FedAvg~\cite{mcmahan2017communication}, denoted as FL-VAEGAN and FL-FREE respectively, for comparison. Further, the proposed SMA and SKA are implemented on the three baselines respectively, demonstrating the generality and compatibility of SMA and SKA. Note that when applying SMA to FREE, the feature refinement module is also aggregated at the server and used during testing. All compared methods are inductive: only the attribute information of unseen classes is used for training the classifier, and unseen images are not used during training. From Table~\ref{table:FZSL}, we can see that: (1) Compared with the centralised baselines, the formulated decentralised baselines (FL-CLSWGAN, FL-VAEGAN, FL-FREE) yield compelling performance, which shows the effectiveness of the proposed paradigm for learning a globally generalised model whilst protecting local data privacy; (2) With selective module aggregation (SMA), the performance of the baselines is improved overall (3.4\% for FL-VAEGAN, 1\% for FL-FREE and 2.1\% for FL-CLSWGAN on average), which verifies that learning a generic generator and decoupling the discriminator from central aggregation can facilitate mid-level semantic knowledge transfer in FZSL; (3) With semantic knowledge augmentation (SKA), our approach significantly improves the baselines by 8.5\% for FL-VAEGAN, 6.5\% for FL-FREE and 9.1\% for FL-CLSWGAN on average, which validates the effectiveness and generality of SKA in FZSL; (4) Compared with other federated learning approaches, such as FedProx~\cite{li2020federated} and MOON~\cite{li2021model},
our approach achieves significantly better performance, showing the importance of learning mid-level semantic knowledge for FZSL. In the following, the decentralised baseline denotes CLSWGAN~\cite{xian2018feature} with FedAvg~\cite{mcmahan2017communication}, since it achieves the best overall performance in our experiments. \begin{table*}[t] \begin{center} \begin{tabular}{c|c|c|c|c|c|c} \hline \multicolumn{1}{c|}{\multirow{1}{*}{Settings}} & \multicolumn{1}{c|}{\multirow{1}{*}{Methods}} & AWA2 & AWA1 & aPY & CUB & SUN \\ \hline\hline \multicolumn{1}{c|}{\multirow{5}{*}{\tabincell{c}{Local Training}}} &Client 1 & 49.0 & 47.8 & 23.2 & 42.4 & 50.6 \\ &Client 2 & 37.1 & 38.7 & 22.8 & 40.5 & 52.1 \\ &Client 3 & 40.2 & 41.1 & 34.3 & 40.2 & 49.8\\ &Client 4 & 53.0 & 51.9 & 26.3 & 40.2 & 50.4\\ \cline{2-7} & Average & 44.8 & 44.9 & 26.7 & 35.5 & 50.7\\ \hline \multicolumn{1}{c|}{\multirow{2}{*}{\tabincell{c}{Decentralised}}} & Baseline & 61.6 & 58.5 & 33.8 & 53.8 & 59.5 \\ & Baseline+SMA+SKA & 69.0 & 70.6 & 47.1 & 59.4 & 66.5 \\ \hline Centralised & Baseline (Joint) & 67.4 & 66.6 & 37.7 & 56.8 & 60.3\\ \hline \end{tabular} \end{center} \caption{Comparing local training (individual clients) and decentralised learning (baseline and baseline+SMA+SKA). Top-1 accuracy in percentage on unseen classes. Baseline denotes CLSWGAN~\cite{xian2018feature} with FedAvg~\cite{mcmahan2017communication}. } \label{table:local_decentralised} \end{table*} \subsection{Local Training vs. Decentralised Learning} To verify the effectiveness of the formulated federated zero-shot learning paradigm, we separately train four individual local models~\cite{xian2018feature} with local client data and compare them with the decentralised learning models. Note that the performance is evaluated on the same unseen classes for all compared methods. As shown in Table \ref{table:local_decentralised}, the decentralised baseline model significantly outperforms all individual client models and their average.
This shows that the federated collaboration between the local clients and the central server facilitates optimising a generalisable model in FZSL. Furthermore, baseline+SMA+SKA even surpasses the performance of the centralised joint-training baseline, which further verifies the effectiveness of our improved baseline for FZSL. \begin{table}[t] \begin{center} \small \begin{tabular}{c|c|c|c|c|c|c} \hline GT & CLIP & AWA2 & AWA1 & aPY &CUB & SUN \\ \hline\hline \ding{51} &\ding{55} & 62.8 & 61.7 & 38.4 & 55.5 &59.4\\ \ding{55}&\ding{51}& 70.1 & 72.4 & 48.2 & 42.2 & 54.4 \\ \ding{51}&\ding{51}& 69.0 & 70.6 & 47.1 & 59.4 & 66.5 \\ \hline \end{tabular} \end{center} \caption{Baseline+SMA with different attribute variations. GT denotes the dataset-supplied annotated attributes. CLIP denotes our proposed semantic augmentation with a CLIP text encoder.} \label{table:ska} \end{table} \begin{table}[t] \begin{center} \small \begin{tabular}{c|c|c|c|c|c|c} \hline (CL)SKA & ALSKA & AWA2 & AWA1 & aPY &CUB & SUN \\ \hline \hline \ding{55} &\ding{55} & 62.8 & 61.7 & 38.4 & 55.5 & 59.4\\ \ding{51} &\ding{55} & 69.0 & 70.6 & 47.1 & 59.4 & 66.5 \\ \ding{55}&\ding{51}& 62.8 & 64.4 & 44.8 & 54.4 & 61.6 \\ \ding{51}&\ding{51}& 69.3 & 70.7 & 46.2 & 59.0 & 65.6 \\ \hline \end{tabular} \end{center} \caption{Baseline+SMA with different semantic augmentation variations. CLSKA denotes class-level semantic augmentation; ALSKA denotes attribute-level semantic augmentation.} \label{table:alsa} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{./image/tsne.pdf} \caption{tSNE of unseen classes on AWA2 for baseline+SMA (left) and baseline+SMA+SKA (right). The same colour implies the same class. Circles and crosses denote the generated distribution and the real unseen distribution, respectively. The numbers indicate the percentage increase or decrease for each class after applying SKA.
The classifier trained on the generated pseudo distribution is tested on the real unseen distribution. } \label{fig:tsne} \end{figure} \subsection{Effect of Semantic Knowledge Augmentation} As shown in Table~\ref{table:FZSL}, the performance of the baseline model can be significantly improved with semantic knowledge augmentation. To show the impact of semantic knowledge augmentation on FZSL, we further analyse the results both quantitatively and qualitatively. Quantitatively, we report experimental results in Table~\ref{table:ska} for baseline+SMA with and without SKA. It can be observed from Table~\ref{table:ska} that the CLIP text embedding alone can supply discriminative information on the three coarse-grained datasets (AWA1, AWA2 and aPY) but lacks discriminative ability on the two fine-grained datasets. The combination of the ground-truth annotation and the CLIP text embedding, which is our SKA setting, works best on average. Qualitatively, tSNE visualisations of the AWA2 unseen classes for baseline+SMA before and after applying semantic knowledge augmentation are shown in Fig.~\ref{fig:tsne}. It can be seen that with SKA, the generated distribution has a larger inter-class distance, as shown in the red box. This larger inter-class distance significantly improves coarse-grained classification accuracy, which is consistent with the conclusion of FREE~\cite{chen2021free}.
\begin{table}[t] \begin{center} \small \begin{tabular}{c|c|c|c|c|c|c } \hline \multicolumn{2}{c|}{Text Encoder} & AWA2 & AWA1 & aPY & CUB & SUN \\ \hline\hline \multicolumn{2}{c|}{\ding{55}} &62.8 & 61.7& 38.4& 55.5 & 59.4\\ \hline \multicolumn{1}{c|}{\multirow{2}{*}{\tabincell{c}{LM}}} &BERT & 63.4 & 63.8 & 41.1 & 54.6 & 60.9 \\ &RoBERTa & 65.4 & 64.6 & 41.6 & 54.7 & 61.0 \\ \hline \multicolumn{1}{c|}{\multirow{2}{*}{\tabincell{c}{VLP}}} &DeFILIP & 74.1 & 75.5 & 49.4 & 58.2 & 64.2 \\ &CLIP & 69.0 & 70.6 & 47.1 & 59.0 & 65.6 \\ \hline \end{tabular} \end{center} \caption{Baseline+SMA evaluated with text embeddings from two Language Models (LM) and two Vision-Language Pretrained models (VLP).} \label{table:exp_textEmb} \end{table} \begin{figure*}[t] \centering \subfigure[Client number]{ \includegraphics[width=0.19\linewidth]{./image/Number.pdf} \label{fig:client_num} } \subfigure[Client fraction]{ \includegraphics[width=0.19\linewidth]{./image/fraction.pdf} \label{fig:client_frac} } \subfigure[Client local step]{ \includegraphics[width=0.19\linewidth]{./image/Local.pdf} \label{fig:local_step} } \caption{Ablation study on (a) client number, (b) client fraction and (c) local steps.} \end{figure*} \subsection{Variations of Semantic Knowledge Augmentation} We study variations of SKA in two directions: (1) a more concrete attribute-level augmentation and (2) text embeddings from other text encoders. \paragraph{Attribute-Level Semantic Knowledge Augmentation.} To examine whether attribute text can bring more discriminative information to FZSL, we employ attribute-level semantic augmentation (ALSKA) and compare it with the proposed class-level semantic augmentation ((CL)SKA). We reconstruct the input sentence of the CLIP text encoder with a superclass name and a randomly selected activated attribute from a target class.
For example, for the class `beach', the input sentence can be constructed as `a photo of a swimming scene.', where `scene' is a superclass name and `swimming' is a randomly selected positive attribute of the class `beach'. Further, we combine ALSKA and (CL)SKA by constructing the input sentence of the CLIP text encoder as `a photo of a \{attribute\} \{class name\}.', where \{attribute\} is one of the activated attributes of \{class name\}. As shown in Table~\ref{table:alsa}, we can see that: (1) Both class-level semantic augmentation ((CL)SKA) and attribute-level semantic augmentation (ALSKA) can supply discriminative information, which demonstrates the effectiveness of learning from text-based external knowledge; (2) Compared with (CL)SKA, ALSKA remains limited by the CLIP text encoder; how to exploit fine-grained information from foundation models needs further study, and we leave this for future work. \paragraph{Semantic Knowledge Augmentation with Other Text Encoders.} FZSL benefits from a large-scale pretrained text encoder, so we are naturally interested in whether other language models or vision-language pretrained models can bring similar benefits. We therefore compare two large-scale language models, BERT~\cite{devlin2018bert} and RoBERTa~\cite{liu2019roberta}, and the text encoder of a vision-language pretrained model, DeFILIP~\cite{cui2022democratizing}. BERT and RoBERTa are bidirectional encoders trained on 16GB and 161GB text corpora respectively. DeFILIP is a variation of CLIP~\cite{radford2021learning} which aims to exploit fine-grained information in a more data-efficient manner. All three encoders compute the embedding of the whole input sentence, and we feed in the same sentence as in our SKA. As shown in Table \ref{table:exp_textEmb}, we can see that: (1) Both LM and VLP text encoders bring benefits compared with the baseline (except the LMs on CUB), which demonstrates the effectiveness and generality of the proposed SKA structure.
(2) FZSL with VLP text encoders achieves better results than with LMs, mainly because these models are pretrained with images and are thus better suited to aligning the visual and semantic distributions. (3) DeFILIP, a fine-grained variation of CLIP, achieves the best results among the different text encoders. Interestingly, we find that DeFILIP with attribute-level SKA can achieve 59.8\% and 65.6\% on CUB and SUN respectively (cf. 58.2\% and 64.2\% on CUB and SUN with class-level SKA), which implies that the fine-grained information in DeFILIP can be further exploited with an appropriate mining method. \subsection{Further Analysis and Discussion} \paragraph{Client Number $K$.} Fig.~\ref{fig:client_num} compares central server aggregation with different numbers of local clients, where $K{=}1, 2$ and $4$ means the seen classes of the dataset are randomly and evenly split across 1, 2 and 4 clients, respectively. We can see that the FZSL performance decreases as the number of clients increases, which implies greater difficulty when more clients each hold less data variety. \paragraph{Client Fraction $S$.} Fig.~\ref{fig:client_frac} compares FZSL with different client fractions. We can see that a smaller client fraction is inferior to collaboration among a larger fraction of clients, which demonstrates that collaboration among multiple clients further contributes to the generalisation ability of the server model. \paragraph{Client Local Step $E$.} Fig.~\ref{fig:local_step} compares FZSL with different numbers of client local steps $E$, which influence the communication efficiency. Overall, the performance on the different datasets shows relatively stable trends, whilst on SUN the performance decreases when $E$ increases due to the accumulation of biases in the local clients. \section{Conclusion} In this work, we introduced a new Federated Zero-Shot Learning paradigm to explore mid-level semantic knowledge transfer for federated learning.
We formulate a baseline model based on conventional zero-shot learning and federated learning, and then further improve it with selective module aggregation and semantic knowledge augmentation. Extensive experiments on five zero-shot learning benchmark datasets demonstrate the effectiveness of our approach.
\section{Introduction} The generation of a compressed pulse marked a paradigm shift in optics \cite{Strickland1985,Jones2000,Udem2002}, enabling the realization of attosecond experiments \cite{Brabec2000,Cavalieri2007} as well as the synthesis of arbitrary optical waveforms \cite{Shelton2001,Rausch2008} down to the limit of a single cycle of light \cite{Krauss2010}. Similarly, shaped microwave pulses have been widely used in nuclear magnetic resonance to dynamically control a classical \cite{Freeman1998} or a quantum \cite{Vandersypen2005} state. Extending this idea to the field of acoustics, only periodic waveforms with arbitrary shapes have been realized so far \cite{Schulein2015}. To synthesize truly arbitrary phononic waveforms, however, it is necessary to generate an on-demand acoustic pulse in the single-cycle limit. Propagating phonons in the form of surface acoustic waves (SAWs) are widely used in the telecommunication industry and have recently found increasingly impressive applications in quantum science \cite{Delsing2019,Gustafsson2014,Satzinger2018,Bauerle2018,Hsiao2020}. A particularly promising example is SAW-driven quantum experiments in solid-state devices with single flying electrons \cite{Hermelin2011,McNeil2011,Stotz2005,Sanada2013,Bertrand2016,Takada2019,Jadot2021}. Owing to piezoelectric coupling, a SAW is accompanied by an electric potential that allows single-shot transport of an electron between distant surface-gate-defined quantum dots (QDs) \cite{Hermelin2011,McNeil2011}. The acousto-electric approach allows highly efficient single-electron transfer along coupled quantum rails approaching macroscopic scales \cite{Takada2019} and in-flight preservation of spin entanglement \cite{Jadot2021}. These properties make SAW-driven electron transport a technique that is promising for proof-of-principle demonstrations of quantum-computing implementations \cite{Barnes2000,Schuetz2015,Bauerle2018}.
However, sound-driven single-electron transport has an intrinsic limitation related to the large spatial extent of the SAW train. The quantum state of the flying electron can be disturbed by the SAW modulation during the dwell time in the stationary QDs \cite{Bertrand2016}. Because of the many potential minima accompanying the SAW (typically hundreds), it is furthermore difficult to transport the flying electron with accurate timing. To overcome the latter problem, a triggered SAW-driven sending process has been developed \cite{Takada2019}. Requiring one radio-frequency line and one picosecond-voltage-pulse channel per QD, this method is limited to a few electron sources and is thus not scalable. In addition, the triggering technique introduces unwanted electromagnetic crosstalk and potential charge excitation. Alternatively, replacing the periodic SAW train with a single-cycle acoustic pulse would deliver an elegant sending approach that brings less perturbation and naturally enables synchronized transport from a basically unlimited number of sources. Here, we present a chirp-synthesis technique to generate on demand a single, strongly compressed acoustic pulse. To determine the shape of the engineered SAW, we perform time-resolved measurements with a broadband interdigital transducer (IDT) as SAW detector. By comparing the experimental data with numerical simulations based on an impulse-response model, we assess the reliability of the synthesis method and outline a path toward maximum pulse compression. We then employ the acousto-electric chirped pulse to transport a single electron between distant quantum dots and evaluate the transport efficiency. Triggering the SAW-driven sending process with a picosecond voltage pulse, we then investigate whether the electron is fully confined in the central minimum of the chirped pulse.
Finally, we apply a superposition of phase-shifted chirp signals to demonstrate the emission of multiple pulses with precise control of their time delay. \section{Pulse compression via chirp synthesis} A SAW emitted by an IDT is uniquely determined by its electrode design \cite{Morgan,Ekstrom2017,Dumur2019,Lima2003}. Changing the unit-cell pattern allows, for instance, the generation of higher SAW harmonics for the formation of periodic waveforms of arbitrary shapes \cite{Schulein2015}. The conceptual generalization of this Fourier-synthesis approach is the emission of a \textit{solitary} SAW pulse. It can be achieved by the so-called chirp IDT, whose frequency response is determined by its gradually changing cell periodicity $\lambda_n$. This nonuniform design has been extensively used in analog electronic filters \cite{Court1969,Atzeni1975} and in radar technologies \cite{Klauder1960}. In quantum applications, this approach has so far been mainly employed to broaden the IDT's passband \cite{Weiss2018b}. However, the chirp design can also be employed in an inverse manner---similar to the formation of an ultrashort laser pulse \cite{Strickland1985}---to superpose a quasi-continuum of many elementary SAWs with gradually changing wavelength into a single, distinct acoustic pulse. In this work, we aim at the emission of a solitary SAW pulse approaching the form of a Dirac $\delta$ function. Mathematically, it is approximated via the superposition of a discrete set of frequencies $f_n$, \begin{equation} \delta(t) \propto \int_{-\infty}^{\infty} e^{i\;2\pi\cdot f\cdot t} df \approx \sum_{n=1}^{N} e^{i\;2\pi\cdot f_n\cdot t}, \label{equ:delta} \end{equation} which is mostly destructive, except around $t=0$ where all elementary waves are in phase and thus interfere constructively.
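The superposition of Eq.~(\ref{equ:delta}) is easy to verify numerically: summing $N$ evenly spaced frequency components produces a sharp peak at $t=0$ and near-cancellation elsewhere. The band and $N$ below match the chirp IDT used in this work; the time window is illustrative.

```python
import numpy as np

# Evenly spaced set of N elementary frequencies
f1, fN, N = 0.5e9, 3.0e9, 167
f = np.linspace(f1, fN, N)

# Superpose the elementary waves on a time axis around t = 0
t = np.linspace(-20e-9, 20e-9, 4001)
pulse = np.real(np.exp(2j * np.pi * np.outer(t, f)).sum(axis=1)) / N

# All components are in phase only at t = 0, where the peak forms
peak_index = int(np.argmax(pulse))
```

Away from $t=0$ the envelope follows a Dirichlet-kernel-like shape, so the residual amplitude scales roughly as $1/N$, illustrating why a large number of cells yields a strongly compressed pulse.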
The central idea for synthesizing such a SAW pulse with a chirp-IDT design is to subsequently drive this set of elementary waves with frequencies $f_n$ [see Eq.~(\ref{equ:delta})] according to its gradually changing cell periodicity $\lambda_n=v_{\textrm{SAW}}/f_n$, where $v_{\textrm{SAW}}$ denotes the SAW velocity. Applying an input signal \begin{equation} V_{\rm S}(t) \propto \sin\left(2\pi\int_0^t f(\tau) d\tau+\phi_0 \right), \label{equ:input} \end{equation} with properly chosen frequency modulation $f(t)$, the chirp transducer allows us to excite the elementary waves with frequency $f_n$ at the right timing to achieve the desired superposition. Owing to the largely linear SAW dispersion---see Appendix~\ref{suppl:disp}---the shape of the emitted pulse remains unchanged during propagation. The design of the chirp IDT is determined by the set of frequencies $f_n$. A natural choice for $f_n$ is an evenly spaced set, \begin{equation} f_n = f_1+(n-1)\cdot\Delta f,\hspace{1cm}\Delta f = \frac{f_N-f_1}{N-1}, \label{equ:evfreq} \end{equation} leading to the following recurrence relation for the cell periodicity: \begin{equation} \lambda_{n+1} = \frac{1}{\frac{1}{\lambda_n}+\frac{\Delta f}{v_\textrm{SAW}}}. \label{equ:recur} \end{equation} With this chirp geometry, maximal pulse compression is achieved by applying an input signal---see derivation in Appendix~\ref{suppl:deriv}---with a frequency modulation that follows an exponential course: \begin{equation} f(t) = f_1 \cdot e^{\Delta f \cdot t}. \label{equ:fmod} \end{equation} \begin{figure}[b] \includegraphics[width=70mm]{fig1.pdf} \caption{Experimental setup. Schematic of the chirp IDT launching a compressed SAW towards a quantum rail and a subsequent broadband SAW detector. We show a perspective view of the sample, which is realized via metallic surface gates on a GaAs/AlGaAs heterostructure.
Inset: comparison between the periodic SAW modulation from regular transduction (dotted gray line) with the ideal SAW profile for electron transport consisting of a single propagating minimum (red line). \label{fig:setup} } \end{figure} \begin{figure*}[t] \includegraphics[width=140mm]{fig2.pdf} \caption{SEM images of the chirp IDT with zooms (insets) in the regions of large (left panel) and small (right panel) periodicity of the interlocked electrodes. \label{fig:chirp} } \end{figure*} \section{Experimental setup} To perform single-electron-transport measurements with the SAW pulse, we employ the experimental setup sketched in Fig.~\ref{fig:setup}. The sample consists of a quantum rail that is sandwiched between a chirp IDT and a SAW detector. By properly driving the chirp transducer with an input signal $V_{\rm{S}}$, a single propagating SAW minimum is emitted. When the acoustic pulse passes the quantum rail, the accompanying potential modulation forms a moving QD, which we use to transport an electron in a single shot from one QD to the other. The SAW detector that is placed after the quantum rail---see Appendix ~\ref{suppl:detect}---allows us to time-resolve the SAW profile via the induced voltage $V_{\rm D}$, and thus to verify its shape {\it in situ}. The experiment is performed at a temperature of about 20~mK in a $^3\textrm{He}/^4\textrm{He}$ dilution refrigerator. We use a Si-modulation-doped GaAs/AlGaAs heterostructure grown by molecular beam epitaxy (MBE). The two-dimensional electron gas (2DEG) is located 110~nm below the surface, with an electron density of $n\approx2.8\times 10^{11}$~cm$^{-2}$ and a mobility of $\mu\approx 9\times10^5$~cm$^{2}$V$^{-1}$s$^{-1}$. Metallic surface gates (Ti 3~nm, Au 14~nm) define the nanostructures. We apply a voltage of 0.3~V on all Schottky gates during cooldown. At low temperatures, the 2DEG below the transport channel and the QDs is completely depleted via a set of negative voltages applied on the surface gates. 
The surface electrodes of the IDTs are fabricated using standard electron-beam lithography followed by thin-film evaporation (metallization Ti 3~nm, Al 27~nm). A detailed fabrication recipe is provided in Appendix~\ref{suppl:fab}. To reduce internal reflections at resonance, we employ a double-electrode pattern for the transducers. All IDTs have an aperture of 30~\textmu m, with the SAW propagation direction along $[1\bar{1}0]$. The IDTs are designed and simulated with the home-built open-source Python library ``idtpy'' \cite{idtpy}. We verify the linearity of the SAW dispersion in the frequency range of 1 to 8~GHz by employing regular transducers ($\lambda_n=\lambda_0$) on a GaAs substrate (see Appendix~\ref{suppl:disp}). From this investigation, we deduce the SAW velocity $v_{\rm{SAW}} = (2.81 \pm 0.01)$~\textmu m/ns at ambient temperature. To characterize the frequency response of the transducer, we measure the transmission $S_{21}$ between two identical IDTs facing each other using a vector network analyzer (Keysight E5080A). To remove parasitic signals from reflections at the sample boundaries, the transmission data are Fourier transformed, cropped in the time domain to the range of 300 to 600~ns (expected arrival of the first transient around 310~ns), and then transformed back into the frequency domain. For the time-resolved measurements of the SAW profile, we employ an arbitrary waveform generator (AWG, Keysight M8195A) to provide the input signal $V_{\rm{S}}$ of the chirp IDT. We record the induced voltage $V_{\rm D}$ on the detector IDT via a fast sampling oscilloscope (Keysight N1094B DCA-M). To amplify the input and detection signals, $V_{\rm S}$ and $V_{\rm D}$, broadband amplifiers (SHF S126A) are placed along the transmission lines connected to the respective IDTs. 
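The time-gating step described above can be sketched as follows: the $S_{21}$ spectrum is transformed to the time domain, only the window containing the direct SAW transient is kept, and the result is transformed back. The spectrum below is synthetic (a direct transient at 310~ns plus a spurious boundary echo at 700~ns), standing in for measured data; the gate window matches the 300 to 600~ns range quoted in the text.

```python
import numpy as np

# Synthetic S21: direct SAW transient at 310 ns plus a boundary echo at 700 ns.
df_step = 0.001                                   # GHz frequency step
freqs = 0.1 + df_step * np.arange(3900)           # GHz
t_direct, t_echo = 310.0, 700.0                   # ns
s21 = (np.exp(-2j * np.pi * freqs * t_direct)
       + 0.5 * np.exp(-2j * np.pi * freqs * t_echo))

# Frequency -> time over the measured band.
impulse = np.fft.ifft(s21)
t = np.arange(len(s21)) / (len(s21) * df_step)    # ns, spacing 1/(N*df)

# Gate: keep only the 300-600 ns window with the direct transient.
gate = (t >= 300.0) & (t <= 600.0)
gated = np.where(gate, impulse, 0.0)

# Time -> frequency: cleaned transmission spectrum, echo removed.
s21_clean = np.fft.fft(gated)
```

With the echo outside the gate, `s21_clean` reduces to the direct-path contribution alone, which is the purpose of the cropping procedure.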
As for the transmission data, we apply a Fourier filter to the time-resolved data in the range of 0.4 to 3.5~GHz in order to suppress parasitic contributions from internal higher harmonics of the AWG, the amplifier responses, airborne capacitive coupling, and standing waves in the rf lines. \section{Generation of an acoustic chirped pulse} The synthesis of the strongly compressed acousto-electric pulse is performed with a chirp transducer, as shown in the scanning-electron-microscopy (SEM) image in Fig.~\ref{fig:chirp}. It consists of $N=167$ cells ranging from $f_1 = $ 0.5~GHz to $f_N = $ 3~GHz, with the cell periodicity gradually changing from $\lambda_1\approx$ 5.56~\textmu m to $\lambda_N\approx$ 0.92~\textmu m according to Eq.~(\ref{equ:recur}). The transmission spectrum of the chirp IDT allows us to benchmark its quality via the shape of the passband. Figure~\ref{fig:vna} shows transmission data from a measurement at ambient temperature (black) and the expectation from a delta-function model \cite{Tancrell1971} (gray; see Appendix~\ref{suppl:dfmodel}). We observe a continuous spectrum over a broad frequency range that is defined by the varying finger periodicity $\lambda_n$. The flatness of the chirp IDT's passband is mainly achieved by the light electrode material (aluminium), which mitigates resonance shifts from mass loading \cite{Morgan}. The good agreement between experiment and simulation reflects the well-controlled nanofabrication process. \begin{figure} \includegraphics[width=70mm]{fig3.pdf} \caption{Frequency response. Transmission measurement between opposing chirp IDTs (black) with simulation via the delta-function model (gray) and indications of the passband ranging from $f_1\approx$ 0.5~GHz to $f_N\approx$ 3.0~GHz. Note that the transmission bandwidth of the radio-frequency lines is not considered in the simulation. \label{fig:vna} } \end{figure} \begin{figure} \includegraphics[width=70mm]{fig4.pdf} \caption{Time-resolved measurements. 
(a) Trace of the detector response for the frequency-modulated input signal applied on the chirp IDT with $t_\textrm{S}\approx 120$~ns. After the initial electromagnetic crosstalk ($t=0$~ns), the SAW arrives at the expected delay time $t_\textrm{D}$. (b) Zoom-in on the time window around $t_\textrm{D}$, showing the detected voltage pulse (black) with an impulse-response simulation (gray) and the corresponding SAW shape (red; with offset and arbitrary units) derived via deconvolution of the detector response. \label{fig:trace} } \end{figure} Having outlined the basic properties of the chirp IDT, let us now employ it for single-shot pulse generation. For maximal pulse compression, we apply an input signal that follows Eqs.~(\ref{equ:input}) and (\ref{equ:fmod}) with a duration $t_{\rm{S}}$ matching the SAW-propagation time along the transducer, $t_N\approx 120$~ns---see Appendix~\ref{suppl:dev}. The measured time-resolved response of the SAW detector at room temperature is shown in Fig.~\ref{fig:trace}(a). We observe an initial electromagnetic crosstalk (at time $t=0$~ns) followed by a SAW-related response that appears at the expected delay $t_{\rm{D}}$. The clear contrast between the input-signal duration $t_{\rm{S}}$ and the narrow SAW signal confirms the successful compression. Zooming in on the arrival window [see Fig.~\ref{fig:trace}(b)], we observe a narrow response that follows the shape expected from the impulse-response model \cite{Hartmann1988} (see Appendix~\ref{suppl:irmodel}). A slight asymmetry arises from a phase offset $\phi_0\approx\pi/2$ introduced by the amplifier. Given the agreement between experiment and simulation, we can extract the actual SAW shape (red line with offset) via deconvolution of the detector-response function. We find that the actual SAW profile has much flatter sidelobes than the signal $V_{\rm D}$ on the broadband detector. 
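The deconvolution step can be illustrated with a generic frequency-domain (Wiener-regularized) sketch. The detector response and "true" SAW profile below are toy models, not the response function derived in the appendix; the point is only the mechanics of recovering the SAW shape from the measured voltage.

```python
import numpy as np

# Toy signals: a Gaussian "true" SAW profile and a synthetic detector response.
n = 1024
t = np.arange(n) * 0.01                                  # ns
saw = np.exp(-((t - 3.0) / 0.2) ** 2)                    # assumed SAW profile
r = np.exp(-t / 0.5) * np.sin(2 * np.pi * 2.0 * t)       # assumed detector response

# Measured detector voltage: convolution of SAW profile with the response.
v_d = np.fft.ifft(np.fft.fft(saw) * np.fft.fft(r)).real

# Wiener-regularized deconvolution: division by R(f), with a small floor
# to avoid amplifying noise where the detector response vanishes.
R = np.fft.fft(r)
eps = 1e-3 * np.max(np.abs(R)) ** 2
saw_rec = np.fft.ifft(np.fft.fft(v_d) * np.conj(R) / (np.abs(R) ** 2 + eps)).real
```

The regularization constant `eps` trades reconstruction fidelity against noise amplification; for the noiseless toy data here the recovered profile tracks the input closely.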
\section{Single-electron transport} \begin{figure*} \includegraphics[width=140mm]{fig5.pdf} \caption{Electron shuttling along the quantum rail. (a) SEM image of the quantum rail consisting of two surface-gate-defined quantum dots (QDs) that are connected via a depleted transport channel. We further show the quantum-point-contact (QPC) electrometers that are placed next to each QD to sense the presence of electrons. (b) Time-dependent measurement of the chirped pulse in the cryogenic setup via the SAW detector (black). The offset red line shows the corresponding SAW profile extracted via deconvolution of the detector's response function. (c) Histogram of jumps in the electrometer current $\Delta I_{\rm QPC}$ at the receiver QD from 70\,000 single-shot measurements after launching the SAW pulse with (red) and without (gray) preceding loading of an electron into the source QD. (d) Table of transfer and error probabilities with the QD-occupation code of the events (source before, source after, receiver before, receiver after). (e) Successful (1001) and failed (1000) transfer probabilities as a function of the trigger delay $\tau$ of the sending process with respect to the SAW emission. \label{fig:shuttling} } \end{figure*} To demonstrate the ability of this highly compressed SAW pulse to transport an electron, let us now employ the 8-\textmu m-long quantum rail. Figure~\ref{fig:shuttling}(a) shows an SEM image of the nanoscale device. A QD is located at each end of the transport channel, serving as a single-electron source and receiver. The occupancy of each QD is monitored by the variation in the current through a nearby QPC acting as a highly sensitive electrometer. For each transport sequence, we first evacuate all electrons from the system and then load one electron into the source QD (see red point). Next, the chirp IDT is excited to emit the compressed SAW pulse, which then propagates along the quantum rail. 
If the SAW is capable of picking up the electron at the source and bringing it to the receiver, corresponding changes are detected in the electron occupancy of the source and receiver QDs. In order to optimize the SAW profile for single-electron transport in a single moving potential minimum, we exploit the input signal's phase offset $\phi_0$ [see Eq.~(\ref{equ:input})] to form an asymmetric chirped pulse. Note that an increased SAW velocity \cite{Powlowski2019} has to be taken into account for the input signal under cryogenic conditions. Analyzing the SAW profile with $\phi_0\approx 3\pi/2$, we observe a smooth ramp just before the first strongly pronounced minimum ($t<0$~ns), as shown in Fig.~\ref{fig:shuttling}(b). With this choice, electron transfer is suppressed until the arrival of the leading SAW minimum. Employing this chirped pulse to perform single-shot electron shuttling with many repetitions, we obtain the histogram of QPC-current jumps, $\Delta I_{\rm QPC}$, shown in Fig.~\ref{fig:shuttling}(c). As a reference, we also perform each transport sequence without loading an electron at the source QD (gray). The comparison of the electrometer data at the receiver QD shows sufficient contrast to clearly distinguish transport events. Moreover, the reference data allow us to quantify the number of undesired extra electrons injected into the system from outside (inflow). Figure~\ref{fig:shuttling}(d) summarizes the transfer probability and the sources of error (loading, sending, catching and inflow) from 70\,000 single-shot measurements. The overall low error rates indicate a single-electron-transfer efficiency of $(99.4\pm0.4)$\%, which is similar to the highest value achieved with a regular IDT design \cite{Takada2019}. Let us now focus on the question of where exactly within the compressed SAW pulse the electron is transferred. 
For this purpose, we employ a fast voltage pulse injected via a bias tee on a gate of the source QD [see Fig.~\ref{fig:shuttling}(a)] to trigger the sending process with the SAW \cite{Takada2019}. In this experiment, the potential landscape of the source QD is set such that the initially loaded electron is protected when the acoustic wave passes. By triggering a picosecond voltage pulse, the potential is temporarily lifted to load the electron into the moving SAW. By sweeping the time delay $\tau$ of this trigger, we successively address each position along the SAW pulse in an attempt to transfer the electron. Figure~\ref{fig:shuttling}(e) shows transfer-probability data from such a measurement. We observe three transfer peaks that emerge in congruence with the potential minima of the SAW profile. The highest transfer probability (code 1001) appears at the first peak ($\tau=0$~ns), which corresponds to the deepest minimum of the SAW pulse. Its magnitude of 97\% sets a lower bound on the probability that the electron is emitted on arrival of this moving potential minimum at the source QD. In order to investigate whether the electron stays at this position as it propagates along the quantum rail, it is insightful to also look at the unsuccessful transfer events (code 1000). The strongly increased error of more than 40\% at the third peak indicates that this shallow minimum plays a negligible role for reliable transport, since without a sending trigger this error is only 0.4\%. Estimating an amplitude of $(19\pm3)$~meV for the first acoustic minimum---see Appendix\,\hyperref[suppl:comp]{H}---the currently employed SAW confinement is slightly below the 95\%-confinement threshold of approximately $24$~meV \cite{Edlbauer2021}. Therefore, we cannot exclude transitions into the second minimum ($\tau\approx0.4$~ns) during transport. However, we anticipate reinforcement of single-minimum confinement via increased input-signal power and an enhanced transducer design. 
We further evaluate the orbital level spacing by approximating the acoustic minimum with a parabolic potential \cite{Ciftja2009}. For instance, if the frequency range is raised to 6~GHz, we find an increase of the energy spacing from 2~meV to 3~meV. Accordingly, we expect that the reinforcement of the SAW confinement will also enable the loading of a single electron into the ground state and transport without excitation \cite{Kataoka2009,Takada2019,Ito2021}---two conditions that are essential for the realization of SAW-driven flying electron qubits. \section{SAW engineering} The wide-ranging linearity of the SAW dispersion opens up a flexible platform to engineer arbitrary nanomechanical waveforms using a single chirp IDT. Multiple $\delta$ pulses can be superposed via overlaid input signals $V_p(t)$ with deliberately chosen delays ($\Delta t_p$), phases ($\phi_p$), and amplitudes ($A_p$): \begin{equation} V(t) = \sum_{p=1}^P A_p \cdot V_p(t+\Delta t_p, \phi_p). \end{equation} Following this approach, a sawtooth shape can be achieved, for instance, by superimposing uniformly delayed pulses with linearly decreasing amplitude. For the sake of simplicity, let us demonstrate this wave-engineering method by means of two pulses ($P=2$) with arbitrary delay. A relevant application of such a synthesis is the sequential transport of a pair of entangled electrons to observe spin-interference patterns \cite{Jadot2021}. Figure~\ref{fig:engineering} shows the SAW profile from time-dependent measurements of such an engineered waveform. Two identical pulses appear, separated by the chosen delay $\Delta t$. Note that the halving of the pulse amplitude compared to the single-pulse case ($\Delta t=0$) is expected, since the amplitude scales inversely with the number of superposed signals $P$ (for $\Delta t<t_N$). 
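The superposition of drive signals can be sketched numerically: two chirped inputs following Eqs.~(\ref{equ:input}) and (\ref{equ:fmod}) are overlaid with a chosen delay and the $1/P$ amplitude normalization noted above. The 3~ns delay and zero phase offsets below are illustrative choices, not the measured settings.

```python
import numpy as np

f1, fN, N = 0.5, 3.0, 167                  # GHz, chirp design values from the text
df = (fN - f1) / (N - 1)                   # GHz
t_s = np.log(fN / f1) / df                 # ns, input-signal duration (~119 ns)
t = np.arange(0.0, t_s + 10.0, 0.005)      # ns

def v_pulse(t, dt_p=0.0, phi_p=0.0):
    """Chirped input V_p(t + dt_p, phi_p); phase = 2*pi*int_0^t f(tau) dtau + phi."""
    ts = t + dt_p
    phase = 2.0 * np.pi * f1 * (np.exp(df * ts) - 1.0) / df + phi_p
    # The signal is only defined over the chirp duration [0, t_s].
    return np.where((ts >= 0.0) & (ts <= t_s), np.sin(phase), 0.0)

# P = 2 pulses with equal amplitudes A_p = 1/P and a 3 ns relative delay.
P = 2
v = sum((1.0 / P) * v_pulse(t, dt_p, 0.0) for dt_p in (0.0, -3.0))
```

Because the phase integral of Eq.~(\ref{equ:fmod}) has the closed form $f_1(e^{\Delta f\,t}-1)/\Delta f$, each delayed copy can be generated analytically before summation.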
In order to achieve sufficient amplitude of the engineered waveform, it is thus crucial to maximize the IDT length (via the number of cells $N$) and the bandwidth ($B=f_N-f_1$), which also provides maximal pulse compression---see Appendix~\ref{suppl:rules}. Owing to the linear SAW dispersion, the shape of the generated pulses is independent of the delay between the input signals $V_1$ and $V_2$. This precise time control of $\delta$ pulses lays the groundwork for the on-demand emission of arbitrary nanomechanical waveforms. \begin{figure}[h] \includegraphics[width=70mm]{fig6.pdf} \caption{Acousto-electric wave engineering. Time-resolved measurements from the SAW detector for an input signal consisting of two superposed pulses with different time delays $\Delta t=\Delta t_2-\Delta t_1\in\{0,3,6\}$~ns. \label{fig:engineering} } \end{figure} \section{Summary and outlook} In conclusion, we have demonstrated an original SAW-engineering method to generate a solitary acousto-electric $\delta$ pulse in a single shot. We implemented the concept using a chirp transducer operating in the frequency band of 0.5 to 3~GHz. Our investigations showed that chirp synthesis is a highly controlled technique allowing reliable acoustic pulse shaping by design. Demonstrating a single-electron-transport efficiency exceeding 99\%, we confirmed robust potential confinement for SAW-driven quantum transport. With the confinement location during flight confirmed, this chirped acoustic wavefront thus represents a scalable alternative for synchronized and unambiguous SAW-driven single-electron transport from multiple sources. This technique is compatible with all the essential building blocks developed for SAW-driven flying electron qubits, such as on-demand single-electron partitioning \cite{Takada2019}, time-of-flight measurements \cite{Edlbauer2021} and electron-spin transfer \cite{Bertrand2016, Jadot2021}. 
The nonuniform IDT design makes it possible to engineer arbitrary combinations of superposed pulses, which is highly relevant for experiments where multiple charges are transferred successively \cite{Jadot2021}. Accordingly, we expect that the chirp approach opens up new routes for quantum experiments on interference and entanglement exploiting the spin and charge degrees of freedom of single flying electrons \cite{Bauerle2018,Barnes2000,Schuetz2015}. We emphasize that chirp synthesis is readily applicable to other piezoelectric platforms such as LiNbO$_3$ or ZnO. Further enhancement of the power density is achievable by integrating the concept of unidirectional \cite{Ekstrom2017,Dumur2019} or focusing \cite{Lima2003} designs into the chirp transducer. Additionally, the frequency band, and thus the pulse compression, is easily adjustable via the electrode periodicity of the chirp IDT. Finally, owing to the wide-ranging applications of propagating phonons in fundamental research \cite{Delsing2019,Gustafsson2014,Satzinger2018,Bauerle2018,Hsiao2020,Midolo2018,Yokoi2020,Kobayashi2017,Yokouchi2020,Chen2021}, the demonstrated acoustic pulse is not restricted to the field of quantum information processing. In spintronics, for instance, our chirp-synthesis technique opens the way to on-demand generation of spin-current pulses in nonferromagnetic materials \cite{Kobayashi2017}. Employing two compressed acoustic pulses with a controlled time delay and opposite phases further allows investigations of spin-current formation in the time domain. Moreover, since SAWs can create skyrmions in thin-film samples without Joule heating \cite{Yokouchi2020}, a solitary acoustic pulse could be the key to creating a single skyrmion and performing manipulations at the single-shot level. 
In metrology applications, the accuracy of SAW-driven electron pumps \cite{Cunningham2000} is currently limited by the overlap between the electromagnetic crosstalk and the acoustic signal \cite{Kataoka2006}. By emitting compressed pulses at a controlled repetition rate, such single-electron pumps can be significantly enhanced in performance and easily operated in parallel without additional radio-frequency lines. Similarly, phonons can also stimulate single-photon emission in a hybrid quantum-dot--nanocavity system \cite{Weiss2016}. Since each SAW period contributes to the creation of photons, our technique would allow on-demand single-photon emission with precise timing. In summary, in analogy to the advantages of solitary optical \cite{Strickland1985,Jones2000,Udem2002,Brabec2000,Cavalieri2007,Shelton2001,Rausch2008,Krauss2010} and microwave pulses \cite{Freeman1998,Vandersypen2005}, we anticipate that the presented compression technique will open new routes for fundamental research employing nanoscale acoustics. \begin{acknowledgments} J.W. acknowledges the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Grant Agreement No. 754303. A.R. acknowledges financial support from ANR-21-CMAQ-0003, France 2030, Projet QuantForm-UGA. T.K. and S.T. acknowledge financial support from JSPS KAKENHI Grant No. 20H02559. C.B. acknowledges funding from the European Union's H2020 research and innovation program under Grant Agreement No. 862683 and from the French Agence Nationale de la Recherche (ANR), project QUABS ANR-21-CE47-0013-01. A.D.W. and A.L. acknowledge support from TRR 160/2-Project B04, DFG 383065199, the German Federal Ministry of Education and Research via QR.X Project 16KISQ009, and the DFH/UFA CDFA-05-06. \end{acknowledgments}
\section{Local Search over Graphs} In this section we provide lower bounds on the communication complexity of local search over several families of graphs. \begin{theorem}\label{theo:opt} The following bounds hold for the randomized communication complexity of $\textsc{SumLS}$: \begin{enumerate} \item\label{theo:opt-bounded} $CC(\textsc{SumLS}(G))= \Omega(\sqrt{N})$, when $G$ is a specific constant-degree (36) graph with $N$ vertices. \item\label{theo:opt-hypercube} $CC(\textsc{SumLS}(\textsf{Hyp}_n)) = \Omega(\sqrt{N}) = \Omega(2^{n/2})$, where $N=2^n$ is the number of vertices. \item\label{theo:opt-grid} $CC(\textsc{SumLS}(H))= \Omega(\sqrt{N})$, when $H$ is a grid of constant dimension (119) with $N$ vertices. \item\label{theo:odd} $CC(\textsc{SumLS}(\textsf{Odd}_n)) = \Omega(2^{n/2})$. \end{enumerate} \end{theorem} We note that results \ref{theo:opt-bounded}, \ref{theo:opt-hypercube}, and \ref{theo:opt-grid} are optimal, since \cite{Ald} provides a randomized algorithm that finds a local maximum in these graphs using $O(\sqrt N)$ value queries. Result \ref{theo:odd}, on the other hand, is not necessarily optimal because the odd graph has $N\approx 4^{n}$ vertices, so in terms of the number of vertices our lower bound is $\Omega(\sqrt[4]{N})$. Result \ref{theo:opt-grid} proves an optimal bound for a grid of constant dimension, but this dimension is quite large (119). We are able to show that finding a local optimum in the three-dimensional grid is hard, but our lower bound in this case is only $\Omega(N^c)$, for some constant $c>0$ (in contrast to the optimal bound of $\Omega(\sqrt N)$ for the $119$-dimensional grid). To prove this, we first show that finding a local maximum is hard even for degree-$4$ graphs. 
\begin{theorem}\label{theo:grid} There exists a constant $c>0$ such that the randomized communication complexity of $\textsc{SumLS}$ satisfies: \begin{enumerate} \item\label{theo:deg-bounded} $CC(\textsc{SumLS}(G))\geq N^c$, when $G$ is a specific degree-4 graph with $N$ vertices. \item\label{theo:dim-grid} $CC(\textsc{SumLS}(\textsf{Grid}_{N\times N \times [2]}))\geq N^c$. \end{enumerate} \end{theorem} The overall structure of the proofs of Theorems \ref{theo:opt} and \ref{theo:grid} is similar, but the proofs use different techniques. The proof of Theorem \ref{theo:opt} appears in Section \ref{sec:main-pr} and the proof of Theorem \ref{theo:grid} appears in Section \ref{sec:grid-pr}. \section{Proof of Theorem \ref{theo:opt}}\label{sec:main-pr} Our starting point is a communication variant of a pebbling game. In this problem, $D=(V,E)$ is a known directed acyclic graph. The input is a Boolean assignment for the vertices $b:V \rightarrow \{0,1\}$ such that every source is true ($b(v)=1$) and every sink is false ($b(v)=0$). The output is a false vertex all of whose predecessors are true (i.e., $v\in V$ such that $b(v)=0$ and $b(u)=1$ for all $u\in V$ with $(u,v)\in E$). Note that the problem is total. The communication variant $\textsc{Pebb}(D)$ of the pebbling game is defined by distributing the information $b(v)\in \{0,1\}$ of every vertex via a constant-size index gadget $\{0,1\}^3\times [3] \rightarrow \{0,1\}$. In \citep{GP14} it is shown that there exists a constant-degree graph $D$ with $N$ vertices for which the randomized communication complexity of the problem is $\Theta(\sqrt{N})$. The proof is done in three steps. \paragraph{Step 1} We introduce an intermediate communication problem $\textsc{VetoLS}(G)$, where Alice holds the potential function and Bob holds a subset of valid vertices (equivalently, Bob vetoes the vertices that are not in the set that he holds). 
The goal is to find a valid local maximum: a valid vertex whose \emph{valid} neighbours all have (weakly) lower potential. Given a graph $D$ as above, we construct a constant-degree graph $G$ with $O(N)$ vertices and reduce $\textsc{Pebb}(D)$ to $\textsc{VetoLS}(G)$. This gives us an optimal communication lower bound for $\textsc{VetoLS}$ for a concrete graph $G$. \paragraph{Step 2} We define a certain notion of ``embedding'' of one graph into another. We show that if $G$ can be embedded in $G'$, then $CC(\textsc{VetoLS}(G'))\geq CC(\textsc{VetoLS}(G))$. We use this observation to prove an optimal hardness bound for $\textsc{VetoLS}$ over the hypercube by embedding $G$ into a hypercube of dimension $\log(N)+c$ for a constant $c$. \paragraph{Step 3} For every graph $G$, we show that $CC(\textsc{VetoLS}(G))\approx CC(\textsc{SumLS}(G))$. \vspace{0.1in}\noindent We now provide a detailed description of each step. \subsection{Starting Point: Pebbling Games}\label{sec:step0} In Section \ref{sec:emb} we use the concrete structure of the constant-degree graph $D$ for which the hardness of the pebbling game is proved. Hence, we start by providing an explicit description of the graph $D$. The vertices of $D$ are given by $V=[M^3]\times [M]\times [M] \times [M]$. Each vertex $v=(k_1,k_2,k_3,k_4)$ has six successors: $$\{v+(1,\pm 1,0,0),v+(1,0,\pm 1,0),v+(1,0,0,\pm 1)\},$$ where the $\pm 1$ addition in the last three coordinates is done modulo $M$; the addition in the first coordinate is the standard addition. Correspondingly, each vertex has six predecessors: $$\{v+(-1,\pm 1,0,0),v+(-1,0,\pm 1,0),v+(-1,0,0,\pm 1)\}.$$ The sources of the graph are $\{(1,\cdot,\cdot,\cdot)\}$ and its sinks are $\{(M^3,\cdot,\cdot,\cdot)\}$. In \citep{GP14}, an optimal bound is proved on the communication complexity of the following variant of pebbling games. 
In the communication problem $\textsc{Pebb}(D)$, Alice's input is an assignment $b:V\times [3] \rightarrow \{0,1\}$. Bob's input is an index for each vertex, $I:V \rightarrow [3]$. The input satisfies $b(v,I(v))=1$ for every source $v$, and $b(v,I(v))=0$ for every sink $v$. The output is a vertex $v\in V$ such that $b(v,I(v))=0$ and $b(u,I(u))=1$ for every predecessor $u$ of $v$. \begin{theorem}[\citep{GP14}]\label{theo:gp} $CC(\textsc{Pebb}(D))=\Omega(M^3)$. This bound also holds for randomized protocols. \end{theorem} \subsection{Step 1: From Pebbling to $\textsc{VetoLS}$}\label{sec:veto} Given a graph $G$, the communication problem $\textsc{VetoLS}(G)$ is defined as follows. Alice's input is a potential function $f:V\rightarrow [W]$. Bob's input is a non-empty subset $S\subset V$. The output is a vertex $v\in S$ such that $f(v)\geq f(w)$ for every $w\in S$ with $\{v,w\}\in E$ (i.e., for every valid neighbour). We show that the communication complexity of $\textsc{VetoLS}(G)$ is at least that of $\textsc{Pebb}(D)$, for some $G$ that is related to $D$. Next we show how to obtain the graph $G$ from $D$. We construct the graph $G$ in two stages. First, given a graph $D$ of the pebbling game, let $G'$ be an undirected version of $D$ which additionally has an edge from every source of $D$ to some sink of $D$. Let $G$ be the graph that is obtained from $G'$ by replacing each vertex of $G'$ with three new vertices and duplicating the edges so that each new vertex is connected to all the copies of its neighbours in $G'$. We call the graph $G$ the \emph{replication graph} of $D$. \begin{proposition}\label{pro:peb-ls} Let $D$ be a graph and $G$ be its replication graph. The communication complexity of $\textsc{VetoLS}(G)$ is at least that of $\textsc{Pebb}(D)$. \end{proposition} \begin{proof} Let $V$ denote the set of vertices of $D$ and $V'=V\times [3]$ be the set of vertices of $G$. Let $t:V'\rightarrow \mathbb R$ be a topological numbering of the vertices of $G$. 
That is, $t((u,i))>t((v,j))$ whenever there is a directed edge $(u,v)$ in $D$. Alice's input in $\textsc{Pebb}(D)$ is an assignment $b:V\times [3] \rightarrow \{0,1\}$. We use it to define the potential function $f$ that Alice holds in $\textsc{VetoLS}(G)$: for a vertex $v\in V'$, let $f(v)=t(v)+6N\mathds{1}_{b(v)=0}$. Bob's input in $\textsc{Pebb}(D)$ is the function $I:V \rightarrow [3]$. Bob defines the set of valid vertices in $\textsc{VetoLS}(G)$ to be $S=\{(v,I(v)):v\in V\}$. Namely, among the three copies of $v$, only the one with the correct index is valid. This choice of valid vertices has the desirable property that the subgraph of \emph{valid} vertices is precisely $G'$, and the assignment over the valid vertices is precisely the assignment $b(v,I(v))$. We argue that the local maxima of $f$ are precisely the false vertices all of whose incoming neighbours are true. Those are indeed local maxima, because their ``predecessors'' do not have the bonus of $6N$ and their ``successors'' have lower topological numbers. A source cannot be a local maximum because it is a ``true'' vertex and it is connected to a sink, which is a ``false'' vertex. A true vertex (other than a source) is not a local maximum because its predecessor has a higher topological number. Similarly, a false vertex with a false predecessor is not a local maximum. This leaves us only with the false vertices whose predecessors are all true. \end{proof} \subsection{Step 2: Embedding the Bounded-Degree Graph}\label{sec:emb} In this step we define a certain notion of embedding of one graph into another. We will see that if a graph $G$ can be embedded into $H$, then the communication complexity of local search on $H$ is essentially at least as large as the communication complexity of local search on $G$. We will then see how to embed the graph $G$ of the previous steps into the three-dimensional grid, the hypercube, and the odd graph. 
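The potential construction in the proof above can be checked numerically on a toy instance. The sketch below strips away the index-gadget/replication layer for simplicity: on a small layered DAG with an edge from every source to some sink, the local maxima of $f(v)=t(v)+6N\cdot\mathds{1}_{b(v)=0}$ over the undirected graph are exactly the false vertices all of whose predecessors are true. The layer and width parameters are arbitrary illustrative choices.

```python
import random

random.seed(0)
L, W = 5, 3                                       # layers and width of the toy DAG
V = [(k, i) for k in range(L) for i in range(W)]
E = [((k, i), (k + 1, j)) for k in range(L - 1)
     for i in range(W) for j in range(W)]         # directed, layer-to-layer edges

und = {v: set() for v in V}                       # undirected version ...
for u, v in E:
    und[u].add(v)
    und[v].add(u)
for i in range(W):                                # ... plus source-to-sink edges
    und[(0, i)].add((L - 1, i))
    und[(L - 1, i)].add((0, i))

preds = {v: {u for (u, w) in E if w == v} for v in V}
t_num = {v: -v[0] for v in V}                     # topological numbering: t(u) > t(v) on edges

for _ in range(200):
    b = {v: random.randint(0, 1) for v in V}
    for i in range(W):
        b[(0, i)] = 1                             # sources are true
        b[(L - 1, i)] = 0                         # sinks are false
    # Potential: topological number plus a large bonus on false vertices.
    f = {v: t_num[v] + 6 * len(V) * (b[v] == 0) for v in V}
    loc_max = {v for v in V if all(f[v] >= f[w] for w in und[v])}
    target = {v for v in V if b[v] == 0 and all(b[u] == 1 for u in preds[v])}
    assert loc_max == target and target           # claim of the proof, and totality
```

The random trials exercise all the cases enumerated in the proof (true vertices, false vertices with false predecessors, sources and sinks) on this small graph.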
\begin{definition}\label{def:vied} A \emph{vertex-isolated edge-disjoint (VIED) embedding} of a graph $G=(V_G,E_G)$ in a graph $H=(V_H,E_H)$ is a pair of mappings $\varphi : V_G \rightarrow V_H$ and $\chi: E_G \rightarrow P(H)$, where $P(H)$ is the set of \emph{simple paths} in $H$, such that: \begin{itemize} \item $\varphi$ is injective. \item For every edge $\{v,w\}\in E_G$, the path $\chi(\{v,w\})$ connects $\varphi(v)$ to $\varphi(w)$. \item For distinct edges $\{v,w\}\neq\{v',w'\}$, the interior vertices of the paths $\chi(\{v,w\})$ and $\chi(\{v',w'\})$ are disjoint (edge disjointness). \item For every $v\in V_G$ and every $\{w,w'\}\in E_G$ with $v\neq w,w'$, it holds that $d(\varphi(v),\chi(\{w,w'\}))\geq 2$, where $d$ denotes the distance in $H$ between the vertex and the path (vertex isolation). \end{itemize} \end{definition} That is, in a VIED embedding every edge of $G$ is replaced by a path in $H$ that connects the corresponding vertices, such that these paths do not share interior vertices. Moreover, for every $v\in V_G$, $\varphi(v)$ is isolated in the sense that no path passes through the neighbours of $\varphi(v)$. \begin{lemma}\label{lem:emb} Let $G$ be a graph and suppose it can be VIED-embedded into some other graph $H$. Then $CC(\textsc{VetoLS}(G))\leq CC(\textsc{VetoLS}(H))$. \end{lemma} \begin{proof} Alice's potential is defined as follows. For vertices in $\varphi(V_G)$ we define $f_H(\varphi(v))=f_G(v)$. Consider a vertex $w\in \chi(E_G)$ that belongs to an edge $\{u,v\}\in E_G$. Suppose that $w$ is the $k$'th element in the path $\chi(\{u,v\})$ (ordered from $\varphi(v)$ to $\varphi(u)$) and $l$ is the total length of this path. Define: \begin{align}\label{eq:edge} f_H(w)=\frac{k}{l} f_G(u) + \frac{l-k}{l}f_G(v). \end{align} At all other vertices Alice's potential will not play a role, because these vertices will not be valid; thus we can simply set $f_H(w)\equiv 0$ there. We recall that Bob's input in $\textsc{VetoLS}(G)$ is $S_G\subset V_G$. We denote by $E_G(S_G)\subset E_G$ the set of internal edges of $S_G$. 
Bob's subset of valid vertices in $H$ is defined by\footnote{By $\chi(E_G(S_G))$ we mean the corresponding \emph{vertices} in these paths.} $S_H=\varphi(S_G) \cup \chi(E_G(S_G))$. If $v\in V_G$ is a valid local maximum, then $\varphi(v)\in V_H$ is a valid local maximum, because all its valid neighbours lie on paths of valid edges in which $v$ participates (here we use the isolation property), and the value along such a path is a weighted average of $f_G(v)$ and $f_G(u)\leq f_G(v)$, where $u$ is a valid neighbour of $v$. We argue that there are no additional valid local maxima in $H$. Indeed, if $v\in V_G$ is not a local maximum, then $\varphi(v)\in V_H$ is not a local maximum because there is a valid edge along which the potential increases. If $w\in \chi(E_G(S_G))$ lies on the path of an edge $\{u,v\}$, then, by distinctness of the potential values, $f_G(u)\neq f_G(v)$, and therefore in one of the directions of the path $\chi(\{u,v\})$ the potential increases. All other vertices are invalid. \end{proof} \subsubsection{An Explicit Description of the Graph G} In the embeddings we use the specifics of the DAG $D$ for which the hardness of pebbling games is proved in \cite{GP14}. We now explicitly describe the replication graph $G$ that is obtained from $D$, so that Proposition \ref{pro:peb-ls} can be applied. Let $G'$ be the undirected version of the DAG $D$ for which the hardness of pebbling games is proved, with additional edges that connect the sources and sinks of $D$ in the same way that other consecutive layers of $D$ are connected. Formally, the vertices of $G'$ are $V=[M^3]\times [M]\times [M] \times [M]$, and the edges are: $$E=\{(u,v):u-v\in \{(\pm 1,\pm 1,0,0),(\pm 1,0,\pm 1,0),(\pm 1,0,0,\pm 1)\}\}.$$ Let $G$ be the graph that is obtained from $G'$ by replacing each vertex of $G'$ with three new vertices and duplicating the edges so that each new vertex is connected to all the copies of its neighbours in $G'$. Formally, the vertices of $G$ are $\{(v,i):v\in V,i\in [3]\}$ and the edges are $\{((u,i),(v,j)):(u,v)\in E,\; i,j\in [3]\}$. 
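A small sanity check of this construction can be run in code, with $M=3$ in place of the large $M$ used in the proof: $G$ has $3M^6$ vertices and is regular of degree $36$, counting the wrap-around edges that connect sources to sinks like any other pair of consecutive layers.

```python
M = 3
layers, width = M ** 3, M

# The 12 displacements of G': first coordinate changes by +-1, and exactly
# one of the last three coordinates changes by +-1 (modulo the layer sizes).
disp = [(d1, d2, d3, d4)
        for d1 in (1, -1)
        for (d2, d3, d4) in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                             (0, -1, 0), (0, 0, 1), (0, 0, -1)]]

def neighbours(v):
    k1, k2, k3, k4, i = v
    for (d1, d2, d3, d4) in disp:
        u = ((k1 + d1) % layers, (k2 + d2) % width,
             (k3 + d3) % width, (k4 + d4) % width)
        for j in range(3):            # edges go to all three replicas
            yield u + (j,)

vertices = [(k1, k2, k3, k4, i)
            for k1 in range(layers) for k2 in range(width)
            for k3 in range(width) for k4 in range(width) for i in range(3)]

assert len(vertices) == 3 * M ** 6
assert all(len(set(neighbours(v))) == 36 for v in vertices)
```

With wrap-around in every coordinate the 12 displacements yield 12 distinct neighbours in $G'$, and the replication by three copies gives the degree $36$ stated below.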
Note that $G$ is a graph with $3M^6$ vertices and (constant) degree $d=36$. \subsubsection{Embedding into the Hypercube} In this section we show how to embed the replication graph $G$ obtained in the previous step into the hypercube. Moreover, the embedding is such that the number of vertices in the hypercube increases only by a constant factor. This small blowup is crucial for obtaining an optimal $2^{n/2}$ bound. \begin{lemma}\label{lem:hyp} The graph $G$ (with $3M^6$ vertices) can be VIED-embedded into the $n$-dimensional hypercube $\textsf{Hyp}_n$ for $n=6\lceil \log M \rceil + 111$. As a corollary, $CC(\textsc{VetoLS}(\textsf{Hyp}_n))=\Omega(2^{n/2})$. \end{lemma} \begin{proof} For clarity of exposition we assume that $M=2^c$ is a power of 2. We start with some notation and properties of the graph $G$. Recall that the vertices of $G$ are $V=[M^3]\times [M] \times [M] \times [M]\times [3]$. For a vertex $v=(k_1,k_2,k_3,k_4,i)$, $k_1$ is called the \emph{layer of $v$}. Note that all edges connect layer-$k$ vertices to layer-$(k+1)$ vertices. $k_1+k_2+k_3+k_4 \mod 2$ is called the \emph{parity of $v$}. $i$ is called the \emph{replication index of $v$}. We present an edge coloring of $G$ with $108$ colors in which no two adjacent edges are colored the same (a ``valid'' coloring). We first color all edges from layer 1 to layer 2 with $54$ colors. Given a layer-1 vertex with coordinates $(k_2,k_3,k_4)$, edges are specified by a displacement $d\in \{(\pm 1,0,0),(0,\pm 1,0),(0,0,\pm 1)\}$ that operates on $(k_2,k_3,k_4)$ and a pair of replication indices $i,j\in [3]$ ($i$ is the replication index of the vertex at layer $1$ and $j$ is the replication index of the vertex at layer $2$). Note that we have $6\cdot 9=54$ such specifications. It is easy to verify that coloring these edges in $54$ different colors is a valid edge coloring. We proceed by coloring all edges between layers $2$ and $3$ with $54$ \emph{different} colors using a similar coloring method.
Similarly, all edges from layer $2k-1$ to layer $2k$ are colored as edges between layers $1$ and $2$ and all edges from layer $2k$ to layer $2k+1$ are colored as edges between layers $2$ and $3$. This defines an edge coloring of $G$. Now we present some notation. The vertices of the hypercube are partitioned into blocks as follows: \begin{itemize} \item For $i=1,...,5$ the \emph{$i$'th index} block consists of bits that represent the $i$'th index. The sizes of the blocks are $(3c,c,c,c,2)$ for $i=1,2,3,4,5$ respectively. \item A \emph{parity bit} records the parity of a vertex. \item The \emph{edge} block consists of $108$ bits. \item The \emph{counter} block consists of $3$ bits that serve as a counter to keep track of the block on which we currently apply the changes along the embedding path (see below). \end{itemize} \paragraph{Embedding the vertices.} Let $(h_1,...,h_{M^3})$ be a Hamiltonian path of the $3c$-dimensional hypercube. Let $(h'_1,...,h'_{M})$ be a Hamiltonian path of the $c$-dimensional hypercube and $(h''_1,...,h''_{4})$ be a Hamiltonian path of the $2$-dimensional hypercube. To define $\phi(v)$, we embed a vertex $v=(k_1,k_2,k_3,k_4,i)$ into the vertex of the hypercube whose first block is the bits of $h_{k_1}$, the second block is $h'_{k_2}$, then $h'_{k_3}$, $h'_{k_4}$ and $h''_{i}$. We set the parity bit to be the parity of $v$, the edge block to $\textbf{0}$, and the counter block to $\textbf{0}$. \paragraph{Embedding the edges.} Note that the coloring of $G$ in $108$ colors naturally induces an order on the edges. Every vertex has at most one $m$'th edge, and two adjacent vertices agree on the index of this edge. The $m$'th edge of $v$, from $v$ in layer $k_1$ to $u$ in layer $k_1+1$, is defined by the following sequence of bit flips. \begin{enumerate} \item The $m$'th bit in the edge block is flipped to 1. \item A single bit in the counter block is flipped to encode the integer 1.
\item A single bit in the first index block is flipped to encode the integer $k_1+1$. \item A single bit in the counter block is flipped to encode the integer 2. \item If the displacement of the edge is $(\pm 1,0,0)$, a single bit in the second index block is flipped to encode the integer $k_2 \pm 1$. If the displacement of the edge is $(0,\pm 1,0)$, a single bit in the third index block is flipped to encode the integer $k_3 \pm 1$. If the displacement of the edge is $(0,0,\pm 1)$, a single bit in the fourth index block is flipped to encode the integer $k_4 \pm 1$. \item A single bit in the counter block is flipped to encode the integer 3. \item The two bits of the fifth index block are flipped (one by one in a fixed order) to encode the integer $j$ (the replication index of $u$). \item The counter block returns back to $\textbf{0}$. \item The $m$'th bit in the edge block is flipped back to $0$. \end{enumerate} It is easy to see that this path ends up at $\phi(u)$ (note that the parity of $v$ and $u$ is the same, and indeed we did not flip the parity bit). We argue that the defined paths are disjoint. It is sufficient to prove that given a node on the path one can recover the previous node. Given the color of the edge and the counter, it is immediate to recover the previous node in all intermediate steps excluding steps (3) and (5). In steps (3) and (5) it is unclear whether we should flip the corresponding index block or the counter block. To determine this we use the parity bit: In step (3), if the parity bit is equal to the parity of the encoded indices, then it means that we have not yet flipped a bit, and to get the previous vertex we set the counter block to encode 0. If the parity bit differs from the parity of the encoded indices, then it means that we have flipped a bit, and to get the previous vertex we should flip a bit in the index block. In step (5) we do the opposite.
If the parity bit differs from the parity of the encoded indices, then we flip the counter. If the parity bit is equal to the parity of the encoded indices, then we flip the index block. It is easy to check that the embedding is vertex isolated because of the parity bit. \end{proof} \subsubsection{Embedding into the Grid} \begin{lemma}\label{lem:grid1} The graph $G$ (with $3M^6$ vertices) can be VIED-embedded in a constant-dimension grid with $O(M^6)$ vertices. As a corollary, $CC(\textsc{VetoLS}(\textsf{Grid}_d))=\Omega(\sqrt{N})$ for some constant-dimension grid with $N$ vertices. \end{lemma} \begin{proof}(sketch) The embedding is very similar to the one we presented in Lemma \ref{lem:hyp} for embedding into the hypercube. In the proof of Lemma \ref{lem:hyp} we only used the fact that the hypercube has a Hamiltonian path. For the grid, we will take advantage of the observation that the two-dimensional grid has a Hamiltonian path. Specifically, a vertex $v=(k_1,k_2,k_3,k_4,i)$ is embedded into the vertex of the grid whose first block encodes the position of $k_1$ along a Hamiltonian path on $\textsf{Grid}_{M^{1.5}\times M^{1.5}}$, blocks $2$--$4$ are specified using a Hamiltonian path on $\textsf{Grid}_{M^{0.5}\times M^{0.5}}$, and block $5$ using a Hamiltonian path on $\textsf{Grid}_{2\times 2}$. We set the parity bit to be the parity of $v$, the edge block to $\textbf{0}$, and the counter block to $\textbf{0}$. Applying very similar arguments to the proof of Lemma \ref{lem:hyp} we establish the embedding of $G$ into the grid $[M^{1.5}]^2\times [M^{0.5}]^6 \times [2]^{111}$. \end{proof} \subsubsection{Embedding into the Odd Graph} \begin{lemma}\label{lem:odd} There exists a VIED embedding of $\textsf{Hyp}_n$ in $\textsf{Odd}_{n+2}$. As a corollary, $CC(\textsc{VetoLS}(\textsf{Odd}_n))=\Omega(2^{n/2})$.
\end{lemma} \begin{proof} We first embed $\textsf{Hyp}_n$ in $\textsf{Hyp}_{n+1}$ simply by $\phi_1(v)=(v,0)$ and $\chi_1(\{v,w\})=\{(v,0),(w,0)\}.$ Obviously this embedding is edge disjoint (but not vertex isolated). We now embed $\textsf{Hyp}_{n+1}$ in $\textsf{Odd}_{n+2}$. We refer to each vertex of $\textsf{Hyp}_{n+1}$ as a subset $S\subset [n+1]$. We denote $S+n+1=\{i+n+1:i\in S\}$. We denote $T^c=[n+1]\setminus T$ (this notation will be applied to subsets of $[n+1]$, rather than to the subsets of $[2n+3]$ that form the vertices of $\textsf{Odd}_{n+2}$). The embedding is defined by \begin{align*} \phi_2(S)=&S\cup (S^c + n+1). \\ \chi_2(S,S\cup \{i\})=&S\cup (S^c + n+1) \rightarrow (S^c\setminus \{i\}) \cup (S+n+1) \cup \{2n+3\} \rightarrow \\ & S \cup \{i\} \cup ((S\cup \{i\})^c + n+1) \end{align*} It is easy to check that this indeed defines a valid path on $\textsf{Odd}_{n+2}$. All the defined paths are disjoint because given a vertex on a path $T\cup (T'+n+1)\cup \{2n+3\}$ we can identify the edge: $S=T'$ and $i$ is the unique element that is missing from both sets $T$ and $T'$. Now we define the embedding of $\textsf{Hyp}_n$ in $\textsf{Odd}_{n+2}$ to be the composition of these two embeddings; i.e., $\phi(v)=\phi_2(\phi_1(v))$ and $\chi(e)=\chi_2(\chi_1(e))$. The embedding $(\phi,\chi)$ is edge disjoint because both embeddings $(\phi_1,\chi_1)$ and $(\phi_2,\chi_2)$ are edge disjoint. Now we prove that $(\phi,\chi)$ is vertex isolated. A vertex $\phi_2(\phi_1(v))=S\cup (S^c + n+1)$ has $n+2$ neighbours in $\textsf{Odd}_{n+2}$. Among these neighbours, $n+1$ participate in the embeddings of the edges incident to $S\in \textsf{Hyp}_{n+1}$. So there is a single neighbour, $S^c \cup (S+n+1)$, which could potentially belong to the embedding of an independent edge.
Note that $S^c \cup (S+n+1)=\phi_2(S^c)$ and $S^c\in \textsf{Hyp}_{n+1}$ \emph{does not} belong to the embedding of $\textsf{Hyp}_n$ in $\textsf{Hyp}_{n+1}$: indeed, for every vertex $v\in \textsf{Hyp}_n$ the complementary vertex $\overline{(v,0)}=(\overline{v},1)\in \textsf{Hyp}_{n+1}$ does not belong to the embedding of $\textsf{Hyp}_n$ in $\textsf{Hyp}_{n+1}$ (neither to $\phi_1(V_{\textsf{Hyp}_n})$ nor to $\chi_1(E_{\textsf{Hyp}_n})$). \end{proof} \subsection{Step 3: From $\textsc{VetoLS}$ to $\textsc{SumLS}$}\label{sec:veto-sum} First, recall that the potential function takes values in $[W]$. We reduce the problem $\textsc{VetoLS}(G)$ to $\textsc{SumLS}(G)$. Alice's potential remains unchanged (i.e., $f_A(v):=f_G(v)$). Bob fixes some valid vertex $v^*\in S$ and sets his potential as follows: $f_B(v):=0$ if $v\in S$, otherwise he sets $f_B(v)=-d(v,v^*)\cdot (W+1)$, where $d$ is the distance in $G$. Indeed every valid local maximum $v$ is a local maximum of the sum because all the valid neighbours $w$ have no higher sum of potentials, $f_A(v)+f_B(v)=f_G(v)\geq f_G(w)=f_A(w)+f_B(w)$, and all invalid neighbours have negative sum of potentials $f_A(w)+f_B(w)\leq W - (W+1)<0$. It is easy to check that every valid vertex that is not a local maximum is not a local maximum of the sum. Finally, every invalid vertex $v$ is not a local maximum of the sum because the neighbour $w$ in the direction of the shortest path to $v^*$ has higher sum of potentials: \begin{align*} f_A(v)+f_B(v) &\leq W - d(v,v^*)(W+1) < -(d(v,v^*)-1)(W+1) \\ &= -d(w,v^*)(W+1)\leq f_A(w)+f_B(w). \end{align*} We apply this reduction to the graphs considered in Lemmas \ref{lem:hyp}, \ref{lem:grid1} and \ref{lem:odd} to deduce the theorem. \section{Proof of Theorem \ref{theo:grid}}\label{sec:grid-pr} The overall structure of the proof is similar to that of Theorem \ref{theo:opt}. \paragraph{Step 0} We start with a local-search-related communicationally-hard problem over some graph $H$.
\paragraph{Step 1} We use the intermediate problem $\textsc{VetoLS}(G)$, where $G$ is constructed from $H$. \paragraph{Step 2} We embed $G$ in the three-dimensional grid. \paragraph{Step 3} We reduce $\textsc{VetoLS}(\textsf{Grid})$ to $\textsc{SumLS}(\textsf{Grid})$. \vspace{0.1in} \noindent However, in order to be able to embed $G$ in the three-dimensional grid, the degree of $G$ should be very low; at most 6. The pebbling game result of \citep{GP14} does not serve our purposes because the degree of the graph $G$ is 36. Hence, our starting point is a different local-search-related communicationally-hard problem over some degree 3 graph $H$. In Step 1, we carefully modify $H$ to $G$ by increasing the degree only by 1; i.e., $G$ is a degree 4 graph. Now, in Step 2 we are able to embed $G$ in the three-dimensional grid. Step 3 is identical to that in the proof of Theorem \ref{theo:opt}. \subsection{Step 0: The Query Complexity of Local Search and its Simulated Variant} In the problem $\textsc{QuLS}(H)$ there is a graph $H$ and a function $h$ that gives a value $h(v)$ for every vertex. The function $h$ can only be accessed via queries $h(v)$. Furthermore, the values of every two distinct vertices $v,u$ are distinct: $h(v)\neq h(u)$. The goal is to find a local maximum of $h$ while minimizing the number of queries. Santha and Szegedy \citep{SS} introduced a general connection between the query complexity of the local search problem and the expansion of a graph. Since random $3$-regular graphs are expanders with high probability, we have that there exists a degree 3 graph $H$ with $N$ vertices for which finding a local maximum requires $\textsf{poly}(N)$ queries. However, their construction does not guarantee that $h(v)\neq h(u)$ for every two vertices $v$ and $u$. This is easy to fix: let $h'(v)=2N\cdot h(v) +v$ (where $v\in [N]$ denotes the index of $v$).
Observe that each local maximum of $h'$ is also a local maximum of $h$ and that the query $h'(v)$ can be computed by one query $h(v)$, so the number of queries required to find a local maximum of $h'$ is at least the number of queries required to find a local maximum of $h$. We therefore have: \begin{lemma}[essentially \citep{SS}] There exists a degree 3 graph $H$ with $N$ vertices and a function $h'$ such that every vertex has a distinct value, for which finding a local maximum requires $\textsf{poly}(N)$ queries. \end{lemma} The simulation theorems provide us with a recipe to produce problems with high communication complexity, given a problem with high query complexity. In particular, \citep{GPW,AGJKM} suggest the \emph{index-gadget} recipe, which translates $\textsc{QuLS}(H)$ into the following communication problem $\textsc{SimLS}(H)$: for each vertex $v\in H$, Alice holds an array of valuations $(f(v,i))_{i\in [M]}$ where $f(v,I(v))=h(v)$ and\footnote{E.g., $M=N^{256}$ in \citep{GPW}.} $M=\textsf{poly}(N)$. Bob holds the correct index $I(v)\in [M]$. Their goal is to compute a local maximum of the function $f(v,I(v))$. Direct application of the simulation theorems to our setting gives that: $$CC(\textsc{SimLS}(H))=\Theta(\log N)\cdot QC(\textsc{QuLS}(H))=\textsf{poly}(N)$$ \subsection{Step 1: The Communication Complexity of $\textsc{VetoLS}$} In this step we prove the communication hardness of $\textsc{VetoLS}$ on a certain bounded degree graph. We recall the definition of $\textsc{VetoLS}(G)$. Alice's input is a function $f_G:V\rightarrow [W]$. Bob's input is a non-empty subset $S\subset V$. The output is a vertex $v\in S$ such that $f_G(v)\geq f_G(w)$ for every $w\in S$ such that $\{v,w\}\in E$ (i.e., for every valid neighbour). Unlike the communication pebbling game problem that uses index gadgets of size 3, the simulated $\textsc{QuLS}$ problem uses gadgets of size $M=\textsf{poly}(N)$ (where $N$ is the number of vertices of $H$).
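For concreteness, the $\textsc{VetoLS}$ search problem just recalled can be phrased as the following brute-force search (a hypothetical Python sketch with our own naming; the communication difficulty is, of course, that $f_G$ and $S$ are split between the parties):

```python
# A hypothetical brute-force sketch (names are ours) of the VetoLS(G)
# search problem: Alice holds f_G, Bob holds the set S of valid vertices,
# and a solution is a valid vertex that no valid neighbour beats.
def veto_local_max(edges, f_G, S):
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)
    for v in sorted(S):
        if all(f_G[v] >= f_G[w] for w in nbrs.get(v, ()) if w in S):
            return v

# On the path 1-2-3-4 with vertex 3 invalid, vertex 2 beats its only
# valid neighbour 1, even though the global maximum of f_G sits at 3.
edges = [(1, 2), (2, 3), (3, 4)]
f_G = {1: 10, 2: 20, 3: 40, 4: 30}
print(veto_local_max(edges, f_G, {1, 2, 4}))  # -> 2
```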
The idea in the proof of Theorem \ref{theo:opt} is to replicate each vertex according to the gadget size, and connect every vertex with all its replicated neighbours. This idea is impractical here, because the degree of the resulting graph will be huge. Instead, we replace each replicated vertex with degree $3M$ by a carefully chosen binary tree structure in order to reduce the degree. \begin{figure}[h] \caption{The graph $G$. The replacement of a vertex by $M$ pairs of binary trees, and the neighbours of the leaves of $T^{out}$.}\label{fig:g} \centering \vspace*{3mm} \begin{tikzpicture} \node[circle, draw, minimum size=0.8cm] (v) at (-1,0) {$v$}; \node[circle, draw] (w1) at (-2,1) {$w_1$}; \node[circle, draw] (w2) at (-1,1) {$w_2$}; \node[circle, draw] (w3) at (0,1) {$w_3$}; \draw (v) -- (w1); \draw (v) -- (w2); \draw (v) -- (w3); \draw[<->,ultra thick] (0.5,0) -- (1.5,0); \filldraw[black] (3,0) circle (0.1); \draw (3,0) -- (1.7,1.3) -- (4.3,1.3) -- (3,0); \draw (3,0) -- (1.7,-1.3) -- (4.3,-1.3) -- (3,0); \node at (3,1) {$T^{out}(v,1)$}; \node at (3,-1) {$T^{in}(v,1)$}; \filldraw[black] (6,0) circle (0.1); \draw (6,0) -- (4.7,1.3) -- (7.3,1.3) -- (6,0); \draw (6,0) -- (4.7,-1.3) -- (7.3,-1.3) -- (6,0); \node at (6,1) {$T^{out}(v,2)$}; \node at (6,-1) {$T^{in}(v,2)$}; \filldraw[black] (7.8,0) circle (0.05); \filldraw[black] (8,0) circle (0.05); \filldraw[black] (8.2,0) circle (0.05); \filldraw[black] (10,0) circle (0.1); \draw (10,0) -- (8.7,1.3) -- (11.3,1.3) -- (10,0); \draw (10,0) -- (8.7,-1.3) -- (11.3,-1.3) -- (10,0); \node at (10,1) {$T^{out}(v,M)$}; \node at (10,-1) {$T^{in}(v,M)$}; \draw[decorate,decoration={brace,amplitude=6pt,mirror,raise=4pt},yshift=0pt] (11.3,0.1) -- (11.3,1.3) node [black,midway,xshift=0.7cm] { $3a$}; \draw[decorate,decoration={brace,amplitude=6pt,mirror,raise=4pt},yshift=0pt] (11.3,-1.3) -- (11.3,-0.1) node [black,midway,xshift=0.7cm] { $3a$}; \draw[decorate,decoration={brace,amplitude=6pt,mirror,raise=4pt},yshift=0pt] (1.7,-1.3) -- 
(4.3,-1.3) node [black,midway,yshift=-0.6cm] { $M^3$}; \filldraw[black] (3.8,1.3) circle (0.1); \node at (3.8,1.6) {$t_{(i,j,k)}(v,1)$}; \filldraw[black] (3.5,4.3) circle (0.1); \draw (3.5,4.3) -- (2.2,3) -- (4.8,3) -- (3.5,4.3); \filldraw[black] (6.5,4.3) circle (0.1); \draw (6.5,4.3) -- (5.2,3) -- (7.8,3) -- (6.5,4.3); \filldraw[black] (9.5,4.3) circle (0.1); \draw (9.5,4.3) -- (8.2,3) -- (10.8,3) -- (9.5,4.3); \node at (3.5,3.3) {$T^{in}(w_1,i)$}; \node at (6.5,3.3) {$T^{in}(w_2,j)$}; \node at (9.5,3.3) {$T^{in}(w_3,k)$}; \draw (3.8,1.9) -- (2.5,3); \draw (3.8,1.9) -- (5.5,3); \draw (3.8,1.9) -- (8.5,3); \end{tikzpicture} \end{figure} \paragraph{The Graph $G$.} Without loss of generality we assume that $M=2^a$ is a power of 2. We obtain our graph $G$ by replacing every vertex $v\in H$ by a tuple of $M$ graphs $(T^{out}(v,i)\cup T^{in}(v,i))_{i\in [M]}$, where $T^{out}(v,i)\cup T^{in}(v,i)$ denotes two binary trees with an overlapping root, both of depth $\log (M^3)=3a$ (see Figure \ref{fig:g}). Roughly speaking, the role of $T^{out}(v,i)$ is to decode the correct indices of the three neighbours, and in parallel to split the outgoing edges from $v_i$. The role of $T^{in}(v,i)$ is simply to gather the incoming edges into $v_i$. More formally, the vertices of $T^{out}(v,i)$ at depth $d$ are denoted by $(t_s(v,i))_{s\in \{0,1\}^d}$. The vertices of $T^{in}(v,i)$ at depth $d$ are denoted by $(t'_s(v,i))_{s\in \{0,1\}^d}$. The vertices at depth $3a$ will be called \emph{leaves}\footnote{Note that they are leaves only with respect to the tree. In the graph $G$ they will not be leaves.}. As was mentioned above, the roots of these two trees coincide (i.e., $t_\emptyset (v,i) = t'_\emptyset (v,i)$). Now we describe how the leaves of $T^{out}(v,i)$ connect to the leaves of $T^{in}(w,j)$ for $w\neq v$. For a leaf $t_s(v,i)\in T^{out}(v,i)$ we interpret $s$ as a triple $(j_1,j_2,j_3)$ where $j_1,j_2,j_3\in [M]$ are the indices of the three neighbors of $v$, $w_1,w_2,w_3$.
The leaf $t_s(v,i)\in G$ has a single edge to the tree $T^{in}(w_1,j_1)$, a single edge to the tree $T^{in}(w_2,j_2)$, and a single edge to the tree $T^{in}(w_3,j_3)$ (see Figure \ref{fig:g}). In principle, we should specify which leaf exactly in $T^{in}(w_1,j_1)$ is connected to $t_s(v,i)$. However, since it will not play any role in our arguments, we just use a counting argument to ensure that the number of neighbours from other trees of every leaf $t'_{s'}(w,j)$ is at most 3. If $w$ has a neighbour $v$, then for every $i\in [M]$ exactly $M^2$ vertices $t_s(v,i)$ will encode the index $j$. So from the trees $(T^{out}(v,i))_{i\in [M]}$ we have $M\cdot M^2=M^3$ incoming edges. Summing over the 3 neighbours we get $3M^3$ incoming edges. If we distribute them equally among the $M^3$ vertices, we get 3 neighbours for each. \paragraph{Alice's Potential.} Alice's potential function is defined by $f_G(t'_s(v,i))=7af(v,i)+3a-|s|$ and $f_G(t_s(v,i))=7af(v,i)+3a+|s|$. Namely, the potential starts at a value of $7af(v,i)$ at the leaves of $T^{in}(v,i)$. It increases by $1$ after every edge until it gets to the root. At the root we move to the tree $T^{out}(v,i)$ where it proceeds to increase by $1$ until it gets to the leaves of $T^{out}(v,i)$ where the value of the potential is $7af(v,i)+6a$. \paragraph{Bob's Valid Vertices.} Now we define the subset of valid vertices $S$ held by Bob. Let $bin(i)\in \{0,1\}^a$ denote the binary representation of an index $i\in [M]$. We denote by $nbin(v)=(bin(I(w_i)))_{i=1,2,3}$ the binary representation of the triple of $v$'s neighbours. For a binary string $b$ we denote by $b_{[k]}$ its first $k$ elements. A vertex $t_s(v,i)\in S$ iff $i=I(v)$ and $s=nbin(v)_{[|s|]}$; similarly, a vertex $t'_s(v,i)\in S$ iff $i=I(v)$ (recall that $I(v)$ is Bob's input in $\textsc{SimLS}$).
Informally speaking, the valid vertices are those where the tree $T^{out}(v,i)$ (or $T^{in}(v,i)$) has the correct index, and if the vertex is in $T^{out}(v,i)$ we require, in addition, that the prefix of the encoding of the neighbours' indices is correct. \paragraph{Local Maxima in $G$.} Since the potential of Alice increases starting from the leaves of $T^{in}(v,i)$ and ending at the leaves of $T^{out}(v,i)$, and in addition for every valid vertex there exists a valid neighbour with higher (lower) depth in $T^{out}(v,i)$ (in $T^{in}(v,i)$), the valid local maxima appear only on the leaves of $T^{out}(v,i)$. Every valid leaf of $T^{out}(v,i)$ has a potential of $7af(v,I(v))+6a$ (i.e., the correct potential) and is connected to leaves of $T^{in}(w_j,I(w_j))$ for $j=1,2,3$ with a potential of $7af(w_j,I(w_j))$ (i.e., the correct potential of the neighbours). Note that the potential values are integers. Therefore, $7af(v,I(v))+6a \geq 7af(w,I(w))$ if and only if $f(v,I(v))\geq f(w,I(w))$. Hence, there is a one-to-one correspondence between valid local maxima of $f_G$ with respect to the set of valid vertices $S$ and local maxima of $h$ over $H$. This completes the proof of item \ref{theo:deg-bounded} of the theorem. \subsection{Step 2: Embedding the Degree 4 Graph Into the Grid} We VIED embed (see Definition \ref{def:vied}) the degree 4 graph $G$ obtained in the previous step into the grid. We use Lemma \ref{lem:emb} to deduce hardness of $\textsc{VetoLS}$ over the grid. \begin{lemma}\label{lem:grid} Every degree 4 graph $G$ with $N$ vertices can be VIED-embedded in $\textsf{Grid}_{4N\times (2N+2) \times 2}$. As a corollary, $CC(\textsc{VetoLS}(\textsf{Grid}_{N\times N \times 2}))=\textsf{poly}(N)$. \end{lemma} \begin{proof} We embed the graph $G$ in the grid whose vertices are $\{3,4,...,4N+2\}\times \{-1,0,...,2N\} \times \{0,1\}$. We denote the vertices of $G$ by $\{v_i\}_{i\in [N]}$ and we embed $\phi(v_i)=(4i,0,0)$.
We use (for instance) the structure of Figure \ref{fig:paths} to place the four outgoing edges of $(4i,0,0)$ at the points $(4i-1,1,0),(4i,1,0),(4i+1,1,0)$ and $(4i+2,1,0)$. \begin{figure}[h] \caption{The outgoing edges of the embedded vertices.}\label{fig:paths} \centering \begin{tikzpicture}[scale=0.7] \draw[step=1,gray,thin] (0,0) grid (16,3); \draw[line width=3] (1,2) -- (1,0); \draw[line width=3] (2,1) -- (0,1); \draw[line width=3] (0,1) -- (0,2); \draw[line width=3] (2,1) -- (2,2); \draw[line width=3] (1,0) -- (3,0); \draw[line width=3] (3,2) -- (3,0); \draw[line width=3] (5,2) -- (5,0); \draw[line width=3] (6,1) -- (4,1); \draw[line width=3] (4,1) -- (4,2); \draw[line width=3] (6,1) -- (6,2); \draw[line width=3] (5,0) -- (7,0); \draw[line width=3] (7,2) -- (7,0); \draw[line width=3] (14,2) -- (14,0); \draw[line width=3] (15,1) -- (13,1); \draw[line width=3] (13,1) -- (13,2); \draw[line width=3] (15,1) -- (15,2); \draw[line width=3] (14,0) -- (16,0); \draw[line width=3] (16,2) -- (16,0); \filldraw (1,1) circle (0.3); \filldraw (5,1) circle (0.3); \filldraw (14,1) circle (0.3); \node at (0,-0.5) {3}; \node at (1,-0.5) {4}; \node at (5,-0.5) {8}; \node at (14,-0.5) {$4N$}; \node at (-0.5,0) {-1}; \node at (-0.5,1) {0}; \node at (-0.5,2) {1}; \end{tikzpicture} \end{figure} We denote by $\{e_i\}_{i\in [m]}$ the edges in the graph $G$. Note that $m\leq 4N/2=2N$ because the graph degree is 4. The edges are embedded one by one in increasing order $e_1,...,e_m$. For an edge $e_i=(v_j,v_k)$ let $r_j\in \{-1,0,1,2\}$ be the minimal index such that the vertex $(4j+r_j,1,0)$ is not yet used by previous edges $\{e_{i'}\}_{i'<i}$. Similarly we define $r_k$.
The edge $e_i=(v_j,v_k)$ is embedded into the path: $$(4j+r_j,1,0)\leftrightsquigarrow (4j+r_j,i,0) \leftrightarrow (4j+r_j,i,1) \leftrightsquigarrow (4k+r_k,i,1) \leftrightarrow (4k+r_k,i,0) \leftrightsquigarrow (4k+r_k,1,0)$$ where $(x,y,z) \leftrightsquigarrow (x,y',z)$ denotes a straight line that monotonically changes the second coordinate (similarly for $(x,y,z) \leftrightsquigarrow (x',y,z)$). The embedding is edge disjoint because all horizontal lines appear at $(\cdot,\cdot, 1)$ while all vertical lines appear at $(\cdot,\cdot, 0)$. The embedding is vertex isolated by the construction of Figure \ref{fig:paths}. \end{proof} Finally, Step 3 is identical to Section \ref{sec:veto-sum}. We use the reduction from $\textsc{VetoLS}$ to $\textsc{SumLS}$ to deduce the theorem. \section{Identifying Ordinal Potential Games}\label{ap:ident-ord} We will prove two results, one for two-player $N$-action games and one for $n$-player $2$-action games. In both we use the following two-player two-action game for $x,y\in \{0,2\}$: \begin{center} \begin{tabular}{|l|l|} \hline $2,1$ & $1,2$ \\ \hline $1,y$ & $x,1$ \\ \hline \end{tabular} \end{center} \noindent This game has a better-reply cycle if and only if $x=y=2$. \begin{proposition}\label{pro:2-ord} Recognizing whether a two-player $N$-action game is an ordinal potential game requires $\textsf{poly}(N)$ bits of communication, even for randomized protocols. \end{proposition} \begin{proof} Denote by $u'$ the two-player $2N\times 2N$ table that contains $N\times N$ copies of this game with the parameters $(x_{i,j},y_{i,j})_{i,j\in [N]}$. We denote by $u''$ the two-player $2N\times 2N$ game with the payoffs $u''(a,b)=(3\lceil \frac{a}{2}\rceil,3\lceil \frac{b}{2}\rceil)$. And we denote $u=u'+u''$. The game $u$ has a better-reply cycle if and only if there exist $i,j\in [N]$ such that $x_{i,j}=y_{i,j}=2$.
Indeed if $x_{i,j}=y_{i,j}=2$, since we have added a constant payoff of $3i$ to player $1$ ($3j$ to player 2) in the $(i,j)$ copy of the game, the better-reply cycle remains a better-reply cycle in $u$. If $(x_{i,j},y_{i,j})\neq (2,2)$ for all $i,j$ then we have no better-reply cycle within the copies of the $2\times 2$ games, and we have no better-reply cycles across the $2\times 2$ games because at least one player has a dominant strategy. Therefore determining the ordinal potential property is as hard as disjointness, which requires $\textsf{poly}(N)$ communication, even with randomized communication. \end{proof} \begin{proposition}\label{pro:n-ord} Recognizing whether an $n$-player $2$-action game is an ordinal potential game requires $2^{\Omega(n)}$ bits of communication, even for randomized protocols. \end{proposition} \begin{proof} Consider an $(n+2)$-player game where for each profile $a\in \{0,1\}^n$ the last two players are playing the above $2\times 2$ game with parameters $x_a,y_a$. For the first $n$ players we set the utilities such that $1$ is a dominant strategy (e.g., $u_i(a_i,a_{-i})=a_i$ for $i\in [n]$). Similarly to the previous arguments, the game contains a better-reply cycle if and only if $x_a=y_a=2$ for some $a\in \{0,1\}^n$. Again, we obtain a reduction from disjointness. \end{proof} \section{Introduction} This paper deals with the communication complexity of local search problems. The general problem involves a search over some ``universe'' $V$, for an element $v^* \in V$ that maximizes, {\em at least ``locally''}, some objective function $f:V\rightarrow \mathbb{R}$. The notion of ``locality'' is formalized by putting a fixed, known, neighbourhood structure $E$ on the set of elements, so the requirement of local optimality is that for all $u \in V$ such that $(v^*,u) \in E$ we have that $f(v^*) \ge f(u)$.
The notion of local optimality is interesting from two points of view: first, it captures the outcome of a wide range of ``gradual-improvement'' heuristics where the neighbourhood structure represents the types of gradual improvements allowed, and second, locally-optimal solutions provide a notion of stability, where the neighbourhood structure models the possible ``deviations'' from stability. In the context of computational complexity, local search problems are captured by the complexity class PLS \citep{JPY} which is a subset of the well-studied class TFNP (defined in \citep{MP} and studied, e.g., in \citep{PSY, BCEI, DGP, hubacek2017journey}): search problems for which a witness always exists (``total search problems'') and can be efficiently verified (``in NP''). The problem has also been widely studied in the model of {\em query complexity} where the cost of an algorithm is the number of black-box queries to the objective function $f$, from the pioneering work of \citep{Ald} on the Boolean hypercube, to a rather complete characterization of not only the deterministic query complexity but also the randomized and even quantum complexities on any graph \citep{SS,Aar,SY}. The interest in analyzing local search from a communication complexity point of view is clear: in essentially any application, the objective function $f$ is not really given as a ``black box'' but is somehow determined by the problem structure. When this structure has any element of distributed content then communication may become an important bottleneck. The question of how the information is distributed is key: in the simplest imaginable scenario, the search space $V$ is split somehow between the (say, two) parties, where each party holds the values $f(v)$ for its subset of $v \in V$ (the fixed commonly known neighbourhood structure still involves all of $V$).
However, in this scenario even a global maximum (which is certainly also a local one) can be easily found with a small amount of communication by each player finding the maximum among his subset, and only communicating and comparing the maxima of the parties. Thus, for the problem to be interesting we must split the information $f(v)$ of each vertex between the parties. There are various ways to do this and the most natural one, conceptually and in terms of applications, is probably to split $f$ as the {\em sum} of two functions $f_A:V\rightarrow \mathbb{R}$ and $f_B:V\rightarrow \mathbb{R}$ held by Alice and Bob. So we consider the following problem: \vspace{0.1in} \noindent {\bf Definition:} For a fixed, commonly known graph $G=(V,E)$, the $\textsc{SumLS}(G)$ communication problem is the following: Alice holds a function $f_A:V\rightarrow \{1,...,W\}$, Bob holds a function $f_B:V\rightarrow \{1,...,W\}$, and their goal is to find a vertex $v^* \in V$ such that $f_A(v^*)+f_B(v^*) \ge f_A(u)+f_B(u)$ for all $u \in V$ with $(v^*,u) \in E$. \vspace{0.1in} \noindent Determining the communication complexity of $\textsc{SumLS}$ on certain families of graphs is easy. For example, a simple reduction from disjointness shows that the communication complexity of $\textsc{SumLS}$ on the clique with $n$ vertices is $\Omega(n)$. Our main theorem proves optimal lower bounds for several important families of graphs, all of which have small degree. The technical challenge is that the non-deterministic communication complexity of the problem on small degree graphs is clearly low: to {\em verify} that $v^*$ is a local optimum, Alice and Bob need only communicate the values $f_A(u)$ and $f_B(u)$ for the small number of $v^*$'s neighbours in the graph (note that the degree of all graphs that we consider is indeed small: $\log N$ or even constant).
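As a minimal illustration of the $\textsc{SumLS}$ definition (a Python sketch with hypothetical names, not part of the paper):

```python
# An illustrative sketch (our own naming, not the paper's) of the SumLS
# objective: Alice holds f_A, Bob holds f_B, and a solution is a vertex
# whose sum f_A + f_B is at least that of every neighbour.
def is_sum_local_max(nbrs, f_A, f_B, v):
    total = lambda u: f_A[u] + f_B[u]
    return all(total(v) >= total(u) for u in nbrs[v])

# A 4-cycle where vertex 2 is the unique local maximum of the sum
# (sums are 5, 6, 7, 3) although it maximizes neither f_A nor f_B alone.
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
f_A = {0: 4, 1: 1, 2: 3, 3: 2}
f_B = {0: 1, 1: 5, 2: 4, 3: 1}
print([v for v in nbrs if is_sum_local_max(nbrs, f_A, f_B, v)])  # -> [2]
```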
There are only a few results in the communication complexity literature that manage to prove good lower bounds for total problems where verification is easy, most notably for Karchmer-Wigderson games \citep{KW,KRW,RM} and for PPAD-like communication problems \citep{BR,GoosRub}. \begin{maintheo*} \ \begin{enumerate} \item The communication complexity of local search on the $n$-dimensional hypercube with $N=2^n$ vertices is $\Omega(\sqrt{N})$. \item The communication complexity of local search on a constant-dimension grid with $N$ vertices is $\Omega(\sqrt{N})$. \item The communication complexity of local search on a specific family of constant-degree graphs with $N$ vertices is $\Omega(\sqrt{N})$. \item The communication complexity of local search on the odd graph with $N$ vertices is $\Omega(\sqrt[4]{N})$. \end{enumerate} \end{maintheo*} We note that all our bounds hold for \emph{randomized} communication complexity. Interestingly, the first three bounds are optimal: for these families of graphs, an algorithm by \cite{Ald} finds a local optimum with $O(\sqrt N)$ queries in expectation, which clearly implies a communication protocol of the same efficiency. Our proof starts by considering the communication variant of a pebbling game \cite{GP14}. $D=(V,E)$ is a known directed acyclic graph. The input is a boolean assignment for the vertices $b:V \rightarrow \{0,1\}$ such that every source is true ($b(v)=1$) and every sink is false ($b(v)=0$). The output is a false vertex all of whose predecessors are true (i.e., $v\in V$ such that $b(v)=0$ and $b(u)=1$ for all $u\in V$ with $(u,v)\in E$). \citep{GP14} consider the communication variant of the game, which is obtained by distributing the information $b(v)\in \{0,1\}$ of every vertex via a constant-size index-gadget $\{0,1\}^3\times [3] \rightarrow \{0,1\}$. 
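The (undistributed) pebbling search problem described above can be sketched as follows; the helper name and the toy instance are illustrative only:

```python
# Pebbling search on a DAG: every source is true, every sink is false,
# so a false vertex with all-true predecessors must exist along any
# source-to-sink path.

def find_unpebbled(preds, b):
    """preds maps each vertex to the list of its predecessors; b is the
    boolean assignment. Return a false vertex all of whose predecessors
    are true."""
    for v, value in b.items():
        if value == 0 and all(b[u] == 1 for u in preds[v]):
            return v
    raise ValueError("no solution: input violates the promise")

# Path 0 -> 1 -> 2: source 0 is true, sink 2 is false.
preds = {0: [], 1: [0], 2: [1]}
b = {0: 1, 1: 0, 2: 0}
assert find_unpebbled(preds, b) == 1
```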
They show that for some constant-degree graph $D$ with $N$ vertices the communication complexity of the problem is $\Theta(\sqrt{N})$, which is optimal. Our proof is composed of three steps. The first step shows how to reduce the pebbling game to a variant of local search on a graph $G$ ($\textsc{VetoLS}$) where Alice holds the function $f$ and Bob holds a set of valid vertices. The goal is to find a local maximum in the subgraph that is composed of the valid vertices. The second step is the most technically challenging one. We first define a notion of embedding one graph into another, and show that if a graph $G$ can be embedded into $H$ then the communication complexity of $\textsc{VetoLS}(H)$ is at least that of $\textsc{VetoLS}(G)$. We then show that the graph $G$ obtained in the previous step can be embedded into each of the families considered in the theorem. This embedding is quite delicate and uses specific properties of the graph $G$, since the numbers of vertices of $G$ and $H$ must be almost the same in order to obtain an optimal bound of $\Omega(\sqrt N)$ for $\textsc{VetoLS}(H)$, where $N$ is the number of vertices of $H$. Finally, in the third step we show that the communication complexity of local search on any graph is at least that of $\textsc{VetoLS}$, thus establishing the theorem. The constants that are obtained in our theorem are quite large (the dimension of the grid has to be at least $119$, and the degree of the constant-degree graph is $36$). Thus, we also provide an alternative proof that obtains better constants, at the cost of a worse communication bound. Specifically, we show that there exists a specific family of degree-$4$ graphs for which the communication complexity of local search is $\Omega(N^c)$ for some constant $c>0$. We also show a lower bound of the form $\Omega(N^c)$ for the three-dimensional grid $N\times N\times 2$. 
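To convey the flavour of the third step, here is a simplified sketch of how Bob's veto set can be folded into a summand $f_B$, so that a $\textsc{SumLS}$ protocol can be run on the pair $(f_A,f_B)$. The penalty constant and the toy instance are assumptions for illustration, and corner cases (e.g. regions consisting entirely of invalid vertices), which the actual reduction must handle, are ignored:

```python
# Bob locally rewrites his veto set as heavy penalties, with no
# communication: invalid vertices become so bad that (generically) no
# local maximum of f_A + f_B sits next to a better valid vertex.

M = 10**6  # assumed to exceed any value f_A takes

def veto_to_sum(valid, vertices):
    """Bob's local preprocessing for the VetoLS -> SumLS reduction sketch."""
    return {v: 0 if v in valid else -M for v in vertices}

G = {0: [1], 1: [0, 2], 2: [1]}
f_A = {0: 5, 1: 7, 2: 6}
valid = {0, 2}
f_B = veto_to_sum(valid, G)
# Vertex 2 is a local maximum of f_A + f_B: its only neighbour, 1,
# is vetoed and hence carries the -M penalty.
assert f_A[2] + f_B[2] >= f_A[1] + f_B[1]
```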
The alternative proof uses the more recent and more generic ``simulation'' lemmas that ``lift'' lower bounds from the query complexity setting to the communication complexity setting \citep{GPW,GPW15,RM}, instead of the ``simulation'' lemma of \citep{GP14} that was developed for specific settings like the pebbling game. The main technical difficulty that we overcome is that the ``combination gadgets'' used in these lemmas (specifically the index function) are very different from the simple sum that we desire. We now describe two applications of our basic lower bound. In both applications we study communication variants of problems that are known to be PLS complete, have low non-deterministic complexity and, as we show, high communication complexity. \subsection{Potential Games} The communication requirements for reaching various types of equilibria in different types of games have received a significant amount of recent interest (\cite{BR,GoosRub}) as they essentially capture the convergence time of arbitrary dynamics in scenarios where each player only knows his own utilities (``uncoupled dynamics'' \citep{HMas,HMan}) and must ``learn'' information about the others. Of particular importance here is the class of potential games \citep{MS}. \vspace{0.1in} \noindent {\bf Definition:} An $n$-player game with strategy sets $A_1,...,A_n$ and utility functions $u_1,...,u_n$ is an \emph{exact potential game} if there exists a single potential function $\phi : A_1 \times \cdots \times A_n \rightarrow \mathbb{R}$ so that for every player $i$, every two strategies $a_i,a'_i \in A_i$ and every tuple of strategies $a_{-i} \in A_{-i}$ we have that $u_i(a_i,a_{-i})-u_i(a'_i,a_{-i})=\phi(a_i,a_{-i})-\phi(a'_i,a_{-i})$. 
The game is an \emph{ordinal potential game} if there exists a single potential function $\phi : A_1 \times \cdots \times A_n \rightarrow \mathbb{R}$ so that for every player $i$, every two strategies $a_i,a'_i \in A_i$ and every tuple of strategies $a_{-i} \in A_{-i}$ we have that $\operatorname{sign}(u_i(a_i,a_{-i})-u_i(a'_i,a_{-i}))=\operatorname{sign}(\phi(a_i,a_{-i})-\phi(a'_i,a_{-i}))$, i.e., the value of the potential function increases if and only if the player improves his utility. \vspace{0.1in} The class of exact potential games includes, in particular, all congestion games. A key property of potential games (exact or ordinal) is that every sequence of better responses converges to an equilibrium, and therefore every potential game has a pure Nash equilibrium. \citep{HMan} study the communication complexity of pure Nash equilibrium in \emph{ordinal} potential games. They consider $n$-player games where each player has four actions and show (by a reduction from disjointness) that exponential communication is required to distinguish between the case where the game is an ordinal potential game (and thus has a Nash equilibrium) and the case where the game is not a potential game and does not admit any Nash equilibrium. This immediately implies that finding an equilibrium in games that are guaranteed to have one takes $\exp(n)$ bits of communication. Does finding an equilibrium become any easier for \emph{exact} potential games? In \citep{Noam-blog-2} it was shown that exponentially many \emph{queries} are needed to find an equilibrium, but perhaps in the communication model the problem becomes much easier. The technical challenge is again that the non-deterministic communication complexity of the problem is low, i.e., verifying that a certain profile is a Nash equilibrium does not require much communication (each player only has to make sure that he plays his best response). 
Nevertheless, we provide a ray of hope and show that, in contrast to ordinal potential games, there is a randomized protocol that uses only $\textsf{polylog}(|A|)$ bits of communication (where $|A|=|A_1|\cdot ... \cdot |A_n|$ is the game size) and determines whether the game is an exact potential game or not. We then show that although it is easy to recognize whether a game is an exact potential game, finding an equilibrium requires polynomial (in the size of the game) communication (and in particular exponential in the number of players). These results provide a negative answer to an open question posed in \citep{Noam-blog}. \begin{theo} For some constant $c>0$, the following problem requires at least $N^c$ communication (even randomized): Alice gets an $N \times N$ matrix $u_A$ and Bob gets an $N \times N$ matrix $u_B$; they are promised that the game defined by these matrices is an (exact) potential game and they must output a pure Nash equilibrium of the game. \end{theo} \begin{theo} For some constant $c>0$, the following problem requires at least $2^{cn}$ communication (even randomized): Alice gets the utility functions of the first $n$ players in a $2n$-player $2$-action game and Bob gets the utility functions of the last $n$ players. They are promised that the game defined by these utilities is an (exact) potential game and they must output a pure Nash equilibrium of the game. \end{theo} \noindent Our proofs are via reductions from local search on (certain) degree-$4$ graphs in the two-player $N$-action case, and from local search on the hypercube in the $2n$-player $2$-action case. While the relation between equilibria of potential games and local maxima is well known and very simple, the reduction is actually quite subtle. 
First, the neighbourhood structures do not naturally match (in the two-player case), but more crucially the input to the players here is very limited: only very specifically related matrices $u_A$ and $u_B$ give an (exact) potential game, while the lower bounds for local search were for arbitrary inputs. We also show that the search for a pure Nash equilibrium in exact potential games can be formulated as a \emph{total search problem}: either find a pure Nash equilibrium (which is guaranteed to exist in exact potential games) or provide succinct evidence that the game is not an exact potential game. Interestingly, such succinct evidence of a violation of the exact potential property is guaranteed to exist by \citep{MS}. As an immediate corollary of our results we deduce hardness of this total search problem. \subsection{Local Optima in Combinatorial Auctions} Our second application concerns attempts to weaken the global optimality constraints in market allocations. Consider a combinatorial auction of $m$ indivisible items among $n$ players, each with his own valuation function $v_i$ that gives a real value to every subset of the items. The usual goal of optimizing social welfare aims to globally maximize $\sum_i v_i(S_i)$ over all allocations $(S_1,...,S_n)$ of the items. A corresponding notion of equilibrium is the Walrasian equilibrium, which also includes a vector of prices $p_1,...,p_m$ such that every player receives his globally-optimal set of items at these prices. While these notions provide very strong guarantees, they are usually ``too good to be true'': Walrasian equilibria only rarely exist and optimizing social welfare is usually infeasible, in essentially any sense of the word, and in particular in the sense of requiring exponential communication \citep{nisan2006communication}. Several papers have tried to relax the notion of a Walrasian equilibrium or, similarly, to view the allocation problem as a game and analyze the equilibria of this game. 
In particular, in the model of simultaneous second-price auctions \citep{christodoulou2008bayesian} it is easy to see that when the valuations are submodular, every allocation that is \emph{locally optimal} can be part of an equilibrium in the game, and the same goes for the endowed equilibrium of \citep{BDO18}. Recall that a locally optimal allocation in a combinatorial auction is an allocation of the items $(S_1,\ldots, S_n)$ such that transferring any single item $j\in S_i$ to some other player $i'$ does not improve the welfare. Since local optima play a central role in various relaxed notions of equilibria, an obvious question is whether they are easy to find. In \citep{BDO18} it is shown that for some succinctly represented submodular valuations it is PLS-hard to compute a locally optimal allocation in a combinatorial auction. Furthermore, in the query model it is shown that finding a locally optimal allocation is as hard as finding a local maximum in the odd graph. Combining the same reduction with our communication hardness of local search on the odd graph, we get that: \begin{theo} The communication complexity of finding a locally optimal allocation between two players with submodular valuations is $2^{\Omega(n)}$. \end{theo} \section{The Communication Complexity of Exact Potential Games} Recall that a game is an exact potential game if there exists a potential function $\phi:A^n\rightarrow \mathbb R$, such that $\phi(a_i,a_{-i})-\phi(a'_i,a_{-i})=u_i(a_i,a_{-i})-u_i(a'_i,a_{-i})$ for every player $i$, every pair of actions $a_i,a'_i\in A_i$, and every profile of the opponents $a_{-i}\in A_{-i}$. In this section we study the communication complexity of exact potential games. We assume that each of the players knows only his own utility function and the goal is to compute a pure Nash equilibrium of the game. In game-theoretic settings this form of information distribution is called \emph{uncoupledness} \citep{HMas,HMan}. 
It is known that the communication complexity of computing an equilibrium captures (up to a logarithmic factor) the rate of convergence of uncoupled dynamics to equilibrium \citep{CS,HMan}. As a preliminary result, we demonstrate that \emph{determining} whether a game is an exact potential game (under the uncoupled distribution of information) can be done with low communication. This result is in contrast to ordinal potential games (see Appendix \ref{ap:ident-ord}). \begin{proposition}\label{pro:epd} Consider a game with $n$ players and $N$ actions. There exists a randomized communication protocol that determines whether the game is an exact potential game or not using only $\textsf{poly}(\log(N),n)$ bits of communication. \end{proposition} The proof is quite simple, and we demonstrate it here for $2$-player games. Monderer and Shapley \citep{MS} show that a two-player game $(A,B,u_A,u_B)$ is an exact potential game if and only if for every four actions $a,a'\in A$, and $b,b'\in B$ we have \begin{align}\label{eq:cycle} \begin{split} &(u_A(a',b)-u_A(a,b))+(u_B(a',b')-u_B(a',b)) \\ &+(u_A(a,b')-u_A(a',b'))+(u_B(a,b)-u_B(a,b'))=0 \end{split} \end{align} Namely, the sum of gains/losses from unilateral deviations over every cycle of size four should sum up to zero. Now each player checks, for every possible four-action cycle, whether the sum of changes in his utility equals the negative of the change in utility of the other player for the same cycle. Verifying this simultaneously for all cycles can be done by applying any efficient protocol for the equality problem (we recall that we focus on \emph{randomized} communication protocols). For a general number of players, a similar characterization exists and we have to use protocols based on the ``equal sum'' problem as demonstrated below. 
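For explicit payoff matrices, the four-cycle condition \eqref{eq:cycle} can be checked directly by a centralised brute force (the protocol above instead verifies all cycles at once with an equality test); this sketch and its toy games are illustrative:

```python
# Centralised check of the Monderer-Shapley four-cycle condition for
# two-player games given as payoff matrices (rows: Alice, columns: Bob).
from itertools import product

def is_exact_potential(u_A, u_B):
    n, m = len(u_A), len(u_A[0])
    for a, a2, b, b2 in product(range(n), range(n), range(m), range(m)):
        s = (u_A[a2][b] - u_A[a][b]) + (u_B[a2][b2] - u_B[a2][b]) \
          + (u_A[a][b2] - u_A[a2][b2]) + (u_B[a][b] - u_B[a][b2])
        if s != 0:
            return False
    return True

# An identical-interest game is an exact potential game (phi = u_A):
u = [[1, 0], [0, 2]]
assert is_exact_potential(u, u)
# Matching pennies is not:
mp_A = [[1, -1], [-1, 1]]
mp_B = [[-1, 1], [1, -1]]
assert not is_exact_potential(mp_A, mp_B)
```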
\begin{proof}[Proof of Proposition \ref{pro:epd}] By \citep{MS}, an $n$-player game $(A,u)$ is an exact potential game if and only if for every pair of permutations $\overline{\pi},\underline{\pi}$ over $[n]$ and for every pair of action profiles $a,b\in A$ we have \begin{align}\label{eq:n-cyc} \begin{split} &\sum_{k=1}^n u_{\overline{\pi}(k)}(b,a,\overline{\pi}([k]))-u_{\overline{\pi}(k)}(b,a,\overline{\pi}([k-1]))+ \\ &\sum_{k=1}^n u_{\underline{\pi}(k)}(a,b,\underline{\pi}([k]))-u_{\underline{\pi}(k)}(a,b,\underline{\pi}([k-1]))=0 \end{split} \end{align} Simply speaking, for every sequence of unilateral deviations that starts at $a$, goes to $b$, and returns to $a$, where each player changes his strategy from $a_i$ to $b_i$ once and from $b_i$ to $a_i$ once, the gains/losses of all players from the unilateral deviations should sum up to 0. The players should check whether Equation \eqref{eq:n-cyc} holds for all possible pairs of profiles $a,b\in [N]^n$ and pairs of permutations $\overline{\pi},\underline{\pi}$ over $[n]$. The number of these equations is $c=N^{2n} (n!)^2$. Each player can generate from his private input a vector in $\{-2W,...,0,...,2W\}^c$ which captures the sum of changes in his utility for each one of the tuples $(a,b,\overline{\pi},\underline{\pi})$. So the problem can be reduced to the following: each player $i$ holds a vector $v_i\in \{-2W,...,0,...,2W\}^c$ and the goal of the players is to determine whether $\sum_{i\in [n]} v_i = \textbf{0}_c$. This variant of the equality problem has a $\textsf{poly}(\log W,\log c)=\textsf{poly}(n,\log N)$ randomized communication protocol \citep{nisan1993communication,viola2015communication}. \end{proof} In contrast, identifying whether a game is an \emph{ordinal} potential game is hard, even for randomized communication protocols: identification of the ordinal potential property admits a reduction from the disjointness problem. 
We relegate these reductions (for two-player and for $n$-player games) to Appendix \ref{ap:ident-ord}. The contrast between the hardness of identifying whether a game is an ordinal potential game and the easiness of identifying whether a game is an exact potential game might give some hope that computing an equilibrium in exact potential games is much easier than in ordinal potential games. Unfortunately, our main results for this section show that finding a Nash equilibrium remains hard even for exact potential games. \begin{theorem}\label{theo:2pot} Consider the two-party promise communication problem where Alice holds the utility $u_A:[N]\times [N] \rightarrow \mathbb{R}$, and Bob holds the utility $u_B$ of an exact potential game. The goal is to output a pure Nash equilibrium of the game. The problem requires $\textsf{poly}(N)$ bits of communication, even for randomized protocols. \end{theorem} We can also show hardness for the $2n$-player $2$-action case. \begin{theorem}\label{theo:n-pot} Consider the two-party promise communication problem where Alice holds the utilities $(u_i)_{i\in [n]}$ and Bob holds the utilities $(u_i)_{i\in [2n]\setminus [n]}$ of an exact potential game, and they should output a pure Nash equilibrium of the game. The problem requires $2^{\Omega(n)}$ communication, even for randomized protocols. \end{theorem} This problem obviously requires at least as much communication as the $2n$-party communication problem where each player holds his own utility function. In both theorems, we reduce from the problem of finding a local maximum (on a bounded-degree graph in the two-player case and on the hypercube in the $n$-player case) and show that the set of pure Nash equilibria corresponds exactly to the set of local maxima. The proofs of the theorems appear in Sections \ref{sec:pr-2} and \ref{sec:pr-n}. 
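The correspondence behind both reductions, namely that the pure Nash equilibria of a potential game are exactly the local maxima of the potential with respect to unilateral deviations, can be sketched centrally for two players; the function name and toy matrix are illustrative:

```python
# Pure Nash equilibria of a two-player potential game = profiles at
# which the potential is maximal along each player's own coordinate.
from itertools import product

def pure_nash_of_potential(phi):
    """phi is an n x m potential matrix; return its local maxima with
    respect to unilateral (row or column) deviations."""
    n, m = len(phi), len(phi[0])
    eq = []
    for a, b in product(range(n), range(m)):
        if all(phi[a][b] >= phi[a2][b] for a2 in range(n)) and \
           all(phi[a][b] >= phi[a][b2] for b2 in range(m)):
            eq.append((a, b))
    return eq

phi = [[0, 1],
       [2, 3]]
assert pure_nash_of_potential(phi) == [(1, 1)]
```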
\subsection{Total variants of Pure Nash Equilibrium Search}\label{sec:tot} In Theorems \ref{theo:2pot} and \ref{theo:n-pot} we have demonstrated communication hardness of two \emph{promise} problems. Such hardness results are not rare in the literature; for instance, hardness of finding a pure Nash equilibrium in a game when it is promised that such an equilibrium exists. To appreciate the novelty of our results we focus on a \emph{total} variant of the equilibrium search problem, $\textsc{TotExPot}$: either find a Nash equilibrium or provide succinct evidence that the game is not an exact potential game. By \citep{MS} such succinct evidence, in the form of a violating cycle (see Equations \eqref{eq:cycle},\eqref{eq:n-cyc}), necessarily exists. More formally, in the problem $\textsc{TotExPot}(2,N)$ Alice holds the utility $u_A$, Bob holds the utility $u_B$ of an $N\times N$ game, and the output is either a pure Nash equilibrium or a cycle of actions of size 4 that violates Equation \eqref{eq:cycle}. Similarly, in the problem $\textsc{TotExPot}(2n,2)$ Alice holds the utilities $(u_i)_{i\in [n]}$, Bob holds the utilities $(u_i)_{i\in [2n]\setminus [n]}$ of a $2n$-player 2-action game, and the output is either a pure Nash equilibrium or a cycle of actions of size $4n$ that violates Equation \eqref{eq:n-cyc}. In Proposition \ref{pro:epd} we showed that low communication suffices to determine whether a game is an exact potential game or not (accompanied by evidence in case it is not). From this observation, along with Theorem \ref{theo:2pot}, we deduce that \begin{corollary} The total search problem $\textsc{TotExPot}(2,N)$ requires $\textsf{poly}(N)$ communication. \end{corollary} Similarly, for the $2n$-player 2-action case we have \begin{corollary} The total search problem $\textsc{TotExPot}(2n,2)$ requires $2^{\Omega(n)}$ communication. \end{corollary} Note that the non-deterministic complexity of $\textsc{TotExPot}(2,N)$ is $\log(N)$. 
Indeed, a Nash equilibrium can be described by a single action profile ($\Theta(\log N)$ bits), and a violating cycle can be described by $4$ action profiles. Each player can verify his best-reply condition and communicate a single bit to the opponent. Verification of a violating cycle can likewise be done by communicating the four relevant utility values. Similarly, we can show that the non-deterministic complexity of $\textsc{TotExPot}(2n,2)$ is $\textsf{poly}(n)$. Thus again, our results demonstrate an exponential separation between the non-deterministic and the randomized communication complexity of a total search problem. \section{Proof of Theorem \ref{theo:2pot}}\label{sec:pr-2} We reduce the problem of finding a local maximum on a graph $G$ with degree $4$ to finding a Nash equilibrium in an exact potential game with two players and $N$ actions. We then apply Theorem \ref{theo:grid}(\ref{theo:deg-bounded}) to get our communication bound. We construct the following exact potential game. For a vertex $v\in V$ we denote by $n_i(v)$ the $i$'th neighbour of $v$ for $i=1,2,3,4$. The strategy set of both players is $A=B=V\times [W]^5$ (recall that the potentials in $\textsc{SumLS}(G)$ take values in $[W]$ and that $W=\textsf{poly}(N)$). The interpretation of a strategy $(v,\overrightarrow x)\in A$, where $\overrightarrow x=(x_0,x_1,...,x_4)\in [W]^5$, is (Alice's reported) potential for $v$ and its four neighbours. This report induces a valuation for all vertices $w\in V$ by \begin{align*}\label{eq:rep-val} val^{(v,\overrightarrow x)}(w)= \begin{cases} x_0 & \text{ if } w=v; \\ x_i & \text{ if } w=n_i(v); \\ 0 & \text{ otherwise.} \end{cases} \end{align*} A strategy $(v,\overrightarrow x)$ is \emph{truthful} if and only if $x_0=f_A(v)$ and $x_i=f_A(n_i(v))$ for all neighbours of $v$ (in short, $\overrightarrow x=n(v)$). Similarly, Bob's strategy $(w,\overrightarrow y)$ induces a valuation $val^{(w,\overrightarrow y)}(v)$ on all vertices $v\in V$, and a truthful report is similarly defined. 
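The reported-valuation map $val^{(v,\overrightarrow x)}$ and the notion of a truthful report can be sketched as follows; the names and toy values are illustrative:

```python
# Alice's report on a degree-4 graph: nbrs[v] lists v's four neighbours,
# x = (x_0, ..., x_4) gives reported potentials for v and its neighbours.

def val(v, x, nbrs, w):
    """The induced valuation val^{(v,x)}(w): x_0 for v itself, x_i for
    the i'th neighbour, and 0 elsewhere."""
    if w == v:
        return x[0]
    if w in nbrs[v]:
        return x[1 + nbrs[v].index(w)]
    return 0

def truthful_report(v, f_A, nbrs):
    """The unique truthful report x = n(v) for vertex v under f_A."""
    return (f_A[v],) + tuple(f_A[u] for u in nbrs[v])

# Toy star around vertex 0 with hypothetical potentials:
nbrs = {0: [1, 2, 3, 4]}
f_A = {0: 9, 1: 5, 2: 6, 3: 7, 4: 8}
x = truthful_report(0, f_A, nbrs)
assert val(0, x, nbrs, 0) == 9 and val(0, x, nbrs, 3) == 7
assert val(0, x, nbrs, 5) == 0  # outside the reported neighbourhood
```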
The utilities of Alice and Bob are given by (recall that $d(v,w)$ is the distance in the graph between the vertices $v$ and $w$): \begin{equation*} \begin{split} & u_A((v,\overrightarrow x),(w,\overrightarrow y))= 4W\cdot \mathds{1}_{d(v,w)\leq 1}+4W\cdot \mathds{1}_{\overrightarrow x=n(v)} + val^{(w,\overrightarrow y)}(v) + val^{(v,\overrightarrow x)}(w)+ f_A(v) \\ & u_B((v,\overrightarrow x),(w,\overrightarrow y))= 4W\cdot \mathds{1}_{d(v,w)\leq 1}+4W\cdot \mathds{1}_{\overrightarrow y=n(w)} + val^{(w,\overrightarrow y)}(v) + val^{(v,\overrightarrow x)}(w)+ f_B(w) \end{split} \end{equation*} Namely, both players get a large reward of $4W$ if they choose adjacent vertices or the same vertex. Each player gets a large reward of $4W$ if he reports truthfully his own valuations in the neighbourhood of his vertex. Each player gets the valuation of his own vertex according to the opponent's report plus the valuation of the opponent's vertex according to his own report. In addition, Alice gets the (partial) potential of her vertex according to $f_A$, and Bob gets the potential of his vertex according to $f_B$. \begin{lemma}\label{lem:exact} The game is an exact potential game. \end{lemma} \begin{proof} We will see that the game can be ``decomposed'' into two exact potential games, and will use this ``decomposition'' to provide a potential function for our game. We will use the following basic properties of potential games. We recall the notation $(A_1,A_2,u_1,u_2)=(A,u)$ for a two-player game, where $A_i$ is the action space of player $i$ and $u_i$ is the utility function of player $i$. \begin{itemize} \item An \emph{identical interest game} $(A,u)$ is a game in which $u_1=u_2$. An identical interest game is an exact potential game with potential function $\varphi=u_1$. \item An \emph{opponent independent game} is a game in which the utility of each player $i$ depends only on his own actions: $u_i(a_1,a_2)=u_i(a_i)$ for every $(a_1,a_2)\in A$. 
Every opponent independent game is an exact potential game where the potential function is simply the sum of the utilities of the players. \item For every pair of exact potential games $(A,u'),(A,u'')$ with potentials $\varphi',\varphi''$, the game $(A,u'+u'')$ is an exact potential game with potential $\varphi=\varphi'+\varphi''$. \end{itemize} Note that our game can be written as the sum of an identical interest game: $$u'_A=u'_B= 4W\cdot \mathds{1}_{d(v,w)\leq 1} + val^{(w,\overrightarrow y)}(v) + val^{(v,\overrightarrow x)}(w)$$ and an opponent independent game: $$u''_A=4W\cdot \mathds{1}_{\overrightarrow x=n(v)} + f_A(v), \ u''_B=4W\cdot \mathds{1}_{\overrightarrow y=n(w)} + f_B(w)$$ Therefore their sum is an exact potential game with potential: \begin{align}\label{eq:pot} \begin{split} \phi((v,\overrightarrow x),(w,\overrightarrow y))= & 4W\cdot \mathds{1}_{d(v,w)\leq 1}+4W\cdot \mathds{1}_{\overrightarrow x=n(v)}+4W\cdot \mathds{1}_{\overrightarrow y=n(w)} \\ & + val^{(w,\overrightarrow y)}(v) + val^{(v,\overrightarrow x)}(w)+ f_A(v) +f_B(w) \end{split} \end{align} \end{proof} \begin{lemma}\label{lem:pne} The pure Nash equilibria of the game are precisely the profiles $((v,\overrightarrow x),(v,\overrightarrow x'))$ such that $v$ is a local maximum of $f_A+f_B$ and $\overrightarrow x$ and $\overrightarrow x'$ are truthful reports of the values of $v$ and its neighbours according to $f_A$ and $f_B$, respectively. \end{lemma} \begin{proof} The pure Nash equilibria are the local maxima (with respect to unilateral deviations) of the potential. It is easy to check that in a local maximum $\overrightarrow x$ and $\overrightarrow y$ are truthful reports: a truthful report earns the reward of $4W$, which is lost by any untruthful report, whereas Alice can gain at most $val^{(v,\overrightarrow x)}(w)+ f_A(v)\leq 2W$ from misreporting the values, and similarly for Bob. 
Similarly, in a local maximum $v$ and $w$ are neighbours (or the same vertex), because the reward of $4W$ is lost if $v$ and $w$ are not neighbours, in which case Alice's gain from the terms $val^{(w,\overrightarrow y)}(v) + val^{(v,\overrightarrow x)}(w)+ f_A(v)$ is at most $3W$. A similar argument holds for Bob. For a profile of strategies that satisfies the above, the potential is equal to (see Equation \eqref{eq:pot}): \begin{align}\label{eq:tpot} \phi((v,\overrightarrow x),(w,\overrightarrow y))=12W+f_B(v)+f_A(w) +f_A(v)+ f_B(w) \end{align} A profile where $v\neq w$ is not a Nash equilibrium because, by the distinctness assumption, $f_A(v)+ f_B(v) \neq f_A(w)+ f_B(w)$, so if $f_A(v)+ f_B(v) < f_A(w)+ f_B(w)$ then Alice can deviate to $(w,\overrightarrow y)$ and increase the potential; otherwise, Bob can deviate to $(v,\overrightarrow x)$ and increase the potential. Finally, a profile $((v,\overrightarrow x),(v,\overrightarrow x'))$ with truthful reporting is clearly a Nash equilibrium if $v$ is a local maximum of $f_A+f_B$. If $v$ is not a local maximum of $f_A+f_B$, then Alice will increase the potential (given in Equation \eqref{eq:tpot}) if she deviates to the action $(w,n(w))$ where $w$ is a neighbour of $v$ with $f_A(w)+f_B(w)>f_A(v)+f_B(v)$. \end{proof} Lemmas \ref{lem:exact} and \ref{lem:pne} complete the proof of the theorem. \section{Proof of Theorem \ref{theo:n-pot}}\label{sec:pr-n} The proof of Theorem \ref{theo:n-pot} is done in two steps. First, we show a $2^{\Omega(\sqrt[3]{n})}$ bound. This is the significant part, both in terms of the deduced result and in terms of the techniques. Thereafter, in Section \ref{sec:2n} we improve the bound to $2^{\Omega(n)}$, building upon the arguments of this section. We start with proving the $2^{\Omega(\sqrt[3]{n})}$ bound. Our starting point is the proof of the hardness of $2$-player $N$-action exact potential games (Theorem \ref{theo:2pot}). 
However, since we consider $n$-player binary-action games, it is convenient to reduce from the problem $\textsc{SumLS}(\textsf{Hyp}_n)$ (local search on the $n$-dimensional hypercube). We will get an exact potential game with $\Theta(n^3)$ players, where each player has only two actions. \paragraph{A naive approach and an obstacle. } The simplest idea that comes to mind is to consider a \emph{group} of $n$ players who will choose $v\in \textsf{Hyp}_n$, and a \emph{group} of $(n+1) \lceil\log W \rceil$ players who will report the valuation vector $\overrightarrow x$ of the vertex itself and its $n$ neighbours, and similarly for Bob. We would like to give the group of Alice's players an \emph{identical utility} that is similar to the utility of Alice in the two-player game. An obstacle that arises with this approach is that if the groups of Alice's and Bob's players are playing two adjacent vertices $v,w\in \textsf{Hyp}_n$ with truthful valuations, none of them will want to switch to the opponent's vertex, even if the sum $f_A+f_B$ is higher at the adjacent vertex. This follows from the fact that if $(v,\overrightarrow x)$ is a truthful valuation, then $(w,\overrightarrow x)$ is not necessarily a truthful valuation (because the relevant vertices and their order are different with respect to $v$ and with respect to $w$). Thus, players in Alice's group would gain the difference in the potentials (at most $3W$) but lose $4W$ because the group report is no longer truthful. Note that the same obstacle does not arise in the two-player case, where Alice could change the vertex $v$ and the report $\overrightarrow x$ \emph{simultaneously}. In the $n$-player case we consider unilateral deviations that correspond to changes of single bits, and thus such simultaneous deviations are impossible. \paragraph{The solution to the obstacle. } To resolve this issue, we modify the form of the report $\overrightarrow x$ in the game. 
\begin{itemize} \item Instead of reporting the values in the ball of radius 1 around $v$ (i.e., $v$ and its neighbours), the players report the values in the ball of radius 2 around $v$. In the hypercube, this means that the report consists of $m=1+n+\frac{n(n-1)}{2}$ valuations. \item Instead of reporting the values in a fixed order (namely $(v,n_1(v),...,n_4(v))$), the players jointly report pairs, where each pair consists of an index of a vertex $v$ and $f_A(v)$ (or $f_B(v)$). \end{itemize} \paragraph{The construction. }More formally, for Alice, we have a group of $n$ players with binary actions who jointly choose the vertex $v\in \{0,1\}^n$. In other words, the action of the $i$'th player in the group corresponds to the $i$'th bit in the index of the vertex. We have a group of $mn$ players with binary actions who jointly choose a list of $m$ vertices $\overrightarrow{xv}= (xv_1,...,xv_m)\in (\{0,1\}^n)^m$. Finally, we have a group of $mb$ players, where $b:=\lceil \log W \rceil$, with binary actions who jointly choose a list of $m$ valuations $\overrightarrow{xf}=(xf_1,...,xf_m)\in (\{0,1\}^b)^m$. We denote $\overrightarrow{x}=(\overrightarrow{xv},\overrightarrow{xf})$. Similarly to the two-player case, a report $\overrightarrow{x}=(\overrightarrow{xv},\overrightarrow{xf})$ defines a valuation function over all vertices. For a list $\overrightarrow{xv}$ we denote by $I_{\min}(\overrightarrow{xv}):=\{i\in [m]: xv_i \neq xv_j \text{ for all } j<i\}$ the set of indices with the \emph{first} appearance of a vertex. The valuation is defined by \begin{align*} val^{\overrightarrow{x}}(w)=\begin{cases} val(xf_i) &\text{if } w=xv_i \text{ for } i\in I_{\min}(\overrightarrow{xv}); \\ 0 &\text{otherwise,} \end{cases} \end{align*} where $val(\cdot)\in [W]$ denotes the numerical value of the binary string. Note that in case of multiple appearances of $w$ in the list, we choose the value at its first appearance. 
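The first-appearance semantics of $val^{\overrightarrow{x}}$ can be sketched as follows; binary encodings are elided and all names are illustrative:

```python
# A report is a list of (vertex, value) pairs; only the *first*
# appearance of a vertex in the list determines its valuation.

def val_from_pairs(xv, xf, w):
    """Induced valuation of vertex w from the lists xv (vertices) and
    xf (values); unlisted vertices get 0."""
    for v_i, f_i in zip(xv, xf):
        if v_i == w:
            return f_i  # first appearance wins
    return 0

xv = [(0, 1), (1, 1), (0, 1)]   # vertex (0, 1) appears twice
xf = [3, 5, 8]
assert val_from_pairs(xv, xf, (0, 1)) == 3   # not 8
assert val_from_pairs(xv, xf, (1, 0)) == 0
```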
Similarly for Bob, we have three groups who jointly choose $w$, $\overrightarrow{yw}$, and $\overrightarrow{yf}$. The report $\overrightarrow{y}$ defines a valuation function $val^{\overrightarrow{y}}$ over all vertices. Note that the total number of players in the game is $2(n+m(n+b))=O(n^3)$. Before we present the actual utilities, we informally describe the prioritization according to which we set them. In the two-player case there were only two levels of prioritization: the top-level priority included the distance $d(v,w)$ (the $\mathds{1}_{d(v,w)\leq 1}$ term in the utility functions) and the truthfulness of the report (the $\mathds{1}_{x=n(v)}$ term in the utility functions). The bottom-level priority included the remaining potential-related terms ($val^{(w,\overrightarrow y)}(v), val^{(v,\overrightarrow x)}(w), f_A(v)$). More formally, by \emph{prioritization} we mean that improving a higher-priority term by $1$ should increase the utility \emph{irrespective} of how the lower-priority terms change. Indeed, the multiplier $4W$ was set in exactly such a way. In the current construction, the prioritization levels are more involved, and we sketch them here from the highest priority to the lowest. \begin{enumerate} \item The distance $d(v,w)$. \item The list $\overrightarrow{xv}$ should contain $v$ and its neighbours. \item The valuations $\overrightarrow{xf}$ should be correct for $v$ and its neighbours. \item The potential-related terms (the core of the proof). \item The list $\overrightarrow{xv}$ should contain the vertices within distance 2 from $v$. \item The valuations $\overrightarrow{xf}$ should be correct for the vertices within distance 2 from $v$. \end{enumerate} Now we describe the analogue of each of these priorities in the $n$-player case. Hereafter, $d(\cdot,\cdot)$ will denote the \emph{Hamming distance} (in the corresponding dimension). We denote by $B_r(v)$ the ball of radius $r$ around $v$ with respect to the Hamming distance.
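As a concrete illustration, the ball $B_r(v)$ can be enumerated directly; the sketch below (assumed bit-tuple encoding of hypercube vertices) also checks that $|B_2(v)|=1+n+\binom{n}{2}$, which is exactly the report length $m$:

```python
from itertools import combinations

def ball(v, r):
    """B_r(v): all hypercube vertices within Hamming distance r of v."""
    n = len(v)
    out = {v}
    for k in range(1, r + 1):
        for flips in combinations(range(n), k):
            w = list(v)
            for i in flips:
                w[i] ^= 1  # flip the chosen coordinates
            out.add(tuple(w))
    return out
```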
\begin{enumerate} \item $\mathds{1}_{d(v,w)\leq 1}$ is translated to $-d(v,w)\cdot \mathds{1}_{d(v,w)\geq 2}$. Namely, the loss is 0 in case the players choose the same vertex or adjacent vertices; otherwise the loss increases with the distance. \item Given $v$, we denote $N_1(v):=\{(v_1,...,v_m): \{v_1,...,v_m\} \supset B_1(v)\}\subset \{0,1\}^{mn}$. Namely, $N_1(v)$ specifies the lists that contain $v$ and its neighbours. At the second priority we have $-d(\overrightarrow{xv},N_1(v))$. \item Given $v$ and $\overrightarrow{xv}$, for an index $i\in I_{\min}(\overrightarrow{xv})$ such that $xv_i\in B_1(v)$ we have at the third priority the term $-d(xf_i,bin(f_A(xv_i)))$, where we recall that $bin(z)\in \{0,1\}^b$ denotes the binary representation of the value $z\in [W]$. Note that this definition takes into account only the \emph{first} appearance of every neighbour, which is consistent with the definition of $val^{\overrightarrow{x}}$. For the other indices $i\in [m]$ the term will be identical, but it will appear at the lowest, sixth priority. \item The profile $(v,\overrightarrow{x}),(w,\overrightarrow{y})$ defines a natural analogue of the two-player potential terms: $val^{\overrightarrow y}(v), val^{\overrightarrow x}(w), f_A(v), f_B(w)$. These terms are at the fourth priority. \item Given $v$, we denote by $N_2(v):=\{(v_1,...,v_m): \{v_1,...,v_m\} = B_2(v)\}\subset \{0,1\}^{mn}$ the lists that include precisely the set of all vertices within radius 2 from $v$. At the fifth priority we have $-d(\overrightarrow{xv},N_2(v))$. \item Finally, similarly to item 3, given $v$ and $\overrightarrow{xv}$, for every index $i\in [m]$ we have at the sixth priority the term $-d(xf_i,bin(f_A(xv_i)))$. \end{enumerate} Now we are ready to define the utilities.
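The prioritization just described admits a simple generic recipe for the multipliers: choose each one to exceed the total weighted swing of all lower-priority terms. A hedged sketch of that recipe (illustrative only; it does not reproduce the specific constants $k_1,\dots,k_6$ chosen below):

```python
def priority_multipliers(ranges):
    """Given ranges[j] = the maximal total change of the priority-j terms
    under a single deviation (listed from highest to lowest priority),
    return multipliers k[j] such that improving a higher-priority term
    by 1 outweighs any change in all lower-priority terms combined."""
    ks, budget = [], 0          # budget = max weighted swing of lower terms
    for r in reversed(ranges):  # build from the lowest priority upward
        k = budget + 1
        ks.append(k)
        budget += k * r
    return list(reversed(ks))
```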
As mentioned above, all the players in Alice's groups have an identical utility, which is equal to: \begin{align*} u^A_i(v,\overrightarrow{x},w,\overrightarrow{y})= & - k_1 \cdot d(v,w)\mathds{1}_{d(v,w)\geq 2} \\ & - k_2 \cdot d(\overrightarrow{xv},N_1(v)) \\ & - k_3 \cdot \sum_{i\in I_{\min}(\overrightarrow{xv}) \text{ s.t. } xv_i\in B_1(v)} d(xf_i,bin(f_A(xv_i))) \\ & + k_4 [val^{\overrightarrow y}(v) + val^{\overrightarrow x}(w) + f_A(v)] \\ & - k_5 \cdot d(\overrightarrow{xv},N_2(v)) \\ & - k_6 \cdot \sum_{i\in [m]} d(xf_i,bin(f_A(xv_i))), \end{align*} where we set $k_1,...,k_6$ as follows. We set $k_6=1$. Next we set $k_5$ to be greater than the maximal difference of sixth priority terms, e.g., $k_5=2n^2 b>mb$. Next we set $k_4$ to be greater than the maximal total difference of the sixth and fifth priority terms, e.g., $k_4=2n^3 b>mb+k_5(nm)$. Similarly we may proceed with $k_3 = 8W n^3 b$, $k_2 = 8Wn^5 b^2$, and $k_1=8Wn^8 b^2$. Similarly we define each member of Bob's group to have the following identical utility function: \begin{align*} u^B_i(v,\overrightarrow{x},w,\overrightarrow{y})= & - k_1 \cdot d(v,w)\mathds{1}_{d(v,w)\geq 2} \\ & - k_2 \cdot d(\overrightarrow{yw},N_1(w)) \\ & - k_3 \cdot \sum_{i\in I_{\min}(\overrightarrow{yw}) \text{ s.t. } yw_i\in B_1(w)} d(yf_i,bin(f_B(yw_i))) \\ & + k_4 [val^{\overrightarrow y}(v) + val^{\overrightarrow x}(w) + f_B(w)] \\ & - k_5 \cdot d(\overrightarrow{yw},N_2(w))\\ & - k_6 \cdot \sum_{i\in [m]} d(yf_i,bin(f_B(yw_i))). \end{align*} \begin{lemma}\label{lem:potential} The defined $(2n+2m(n+b))$-player binary-action game is an exact potential game. \end{lemma} \begin{proof} If we view the game as a \emph{two}-player game where Alice chooses $(v,\overrightarrow{x})$ and Bob chooses $(w,\overrightarrow{y})$, the game is an exact potential game by arguments similar to those in Lemma \ref{lem:exact}. Namely, it is the sum of two games, where one is an identical-interest game and the other is an opponent-independent game.
The potential function of the game is given by: \begin{align*} \phi(v,\overrightarrow{x}, & w,\overrightarrow{y})= - k_1 \cdot d(v,w)\mathds{1}_{d(v,w)\geq 2} \\ & - k_2 [d(\overrightarrow{xv},N_1(v))+d(\overrightarrow{yw},N_1(w))] \\ & - k_3 [\sum_{i\in I_{\min}(\overrightarrow{xv}) \text{ s.t. } xv_i\in B_1(v)} d(xf_i,bin(f_A(xv_i))) + \sum_{i\in I_{\min}(\overrightarrow{yw}) \text{ s.t. } yw_i\in B_1(w)} d(yf_i,bin(f_B(yw_i)))] \\ & + k_4 [val^{\overrightarrow y}(v) + val^{\overrightarrow x}(w) + f_A(v)+f_B(w)] \\ & - k_5 [d(\overrightarrow{xv},N_2(v))+d(\overrightarrow{yw},N_2(w))] \\ & - k_6 [\sum_{i\in [m]} d(xf_i,bin(f_A(xv_i)))+\sum_{i\in [m]} d(yf_i,bin(f_B(yw_i)))]. \end{align*} Note that by replacing Alice (resp. Bob) by a group of $n+m(n+b)$ players, all with the same utility, we only reduced the set of possible unilateral deviations. For each of these unilateral deviations, by the two-player result, the change in the utility is equal to the change in the potential. \end{proof} \begin{lemma}\label{lem:npne} Every pure Nash equilibrium of the defined $(2n+2m(n+b))$-player binary-action game is of the form $(v,\overrightarrow{x},v,\overrightarrow{y})$, where $v$ is a local maximum of $f_A+f_B$ over the hypercube. \end{lemma} \begin{proof} The proof proceeds by narrowing the set of equilibrium candidates according to the prioritization levels, with a twist at the fourth priority level. First, in every equilibrium $d(v,w)\leq 1$, because otherwise there exists a player in Alice's $v$ group who can switch his strategy and decrease the distance by 1. Such a switch increases the first term in the utility of the group by $k_1$. By the choice of $k_1$, any change in the other terms of the utility is smaller. Second, in every equilibrium $\overrightarrow{xv}\in N_1(v)$, because otherwise there exists a player in Alice's $\overrightarrow{xv}$ group who can switch his strategy and decrease the distance by 1.
Such a switch does not affect the first term of the utility, and it increases the second term by $k_2$. By the choice of $k_2$, any change in the other terms of the utility is smaller. Similarly, for Bob we have $\overrightarrow{yw}\in N_1(w)$. Third, in every equilibrium, for every $i\in I_{\min}(\overrightarrow{xv})$ such that $xv_i\in B_1(v)$ we have $xf_i=bin(f_A(xv_i))$. Simply speaking, all first appearances of elements of $B_1(v)$ (which indeed appear, by the argument regarding the second priority level) carry correct valuations. If this were not the case, then there exists a player in Alice's $\overrightarrow{xf}$ group who can switch his strategy and decrease the distance by 1. Such a switch does not affect the first two terms of the utility, and it increases the third term by $k_3$. By the choice of $k_3$, any change in the other terms of the utility is smaller. Similarly for Bob, all first appearances of elements of $B_1(w)$ carry correct valuations. Now we jump to the fifth and sixth priority levels. Given that $v,w$ are neighbours (or the same vertex) and their values already appear in the report $\overrightarrow{x}$, the terms of the utility at the fourth priority level are not affected by the vertices $xv_i$ such that $i\notin I_{\min}(\overrightarrow{xv})$ or $xv_i \notin B_1(v)$. Therefore, we can deduce that necessarily in equilibrium $\overrightarrow{xv}\in N_2(v)$, because otherwise some player in the $\overrightarrow{xv}$ group can decrease the distance by 1 without affecting any of the first four terms, and increase the fifth term by $k_5$. Any change in the last term is smaller. Similarly we can argue for the sixth priority level that the values of $xf_i$ at the corresponding indices do not affect any other term.
From these arguments it follows that in any equilibrium Alice and Bob report lists $\overrightarrow{xv}$ and $\overrightarrow{yw}$, respectively, that contain exactly all the vertices in the balls of radius 2 around $v$ and $w$; moreover, the valuations of all these vertices are correct. Now we go back to the fourth priority. Similarly to the two-player case, the fourth term in the \emph{potential function} of the game is $val^{\overrightarrow{x}}(w)+val^{\overrightarrow{y}}(v)+f_A(v)+f_B(w)$, which under all the above restrictions on equilibria is equal to $f_A(w)+f_B(v)+f_A(v)+f_B(w)$. Assume by way of contradiction that $v\neq w$. Then we may assume w.l.o.g. that $f_A(w)+f_B(w)\geq f_A(v)+f_B(v)+1$ (recall that we may assume that the sum differs at adjacent vertices and takes integer values), and there exists a player in Alice's $v$ group who can switch his bit and turn the vertex $v$ into $w$. Let us examine the effect of this change on the potential. The first priority level term remains 0. The key observation is that the second and third priority level terms also remain 0. Note that the list $\overrightarrow{xv}$ includes all the vertices within radius 2 from $v$, and in particular all the vertices within radius 1 from $w$. Similarly, the valuations $\overrightarrow{xf}$ of these vertices remain correct. Therefore the potential increases by at least $k_4$ in the first four terms, and any change in the fifth and sixth terms is smaller. Finally, for the case of $v=w$ where $v$ is not a local maximum, we apply very similar arguments: there exists a player in Alice's $v$ group who can increase the fourth-priority terms of the potential by at least $k_4$ while affecting only the fifth and sixth terms otherwise. \end{proof} Lemmas \ref{lem:potential} and \ref{lem:npne} complete the proof of the $2^{\Omega(\sqrt[3]{n})}$ bound.
\subsection{Proof of Theorem \ref{theo:n-pot}: Improving the Bound to $2^{\Omega(n)}$}\label{sec:2n} The reduction presented above has $\Theta(n^3)$ players, which yields a lower bound of $2^{\Omega(\sqrt[3]{n})}$ on the problem of finding a pure Nash equilibrium. Here we modify the reduction to have $\Theta(n)$ players, which implies a lower bound of $2^{\Omega(n)}$ on the problem of finding a pure Nash equilibrium. The idea is to reduce the unnecessary ``wasting'' of players in the reduction. In the reduction presented above, Alice reports to Bob the valuations of \emph{all} vertices within radius 2 around $v$ (there are $\Theta(n^2)$ such vertices). However, the arguments of the proof of Theorem \ref{theo:grid} can be modified to show the existence of hard instances over the hypercube where, for most of the vertices within radius 2 from $v$, Alice and Bob \emph{know} the valuations of each other over these vertices. In fact, for these hard instances there exists only a \emph{constant} number of vertices for which Alice does not know Bob's valuation, and Bob does not know Alice's. In the modified reduction, Alice's group will report only the valuations of the unknown vertices, which will require only $O(n)$ players for her group. We start with a modification of Lemma \ref{lem:hyp}, which embeds the constant-degree graph $G$ in $\textsf{Hyp}_n$. We present an embedding of $G$ in $\textsf{Hyp}_n$ with the additional property that every ball of radius 2 in $\textsf{Hyp}_n$ contains at most a \emph{constant} number of vertices of the embedding's image. Formally, given an embedding $(\varphi,\chi)$ where $\varphi:V_G \rightarrow \{0,1\}^n$, $\chi:E_G \rightarrow P(\textsf{Hyp}_n)$, we denote the \emph{image of the embedding} by $Im(G)=\{w\in \{0,1\}^n: w\in \varphi(V_G) \cup \chi(E_G)\}$.
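One ingredient of the sparsified embedding constructed next is a coordinate-tripling map, under which Hamming distances scale by a factor of 3. A minimal sketch (assumed bit-tuple encoding; illustrative only):

```python
def triple(v):
    """Replace every coordinate of v by three copies of itself."""
    return tuple(b for bit in v for b in (bit, bit, bit))

def hamming(u, v):
    """Hamming distance between two equal-length tuples."""
    return sum(x != y for x, y in zip(u, v))
```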
\begin{lemma}\label{lem:hyp-const} Let $G$ be the graph with $N$ vertices that is defined in Section \ref{sec:veto} (the constant-degree graph for which Theorem \ref{theo:opt}(\ref{theo:opt-bounded}) holds). The graph $G$ can be VIED-embedded in $\textsf{Hyp}_n$ for $n=O(\log N)$, such that for every $w\in\{0,1\}^n$ we have\footnote{More concretely, $n=3\log N+333$ and $|B_2(w)\cap Im(G)|\leq 73$.} $|B_2(w)\cap Im(G)|=O(1)$. \end{lemma} \begin{proof} We ``sparsify'' the embedding of Lemma \ref{lem:hyp} to reach a situation where every pair of independent edges is embedded to paths that are at distance at least 3 from one another. This can be done, for instance, by embedding $G$ in a hypercube of dimension $n=3(\log N+111)$ rather than dimension $n'=\log N +111$, where we replace every coordinate of $\textsf{Hyp}_{n'}$ by three copies of itself. Such a change multiplies the Hamming distance by a factor of 3. For such an embedding, the maximal number of vertices of $Im(G)$ in a ball of radius 2 is obtained at a vertex $w\in \varphi(V_G)$ and is equal to $1+2\cdot 36=73$: the vertex itself and two vertices of each of the 36 embedded edges. \end{proof} We proceed with a short presentation of the arguments that prove Theorem \ref{theo:opt}(\ref{theo:opt-hypercube}) from Theorem \ref{theo:opt}(\ref{theo:opt-bounded}), followed by a corollary that will be essential in our reduction. The arguments below are very similar to, yet slightly different from, the proof presented in Section \ref{sec:main-pr}. \paragraph{Proof of Theorem \ref{theo:opt}(\ref{theo:opt-hypercube}) from Theorem \ref{theo:opt}(\ref{theo:opt-bounded}). } We reduce $\textsc{SumLS}(G)$ to $\textsc{SumLS}(\textsf{Hyp}_n)$ using the VIED embedding $(\varphi,\chi)$ of Lemma \ref{lem:hyp-const}. Let $Im(G)\subset \textsf{Hyp}_n$ be the image of the embedding and let $w^*\in Im(G)$ be some fixed vertex.
Given an instance $(f_A,f_B)$ of $\textsc{SumLS}(G)$ we define an instance $(f'_A,f'_B)$ of $\textsc{SumLS}(\textsf{Hyp}_n)$ by \begin{align*} f'_A(w)= \begin{cases} f_A(v) &\text{if } w=\varphi(v)\in \varphi(V_G) \\ \frac{k}{l} f_A(v) + \frac{l-k}{l}f_A(v') &\text{if } w\in \chi(\{v,v'\})\subset \chi(E_G) \\ -d(w,w^*) &\text{otherwise.} \end{cases}\\ f'_B(w)= \begin{cases} f_B(v) &\text{if } w=\varphi(v)\in \varphi(V_G) \\ \frac{k}{l} f_B(v) + \frac{l-k}{l}f_B(v') &\text{if } w\in \chi(\{v,v'\})\subset \chi(E_G) \\ -d(w,w^*) &\text{otherwise,} \end{cases} \end{align*} where in the case $w\in \chi(\{v,v'\})$ we assume that $w$ is the $k$'th element in the path $\chi(\{v,v'\})$ and $l$ is the total length of this path. Simply speaking, we set the functions $f'_A,f'_B$ to take the values $f_A(v),f_B(v)$ on the embedded vertices $\varphi(v)$. On intermediate vertices along a path that embeds an edge, we set the value to be a weighted average of the valuations at the two endpoints. For vertices outside of $Im(G)$ we set $f'_A(w)=f'_B(w)=-d(w,w^*)$, which depends only on $w$ and not on the instance $(f_A,f_B)$. It can be easily checked that the local maxima of $f'_A+f'_B$ over $\textsf{Hyp}_n$ are precisely $\{\varphi(v): v \text{ is a local maximum of } f_A+f_B \text{ over } G\}$. \begin{corollary}\label{cor:img-fixed} Finding a local maximum in $\textsf{Hyp}_n$ under the promise that $f'_A(w)=f'_B(w)=-d(w,w^*)$ for all $w\notin Im(G)$ requires $2^{\Omega(n)}$ communication. \end{corollary} Now we construct a potential game with $\Theta(n)$ players that solves the \emph{promise} $\textsc{SumLS}$ problem of Corollary \ref{cor:img-fixed}. We mimic the arguments of the previous $2^{\Omega(\sqrt[3]{n})}$ bound with one change: the reports $\overrightarrow{x}$ and $\overrightarrow{y}$ are given on vertices in $B_2(v)\cap Im(G)$ rather than $B_2(v)$. By Lemma \ref{lem:hyp-const} it is sufficient to report $73$ vertices (rather than $\Theta(n^2)$).
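For illustration, the weighted-average extension used in the middle case of the definition of $f'_A$ can be sketched as follows (indexing convention assumed: position $k=l$ is $\varphi(v)$ and position $k=0$ is $\varphi(v')$):

```python
def extend_along_path(f_v, f_vprime, l):
    """Values of f' at positions k = 0..l along a path of length l that
    embeds the edge {v, v'}: position k gets (k/l) f(v) + ((l-k)/l) f(v')."""
    return [(k * f_v + (l - k) * f_vprime) / l for k in range(l + 1)]
```

The values thus interpolate linearly between the two endpoint valuations, so no new local maxima are created along the path.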
A report consists of $\overrightarrow{x}=(\overrightarrow{xv},\overrightarrow{xf})$, where $\overrightarrow{xv}$ is a 73-tuple of vertices and $\overrightarrow{xf}$ is a 73-tuple of valuations. The valuation function $val^{\overrightarrow{x}}(w)$ is modified to be $val^{\overrightarrow{x}}(w) = val(xf_i)$ if $w=xv_i$ for $i\in I_{\min}(\overrightarrow{xv})$; otherwise, if $w\notin Im(G)$, we set $val^{\overrightarrow{x}}(w)=-d(w,w^*)$; otherwise, we set $val^{\overrightarrow{x}}(w)=0$. Similarly for Bob. We also modify the definition of the sets of lists for $v\in \textsf{Hyp}_n$: \begin{align*} N_1(v)&:=\{(v_1,...,v_{73}): \{v_1,...,v_{73}\} \supset B_1(v)\cap Im(G)\}\subset \{0,1\}^{73n} \\ N_2(v)&:=\{(v_1,...,v_{73}): \{v_1,...,v_{73}\} \supset B_2(v) \cap Im(G)\}\subset \{0,1\}^{73n}. \end{align*} Note that by Lemma \ref{lem:hyp-const}, $N_1(v),N_2(v)\neq \emptyset$ for all vertices $v$. From here, we apply arguments similar to those in Section \ref{sec:npot-pr} to prove a reduction from the local search promise problem of Corollary \ref{cor:img-fixed} to pure Nash equilibrium in potential games. The only additional argument that is needed is that $val^{\overrightarrow{x}},val^{\overrightarrow{y}}$ have the correct valuation for all vertices $w\notin Im(G)$ (in particular those within radius 2).
\section{Introduction} Vesicles are widely used as a model system for biological cells due to their simplicity and controllability. The deformation of the lipid membrane, in particular under an applied electric field (electrodeformation), is often explored to probe membrane properties \cite{Kummrow1991,Niggemann1995} and to detect pathological changes in cells. \cite{Wong2005} In the past decade, vesicle electrodeformation has become a significant subject of study, and earlier work can be divided into two categories. In the first category, an alternating-current (AC) field is applied, which often induces stationary and small deformations. \cite{Kummrow1991,Niggemann1995,Dimova2007,Aranda2008} Correspondingly, an electrohydrodynamic theory in the small-deformation limit was developed to interpret the data trends. \cite{Vlahovska2009} In the second, under direct-current (DC) electric fields, vesicles usually exhibit large and transient deformations due to the large field strengths commonly applied. \cite{Kakorin2003,Riske2005,Riske2006,Sadik2011} Recently, using high-resolution, high-speed optical imaging, Riske and Dimova\cite{Riske2005} acquired a large amount of data capturing the complex deformation-relaxation behavior of the vesicles. Although some qualitative and scaling arguments were presented, \cite{Dimova2007} the data was not fully interpreted due to the absence of a predictive model. Meanwhile, one of us (HL) experimentally examined vesicles in the large-deformation regime with aspect ratios reaching ten. \cite{Sadik2011} A large-deformation theory was also presented, which provided quantitative agreement with the data therein. However, the model was semi-empirical in that the hydrodynamic problem was not rigorously treated, but instead followed an empirical approach by Hyuga and co-authors.
\cite{Hyuga1991a,Hyuga1991} In general, a rigorous and transient analysis needs to be developed to understand the complex deformation-relaxation behavior, and to provide insights on the underlying physical processes. In this work, we develop a transient analysis for vesicle electrodeformation. The theory is derived by extending our previous work on a droplet model, \cite{Zhang2012} with the additional consideration of a lipid membrane separating two fluids of arbitrary properties. For the latter, both a membrane-charging and a membrane-mechanical model are supplied. Similar to the droplet model, the main result is also an ordinary differential equation (ODE) governing the evolution of the vesicle aspect ratio. The effects of initial membrane tension and pulse length are examined. The model prediction is extensively compared with experimental data from Riske and Dimova \cite{Riske2005} and Sadik \emph{et al.},\cite{Sadik2011} and is shown to accurately capture the system behavior in the regime of no or weak electroporation. More importantly, the comparison reveals that vesicle relaxation obeys a universal behavior, and is governed by a single timescale that is a function of the vesicle initial radius, the fluid viscosity, and the initial membrane tension. This behavior holds regardless of the means of deformation, whether via an AC or DC electric field, or via mechanical stretching. This universal scaling law is a main contribution of the current work, and can be used to calculate membrane properties from experimental data. \section{Theory} The problem configuration is shown in Fig. \ref{fig:Schematics-of-problem.}. Under the influence of an applied electric field, charges of opposite signs are allowed to accumulate on the two sides of the membrane, which induces vesicle deformation and electrohydrodynamic flows both inside and outside the vesicle. We assume that the vesicle remains spheroidal in shape throughout the process.
All notations, as well as the prolate spheroidal coordinate system, follow those from Zhang \emph{et al.}\cite{Zhang2012} The surface of the prolate spheroid is conveniently given as\begin{equation} \xi=\xi_{0}\equiv\frac{a}{c}.\label{prolate surface}\end{equation}Here $c=\sqrt{a^{2}-b^{2}}$ is chosen to be the semi-focal length of the spheroidal vesicle, and $a$ and $b$ are the major and minor semi-axes, respectively. For the derivation below, we further assume that the volume of the vesicle is conserved. We subsequently obtain\begin{equation} a=r_{0}(1-\xi_{0}^{-2})^{-\frac{1}{3}},\qquad b=r_{0}(1-\xi_{0}^{-2})^{\frac{1}{6}}.\label{ab}\end{equation} Therefore, the vesicle geometry is completely characterized by a single parameter, $\xi_{0}$, which evolves in time along with the deformation. The critical idea of the current analysis is to express all variables, e.g., the electric potential and the stream function, in terms of $\xi_{0}$. In what follows, we introduce both an electrical and a mechanical model for the membrane. An ODE for $\xi_{0}$ is obtained by applying the stress matching and kinematic conditions. \begin{figure} \center\includegraphics[width=0.35\textwidth]{schematic_a} \includegraphics[width=0.35\textwidth]{schematic_b} \caption{(a) A schematic of the problem configuration. The original radius of the vesicle is $r_{0}$. The conductivity is denoted by $\sigma$, the permittivity by $\epsilon$, and the viscosity by $\mu$; the subscripts $i$ and $e$ denote intravesicular and extravesicular, respectively. The strength of the applied electric field is $E_{0}$. (b) The prolate spheroidal coordinate system.
\label{fig:Schematics-of-problem.}} \end{figure} \subsection{The electrical problem} The electric potentials both inside and outside the vesicle are described by the Laplace equations:\begin{equation} \nabla^{2}\phi_{i}=\nabla^{2}\phi_{e}=0.\label{laplace for vesicle}\end{equation} However, at the membrane the matching conditions are modified:\begin{eqnarray} \frac{\sigma_{e}}{h_{\xi}}\frac{\partial\phi_{e}}{\partial\xi}=\frac{\sigma_{i}}{h_{\xi}}\frac{\partial\phi_{i}}{\partial\xi}=&&C_{m}\frac{\partial\frac{c}{h_{\xi}}(\phi_{e}-\phi_{i})}{\partial t}\nonumber\\ &&+\frac{G_{m}c}{h_{\xi}}(\phi_{e}-\phi_{i}),\qquad{\rm at}\ \xi=\xi_{0}.\label{eq:current continous}\end{eqnarray} Here $C_{m}$ and $G_{m}$ denote the membrane capacitance and conductance, respectively. $h_{\xi}$ is a metric coefficient of the prolate spheroidal coordinate system. This membrane-charging model is commonly adopted in previous research. \cite{Schwan1989,Grosse1992,Debruin1999,Krassowska2007,Li2011} The displacement currents from the electrolytes are not included, an approximation that is valid when the Maxwell-Wagner timescale, $T_{MW}=(\epsilon_{i}+2\epsilon_{e})/(\sigma_{i}+2\sigma_{e})$, and the charge relaxation timescale, $T_{cr}=\epsilon/\sigma$, are small when compared with the membrane-charging time, $T_{ch}=r_{0}C_{m}(1/\sigma_{i}+1/2\sigma_{e})$, and the deformation time, $T_{d}=\mu_{e}/\epsilon_{e}E_{0}^{2}$. However, the last two times are in general comparable with each other. The first term on the RHS of Eq. (\ref{eq:current continous}) represents capacitive charging of the membrane, which includes the effect of membrane deformation. However, the contribution from this effect is usually small, and is neglected in the current analysis for simplicity.
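As an order-of-magnitude illustration of this timescale separation, the sketch below uses parameter values that are merely assumed to be typical of giant-vesicle experiments (they are not taken from the text):

```python
# Assumed illustrative parameters (typical orders for giant vesicles):
# r0 ~ 10 um, Cm ~ 1e-2 F/m^2, sigma ~ 1e-3 S/m, eps ~ 7e-10 F/m,
# mu_e ~ 1e-3 Pa s, E0 ~ 1e5 V/m.
r0, Cm = 10e-6, 1e-2
sigma_i = sigma_e = 1e-3
eps_i = eps_e = 7e-10
mu_e, E0 = 1e-3, 1e5

T_MW = (eps_i + 2 * eps_e) / (sigma_i + 2 * sigma_e)  # Maxwell-Wagner time
T_cr = eps_i / sigma_i                                # charge relaxation time
T_ch = r0 * Cm * (1 / sigma_i + 1 / (2 * sigma_e))    # membrane-charging time
T_d = mu_e / (eps_e * E0 ** 2)                        # deformation time
# For these values T_MW, T_cr << T_ch ~ T_d, as the model requires.
```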
Equation (\ref{eq:current continous}) can be consequently reduced to\begin{eqnarray} \frac{\sigma_{e}}{h_{\xi}}\frac{\partial\phi_{e}}{\partial\xi}=\frac{\sigma_{i}}{h_{\xi}}\frac{\partial\phi_{i}}{\partial\xi}=&&\frac{C_{m}c}{h_{\xi}}\frac{\partial(\phi_{e}-\phi_{i})}{\partial t}\nonumber\\ &&+\frac{G_{m}c}{h_{\xi}}(\phi_{e}-\phi_{i}),\qquad{\rm at}\ \xi=\xi_{0}.\label{eq:reduced current continous}\end{eqnarray} Equation (\ref{eq:reduced current continous}) can be further simplified by considering different stages of charging. In the first stage, the transmembrane potential (TMP), $V_{m}\equiv(\phi_{i}-\phi_{e})_{\xi=\xi_{0}}$, grows continuously in magnitude, but the membrane is not permeabilized. Under this condition, $G_{m}$ is near zero, and Eq. (\ref{eq:reduced current continous}) becomes\begin{equation} \frac{\sigma_{e}}{h_{\xi}}\frac{\partial\phi_{e}}{\partial\xi}=\frac{\sigma_{i}}{h_{\xi}}\frac{\partial\phi_{i}}{\partial\xi}=\frac{C_{m}c}{h_{\xi}}\frac{\partial(\phi_{e}-\phi_{i})}{\partial t},\qquad{\rm at}\ \xi=\xi_{0}.\label{eq:capacitive current continous}\end{equation} In the second stage, the maximum TMP reaches the critical threshold, $V_{c}$, for electroporation to occur.\cite{Chang1990,Leontiadou2004,Gurtovenko2005,Tarek2005,Wohlert2006,Pliquett2007,Fernandez2010} The membrane becomes permeable to ions, and $G_{m}$ increases significantly to limit further growth of the TMP. In general, the exact values of $V_{m}$ and $G_{m}$ depend on the detailed electroporation conditions and variables such as pore density and pore area. \cite{Li2011} The solution usually requires a complex numerical simulation which is beyond the scope of the theoretical analysis pursued in this paper. However, a comprehensive model study by Li and Lin \cite{Li2011} showed that the maximum TMP remained at the critical level in the presence of the pulse post-permeabilization. In this work, we adopt an approximate model for this stage. 
We assume that once the maximum value of $V_{m}$ reaches $V_{c}$, it no longer grows and {}``freezes'' in time. In addition, the membrane is completely permeabilized, and Eq. (\ref{eq:reduced current continous}) is replaced by\begin{equation} \frac{\sigma_{e}}{h_{\xi}}\frac{\partial\phi_{e}}{\partial\xi}=\frac{\sigma_{i}}{h_{\xi}}\frac{\partial\phi_{i}}{\partial\xi},\quad V_{m}=V_{c},\qquad{\rm at}\ \xi=\xi_{0}.\label{eq:reach critical Vm}\end{equation} Note that electroporation only occurs for sufficiently strong electric fields, and Eq. (\ref{eq:reach critical Vm}) is not needed for some of the cases studied below where $V_{c}$ is never reached. Far away from the vesicle surface, the electric field is uniform\begin{equation} -\nabla\phi_{e}=E_{0}\mathbf{\boldsymbol{z}},\qquad{\rm at}\ \xi\rightarrow\infty.\label{eq:farfield electric vesicle}\end{equation} We also require that $\phi_{i}$ remains finite at $\xi=1$. For the initial condition, we solve Eqs. (\ref{laplace for vesicle}) and (\ref{eq:capacitive current continous}) with $V_{m}=0$. The general solution of the electric potentials for both the exterior and interior of the vesicle can be obtained following a procedure similar to that outlined in Zhang \emph{et al.}:\cite{Zhang2012} \begin{equation} \phi_{e}=E_{0}r_{0}\left[-\lambda\xi+\alpha Q_{1}(\xi)\right]\eta,\label{exterior potential}\end{equation} \begin{equation} \phi_{i}=E_{0}r_{0}\beta\xi\eta.\label{interior potential}\end{equation} Here, $Q_{1}(\xi)$ is the Legendre function of the second kind of degree one. $\lambda\equiv c/r_{0}$ is the dimensionless semi-focal length. The coefficients $\alpha$ and $\beta$ are again obtained by applying the matching conditions.
In the absence of electroporation, they are given as \begin{equation} \alpha=\frac{\beta+\sigma_{r}\lambda}{Q_{1}^{'}(\xi_{0})\sigma_{r}},\label{a(t)}\end{equation} \begin{widetext} \begin{eqnarray} \left[\frac{Q_{1}(\xi_{0})}{Q_{1}^{'}(\xi_{0})\sigma_{r}}-\xi_{0}\right]\frac{d\beta}{d\tau}-&&\left[\frac{Q_{1}(\xi_{0})Q_{1}^{''}(\xi_{0})-Q_{1}^{'2}(\xi_{0})(1-\sigma_{r})}{Q_{1}^{'2}(\xi_{0})\sigma_{r}}\frac{d\xi_{0}}{d\tau}+\frac{\tau_{2}}{\tau_{1}\lambda}\right]\beta\nonumber\\ &&-\left[\left(\xi_{0}-\frac{Q_{1}(\xi_{0})}{Q_{1}^{'}(\xi_{0})}\right)\frac{d\lambda}{d\xi_{0}}+\frac{\lambda Q_{1}^{''}(\xi_{0})Q_{1}(\xi_{0})}{Q_{1}^{'2}(\xi_{0})}\right]\frac{d\xi_{0}}{d\tau}=0,\label{b(t)} \end{eqnarray} \begin{equation} \alpha(0)=\frac{\lambda\xi_{0}(\sigma_{r}-1)}{Q_{1}^{'}(\xi_{0})\xi_{0}\sigma_{r}-Q_{1}(\xi_{0})},\qquad\beta(0)=\left[-\lambda+\alpha(0)Q_{1}^{'}(\xi_{0})\right]\sigma_{r}. \label{initial ab}\end{equation} \end{widetext} Here $\sigma_{r}\equiv\sigma_{e}/\sigma_{i}$ is the conductivity ratio. $\tau_{1}\equiv r_{0}C_{m}/\sigma_{i}$ is a membrane-charging time. $\tau_{2}\equiv r_{0}\mu_{e}/\Gamma_{0}$ is a characteristic flow timescale. $\Gamma_{0}$ is the initial membrane tension introduced below. The dimensionless time $\tau$ defined as $\tau\equiv t/\tau_{2}$ has been used. Note that the definitions of these times deviate slightly from those used in Zhang \emph{et al.}\cite{Zhang2012} due to the difference between a droplet and a vesicle. However, $\tau_{2}$ remains formally the same upon replacing $\gamma$ in Zhang \emph{et al.}\cite{Zhang2012} with $\Gamma_{0}$. After the maximum value of $V_{m}$ reaches the critical threshold, electroporation occurs. $\alpha$ and $\beta$ are calculated by Eq.
(\ref{eq:reach critical Vm}) which yields \begin{equation} \alpha=\frac{-V_{c}/(E_{0}r_{0})-\lambda\xi_{0}(\sigma_{r}-1)}{Q_{1}(\xi_{0})-Q_{1}^{'}(\xi_{0})\xi_{0}\sigma_{r}},\:\beta=\left[-\lambda+\alpha Q_{1}^{'}(\xi_{0})\right]\sigma_{r}.\label{electroporated ab}\end{equation} The expressions for the normal and tangential electrostatic stresses are found in Zhang \emph{et al.} \cite{Zhang2012} and not repeated here. \subsection{The hydrodynamic problem} In the regime of low-Reynolds-number flow, the governing equation for the hydrodynamic problem can be rewritten in terms of the stream function, $\psi$, as \begin{equation} \rm{E}^{4}\psi=0.\label{stream function}\end{equation} Here, the expression for the operator $\rm{E}^{2}$ can be found in Dubash and Mestel\cite{Dubash2007} and Bentenitis and Krause.\cite{Bentenitis2005} The stream function is related to the velocity components as\begin{equation} u=-\frac{1}{h_{\xi}h_{\theta}}\frac{\partial\psi}{\partial\xi},\qquad v=\frac{1}{h_{\eta}h_{\theta}}\frac{\partial\psi}{\partial\eta}.\label{velocity field}\end{equation} $h_{\eta}$ and $h_{\theta}$ are metric coefficients of the prolate spheroidal coordinate system. 
At the membrane, $u$ and $v$ represent the tangential and normal velocities, respectively, and they are required to be continuous \begin{equation} u_{e}=u_{i},\qquad v_{e}=v_{i},\qquad{\rm at}\ \xi=\xi_{0}.\label{eq:no-slip condition}\end{equation} In addition, we prescribe a kinematic condition relating the membrane displacement to the normal velocity, \begin{equation} v(\xi=\xi_{0},\:\eta)=\frac{r_{0}\left(1-\xi_{0}^{-2}\right)^{-5/6}}{3\xi_{0}^{2}}\frac{\left(1-3\eta^{2}\right)}{\sqrt{\xi_{0}^{2}-\eta^{2}}}\frac{d\xi_{0}}{dt}.\label{eq:kinematic equation}\end{equation} At the membrane, the stress matching condition is given as:\begin{equation} ||\tau\cdot\mathbf{\boldsymbol{n}}||=\mathbf{\boldsymbol{f}}^{mem}.\label{eq:stress balance}\end{equation} Here $\mathbf{\boldsymbol{f}}^{mem}$ is the surface force density arising from the vesicle membrane. The tensor $\tau$ includes contributions from both the hydrodynamic and electrostatic stresses:\begin{equation} \tau\equiv-p{\rm I}+\mu(\nabla\mathbf{\boldsymbol{v}}+\nabla\mathbf{\boldsymbol{v}}^{T})+\epsilon\mathbf{\boldsymbol{E}}\mathbf{\boldsymbol{E}}-\frac{1}{2}\epsilon(\mathbf{\boldsymbol{E}}\cdot\mathbf{\boldsymbol{E}}){\rm I}.\label{eq:stress}\end{equation} \begin{figure} \center \includegraphics[width=0.5\textwidth]{effect_of_initial_tension} \caption{The relative increase of the apparent area, $\Delta$, as a function of membrane tension, $\Gamma_{h}$, for different values of initial membrane tension, $\Gamma_{0}$. 
The inset shows the linear regime for larger $\Gamma_{h}$ values.\label{fig:Relative-area-expansion}} \end{figure} \subsection{The membrane-mechanical model} The surface force density at the vesicle membrane essentially consists of two parts \cite{Seifert1997,Vlahovska2009}\begin{equation} \mathbf{\boldsymbol{f}}^{mem}=\mathbf{\boldsymbol{f}}^{\kappa}+\mathbf{\boldsymbol{f}}^{\Gamma}.\label{eq:surface force density}\end{equation} Here $\mathbf{\boldsymbol{f}}^{\kappa}$ is the surface force density induced by bending resistance. $\mathbf{\boldsymbol{f}}^{\Gamma}=2\Gamma H\mathbf{\boldsymbol{n}}-\nabla_{s}\Gamma$ is the surface force density induced by the membrane tension. $H$ is the mean curvature, and $\Gamma$ is the local membrane tension. We can easily verify that $\mathbf{\boldsymbol{f}}^{\kappa}$ is several orders of magnitude smaller than $\mathbf{\boldsymbol{f}}^{\Gamma}$, and is therefore not included in the current analysis. The local membrane tension, $\Gamma$, is calculated by assuming an effective tension which is uniform over the entire membrane.\cite{Helfrich1984,Vlahovska2009} An increase of the homogeneous tension, $\Gamma_{h}$, from the initial tension, $\Gamma_{0}$, leads to an increase in the apparent membrane area: \cite{Helfrich1984,Evans1990,Kummrow1991,Evans1991}\begin{equation} \Delta=\frac{k_{B}T}{8\pi\kappa}{\rm ln}\frac{\Gamma_{h}}{\Gamma_{0}}+\frac{\Gamma_{h}-\Gamma_{0}}{K_{a}}.\label{membrane tension}\end{equation} Here $\Delta$ is the increase in the apparent membrane area relative to the initial spherical state,\begin{equation} \Delta=\frac{1}{2}\left(1-\xi_{0}^{-2}\right)^{-\frac{2}{3}}\left[1-\xi_{0}^{-2}+\left(\xi_{0}^{2}-1\right)^{\frac{1}{2}}{\rm arcsin}\left(\xi_{0}^{-1}\right)\right]-1.\label{eq:relative area increase}\end{equation} $K_{a}$ is the elastic stretching modulus. $\kappa$ is the bending rigidity. 
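To make the membrane-mechanical model concrete, the sketch below (illustrative Python, not part of the original analysis) evaluates the strain-tension relation of Eq. (\ref{membrane tension}) and its geometric counterpart, Eq. (\ref{eq:relative area increase}), assuming a room-temperature value of $k_{B}T$ and the material constants quoted later in the Results section; the monotone strain-tension relation is inverted by bisection in log-space.

```python
import math

kB_T = 4.11e-21   # thermal energy at ~298 K (assumed)
kappa = 2.47e-20  # bending rigidity [J], value quoted in the Results section
K_a = 0.14        # elastic stretching modulus [N/m]

def area_strain(Gamma_h, Gamma_0):
    # Eq. (membrane tension): undulation flattening + elastic stretching
    return kB_T / (8.0 * math.pi * kappa) * math.log(Gamma_h / Gamma_0) \
        + (Gamma_h - Gamma_0) / K_a

def area_strain_geom(xi0):
    # Eq. (relative area increase): apparent area of the spheroid xi = xi0
    # relative to the initial spherical state
    s = 1.0 - xi0 ** -2
    return 0.5 * s ** (-2.0 / 3.0) * (s + math.sqrt(xi0 ** 2 - 1.0)
                                      * math.asin(1.0 / xi0)) - 1.0

def tension_from_strain(delta, Gamma_0, hi=1.0):
    # invert the monotone strain-tension relation; the tension spans many
    # decades, so bisect on the geometric mean of the bracket
    lo = Gamma_0
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if area_strain(mid, Gamma_0) < delta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The round trip strain-to-tension-to-strain recovers the input, and the geometric strain vanishes in the spherical limit ($\xi_{0}\rightarrow\infty$) while growing with elongation (decreasing $\xi_{0}$).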
Equation (\ref{membrane tension}) indicates that $\Gamma_{0},\:\kappa$, and $K_{a}$ are the important parameters in determining membrane tension. $\kappa$ and $K_{a}$ are usually constants for a specific vesicle type, and their values are often readily obtained from previous work. \cite{Kwok1981,Kummrow1991,Needham1995} On the other hand, $\Gamma_{0}$ is specific to an individual vesicle, and its value cannot be directly determined from experimental measurements. The relation between $\Delta$ and $\Gamma_{h}$ for different choices of $\Gamma_{0}$ is shown in Fig. \ref{fig:Relative-area-expansion}. When $\Delta$ is small, the membrane area increases through the flattening of the undulations, and $\Gamma_{h}$ grows exponentially with $\Delta$. When $\Delta$ is sufficiently large, a linear behavior is observed instead, and the membrane area increase is mainly due to elastic stretching. Moreover, a larger $\Gamma_{0}$ always leads to a larger $\Gamma_{h}$ for the same value of $\Delta$. \subsection{General solution} A solution for vesicle electrodeformation can be obtained by solving the governing equations of both the electrical and hydrodynamic problems, with the help of the matching conditions. The solution strategy is identical to that presented in Zhang \emph{et al.},\cite{Zhang2012} differing only in the detailed matching conditions for both the electric field and the interfacial forces.
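The electrical-problem ingredients entering this solution lend themselves to a quick numerical check. The sketch below (illustrative Python, not from the original analysis) uses the standard closed form $Q_{1}(\xi)=\frac{\xi}{2}\ln\frac{\xi+1}{\xi-1}-1$ for the Legendre function of the second kind, treats $\lambda$ as a supplied input, and evaluates the initial coefficients of Eq. (\ref{initial ab}) together with the timescales $\tau_{1}$ and $\tau_{2}$ using parameter values quoted in the Results section.

```python
import math

def Q1(x):
    # Legendre function of the second kind, degree 1 (valid for x > 1)
    return 0.5 * x * math.log((x + 1.0) / (x - 1.0)) - 1.0

def dQ1(x):
    # first derivative of Q1
    return 0.5 * math.log((x + 1.0) / (x - 1.0)) - x / (x * x - 1.0)

def alpha_init(xi0, lam, sigma_r):
    # alpha(0) of Eq. (initial ab)
    return lam * xi0 * (sigma_r - 1.0) / (dQ1(xi0) * xi0 * sigma_r - Q1(xi0))

def beta_init(xi0, lam, sigma_r):
    # beta(0) = [-lambda + alpha(0) Q1'(xi0)] sigma_r
    return (-lam + alpha_init(xi0, lam, sigma_r) * dQ1(xi0)) * sigma_r

# characteristic timescales, with parameter values from the Results section
r0, C_m, sigma_i, mu_e, Gamma_0 = 15e-6, 0.01, 6e-4, 1e-3, 1e-6
tau_1 = r0 * C_m / sigma_i    # membrane-charging time: 0.25 ms
tau_2 = r0 * mu_e / Gamma_0   # characteristic flow timescale: 15 ms
```

As a consistency check, $\alpha(0)$ vanishes for matched conductivities ($\sigma_{r}=1$), leaving $\beta(0)=-\lambda$.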
For brevity, only the final governing equation for $\xi_{0}$ is presented here: \begin{widetext} \begin{subequations}\begin{equation} \frac{d\xi_{0}}{d\tau}=-\frac{1}{F}\left[Q_{N}f_{21}(\xi_{0})+Q_{T}\frac{\mu_{r}f_{22}(\xi_{0})+f_{23}(\xi_{0})}{\mu_{r}f_{14}(\xi_{0})+f_{15}(\xi_{0})}-\frac{\Gamma_{h}}{\Gamma_{0}}f_{24}(\xi_{0})\right],\label{vesicle shape evolution}\end{equation} \begin{equation} Q_{N}=\frac{Ca_{E}}{\lambda^{2}}\left[(\lambda-\alpha Q_{1}^{'}(\xi_{0}))^{2}+(\lambda-\alpha Q_{1}(\xi_{0})/\xi_{0})^{2}-2\beta^{2}/\epsilon_{r}\right],\end{equation} \begin{equation} Q_{T}=\frac{Ca_{E}}{\lambda^{2}}\left[(\lambda-\alpha Q_{1}^{'}(\xi_{0}))(\lambda-\alpha Q_{1}(\xi_{0})/\xi_{0})-\beta^{2}/\epsilon_{r}\right].\end{equation} \end{subequations} \end{widetext} The functions $f_{14}(\xi_{0})$, $f_{15}(\xi_{0})$, $f_{21}(\xi_{0})-f_{24}(\xi_{0})$, and $F$ are the same as those used in Zhang \emph{et al.}, \cite{Zhang2012} and the detailed expressions are found in the Appendix. $\epsilon_{r}\equiv\epsilon_{e}/\epsilon_{i}$ is the permittivity ratio. The factors $Q_{N}$ and $Q_{T}$ again arise from the effects of the normal and tangential stresses, respectively. $Ca_{E}\equiv r_{0}\epsilon_{e}E_{0}^{2}/\Gamma_{0}$ is the modified electric capillary number. In the absence of electroporation, the coefficients $\alpha$ and $\beta$ are given in Eqs. (\ref{a(t)}) and (\ref{b(t)}). Once electroporation occurs, Eq. (\ref{electroporated ab}) is used instead. Similar to the droplet model, an examination of the three terms in the brackets of Eq. (\ref{vesicle shape evolution}) reveals the contributions from the normal stress, tangential stress, and membrane tension, respectively. The balance between these three terms determines the equilibrium vesicle shape. The above equations are solved until the end of the pulse, $t=t_{p}$.
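The stress factors can be evaluated directly from the expressions above (illustrative Python, with $Q_{1}$ in its standard closed form and $\alpha$, $\beta$, $\lambda$ supplied as inputs). A convenient sign check is the identity $Q_{N}-2Q_{T}=Ca_{E}\left[\alpha\left(Q_{1}(\xi_{0})/\xi_{0}-Q_{1}^{'}(\xi_{0})\right)\right]^{2}/\lambda^{2}\geq0$, in which the $\beta$ terms cancel.

```python
import math

def Q1(x):
    # Legendre function of the second kind, degree 1 (x > 1)
    return 0.5 * x * math.log((x + 1.0) / (x - 1.0)) - 1.0

def dQ1(x):
    # first derivative of Q1
    return 0.5 * math.log((x + 1.0) / (x - 1.0)) - x / (x * x - 1.0)

def stress_factors(xi0, alpha, beta, lam, Ca_E, eps_r):
    # normal (Q_N) and tangential (Q_T) stress factors entering
    # Eq. (vesicle shape evolution)
    tn = lam - alpha * dQ1(xi0)        # recurring combination, normal part
    tt = lam - alpha * Q1(xi0) / xi0   # recurring combination, tangential part
    Q_N = Ca_E / lam ** 2 * (tn ** 2 + tt ** 2 - 2.0 * beta ** 2 / eps_r)
    Q_T = Ca_E / lam ** 2 * (tn * tt - beta ** 2 / eps_r)
    return Q_N, Q_T
```

For vanishing $\alpha$ and $\beta$ the factors reduce to $Q_{N}=2Ca_{E}$ and $Q_{T}=Ca_{E}$, independent of $\xi_{0}$.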
In the context of vesicle electrodeformation, the relaxation process is equally important, and is more revealing of the underlying physical processes. The governing equations are presented below. In the absence of electroporation, Eq. (\ref{laplace for vesicle}) is solved without an applied electric field. The resulting equation for $\xi_{0}$ remains the same as Eq. (\ref{vesicle shape evolution}). The quantities $Q_{N}$, $Q_{T}$, $\alpha$, and $\beta$ are given as\begin{equation} Q_{N}=\frac{\epsilon_{e}V_{c}^{2}}{\lambda^{2}r_{0}\Gamma_{0}}\left[\alpha^{2}\left(Q_{1}^{'2}(\xi_{0})+Q_{1}^{2}(\xi_{0})/\xi_{0}^{2}\right)-2\beta^{2}/\epsilon_{r}\right],\end{equation} \begin{equation} Q_{T}=\frac{\epsilon_{e}V_{c}^{2}}{\lambda^{2}r_{0}\Gamma_{0}}\left[\alpha^{2}Q_{1}(\xi_{0})Q_{1}^{'}(\xi_{0})/\xi_{0}-\beta^{2}/\epsilon_{r}\right],\end{equation} \begin{equation} \alpha=\frac{\beta}{Q_{1}^{'}(\xi_{0})\sigma_{r}},\end{equation} \begin{widetext} \begin{equation} \left[\frac{Q_{1}(\xi_{0})}{Q_{1}^{'}(\xi_{0})\sigma_{r}}-\xi_{0}\right]\frac{d\beta}{d\tau}-\left[\frac{Q_{1}(\xi_{0})Q_{1}^{''}(\xi_{0})-Q_{1}^{'2}(\xi_{0})(1-\sigma_{r})}{Q_{1}^{'2}(\xi_{0})\sigma_{r}}\frac{d\xi_{0}}{d\tau}+\frac{\tau_{2}}{\tau_{1}\lambda}\right]\beta=0,\end{equation} \begin{equation} \alpha(\tau_{p})=\frac{V_{m}(\tau_{p})}{V_{c}(Q_{1}^{'}(\xi_{0})\xi_{0}\sigma_{r}-Q_{1}(\xi_{0}))},\qquad\beta(\tau_{p})=\frac{V_{m}(\tau_{p})Q_{1}^{'}(\xi_{0})\sigma_{r}}{V_{c}(Q_{1}^{'}(\xi_{0})\xi_{0}\sigma_{r}-Q_{1}(\xi_{0}))}.\label{eq:intial relaxation}\end{equation} \end{widetext} In Eq. (\ref{eq:intial relaxation}), the initial conditions for $\alpha$ and $\beta$ are obtained by solving Eqs. (\ref{laplace for vesicle}) and (\ref{eq:capacitive current continous}), and requiring that $V_{m}$ assume its value at the end of the pulse. $\tau_{p}$ is the dimensionless time, $t_{p}/\tau_{2}$.
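As an illustration of the discharging dynamics, freezing the shape ($d\xi_{0}/d\tau=0$, an assumption made here only for this sketch) reduces the $\beta$ equation above to a linear ODE with exponential decay; a forward-Euler integration in Python (with $\lambda$ and $\tau_{2}/\tau_{1}$ supplied as inputs):

```python
import math

def Q1(x):
    # Legendre function of the second kind, degree 1 (x > 1)
    return 0.5 * x * math.log((x + 1.0) / (x - 1.0)) - 1.0

def dQ1(x):
    # first derivative of Q1
    return 0.5 * math.log((x + 1.0) / (x - 1.0)) - x / (x * x - 1.0)

def discharge_beta(beta_p, xi0, sigma_r, lam, tau2_over_tau1, tau_end, n=20000):
    # relaxation-phase beta equation at frozen shape (d xi0/d tau = 0):
    # D * dbeta/dtau = (tau_2 / (tau_1 * lam)) * beta
    D = Q1(xi0) / (dQ1(xi0) * sigma_r) - xi0   # bracket multiplying dbeta/dtau
    rate = tau2_over_tau1 / (lam * D)          # negative for xi0 > 1: decay
    beta, dt = beta_p, tau_end / n
    for _ in range(n):                         # forward Euler
        beta += dt * rate * beta
    return beta
```

Since $Q_{1}^{'}<0$ for $\xi_{0}>1$, the bracket $D$ is negative and $\beta$ (hence $V_{m}$) discharges exponentially on the membrane-charging timescale, consistent with the discussion that follows.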
Note that in this case, although the pulse is switched off, the electric field is in general not zero, due to the capacitive discharging of the membrane. The TMP therefore decreases from its peak value to zero on the membrane-charging timescale, $T_{ch}$. When electroporation is present, the discharging process is slightly more complex. The full membrane-charging model (\ref{eq:reduced current continous}) is used. In order to determine the membrane conductance, $G_{m}$, we simply assume that it remains unchanged from the moment the pulse ceases, namely, \begin{equation} G_{m}=-\frac{\sigma_{e}\beta E_{0}}{\lambda V_{c}}.\end{equation} The resulting equation for $\xi_{0}$ again does not formally deviate from Eq. (\ref{vesicle shape evolution}). The quantities $Q_{N}$, $Q_{T}$, $\alpha$, and $\beta$ are\begin{equation} Q_{N}=\frac{\epsilon_{e}V_{c}^{2}}{\lambda^{2}r_{0}\Gamma_{0}}\left[\alpha^{2}\left(Q_{1}^{'2}(\xi_{0})+Q_{1}^{2}(\xi_{0})/\xi_{0}^{2}\right)-2\beta^{2}/\epsilon_{r}\right],\end{equation} \begin{equation} Q_{T}=\frac{\epsilon_{e}V_{c}^{2}}{\lambda^{2}r_{0}\Gamma_{0}}\left[\alpha^{2}Q_{1}(\xi_{0})Q_{1}^{'}(\xi_{0})/\xi_{0}-\beta^{2}/\epsilon_{r}\right],\end{equation} \begin{equation} \alpha=\frac{\beta}{Q_{1}^{'}(\xi_{0})\sigma_{r}},\end{equation} \begin{widetext} \begin{equation} \left[\frac{Q_{1}(\xi_{0})}{Q_{1}^{'}(\xi_{0})\sigma_{r}}-\xi_{0}\right]\frac{d\beta}{d\tau}-\left[\frac{Q_{1}(\xi_{0})Q_{1}^{''}(\xi_{0})-Q_{1}^{'2}(\xi_{0})(1-\sigma_{r})}{Q_{1}^{'2}(\xi_{0})\sigma_{r}}\frac{d\xi_{0}}{d\tau}+\frac{\tau_{2}}{\tau_{1}\lambda}-\frac{\tau_{2}G_{m}}{C_{m}}\left(\frac{Q_{1}(\xi_{0})}{Q_{1}^{'}(\xi_{0})\sigma_{r}}-\xi_{0}\right)\right]\beta=0.\end{equation} \end{widetext} \subsection{A similarity solution for vesicle relaxation} The governing equation for the relaxation process can be further simplified following two considerations. First, we may ignore the membrane-discharging process.
The membrane-charging/discharging time, $T_{ch}$, is on the order of 1 ms, which is in general much shorter than the relaxation time observed in the experiments, namely, a few tens of ms or longer. The relatively small effect of discharging on relaxation is clearly seen in Fig. \ref{fig:effect of sigma0} presented in the following section. Without including the discharging process, the coefficients $Q_{T}$ and $Q_{N}$ in Eq. (\ref{vesicle shape evolution}) are simply set to zero. Second, in the membrane-mechanical model (\ref{membrane tension}), the first and second terms on the RHS represent the effects of undulation unfolding and elastic stretching, respectively. For moderate values of $\Gamma_{0}$, and for small-to-moderate deformations, the second term can be ignored, and the membrane-mechanical model becomes\begin{equation} \Delta=\frac{k_{B}T}{8\pi\kappa}\ln\frac{\Gamma_{h}}{\Gamma_{0}}.\label{eq:reduced membrane tension model}\end{equation} Substituting $Q_{T}=Q_{N}=0$ and Eq. (\ref{eq:reduced membrane tension model}) into (\ref{vesicle shape evolution}), we obtain\begin{equation} \frac{d\xi_{0}}{d\tau}=\frac{1}{F}\exp\left(\frac{8\pi\kappa\Delta}{k_{B}T}\right)f_{24}(\xi_{0}).\label{eq:similarity}\end{equation} This equation is conveniently rewritten in terms of the aspect ratio as \begin{equation} \frac{d(a/b)}{d\tau}=-\frac{1}{F}\exp\left(\frac{8\pi\kappa\Delta}{k_{B}T}\right)(\xi_{0}^{2}-1)^{-\frac{3}{2}}f_{24}(\xi_{0}).\label{eq:reduced similarity}\end{equation} Note that in this equation, $\kappa$, the bending rigidity, is regarded as constant for a specific vesicle type, and $\mu_{r}$ (embedded in $F$, see Appendix) is close to 1 as both fluids are usually aqueous. In addition, $\Delta$, the relative increase of apparent membrane area, depends exclusively on $\xi_{0}$, hence on $a/b$ according to Eqs. (\ref{eq:relative area increase}) and (\ref{ab}). Under these assumptions, we observe that Eq.
(\ref{eq:reduced similarity}) is completely autonomous, and the relaxation process is governed by the dimensionless time, $\tau=t/\tau_{2}$, where $\tau_{2}=r_{0}\mu_{e}/\Gamma_{0}$. This result suggests that the relaxation of vesicles with different initial radius, $r_{0}$, and initial tension, $\Gamma_{0}$, obeys a similarity behavior under the proper scaling. This behavior is demonstrated below by both simulation and analysis of previous experimental data. \section{Results} For all results below, we assume the lipid membrane to be made of egg-PC, following Riske and Dimova\cite{Riske2005} (henceforth ``RD05'') and Sadik \emph{et al.}\cite{Sadik2011} (henceforth ``S11''). The bending rigidity is taken to be $\kappa=2.47\times10^{-20}\;{\rm J}$;\cite{Kummrow1991} the elastic modulus, $K_{a}=0.14\;{\rm N/m}$;\cite{Kwok1981,Needham1995} the membrane capacitance, $C_{m}=0.01\;{\rm F/m^{2}}$;\cite{Needham1989} the intravesicular and extravesicular viscosities, $\mu_{i}=\mu_{e}=10^{-3}\;{\rm Pa\cdot s}$; the intravesicular and extravesicular permittivities, $\epsilon_{i}=\epsilon_{e}=7\times10^{-10}\:{\rm F/m}$. The critical transmembrane potential is assumed to be $V_{c}=1\:{\rm V}$.\cite{Portet2010} \subsection{The effects of $\Gamma_{0}$ and $t_{p}$} We begin by examining the effects of $\Gamma_{0}$ on vesicle electrodeformation and relaxation. Figure \ref{fig:effect of sigma0} shows the typical system behavior for values of $\Gamma_{0}$ ranging from $10^{-7}$ to $10^{-3}\;{\rm N/m}$. The intravesicular and extravesicular conductivities are $\sigma_{i}=6\times10^{-4}\;{\rm S/m}$ and $\sigma_{e}=4.5\times10^{-4}\;{\rm S/m}$, respectively, following RD05. The field strength is $E_{0}=1\;{\rm kV/cm}$, the pulse length is $t_{p}=250\;{\rm \mu s}$, and the initial radius is $r_{0}=15\;{\rm \mu m}$.
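Before turning to the results, a short sketch (illustrative Python, with a room-temperature $k_{B}T$ assumed) evaluates the two ingredients that make Eq. (\ref{eq:reduced similarity}) autonomous: the exponential tension amplification implied by Eq. (\ref{eq:reduced membrane tension model}), and the purely geometric link between $\xi_{0}$ and the aspect ratio, consistent with the $(\xi_{0}^{2}-1)^{-3/2}$ Jacobian above.

```python
import math

kB_T = 4.11e-21   # thermal energy at ~298 K (assumed)
kappa = 2.47e-20  # bending rigidity [J]

def tension_amplification(delta):
    # Gamma_h / Gamma_0 implied by Eq. (reduced membrane tension model)
    return math.exp(8.0 * math.pi * kappa * delta / kB_T)

def aspect_ratio(xi0):
    # a/b for the prolate spheroid xi = xi0; its derivative with respect
    # to xi0 is -(xi0^2 - 1)^(-3/2), the Jacobian in Eq. (reduced similarity)
    return xi0 / math.sqrt(xi0 ** 2 - 1.0)
```

With $\kappa=2.47\times10^{-20}\;{\rm J}$, the exponent prefactor is $8\pi\kappa/k_{B}T\approx150$, so an excess area of only $1\%$ already amplifies the tension roughly 4.5-fold; this steep dependence underlies the rapid early relaxation seen below.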
Figure \ref{fig:effect of sigma0}(a) shows the evolution of $V_{m}$ at the cathode-facing pole, which demonstrates only a weak dependence on $\Gamma_{0}$. The threshold for electroporation ($1\:{\rm V}$) is reached just before the end of the pulse, and its effects are present yet negligible. The discharging occurs on the relatively short timescale of approximately $1\;{\rm ms}$, as discussed above. Figure \ref{fig:effect of sigma0}(b) shows the evolution of the aspect ratio, $a/b$. The discharging process manifests itself as a sudden and slight decrease in the aspect ratio immediately after the pulse ceases; its effects can in general be ignored without significantly altering the relaxation behavior. A smaller value of $\Gamma_{0}$ leads to a larger aspect ratio, and a longer relaxation process. The maximum aspect ratio, $[a/b]_{{\rm max}}$, is plotted as a function of $\Gamma_{0}$ in Fig. \ref{fig:effect of sigma0}(c). As the initial membrane tension decreases toward zero, the maximum achievable aspect ratio saturates. The similarity behavior in the relaxation process is demonstrated in Fig. \ref{fig:effect of sigma0}(d). The descending branches of the curves ($t>t_{p}$) shown in Fig. \ref{fig:effect of sigma0}(b) are rescaled in terms of $\tau=t/\tau_{2}$, and shifted horizontally. In comparison, the thick solid curve is obtained by directly solving Eq. (\ref{eq:reduced similarity}). The convergence of all curves validates that $\tau_{2}=r_{0}\mu_{e}/\Gamma_{0}$ is the single timescale governing vesicle relaxation. \begin{figure*} \center \includegraphics[width=0.5\textwidth]{potential}\includegraphics[width=0.5\textwidth]{E=1e5_effect_of_initial_tension} \includegraphics[width=0.5\textwidth]{maximum_aspect_ratio}\includegraphics[width=0.5\textwidth]{similarity}\caption{Vesicle deformation-relaxation as a function of $\Gamma_{0}$.
The governing parameters are $\sigma_{i}=6\times10^{-4}\:{\rm S/m}$, $\sigma_{e}=4.5\times10^{-4}\:{\rm S/m}$, $E_{0}=1\:{\rm kV/cm}$, $t_{p}=250\:{\rm \mu s}$, and $r_{0}=15\:{\rm \mu m}$. (a) The transmembrane potential at the cathode-facing pole. (b) The time-course of the aspect ratio. (c) The maximum aspect ratio as a function of $\Gamma_{0}$. (d) The similarity behavior in relaxation. The descending branches from (b) are rescaled with $\tau=t/\tau_{2}$. The thick solid curve is directly obtained by integrating Eq. (\ref{eq:reduced similarity}).\label{fig:effect of sigma0}} \end{figure*} \begin{figure} \center \includegraphics[width=0.5\textwidth]{tp} \includegraphics[width=0.5\textwidth]{tp_similarity}\caption{Vesicle deformation-relaxation as a function of $t_{p}$. The parameters are the same as in Fig. \ref{fig:effect of sigma0}. The initial tension is set to be constant, $\Gamma_{0}=1\times10^{-6}\:{\rm N/m}$. (a) The time-course of the aspect ratio. (b) The similarity behavior is observed by shifting the relaxation curves with respect to time. The relaxation timescale, $\tau_{2}=r_{0}\mu_{e}/\Gamma_{0}$, is the same for all cases. The thick solid curve is directly obtained by integrating Eq. (\ref{eq:reduced similarity}).\label{fig:effect of tp}} \end{figure} \begin{table} \caption{List of parameters for Fig. \ref{fig:Comparison with RD05}. For each case, $E_{0}$ and $t_{p}$ are specified according to RD05. $\Gamma_{0}$ is a fitting parameter chosen to obtain the best agreement between simulation and data. For cases b, d, e, and f, extended pulse lengths (denoted by a star) are also used.
\label{tab:Listed-parameters}} \begin{ruledtabular} \center\begin{tabular}{>{\centering}p{1cm}>{\centering}p{1cm}>{\centering}p{2cm}>{\centering}p{3cm}} \multicolumn{1}{c}{case \#} &\multicolumn{1}{c}{$E_{0}$ (kV/cm)} &\multicolumn{1}{c}{$t_{p}$ (${\rm \mu s}$)} &\multicolumn{1}{c}{$\Gamma_{0}$ (N/m)}\tabularnewline \hline a & 1 & 150 & $2.79\times10^{-4}$\tabularnewline b & 1 & 200 & $3.23\times10^{-6}$\tabularnewline & 1 & 300{*} & $3.23\times10^{-6}$\tabularnewline c & 1 & 250 & $1.67\times10^{-4}$\tabularnewline d & 1 & 300 & $1.80\times10^{-6}$\tabularnewline & 1 & 400{*} & $1.80\times10^{-6}$\tabularnewline e & 2 & 50 & $1.80\times10^{-4}$\tabularnewline & 2 & 80{*} & $1.80\times10^{-4}$\tabularnewline f & 2 & 100 & $3.16\times10^{-6}$\tabularnewline & 2 & 170{*} & $3.16\times10^{-6}$\tabularnewline g & 3 & 50 & $6.67\times10^{-6}$\tabularnewline h & 3 & 100 & $3.42\times10^{-7}$\tabularnewline \end{tabular} \end{ruledtabular} \end{table} The effects of $t_{p}$ are examined in Fig. \ref{fig:effect of tp}. The parameters are the same as in Fig. \ref{fig:effect of sigma0}, and we fix $\Gamma_{0}$ at $1\times10^{-6}\;{\rm N/m}$. Figure \ref{fig:effect of tp}(a) shows that a longer pulse consistently leads to greater deformation, and the aspect ratio increases along the same envelope. The relaxation times are approximately the same for all cases, because $\tau_{2}$ remains unchanged. The discharging process is in general more conspicuous with longer pulses. In Fig. \ref{fig:effect of tp}(b), the relaxation curves are again shifted horizontally and rescaled with $\tau_{2}$ to show good agreement with the similarity solution (thick solid line). Note that, because all cases share the same value of $\tau_{2}$, the collapse of the curves is primarily caused by simple shifting. In other words, the aspect ratio also decreases along a common envelope. The above results are exemplary and demonstrate the typical system behavior.
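The fitted initial tensions in table \ref{tab:Listed-parameters} map directly onto relaxation timescales via $\tau_{2}=r_{0}\mu_{e}/\Gamma_{0}$; the short sketch below (values transcribed from the table, with $r_{0}$ and $\mu_{e}$ as used for the RD05 comparison) makes the spread explicit.

```python
r0, mu_e = 15e-6, 1e-3   # vesicle radius [m] and external viscosity [Pa s]

# fitted initial tensions from table (case label -> Gamma_0 in N/m)
Gamma_0 = {'a': 2.79e-4, 'b': 3.23e-6, 'c': 1.67e-4, 'd': 1.80e-6,
           'e': 1.80e-4, 'f': 3.16e-6, 'g': 6.67e-6, 'h': 3.42e-7}

# relaxation timescale tau_2 = r0 * mu_e / Gamma_0 for each case
tau_2 = {case: r0 * mu_e / g for case, g in Gamma_0.items()}
```

The fitted $\Gamma_{0}$ span roughly three decades, so the implied $\tau_{2}$ ranges from tens of $\mu{\rm s}$ (case a) to tens of ms (case h); it is this spread that the rescaling in the following comparison collapses.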
In general, the relaxation process (in particular the relaxation time) is more appreciably affected by the change in $\Gamma_{0}$ than the deformation process. A wide range of pulsing parameters are studied below, in direct comparison with experimental data from RD05 and S11. \begin{figure*} \center \includegraphics[trim=0cm 1cm 0cm 0.5cm, clip=true, width=0.45\textwidth]{E=1e5_150us}\includegraphics[trim=0cm 1cm 0cm 0.5cm, clip=true,width=0.45\textwidth]{E=1e5_200us} \includegraphics[trim=0cm 1cm 0cm 0.5cm, clip=true,width=0.45\textwidth]{E=1e5_250us}\includegraphics[trim=0cm 1cm 0cm 0.5cm, clip=true,width=0.45\textwidth]{E=1e5_300us} \includegraphics[trim=0cm 1cm 0cm 0.5cm, clip=true,width=0.45\textwidth]{E=2e5_50us}\includegraphics[trim=0cm 1cm 0cm 0.5cm, clip=true,width=0.45\textwidth]{E=2e5_100us} \includegraphics[trim=0cm 0.2cm 0cm 0.5cm, clip=true,width=0.45\textwidth]{E=3e5_50us}\includegraphics[trim=0cm 0.2cm 0cm 0.5cm, clip=true,width=0.45\textwidth]{E=3e5_100us} \caption{Comparison with the deformation-relaxation data from RD05. For all cases, $r_{0}=15\:{\rm \mu m}$, $\sigma_{i}=6\times10^{-4}\:{\rm S/m}$, and $\sigma_{e}=4.5\times10^{-4}\:{\rm S/m}$. Parameters specific to each case are listed in table \ref{tab:Listed-parameters}. The data is represented by symbols, and the simulation is represented by solid curves. For cases b, d, e, and f, the dashed lines represent the simulated results with extended pulses (denoted by stars in table \ref{tab:Listed-parameters}).\label{fig:Comparison with RD05}} \end{figure*} \subsection{Comparison with experimental data} An extensive comparison of our theoretical prediction with the data from RD05 is presented in Fig. \ref{fig:Comparison with RD05}. For all eight cases, the initial radius is $r_{0}=15\;{\rm \mu m}$. The electrical conductivities are $\sigma_{i}=6\times10^{-4}\;{\rm S/m}$ and $\sigma_{e}=4.5\times10^{-4}\;{\rm S/m}$, respectively, leading to a conductivity ratio of $\sigma_{r}=0.75$. 
Other parameters are listed in table \ref{tab:Listed-parameters}. All parameters are taken directly from RD05, except for the extended pulse lengths for some cases noted below. For each case, the initial tension, $\Gamma_{0}$, is determined to best fit the experimental data; the values are listed in the last column of table \ref{tab:Listed-parameters}. The experimental data are presented as symbols; the theoretical predictions, solid lines. In Figs. \ref{fig:Comparison with RD05}(a) to \ref{fig:Comparison with RD05}(d), the electric field strength is $E_{0}=1\;{\rm kV/cm}$. For these cases, $V_{m}$ is predicted to reach $V_{c}$ at $t=242\;{\rm \mu s}$. In Figs. \ref{fig:Comparison with RD05}(a) and \ref{fig:Comparison with RD05}(c), good agreement is observed between the theoretical prediction and the data. In Figs. \ref{fig:Comparison with RD05}(b) and \ref{fig:Comparison with RD05}(d), the model results underpredict the maximum aspect ratios. This discrepancy is peculiar: our simulation follows the data accurately during the presence of the pulse, whose duration is provided by RD05. After the pulse ceases, the simulation predicts immediate relaxation, whereas the vesicles continued to deform in the experiments, due to some unknown cause. In an attempt to reconcile this difference, we artificially increase the pulse lengths in the simulation in b and d from 200 and 300 to 300 and 400 ${\rm \mu s}$, respectively. The values for $\Gamma_{0}$ remain unchanged. The results are shown as dashed curves. The model then predicts the data well for both the deformation and relaxation processes. Note that although the relaxation curves represented by the solid and dashed lines look somewhat different due to the semi-log scale on the time axis, they actually follow the same descending envelopes, as demonstrated in Fig. \ref{fig:effect of tp}(b) above. In Figs.
\ref{fig:Comparison with RD05}(e) and \ref{fig:Comparison with RD05}(f), the field strength is increased to $E_{0}=2\;{\rm kV/cm}$, and the pulse lengths used in RD05 were 50 and 100 ${\rm \mu s}$, respectively. For these cases, our model predicts the occurrence of electroporation around $t=103\;{\rm \mu s}$. A similar situation is observed as in Figs. \ref{fig:Comparison with RD05}(b) and \ref{fig:Comparison with RD05}(d): the solid curves underpredict the maximum aspect ratio. Artificially extending the pulses in e and f to 80 and 170 ${\rm \mu s}$, respectively, leads to much better agreement between the two. In Figs. \ref{fig:Comparison with RD05}(g) and \ref{fig:Comparison with RD05}(h), the field strength is further increased to 3 kV/cm, and electroporation is predicted to occur at $t=66\;{\rm \mu s}$. The entire deformation-relaxation process is well-captured in g, where $t_{p}=50\;{\rm \mu s}$. In Fig. \ref{fig:Comparison with RD05}(h), where $t_{p}=100\;{\rm \mu s}$, although the model accurately predicts the deformation, the simulated relaxation curve completely deviates from the experimental data. For this case, and for pulses even longer than 100 ${\rm \mu s}$, RD05 [Fig. 1(c) therein] exhibits a regime in which a complex, multi-stage relaxation process was observed. In this regime, the membrane structure is likely severely altered by electroporation, a process that cannot be captured by our present model. Further comparison with these data is not pursued. The similarity behavior in the relaxation process is demonstrated in Fig. \ref{fig:The-similarity-behavior}. The experimental data from Figs. \ref{fig:Comparison with RD05}(a) to \ref{fig:Comparison with RD05}(g) are shifted horizontally and rescaled with $\tau_{2}$. For each case, $\tau_{2}$ is obtained using the $\Gamma_{0}$ listed in table \ref{tab:Listed-parameters}. The thick solid curve is again the similarity solution from Eq.
(\ref{eq:reduced similarity}), and the results are shown on both semi-log and linear scales in $\tau$. The coefficient of determination is $R^{2}=0.96$. The experimental data from a wide range of parameters demonstrate a universal behavior governed by a single timescale, $\tau_{2}=r_{0}\mu_{e}/\Gamma_{0}$. This result is a main contribution of the present work. \begin{figure} \center\includegraphics[width=0.5\textwidth]{simi_a} \includegraphics[width=0.5\textwidth]{simi_b} \caption{The similarity behavior of vesicle relaxation. The experimental data from cases a-g in Fig. \ref{fig:Comparison with RD05} are shifted in time, then rescaled by $\tau_{2}=r_{0}\mu_{e}/\Gamma_{0}$. They are represented by symbols. The solid curves are calculated with Eq. (\ref{eq:reduced similarity}). The same data are shown on both a semi-log (a) and a linear (b) scale. The coefficient of determination is $R^{2}=0.96$.\label{fig:The-similarity-behavior}} \end{figure} We remark that a similar behavior should be observed for droplets, where the initial membrane tension, $\Gamma_{0}$, is replaced by $\gamma$, the coefficient of surface tension in $\tau_{2}$ (cf. the definition of $\tau_{2}$ in Zhang \emph{et al.}).\cite{Zhang2012} However, there is a subtle difference between droplet and vesicle relaxation: while the coefficient of surface tension is usually a constant, the membrane tension, $\Gamma_{h}$, is not. Nonetheless, as long as $\Gamma_{h}$ depends linearly on $\Gamma_{0}$, which is a good approximation for small-to-moderate deformations, the universal behavior in Fig. \ref{fig:The-similarity-behavior} is expected. \begin{figure} \center \includegraphics[width=0.5\textwidth]{s11_transient} \includegraphics[width=0.5\textwidth]{s11}\caption{Comparison with data from S11. (a) Simulated time-course of the aspect ratio for various conductivity ratios. For all cases $r_{0}=11.3\;{\rm \mu m}$ and $\Gamma_{0}=1\times10^{-8}\;{\rm N/m}$.
(b) The aspect ratio at $t=500\;{\rm \mu s}$ as a function of $1/\sigma_{r}$.\label{fig:Comparison-with-S11}} \end{figure} Finally, the model prediction is compared with data from S11. In that work, the deformation was examined at a fixed pulse length of $t_{p}=500\;{\rm \mu s}$, and for five intra- to extravesicular conductivity ratios. Only the case of $E_{0}=0.9\;{\rm kV/cm}$ is examined, where no or weak electroporation is expected. We do not compare the cases of $E_{0}=2$ and 3 kV/cm in S11, where the vesicles were in the strongly-electroporated regime, and our model no longer applies. The governing parameters are $r_{0}=11.3\;{\rm \mu m}$ and $\sigma_{e}=3\times10^{-4}\;{\rm S/m}$. The initial membrane tension is chosen to be the same for all vesicles, namely, $\Gamma_{0}=1\times10^{-8}\;{\rm N/m}$. Figure \ref{fig:Comparison-with-S11}(a) shows the deformation process as a function of time for the five conductivity ratios. As $\sigma_{r}$ decreases, the rate of deformation increases. Except for the case of $\sigma_{r}=0.5$, the aspect ratio reaches a plateau before the pulse ends. The time at which the aspect ratio saturates increases with increasing $\sigma_{r}$. For $\sigma_{r}=0.5$, an equilibrium could be reached if the pulse length were sufficiently extended (not shown here). In Fig. \ref{fig:Comparison-with-S11}(b), the aspect ratio at $t=t_{p}$ is shown as a function of $1/\sigma_{r}$. We choose this representation to facilitate comparison with the data from S11 (symbols), where the conductivity ratio is defined as $\sigma_{i}/\sigma_{e}$. A reasonable agreement is found between the two. The behavior of both the simulation and the data is explained by the dependence of the electrical stress on $\sigma_{r}$ in S11 [see Eq. (21) and Sec. 4 therein], and is not repeated here for brevity. The current model represents a significant improvement over that in S11, where the hydrodynamic problem was treated empirically.
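The similarity collapse and its quality metric can be illustrated compactly. The sketch below is not from the original analysis: the single-exponential curve is a hypothetical stand-in for the solution of Eq. (\ref{eq:reduced similarity}), used only to show how rescaling by $\tau_{2}=r_{0}\mu_{e}/\Gamma_{0}$ collapses curves from different vesicles, while the $R^{2}$ helper implements the standard definition behind the quoted value of 0.96.

```python
import math

mu_e = 1e-3  # external viscosity [Pa s]

def toy_relaxation(t, r0, Gamma_0, ab_peak):
    # toy single-exponential relaxation whose only timescale is
    # tau_2 = r0 * mu_e / Gamma_0 (illustration, not the model solution)
    tau_2 = r0 * Gamma_0 ** -1 * mu_e
    return 1.0 + (ab_peak - 1.0) * math.exp(-t / tau_2)

def rescale(taus, r0, Gamma_0, ab_peak):
    # sample the curve at dimensionless times tau = t / tau_2
    tau_2 = r0 * Gamma_0 ** -1 * mu_e
    return [toy_relaxation(tau * tau_2, r0, Gamma_0, ab_peak) for tau in taus]

def r_squared(y_obs, y_fit):
    # coefficient of determination: R^2 = 1 - SS_res / SS_tot
    mean = sum(y_obs) / len(y_obs)
    ss_tot = sum((y - mean) ** 2 for y in y_obs)
    ss_res = sum((y - f) ** 2 for y, f in zip(y_obs, y_fit))
    return 1.0 - ss_res / ss_tot

taus = [0.0, 0.5, 1.0, 2.0, 4.0]
curve_a = rescale(taus, 15e-6, 1e-6, 1.3)   # tau_2 = 15 ms
curve_b = rescale(taus, 10e-6, 1e-4, 1.3)   # tau_2 = 0.1 ms
```

Despite relaxation times differing by more than two decades, the two rescaled curves are identical, which is the essence of the collapse in Fig. \ref{fig:The-similarity-behavior}.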
Some remarks are appropriate before concluding the section. First, for most cases studied here, the TMP is near the threshold, and the vesicles are expected to experience no or weak electroporation. For this regime, our model is shown to provide a good predictive capability, which demonstrates that the membrane-mechanical model (\ref{membrane tension}), although derived assuming no electroporation, can be extended to the weakly-electroporated regime, presumably due to the absence of major structural alterations. Our model is not applicable to the strongly-electroporated regime. Second, the universal scaling law in relaxation observed in Figs. \ref{fig:effect of sigma0}, \ref{fig:effect of tp}, and \ref{fig:The-similarity-behavior} is expected to hold regardless of the means of deformation, e.g., via AC/DC electric fields, or via mechanical stretching. Equation (\ref{eq:reduced similarity}) is applicable to a wide range of relaxation phenomena beyond electrodeformation. Third, the current work suggests that an extensive experimental parametric study of vesicle electrodeformation-relaxation, in particular in the sub-critical regime where electroporation is avoided, would further validate the model. A systematic approach based on this work could then be developed to map membrane properties. \section{Conclusions} In this work, we developed a transient analysis for vesicle electrodeformation. The theory is derived by extending our previous work on a droplet model in Zhang \emph{et al.},\cite{Zhang2012} with the additional consideration of a lipid membrane separating two fluids of arbitrary properties. For the latter, both a membrane-charging and a membrane-mechanical model are supplied. As with the droplet model, the main result is an ODE governing the evolution of the vesicle aspect ratio. The effects of initial membrane tension and pulse length are examined.
The initial membrane tension affects the relaxation process much more significantly than the deformation process, in particular when its value is small. The model prediction is extensively compared with experimental data from Riske and Dimova\cite{Riske2005} and Sadik \emph{et al.},\cite{Sadik2011} and is shown to accurately capture the system behavior in the regime of no or weak electroporation. More importantly, the comparison reveals that vesicle relaxation obeys a universal behavior, and is governed by a single timescale that is a function of the vesicle initial radius, the fluid viscosity, and the initial membrane tension. This behavior holds regardless of the means of deformation, be it via AC/DC electric fields or via mechanical stretching. This universal scaling law is a main contribution of the current work, and can be used to calculate membrane properties from experimental data. \begin{acknowledgements} JZ and HL acknowledge funding support from NSF award CBET-0747886, with Dr.~William Schultz and Dr.~Henning Winter as contract monitors. \end{acknowledgements}
\section{Introduction} Spectators may be sent into infinite loops by Zeno, but Achilles catches up with the turtle anyway. Paradoxes do not spell trouble in the way contradictions do, as a contradiction appears in them only when improper assumptions are made. The more reasonable these assumptions seem, the brighter shines the paradox. Here we investigate the paradox of Kochen and Specker \cite{KS}, \cite{Bell}, describing a particular property of quantum mechanics by which it is distinguished from classical physics: contextuality \cite{KS}--\cite{ID}. The statement ``quantum mechanics is contextual'' means that descriptions of quantum phenomena in terms of classical statistical mechanics---so-called non-contextual hidden variable models (ncHVMs) \cite{EPR},\cite{KS},\cite{Bell}---are in general not viable. In such models, all observables are assigned pre-existing values which are merely revealed by measurement---in stark contrast with quantum mechanics.\medskip Each paradox invites us to ask what becomes of the glaring discrepancy once the (in hindsight) improper assumption is excised. For the Kochen-Specker paradox, one such inquiry leads to measurement-based quantum computation (MBQC) \cite{RB01}, a scheme of universal quantum computation driven by measurement. We identify the mathematical structures that simultaneously capture the contextuality and the computational output of measurement-based quantum computations. These structures turn out to be cohomological. Put in graphical form, we explore the following triangle. \begin{equation}\label{Triangle} \parbox{10cm}{\includegraphics[width=10cm]{Triangle}} \end{equation} In the first part of this paper, consisting of Sections~\ref{bg} and \ref{CohoW}, we flesh out the above diagram for the simplest case, deterministic temporally flat MBQCs and the corresponding proofs of contextuality. Temporally flat means that these MBQCs have no non-trivial temporal order, which is a restriction. 
Section~\ref{bg} reviews the necessary background on contextuality and measurement-based quantum computation, and Section~\ref{CohoW} explains how cohomology encapsulates the essence of parity-based contextuality proofs and temporally flat MBQCs. The main results are Theorem~\ref{CPth2} \cite{Coho} and Theorem~\ref{ObeT} \cite{CohoMBQC}, which we restate here. In the second part of the paper, consisting of Section~\ref{TO}, we work towards removing the assumption of temporal flatness. MBQCs are typically temporally ordered. Even though the measurements driving the computation commute, measurement bases need to be adjusted depending on the outcomes of other measurements, and this introduces temporal order. This adjustment is necessary to prevent the randomness inherent in quantum measurement from creeping into the logical processing. While we do not yet tackle temporally ordered MBQCs, we demonstrate that a known contextuality proof exhibiting temporal ordering of measurements, the so-called ``iffy'' proof \cite{Exa}, can be described by the {\em{same}} cohomological formalism that is used for the temporally flat case. We conjecture that this strategy might also work for general MBQCs. Section~\ref{Concl} is the conclusion, and Section~\ref{TL} covers some stations of the author's own journey through the world of quantum computation and paradox. \section{Background}\label{bg} \subsection{Contextuality}\label{Crev} We assume that the reader is familiar with the concept of contextuality \cite{KS},\cite{Bell}; see \cite{Merm} for a review. To provide a short summary, contextuality of quantum mechanics signifies that, in general, quantum mechanical phenomena cannot be described by so-called non-contextual hidden variable models (ncHVMs) \cite{EPR}. In an ncHVM, observable quantities have predetermined value assignments; i.e., each observable possesses a value, and those values are merely revealed upon measurement. 
The statistical character of measurement in quantum mechanics is then sought to be reproduced by a probability distribution over the value assignments. For certain sets of measurements no such probability distribution exists. If that is the case, then the physical setting at hand is contextual. In this paper, we assume that each value assignment $\lambda$ is deterministic; i.e., the value $\lambda(A)$ assigned to each observable $A$ is an eigenvalue of that observable, in accordance with the Dirac projection postulate. More general constructs are conceivable; for example, the value assignments may themselves be probability distributions over eigenvalues \cite{Spekk}; however, we do not consider such generalizations here. We remark that deterministic ncHVMs are equivalent to factorizable probabilistic ones \cite{AB}; also see \cite{Fine}. \smallskip The Kochen-Specker (KS) theorem~\cite{KS} says that in Hilbert spaces of dimension 3 and higher, it is impossible to assign all quantum-mechanical observables deterministic non-contextual values in a consistent fashion. A very simple proof of the KS theorem, in dimension 4 and up, is provided by Mermin's square \cite{Merm}. It is the simplest parity proof of contextuality, where the assumption of existence of a consistent non-contextual value assignment $\lambda$ leads to a system of mod 2--linear equations with an internal inconsistency. As we will discuss below, the connection between contextuality and MBQC runs through the parity proofs. For MBQC we employ state-dependent contextuality. In it, consistent value assignments $\lambda$ do exist, but no probability distribution over them can explain the measurement statistics for the quantum state in question. That value assignments suddenly become possible does not contradict the KS theorem; we have merely shrunk the set of observables considered. 
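The internal inconsistency behind Mermin's square can be exhibited by brute force. A minimal sketch, assuming one standard arrangement of the square (the indexing and helper names below are ours, not the paper's):

```python
from itertools import product

# One standard arrangement of Mermin's square (indices 0..8):
#    X1     X2     X1X2
#    Y2     Y1     Y1Y2
#   X1Y2   Y1X2    Z1Z2
# Each context (3 rows, 3 columns) consists of commuting observables
# whose product is +I or -I; a non-contextual value assignment s must
# satisfy, for each context, sum of s over its members = parity (mod 2).
contexts = [
    ((0, 1, 2), 0),  # rows multiply to +I
    ((3, 4, 5), 0),
    ((6, 7, 8), 0),
    ((0, 3, 6), 0),  # first two columns multiply to +I
    ((1, 4, 7), 0),
    ((2, 5, 8), 1),  # third column multiplies to -I
]

def consistent_assignments():
    """All value assignments s in Z_2^9 satisfying every context parity."""
    return [s for s in product((0, 1), repeat=9)
            if all(sum(s[i] for i in ctx) % 2 == p for ctx, p in contexts)]

print(len(consistent_assignments()))  # prints 0: no assignment exists
```

Each observable appears in exactly two contexts, so summing all six parity equations gives $0=1$ mod 2; the exhaustive search confirms that no assignment survives.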
Already the original proof \cite{KS} of the KS theorem and the simpler proof via Mermin's square use a finite number of observables picked from a priori infinite sets; and in the application to MBQC we simply reduce those sets further.\smallskip The key example for the connection between contextuality and MBQC is the state-dependent version of Mermin's star \cite{Merm}, as was observed in \cite{AB}. Consider the eight-dimensional Hilbert space of 3 qubits, a specific state in it, the Greenberger-Horne-Zeilinger (GHZ) state \cite{GHZ}, \begin{equation}\label{GHZ} |\text{GHZ}\rangle = \frac{|000\rangle + |111\rangle}{\sqrt{2}}, \end{equation} and furthermore the six local Pauli observables $X_i$, $Y_i$, $i=1,..,3$. The state-dependent contextuality question is whether those six local observables can be assigned values $\lambda(\cdot) = \pm 1$ in such a way that the measurement statistics for the four non-local Pauli observables $X_1X_2X_3$, $X_1Y_2Y_3$, $Y_1X_2Y_3$, $Y_1Y_2X_3$ is reproduced. The GHZ state is a simultaneous eigenstate of these observables, $$ X_1X_2X_3 \, |\text{GHZ}\rangle = -X_1Y_2Y_3\, |\text{GHZ}\rangle =-Y_1X_2Y_3\, |\text{GHZ}\rangle= -Y_1Y_2X_3 \, |\text{GHZ}\rangle = |\text{GHZ}\rangle. $$ The measurement outcomes for the four non-local observables are deterministic and equal to $\pm 1$. Now note that these non-local observables are products of the local ones $X_i$, $Y_i$, namely $X_1X_2X_3 = (X_1) (X_2) (X_3)$, $X_1Y_2Y_3 = (X_1) (Y_2) (Y_3)$, etc. Assuming an ncHVM value assignment $\lambda$ for the local observables, the above operator constraints translate into constraints on the assigned values $\lambda(\cdot)$, namely $\lambda(X_1)\lambda(X_2) \lambda(X_3)=+1$, $\lambda(X_1)\lambda(Y_2) \lambda(Y_3)=-1$, and two more of the same kind. It is useful to write the value assignments $\lambda$ in the form $\lambda(\cdot)=(-1)^{s(\cdot)}$. 
In terms of the binary variables $s$, the four constraints read \begin{equation}\label{sdMS} \begin{array}{rcl} s(X_1) + s(X_2) + s(X_3) \mod 2 &=& 0,\\ s(X_1) + s(Y_2) + s(Y_3) \mod 2 &=& 1,\\ s(Y_1) + s(X_2) + s(Y_3) \mod 2 &=& 1,\\ s(Y_1) + s(Y_2) + s(X_3) \mod 2 &=& 1.\\ \end{array} \end{equation} Adding those four equations mod 2 reveals a contradiction $0=1$, hence no value assignment $s$ (equivalently $\lambda$) for the six local observables reproduces the measurement statistics of the GHZ state. The state-dependent Mermin star is thus contextual. We will return to Eq.~(\ref{sdMS}) throughout, as it relates to the simplest example of a contextual MBQC \cite{AB}. \medskip In preparation for the subsequent discussion we review one further concept, the contextual fraction \cite{ABsheaf}. To define it, consider an empirical model $e$, i.e., a collection of probability distributions over measurement contexts, and split it into a contextual part $e^C$ and a non-contextual part $e^{NC}$, \begin{equation} e=\tau e^{NC} + (1-\tau) e^C,\; 0 \leq \tau \leq 1. \end{equation} The maximum possible value of $\tau$ is called the non-contextual fraction ${\sf{NCF}}(e)$ of the model $e$, \begin{equation} {\sf{NCF}}(e) := \max_{e^{NC}} \tau. \end{equation} The contextual fraction ${\sf{CF}}(e)$ is then the probability weight of the contextual part $e^{C}$, \begin{equation} {\sf{CF}}(e):=1-{\sf{NCF}}(e). \end{equation} It is a measure of the ``amount'' of contextuality contained in a given physical setup. \subsection{Measurement-based quantum computation}\label{MBQCrev} Again, we assume that the reader is familiar with the concept of measurement-based quantum computation, a.k.a.\ the one-way quantum computer \cite{RB01}. Here we provide only a very short summary, and then expand on one technical aspect that is of particular relevance for the connection with contextuality---the classical side processing. For a review of MBQC see e.g. \cite{RW12}. 
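The contradiction in Eq.~(\ref{sdMS}) can likewise be checked exhaustively. The sketch below (encoding and helper names ours) also confirms that dropping any single constraint restores satisfiability, in line with the remark above that value assignments exist once the set of constraints is suitably shrunk:

```python
from itertools import product

# Variables indexed 0..5: s(X1), s(X2), s(X3), s(Y1), s(Y2), s(Y3).
# The four constraints of Eq. (sdMS): (variable indices, required parity).
constraints = [
    ((0, 1, 2), 0),  # s(X1)+s(X2)+s(X3) = 0
    ((0, 4, 5), 1),  # s(X1)+s(Y2)+s(Y3) = 1
    ((3, 1, 5), 1),  # s(Y1)+s(X2)+s(Y3) = 1
    ((3, 4, 2), 1),  # s(Y1)+s(Y2)+s(X3) = 1
]

def solutions(cons):
    """All s in Z_2^6 satisfying every constraint in cons."""
    return [s for s in product((0, 1), repeat=6)
            if all(sum(s[i] for i in idx) % 2 == p for idx, p in cons)]

print(len(solutions(constraints)))   # prints 0: no global value assignment
# Dropping any one of the four constraints makes the system satisfiable:
print([len(solutions(constraints[:k] + constraints[k+1:]))
       for k in range(4)])           # prints [8, 8, 8, 8]
```

Since the four left-hand sides sum to zero over $\mathbb{Z}_2$ while the right-hand sides sum to one, the full system is inconsistent; any three of the equations have rank 3 and hence $2^{6-3}=8$ solutions.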
In MBQC, the process of quantum computation is driven by local measurement on an initially entangled quantum state; no unitary evolution takes place. Further, the initial quantum state, for example a 2D cluster state, does not carry any information about the algorithm to be implemented---it is universal. All algorithm-relevant information is inputted to that quantum state, processed and read out by the local measurements. In quantum mechanics, the basis of a measurement can be freely chosen but the measurement outcome is typically random; and this of course affects MBQC. There, the choice of measurement bases encodes the quantum algorithm to be implemented, and the measurement record encodes the computational output. In MBQC every individual measurement outcome is in fact completely random, and meaningful information is contained only in correlations of measurement outcomes. As it turns out, these computationally relevant correlations have a simple structure. To extract them from the measurement record, every MBQC runs a classical side processing. The need for classical side processing in MBQC also arises in a second place: measurement bases must be adapted according to previously obtained measurement outcomes, in order to prevent the randomness of quantum measurement from creeping into the logical processing. We confine our attention to the original MBQC scheme on cluster states \cite{RB01}, which we will henceforth call $l2$-MBQC. There are other MBQC schemes, for example using AKLT states as computational resources, in which the side processing is more involved. In $l2$-MBQC, for each measurement $i$ there are two possible choices for the measured observable $O_i[q_i]$, depending on a binary number $q_i$. The eigenvalues of these observables are constrained to be $\pm 1$. 
Furthermore, both the bitwise output $\textbf{o}=(o_1,o_2,..,o_k)$ and the choice of measurement bases, $\textbf{q}=(q_1,q_2,..,q_N)$, are functions of the measurement outcomes $\textbf{s}=(s_1,s_2,..,s_N)$. In addition, $\textbf{q}$ is also a function of the classical input $\textbf{i}=(i_1,i_2,..,i_m)$. These functional relations are all mod 2 linear, \begin{subequations}\label{CCR} \begin{align}\label{CCR_out} \textbf{o}&=Z\textbf{s} \mod 2,\\ \label{CCR_in} \textbf{q} &=T\textbf{s}+S\textbf{i} \mod 2. \end{align} \end{subequations} Therein, the binary matrix $T$ encodes the temporal order in a given MBQC. If $T_{ij}=1$ then the basis for measurement $i$ depends on the outcome of measurement $j$, hence measurement $j$ must be executed before measurement $i$. We remark that Eqs.~(\ref{CCR}) have been discussed with additional constant offset vectors on the r.h.s. \cite{TO_sym}, but we do not need that level of generality here. \subsection{Links between contextuality and MBQC}\label{Link} The basic result relating MBQC to contextuality is the following. \begin{Theorem}[\cite{RR13}]\label{NLPCrel} Let ${\cal{M}}$ be an $l2$-MBQC evaluating a function $o:(\mathbb{Z}_2)^m \longrightarrow \mathbb{Z}_2$. Then, ${\cal{M}}$ is contextual if it succeeds with an average probability $p_S>1-d_H(o)/2^m$, where $d_H(o)$ is the Hamming distance of $o$ from the closest linear function. \end{Theorem} That is, if the function evaluated by the $l2$-MBQC is non-linear---hence outside what the classical side processing can compute by itself---then the assumption of non-contextuality puts a limit on the reachable probability of success. The reliability of the MBQC can be improved beyond this threshold only in the presence of contextuality. The more nonlinear the computed function (in terms of the Hamming distance $d_H(o)$), the lower the threshold. The lowest contextuality thresholds are reached for bent functions. 
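The side-processing relations Eq.~(\ref{CCR}) are ordinary linear algebra over $\mathbb{Z}_2$. A sketch, instantiated with matrices matching the temporally flat GHZ example discussed later in the text ($T=0$; function and variable names are ours):

```python
import numpy as np

# Classical side processing of an l2-MBQC, Eq. (CCR):
#   o = Z s       mod 2  (output from the outcome record)
#   q = T s + S i mod 2  (basis choices from outcomes and classical input)
# Matrices below correspond to the temporally flat GHZ example (T = 0):
Z = np.array([[1, 1, 1]])                # o  = s1 + s2 + s3
T = np.zeros((3, 3), dtype=int)          # no temporal order
S = np.array([[1, 0], [0, 1], [1, 1]])   # q1 = y, q2 = z, q3 = y + z

def bases(i, s):
    """Measurement-basis choices q for input i, given outcomes s."""
    return (T @ s + S @ i) % 2

def output(s):
    """Computational output o extracted from the outcome record s."""
    return (Z @ s) % 2

i = np.array([0, 1])                      # input (y, z) = (0, 1)
print(bases(i, np.zeros(3, dtype=int)))   # prints [0 1 1]: measure X1, Y2, Y3
print(output(np.array([1, 1, 1])))        # prints [1]: parity of outcomes
```

Since $T=0$ here, no measurement basis depends on another outcome; a nonzero $T$ would impose the temporal order discussed in the text.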
For $m$ even and $o$ bent, it holds that $d_H(o) = 2^{m-1} - 2^{m/2-1}$ \cite{MWS}, and therefore the contextuality threshold for the average success probability $p_S$ approaches $1/2$ for large $m$. An MBQC can thus be contextual even though its output is very close to completely random.\medskip In particular when comparing the above Theorem~\ref{NLPCrel} to structurally similar theorems on the role of entanglement in MBQC \cite{MVdN}, we observe that the above only provides a binary ``can do vs. cannot do'' separation. According to the theorem, in the presence of contextuality a success probability of unity is a priori possible, but without it the stated bound applies. Yet it is intuitively clear that the reachable success probability of function evaluation in MBQC should depend on the ``amount'' of contextuality present. In this regard, we note the following refinement of Theorem~\ref{NLPCrel}, invoking the contextual fraction. \begin{Theorem}[\cite{CF}]\label{T1} Let $f: (\mathbb{Z}_2)^m \longrightarrow \mathbb{Z}_2$ be a Boolean function, and $\mathbb{H}(f,{\cal{L}})$ its Hamming distance to the closest linear function. For each l2-MBQC with contextual fraction ${\sf{CF}}(\rho)$ that computes $f$ with average success probability $\overline{p}_S$ over all $2^m$ possible inputs it holds that \begin{equation}\label{pSbd} \overline{p}_S\leq 1- \frac{(1-{\sf{CF}}(\rho))\, \mathbb{H}(f,{\cal{L}})}{2^m}. \end{equation} \end{Theorem} Thus, the larger the contextual fraction, the larger the achievable success probability for function evaluation through MBQC. If the contextual fraction of the resource state becomes unity, then the theorem puts no non-trivial bound on the success probability of the corresponding $l2$-MBQC. If, on the other hand, the contextual fraction of the resource state becomes zero, i.e., when the resource state can be described by a non-contextual hidden variable model, the threshold in success probability reduces to that of Theorem~\ref{NLPCrel}. 
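For small $m$, the quantities entering these bounds are easy to compute exhaustively. A sketch (helper names ours) that finds the Hamming distance to the closest strictly linear function and evaluates the non-contextual threshold of Theorem~\ref{NLPCrel} for the two-bit OR function:

```python
from itertools import product

def hamming_to_linear(truth_table, m):
    """Distance of a Boolean function (truth table over (Z_2)^m, inputs in
    lexicographic order) to the closest mod-2 linear function x -> a.x."""
    inputs = list(product((0, 1), repeat=m))
    best = len(truth_table)
    for a in product((0, 1), repeat=m):
        lin = [sum(ai * xi for ai, xi in zip(a, x)) % 2 for x in inputs]
        best = min(best, sum(l != o for l, o in zip(lin, truth_table)))
    return best

OR = [0, 1, 1, 1]            # inputs ordered (0,0), (0,1), (1,0), (1,1)
d = hamming_to_linear(OR, 2)
print(d)                     # prints 1
print(1 - d / 2**2)          # prints 0.75: non-contextual bound on p_S

AND = [0, 0, 0, 1]           # a bent function for m = 2
print(hamming_to_linear(AND, 2), 2**(2 - 1) - 2**(2 // 2 - 1))  # both 1
```

So for the OR-gate, any non-contextual $l2$-MBQC is limited to average success probability $3/4$, the value implicit in the GHZ example below; the last line checks the quoted bent-function formula at $m=2$.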
Theorem~\ref{T1} interpolates between those two limiting cases.\medskip One important aspect of the MBQC--contextuality relationship is revealed only by the proof of Theorem~\ref{NLPCrel}, but not by the statement of the theorem itself. Namely, the contextuality of MBQC is intimately related to the classical side processing Eq.~(\ref{CCR}). Rather than replicating the proof from \cite{RR13}, here we illustrate the idea through the example of Anders and Browne's GHZ-MBQC \cite{AB}, related to Mermin's star. We will return to this example throughout.\smallskip {\em{Example (GHZ-MBQC).}} In this scenario, the resource state is a Greenberger-Horne-Zeilinger state of Eq.~(\ref{GHZ}), and the local measurable observables $O_i[q_i]$, depending on a binary number $q_i$, are $O_i[0]=X_i, \; O_i[1]=Y_i$, for $i=1,..,3$. These are precisely the ingredients of the state-dependent version of Mermin's star, as we discussed in Section~\ref{Crev}. As before, the measurement outcomes $s_i\in \mathbb{Z}_2$ are related to the measured eigenvalues $\lambda_i = \pm 1$ of the respective local Pauli observables via $\lambda_i=(-1)^{s_i}$. There are two bits $y,z$ of input and one bit $o$ of output, and the computed function is an OR-gate, $o= y \vee z$. The required linear classical side processing is as follows. \begin{subequations}\label{CCRghz} \begin{align} \label{CCR_inGHZ} q_1 = y,\, q_2 = z,\, q_3 = y+z \mod 2,\\ \label{CCR_outGHZ} o= s_1+s_2+s_3 \mod 2. \end{align} \end{subequations} The two input bits $y$ and $z$ determine the choices $q_i$ of measured observables through Eq.~(\ref{CCR_inGHZ}), and then the corresponding binary measurement outcomes $s_1, s_2, s_3$ determine the outputted value of the function, $o(y,z)$. Let's verify that the output is the intended OR function. First, consider $y=z=0$. Thus, by Eq.~(\ref{CCR_inGHZ}), $q_1=q_2=q_3=0$, and all three locally measured observables are of $X$-type. 
While the outcomes $s_1,s_2,s_3$ are individually random, they are correlated since the product of the corresponding observables $X_i$ is the stabilizer of the GHZ state, $X_1X_2X_3|\text{GHZ}\rangle = |\text{GHZ}\rangle$. Therefore, $s_1+s_2+s_3\mod 2 = 0$. Hence, with Eq.~(\ref{CCR_outGHZ}), $o(0,0)=0$ as required for the OR-gate. We consider one more input combination, $y=0$ and $z=1$. Then, with Eq.~(\ref{CCR_inGHZ}), $q_1=0$ and $q_2=q_3=1$. Hence $X_1$, $Y_2$ and $Y_3$ are measured. Because of the stabilizer relation $X_1Y_2Y_3|\text{GHZ}\rangle = - |\text{GHZ}\rangle$, the three measurement outcomes $s_1,s_2,s_3$ satisfy $s_1+s_2+s_3 \mod 2 =1$. With Eq.~(\ref{CCR_outGHZ}), $o(0,1)=1$ as required. The discussion of the other two inputs is analogous.\smallskip The OR-gate is a very simple function; yet it is of consequence for the above computational setting. Every MBQC requires a classical control computer, to enact the classical side processing of Eq.~(\ref{CCRghz}). This control computer is constrained to performing addition mod 2, and it is therefore not classically computationally universal. The OR-gate is a non-linear Boolean function. By adding it to the available operations, the extremely limited classical control computer is boosted to classical computational universality \cite{AB}. To understand the connection between contextuality and MBQC classical processing relations, we state Eq.~(\ref{CCR_outGHZ}) separately for all four input values. \begin{equation}\label{GHZmbqc1} \begin{array}{rrcr} \textbf{input:} \; (0,0)& \hspace*{4mm}\textbf{output:} \; 0&=& s(X_1)+s(X_2)+s(X_3)\\ (0,1)& 1&=& s(X_1)+s(Y_2)+s(Y_3)\\ (1,0)& 1&=& s(Y_1)+s(X_2)+s(Y_3)\\ (1,1)& 1&=& s(Y_1)+s(Y_2)+s(X_3) \end{array} \end{equation} Note the striking resemblance of Eq.~(\ref{GHZmbqc1}) to the earlier Eq.~(\ref{sdMS}). 
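The stabilizer parities entering Eq.~(\ref{GHZmbqc1}) can be checked directly at the level of the state vector. A brief numerical sketch (verification of the deterministic parities only, not a simulation of individual random outcomes):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
GHZ = np.zeros(8, dtype=complex)
GHZ[0] = GHZ[7] = 1 / np.sqrt(2)          # (|000> + |111>) / sqrt(2)

def kron3(A, B, C):
    return np.kron(A, np.kron(B, C))

# For each input (y, z), Eq. (CCR_inGHZ) selects the local observables;
# the GHZ state is an eigenstate of their product with eigenvalue (-1)^o,
# so the parity o = s1 + s2 + s3 mod 2 is deterministic.
for (y, z) in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    q = [y, z, (y + z) % 2]
    ops = [X if qi == 0 else Y for qi in q]
    eig = np.real(GHZ.conj() @ kron3(*ops) @ GHZ)   # +1 or -1
    o = int(round((1 - eig) / 2))
    print((y, z), o, o == (y or z))       # output equals OR(y, z)
```

All four lines report agreement with the OR-gate, reproducing the right-hand sides of Eq.~(\ref{GHZmbqc1}).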
The only difference is that Eq.~(\ref{GHZmbqc1}) refers to the quantum mechanical measurement record, one context at a time, whereas Eq.~(\ref{sdMS}) refers to a noncontextual value assignment in an ncHVM, applying to all contexts simultaneously. Thus, if we assume an ncHVM then we obtain a contradiction; and if we do not assume it then the same equations describe a computation. This dichotomy exists not only for the GHZ-scenario discussed here, but indeed for all MBQCs satisfying the classical processing relations Eq.~(\ref{CCR}). It is the basis for Theorems~\ref{NLPCrel} and \ref{T1}. \section{Cohomology}\label{CohoW} In the previous section we found that for $l2$-MBQCs contextuality and computation hinge on the same algebraic structure. If we impose an ncHVM description on top of this structure, we obtain a contradiction; and if we do not impose it, we obtain a computation. This raises the question: {\em{What precisely is this common algebraic structure underlying both parity-based contextuality proofs and measurement-based quantum computation?}} This is where cohomology comes in.\smallskip Below, we build up the cohomological picture for deterministic, temporally flat MBQCs. The connection between MBQC and contextuality runs through state-dependent parity-based contextuality proofs. In Section~\ref{CohoCon}, we first introduce the cohomological description of the state-independent counterpart. It is based on a chain complex ${\cal{C}}(E)$, and is slightly simpler. We then progress to the state-dependent version, described by the relative chain complex ${\cal{C}}(E,E_0)$. In Section~\ref{CohoComp}, we explain the relation between cohomology in ${\cal{C}}(E,E_0)$ and MBQC output. \subsection{Cohomology and contextuality}\label{CohoCon} We begin with the simpler state-independent parity proofs of contextuality, and then move on to their state-dependent cousins, which are of more direct interest for MBQC. 
In all that follows we consider observables whose eigenvalues are all $\pm 1$. We denote these observables by $T_a$, a notation we now explain. The basic objects in the cohomological discussion of the parity proofs are chain complexes ${\cal{C}}(E)=(C_0,C_1,C_2,C_3)$ consisting of points (0-chains), edges (1-chains), faces (2-chains) and volumes (3-chains), and boundary maps $\partial$ between those chains. The observables $T_a$ forming the contextuality proof are associated with the edges $a\in E$ in the complex ${\cal{C}}(E)$. More precisely, each edge $a$ corresponds to an equivalence class $\{\pm T_a\}$ of observables, $a:=\{\pm T_a\}$. From each equivalence class $a$, one observable is picked and denoted as $T_a$. From the perspective of contextuality, the reason for considering the observables $T_a$ and $-T_a$ as equivalent is the following. If a parity-based contextuality proof can be based on some set of observables $\{T_a, a\in E\}$, then any signed set $\{(-1)^{\gamma(a)} T_a, \;\gamma(a) \in \mathbb{Z}_2, \forall a\in E\}$ produces an equivalent proof. The signs $(-1)^{\gamma(a)}$ in the definition of the observables $T_a$ do not matter for the existence of contextuality proofs; and this leads us to consider the equivalence classes $\{\pm T_a\}$. We will return to this observation once we have set up the appropriate notation, right after Theorem~\ref{CPth1}. The 1-chains $c_1\in C_1$ are linear combinations of the edges $a\in E$ with $\mathbb{Z}_2$ coefficients. The faces of ${\cal{C}}(E)$ are sets $f=(a_1,a_2,..,a_n)$ of edge labels $a_i$ of pairwise commuting operators $T_{a_i}$, such that for every face $f$ it holds that \begin{equation}\label{ProdRel} \prod_{a\in f} T_a = I (-1)^{\beta(f)}, \end{equation} for a suitable function $\beta$ defined on the faces. We denote the set of faces by $F$, and the 2-chains $c_2 \in C_2$ are linear combinations of the faces $f\in F$ with coefficients in $\mathbb{Z}_2$. 
We can now define a boundary operator $\partial: C_2 \longrightarrow C_1$ via $\partial(f) = \sum_{a\in f} a$, for all $f\in F$, and extension from $F$ to $C_2$ by linearity. We can then also define a coboundary operator $d: C^1 \longrightarrow C^2$ in the usual way; i.e. for every 1-cochain $x \in C^1$ it holds that $d x(f):=x(\partial f)$, for all $f\in F$.\medskip The function $\beta: C_2 \longrightarrow \mathbb{Z}_2$ plays a central role in the cohomological discussion of contextuality. Namely, assume that a non-contextual value assignment $\lambda$ exists, and as before write $\lambda(\cdot) = (-1)^{s(\cdot)}$. Then, Eq.~(\ref{ProdRel}) implies that $\beta(f) = \sum_{a\in f}s(a) = s(\partial f)$ for all $f\in F$. We may write this in cochain notation as \begin{equation}\label{betads} \beta = ds. \end{equation} This equation may be interpreted as a constraint on the value assignment $s$, given $\beta$. But it may as well be regarded as a constraint on $\beta$. Namely, not all functions $\beta$ are of form Eq.~(\ref{betads}), for any 1-cochain $s$. Thus, a measurement setting based on ${\cal{C}}(E)$ is non-contextual only if $\beta = ds$ for some $s\in C^1$, or, equivalently, it is contextual if $\beta \neq ds$, for any $s\in C^1$. \begin{figure} \begin{center} \begin{tabular}{lclcl} (a) && (b) && (c)\\ \includegraphics[height=3.5cm]{MerminStar2.pdf} && \includegraphics[height=3.3cm]{MstarB2.pdf} && \includegraphics[height=3.3cm]{Mstar3.pdf} \end{tabular} \caption{\label{MermSt} Mermin's star. (a) Standard representation. Each line represents a measurement context, composed of four commuting Pauli observables multiplying to $\pm I$. (b) Mermin's star re-arranged on a surface. The Pauli observables now correspond to edges, and each measurement context to the boundary of one of the four elementary faces. The exterior edges are pairwise identified. The colored edges carry a value assignment, resulting from the GHZ stabilizer. 
(c) Relative complex ${\cal{C}}(E,E_0)$. The edges corresponding to observables in the GHZ stabilizer are removed by contraction.} \end{center} \end{figure} We will now slightly reformulate the last statement, to better bring out its cohomological nature. The function $\beta$ is by definition a 2-cochain. But in fact it is a 2-cocycle, $d\beta =0$ \cite{Coho}. Thus, we may express the above contextuality condition as follows. \begin{Theorem}[\cite{Coho}]\label{CPth1}A set of measurements specified by the chain complex ${\cal{C}}(E)$ is contextual if for the cocycle class $[\beta] \in H^2({\cal{C}}(E),\mathbb{Z}_2)$ it holds that $$[\beta] \neq 0.$$ \end{Theorem} {\em{Remark:}} We observed above that no transformation $T_a \longrightarrow (-1)^{\gamma(a)} T_a$, $\forall a \in E$, affects the existence of contextuality proofs. We can now verify this statement in Theorem~\ref{CPth1}. At the level of the cocycle $\beta$, the transformations act as $\beta \longrightarrow \beta + d\gamma$. Hence, $[\beta] \longrightarrow [\beta]$. The parity proofs are thus indeed unchanged. We point out that the transformations discussed here---which we call gauge transformations---have a further use in characterizing MBQC output functions; see Section~\ref{CohoComp}. \medskip Now let's consider the state-independent Mermin star in this framework. The ten Pauli observables $T_a$ therein are assigned to the edges $a \in E$ in a chain complex ${\cal{C}}$; see Fig.~\ref{MermSt}b. For the five faces shown we have $\beta(f_1)=\beta(f_2) =\beta(f_3) = \beta(f_4)=0$, and $\beta(f_5)=1$. Further denote ${\cal{F}}:=\sum_{i=1}^5 f_i$, such that $\partial {\cal{F}} =0$ and $\beta({\cal{F}})=1$. Now assume Mermin's star were non-contextual. Then, $\beta=ds$ for some $s\in C^1$, and we have $$ 0 = s(0) = s(\partial {\cal{F}}) = ds({\cal{F}}) = \beta({\cal{F}}) = 1. $$ Contradiction. 
Hence, Mermin's star is contextual.\medskip We now seek a state-dependent version of Theorem~\ref{CPth1}, preferably formulated in an analogous way. This can be achieved by proceeding from the chain complex ${\cal{C}}(E)$ to a relative chain complex ${\cal{C}}(E,E_0)$. The quantum state $|\Psi\rangle$ now appears, and the set $E_0\subset E$ consists of those edges $a$ for which the corresponding operator $T_a$ has $|\Psi\rangle$ as an eigenstate, \begin{equation}\label{ES} T_a|\Psi\rangle = (-1)^{\mu(a)}|\Psi\rangle,\;\text{with } \mu: E_0 \longrightarrow \mathbb{Z}_2. \end{equation} Geometrically, ${\cal{C}}(E,E_0)$ is obtained from ${\cal{C}}(E)$ by contracting the edges in $E_0$. Thereby, the faces of ${\cal{C}}(E)$ whose boundary lives entirely inside $E_0$ are removed. Under this contraction, the boundary map $\partial$ changes to a relative boundary map $\partial_R$ defined by $\partial_R(f) = \sum_{a\in f\backslash E_0}a$. Extending the above function $\mu$ to all of $E$ by setting $\mu(a):=0$ for all $a \not \in E_0$, we define a relative 2-cochain \begin{equation}\label{betaPsi} \beta_\Psi:= \beta + d\mu \mod 2. \end{equation} Again, $\beta_\Psi$ is a 2-cocycle. Also, $\beta_\Psi$ evaluates to zero on all faces with boundary entirely inside $E_0$, and it is thus a cocycle in the relative complex ${\cal{C}}(E,E_0)$. Quantum mechanically, the measurement record in the context corresponding to any face $f\in F$ has to satisfy $s|_{f\cap E_0} = \mu |_{f \cap E_0}$, and $\beta(f) = s(\partial f)$. Then, from the above definitions it follows that \begin{equation}\label{BetaPsi_s} \beta_\Psi(f) = s(\partial_R f). \end{equation} Now assume a value assignment $s$ exists. It has to satisfy the condition Eq.~(\ref{BetaPsi_s}) for all faces $f\in F$ simultaneously. We may thus write the constraints on such a global value assignment $s$ as $ds=\beta_\Psi$, with $d$ now being the coboundary operator in the complex ${\cal{C}}(E,E_0)$. 
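Whether $\beta_\Psi = ds$ admits a solution is a linear solvability question over $\mathbb{Z}_2$: one asks for a cochain $s$ with $s(\partial_R f) = \beta_\Psi(f)$ on every face. A sketch for the state-dependent Mermin star, assuming the edge and face labeling of Fig.~\ref{MermSt}c (the matrix encoding and helper names are ours):

```python
import numpy as np

# Relative complex for the state-dependent Mermin star (cf. Fig. 1c):
# edges 0..5 ~ X1, X2, X3, Y1, Y2, Y3; each row is the relative boundary
# of one of the faces f1'..f4', i.e. one measurement context.
D = np.array([[1, 1, 1, 0, 0, 0],    # del_R f1' = X1 + X2 + X3
              [1, 0, 0, 0, 1, 1],    # del_R f2' = X1 + Y2 + Y3
              [0, 1, 0, 1, 0, 1],    # del_R f3' = Y1 + X2 + Y3
              [0, 0, 1, 1, 1, 0]])   # del_R f4' = Y1 + Y2 + X3
beta = np.array([0, 1, 1, 1])        # beta_Psi on f1'..f4'

def solvable_gf2(A, b):
    """Is A s = b solvable over GF(2)?  Gaussian elimination."""
    M = np.concatenate([A, b[:, None]], axis=1) % 2
    row = 0
    for col in range(A.shape[1]):
        piv = next((r for r in range(row, M.shape[0]) if M[r, col]), None)
        if piv is None:
            continue
        M[[row, piv]] = M[[piv, row]]
        for r in range(M.shape[0]):
            if r != row and M[r, col]:
                M[r] = (M[r] + M[row]) % 2
        row += 1
    # inconsistent iff some all-zero row of A has nonzero right-hand side
    return not any(M[r, :-1].sum() == 0 and M[r, -1]
                   for r in range(M.shape[0]))

print(solvable_gf2(D, beta))                    # prints False: no cochain s
print(solvable_gf2(D, np.zeros(4, dtype=int)))  # prints True (s = 0 works)
```

The rows of $D$ sum to zero while the entries of $\beta_\Psi$ sum to one, so elimination produces an inconsistent row; this is Eq.~(\ref{sdMS}) in matrix form.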
We thus have, in complete analogy with the state-independent case, the following result. \begin{Theorem}[\cite{Coho}]\label{CPth2}A set of measurements and a quantum state $|\Psi\rangle$ specified by the chain complex ${\cal{C}}(E,E_0)$ are contextual if for the cocycle class $[\beta_\Psi] \in H^2({\cal{C}}(E,E_0),\mathbb{Z}_2)$ it holds that $$[\beta_\Psi] \neq 0.$$ \end{Theorem} {\em{Example, Part II.}} We now apply this to the example of the state-dependent Mermin star. Four faces remain in ${\cal{C}}(E,E_0)$ after contraction of $E_0$ in ${\cal{C}}(E)$, $f_1',.., f_4'$. We have $\beta_\Psi(f_1')=0$, $\beta_\Psi(f_2')= \beta_\Psi(f_3')=\beta_\Psi(f_4')=1$. Denote ${\cal{F}}'=\sum_{i=1}^4 f_i'$ such that the relative boundary of ${\cal{F}}'$ vanishes, $\partial_R {\cal{F}}'=0$, and $\beta_\Psi({\cal{F}}')=1$. Now assume that the state-dependent Mermin star is non-contextual. Then, $\beta_\Psi=ds$ for some 1-cochain $s\in C^1({\cal{C}}(E,E_0),\mathbb{Z}_2)$. And thus \begin{equation}\label{cohoCount} 1= \beta_\Psi({\cal{F}}') = ds({\cal{F}}') = s(\partial_R {\cal{F}}') = s(0) = 0. \end{equation} Contradiction. Hence the state-dependent Mermin star is contextual. Eq.~(\ref{cohoCount}) is the cohomological version of Eq.~(\ref{sdMS}). It describes the exact same system of linear constraints. \subsection{Cohomology and computation}\label{CohoComp} Recall from Section~\ref{MBQCrev} that in MBQC there are two measurable observables at each physical site $i$, $O_i[q_i]$, $q_i \in \mathbb{Z}_2$. To make use of the cohomological formalism, we now denote these observables as \begin{equation}\label{OT} O_i[0]=T_{a_i},\; O_i[1]=T_{\overline{a}_i},\;\; \forall i=1,..,n. \end{equation} We define the notion of an input group to import the classical processing relation Eq.~(\ref{CCR_in}) into our cohomological picture. The input group is $Q =\langle \textbf{i}_j,\;j=1,..,m\rangle \cong \mathbb{Z}_2^m$. 
The generators of $Q$ act on the observables of Eq.~(\ref{OT}) as \begin{equation}\label{cflip} \begin{array}{rrl} \textbf{i}_j(a_i)=(a_i),&\textbf{i}_j(\overline{a}_i)=(\overline{a}_i),& \text{if $S_{ij}=0$},\\ \textbf{i}_j(a_i)=(\overline{a}_i),&\textbf{i}_j(\overline{a}_i)=(a_i),& \text{if $S_{ij}=1$}. \end{array} \end{equation} Denoting by ${\cal{E}}_\text{e}$ a reference context corresponding to the trivial input $\text{e} \in Q$, ${\cal{E}}_\text{e} := \{a_j,\,j = 1,..,n\}$, and by ${\cal{E}}_\textbf{i}$ the measurement context for any input $\textbf{i} \in Q$, then, with the definitions Eq.~(\ref{OT}) and (\ref{cflip}), the relation \begin{equation}\label{Iact} {\cal{E}}_\textbf{i} = \textbf{i}({\cal{E}}_\text{e}):=\{\textbf{i}(a_j),\,j=1,.., n\} \end{equation} reproduces the classical side processing relation Eq.~(\ref{CCR_in}) in the limit of temporally flat MBQCs, $T=0$. This is the limit we are presently interested in. We have thus far represented computational input by a group $Q$ that maps the complex ${\cal{C}}(E,E_0)$ to itself, and we now turn to the computational output. In terms of the above sets ${\cal{E}}_\textbf{i}$, the classical side processing relations for output, Eq.~(\ref{CCR_out}), read \begin{equation}\label{CCR_out_2} o(\textbf{i}) = \sum_{a\in {\cal{E}}_\textbf{i}} s(a) \mod 2,\;\; \forall \textbf{i} \in Q. \end{equation} We note that for any $\textbf{i} \in Q$, the observables $T_a$, $a\in {\cal{E}}_\textbf{i}$, pairwise commute. Furthermore, in the setting of deterministic computation, the input group $Q$ (equivalently, the matrix $S$ in Eq.~(\ref{CCR_in})) is chosen such that the MBQC resource state $|\Psi\rangle$ is an eigenstate of all observables $\prod_{a \in {\cal{E}}_\textbf{i}} T_a $. That is, $\prod_{a \in {\cal{E}}_\textbf{i}} T_a =\pm T_x$, with $x\in E_0$; cf. Eq.~(\ref{ES}). 
Therefore, the edges $a\in {\cal{E}}_\textbf{i}$ form the boundary of a face $f_\textbf{i}$ in the contracted complex ${\cal{C}}(E,E_0)$, i.e., $f_\textbf{i} \in C_2({\cal{C}}(E,E_0))$ satisfies ${\cal{E}}_\textbf{i} = \{\partial_R f_\textbf{i}\}$. Finally, with Eq.~(\ref{Iact}), ${\cal{E}}_\textbf{i} = \{\textbf{i} (\partial_R f_\text{e})\}$, and the face $f_\text{e}$ corresponds to ${\cal{E}}_\text{e}$. Therefore, Eq.~(\ref{CCR_out_2}) can be rewritten in cohomological notation as $$ o(\textbf{i}) = s(\textbf{i}(\partial_R f_\text{e})), $$ where $s$ is the measurement record for the observables in ${\cal{E}}_\textbf{i}$. Inserting Eq.~(\ref{BetaPsi_s}) into the last equation, we obtain the following result. \begin{Theorem}[\cite{CohoMBQC}]\label{ObeT} The function $o: Q \longrightarrow \mathbb{Z}_2$ computed in a given deterministic and temporally flat $l2$-MBQC is related to the cocycle $\beta_\Psi \in C^2({\cal{C}}(E,E_0))$ via \begin{equation}\label{obeta} o(\textbf{i}) = \beta_\Psi(\textbf{i}(f_\text{e})),\; \forall \textbf{i} \in Q. \end{equation} \end{Theorem} This relation between the computational output $o$ and the 2-cocycle $\beta_\Psi$ is the main result of this section. It has been established in greater generality in \cite{CohoMBQC} (Theorem~4 therein), but we do not need the additional generality here. Theorem~\ref{ObeT} is the computational counterpart to Theorem~\ref{CPth2} in Section~\ref{CohoCon}. Both results together establish that a single cohomological object, the cocycle $\beta_\Psi$, governs contextuality and computational output in MBQC. Jointly, Theorems~\ref{CPth2} and \ref{ObeT} thus flesh out Diagram~(\ref{Triangle}).\medskip {\em{Example, Part III.}} For the GHZ-MBQC, Eq.~(\ref{obeta}) may be explicitly verified by inspecting Fig.~\ref{MermSt}c. The reference context is ${\cal{E}}_{\text{e}}=(a_{X_1},a_{X_2},a_{X_3})$, hence $f_\text{e} = f'_1$, w.r.t. the labeling of Fig.~\ref{MermSt}c. 
The input group is $Q=\mathbb{Z}_2\times \mathbb{Z}_2$. Its two generators $\textbf{i}_1, \textbf{i}_2$ are related to the input bits $y$, $z$ of the OR-gate via $y \mapsto \textbf{i}_1,\; z \mapsto \textbf{i}_2$, and Eq.~(\ref{cflip}) becomes \begin{equation}\label{Qghz} \begin{array}{rl} \textbf{i}_1: &a_{X_1} \leftrightarrow a_{Y_1}, \; a_{X_3} \leftrightarrow a_{Y_3}, \; a_{X_2} \circlearrowright,\; a_{Y_2} \circlearrowright,\\ \textbf{i}_2: &a_{X_2} \leftrightarrow a_{Y_2}, \; a_{X_3} \leftrightarrow a_{Y_3}, \; a_{X_1} \circlearrowright,\; a_{Y_1} \circlearrowright. \end{array} \end{equation} We may now verify in the cohomological calculus established above that this action does indeed lead to the execution of the OR-gate in the corresponding GHZ-MBQC. For example, if $y=z=0$ then $\textbf{i}=\text{e}$, and $\textbf{i}(f_1') =f_1'$; and thus $o(0,0) = \beta_\Psi(f_1') = 0 = \text{OR}(0,0)$. Further, if $y=1$ and $z=0$, then $\textbf{i} =\textbf{i}_1$, and $\textbf{i}_1(f'_1) = f_3'$. Thus, $o(1,0) = \beta_\Psi(f_3')=1=\text{OR}(1,0)$. The other two cases are analogous. See Fig.~\ref{GHZsymm} for an illustration of the action of the input group given by Eq.~(\ref{cflip}).\medskip \begin{figure} \begin{center} \includegraphics[width=8cm]{GroupInp} \caption{\label{GHZsymm}Input group of the GHZ-MBQC. Displayed is the action of the element $\textbf{i}_1$ of the input group $Q = \mathbb{Z}_2\times \mathbb{Z}_2$. As described by Eq.~(\ref{Qghz}), for qubits 1 and 3, $X$ and $Y$ are interchanged under the given input, and the reference context $(X_1,X_2,X_3)$ is thereby changed into $(Y_1,X_2,Y_3)$. } \end{center} \end{figure} One point remains to be discussed. When comparing Theorems~\ref{CPth2} and \ref{ObeT}, we notice a difference. Theorem~\ref{CPth2} invokes the cohomology class $[\beta_\Psi]$ whereas Theorem \ref{ObeT} invokes the cocycle $\beta_\Psi$ itself. Only the former theorem is therefore truly topological. 
This prompts the question: {\em{Is there an operationally meaningful way of grouping the MBQC output functions $o$ into equivalence classes $[o]$ that depend only on $[\beta_\Psi]$?}} That is indeed the case. The equivalence classes $[o]$ of MBQC output functions are motivated and constructed as follows. We note that the signs in the observables $\{T_a,\,a\in E\backslash E_0\}$ are a mere convention. If an observable $T_a$, for some $a\in E\backslash E_0$, is measured in a given MBQC, then a measurement of $-T_a$ is exactly as hard, because the corresponding projectors are the same. To obtain one measurement from the other, only the labels of the two pointer positions of the measurement device need to be switched. Therefore, the change \begin{equation}\label{GT} T_a \longrightarrow (-1)^{\gamma(a)} T_a, \;\forall a\in E\backslash E_0, \end{equation} for any cochain $\gamma: C_1(E,E_0) \longrightarrow \mathbb{Z}_2$ is an equivalence transformation, or, as it is also called, a gauge transformation. Yet, these transformations have an effect. The cocycle $\beta_\Psi$ changes, namely $$ \gamma: \beta_\Psi \mapsto \beta_\Psi + d\gamma. $$ And thus, by Theorem~\ref{ObeT}, the output function $o$ changes too. Functions obtained from one another through such a transformation should be considered computationally equivalent, as was argued above. It is thus meaningful to group MBQC output functions $o$ into equivalence classes $$ [o]:=\{\textbf{i} \mapsto (\beta_\Psi+d\gamma)(\textbf{i}(f_\text{e})),\;\forall \gamma \in C^1({\cal{C}}(E,E_0))\}. $$ With this definition, Theorem~\ref{ObeT} has the following corollary. \begin{Cor}\label{CohoCompCor} For each deterministic and temporally flat $l2$-MBQC, the equivalence class $[o]$ of output functions is fully determined by $[\beta_\Psi]$. \end{Cor} Thus, the gauge-invariant information in an MBQC output function is contained in the same cohomological information that also provides the contextuality proof. 
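Before moving on, the OR-gate computation of the GHZ example can be checked by direct numerical computation. The following sketch (Python with numpy; not part of the original construction) assumes the standard convention that input bit $0$ selects an $X$ measurement and input bit $1$ a $Y$ measurement, with the basis on qubit 3 fixed by $y \oplus z$, matching the input-group action of Eq.~(\ref{Qghz}).

```python
import numpy as np

# Pauli matrices and the 3-qubit GHZ resource state (|000> + |111>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def output_bit(y, z):
    """Parity s1 + s2 + s3 mod 2 of the deterministic measurement outcomes.

    Input bit 0 selects an X measurement, bit 1 a Y measurement;
    qubit 3's basis is set by y XOR z (the action of Eq. (Qghz)).
    """
    settings = [Y if y else X, Y if z else X, Y if y ^ z else X]
    O = np.kron(np.kron(settings[0], settings[1]), settings[2])
    ev = np.vdot(ghz, O @ ghz).real        # GHZ is an eigenstate, ev = +/-1
    assert np.allclose(O @ ghz, ev * ghz)
    return 0 if ev > 0 else 1              # eigenvalue (-1)^o

for y in (0, 1):
    for z in (0, 1):
        assert output_bit(y, z) == (y | z)  # the computed function is OR
```

Flipping the sign of a single local observable, e.g. $Y^{(3)} \to -Y^{(3)}$, toggles the outputs on the inputs $(1,0)$ and $(0,1)$ and turns OR into AND, in line with the gauge equivalence just discussed.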
\medskip {\em{Example, Part IV.}} In the GHZ-MBQC, we may flip $Y_3 \longrightarrow - Y_3$. As a result, the computed function becomes an AND. Therefore, AND and OR are equivalent w.r.t. MBQC. Considering the whole set of equivalence transformations for this example, we find that there are two equivalence classes of functions on two bits, the non-linear Boolean functions and the linear ones. Each member of the former class boosts the classical control computer of MBQC to computational universality, whereas the latter class has no effect on the computational power at all. From the cohomological perspective, $H^2({\cal{C}}(E,E_0),\mathbb{Z}_2) = \mathbb{Z}_2$, i.e., there are two equivalence classes of cocycles $\beta_\Psi$. The trivial class corresponds to the linear Boolean functions on two bits and the non-trivial class to the non-linear Boolean functions. \subsection{On the probabilistic case} In the previous sections we focussed on deterministic MBQC. Indeed, powerful deterministic quantum algorithms do exist, notably for the Discrete Log problem \cite{MZ}. However, most known quantum algorithms are probabilistic, i.e., they succeed with a probability smaller than one. A cohomological treatment of probabilistic MBQCs is given in \cite{CohoMBQC}, based on group cohomology. Here we are content with alerting the reader to the additional layer of difficulty posed by the probabilistic case.\smallskip Let's trace the restriction to deterministic MBQCs back to its origin. In Theorem~\ref{ObeT}, the central result on the computational side, it is present through the cocycle $\beta_\Psi \in C^2({\cal{C}}(E,E_0),\mathbb{Z}_2)$. This cocycle is defined in Eq.~(\ref{betaPsi}), in terms of the cocycle $\beta \in C^2({\cal{C}}(E),\mathbb{Z}_2)$ and the value assignment $\mu: E_0 \longrightarrow \mathbb{Z}_2$. 
The value assignment $\mu$ in turn refers to eigenvalues of certain observables related to computational output, of which the resource state $|\Psi\rangle$ is an eigenstate; cf. Eq.~(\ref{ES}). In the probabilistic case, the value assignment $\mu$ in general does not exist. Hence, $\beta_\Psi$ is not defined, and we cannot have straightforward probabilistic counterparts of Theorems \ref{CPth2} and \ref{ObeT}.\smallskip But the problem is not merely technical; it is conceptual. Consider our running example of the GHZ-MBQC, which executes an OR-gate with certainty. As soon as probabilistic computations are admitted, we may as well say that it evaluates the constant function $1$ with an average success probability of 75 percent. In fact, the same computation executes any 2-bit Boolean function, except $\neg \text{OR}$, with some nonzero probability of success. How can we then say that one particular function is computed while all others are not? Key to the solution is a group $G$ of symmetry transformations that extends the input group $Q$, in the group-theoretic sense. $G$ maps the complex ${\cal{C}}(E,E_0)$ to itself, acting on the observables $T_a$, $a\in E\backslash E_0$ via \begin{equation}\label{PhiDef} g(T_a) = (-1)^{\tilde{\Phi}_g(a)}T_{ga},\;\;\forall g\in G. \end{equation} Therein, the phase function $\tilde{\Phi}$ is, by construction, a 1-cocycle in group cohomology. There is a further condition on $G$. Namely, the action Eq.~(\ref{PhiDef}) of $G$ on the set of observables $\{\pm T_a, a\in E\backslash E_0\}$ induces an action on the output function $o$, and we require $o$ to be invariant under this action. It turns out that, given $G$, this invariance condition constrains $o$ up to an additive constant \cite{CohoMBQC}. Thus, the output function $o$ is {\em{defined}} through a symmetry group. 
Furthermore, $o$ can be expressed in terms of the phase function $\tilde{\Phi}$, and a contextuality proof can be given in terms of a group cohomology class derived from $\tilde{\Phi}$. As a result, Theorems~\ref{CPth2} and \ref{ObeT} have counterparts in the probabilistic case. They are given as Theorem 5 in \cite{Coho} and Theorem 6 in \cite{CohoMBQC}, respectively. The probabilistic counterpart of Corollary~\ref{CohoCompCor} is Corollary~2 in \cite{CohoMBQC}. \section{Temporal order}\label{TO} The connection between contextuality and $l2$-MBQC described by Theorem~\ref{NLPCrel} is completely general. It applies to deterministic and probabilistic measurement-based computations, as well as temporally flat and temporally ordered ones. It is only the cohomological description of this connection that is presently restricted to temporally flat computations. This is a technical limitation, and the purpose of this section is to outline an approach for overcoming it. The idea is not to change the cohomological description at all, but to enlarge the complex ${\cal{C}}(E,E_0)$ with additional observables that take care of the temporal ordering. We illustrate this approach with the setting of the ``iffy'' proof \cite{Exa}. In Section~\ref{Iffy1} we review the iffy contextuality proof, largely following the original exposition \cite{Exa}. We then explain why the signature feature of iffiness is incompatible with applications to MBQC. In Section~\ref{Iffy2} we present a cohomological contextuality proof for the iffy scenario that is MBQC-compatible. This proof includes temporal order, yet is covered by Theorem~\ref{CPth2} without any modification. \subsection{The ``iffy'' contextuality proof}\label{Iffy1} To get started, we require a simple example of a contextuality proof with temporal order, a counterpart to the non-adaptive GHZ proof. Luckily, Ref.~\cite{Exa}, Section 6, offers one; in fact, it offers a whole family of examples. 
We begin by writing them in a stabilizer notation that suits our purpose. The examples consist of a three-qubit resource state $|\Psi\rangle$, and local measurement settings for the three qubits. For any even integer $N$, choose $$ |\Psi\rangle \sim |00\rangle |\nu\rangle + | 11\rangle |\omega\rangle, $$ where $$ \begin{array}{rcl} |\nu\rangle &=& \cos \frac{\lambda}{2}|0\rangle + \sin\frac{\lambda}{2}|1\rangle,\\ |\omega\rangle &=&\sin \frac{\lambda}{2}|0\rangle + \cos\frac{\lambda}{2}|1\rangle, \end{array} $$ and $\lambda = \pi/2 - \pi/N$. This defines the resource state. Now the measurements: qubit 3 will be measured in the eigenbasis of $X$ or $Y$, and qubits 1 and 2 will be measured in the eigenbases of any of the operators \begin{equation}\label{XkDef} X_k:= \cos\left( k\frac{\pi}{N}\right) X + \sin\left( k\frac{\pi}{N}\right)Y,\;\; \forall k =0,..,2N-1. \end{equation} Note that $X_{N+k} = - X_k$, so that we really only need the observables $X_0,..,X_{N-1}$. Denote by $P_{y,\pm}$ the projectors onto the eigenstates of $Y$ with positive and negative eigenvalue, respectively, and define the operators \begin{equation}\label{tauX} \begin{array}{rcll} \tau_k &: =& X_{N-1-k}^{(1)} \otimes X_{k}^{(2)}\otimes P_{y,+}^{(3)} + X_{N+1-k}^{(1)} \otimes X_{k}^{(2)}\otimes P_{y,-}^{(3)},& k=0,.., N-1,\\ \overline{X}_k &:=& X^{(1)}_{N-k}\otimes X^{(2)}_k \otimes X^{(3)},& k=0,.., N-1. \end{array} \end{equation} By direct calculation, we can verify that \begin{subequations}\label{Stab} \begin{align}\label{StabX} \overline{X}_k |\Psi\rangle &=- |\Psi\rangle,\; \forall k,\\ \label{StabY} \tau_k |\Psi\rangle &= - |\Psi\rangle,\; \forall k. \end{align} \end{subequations} The measurement strategies considered in the contextuality proof have temporal order. Namely, first qubit 3 is measured, in the $X$ or $Y$ basis. In the latter case, the choice of the measurement basis for qubit 1 depends on the outcome of the measurement on qubit 3; cf. Eq.~(\ref{tauX}). 
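The stabilizer relations Eq.~(\ref{Stab}) can be confirmed by direct numerical computation. The following sketch (Python with numpy; an illustration, not part of the original construction) builds $|\Psi\rangle$ and the observables of Eq.~(\ref{tauX}) for $N=4$; any even $N$ works.

```python
import numpy as np

N = 4                                    # any even N; lambda = pi/2 - pi/N
lam = np.pi / 2 - np.pi / N
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

# Resource state |Psi> ~ |00>|nu> + |11>|omega>, normalized
nu = np.array([np.cos(lam / 2), np.sin(lam / 2)])
om = np.array([np.sin(lam / 2), np.cos(lam / 2)])
e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = np.kron(np.kron(e0, e0), nu) + np.kron(np.kron(e1, e1), om)
psi = psi / np.linalg.norm(psi)

def Xk(k):                               # Eq. (XkDef); note X_{N+k} = -X_k
    return np.cos(k * np.pi / N) * X + np.sin(k * np.pi / N) * Y

Pyp, Pym = (I2 + Y) / 2, (I2 - Y) / 2    # projectors onto Y = +1 / Y = -1

for k in range(N):
    Xbar_k = np.kron(np.kron(Xk(N - k), Xk(k)), X)
    tau_k = (np.kron(np.kron(Xk(N - 1 - k), Xk(k)), Pyp)
             + np.kron(np.kron(Xk(N + 1 - k), Xk(k)), Pym))
    assert np.allclose(Xbar_k @ psi, -psi)   # Eq. (StabX)
    assert np.allclose(tau_k @ psi, -psi)    # Eq. (StabY)
```

Both eigenvalue equations hold with eigenvalue $-1$, i.e. the assigned value is $s=1$, which is the starting point for the constraints derived next.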
From Eq.~(\ref{Stab}) we can read off the constraints on the non-contextual hidden variable model, which are provided in \cite{Exa}. Denote by $a_k$ and $b_k$ the binary measurement outcomes on qubits 1 and 2, respectively, given the measured observable $X_k$, and by $c_0$ ($c_1$) the outcome on qubit 3 if the measured observable is $X$ ($Y$). If these values form the value assignment of an ncHVM, they must satisfy the constraints \begin{equation}\label{ValAss} \begin{array}{rcll} a_i \oplus b_j \oplus c_0 &=& 0, & \forall i,j\;\text{s.th. } i+j =0,\\ a_i \oplus b_j \oplus c_0 &=& 1, & \forall i,j\;\text{s.th. } i+j =N,\\ \\ a_i \oplus b_j &=& 0, & \forall i,j\;\text{s.th. } i+j +(-1)^{c_1} =0,\\ a_i \oplus b_j &=& 1, & \forall i,j\;\text{s.th. } i+j +(-1)^{c_1} =N. \end{array} \end{equation} The contextuality proof proceeds from there, as usual, by adding up equations mod 2. This will be discussed below.\medskip We now show how Eqs.~(\ref{ValAss}) are derived from the stabilizer relations Eq.~(\ref{Stab})\footnote{The original derivation of Eq.~(\ref{ValAss}) in \cite{Exa} uses a different formalism which we do not reproduce here.}. The two relations at the top of Eq.~(\ref{ValAss}) follow straightforwardly from Eq.~(\ref{StabX}); here we focus on the relations at the bottom of Eq.~(\ref{ValAss}), which derive from Eq.~(\ref{StabY}). First, for the observables of Eq.~(\ref{tauX}), with Eq.~(\ref{Stab}) we have the following values \begin{equation} s_{\tau_k}= s_{\overline{X}_k} =1,\;\;k=0,.., N-1, \end{equation} corresponding to eigenvalues $(-1)^1=-1$. Now consider separately the two cases $c_1=0$ and $c_1=1$. Case I: $c_1=0$. We now want to argue that, in this case, the observables $ \tau_k(0) = X_{N-1-k}^{(1)} \otimes X_{k}^{(2)} $ are also assigned the value 1, $$ s_{\tau_k(0)} =1, \;\; k=0,..,N-1. $$ The argument is as follows. If $c_1=0$, then this fact could be established by measuring $Y^{(3)}$. 
According to quantum mechanics, the post-measurement state would be $|y,+\rangle:=P_{y,+}^{(3)}|\Psi\rangle$. For this state it holds that \begin{equation}\label{eqChain} \tau_k(0)|y,+\rangle = \tau_k|y,+\rangle = \tau_k P_{y,+}|\Psi\rangle = P_{y,+} \tau_k|\Psi\rangle = - P_{y,+} |\Psi\rangle =-|y,+\rangle. \end{equation} For later reference, note that in the above chain of equalities we have used the properties \begin{equation}\label{ref} \tau_k(0)P_{y,+} = \tau_k P_{y,+}\mbox{ and } [\tau_k,P_{y,+}]=0. \end{equation} By Eq.~(\ref{eqChain}), $s_{\tau_k(0)} =1$, for all $k$, as claimed. Further, by standard arguments, $s_{\tau_k(0)} =a_{N-1-k} \oplus b_k$. Combining the last two statements, $$ a_{N-1-k} \oplus b_k = 1,\;\;\forall k=0,.., N-1. $$ This provides the lower part of Eq.~(\ref{ValAss}) for the case of $c_1=0$. Case II: $c_1=1$. A completely analogous argument establishes the bottom half of Eq.~(\ref{ValAss}) for $c_1=1$.\smallskip Eq.~(\ref{ValAss}) is thus established as a set of constraints that any value assignment $\{a_k,b_l,c_m\}$ needs to satisfy. We now complete the proof, focussing on Case I, $c_1=0$. Case II is analogous. We assume that a value assignment exists. From the upper half of Eq.~(\ref{ValAss}), we pick the equation $a_0 +b_0+c_0 = 0 \mod 2$, and the equations $a_k+b_{N-k}+c_0 = 1 \mod 2$, for $k=1,..,N-1$. From the lower half we pick the equations $a_l + b_{N-1-l} =1$, for $l=0,..,N-1$. Summing those equations, we obtain $N c_0 +2\sum_{k=0}^{N-1}(a_k+b_k) = 2N-1\;\;(\text{mod}\; 2)$. Since $N$ is even, this is a contradiction. $\Box$ \medskip Now that we have presented the iffy contextuality proof, let's take a step back and ask two questions. (1) {\em{Where is the temporal order in this contextuality proof?}}---Suppose one wants to test the correlations of Eq.~(\ref{tauX}) through local measurement. 
The correlations are labeled by an integer $k \in \mathbb{Z}_N$, and a further bit $l\in \mathbb{Z}_2$ that decides whether qubit \#3 is measured in the $X$-basis ($l=0$) or in the $Y$-basis $(l=1)$. Given the input $(k,l)$, the pattern of local measurements to test the correlations is fully specified. Therein, if $l=1$, the measurement basis for qubit \#1 depends on the outcome $c_1$ obtained on qubit \#3, cf. Eq.~(\ref{tauX}), upper line. Thus, qubit \#1 must be measured {\em{after}} qubit \#3. This is the same temporal ordering due to adaptive measurement as occurs in MBQC.\smallskip \begin{figure} \begin{center} \begin{tabular}{lcl} (a) && (b)\\ \includegraphics[width=4cm]{surf4} & &\includegraphics[width=4cm]{surf5} \end{tabular} \caption{\label{topol}Chain complexes in the iffy proof, for $N=4$. (a) The complex ${\cal{C}}^{(0)}$ for $c_1=0$, and (b) the complex ${\cal{C}}^{(1)}$ for $c_1=1$. In either case, the four edges labeled ``$c_0$'' correspond to the same observable $X^{(3)}$, and are identified. The faces $f$ on which $\beta_\Psi(f)=1\, (0)$ are shown in color (white).} \end{center} \end{figure} (2) {\em{Is the iffy proof topological?}}---Yes, but with a caveat. The value assignment for $c_1$ is not part of the topological description. Instead there are two separate topological descriptions, one for $c_1=0$ and one for $c_1=1$. They are depicted in Fig.~\ref{topol}. In both cases there is a surface ${\cal{F}}^{(c_1)}$ comprising all of the faces displayed. Those surfaces have the property that $\partial {\cal{F}}^{(c_1)} =0$. In both cases it holds that $\beta_\Psi^{(c_1)}({\cal{F}}^{(c_1)})=1$, which, together with the former statement, implies that $\left[\beta_\Psi^{(c_1)}\right]\neq 0$, $\forall c_1\in \mathbb{Z}_2$. 
The iffy proof thus has two cohomological parts, conditioned by the value of $c_1$, \begin{equation}\label{IP} \text{Iffy\,Proof} =\left\{ \mathbb{Z}_2 \ni c_1 \mapsto \left({\cal{C}}^{(c_1)}, \beta_\Psi^{(c_1)}\right)\right\}. \end{equation} The conditioning on $c_1$ stands in the way of using the iffy proof as a template for describing temporally ordered MBQCs. To see why this is so, let's recap the earlier topological proofs. There, the assumption of a noncontextual value assignment $s$ is contradicted by $[\beta_\Psi]\neq 0$, and $\beta_\Psi$ is an object that is well-defined in quantum mechanics. Beyond the contextuality witness (see Theorem~\ref{CPth2}), $\beta_\Psi$ also contains the function computed in MBQC (see Theorem~\ref{ObeT}). The counterpart of $\beta_\Psi$ in the present iffy proof is the quantum-classical hybrid structure given by Eq.~(\ref{IP}). It consists of the quantum-mechanically valid parts ${\cal{C}}^{(c_1)}$, $\beta_\Psi^{(c_1)}$, and one element, $c_1$, of the non-contextual value assignment, so far assumed to exist. (Recall that ruling out the existence of such a value assignment is the very purpose of the contextuality proof.) Unlike $\beta_\Psi$ in the former cases, as a whole this hybrid object is not compatible with quantum mechanics. It is thus not a suitable foundation for a description of MBQC. Now that we have understood this, we seek to modify the iffy proof such that it becomes compatible with measurement-based quantum computation. \subsection{Deiffifying the iffy proof}\label{Iffy2} Here we present a topological contextuality proof for the above iffy scenario that uses a complex of the type defined in \cite{Coho}. The proof works in exactly the same way as in the temporally flat scenarios to which it was previously applied. \begin{figure} \begin{center} \includegraphics[width=8cm]{Complex1.jpg} \caption{\label{Comp1}Complex for the cohomological contextuality proof of the iffy scenario. 
There are four edges corresponding to $X^{(3)}$, and two each for $\sigma^\pm_k$, for various values of $k$, and for $Y^{(3)}$. Such edges are identified. The faces $f$ coloured in red have $\beta_\Psi(f)=1$, and the white faces $g$ have $\beta_\Psi(g)=0$.} \end{center} \end{figure} We define a few additional observables, for all $k\in \mathbb{Z}_{2N}$, \begin{subequations}\label{EpSig} \begin{align} \epsilon_k &:= \frac{I^{(3)}+Y^{(3)}}{2} \otimes X^{(1)}_{k-1} + \frac{I^{(3)}-Y^{(3)}}{2} \otimes X^{(1)}_{k+1},\\ \sigma^+_k &:= \frac{I^{(3)}+Y^{(3)}}{2} \otimes X^{(1)}_{k-1} + \frac{I^{(3)}-Y^{(3)}}{2} \otimes I^{(1)},\\ \sigma^-_k &:= \frac{I^{(3)}+Y^{(3)}}{2} \otimes I^{(1)} + \frac{I^{(3)}-Y^{(3)}}{2} \otimes X^{(1)}_{k+1}. \end{align} \end{subequations} These are correlated observables on qubits \#1 and \#3. They can also be considered as unitary gates in which qubit \#3 is the control and qubit \#1 the target. This is how the original iffiness enters into our topological proof, but in a fully quantum fashion. The stabilizer relations Eq.~(\ref{Stab}) can be expressed in terms of the observables defined in Eq.~(\ref{EpSig}) (only the first relation changes): \begin{subequations}\label{StabRel2} \begin{align} \label{SR2a} \epsilon_{N-k} \otimes X^{(2)}_k \, |\Psi\rangle &= - |\Psi\rangle,\\ \label{SR2b} X_{N-k}^{(1)} \otimes X^{(2)}_k X^{(3)}\, |\Psi\rangle &=- |\Psi\rangle. \end{align} \end{subequations} Further, the observables $\epsilon_k$, $\sigma^\pm_k$ satisfy the following {\em{recoupling relations}}: \begin{subequations}\label{esRel} \begin{align} \label{RelA} \epsilon_k &= \sigma_k^+ \sigma_k^-,\\ \label{RelB} X^{(1)}_k &= \sigma^+_{k+1}\sigma^-_{k-1},\\ \label{RelC} -Y^{(3)} &= \sigma^+_k \sigma^+_{N+k},\\ \label{RelD} Y^{(3)} &= \sigma^-_k \sigma^-_{N+k}. 
\end{align} \end{subequations} Finally, we note the commutation relations \begin{subequations}\label{CommRel} \begin{align} [\sigma^+_k,\sigma^-_l]&=0,\;\; \forall k,l \in\mathbb{Z}_{2N},\\ [\sigma^\pm_k,Y^{(3)}]&=0,\;\; \forall k \in\mathbb{Z}_{2N}. \end{align} \end{subequations} With these relations, the complex shown in Fig.~\ref{Comp1} is well-composed; i.e., all faces correspond to triples of commuting operators that multiply to $\pm I$. \begin{figure} \begin{center} \includegraphics[width=5cm]{Complex2} \caption{\label{Comp2}The complex for the cohomological contextuality proof of the iffy scenario, in a different colouring. Orange: faces corresponding to the stabilizer relation Eq.~(\ref{SR2a}); purple: faces stemming from the stabilizer relation Eq.~(\ref{SR2b}); white: faces invoking the recoupling relations Eq.~(\ref{esRel}).} \end{center} \end{figure} We first consider the case $N=4$, which is displayed in Fig.~\ref{Comp1}, and then the general case. Denote ${\cal{F}}:=\sum_i f_i$, i.e., ${\cal{F}}$ is the complete surface shown. It is easily verified that, after identifying the outer edges, $\partial {\cal{F}}=0$. Further, there are 9 faces in ${\cal{F}}$ on which $\beta_\Psi$ evaluates to 1, hence $\beta_\Psi({\cal{F}}) = 1\mod 2$. Both facts together imply that $[\beta_\Psi]\neq 0$, and hence the arrangement is contextual. $\Box$\medskip We now turn to the general case of even $N$. If and only if $N$ is even, there is an even number of edges labeled by $X^{(3)}$ in the boundary of the disc (shown in Fig.~\ref{Comp1} for $N=4$). Hence $\partial {\cal{F}} = 0 \mod 2$ if and only if $N$ is even. We still need to establish $\beta_\Psi({\cal{F}}) = 1 \mod 2$. So let's count the number of faces $f$ with $\beta_\Psi(f)=1$. Such faces arise through the relations of Eq.~(\ref{StabRel2}), and there are $2N$ of them. Hence their contribution cancels mod 2. There is one more contribution to $\beta_\Psi({\cal{F}})$. 
For guidance, we look at Fig.~\ref{Comp1} and follow the red arrow in the counter-clockwise sense. The first observable we encounter that has non-trivial support only on qubit \#1 is $X_0^{(1)}$. The next such observable is $X_1^{(1)}$, then $X_2^{(1)}$, and so forth. Going around the disk, we increase the value of $k$ for such observables $X_k^{(1)}$ in increments of 1. Completing the circle, we arrive at $X_N^{(1)}$, which equals $-X_0^{(1)}$ by virtue of Eq.~(\ref{XkDef}). $X^{(1)}_0$ already is the label of the start-stop edge, and hence we obtain an additional factor of $-1$. (This is why, in Fig.~\ref{Comp1}, the last face before completing the circle is white, $\beta_\Psi(f_\text{last})=0$.) We have thus overcounted the contributions stemming from Eq.~(\ref{StabRel2}) by 1, which we now correct for. There are no other contributions, hence $\beta_\Psi({\cal{F}})=1$. Now assume the existence of a value assignment $s=(a_k,b_l,c_m)$, i.e., $\beta_\Psi=ds$. Then, $$ 1 = \beta_\Psi({\cal{F}}) = ds({\cal{F}}) = s(\partial_R {\cal{F}}) = s(0) = 0. $$ Contradiction. Thus, no non-contextual value assignment exists. $\Box$\medskip To conclude, let's compare the above proof for the iffy scenario with the original iffy proof. The ``iffiness'' is gone. The algebraic structure Eq.~(\ref{IP}) underlying the iffy proof is replaced by a simpler one, namely a relative chain complex ${\cal{C}}(N)$ with 2-cocycle $\beta_\Psi(N)$ living in it ($N$ even). This is exactly the same structure as in the parity-based contextuality proofs without temporal order. We achieved this reduction to the prior case by introducing additional observables in the chain complex, namely $\{\epsilon_k, \sigma^+_k, \sigma^-_k\}$ as defined in Eq.~(\ref{EpSig}), to represent the temporal ordering. We propose this as a blueprint for a general method of constructing cohomological contextuality proofs describing temporally ordered measurement-based quantum computations. 
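The recoupling relations Eq.~(\ref{esRel}) and the commutation relations Eq.~(\ref{CommRel}), which make the complex of Fig.~\ref{Comp1} well-composed, are plain operator identities and can be verified numerically. A sketch (Python with numpy; an illustration, not part of the original construction), with tensor factors ordered qubit \#3 $\otimes$ qubit \#1 as in Eq.~(\ref{EpSig}):

```python
import numpy as np

N = 4                                    # any even N
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Pp, Pm = (I2 + Y) / 2, (I2 - Y) / 2      # (I +/- Y)/2 on qubit 3

def Xk(k):                               # Eq. (XkDef), index understood mod 2N
    return np.cos(k * np.pi / N) * X + np.sin(k * np.pi / N) * Y

# Eq. (EpSig), with tensor ordering qubit 3 (x) qubit 1
def eps(k):   return np.kron(Pp, Xk(k - 1)) + np.kron(Pm, Xk(k + 1))
def sig_p(k): return np.kron(Pp, Xk(k - 1)) + np.kron(Pm, I2)
def sig_m(k): return np.kron(Pp, I2) + np.kron(Pm, Xk(k + 1))

for k in range(2 * N):
    assert np.allclose(eps(k), sig_p(k) @ sig_m(k))                     # (RelA)
    assert np.allclose(np.kron(I2, Xk(k)), sig_p(k + 1) @ sig_m(k - 1)) # (RelB)
    assert np.allclose(np.kron(-Y, I2), sig_p(k) @ sig_p(N + k))        # (RelC)
    assert np.allclose(np.kron(Y, I2), sig_m(k) @ sig_m(N + k))         # (RelD)
    assert np.allclose(sig_p(k) @ sig_m(k), sig_m(k) @ sig_p(k))  # Eq. (CommRel)
```

The identities hold because the $Y^{(3)}$-projectors are orthogonal and each $X_k$ squares to the identity; this is all the algebra the well-composedness of the complex relies on.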
\section{Conclusion}\label{Concl} In this paper, we have explained the contextuality--MBQC--cohomology triangle of Diagram~(\ref{Triangle}). Its upper corners, contextuality and measurement-based quantum computation, represent the phenomenology of interest; and the lower corner, cohomology, the mathematical method to describe it. The link between MBQC and contextuality is provided by Theorems~\ref{NLPCrel} and \ref{T1}, the link between contextuality and cohomology by Theorem~\ref{CPth2}, and the link between MBQC and cohomology by Theorem~\ref{ObeT} and Corollary~\ref{CohoCompCor}. Finally, in the center of the diagram sits the cocycle class $[\beta_\Psi]$, an element of the second cohomology group of the underlying chain complex. It contains the function computed in a given MBQC up to gauge equivalence, and the corresponding contextuality proof. A limitation of the cohomological framework established to date is that it only applies to temporally flat MBQCs, which form a small subclass. Here we made a first step towards describing MBQCs and contextuality proofs with temporal order in a cohomological fashion, by providing a cohomological contextuality proof in one concrete temporally ordered setting, the so-called ``iffy'' scenario \cite{Exa}. Extending the cohomological formalism to all MBQCs with proper temporal order is a main subject of future research on the MBQC-contextuality connection.\medskip \noindent {\em{Acknowledgments.}} The author thanks the Yukawa Institute for Theoretical Physics Kyoto (YITP) for their hospitality. Part of this work was performed there. This work is supported by NSERC. \section{Travel log}\label{TL} As I learned over the years, the 8th Conference on Quantum Physics and Logic, held in Nijmegen, the Netherlands, in November 2011, is remembered fondly by many participants, for all sorts of reasons. Here I'd like to describe my journey towards this conference, how I spiralled out of it, and my thoughts for the future. 
My journey began in Munich in 2003, the final year of my PhD. Hans Briegel and I had discovered the one-way quantum computer, a scheme of measurement-based quantum computation (as it is now known), in 2000, and had answered the obvious first question---universality. Quite naturally, the universality proof was based on a mapping to the circuit model. But, besides proving the point, the mapping seemed inadequate in many ways. For example, the temporal order among the measurements in MBQC was different and flatter than the mapping would suggest: all Clifford gates can be implemented in the first round of measurement, before all other gates, irrespective of where they are located in the simulated circuit. This and similar observations prompted us to look for a description of MBQC outside the realm of circuit simulation, and, in the first place, for the basic structures upon which such a description could be built. There was, and is, no manual for how to approach this question. We are left to our own intuition and judgement. A structural element we focussed on early was the correlations among measurement outcomes that yield the computational result. Individually, the measurement outcomes in MBQC are completely random, and meaningful information can only be gleaned from certain correlations among them. What made the analysis of these correlations simultaneously difficult and interesting was their non-stabilizerness; i.e., the fact that the correlator observables are in general not mere tensor products of Pauli operators $X$, $Y$, $Z$. Fault-tolerance seemed a path to make progress on these correlations. I figured that it could not be established for MBQC without understanding the structure of these correlations first. At the time, fault-tolerance with high error threshold was a problem with a price tag. In addition, once solved for MBQC, we could surely learn something from the solution---a Goldilocks problem. 
When I first put non-stabilizer quantum correlations on my map in early 2003, unbeknownst to me, someone in faraway Moscow was finding out something about them: Sergey Bravyi. The next year we would be office mates at Caltech. Having arrived at Caltech in October 2003, it took me about two years until, resting upon the scraps of two unsuccessful attempts, I established fault-tolerant universal MBQC with 3D cluster states \cite{FT1,FT2} (joint work with Jim Harrington and Kovid Goyal). Price tag fetched: the fault-tolerance threshold was high, and the whole construction elegant. And yet, one thing didn't completely fall into place---the learning-from-the-solution part. As noted above, I had stipulated that in order to establish fault-tolerance for MBQC, the structure of the non-stabilizer correlations would need to be understood first. It panned out differently. Those correlations did not need to be understood, and I hadn't understood them. This realization is one of three waypoints encountered at Caltech on my journey to Nijmegen. However, some correlations in MBQC---those which provide the error-correction capability for Clifford gates---could be understood very well. Namely, it turned out that those correlations have a cohomological underpinning. 3D cluster states can be described by a pair of three-dimensional chain complexes, related by Poincar\'e duality. The measurement outcomes live on the respective faces, and are thus represented by 2-cochains $s$. The cluster state stabilizer implies that, in the absence of errors, the measurement record satisfies the constraint $s(\partial v)=0$, for all volumes $v$, and hence $s$ is a 2-cocycle. Furthermore, the output of the MBQC is given by evaluations $s(f)$, for non-trivial 2-cycles $f$. Fault-tolerance and computation on 3D cluster states are thus a matter of cohomology. This finding is the second Caltech waypoint. 
In 2004, Sergey Bravyi and Alexei Kitaev developed ``magic state distillation'' \cite{BK}, an efficient and robust technique for implementing non-Clifford gates fault-tolerantly. It was eventually incorporated into fault-tolerant MBQC, but its main effect on me was a different one. Magic state distillation operates by exploiting non-Pauli quantum correlations, such as those found in Reed-Muller quantum codes. Save for the aspect of temporal order, these were precisely the quantum correlations I wanted to understand in the first place! A shortcut seemed to open: What about using quantum Reed-Muller code states as computational resource states in MBQC---could toy computations exhibiting non-trivial correlations be constructed this way? I was eager to try, and settled on the following conditions for Reed-Muller toy MBQCs: (i) The classical side processing relations Eq.~(\ref{CCR}) have to be obeyed; in particular, the input values form a vector space, as required by Eq.~(\ref{CCR_in}). (ii) The outcome is deterministic for every admissible value of the input, and (iii) the MBQC is non-Clifford. Further, the criterion for an ``interesting'' computation was that it computed a non-linear Boolean function. Quite a low bar, but justified as it exceeds what the classical side processing permits by itself. \begin{figure} \begin{center} \includegraphics[width=16cm]{Maple.pdf} \caption{\label{Maple}Numerical experiment on toy MBQCs using Reed-Muller quantum code states as computational resources. Shown is the output for the example based on a 31-qubit punctured Reed-Muller code. All tests worked out---the Boolean function computed was total and non-linear.} \end{center} \end{figure} Armed with those criteria, I got my laptop running. I started with the 15-qubit punctured Reed-Muller quantum code, and it didn't work. So I went on to the 31-qubit punctured Reed-Muller code, which, given the next came at 63, I knew was the largest I could handle. I held my breath. 
There was deterministic output on 2048 inputs---a power of 2, a good sign. The output was imbalanced, hence the computed function non-linear. A final check remained to be made: did the inputs form a vector space, as required by Eq.~(\ref{CCR_in})? That worked out too! I was over the moon. Sometime in the subsequent months, while finalizing the fault-tolerance work, it must have trickled in that being excited about such toy quantum computations required a very particular taste or preparation. They didn't achieve anything of real computational value. At any rate, the finding of these Reed-Muller toy MBQCs is my third waypoint at Caltech. In 2008, after I had moved to the University of British Columbia by way of the Perimeter Institute (PI), I heard Dan Browne speak at a PI workshop about similar toy MBQCs. In a work with Janet Anders \cite{AB}, they had considered MBQCs on a Greenberger-Horne-Zeilinger state, satisfying the above conditions (i) and (ii). Not enforcing condition (iii) (non-Cliffordness) allowed them to get by with 3 qubits rather than 31. But much, much more importantly, they managed to relate their 3-qubit MBQC to something known and valued in the world of physics: Mermin's star. Thus the MBQC--contextuality link saw the light of day. Learning of this result, I was ready to go to QPL 2011, although the conference was still 3\,1/2 years ahead.\medskip Finally, being at QPL 2011 in Nijmegen, what made my day was a talk by Samson Abramsky, Shane Mansfield and Rui Barbosa on ``The Cohomology of Non-Locality and Contextuality''. It had taken me quite a bit of effort to make it to the conference---teaching had to be rescheduled and so on. But I boarded the return plane in Amsterdam with a swagger: very, very worth the trouble. Although, honestly, in actual terms I had not learned all that much. I had understood precisely one slide of Shane Mansfield's presentation, and that was the title slide. 
What my journey through Caltech and PI had prepared me for was to see significance in the words ``contextuality'' and ``cohomology'' appearing side by side. I also somehow managed not to be completely bypassed by Mansfield's cohomological explanation of the GHZ scenario, at least in so far as I noted the argument's existence. Of course I tried to chase down Mansfield and Barbosa after their talk, but they seemed quite busy answering other calls.\medskip For me, the upshot of Nijmegen was that a cohomological theory of MBQC was in range, making sense of all the known toy examples and hopefully beyond. To get started, all I needed to do was to get to grips with the Abramsky--Mansfield--Barbosa paper \cite{A2}, which finally happened in the spring of 2012. Then it turned out that their cohomological explanation of the GHZ example did not quite provide the desired connection to MBQC. The latter required a cohomological interpretation of precisely Mermin's argument for the GHZ scenario, not merely a cohomological explanation of that scenario. And so, with my collaborators Cihan Okay, Stephen Bartlett, Sam Roberts and Emily Tyhurst, we set out to define our own cohomological framework. I do not need to describe the ensuing work here, since I already did in the previous sections.\medskip This brings me to my thoughts for the future. Regarding measurement-based quantum computation, the recent investigations into its structure---contextuality as we discussed it here, computational phases of matter \cite{screen}--\cite{Archi} and temporal order \cite{CompMod}, \cite{Gflow}, \cite{TO_sym}---have to date remained separate. And yet they share a common ingredient at their cores: symmetry. I'm confident that these investigations will be unified into a single framework in the coming years, and that something new will spring from it. To think about the future of our field more broadly, let's take a really long run-up and zoom right into the year 1842. 
Ada, Countess of Lovelace and assistant to the British computing pioneer Charles Babbage, had just invented the notion of the computer program. Also, at a time when everybody around her saw the future of computation in calculating trajectories of cannon balls, she had the fundamental insight that computers can process not only numbers but symbolic information of any kind---musical notes, images, text \cite{Innovators}. Her insight lives on today in digital radio and television, the internet, Maple, the Google search engine, and countless other inventions of the information age. But quantum computation extends beyond this line of thought. Quantum information is not ``symbolic''. Due to the irreversibility of quantum measurement, it cannot be perceived by looking at it. And with the limits of the reigning paradigm exposed, a new era of computation can begin---at least in the skunkworks. On the theory side of it, whether one is thinking about measurement-based quantum computation or the circuit model, essentially everything boils down to one thing: quantum algorithms. Towering achievements such as Shor's factoring algorithm not\-with\-standing, we seem to have difficulty inventing new quantum algorithms, and it is, at heart, a matter of intuition. \begin{center} What would Ada's insight be today? \end{center} \newpage
\section{Introduction} Oscillating time series are common in applications and are characterized by successive patterns of an upward trend followed by a downward trend. When oscillating time series do not exhibit apparent periodicity, such as those generated by chaotic systems, the problem of prediction lies basically in the time and magnitude of the peaks and troughs, as the results of three time series competitions showed \cite{Weigend94,Suykens98,ESTSP07}. Interestingly, the winning prediction schemes in the first two competitions were based on a local prediction model (with rather involved modifications of the standard nearest neighbor prediction approach). Local models stem from dynamical systems and chaos theory, are computationally efficient, and perform as well as more complicated black-box models, such as neural networks, in the prediction of irregular time series \cite{Kantz97}. For multi-step ahead prediction, typically a higher embedding dimension $M$ is required. For a fixed time delay $\tau$, the reconstructed points span a time window of length $\tau_w=(M-1)\tau$. This should be large enough to account for the mean orbital period of the underlying trajectory, i.e. $\tau_w$ should cover the period of an oscillation or a pattern of oscillations \cite{Kugiumtzis96}. Turning point prediction is of great practical interest in many applications, such as finance \cite{Bao08}. A recently developed approach attempts to model oscillating time series from low-dimensional systems with the so-called peak-to-peak dynamics \cite{Piccardi08}. This approach relies on simple one or two dimensional maps for the peaks. In \cite{Kugiumtzis08b}, it was shown that the prediction of turning points with local models is improved using state space reconstruction on the time series of turning points at a lower embedding dimension $m$. Here, we extend the state space reconstruction to include also the time series of the times of the turning points. 
This is the setting of local dynamic regression, where a local model on two time series (for magnitudes and times of turning points) is built in order to predict the magnitudes and times of turning points. \section{State Space Reconstruction of Turning Points} Suppose an oscillating time series of length $N$, $\{x(t)\}_{t=1}^N$, is observed at a sampling time $\tau_s$. A sample $y_i=x(t_i)$ is a turning point of $\{x(t)\}_{t=1}^N$ at time step $t_i$ if it is the minimum or maximum of all samples in $[t_i-p,t_i+p]$, for a scale parameter $p$. Scanning all samples of $\{x(t)\}_{t=1}^N$, the time series $\{y_i\}_{i=1}^n$ and $\{t_i\}_{i=1}^n$ of magnitudes and times of the alternating turning points, respectively, are derived. Instead of the times of the turning points we derive the durations of the upward and downward trends from the first differences $z_i=t_i-t_{i-1}$, giving the time series $\{z_i\}_{i=2}^n$. Thus two successive samples of $\{y_i\}_{i=2}^n$ together with the synchronous samples of $\{z_i\}_{i=2}^n$ describe one oscillation of $\{x(t)\}_{t=1}^N$, as shown in Fig.~\ref{fig:oscext}. \begin{figure}[h!] \hspace{7mm} \includegraphics[height=35mm]{hypxoscext.eps} \caption{Time series of turning point magnitudes and trend durations derived from an oscillating time series.} \label{fig:oscext} \end{figure} The bivariate time series $\{y_i,z_i\}_{i=2}^n$ compresses the information in $\{x(t)\}_{t=1}^N$ with some loss of information depending on the pattern of the samples between the turning points. In the limit where the upward and downward trends are linear there is no loss of information, as any sample $x(t_i-k)$ between two turning points $x(t_{i-1})$ and $x(t_i)$, where $k \in \{0,1,\ldots,t_i-t_{i-1}\}$, can be expressed in terms of the magnitudes and times of the two turning points as \[ x(t_i-k) = x(t_i)-k\frac{x(t_i)-x(t_{i-1})}{t_i-t_{i-1}}= y_i-k\frac{y_i-y_{i-1}}{t_i-t_{i-1}}. 
\] The state space reconstruction of $\{y_i\}_{i=2}^n$ can be considered as a specific state space reconstruction of $\{x(t)\}_{t=1}^N$ at time points $\{t_i\}_{i=1}^n$ with delays depending on each $t_i$. For an embedding dimension $m$, this reads \begin{equation} \begin{array}{rcl} \mathbf{y}_i & = & [y_i,y_{i-1},\ldots,y_{i-m+1}]^{\prime} \\ & = & [x(t_i),x(t_{i-1}),\ldots,x(t_{i-m+1})]^{\prime}, \end{array} \label{eq:embedextpoi} \end{equation} for $i=m,\ldots,n$ \cite{Kugiumtzis08b}. The advantage of this reconstruction is that it reduces the embedding dimension $M$ of the standard state space reconstruction of the type $\mathbf{x}(t)=[x(t),x(t-\tau_1),\ldots,x(t-\tau_{M-1})]^{\prime}$ to $m$. Usually, in prediction tasks the delays $\tau_j$ are small (and commonly a fixed delay $\tau$ is used), suggesting a rather large $M$ in order for the time window $\tau_w$ to cover the mean oscillation period. We extend the state space reconstruction in (\ref{eq:embedextpoi}) to account for the duration of trends. The analysis on the bivariate time series $\{y_i,z_i\}_{i=2}^n$ requires that both time series be standardized (subtracting the mean and dividing by the standard deviation for each time series). The state space reconstruction on the standardized $\{y_i,z_i\}_{i=2}^n$ reads \begin{equation} \mathbf{w}_i = [y_i,y_{i-1},\ldots,y_{i-m_y+1},z_i,z_{i-1},\ldots,z_{i-m_z+1}]^{\prime}. \label{eq:embedmagtim} \end{equation} We allow for different embedding dimensions $m_y$ and $m_z$ for the magnitudes of turning points and durations of trends, respectively. \section{Dynamic Regression Prediction of Turning Points} The prediction of $y_{i+T}$ and $z_{i+T}$ for a lead time $T$ can be posed independently, and this constitutes a problem of dynamic regression (also termed distributed lag modeling) \cite{Pankratz91}. In this setting we apply local average models (LAM) and local linear models (LLM) \cite{Kantz97}. 
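As an illustration of the above, the turning point detection and the bivariate reconstruction in (\ref{eq:embedmagtim}) can be sketched as follows (a minimal Python sketch; the handling of flat extrema and the guard against zero variance in the standardization are our own illustrative choices):

```python
import numpy as np

def turning_points(x, p=3):
    """Extract alternating turning points: x[i] is a turning point if it
    is the minimum or maximum of all samples in [i - p, i + p].
    Returns (y, t): magnitudes and time indices of the turning points."""
    y, t, last_kind = [], [], 0
    for i in range(p, len(x) - p):
        w = x[i - p:i + p + 1]
        kind = 1 if x[i] == w.max() else (-1 if x[i] == w.min() else 0)
        if kind != 0 and kind != last_kind:  # enforce peak/trough alternation
            y.append(x[i]); t.append(i); last_kind = kind
    return np.array(y), np.array(t)

def standardize(u):
    """Subtract the mean and divide by the standard deviation."""
    s = u.std()
    return (u - u.mean()) / (s if s > 0 else 1.0)

def reconstruct(y, z, m_y=2, m_z=1):
    """Bivariate delay vectors w_i from the standardized series of
    turning point magnitudes y and trend durations z."""
    y, z = standardize(y), standardize(z)
    m = max(m_y, m_z)
    return np.array([np.concatenate([y[i - m_y + 1:i + 1][::-1],
                                     z[i - m_z + 1:i + 1][::-1]])
                     for i in range(m - 1, len(y))])
```

With an oscillating input, `turning_points` returns roughly two points per oscillation, and `reconstruct` yields points of dimension $m_y+m_z$.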
The prediction of $y_{i+T}$ with LAM is given by the average of the $T$-step-ahead mappings of the $K$ nearest neighboring points $\{\mathbf{w}_{i(1)},\ldots,\mathbf{w}_{i(K)}\}$ of $\mathbf{w}_i$ \begin{equation} \hat{y}_{i+T} = \bar{y}_{i(K)+T} = \frac{1}{K}\sum_{j=1}^K y_{i(j)+T}. \label{eq:lammag} \end{equation} Assuming a linear autoregressive model restricted to the neighboring points of $\mathbf{w}_i$, the LLM prediction of $y_{i+T}$ is \begin{equation} \hat{y}_{i+T} = \bar{y}_{i(K)+T} + \mathbf{a}^{\prime} (\mathbf{w}_i - \bar{\mathbf{w}}_{i(K)}), \label{eq:llmmag} \end{equation} where $\bar{\mathbf{w}}_{i(K)}$ is the center point of the $K$ neighboring points of $\mathbf{w}_i$ and $\mathbf{a}$ is estimated from the minimization of the error function \begin{equation} \sum_{j=1}^K \left(y_{i(j)+T}-\bar{y}_{i(K)+T}-\mathbf{a}^{\prime} (\mathbf{w}_{i(j)} - \bar{\mathbf{w}}_{i(K)})\right)^2. \label{eq:errfun} \end{equation} We also consider regularization of the ordinary least squares solution of (\ref{eq:errfun}) making use of principal component regression (PCR) and projection on the first $q$ components \cite{Kugiumtzis98}. Note that $z_{i+T}$ is predicted in the same way, but, in line with the dynamic regression setting, the suitable embedding dimensions $m_y$ and $m_z$ may be different for the models (LAM or LLM) for $y_{i+T}$ and $z_{i+T}$. This approach differs from the approach in \cite{Kugiumtzis08b} in that the neighboring points are formed not only based on the turning point magnitudes but also on the durations of trends. Both LAM and LLM models are simple extensions of the respective local models used for univariate time series. Note that the direct scheme is used here, but the iterative prediction scheme can be applied in a similar way (in \cite{Kugiumtzis08b} it was found that the iterative scheme of LAM on $\{x(t)\}_{t=1}^N$ gave worse results). 
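The two local predictors in (\ref{eq:lammag})--(\ref{eq:errfun}) admit a compact sketch (again illustrative: brute-force neighbor search, and our own treatment of near-zero singular values in the PCR truncation):

```python
import numpy as np

def lam_predict(W, targets, w_query, K=5):
    """Local average model: average of the T-step-ahead images
    (targets[j] = y_{j+T}, aligned with the rows of W) of the K
    nearest neighbors of w_query."""
    idx = np.argsort(np.linalg.norm(W - w_query, axis=1))[:K]
    return targets[idx].mean()

def llm_predict(W, targets, w_query, K=10, q=None):
    """Local linear model; if q is given, the least-squares solution is
    truncated to the first q principal components (PCR)."""
    idx = np.argsort(np.linalg.norm(W - w_query, axis=1))[:K]
    Wk, yk = W[idx], targets[idx]
    Wc, yc = Wk - Wk.mean(axis=0), yk - yk.mean()
    if q is None:
        a, *_ = np.linalg.lstsq(Wc, yc, rcond=None)
    else:  # principal component regression: keep the first q components
        U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
        inv = np.array([1.0 / v if v > 1e-12 else 0.0 for v in s])
        inv[q:] = 0.0
        a = Vt.T @ (inv * (U.T @ yc))
    return yk.mean() + a @ (w_query - Wk.mean(axis=0))
```

On exactly linear data, `llm_predict` with all points as neighbors recovers the linear map, while `lam_predict` reduces to the neighborhood mean.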
In the following, we compare the prediction of turning points (magnitude and time) using LAM or LLM models estimated on all the samples of the oscillating time series $\{x(t)\}_{t=1}^N$ (denoted osc-LAM and osc-LLM) and on the bivariate time series of turning point magnitudes and trend durations $\{y_i,z_i\}_{i=2}^n$ (denoted tur-LAM and tur-LLM). \section{Turning Point Prediction on Simulated Systems} Before presenting the results of the predictions on selected simulated systems, we make some general observations regarding the implementation of the prediction schemes. For a fixed number of oscillations, $N$ is inversely proportional to $\tau_s$, so that a better time resolution of the measurements implies a longer oscillating time series $\{x(t)\}_{t=1}^N$, whereas the length of the turning point time series $\{y_i,z_i\}_{i=2}^n$ is unaffected (being $2(n-1)$). A small $\tau_s$ is actually welcome in the analysis based on turning points because it allows for more accurate detection of the turning points and especially the trend durations. For example, for a time series with an average oscillation period of 10 samples the range of $z_i$ is limited to integers from 1 to less than 10, and this range is insufficient to define neighborhoods (in the projected reconstructed state space of dimension $m_z$). Thus a smaller $\tau_s$ would render the information of $\{z_i\}_{i=2}^n$ more useful in the setting of dynamic regression. The parameter $p$ that defines the local window for the detection of turning points depends on $\tau_s$ and should be neither too large, so that turning points of short-lived oscillations can still be detected, nor too small, so that glitches of noisy oscillations are not registered as turning points. 
For the latter case, a small $p$ can still be used if the time series is filtered, and then the turning points are detected on the smoothed time series to give the turning point times, whereas the turning point magnitudes are taken from the original time series. In the simulations we use $p=3$ and filter only noisy data, with a filter order depending on the system and noise amplitude. When predicting turning points with osc-LAM or osc-LLM on $\{x(t)\}_{t=1}^N$, the current point is not a turning point $x(t_i)$ but the sample $x(t_i+p)$ at the time the turning point $x(t_i)$ can first be detected. Thus using a large $p$ or $\tau_s$ favors the prediction on $\{x(t)\}_{t=1}^N$ because then the current point is well into the next trend of the oscillation. The prediction schemes are illustrated in Fig.~\ref{fig:preoscext}. \begin{figure}[h!] \hspace{7mm} \includegraphics[height=35mm]{preextdisplay1.eps} \caption{Turning point prediction: the real time series segment (grey lines and circles), the sample and turning point predictions with osc-LAM (dark dots and crosses), and the turning point prediction with tur-LAM (dark asterisks). The current time of the turning point is set to 0 and the delays of the standard embedding on the samples are shown with open circles.} \label{fig:preoscext} \end{figure} Note that the predicted turning points with osc-LAM are detected among the multi-step sample predictions in the same way as the turning points are detected in the oscillating time series. We applied the LAM and LLM schemes on multiple realizations of known systems, such as the first and third variables of the R\"{o}ssler system \cite{Roessler76}, the first and fourth variables of the R\"{o}ssler hyper-chaos system \cite{Roessler79} (a segment of this is shown in Fig.~\ref{fig:preoscext}), and the Mackey-Glass delay differential equation for different delays $\Delta=17,30,100$ \cite{Mackey77}. 
The prediction measure is the normalized root mean square error (NRMSE) of the turning point prediction at the last quarter of each time series. In Fig.~\ref{fig:MCHyp}, the average NRMSE (with the standard deviation forming the error bars) is shown for 1000 realizations of the fourth variable of the R\"{o}ssler hyper-chaos system using the osc-LAM and tur-LAM models as well as the osc-LLM and tur-LLM models. \begin{figure}[h!] \centering \hbox{\includegraphics[height=33mm]{hypwextpredlammag.eps} \hspace{7mm} \includegraphics[height=33mm]{hypwextpredlamtim.eps}} \hbox{\includegraphics[height=33mm]{hypwextpredllmmag.eps} \hspace{7mm} \includegraphics[height=33mm]{hypwextpredllmtim.eps}} \caption{(a) The average NRMSE (with error bars for the standard deviation) of the prediction of the next turning point magnitude of the fourth variable of the R\"{o}ssler hyperchaos system ($\tau_s=0.1$, $N=2^{14}$) using osc-LAM and tur-LAM (for $m_z=0,1,2$ as given in the legend). The $\tau_w$ in the abscissa is defined as $\tau_w=(M-1)10$ for osc-LAM and $\tau_w=(m-1)33$ for tur-LAM, as the mean oscillation period is estimated to be 66. (b) As in (a) but for the prediction of trend duration. (c) and (d) are as (a) and (b), respectively, but using the LLM models instead, with PCR regularization parameter $q=3$.} \label{fig:MCHyp} \end{figure} The parameters of state space reconstruction for both $\{x(t)\}_{t=1}^N$ and $\{y_i,z_i\}_{i=2}^n$ were chosen so that $\tau_w$ covers up to three mean oscillation periods. For the latter, different combinations of $m_y$ and $m_z$ were considered, and in Fig.~\ref{fig:MCHyp} the tur-LAM and tur-LLM are shown for $m_z=0,1,2$ ($m_z=0$ denotes that the model is built only on the turning point magnitudes). In this example, there is little improvement in turning point prediction from including the trend durations. However, using either LAM or LLM, the prediction of turning points based on $\{y_i,z_i\}_{i=2}^n$ is superior to that based on the full time series. 
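For reference, we take NRMSE in the usual convention, normalizing the root mean square error by the standard deviation of the true values, so that a value of one matches the trivial mean predictor (a sketch; the normalization convention is our reading, as the text does not spell it out):

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Root mean square error normalized by the standard deviation of
    the true values: 1.0 corresponds to predicting by the mean,
    0.0 to perfect prediction."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2)) / y_true.std()
```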
For osc-LLM prediction of turning point magnitudes, NRMSE is larger than one (i.e., worse than the trivial mean predictor) and has a large variance (not shown in Fig.~\ref{fig:MCHyp}c). The linear mapping diverges for multi-step ahead predictions, and we conjecture that this is because temporally close points are selected in the set of the $K$ neighboring points. The large variance of NRMSE is observed with all LLM models for $m=3$ (equal to $q$) and this needs further investigation. The best predictions of LAM for turning point magnitudes and trend durations for both $\{x(t)\}_{t=1}^N$ and $\{y_i,z_i\}_{i=2}^n$ are given in Table~\ref{tab:MCHyp}. \begin{table}[h!] \centering \begin{tabular}{|c|c|cc|ccc|} \hline \multicolumn{2}{|c|}{} & \multicolumn{5}{c|}{Turning point magnitude} \\ \hline $T$ & $K$ & $M$ & NRMSE & $m_y$ & $m_z$ & NRMSE \\ \hline 1 & 1 & 9 & 0.604 & 3 & 1 & 0.505 \\ 1 & 5 & 9 & 0.621 & 3 & 1 & 0.518 \\ 1 & 10 & 9 & 0.662 & 2 & 1 & 0.558 \\ \hline 2 & 1 & 10 & 0.837 & 3 & 1 & 0.679 \\ 2 & 5 & 8 & 0.748 & 3 & 1 & 0.642 \\ 2 & 10 & 3 & 0.732 & 3 & 0 & 0.665 \\ \hline \multicolumn{2}{|c|}{} & \multicolumn{5}{c|}{Trend duration} \\ \hline 1 & 1 & 10 & 0.669 & 2 & 1 & 0.368 \\ 1 & 5 & 3 & 0.549 & 2 & 0 & 0.366 \\ 1 & 10 & 3 & 0.526 & 2 & 0 & 0.414 \\ \hline 2 & 1 & 10 & 1.016 & 3 & 1 & 0.817 \\ 2 & 5 & 10 & 0.989 & 2 & 0 & 0.772 \\ 2 & 10 & 9 & 1.018 & 2 & 0 & 0.782 \\ \hline \end{tabular} \caption{For the system in Fig.~\ref{fig:MCHyp} and for each combination of $T=1,2$ and $K=1,5,10$, the $M$ of best prediction with osc-LAM and the $m_y$ and $m_z$ of best prediction with tur-LAM, together with the respective NRMSE, are given, where $M=3,\ldots,10$ ($\tau=10$) and $m=2,\ldots,6$.} \label{tab:MCHyp} \end{table} The best turning point predictions (magnitude and time) are derived with tur-LAM at small embedding dimensions (up to 3 for $m_y$ and 0 or 1 for $m_z$). 
Closer investigation showed that for some prediction tasks osc-LAM predicted better than tur-LAM, whereas in other cases it formed a turning point far from the true turning point, so that overall the NRMSE was worse. This difference between osc-LAM and tur-LAM persists for different $N$ (we tested for $\log_2 N=12,13$) and, to a lesser extent, also under the addition of observational noise (we used noise amplitudes of $5\%$ and $10\%$). Moreover, the inclusion of the last trend duration ($m_z=1$) improved the prediction of turning point magnitudes and only marginally the prediction of trend durations. The same qualitative results were obtained from the same simulations on the other systems. For the highly complex Mackey-Glass system with $\Delta=100$ (it has a fractal dimension of about 7), the best results of tur-LAM were obtained for high embedding dimensions of both turning point magnitudes and trend durations, indicating that for this system the trend duration is important for predicting the next peaks or troughs. \section{Conclusion} We showed that the local prediction of turning points (magnitude and time) can be improved if the nearest neighbor model of average or linear mapping is built on reconstructed points from the bivariate time series of turning point magnitudes and trend durations. \section*{Acknowledgments} The work is part of the research project 03ED748 within the framework of the ``Reinforcement Programme of Human Research Manpower'' (PENED) and it is co-financed at 90\% jointly by the European Social Fund (75\%) and the Greek Ministry of Development (25\%) and at 10\% by Rikshospitalet, Norway.
\section{Introduction} Let $(X, \mathbf{Z})$ be a random vector in $\mathbb{R} \times \mathbb{R}^d = \mathbb{R}^{d+1}$, $d \ge 1$. We assume that $(X, \mathbf{Z})$ has a joint density on $\mathbb{R}^{d+1}$. If we want to predict $X$ using $\mathbf{Z}$ we usually formulate the following regression problem: \begin{eqnarray}\label{eq:RegMdl} X = m(\mathbf{Z}) + \epsilon, \end{eqnarray} where $m(\mathbf{z}) = \mathbb E(X|\mathbf{Z} = \mathbf{z})$ is the conditional mean of $X$ given $\mathbf{Z} = \mathbf{z}$ and $\epsilon := X - m(\mathbf{Z})$ is the {\it residual} (although $\epsilon$ is usually called the error, and its estimate the residual, for this paper we feel that the term residual is more appropriate). Typically we further assume that the residual $\epsilon$ is {\it independent} of $\mathbf{Z}$. However, intuitively, we are just trying to break the information in $(X,\mathbf{Z})$ into two parts: a part that contains all relevant information about $X$, and the ``residual'' (the left over) which does not have anything to do with the relationship between $X$ and $\mathbf{Z}$. In this paper we address the following question: given any random vector $(X, \mathbf{Z})$ how do we define the notion of a ``residual'' of $X$ on $\mathbf{Z}$ that matches with the above intuition? Thus, formally, we want to find a function $\varphi: \mathbb{R}^{d+1} \to \mathbb{R}$ such that the residual $\varphi(X, \mathbf{Z})$ satisfies the following two conditions: \begin{enumerate} \item[(C.1)] $\;\;\;\;\;$ the residual $\varphi(X, \mathbf{Z})$ is independent of the predictor $\mathbf{Z}$, i.e., \begin{eqnarray*}\label{eq:Indep} \varphi(X, \mathbf{Z}) \perp \! \! \! 
\perp \mathbf{Z}, \qquad \mbox{and } \end{eqnarray*} \item[(C.2)] $\;\;\;\;\;$ the information content of $(X, \mathbf{Z})$ is the same as that of $( \varphi(X, \mathbf{Z}), \mathbf{Z} )$, i.e., \begin{equation}\label{eq:Info} \sigma(X, \mathbf{Z}) = \sigma( \varphi(X, \mathbf{Z}), \mathbf{Z} ), \end{equation} where $\sigma(X, \mathbf{Z})$ denotes the $\sigma$-field generated by $X$ and $\mathbf{Z}$. We can also express~\eqref{eq:Info} as: there exists a measurable function $h : \mathbb{R}^{d+1} \to \mathbb{R} $ such that \begin{equation}\label{eq:GenX} X = h(\mathbf{Z}, \varphi(X, \mathbf{Z})); \end{equation} see e.g., Theorem 20.1 of~\cite{Bill95}. \end{enumerate} In this paper we propose a notion of a residual that satisfies the above two conditions, under any joint distribution of $X$ and $\mathbf{Z}$. We investigate the properties of this notion of residual in Section~\ref{sec:NPResid}. We show that this notion indeed reduces to the usual residual (error) in the multivariate normal regression model. Further, we use this notion of residual to develop a test for conditional independence. Suppose now that $(X,Y,\mathbf{Z})$ has a joint density on $\mathbb{R} \times \mathbb{R} \times \mathbb{R}^d = \mathbb{R}^{d+2}$. The assumption of conditional independence means that $X$ is independent of $Y$ given $\mathbf{Z}$, i.e., $X \perp \! \! \! \perp Y |\mathbf{Z}$. Conditional independence is an important concept in modeling causal relations (\cite{dawid79}, \cite{Pearl00}), in graphical models (\cite{Lauritzen96}; \cite{koller09}), in economic theory (see \cite{Chiappori00}), and in the literature of program evaluations (see \cite{Heckman97}), among other fields. Traditional methods for testing conditional independence are either restricted to the discrete case (\cite{Lauritzen96}; \cite{Agresti02}) or impose simplifying assumptions when the random variables are continuous (\cite{Lawrance76}). 
However, recently there have been a few nonparametric testing procedures proposed for testing conditional independence without assuming a functional form between the distributions of $X,Y$, and $\mathbf{Z}$. \cite{SuWhite07} consider testing conditional independence based on the difference between the conditional characteristic functions, while \cite{SuWhite08} use the Hellinger distance between the conditional densities of $X$ given $Y$ and $\mathbf{Z}$, and $X$ given $\mathbf{Z}$, to test for conditional independence. A test based on estimation of the maximal nonlinear conditional correlation is proposed in \cite{Huang10}. \cite{B11} develops a test based on the partial copula. \cite{KerCondInd07} propose a measure of conditional dependence of random variables, based on normalized cross-covariance operators on reproducing kernel Hilbert spaces; \cite{Z12} propose another kernel-based conditional independence test. \cite{poczos12} extend the concept of distance correlation (developed by \cite{SzekelyRizzoBakirov07} to measure dependence between two random variables or vectors) to characterize conditional dependence. \cite{SR14} investigate a method that is easy to compute and can capture non-linear dependencies but does not completely characterize conditional independence; see also~\cite{GW12} and the references therein. In Section~\ref{sec:TestCondInd} we use the notion of residual defined in Section~\ref{sec:NPResid} to show that conditional independence between $X$ and $Y$ given $\mathbf{Z}$ is equivalent to the mutual independence of three random vectors: the residuals of $X$ on $\mathbf{Z}$ and of $Y$ on $\mathbf{Z}$, and $\mathbf{Z}$. We reduce this test of mutual independence to a one-sample multivariate goodness-of-fit test. We further propose a modification of the easy-to-implement \textit{energy} statistic based method (\cite{SzekelyRizzo05}; see also \cite{SzekelyRizzo13}) to test the goodness-of-fit; see Section~\ref{sec:TestMutInd}. 
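For orientation, the basic two-sample energy statistic of \cite{SzekelyRizzo05}, on which the goodness-of-fit test builds, can be sketched as follows (a plain two-sample version for illustration only; the one-sample variant replaces one sample by draws from the hypothesized distribution, and critical values are obtained by resampling):

```python
import numpy as np

def energy_statistic(A, B):
    """Two-sample energy statistic: (nm/(n+m)) * (2 E|a-b| - E|a-a'| - E|b-b'|),
    where rows of A and B are observations; it is zero when the two samples
    coincide and grows when the underlying distributions differ."""
    n, m = len(A), len(B)
    dAB = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1).mean()
    dAA = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1).mean()
    dBB = np.linalg.norm(B[:, None, :] - B[None, :, :], axis=-1).mean()
    return n * m / (n + m) * (2.0 * dAB - dAA - dBB)
```

In practice the statistic is compared against its permutation (or bootstrap) null distribution rather than a closed-form critical value.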
In Section~\ref{sec:sub_test_cond} we use our notion of nonparametric residual and the proposed goodness-of-fit test to check the null hypothesis of conditional independence. Moreover, we describe a bootstrap scheme to approximate the critical value of this test. In Section \ref{sec:simul} we compare the finite sample performance of the procedure proposed in this paper with other available methods in the literature through a finite sample simulation study. We end with a brief discussion, Section~\ref{sec:Disc}, where we point to some open research problems and outline an idea, using the proposed residuals, to define (and test) a nonparametric notion of partial correlation. \section{A nonparametric notion of residual}\label{sec:NPResid} Conditions (C.1)--(C.2) do not necessarily lead to a unique choice for $\varphi$. To find a meaningful and unique function $\varphi$ that satisfies conditions (C.1)--(C.2) we impose the following natural restrictions on $\varphi$. We assume that \begin{enumerate} \item[(C.3)] $\;\;\;\;\;$ $x \mapsto \varphi(x,\mathbf{z})$ is strictly increasing in its support, for every fixed $\mathbf{z} \in \mathbb{R}^d$. \end{enumerate} Note that condition (C.3) is a slight strengthening of condition (C.2). Suppose that a function $\varphi$ satisfies conditions (C.1) and (C.3). Then any strictly monotone transformation of $\varphi(\cdot, \mathbf{z})$ would again satisfy (C.1) and (C.3). Thus, conditions (C.1) and (C.3) do not uniquely specify $\varphi$. To handle this identifiability issue, we replace condition (C.1) with (C.4), described below. First observe that, by condition (C.1), the conditional distribution of the random variable $\varphi(X, \mathbf{Z})$ given $\mathbf{Z} = \mathbf{z}$ does not depend on $\mathbf{z} $. We assume that \begin{enumerate} \item[(C.4)] $\;\;\;\;\;$ $\varphi(X, \mathbf{Z})| \mathbf{Z} = \mathbf{z}$ is uniformly distributed, for all $\mathbf{z} \in \mathbb{R}^d$. 
\end{enumerate} Condition (C.4) is again quite natural -- we usually assume that the residual has a fixed distribution, e.g., in regression we assume that the (standardized) residual is normally distributed with zero mean and unit variance. Note that condition (C.4) is slightly stronger than (C.1) and will help us uniquely identify $\varphi$. The following result shows that, indeed, under conditions (C.3)--(C.4), a unique $\varphi$ exists and gives its form. \begin{lemma}\label{lem:NPError} Let $F_{X|\mathbf{Z}}(\cdot| \mathbf{z})$ denote the conditional distribution function of $X|\mathbf{Z} = \mathbf{z}$. Under conditions (C.3) and (C.4), we have a unique choice of $\varphi(x, \mathbf{z})$, given by \begin{eqnarray*} \varphi(x, \mathbf{z}) = F_{X|\mathbf{Z}}(x| \mathbf{z}). \end{eqnarray*} Also, $h(\mathbf{z}, u)$ can be taken as \begin{eqnarray}\label{eq:InvCondDist} h(\mathbf{z}, u) =F^{-1}_{X|\mathbf{Z}}(u|\mathbf{z}). \end{eqnarray} \end{lemma} \begin{proof} Fix $\mathbf{z}$ in the support of $\mathbf{Z}$. Let $u \in (0, 1)$. Let us write $\varphi_\mathbf{z}(x) = \varphi(x, \mathbf{z})$. By condition (C.4), we have $\mathbb P[ \varphi(X, \mathbf{Z}) \le u | \mathbf{Z} = \mathbf{z} ] = u$. On the other hand, by (C.3), $$\mathbb P[ \varphi(X, \mathbf{Z}) \le u | \mathbf{Z} = \mathbf{z} ] = \mathbb P[ X \le \varphi_\mathbf{z}^{-1}(u) | \mathbf{Z} = \mathbf{z} ] = F_{X|\mathbf{Z}}( \varphi_\mathbf{z}^{-1}(u) | \mathbf{z}) .$$ Thus, we have $$ F_{X|\mathbf{Z}}( \varphi_\mathbf{z}^{-1}(u) | \mathbf{z}) = u, \ \ \mbox{ for all } u \in (0,1), $$ which is equivalent to $ \varphi_\mathbf{z}(x) = F_{X|\mathbf{Z}}(x| \mathbf{z})$. Let $h$ be as defined in~\eqref{eq:InvCondDist}. Then, $$ h(\mathbf{z}, \varphi(x, \mathbf{z})) = F^{-1}_{X|\mathbf{Z}}( \varphi(x, \mathbf{z}) |\mathbf{z}) = F^{-1}_{X|\mathbf{Z}}( F_{X|\mathbf{Z}}(x| \mathbf{z}) |\mathbf{z}) = x, $$ as required. 
\end{proof} Thus from the above lemma, we conclude that in the nonparametric setup, if we want to have a notion of a residual satisfying conditions (C.3)--(C.4), then the residual has to be $F_{X|\mathbf{Z}}(X| \mathbf{Z})$. The following remarks are in order now. \begin{remark} Let us first consider the example when $(X, \mathbf{Z})$ follows a multivariate Gaussian distribution, i.e., $$ \begin{pmatrix} X \\ \mathbf{Z}\end{pmatrix} \sim N \left ( \begin{pmatrix} \mu_1 \\ \bm{\mu}_2 \end{pmatrix}, \Sigma := \begin{pmatrix} \sigma_{11}& \bm{\sigma}_{12}^\top \\ \bm{\sigma}_{12} & \Sigma_{22} \end{pmatrix} \right), $$ where $\mu_1 \in \mathbb{R}$, $\bm{\mu}_2 \in \mathbb{R}^d$, $\Sigma$ is a $(d+1) \times (d+1)$ positive definite matrix with $\sigma_{11} > 0$, $\bm{\sigma}_{12} \in \mathbb{R}^{d}$ and $\Sigma_{22} \in \mathbb{R}^{d \times d}$. Then the conditional distribution of $X$ given $\mathbf{Z} = \mathbf{z}$ is $N(\mu_1 + \bm{\sigma}_{12}^\top \Sigma_{22}^{-1} (\mathbf{z} - \bm{\mu}_2), \sigma_{11} - \bm{\sigma}_{12}^\top \Sigma_{22}^{-1} \bm{\sigma}_{12} )$. Therefore, we have the following representation in the form of~\eqref{eq:RegMdl}: $$ X = \mu_1 + \bm{\sigma}_{12}^\top \Sigma_{22}^{-1} (\mathbf{Z} - \bm{\mu}_2) + \Big( X - \mu_1 - \bm{\sigma}_{12}^\top \Sigma_{22}^{-1} (\mathbf{Z} - \bm{\mu}_2) \Big) $$ where the usual residual is $X - \mu_1 - \bm{\sigma}_{12}^\top \Sigma_{22}^{-1} (\mathbf{Z} - \bm{\mu}_2)$, which is known to be independent of $\mathbf{Z}$. In this case, using Lemma~\ref{lem:NPError}, we get $$ \varphi(X, \mathbf{Z}) = \Phi \left(\frac{ X - \mu_1 - \bm{\sigma}_{12}^\top \Sigma_{22}^{-1} (\mathbf{Z} - \bm{\mu}_2) }{\sqrt{\sigma_{11} - \bm{\sigma}_{12}^\top \Sigma_{22}^{-1} \bm{\sigma}_{12}} } \right),$$ where $\Phi(\cdot)$ is the distribution function of the standard normal distribution.
Thus $\varphi(X, \mathbf{Z})$ is just a fixed strictly increasing transformation of the usual residual, and the two notions of residual essentially coincide. \\ \end{remark} \begin{remark} The above notion of residual does not extend so easily to the case of discrete random variables. Conditions (C.1) and (C.2) are equivalent to the fact that $\sigma(X, \mathbf{Z})$ factorizes into two sub $\sigma$-fields as $\sigma(X, \mathbf{Z}) = \sigma( \varphi(X, \mathbf{Z}) ) \otimes \sigma(\mathbf{Z} )$. This may not always be possible, as can be seen from the following simple example. Let $(X, Z)$ take values in $\{0, 1\}^2$ such that $\mathbb P[X = i, Z =j] >0$ for all $i, j \in \{0, 1\}$. Then it can be shown that such a factorization exists if and only if $X$ and $Z$ are independent, in which case $\varphi(X, Z) = X$. \\ \end{remark} \begin{remark} Lemma~\ref{lem:NPError} also gives a way to generate $X$, using $\mathbf{Z}$ and the residual. We can first generate $\mathbf{Z}$, following its marginal distribution, and an independent random variable $U \sim \mathcal{U}(0,1)$ (here $\mathcal{U} (0,1)$ denotes the Uniform distribution on $(0,1)$) which will act as the residual. Then~\eqref{eq:GenX}, where $h$ is defined in~\eqref{eq:InvCondDist}, shows that we can generate $X = F^{-1}_{X|\mathbf{Z}}(U|\mathbf{Z})$. \\ \end{remark} In practice, we need to estimate the residual $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ from observed data, which can be done both parametrically and nonparametrically. If we have a parametric model for $F_{X|\mathbf{Z}}(\cdot|\cdot)$, we can estimate the parameters using, e.g., maximum likelihood. If we do not want to assume any structure on $F_{X|\mathbf{Z}}(\cdot|\cdot)$, we can use any nonparametric smoothing method, e.g., standard kernel methods, for estimation; see~\cite{B11} for such an implementation. We will discuss the estimation of the residuals in more detail in Section~\ref{sec:NPEst}.
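To make Lemma~\ref{lem:NPError} concrete, the following minimal numerical sketch (our own toy model: $Z \sim N(0,1)$ and $X|Z=z \sim N(0.8z,\, 0.6)$, with arbitrary coefficients) computes the residual $\varphi(X,Z) = F_{X|Z}(X|Z)$ in closed form and checks that it is approximately $\mathcal{U}(0,1)$ and uncorrelated with $Z$:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 20000

# Toy Gaussian model (our choice): Z ~ N(0,1), X | Z=z ~ N(0.8 z, 0.6)
Z = rng.standard_normal(n)
X = 0.8 * Z + np.sqrt(0.6) * rng.standard_normal(n)

# Residual of the lemma: phi(X, Z) = F_{X|Z}(X | Z)
U = norm.cdf((X - 0.8 * Z) / np.sqrt(0.6))

# U should be approximately Uniform(0,1) and uncorrelated with Z
print(round(U.mean(), 2), round(U.var(), 2),
      round(np.corrcoef(U, Z)[0, 1], 2))
```

In this model the map $h(\mathbf{z},u)$ of~\eqref{eq:InvCondDist} is $h(z,u) = 0.8z + \sqrt{0.6}\,\Phi^{-1}(u)$, so running the same computation backwards regenerates $X$ from $(Z, U)$.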
\section{Conditional independence}\label{sec:TestCondInd} Suppose now that $(X,Y,\mathbf{Z})$ has a joint density on $\mathbb{R} \times \mathbb{R} \times \mathbb{R}^d = \mathbb{R}^{d+2}$. In this section we state a simple result that reduces testing for the conditional independence hypothesis $H_0: X \perp \! \! \! \perp Y |\mathbf{Z}$ to a problem of testing mutual independence between three random variables/vectors that involve our notion of residual. We also briefly describe a procedure to test the mutual independence of the three random variables/vectors (see Section~\ref{sec:TestMutInd}). We start with the statement of the crucial lemma. \begin{lemma}\label{lem:CondInd} Suppose that $(X,Y,\mathbf{Z})$ has a continuous joint density on $\mathbb{R}^{d+2}$. Then, $X \perp \! \! \! \perp Y |\mathbf{Z}$ if and only if $F_{X|\mathbf{Z}}(X|\mathbf{Z}), F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ and $\mathbf{Z}$ are mutually independent. \end{lemma} \begin{proof} Let us make the following change of variable $$ (X,Y,\mathbf{Z}) \mapsto (U,V,\mathbf{Z}) := (F_{X|\mathbf{Z}}(X), F_{Y|\mathbf{Z}}(Y), \mathbf{Z}).$$ The joint density of $(U,V,\mathbf{Z})$ can be expressed as \begin{equation}\label{eq:trans} f_{(U,V,\mathbf{Z})}(u,v,\mathbf{z}) = \frac{f(x,y,\mathbf{z})}{f_{X|\mathbf{Z}=\mathbf{z}}(x) f_{Y|\mathbf{Z}=\mathbf{z}}(y)} = \frac{f_{(X, Y)|\mathbf{Z}=\mathbf{z}}(x, y)f_\mathbf{Z}(\mathbf{z})}{f_{X|\mathbf{Z}=\mathbf{z}}(x) f_{Y|\mathbf{Z}=\mathbf{z}}(y)}, \end{equation} where $x = F_{X|\mathbf{Z}=\mathbf{z}}^{-1}(u)$, and $y = F_{Y|\mathbf{Z}=\mathbf{z}}^{-1}(v)$. Note that as the Jacobian matrix is upper-triangular, the determinant is the product of the diagonal entries of the matrix, namely, $f_{X|\mathbf{Z} = \mathbf{z}}(x)$, $f_{Y|\mathbf{Z}=\mathbf{z}}(y)$ and $1$. If $X \perp \! \! \! 
\perp Y |\mathbf{Z}$ then $f_{(U,V,\mathbf{Z})}(u,v,\mathbf{z})$ reduces to just $f_\mathbf{Z}(\mathbf{z})$, for $u, v \in (0,1)$, from the definition of conditional independence, which shows that $U,V,\mathbf{Z}$ are independent (note that it is easy to show that $U,V$ are marginally $\mathcal{U}(0,1)$, the Uniform distribution on $(0,1)$). Conversely, if $U,V,\mathbf{Z}$ are independent, then $f_{(U,V,\mathbf{Z})}(u,v,\mathbf{z}) = f_\mathbf{Z}(\mathbf{z})$ for $u, v \in (0,1)$, which from (\ref{eq:trans}) easily shows that $X \perp \! \! \! \perp Y |\mathbf{Z}$. \end{proof} $\vspace{0.000in}$ \begin{remark}\label{rem:berg} Note that the joint distribution of $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ is known as the \textit{partial copula}; see e.g.,~\cite{B11}, where a test for conditional independence was developed by testing the independence of $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$. However, as the following example illustrates, the independence of $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ is not enough to guarantee that $X \perp \! \! \! \perp Y |\mathbf{Z}$. Let $W_1, W_2, W_3$ be i.i.d.~$\mathcal{U}(0,1)$ random variables. Let $X = W_1+W_3$, $Y =W_2$ and $Z = \mathrm{mod}(W_1 + W_2, 1)$, where `$\mathrm{mod}$' stands for the modulo (sometimes called modulus) operation that finds the remainder when $W_1 + W_2$ is divided by 1. Clearly, the random vector $(X, Y, Z)$ has a density on $[0,2] \times [0,1]^2$. Note that $Z$ is independent of $W_i$, for $i = 1,2$. Hence, $X, Y$ and $Z$ are pairwise independent. Thus, $F_{X|\mathbf{Z}}(X) = F_X(X)$ and $F_{Y|\mathbf{Z}}(Y) = F_Y(Y)$, where $F_X$ and $F_Y$ are the marginal distribution functions of $X$ and $Y$, respectively. From the independence of $X$ and $Y$, $F_X(X)$ and $F_Y(Y)$ are independent.
On the other hand, the value of $W_1$ is clearly determined by $Y$ and $Z$, i.e., $W_1 = Z-Y$ if $Y \le Z$ and $W_1 = Z-Y+1$ if $Y>Z$. Consequently, $X$ and $Y$ are not conditionally independent given $Z$. To see this, note that for every $z \in (0,1)$, $$\mathbb E[ X| Y, Z=z ] = \left\{ \begin{array}{ll} z-Y + 0.5 & \mbox{if $Y \le z$}\\ z - Y +1 + 0.5& \mbox{if $Y > z$,}\end{array} \right.$$ which obviously depends on $Y$. In Remark~\ref{Bergsma2} we illustrate this behavior with a finite sample simulation study. \\ \end{remark} \begin{remark} We can extend the above result to the case when $X$ and $Y$ are random vectors in $\mathbb{R}^p$ and $\mathbb{R}^q$, respectively. In that case we define the conditional multivariate distribution transform $F_{X|\mathbf{Z}}$ by successively conditioning on the co-ordinate random variables, i.e., if $X = (X_1,X_2)$ then we can define $F_{X|\mathbf{Z}}$ as $(F_{X_2|X_1,\mathbf{Z}}, F_{X_1|\mathbf{Z}})$. With this definition, Lemma~\ref{lem:CondInd} still holds. \\ \end{remark} To use Lemma~\ref{lem:CondInd} to test the conditional independence between $X$ and $Y$ given $\mathbf{Z}$, we need to first estimate the residuals $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ from observed data, which can be done by any nonparametric smoothing procedure, e.g., standard kernel methods (see Section~\ref{sec:NPEst}). Then, any procedure for testing the mutual independence of $F_{X|\mathbf{Z}}(X|\mathbf{Z}), F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ and $\mathbf{Z}$ can be used. In this paper we advocate the use of the {\it energy} statistic (see \cite{RizzoSzekely10}), described briefly in the next subsection, to test the mutual independence of three or more random variables/vectors. 
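The counterexample of Remark~\ref{rem:berg} is easy to confirm by simulation. In the rough sketch below (the slice half-width of $0.02$ around $z = 0.5$ is an arbitrary choice of ours), $X$, $Y$ and $Z$ are pairwise uncorrelated, yet $X$ and $Y$ become strongly correlated once $Z$ is (approximately) fixed:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200000
W1, W2, W3 = rng.random((3, n))
X, Y, Z = W1 + W3, W2, np.mod(W1 + W2, 1.0)

# Pairwise, the three variables look independent...
r_xy = np.corrcoef(X, Y)[0, 1]
r_xz = np.corrcoef(X, Z)[0, 1]

# ...but conditioning on a thin slice of Z reveals the X-Y dependence
sl = np.abs(Z - 0.5) < 0.02
r_xy_given_z = np.corrcoef(X[sl], Y[sl])[0, 1]
print(round(r_xy, 2), round(r_xz, 2), round(r_xy_given_z, 2))
```

The conditional correlation in the slice is close to $1/(2\sqrt{2}) \approx 0.35$, the value a direct calculation gives at $z = 0.5$, while the unconditional correlations are near zero.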
\subsection{Testing mutual independence of three or more random vectors with known marginals}\label{sec:TestMutInd} Testing independence of two random variables (or vectors) has received much recent attention in the statistical literature; see e.g.,~\cite{SzekelyRizzoBakirov07}, \cite{KerIndepALT05}, and the references therein. However, testing the mutual independence of three or more random variables is more complicated and we could not find any easily implementable method in the statistical literature. In this sub-section, we test the mutual independence of three or more random variables (vectors) with known marginals by converting the problem to a one-sample goodness-of-fit test for multivariate normality. In the following we briefly describe our procedure in the general setup. Suppose that we have $r \ge 3$ continuous random variables (or vectors) $V_1, \ldots, V_r$ and we want to test their mutual independence. We assume that we know the marginal distributions of $V_1, \ldots, V_r$; without loss of generality, we can assume that the $V_i$'s are standard Gaussian random variables (vectors). We write $T:= (V_1, V_2, \ldots, V_r) \in \mathbb{R}^k$, where $k$ is the total dimension of $(V_1, \ldots, V_r)$, and introduce $T_{\text{ind}} := (V_1^*, V_2^*, \ldots, V_r^*)$, where $V_j^*$ is an independent copy of $V_j$, $j=1,2, \ldots, r$, and in $T_{\text{ind}}$ the coordinates, $V_1^*, V_2^*, \ldots, V_r^*$, are independent. To test the mutual independence of $V_1, V_2, \ldots, V_r$ all we need to test now is whether $T$ and $T_{\text{ind}}$ are identically distributed.
If we observe a sample from $T$, we can test for the equality of distributions of $T$ and $T_{\text{ind}}$ through a one-sample goodness-of-fit test for the standard multivariate normal distribution, i.e., $$H_0: T \sim N(\textbf{0},\textbf{I}_{k\times k}),$$ as $T_{\text{ind}}\sim N(\textbf{0},\textbf{I}_{k\times k})$, where $\textbf{I}_{k \times k}$ is the identity matrix of order $k$ and $\textbf{0} := (0, \ldots, 0) \in \mathbb{R}^{k}.$ In this paper we consider the following {\it energy} statistic (see~\cite{SzekelyRizzo05} and \cite{RizzoSzekely10}) \begin{equation}\label{eq:EStat} \Lambda(T) = 2 \mathbb E \|T - T_{\text{ind}}\| - \mathbb E \|T - T'\| - \mathbb E \|T_{\text{ind}} - T_{\text{ind}}'\|, \end{equation} where $T'$ and $T_{\text{ind}}'$ are i.i.d.~copies of $T$ and $T_{\text{ind}}$, respectively ($\|\cdot\|$ denotes the Euclidean norm). Note that $\Lambda(T)$ is always nonnegative, and equals 0 if and only if $T$ and $T_{\text{ind}}$ are identically distributed, i.e., if and only if $V_1, V_2, \ldots, V_r$ are mutually independent (see Corollary 1 of~\cite{SzekelyRizzo05}). Suppose now that we observe $n$ i.i.d.~samples $T_1, \ldots, T_n$ of $T$. The (scaled) sample version of the energy statistic for testing the goodness-of-fit hypothesis is \begin{equation}\label{eq:teststat} \mathcal{E}_n(T_1,\ldots, T_n) :=2 \sum_{i=1}^n \mathbb E \|T_i-T_\text{ind}\| - \frac{1}{n} \sum_{i=1}^n\sum_{j=1}^{n} \|T_i-T_j\|- n \mathbb E \|T_\text{ind}-T^\prime_\text{ind}\|. \end{equation} Note that the first expectation in the above display is with respect to $T_\text{ind}$. Under the null hypothesis of mutual independence, the test statistic $\mathcal{E}_n(T_1,\ldots, T_n)$ has a limiting distribution, as $n \rightarrow \infty,$ while under the alternative hypothesis $\mathcal{E}_n(T_1,\ldots, T_n)$ tends to infinity; see Section 4 of \cite{SzekelyRizzo05} and Section 8 of \cite{SzekelyRizzo13} for detailed discussions.
Thus any test that rejects the null for large values of $\mathcal{E}_n(T_1,\ldots, T_n)$ is consistent against general alternatives. As $T_\text{ind}$ and $T^\prime_\text{ind}$ are i.i.d.~$N(\textbf{0}, \textbf{I}_{k\times k})$ random variables, the statistic $\mathcal{E}_n(T_1,\ldots, T_n)$ is easy to compute: $$\mathbb E\|T_\text{ind}-T_\text{ind}^\prime\| =\sqrt{2}\mathbb E \|T_\text{ind}\|= 2 \frac{\Gamma \big(\frac{d+3}{2}\big)}{\Gamma \big( \frac{d+2}{2}\big)}$$ and for any $a\in \mathbb{R}^{d+2}$, we have $$\mathbb E\|a-T_\text{ind}\| =\frac{\sqrt{2}\Gamma \big(\frac{d+3}{2}\big)}{\Gamma \big( \frac{d+2}{2}\big)} + \sqrt{\frac{2}{\pi}} \sum_{k=0}^\infty \frac{(-1)^k}{k!\, 2^k} \frac{|a|^{2k+2}}{(2k+1)(2k+2)} \frac{\Gamma \big( \frac{d+3}{2}\big)\Gamma \big( k+\frac{3}{2}\big)}{\Gamma \big( k+\frac{d}{2}+2\big)}.$$ The expression for $\mathbb E\|a-T_\text{ind}\|$ follows from the discussion in \cite{Zacks81} (see page 55). See the source code ``energy.c'' in the \textit{energy} package of the R language (\cite{Rlang}) for a fast implementation of this; also see \cite{SzekelyRizzo13}. \subsection{Testing conditional independence} \label{sec:sub_test_cond} In this sub-section we use Lemma \ref{lem:CondInd} and the test for mutual independence proposed in the previous sub-section (Section~\ref{sec:TestMutInd}) to test for the conditional independence of $X$ and $Y$ given $\mathbf{Z}.$ We start with a simple lemma. \begin{lemma} \label{lem:CondIndeqiv} Suppose that $(X,Y,\mathbf{Z})$ has a continuous joint density on $\mathbb{R}^{d+2}$. Then $X \perp \! \! \!
\perp Y |\mathbf{Z}$ if and only if $$W:=(F_{X|\mathbf{Z}}(X|\mathbf{Z}), F_{Y|\mathbf{Z}}(Y|\mathbf{Z}), F_\mathbf{Z}(\mathbf{Z})) \sim \mathcal{U}([0,1]^{d+2}),$$ where $F_\mathbf{Z}(\mathbf{z}) = \left(F_{Z_d|Z_{d-1},\ldots, Z_1}(z_d|z_{d-1},\ldots, z_1), \ldots, F_{Z_2|Z_1}(z_2|z_1), F_{Z_1}(z_1)\right),$ $\mathbf{Z} =$ \\$ (Z_1,\ldots, Z_d),$ $\textbf{z}=(z_1,\ldots, z_d),$ and $\mathcal{U}([0,1]^{d+2})$ denotes the Uniform distribution on $[0,1]^{d+2}$. \end{lemma} \begin{proof} Note that by Lemma~\ref{lem:CondInd}, $X \perp \! \! \! \perp Y |\mathbf{Z}$ if and only if $F_{X|\mathbf{Z}}(X|\mathbf{Z}),$ $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ and $\mathbf{Z}$ are mutually independent. Furthermore, note that $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ are i.i.d.~$\mathcal{U}(0,1)$ random variables. Thus the proof of the lemma will be complete if we show that $F_\mathbf{Z}(\mathbf{Z}) \sim \mathcal{U}([0,1]^d)$. As each of $F_{Z_d|Z_{d-1},\ldots, Z_1}(Z_d|Z_{d-1},\ldots, Z_1), \ldots, F_{Z_2|Z_1}(Z_2|Z_1),$ and $F_{Z_1}(Z_1)$ is a $\mathcal{U}(0,1)$ random variable, it is enough to show that they are mutually independent. For simplicity of notation, we will only prove the independence of $F_{Z_2|Z_1}(Z_2|Z_1)$ and $F_{Z_1}(Z_1)$; the independence of the other terms can be proved similarly. Note that \begin{align*} \mathbb P(F_{Z_2|Z_1}(Z_2|Z_1) \le z_2 | F_{Z_1}(Z_1)=z_1) ={}& \mathbb P(F_{Z_2|Z_1}(Z_2|Z_1) \le z_2 | Z_1=F_{Z_1}^{-1}(z_1))\\ ={}&\mathbb P\Big(Z_2 \le F_{Z_2|Z_1}^{-1}\big(z_2| F_{Z_1}^{-1}(z_1)\big) \Big| Z_1=F_{Z_1}^{-1}(z_1)\Big)\\ ={}&F_{Z_2|Z_1} \Big(F_{Z_2|Z_1}^{-1}\big(z_2| F_{Z_1}^{-1}(z_1)\big) |F_{Z_1}^{-1}(z_1)\Big)\\ ={}&z_2. \end{align*} As the conditional distribution of $F_{Z_2|Z_1}(Z_2|Z_1)$ given $ F_{Z_1}(Z_1) = z_1$ does not depend on $z_1$, we have that $F_{Z_2|Z_1}(Z_2|Z_1)$ and $F_{Z_1}(Z_1)$ are independent. \end{proof} Let us now assume $X \perp \! \! \!
\perp Y |\mathbf{Z}$ and define \begin{equation*} \label{eq:T_def} W:=\left(F_{X|\mathbf{Z}}(X|\mathbf{Z}), F_{Y|\mathbf{Z}}(Y|\mathbf{Z}), F_{Z_d|\mathbf{Z}_{-d}}(Z_d|\mathbf{Z}_{-d}), \ldots, F_{Z_2|Z_1}(Z_2|Z_1), F_{Z_1}(Z_1)\right). \end{equation*} By Lemma~\ref{lem:CondIndeqiv}, we have \begin{equation*} \label{eq:eq_dist} W\stackrel{\mathcal D}{=} (U_1, \dots, U_{d+2}), \end{equation*} where $U_1, U_2, \ldots, U_{d+2}$ are i.i.d.~$\mathcal{U}(0,1)$ random variables. An equivalent formulation is \begin{equation} \label{eq:mvn} H_0: T:= \Phi^{-1} (W) \stackrel{\mathcal D}{=} N(\textbf{0}, \textbf{I}_{(d+2) \times (d+2)}), \end{equation} where $\Phi$ is the distribution function corresponding to the standard Gaussian random variable, and for any $\textbf{a} \in \mathbb{R}^{d+2}$, $\Phi^{-1} (\textbf{a}) := (\Phi^{-1}(a_1), \ldots, \Phi^{-1}(a_{d+2})).$ We observe i.i.d.~data $\{(X_i,Y_i,\mathbf{Z}_i): i = 1,\ldots, n\}$ from the joint distribution of $(X,Y,\mathbf{Z})$ and we are interested in testing $X \perp \! \! \! \perp Y |\mathbf{Z}$. Suppose first that the distribution functions $F_{X| \mathbf{Z}}(\cdot|\cdot), F_{Y| \mathbf{Z}}(\cdot|\cdot),$ and $F_{\mathbf{Z}}(\cdot)$ are known. Then we have an i.i.d.~sample $T_1,\ldots, T_n$ from $T$, where \begin{equation} \label{eq:data_ver} T_i:=\Phi^{-1}(F_{X|\mathbf{Z}}(X_i|\mathbf{Z}_i), F_{Y|\mathbf{Z}}(Y_i|\mathbf{Z}_i), F_{\mathbf{Z}}(\mathbf{Z}_i)). \end{equation} Now we can use the test statistic \eqref{eq:teststat} to test the hypothesis of conditional independence.
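When the conditional distribution functions are known, the statistic in~\eqref{eq:teststat} is straightforward to evaluate. The sketch below approximates the two Gaussian expectations by Monte Carlo averages rather than the closed-form series (the sample sizes, the dimension, and the mean shift used as an alternative are arbitrary choices of ours):

```python
import numpy as np

def energy_stat(T, rng, m=20000):
    """Scaled energy statistic E_n for H0: rows of T are i.i.d. N(0, I_k).
    The Gaussian expectations are approximated by Monte Carlo instead of
    the closed-form series expansion."""
    n, k = T.shape
    G = rng.standard_normal((m, k))    # Monte Carlo draws of T_ind
    G2 = rng.standard_normal((m, k))   # independent copy T_ind'
    # 2 * sum_i E||T_i - T_ind||
    term1 = 2.0 * sum(np.linalg.norm(G - t, axis=1).mean() for t in T)
    # (1/n) * sum_{i,j} ||T_i - T_j||
    pair = np.linalg.norm(T[:, None, :] - T[None, :, :], axis=2)
    term2 = pair.sum() / n
    # n * E||T_ind - T_ind'||
    term3 = n * np.linalg.norm(G - G2, axis=1).mean()
    return term1 - term2 - term3

rng = np.random.default_rng(2)
T0 = rng.standard_normal((200, 3))    # a sample consistent with H0
T1 = T0 + 1.0                         # a mean-shifted alternative
print(energy_stat(T0, rng) < energy_stat(T1, rng))
```

Under $H_0$ the statistic stays $O(1)$, while under the shifted alternative it grows linearly in $n$, in line with the consistency discussion above.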
As the true conditional distribution functions $F_{X| \mathbf{Z}}, F_{Y| \mathbf{Z}},$ and $F_{\mathbf{Z}}$ are unknown, we can replace them by their estimates $\widehat F_{X|\mathbf{Z}}, \widehat F_{Y|\mathbf{Z}},$ and $\widehat F_{\mathbf{Z}}$, respectively, where $\widehat F_\mathbf{Z} (\mathbf{z}) =\left( \widehat F_{Z_d|Z_{d-1},\ldots, Z_1}(z_d|z_{d-1},\ldots, z_1), \ldots,\widehat F_{Z_2|Z_1}(z_2|z_1), \widehat F_{Z_1}(z_1)\right)$; see Section \ref{sec:NPEst} for more details on how to compute these estimates. Let us now define \begin{equation} \label{eq:data_hat_ver} \widehat T_i:=\Phi^{-1}(\widehat F_{X|\mathbf{Z}}(X_i|\mathbf{Z}_i), \widehat F_{Y|\mathbf{Z}}(Y_i|\mathbf{Z}_i), \widehat F_{\mathbf{Z}}(\mathbf{Z}_i)), \end{equation} for $i = 1, 2,\ldots, n.$ We will use \begin{equation} \label{eq:en_hat} \widehat{\mathcal{E}_n}:= \mathcal{E}_n(\hat{T}_1, \ldots, \hat{T}_n) \end{equation} to test the hypothesis of conditional independence. \subsubsection{Approximating the asymptotic distribution through bootstrap} The limiting behavior of $\mathcal{E}_n$ is not very useful in computing the critical value of the test statistic $\widehat{\mathcal{E}_n}$ proposed in the previous sub-section. In a related but slightly different problem studied in~\cite{sen14}, it was shown that the analogous versions of $\mathcal{E}_n$ and $\widehat{\mathcal{E}_n}$ have very different limiting distributions. In independence testing problems it is quite standard and natural to approximate the critical value of the test, under $H_0$, by using a permutation test; see e.g.,~\cite{SzekelyRizzo09}, \cite{gretton07}. However, in our problem, as we use $\hat{T}_i$ instead of $T_i$, the permutation test is not valid; see~\cite{sen14}. In this sub-section, we propose a bootstrap procedure to approximate the distribution of $\widehat{\mathcal{E}_n}$, under the null hypothesis of conditional independence. We now describe the bootstrap procedure.
Let $\mathbb{P}_{n,\mathbf{Z}}$ be the empirical distribution of $\mathbf{Z}_1, \ldots,\mathbf{Z}_n$. \begin{enumerate}[label=\bfseries Step \arabic*:] \item Generate an i.i.d.~sample $\{U_{i,1}^*, U_{i,2}^*, \mathbf{Z}^*_{n,i}\}_{ 1 \le i \le n}$ of size $n$ from the measure $\mathcal{U}(0,1) \times \mathcal{U}(0,1) \times \mathbb{P}_{n,\mathbf{Z}}$; recall that $\mathcal{U}(0,1)$ denotes the Uniform distribution on $(0,1).$ \item The bootstrap sample is then $\{X^*_{n,i}, Y^*_{n,i}, \mathbf{Z}^*_{n,i}\}_{ 1 \le i \le n},$ where \begin{equation} X^*_{n,i} := \widehat{F}^{-1}_{X|\mathbf{Z}}(U_{i,1}^*|\mathbf{Z}_{n,i}^*) \qquad \text{and} \qquad Y^*_{n,i} := \widehat{F}^{-1}_{Y|\mathbf{Z}}(U_{i,2}^*|\mathbf{Z}_{n,i}^*). \end{equation} \item Use the bootstrap sample $\{X^*_{n,i}, Y^*_{n,i}, \mathbf{Z}^*_{n,i}\}_{ 1 \le i \le n}$ to get smooth estimators $\widehat F^*_{X|\mathbf{Z}}, \widehat F^*_{Y|\mathbf{Z}},$ and $\widehat F^*_{\mathbf{Z}}$ of $F_{X| \mathbf{Z}}, F_{Y| \mathbf{Z}},$ and $F_{\mathbf{Z}}$; see Section \ref{sec:NPEst} for a discussion on smooth estimation of the conditional distribution functions. \item Compute the bootstrap test statistic $\mathcal{E}^*_n:= \mathcal{E}_n(\widehat{T}^*_1, \ldots, \widehat{T}^*_n) $ where {\small \begin{equation} \widehat{T}^*_i= \Phi^{-1} \big(\widehat F^*_{X|\mathbf{Z}}(X^*_{n,i}|\mathbf{Z}_{n,i}^*), \widehat F^*_{Y|\mathbf{Z}}(Y^*_{n,i}|\mathbf{Z}^*_{n,i}), \widehat F^*_{\mathbf{Z}}(\mathbf{Z}^*_{n,i})\big). \end{equation} } \end{enumerate} We can now approximate the distribution of $\widehat{\mathcal{E}_n}$ by the conditional distribution of $\mathcal{E}_n^*$ given the data $\{X_i, Y_i,\mathbf{Z}_i\}_{ 1 \le i \le n}.$ In Section \ref{sec:simul} we study the finite sample performance of the above procedure through a simulation study and illustrate that our procedure indeed yields a valid test for conditional independence.
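The resampling in Steps 1--2 is the heart of the procedure: under the bootstrap measure, $X^*$ and $Y^*$ are conditionally independent given $\mathbf{Z}^*$ by construction. The sketch below illustrates this on a toy Gaussian model of our own choosing, plugging in the \emph{true} inverse conditional distribution functions where the algorithm would use the kernel estimates $\widehat{F}^{-1}_{X|\mathbf{Z}}$ (all model parameters are arbitrary):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 5000

# Toy model with conditional dependence (a shared latent W, sigma_W > 0)
Z = rng.standard_normal(n)
W = 0.5 * rng.standard_normal(n)
X = Z + W + 0.3 * rng.standard_normal(n)
Y = Z + W + 0.3 * rng.standard_normal(n)

# Steps 1-2: resample Z from its empirical law, draw independent
# uniforms, and push them through the (here: true, in practice:
# estimated) inverse conditional CDFs; X | Z=z ~ N(z, 0.5^2 + 0.3^2).
s = np.sqrt(0.5 ** 2 + 0.3 ** 2)
Zb = rng.choice(Z, size=n, replace=True)
U1, U2 = rng.random(n), rng.random(n)
Xb = Zb + s * norm.ppf(U1)
Yb = Zb + s * norm.ppf(U2)

# Real data: residuals correlated; bootstrap world: uncorrelated
print(round(np.corrcoef(X - Z, Y - Z)[0, 1], 2),
      round(np.corrcoef(Xb - Zb, Yb - Zb)[0, 1], 2))
```

The first correlation is far from zero (the null is false in the data), while the second is near zero: the bootstrap sample obeys $H_0$, which is exactly what makes it usable for approximating the null distribution of $\widehat{\mathcal{E}_n}$.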
\begin{remark} In steps 1 and 2 above, we generate the bootstrap sample from the approximated joint distribution of $(X, Y, \mathbf{Z})$ under the null hypothesis of conditional independence. In steps 3 and 4 we mimic the evaluation of the test statistic $\widehat{\mathcal{E}_n}$ using the bootstrap sample. This is an example of a model-based bootstrap procedure;~\cite{sen14} prove the consistency of a similar bootstrap procedure in a related problem. As the sample size increases, the approximated joint distribution of $(X, Y, \mathbf{Z})$ (under $H_0$) would converge to the truth and the bootstrap distribution would replicate the distribution of $\widehat{\mathcal{E}_n}$. \end{remark} \subsection{Nonparametric estimation of the residuals}\label{sec:NPEst} In this sub-section we discuss procedures to nonparametrically estimate $ F_{X| \mathbf{Z}}, F_{Y| \mathbf{Z}},$ and $F_{\mathbf{Z}}$ given data $\{X_i, Y_i,\mathbf{Z}_i\}_{ 1 \le i \le n}.$ The nonparametric estimation of the conditional distribution functions involves smoothing. In the following we briefly describe the standard approach to estimating the conditional distribution functions using kernel smoothing techniques (also see~\cite{LeeLeePark06}, \cite{YuJones98}, and \cite{HallWolffYao99}). For notational simplicity, we restrict to the case $d=1$, i.e., $\mathbf{Z}$ is a real-valued random variable. Given an i.i.d.~sample $\{(X_i,Z_i): i = 1,\ldots, n\}$ from $f_{X,Z}$, the joint density of $(X,Z)$, we can use the following kernel density estimator of $f_{X,Z}$: $$ \widehat f_n(x,z) = \frac{1}{n h_{1,n} h_{2,n}} \sum_{i=1}^n k \left( \frac{x - X_i}{h_{1,n}} \right) k \left( \frac{z - Z_i}{h_{2,n}} \right)$$ where $k$ is a symmetric probability density on $\mathbb{R}$ (e.g., the standard normal density function), and $h_{i,n}, i=1,2$, are the smoothing bandwidths.
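A minimal implementation of the conditional distribution estimator that this kernel machinery yields (Gaussian kernel, $d=1$; the bandwidth exponents follow the rates quoted at the end of this section, and the toy model $X|Z=z \sim N(z,1)$ is our own choice) might look like:

```python
import numpy as np
from scipy.stats import norm

def cond_cdf_hat(x, z, X, Z, h1, h2):
    """Kernel estimate of F_{X|Z}(x|z): a weighted average of
    K((x - X_i)/h1), with weights w_i(z) built from the Gaussian
    kernel k in the Z direction (weights sum to one for every z)."""
    w = norm.pdf((z - Z) / h2)
    w = w / w.sum()
    return float(np.sum(w * norm.cdf((x - X) / h1)))

rng = np.random.default_rng(4)
n = 4000
Z = rng.standard_normal(n)
X = Z + rng.standard_normal(n)        # toy model: X | Z=z ~ N(z, 1)

# rate-based bandwidths for d = 1: h1 ~ n^{-2/5}, h2 ~ n^{-1/5}
est = cond_cdf_hat(1.0, 1.0, X, Z, h1=n ** (-2 / 5), h2=n ** (-1 / 5))
print(round(est, 2))                  # the true value is Phi(0) = 0.5
```

In practice the bandwidth constants would be chosen by cross-validation rather than taken from the asymptotic rates alone.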
It can be shown that if $n h_{1,n} h_{2,n} \rightarrow \infty$ and $\max\{h_{1,n}, h_{2,n}\} \rightarrow 0$, as $n \rightarrow \infty,$ then $\widehat f_n(x,z) \stackrel{P}{\rightarrow} f_{X,Z}(x,z)$. In fact, the theoretical properties of the above kernel density estimator are very well studied; see e.g., \cite{FG96} and \cite{EM05} and the references therein. For notational convenience, we will write $h_{i,n}$ as $h_i$, $i=1,2$. The conditional density of $X$ given $Z$ can then be estimated by $$\widehat f_{X|Z}(x|z) = \frac{\widehat f_n(x,z)}{\widehat f_Z(z)} = \frac{\frac{1}{n h_{1} h_{2}} \sum_{i=1}^n k \left( \frac{x - X_i}{h_{1}} \right) k \left( \frac{z - Z_i}{h_{2}} \right)}{\frac{1}{n h_{2}} \sum_{i=1}^n k \left( \frac{z - Z_i}{h_{2}} \right)}.$$ Thus the conditional distribution function of $X$ given $Z$ can be estimated as $$ \widehat F_{X|Z}(x|z) = \frac{\int_{-\infty}^x \widehat f_n(t,z) \; dt}{\widehat f_Z(z)} = \frac{\frac{1}{n h_{2}} \sum_{i=1}^n K \left( \frac{x - X_i}{h_{1}} \right) k \left( \frac{z - Z_i}{h_{2}} \right)}{\frac{1}{n h_{2}} \sum_{i=1}^n k \left( \frac{z - Z_i}{h_{2}} \right)} = \sum_{i=1}^n w_i(z) K \left( \frac{x - X_i}{h_{1}} \right) $$ where $K$ is the distribution function corresponding to $k$ (i.e., $K(u) = \int_{-\infty}^u k(v) \; dv$) and $w_i(z) = \frac{\frac{1}{n h_{2}} k \left( \frac{z - Z_i}{h_{2}} \right)}{\frac{1}{n h_{2}} \sum_{j=1}^n k \left( \frac{z - Z_j}{h_{2}} \right)}$ are weights that sum to one for every $z$. The least-squares cross-validation method proposed in \cite{hall2004cross} can be used to find the optimal choices for $h_1$ and $h_2.$ For general $d$, the optimal bandwidths satisfy $h_1 \sim n^{-2/(d+4)}$ and $h_2 \sim n^{-1/(d+4)};$ see Section 6.2 of \cite{LiRacine07} and \cite{lilira13} for a thorough discussion. \begin{remark}\label{Bergsma2} Now we provide empirical evidence for the failure of the test proposed in~\cite{B11} in the example discussed in Remark~\ref{rem:berg}.
We plot (see Figure~\ref{fig:berg}) the histogram of $p$-values obtained from the proposed test (see Section~\ref{sec:sub_test_cond}) and that of the $p$-values obtained from testing the independence of $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ (using their estimates $\widehat F_{X|\mathbf{Z}}(\cdot|\cdot)$ and $\widehat F_{Y|\mathbf{Z}}(\cdot|\cdot)$). We use the distance covariance test statistic (see \citet{SzekelyRizzoBakirov07}) to test for the independence of $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$. Figure~\ref{fig:berg} demonstrates that a test for the independence of $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ can fail to capture the conditional dependence between $X$ and $Y$ given $\mathbf{Z}$. \end{remark} \begin{figure}[h!] \includegraphics[scale=.8]{berg_1000_cv_all.pdf} \caption{Histograms of $p$-values (estimated using 1000 bootstrap samples) over $1000$ independent replications. Here, for $i=1,\ldots,200$, $\{X_i,Y_i,Z_i\}$ are i.i.d.~samples from the example discussed in Remark \ref{rem:berg}.} \label{fig:berg} \end{figure} \section{Simulation}\label{sec:simul} We now investigate the finite sample performance of the testing procedure developed in this paper through a simulation study. We also compare the performance of our testing procedure to those proposed in \cite{KerCondInd07} and \cite{Z12}. We denote the testing procedure proposed in \cite{KerCondInd07} by $CI_{perm}$ and use $KCI$ to denote the kernel based conditional independence test proposed in \cite{Z12}. To illustrate and compare the performance of different testing procedures, we consider the following sampling scenario borrowed from \cite{Z12}.
Let us assume that $X$ and $Y$ are only dependent on $Z_1$ (the first coordinate of $\mathbf{Z}$) and that all other conditioning variables are independent of $X,Y,$ and $Z_1.$ We assume that $\mathbf{Z} \sim N_d(\textbf{0}, \sigma^2_z \textbf{I}_{d\times d})$, $X:= W+ Z_1+ \epsilon,$ and $Y:= W+ Z_1+ \epsilon^\prime,$ where $\epsilon, \epsilon^\prime,$ and $W$ are three independent mean zero Gaussian random variables. Moreover, we assume that $\epsilon, \epsilon^\prime,$ and $W$ are independent of $\mathbf{Z},$ $var(\epsilon)=var(\epsilon^\prime)=\sigma^2_E,$ and $var(W)= \sigma^2_W,$ where for any real random variable $V$, $var(V)$ denotes its variance. Note that $X \perp \! \! \! \perp Y |\mathbf{Z}$ if and only if $\sigma_W=0.$ In our finite sample simulations we fixed $\sigma_E= 0.3 $ and $\sigma_z=0.2$. We generate $500$ i.i.d.~samples $\{X_i, Y_i, \mathbf{Z}_i\}_{1 \le i \le 500}$ for each of $d=1, 3,$ and $5$ and for different values of $\sigma_W.$ For each such sample, we use 1000 bootstrap replicates to estimate the $p$-value of the proposed test procedure. We have used the ``\texttt{np}" (see \cite{np}) package in R (\cite{R}) to estimate the conditional distribution functions with the tuning parameters chosen using least-squares cross validation (see Section~\ref{sec:NPEst}). In Figure \ref{fig:power_curve} we plot the power (estimated using 500 independent experiments) of the testing procedure proposed in Section \ref{sec:sub_test_cond} along with those of $CI_{perm}$ and $KCI$ as $\sigma_W$ increases from $0$ to $0.25$, for dimensions $1, 3,$ and $5$. We fix the significance level at $0.05$. \begin{figure}[h!] 
\includegraphics[width=.65\paperwidth]{Final_fig_2.pdf} \caption{The power (at significance level $0.05$) of the three testing procedures for sample size $n=500$ as the dimension $d$ and $\sigma_W$ increase.} \label{fig:power_curve} \end{figure} The distribution of the $KCI$ test statistic under the null hypothesis of conditional independence is estimated with a Monte Carlo procedure suggested in \cite{Z12}. To implement the $CI_{perm}$ and the $KCI$ testing procedures, we have used the MATLAB source codes provided in \cite{Z12}; the source code can be found at \url{http://people.tuebingen.mpg.de/kzhang/KCI-test.zip}. The R code used to implement our procedure is available at \url{http://stat.columbia.edu/~rohit/research.html}. Observe that for $CI_{perm}$, the probability of type I error is much greater than the significance level for $d=3$. Furthermore, for $d=5$, it fails to detect the alternative for all values of $\sigma_W$. The performance of $CI_{perm}$ is sensitive to the dimension of the conditioning variable. The probability of type I error for both the proposed and the $KCI$ testing procedures is around the specified significance level. Moreover, the powers of $KCI$ and the proposed test increase to $1$ as $\sigma_W$ increases. Overall, we think that for this simulation scenario the $KCI$ method has the best performance. \section{Discussion}\label{sec:Disc} Given a random vector $(X, \mathbf{Z})$ in $\mathbb{R} \times \mathbb{R}^d = \mathbb{R}^{d+1}$ we have defined the notion of a nonparametric residual of $X$ on $\mathbf{Z}$ as $F_{X|\mathbf{Z}}(X|\mathbf{Z})$, which is always independent of the predictor $\mathbf{Z}$. We have studied the properties of the nonparametric residual and showed that it indeed reduces to the usual residual in a multivariate normal regression model. However, nonparametric estimation of $F_{X|\mathbf{Z}}(\cdot|\cdot)$ requires smoothing techniques, and hence suffers from the curse of dimensionality.
A natural way of mitigating this curse of dimensionality could be to use dimension reduction techniques in estimating the residual $F_{X|\mathbf{Z}}(X|\mathbf{Z})$. Another alternative would be to use a parametric model for the conditional distribution function. Suppose now that $(X,Y,\mathbf{Z})$ has a joint density on $\mathbb{R} \times \mathbb{R} \times \mathbb{R}^d = \mathbb{R}^{d+2}$. We have used this notion of residual to show that the conditional independence between $X$ and $Y$, given $\mathbf{Z}$, is equivalent to the mutual independence of the residuals $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ and the predictor $\mathbf{Z}$. We have used this result to propose a test for conditional independence, based on the energy statistic. We can also use these residuals to come up with a nonparametric notion of partial correlation. The partial correlation of $X$ and $Y$ measures the degree of association between $X$ and $Y$, removing the effect of $\mathbf{Z}$. In the nonparametric setting, this reduces to measuring the dependence between the residuals $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$. We can use distance covariance (\cite{SzekelyRizzoBakirov07}), or any other measure of dependence, for this purpose. We can also test for zero partial correlation by testing for the independence of the residuals $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$. \newline \noindent {\bf Acknowledgements:} The second author would like to thank Arnab Sen for many helpful discussions, and for his help in writing parts of the paper. He would also like to thank Probal Chaudhuri for motivating the problem. The research of the second and third authors is supported by the National Science Foundation. \bibliographystyle{elsarticle-harv}
\section{INTRODUCTION} Most of the star formation in the Galaxy occurs in clusters associated with at least one high-mass star \citep{Adams2010ARA&A}. An understanding of star formation on global galactic and extra-galactic scales therefore entails the study of the early evolution of high-mass stars and how they impact their molecular environment. The physical characterization of the places where high-mass stars form is an important observational achievement of the far-infrared (far-IR) and submillimeter astronomy of the last decades. High-mass stars form in massive molecular clumps of sizes $\lesssim 1$ pc, column densities $\gtrsim0.1$ gr~cm$^{-2}$, densities $n_{\rm H_2}\gtrsim10^4$ cm$^{-3}$, and masses $>200$ \mbox{\,$M_{\odot}$}\ \citep{Tan2014prpl}, with temperatures depending on their evolutionary stage. Determining the evolutionary sequence of these massive molecular clumps and their properties is currently an active field of study. We can define a schematic timeline that comprises four major observational stages \citep{Jackson2013PASA,Chambers2009ApJ}: \begin{enumerate} \item{Quiescent and prestellar sources, that is, molecular clumps in the earliest phase with no embedded high-mass young stellar objects (HMYSOs). Some of these clumps are called infrared dark clumps (IRDCs) because they appear in absorption against the bright mid-IR background associated with the Galactic plane.} \item{Protostellar clumps are those associated with signs of star formation such as outflows and HMYSOs, but where H{\rmfamily\scshape{ii}}\ regions have not developed. We expect the embedded young high-mass stars to accrete at a high rate \citep[$\ge10^{-4}$ \mbox{\,$M_{\odot}$}\ yr$^{-1}$, e.g.,][]{McKee2003ApJ,Keto2006ApJ,Tan2014prpl} and to reach the main sequence in typically $\lesssim10^5$ yr \citep{Behrend2001AA,Molinari2008AA}. 
Based on the Kelvin-Helmholtz contraction timescale, high-mass young stars will likely be on the main sequence while still accreting.} \item{Molecular clumps associated with compact H{\rmfamily\scshape{ii}}\ regions. The young high-mass stars in these clumps have probably finished their main accretion phase and have reached their final masses. Strong UV radiation from the newly born high-mass stars starts to ionize the surrounding cocoon.} \item{Clumps in a late evolutionary stage, where the ionizing radiation, the feedback from winds and outflows, and the expansion of the ionized gas finally disrupt the molecular envelope, marking the transition to an observational stage characterized by an extended classical H{\rmfamily\scshape{ii}}\ region and a photodissociation region (PDR).} \end{enumerate} Studying the dust continuum emission in the mid-IR, far-IR, and submillimeter range is one of the most reliable ways to determine the evolutionary phase of molecular clumps. Dust emission in the submillimeter is usually optically thin and traces both cold and warm environments. By combining large infrared Galactic plane surveys like Hi-GAL \citep[Herschel Infrared Galactic plane survey,][]{Molinari2010PASP}, ATLASGAL \citep[APEX Telescope Large Area Survey of the Galaxy,][]{Schuller2009AA}, GLIMPSE \citep[Galactic Legacy Infrared Midplane Survey Extraordinaire,][]{Benjamin2003PASP}, and MIPSGAL \citep{Carey2008AAS}, we can determine the evolutionary state and calculate basic physical parameters of a large sample of molecular clumps. With this prospect in mind, the Millimeter Astronomy Legacy Team 90 GHz (MALT90) survey\footnote{{Survey website: http://malt90.bu.edu/.
The molecular line data can be accessed from http://atoa.atnf.csiro.au/MALT90.}} (Rathborne et al.\ in preparation; \citealp{Jackson2013PASA,Foster2011ApJS,Foster2013PASA}) has studied 3246\ molecular clumps identified using SExtractor \citep{Bertin1996AA} from the ATLASGAL data at 870\um\ \citep{Contreras2013AA,Urquhart2014AA}. MALT90 has mapped these clumps in 15 molecular lines and one hydrogen recombination line located in the 90 GHz atmospheric band using the 22 m Mopra telescope. The objective is to determine the main physical and chemical characteristics of a statistically relevant sample of high-mass molecular clumps over a wide range of evolutionary stages. Approximately 80\% of the MALT90 sources exhibit mid-IR characteristics that allow us to classify them into one of the four preceding evolutionary stages: Quiescent, Protostellar, H{\rmfamily\scshape{ii}}\ region, or PDR. This classification of the sources was done by visual inspection of Spitzer images at 3.6, 4.5, 8.0, and 24 \micron, as described in \citet{Hoq2013ApJ} \citep[see also][]{Foster2011ApJS}. By combining the MALT90 dataset with far-IR continuum and molecular line data, we can quantitatively characterize the clumps' temperatures, column densities, volume densities, distances, masses, physical sizes, kinematics, luminosities, and chemistry. In this paper, we focus on the dust continuum emission of the MALT90 molecular clump sample. We model the far-IR and submillimeter emission to derive physical parameters which, to a first approximation, are distance independent, such as the dust temperature and the column density. Forthcoming publications by Whitaker et al. (in preparation) and Contreras et al.\ (in preparation) will present kinematic distances and analyze the clumps' masses, sizes, volume densities, and luminosities.
Preliminary analysis of the molecular emission indicates that the relative abundances, line opacities (Rathborne et al., in preparation, see also \citealp{Hoq2013ApJ}), and infall signatures (Jackson et al., in preparation) are consistent with the mid-IR classification acting as a proxy for clump evolution. The MALT90 data have already been used in several other studies of high-mass star formation, either based on a small ($<10$) set of relevant sources \citep{Rathborne2014ApJ,Stephens2015ApJ,Walker2015MNRAS,Deharveng2015AA} or using a statistical approach on a larger sample \citep[$>30$,][]{Hoq2013ApJ,Miettinen2014AA,Yu2015MNRAS,He2015MNRAS}. In these studies with large samples \citep[with the exception of][]{Hoq2013ApJ}, the dust temperature and column density of the clumps have not been simultaneously derived from a model of the far-infrared spectral energy distribution (SED). This paper aims to complement future high-mass star formation studies based on the MALT90 sample by supplying robust measurements of these physical properties and their uncertainties. Section \ref{sec-obs} of this work presents the main characteristics of the data set and its reduction. Section \ref{sec-ana} describes the methods used for analyzing the data, the modeling of the dust emission, and uncertainty and degeneracy estimations. Section \ref{sec-dis} discusses possible interpretations of the statistical results of the dust parameters and, especially, how the clump evolutionary stages correlate with the dust-derived physical parameters. Section \ref{sec-sum} summarizes the main results of this work. {\section{OBSERVATIONS}\label{sec-obs}} The analysis presented in this work is based on data taken with the \emph{Herschel Space Observatory} \citep[HSO,][]{Pilbratt2010AA} and with the APEX telescope \citep{Gusten2006AA}.
{\subsection{Processing of Public HSO Hi-GAL Data}\label{sec-higal}} We use public HSO data from the Herschel Infrared Galactic Plane Survey key-project \citep[Hi-GAL,][]{Molinari2010PASP} observed between January of 2010 and November of 2012 and obtained from the Herschel Science Archive. The observations were made using the parallel, fast-scanning mode, in which five wavebands were observed simultaneously using the PACS \citep{Poglitsch2010AA} and the SPIRE \citep{Griffin2010AA} bolometer arrays. The data version obtained from the Herschel Science Archive corresponds to the Standard Product Generation version 9.2.0. Columns 1 to 4 of Table \ref{tab-ins} list the instrument, the representative wavelength in microns of each observed band, the angular resolution represented by the FWHM of the point spread function \citep{Olmi2013AA}, and the estimated point source sensitivity ($\sigma_p$), respectively. The point source sensitivity, assuming Gaussian beams, is given by $\sigma_{\rm rms}\Omega_b\left(\Omega_b/2\Omega_{\rm pix}\right)^{-1/2}$, where $\sigma_{\rm rms}$ is the rms variation in intensity units, $\Omega_b$ is the beam solid angle, and $\Omega_{\rm pix}$ is the pixel solid angle.\footnote{Theoretical justification and more detailed calculations for this formula can be found at the Green Bank Telescope technical notes: http://www.gb.nrao.edu/$\mathtt{\sim}$bmason/pubs/m2mapspeed.pdf (B. Mason, private communication)} The fifth column gives the noise level of the convolved and re-gridded maps (see Sections \ref{sec-noi} and \ref{sec-conv}) and the sixth column lists the observatory where the data were taken. Throughout this work, we will refer to the data related to a specific waveband by their representative wavelength in micrometers. The position uncertainty of the Hi-GAL maps is $\sim$3\arcsec.
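As a worked instance of the sensitivity formula above (a sketch only; it takes $\sigma_{\rm rms}$ in per-beam units so that the $\Omega_b$ factor of the intensity-unit expression is already absorbed), the ATLASGAL values quoted in Section \ref{sec-laboca} ($\sigma_{\rm rms}=60$ mJy beam$^{-1}$, $\Omega_b/\Omega_{\rm pix}=11.6$) give $\sigma_p\approx25$ mJy:

```python
def point_source_sensitivity(sigma_rms, beam_to_pix):
    """Point-source sensitivity sigma_p = sigma_rms * (Omega_b / (2 Omega_pix))^(-1/2).

    sigma_rms   : map rms in per-beam units (e.g. mJy/beam), absorbing the
                  Omega_b factor of the intensity-unit formula in the text;
    beam_to_pix : Omega_b / Omega_pix, the number of pixels per beam solid angle.
    """
    return sigma_rms * (beam_to_pix / 2.0) ** -0.5

# ATLASGAL example: 60 mJy/beam rms with 11.6 pixels per beam -> ~25 mJy
sigma_p = point_source_sensitivity(60.0, 11.6)
```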
The generation of maps that combine the two orthogonal scan directions was done using the Herschel Interactive Processing Environment (HIPE) versions 9.2 and 10. Cross-scan combination and destriping were performed over 42 Hi-GAL fields of approximately $2\fdg2\times2\fdg2$ using the standard tools available in HIPE. Columns 1 to 4 of Table \ref{tab-ids} give the target name, the ID of the observation, the observing mode, and the observation dates, respectively. For the SPIRE maps, we applied the extended source calibration procedure (Section 5.2.5 from the SPIRE Handbook\footnote{http://herschel.esac.esa.int/Docs/SPIRE/spire\_handbook.pdf}) since most of the MALT90 sources correspond to dense clumps that are comparable to or larger than the largest SPIRE beam size. The saturation limit of the nominal mode for SPIRE (Section 4.1.1 from the SPIRE Handbook) is approximately 200 Jy~beam$^{-1}$. To prevent saturation, fields with longitudes $|l|\le5$\arcdeg\ were observed with SPIRE using the bright observing mode instead of the nominal observing mode. {\subsection{Other HSO Data}\label{sec-hobys}} In addition to Hi-GAL data, we used data from three observations made using the SPIRE bright mode by the HOBYS key project \citep[Herschel Imaging Survey of OB YSOs,][]{Motte2010AA}. Table \ref{tab-ids} lists these observations' IDs. They were directed toward the NGC 6334 ridge and the central part of M17, areas which are heavily saturated in the Hi-GAL data. {\subsection{ATLASGAL Archival Data}\label{sec-laboca}} Data at 870 \um\ were taken between 2007 and 2010 using the bolometer LABOCA \citep{Siringo2009AA} installed on the APEX telescope located in Chajnantor valley, Chile, as part of the ATLASGAL key project \citep{Schuller2009AA}. Calibrated and reduced FITS images were obtained from the data public releases made by \citet{Contreras2013AA} and \citet{Urquhart2014AA}.
Table \ref{tab-ins} displays the angular resolution, the point source sensitivity calculated as in Section \ref{sec-higal} using $\sigma_{\rm rms}=60$ mJy beam$^{-1}$ and $\Omega_{b}/\Omega_{\rm pix}=11.6$ \citep{Contreras2013AA}, and the typical noise of the convolved and re-gridded ATLASGAL maps. In addition to this noise, we assume a 10\% uncertainty in the absolute calibration. {\section{ANALYSIS}\label{sec-ana}} The following sections describe the methods used in the model fitting and uncertainty estimations. There are 2573\ ATLASGAL sources observed by MALT90 classified according to their mid-IR appearance as Quiescent, Protostellar, H{\rmfamily\scshape{ii}}\ region, or PDR. The remaining sources (\Uncertain) exhibit no clear mid-IR features that allow us to classify them unambiguously into these evolutionary stages. We refer to these sources as ``Uncertain.'' The MALT90 catalog includes 3557\ entries, of which 2935\ sources are associated with molecular emission detected at a single V$_{\rm LSR}$. MALT90 also detected molecular emission arising at two V$_{\rm LSR}$ toward \DoubleClumps\ ATLASGAL sources, which correspond to 622\ entries in the MALT90 catalog. The continuum emission from these sources comes from two or more clumps located at different distances, complicating the interpretation. We have calculated column densities and temperatures toward these blended sources, but we have excluded them from the discussion of Section \ref{sec-dis}. {\subsection{Noise Estimation of the HSO Data}\label{sec-noi}} To a first approximation, the intensity assigned to each pixel is given by the average of the bolometer readings that cover that pixel position. The spatial sampling of the maps, on the other hand, includes $\sim$3 pixels per beamwidth. Observed astronomical signals vary spatially on angular scales $\gtrsim$ 1 beamsize.
Therefore, in the large fraction of the map area that is away from very strong sources, we expect that the differences between adjacent pixels are dominated by instrumental noise. In order to estimate this noise, we use the high-pass filter defined by \citet{Rank1999IEEP} to determine the distribution of pixel-to-pixel variations and filter out astronomical emission. The width of this distribution determines the typical noise through the relation $2.36\sigma=\text{FWHM}$. The advantage of this method is that it gives us an extra and relatively simple way to estimate the noise of the final maps. The noise estimation is similar to that obtained from \emph{jackknife} maps, produced by taking the difference between maps generated by the two halves of the bolometer array \citep[see][for an analogous procedure]{Nguyen2010AA}. The 1-$\sigma$ point source sensitivities derived from the high-pass filter method described above are typically 18 and 24 mJy for the two PACS bands at 70 and 160 \micron, and 12 mJy for the three SPIRE bands at 250, 350, and 500 \micron. These derived sensitivities are in good agreement with the ones expected for the Hi-GAL survey \citep{Molinari2010PASP} and in reasonable agreement with the sensitivities expected for the parallel mode,\footnote{http://herschel.esac.esa.int/Docs/PMODE/html/ch02s03.html} with the possible exception of the 160 \um\ band where we estimate about half of the expected noise. The noise value derived at 250 \um\ is comparable with the noise component derived by \citet{Martin2010AA} also from Hi-GAL data, indicating that our estimation effectively filters most of the sky emission variations, including the cirrus noise. Finally, and as expected, we find that the noise in fields observed in the SPIRE bright mode is $\sim$4 times larger than in fields observed in the nominal mode.
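A simplified stand-in for this pixel-difference noise estimate is sketched below (Python/NumPy; a plain adjacent-pixel difference with a robust MAD width, rather than the \citet{Rank1999IEEP} filter itself):

```python
import numpy as np

def estimate_noise(image):
    """Estimate the per-pixel instrumental noise from adjacent-pixel differences.

    Astronomical signal, varying on scales of >~ 1 beam (~3 pixels), largely
    cancels in the differences; a robust MAD width rejects residual bright
    sources when measuring the distribution width (FWHM = 2.36 sigma analogue).
    """
    d = np.diff(image, axis=1).ravel()
    mad = np.median(np.abs(d - np.median(d)))
    sigma_diff = 1.4826 * mad          # Gaussian-equivalent width from the MAD
    return sigma_diff / np.sqrt(2.0)   # each pixel contributes sigma^2 to the difference
```

The $\sqrt{2}$ accounts for the two independent noise realizations entering each difference; smooth sky emission shifts the differences without broadening them.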
For subsequent analyses, we consider an additional independent calibration uncertainty of 10\% whenever we compare data among different bands, as for example, in the SED fitting. This 10\% represents a conservative approximation of the combined calibration uncertainty of the SPIRE photometers (5.5\%) and the beam solid angle (4\%, see Section 5.2.13 of the SPIRE Handbook). {\subsection{Convolution to a Common Resolution and Foreground/Background Filtering}\label{sec-conv}} Multi-wavelength studies of extended astronomical objects, such as star-forming clumps and IRDCs, often combine data taken with different angular resolutions. Therefore, to make an adequate comparison of the observed intensities, it is necessary to transform the images to a common angular resolution. We accomplish this by convolving the images to the lowest available resolution, that of the 500 \micron\ SPIRE band, using the convolution kernels of \citet{Aniano2011PASP} in the case of HSO data. The ATLASGAL data were convolved with a two-dimensional Gaussian with FWHM equal to $\sqrt{35\farcs0^2- 19\farcs2^2}\approx29\farcs3$, under the assumption that the point spread functions of the ATLASGAL and the 500 \micron\ data are Gaussians. In addition, to compare the intensity of the HSO images with that of the APEX telescope, we need to remove from the HSO data the low spatial frequency emission that has been filtered from the ATLASGAL images. The ATLASGAL spatial filtering is performed during the data reduction, and is a by-product of the atmospheric subtraction method which removes correlated signal between the bolometers \citep{Siringo2009AA}. As a consequence, any uniform astronomical signal covering spatial scales larger than 2\farcm5 is lost \citep{Schuller2009AA}. We filter the HSO data in a similar way by subtracting a background image from each field and at each band.
We assume that this background is a smooth additive component that arises from diffuse emission either behind or in front of the clump. In addition to filtering the HSO data in order to combine it with ATLASGAL, the background subtraction serves two more purposes: it separates the Galactic cirrus emission from the molecular clouds \citep[e.g.,][]{Battersby2011AA}, and it corrects for the unknown zero level of the HSO photometric scale. Our background model consists of a lower-envelope of the original data under two constraints: its value at each pixel has to be less than that of the original image, within a 2-$\sigma$ tolerance, and it has to vary by less than 10\% over 2\farcm5, which corresponds to the ATLASGAL filter angular scale. We construct a background image for each Hi-GAL field following a slight modification of the \emph{CUPID-findback} algorithm\footnote{http://starlink.jach.hawaii.edu/starlink/findback.html} of the \emph{Starlink} suite \citep{Berry2013ASPC}. The iterative algorithm used to construct the background starts with the original image. Then, we calculate a smoothed image by setting to zero (in the Fourier transform plane) the spatial frequencies corresponding to flux variations on angular scales $<2\farcm5$. For each pixel in this smoothed image with a value larger than the corresponding pixel in the original image plus $2 \sigma$, the pixel value from the smoothed image is replaced by the one in the original image, where $\sigma$ is the uncertainty of the map. The remaining pixels in the smoothed image are kept unchanged. The resultant map is the first iteration of the algorithm. This first iteration replaces the starting image and the cycle repeats, generating further iterations, until the change between two consecutive iterations is less than 5\% in all pixels. Figure \ref{fig-bac} shows an example of this process, which converges to a smooth lower-envelope of the original image.
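The iteration just described can be sketched as follows (Python/NumPy; an illustrative re-implementation with a hard Fourier cutoff and a simplified convergence criterion, not the \emph{CUPID-findback} code itself):

```python
import numpy as np

def lowpass(img, cutoff_pix):
    """Zero out Fourier modes with spatial scales below `cutoff_pix` pixels."""
    F = np.fft.rfft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.rfftfreq(img.shape[1])[None, :]
    F[np.hypot(fy, fx) > 1.0 / cutoff_pix] = 0.0
    return np.fft.irfft2(F, s=img.shape)

def background(image, sigma, cutoff_pix, max_iter=100):
    """Iterative smooth lower-envelope background (findback-style sketch)."""
    bg = image.copy()
    for _ in range(max_iter):
        new = lowpass(bg, cutoff_pix)
        # keep the original value wherever smoothing overshoots it by > 2 sigma
        high = new > image + 2.0 * sigma
        new[high] = image[high]
        # stand-in for the "less than 5% change in all pixels" criterion
        if np.max(np.abs(new - bg)) < 0.05 * sigma:
            return new
        bg = new
    return bg
```

By construction every iterate stays below the original image plus $2\sigma$, so the result converges toward a smooth lower-envelope, as in Figure \ref{fig-bac}.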
The solid black line shows a cut along $l=355\fdg8$ of the intensity measured at 250 \um. Negative intensity values away from the Galactic plane are a consequence of the arbitrary zero-level of the HSO photometry scale. Dashed lines show different iterations of the algorithm and the final adopted background is marked in red. The error bar at the center of the plot measures 2\farcm5, that is, the shortest angular scale filtered by the background. Note that Figure \ref{fig-bac} shows a cut across latitude at a fixed longitude, but the algorithm works on the two-dimensional image, not assuming any particular preferred direction. {\subsection{Single Temperature Grey-Body Model}\label{sec-fit}} We interpret the observed intensities as arising from a single temperature grey-body dust emission model. The monochromatic intensity at a frequency $\nu$ is given by \begin{equation} I_\nu(T_d,N_g)=B_\nu(T_d)\left(1-e^{-\tau_\nu}\right)~~,\label{eq-Idust} \end{equation} where $B_\nu(T_d)$ is the Planck function at a dust temperature $T_d$ and \begin{align} \tau_\nu&=N_{\rm dust}\kappa_\nu~~,\label{eq-tauDust}\\ &=N_g\kappa_\nu/{\rm GDR}~~,\label{eq-gdr} \end{align} where $\tau_\nu$ is the dust optical depth, $N_{\rm dust}$ is the dust column density, and $\kappa_\nu$ is the dust absorption coefficient. The relation between the dust and gas ($N_g$) column densities is determined by the gas-to-dust mass ratio (GDR), which we assume is equal to $100$. We also define the particle column density by $N_p:=N_g/(\mu m_{\rm H})$, where $\mu=2.3$. The number column density of molecular hydrogen ($N_{\rm H_2}$) is obtained in the same way but using $\mu=2.8$ \citep{Kauffmann2008AA}, under the assumption that all the hydrogen is in molecular form. We assume throughout this work that $N_g$ is measured in gr cm$^{-2}$ and $N_{\rm H_2}$ and $N_p$ in cm$^{-2}$.
To compare the dust emission model to the data, we weight the intensity given by Equation \eqref{eq-Idust} by the spectral response function of the specific waveband, in order to avoid post-fitting color corrections \citep[see for example,][]{Smith2012ApJ}. We exclude the 70 \micron\ intensity from the single $T_d$ fitting since this emission cannot be adequately reproduced by Equation \eqref{eq-Idust} (see Section \ref{sec-mq}). This problem has been noted by several authors \citep[e.g.,][]{Elia2010AA,Smith2012ApJ,Battersby2011AA,Russeil2013AA}, who have provided at least three possible reasons: \begin{enumerate} \item{emission at this wavelength comes from a warmer component,} \item{cold and dense IRDCs are seen in absorption against the Galactic plane at 70 \um\ rather than emission,} \item{a large fraction of the 70 \um\ emission comes from very small grains, where the assumption of a single equilibrium temperature is not valid.} \end{enumerate} For each pixel and given the observed background-subtracted intensities $I_{\rm \nu, obs}$, we minimize the squared difference function, \begin{equation} \chi^2(T_d,N_g)=\sum_{\rm \nu}\frac{(I_{\rm \nu, obs}-\tilde{I}_{\nu})^2}{\sigma_{\nu}^2}~~,\label{eq-chi2} \end{equation} where the sum is taken over the observed frequencies (i.e., 5 bands) and $\tilde{I}_{\nu}$ is the intensity spectrum predicted by the model weighted by the respective bandpass. The best-fit dust temperature, $T_d$, and gas column density, $N_g$, minimize the $\chi^2$ value. The variance $\sigma_{\nu}^2$ is equal to the sum in quadrature of the noise (taken from Table \ref{tab-ins}) plus 10\% of the background-subtracted intensity. We fit the model described in Equation \eqref{eq-Idust} for all the pixels with intensities larger than $2\sigma_\nu$ in all bands. The reduced $\chi^2$, defined as $\chi^2_r:=\chi_{\rm min}^2/(m-p)$ \citep{Bevington2003DRDP}, is a simple measure of the quality of the model. 
Here, $\chi^2_{\rm min}$ is the minimized $\chi^2$ of Equation \eqref{eq-chi2}, $m$ is the number of data points, and $p$ is the number of fitted parameters. In our case, we fit the dust temperature and the logarithm of the gas column density, so $p=2$. Under the hypothesis that the data are affected by ideal, normally distributed noise, $\chi_r^2$ has a mean value of 1 and a variance of $2/(m-p)$. Figure \ref{fig-chi2CDF} shows the $\chi^2_r$ cumulative distribution function (CDF), calculated using all the pixels for which we fit the SED. The median $\chi^2_r$ value is 1.6. This value is less than $1+\sqrt{2/3}\approx1.8$, which is the expected value plus 1-$\sigma$ under the assumption of normal errors for any particular fit. We conclude that the SED model is in most cases adequate, or equivalently, the limited amount of photometric data does not justify a more complicated model. Note that, although the distribution of $\chi^2_r$ has a reasonable mean and median, it has a large tail: the 95\% quantile is located at $\chi^2_r\approx9.6$. This value represents a poor fit to the model, which can usually be attributed to a single discordant data point. Generally, this point corresponds to the 870 \micron\ intensity, which illustrates the difficulty of matching the spatial filtering of the HSO data with that of the ATLASGAL data, despite the background correction and common resolution convolution. We re-examine the fitting when the $\chi^2_r$ value is larger than 10 and remove from the fitting at most one data point only if its removal decreases the $\chi^2_r$ value by a factor of 10 or more.
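The minimization of Equation \eqref{eq-chi2} can be sketched with a brute-force grid search (Python/NumPy; an illustrative sketch using monochromatic intensities without the bandpass weighting, a placeholder $\kappa_0$ normalization rather than the \citet{Ormel2011AA} tables, $\beta=1.6$ as in Section \ref{sec-beta}, and dust column $N_g/{\rm GDR}$):

```python
import numpy as np

h, k, c = 6.62607e-27, 1.38065e-16, 2.99792e10  # cgs units

def planck(nu, Td):
    """Planck function B_nu(Td) in cgs intensity units."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * Td))

def greybody(nu, Td, logNg, beta=1.6, kappa0=1.0, nu0=c / 0.087, gdr=100.0):
    """Eq. (eq-Idust): I_nu = B_nu(Td) * (1 - exp(-tau)).

    kappa0 (cm^2 per gram of dust, at 870 um) is an illustrative placeholder;
    tau = kappa_nu * N_g / GDR, with N_g = 10**logNg in g cm^-2.
    """
    tau = kappa0 * (nu / nu0)**beta * 10.0**logNg / gdr
    return planck(nu, Td) * (1.0 - np.exp(-tau))

def fit_sed(nu, I_obs, sigma):
    """Brute-force minimization of Eq. (eq-chi2) over a (Td, log Ng) grid."""
    logNs = np.arange(-3.4, 1.1, 0.01)
    best = (np.inf, None, None)
    for Td in np.arange(7.0, 40.0, 0.05):
        model = greybody(nu[None, :], Td, logNs[:, None])  # (n_logN, n_band)
        chi2 = np.sum(((I_obs - model) / sigma)**2, axis=1)
        i = int(np.argmin(chi2))
        if chi2[i] < best[0]:
            best = (chi2[i], Td, logNs[i])
    return best  # (chi2_min, Td, logNg)
```

The grid ranges match the validity domain quoted in Section \ref{sec-mq} ($7$--$40$ K, $\log N_g$ between $-3.4$ and $1.1$).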
{\subsubsection{Spectral Index of the Dust Absorption Coefficient}\label{sec-beta}} At frequencies $\nu<1$~THz, the dust absorption coefficient curve $\kappa_\nu$ is well approximated by a power-law dependence on frequency with spectral index $\beta$ \citep{Hildebrand1983QJRAS}, that is, \begin{equation} \kappa_\nu=\kappa_0(\nu/\nu_0)^\beta~~.\label{eq-beta} \end{equation} In principle, it is possible to quantify $\beta$ toward regions where the emission is optically thin and the temperature is high enough such that the Rayleigh-Jeans (R-J) approximation is valid. In this case, from Equations \eqref{eq-Idust} and \eqref{eq-beta} we deduce that \begin{equation} I_{\nu_1}/I_{\nu_2}=(\nu_1/\nu_2)^{\beta+2}~~,\label{eq-opthin} \end{equation} which is independent of temperature. We estimate $\beta$ through Equation \eqref{eq-opthin} using low frequency ($<600$~GHz) data taken towards warm ($>30$ K) sources to ensure that the R-J and the dust absorption coefficient power-law approximations are valid. Using this value of $\beta$ we will be able to better justify the selection of a dust opacity law among the different theoretical models \citep[e.g.,][]{Ormel2011AA}. In order to ensure that the sources used to estimate $\beta$ are warm enough for Equation \eqref{eq-opthin} to be valid, we select IRAS sources that are part of the 1.1 mm Bolocam Galactic Plane Survey \citep[BGPS,][]{Rosolowsky2010ApJS,Ginsburg2013ApJS} and the ATLASGAL catalog at 870 \micron. We also require that they fulfill $S_{60}/S_{100}>0.5$, where $S_{60}$ and $S_{100}$ are their fluxes at 60 and 100 \um, respectively. In addition, we select sources with $|l|>10\arcdeg$ in order to avoid possible confusion that may arise in the crowded regions around the Galactic center.
We find 14 IRAS sources fulfilling these requirements: 18079$-$1756, 18089$-$1837, 18114$-$1825, 18132$-$1638, 18145$-$1557, 18159$-$1550, 18162$-$1612, 18196$-$1331, 18197$-$1351, 18223$-$1243, 18228$-$1312, 18236$-$1241, 18247$-$1147, and 18248$-$1158. The average $\beta$ calculated for these sources using Equation \eqref{eq-opthin} is 1.6, but with a dispersion of 0.5 among the sources. This dispersion is large, but it is compatible with a 15\% uncertainty in the fluxes. The spectral index is in agreement with the absorption coefficient law of silicate-graphite grains, with $3\times10^4$ yr of coagulation, and without ice coatings according to the dust models from \citet{Ormel2011AA}. For the rest of this work, we use this model of dust for the SED fitting. The tables compiled by \citet{Ormel2011AA} also sample the frequency range of interest for this work in more detail than the frequently used dust models of \citet{Ossenkopf1994AA}. We refrain from fitting $\beta$ together with the SED for two reasons: i) we lack adequate data to effectively break the degeneracy between $\beta$ and $T_d$, that is, good spectral sampling of highly sensitive data below $500$ GHz; and ii) the range of dust models explored by fitting $\beta$ includes only power-laws instead of using more physically motivated tabulated dust models. To compare our results with previous studies, which may have derived temperatures and column densities using different hypotheses, we review how different assumptions on $\beta$ affect the best-fit estimation of $T_d$. Several studies \citep[e.g.,][]{Shetty2009ApJ696-676,Shetty2009ApJ696-2234,Juvela2012AA541-33} have discussed this problem in association with least-squares SED fitting in the presence of noise. They find that $\beta$ and $T_d$ are somewhat degenerate and associated with elongated (sometimes described as banana-shaped) best-fit uncertainty regions in the $\beta$-$T_d$ plane.
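Equation \eqref{eq-opthin} inverts directly for $\beta$. The sketch below (Python, with illustrative numbers rather than the IRAS/BGPS fluxes) also shows why the warm-source restriction matters: at 40 K and 345/230 GHz, the residual curvature of the Planck function already biases the estimate low by $\sim$0.2:

```python
import numpy as np

h, k = 6.62607e-34, 1.38065e-23  # SI units

def beta_from_ratio(I1, I2, nu1, nu2):
    """Invert Eq. (eq-opthin): I1/I2 = (nu1/nu2)^(beta+2)."""
    return np.log(I1 / I2) / np.log(nu1 / nu2) - 2.0

def thin_intensity(nu, Td, beta):
    """Optically thin greybody, I proportional to nu^beta * B_nu(Td) (unnormalized)."""
    return nu**(3.0 + beta) / np.expm1(h * nu / (k * Td))

nu1, nu2, beta_true = 345e9, 230e9, 1.6
# in the exact R-J limit (I proportional to nu^(2+beta)) the inversion is exact:
exact = beta_from_ratio(nu1**(2.0 + beta_true), nu2**(2.0 + beta_true), nu1, nu2)
# with the full Planck function at 40 K the estimate is biased low:
biased = beta_from_ratio(thin_intensity(nu1, 40.0, beta_true),
                         thin_intensity(nu2, 40.0, beta_true), nu1, nu2)
```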
In this work, we stress one aspect that has not been sufficiently emphasized: there are \emph{two} behaviors of the $\beta$-$T_d$ degeneracy, one evident when the data cover the SED peak, and the other when the data only cover the R-J part of the spectrum. In the first case the degeneracy is well described by the modified Wien displacement law \begin{equation} \frac{h\nu_{\rm peak}}{k T_d}\approx(\beta+3)~~,\label{eq-mwdl} \end{equation} that is, the uncertainty region of $T_d$ and $\beta$ is elongated along the curve defined by Equation \eqref{eq-mwdl}. In Equation \eqref{eq-mwdl}, $\nu_{\rm peak}$ represents the frequency where the SED takes its maximum value, which under optically thin conditions is proportional to the temperature. The proportionality constant depends on $\beta$ in a complicated way, but the approximation of Equation \eqref{eq-mwdl} is correct to within 10\% for $\beta>1$ and to within 20\% for all $\beta\ge0$. Note that by assuming a value of $\beta$ and determining $\nu_{\rm peak}$ observationally we can estimate $T_d$ using Equation \eqref{eq-mwdl} in a simple way. \citet{Sadavoy2013ApJ} and \citet[][their 20 K case]{Shetty2009ApJ696-676} show examples of uncertainty regions given by the iso-contours of the $\chi^2$ function which are elongated along the curve defined in Equation \eqref{eq-mwdl}. On the other hand, if the spectral range of the data does not cover the observed peak of the SED and covers only the R-J region, the degeneracy between $\beta$ and $T_d$ is better described by the following relation, \begin{equation} \beta-\frac{h\nu_m}{2 k T_d}= {\rm constant}~~,\label{eq-RJ} \end{equation} where $\nu_m$ is the highest observed frequency. This relation describes well the degeneracy of the high temperature curves (60 and 100 K) shown in \citet{Shetty2009ApJ696-676}.
The constant on the right-hand side of Equation \eqref{eq-RJ} is approximately \[2+\frac{d\ln S_{\nu_m}}{d \ln \nu}~~,\] that is, 2 plus the logarithmic derivative (or spectral index) of the spectrum evaluated at the highest observed frequency. In practice, the exact values of the constants on the right-hand sides of Equations \eqref{eq-mwdl} and \eqref{eq-RJ} can be determined from the best-fit solutions. In this work, the HSO bands usually cover the peak of the SED so Equation \eqref{eq-mwdl} is more pertinent. Depending on the spectral sampling, we can use Equation \eqref{eq-mwdl} or \eqref{eq-RJ} to compare temperatures between studies that assume different values of $\beta$. For example, the emission in the HSO bands from a cloud of $T_d=15$~K with a $\beta=1.0$ dust absorption law is also consistent, by Equation \eqref{eq-mwdl} and assuming 10\% uncertainty, with the emission coming from a cloud of $T_d=12$~K and $\beta=2$. In each case, the HSO bands cover the peak of the SED. We use this method to re-scale and compare best-fit temperatures obtained from the literature in Section \ref{sec-dis}. {\subsection{Model Uncertainties}\label{sec-mq}} We estimate the best-fit parameter uncertainties using the projection of the 1-$\sigma$ contour of the function $\Delta\chi^2:=\chi^2-\chi^2_{\rm min}$ \citep{Lampton1976ApJ}. In the case of 2 fitted parameters, the 1-$\sigma$ uncertainty region is enclosed by the $\Delta\chi^2=2.3$ contour. The parameter uncertainties for the SED fitting are given by the projections of these uncertainty regions onto the $T_d$ and $\log N_{g}$ axes.
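The temperature re-scaling implied by Equation \eqref{eq-mwdl} is a one-liner: matching $\nu_{\rm peak}$ between two assumed values of $\beta$ reproduces the 15 K ($\beta=1$) versus 12 K ($\beta=2$) example quoted above (a Python sketch of the approximation, not an exact SED calculation):

```python
H_OVER_K = 4.7992e-11  # h/k in K s

def peak_frequency(Td, beta):
    """Modified Wien displacement law, Eq. (eq-mwdl): h nu_peak / (k Td) ~ beta + 3."""
    return (beta + 3.0) * Td / H_OVER_K

def equivalent_temperature(Td, beta_from, beta_to):
    """Temperature that, with beta_to, peaks at the same frequency as (Td, beta_from)."""
    return Td * (beta_from + 3.0) / (beta_to + 3.0)
```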
For pixels in images observed using the nominal observing mode, the projections are well described by the following equations \begin{equation} \label{eq-unc}% \begin{aligned} \delta T^{-} &=\eta_{10}\left(0.3-0.4~T_{10}+ 0.4~T_{10}^2\right)~~,\\ \delta T^{+} &= \eta_{10}\left(1.1-1.3~T_{10}+0.7~T_{10}^2\right)~~,\\ \delta\log N_g &= \eta_{10}\left(0.03-0.03\log N_g\right)~~, \end{aligned} \end{equation} where $T_{10}=T_d/(10~{\rm K})$, $N_g$ is in gr~cm$^{-2}$, and $\eta_{10}$ is the flux calibration uncertainty in units of 10\%. The best fit temperature and log-column density with their 1-$\sigma$ uncertainties are given by ${T_d}^{+\delta T^{+}}_{-\delta T^{-}}$ and $\log N_g\pm\delta \log N_g$, respectively. For pixels in images observed using the bright observing mode, the projections are well described by \begin{equation} \label{eq-uncB}% \begin{aligned} \delta T^{-} &=\eta_{10}\left(0.7-0.71~T_{10}+ 0.53~T_{10}^2\right)~~,\\ \delta T^{+} &=\eta_{10}\left(1.1-1.3~T_{10}+0.74~T_{10}^2\right)~~,\\ \delta\log N_g &= \eta_{10}\left(0.05-0.03\log N_g\right)~~. \end{aligned} \end{equation} Equations \eqref{eq-unc} and \eqref{eq-uncB} were derived by fitting the upper and lower limits of $T_d$ and $\log N_g$ projections of the uncertainty region. These approximations for the uncertainty are valid for $T_d$ between 7 and 40 K, for $\log N_g $ between $-3.4$ and $1.1$ (equivalent to $\log N_{\rm H_2} $ between 19.9 and 24.4), and for values of $\eta_{10}$ between 1 and 2, which correspond to 10\% and 20\% calibration errors, respectively. Figure \ref{fig-Dchi2con} shows an example of the prediction of Equations \eqref{eq-unc} compared to the $\Delta\chi^2$ contours. 
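Equations \eqref{eq-unc} evaluate directly; the small sketch below (nominal-mode coefficients only) illustrates the growing and upward-skewed temperature uncertainty at 30 K:

```python
def temp_uncertainty_nominal(Td, eta10=1.0):
    """1-sigma temperature bounds from Eq. (eq-unc), nominal observing mode.

    Td in K (valid for 7-40 K); eta10 = flux calibration uncertainty / 10%.
    Returns (dT_minus, dT_plus).
    """
    t = Td / 10.0
    dT_minus = eta10 * (0.3 - 0.4 * t + 0.4 * t * t)
    dT_plus = eta10 * (1.1 - 1.3 * t + 0.7 * t * t)
    return dT_minus, dT_plus
```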
Within their range of validity and for data taken in the nominal mode, the intervals $\left[T_d-\delta T^{-},T_d+\delta T^{+}\right]$ and $\left[\log N_g-\delta \log N_g,\log N_g+\delta \log N_g\right]$ correspond to the projections of the uncertainty ellipse onto the $T_d$ and $\log N_g$ axes within 0.2 K and 0.02 dex, respectively. Equations \eqref{eq-unc} also indicate that, while the confidence interval of $\log N_g$ is symmetric and roughly constant, the temperature uncertainties grow rapidly above 25 K and are skewed towards higher values \citep{Hoq2013ApJ}. This arises mainly from the absence of data at wavelengths shorter than 160 \um. As explained in Section \ref{sec-fit}, we do not use the 70 \um\ data in our model. Including the 70 \um\ data increases the median of the $\chi^2_r$ distribution to $\sim3.5$. The temperature and log-column density uncertainties are also correlated along the approximate direction where the product $T_d\times N_g$ is constant, indicated in Figure \ref{fig-Dchi2con}. The better the R-J approximation holds for the SED, the better the alignment of the major axis of the ellipse with the line $T_d\times N_g=\text{constant}$. {\subsection{Saturated Sources}\label{sec-sat}} The HSO detectors in SPIRE and PACS have saturation limits that depend on the observing mode. The saturation intensities for the nominal observing mode are 220 and 1125 Jy~beam$^{-1}$ for the 70 and 160 \um\ PACS bands\footnote{PACS Observer's Manual, v.\ 2.5.1, Section 5.4}, respectively, and 200 Jy~beam$^{-1}$ for SPIRE. Saturation is most problematic in the 250 \um\ SPIRE band. There are 46 MALT90 sources whose Hi-GAL data are affected by saturation. Of these, six are covered by HOBYS observations made using the bright mode (Section \ref{sec-hobys}), which gives reliable 250 \um\ intensities.
For the remaining 40 sources, we replace the saturated pixels with the saturation limits given above, and we fit the SED taking these values as lower bounds. {\subsection{The 70 $\mu$m Appearance of the Quiescent Clumps}\label{sec-re70}} MALT90 and other previous studies \citep[e.g.,][]{Molinari2008AA,Lopez-Sepulcre2010AA,Sanhueza2012ApJ,Sanchez-Monge2013MNRAS,Giannetti2013AA,Csengeri2014AA} use mid-IR observations as a probe of star formation activity. However, deeply embedded, early star formation activity could be undetected in the mid-IR yet be conspicuous in the far-IR. Quantitatively, we expect the 24 to 70 \um\ flux density ratio of HMYSOs to vary between $10^{-6}$ and 1 for a wide range of molecular core masses (60 to 240 \mbox{\,$M_{\odot}$}) and central star masses over 1~\mbox{\,$M_{\odot}$}\ \citep{Zhang2014ApJ}. Therefore, despite MIPSGAL having $\sim 50$ times better point source sensitivity at 24 \um\ than that of Hi-GAL at 70 \um, it is possible to detect in 70 \um\ images embedded protostars that appear dark at 24 \um\ and would therefore be classified as Quiescent. The 70 \um\ data thus allow us to further refine the MALT90 classification, since a truly Quiescent clump should lack 70 \um\ compact sources \citep[e.g.,][]{Sanhueza2013ApJ,Beuther2013AA553,Tan2013ApJ}. We examined the Hi-GAL 70 \um\ images of the 616\ Quiescent sources and found 91 (15\%) that show compact emission at 70 \um\ within 38\arcsec\ -- or one Mopra telescope beamsize -- of the nominal MALT90 source position. Hereafter, we consider these sources as part of the Protostellar sub-sample. We also found 83 sources that appear in absorption at 70 \um\ against the diffuse Galactic emission. We refer to these clumps as far-IR dark clumps (far-IRDCs). The remaining 442 Quiescent sources are either associated with diffuse emission not useful for tracing embedded star formation, or confused with the 70 \um\ diffuse emission from the Galactic plane.
{\section{DISCUSSION}\label{sec-dis}} Figure \ref{fig-sed} shows the dust temperature and column density obtained for each pixel around the source AGAL343.756$-$00.164, which is taken as a typical example. Best-fit dust temperatures, column densities, and their uncertainties are calculated pixel by pixel. The two plots located in the lower-right corner of Figure \ref{fig-sed} show the SED measured in two directions (center and periphery) toward AGAL343.756$-$00.164. The blue dashed line in each of these plots is the curve given by Equation \eqref{eq-Idust} evaluated at the best-fit solution. The shaded region around the curve is the locus covered by the model when the best-fit parameters vary within the 1-$\sigma$ confidence interval. As explained in Section \ref{sec-fit}, the $\chi^2$ is calculated by comparing the measured intensities at each band with the SED model weighted by the respective bandpasses. Table \ref{tab-NT} gives the derived dust temperatures and log-column densities of 3218\ MALT90 sources. This corresponds to 99.1\% of the 3246\ ATLASGAL sources observed by MALT90. The remaining \NoFit\ sources are either not covered by HSO observations (24 sources) or too faint to reliably estimate the dust parameters (4 sources). Column 1 indicates the ATLASGAL name of the source. We include \WithFitDouble\ entries which correspond to multiple sources blended along the same line of sight, indicated with an ``m'' superscript. Column 2 gives the effective angular radius of the source in arcsec, defined as $\theta_{\rm eff}=\sqrt{\Omega_s/\pi}$, where $\Omega_s$ is the effective angular area occupied by the MALT90 source. This area corresponds to the intersection between the region enclosing the source where the column density is greater than 0.01~gr~cm$^{-2}$ ($>2.0\times10^{21}$~cm$^{-2}$ in \mbox{H$_2$}\ column density) and the 870 \um\ ATLASGAL mask (see \citealp{Contreras2013AA} and \citealp{Urquhart2014MNRAS}).
Figure \ref{fig-sed} shows an example of one of these areas (red contour in top left image). Column 3 of Table \ref{tab-NT} gives the mean dust temperature averaged over the area of each source ($\bar{{T_d}}$). Columns 4 and 5 list the lower and upper uncertainty of $\bar{T_d}$, respectively. Columns 6, 7, and 8 give the dust temperature at the position of the 870 \um\ peak intensity ($T_{d,{\rm P}}$) and its lower and upper uncertainties, respectively. Columns 9, 10, and 11 list the average column density ($\bar{N_{g}}$), its logarithm, and the uncertainty of the latter, respectively. Column 12 gives the peak column density ($N_{g,{\rm P}}$), derived using the 870 \um\ peak intensity and $T_{d,{\rm P}}$ (in Equations \eqref{eq-Idust}, \eqref{eq-tauDust}, and \eqref{eq-gdr}). Columns 13 and 14 give $\log N_{g,{\rm P}}$ and its uncertainty, respectively. Finally, Column 15 gives the mid-IR classification of the MALT90 source, as Quiescent (616\ clumps), Protostellar (749\ clumps), H{\rmfamily\scshape{ii}}\ region (844\ clumps), PDR (343\ clumps), or Uncertain (666\ clumps). Note that these numbers describe the statistics of Table \ref{tab-NT}, that is, of the 3218\ sources for which we have dust column density and temperature estimations. For the Quiescent sources, we indicate with a superscript ``C'' or ``D'' whether the source is associated with 70 \um\ compact emission or is a far-IRDC, respectively (see Section \ref{sec-re70}). No superscript means that neither of these features appears related to the clump. Previous studies of massive molecular clumps have relied on samples obtained from the IRAS catalog and fitted SEDs to obtain dust temperatures and masses. We find a total of 116 matches between MALT90 and those samples as analyzed by \citet[94 matching sources]{Faundez2004AA} and \citet[22 matches]{Giannetti2013AA}.
Other studies, such as \citet{Sridharan2002ApJ} and \citet{Williams2005AA}, have targeted the northern sky and do not overlap significantly with MALT90 (1 source in common each). Of all these sources, 63 are classified as H{\rmfamily\scshape{ii}}\ region, 44 as Protostellar, 3 as PDR, 6 as Quiescent, and 2 as Uncertain. From the relative fraction of Quiescent sources in the MALT90 sample, we would expect 13 or more of the 118 to be Quiescent with a 99\% probability, assuming that they are randomly sampled. Since there are only 6 Quiescent matches, we conclude that previous surveys were biased toward more evolved stages, illustrating how MALT90 helps to fill in the gap in the study of cold clumps. Figure \ref{fig-comp} shows the dust temperature calculated by previous studies versus the dust temperatures given in this work. We calculate a Spearman correlation coefficient \citep[Section 4.2.3]{Wall2012psa} of 0.75 with a 95\% confidence interval between 0.68 and 0.83, indicating a positive correlation between our temperature estimations and those from the literature. \citet{Faundez2004AA} assume a dust absorption spectral index $\beta=1$, while our dust model is characterized by $\beta\approx1.7$. Therefore, we correct their temperatures according to Equation \eqref{eq-mwdl} by multiplying them by $(3+1)/(3+1.7)\approx0.85$. The correction decreases the mean of the differences between the dust temperatures obtained by \citet{Faundez2004AA} and those obtained by us from $+7$ K (uncorrected) to $+2$ K. We apply the same correction to the temperatures given by \citet{Giannetti2013AA} using their reported best-fit $\beta$ values. Figure \ref{fig-comp} shows that temperatures estimated using data from mid- and far-IR bands below 100 \um\ are more often higher than the dust temperatures derived in this work. Consequently, the slope of a linear regression performed on the data shown in Figure \ref{fig-comp} is slightly larger than unity ($1.13\pm0.09$).
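The $\beta$ correction applied above amounts to holding the product $T_d(3+\beta)$ fixed, following Equation \eqref{eq-mwdl}; as a sketch:

```python
def rescale_temperature(t_d, beta_from, beta_to):
    """Rescale a modified-blackbody dust temperature fitted with emissivity
    index beta_from to the equivalent temperature under beta_to, using the
    near-peak relation T_d * (3 + beta) ~ const. from Eq. (eq-mwdl)."""
    return t_d * (3.0 + beta_from) / (3.0 + beta_to)
```

This reproduces both numerical examples in the text: a 15~K, $\beta=1$ temperature maps to 12~K for $\beta=2$, and $\beta=1$ temperatures are multiplied by $(3+1)/(3+1.7)\approx0.85$ for comparison with our $\beta\approx1.7$ model.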
This is somewhat expected, since our own single temperature SED model underestimates the 70 \um\ intensity (see Section \ref{sec-fit}). The most plausible reason is that in these sources there is a warmer dust component better traced by IR data below 100 \um. Dust temperatures and column densities of a preliminary MALT90 subsample consisting of 323 sources were presented by \citet{Hoq2013ApJ}.\footnote{\citet{Hoq2013ApJ} report 333 sources, but only 323 of these are part of the final catalog.} They also use the Hi-GAL data (without ATLASGAL), and their data processing and SED fitting procedures are similar to those used in this work. \citet{Hoq2013ApJ} report dust temperatures consistent within 13\% with the ones given in Table \ref{tab-NT}. However, we obtain average column densities that are smaller by about 20\%. The differences are due to our source sizes being larger than the ones assumed by \citet{Hoq2013ApJ}. They use a fixed size equal to one Mopra telescope beam, while we define the size of the source based on its extension in the column density map. When there is more than one source in the same line of sight, the continuum emission blends two or more clumps located at different distances. This makes the interpretation of the temperature, column density, and evolutionary stage classification uncertain. Therefore, for further analysis we remove these sources from the MALT90 sample, leaving 2907\ sources. This number breaks down in the following way (see Table \ref{tab-means}): there are 464\ sources considered as Quiescent (single V$_{\rm LSR}$, without a compact 70 \um\ source), 788\ considered Protostellar (including Quiescent sources with a 70 \um\ compact source), 767\ H{\rmfamily\scshape{ii}}\ regions, and 326\ PDRs. The remaining sources (562) have an Uncertain classification. This selection and reclassification of sources, as we show in Appendix \ref{sec-stat}, does not affect the conclusions presented in the following sections.
\subsection{Dust Temperature versus Gas Temperature} The dust temperature is fixed by the balance between heating and radiative cooling of the grain population. If the density in a molecular cloud is greater than $5\times10^4$~cm$^{-3}$ \citep{Goldsmith2001ApJ,Galli2002AA}, we expect the dust temperature to be coupled to the gas temperature. We test this hypothesis by comparing the average dust temperatures with the gas kinetic temperatures determined from ammonia observations. Figure \ref{fig-amm} shows the ammonia temperature derived by \citet[23 matching sources]{Dunham2011ApJ}, \citet[10 matching sources]{Urquhart2011MNRAS}, \citet[106 matching sources]{Wienen2012AA}, and \citet[19 matching sources]{Giannetti2013AA} versus the dust temperature, separated by evolutionary stage. \citet{Dunham2011ApJ} and \citet{Urquhart2011MNRAS} performed the NH$_3$ observations using the Green Bank Telescope at 33\arcsec\ angular resolution. \citet{Wienen2012AA} used the Effelsberg Radiotelescope at 40\arcsec\ angular resolution. These are comparable to the resolution of our dust temperature maps (35\arcsec). On the other hand, \citet{Giannetti2013AA} used NH$_3$ data obtained from ATCA with an angular resolution of $\sim20\arcsec$. In this last case, we compare their ammonia temperatures with $T_{d,{\rm P}}$ instead of $\bar{T_d}$. All but eight MALT90 sources with ammonia temperature estimations are classified in one of the four evolutionary stages. The Spearman correlation coefficient of the entire sample (158 sources, including these eight with Uncertain mid-IR classification) is 0.7, with a 95\% confidence interval between 0.6 and 0.8, indicating a positive correlation between both temperature estimators. The scatter of the relation is larger than the typical temperature uncertainty, and it grows with the temperature of the source. For sources below 22 K, ammonia and dust temperatures agree within $\pm3$ K.
Above 22 K, the uncertainties of both temperature estimators become larger \citep[see Equations \eqref{eq-unc} and, for example,][]{Walmsley1983AA}, consistent with the observed increase of the scatter. In addition, higher temperature clumps are likely being heated from inside and are therefore associated with larger variations of the dust temperature along the line of sight, making the single temperature approximation less reliable. The slopes of the linear regressions performed on the data are $0.7\pm0.1$, $0.8\pm0.1$, $0.7\pm0.1$, and $0.9\pm0.3$ for the Quiescent, Protostellar, H{\rmfamily\scshape{ii}}\ region, and PDR samples, respectively. The relation between ammonia and dust temperatures agrees in general with that found by \citet{Battersby2014ApJ786}, except that we do not find a systematically worse agreement for Quiescent sources compared with the other evolutionary stages. \subsection{Temperature and Column Density Statistics} Figure \ref{fig-sd} shows maps of smoothed 2-D histograms of the distributions of $\bar{T_d}$ and $\log N_{g,{\rm P}}$ of the MALT90 clumps for each mid-IR classification. In the following analysis we focus on these two quantities and their relation with the evolutionary stage. We use $N_{g,{\rm P}}$ instead of $\bar{N_g}$ because $N_{g,{\rm P}}$ is independent of the specific criterion used to define the extension of the clump and because the column density profiles are often steep \citep[$\propto s^{-0.8}$, where $s$ is the plane-of-the-sky distance to the clump center,][]{Garay2007ApJ}, making the average $\bar{N_g}$ less representative of the clump column density values. On the other hand, dust temperature gradients are shallower \citep[$\propto r^{-0.4}$, where $r$ is the distance to the clump center, see][]{vanderTak2000ApJ} and $\bar{T_d}$ has a smaller uncertainty than the temperature calculated toward a single point.
We include in the Protostellar group those sources that are associated with a compact source at 70 \um\ (Section \ref{sec-re70}). The most conspicuous differences among the evolutionary stages are those between the Quiescent/Protostellar and the H{\rmfamily\scshape{ii}}\ region/PDR populations \citep{Hoq2013ApJ}. The main difference between these groups is the temperature distribution. Most of the sources in the Quiescent/Protostellar stage have temperatures below 19 K, while most H{\rmfamily\scshape{ii}}\ region/PDR sources have temperatures above 19 K. We also note that the Quiescent, Protostellar, and H{\rmfamily\scshape{ii}}\ region populations have peak column densities $\gtrsim0.1$~gr~cm$^{-2}$, equivalent to $2.13\times10^{22}$ \mbox{H$_2$}\ molecules per cm$^{2}$, while the PDR population has peak column densities of typically half of this value. These differences are also apparent in Figure \ref{fig-cdf}, where solid lines display the marginalized CDFs of $\bar{T_d}$ and $\log N_{g,{\rm P}}$ for each evolutionary stage. The dashed lines show the distributions of the Uncertain group. It is clear from these plots that the median temperature increases monotonically with evolutionary stage, and that the Protostellar and PDR clumps are the stages associated with the largest and smallest column densities, respectively. Figure \ref{fig-boxNT} shows Tukey box plots \citep[][Section 5.9]{Feigelson2012msma} of the marginalized distributions of $\bar{T_d}$ and $\log N_{g,{\rm P}}$ separated by evolutionary stage. In these plots, the boxes indicate the interquartile range (half of the population), the thick horizontal line inside each box indicates the median, and the error bars encompass the data within 1.5 times the interquartile distance from the box limits. The remaining points, in all cases less than 4\% of the sample, are plotted individually with small circles and we refer to them formally as outliers.
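The conversions between gr~cm$^{-2}$ and \mbox{H$_2$}\ column density quoted in this section and in Section \ref{sec-mq} are consistent with a mean molecular weight of $\mu\approx2.8$ per \mbox{H$_2$}\ molecule; that value is our inference from the quoted numbers, and the sketch below makes the conversion explicit:

```python
M_H = 1.6726e-24   # hydrogen atom mass [g]
MU_H2 = 2.8        # mean molecular weight per H2 (inferred from the quoted conversions)

def gas_to_h2_column(n_g):
    """Convert a gas column density in gr cm^-2 to an H2 column density in cm^-2."""
    return n_g / (MU_H2 * M_H)
```

With these values, 0.1~gr~cm$^{-2}$ maps to $\approx2.14\times10^{22}$~cm$^{-2}$, matching the quoted $2.13\times10^{22}$ to better than 1\%.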
Figure \ref{fig-boxNT} shows that the $\bar{T_d}$ and $\log N_{g,{\rm P}}$ interquartile ranges shift with evolutionary stage. This is evidence of systematic differences between the different populations, despite the large overlaps. In practice, the overlap between populations implies that it is infeasible to construct sensible predictive criteria that could determine the evolutionary stage of a specific source based on its temperature and peak column density; it also reflects the fact that star formation is a continuous process that cannot be precisely separated into distinct stages. Nevertheless, the fact that the proposed evolutionary stages show a monotonic increase in mean temperature demonstrates that the classification scheme has a legitimate physical basis. In the following, we focus our analysis on the Quiescent and Protostellar populations. Figures \ref{fig-sd} to \ref{fig-boxNT} show that these two samples are similarly distributed and exhibit the largest overlap. We test the statistical significance of the Quiescent and Protostellar differences in $\bar{T_d}$ and $\log N_{g,{\rm P}}$ by comparing these differences with their uncertainties. Table \ref{tab-means} shows the medians, means, and r.m.s. deviations of $\bar{T_d}$, $T_{d,{\rm P}}$, $\log \bar{N_g}$, and $\log N_{g,{\rm P}}$ for each population. In general, these dispersions are larger than the uncertainties of the individual values, indicating that the dispersions are intrinsic to each population and not due to the fitting uncertainties. The mean $\bar{T_d}$ values for the Quiescent and Protostellar populations are $16.8$ and $18.6$ K, respectively, that is, the mean $\bar{T_d}$ of the Protostellar population is larger by $1.8$ K. We can estimate the expected uncertainty of this mean difference using the dispersion and size of each population, which gives $\sqrt{\left(3.8^2/464\right)+\left(4.4^2/788\right)}\approx0.24$ K.
Therefore, the difference is more than seven times the expected uncertainty. On the other hand, the difference of the means of $\log N_{g,{\rm P}}$ is $+0.17$ in favor of the Protostellar population. The expected uncertainty in this case is $\sqrt{\left(0.25^2/464\right)+\left(0.34^2/788\right)}=0.017$, that is, ten times smaller than the observed difference. We conclude that the observed differences between the Quiescent and Protostellar populations are statistically significant. Furthermore, the differences in temperature and column density are orthogonal to the expected uncertainty correlation (Figure \ref{fig-Dchi2con}), giving us more confidence that we are observing a real effect in both parameters. We confirm the significance of the difference using more sophisticated statistical tests in Appendix \ref{sec-stat}. Note that for any other pair of populations, either the temperature or the column density difference is larger than the corresponding difference between the Quiescent and Protostellar samples. Are these statistical differences also evident when comparing $\bar{N_g}$ and $T_{d,{\rm P}}$? On one hand, the difference between the mean $T_{d,{\rm P}}$ of the Protostellar and Quiescent samples is 2.6 K, while the expected uncertainty of this difference is 0.2 K. Therefore, despite the larger fitting uncertainties of $T_{d,{\rm P}}$ (see Table \ref{tab-NT}), we still detect a statistically significant difference between both populations when comparing only their central temperatures. On the other hand, the difference between the means of $\log \bar{N_g}$ is $0.04$ over an expected uncertainty of $0.02$, that is, only twice the uncertainty. The latter is not highly significant, which is somewhat expected for the reasons explained at the beginning of this section. It is also expected from previous studies that indicate that neither the mass of the clumps \citep{Hoq2013ApJ} nor their radii \citep{Urquhart2014MNRAS} change conspicuously with evolutionary stage.
This in turn implies that the average column density should remain approximately unchanged. Within the Quiescent sample, we identified in Section \ref{sec-re70} a population of 83 clumps that appear as far-IRDCs at 70 \um. Of these, 77 are associated with a single source along the line of sight. This sample has a mean and median $\bar{T_d}$ of $14.9$ and $14.7$ K, respectively. The remaining Quiescent population has mean and median $\bar{T_d}$ equal to $17.2$ and $16.4$ K, respectively. Based on a Wilcoxon non-parametric test \citep[Section 5.4]{Wall2012psa}, we obtain a p-value of $4\times10^{-7}$ under the null hypothesis that these distributions are the same. Therefore, the temperature differences between the far-IRDCs and the rest of the Quiescent clumps are significant. The column densities of the far-IRDC subsample are also larger than those of the rest of the Quiescent sample. The far-IRDC $\log N_{g,{\rm P}}$ mean and median are $-0.76$ and $-0.78$, respectively, while for the remaining Quiescent clumps they are $-0.88$ and $-0.91$, respectively. Again, we reject the null hypothesis (Wilcoxon p-value of $\sim10^{-5}$) and conclude that the far-IRDC sample is a colder and denser subsample of the Quiescent population. Finally, the Uncertain group (that is, MALT90 sources that could not be classified into any evolutionary stage) seems to be a mixture of sources in the four evolutionary classes, but associated with lower column densities (median $N_{g,{\rm P}}\sim0.1$~gr~cm$^{-2}$). Figure \ref{fig-cdf} shows that the $\bar{T_d}$ values of the Uncertain group fall almost exactly in between those of the other evolutionary stages. Neither a Wilcoxon nor a Kolmogorov-Smirnov test can distinguish the $\bar{T_d}$ distribution of the Uncertain sample from that of the remainder of the MALT90 sources combined with a significance better than 5\%.
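The Wilcoxon comparisons above can be reproduced with standard tools; below is a sketch using SciPy's \texttt{mannwhitneyu} (equivalent to the two-sample Wilcoxon rank-sum test) on synthetic samples with the quoted sizes and mean temperatures (the dispersions are illustrative assumptions, not the catalog values):

```python
import numpy as np
from scipy.stats import mannwhitneyu  # two-sample Wilcoxon rank-sum test

rng = np.random.default_rng(0)
# Illustrative stand-ins for the catalog values: 77 single-source far-IRDCs
# versus the 387 remaining Quiescent clumps, with the quoted mean T_d.
t_firdc = rng.normal(14.9, 2.5, 77)
t_quiescent = rng.normal(17.2, 3.5, 387)
stat, p = mannwhitneyu(t_firdc, t_quiescent, alternative="two-sided")
# A small p-value rejects the hypothesis that both samples share a distribution.
```

With the real $\bar{T_d}$ values from Table \ref{tab-NT} this yields the p-values quoted above; the synthetic samples likewise give $p$ well below $10^{-3}$.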
Figure \ref{fig-cdf} also shows that the column densities of the Uncertain group are in general lower than those of any evolutionary stage except the PDRs. Molecular clumps with low peak column density may be more difficult to classify in the mid-IR, since they are probably unrelated to high-mass star formation. It is also possible that a significant fraction of these sources are located behind the Galactic plane cirrus emission and possibly on the far side of the distance ambiguity, making the mid-IR classification more difficult and decreasing the observed peak column density because of beam dilution. \subsubsection{Column Density and Temperature Evolution in Previous Studies} Since the discovery of IRDCs \citep[typically dark at mid-IR wavelengths, see][]{Egan1998ApJ,Carey1998ApJ} it has been pointed out that they likely consist of cold ($<20$ K) molecular gas. This has been confirmed by several studies of molecular gas \citep{Pillai2006AA450,Sakai2008ApJ,Chira2013AA} and dust \citep[e.g.,][]{Rathborne2010ApJ}. Systematic \mbox{H$_2$}\ column density differences between IR dark, quiescent, and star-forming clumps have been more difficult to establish. Some authors have found no significant column density differences between these groups \citep{Rathborne2010ApJ,Lopez-Sepulcre2010AA,Sanchez-Monge2013MNRAS}. However, most studies based on large samples agree that star-forming clumps have larger molecular column densities compared to the quiescent ones \citep{Dunham2011ApJ,Giannetti2013AA,Hoq2013ApJ,Csengeri2014AA,Urquhart2014MNRAS,He2015MNRAS}. Furthermore, \citet{Beuther2002ApJ}, \citet{Williams2005AA}, and \citet{Urquhart2014MNRAS} found evidence that molecular clumps which display star formation activity have more concentrated density profiles. \citet{Urquhart2014MNRAS}, based on ATLASGAL and the Red MSX Source (RMS) survey \citep{Lumsden2013ApJ}, analyze a large ($\sim1300$) number of molecular clumps with signs of high-mass star formation.
High-mass star formation activity was determined from associations with the MSX point source catalog \citep{Egan2003VizieR}, methanol masers \citep{Urquhart2013MNRAS431}, and H{\rmfamily\scshape{ii}}\ regions detected using centimeter wavelength radio emission \citep{Urquhart2007AA,Urquhart2009AA501,Urquhart2013MNRAS}. In \citet{Urquhart2014MNRAS}, ATLASGAL clumps associated with WISE sources \citep{Wright2010AJ} are called massive star-forming (MSF) clumps, and all the rest are labeled ``quiescent.'' \citet{Urquhart2014MNRAS} find that MSF clumps have larger column densities than their ``quiescent'' clumps by a factor of $\sim3$. \citet{Urquhart2014MNRAS} and \citet{He2015MNRAS} also report that clumps associated with H{\rmfamily\scshape{ii}}\ regions have larger column densities than the remainder of the star-forming clumps. This result contradicts our finding that H{\rmfamily\scshape{ii}}\ region sources have typically lower column densities compared with the Protostellar sample (see Table \ref{tab-means} and Figure \ref{fig-cdf}). To examine this disagreement in more detail, we analyze the intersection between the MSF and the MALT90 samples. There are 515 MSF clumps in common with the MALT90 sample that are covered by Hi-GAL: 285 classified as H{\rmfamily\scshape{ii}}\ regions, 204 as Protostellar, 22 as PDR, and 4 as Quiescent. We calculate that these 515 sources have a mean $\bar{T_d}$ of 24 K and a mean $\log N_{g,{\rm P}}$ of $-0.63$. The temperature is consistent with the H{\rmfamily\scshape{ii}}\ region sample of MALT90, but the column densities are much higher. Within these 515 sources we find that, in agreement with \citet{Urquhart2014MNRAS}, those with centimeter wavelength emission have significantly higher column densities ($\log N_{g,{\rm P}}=-0.59$) and temperatures ($26$ K) compared with the rest ($\log N_{g,{\rm P}}=-0.67$ and $\bar{T_d}=22$ K).
The finding by \citet{Urquhart2014MNRAS} that the sources associated with H{\rmfamily\scshape{ii}}\ regions have the largest column densities, in disagreement with our results, most likely arises from differences in the classification criteria. \citet{Urquhart2014MNRAS} report centimeter radio emission arising from ionized gas toward 45 of the 204 common sources we classify as Protostellar, while 94 of the 285 clumps we classify as H{\rmfamily\scshape{ii}}\ regions were observed by the CORNISH survey at 5 GHz \citep[$\sim2$ mJy sensitivity,][]{Hoare2012PASP} and were not detected. These are relatively few sources, and exchanging their classifications (Protostellar by H{\rmfamily\scshape{ii}}\ region and vice versa) does not modify the trends described in the previous section. However, if they reflect an underlying fraction of misclassified sources between the Protostellar and H{\rmfamily\scshape{ii}}\ region groups, they might change the statistics. Conversely, we detect embedded HMYSOs in 641 ATLASGAL sources that are treated as ``quiescent'' in \citet{Urquhart2014MNRAS}, in part due to the better sensitivity and angular resolution of MIPS compared to MSX and WISE. It is likely that the ``quiescent'' sample of \citet{Urquhart2014MNRAS} contains no currently young high-mass stars, but does contain a large fraction of sources with intermediate mass star formation activity, and some of these sources are also associated with PDRs. In summary, we expect the Quiescent sample from MALT90 to be more truly devoid of star formation than the non-MSF ATLASGAL clumps, while at the same time, several of our Protostellar clumps are probably associated with H{\rmfamily\scshape{ii}}\ regions, which are more efficiently detected using radio centimeter observations.
{\subsubsection{Temperature and Column Density Contrasts}\label{sec-cont}} We analyze spatial variations of $T_d$ and $N_g$ by comparing their values at the peak intensity position with the average value in the clump. For each MALT90 clump, we define the temperature contrast and log-column density contrast as $\Delta T=\bar{T_d}-T_{d,{\rm P}}$ and $\Delta\log N_g=\log\left(\bar{N_g}/N_{g,{\rm P}}\right)$, respectively. Table \ref{tab-cont} lists the means and medians of the temperature and log-column density contrasts. Table \ref{tab-cont} also gives 95\% confidence intervals\footnote{The upper limit of the CI is the lowest value $u$ larger than the observed median for which we can reject the null hypothesis that $u$ is the true population median with a significance of 5\%. The lower limit of the CI is calculated similarly.} (CIs) for the medians of $\Delta T$ and $\Delta\log N_g$ per evolutionary stage, determined using the sign test \citep[Section 12.2]{Ross2004ipses}. They were calculated using the task \texttt{SIGN.test} from the R statistical suite\footnote{www.r-project.org} (version 3.1.1). The sign test is not very powerful, but it has the advantage of being non-parametric and, in contrast to the Wilcoxon test (for example), it does not assume that the distributions have the same shape. A negative $\Delta\log N_g$ indicates that the clump has a centrally peaked column density profile, with the absolute value of $\Delta\log N_g$ being a measure of its steepness. As a reference, a critical Bonnor-Ebert sphere is characterized by $\Delta\log N_g=-0.5515$. Perhaps unsurprisingly, the $\Delta\log N_g$ means, medians, and CIs are always negative, indicating that most of the clumps are centrally peaked. We find that the medians are significantly different between evolutionary stages, with no overlap in the CIs.
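A sign-test CI for a median, like those in Table \ref{tab-cont}, can be computed from order statistics and binomial quantiles alone; the following is a minimal Python analogue of \texttt{SIGN.test} (a sketch; implementations differ slightly in how they pick or interpolate the order statistics):

```python
import numpy as np
from scipy.stats import binom

def sign_test_median_ci(data, conf=0.95):
    """Distribution-free CI for the population median via the sign test.

    Under the null hypothesis, the number of points below the true median is
    Binomial(n, 1/2), so the CI endpoints are order statistics chosen at the
    corresponding binomial quantiles."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    k = int(binom.ppf((1.0 - conf) / 2.0, n, 0.5))  # lower binomial quantile
    return x[k], x[n - 1 - k]
```

For example, for 100 equally spaced values the returned 95\% interval contains the sample median, as expected for a valid CI.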
The H{\rmfamily\scshape{ii}}\ region clumps are those associated with the steepest column density profiles, followed by the Protostellar and the PDR clumps. Clumps in the Quiescent evolutionary stage are associated with the smoothest column density profiles. The temperature contrasts are also distinct for different evolutionary stages. A positive $\Delta T$ indicates that the dust temperature increases away from the clump center, that is, the dust temperature at the peak column density position ($T_{d,{\rm P}}$) is lower than the average temperature ($\bar{T_d}$). On the other hand, $\Delta T$ is negative for decreasing temperature profiles. $\Delta T$ is positive for Quiescent clumps and PDRs, consistent with zero for the Protostellar sources (temperature at the peak similar to the average temperature), and negative (peak column density position warmer than the average) for the H{\rmfamily\scshape{ii}}\ region sample. \subsection{\boldmath Mid-IR Classification versus $T_d$, $N_g$, $\Delta T_d$, and $\Delta\log N_g$} The previous sections have presented the differences between the temperatures and column densities of the MALT90 groups. These differences are qualitatively consistent with the evolutionary sequence sketched in \citet{Jackson2013PASA}, which starts with the Quiescent stage and proceeds through the Protostellar, H{\rmfamily\scshape{ii}}\ region, and PDR evolutionary stages. As Figure \ref{fig-boxNT} shows, Quiescent clumps are the coldest, in agreement with the expectation that these clumps are starless and contain no embedded young high-mass stars. The far-IRDC subsample of the Quiescent population is colder and denser on average compared to the rest of the Quiescent clumps, and it might represent a late pre-stellar phase just before the onset of star formation. The mean temperature and $\log N_{g,{\rm P}}$ of the far-IRDC subsample are $\sim15$ K and $-0.78$, respectively.
To establish what fraction of the Quiescent clumps might evolve to form high-mass stars, for now we use criteria defined by previous authors based on distance-independent information, such as the column density; a more complete analysis will be presented in Contreras et al.\ (in preparation). \citet{Lada2010ApJ} and \citet{Heiderman2010ApJ} propose that the star formation rate in a molecular cloud is proportional to the mass of gas with column densities in excess of $\sim$120~\mbox{\,$M_{\odot}$}~pc$^{-2}$ ($\sim2.43\times10^{-2}$ g cm$^{-2}$). Since there is considerable overlap in column density between MALT90 clumps that have different levels of star formation activity, we start by assuming that this relation gives the average star formation rate over the timescale of 2 Myr adopted by \citet{Lada2010ApJ} and \citet{Heiderman2010ApJ}. We find that 98\% of the Quiescent clumps have $\bar{N_g}>120$~\mbox{\,$M_{\odot}$}~pc$^{-2}$, including all of the far-IRDCs, which suggests that most of these clumps will support some level of star formation activity in the future. \citet{Urquhart2014MNRAS} propose a column density threshold of $0.05$~g~cm$^{-2}$ for what they call ``effective'' high-mass star formation. This same threshold was recently proposed by \citet{He2015MNRAS} based on a study of $405$ ATLASGAL sources. Of the Quiescent sample, 78\% of the clumps have an average column density above this threshold, with the percentage increasing to 92\% for the far-IRDCs. Based on these criteria, we conclude that virtually all Quiescent clumps will develop at least low-mass star formation activity and that a large fraction ($>70\%$) will form high-mass stars. On the other hand, \citet{Lopez-Sepulcre2010AA} suggest a third column density threshold based on the observed increase of molecular outflow activity for clumps with column densities in excess of $0.3$ g cm$^{-2}$.
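As a cross-check, the quoted $\sim2.43\times10^{-2}$ g cm$^{-2}$ appears to correspond to the 116~\mbox{\,$M_{\odot}$}~pc$^{-2}$ value originally given by \citet{Lada2010ApJ} (rounded to $\sim$120 above); a quick sketch of the unit conversion, with assumed values for the physical constants:

```python
M_SUN_G = 1.989e33   # solar mass [g] (assumed constant)
PC_CM = 3.0857e18    # parsec [cm] (assumed constant)

def msun_pc2_to_g_cm2(sigma):
    """Convert a surface density from Msun pc^-2 to g cm^-2."""
    return sigma * M_SUN_G / PC_CM**2

# 116 Msun pc^-2 -> ~2.42e-2 g cm^-2, matching the quoted ~2.43e-2;
# the rounded 120 Msun pc^-2 would give ~2.51e-2 g cm^-2 instead.
threshold = msun_pc2_to_g_cm2(116.0)
```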
This column density is significantly larger than the previous thresholds, and only 3\% and 6\% of the Quiescent and far-IRDC populations, respectively, have larger average column densities. However, half of the clump sample of \citet{Lopez-Sepulcre2010AA} have diameters $< 35\arcsec$ (the beam size of our column density maps) and more than a third have masses $< 200 \mbox{\,$M_{\odot}$}$, which indicates that the $0.3$ g cm$^{-2}$ threshold may be pertinent for more compact structures than the clumps considered in this work. The temperature and temperature contrast of the Quiescent clumps are qualitatively consistent with equilibrium between heating by the interstellar radiation field and dust and gas cooling \citep{Bergin2007ARA&A}. We find that Quiescent clumps are the coldest among the evolutionary stages, but they are typically warmer ($\sim17$ K) than expected from thermal equilibrium between dust cooling and cosmic ray heating alone ($T_d\sim10$ K). We also find that the central regions of the Quiescent clumps are in general colder than their external layers ($\Delta T$ positive). These characteristics are consistent with Quiescent clumps being heated by a combination of external radiation and cosmic rays. The Quiescent sources also have the flattest density structure, with the {largest} (least negative) $\Delta \log N_g$ among all the evolutionary stages. This is similar to the behavior found by \citet{Beuther2002ApJ}, that is, the earliest stages of high-mass star formation are characterized by flat density profiles that become steeper as the clumps collapse and star formation ensues. The Protostellar clump sample can be distinguished from Quiescent clumps based on their column density and dust temperature. Protostellar clumps have larger column densities ($\sim0.2$ g cm$^{-2}$) and are slightly warmer ($\sim19$~K). The central temperatures of the Protostellar clumps also increase and become comparable to the temperature in their outer regions ($\Delta T\cong0$).
These characteristics indicate that Protostellar clumps have an internal energy source provided by the HMYSOs. According to the results presented by \citet{Hoq2013ApJ}, there is no significant difference in the distribution of masses between the Quiescent and Protostellar populations. If we assume that this is also the case for the sample presented in this work \citep[which will be confirmed in upcoming publications, see also][]{He2015MNRAS}, then the most likely reason for the larger column densities of the Protostellar sample compared with the Quiescent sample is gravitational contraction. Because contraction develops faster in the densest, central regions, we expect the column density profiles to become steeper at the center of the clump. This is consistent with the observed decrease of $\Delta \log N_g$ for the Protostellar clumps compared with the Quiescent clumps. The H{\rmfamily\scshape{ii}}\ region sample is associated with the most negative temperature and column density contrasts {(median of $\Delta T=-0.33$ K and $\Delta \log N_g=-0.42$)} compared with any other population, which indicates that H{\rmfamily\scshape{ii}}\ region clumps are very concentrated and have a strong central heating source. This picture is consistent with the presence of a young high-mass star in the center of the clump. The slight decrease of the peak column density compared with the Protostellar phase {could} be explained by the expansion induced by the development of the H{\rmfamily\scshape{ii}}\ region and by the fraction of gas mass that has been locked up in newly formed stars. Finally, PDR clumps have the lowest column densities and highest temperatures among the four evolutionary stages. Their central regions are also colder than their outer regions.
PDR clumps are possibly the remnants of molecular clumps that have already been disrupted by the high-mass stars' winds, strong UV radiation fields, and the expansion of H{\rmfamily\scshape{ii}}\ regions. These molecular remnants are being illuminated and heated from the outside by the newly formed stellar population, but are probably neither dense nor massive enough to sustain further high-mass star formation. {\section{SUMMARY}\label{sec-sum}} We determined dust temperature and column density maps toward 3218\ molecular clumps. This number corresponds to more than 99\% of the ATLASGAL sources that form the MALT90 sample. We fit greybody models to far-IR images taken at 160, 250, 350, 500, and 870 \um. This catalog represents the largest sample of high-mass clumps for which both dust temperature and column density have been simultaneously estimated. We summarize the main results and conclusions as follows. \begin{enumerate} \item{The average dust temperature increases monotonically along the proposed evolutionary sequence, with median temperatures ranging from $16.1$ K for the Quiescent clumps to $27.4$ K for the clumps associated with PDRs. This confirms that the MALT90 mid-IR classification broadly captures the physical state of the molecular clumps.} \item{The highest column densities are associated with the Protostellar clumps, that is, those that show mid-IR signs of star formation activity preceding the development of an H{\rmfamily\scshape{ii}}\ region. The average peak column density of the Protostellar clumps is 0.2~g~cm$^{-2}$, which is about 50\% higher than the peak column densities of clumps in the other evolutionary stages. We interpret this as evidence of gravitational contraction or possibly that Protostellar clumps are more massive.
The latter possibility will be analyzed in future work (Contreras et al., in preparation).} \item{The radial temperature gradients within the clumps change from positive (higher temperatures in the outer layers of the clump), to null (no dust temperature gradient), to negative (higher temperatures toward the center of the clump) values for the Quiescent, Protostellar, and H{\rmfamily\scshape{ii}}\ region clumps, respectively. Quantitatively, the mean difference between the average ($\bar{T_d}$) and the central ($T_{d,{\rm P}}$) clump temperatures is $+0.7$, $-0.1$, and $-0.6$ K for the Quiescent, Protostellar, and H{\rmfamily\scshape{ii}}\ region samples, respectively. This confirms that Quiescent clumps are being externally heated and that Protostellar and H{\rmfamily\scshape{ii}}\ region clumps have an internal embedded energy source.} \item{The ratio between the peak and average column density for each clump category ranges between 1.8 and 2.6. The flattest column density profiles are associated with the Quiescent population, becoming steeper for the Protostellar {and} H{\rmfamily\scshape{ii}}\ region clumps. This is qualitatively consistent with the hypothesis of evolution through gravitational contraction, in which the contrast is a measure of evolutionary progress.} \item{The PDR clump population is characterized by low column densities ($\sim0.09$~g~cm$^{-2}$), high temperatures ($27$ K), and a positive radial temperature gradient (colder inner regions, warmer dust on the outside). We interpret this as evidence that these sources are the externally illuminated remnants of molecular clumps already disrupted by high-mass star formation feedback.} \item{We identify $83$ far-IR dark clouds, that is, Quiescent clumps that appear in absorption at 70 \um\ against the Galactic background. These clumps are cooler and have higher column densities compared to the remainder of the Quiescent population.
Therefore, they are likely in the latest stage of pre-stellar contraction or they may represent a more massive subsample of the Quiescent clumps.} \end{enumerate} \acknowledgements{A.E.G. and H.A.S. acknowledge support from NASA Grants NNX12AI55G and NNX10AD68G. A.E.G. acknowledges partial support from CONICYT through project PFB-06 and FONDECYT grant 3150570. J.M.J. acknowledges support from NASA Grant NNX12AE42G and NSF grant AST-1211844. We thank G.\ Garay and an anonymous referee for careful reading and helpful comments.}
\section*{Introduction and notations} In their ICM talk \cite{SU06}, Skinner and Urban outline a program to connect the order of vanishing of the $L$-functions of certain polarized regular motives with the rank of the associated Bloch-Kato Selmer groups. Their strategy is to deform the motives along certain $p$-adic eigenfamilies of Galois representations to construct the expected extensions. They introduce the notion of \emph{finite slope families} to encode the local properties of these $p$-adic families. One may view finite slope families as generalizations of the $p$-adic families arising from the Coleman-Mazur eigencurve, which are formulated as weakly refined families by Bellaiche-Chenevier \cite{BC06}, in the sense that a finite slope family may have \emph{multiple} constant Hodge-Tate weights $k_1,\dots, k_r\in\mathbb{Z}$ and a Zariski dense subset of crystalline points which have prescribed crystalline periods with Hodge-Tate numbers $k_1,\dots,k_r$. Skinner and Urban then use the (unproved) analytic continuation of these crystalline periods to deduce that the expected extensions lie in the Selmer groups. More recently, Harris, Lan, Taylor and Thorne constructed Galois representations for (non-self dual) regular algebraic cuspidal automorphic representations of $\mathrm{GL}(n)$ over CM fields \cite{HLTT}. Their construction also involves $p$-adic deformations, and it turns out that these Galois representations live in certain $p$-adic families which generalize Skinner-Urban's finite slope families by replacing crystalline periods with semi-stable periods. Furthermore, to show that the Galois representations constructed by them are geometric, as predicted by the philosophy of the Langlands correspondence, one needs the analytic continuation of semi-stable periods for these families.
In this paper, we use the notion of finite slope families to encode the local properties of the $p$-adic families of Galois representations in \cite{HLTT}; this generalizes the original definition of Skinner-Urban. Our main result is then to prove the analytic continuation of semi-stable periods for such families. This provides a necessary ingredient for Skinner-Urban's ICM program. Moreover, we recently learned from Taylor that, in an ongoing project of Ila Varma, she will establish the aforementioned geometric properties of Galois representations based on the results of this paper and a previous paper by one of us \cite{L12}. We also note that Shah recently proved some results about interpolating Hodge-Tate and de Rham periods in families of $p$-adic Galois representations which may be applied to some related situations \cite{S}. As the $p$-adic families over the Coleman-Mazur eigencurve are special cases of finite slope families, our result generalizes the well-known result of Kisin on the analytic continuation of crystalline periods for such families \cite{Ki03}. However, even in the crystalline case, our strategy and techniques are completely different from his. In fact, in Kisin's original work, as well as in the recent enhancement made by us \cite{L12}, one crucially relies on the fact that the families have only one constant Hodge-Tate weight, which is obviously not the case for general finite slope families. On the other hand, the work presented in this paper is inspired by the works of Berger and Colmez on families of de Rham representations \cite{BC07} and of Kedlaya, Pottharst and Xiao on the cohomology of families of $\m$-modules \cite{KPX}. For a finite slope family, by adapting the techniques of \cite{KPX}, we first cut out a sub-family of $\m$-modules, which is expected to be generated by the desired semi-stable periods, after making a proper and surjective base change.
We then develop a theory of families of Hodge-Tate and de Rham $\m$-modules with bounded Hodge-Tate weights. Finally we prove some analogues of the results of Berger-Colmez for such families of $\m$-modules, and use them to conclude that the sub-family of $\m$-modules is semi-stable. In the remainder of this introduction, we give more precise statements of our results. We fix a finite extension $K$ of $\Q$. Let $K_0$ be the maximal unramified sub-extension of $K$, and let $f=[K_0:\Q]$. \begin{defn}\label{def:fs} Let $X$ be a reduced and separated rigid analytic space over $K$. A \emph{finite slope family} of $p$-adic representations of dimension $d$ over $X$ is a locally free coherent $\OO_X$-module $V_X$ of rank $d$ equipped with a continuous $G_K$-action, together with the following data: \begin{enumerate} \item[(1)]a positive integer $c$, \item[(2)]a monic polynomial $Q(T)\in\OO_X(X)[T]$ of degree $m$ with unit constant term, \item[(3)]a subset $Z$ of $X$ such that for all $z$ in $Z$, $V_z$ is semi-stable with non-positive Hodge-Tate weights, and for all $B\in\mathbb{Z}$ the set of $z$ in $Z$ such that $V_z$ has $d-c$ Hodge-Tate weights less than $B$ is Zariski dense in $X$, \item[(4)]for $z\in Z$, a $K_0\otimes_{\Q}k(z)$-direct summand $\mathcal{F}_{z}$ of $D^+_{\mathrm{st}}(V_z)$ which is free of rank $c$ and stable under $\varphi$ and $N$ such that $\varphi^f$ has characteristic polynomial $Q(z)(T)$ and all Hodge-Tate weights of $\mathcal{F}_z$ lie in $[-b,0]$ for some $b$ which is independent of $z$. \end{enumerate} \end{defn} Our main results are as follows. \begin{theorem}\label{thm:main} Let $V_X$ be a finite slope family over $X$. Then there exists a surjective proper morphism $X'\ra X$ so that $(K\otimes_{K_0}D^+_{\mathrm{st}}(V_{X'}))^{Q(\varphi)=0}$ has a rank $c$ locally free coherent $K_0\otimes_{\Q}\OO_{X'}$-submodule which specializes to a rank $c$ free $K_0\otimes_{\Q}k(x)$-submodule in $\D_\rig^\dag(V_x)$ for any $x\in X'$.
As a consequence, $D^+_{\mathrm{st}}(V_x)^{Q(\varphi)(x)=0}$ has a free $K_0\otimes_{\Q}k(x)$-submodule of rank $c$ for any $x\in X$. \end{theorem} The following corollary is clear. \begin{cor} Let $V_X$ be a finite slope family over $X$. If $V_z$ is crystalline for any $z\in Z$, then there exists a surjective proper morphism $X'\ra X$ so that $(K\otimes_{K_0}D^+_{\mathrm{crys}}(V_{X'}))^{Q(\varphi)=0}$ has a rank $c$ locally free coherent $K_0\otimes_{\Q}\OO_{X'}$-submodule which specializes to a rank $c$ free $K_0\otimes_{\Q}k(x)$-submodule in $\D_\rig^\dag(V_x)$ for any $x\in X'$. As a consequence, $D^+_{\mathrm{crys}}(V_x)^{Q(\varphi)(x)=0}$ has a free $K_0\otimes_{\Q}k(x)$-submodule of rank $c$ for any $x\in X$. \end{cor} \section*{Acknowledgements} Thanks to Christopher Skinner, Richard Taylor and Ila Varma for useful communications. We especially thank Richard Taylor for suggesting a more concise definition of finite slope families. \section{Families of $\m$-modules} \begin{defn} Let $A$ be a Banach algebra over $\Q$. For $s>0$, a \emph{$\varphi$-module} over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A$ is a finite projective $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A$-module $D_A^s$ equipped with an isomorphism $$\varphi^*D_A^s\cong D_A^s\otimes_{\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A}\mathbf{B}^{\dag,ps}_{\rig,K}\widehat{\otimes}_{\Q}A.$$ A \emph{$\varphi$-module} $D_A$ over $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}A$ is the base change to $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}A$ of a $\varphi$-module $D_A^s$ over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A$ for some $s>0$. A \emph{$\m$-module} over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A$ is a $\varphi$-module $D_A^s$ over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A$ equipped with a commuting semilinear continuous action of $\Gamma$. 
A \emph{$\m$-module} $D_A$ over $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}A$ is the base change to $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}A$ of a $\m$-module $D_A^s$ over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A$ for some $s>0$. \end{defn} \begin{notation} For a morphism $A\ra B$ of Banach algebras over $\Q$, we denote by $D^s_B$ (resp. $D_B$) the base change of $D^s_A$ (resp. $D_A$) to $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}B$ (resp. $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}B$). In the case when $A=S$ is an affinoid algebra over $\Q$ and $x\in M(S)$, we denote $D^s_{k(x)}$ (resp. $D_{k(x)}$) by $D_x^s$ (resp. $D_x$) instead. \end{notation} Let $S$ be an affinoid algebra over $\Q$. Recall that for sufficiently large $s$, a vector bundle over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}S$ consists of one finite flat module $D_S^{[s_1,s_2]}$ over each ring $\mathbf{B}^{[s_1,s_2]}_K\widehat{\otimes}_{\Q}S$ with $s\leq s_1\leq s_2$, together with isomorphisms \[ D_S^{[s_1,s_2]}\otimes_{\mathbf{B}^{[s_1,s_2]}_{K}\widehat{\otimes}_{\Q}S} \mathbf{B}^{[s_1',s_2']}_{K}\widehat{\otimes}_{\Q}S\cong D_S^{[s'_1,s'_2]} \] for all $s\leq s_1'\leq s_1\leq s_2\leq s_2'$ satisfying the cocycle conditions. A $\varphi$-bundle over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}S$ is a vector bundle $(D_S^{[s_1,s_2]})_{s\leq s_1\leq s_2}$ over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}S$ equipped with isomorphisms $\varphi^*D_S^{[s_1,s_2]}\cong D_S^{[ps_1,ps_2]}$ for all $s/p\leq s_1\leq s_2$ satisfying the obvious compatibility conditions. When $s$ is sufficiently large, by \cite[Proposition 2.2.7]{KPX}, the natural functor from the category of $\varphi$-modules over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}S$ to the category of $\varphi$-bundles over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}S$ is an equivalence of categories. 
Note that, by its definition, one can glue $\varphi$-bundles over separated rigid analytic spaces. Therefore this equivalence of categories enables us to introduce the following definition. \begin{defn} Let $X$ be a separated rigid analytic space over $\Q$. A family of $\m$-modules $D_X$ over $X$ is a compatible family of $\m$-modules $D_S$ over $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}S$ for each affinoid subdomain $M(S)$ of $X$. \end{defn} The following theorem follows from \cite{BC07}, \cite{KL10} and \cite{L12}. \begin{theorem} Let $A$ be a Banach algebra over $\Q$, and let $V_A$ be a finite locally free $A$-linear representation of $G_K$. Then there is a $\m$-module $\D_\rig^\dag(V_A)$ over $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}A$ functorially associated to $V_A$. The rule $V_A\mapsto \D_\rig^\dag(V_A)$ is fully faithful and exact, and it commutes with base change in $A$. \end{theorem} Let $A$ be a Banach algebra over $K_0$. Recall that one has a canonical decomposition \[ A\otimes_{\Q}K_0\cong\prod_{\sigma\in\mathrm{Gal}(K_0/\Q)}A_{\sigma} \] where each $A_{\sigma}$ is the base change of $A$ by the automorphism $\sigma$. Furthermore, the $\mathrm{Gal}(K_0/\Q)$-action permutes the $A_\sigma$'s in such a way that $\tau(A_\sigma)=A_{\tau\sigma}$. For any $a\in A^\times$, we equip $A\otimes_{\Q}{K_0}$ with a $\varphi\otimes 1$-semilinear action $\varphi$ by setting \[ \varphi((x_1,x_{\varphi},\dots, x_{\varphi^{f-1}}))=(ax_{\varphi^{f-1}},x_1,\dots,x_{\varphi^{f-2}}) \] where $x_{\sigma}\in A_{\sigma}$ for each $\sigma\in\mathrm{Gal}(K_0/\Q)$; we denote this $\varphi$-module by $D_a$. It is clear that the $\varphi$-action on $D_a$ satisfies $\varphi^f=1\otimes a$. We fix a uniformizer $\pi_K$ of $K$. \begin{defn} For any continuous character $\delta:K^\times\ra A^\times$, we associate to it a rank 1 $(\varphi,\Gamma)$-module $(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A)(\delta)$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A$ as follows.
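As a consistency check of the identity $\varphi^f=1\otimes a$, one can unwind the cyclic formula directly; for instance, for $f=2$,
\[
\varphi^2\big((x_1,x_{\varphi})\big)=\varphi\big((ax_{\varphi},\,x_1)\big)=(ax_1,\,ax_{\varphi})=a\cdot(x_1,x_{\varphi}),
\]
and in general each component returns to its slot after $f$ applications of $\varphi$, having picked up the factor $a$ exactly once.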
If $\delta|_{\OO_K^\times}=1$, we set $(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A)(\delta)=(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A) \otimes_{A\otimes_{\Q}{K_0}}D_{\delta(\pi_K)}$ where we equip $D_{\delta(\pi_K)}$ with the trivial $\Gamma$-action. For general $\delta$, we write $\delta=\delta'\delta''$ such that $\delta'(\pi_K)=1$ and $\delta''|_{\OO_K^\times}=\mathrm{id}$. We view $\delta'$ as an $A$-valued character of $W_K$, and extend it continuously to a character of $G_K$. We then set \[ (\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A)(\delta)=\D_\rig^\dagger(\delta') \otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A} (\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A)(\delta''). \] For a $(\varphi,\Gamma)$-module $D_A$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A$, we put $D_A(\delta)=D_A\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A} (\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A)(\delta)$. Let $X$ be a separated rigid analytic space over $\Q$. For a continuous character $\delta:K^\times\ra \OO(X)^\times$ and a family of $\m$-modules $D_X$ over $X$, we define the families of $\m$-modules $(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}\OO_X)(\delta)$ and $D_X(\delta)$ by gluing $(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S)(\delta)$ and $D_S(\delta)$ for all affinoid subdomains $M(S)$ respectively. \end{defn} \section{Cohomology of families of $\m$-modules} Let $\Delta_K$ be the $p$-torsion subgroup of $\Gamma$. Choose $\gamma_K\in\Gamma_K$ whose image in $\Gamma/\Delta_K$ is a topological generator. \begin{defn} Let $S$ be an affinoid algebra over $\Q$.
For a $\m$-module $D_S$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$, we define the Herr complex $C^\bullet_{\varphi,\gamma_K}(D_S)$ of $D_S$, concentrated in degrees $[0,2]$, as follows: \[ C^{\bullet}_{\varphi,\gamma_K}(D_S)= [D_S^{\Delta_K}\stackrel{d_{1}}{\longrightarrow}D_S^{\Delta_K}\oplus D_S^{\Delta_K} \stackrel{d_{2}}{\longrightarrow}D_S^{\Delta_K}] \] with $d_1(x) = ((\gamma_K - 1)x, (\varphi - 1)x)$ and $d_2(x,y) = (\varphi - 1)x - (\gamma_K - 1)y$. One shows that this complex is independent of the choice of $\gamma_K$ up to canonical quasi-isomorphism. Its cohomology groups are denoted by $H^\bullet(D_S)$. \end{defn} By the main result of \cite{KPX}, one knows that $H^i(D_S)$ is a finite $S$-module and commutes with flat base change in $S$. This enables a cohomology theory for families of $\m$-modules over general rigid analytic spaces. \begin{defn} Let $X$ be a separated rigid analytic space over $\Q$, and let $D_X$ be a family of $\m$-modules over $X$. We define $H^\bullet(D_X)$ to be the cohomology of the complex \[ C^{\bullet}_{\varphi,\gamma_K}(D_X)= [D_X^{\Delta_K}\stackrel{d_{1}}{\longrightarrow}D_X^{\Delta_K}\oplus D_X^{\Delta_K} \stackrel{d_{2}}{\longrightarrow}D_X^{\Delta_K}] \] with $d_1(x) = ((\gamma_K - 1)x, (\varphi - 1)x)$ and $d_2(x,y) = (\varphi - 1)x - (\gamma_K - 1)y$. For each $0\leq i\leq 2$, $H^i(D_X)$ is the coherent $\OO_X$-module obtained by gluing $H^i(D_S)$ for all affinoid subdomains $M(S)$ of $X$. \end{defn} As a consequence of the finiteness of the cohomology of families of $\m$-modules, a standard argument shows that locally on $X$, the complex $C^{\bullet}_{\varphi,\gamma_K}(D_X)$ is quasi-isomorphic to a complex of locally free coherent sheaves concentrated in degrees $[0,2]$. This allows us to flatten the cohomology of families of $\m$-modules by blowing up the base $X$. The following lemma is a rearrangement of some arguments in \cite[\S6]{KPX}.
\begin{lemma}\label{lem:modification} Let $X$ be a reduced, separated and irreducible rigid analytic space over $K$, and let $D_X$ be a family of $\m$-modules of rank $d$ over $X$. Then the following are true. \begin{enumerate} \item[(1)]There exists a proper birational morphism $\pi:X'\ra X$ of reduced rigid analytic spaces over $K$ so that $H^0(D_{X'})$ is flat and $H^i(D_{X'})$ has Tor-dimension $\leq 1$ for each $i=1,2$. \item[(2)]Suppose that $D'_{X}$ is a family of $\m$-modules over $X$ of rank $d'$, and that $\lambda: D'_X\ra D_X$ is a morphism between them such that for any $x\in X$, the image of $\lambda_x$ is a $\m$-submodule of rank $d$ of $D_x$. Then there exists a proper birational morphism $\pi:X'\ra X$ of reduced rigid analytic spaces over $K$ so that the cokernel of $\pi^*\lambda$ has Tor-dimension $\leq 1$. \end{enumerate} \end{lemma} \begin{proof} The upshot is that for a bounded complex $(C^\bullet,d^\bullet)$ of locally free coherent sheaves on $X$, there exists a blow up $\pi:X'\ra X$, which depends only on the quasi-isomorphism class of $(C^\bullet,d^\bullet)$, so that $\pi^*d^i$ has flat image for each $i$. Furthermore, the construction of $X'$ commutes with dominant base change in $X$ (see \cite[Corollary 6.2.5]{KPX} for more details). Thus for (1), we can construct $X'$ locally and then glue. For (2), let $Q_X$ denote the cokernel of $\lambda$. For any $x\in X$, since the image of $\lambda_x$ is a $\m$-submodule of rank $d$, by \cite[Lemma 5.3.1]{L12}, we get that $Q_x$ is killed by a power of $t$. Now let $M(S)$ be an affinoid subdomain of $X$, and suppose that $D_S^s$ and $D'^s_S$ are defined for some suitable $s>0$. For $r>s$, set $Q_S^{[s,r]}=D^{[s,r]}_S/\lambda(D'^{[s,r]}_S)$. Since for any $x\in M(S)$, the fiber of $Q_S^{[s,r]}$ at $x$ is killed by a power of $t$, we get that $Q_S^{[s,r]}$ is killed by $t^k$ for some $k>0$. This yields that $Q_S^{[s,r]}$ is a finite $S$-module.
Now we apply \cite[Corollary 6.2.5(1)]{KPX} to a finite presentation of $Q_S^{[s,ps]}$ to get a blow up $Y$ of $M(S)$ so that the pullback of $Q_S^{[s,ps]}$ has Tor-dimension $\leq1$. Using the fact that $(\varphi^n)^*Q_S^{[s,ps]}\cong Q_S^{[p^ns,p^{n+1}s]}$, we see that $Y$ is also the blow up obtained by applying \cite[Corollary 6.2.5(1)]{KPX} to a finite presentation of $Q_S^{[s,p^{n+1}s]}$ for any positive integer $n$. It therefore follows that for any $r>s$, the pullback of $Q_S^{[s,r]}$ has Tor-dimension $\leq 1$; hence the pullback of $Q_S$ has Tor-dimension $\leq 1$. Furthermore, the blow ups for all affinoid subdomains $M(S)$ glue to form a blow up $X'$ of $X$ which satisfies the desired condition. \end{proof} \begin{lemma}\label{lem:ker-birational} Let $X$ be a reduced, separated and irreducible rigid analytic space over $K$. Let $D'_X$ and $D_{X}$ be families of $\m$-modules over $X$ of ranks $d'$ and $d$ respectively, and let $\lambda: D'_X\ra D_X$ be a morphism between them. Suppose that for any $x\in X$, the image of $\lambda_x$ is a $\m$-submodule of rank $d$ of $D_x$. Then there exists a proper birational morphism $\pi:X'\ra X$ of reduced rigid analytic spaces over $K$ such that the kernel of $\pi^*\lambda$ is a family of $\m$-modules of rank $d'-d$ over $X'$, and there exists a Zariski open dense subset $U\subset X'$ such that $(\ker(\pi^*\lambda))_x=\ker((\pi^*\lambda)_x)$ for any $x\in U$. \end{lemma} \begin{proof} Let $Q_X$ be the cokernel of $\lambda$. By the previous lemma, after replacing $X$ by a proper birational modification, we may suppose that $Q_X$ has Tor-dimension $\leq1$. Now let $P_X$ denote the kernel of $\lambda$. For any $x\in X$, the Tor spectral sequence computing the cohomology of the complex $[D'_{X}\stackrel{\lambda}{\longrightarrow}D_{X}]\otimes^{\mathbf{L}}_{\OO_{X}}k(x)$ gives rise to a short exact sequence \[ 0\longrightarrow P_x\longrightarrow\ker(\lambda_x)\longrightarrow\mathrm{Tor}_1(Q_X,k(x))\longrightarrow0.
\] Since the image of $\lambda_x$ is a $\m$-module of rank $d$, $\ker(\lambda_x)$ is a $\m$-module of rank $d'-d$. Since $Q_X$ is killed by a power of $t$ locally on $X$, we get that the last term of the exact sequence is killed by a power of $t$. This yields that $P_x$ is a $\m$-module of rank $d'-d$. We therefore conclude that $P_X$ is a family of $\m$-modules of rank $d'-d$ over $X$ by \cite[Corollary 2.1.9]{KPX}. Furthermore, since $Q_X$ has Tor-dimension $\leq1$, by \cite[Lemma 6.2.7]{KPX}, we get that the set of $x\in X$ for which $\mathrm{Tor}_1(Q_X,k(x))\neq0$ forms a nowhere dense Zariski closed subset of $X$; this yields the rest of the lemma. \end{proof} The following proposition modifies part of \cite[Theorem 6.2.9]{KPX}. \begin{prop}\label{prop:cohomology} Let $X$ be a reduced, separated and irreducible rigid analytic space over $K$. Let $D_X$ be a family of $\m$-modules of rank $d$ over $X$, and let $\delta:K^\times\ra \OO(X)^\times$ be a continuous character. Suppose that there exist a Zariski dense subset $Z$ of closed points of $X$ and a positive integer $c\leq d$ such that for every $z\in Z$, $H^0(D_z^{\vee}(\delta_z))$ is a $c$-dimensional $k(z)$-vector space. Then there exists a proper birational morphism $\pi:X'\ra X$ of reduced rigid analytic spaces over $K$ and a morphism $\lambda: D_{X'}\ra M_{X'}=(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}\OO_{X'})(\delta)\otimes_{\OO_{X'}}L$ of $\m$-modules, where $L$ is a locally free coherent $\OO_{X'}$-module of rank $c$ equipped with trivial $\varphi,\Gamma$-actions, such that \begin{enumerate} \item[(1)]for any $x\in X'$, the image of $\lambda_{x}$ is a $\m$-submodule of rank $c$; \item[(2)]the kernel of $\lambda$ is a family of $\m$-modules of rank $d-c$ over $X'$, and there exists a Zariski open dense subset $U\subset X'$ such that $(\ker\lambda)_x=\ker(\lambda_x)$ for any $x\in U$.
\end{enumerate} \end{prop} \begin{proof} Using Lemma \ref{lem:modification}, we first choose a proper birational morphism $\pi:X'\ra X$ with $X'$ reduced such that $N_{X'}=\pi^*(D^{\vee}_{X}(\delta))$ satisfies the conditions that $H^0(N_{X'})$ is flat and $H^i(N_{X'})$ has Tor-dimension $\leq 1$ for each $i=1,2$. Then for any $x\in X'$, the base change spectral sequence $E^{i,j}_2=\mathrm{Tor}_{-i}(H^j(N_{X'}),k(x))\Rightarrow H^{i+j}(N_x)$ gives a short exact sequence \[ 0\longrightarrow H^0(N_{X'})\otimes_{\OO_{X'}}k(x)\longrightarrow H^0(N_x)\longrightarrow \mathrm{Tor}_1(H^1(N_{X'}),k(x))\longrightarrow0. \] As $H^1(N_{X'})$ has Tor-dimension $\leq1$, by \cite[Lemma 6.2.7]{KPX}, the set of $x\in X'$ for which the last term of the above exact sequence does not vanish forms a nowhere dense Zariski closed subset $V$. For any $z\in\pi^{-1}(Z)\setminus V$, we deduce from the above exact sequence that $H^0(N_{X'})\otimes_{\OO_{X'}}k(z)$ is a $c$-dimensional $k(z)$-vector space. Since $H^0(N_{X'})$ is flat and $\pi^{-1}(Z)\setminus V$ is a Zariski dense subset of $X'$, we get that $H^0(N_{X'})$ is locally free of constant rank $c$. Let $L$ be its dual; then the natural map $(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}\OO_{X'})\otimes_{\OO_{X'}}H^0(N_{X'})\ra N_{X'}$ gives a map $\lambda:D_{X'}\ra M_{X'}=(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}\OO_{X'})(\delta)\otimes_{\OO_{X'}}L$. For any $x\in X'$, since the map $H^0(N_{X'})\otimes_{\OO_{X'}}k(x)\longrightarrow H^0(N_x)$ is injective, we get that the image of $\lambda_x$ is a rank $c$ $\m$-submodule of $M_x$. We thus conclude the proposition using the previous lemma. \end{proof} \section{Families of Hodge-Tate $\m$-modules} From now on, let $S$ be a reduced affinoid algebra over $K$. \begin{defn}\label{def:HT} Let $D_S$ be a $\m$-module of rank $d$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$.
For any positive integer $n$, if $D_S^{r_n}$ is defined, we set \[ \D^n_{\Sen}(D_S)=D_S^{r_n}\otimes_{\mathbf{B}^{\dag,r_n}_{\rig,K}\widehat{\otimes}_{\Q}S}(K_n\otimes_{\Q}S). \] We call $D_S$ \emph{Hodge-Tate with Hodge-Tate weights in $[a,b]$} if there exists a positive integer $n$ such that the natural map \begin{equation}\label{eq:def-HT} (\oplus_{a\leq i\leq b}\D^n_\Sen(D_S(-i)))^\Gamma\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[t,t^{-1}]\longrightarrow \oplus_{i\in\mathbb{Z}}\D_\Sen^n(D_S(-i)) \end{equation} is an isomorphism. We denote by $h_{HT}(D_S)$ the smallest $n$ which satisfies this condition, and we define $D_{\mathrm{HT}}(D_S)=(\oplus_{a\leq i\leq b}\D^{h_{HT}(D_S)}_\Sen(D_S(-i)))^\Gamma$. \end{defn} \begin{lemma}\label{lem:HT-inv} Let $D_S$ be a Hodge-Tate $\m$-module over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$. Then for any $n\geq h_{HT}(D_S)$, (\ref{eq:def-HT}) is an isomorphism and $\D_\Sen^n(D_S(-i))^{\Gamma}=\D_\Sen^{h_{HT}(D_S)}(D_S(-i))^{\Gamma}$ for any $i\in [a,b]$. As a consequence, we have $(\oplus_{a\leq i\leq b}\D_\Sen^n(D_S(-i)))^\Gamma=D_{\mathrm{HT}}(D_S)$. \end{lemma} \begin{proof} Tensoring with $(K_{n}\otimes_{\Q}S)[t,1/t]$ on both sides of the map \[ (\oplus_{a\leq i\leq b}\D^{h_{HT}(D_S)}_\Sen(D_S(-i)))^\Gamma\otimes_{K\otimes_{\Q}S}(K_{h_{HT}(D_S)} \otimes_{\Q}S)[t,t^{-1}]\longrightarrow \oplus_{i\in\mathbb{Z}}\D_\Sen^{h_{HT}(D_S)}(D_S(-i)), \] we get that the natural map \[ (\oplus_{a\leq i\leq b}\D^{h_{HT}(D_S)}_\Sen(D_S(-i)))^\Gamma\otimes_{K\otimes_{\Q}S}(K_{n}\otimes_{\Q}S)[t,t^{-1}]\longrightarrow \oplus_{i\in\mathbb{Z}}\D_\Sen^{n}(D_S(-i)) \] is an isomorphism. Taking $\Gamma$-invariants on both sides, we get \[ (\oplus_{a\leq i\leq b}\D^{h_{HT}(D_S)}_\Sen(D_S(-i)))^\Gamma=(\oplus_{a\leq i\leq b}\D^{n}_\Sen(D_S(-i)))^\Gamma. \] This yields the lemma.
\end{proof} \begin{remark} If $D_S$ is Hodge-Tate with weights in $[a,b]$, taking $\Gamma$-invariants on both sides of (\ref{eq:def-HT}), we see that $\D^n_\Sen(D_S(-i))^{\Gamma}=0$ for any $n\geq h_{HT}(D_S)$ and $i\notin [a,b]$. \end{remark} \begin{lemma}\label{lem:HT} If $D_S$ is a Hodge-Tate $\m$-module over $\mathbf{B}^\dag_{\rig,K}\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$, then for any morphism $S\ra R$ of affinoid algebras over $K$, $D_R$ is Hodge-Tate with weights in $[a,b]$ and $h_{HT}(D_R)\leq h_{HT}(D_S)$. Furthermore, the natural map $\D^n_\Sen(D_S(i))^\Gamma\otimes_{S}R\ra\D^n_\Sen(D_R(i))^\Gamma$ is an isomorphism for any $i\in\mathbb{Z}$ and $n\geq h_{HT}(D_S)$. As a consequence, the natural map $D_{\mathrm{HT}}(D_S)\otimes_SR\ra D_{\mathrm{HT}}(D_R)$ is an isomorphism. \end{lemma} \begin{proof} Let $n\geq h_{HT}(D_S)$. Tensoring with $R$ over $S$ on both sides of (\ref{eq:def-HT}), we get that the natural map \[ (\oplus_{a\leq i\leq b}\D^n_\Sen(D_S(-i))^\Gamma\otimes_SR)\otimes_{K\otimes_{\Q}R}(K_n\otimes_{\Q}R)[t,t^{-1}]\longrightarrow \oplus_{i\in\mathbb{Z}}\D_\Sen^n(D_R(-i)) \] is an isomorphism. Comparing $\Gamma$-invariants on both sides, we get that the natural map \[ \D^n_\Sen(D_S(-i))^\Gamma\otimes_{S}R\ra\D^n_\Sen(D_R(-i))^\Gamma \] is an isomorphism for any $a\leq i\leq b$. This implies that the natural map \[ (\oplus_{a\leq i\leq b}\D^n_\Sen(D_R(-i))^\Gamma)\otimes_{K\otimes_{\Q}R}(K_n\otimes_{\Q}R)[t,t^{-1}]\longrightarrow \oplus_{i\in\mathbb{Z}}\D_\Sen^n(D_R(-i)) \] is an isomorphism. This proves the lemma. \end{proof} \begin{cor} If $D_S$ is a Hodge-Tate $\m$-module of rank $d$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$, then $D_{\mathrm{HT}}(D_S)$ is a locally free coherent $K\otimes_{\Q}S$-module of rank $d$. \end{cor} \begin{proof} By the previous lemma, it suffices to treat the case that $S$ is a finite extension of $K$; this is clear from the isomorphism (\ref{eq:def-HT}).
\end{proof} \begin{defn} Let $X$ be a reduced and separated rigid analytic space over $K$, and let $D_X$ be a family of $\m$-modules of rank $d$ over $X$. We call $D_X$ \emph{Hodge-Tate} with weights in $[a,b]$ if for some (hence any) admissible cover $\{M(S_i)\}_{i\in I}$ of $X$, $D_{S_i}$ is Hodge-Tate with weights in $[a,b]$ for any $i\in I$. We define $D_{\mathrm{HT}}(D_X)$ to be the gluing of all $D_{\mathrm{HT}}(D_{S_i})$'s. \end{defn} \begin{lemma}\label{lem:HT-criterion} Let $D_S$ be a $\m$-module over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$, and let $n$ be a positive integer such that $D_S^{r_n}$ is defined. Then (\ref{eq:def-HT}) is an isomorphism if and only if the natural map \begin{equation}\label{eq:lem-HT} \oplus_{a\leq i\leq b}\D_\Sen^n(D_S)^{\Gamma_n=\chi^i}\longrightarrow\D_\Sen^n(D_S) \end{equation} is an isomorphism. In particular, $D_S$ is Hodge-Tate with weights in $[a,b]$ if and only if (\ref{eq:lem-HT}) is an isomorphism for some such $n$. \end{lemma} \begin{proof} For the ``$\Rightarrow$'' part, since (\ref{eq:def-HT}) is an isomorphism, we deduce that \begin{equation}\label{eq:lem-HT-2} \D_\Sen^n(D_S)=\oplus_{a\leq i\leq b}t^i\cdot\D^n_{\Sen}(D_S(-i))^\Gamma \otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S). \end{equation} Note that $t^i\cdot\D^n_{\Sen}(D_S(-i))^\Gamma\subseteq\D_\Sen^n(D_S)^{\Gamma_n=\chi^i}$. Hence (\ref{eq:lem-HT-2}) implies that (\ref{eq:lem-HT}) is surjective. On the other hand, it is clear that (\ref{eq:lem-HT}) is injective, since the sum of the eigenspaces is direct; hence it is an isomorphism. Conversely, suppose that (\ref{eq:lem-HT}) is an isomorphism. Note that \[ \D_\Sen^n(D_S)^{\Gamma_n=\chi^i}=t^i\cdot\D_\Sen^n(D_S(-i))^{\Gamma_n}=(t^i\cdot\D_\Sen^n(D_S(-i))^\Gamma) \otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S), \] where the latter equality follows from \cite[Proposition 2.2.1]{BC07}. This implies that $D_S$ satisfies (\ref{eq:lem-HT-2}), yielding that $D_S$ satisfies (\ref{eq:def-HT}). \end{proof} \begin{prop}\label{prop:HT-family} Let $D_S$ be a $\m$-module over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$.
Suppose that there exists a Zariski dense subset $Z\subset M(S)$ such that $D_z$ is Hodge-Tate with weights in $[a,b]$ for any $z\in Z$ and $\sup_{z\in Z}\{h_{HT}(D_z)\}<\infty$. Then $D_S$ is Hodge-Tate with weights in $[a,b]$. \end{prop} \begin{proof} Let $n\geq\sup_{z\in Z}\{h_{HT}(D_z)\}$ be such that $D_S^{r_n}$ is defined, and let $\gamma$ be a topological generator of $\Gamma_n$. For any $a\leq i\leq b$, let $p_i$ denote the operator $\prod_{a\leq j\leq b, j\neq i}\frac{\gamma-\chi^{j}(\gamma)}{\chi^i(\gamma)-\chi^j(\gamma)}$, and let $M_i=p_i(\D_\Sen^n(D_S))$. It is clear that $p_i$ is the identity on $\D_{\Sen}^n(D_S)^{\Gamma_n=\chi^i}$; hence $\D_{\Sen}^n(D_S)^{\Gamma_n=\chi^i}\subseteq M_i$. On the other hand, for any $z\in Z$, since $D_z$ is Hodge-Tate with weights in $[a,b]$ and $h_{HT}(D_z)\leq n$, we deduce from Lemma \ref{lem:HT-criterion} that $p_i(\D_\Sen^n(D_z))=\D^n_\Sen(D_z)^{\Gamma_n=\chi^i}$. This implies that $M_i$ maps onto $\D^n_\Sen(D_z)^{\Gamma_n=\chi^i}$ under the specialization $\D_\Sen^n(D_S)\ra \D_\Sen^n(D_z)$, and that $(\gamma-\chi^i(\gamma))(M_i)$ vanishes at every $z\in Z$. Since $Z$ is Zariski dense, we conclude $(\gamma-\chi^i(\gamma))(M_i)=0$, i.e. $M_i\subseteq\D^n_\Sen(D_S)^{\Gamma_n=\chi^i}$; hence $M_i=\D^n_\Sen(D_S)^{\Gamma_n=\chi^i}$. Let $M=\oplus_{a\leq i\leq b}M_i$. We claim that the natural inclusion $M\subseteq \D_\Sen^n(D_S)$ is an isomorphism. In fact, for any $z\in Z$, since $\D_\Sen^n(D_z)=\oplus_{a\leq i\leq b}\D_\Sen^n(D_z)^{\Gamma_n=\chi^i}$, we have that $M$ maps onto $\D_\Sen^n(D_z)$. Thus $\D^n_\Sen(D_S)/M$ vanishes at $z$. We therefore conclude $\D^n_\Sen(D_S)/M=0$ because $Z$ is Zariski dense. By Lemma \ref{lem:HT-criterion} and the claim, we conclude that $D_S$ is Hodge-Tate with weights in $[a,b]$. \end{proof} \section{Families of de Rham $\m$-modules} \begin{defn}\label{def:dR} Let $D_S$ be a $\m$-module of rank $d$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$.
For any positive integer $n$, if $D_S^{r_n}$ is defined, we set \[ \D^{+,n}_{\dif}(D_S)=D_S^{r_n}\otimes_{\mathbf{B}^{\dag,r_n}_{\rig,K}\widehat{\otimes}_{\Q}S}(K_n\otimes_{\Q}S)[[t]], \qquad \D^{n}_{\dif}(D_S)=\D^{+,n}_{\dif}(D_S)[1/t]. \] We equip $\D_\dif^n(D_S)$ with the filtration $\mathrm{Fil}^i\D_\dif^n(D_S)=t^i\D_\dif^{+,n}(D_S)$. We call $D_S$ \emph{de Rham with weights in $[a,b]$} if there exists a positive integer $n$ such that \begin{enumerate} \item[(1)] the natural map \begin{equation}\label{eq:def-de Rham} \D^n_\dif(D_S)^\Gamma\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]][1/t]\longrightarrow \D_\dif^n(D_S) \end{equation} is an isomorphism; \item[(2)]$\mathrm{Fil}^{-b}(\D^n_\dif(D_S)^\Gamma)=\D^n_\dif(D_S)^\Gamma$ and $\mathrm{Fil}^{-a+1}(\D^n_\dif(D_S)^\Gamma)=0$ where $\mathrm{Fil}^{i}(\D^n_\dif(D_S)^\Gamma)$ is the induced filtration on $\D^n_\dif(D_S)^\Gamma$. \end{enumerate} We denote by $h_{dR}(D_S)$ the smallest $n$ which satisfies these conditions, and we define $D_{\mathrm{dR}}(D_S)=\D^{h_{dR}(D_S)}_\dif(D_S)^\Gamma$. \end{defn} \begin{lemma}\label{lem:dR-inv} Let $D_S$ be a de Rham $\m$-module over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$. Then for any $n\geq h_{dR}(D_S)$, $\D^n_\dif(D_S)^\Gamma=D_{\mathrm{dR}}(D_S)$. \end{lemma} \begin{proof} We tensor $(K_{n}\otimes_{\Q}S)[[t]][1/t]$ on both sides of the map \[ \D^{h_{dR}(D_S)}_\dif(D_S)^\Gamma\otimes_{K\otimes_{\Q}S}(K_{h_{dR}(D_S)}\otimes_{\Q}S)[[t]][1/t]\longrightarrow \D_\dif^{h_{dR}(D_S)}(D_S), \] yielding that the map \[ \D^{h_{dR}(D_S)}_\dif(D_S)^\Gamma\otimes_{K\otimes_{\Q}S}(K_{n}\otimes_{\Q}S)[[t]][1/t]\longrightarrow \D_\dif^{n}(D_S) \] is an isomorphism. Comparing $\Gamma$-invariants on both sides, we get the desired result. \end{proof} \begin{lemma}\label{lem:dR-HT} If $D_S$ is a de Rham $\m$-module of rank $d$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$, then $D_S$ is Hodge-Tate with weights in $[a,b]$ and $h_{HT}(D_S)\leq h_{dR}(D_S)$.
Furthermore, we have $\mathrm{Gr}^iD_{\mathrm{dR}}(D_S)=\D_\Sen^n(D_S(i))^\Gamma$ under the identification $\mathrm{Gr}^i\D_\dif^n(D_S)=\D_\Sen^n(D_S(i))$ for any $n\geq h_{dR}(D_S)$. \end{lemma} \begin{proof} Let $n\geq h_{dR}(D_S)$. Since (\ref{eq:def-de Rham}) is an isomorphism, we deduce that the natural map of graded modules \begin{equation}\label{eq:lem-dR-HT} \oplus_{i\in\mathbb{Z}}\mathrm{Gr}^iD_{\mathrm{dR}}(D_S) \otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[t,t^{-1}]\longrightarrow \oplus_{i\in\mathbb{Z}}\D_\Sen^n(D_S(i)) \end{equation} is surjective. On the other hand, since $t^i\cdot\mathrm{Gr}^{-i}D_{\mathrm{dR}}(D_S)\subset \D_{\Sen}^n(D_S)$, we have that the natural map \[ \oplus_{a\leq i\leq b}t^i\cdot\mathrm{Gr}^{-i}D_{\mathrm{dR}}(D_S)\ra \D_\Sen^n(D_S) \] is injective. This implies that (\ref{eq:lem-dR-HT}) is injective; hence it is an isomorphism. Comparing the $\Gamma$-invariants on both sides, we get $\mathrm{Gr}^iD_{\mathrm{dR}}(D_S)=\D_\Sen^n(D_S(i))^\Gamma$ for each $i\in\mathbb{Z}$. This proves the lemma. \end{proof} \begin{lemma}\label{lem:dR} If $D_S$ is a de Rham $\m$-module over $\mathbf{B}^\dag_{\rig,K}\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$, then for any morphism $S\ra R$ of affinoid algebras over $K$, $D_R$ is de Rham with weights in $[a,b]$ and $h_{dR}(D_R)\leq h_{dR}(D_S)$. Furthermore, the natural maps $\mathrm{Fil}^i D_{\mathrm{dR}}(D_S)\otimes_{S}R\ra \mathrm{Fil}^iD_{\mathrm{dR}}(D_R)$ are isomorphisms for all $i\in \mathbb{Z}$. \end{lemma} \begin{proof} Let $n\geq h_{dR}(D_S)$. Tensoring with $(K_n\otimes_{\Q}R)[[t]][1/t]$ on both sides of (\ref{eq:def-de Rham}), we get that the natural map \begin{equation}\label{eq:lem-dR} (\D^n_\dif(D_S)^\Gamma\otimes_S R)\otimes_{K\otimes_{\Q}R}(K_n\otimes_{\Q}R)[[t]][1/t]\longrightarrow \D_\dif^n(D_R) \end{equation} is an isomorphism.
Comparing $\Gamma$-invariants on both sides of (\ref{eq:lem-dR}), we get that the natural map $\D^n_\dif(D_S)^\Gamma\otimes_{S}R\ra\D^n_\dif(D_R)^\Gamma$ is an isomorphism; hence $D_R$ is de Rham. Then by Lemmas \ref{lem:HT} and \ref{lem:dR-HT}, we deduce that the natural map $\mathrm{Gr}^i(D_{\mathrm{dR}}(D_S))\otimes_SR\ra\mathrm{Gr}^i(D_{\mathrm{dR}}(D_R))$ is an isomorphism. This implies the rest of the lemma. \end{proof} \begin{cor} If $D_S$ is a de Rham $\m$-module of rank $d$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q} S$, then $D_{\mathrm{dR}}(D_S)$ is a locally free coherent $K\otimes_{\Q}S$-module of rank $d$. \end{cor} \begin{proof} We first note that for each $i\in\mathbb{Z}$, $\mathrm{Gr}^i(D_{\mathrm{dR}}(D_S))$, which is isomorphic to $\D_\Sen^n(D_S(i))^\Gamma$ by Lemma \ref{lem:dR-HT}, is a coherent $K\otimes_{\Q}S$-module. We then deduce that $D_{\mathrm{dR}}(D_S)$ is a coherent $K\otimes_{\Q}S$-module. Using Lemma \ref{lem:dR}, it then suffices to treat the case that $S$ is a finite extension of $K$; this follows easily from the isomorphism (\ref{eq:def-de Rham}). \end{proof} \begin{defn} Let $X$ be a reduced and separated rigid analytic space over $K$, and let $D_X$ be a family of $\m$-modules of rank $d$ over $X$. We call $D_X$ \emph{de Rham} with weights in $[a,b]$ if for some (hence any) admissible cover $\{M(S_i)\}_{i\in I}$ of $X$, $D_{S_i}$ is de Rham with weights in $[a,b]$ for any $i\in I$. We define $D_{\mathrm{dR}}(D_X)$ to be the gluing of all $D_{\mathrm{dR}}(D_{S_i})$'s. \end{defn} \begin{lemma}\label{lem:dR-weight} If $D_S$ is a de Rham $\m$-module over $\mathbf{B}^\dag_{\rig,K}\widehat{\otimes}_{\Q}S$ of rank $d$ with weights in $[a,b]$, then $t^{-a}\D_\dif^{+,n}(D_S)\subset D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]]\subset t^{-b}\D_\dif^{+,n}(D_S)$ for any $n\geq h_{dR}(D_S)$.
\end{lemma} \begin{proof} Since $\mathrm{Fil}^{-b}D_{\mathrm{dR}}(D_S)=D_{\mathrm{dR}}(D_S)$, we get $D_{\mathrm{dR}}(D_S)\subset t^{-b}\D^{+,n}_\dif(D_S)$; hence $D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]]\subset t^{-b}\D_\dif^{+,n}(D_S)$. By the proof of Lemma \ref{lem:dR-HT}, we know that the natural map (\ref{eq:lem-dR-HT}) is an isomorphism of graded modules. By the facts that $\mathrm{Gr}^iD_{\mathrm{dR}}(D_S)=0$ for $i\geq -a+1$ and $\mathrm{Fil}^i\D_\dif^n(D_S)$ is $t$-adically complete, we thus deduce that $t^{-a}\D_\dif^{+,n}(D_S)\subset D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]]$. \end{proof} \begin{lemma} Let $D_S$ be a Hodge-Tate $\m$-module over $\mathbf{B}^\dag_{\rig,K}\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$. Then for any $k\geq b-a+1$, $i\in[a,b]$, $n\geq h_{HT}(D_S)$ and $\gamma\in\Gamma_n$, the map $\gamma-\chi^i(\gamma):t^k\D_\dif^{+,n}(D_S)\ra t^k\D_\dif^{+,n}(D_S)$ is bijective. \end{lemma} \begin{proof} Since $\D_\dif^{+,n}(D_S)$ is $t$-adically complete, it suffices to show that \[ \gamma-\chi^i(\gamma):t^k\D_\dif^{+,n}(D_S)/t^{k+1}\D_\dif^{+,n}(D_S)\ra t^k\D_\dif^{+,n}(D_S)/t^{k+1}\D_\dif^{+,n}(D_S) \] is bijective for any $k\geq b-a+1$. Note that $t^k\D_\dif^{+,n}(D_S)/t^{k+1}\D_\dif^{+,n}(D_S)$ is isomorphic to $\D_\Sen^n(D_S(k))$ as a $\Gamma$-module. Moreover, $\D^n_\Sen(D_S(k))=\oplus_{a\leq j\leq b}(\D^n_\Sen(D_S))^{\Gamma_n=\chi^{j+k}}$ by Lemma \ref{lem:HT-criterion}. Since $j+k\geq b+1$ for all $j\in [a,b]$, we deduce that $\gamma-\chi^i(\gamma)$ is bijective on $\D^n_\Sen(D_S(k))$. \end{proof} \begin{lemma}\label{lem:dR-criterion} Let $D_S$ be a Hodge-Tate $\m$-module over $\mathbf{B}^\dag_{\rig,K}\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$. Then $D_S$ is de Rham if and only if there exists a positive integer $n\geq h_{HT}(D_S)$ such that $\Pi_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)\D_\dif^{+,n}(D_S)\subset t^{b-a+1}\D_\dif^{+,n}(D_S)$, where $\gamma$ denotes a topological generator of $\Gamma_n$.
Furthermore, if this is the case, then (\ref{eq:def-de Rham}) holds for $n$. \end{lemma} \begin{proof} Suppose that $D_S$ is de Rham. Let $n\geq h_{dR}(D_S)$, and put \[ N=D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]]. \] Since $D_S$ has weights in $[a,b]$, by Lemma \ref{lem:dR-weight}, we have $t^{-a}\D_\dif^{+,n}(D_S)\subset N\subset t^{-b}\D_\dif^{+,n}(D_S)$. On the other hand, by the construction of $N$, it is clear that $(\gamma-1)N\subset tN$. It therefore follows that \[ \Pi_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)\D_\dif^{+,n}(D_S)\subset \Pi_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)(t^aN)\subset t^{2b-a+1}N\subset t^{b-a+1}\D_\dif^{+,n}(D_S). \] Now suppose $\Pi_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)\D_\dif^{+,n}(D_S)\subset t^{b-a+1}\D_\dif^{+,n}(D_S)$ for some $n\geq h_{HT}(D_S)$. We claim that for any $j\in[a,b]$ and $v\in(\D^n_\Sen(D_S))^{\Gamma_n=\chi^j}$, we can lift $v$ to an element of $(\D_{\dif}^{+,n}(D_S))^{\Gamma_n=\chi^j}$. In fact, let $\tilde{v}$ be any lift of $v$ in $\D_\dif^{+,n}(D_S)$, and let $\tilde{w}=\prod_{a\leq i\leq 2b-a, i\neq j}\frac{\gamma-\chi^i(\gamma)}{\chi^j(\gamma)-\chi^i(\gamma)}\tilde{v}$ where $\gamma$ is a topological generator of $\Gamma_n$; it is clear that $\tilde{w}$ is also a lift of $v$. Furthermore, by assumption, we have $(\gamma-\chi^j(\gamma))(\tilde{w})\in \Pi_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)\D_\dif^{+,n}(D_S)\subset t^{b-a+1}\D^{+,n}_\dif(D_S)$. By the previous lemma, we may choose some $\tilde{u}\in t^{b-a+1}\D^{+,n}_\dif(D_S)$ satisfying $(\gamma-\chi^j(\gamma))(\tilde{w})=(\gamma-\chi^j(\gamma))(\tilde{u})$. It is then clear that $\tilde{w}-\tilde{u}$ is a lift of $v$ as desired. Since $\D^n_\Sen(D_S)=\oplus_{a\leq i\leq b}(\D^n_\Sen(D_S))^{\Gamma_n=\chi^i}$, we have that $(\D^n_\Sen(D_S))^{\Gamma_n=\chi^i}$ is locally free for each $i\in[a,b]$. By shrinking $M(S)$, we may further suppose that each $(\D^n_\Sen(D_S))^{\Gamma_n=\chi^i}$ is free.
We then deduce from the claim that there exists a free $K_n\otimes_{\Q}S$-module $M\subseteq(\D_\dif^{n}(D_S))^{\Gamma_n}$ such that the natural map \[ M\otimes_{K_n\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]][1/t]\longrightarrow \D_\dif^n(D_S) \] is an isomorphism. It follows that the natural map \[ M^\Gamma\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]][1/t]\longrightarrow \D_\dif^n(D_S) \] is an isomorphism because $M=M^{\Gamma}\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)$ by \cite[Proposition 2.2.1]{BC07}. Taking $\Gamma$-invariants on both sides, we get $M^{\Gamma}=(\D_\dif^n(D_S))^\Gamma$. This implies that $D_S$ is de Rham. \end{proof} \begin{prop}\label{prop:dR-family} Let $D_S$ be a $\m$-module over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$. Suppose that there exists a Zariski dense subset $Z\subset M(S)$ such that $D_z$ is de Rham with weights in $[a,b]$ for any $z\in Z$ and $\sup_{z\in Z}\{h_{dR}(D_z)\}<\infty$. Then $D_S$ is de Rham with weights in $[a,b]$. \end{prop} \begin{proof} By Proposition \ref{prop:HT-family}, we first have that $D_S$ is Hodge-Tate with weights in $[a,b]$. Let $n\geq \max\{h_{HT}(D_S),\sup_{z\in Z}\{h_{dR}(D_z)\}\}$. By Lemma \ref{lem:dR-criterion}, we have \[ \Pi_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)\D_\dif^{+,n}(D_z)\subset t^{b-a+1}\D_\dif^{+,n}(D_z) \] for any $z\in Z$. This implies $\Pi_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)\D_\dif^{+,n}(D_S)\subset t^{b-a+1}\D_\dif^{+,n}(D_S)$ because $Z$ is Zariski dense. Hence $D_S$ is de Rham by Lemma \ref{lem:dR-criterion} again. \end{proof} \section{$p$-adic local monodromy for families of de Rham $\m$-modules} The main goal of this section is to prove the $p$-adic local monodromy theorem for families of de Rham $\m$-modules. The proof is similar to Berger-Colmez's proof of the $p$-adic local monodromy theorem for families of de Rham representations \cite[\S6]{BC07}. Indeed, with the results we have proved in \S2 and \S3, the proof from [\emph{loc.cit.}] goes over verbatim.
We therefore often sketch our proof and refer the reader to [\emph{loc.cit.}] for more details. We fix $E$ to be a finite extension of the product of the complete residue fields of the Shilov boundary of $M(S)$. \begin{prop}\label{prop:N_dR} Let $D_S$ be a de Rham $\m$-module of rank $d$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$. For any $s>0$ such that $n(s)\geq h_{dR}(D_S)$, let \[ N_s(D_E)=\{y\in t^{-b}D^{s}_E \mid \iota_n(y)\in D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]] \text{ for each } n\geq n(s)\}. \] Then the following are true. \begin{enumerate} \item[(1)]The $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q} E$-module $N_s(D_E)$ is free of rank $d$ and stable under $\Gamma$. \item[(2)]We have $N_s(D_E)\otimes_{\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}E,\iota_n}(K_n\otimes_{\Q}E)[[t]] =D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]$ for each $n\geq n(s)$. \end{enumerate} Furthermore, if we put $N_{\mathrm{dR}}(D_E)=N_s(D_E)\otimes_{\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}E} \mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}E$, then the following are true. \begin{enumerate} \item[(3)]The $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q} E$-module $N_{\mathrm{dR}}(D_E)$ is free of rank $d$, stable under $\Gamma$, and independent of the choice of $s$. \item[(4)]We have $\varphi^*(N_{\mathrm{dR}}(D_E))=N_{\mathrm{dR}}(D_E)$ and $\nabla(N_{\mathrm{dR}}(D_E))\subset t\cdot N_{\mathrm{dR}}(D_E)$. \end{enumerate} \end{prop} \begin{proof} Since the localization map $\iota_n$ is continuous, we first have that $N_s(D_E)$ is a closed $\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E$-submodule of $t^{-b}D_E^{s}$.
It follows that $N_s(D_E)$ is a finite locally free $\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E$-module because $\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E$ is isomorphic to a finite product of Robba rings. On the other hand, by Lemma \ref{lem:dR-weight}, we get that $t^{-a}D_E^{s}$ is contained in $N_s(D_E)$. We thus conclude that $N_s(D_E)$ is a free $\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E$-module of rank $d$. To show (2), we proceed as in the proof of \cite[Proposition 6.1.1]{BC07}. For any $y\in D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]$ and $w\geq \max\{0,b-a\}$, since \[ D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]\subset t^{-b}\D_\dif^{+,n}(D_E) \] by Lemma \ref{lem:dR-weight}, we may pick some $y_0\in t^{-b}D_E^{s}$ such that $\iota_n(y_0)-y\in t^w D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]$. Let $t_{n,w}$ be the function defined in \cite[Lemme I.2.1]{LB04}. It follows that \[ \iota_m(t_{n,w}y_0)\in t^{w-b}\D_\dif^{+,m}(D_E)=t^{w-b+a}(t^{-a}\D_\dif^{+,m}(D_E))\subset D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_m\otimes_{\Q}E)[[t]] \] for $m>n$ and \[ \iota_n(t_{n,w}y_0)-y\in t^{w}D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]. \] This implies that the natural map $N_s(D_E)\ra D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]/(t^w)$ is surjective; since $w$ is arbitrary and $D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]$ is $t$-adically separated and complete, this proves (2). We get (3) immediately from (2). The first half of (4) follows from the fact that $\iota_{n+1}\circ \varphi=\iota_n$. Note that $\iota_n(\nabla(N_s(D_E)))=\nabla(\iota_n(N_s(D_E)))\subset tD_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]$ for any $n\geq n(s)$; this proves the second half of (4). \end{proof} \begin{prop}\label{prop:monodromy} Keep notations as in Proposition \ref{prop:N_dR}.
Then there exists a finite extension $L$ over $K$ such that \[ M=(N_{\mathrm{dR}}(D_E)\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}E} \mathbf{B}_{\log,L}^\dag\widehat{\otimes}_{\Q}E)^{I_L} \] is a free $L_0'\otimes_{\Q}E$-module of rank $d$ and the natural map \begin{equation*} \begin{split} M\otimes_{L_0'\otimes_{\Q}E} \mathbf{B}_{\log,L}^\dag\widehat{\otimes}_{\Q}E \longrightarrow N_{\mathrm{dR}}(D_E)\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}E} \mathbf{B}_{\log,L}^\dag\widehat{\otimes}_{\Q}E \end{split} \end{equation*} is an isomorphism. \end{prop} \begin{proof} Let $f'=[K_0':\Q]$. Note that there is a canonical decomposition $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}E\cong\prod_{i=0}^{f'-1}\r_E^{(i)}$ where each $\r_E^{(i)}$ is isomorphic to $\r_E$ and stable under $\Gamma_K$, and satisfies $\varphi(\r_E^{(i)})\subset\r_E^{(i+1)}$ ($\r_E^{(f')}=\r_E^{(0)}$). Let $N^{(i)}_{\mathrm{dR}}(D_E)=N_{\mathrm{dR}}(D_E) \otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}E}\r_E^{(i)}$. It follows that each $N_{\mathrm{dR}}^{(i)}(D_E)$ is stable under $\partial=\nabla/t$ and $\varphi^{f'}$; hence it is a $p$-adic differential equation with a Frobenius structure. By the versions of the $p$-adic local monodromy theorem proved by Andr\'e \cite{An} or Mebkhout \cite{Meb}, we conclude that each $N^{(i)}_{\mathrm{dR}}(D_E)$ is potentially unipotent. This yields the proposition using the argument of \cite[Proposition 6.2.2]{BC07} and \cite[Corollaire 6.2.3]{BC07}. \end{proof} \begin{lemma}\label{lem:monodromy} Keep notations as in Proposition \ref{prop:monodromy}, and let \[ M=(N_s(D_E)\otimes_{\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E} \mathbf{B}_{\log,K}^{\dag,s}\widehat{\otimes}_{\Q}E)^{I_L} \] for sufficiently large $s$. 
Then for any $n\geq n(s)$, we have \begin{equation}\label{eq:lem-monodromy} L\otimes_{L_0}\iota_n(M)=(\D_\dif(D_E\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}E} \mathbf{B}_{\rig,L}^\dag\widehat{\otimes}_{\Q}E))^{I_L}. \end{equation} \end{lemma} \begin{proof} By the previous proposition, the left hand side of (\ref{eq:lem-monodromy}) is a free $L\otimes_{L_0}L_0'\otimes_{\Q}E$-module of rank $d$. On the other hand, since $((L_n\otimes_{\Q}E)[[t]][1/t])^{I_L}=L\otimes_{L_0}L_0'\otimes_{\Q}E$, we deduce that the right hand side of (\ref{eq:lem-monodromy}), which obviously contains the left hand side, is an $L\otimes_{L_0}L_0'\otimes_{\Q}E$-module generated by at most $d$ elements. This yields the desired identity. \end{proof} \section{Proof of the main theorem} We start by making some preliminary reductions. After a finite surjective base change of $X$, we may assume that $Q(T)$ factors as $\prod_{i=1}^m(T-F_i)$. By reordering the $F_i$'s and throwing away some points of $Z$, we may further assume that for all $z\in Z$, $v_p(F_i(z))\geq v_p(F_j(z))$ if $i>j$ and $F_i(z)\neq F_j(z)$ if $F_i\neq F_j$. We then set $\F_{i,z}= D_{\mathrm{st}}^+(V_z)^{(\varphi^f-F_1(z))\cdots(\varphi^f-F_{i}(z))=0}$ for all $z\in Z$ and $1\leq i\leq m$. Using Definition \ref{def:fs}(3), we may suppose that $\F_{i,z}\subseteq \F_z$ for all $z\in Z$ and $1\leq i\leq m$ by shrinking $Z$. Furthermore, by the fact that $N\varphi=p\varphi N$ and the condition that $v_p(F_i(z))\geq v_p(F_j(z))$ if $i>j$, we see that $N=0$ on each graded piece $\F_{i,z}/\F_{i-1,z}$. Let $c_{i,z}$ be the rank of $\F_{i,z}/\F_{i-1,z}$ over $K_0\otimes k(z)$, and partition $Z$ into finitely many subsets according to the sequence $(c_{i,z})_{1\leq i\leq m}$. One of these subsets of $Z$ must still be Zariski dense. Replace $Z$ by this subset and set $c_i = c_{i,z}$ for any $z$ in this subset.
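The vanishing of $N$ on the graded pieces can be seen from a short eigenvalue computation; we only sketch the standard argument behind this assertion. Iterating the relation $N\varphi=p\varphi N$ a total of $f$ times gives \[ N\varphi^f=p^f\varphi^f N, \qquad\text{equivalently}\qquad \varphi^f N=p^{-f}N\varphi^f. \] Hence, if $\varphi^f v=F_i(z)v$, then \[ \varphi^f(Nv)=p^{-f}N(\varphi^f v)=p^{-f}F_i(z)(Nv), \] so $N$ shifts the $\varphi^f$-eigenvalue $F_i(z)$ to $p^{-f}F_i(z)$. The ordering $v_p(F_i(z))\geq v_p(F_j(z))$ for $i>j$ ensures that $N$ respects the flag $(\F_{i,z})_{1\leq i\leq m}$, and since $p^{-f}F_i(z)\neq F_i(z)$, the operator induced by $N$ on each graded piece $\F_{i,z}/\F_{i-1,z}$ must vanish.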
For $z\in Z$, we will inductively define $\m$-submodules $\mathrm{Fil}_{i,z}\subset\D_\rig^\dag(V_z)$ for $1\leq i\leq m$ such that $D_{\mathrm{st}}(\mathrm{Fil}_{i,z})=\F_{i,z}$. For $i=1$, since $V_z$ has non-positive Hodge-Tate weights and $N(\F_{1,z})=0$, we have \[ \F_{1,z}=(D^+_{\mathrm{crys}}(V_z))^{\varphi^f=F_1(z)}\subset\D_\rig^\dag(V_z)^{\Gamma} \] by Berger's dictionary. Let $\mathrm{Fil}_{1,z}$ be the saturation of the $\m$-submodule generated by $\mathcal{F}_{1,z}$. Now suppose we have defined $\mathrm{Fil}_{i-1,z}$ for some $i\geq 2$. It follows that \[ D_{\mathrm{st}}^+(\D_\rig^\dag(V_z)/\mathrm{Fil}_{i-1,z})=D_{\mathrm{st}}^+(V_z)/\F_{i-1,z}. \] Note that \[ \F_{i,z}/\F_{i-1,z}=(D_{\mathrm{st}}^+(V_z)/\F_{i-1,z})^{\varphi^f=F_{i}(z),N=0}. \] Hence \[ \F_{i,z}/\F_{i-1,z}=D^+_{\mathrm{crys}}(\D_\rig^\dag(V_z)/\mathrm{Fil}_{i-1,z})^{\varphi^f=F_{i}(z)}\subset (\D_\rig^\dag(V_z)/\mathrm{Fil}_{i-1,z})^\Gamma. \] We then set $\mathrm{Fil}_{i,z}$ to be the preimage of the saturation of the $\m$-submodule of $\D_\rig^\dag(V_z)/\mathrm{Fil}_{i-1,z}$ generated by $\F_{i,z}/\F_{i-1,z}$. Now for each $1\leq i\leq m$, we define the character $\delta_i:K^\times\ra\OO(X)^\times$ by setting $\delta_i(p)=F_i^{-1}$ and $\delta_i(\OO_K^\times)=1$. Let $D_X=\D_\rig^\dag(V_X)^{\vee}$. \begin{lemma}\label{lem:de Rham-part} Suppose that $X$ is irreducible. Then for each $0\leq i\leq m$, there exists a proper birational morphism $\pi:X'\ra X$ and a sub-family of $\m$-modules $D^{(i)}_{X'}\subset D_{X'}$ over $X'$ of rank $d-c_1-\dots-c_i$ such that \begin{enumerate} \item[(1)] for any $x\in X'$, the natural map $D_x^{(i)}\ra D_x$ is injective; \item[(2)] there exists a Zariski open dense subset $U$ of $X'$ such that for any $z\in Z'=\pi^{-1}(Z)\cap U$, the natural map $D^{(i)}_z\ra D_z$ is the dual of the projection $\D_\rig^\dag(V_{\pi(z)})\ra \D_\rig^\dag(V_{\pi(z)})/\mathrm{Fil}_{i,\pi(z)}$. \end{enumerate} \end{lemma} \begin{proof} We proceed by induction on $i$.
The initial case is trivial. Suppose that for some $1\leq i\leq m$, the lemma is true for $i-1$. Note that $\mathcal{F}_{i,z}/\mathcal{F}_{i-1,z}$ maps into $\D_\rig^\dag(V_{z})/\mathrm{Fil}_{i-1,z}$ for any $z\in Z$. Since $\F_{i,z}/\F_{i-1,z}=(D_{\mathrm{crys}}^+(V_z)/\F_{i-1,z})^{\varphi^f=F_{i}(z)}$, we get that $H^0((D^{(i-1)}_z)^{\vee}(\pi^{*}(\delta_i)(z)))$ has $k(z)$-dimension $c_i$ for any $z\in Z'$. Since $Z'$ is Zariski dense in $X'$, by Proposition \ref{prop:cohomology}, after further modifying $X'$ and shrinking $U$, we may find a sub-family of $\m$-modules $D^{(i)}_{X'}$ of $D^{(i-1)}_{X'}$ with rank $d-c_1-\dots-c_i$ such that \begin{enumerate} \item[(1')]$D_x^{(i)}\ra D_x^{(i-1)}$ is injective for any $x\in X'$; \item[(2')]for any $z\in \pi^{-1}(Z)\cap U$, $D_z^{(i)}$ is the kernel of the dual of the map \[ (\mathbf{B}_{\rig,K}^\dag\otimes_{\Q}k(z))\cdot(\mathcal{F}_{i,\pi(z)}/\mathcal{F}_{i-1,\pi(z)})\ra \D_\rig^\dag(V_{\pi(z)})/\mathrm{Fil}_{i-1,\pi(z)}. \] \end{enumerate} It is clear that (1') and (2') imply (1) and (2) respectively; this finishes the inductive step. \end{proof} To prove Theorem \ref{thm:main}, we also need the following lemma. \begin{lemma} Let $V_S$ be a free $S$-linear representation of $G_K$ of rank $d$. Then there exists a positive integer $m(V_S)$ such that for any $x\in M(S)$ and $a\in\D_\dif^{+}(V_x)$, if $a$ is $\Gamma$-invariant, then $a\in\D_\dif^{+,m(V_S)}(V_x)$. \end{lemma} \begin{proof} This is a consequence of the Tate-Sen method. Using \cite[Th\'eor\`{e}me 4.2.9]{BC07}, we first choose a finite extension $L$ over $K$ and some positive integer $m$ so that $\D_{\rig,L}^{\dag,r_m}(V_S)$ is a free $\mathbf{B}_{\rig,L}^{\dag,r_m}\widehat{\otimes}_{\Q}S$-module with a basis $\mathrm{e}=(e_1,\dots,e_d)$. Let $\gamma$ be a topological generator of $\Gamma_{L_m}$ and write $\gamma(\mathrm{e})=\mathrm{e}G$ for some $G\in\mathrm{GL}_d(\mathbf{B}_{\rig,L}^{\dag,r_m}\widehat{\otimes}_{\Q}S)$.
Recall that by the classical work of Tate \cite{T}, we know that there exists a constant $c>0$ such that $v_p((\gamma-1)y)\leq v_p(y)+c$ for any nonzero $y\in (1-R_{L,m})\widehat{L}_\infty$, where $R_{L,m}:\widehat{L}_\infty\ra L_m$ is Tate's normalized trace map. Since the localization map $\iota_m:\mathbf{B}_{\rig,L}^{\dag,r_m}\ra L_m[[t]]$ is continuous, by enlarging $m$, we may suppose that the constant term of $\iota_m(G)-1$ has norm less than $p^{-c}$. We fix some $m_0\in\mathbb{N}$ such that $K_\infty\cap L_m=K_{m_0}\cap L_m$. Now let $a\in\D_\dif^{+,K_n}(V_x)^\Gamma$ for some $x\in M(S)$ and $n\geq m$. We will show that $a\in\D_\dif^{+,K_{m_0}}(V_x)^\Gamma$. Since $\iota_m(\mathrm{e})$ forms a basis of $\D^{+,L_n}_{\dif}(V_S)$, we may write $a=\iota_m(\mathrm{e})(x)A$ for some \[ A\in \mathrm{M}_{d\times1}((L_n\otimes_{\Q}k(x))[[t]]). \] The $\Gamma$-invariance of $a$ implies $\iota_m(G(x))\gamma(A)=A$; thus $(1-R_{L,m})\iota_m(G(x))\gamma(A)=(1-R_{L,m})A$. Note that $\iota_m(G(x))$ has entries in $(L_m\otimes_{\Q}k(x))[[t]]$. It follows that $(\iota_m(G(x))-1)B=(1-\gamma^{-1})B$ where $B=(1-R_{L,m})A$. Let $B_0$ be the constant term of $B$. If $B_0\neq0$, then the constant term of $(\iota_m(G(x))-1)B$ has valuation $\geq v(\iota_m(G(x))-1)+v(B_0)>v(B_0)+c$ whereas the constant term $(1-\gamma^{-1})B_0$ of $(1-\gamma^{-1})B$ has valuation $\leq v(B_0)+c$; this yields a contradiction. Hence $B_0=0$. Iterating this argument, we get $B=0$. Hence $a\in \D_\dif^{+,L_m}(V_x)\cap\D_\dif^{+,K_n}(V_x)\subset\D_\dif^{+,K_{m_0}}(V_x)$. Thus we may choose $m(V_S)=m_0$. \end{proof} \emph{Proof of Theorem \ref{thm:main}}. We retain the notations as above. By passing to irreducible components, we may suppose that $X$ is irreducible. We then apply Lemma \ref{lem:de Rham-part} to $V_X$. Note that $V_{X'}$ is again a finite slope family over $X'$ with the Zariski dense set of crystalline points $\pi^{-1}(Z)$. We may suppose that $X'=X$.
Let $\lambda:\D^\dag_{\rig}(V_X)=D^{\vee}_X\ra (D_X^{(m)})^{\vee}$ be the dual of $D_X^{(m)}\ra D_X$, and let $P_X=\ker(\lambda)$. For any $x\in X$, since $D^{(m)}_x\ra D_x$ is injective, we get that the image of $\lambda_x$ is a $\m$-submodule of rank $d-c_1-\cdots-c_m$. Thus by Lemma \ref{lem:ker-birational}, after adapting $X$, we may assume that $P_X$ is a family of $\m$-modules of rank $c_1+\cdots+c_m$, and there exists a Zariski open dense subset $U\subset X$ such that $P_x=\ker(\lambda_x)$ for any $x\in U$. Note that $\ker(\lambda_z)=\mathrm{Fil}_{m,z}$ for any $z\in Z$. Thus by replacing $Z$ with $Z\cap U$, we may assume that $P_z=\mathrm{Fil}_{m,z}$ for any $z\in Z$. We claim that $P_{X}$ is de Rham with weights in $[-b,0]$. To prove this, we set $Y$ to be the set of $x\in X$ for which $P_x$ is de Rham with weights in $[-b,0]$. By the previous lemma, we see that for any affinoid subdomain $M(S)\subset X$, there exists an integer $m(V_S)$ such that if $P_x$ is de Rham for some $x\in M(S)$, then $h_{dR}(P_x)\leq m(V_S)$. We then deduce from Proposition \ref{prop:dR-family} that $Y\cap M(S)$ is a Zariski closed subset of $M(S)$. Hence $Y$ is a Zariski closed subset of $X$. On the other hand, since $P_z$ is de Rham with weights in $[-b,0]$ for any $z\in Z$, we get $Z\subset Y$; thus $Y=X$ by the Zariski density of $Z$. Furthermore, using Proposition \ref{prop:dR-family} and the previous lemma again, we deduce that $P_X$ is de Rham with weights in $[-b,0]$. As a consequence, we obtain a locally free coherent $\OO_X\otimes_{\Q}K$-module $D_{\mathrm{dR}}(P_X)$ of rank $c_1+\cdots+c_m$. The next step is to show that for any $x\in X$, $D_{\mathrm{dR}}(P_x)$ is contained in $D^+_{\mathrm{st}}(V_x)\otimes_{K_0}K$. Let $Y$ now denote the set of $x\in X$ satisfying this condition. We first show that $Y$ is a Zariski closed subset of $X$. For this, it suffices to show that $Y\cap M(S)$ is a Zariski closed subset of $M(S)$ for any affinoid subdomain $M(S)$ of $X$.
To show this, we employ the $p$-adic local monodromy for families of de Rham $\m$-modules. As in \S5, let $E$ be the product of the complete residue fields of the Shilov boundary of $M(S)$. Since $P_S$ is a family of de Rham $\m$-modules with weights in $[-b,0]$, by Lemma \ref{lem:monodromy}, there exists a finite extension $L$ of $K$ such that for sufficiently large $s$ and $n\geq n(s)$, we have \[ L\otimes_{L_0}\iota_n(M)=(\D_\dif(P_E\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}E} \mathbf{B}_{\rig,L}^\dag\widehat{\otimes}_{\Q}E))^{I_L} \] for $M=(N_s(P_E)\otimes_{\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E} \mathbf{B}_{\log,K}^{\dag,s}\widehat{\otimes}_{\Q}E)^{I_L}$; furthermore, $N_s(P_E)\subset P_E^{s}$. Thus \[ \iota_n(M)\subset \iota_n(P_E\otimes_{\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E} \mathbf{B}_{\log,K}^{\dag,s}\widehat{\otimes}_{\Q}E)\subset \iota_n(\D_\rig^\dag(V_E)\otimes_{\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E} \mathbf{B}_{\log,K}^{\dag,s}\widehat{\otimes}_{\Q}E)\subset\mathbf{B}^+_{\mathrm{st}}\widehat{\otimes}_{\Q}V_E. \] Note that $D_{\mathrm{dR}}(P_E)\subset \D_\dif^+(P_E)\subset\D_\dif^+(V_E)\subset\mathbf{B}_{\mathrm{dR}}^+\widehat{\otimes}_{\Q}V_E$. This yields \[ D_{\mathrm{dR}}(P_E)\subset (\mathbf{B}^+_{\mathrm{st}}\widehat{\otimes}_{\Q}V_E)\otimes_{L_0}L\cap \mathbf{B}_{\mathrm{dR}}^+\widehat{\otimes}_{\Q}V_E= (\mathbf{B}^+_{\mathrm{st}}\widehat{\otimes}_{\Q}V_E)\otimes_{L_0}L. \] We therefore deduce from \cite[Lemme 6.3.1]{BC07} that \[ D_{\mathrm{dR}}(P_S)\subset (\mathbf{B}^+_{\mathrm{st}}\widehat{\otimes}_{\Q}V_E)\otimes_{L_0}L\cap \mathbf{B}_{\mathrm{dR}}^+\widehat{\otimes}_{\Q}V_S=(\mathbf{B}^+_{\mathrm{st}}\widehat{\otimes}_{\Q}V_S)\otimes_{L_0}L. \] It follows that $Y\cap M(S)$, which is the set of $x\in M(S)$ such that $D_{\mathrm{dR}}(P_x)\subset (\mathbf{B}^+_{\mathrm{st}}\otimes_{\Q}V_x)\otimes_{K_0}K$, is Zariski closed in $M(S)$. 
To conclude the proof of the theorem, it then suffices to show that $D_{\mathrm{dR}}(P_x)\subset (D^+_{\mathrm{st}}(V_x)\otimes_{K_0}K)^{Q(\varphi)(x)=0}$ for any $x\in X$; here the $\varphi^f$-action is extended $K$-linearly to $D^+_{\mathrm{st}}(V_x)\otimes_{K_0}K$. Note that $\mathrm{Fil}_{m,z}$ is semi-stable with $D_{\mathrm{st}}(\mathrm{Fil}_{m,z})=\mathcal{F}_{m,z}$. This implies that $Q(\varphi)(D_{\mathrm{dR}}(P_X))$ vanishes at every $z\in Z$, yielding that $Q(\varphi)(D_{\mathrm{dR}}(P_X))=0$ by the Zariski density of $Z$. \qed
\section{Introduction} Solutions to the hierarchy problem of the Standard Model (SM) invoke new physics (NP) around the TeV scale to cut off the quadratically divergent quantum corrections to the Higgs mass. Ideally, to avoid too much fine-tuning, the lightest NP states should already be present at the weak ({\em sub}-TeV) scale. However, NP induces higher-dimensional operators involving the SM particles, which results in tension with precision tests of the SM in both the electroweak (EW) and flavor sectors. To be consistent with the EW precision tests, flavor-preserving operators generated by NP typically require the scale of NP to be larger than {\em a few} TeV \cite{Barbieri:1999tm} and are difficult to suppress by any known (approximate) symmetries of the SM\footnote{Exceptions include custodial isospin for the $T$ parameter \cite{Peskin:1991sw}.}. This tension is called the little hierarchy problem. Moreover, in the presence of $O(1)$ new sources of CP violation, the data on flavor violation in the Kaon system require the NP mass scale to be larger than several thousand TeV. However, it might be possible to address the latter constraints by suitable flavor symmetries. A new symmetry at the TeV scale can ameliorate some of these constraints if at least the lightest NP states, which a priori give the largest electroweak corrections, are charged under this symmetry while the SM particles are neutral \cite{Wudka:2003se,Cheng:2003ju}. In such a case, the charged NP states do not contribute at tree level to the operators constrained by the precision tests, since couplings of a single charged state to SM particles are forbidden. NP contributions from these states arise only at loop level. This makes sub-TeV NP states consistent with EW precision data. These NP states may then play the role of cutting off the Higgs mass divergence without any fine-tuning, thus avoiding the little hierarchy problem.
As a spin-off, the new symmetry implies the existence of a new stable particle, which can be a dark matter candidate if it is electrically neutral and weakly interacting. The simplest possibility for a new symmetry at the TeV scale is a discrete $Z_2$ parity. The classic example is $R$-parity in supersymmetry. In little Higgs models, a similar role is played by $T$-parity \cite{Cheng:2003ju,Cheng:2004yc}, under which the new gauge bosons are charged. Yet another example is Kaluza-Klein (KK) parity \cite{Cheng:2002ab} in universal extra dimensions (UED) \cite{Appelquist:2000nn}. However, no explicit UV completions exist in the literature for the latter two scenarios, which by nature are effective theories below, say, 10 TeV. Moreover, none of these three frameworks addresses flavor violation issues, which require a detailed understanding of the possible UV completion or SUSY breaking mechanism. The situation is quite different in the Randall--Sundrum (RS1) setup \cite{Randall:1999ee} based on a slice of AdS$_5$, in the sense that both the Planck-weak and flavor hierarchies can be addressed as follows. Owing to the warped geometry, the 4D (or zero-mode) graviton is localized near the UV/Planck brane, which has a Planckian fundamental scale, whereas the Higgs sector can be localized near the IR/TeV brane, where the cut-off is of order TeV. In this way the Planck-weak hierarchy is addressed. Based on the AdS/CFT correspondence \cite{Maldacena:1997re, Witten:1998qj}, RS1 is conjectured to be dual to 4D composite Higgs models \cite{Arkani-Hamed:2000ds, Rattazzi:2000hs, Contino:2003ve}. In the original RS1 model, the entire SM (including the fermions and gauge bosons) is assumed to be localized on the TeV brane. However, it was subsequently realized that, with the SM fermion \cite{gn, gp} and gauge fields \cite{bulkgauge} propagating in the bulk, such a framework not only solves the Planck-weak hierarchy, but can also address the flavor hierarchy.
The idea is that light SM fermions (which are zero-modes of 5D fermions) can be localized near the UV brane, whereas the top quark is localized near the IR brane, resulting in small and large couplings, respectively, to the SM Higgs localized near the IR brane. Moreover, the flavor problem (both from unknown physics at the cut-off and from the KK states) is also under control \cite{gp, Huber:2000ie} due to an analog of the GIM mechanism or approximate flavor symmetries \cite{Agashe:2004cp}, even with a few TeV KK scale and despite the recent $B$-physics data \cite{NMFV}. The versions of this framework studied so far do not have a discrete symmetry analogous to KK parity in UED. The constraints from EW precision tests require the lightest gauge KK modes to be heavier than a few TeV, provided suitable custodial symmetries are implemented to suppress contributions to the $T$ parameter \cite{Agashe:2003zs} and the shift in the coupling of the $Z$ boson to $b_L$ \cite{Agashe:2006at}. As mentioned above, a similar limit on the KK mass scale arises also from flavor violation: see references \cite{others1, others2, Cacciapaglia:2007fw} for other studies of these issues. Thus, although the big (Planck-weak) hierarchy and flavor issues are addressed, the little hierarchy problem generically persists in these models. Phenomenologically, the implication of the little hierarchy is that, if the mass scales of the new physics are higher than $2 - 3$ TeV, the new particles would barely be reachable at the LHC, especially if they are not charged under the $SU(3)_c$ strong interaction \cite{kkgluon}. The goal of this paper is to implement an analog of the KK parity of UED in a warped extra dimension, by requiring the warp factor to be symmetric with respect to the mid-point of the extra dimension. In this construction, there are two towers in the KK decomposition of a bulk field, namely KK modes which are even or odd under the parity symmetry. The SM particles belong to the even towers.
The odd modes cannot couple singly to the SM and are therefore allowed to be lighter than a TeV without contradicting the precision EW constraints. Although the primary focus of the present work is to ease the experimental constraints and lower the mass scales of the new particles, we will argue that these lightest odd modes can cut off the quadratic divergences in the Higgs sector, thus addressing the little hierarchy problem. Furthermore, the lightest odd particle is stable and could be a WIMP, naturally giving the correct dark matter abundance, as in UED \cite{Servant:2002aq,Cheng:2002ej}. The resulting collider phenomenology is different from that of usual models with a warped extra dimension. In particular, KK-odd particles have to be produced in pairs and give missing-energy signals due to decay chains ending in the lightest KK-odd particle. The outline of the paper is as follows. In the next section we present a brief review of KK number conservation and KK parity in UED. In Section \ref{threesite} we discuss three-site moose toy models to understand the relation between different warp factors and the low-energy KK spectrum of gauge bosons. In Section \ref{IRUVIR} we consider gluing two identical slices of AdS$_5$ in the UV region (the IR-UV-IR setup) and discuss the phenomenological features. Large brane-localized terms are necessary in order to obtain the desired pattern for the spectra of gauge bosons. In this Section, we present a model where the LKP (the lightest KK-odd particle) is a KK $Z$ gauge boson and discuss the corresponding dark matter phenomenology. In Section \ref{UVIRUV} we discuss briefly the alternative of gluing two slices of AdS$_5$ in the IR region (the UV-IR-UV setup). Even though the UV-IR-UV setup has certain nice phenomenological features, it is gravitationally unstable. In the last section we present our conclusions.
Lastly, in the appendices we give a CFT interpretation of our setups, as well as some discussion of cutting off the Higgs quadratic divergences using the lightest KK-odd gauge bosons. \section{Mini-Review on UED} \label{ued} We begin by reviewing the origins of the success of UED \cite{Appelquist:2000nn} in fitting the precision electroweak measurements while allowing for KK masses well below 1 TeV; certain important features of UED have not been emphasized enough in the past, but they will become crucial when constructing models with KK parity in a warped extra dimension. Hence, this review of UED will serve as a guide for model-building in the warped case. In the framework of UED, the existence of KK parity requires very special conditions. In that setup, the extra dimension is an interval with a flat background geometry, and KK parity is realized as a geometric reflection about the midpoint of the extra dimension.\footnote{ Strictly speaking, in Ref.~\cite{Cheng:2002ab}, KK-parity is defined as the reflection about the midpoint combined with the orbifold projection. However, one could instead work on the line interval without referring to the orbifold at all. We come back to this when discussing the bulk fermion mass.} Alternatively, such an extra dimension can be viewed as an orbifold $S^1/Z_2$, that is, a compactified circle with a $Z_2$ orbifolding imposed. Before the $Z_2$ orbifolding, the circle $S^1$ has a translational symmetry that is manifested as a $U(1)$ symmetry in the 4D KK decomposition. Momentum in the fifth direction becomes quantized, and each KK mode carries a conserved quantum number, the KK number, under the $U(1)$ symmetry. The translational symmetry along the circle is obviously broken by the $Z_2$ orbifolding, or, in other words, by the orbifold fixed points, which can be thought of as boundaries or branes at the ends of the extra dimension.
However, it is clear that a discrete subgroup of the translation survives (assuming that any interactions, whether large or small, localized on the two branes are equal), leading to the KK parity. The picture of $S^1/Z_2$ orbifold makes it clear that KK parity has a larger parent symmetry, the KK number conservation, which is broken only by the interactions living on the branes at the ends of the interval. In the literature on UED models, it is usually assumed that the brane-localized interactions are symmetric with respect to the $Z_2$ reflection about the midpoint, so that KK parity is an exact symmetry. It is also assumed that they are suppressed (loop-induced), implying that KK number is still an approximate symmetry. These assumptions have very important phenomenological implications, as both KK parity and the approximate KK number conservation are needed to evade precision electroweak constraints for UED models; KK parity eliminates couplings of a single odd KK mode with the SM field, whereas the approximate KK number conservation suppresses certain interactions among the even level KK modes, such as single coupling of the 2nd KK mode with the SM, which are not forbidden by KK parity. In the end, both the odd and even KK modes are allowed to have masses well below 1 TeV. If there were only KK parity and not the approximate KK number conservation, experimental constraints would have required the 2nd KK mass to be higher than 2 - 3 TeV and, therefore, the compactification scale to be around 1 TeV or higher (recall that in flat geometry KK modes are evenly spaced). One should keep in mind that the flatness of profiles in UED is not natural and reflects the fact that electroweak symmetry breaking is not addressed but just postulated. 
A model of dynamical symmetry breaking in UED would typically spoil the flatness of the Higgs profile, and constraints on the KK scale would have to be reexamined accordingly (a somewhat related discussion of the little hierarchy problem in UED is presented in \cite{Burdman:2006jj}). The virtue of UED is that the mass scales of the new particles are allowed to be very close to the electroweak scale, at a few hundred GeV, allowing for easy access at the LHC, even though the model as it stands in the literature addresses neither the Planck-weak nor the fermion mass hierarchy. Since the KK number conservation, which prevents the 2nd KK mode from giving large electroweak corrections, has its origin in the flat background geometry of the extra dimension, it is clear that it will be lost in a curved background. As a consequence, if we want to implement KK parity in a warped extra dimension, all the higher even KK modes will have unsuppressed couplings to the SM and will be required to be heavier than 2 - 3 TeV, as dictated by the model-independent analysis. On the other hand, all KK modes odd under KK parity still need to couple in pairs to the SM and can only contribute to electroweak observables at the loop level. In contrast to UED, a warped extra dimension allows us to investigate various UV-sensitive questions such as the Planck-weak hierarchy problem. However, before going into a full-fledged extra-dimensional setup, it is instructive to consider a low-energy effective description involving only up to the 2nd KK mode of the gauge boson. Since higher KK modes might be too heavy to be accessible at the LHC, such an effective theory may be all that matters for collider experiments; we present this discussion in the next section. \section{Three-site Toy Model} \label{threesite} In essence, the low-energy effective theory amounts to a three-site deconstruction \cite{Arkani-Hamed:2001ca,Hill:2000mu} of the warped extra dimension; see Fig.~\ref{threesitefig}.
The gauge symmetry at each site is denoted $G_i$, $i=a, b, c$, with corresponding gauge bosons $A^{(i)}_\mu$. In general, the gauge coupling constants and the decay constants can be different at each site, unlike in the case of flat background geometry. However, the KK parity, which in the current setup is the geometric reflection $a \leftrightarrow c$, ensures that the gauge couplings on the two boundary sites as well as the two decay constants are equal. It is then straightforward to work out the low-energy spectrum of the three-site model. Defining the zero-mode gauge coupling to be \begin{equation} \frac1{g_0^2} = \frac2{g_a^2} +\frac1{g_b^2}, \end{equation} the mass eigenvalues and eigenstates are \begin{equation} m_0 = 0, \quad m_{1_-} = g_a f , \quad m_{1_+}= \sqrt{g_a^2 + 4g_b^2} f, \end{equation} and \begin{eqnarray} \label{threesiteeig} A_\mu^{(0)} &=& \frac{g_0}{g_a} \left(A^{(a)}_\mu + A^{(c)}_\mu\right) + \frac{g_0}{g_b} A^{(b)}_\mu, \nonumber \\ A_\mu^{(-)} &=& \frac1{\sqrt{2}} \left(A^{(a)}_\mu - A^{(c)}_\mu\right), \\ A_\mu^{(+)} &=& \frac{g_0}{\sqrt{2} g_b} \left(A^{(a)}_\mu + A^{(c)}_\mu\right) - \frac{\sqrt{2}g_0}{g_a} A^{(b)}_\mu. \nonumber \end{eqnarray} \begin{figure}[t] \begin{center} \includegraphics[width=7cm]{threesite.eps} \caption{\it Three-site deconstruction of the warped extra-dimension. } \label{threesitefig} \end{center} \end{figure} As a first check, we see both the zero-th and second KK modes are even under the KK parity, $a\leftrightarrow c$, whereas the first KK mode is odd. Furthermore, we see that the KK masses are controlled by the gauge couplings on the boundary and the middle sites. Two particular limits we are interested in are \begin{eqnarray} \frac{g_a}{g_b} \gg 1 &\Rightarrow& \frac{m_{1_-}}{m_{1_+}} \approx 1 - \frac{2g_b^2}{g_a^2} \approx 1; \\ \frac{g_a}{g_b} \ll 1 &\Rightarrow& \frac{m_{1_-}}{m_{1_+}} \approx \frac12 \frac{g_a}{g_b} \ll 1. 
\end{eqnarray} In the first case, when the gauge coupling at the boundary is much larger than the coupling in the middle, the two massive KK modes are roughly degenerate. In the other case, when the coupling at the middle site is much larger than the coupling on the boundary, the odd KK mode is ``anomalously'' light compared to the 2nd KK mode, and there can be a sizeable hierarchy between the two KK modes. If we view the three-site model as a deconstruction of the warped extra dimension, the two limiting cases actually correspond to two opposite types of warped geometries. It is useful to observe that the massless wave function in Eq.~(\ref{threesiteeig}) is always localized where the gauge coupling is smaller; the wave function is localized near the boundary sites if $g_a\ll g_b$ and near the middle site if $g_a\gg g_b$. The massive modes, on the other hand, are localized away from where the gauge coupling is small due to orthogonality conditions. In models with a warped extra dimension, it is well-known that the massive modes are localized toward the IR region \cite{bulkgauge, gp}. The above observation suggests that, in the case $g_a/g_b \gg 1$, the two boundary sites mimic the IR region, whereas the middle site is the UV region. In other words, this corresponds to an IR-UV-IR warp factor which is symmetric with respect to the reflection about the middle site. This geometric $Z_2$ symmetry again serves as the source of the KK parity in our setup of a warped extra dimension. The other case, $g_a/g_b \ll 1$, then corresponds to the opposite situation, in which the two boundaries correspond to the UV region. This is the UV-IR-UV setup. Another way of understanding the same statement is through the fact that a smaller gauge coupling at a particular site implies a higher strong coupling scale (Landau pole) for the gauge theory at that site.
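The parity structure of the three-site spectrum can be cross-checked numerically by diagonalizing the gauge boson mass-squared matrix. The sketch below assumes a simple U(1) link convention, $\mathcal{L}_{\rm mass}=\tfrac{f^2}{2}\left[(g_a A^{(a)}-g_b A^{(b)})^2+(g_b A^{(b)}-g_a A^{(c)})^2\right]$; the overall normalization of the heavy even-mode mass depends on the sigma-model conventions and may differ from the expression quoted above by $O(1)$ factors, but the qualitative features are reproduced: a massless even zero mode with profile $\propto(1/g_a,1/g_b,1/g_a)$, an odd mode $(A^{(a)}-A^{(c)})/\sqrt{2}$ of mass $g_a f$ matching $m_{1_-}$, near-degeneracy for $g_a\gg g_b$, and a hierarchy for $g_a\ll g_b$:

```python
import numpy as np

def three_site_spectrum(ga, gb, f=1.0):
    """Masses and profiles of a Z2-symmetric U(1) three-site moose (a<->c).

    Link convention (an assumption; O(1) normalization factors depend on
    the sigma-model conventions used in the text):
      L_mass = (f^2/2)[(ga*Aa - gb*Ab)^2 + (gb*Ab - ga*Ac)^2]
    """
    M2 = f**2 * np.array([[ga**2,   -ga*gb,  0.0   ],
                          [-ga*gb,  2*gb**2, -ga*gb],
                          [0.0,     -ga*gb,  ga**2 ]])
    vals, vecs = np.linalg.eigh(M2)               # ascending eigenvalues
    return np.sqrt(np.clip(vals, 0.0, None)), vecs

masses, vecs = three_site_spectrum(0.5, 2.0)
print(masses)        # ordered as [zero mode, odd mode ga*f, heavier even mode]

# The middle mode is odd under the KK parity a <-> c:
odd = vecs[:, 1]
print(np.isclose(odd[0], -odd[2]), np.isclose(odd[1], 0.0))
```

The two limits discussed above follow by varying $g_a/g_b$: a large boundary coupling makes the two massive modes nearly degenerate, while a large middle-site coupling leaves the odd mode parametrically light.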
In a warped extra dimension, the local cutoff at a particular location, where the theory becomes strongly coupled, is determined by the warp factor at that point, so that the UV region has a higher local cutoff than the IR region. We then arrive at the same conclusion: without additional contributions from brane-localized kinetic terms, which could generate some hierarchy between the boundary and bulk couplings, the IR-UV-IR setup will lead to (almost) degenerate first (odd) and second (even) KK gauge bosons, whereas in the UV-IR-UV setup there could be a (little) hierarchy between the odd and even KK modes. \section{IR-UV-IR Model} \label{IRUVIR} In order to obtain a warp factor which is symmetric with respect to reflection about the midpoint of the extra dimension, we consider joining two slices of AdS$_5$, since a single slice does not have such a symmetry.\footnote{Setups with more than one slice of AdS$_5$ space have been discussed in Refs.~\cite{Cacciapaglia:2005pa,Cacciapaglia:2006tg}, even though a symmetric warp factor was not considered.} Clearly there are two distinct ways to do this: we can glue the two slices either in the UV or in the IR region. We begin with the first possibility, labeled the IR-UV-IR model. The metric of the 5D background spacetime resulting from gluing two AdS$_5$ slices at the UV brane is \begin{eqnarray} ds^2 = dy^2 + a^2(|y|)d x^2, \end{eqnarray} where $y \in [-L,L]$ is the extra dimension and $a(y) = e^{ - k y }$ is the warp factor. In order to obtain the Planck-weak hierarchy, we choose $kL\sim 30$. Notice that in conventional models with a single slice of AdS$_5$ the extra dimension is only $y \in [0,L]$. This geometry constitutes a solution of the 5D Einstein's equations in the presence of a negative bulk cosmological constant, a positive-tension brane at the midpoint (the UV brane), and two IR branes with equal negative tensions.
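The size of the hierarchy generated by this warp factor is one line of arithmetic: with $kL\sim 30$, the IR warp factor $e^{-kL}$ suppresses a Planckian input scale by roughly thirteen orders of magnitude, down toward the weak/TeV region (landing exactly at a TeV from the reduced Planck mass would require a somewhat larger $kL\approx 35$, so $kL\sim 30$ should be read as an order-of-magnitude choice). A quick check, with the reduced Planck mass as an illustrative input scale of our choosing:

```python
import math

# kL ~ 30 as chosen in the text; IR warp factor a(L) = e^{-kL}
k_L = 30.0
warp = math.exp(-k_L)

# Illustrative Planckian input scale (an assumption for this estimate):
M_fund_GeV = 2.4e18          # reduced Planck mass in GeV

print(f"warp factor e^(-kL) = {warp:.2e}")                 # ~ 9.4e-14
print(f"redshifted scale ~ {M_fund_GeV * warp:.2e} GeV")   # ~ 10^5 GeV
```

This is the usual RS1 mechanism: the exponential of a modestly sized $kL$ converts a Planckian fundamental scale into a hierarchically smaller IR scale.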
In such a setup the kinetic term of the massless radion has the correct sign, and the radius can be stabilized (i.e., the radion made massive) by a suitable mechanism as usual \cite{Goldberger:1999uk}. Therefore, there are no problematic stability issues associated with the IR-UV-IR model, as opposed to the UV-IR-UV model described in the next section. We assume that the $Z_2$ parity interchanging $y \to - y$ is an exact symmetry of the 5D theory.\footnote{If the Chern-Simons term is present in 5D, it is necessarily odd under $Z_2$. This would affect the stability of the dark matter particle, as pointed out in \cite{Hill:2007zv}. The Chern-Simons term could arise in the presence of brane-localized anomalies \cite{ArkaniHamed:2001is}. In the following we will assume that all brane-localized anomalies cancel and no Chern-Simons terms are present.} In such a case, the eigenmodes can be divided into two classes with different symmetry properties: even modes, whose profiles are symmetric under reflection about the mid-point, and odd modes, with antisymmetric profiles. Obviously, the even and odd profiles are orthogonal to each other on the $[-L,L]$ interval. As long as the action respects the exact $Z_2$ symmetry, the odd modes can only couple in pairs to the even modes in the KK decomposition, and the low-energy, four-dimensional effective theory has the KK parity we desire. By continuity, the odd modes satisfy Dirichlet boundary conditions, henceforth denoted by $(-)$, at the UV brane. Similarly, the even modes have Neumann $(+)$ boundary conditions (mixed boundary conditions (BC) arise in the presence of UV brane-localized terms). This observation suggests a useful description of the model by referring to the spectrum of a {\em single} slice of AdS$_5$. Namely, the spectrum of a bulk field in the $Z_2$-symmetric model contains two single-slice KK towers, corresponding to $(+)$ and $(-)$ boundary conditions in the UV.
For example, a bulk field with Neumann boundary conditions in the IR would have both $(++)$ and $(-+)$ towers, where the first (second) sign is the BC on the UV (IR) brane. Note, however, that the physical volume of the extra dimension in our setup is twice as large as in the single-slice description, which affects the normalization of the wave functions. At this point we already have a model combining warped geometry and KK parity. This is not the end of the story, however. As outlined in the previous section, one of our objectives is to obtain fairly light odd $(-+)$ modes (we would like these modes to cut off the quadratic divergences in the Higgs mass) and sufficiently heavy $(++)$ KK modes (so as to evade tight constraints from the precision electroweak tests). Unfortunately, in the simplest version with no brane kinetic terms, the even and odd KK modes are quite degenerate, as exemplified by the three-site model in the previous section. Both modes have masses of order $m_{\rm KK} = k e^{ - k L }$, with a relative splitting of order $\sim 1 /( k L ) \sim 1/30$. Another way to understand the degeneracy is that the AdS geometry localizes KK modes near the IR brane, so that their spectrum is only weakly sensitive to the UV brane boundary conditions. As we discuss next, a splitting between even and odd {\em gauge} KK modes can be obtained with very large IR brane kinetic terms (BKT), which in turn have important implications for the strong coupling scale of the 5D setup. \subsection{Gauge bosons with large IR brane kinetic terms} We consider the spectrum of {\em gauge} KK modes. A similar analysis can be performed for other fields. We follow the notation of Ref.~\cite{Carena:2002dz} for the BKT's (see also Ref.~\cite{Davoudiasl:2002ua}).
The 5D action is \begin{eqnarray} \label{5daction} S & = & - \int d^4x \int_{-L}^{L} dy \sqrt{- g}\ \frac1{4g_5^2} \Big[ F^{ M N } F_{ M N }+ 2 r_{ UV } F^{ \mu \nu } F_{ \mu \nu } \delta ( y ) \nonumber \\ && \hspace{5cm} + 2 r_{ IR } F^{ \mu \nu } F_{ \mu \nu } \delta ( y - L ) + 2 r_{ IR } F^{ \mu \nu } F_{ \mu \nu } \delta ( y + L ) \Big], \end{eqnarray} where $g$ is the determinant of the metric, and capital Latin letters $M,N=0,1,2,3,5$ refer to the 5D coordinates, whereas lower case Greek letters $\mu,\nu=0,1,2,3$ refer only to the four uncompactified directions. The strengths of the BKT on the two boundary IR branes are required to be equal by the $Z_2$ symmetry. Furthermore, each delta function on the boundary brane contributes only a factor of 1/2 when performing the $y$ integration. Choosing the gauge $A_5=0$, we perform the KK decomposition by expanding \begin{equation} A_\mu(x,y) = \sum_{n} A_{\mu, n}(x) f_{n}(y), \end{equation} where the bulk wave function $f_n(y)$ satisfies \begin{eqnarray} \label{5deom} &&\partial_y\left[e^{-2k|y|}\partial_y f_n(y)\right] \nonumber \\ && \hspace{2cm} + m_n^2 \left[1+2r_{UV}\delta(y) +2r_{IR}\delta(y-L) + 2r_{IR}\delta(y+L)\right]f_n(y) = 0, \\ && \label{5dnorm} \frac1{g_5^2} \int_{-L}^{L} dy \left[1+2r_{UV}\delta(y) +2r_{IR}\delta(y-L) + 2r_{IR}\delta(y+L)\right]f_n^2(y) = 1. \end{eqnarray} The $Z_2$ symmetry, $y\leftrightarrow -y$, inherited from the 5D action implies that bulk profiles are either even or odd under the reflection in the $y-$direction, $f_n(y) = \pm f_n(-y)$. Therefore, we could rewrite the KK decomposition as \beq A_\mu(x,y) =\sum_{n_+, n_-} A_{\mu,n_+}(x) f_{n_+}(|y|) + A_{\mu,n_-}(x) \epsilon(y) f_{n_-}(|y|) \eeq where $f_{n_+}$ and $f_{n_{-}}$ are the even and odd modes, respectively, and $\epsilon(y)$ is +1 $(-1)$ for $y>0$ ($y<0$). 
Because of the warp factor $\exp(-2k|y|)$ in the equation of motion, one solves Eq.~(\ref{5deom}) separately for $y>0$ and $y<0$, imposes Neumann boundary conditions (mixed boundary conditions, in the presence of IR BKTs) at $y=\pm L$ to ensure a massless zero mode, and matches the solutions at $y=0$ as implied by the delta functions in Eq.~(\ref{5deom}). When $r_{UV}=0$, the ``continuity conditions'' at $y=0$ are simply \begin{equation} f_{n_-}(0) = 0, \quad \partial_y f_{n_+}(0) = 0. \end{equation} As emphasized earlier, the above equation shows that a single bulk field in the IR-UV-IR setup encompasses modes that have both Dirichlet and Neumann boundary conditions on the UV brane, and we can simply ``borrow'' the results from the single-slice AdS$_5$ model by considering both types of boundary conditions. In fact, using the $Z_2$ reflection symmetry, the 5D action of the IR-UV-IR setup in Eq.~(\ref{5daction}) can be re-written as \begin{equation} \label{new5daction} S= - \int d^4x \int_{0}^{L} dy \sqrt{- g}\ \frac1{4\tilde{g}_5^2} \Big[ F^{ M N } F_{ M N }+ 2 r_{ UV } F^{ \mu \nu } F_{ \mu \nu } \delta ( y ) + 2 r_{ IR } F^{ \mu \nu } F_{ \mu \nu } \delta ( y - L ) \Big], \end{equation} where the integration in $y$ runs only from 0 to $L$ and $\tilde{g}_5^2=g_5^2/2$. It is then clear that this is the 5D action of a single slice of AdS$_5$ with a re-defined 5D gauge coupling $\tilde{g}_5 = g_5/\sqrt{2}$, where the factor of $\sqrt{2}$ reflects the fact that the physical volume in the $y$-direction is twice as large as the integration range in Eq.~(\ref{new5daction}). Now it is straightforward to construct the solutions to Eqs.~(\ref{5deom}) and (\ref{5dnorm}) by considering the equations of motion in the single-slice setup with $y\in [0,L]$: \begin{eqnarray} && \pa_y (e^{-2 k y} \pa_y f_n) + m_n^2 f_n = 0 \\ && \label{new5dnorm} \frac1{\tilde{g}_5^2} \int_0^L dy \left[1+2r_{UV}\delta(y) +2r_{IR}\delta(y-L)\right]f_n^2(y) = 1.
\end{eqnarray} and the boundary conditions \bea e^{-2 k L} \pa_y f_{n_\pm}(L) &=& m_{n_\pm}^2 r_{IR} f_{n_\pm}(L) \\ \pa_y f_{n_+}(0) &=& - m_{n_+}^2 r_{UV} f_{n_+}(0) \\ f_{n_-}(0) &=& 0 \eea The normalization in Eq.~(\ref{new5dnorm}) is consistent with Eq.~(\ref{5dnorm}) after taking into account $\tilde{g}_5 = g_5/\sqrt{2}$. The spectrum of the gauge boson in the IR-UV-IR setup now consists of two interlacing towers of modes; in the language of the single-slice model, these are the $(++)$ tower, which is KK-even, and the $(-+)$ tower, which is KK-odd. A massless mode in the $(++)$ tower always exists, irrespective of how large the BKTs are. We also have two towers of (roughly) equally-spaced KK modes starting at $\sim m_{\rm KK} = k e^{-k L}$. In addition, each tower has a parametrically lighter massive state. For $r_{IR} \gg 1/k$ we find the approximate expressions \begin{eqnarray} m_{1_-}^2 &\approx& \frac{2}{k r_{IR}} m_{\rm KK}^2 \label{gaugeodd} \\ m_{1_+}^2 &\approx& \frac{r_{UV} + r_{IR} + L}{r_{UV} + L} \frac{2}{k r_{IR}} m_{\rm KK}^2 \end{eqnarray} As we can see, the lightest KK mode in each tower has its mass suppressed with respect to $m_{\rm KK}$. More important for us is that the lightest even and odd modes can be split. The ratio is \begin{equation} \label{eoratio} \frac{m_{1_+}}{m_{1_-}} \approx \sqrt{1 + \frac{r_{IR}}{r_{UV} + L}} \end{equation} Let us consider the effects of the UV and IR BKT's in turn. As mentioned earlier, in the absence of BKT's, the even and odd modes are quite degenerate, since they are localized away from the UV brane and hence insensitive to the different BC's there (the BC's on the IR brane being the same). It is clear that very large UV BKT's, which affect only the $(++)$ modes, could compensate for the small UV brane wavefunction and modify the spectrum of the $(++)$ modes relative to the $(-+)$ ones. However, we see from Eq.~(\ref{eoratio}) that, for fixed IR BKT's, UV BKT's in fact tend to {\em reduce} the splitting between even and odd KK modes.
It turns out that positive BKTs tend to repel massive KK modes away from the brane \cite{Carena:2002dz,Davoudiasl:2002ua}, so that very large UV BKTs will effectively convert the $(+)$ BC on the UV brane into a $(-)$ BC, i.e., make the two towers even more degenerate. Negative $r_{UV}$ increases the mass splitting, but it also leads to the appearance of a ghost (or a Landau pole in the UV brane propagator) at the intermediate scale $\sim k e^{- k |r_{UV}|}$. We cannot obtain a sizable splitting this way without substantially lowering the UV brane cut-off. In the following we set the UV BKTs to be small or zero, since they do not produce the desired effects. Consider next the effect of positive IR BKT's. In the absence of BKT's, even and odd KK modes are localized near the IR brane. Since the BC (hence the wavefunction) on the IR brane is the same for the even and odd towers, we might expect the effect of IR BKT's on the two towers to be similar and therefore not to lead to a mass splitting. However, large positive IR BKTs tend to repel the massive wave functions away from the IR brane, pushing them toward the UV brane. In this case the spectrum becomes more sensitive to the BCs on the UV brane [which are different for the $(++)$ and $(-+)$ modes], leading to a larger splitting between the two modes. However, to actually end up repelling the KK modes away from the IR brane, the BKT's have to overcome the ``pressure'' from the AdS geometry to localize KK's near the IR brane. Only very large IR BKT's, $kr_{ IR } \gg k L$, lead to a large splitting between even and odd modes. To be precise: \begin{eqnarray} \frac{ m_{1_+} }{ m_{1_-} } & \sim & \sqrt{\frac{ k r_{ IR } }{ k L }} \label{ratio} \end{eqnarray} The need for such a size of IR BKT's can be understood using the idea of the holographic RG flow. As explained in Ref.
\cite{holoRG}, moving the UV brane by the infinitesimal proper distance $\epsilon$ toward the IR brane induces a brane kinetic term on the UV brane with a coefficient $\propto \epsilon / g_5^2$. Moving the UV brane very close to the IR brane, we find that AdS without any brane kinetic terms is equivalent to flat space with large brane kinetic terms $\sim L / g_5^2$ on one brane. Now, in the AdS model with large {\em IR} BKT's (but no UV BKT's to begin with), there is a competition between the UV brane terms {\em induced} via the holographic RG (which repel KK modes away from the UV brane) and the IR brane terms (which repel KK modes away from the IR brane) -- clearly the latter ``win'' for $r_{IR} \gg L$. Because of that repulsion away from the IR brane, the {\em even} gauge KK spectrum is effectively given by $(+-)$ (in addition to a zero-mode which is effectively localized near the IR brane). With such boundary conditions, there is a tower of KK modes starting at $\, m_{\rm KK}$. In addition, there is a light mode whose mass is parametrically suppressed with respect to the KK scale, $m_{1+} \sim \, m_{\rm KK}/(k L)^{1/2}$, as is well known from the analysis of Higgsless models in AdS$_5$ \cite{higgsless}. Its mass is set by the zero-mode coupling in the absence of large BKTs, which is the origin of the $(k L)^{1/2}$ factor. This feature follows from the fact that this is a would-be zero mode (it would have been a zero-mode were it not for the effectively Dirichlet boundary condition on the IR brane) and its profile is almost flat except near the IR brane, where it is suppressed (see below). Similarly, the odd gauge KK spectrum is effectively $(--)$, plus a would-be zero-mode localized near the IR brane. Moreover, the fifth component $A_5$ has effectively $(++)$ BC, which yields a massless scalar mode that marries the would-be vector zero-mode.
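The parametric form of this would-be zero-mode mass can be sketched with a small-mass expansion (the standard Higgsless-type estimate, included here for completeness). For effectively $(+-)$ BC's with no UV BKT, $\partial_y f(0)=0$, a first integration of the equation of motion gives $e^{-2ky}\partial_y f(y) = -m^2\int_0^y f$, so a nearly flat profile obeys $f(L) \propto 1 - m^2\int_0^L dy\, y\, e^{2ky} \approx 1 - m^2 L\, e^{2kL}/(2k)$, and imposing the effectively Dirichlet condition $f(L)=0$ yields \begin{equation} m_{1_+}^2 \approx \frac{2k}{L}\, e^{-2kL} = \frac{2}{k L}\, m_{\rm KK}^2 , \end{equation} which is the $(kL)^{1/2}$ suppression quoted above.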
As a consequence, the vector mass is set by the IR brane-localized gauge coupling and thus the suppression factor in this mass is $(k r_{IR})^{1/2}$. The profile of the lightest modes can be approximated by \begin{eqnarray} f_0(y) &\approx & \frac{\tilde{g}_5}{\sqrt{r_{IR}}} \\ f_{1-}(y) &\approx& \frac{\tilde{g}_5}{e^{2 k L}\sqrt{r_{IR}}} \left (e^{2 k y} - 1 \right ) \\ f_{1+}(y) &\approx& \frac{\tilde{g}_5}{\sqrt{L + r_{UV}}} \left ( 1 - \frac{1}{2k} m_{1_+}^2 (y + r_{UV}) e^{2 k y} \right ) . \end{eqnarray} It is important to remember that the wave functions here are written in terms of the ``re-defined'' 5D gauge coupling $\tilde{g}_5$ of the single-slice AdS$_5$ action in Eq.~(\ref{new5daction}). In the original formulation of the IR-UV-IR setup in Eq.~(\ref{5daction}), $g_5= \sqrt{2} \tilde{g}_5$, which results in a suppression factor of $1/\sqrt{2}$ in the wave functions and accounts for the fact that the physical volume in the extra dimension in the IR-UV-IR setup is twice as large as in the single-slice AdS$_5$. Here we see that the zero-mode is flat and its normalization is dominated by the IR BKT, so that it is {\em effectively} localized near the IR brane. The zero-mode gauge coupling, one of the low-energy observables, is related to the 5D gauge coupling by $g_0 \approx g_5/\sqrt{2r_{IR}}$. Again this differs from the single-slice AdS$_5$ setup by the volume factor. As illustrated in Fig.~\ref{profiles}, the first odd mode is peaked at the IR brane, while the first even mode is almost flat everywhere except near the IR brane, where the wave function is suppressed. From the profiles of the wave functions one sees that, for $kL \gg 1$, the first odd mode couples to the IR brane with a similar strength to the zero mode, $f_0(L) \approx f_{1-}(L) \approx g_5/{\sqrt{2r_{IR}}}$, whereas its coupling to the UV brane is zero.
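The relation $g_0 \approx g_5/\sqrt{2r_{IR}}$ follows directly from the normalization condition, Eq.~(\ref{new5dnorm}): for a flat profile $f_0(y) = {\rm const}$, the end-point delta functions contribute half their weight, so \begin{equation} \frac{f_0^2}{\tilde{g}_5^2}\left( L + r_{UV} + r_{IR} \right) = 1 \quad \Rightarrow \quad f_0 = \frac{\tilde{g}_5}{\sqrt{L + r_{UV} + r_{IR}}} \approx \frac{\tilde{g}_5}{\sqrt{r_{IR}}} = \frac{g_5}{\sqrt{2 r_{IR}}} \end{equation} for $r_{IR} \gg L, r_{UV}$. With the normalization convention used here, the 4D zero-mode coupling is simply the value of the profile, $g_0 = f_0$.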
On the other hand, compared to the zero mode, the even mode coupling to the IR brane is suppressed, \beq \label{e.ireven} \frac{f_{1+}(L)}{f_0(L)} \approx \sqrt{\frac{r_{UV} + L}{r_{IR}}} \approx \frac{m_{1_-}}{m_{1_+}}, \eeq while its coupling to the UV brane is {\em enhanced} (we will use this fact later on): \beq \label{crat} \frac{f_{1+}(0)}{f_0(0)} \approx \sqrt{ \frac{r_{IR}}{r_{UV} + L} } \approx \frac{m_{1_+}}{m_{1_-}}. \eeq Such behavior of the first KK-even mode has been observed in Refs.~\cite{Carena:2002dz,Davoudiasl:2002ua} and can be understood from the repulsion of the corresponding wave functions away from the branes due to the BKT's. On the other hand, the zero-modes (including the lightest odd mode, which corresponds to a ``would-be'' zero-mode) are not similarly repelled. In the IR-UV-IR setup we found that the enhancement and suppression of the couplings of the lightest even KK mode to the UV and IR branes, respectively, are correlated with the mass splitting $m_{1_+}/m_{1_-}$. \begin{figure}[t] \begin{center} \includegraphics[width=5.3cm]{profilesGB.eps} \includegraphics[width=5.3cm]{KKP_profilesIR.eps} \includegraphics[width=5.3cm]{profilesGB_rir0.eps} \caption{\it Left: Gauge boson wave functions along the extra dimension for the first even $(1_+)$ mode (red), the first odd $(1_-)$ mode (blue) and the zero $(0)$ mode (black). Middle: Profiles zoomed in near the IR brane, with the level-2 KK modes added as dashed lines for comparison. Right: Same as the middle plot but with the IR BKT switched off. The first odd mode is more strongly coupled to the IR brane, while the first even mode is less suppressed than in the case with the IR BKT.
} \label{profiles} \end{center} \end{figure} Even though very large IR BKT's result in a sizable ratio between the first even and odd KK modes, which is desirable from the phenomenological viewpoint, they also imply that the 5D gauge coupling $g_5$ is large, due to the relation $g_0=g_5/\sqrt{2r_{IR}}$ and the assumption that the zero-mode couples with the SM strength. Therefore, if one demands the UV/IR hierarchy to be Planck-weak and/or the ratio $m_{1_+}/m_{1_-}$ to be sizable, 5D perturbativity may become an issue of concern. The strong coupling scale in the IR-UV-IR model can be estimated using the results from the single-slice AdS$_5$ setup by taking into account two facts. First, the physical volume in the IR-UV-IR model is twice as large, which is reflected in the normalizations of the wave functions as well as in the relation $\tilde{g}_5=g_5/\sqrt{2}$. Secondly, a single bulk field in the IR-UV-IR model contains two towers of KK modes, with both $(++)$ and $(-+)$ BCs in the single-slice setup. Consider a Euclidean propagator between two points $y_{1,2} \sim L$ in 4D momentum space. It can be represented as $i g_{eff}^2(p^2)/p^2 $. At low energies, below the lightest KK mass, we have $g_{eff}^2 \approx g_0^2$, but above the KK scale the effective coupling grows with energy, which in the single-slice AdS$_5$ setup is \cite{Carena:2002dz} $g_{eff}^2 \approx e^{k L} \tilde{g}_5^2 p$ for one type of BC's.\footnote{Note that, as seen in Fig.~\ref{profiles}, the regularly spaced heavy KK modes (both odd and even) with mass $\sim \, m_{\rm KK}$ tend to vanish at the IR brane due to the repulsion by very large BKT's. So, if we consider the propagator between two points localized exactly on the IR brane, we will not find the above growth with energy, since the heavy modes do not contribute to this propagator.
In order to include the effects of these heavier KK modes giving the above growth of the effective coupling with energy, we must consider the propagator with endpoints which are $\sim 1/ k$ (which is roughly the width of these KK profiles) away from the IR brane, thus accounting for the use of a smearing factor in Fig.~\ref{strongcoupling}.} In the IR-UV-IR setup the growth is twice as large because both types of BC's are included. Therefore, defining the strong coupling scale $\Lambda$ by $g_{eff}^2(\Lambda) = 16 \pi^2$, one arrives at the estimate \beq \Lambda \sim e^{- k L} \frac{16 \pi^2}{g_5^{2}} \sim \frac{8 \pi^2 }{ k L g_0^2} \, m_{\rm KK} \left( \frac{ m_{1_-} }{ m_{1_+ } } \right)^2 \eeq where we used $g_{0} \approx g_5/ \sqrt{2r_{IR}}$ and $m_{1_+}/m_{1_-} \approx \sqrt{ r_{IR}/L}$. For example, setting $g_0^2 \sim 1/2$, $m_{1_-} /m_{1_+} \sim 1/2$ and $k L\sim 30$, we get $\Lambda \sim \, m_{\rm KK} \sim$ tens of TeV. The strong coupling scale is far above the masses of the lightest even and odd KK modes; however, it is not separated from the scale where the tower of evenly spaced KK modes sets in. These estimates are confirmed by the numerical analysis in Fig.~\ref{strongcoupling}. Thus there is no energy regime where the theory is effectively five-dimensional and weakly coupled (for that we would need $\Lambda \gg \, m_{\rm KK}$). As a compromise, we might need to lower the UV brane scale to some intermediate scale (i.e., choose smaller $k L$), in which case we lose the solution to the Planck-weak hierarchy problem, but we can still easily address the hierarchy between the weak scale and (at least) the flavor scale $\sim 1000$ TeV. \begin{figure}[!htb] \begin{center} \includegraphics[width=7cm]{KKP_strongcouplingb.eps} \caption{\it The position dependent propagator smeared with $a^{-1}$ (solid red). It hits the strong coupling scale at the second heavy KK mass.
For comparison, IR brane-to-brane propagators in the absence of the IR BKT (dashed blue).} \label{strongcoupling} \end{center} \end{figure} \subsection{Fermions} The Lagrangian for the fermions \beq {\cal L}_f = \bar{ \psi } \Gamma^M \left( D_M - \epsilon(y) c k \right) \psi \eeq has the $Z_2$ symmetry $y \to - y$ with $\psi_{L,R} \to \gamma_5 \psi_{L,R}$. In the above, $\{\Gamma^M\}$ are the 5D Dirac matrices and $D_M$ is the covariant derivative. As is familiar from the RS1 and UED setups, a bulk fermion mass term is odd under the reflection $y \to -y$; therefore we need to include a bulk mass profile that is odd under $y \to -y$ and introduce the $c$ parameter such that $M_b=\epsilon(y) c k$. Notice, however, that in conventional flat or warped extra-dimensional setups the physical domain extends only from 0 to $L$ after the orbifold projection. So even though the bulk mass profile is odd under $y \to -y$, the mass term itself is constant over the whole physical domain $[0,L]$. In our case, the physical domain has been extended from $[0,L]$ to $[-L,L]$ and the mass profile in fact includes a jump at $y=0$. At this stage we will not be concerned with the detailed origin of such a mass profile, except to note that a plausible source could be a coupling to a scalar with a kink profile, similarly to the orbifold setup in Ref.~\cite{Kaplan:2001ga}. As shown below, for the fermions we do not need BKTs to obtain a splitting between even and odd KK modes, so we omit them in most of the following discussion. The IR boundary conditions require the vanishing of one chiral component on the boundaries. Consider the case when the right-handed component vanishes: $\psi_R(L) = 0$; in this case there is a massless zero mode for the left-handed component. The discussion for the case $\psi_L(L) = 0$ proceeds in parallel, with $c \to -c$.
Like the gauge field, a 5D fermion contains two KK towers with different UV boundary conditions: \bea \psi_L(x,y) &=& \sum_{n_+,n_-} a^{-3/2}f_{L,n_+}(|y|) \psi_{L,n_+}(x) + \epsilon(y) a^{-3/2} f_{L,n_-}(|y|) \psi_{L,n_-}(x) \nn \psi_R(x,y) &=& \sum_{n_+,n_-} \epsilon(y) a^{-3/2} f_{R,n_+}(|y|) \psi_{R,n_+}(x) + a^{-3/2} f_{R,n_-}(|y|) \psi_{R,n_-}(x) \eea where the profiles satisfy the following coupled, first-order equations of motion \begin{eqnarray} \left(\partial_y + \frac{a^\prime}{2a} + M_b \right) f_{L,n} &=& m_n a^{-1} f_{R,n} \\ \left(-\partial_y - \frac{a^\prime}{2a} + M_b \right) f_{R,n} &=& m_n a^{-1} f_{L,n}. \end{eqnarray} The massless zero mode $f_{L,0}(y)$ is even under the reflection $y\to -y$. For massive modes, the equations of motion imply that when the left-handed component has a symmetric profile under reflection, the corresponding right-handed chirality has an anti-symmetric profile, and vice versa. We insert an extra $(-1)$, in addition to the reflection $y\to -y$, in the definition of KK-parity for the right-handed chirality. In orbifold language, this extra minus sign could arise from performing the orbifold projection and is consistent with the definition of KK-parity in UED. With the above definition of KK-parity, the {\em even} tower has right-handed components that are anti-symmetric under $y\to -y$ and obey the ``continuity condition'' $f_{R,n_+}(0) = 0$, which can be interpreted as the boundary condition on the UV brane. The left-handed zero mode has the profile $f_{L,0} \approx e^{(1/2 - c) k y}$, which is localized towards the UV for $c > 1/2$ and towards the IR for $c < 1/2$. The massive KK-even modes start at $\sim \, m_{\rm KK}$ for all values of $c$. For the {\em odd} tower the continuity condition reads $f_{L,n_-}(0) = 0$.
The mass of the lightest odd state is \bea m_{1_-} &\sim& \frac{ \, m_{\rm KK} }{\sqrt{ k L } } \; \hbox{to} \; \, m_{\rm KK} \qquad c \gtrsim -1/2 \nn m_{1_-} &\sim& \, m_{\rm KK} e^{k L (1/2 + c)} \qquad c < -1/2 \label{fermionodd} \eea Thus, choosing $c < -1/2$ we can generate a sizable splitting between the lightest even and odd KK modes without resorting to BKTs. In that case the RH profile is localized toward the UV: $f_{R,1_-} \sim e^{(1/2 + c) k y}$ (see Fig.~\ref{Fermion_profiles}). Note that the splitting can only be achieved if the corresponding zero-mode fermion is sharply localized at the IR brane. As is clear from the discussion, the zero-mode fermion has $(++)$ BC's on the (UV, IR) branes. Changing its BC's from $(++)$ to $(-+)$ produces a would-be zero mode that is very light, as the wave function is localized near the IR brane and insensitive to the BC at the UV brane; this is nothing but the lightest odd mode. Typically, naturalness arguments require only the top quark KK modes to lie below $\sim 1 \, {\rm TeV}$, and the top quark is always localized toward the IR, naturally giving light odd KK modes for it. Hence, the even-odd splitting for KK fermions that we obtain by choosing $c$ appropriately is sufficient for our purpose (this is different from the gauge case, where the introduction of large BKT's is necessary). \begin{figure}[!htb] \begin{center} \includegraphics[width=7cm]{Fermion_profile_c1.eps} \includegraphics[width=7cm]{Fermion_profile_c2.eps} \caption{\it Profiles of the first odd (black) and even (red) KK fermions with RH chirality for two values of $c$.} \label{Fermion_profiles} \end{center} \end{figure} For the light fermions there are two options: they can be localized near the UV or near the IR brane. The former setup allows us to simply address the Yukawa hierarchy and flavor issues but, as we show below, is more constrained by EW data.
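For orientation, the $c<-1/2$ branch of Eq.~(\ref{fermionodd}) can be evaluated for a few benchmark values of $c$; the inputs $m_{\rm KK}=3$ TeV and $kL=30$ below are illustrative assumptions, not predictions of the model.

```python
# Evaluate the c < -1/2 branch of Eq. (fermionodd): m_1- ~ m_KK * exp[kL(1/2+c)].
# m_KK = 3 TeV and kL = 30 are assumed benchmark numbers for illustration only.
import math

m_KK, kL = 3000.0, 30.0                        # GeV, warp exponent (assumed)
for c in (-0.55, -0.6, -0.7):
    m_odd = m_KK * math.exp(kL * (0.5 + c))
    print(f"c = {c}: m_1- ~ {m_odd:.0f} GeV")  # well below m_KK already at c ~ -0.55
```

Even a mild departure from $c=-1/2$ pushes the lightest odd fermion far below $m_{\rm KK}$, illustrating why a sub-TeV odd KK top is natural in this setup.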
\bi \item {\bf Light fermions near the UV brane.} The light fermions can be localized near the UV brane by choosing the corresponding 5D mass parameter $c>1/2$. This yields naturally small couplings to the Higgs localized in the IR, and so the flavor hierarchy is addressed \cite{gn,gp}. At the same time, a severe flavor problem is avoided \cite{gp, Huber:2000ie}. However, in this case, the coupling of light fermions to the {\em lightest} even gauge KK mode is enhanced compared to the SM gauge coupling. From Eq.~(\ref{crat}), the coupling is given approximately by $g_{5}/ \sqrt{2L}$, which is enhanced with respect to the zero-mode coupling, $g_5/\sqrt{2r_{IR}}$, by a factor equal to the splitting between the even and odd gauge KK's. Integrating out the lightest even gauge boson will induce 4-fermion (flavor-preserving) operators with the coefficient given by $\sim g_{5}^2 / (2L)\times m_{ 1_+ }^{-2} \sim g_0^2 / m_{ 1_- }^2$. Since the limit on the mass scale suppressing $4$-fermion operators is a few TeV \cite{Barbieri:1999tm}, the EW data constrain the mass of the {\em odd} mode to be $\gtrsim$ a few TeV. Thus, with the light fermions on the UV brane there is a tension between naturalness and electroweak precision data. \item{\bf Light fermions away from the UV brane.} The alternative is to localize the light fermions away from the UV brane such that their coupling to the lightest even gauge KK mode is suppressed. Such a localization of the zero-mode fermions can be achieved either (i) in the standard way by choosing $c < 1/2$ or (ii) by keeping $c \gtrsim 1/2$ and adding huge IR fermion kinetic terms\footnote{In this case, the fermionic profile is peaked towards the UV. However, the dominant contribution to the normalization of the fermion zero-mode (and to its coupling to gauge modes) comes from the IR-localized kinetic term.} $k r_F > e^{(2c-1) k L}$.
For light fermions localized very close to the IR brane, the coupling to the lightest even mode is smaller than the SM strength, see Eq.~(\ref{e.ireven}). Consequently, constraints from four-fermion operators are not so stringent. In this case, the main constraint comes as usual from the S parameter and requires the lightest {\em even} mode to be heavier than a few TeV. In turn, the odd KK mode can still be lighter than a TeV, which improves naturalness. Nevertheless, with the light fermions localized in the IR, the flavor hierarchy is not addressed in the usual fashion of Refs.~\cite{gn,gp}. We also expect a severe flavor problem: the four-fermion flavor-violating operators from integrating out the cut-off physics are generically too large, even though contributions from gauge KK exchange might be suppressed due to the latter's repulsion from the IR brane, where the light fermions are localized. Such large effects arise either from the cut-off suppressed operators in the bulk in case (i), or are localized on the IR brane in case (ii). To avoid flavor problems we should equip the model with additional flavor structure, see e.g. \cite{Rattazzi:2000hs}. \ei \begin{figure}[!htb] \begin{center} \includegraphics[width=10cm]{spectrum2.eps} \caption{\it KK mass spectrum. The first tower is for gauge bosons ($r_{IR}=4L$). The last three towers are for fermions with different $c$ parameters. The $n=1$ modes are black and the $n=2$ ones are pink. Each tower contains two sub-towers: the left one is for KK parity-odd modes, the right one for KK parity-even modes. } \label{spectrum} \end{center} \end{figure} \subsection{Dark Matter} \label{subsection:dm} KK parity implies that the lightest KK-odd particle (LKP) is stable. There are two main possibilities: it could be either the lightest KK-odd gauge boson or the lightest KK-odd fermion (in the IR-UV-IR setup with large IR BKTs, the KK graviton is never the lightest mode).
From our previous discussion, and as illustrated in the spectrum of Fig.~\ref{spectrum}, the LKP can be a fermion if the $c$ parameter satisfies $c \lesssim -1/2$, that is, when the zero mode is sharply localized toward the IR brane. From naturalness arguments, we expect the appearance of a light odd KK mode of the top quark. In particular, we expect that the only fermion having a $c$-value close to $-1/2$ is the RH top quark. However, in order to be a viable dark matter candidate, the LKP has to be electrically neutral and should interact weakly, which excludes the case where the lightest odd-KK top quark is the LKP. The only possibility for fermionic LKP dark matter would be the KK partner of the RH neutrino, assuming the RH neutrino has the smallest $c \lesssim -1/2$. This would mean that the zero mode of the RH neutrino lives near the IR brane, which is not very well motivated, since the neutrino is the lightest of the SM particles and we expect it to be localized toward the UV. Therefore, in the following we do not consider the KK-odd fermion LKP case and we refer to \cite{Agashe:2004ci,Belanger:2007dx} for analyses of Dirac RH neutrino dark matter. With the lightest KK-odd gauge boson as the LKP, there remain several options that lead to different interactions of the LKP. Here we consider the situation in the KK parity symmetric version of the model of Ref.~\cite{Agashe:2003zs}, where the electroweak symmetry is extended to $SU(2)_L \times SU(2)_R \times U(1)_X$ and contains custodial symmetry. The model contains three neutral gauge bosons, $L_{1-}^3$, $R_{1-}^3$, $X_{1-}$, and the LKP could be a combination of those. In our setup with large brane kinetic terms, the masses of the lightest gauge states depend in the first place on the relative size of the IR BKTs $r_L$, $r_R$, $r_X$ for the three group factors.
Unlike in the minimal UED scenario, the one-loop corrections to gauge boson masses play a secondary role (they are still relevant, though, because they split the masses of charged and neutral gauge bosons). Generically, the LKP will be embedded in the group factor with the largest BKT. The annihilation cross section of the LKP can be very different depending on whether the LKP is embedded in $R_{1-}^3$ or $X_{1-}$, or whether it lives in $L_{1-}^3$. If the LKP is $X_{1-}$, it has no non-abelian gauge interactions whatsoever. If it is $R_{1-}^3$, it does have non-abelian interactions; however, vertices with the SM $W$ boson (which lives in $L_0^\pm$) are only induced by electroweak symmetry breaking and are very suppressed. Thus, both of these cases are similar and, using the UED nomenclature, we refer to both as the KK photon LKP. In UED, the KK photon annihilates dominantly into SM fermions with SM couplings \cite{Servant:2002aq} and its mass is predicted to be close to the 1 TeV scale to account for the observed dark matter abundance. In the model at hand, the situation is different due to the different mass scales and the non-trivial profiles along the extra dimension. The lightest KK-odd gauge boson is peaked toward the IR brane and couples with the SM strength only to the SM fermions localized toward the IR brane. Furthermore, by $Z_2$ parity conservation, the interaction vertex with a light fermion must involve an odd KK fermion. The latter are typically very heavy in our setup, unless the corresponding SM fermion is sharply localized on the IR brane ($c < -1/2$). Thus, typically the LKP can annihilate efficiently only to top quarks. For this reason, the annihilation cross section into fermions will be too small to support a TeV mass dark matter particle, unless all SM fermions are sharply localized toward the IR. The possibility that the LKP is $L_{1-}^3$, which we refer to as the KK $Z$, appears more promising. In UED, the KK $Z$ is usually not considered as the LKP.
The reason is that, without BKTs, the KK photon is lighter than the KK $Z$ due to one-loop corrections to the KK masses \cite{Cheng:2002iz}. In the present setup, however, there is no reason to reject the KK $Z$ scenario. The most important point is that the KK $Z$ has non-abelian gauge interactions with the SM $W$ bosons. More precisely, we have the trilinear vertex: \beq \label{e.gtv} {\cal L}_{3} \approx - i g_L (\pa_\mu L_{1-,\nu}^3 - \pa_\nu L_{1-,\mu}^3) L_{1-,\mu}^+ W_\nu^- + \dots \eeq and the coupling here is the SM $SU(2)_L$ coupling. We also have the quartic vertex: \beq \label{e.gqv} {\cal L}_{4} \approx - g_L^2 L_{1-,\mu}^3 L_{1-,\mu}^3 W_\nu^+ W_\nu^- + \dots \eeq In the above, we neglected the effects of electroweak symmetry breaking. These couplings lead to the annihilation diagrams shown in Fig.~\ref{Feynmanndiagrams} and the annihilation cross section into $W^+W^-$ is \cite{Burnell:2005hm} \beq \sigma_{L^3 L^3 \rightarrow W^+ W^- } \ = \ \frac{g_L^4}{18\pi m^2 s^3 \beta^2} \left[ - 12m^4 (s- 2m^2 )L + s \beta (12m^4+3s m^2+4s^2) \right] \eeq where $\beta^2 = 1 - 4 m^2/s$, $L = \log[(1 + \beta)/(1-\beta)]$. The KK $Z$ also couples to the Higgs boson, which is localized on the IR brane. According to Fig.~\ref{profiles}, the lightest odd gauge boson couples to the IR brane with the same strength as the zero mode. Thus, the coupling to the Higgs has the SM strength. Annihilation via the Higgs boson yields only a small correction (however, the coupling to the Higgs will be relevant for direct detection). \begin{figure}[!htb] \begin{center} \includegraphics[height=3.cm,width=14cm]{KKZanni_diagrams.eps} \caption{\it Diagrams contributing to the annihilation of the KK $Z$. In the first diagram both the $t$ and $u$ channels should be included.
And in the case of the $SO(4)$ model, both the vector $V^{\pm}$ and axial $A^{\pm}$ charged gauge bosons are exchanged.} \label{Feynmanndiagrams} \end{center} \end{figure} For the same reasons as in the KK photon case, we do not expect the cross section for annihilation into fermions to be sizable. Finally, annihilation into $ZZ$ and $hh$ is comparatively negligible. An interesting variation of the KK $Z$ LKP is the case when the gauge couplings and the BKTs for $SU(2)_L$ and $SU(2)_R$ are equal, which may occur if the model displays an $SO(4)$ symmetry (which may be a consequence of a larger underlying $SO(5)$ symmetry as in \cite{Agashe:2004rs}). In the $SO(4)$ invariant case, $L^3_{1-}$ and $R^3_{1-}$ are degenerate in the limit of no EW breaking. Electroweak breaking lifts the degeneracy and picks out the vector combination $V^3 = L^3_{1-} + R^3_{1-}$ as the LKP, while the axial combination $A^3 = L^3_{1-} - R^3_{1-}$, which couples to the Higgs on the IR brane, is heavier. The effective couplings of the LKP to the SM $W$ bosons are reduced by half with respect to the previous case. Moreover, the annihilation via $t$- and $u$-channel exchange of the charged axial gauge bosons should be taken into account. All in all, the annihilation cross section due to non-abelian gauge interactions is reduced to one quarter, \beq \sigma_{ V^3 V^3 \rightarrow W^+ W^- } = {1 \over 4} \sigma_{ L^3 L^3 \rightarrow W^+ W^- } \eeq Furthermore, the vector LKP does not couple to the Higgs boson at all. As in UED, we assume that the reheat temperature is at least a few tens of GeV, so that the relic dark matter abundance follows from the standard thermal freeze-out computation and is entirely determined by the annihilation cross section of the LKP. In the generic KK $Z$ scenario, this leads to $m_{LKP}\sim 3.5$ TeV to obtain the correct relic abundance (see Fig.~\ref{relic}). This mass scale is quite high and would signify that the little hierarchy problem is not solved in our model.
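As a rough consistency check of the quoted $m_{LKP}\sim 3.5$ TeV, one can extract the $s$-wave $\langle\sigma v\rangle$ from the cross-section formula above (with $\beta$ the CM velocity, $\beta^2 = 1-4m^2/s$) and apply the standard freeze-out rule of thumb $\Omega h^2 \approx 3\times 10^{-27}\,{\rm cm^3\,s^{-1}}/\langle\sigma v\rangle$. The sketch below assumes $g_L^2 \approx 0.42$ and, as in the text, neglects co-annihilation and sub-leading channels.

```python
# Rough check of the ~3.5 TeV KK Z mass: evaluate sigma(L3 L3 -> W+ W-) near
# threshold, extract the s-wave <sigma v>, and apply the freeze-out rule of
# thumb Omega h^2 ~ 3e-27 cm^3/s / <sigma v>.  g_L^2 ~ 0.42 (SM SU(2)_L) is
# assumed; co-annihilation and sub-leading channels are ignored.
import numpy as np

gL2, m = 0.42, 3500.0                      # SU(2)_L coupling^2, LKP mass [GeV]

def sigma(s, m, g2):
    """Annihilation cross section quoted in the text, beta^2 = 1 - 4 m^2/s."""
    beta = np.sqrt(1.0 - 4.0*m**2/s)
    Lg = np.log((1.0 + beta)/(1.0 - beta))
    num = -12*m**4*(s - 2*m**2)*Lg + s*beta*(12*m**4 + 3*s*m**2 + 4*s**2)
    return g2**2 * num / (18.0*np.pi*m**2*s**3*beta**2)

beta = 1e-3                                # near threshold: s-wave dominates
s = 4.0*m**2/(1.0 - beta**2)
sv = sigma(s, m, gL2) * 2.0*beta           # v_rel = 2 beta in the CM frame
sv_cgs = sv * 1.17e-17                     # 1 GeV^-2 = 1.17e-17 cm^3/s
omega_h2 = 3e-27 / sv_cgs
print(sv, omega_h2)                        # Omega h^2 ~ 0.1 for m ~ 3.5 TeV
```

The numerical $s$-wave limit matches the analytic value $19 g_L^4/(36\pi m^2)$, and the resulting $\Omega h^2 \approx 0.1$ indeed lands at the observed abundance for a 3.5 TeV KK $Z$.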
The situation is better in the $SO(4)$ invariant case, where the reduction of the annihilation cross section leads to a smaller LKP mass, $m_{LKP}\sim 1.7$ TeV. \begin{figure}[!htb] \begin{center} \includegraphics[height=7.cm,width=11cm]{relic_density.eps} \caption{\it Relic density prediction as a function of the $Z^1$ mass in two cases: 1) $Z^1$ is $L^3_{1-}$ and has SM couplings; 2) $Z^1$ is $V^3=L^3_{1-}+R^3_{1-}$ with $g_L=g_R$. Only self-annihilation into $W^+W^-$ is included.} \label{relic} \end{center} \end{figure} Moreover, this mass scale could be further reduced if co-annihilation is taken into account \cite{Servant:2002aq}. We indeed expect the next lightest KK modes (NLKPs) $W^{\pm}_{1-}$ (as well as $A^3_{1-}$ and $A^{\pm}_{1-}$ in the $SO(4)$ model) to be close in mass to the LKP. The relevant self-annihilation cross sections of the nearly degenerate states, as well as the co-annihilation cross sections, were computed in Ref.~\cite{Burnell:2005hm} to study co-annihilation effects for KK photon dark matter, but were not used to study KK $Z$ dark matter. This issue is, however, model-dependent, and here we do not go beyond the rough estimate obtained without co-annihilation. Direct detection of the KK $Z$ from its elastic scattering off a nucleus in underground detectors such as CDMS or XENON will be very challenging; in the $SO(4)$ model it is hopeless, since in this case there is no coupling to the Higgs. To predict the rates for direct detection of a heavy $Z^1=L^3_{1-}$, we can use the same analysis as the one for UED \cite{Cheng:2002ej,Servant:2002hb}, replacing the hypercharge coupling by the $SU(2)_L$ coupling. In addition, we can remove the effects from fermion interactions and take into account only the elastic scattering from $t$-channel Higgs exchange.
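The $t$-channel Higgs-exchange estimate just described is easy to evaluate numerically. The sketch below uses a per-nucleon Higgs-exchange effective coupling $f_N = m_N \sum_q f_{T_q}\, g_L^2/(2 m_H^2)$; the nucleonic input $\sum_q f_{T_q}\approx 0.3$ and the Higgs mass $m_H = 120$ GeV are assumptions chosen for illustration, and only the order of magnitude matters.

```python
# Order-of-magnitude Higgs-exchange estimate of the spin-independent
# Z^1-nucleon cross section.  All inputs (m_H, sum_q f_Tq, g_L^2) are
# illustrative assumptions; only the parametric size is meaningful.
import math

m_Z1, m_N, m_H = 3500.0, 0.938, 120.0      # masses in GeV (m_H assumed)
gL2, fTq = 0.42, 0.3                       # SU(2)_L coupling^2, sum_q f_Tq (assumed)

f_N = m_N * fTq * gL2 / (2.0 * m_H**2)                    # coupling [GeV^-1]
sigma = m_N**2 / (4.0*math.pi*(m_Z1 + m_N)**2) * f_N**2   # per nucleon (A = Z = 1)
sigma_pb = sigma * 3.894e8                 # 1 GeV^-2 = 3.894e8 pb
print(f"sigma_SI ~ {sigma_pb:.1e} pb")
```

With these inputs the result is a few times $10^{-11}$ pb, consistent with the sub-$10^{-10}$ pb claim for multi-TeV $Z^1$ masses.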
The spin-independent elastic scattering cross section on a nucleon is \begin{equation} \sigma_n=\frac{m_N^2}{4\pi(m_{Z^1}+m_N)^2}\left[Zf_p^{Z^1}+(A-Z)f_n^{Z^1}\right]^2 \frac{m_{p,n}}{A^2\mu} \ \ \mbox{where} \ \ f^{Z^1}_{p,n}=m_{p,n}\sum_q f^{p,n}_{T_q}\frac{g^2}{2m_H^2} \end{equation} where $A$ and $Z$ are the numbers of nucleons and protons in the nucleus, $m_{p,n}$ is the mass of the proton or neutron, $\mu=m_N m_{Z^1}/(m_N+m_{Z^1})\sim m_N$ is the reduced mass of the WIMP-nucleus system and $f^{p,n}_{T_q}$ are the usual nucleonic matrix elements. We therefore have a $(g/g')^2$ enhancement compared to UED, but also a suppression from the higher mass. In the end, the predictions are of the same order as those of UED, where elastic scattering is also dominated by Higgs exchange, unless some enhancement from KK fermion exchange takes place when a mass degeneracy between $\gamma^1$ and the KK quarks is imposed. As shown in Fig.~\ref{direct_detection}, for $Z^1$ masses of order 3--4 TeV, the nucleon-$Z^1$ spin-independent cross section is smaller than $10^{-10}$ pb, well below the projected sensitivities of near-future experiments. \begin{figure}[!htb] \begin{center} \includegraphics[height=6.cm,width=10cm]{elastic.eps} \caption{\it Spin-independent elastic scattering of $Z^1$ ($=L^3_{1-}$) on a nucleon.} \label{direct_detection} \end{center} \end{figure} \subsection{Collider signatures} As suggested in the previous subsection, an LKP mass at 1 TeV is possible if the $SU(2)_R$ component of the LKP is increased, which makes its effective coupling to the SM smaller and its relic density compatible with observations for a mass smaller than in the case of a pure $SU(2)_L$ coupling. We have also argued that if $Z^1$ is indeed the LKP (either $L^3_{1-}$ or $V^3_{1-}$), we expect the next lightest KK modes (NLKPs) $W^{\pm}_{1-}$ (as well as $A^3_{1-}$ and $A^{\pm}_{1-}$ in the $SO(4)$ model) to be close in mass to the LKP.
There is, on the other hand, a large mass splitting between these modes and the other KK states (even gauge KK modes and KK fermions other than the KK top) -- unless the fermions are localized on the IR brane. This follows from our prejudice that, as required by EW precision tests, only these gauge fields have large brane kinetic terms. Therefore, we expect only the KK top, the LKP and the nearly degenerate gauge KK modes to be produced at the LHC. This is quite different from the usual UED phenomenology, where the masses of all first-level KK modes are of the same order. The UED implications for collider phenomenology were discussed in \cite{Cheng:2002ab}. Pair production of KK fermions leads to cascade decays and final states with leptons, jets and missing energy, very much like supersymmetric signatures. The distinction in our setup is that the only SM particle the LKP couples significantly to is the $W$, so that we always end up with at least one $W$ in the final state. Pair production of $t^1_R$ leads to $t\overline{t}Z^1Z^1$. This eventually leads to jets, leptons and large missing energy, as in SUSY and UED, but one way to probe this LKP scenario would be to reconstruct $W$ and $t$ candidates. \section{UV-IR-UV Model} \label{UVIRUV} In this section we consider another setup with $Z_2$ parity, where we glue the two AdS$_5$ slices in the IR region (instead of the UV region as considered before). We call this setup the UV-IR-UV model. The metric is \begin{eqnarray} ( ds )^2 = ( dy )^2 + e^{ + 2 k \left( | y | - L \right) } ( d x )^2 . \end{eqnarray} The warp factor has a {\em minimum} at the midpoint, which is now referred to as the IR brane, while the two end-point branes at $y = \pm L$ are UV branes. The above metric is a solution of the 5D Einstein's equations with a negative bulk cosmological constant, once the two UV branes have equal positive tensions while the IR brane has a negative tension.
The problem is that the radion is a ghost due to the negative tension on the IR brane (in the original Randall-Sundrum setup the would-be ghost is projected out by the boundary conditions on the negative tension brane). One might try to avoid the instability problem by adding a large graviton kinetic term on the IR brane that would give the radion a large enough right-sign kinetic term. A large 4D kinetic term for the graviton is reminiscent of the DGP model of Ref.~\cite{Dvali:2000hr}. However, it is known that DGP models may still have a ghost in the gravity sector \cite{Luty:2003vm}. An alternative is to consider a continuous metric, in which case there is no need for a negative tension brane.\footnote{ In fact, it is also possible to find a continuous warp factor qualitatively similar to the IR-UV-IR setup without the UV brane in the middle, which has the behavior of $1/\cosh(2ky)$; see Ref.~\cite{Low:2000pq}.} For example, the ``cosh'' metric $( d s )^2 = ( d y )^2 + \left[\cosh ( 2 k y ) /\cosh (2 k L)\right] ( d x )^2$ yields a spectrum that is qualitatively similar to that of the UV-IR-UV model. The cosh metric is a solution to the 5D Einstein's equations in the presence of a negative $T_{ 55 }$ in the bulk (and two positive tension branes as before) \cite{Kanti:2000rd}. A possible source of $T_{ 55 } < 0$ was proposed in Ref.~\cite{Mukohyama:2000wq} using a conformal scalar. However, it was claimed that in this model the radion is a {\em tachyon} \cite{Hofmann:2000cj}, instead of a ghost. One might be tempted to invoke the usual mechanisms such as Goldberger-Wise or Garriga-Pomarol \cite{Goldberger:1999uk} to stabilize the radion, i.e., make its (mass)$^2$ positive.
However, the worry is that the back-reaction on the metric could be so large that the warp factor loses the qualitative UV-IR-UV behavior: whatever lifts the tachyonic mass of the radion would also make a non-negligible contribution to the stress-energy tensor, such that in the end $T_{55}$ becomes positive again. In fact, one can prove a $c$-theorem on the behavior of the warp factor on very general grounds. If we write the warp factor as $a(y) = e^{2A(y)}$, using the 5D Einstein's equations one can show that the weak energy condition implies \cite{Freedman:1999gp}: \begin{equation} A''(y) \le 0. \end{equation} Clearly the IR-UV-IR setup, in which $A'(-\infty)=2k$ and $A'(\infty)=-2k$, satisfies this $c$-theorem, whereas the UV-IR-UV setup violates it. In other words, negative energy sources violating the weak energy condition must be present in order for the UV-IR-UV setup to be a solution of the Einstein's equations. In Ref.~\cite{Mukohyama:2000wq} such a negative energy source is provided by the Casimir energy. Obviously there are other examples of negative energy sources in nature, such as the dark energy driving the expansion of our universe. It remains to be seen if one could find a model with a UV-IR-UV-like warp factor that is free from the ghost or the tachyon. In the following we very briefly explore the phenomenological features of the UV-IR-UV setup, since it is an obvious alternative to the IR-UV-IR setup, while keeping in mind that we are not aware of any satisfactory solution to the issue of instability. Based on the previous discussion, it is clear that the spectrum of the UV-IR-UV model contains the even modes $(++)$ and the odd modes $(+-)$ (note that these BC's refer to a {\em single} slice of AdS$_5$). One can find that the lightest even gauge KK mode mass is $m_{1+} \sim m_{ KK }$, whereas the lightest odd gauge KK mode mass is $m_{1-} \sim m_{ KK }/\sqrt{k L}$ (as mentioned earlier).
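To see the violation explicitly, note that the UV-IR-UV warp factor given above has an exponent proportional to $|y| - L$ (the overall normalization is irrelevant for the sign of $A''$), so that
\begin{equation}
A'(y) \propto \mathrm{sgn}(y), \qquad A''(y) \propto \delta(y) > 0,
\end{equation}
i.e., the $c$-theorem fails precisely at the IR brane at $y=0$, consistent with its negative tension.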
If the 5D model addresses the Planck-weak hierarchy, a large splitting between even and odd gauge KK modes is automatic; there is no need for large BKTs in this setup. This would have been a very desirable feature phenomenologically. A peculiar feature of this model is the presence of a very light massive graviton state (in addition to the massless graviton). The mass of the lightest odd mode of the graviton turns out to be $m_{1-} \approx 2 \sqrt {2} e^{- k L} \, m_{\rm KK}$. Thus, it is suppressed with respect to the KK scale by a factor equal to the UV-IR brane hierarchy. If this hierarchy is Planck-weak, the lightest KK graviton mass is of order $10^{-3}$ eV. This small mass arises because the would-be zero mode graviton from the $(+-)$ sector is highly localized near the UV brane, with the wavefunction near the IR brane suppressed by $e^{- k L}$ (just like the actual zero mode from the $(++)$ sector). For this reason it is insensitive to changing the BC on the IR brane. Equivalently, we could think of the mass as resulting from adding a longitudinal graviton mode near the IR brane to lift the would-be zero mode, with the small overlap between the transverse and longitudinal modes giving a small mass. There are many worries over such an exponentially light massive graviton, all of which result from the vDVZ discontinuity \cite{vanDam:1970vg}. Historically, a tiny mass for the graviton has been ruled out by the bending of light around the sun. However, that argument was based on classical one-graviton exchange between the sun and the photon. In the particular setup we are considering here, such an interaction is forbidden by KK-parity because the light graviton is an odd mode under KK-parity, and therefore the experimental constraint might be loosened. Even if one were able to get away with the constraint from the bending of light, a very light massive graviton has been shown to be plagued by a strong coupling issue.
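The quoted mass scale is easy to check numerically. In the sketch below, $m_{\rm KK}\sim 3$ TeV and $e^{-kL}\sim 10^{-16}$ are illustrative assumptions for a Planck-weak hierarchy, not derived values:

```python
import math

# Lightest odd graviton mass: m_{1-} ~ 2*sqrt(2) * exp(-kL) * m_KK.
warp_suppression = 1e-16   # e^{-kL} for a Planck-weak hierarchy (assumed)
m_KK_eV = 3e12             # illustrative KK scale of 3 TeV, in eV (assumed)

m_graviton_eV = 2.0 * math.sqrt(2.0) * warp_suppression * m_KK_eV
print(m_graviton_eV)       # ~8.5e-4 eV, i.e. of order 1e-3 eV as quoted
```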
For a graviton with mass $m_g$, it was shown in Ref.~\cite{ArkaniHamed:2002sp} that the highest energy scale to which one can delay the strong coupling problem is $\Lambda_3=(m_g^2 M_{pl})^{1/3}$, where $M_{pl}$ is the 4D Planck scale. For a massive graviton at $10^{-3}$ eV, this translates into $\Lambda_3 \approx$ 1 GeV, at which scale we lose control of the gravity sector. To summarize, even though the UV-IR-UV setup provides a large splitting between the first even and odd KK gauge bosons, which is a nice phenomenological feature, the gravity sector seems to suffer from various instability and strong coupling problems. \section{Conclusion} \label{conclude} In this work, we considered the possibility of implementing Kaluza--Klein parity in a warped geometry. The point is that KK-parity can allow for a lower mass scale for the new particles while satisfying the electroweak constraints. Moreover, collider signatures of the resulting models are different from those of either of the two popular extra-dimensional models: the UED and the RS models. In UED, there is KK-parity and KK number conservation, so that the mass scale of new particles can be as low as 300 GeV \cite{Appelquist:2000nn} and the LKP can be a good dark matter candidate \cite{Servant:2002hb}. Furthermore, because of the flat geometry, the KK mass spectrum is evenly spaced and KK parity imposes pair-production of KK-odd particles. Despite the nice feature of allowing for new particles at masses as low as several hundred GeV, UED models do not seem to address any hierarchy problems. On the other hand, in the RS setup, where both electroweak symmetry breaking and the Planck-weak hierarchy are addressed, there is no $Z_2$ parity and all new particles can be produced singly. However, precision electroweak and flavor tests constrain the mass scale of KK gauge bosons to be heavier than 2--3 TeV (KK fermions are allowed to be lighter in some circumstances).
Furthermore, the first few KK masses are not evenly spaced due to the warped background. Finally, there is no stable KK state unless an extra non-geometrical symmetry is imposed\footnote{There could be a KK dark matter candidate in RS models, following from a $Z_3$ symmetry imposed to solve the proton decay problem \cite{Agashe:2004ci}, but this is not a symmetry of geometrical origin, unlike in UED.}. The ``warped KK-parity'' setup we considered is a hybrid of the two scenarios: KK parity allows for a light KK mode compatible with electroweak precision tests, KK-odd particles need to be pair-produced, and the first few KK masses are not evenly spaced. All Standard Model extensions which possess a new conserved quantum number at the TeV scale share very similar collider phenomenology \cite{Cheng:2003ju, Cheng:2002ab}. Pair-production of new (colored) particles leads to multiple jets $(\ge 2)$ and missing energy signals from the dark matter candidate, as well as isolated leptons from cascade decays. In contrast, in models without a new symmetry, not every event involving production of new particles would be associated with multiple jets and missing energy. From this perspective, it is natural to ask whether the phenomenology of models with a warped extra dimension always falls into the category of single-production of new particles, in which case observing only events with a large multiplicity of jets and missing energy would automatically disfavor a warped extra dimension, or whether there exist variants which would again always produce events with multiple jets plus missing energy. In this work we made a first attempt toward studying this question. Ideally, we would like to implement the good features of UED, namely KK modes below a TeV and dark matter, in a warped background (so that the Planck-weak hierarchy is addressed) without giving up some of the virtues of warped extra dimensions such as fermion and Higgs localization.
The first point to address is that, for a single slice of AdS$_5$, the warp factor is clearly not symmetric under reflection about the midpoint of the extra dimension. Therefore, \bi \item we glue two physically distinct slices of AdS$_5$ and impose the symmetry interchanging the two AdS$_5$ slices. \ei In such a construction, the mass eigenstates can be divided into two classes with different symmetry properties. For any given level $n$ in the KK decomposition, there are KK-even modes ($n+$), whose profiles are symmetric under reflection around the mid-point of the extra dimension, and KK-odd modes ($n-$) with anti-symmetric profiles. KK-odd modes can only couple in pairs to the KK-even modes, and the low-energy, four-dimensional effective theory has KK parity. It is important to stress that our construction does not implement approximate KK number conservation, which would require that the fermion zero-modes and the Higgs vev have flat profiles in the warped extra dimension. However, we cannot give up on the localization of the Higgs profile near the IR brane if we want to solve the Planck-weak hierarchy problem, so KK number conservation is definitively lost in our approach. Therefore, while the odd modes are allowed to be lighter than a TeV, we need the even modes to be heavier than a few TeV, since KK parity by itself is not enough to satisfy EW precision tests. To achieve that, we have to impose further requirements on our setup. Namely, \bi \item we need to obtain a sizable hierarchy, at least a factor of a few, between the lightest KK-even mode and the lightest KK-odd mode. \ei There are two distinct ways to realize our idea, depending on whether the two slices are glued at the UV or IR brane, leading to the IR-UV-IR and the UV-IR-UV models: \bi \item In the IR-UV-IR model the splitting between even and odd gauge KK modes can only come from very large IR brane kinetic terms.
The dark matter particle can be identified with the lightest KK partner of the $Z$ boson (the KK photon would not lead to the correct abundance since its couplings to the SM differ from the UED case) and the predicted relic abundance is in the correct range. However, there are two problems with this setup. One is that large IR brane kinetic terms create a certain tension with perturbativity, and the regime where the 5D theory remains weakly coupled is rather narrow. The other problem is that for light fermions localized close to the UV brane the constraints from electroweak precision tests are still quite severe. The EW constraints can be softened by localizing fermions close to the IR brane, but then the flavor problem cannot be addressed by utilizing the different localizations of fermions along the extra dimension. Additional flavor symmetries need to be implemented, which we do not discuss in the present work. \item An apparent alternative to the IR-UV-IR setup is to glue the two slices of AdS$_5$ together in the IR region instead. In such a UV-IR-UV model the desired splitting between even and odd gauge KK modes is naturally obtained without brane kinetic terms. However, when gravity is included the radion becomes a ghost, and it is a challenge to make the UV-IR-UV setup gravitationally stable. A related issue is the appearance of a very light massive graviton, which implies a very low strong-coupling scale of around 1 GeV, at which the gravity sector already needs UV completion. \ei Thus, the IR-UV-IR setup seems a more promising approach to incorporating KK-parity in warped extra-dimensional models, even though, to obtain a sizable splitting between the lightest KK-odd and -even modes and avoid the strong coupling problem in 5D at the same time, one may need to move the UV brane to an intermediate scale below the Planck scale.
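The pair-production requirement invoked above follows from a simple reflection argument: KK-even profiles satisfy $f_{n+}(-y)=f_{n+}(y)$, KK-odd profiles satisfy $f_{n-}(-y)=-f_{n-}(y)$, and the metric factors are even in $y$ by construction. A cubic overlap integral with a single odd mode therefore has an odd integrand and vanishes,
\begin{equation}
\int_{-L}^{L} dy \, \sqrt{g}\; f_{n-}(y)\, f_{m+}(y)\, f_{k+}(y) = 0,
\end{equation}
so odd modes couple to even modes only in pairs (a schematic statement; index structure and metric factors are suppressed).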
This may be a drawback compared with traditional RS models, but it is certainly an improvement over UED, in which the hierarchy problem is simply not addressed at all. In addition, the model discussed so far still requires additional mechanisms to address the issue of flavor violation. We proceeded in an exploratory spirit, focusing mostly on highlighting the important issues and challenges, and hope to have provided a tool-kit for model-building along these lines. Moreover, we adopted a phenomenological approach without concerning ourselves with whether the new particles stabilize the Higgs mass by canceling the quadratically divergent contributions from the standard model particles. However, in the appendix, we show in toy models how divergences in the SM Higgs mass can be canceled by the lightest odd mode, thus possibly providing a solution to the little hierarchy problem. It will certainly be interesting to look into more details of how the requirement of Higgs mass cancellation would affect the various constraints and the phenomenology of the setup. There are several non-supersymmetric approaches in which new particles stabilize the Higgs mass, all related directly or indirectly (via the AdS/CFT conjecture) to models with a warped extra dimension. Some of the more popular ones are gauge-Higgs unification \cite{Hosotani:1983xw}, the holographic Higgs models \cite{Contino:2003ve}, and the little Higgs theories \cite{ArkaniHamed:2001nc}. Even though a $Z_2$ parity, the $T$ parity, has been implemented in the little Higgs theories \cite{Cheng:2003ju}, no such attempt has been made with regard to the first two classes of models. Clearly, our work can be viewed as an initial step in that direction. In addition, it also seems likely that the IR-UV-IR setup could serve as a possible UV completion of the little Higgs theories with T parity, without resorting to supersymmetrized linear sigma models above 10 TeV.
Much work remains to be done. \section*{Acknowledgments} We thank Nima Arkani-Hamed, Hsin-Chia Cheng, Takemichi Okui, Riccardo Rattazzi, Tim Tait, John Terning, Raman Sundrum and Carlos Wagner for discussions. I.~L. acknowledges the hospitality of the CERN Theory Group during his visits, in which part of this work was initiated and completed. K.~A. was supported in part by the U.~S.~DOE under Contract no. DE-FG-02-85ER 40231. I.~L. was supported in part by the U.~S.~DOE under contract DE-AC02-06CH11357 and by the National Science Foundation under grant PHY-0653656. A.~F. was partially supported by the EC contract MRTN-CT-2004-503369 for the years 2004-2008 and the MEiN grant 1 P03B 099 29 for the years 2005-2007. \vspace{1cm}
\section{Introduction} The vast majority of all-sky radio surveys to date have focused on sources emitting in total intensity (Stokes I), e.g. the Westerbork Northern Sky Survey \citep[WENSS;][]{Rengelink:1997}, the Sydney University Molonglo Sky Survey \citep[SUMSS;][]{Mauch:2003}, the Galactic and Extragalactic All-Sky MWA Survey \citep[GLEAM;][]{Hurley-Walker:2017b} and the GMRT 150 MHz All-sky Radio Survey First Alternative Data Release \citep[TGSS ADR1;][]{Intema:2017}. The exception is the NRAO VLA Sky Survey \citep[NVSS;][]{Condon:1998}, which also considered linear (Stokes Q and Stokes U) polarisation \citep{Taylor:2009v702p1230}. However, to date, there have been no all-sky radio surveys in circular polarisation (Stokes V). Numerous astrophysical sources are known to emit circular polarisation with relatively high degrees of fractional polarisation ($\geq 1\%$). These include pulsars \citep{You:2006,Noutsos:2015,Johnston:2017}, flare stars \citep{Lynch:2017b}, and Jupiter \citep{Seaquist:1969}. It is anticipated that some exoplanets may also emit circular polarisation \citep{Winglee:1986,Zarka:2001,Murphy:2015,Lynch:2017a}. Weak levels of circular polarisation ($0.01-1\%$) have been observed in active galactic nuclei \citep[AGN;][]{Komesaroff:1984, Weiler:1983, Rayner:2000, Aller:2012} and intra-day variable sources \citep{Macquart:2000}, and are also expected in diffuse Galactic synchrotron emission \citep{Ensslin:2017} at fractions of less than $0.1\%$. Observations of circular polarisation can inform us about the physical processes within these sources and about propagation effects along the line of sight. Coherent emission processes can generate highly circularly polarised emission \citep{Macquart:2002, Macquart:2003}, whereas circular polarisation resulting from synchrotron radiation is generally very weak.
Propagation effects can also cause circular polarisation through scintillation in a magnetised plasma, refraction effects near black holes, and linearly polarised radiation passing through a relativistic plasma \citep{Macquart:2002, Macquart:2003}. Confirming the source of circularly polarised emission generally requires a combination of detailed observations and theoretical modelling \citep[e.g.][]{OSullivan:2013}. Compared to total intensity, only a small fraction of sources emit circularly polarised radiation, resulting in a lower classical confusion limit. As such, for instruments that are confusion limited in total intensity, such as the Murchison Widefield Array \citep[MWA,][]{Tingay:2013}, greater sensitivity can be achieved when observing in circular polarisation. This is particularly beneficial for sources that have high degrees of fractional circular polarisation; however, the gain in sensitivity diminishes for sources that exhibit low degrees of fractional circular polarisation. A positive aspect of this is that most sources are generally weak in circular polarisation and so do not greatly contribute to side-lobe confusion. Therefore deconvolution is unnecessary, greatly simplifying processing. Continuum observations in circular polarisation may also aid in the detection of pulsars missed using conventional means, for example pulsars with complex orbits, sub-millisecond pulsars, and pulsars exhibiting significant pulse broadening \citep{Bhat:2004, Xue:2017, Geyer:2017}. Traditional search methods assume regular, well-separated pulses, but accelerations in compact orbits lead to significant computational difficulties, and broadening can cause individual pulses to blend. Despite this, if the pulsar is sufficiently steep-spectrum, low-frequency imaging searches \citep[e.g.,][]{Frail:2018} may discover a number of pulsars that would otherwise be missed.
Even this can be problematic, though, as the noise in total intensity images can be significantly higher in the Galactic plane, where most pulsars are found. Searches in circular polarisation may therefore allow the deepest searches for continuum emission, independent of other pulsar properties. Searching in circular polarisation has a further benefit compared to searches in Stokes I continuum: very few sources exhibit circularly polarised emission, so the number of candidate sources is greatly reduced. Transient searches are also simplified, as they are less affected by source confusion \citep[e.g.][]{Lynch:2017b}. Despite the potential for discovery available with observations in circular polarisation, there have been no all-sky surveys to date. All observations have been targeted towards specific known source populations, e.g. AGN \citep{Rayner:2000, Cenacchi:2011}, pulsars \citep{Johnston:2017} and exoplanets \citep{Murphy:2015}. For the most part, all-sky observations of circular polarisation have been hindered by instrumental leakage. In the case of dipole-based instruments, such as the MWA, this leakage is primarily caused by poor models of the primary beam \citep{Sutinjo:2015v50p52S}. For the MWA, the effect is particularly pronounced at higher frequencies and towards the edge of the beam. \citet{Lenc:2017} demonstrated that polarisation leakage observed with the MWA could be mitigated in drift-scan observations by modelling the leakage pattern across the beam and then subtracting it. In this paper, we present the first all-sky radio survey in circular polarisation. The survey covers the entire Southern sky, ranging in declination from $-85\arcdeg$ to $+30\arcdeg$, at a frequency of 200\,MHz. We use data originally observed as part of the GLEAM survey \citep{Wayth:2015v32p25,Hurley-Walker:2017A}. In the process of performing this survey, we also tested the effectiveness of leakage mitigation.
Throughout this paper we have adopted the PSR/IEEE convention for left-handed and right-handed circular polarisation \citep{vanStraten:2010} which are of positive and negative sign, respectively. \section{Observations and data analysis} \subsection{Observations} We used archival visibility data observed as part of GLEAM \citep{Wayth:2015v32p25,Hurley-Walker:2017A}. The observations used a drift-scan observing mode where tiles always point at the meridian. As such, instrumental systematics are minimised by maintaining a consistent observing set up. To allow direct comparison to the GLEAM deep wide-band survey data \citep{Hurley-Walker:2017A} we used the $169-200$\,MHz and $200-231$\,MHz frequency bands. Inaccuracies in the MWA beam model make these two frequency bands more prone to polarisation leakage than the three lower bands \citep{Sutinjo:2015v50p52S,Lenc:2016,Sokolowski:2017}, so this data set is well-suited to testing the effectiveness of our polarisation leakage subtraction technique. A list of the individual GLEAM drift-scan observations used is summarised in Table \ref{table:obsdata}. \subsection{Data reduction} We calibrated the data using the real-time calibration and imaging system, referred to as the \textsc{rts} \citep{Mitchell:2008v2p707,Ord:2010}, using the procedure outlined in \citet{Lenc:2016} for point source polarimetry. Calibration was performed per epoch using a calibrator observation for that specific epoch (see Table \ref{table:obsdata}), apart from the 2013-08-18 epoch where the calibration solution for that day was poor and so a solution from the previous day was used. Archival online flagging \citep{Offringa:2012} was applied to reduce the effects of radio frequency interference. To minimise sidelobe confusion and reduce sensitivity to large-scale structure, robust weighting was used with a robustness of $-1$ and only baselines longer than $50\lambda$ were utilised. 
Dirty image cubes were created for each two-minute snapshot using the \textsc{rts} at full 40\,kHz spectral resolution and over a $25\arcdeg\times25\arcdeg$ region centred on the beam pointing location. All images were $2\,187\times2\,187$ pixels in extent and with a pixel size of $\sim$$41\arcsec$, this equates to a sampling of $\sim$$5$ pixels across the beam at $200\,$MHz. The spectral cubes were averaged in frequency to create two-minute snapshot images for Stokes I, Q, U and V. \begin{table} \centering \begin{tabular}{l r l r l } \hline \hline Date & Dec. & RA range (h)& $N_\text{flag}$ & Calibrator \\ \hline 2013-08-08 & $+1.6\arcdeg$ & $19.5-3.5$ & 3 & 3C444 \\ 2013-08-09 & $-55.0\arcdeg$ & $19.5-3.5$ & 11 & 3C444 \\ 2013-08-10 & $-26.7\arcdeg$ & $19.5-3.5$ & 11 & 3C444 \\ 2013-08-17 & $+18.6\arcdeg$ & $19.5-3.5$ & 4 & 3C444 \\ 2013-08-18 & $-72.0\arcdeg$ & $19.5-3.5$ & 6 & 3C444\textsuperscript{a} \\ 2013-08-22 & $-13.0\arcdeg$ & $19.5-3.5$ & 1 & 3C444 \\ 2013-08-25 & $-40.0\arcdeg$ & $19.5-3.5$ & 1 & 3C444 \\ 2013-11-05 & $-13.0\arcdeg$ & $0-8$ & 5 & Hydra A \\ 2013-11-06 & $-40.0\arcdeg$ & $0-8$ & 6 & Hydra A \\ 2013-11-07 & $+1.6\arcdeg$ & $0-8$ & 4 & Hydra A \\ 2013-11-08 & $-55.0\arcdeg$ & $0-8$ & 6 & Hydra A \\ 2013-11-11 & $+18.6\arcdeg$ & $0-8$ & 8 & Hydra A \\ 2013-11-12 & $-72.0\arcdeg$ & $0-8$ & 18 & Hydra A \\ 2013-11-25 & $-26.7\arcdeg$ & $0-8$ & 0 & Hydra A \\ 2014-03-03 & $-26.7\arcdeg$ & $6-16$ & 0 & Hydra A \\ 2014-03-04 & $-13.0\arcdeg$ & $6-16$ & 0 & Hydra A \\ 2014-03-06 & $+1.6\arcdeg$ & $6-16$ & 1 & Hydra A \\ 2014-03-08 & $+18.6\arcdeg$ & $6-16$ & 1 & Hydra A \\ 2014-03-09 & $-72.0\arcdeg$ & $6-16$ & 1 & Hydra A \\ 2014-03-16 & $-40.0\arcdeg$ & $6-16$ & 1 & Hydra A \\ 2014-03-17 & $-55.0\arcdeg$ & $6-16$ & 1 & Hydra A \\ 2014-06-09 & $-26.7\arcdeg$ & $12-22$ & 3 & 3C444 \\ 2014-06-10 & $-40.0\arcdeg$ & $12-22$ & 4 & 3C444 \\ 2014-06-11 & $+1.6\arcdeg$ & $12-22$ & 5 & 3C444 \\ 2014-06-12 & $-55.0\arcdeg$ & $12-18.5$ & 6 & Hercules A \\ 
2014-06-13 & $-13.0\arcdeg$ & $12-19$ & 5 & Hercules A \\ 2014-06-14 & $-72.0\arcdeg$ & $12-22$ & 4 & 3C444 \\ 2014-06-15 & $+18.6\arcdeg$ & $12-22$ & 5 & 3C444 \\ 2014-06-16\textsuperscript{b} & $-13.0\arcdeg$ & $18.5-22$ & 3 & 3C444 \\ 2014-06-18\textsuperscript{c} & $-55.0\arcdeg$ & $15-22$ & 0 & 3C444 \\ [1ex] \hline \multicolumn{5}{l}{\textsuperscript{a}\footnotesize{Calibration from previous day (2013-08-17) re-used.}} \\ \multicolumn{5}{l}{\textsuperscript{b}\footnotesize{Partial reobservation of 2014-06-13 drift scan.}} \\ \multicolumn{5}{l}{\textsuperscript{c}\footnotesize{Partial reobservation of 2014-06-12 drift scan.}} \\ \end{tabular} \caption{GLEAM first year observing parameters. $N_\text{flag}$ is the number of MWA tiles (of the 128 available) that are flagged. The calibrator is used to determine initial bandpass, phase and flux density scale corrections.} \label{table:obsdata} \end{table} \subsection{Beam modelling and leakage} The true beam pattern of the MWA, as measured empirically by imaging field sources, differs significantly from the analytic beam pattern at the higher end of the MWA band and/or when observing at lower elevation \citep{Sutinjo:2015v50p52S}. This discrepancy results in position-dependent flux density scaling effects in Stokes I \citep{Hurley-Walker:2014v31p45,Hurley-Walker:2017A} and polarisation leakage in Stokes Q, U and V \citep{Sutinjo:2015v50p52S}. The most significant source of leakage is from Stokes I as this is where the sky signal dominates. In general, Stokes Q exhibits the strongest level of leakage but Stokes U and V can be contaminated with as much as $\sim$5\% leakage \citep{Lenc:2017} from Stokes I. For circular polarisation, such levels of leakage can result in false detections unless they are corrected for. For a given epoch and frequency, the leakage pattern for a drift-scan beam will remain fixed for each of the Stokes parameters. 
The nature of the leakage pattern is a function of the calibrator location within the calibrator beam pointing, the beam pointing used for the calibrator scan and the beam pointing used for the drift scan. To overcome the limitations and errors associated with the analytic beam model, we measured the effect of the beam on known GLEAM sources as they drift through the beam. It is important to note that deconvolution is not performed on the snapshot images. This ensured that the PSF (point-spread function) characteristics of the leaked component remained consistent between each of the Stokes parameters for a given snapshot. A secondary benefit is that the processing was greatly simplified, enabling real-time operation. As weaker GLEAM sources are more likely to be dominated by sidelobe confusion, we excluded them from the sampling process. For this reason, we only consider GLEAM field sources that have a peak flux density in Stokes I greater than 3 Jy\,PSF$^{-1}$. For each snapshot image, we sample Stokes I, Q, U and V at the pixel location of each GLEAM field source. \begin{figure*} \centering \includegraphics[width=0.32\linewidth]{Figures/gleam_2_169_grid.pdf} \includegraphics[width=0.32\linewidth]{Figures/gleam_m27_169_grid.pdf} \includegraphics[width=0.32\linewidth]{Figures/gleam_m72_169_grid.pdf} \caption{Measured and fitted leakage in Stokes V in the 216 MHz band. The $x$ and $y$ axes are plotted in $(l,m)$ using units of pixels for a $25\arcdeg\times25\arcdeg$ field ($2\,187\times2\,187$ pixels), where $x$ represents the $l$ direction and $y$ represents the $m$ direction. Left: Observations taken from the $+1.6\arcdeg$ drift scan on 2013-08-08. Centre: Observations taken from the $-26.7\arcdeg$ drift scan on 2013-08-10. Right: Observations taken from the $-72\arcdeg$ drift scan on 2013-08-18.} \label{fig:leak215MHz} \end{figure*} To map the position-dependent leakage for the beam, we assumed that all of the field sources are unpolarised.
At low frequencies this is a reasonable assumption \citep{Lenc:2016}, and one that we verify later through our detection statistics. Any measured polarisation is therefore assumed to result from beam errors. For circular polarisation, we grid the measured fractional circular polarisation ($V/I$) at each of the sampled pixel locations. For small fields of view, a simple two-dimensional plane is sufficient to model the leakage across the beam, e.g. \citet{Lynch:2017b}; however, for larger fields there is significant warping in the leakage behaviour and the fit errors increase significantly at the beam edges. To better model the leakage, we fit a two-dimensional quadratic surface (over both spatial directions) to the grid in order to interpolate over the entire beam. Examples of this fitting are shown in Figure \ref{fig:leak215MHz} for drift scans at three different points on the meridian. The leakage into any given Stokes parameter will be a mix of leakage from each of the other Stokes parameters. As the largest astrophysical signal is in Stokes I, it will dominate the observed leakage into each of the remaining Stokes parameters. For each snapshot, the previously fitted leakage surface for Stokes V is therefore used as a position-dependent scaling factor for the Stokes I map; the scaled Stokes I map is then subtracted from the Stokes V map to remove beam-associated leakage. The same process can also be repeated for Stokes Q and Stokes U, but is not described in this paper. An aspect that is not taken into consideration is solving for XY-phase. Uncorrected XY-phase can result in leakage from Stokes U into Stokes V. Leakage of this form can lead to a false detection in Stokes V for a sufficiently strong linearly polarised source. To solve for XY-phase, at least one strong linearly polarised source would be required in each drift scan. At the time of this survey, such information was unavailable at long wavelengths.
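The quadratic-surface leakage fit described above can be sketched as a linear least-squares solve. This is a minimal illustration of the technique, not the pipeline implementation; the function names and synthetic data are ours:

```python
import numpy as np

def fit_leakage_surface(x, y, frac_pol):
    """Least-squares fit of a 2D quadratic surface
    p(x, y) = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    to fractional polarisation (V/I) samples at pixel positions (x, y)."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, frac_pol, rcond=None)
    return coeffs

def evaluate_surface(coeffs, x, y):
    """Evaluate the fitted leakage surface; subtracting
    evaluate_surface(...) * I from V removes the modelled leakage."""
    return (coeffs[0] + coeffs[1]*x + coeffs[2]*y
            + coeffs[3]*x**2 + coeffs[4]*x*y + coeffs[5]*y**2)

# Synthetic check: recover a known quadratic leakage pattern.
rng = np.random.default_rng(0)
x = rng.uniform(0, 2187, 500)   # pixel coordinates across the 25 deg field
y = rng.uniform(0, 2187, 500)
true = 0.01 + 1e-5*x - 2e-5*y + 1e-8*x**2
c = fit_leakage_surface(x, y, true)
resid = true - evaluate_surface(c, x, y)
print(np.max(np.abs(resid)))    # ~0 for noise-free samples
```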
However, a survey of linearly polarised sources is currently being performed with the techniques developed here, which will enable such calibration in future (Riseley, in prep.). Based on prior observations at 154\,MHz, we estimate the leakage from Stokes U to Stokes V to be of order $20-30\%$ \citep{Lenc:2016}. However, as linearly polarised sources detected with the MWA are generally weak at long wavelengths, typically $<5\%$ linearly polarised \citep{Bernardi:2013v771p105,Lenc:2016,Lenc:2017,OSullivan:2018}, $30\%$ leakage would typically result in less than $1.5\%$ of excess signal in Stokes V. Furthermore, for our Stokes V continuum observations, this would only be a potential source of error for sources with particularly low rotation measures ($\lvert\text{RM}\rvert<3$ rad\,m$^{-2}$), as they would otherwise be bandwidth depolarised by $>75\%$ over the 61.44\,MHz available bandwidth. \subsection{Flux calibration} Errors between the analytic beam model and the true beam can result in position-dependent flux calibration errors. During GLEAM survey processing, this was noted as a declination-dependent effect \citep{Hurley-Walker:2014v31p45,Hurley-Walker:2017A}, mainly because the mosaicking process dampened the effect in Right Ascension. However, a model of the flux calibration error can be formed using a similar process to that used to model the leakage. To model and correct for the position-dependent flux calibration errors, rather than measuring leakage, the scaling difference between the known GLEAM flux density for a source and the measured position-dependent flux density is gridded to form a scaling map. This scaling map is then applied to both the Stokes I and Stokes V images of each snapshot to correct the flux density of field sources. \subsection{Mosaic creation} Mosaic creation was performed using the software package \textsc{swarp} \citep{Bertin:2002}. A three-stage process was utilised to generate the all-sky mosaics.
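The $>75\%$ bandwidth-depolarisation figure quoted above follows from the usual sinc-function suppression. The back-of-envelope check below approximates the band as uniformly weighted in $\lambda^2$ between roughly 169 and 231 MHz (the exact band edges and weighting are an assumption):

```python
import math

C = 299792458.0  # speed of light, m/s

def bandwidth_depolarisation(rm, f_lo, f_hi):
    """Approximate residual polarised fraction after averaging a source of
    rotation measure `rm` (rad/m^2) over the band [f_lo, f_hi] (Hz),
    assuming uniform weighting in lambda^2: |sinc(RM * d(lambda^2))|."""
    dl2 = (C / f_lo)**2 - (C / f_hi)**2   # lambda^2 span across the band
    x = rm * dl2                          # half the total phase rotation, rad
    return 1.0 if x == 0 else abs(math.sin(x) / x)

# |RM| = 3 rad/m^2 over the 169-231 MHz band:
p = bandwidth_depolarisation(3.0, 169e6, 231e6)
print(p)   # ~0.22, i.e. the source is depolarised by ~78%, consistent with >75%
```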
In the first stage, individual mosaics were formed for each epoch and each observing band. This allowed the quality of the drift scans to be assessed and also avoided limitations of the software associated with the number of individual images that could be mosaicked simultaneously. The second stage combined the individual mosaics for a given frequency band into an all-sky mosaic. Finally, in the third stage, the two bands were combined to form the deep all-sky mosaic. For the first stage of mosaicking, beam maps were created for each snapshot. During mosaicking, the individual snapshots were weighted against the square of the beam maps to minimise edge effects associated with noise spikes and increased error in the leakage corrections. Mosaics were formed using \textsc{swarp} with the corrected Stokes I and Stokes V snapshots. Mosaics were also formed using the uncorrected Stokes I and Stokes V snapshots to ultimately allow assessment of the effectiveness of the corrections in the final mosaics. All mosaics were formed with zenith-equal-angle projection and combined weight files were created by \textsc{swarp} for each of the drift scan mosaics and for each observing band. For the second stage of mosaicking we used \textsc{swarp} to combine the individual drift-scan mosaics in each observing band. The combined weight files generated by \textsc{swarp} in the first stage were used as image weights during the mosaicking process. The final mosaics for each band were formed with zenithal equal area (ZEA) projection. In the final stage of mosaicking, the $169-200$\,MHz and $200-231$\,MHz mosaics were averaged to form a $200$\,MHz deep mosaic. This resulted in all-sky maps for the uncorrected Stokes I and Stokes V, and for the corrected Stokes I and Stokes V. Figure \ref{fig:psr2} shows a cut-out from the all-sky map showing a Galactic region in the corrected Stokes I and Stokes V maps.
Two pulsars, PSRs J0835$-$4510 and J1157$-$6224, are clearly detected in the Stokes V map. \begin{figure*} \centering \includegraphics[width=\linewidth]{Figures/psr2.pdf} \caption{A representative sub-set of the all-sky survey showing a Galactic region in Stokes I and Stokes V at 200 MHz. Two circularly polarised pulsars (circled in red) are detected in this region: PSRs J0835$-$4510 and J1157$-$6224. The approximate synthesised beam is shown inset and is $\sim$$3\arcmin$ in extent. SIN projection was used to generate this map.} \label{fig:psr2} \end{figure*} \subsubsection{Noise characterisation} \label{sec:noise} The final Stokes V image mosaics contain position-dependent variations in image noise. The main contributing factors to the levels of regional noise are the number of overlapping snapshots in that region, the effectiveness of calibration for the different epochs that contribute to that region (which is a function of the brightness and elevation of the calibrator), and the effectiveness of the leakage subtraction in that region. At extreme declinations, i.e. $+18.6\arcdeg$ and $-72\arcdeg$, there are no overlapping drift scans to help improve sensitivity and so these field edges have higher noise levels. Some hour angles have higher levels of overlap between drift scans at the same declination and this leads to improved sensitivity in these regions, e.g. between the 0$-$8\,h scans and the 6$-$16\,h scans there are only 2\,h of overlap, whereas between 6$-$16\,h scans and the 12$-$22\,h scans there are 4\,h of overlap. Bright Stokes I sources can also contribute towards increased Stokes V noise, both in regions where the leakage modelling is less effective and around sources that are extremely bright. Even if leakage is reduced to an ambitious level of $0.1\%$, a 100 Jy source would contribute 100 mJy to Stokes V.
Since dirty images are used in processing, PSF sidelobes from these sources would contribute to noise over an extended region around each bright source. Hence we expect increased levels of Stokes V noise around bright sources such as the Crab Nebula and Pictor A. To map local noise, a $20\times20$ pixel sliding window was used. For each region within the mosaic, the mean and standard deviation are measured; any pixels deviating from the mean by more than 3$\sigma$ are excluded; the standard deviation and mean of the remaining pixels within the sliding window are then measured and recorded for that region. The resultant product is a local RMS map as shown in Figure \ref{fig:rms200MHz}. For the majority of the observable sky below $+10\arcdeg$ and above $-85\arcdeg$ the noise levels are typically of order 3\,mJy\,PSF$^{-1}$. There is a slight excess in noise at around 18\,h; this is primarily a result of the bulk of the Galactic plane passing through this region, but there is also limited hour-angle coverage here, which further reduces sensitivity. \begin{figure*} \centering \includegraphics[width=\linewidth]{Figures/rms_200MHz.pdf} \caption{All-sky map of measured RMS image noise in circular polarisation in the $200\,$MHz deep mosaic. A $20\times20$ pixel sliding window was used on the Stokes V mosaic to estimate image noise. Zenithal equal area (ZEA) projection is used, centred on a declination of $-90\arcdeg$.} \label{fig:rms200MHz} \end{figure*} Figure \ref{fig:noise200MHz} shows the proportion of the surveyed region that achieves a given noise level. The overall survey has a median noise level of 3.0\,mJy\,PSF$^{-1}$. Approximately $25\%$ of the surveyed region achieves a sensitivity better than 2\,mJy\,PSF$^{-1}$ and $75\%$ is better than 5.4\,mJy\,PSF$^{-1}$. This is a factor of $2-5$ improvement compared to the 10 mJy\,PSF$^{-1}$ sensitivity of GLEAM \citep{Hurley-Walker:2017A} in Stokes I with the same weighting scheme.
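The sliding-window noise estimator described above can be sketched as follows. This is a minimal illustration assuming NumPy arrays; edge handling and window placement are simplified relative to the survey pipeline.

```python
import numpy as np

def local_rms(image, win=20):
    """Estimate a local RMS map with a win x win sliding window:
    measure the window mean and standard deviation, exclude pixels
    deviating from the mean by more than 3*sigma, then record the
    standard deviation of the surviving pixels."""
    ny, nx = image.shape
    rms = np.zeros((ny // win, nx // win))
    for j in range(ny // win):
        for i in range(nx // win):
            patch = image[j * win:(j + 1) * win, i * win:(i + 1) * win]
            mean, std = patch.mean(), patch.std()
            keep = patch[np.abs(patch - mean) <= 3.0 * std]
            rms[j, i] = keep.std()
    return rms
```

The 3$\sigma$ clipping makes the estimate robust against compact sources falling inside a window, at the cost of a few per cent underestimate for purely Gaussian noise.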
\begin{figure} \centering \includegraphics[width=\linewidth]{Figures/noise.pdf} \caption{A cumulative histogram showing the fraction of the survey region achieving a given Stokes V image noise limit at 200 MHz. The highest sensitivity achieved is $\sim$1\,mJy\,PSF$^{-1}$ and $\sim$$10\%$ of the survey region has noise exceeding $\sim$10\,mJy\,PSF$^{-1}$ (mostly confined to the edge of the survey region; see Figure \ref{fig:rms200MHz}).} \label{fig:noise200MHz} \end{figure} To verify the Gaussian nature of the noise statistics we considered all mosaic pixels which sit in regions where the noise was estimated to be within $3.0\pm0.5$\,mJy\,PSF$^{-1}$. These were fit with a Gaussian with a mean of $0.001$\,mJy\,PSF$^{-1}$ and a standard deviation of $3.029$\,mJy\,PSF$^{-1}$. The noise statistics are highly Gaussian with $95.39\%$ of pixels within $2\sigma$, $99.67\%$ within $3\sigma$, $99.986\%$ within $4\sigma$, and $99.9994\%$ within $5\sigma$ of the mean. Assuming Gaussian statistics, we would expect $\sim$$78$ false detections at the $4\sigma$ level, $\sim$$1$ at the $5\sigma$ level, and $\ll1$ at the $6\sigma$ level over the entire survey area. \subsubsection{Flux density scale assessment} \label{sec:fluxscale} To assess the effectiveness of the flux-scale calibration, the flux density of all GLEAM sources with a catalogued peak brightness greater than 3 Jy\,PSF$^{-1}$ at 200\,MHz was measured in the uncorrected and corrected Stokes I mosaics. GLEAM sources in regions where the signal-to-noise was less than 20 were rejected to reduce measurements that are likely to be significantly affected by edge noise or source sidelobes. In total, 1779 GLEAM sources were available for use as suitable probes. Figure \ref{fig:scale200MHz} shows the ratio between the catalogued GLEAM source peak at 200 MHz and the measured Stokes I peak in the 200\,MHz mosaic plotted against declination for both the uncorrected and corrected maps.
There is a clear declination dependence in the uncorrected data with an overall spread of $22\%$ in the measured flux densities and an absolute scaling of 0.943 compared to those of GLEAM. After correction, the declination dependence is no longer prominent with the overall spread reduced to $8.7\%$ and the absolute scaling at 0.984. \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/gleam_200MHz_scale.pdf} \caption{Comparison of survey flux density scale of the 200 MHz deep mosaic with that of the GLEAM survey. Before correction there are declination-dependent effects. These are removed after correction.} \label{fig:scale200MHz} \end{figure} Another representation of the flux density scale improvement is shown in Figure \ref{fig:flux200MHz}. This figure takes the same scaling measurements but maps the results as a function of sky position. The blanked region marks locations where no GLEAM measurements are available. The declination-dependent variations are clearly visible in the uncorrected maps; however, strong RA-dependent variations are also apparent, e.g. an abrupt change from over-estimating to under-estimating the measured flux density at around 12h at high declinations. This RA-dependent effect results from calibrator source changes and also calibrator beam-former changes from epoch to epoch. \begin{figure*} \centering \includegraphics[width=0.49\linewidth]{Figures/gleam_200MHz_I_scalemap.pdf} \includegraphics[width=0.49\linewidth]{Figures/gleam_200MHz_G_scalemap.pdf} \caption{Comparison of survey flux density scale of the 200 MHz deep mosaic with that of the GLEAM survey. The map on the left shows the flux density scale before correction and the map on the right shows the flux density scale after correction. In both maps the blanked strip removes the Galactic plane region as this was not surveyed by GLEAM.
Zenithal equal area (ZEA) projection is used, centred on a declination of $-90\arcdeg$.} \label{fig:flux200MHz} \end{figure*} The greatest residual errors in the corrected flux density scale maps appear towards high declinations (particularly at the 18h and 5h mark) and towards the Galactic centre. The deviation at high declination is likely due to increased modelling errors at the edge of the map. At the edge of the map there are no further overlapping snapshots to help down-weight the increased fitting errors that are present there during mosaicking. The apparent underestimation of flux density towards the Galactic centre is likely due to sidelobe confusion affecting the sampling of source peak flux densities in that region. If it were a true underestimation of the flux density then it would affect the entire drift scan rather than just one part of it since the correction is used consistently over the entire drift scan. \subsubsection{Leakage characterisation} \label{sec:leakage} Using a similar approach to that described in Section \ref{sec:fluxscale} for assessing the flux density scale, the 1779 GLEAM sources were also used to probe leakage at various sky locations. For each GLEAM source, the Stokes I and Stokes V flux density was measured at the location of that source. \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/gleam_200MHz_V_leak.pdf} \caption{Leakage from Stokes I to Stokes V plotted as a function of declination using GLEAM sources as points of reference in the 200 MHz deep mosaic. Before correction there are clear dependencies with declination whereas after correction the leakage is declination independent. 
Red dashed lines mark the 6$\sigma$ region of the scatter in Stokes V leakage after correction, where $1\sigma=0.12\%$ for declinations less than $+20\arcdeg$ and $0.3\%$ for declinations above $+20\arcdeg$.} \label{fig:leak200MHz} \end{figure} Figure \ref{fig:leak200MHz} shows the percentage of Stokes I to Stokes V leakage as a function of declination before and after correction. Prior to correction the leakage showed a significant declination-dependent behaviour with a typical spread of $\sim1\%$. At certain declinations there were several different bands of leakage. For example, between a declination of $-30\arcdeg$ and $-10\arcdeg$ three separate trends are apparent. These different trends, for the same declination, are caused by differences in epoch-to-epoch calibration. After correction the declination dependence is removed and the typical spread is reduced to $0.12\%$. Since the drift scans at the highest and lowest declination are edge cases, i.e. they do not have overlapping drift scans at higher and lower declinations, the model fit to the leakage pattern is not as well constrained. As a result, the spread of leakage increases in these regions. The effect is particularly pronounced above $+20\arcdeg$ declination where there are limited GLEAM sources to sample against (as a result of decreased sensitivity and regions not sampled by GLEAM). Mapping the leakage as a function of sky position, as shown in Figure \ref{fig:leakmap200MHz}, enables the characteristics of the leakage before and after correction to be analysed more readily. Before correction there are clear bands in declination where the leakage is either highly negative (high declinations) or highly positive (mid and low declinations). The epoch-to-epoch variations noted in Figure \ref{fig:leak200MHz}, which cause abrupt changes as a function of hour-angle, are seen more clearly, particularly at a declination of $\sim$$-30\arcdeg$.
In the corrected map, the overall leakage is improved by an order of magnitude and no clear trends are apparent for declinations below $\sim$$+20\arcdeg$. Above a declination of $\sim$$+20\arcdeg$, it is apparent that a slight excess of leakage is still present. As described in Section \ref{sec:fluxscale}, the model fitting is less constrained in this region and is affected by the reduced number of GLEAM sources that are available within this region. Nonetheless, the leakage is improved by an order of magnitude compared to the uncorrected map. \begin{figure*} \centering \includegraphics[width=0.49\linewidth]{Figures/gleam_200MHz_V_leakmap.pdf} \includegraphics[width=0.49\linewidth]{Figures/gleam_200MHz_V_cleakmap.pdf} \caption{Comparison of measured Stokes I to Stokes V leakage for GLEAM sources in the 200 MHz deep mosaic. The map on the left shows the leakage before correction and the map on the right after correction. In both maps the blanked strip removes the Galactic plane region as this was not surveyed by GLEAM. Note that the scale in the corrected map is an order of magnitude lower compared to the uncorrected map. Zenithal equal area (ZEA) projection is used, centred on a declination of $-90\arcdeg$.} \label{fig:leakmap200MHz} \end{figure*} \section{A Blind Survey} \label{sec:results} A blind search for circularly polarised sources was performed over the entire mosaicked map. The search recorded all pixels in the 200\,MHz Stokes V map that were six times greater in flux density than the associated RMS noise in that region based on the local RMS noise map (see Section \ref{sec:noise}). Islands of neighbouring detections were grouped as single detections. \begin{figure*} \centering \includegraphics[width=\linewidth]{Figures/plot_v.pdf} \caption{Plot of all blind and targeted detections showing the signal-to-noise of the detection and the absolute percentage circular polarisation ($\lvert v\rvert$). The different source types are distinguished by marker symbol. 
Sources that are detected above a declination of $+20\arcdeg$ are circled. The grey dashed line shows the $1\sigma$ leakage in Stokes V for declinations below $+20\arcdeg$, the black dashed line shows the $6\sigma$ leakage in Stokes V for declinations below $+20\arcdeg$ and the red dashed line marks the $6\sigma$ leakage for declinations above $+20\arcdeg$.} \label{fig:vvssnr} \end{figure*} In total, 63 unique detections were made above the $6\sigma$ level in the 200\,MHz Stokes V map. Of these, 41 are associated with bright radio galaxies, 15 are associated with known pulsars, one with Jupiter and six were associated with known artificial satellites. The significance of the detection, the fractional circular polarisation and the object type for all 63 sources are plotted in Figure \ref{fig:vvssnr}; the figure also includes targeted detections of pulsars which will be described in Section \ref{sec:pulsars}. To account for residual leakage that still exists after correcting for Stokes I to Stokes V leakage, a $6\sigma$ cut-off is applied to avoid false detections of ``leaked'' Stokes I sources. For declinations lower than $+20\arcdeg$ this cut-off is set to $0.72\%$ ($6\sigma$). For declinations greater than $+20\arcdeg$, where there is increased residual leakage, the cut-off is set to $1.8\%$ ($6\sigma$). Applying this cut-off rejects one pulsar (associated with the extremely bright Crab Nebula, with a flux density $>500$\,Jy) and all but three AGN. All remaining source detections are listed in Table \ref{tab:candidates}. Table \ref{tab:candidates} also lists the angular separation between the blind detection and the nearest known radio source.
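The blind-search step described above, thresholding the Stokes V map against six times the local RMS and grouping neighbouring pixels into islands, can be sketched as follows (a minimal illustration; \textsc{scipy}'s connected-component labelling stands in for the grouping step, and all array names are assumptions):

```python
import numpy as np
from scipy import ndimage

def blind_search(stokes_v, rms_map, nsigma=6.0):
    """Return one (peak_y, peak_x, peak_flux, snr) tuple per island of
    connected pixels whose |V| exceeds nsigma times the local RMS."""
    mask = np.abs(stokes_v) > nsigma * rms_map
    labels, n_islands = ndimage.label(mask)  # group neighbouring pixels
    detections = []
    for k in range(1, n_islands + 1):
        ys, xs = np.where(labels == k)
        i = np.argmax(np.abs(stokes_v[ys, xs]))  # island peak (either sign)
        y, x = ys[i], xs[i]
        detections.append((y, x, stokes_v[y, x],
                           abs(stokes_v[y, x]) / rms_map[y, x]))
    return detections
```

The absolute value is taken so that circular polarisation of either sign is recovered; the fractional-polarisation cut-offs would then be applied to the resulting candidate list.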
For pulsars, the position is taken from the Australia Telescope National Facility (ATNF) Pulsar Catalog v1.56 \citep{Manchester:2005v129p1993}, AGN positions are taken from the NASA/IPAC Extragalactic Database (NED), and the position of Jupiter is determined for the 2013-11-11 epoch using the \textsc{pyephem} package\footnote{\url{http://rhodesmill.org/pyephem}}. Typical astrometric errors of $\sim$$0.3\arcmin$ are expected based on the PSF size and a signal-to-noise of $\sim$$6$. Uncorrected ionospheric effects will have a more dominant effect on astrometric error, contributing an additional error of order $\sim$$1\arcmin$ at $200\,$MHz \citep{Loi:2015v42p3707}. \begin{table*} \centering \begin{tabular}{l r r r r r r r r r } \hline \hline Source & RA & Dec & Ang. Sep. & $V_{200}$ & SNR & $S_{200}$ (ref) & $v_{200}$ & $v_{\nu} $ & $\nu$ (ref) \\ & (J2000) & (J2000) & (arcmin) & (mJy) & & (mJy) & (\%) & (\%) & (MHz) \\ \hline PSR J0034$-$0721 & 00$\rah$34$\ram$10$\ras$ & -07$\arcdeg$21$\arcmin$38$\arcsec$ & 0.3 & +30.5 & 13.9 & 292.0 (M) & $+10.4$ & $+12.0$ & 149 (N) \\ PSR J0437$-$4715 & 04$\rah$37$\ram$20$\ras$ & -47$\arcdeg$14$\arcmin$24$\arcsec$ & 1.0 & +135.4 & 24.6 & 834.0 (M) & $+16.2$ & $+8.0$ & 438 (Y) \\ PSR J0630$-$2834 & 06$\rah$30$\ram$51$\ras$ & -28$\arcdeg$34$\arcmin$28$\arcsec$ & 0.5 & -20.7 & 12.9 & 463.0 (M) & $-4.5$ & $-2.8$ & 1400 (J) \\ PSR J0738$-$4042 & 07$\rah$38$\ram$31$\ras$ & -40$\arcdeg$41$\arcmin$56$\arcsec$ & 0.5 & +14.0 & 7.4 & 165.0 (M) & $+8.5$ & $-3.7$ & 1400 (J) \\ PSR J0742$-$2822 & 07$\rah$42$\ram$50$\ras$ & -28$\arcdeg$22$\arcmin$29$\arcsec$ & 0.1 & -15.3 & 9.0 & 146.0 (X) & $-10.5$ & $-2.4$ & 1400 (J) \\ PSR J0745$-$5353 & 07$\rah$45$\ram$04$\ras$ & -53$\arcdeg$52$\arcmin$37$\arcsec$ & 1.5 & -18.7 & 9.2 & & & $+4.7$ & 1400 (J) \\ PSR J0835$-$4510 & 08$\rah$35$\ram$22$\ras$ & -45$\arcdeg$10$\arcmin$20$\arcsec$ & 0.6 & +243.6 & 32.7 & 7075.0 (M) & $+3.4$ & $-6.2$ & 1400 (J) \\ PSR J1136$+$1551 & 11$\rah$36$\ram$04$\ras$ & 
+15$\arcdeg$50$\arcmin$49$\arcsec$ & 0.4 & -159.7 & 18.7 & 684.0 (M) & $-23.4$ & $-17.0$ & 149 (N) \\ PSR J1157$-$6224 & 11$\rah$57$\ram$26$\ras$ & -62$\arcdeg$24$\arcmin$06$\arcsec$ & 1.1 & +50.8 & 11.3 & 342 (L) & $+14.9$ & $+13.2$ & 1400 (J) \\ PSR J1327$-$6222 & 13$\rah$27$\ram$28$\ras$ & -62$\arcdeg$22$\arcmin$00$\arcsec$ & 1.4 & -33.8 & 7.2 & 284.0 (M) & $-11.9$ & $+7.1$ & 1400 (J) \\ PSR J1453$-$6413 & 14$\rah$53$\ram$40$\ras$ & -64$\arcdeg$12$\arcmin$31$\arcsec$ & 1.0 & +57.0 & 11.2 & 684.0 (M) & $+8.3$ & $+6.9$ & 1400 (J) \\ PSR J1651$-$4246 & 16$\rah$51$\ram$53$\ras$ & -42$\arcdeg$45$\arcmin$56$\arcsec$ & 0.6 & -213.9 & 22.0 & 1095.0 (M) & $-19.5$ & $-5.6$ & 1400 (J) \\ PSR J1932$+$1059 & 19$\rah$32$\ram$15$\ras$ & +10$\arcdeg$58$\arcmin$47$\arcsec$ & 0.6 & -58.3 & 8.3 & 501.0 (M) & $-11.6$ & $-22.8$ & 149 (N) \\ PSR J2048$-$1616 & 20$\rah$48$\ram$35$\ras$ & -16$\arcdeg$16$\arcmin$30$\arcsec$ & 0.2 & +17.6 & 9.8 & 169.0 (M) & $+10.4$ & $+7.1$ & 1400 (J) \\ \hline PKS J0006$-$4235\textsuperscript{a} & 00$\rah$05$\ram$59$\ras$ & -42$\arcdeg$32$\arcmin$19$\arcsec$ & 2.4 & $-12.8$ & 6.3 & 1718.0 (L) & $-0.74$ & & \\ PMN J0257$-$2433\textsuperscript{a} & 02$\rah$57$\ram$23$\ras$ & -24$\arcdeg$31$\arcmin$37$\arcsec$ & 2.4 & $+10.1$ & 7.6 & 692.4 (L) & $+1.5$ & & \\ 3C 139.2\textsuperscript{a} & 05$\rah$24$\ram$35$\ras$ & +28$\arcdeg$13$\arcmin$27$\arcsec$ & 1.9 & $-227.3$ & 6.7 & 11486.3 (L) & $-2.0$ & & \\ \hline Jupiter\textsuperscript{a} & 07$\rah$27$\ram$44$\ras$ & +21$\arcdeg$54$\arcmin$17$\arcsec$ & 0.4 & $-37.0$ & 7.3 & 1198.7 (L) & $-3.0$ & & \\ [1ex] \hline \multicolumn{10}{l}{\textsuperscript{a}\footnotesize{These sources may be affected by excessive leakage from Stokes I or Stokes U.}} \\ \end{tabular} \caption{List of all sources detected above 6$\sigma$ at 200\,MHz in circular polarisation that have an associated astrophysical counterpart. The RA, Dec are the J2000 position of the peak in MWA 200 MHz mosaic image. 
The angular separation is the angular distance between the source peak and the catalogued position of the nearest identified radio source (see text for details). $V_{200}$ is the measured Stokes V flux density. SNR is the signal-to-noise of the detected source. $S_{200}$ is the estimated total intensity at 200\,MHz. $v_{200}$ is the estimated fractional circular polarisation. $v_{\nu}$ and $\nu$ are the fractional circular polarisation and frequency found in the literature. References provided in parentheses refer to J:\citet{Johnston:2017}, L:This work, M:\citet{Murphy:2017}, N:\citet{Noutsos:2015}, X:\citet{Xue:2017}, and Y:\citet{You:2006}.} \label{tab:candidates} \end{table*} \subsection{Pulsars} \label{sec:blindpulsars} In total, 14 pulsars were detected in the blind survey. Table \ref{tab:candidates} lists all of the detected pulsars, the observed characteristics at 200\,MHz and observations of circular polarisation from the literature (where available). Images of two of the detected pulsars, PSRs J1136$+$1551 and J0835$-$4510, are shown in Figure \ref{fig:pulsars200MHz} and demonstrate that circular polarisation of either sign can be observed. All of the detected pulsars have relatively high fractional circular polarisation at 200\,MHz ($>3\%$). In our sample, the sign of polarisation does not appear to be biased either way, with near-equal proportions having either negative or positive sign. Three pulsars (PSRs J0034$-$0721, J1136$+$1551, and J1932$+$1059) were previously observed with LOFAR at 149\,MHz \citep{Noutsos:2015} and exhibit a consistent sign of circular polarisation and similar fractional polarisation in our 200\,MHz observations. Four pulsars (PSRs J0738$-$4042, J0745$-$5353, J0835$-$4510, and J1327$-$6222) exhibit a sign flip at 200\,MHz compared to observations at 1.4\,GHz \citep{Johnston:2017}. PSR J0835$-$4510 is the most prominent of these given that it is detected with a signal-to-noise of greater than 30.
PSR J0745$-$5353 is detected in circular polarisation at 200\,MHz but is not detected in Stokes I. \begin{figure*} \centering \includegraphics[width=\linewidth]{Figures/pulsar.pdf} \caption{Image of two sample pulsars from the 200 MHz deep mosaic showing detections in different signs of circular polarisation. Left: PSR J1136$+$1551 (negative sign). Right: PSR J0835$-$4510 (positive sign). The synthesised beam is shown inset for PSRs J1136$+$1551 and J0835$-$4510; they are $3.5\arcmin\times3.2\arcmin$ (position angle $-15\arcdeg$) and $2.8\arcmin\times2.7\arcmin$ (position angle $-68\arcdeg$), respectively.} \label{fig:pulsars200MHz} \end{figure*} Of the 60 pulsars detected in the GLEAM 200\,MHz survey data \citep{Murphy:2017}, we detect 11 at the $6\sigma$ level, a proportion of $\sim$$18\%$. We also detect an additional three pulsars which were in regions not explored by GLEAM (e.g. the Galactic plane) or were too faint to be seen in the confusion-limited Stokes I maps. \subsection{AGN} \label{sec:agn} It is unusual to find AGN with a fractional circular polarisation greater than $0.5\%$, so the three remaining AGN were examined in more detail. All three AGN are detected just above our $6\sigma$ threshold for residual leakage and the $6\sigma$ local noise threshold. Subtle local variations in noise and/or systematic leakage may have been sufficient to push these sources above the threshold. 3C 139.2 is situated at the edge of the surveyed region at a declination of $\sim$$+28\arcdeg$ where sensitivity is extremely poor. Being situated at both the edge of the field and near the Galactic plane (RA $\sim$$5.4$\,h) also limits the effectiveness of leakage subtraction at the source location. Inspection of the 216\,MHz mosaic confirms that leakage subtraction was particularly poor at that location and so this is likely a false detection.
The two remaining AGN (PMN J0257$-$2433 and PKS J0006$-$4235) have peaks in circular polarisation that are significantly offset ($>2\arcmin$) from the Stokes I peak, offsets that are significantly higher than expected based on the SNR, PSF and ionosphere. While these may be associated with a chance alignment by a foreground circularly polarised source, it is more likely that these are associated with AGN hotspots. If these hotspots are linearly polarised and exhibit a low rotation measure (RM) they may be symptomatic of leakage from Stokes U to Stokes V. Such leakage results from an uncorrected XY-phase and has been observed to occur with low-RM sources \citep{Lenc:2017} with a fractional leakage of $\sim$$20\%$. \citet{Taylor:2009v702p1230} determined that PMN J0257$-$2433 was $5\%$ linearly polarised with an RM of $11.7\pm4.5$\,rad\,m$^{-2}$ at 1.4\,GHz. If the source exhibits similar characteristics at 200\,MHz then Stokes U to Stokes V leakage would be sufficient to cause a false detection in this instance. The same may be true for PKS J0006$-$4235 as it is morphologically similar. The source is known to be a 20 GHz source \citep{Murphy:2010} but its polarimetric characteristics are not known. Further observations of this source would be required to determine its true nature. \subsection{Jupiter} \label{sec:jupiter} While Jupiter is known to exhibit a fractional circular polarisation at the $\sim$$1\%$ level at 3.24\,GHz \citep{Seaquist:1969}, we measure $\sim$$3.1\%$ ($7.3\sigma$) at 200\,MHz. Jupiter was at a relatively high declination ($+21.9\arcdeg$) and close to the Galactic plane (7.9\,h) in the epoch where it was detected. As with the AGN examined in Section \ref{sec:agn}, the source is in a region where sensitivity is poor and residual leakage is high and this may have resulted in an over-estimation or even a false detection. Further observations would be required to confirm this.
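The bandwidth-depolarisation argument used in the discussion of low-RM sources can be made explicit: averaging the complex linear polarisation $p_0\,e^{2i\,\mathrm{RM}\,\lambda^2}$ uniformly over a span $\Delta(\lambda^2)$ leaves a fraction $\lvert\operatorname{sinc}(\mathrm{RM}\,\Delta\lambda^2)\rvert$ of the polarised signal. A minimal sketch (uniform averaging in $\lambda^2$ is an approximation, since real channels are uniform in frequency):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def bandwidth_depol(rm, f_lo_hz, f_hi_hz):
    """Fraction of linear polarisation retained after band averaging:
    |sin(RM * d_lambda2) / (RM * d_lambda2)|, with the argument in radians."""
    d_lambda2 = (C / f_lo_hz) ** 2 - (C / f_hi_hz) ** 2
    x = rm * d_lambda2
    return 1.0 if x == 0 else abs(math.sin(x) / x)

# RM = 3 rad/m^2 averaged over the 61.44 MHz band centred on 200 MHz:
frac = bandwidth_depol(3.0, 169.28e6, 230.72e6)
print(f"retained: {frac:.1%}, depolarised: {1 - frac:.1%}")
```

For $\lvert\mathrm{RM}\rvert = 3$\,rad\,m$^{-2}$ this yields a depolarisation of roughly $78\%$, consistent with the $>75\%$ figure quoted earlier for the $61.44$\,MHz bandwidth.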
\subsection{Artificial satellites} Six of the detections in circular polarisation at 200 MHz were not associated with any astrophysical sources. Upon closer inspection of the original snapshot images that were integrated to form the all-sky mosaic, it was discovered that each of the detected sources only appeared in a single 2-minute snapshot. Further investigation of the original spectral cube for each of the snapshots revealed that the spectral energy distribution for each of the transient sources exhibited narrow-band spikes (see Figure \ref{fig:satrfi}) that are commonly associated with radio frequency interference. It is known that satellites can reflect FM-band signals from the Earth and this can be detected in MWA observations \citep{Tingay:2013b}. The result is typically a moving point source that tracks with the location of the satellite as it moves through its orbit. However, the detections found here were in a band well above the FM band. Furthermore, the detections are unresolved and do not appear to move; this suggests direct short-term transmission from the satellite itself rather than reflection. \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{Figures/sat_summary.pdf} \caption{The measured spectral energy distribution of all point-like ``transients'' observed in the survey data. Stokes I is shown in black and Stokes V in red. The unusual profile and high degree of circular polarisation suggests that these are most likely due to intermittent transmissions from artificial satellites that pass through the observed field.} \label{fig:satrfi} \end{figure*} To confirm the satellite nature of the detections, satellite ephemerides were obtained from \url{http://space-track.org/}. The positions of satellites were tracked over the 2-minute period of each snapshot to determine if any corresponded with the position of the detected source.
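The matching of detections to propagated satellite positions reduces to an angular-separation test. A minimal sketch (the haversine formula is used for numerical stability at small separations; the track data structure and helper names are illustrative, not the actual matching code):

```python
import math

def angular_sep_arcmin(ra1, dec1, ra2, dec2):
    """Great-circle separation between two (RA, Dec) points given in
    degrees, returned in arcmin, via the haversine formula."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    s = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(s))) * 60.0

def match_satellite(detection, sat_tracks, radius_arcmin=1.0):
    """Return the name of the first satellite whose propagated track
    passes within radius_arcmin of the detection, or None."""
    ra0, dec0 = detection
    for name, track in sat_tracks.items():  # track: list of (ra, dec) samples
        if any(angular_sep_arcmin(ra0, dec0, ra, dec) <= radius_arcmin
               for ra, dec in track):
            return name
    return None
```

In practice the track samples would come from propagating each satellite's ephemeris across the 2-minute snapshot interval.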
In each instance a satellite was identified within $1\arcmin$ of the ``transient'' source; these are listed in Table \ref{tab:sat}. The fields were re-imaged at higher time-resolution around the estimated location of a nearby satellite to confirm the association of the flare with the satellite, and the time of the flare was also recorded in Table \ref{tab:sat}. \begin{table*} \centering \begin{tabular}{l l l l l l } \hline \hline Satellite & NORAD ID & RA & Dec & Ang. Sep. & Time \\ & & (J2000) & (J2000) & (arcmin) & (UTC) \\ \hline GONETS D1 9 & 27060 & 21$\rah$59$\ram$06$\ras$ & +14$\arcdeg$02$\arcmin$30$\arcsec$ & 0.7 & 2013-08-08 16:57:43.3 \\ COSMOS 2438 & 32955 & 02$\rah$20$\ram$23$\ras$ & +10$\arcdeg$50$\arcmin$47$\arcsec$ & 0.3 & 2013-08-08 21:07:31.9 \\ COSMOS 2385 & 27056 & 21$\rah$52$\ram$28$\ras$ & -05$\arcdeg$17$\arcmin$59$\arcsec$ & 0.6 & 2013-08-22 16:12:55.7 \\ COSMOS 2385 & 27056 & 07$\rah$47$\ram$45$\ras$ & -42$\arcdeg$28$\arcmin$46$\arcsec$ & 0.5 & 2013-11-06 20:18:55.7 \\ GONETS D1 3 & 23789 & 00$\rah$44$\ram$36$\ras$ & -51$\arcdeg$00$\arcmin$56$\arcsec$ & 0.9 & 2013-11-08 14:00:45.9 \\ STRELA 3 & 37153 & 03$\rah$49$\ram$00$\ras$ & -31$\arcdeg$31$\arcmin$41$\arcsec$ & 0.7 & 2013-11-25 15:14:04.3 \\ [1ex] \hline \end{tabular} \caption{Table of artificial satellite emission detected in circular polarisation. The coordinates and time listed are those of the satellite at the moment of emission. The angular separation is between the observed emission and the predicted location of the satellite based on the satellite's ephemeris.} \label{tab:sat} \end{table*} \section{Targeted Survey} \subsection{Pulsars} \label{sec:pulsars} As noted in Section \ref{sec:blindpulsars}, the pulsars detected in the blind survey all have relatively high fractional circular polarisation. This makes them excellent candidates for a targeted survey. Of the 2\,613 known pulsars in the ATNF Pulsar Catalog (v1.56), 2\,376 are within our survey region.
A targeted survey of these pulsars was performed by probing the circular polarisation map at each of the pulsar locations (as recorded in the ATNF Pulsar Catalog) for significant emission above the estimated local image noise. For the targeted survey, a lower ($4\sigma$) threshold was utilised compared to the blind survey since an a priori position was known. Table \ref{tab:pulsar} lists all pulsars that were detected above the $4\sigma$ threshold; the table excludes pulsars that were already detected in the blind survey. In total, 32 pulsars were detected; 18 of these were not detected by the blind survey. All of the pulsars with low-frequency circular polarisation previously reported in the literature have a sign consistent with the detections reported here. Three of the pulsars have a sign flip in circular polarisation compared to measurements reported at higher frequencies, i.e. PSRs J0206$-$4028, J0828$-$3417, and J1900$-$2600. \begin{table*} \centering \begin{tabular}{l r r r r r r r r r} \hline \hline Pulsar & RA & Dec & Ang. Sep.
& $V_{200}$ & SNR & $S_{200}$ (ref) & $v_{200}$ & $v_{\nu}$ & $\nu$ (ref) \\ & (J2000) & (J2000) & (arcmin) & (mJy) & & (mJy) & (\%) & (\%) & (MHz) \\ \hline J0034$-$0534 & 00$\rah$34$\ram$25$\ras$ & -05$\arcdeg$34$\arcmin$52$\arcsec$ & 0.8 & $-11.4$ & 5.7 & 65.0 (M) & $-17.5$ & $-9.8$ & 149 (N) \\ J0206$-$4028 & 02$\rah$06$\ram$00$\ras$ & -40$\arcdeg$27$\arcmin$19$\arcsec$ & 0.8 & $-6.2$ & 4.7 & 32.0 (M) & $-19.4$ & $+9.3$ & 1400 (J) \\ J0452$-$1759 & 04$\rah$52$\ram$35$\ras$ & -17$\arcdeg$59$\arcmin$08$\arcsec$ & 0.4 & $+9.1$ & 4.8 & 96.0 (M) & $+9.5$ & $+3.6$ & 1400 (J) \\ J0820$-$4114 & 08$\rah$20$\ram$14$\ras$ & -41$\arcdeg$14$\arcmin$21$\arcsec$ & 0.4 & $+12.3$ & 4.8 & 116.0 (M) & $+10.6$ & $+4.5$ & 1400 (J) \\ J0828$-$3417 & 08$\rah$28$\ram$18$\ras$ & -34$\arcdeg$16$\arcmin$22$\arcsec$ & 0.8 & $-12.2$ & 4.8 & 400.0 (M) & $-3.0$ & $+5.0$ & 606 (Y) \\ J1239$+$2453 & 12$\rah$39$\ram$39$\ras$ & +24$\arcdeg$53$\arcmin$34$\arcsec$ & 0.4 & $-54.8$ & 4.3 & 154.7 (B) & $-35.4$ & $-7.5$ & 149 (N) \\ J1359$-$6038 & 14$\rah$00$\ram$08$\ras$ & -60$\arcdeg$37$\arcmin$23$\arcsec$ & 1.5 & $+29.9$ & 5.3 & 402.0 (M) & $+7.4$ & $+17.3$ & 1400 (J) \\ J1456$-$6843 & 14$\rah$56$\ram$08$\ras$ & -68$\arcdeg$43$\arcmin$24$\arcsec$ & 0.8 & $+24.8$ & 5.8 & 738.0 (M) & $+3.4$ & $+4.6$ & 1400 (J) \\ J1543$+$0929 & 15$\rah$43$\ram$40$\ras$ & +09$\arcdeg$28$\arcmin$31$\arcsec$ & 0.8 & $-29.6$ & 4.8 & 234.0 (M) & $-12.7$ & $-33.0$ & 234 (Y) \\ J1600$-$5044 & 16$\rah$00$\ram$55$\ras$ & -50$\arcdeg$44$\arcmin$06$\arcsec$ & 0.4 & $+23.1$ & 5.0 & 139.3 (F) & $+16.6$ & $+29.3$ & 1400 (J) \\ J1707$-$4053 & 17$\rah$07$\ram$23$\ras$ & -40$\arcdeg$54$\arcmin$11$\arcsec$ & 0.4 & $-42.6$ & 5.9 & 493 (L) & $-8.6$ & $-3.5$ & 1400 (J) \\ J1834$-$0731 & 18$\rah$34$\ram$17$\ras$ & -07$\arcdeg$31$\arcmin$52$\arcsec$ & 0.8 & $+30.5$ & 4.2 & & & $+14.3$ & 1400 (J) \\ J1835$-$0643 & 18$\rah$35$\ram$07$\ras$ & -06$\arcdeg$43$\arcmin$21$\arcsec$ & 0.4 & $-32.8$ & 4.1 & 99.5 (F) & $-32.9$ & $-5.3$ & 1400 (J) \\ 
J1842$-$0612 & 18$\rah$42$\ram$46$\ras$ & -06$\arcdeg$12$\arcmin$51$\arcsec$ & 0.8 & $+29.1$ & 4.4 & & & & \\ J1900$-$2600 & 19$\rah$00$\ram$46$\ras$ & -26$\arcdeg$00$\arcmin$59$\arcsec$ & 0.4 & $-14.4$ & 4.3 & 299.0 (M) & $-4.8$ & $+1.4$ & 1400 (J) \\ J1921$+$2153 & 19$\rah$21$\ram$46$\ras$ & +21$\arcdeg$52$\arcmin$47$\arcsec$ & 0.4 & $+63.1$ & 5.4 & 914 (L) & $+6.9$ & $+6.8$ & 149 (N) \\ J2241$-$5236 & 22$\rah$41$\ram$37$\ras$ & -52$\arcdeg$35$\arcmin$51$\arcsec$ & 1.1 & $-12.1$ & 4.7 & 60.0 (M) & $-20.1$ & & \\ J2256$-$1024 & 22$\rah$56$\ram$55$\ras$ & -10$\arcdeg$24$\arcmin$49$\arcsec$ & 0.4 & $+10.0$ & 5.4 & & $+41.1$ & & \\ [1ex] \hline \end{tabular} \caption{Targeted pulsars detected above 4$\sigma$ at 200\,MHz in circular polarisation. Table columns are the same as defined in Table \ref{tab:candidates}. References provided within parenthesis refer to B:\citet{Bilous:2016}, F:\citet{Frail:2016}, J:\citet{Johnston:2017}, L:This work, M:\citet{Murphy:2017}, N:\citet{Noutsos:2015}, and Y:\citet{You:2006}.} \label{tab:pulsar} \end{table*} The pulsar catalogue produced by \citet{Murphy:2017} for all 60 pulsars detected in total intensity at 200 MHz with the MWA provides an accurate sample against which fractional circular polarisation detections and limits can be compared. \citet{Murphy:2017} pulsars that were detected in the blind survey and targeted survey have already been listed in Table \ref{tab:candidates} and Table \ref{tab:pulsar}, respectively. Table \ref{tab:pulsarm} lists all \citet{Murphy:2017} pulsars that were not detected above $4\sigma$ in circular polarisation. When compared against the measured fractional circular polarisation in the literature, only two of these pulsars, PSRs J0837$-$4135 and J1752$-$2806, were expected to be detected at 200\,MHz above a $4\sigma$ limit. As the previous observations were at 1.4\,GHz, it is likely that the polarimetric behaviour of these sources is different at 200\,MHz.
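For readers cross-checking the tables, the fractional circular polarisation column is simply the ratio of the circularly polarised and total flux densities, $v_{200} = V_{200}/S_{200}$, quoted as a percentage. A minimal sketch (ours; the function name is illustrative, and the two rows used are taken from the targeted-pulsar table):

```python
# Consistency check of the fractional polarisation column: v_200 = V_200 / S_200,
# using the tabulated values for PSR J0034-0534 and PSR J1239+2453.
def frac_pol_percent(v_200_mjy, s_200_mjy):
    """Fractional circular polarisation, in per cent."""
    return 100.0 * v_200_mjy / s_200_mjy

print(round(frac_pol_percent(-11.4, 65.0), 1))   # -17.5, as tabulated
print(round(frac_pol_percent(-54.8, 154.7), 1))  # -35.4, as tabulated
```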
In total, 21 out of 60 \citet{Murphy:2017} pulsars are detected above $4\sigma$, a proportion of $35\%$. Additionally, 11 sources were detected in this survey that were not detected by \citet{Murphy:2017}, suggesting that searching for pulsars in circular polarisation can help to discover sources that would have been missed in total intensity searches. \begin{table} \centering \begin{tabular}{l r r r r r r} \hline \hline Pulsar & $\lvert V_{200}\rvert$ & $S_{200}$ & $\lvert v_{200}\rvert$ & $v_{\nu}$ & $\nu$ (ref) \\ & (mJy) & (mJy) & (\%) & (\%) & (MHz) \\ \hline J0737$-$3039A & $<4.3$ & 53.0 & $<8.1$ & & \\ J0809$-$4753 & $<11.8$ & 229.0 & $<5.2$ & $-0.4$ & 1400 (J) \\ J0820$-$1350 & $<7.4$ & 160.0 & $<4.6$ & $-4.2$ & 1400 (J) \\ J0826$+$2637 & $<31.8$ & 243.0 & $<13.1$ & $+6.2$ & 149 (N) \\ J0837$+$0610 & $<12.4$ & 286.0 & $<4.3$ & $-2.6$ & 149 (N) \\ J0837$-$4135 & $<11.8$ & 95.0 & $<12.5$ & $+13.8$ & 1400 (J) \\ J0840$-$5332 & $<12.0$ & 56.0 & $<21.4$ & $+9.0$ & 660 (Y) \\ J0855$-$3331 & $<9.7$ & 47.0 & $<20.6$ & & \\ J0856$-$6137 & $<12.1$ & 85.0 & $<14.3$ & $+5.6$ & 1400 (J) \\ J0905$-$5127 & $<13.6$ & 73.0 & $<18.6$ & $+14.1$ & 1400 (J) \\ J0907$-$5157 & $<12.2$ & 106.0 & $<11.6$ & $+4.8$ & 1400 (J) \\ J0922$+$0638 & $<15.2$ & 100.0 & $<15.2$ & $+5.6$ & 1400 (J) \\ J0924$-$5302 & $<10.9$ & 96.0 & $<11.3$ & $-7.3$ & 1400 (J) \\ J0942$-$5552 & $<11.4$ & 73.0 & $<15.7$ & $+0.6$ & 1400 (J) \\ J0942$-$5657 & $<14.1$ & 112.0 & $<12.6$ & $+11.6$ & 1400 (J) \\ J0953$+$0755 & $<13.6$ & 1072.0 & $<1.3$ & $-11.5$ & 149 (N) \\ J0959$-$4809 & $<13.6$ & 50.0 & $<27.1$ & $-4.0$ & 1400 (J) \\ J1001$-$5507 & $<14.3$ & 142.0 & $<10.0$ & $+1.9$ & 1400 (J) \\ J1012$-$2337 & $<8.2$ & 47.0 & $<17.4$ & & \\ J1047$-$3032 & $<7.6$ & 24.0 & $<31.8$ & & \\ J1057$-$5226 & $<16.2$ & 202.0 & $<8.0$ & $+3.2$ & 1400 (J) \\ J1116$-$4122 & $<9.2$ & 52.0 & $<17.6$ & $-3.6$ & 1400 (J) \\ J1121$-$5444 & $<17.8$ & 101.0 & $<17.6$ & $-7.8$ & 1400 (J) \\ J1430$-$6623 & $<15.8$ & 190.0 & $<8.3$ & 
$+4.5$ & 1400 (J) \\ J1543$-$0620 & $<12.9$ & 91.0 & $<14.1$ & $-5.0$ & 234 (Y) \\ J1607$-$0032 & $<71.8$ & 137.0 & $<52.4$ & $+1.4$ & 1400 (J) \\ J1643$-$1224 & $<21.4$ & 123.0 & $<17.4$ & $-1.0$ & 1331 (Y) \\ J1645$-$0317 & $<23.0$ & 774.0 & $<3.0$ & $-0.1$ & 1400 (J) \\ J1651$-$1709 & $<20.2$ & 111.0 & $<18.2$ & & \\ J1722$-$3207 & $<16.7$ & 229.0 & $<7.3$ & $+3.9$ & 1400 (J) \\ J1731$-$4744 & $<24.6$ & 325.0 & $<7.6$ & $+5.4$ & 1400 (J) \\ J1752$-$2806 & $<22.8$ & 1504.0 & $<1.5$ & $+5.9$ & 1400 (J) \\ J1810$+$1744 & $<110.0$ & 231.0 & $<47.6$ & & \\ J1820$-$0427 & $<28.9$ & 499.0 & $<5.8$ & $-3.3$ & 1400 (J) \\ J1824$-$1945 & $<24.3$ & 177.0 & $<13.7$ & $+1.3$ & 1400 (J) \\ J1824$-$2452A & $<20.0$ & 199.0 & $<10.1$ & & \\ J1913$-$0440 & $<20.6$ & 176.0 & $<11.7$ & $-7.0$ & 149 (N) \\ J2053$-$7200 & $<15.4$ & 110.0 & $<14.0$ & $-4.0$ & 660 (Y) \\ J2155$-$3118 & $<7.6$ & 46.0 & $<16.5$ & $-13.8$ & 1400 (J) \\ \hline \end{tabular} \caption{Non-detections from \citet{Murphy:2017} catalogue of 200 MHz pulsars. Upper limits are specified at 4$\sigma$ at 200\,MHz in circular polarisation. $S_{200}$ is the total intensity at 200\,MHz taken from \citet{Murphy:2017}. $\lvert v_{200}\rvert$ is the upper limit of the fractional circular polarisation at 200 MHz. Table columns are the same as defined in Table \ref{tab:candidates}. References provided within parenthesis refer to B:\citet{Bilous:2016}, F:\citet{Frail:2016}, J:\citet{Johnston:2017}, L:This work, M:\citet{Murphy:2017}, N:\citet{Noutsos:2015}, and Y:\citet{You:2006}.} \label{tab:pulsarm} \end{table} We note that the pulsar PSR J2330$-$2005, previously detected by \citet{Lenc:2016} in deep observations at 154 MHz, is not detected in the targeted survey. The source was found to be circularly polarised with a flux density of $-8.9\pm1.1$\,mJy and $-9.6\pm1.0$\,mJy in two separate epochs at 154\,MHz. 
In the 200 MHz all-sky survey, our $4\sigma$ limit is $\lvert V_{200}\rvert<7.2$\,mJy PSF$^{-1}$ for this source location. When adjusted for the GLEAM spectral index of this source ($\alpha=-0.71\pm0.57$), the brightest detection of this source would have an expected circularly polarised flux density of $-8.0\pm1.6$\,mJy at 200\,MHz ($-7.4\pm1.6$\,mJy for the weaker detection). Given the error constraints, it is possible that this source may have fallen below the threshold of this survey. Deeper observations would be required to confirm the nature of this source at 200\,MHz. \subsection{Limits on radio emission from exoplanets}\label{sec:exoplanet} The magnetised planets in our Solar system emit intense, low-frequency radio emission associated with planetary aurora. Similarly, planets outside our Solar System (i.e. exoplanets) capable of generating planetary-scale magnetic fields are expected to produce bright radio emission \citep{Winglee:1986, Zarka:2001}. The emission is produced via the electron-cyclotron maser (CMI) instability, arising from the propagation of energetic electrons along converging magnetic field lines in the magnetosphere of the planet. CMI emission is characterised as bright, beamed, highly circularly polarised radio emission that can be variable on time-scales of seconds to days \citep{Wu:1979, Treumann:2006}. Because the emitting frequency of the planetary radio emission is tied to the magnetic field strength, radio detections of exoplanets will directly measure their field strengths and in turn provide insight into the interior composition of these planets. Additionally, the variations of the radio emission in time and frequency can provide geometrical constraints on the planet's orbit and magnetic field configuration \citep{Hess:2011}.
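The spectral-index extrapolation applied above to PSR J2330$-$2005 is a one-line power-law scaling, $S(\nu_2)=S(\nu_1)(\nu_2/\nu_1)^{\alpha}$. A minimal sketch (ours; the function name is illustrative), reproducing the quoted 200\,MHz predictions from the two 154\,MHz epochs:

```python
# Power-law flux scaling S(nu2) = S(nu1) * (nu2/nu1)**alpha,
# using the GLEAM spectral index alpha = -0.71 quoted in the text.
def scale_flux(s_nu1_mjy, nu1_mhz, nu2_mhz, alpha):
    return s_nu1_mjy * (nu2_mhz / nu1_mhz) ** alpha

# The two 154 MHz epochs of PSR J2330-2005 scaled to 200 MHz:
print(round(scale_flux(-9.6, 154.0, 200.0, -0.71), 1))  # -8.0 mJy
print(round(scale_flux(-8.9, 154.0, 200.0, -0.71), 1))  # -7.4 mJy
```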
There have been many observational attempts to detect radio emission from exoplanets \citep{Bastian:2000, Lazio:2004, Lazio:2007, George:2007, Smith:2009, Lazio:2010, Stroe:2012, Lecavelier:2013, Hallinan:2013, Sirothia:2014, Murphy:2015, Lynch:2017a, OGorman:2018}, but there have been no unambiguous detections to date. The expected high fractional circular polarisation of CMI emission makes exoplanets prime targets for Stokes V searches. Two previous studies have used Stokes V imaging with the MWA to search for radio emission from exoplanets. From a catalog of 1\,110 known exoplanets (as of 2014 May 14), \citet{Murphy:2015} targeted 17 sources that they identified as having estimated flux densities and emission frequencies close to or above the MWA detection capabilities. \citet{Lynch:2017a} observed a young star-forming region to search for variable Stokes V emission that might be associated with exoplanets in still-forming planetary systems. Since the publication of \citet{Murphy:2015}, many thousands of exoplanets have been discovered through various optical techniques \citep{Schneider:2011}. Using an updated catalog of 4\,132 sources (the known population of exoplanets as of 2018 February 19), we performed a targeted search of the 1\,506 sources located within our survey region for significant circularly polarised emission above the estimated local image noise. Again, we used a lower, 4$\sigma$ threshold since an a priori position was known. Of the 1\,506 sources searched, two sources, Proxima Cen b and HD 34445 b, were found to be associated with emission at a $>$4$\sigma$ level. Visually inspecting the Stokes V image for HD 34445 b, we found the source to have structure indicative of a noise peak; thus we ruled this source out as a detection.
Visual inspection of the Proxima Centauri b image found the emission to be point-like, and an investigation of the associated Stokes I emission did not reveal a bright source that could be responsible for any total intensity leakage. We tentatively claim a detection of weak emission at the location of Proxima Centauri b; however, the detected radio emission is not expected to be associated with the planet but instead with the host star. CMI emission is emitted at the cyclotron frequency of the source electron population, which is directly related to the local magnetic field strength, $B_p$, of the planetary magnetosphere: \begin{equation} f_c = \frac{eB_p}{2\pi m_e}\approx 2.8\ \text{MHz}\ B_p \end{equation} where $m_e$ and $e$ are the electron mass and charge, and $B_p$ is measured in Gauss. The maximum estimated magnetic field strength for Proxima Centauri b is 1\,Gauss \citep{Burkhart:2017}, corresponding to a maximum emission frequency of $\sim$3 MHz. Due to ionospheric absorption of emission at frequencies $<$10\,MHz, planetary radio emission from Proxima Centauri b cannot be detected by ground-based radio telescopes. Thus any emission that we detected using ground-based radio telescopes must be related to the magnetic activity of the star; the possibility of Proxima Centauri producing the observed emission is discussed in the next section. The upper limits set by this survey for a set of the best radio-detection candidate exoplanets, as identified by their theoretically estimated emission frequencies and radio flux densities, will be discussed in a future paper (Lynch et al. \emph{in prep}). \subsection{Limits on radio emission from flare stars}\label{sec:flareStars} Some magnetically active stars are observed to exhibit short-duration, narrow-band, and highly circularly polarised ($\sim$100$\%$) radio flares.
The observed polarisation and frequency-time structure of these flares points to a coherent emission mechanism such as CMI \citep{Bastian:2000, Gudel:2002}. In the 1960s -- 1970s, several magnetically active M dwarf stars were observed at frequencies between $90 - 300$~MHz using single-dish telescopes. These observations revealed bright radio flares with rates between $0.03 - 0.8$ flares per hour and intensities ranging from $0.8$ to $20$\,Jy. However, recent low-frequency surveys to detect transients have resulted in non-detections \citep[e.g.][]{Rowlinson:2016, Tingay:2016}. To confirm the previous M dwarf stellar flare rates and flux densities at $100 - 200$\,MHz, \citet{Lynch:2017b} targeted UV Ceti, a magnetically active M dwarf star. As the radio flares from UV Ceti were expected to be highly circularly polarised, this search was focused on the circularly polarised images rather than on total intensity. Four flares were detected from UV Ceti with flux densities a factor of 100 fainter than those in the literature. Following this example, we used the updated catalog of radio stars by \citet{Wendker:2015} to search for circularly polarised emission associated with the positions of these objects. A wide variety of stellar objects are included in this catalogue, including M dwarf stars, RS CVn binaries, and magnetic chemically peculiar hot stars. This catalog contains 3\,021 objects, 2\,400 of which are located within our survey region. From this search we identify 3 objects associated with emission at a $>$4$\sigma$ level: Proxima Centauri, HR 5942, and DM-64 1685. Visual inspection of the circularly polarised image for DM-64 1685 ruled it out as a detection, as the emission structure more closely resembles image noise than a point source. In the total intensity image, the location of HR 5942 is offset from a bright extragalactic source, leading us to conclude that the observed Stokes V emission is not due to total intensity leakage.
A similar offset is found for both Stokes Q and U and is not indicative of linear polarisation leakage. We tentatively claim a $4.5\sigma$ detection of HR 5942 with a measured Stokes V flux density of $-11\pm 3$\,mJy. Additionally, we claim a tentative detection of Proxima Centauri, with a measured flux density of $-18\pm 4$\,mJy. Both HR 5942 and Proxima Centauri have previously been detected in the radio. HR 5942 is a magnetic chemically peculiar Bp star with previous detections of quiescent emission at 5 and 14\,GHz \citep{Linsky:1992, Leone:1994} and radio flaring at 5\,GHz \citep{Drake:1989}; both types of emission are thought to be gyrosynchrotron emission. Coherent emission has been observed in other magnetic chemically peculiar stars from 610 to 1400\,MHz \citep{Trigilio:2000, Chandra:2015, Das:2018}. If the detection of HR 5942 is confirmed, this would be the lowest-frequency detection of a magnetic chemically peculiar hot star and only the third hot star to be observed emitting highly circularly polarised, coherent emission. Proxima Centauri is an emission-line M dwarf star, previously observed to emit bright coherent bursts at $\sim$1\,GHz \citep{Lim:1996, Slee:2003}. Other M dwarf stars have been observed to emit radio emission from MHz to GHz frequencies (e.g. AD Leo: \citet{Spangler:1974}, \citet{Jackson:1989}; YZ CMi: \citet{Spangler:1976}, \citet{Kundu:1988}). Given the previously observed GHz bursts from Proxima Centauri, it is possible that this source could also produce bursts at 170 -- 230\,MHz; a previous 3$\sigma$ limit of 42.3\,mJy at 200\,MHz has been reported by \citet{Bell:2016b}. To further confirm the tentative detections of HR 5942 and Proxima Centauri, investigations into the variability and frequency spectrum of the observed emission are ongoing.
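As a quick numerical check of the CMI frequency relation quoted in the exoplanet section, the scale factor of $\sim$2.8\,MHz per Gauss follows directly from the electron charge and mass. A minimal sketch (ours; constants are CODATA values, and the function name is illustrative):

```python
import math

# Electron cyclotron frequency f_c = e * B / (2 * pi * m_e),
# evaluated in MHz for a field strength given in Gauss.
E_CHARGE = 1.602176634e-19     # electron charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def cyclotron_mhz(b_gauss):
    b_tesla = b_gauss * 1e-4   # 1 G = 1e-4 T
    return E_CHARGE * b_tesla / (2.0 * math.pi * M_ELECTRON) / 1e6

# Proxima Cen b's maximum estimated field of ~1 G caps the CMI emission
# near 2.8 MHz, well below the ~10 MHz ionospheric cut-off.
print(round(cyclotron_mhz(1.0), 2))  # ~2.8
```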
\section{Conclusions} We have demonstrated the effectiveness of polarisation leakage mitigation using MWA observations and have used it to complete an all-sky survey in circular polarisation using existing observations. The fractional leakage was typically reduced by an order of magnitude to less than $0.72\%$ and allowed both blind and targeted surveys to be performed with a sensitivity of $1.0-3.0$\,mJy\,PSF$^{-1}$. We have detected 32 pulsars, 6 transient emissions from artificial satellites and 2 flare stars. When compared against total intensity observations of pulsars at 200\,MHz, $35\%$ of pulsars that were detectable in total intensity were also detected in circular polarisation. Furthermore, 11 pulsars detected in circular polarisation were not originally found in total intensity (as a result of their location in the Galactic plane or the limited sensitivity available in Stokes I because of confusion). The 2 flare stars detected in this survey were only detected in circular polarisation due to either limited sensitivity in the total intensity image or the close proximity of a nearby, bright extragalactic source. Of the 3\,610 exoplanets in our catalogue of known objects, 1\,506 exoplanets were located within our survey region; these were also searched but did not yield any detections attributable to exoplanets. The all-sky survey presented here was not ideal for detecting transient emission from sources such as flare stars and exoplanets. Transient sources and sources that can change sign in polarisation, such as seen with the flare star UV Ceti \citep{Lynch:2017b}, require an alternate observing and processing strategy. To avoid diluting the signal, the integration of the snapshot images should not exceed the time-scale of the expected emission before a sign flip occurs or the emission stops. Similarly, for periodic emission where the duty cycle is low, tracked observations of a field would be better suited to increase the probability of catching the moment of the flaring emission.
Two avenues of investigation that will be pursued in the future are to search through overlapping snapshot images of the drift-scan for transient emission and to apply the leakage mitigation techniques developed here to targeted observations. While an order-of-magnitude improvement in Stokes I to Stokes V leakage has been greatly beneficial, further improvements would be required to probe sources with low levels of fractional circular polarisation. The technique presented in this paper is currently limited by sidelobe confusion, noise and fitting of the 2D model of the position-dependent leakage. A more extended antenna array, such as that available with the recent MWA extension, can help reduce PSF sidelobes and improve the sensitivity of uniform-like imaging. Sensitivity can also be improved by avoiding beam-former or frequency changes over the course of a drift-scan. With a near-continuous drift-scan, field sources can probe leakage over much finer tracks throughout the field. Finally, improved 2D modelling can also help to reduce errors at the field edges where increased residual leakage is noticed. Currently, a simple 2D quadratic function is used for fitting; more complex functions may improve the fitting results. An outstanding challenge not addressed in the survey presented here is distinguishing between Stokes U to Stokes V leakage and true circular polarisation. This is only problematic for sources with both a significant degree of linear polarisation and a low rotation measure. As such, it only affects a small sub-set of candidate sources. To determine the effect of this leakage requires knowledge of the X-Y phase, which is typically obtained by observing a linearly polarised source. Such sources are rare at long wavelengths; however, it may be possible to measure the effect in diffuse linearly polarised Galactic emission \citep{Lenc:2017}. The practicality and effectiveness of this is yet to be investigated for an all-sky survey.
The methods for leakage mitigation demonstrated here should also be applicable to the Square Kilometre Array Low Frequency array (SKA-Low\footnote{See SKA phase 1 system (Level 1) requirements SKA-TEL-SKO-0000008: \url{http://www.skatelescope.org/wp-content/uploads/2014/03/SKA-TEL-SKO-0000008-AG-REQ-SRS-Rev06-SKA1_Level_1_System_Requirement_Specification-P1-signed.pdf}}). The method requires minimal processing and is fast because no deconvolution is required. The main requirement is that the nature of the leakage remains constant for a given instrumental beam. A secondary requirement is that good quality images can be generated on relatively short time-scales. In the case of the MWA, the large number of available baselines enables it to generate good quality images on 2-minute time-scales. The current specification for SKA-Low has fewer baselines than the MWA, but they will be more sensitive and more extended. As long as this compromise does not adversely affect the quality of the snapshot dirty maps, the mitigation techniques used for the MWA should still be effective with SKA-Low. \section*{Acknowledgments} The authors thank Ron Ekers for useful discussions. TM acknowledges the support of the Australian Research Council through grant FT150100099. DLK was supported by NSF grant AST-1412421. This scientific work makes use of the Murchison Radio-astronomy Observatory, operated by CSIRO. We acknowledge the Wajarri Yamatji people as the traditional owners of the Observatory site. Support for the operation of the MWA is provided by the Australian Government (NCRIS), under a contract to Curtin University administered by Astronomy Australia Limited. We acknowledge the Pawsey Supercomputing Centre which is supported by the Western Australian and Australian Governments. This research was conducted by the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020.
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. The authors thank the anonymous referee for providing useful comments on the original version of this paper. \bibliographystyle{mnras}
\section{Introduction} With the successful run of the Large Hadron Collider (LHC), there is a pressing demand for next-to-leading order (NLO) and next-to-next-to-leading order (NNLO) background computations. NLO and NNLO computations involve loop-level Feynman diagrams. The number of Feynman integrals grows quickly for multi-leg and multi-loop cases. However, for each diagram, many different Feynman integrals are linearly related by integration-by-parts (IBP) relations or symmetries, so the whole set of integrals can be reduced to a minimal set of integrals, the so-called {\it master integrals} (MIs). This paper focuses on the geometric meaning of IBP relations and provides a new method for obtaining IBP relations. Schematically, for an $L$-loop integral, the integration of a total derivative vanishes, and the resulting identity is called an IBP relation: \begin{equation} \int \frac{d^D l_1}{i \pi^{D/2}} \ldots \frac{d^D l_L}{i \pi^{D/2}} \sum_{i=1}^L \frac{\partial }{\partial l_i^\mu}\bigg(\frac{v_i^\mu}{D_1^{a_1} \ldots D_k^{a_k}}\bigg)=0. \label{IBP} \end{equation} Here the $v_i^\mu$ are vectors that depend on the external and internal momenta. Traditionally, various contributions to a certain amplitude are characterized by Feynman diagrams, and the final results are reduced to the form of MIs by IBP relations. In recent years, many new methods have been developed to improve the efficiency of multi-loop diagram computation, most of which also require the calculation of IBP identities at certain steps. Unitarity methods \cite{Bern:1994zx,Bern:1994cg, Britto:2004nc} relate a loop amplitude to the product of tree amplitudes, and the latter can be efficiently calculated by recursive methods \cite{Britto:2004ap, Britto:2005fq}. For example, the Ossola-Papadopoulos-Pittau (OPP) method \cite{Ossola:2006us,Ossola:2007ax,Giele:2008ve,Badger:2008cm,Ellis:2007br,Forde:2007mi} determines the {\it minimal integrand basis} for one-loop Feynman diagrams algebraically via partial fraction.
This method has been successfully generalized to multi-loop integrand-level reduction by computational algebraic geometry \cite{Mastrolia:2011pr, Badger:2012dp,Zhang:2012ce,Mastrolia:2012an,Badger:2012dv,Badger:2013gxa,Feng:2012bm,Mastrolia:2012wf,Mastrolia:2012du,Kleiss:2012yv,Huang:2013kh,Fazio:2014xea, vanDeurzen:2013saa, Mastrolia:2013kca, Hauenstein:2014mda}. The coefficients of the minimal integrand are therefore fixed by unitarity cuts. However, usually the integrand basis is not the minimal integral basis, so the results are finally reduced to MIs by IBP relations. Multi-loop unitarity has also been systematically performed by the maximal unitarity method \cite{Kosower:2011ty, CaronHuot:2012ab, Larsen:2012sx, Johansson:2012zv, Johansson:2013sda, Sogaard:2013yga, Sogaard:2013fpa,Sogaard:2014ila,Sogaard:2014oka}. Feynman integrals are converted to contour integrals and MI coefficients can be directly extracted from residue calculations. To get the correct contour weights, in the intermediate step, IBP relations are required \cite{Kosower:2011ty}. For multi-loop or multi-leg diagrams, in general, the computation of IBPs is very heavy. For a given loop diagram, there are many IBP relations from different choices of IBP-generating vectors $v_i^\mu$ in (\ref{IBP}). The desired reduction of Feynman integrals to MIs can be achieved by Gaussian elimination of IBP relations, via the Laporta algorithm \cite{Laporta:2000, Laporta:2001}. This algorithm is used in several sophisticated programs, like \AIR{} \cite{AIR}, \FIRE{} \cite{FIRE} and \Reduze{} \cite{Reduze}. Furthermore, the Laporta algorithm can be greatly sped up by a finite-field numerical sampling method \cite{vonManteuffel:2014ixa}. A breakthrough method for generating IBP relations was proposed by Gluza, Kajda and Kosower (the GKK method) \cite{Gluza:2010ws}. The GKK method finds IBP relations among integrals without doubled propagators, so only a small portion of the loop integrals needs to be considered.
In practice, such IBP relations are found by the careful choice of IBP-generating vectors $v_i^\mu$ in (\ref{IBP}), via syzygy computations \cite{Gluza:2010ws}. IBP relations for several two-loop diagrams have been obtained with this method. Furthermore, the syzygy computation can be simplified by linear algebra techniques \cite{Schabinger:2011dz}. However, the GKK method does not indicate the geometric meaning of such IBP-generating vectors. It is an interesting question to ask if these vectors have any particular meaning in the loop-momentum space. In our paper, we illustrate the geometric meaning of the IBP-generating vectors for integrals without doubled propagators. We reformulate such a vector as a differential form by Poincar\'{e} duality, \begin{equation} v_i^\mu \Leftrightarrow \omega, \end{equation} where $\omega$ is a rank-$(DL-1)$ differential form. Then we show that it is {\it locally} proportional to the differential form $\Omega = dD_1 \wedge \ldots \wedge d D_k$, \begin{equation} \label{eq:78} \omega \mid_{\mathcal S} \ \propto \Omega \mid_{\mathcal S}, \end{equation} where the $D_i$'s are the denominators of the Feynman integral and $\mathcal S$ is the unitarity cut solution. Geometrically, $\omega$ is along the normal direction of the unitarity-cut surface. Furthermore, we design a geometric method to generate IBP identities without doubled propagator. We consider the {\it primary decomposition} of the unitarity cut solutions, \begin{equation} \label{eq:62} \mathcal S=\bigcup_{i=1}^n \mathcal S_i. \end{equation} By solving congruence equations, we construct differential forms $\omega_i$ which are nonzero and proportional to $\Omega$ on $\mathcal S_i$, but vanish on the other branches, \begin{equation} \label{eq:76} \left \{ \begin{array}{c} \omega_i |_{\mathcal S_i} = \ (\alpha \wedge \Omega) |_{\mathcal S_i} \\ \omega_i |_{\mathcal S_j} = 0 |_{\mathcal S_j} ,\quad j \not= i \end{array} \right .
, \end{equation} where $\alpha$ is an arbitrary non-zero $(DL-1-k)$-form. We use such $\omega_i$'s to generate the on-shell part of the IBP relations without doubled propagator. Several two-loop four-point and five-point examples are tested by our method. This paper is organized as follows: in section \ref{IBP_diff}, we reformulate IBP identities in terms of differential forms, and the condition for IBPs without doubled propagators is also reformulated. In section \ref{geometry}, we illustrate the geometric meaning of the IBP-generating differential forms and present a new method for generating the on-shell part of IBPs. In section \ref{examples}, several two-loop examples based on our algorithm are given. \section{Integration-by-Parts identities in the formalism of differential form} \label{IBP_diff} We consider the $L$-loop Feynman integral, \begin{equation} \label{integral} I_{\{a_1, \ldots a_k\}}[N]= \int \frac{d^D l_1}{i \pi^{D/2}} \ldots \frac{d^D l_L}{i \pi^{D/2}} \frac{N}{D_1^{a_1} \ldots D_k^{a_k}}, \end{equation} where $N$ is a polynomial in loop momenta. The integrand reduction and unitarity solution structure have been studied by algebraic geometry methods \cite{Zhang:2012ce, Mastrolia:2012an}. In the following discussion, we will frequently use these algebraic geometry methods. The mathematical notations are summarized in the Appendix and the algebraic geometry reference is \cite{MR0463157}. We find that it is convenient to rewrite IBP relations (\ref{IBP}) in terms of differential forms. By Poincar\'{e} duality, the $(D\cdot L)$-dimensional vector $v_i^\mu$ is dual to a $(D \cdot L-1)$-form $\omega$. Explicitly, \begin{equation} \label{Poincare_duality} \omega_{i_1 \ldots i_{(DL-1)}} \equiv \epsilon_{i_1 \ldots i_{(DL-1)} i_{DL}} v^{i_{DL}} , \end{equation} where $\epsilon_{i_1 \ldots i_{(DL-1)} i_{DL}}$ is the Levi-Civita symbol.
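As a simple illustration of this duality (our own toy case, not from the text): for $DL=2$ with coordinates $(x^0,x^1)$ and $\epsilon_{01}=+1$, a vector $v=(v^0,v^1)$ dualizes to

```latex
\begin{align*}
\omega &= \omega_0\, dx^0 + \omega_1\, dx^1
        = v^1\, dx^0 - v^0\, dx^1 , \\
d\omega &= -\left(\frac{\partial v^0}{\partial x^0}
          + \frac{\partial v^1}{\partial x^1}\right) dx^0 \wedge dx^1 ,
\end{align*}
```

so the exterior derivative of the dual form reproduces the divergence $\partial_\mu v^\mu$, up to an overall sign fixed by the orientation convention.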
In most of the following discussion, we use the notation of differential forms, since it is convenient for writing down exterior derivatives and wedge products. We call a differential form {\it polynomial-valued} if all of its components, in the momentum-coordinate basis, are polynomials in the loop momenta. Note that this definition is consistent with linear transformations of the loop momenta. The total derivative in (\ref{IBP}) can be dually written as, \begin{equation} \frac{\partial }{\partial l_i^\mu}\bigg(\frac{v_i^\mu}{D_1^{a_1} \ldots D_k^{a_k}} \bigg) \Leftrightarrow d\bigg(\frac{\omega}{D_1^{a_1} \ldots D_k^{a_k}}\bigg). \label{eq:3} \end{equation} So the IBP relation is \begin{equation} \label{IBP_dual} \int \frac{d\omega}{D_1^{a_1} \ldots D_k^{a_k}} - \sum_{i=1}^k a_i\int \frac{dD_i\wedge \omega}{D_1^{a_1} \ldots D_i^{a_i+1} \ldots D_k^{a_k}}=0. \end{equation} Different choices of $v_i^\mu$, or equivalently $\omega$, lead to different IBPs. One particularly interesting class of IBPs is {\it IBPs without doubled propagator}, which is described in the next subsection. \subsection{IBPs without doubled propagator} For a Feynman integral from Feynman rules, the powers of the denominators $D_1,\ldots D_k$ in (\ref{integral}) are usually one or zero, i.e., $a_i=0,1$, $i=1, \ldots k$. We call such an integral an {\it integral without doubled propagator}.
\end{equation} However, a particular choice of $\omega$ can remove the double power if, \begin{equation} \label{IBP_without_double_propagator} dD_i \wedge \omega=f_i D_i dl_1^0\wedge \ldots \wedge d l_L^{D-1},\quad i=1,\ldots j \end{equation} where $f_i$ is a polynomial. \subsection{On-shell part of IBPs} Sometimes we only focus on Feynman diagrams without pinched legs, i.e., $a_i\geq 1, i=1,\ldots k$. We call the corresponding integrals {\it leading integrals}. On the other hand, we call integrals with at least one $a_i<1$ {\it simpler integrals}. If we only keep the leading integrals in an IBP relation, then the resulting formula \begin{equation} \label{eq:6} \sum_{i} c_i I_{a_{i,1},\ldots a_{i,k}}[N_i]+ \ldots =0, \end{equation} is called an {\it on-shell IBP relation}. $a_{i,j}>0, \forall i,j $. Here ``$\ldots$'' denotes the {\it simpler integrals}, and $N_i$'s are polynomial numerators. In this paper, we consider the on-shell IBP without double propagators, namely, \begin{equation} \label{IBP1} \sum_{i} c_i I_{1,\ldots 1}[N_i]+ \ldots =0, \end{equation} For the ansatz (\ref{Ansatz}) to generate an on-shell IBP without doubled propagator, it is sufficient that, \begin{equation} \label{on_shell_IBP_without_double_propagator} dD_i \wedge \omega=\sum_j f_{ij} D_j dl_1^0\wedge \ldots \wedge d l_L^{D-1},\quad i=1,\ldots j \end{equation} where each $f_{ij}$ is a polynomial. $\omega$ generates the IBP, \begin{eqnarray} \label{eq:9} 0= \int d\big( \frac{\omega}{D_1 \ldots D_k} \big) = \int \frac{d\omega}{D_1 \ldots D_k}-\sum_{i=1}^k \sum_{j=1}^k \int \frac{ f_{ij} D_j dl_1^0\wedge \ldots \wedge d l_L^{D-1}}{D_1 \ldots D_i^2 \ldots D_k} , \end{eqnarray} Pick up the on-shell part, we have \begin{equation} \label{on_shell} 0= \int \frac{d\omega}{D_1 \ldots D_k}-\sum_{i=1}^k \int \frac{ f_{ii} dl_1^0\wedge \ldots \wedge d l_L^{D-1}}{D_1 \ldots D_k} + ... , \end{equation} where $\ldots$ stands for simpler integrals. 
Note that this condition (\ref{on_shell_IBP_without_double_propagator}) is weaker than the condition (\ref{IBP_without_double_propagator}). Furthermore, from (\ref{on_shell}), we have the following lemma, \begin{lemma} If all components of $\omega$ are in the ideal $I=\langle D_1, \ldots D_k \rangle$, then it generates an IBP identity whose on-shell part is trivial. \end{lemma} \begin{proof} Let $\omega'=\sum_{i=1}^m w_i dx_1\wedge \ldots \wedge\hat{dx_i}\wedge \ldots \wedge dx_m$, where $m=LD$ and $\{x_1,\ldots x_m\}$ denote the loop momentum components $\{l_1^0 ,\ldots l_L^{D-1}\}$. Suppose that every $w_i$ is in $I$, i.e., $w_i=\sum_{j=1}^k g_{ij} D_j$. Hence, \begin{eqnarray} \label{eq:18} 0 &=& \int d\big(\frac{\omega'}{D_1 \ldots D_k}\big)=\sum_{i=1}^m \sum_{j=1}^k\int d\big(\frac{ g_{ij} D_j dx_1\wedge \ldots \wedge\hat{dx_i}\wedge \ldots \wedge dx_m}{D_1 \ldots D_k}\big)\nonumber \\ &=& \sum_{i=1}^m \sum_{j=1}^k\int d\big(\frac{ g_{ij} dx_1\wedge \ldots \wedge\hat{dx_i}\wedge \ldots \wedge dx_m}{D_1 \ldots \hat{D_j}\ldots D_k}\big) . \end{eqnarray} From the expansion of this expression, it is clear that each term misses one of the denominators. Therefore, $\omega'$ generates the IBP, \begin{equation} \label{eq:13} 0=0 + \ldots , \end{equation} where $\ldots$ stands for simpler integrals. The on-shell part is trivial. \end{proof} From this lemma, if two rank-$(DL-1)$ forms $\omega_1$ and $\omega_2$ differ by such an $\omega'$, whose components all lie in $I$, then $\omega_1$ and $\omega_2$ generate the same on-shell IBP. If an $\omega$ satisfies (\ref{on_shell_IBP_without_double_propagator}), then $f \omega$ also satisfies (\ref{on_shell_IBP_without_double_propagator}), where $f$ is a polynomial in the loop momenta. So we can obtain more IBPs without doubled propagator by multiplying by various $f$'s. Note that, by Lemma 1, only when $f$ is a polynomial in the {\it irreducible scalar products} does the resulting $f \omega$ generate a non-trivial on-shell IBP.
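The triviality criterion above can be tested mechanically: a polynomial lies in $I$ if and only if its remainder on division by a Gr\"obner basis of $I$ vanishes. A minimal sketch in Python with sympy, using a toy ideal rather than the physical denominators:

```python
import sympy as sp

x, y = sp.symbols('x y')

# toy "denominators" generating the ideal I = <D1, D2>
D1, D2 = x**2 - y, y**2
gb = sp.groebner([D1, D2], x, y, order='grevlex')

def in_ideal(f):
    """Membership test: f is in I iff its remainder over the Groebner basis is 0."""
    _, r = sp.reduced(f, gb.exprs, x, y, order='grevlex')
    return r == 0

# x**4 = (x**2 + y)*D1 + D2 lies in I: such a component gives a trivial on-shell part
assert in_ideal(x**4)
# x does not lie in I: such a component can contribute a non-trivial on-shell IBP
assert not in_ideal(x)
```

Applied component by component, this is the test for whether a candidate $\omega$ (or a multiplier $f$) produces only a trivial on-shell IBP.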
\section{A method to construct on-shell IBPs without doubled propagator} \label{geometry} We reformulate (\ref{on_shell_IBP_without_double_propagator}) from the viewpoint of algebraic geometry, and then illustrate how to find solutions of (\ref{on_shell_IBP_without_double_propagator}) with computational algebraic geometry methods. \subsection{A condition for on-shell IBPs without doubled propagator} With this algebraic-geometry background, we can reformulate the condition (\ref{on_shell_IBP_without_double_propagator}) as the differential-geometric constraint in Proposition \ref{lemma_product}. \begin{proposition} \label{vanishing_lemma} For an $\omega$ in (\ref{Ansatz}) to generate an on-shell IBP without doubled propagator, it is necessary that, at the cotangent space of each point on the cut solution, \begin{equation} \label{vanishing_id} (dD_i \wedge \omega)|_P=0, \quad \forall P \in \mathcal Z(I). \end{equation} If the ideal generated by the denominators is radical, then this condition is also sufficient. \end{proposition} \begin{proof} By definition, all $D_i$ vanish on $\mathcal S=\mathcal Z(I)$; hence if $\omega$ satisfies (\ref{on_shell_IBP_without_double_propagator}), then $\forall P\in \mathcal Z(I)$, $(dD_i \wedge \omega)|_P=0$. On the other hand, \begin{equation} \label{eq:8} (dD_i \wedge \omega)=F_i dl_1^0\wedge \ldots \wedge d l_L^{D-1},\quad i=1,\ldots k, \end{equation} where each $F_i$ is a polynomial. (\ref{vanishing_id}) means that each $F_i$ vanishes everywhere on $\mathcal S$. So, by Hilbert's Nullstellensatz, $F_i\in \sqrt I$. If $I$ is radical, then $F_i\in I$ and so $F_i=\sum_j f_{ij} D_j$. \end{proof} To gain some insight into (\ref{vanishing_id}), we consider the cotangent space at $P$. We consider the generic case, in which the cut equation system is non-degenerate, i.e., \begin{equation} \label{eq:10} \dim \mathcal S_i=DL-k, \quad i=1, \ldots n, \end{equation} where $k$ is the number of denominators.
If $P$ is a {\it non-singular point}, i.e., the Jacobian matrix \begin{equation} \label{eq:11} J=\bigg(\frac{\partial D_i}{\partial x_j}\bigg)\bigg|_P \end{equation} has rank $k$, then clearly \begin{equation} \label{eq:12} (d D_1 \wedge \ldots \wedge d D_k)|_P \not=0. \end{equation} Therefore we have the following proposition, \begin{proposition} \label{lemma_product} If $k\leq DL-1$ and all cut solutions have dimension $DL-k$, then for an $\omega$ in (\ref{Ansatz}) to generate an on-shell IBP without doubled propagator, it is necessary that for each non-singular point $P$ on the cut solution, at the cotangent space, \begin{equation} \omega|_P=(\alpha \wedge dD_1 \wedge \ldots \wedge dD_k )|_P , \end{equation} where $\alpha$ is a $(DL-k-1)$-form. \end{proposition} \begin{proof} Since the Jacobian matrix has full rank at the non-singular point $P$, locally we can choose a coordinate system $(y_1,\ldots y_{DL})$ such that, \begin{equation} \label{eq:14} y_1 = D_1, \quad \ldots,\quad y_k=D_k. \end{equation} Expand $\omega|_P$ in this coordinate system. If $\omega|_P$ contained a component proportional to $dy_1 \wedge \ldots \wedge \hat{dy_i}\wedge \ldots \wedge dy_{DL}$ with $i\leq k$, then \begin{equation} \label{eq:16} (dD_i \wedge \omega)|_P\not =0 , \end{equation} which would violate Proposition \ref{vanishing_lemma}. Collecting all terms proportional to $dy_1 \wedge \ldots \wedge \hat{dy_i}\wedge \ldots \wedge dy_{DL}$ with $i> k$, the proposition follows. \end{proof} Generically, the singular points of $\mathcal S$ form only a lower-dimensional subset. So for ``almost all points'' on $\mathcal S$, $\omega$ is proportional to $dD_1 \wedge \ldots \wedge dD_k$. We may try the explicit ansatz, \begin{equation} \label{attempting_ansatz} \omega= \alpha\wedge dD_1 \wedge \ldots \wedge dD_k, \end{equation} where $\alpha$ is a polynomial-valued differential form. This indeed generates an on-shell IBP relation without doubled propagator.
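For a concrete check of this ansatz, take $m=3$ variables and $k=2$ toy denominators, so that $\alpha$ is a 0-form and the components of $\omega=\alpha\, dD_1\wedge dD_2$ are $\alpha\,\nabla D_1\times\nabla D_2$; then $dD_i\wedge\omega=0$ identically, and the total derivative should carry no doubled propagator. A sketch in Python with sympy ($D_1$, $D_2$ and $\alpha$ are illustrative choices):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
D1 = x**2 + y**2 + z**2 - 1   # toy quadric "propagator"
D2 = z                         # second toy "propagator"; m = 3, k = 2
alpha = x                      # polynomial 0-form, since m - k - 1 = 0

g1 = sp.Matrix([sp.diff(D1, v) for v in (x, y, z)])
g2 = sp.Matrix([sp.diff(D2, v) for v in (x, y, z)])
a, b, c = alpha * g1.cross(g2)   # components of omega = alpha * dD1 ^ dD2

# coefficient of dx ^ dy ^ dz in d(omega / (D1*D2)): a divergence
total = (sp.diff(a / (D1*D2), x)
         + sp.diff(b / (D1*D2), y)
         + sp.diff(c / (D1*D2), z))

# no doubled propagator: a single power of each D_i clears all denominators
num, den = sp.fraction(sp.cancel(total * D1 * D2))
assert den == 1
```

In vector language this is the divergence of $\alpha\,\nabla D_1\times\nabla D_2/(D_1 D_2)$, the differential-form counterpart of a syzygy-generated IBP vector.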
However, this form may not generate enough IBP relations, since Proposition \ref{vanishing_lemma} is only a local condition while (\ref{attempting_ansatz}) is a global expression. We may generalize (\ref{attempting_ansatz}) as follows: a polynomial-valued differential form $\omega$ which {\it locally} has the form, \begin{equation} \label{ansatz} \omega|_{\mathcal S_i} = \alpha_i \wedge dD_1 \wedge \ldots \wedge dD_k , \end{equation} on each branch $\mathcal S_i$, where the $\alpha_i$'s are different polynomial $(DL-k-1)$-forms on different branches. Then there are two questions, \begin{itemize} \item Given a set of $\alpha_i$'s, does such a polynomial-valued $\omega$ exist? \item Given a set of $\alpha_i$'s, is there an algorithm to find such an $\omega$? \end{itemize} These questions will be answered in the next subsection, explicitly in Theorem \ref{congruence}, by solving {\it congruence equations}. \subsection{Local form and congruence equations} To study the behaviour of a differential form near the cut, we use the tools of Gr\"obner bases and polynomial division. Recall that $I$ has the primary decomposition $I=I_1\cap \ldots \cap I_n$. Let $G(I)$ be the Gr\"obner basis of $I$, and $G(I_i)$ be the Gr\"obner basis of $I_i$. We define the equivalence classes $[\ ]$ and $[\ ]_i$ by, \begin{eqnarray} \ [f]&=&[g], \quad \text{if } f-g \in I, \\ \ [f]_i&=&[g]_i, \quad \text{if } f-g \in I_i. \end{eqnarray} Intuitively, these equivalence classes characterise the limiting behaviour of polynomials approaching the cut manifold. In practice, the unique representative of $[f]$ (or $[f]_i$) can be chosen as the remainder of the polynomial division of $f$ over $G(I)$ (or $G(I_i)$). Here we generalize the equivalence classes to polynomial-valued differential forms: two differential forms $\alpha$ and $\beta$ are in the same equivalence class if and only if they have the same rank and their corresponding polynomial components are in the same equivalence classes.
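In practice this division is a one-liner in a computer algebra system. A small sketch in Python with sympy (the ideal is a toy stand-in for $\langle D_1,\ldots D_k\rangle$):

```python
import sympy as sp

x, y = sp.symbols('x y')

# toy ideal I = <x*y - 1, y**2 - 1>; its Groebner basis plays the role of G(I)
gb = sp.groebner([x*y - 1, y**2 - 1], x, y, order='lex')

def representative(f):
    """Unique representative of the class [f]: the remainder of division over G(I)."""
    _, r = sp.reduced(f, gb.exprs, x, y, order='lex')
    return r

# x**2 - 1 lies in I, so x**2 and 1 define the same equivalence class
assert representative(x**2) == representative(1)
```

The same remainder computation, applied component by component, defines the representative of a polynomial-valued differential form.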
We still use $[\ ]$ and $[\ ]_i$ for differential forms. Then we rewrite the condition (\ref{ansatz}) as, \begin{eqnarray} \label{eq:15} [\omega]_i= [\alpha_i \wedge dD_1 \wedge \ldots \wedge dD_k]_i . \end{eqnarray} For a large class of diagrams, given an arbitrary set of $\alpha_i$'s, such a differential form $\omega$ exists. We have the following theorem, \begin{thm} \label{congruence} Let $I=\langle D_1, \ldots D_k \rangle$ be an ideal in the ring $\mathbb C[x_1,\ldots x_{m}]$. $I=I_1 \cap \ldots \cap I_n$ is its primary decomposition and $J_i=\cap_{j=1}^i I_j$. Suppose that (1) each component satisfies $\dim \mathcal Z(I_i)=m-k$, and (2) each $(J_i+I_{i+1})$ is a radical ideal, $i=1, \ldots n-1$. Then given an arbitrary set of rank-$(m-k-1)$ polynomial-valued forms, $\alpha_i$, there exists a rank-$(m-1)$ form $\omega$ such that, \begin{eqnarray} \label{congruence_eq} [\omega]_i= [\alpha_i \wedge dD_1 \wedge \ldots \wedge dD_k]_i . \end{eqnarray} \end{thm} \begin{proof} We construct $\omega$ explicitly by solving congruence equations. Define $v_i=\alpha_i \wedge dD_1 \wedge \ldots \wedge dD_k$. First, the zero locus of the ideal $I_1+I_2$ is $\mathcal Z(I_1+I_2)=\mathcal Z(I_1) \cap \mathcal Z(I_2)$, which consists of singular points of the algebraic set $\mathcal Z(I)$. Since $\dim \mathcal Z(I_i)=m-k$, the rank of the Jacobian matrix $\partial D_i/\partial x_j$ is strictly less than $k$ on $\mathcal Z(I_1+ I_2)$. In other words, $dD_1 \wedge \ldots \wedge dD_k$ vanishes on $\mathcal Z(I_1+ I_2)$. Hence $v_1-v_2$ vanishes on $\mathcal Z(I_1+ I_2)$. Then, by using Hilbert's Nullstellensatz for each component and the condition that $I_1+I_2$ is radical, $v_1-v_2$ is in $I_1+I_2$, i.e., \begin{equation} \label{eq:17} v_1-v_2=a_1+a_2, \quad a_1\in I_1,\quad a_2 \in I_2 . \end{equation} Define $v_{12}=v_1-a_1$. Then $[v_{12}]_1=[v_1]_1$ and $[v_{12}]_2=[v_2]_2$. Then by induction, we obtain a differential form $v_{1\ldots i}$ such that $[v_{1\ldots i}]_j=[v_j]_j$, $\forall 1\leq j\leq i$.
The zero locus of $J_i+I_{i+1}$ is, \begin{equation} \label{eq:19} \mathcal Z(J_i + I_{i+1})=\bigcup_{j=1}^i \big(\mathcal Z(I_j) \cap \mathcal Z(I_{i+1}) \big) , \end{equation} which again consists of singular points of the algebraic set $\mathcal Z(I)$. Since $[v_{1\ldots i}]_j=[\alpha_j \wedge dD_1 \wedge \ldots \wedge dD_k]_j$, and $dD_1 \wedge \ldots \wedge dD_k$ vanishes at these singular points, $v_{1\ldots i}$ vanishes on $\mathcal Z(I_j) \cap \mathcal Z(I_{i+1})$. Hence both $v_{1\ldots i}$ and $v_{i+1}$ vanish on $\mathcal Z(J_i + I_{i+1})$. Then, by using Hilbert's Nullstellensatz and the condition that $J_i+I_{i+1}$ is radical, we obtain a differential form $v_{1\ldots (i+1)}$ with $[v_{1\ldots (i+1)}]_j=[v_j]_j$, $\forall 1\leq j\leq i+1$. Finally, we set $\omega=v_{1\ldots n}$. \end{proof} A large class of $4D$ multi-loop diagrams satisfies the two conditions of the above theorem, so we can construct $\omega$ for IBPs without doubled propagator. The proof itself provides the algorithm for obtaining $\omega$. This algorithm is realized by our Mathematica and Macaulay2 \cite{M2} package, {\sc MathematicaM2}.\footnote{This package can be downloaded from \url{http://www.nbi.dk/~zhang/MathematicaM2.html}.} \begin{remark} \label{factorization} Note that in practice, after obtaining a differential form $\omega$ which satisfies (\ref{vanishing_id}), a further simplification may be possible. The form $\omega$ may factorize as, \begin{equation} \label{eq:61} \omega=f \omega' , \end{equation} where $f$ is a polynomial in the loop momenta and $\omega'$ is a polynomial-valued form. If $\omega$ satisfies (\ref{vanishing_id}), there is no guarantee that $\omega'$ also satisfies (\ref{vanishing_id}). However, if $\omega'$ happens to satisfy (\ref{vanishing_id}), we can instead use $\omega'$ to generate an IBP without doubled propagator. \end{remark} \section{Examples} \label{examples} In this section, we demonstrate our method with several $4D$ two-loop examples. In each case, we generate the $4D$ on-shell part of the IBP identities by our differential geometry method, via local forms and congruence equations.
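As a warm-up for the examples, the congruence step in the proof of Theorem \ref{congruence} can be traced on a toy ideal $I=\langle xy\rangle=\langle x\rangle\cap\langle y\rangle$, whose two branches are the coordinate axes. The sketch below (Python with sympy; all data are illustrative) glues prescribed local data $v_1$, $v_2$ into a single polynomial $w=v_1-a_1$, where $v_1-v_2=a_1+a_2$ with $a_i\in I_i$:

```python
import sympy as sp

x, y = sp.symbols('x y')

# toy primary decomposition: I = <x*y> is the intersection of I1 = <x> and I2 = <y>
I1, I2 = [x], [y]

v1, v2 = y, x        # prescribed local data on the two branches

# v1 - v2 vanishes at Z(I1 + I2) = {origin}; split it as a1 + a2
a1, a2 = -x, y       # a1 in I1, a2 in I2
assert sp.expand(v1 - v2 - a1 - a2) == 0

w = v1 - a1          # the glued solution; here w = x + y

# check the congruences [w]_1 = [v1]_1 and [w]_2 = [v2]_2 via remainders
_, r1 = sp.reduced(w - v1, sp.groebner(I1, x, y).exprs, x, y)
_, r2 = sp.reduced(w - v2, sp.groebner(I2, x, y).exprs, x, y)
assert r1 == 0 and r2 == 0
```

The splitting of $v_1-v_2$ into ideal pieces is done here by inspection; in the physical examples it is automated by Gr\"obner basis computations.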
To simplify the process, we combine the integrand reduction method with our differential geometry approach for the IBP computations. \subsection{Planar double box} Consider the $4D$ planar double box with $4$ massless legs, $p_1$, $p_2$, $p_3$ and $p_4$. \begin{figure} \center \includegraphics[width=2.8in]{dbox.eps}\\ \caption{Planar double box with $4$ massless legs}\label{dbox} \end{figure} The two loop momenta are $l_1$ and $l_2$. There are $7$ denominators for double box integrals, \begin{gather} \label{eq:21} D_1=l_1^2,\quad D_2=(l_1-p_1)^2,\quad D_3=(l_1-p_1-p_2)^2, \nonumber\\ D_4=(l_2-p_3-p_4)^2,\quad D_5=(l_2-p_4)^2, \quad D_6=l_2^2,\quad D_7=(l_1+l_2)^2. \end{gather} Instead of using the Minkowski components of $l_1$ and $l_2$, we use the van Neerven-Vermaseren basis, \begin{eqnarray} \label{eq:22} x_1&=&l_1 \cdot p_1,\quad x_2=l_1 \cdot p_2, \quad x_3=l_1 \cdot p_4 ,\quad x_4=l_1 \cdot \omega, \nonumber \\ y_1&=&l_2 \cdot p_1,\quad y_2=l_2 \cdot p_2, \quad y_3=l_2 \cdot p_4 ,\quad y_4=l_2 \cdot \omega , \end{eqnarray} where $\omega$ is the vector perpendicular to all external legs, with $\omega^2=t u/s$. The denominators have the parity symmetry, \begin{equation} \label{dbox_parity} x_4 \leftrightarrow -x_4, \quad y_4 \leftrightarrow -y_4. \end{equation} Define the ideal $I\equiv \langle D_1, \ldots D_7 \rangle$. The ISPs are $\{x_3,x_4,y_1,y_4\}$. Integrals with numerators linear in $x_4$ or $y_4$ are spurious, i.e., they vanish by the orthogonality of $\omega$.
The $4D$ double box cut has $6$ branches, \begin{equation} \label{eq:26} I=I_1 \cap I_2 \cap I_3 \cap I_4 \cap I_5 \cap I_6, \end{equation} where, \begin{gather} \label{eq:28} I_1= \langle x_1,-s-2 y_1-2 y_2,s-2 x_2,y_3,x_3,t-2 y_1+2 y_4,2 x_4-t\rangle, \\ I_2= \langle y_1,x_1,s+2 y_2,s-2 x_2,y_3,t+2 y_4,-t+2 x_3+2 x_4\rangle, \\ I_3= \langle x_1,-s-2 y_1-2 y_2,s-2 x_2,y_3,x_3,-t+2 y_1+2 y_4,t+2 x_4\rangle, \\ I_4= \langle y_1,x_1,s+2 y_2,s-2 x_2,y_3,2 y_4-t,t-2 x_3+2 x_4\rangle, \\ I_5= \langle x_1,s+2 y_1+2 y_2,s-2 x_2,y_3,-s t+2 s x_3+2 s y_1+4 x_3 y_1,\nonumber\\ t-2 y_1+2 y_4,t-2 x_3+2 x_4\rangle, \\ I_6= \langle x_1,s+2 y_1+2 y_2,s-2 x_2,y_3,-s t+2 s x_3+2 s y_1+4 x_3 y_1,\nonumber\\-t+2 y_1+2 y_4,-t+2 x_3+2 x_4\rangle . \end{gather} Note that under the parity symmetry (\ref{dbox_parity}), the primary ideals are permuted, \begin{equation} \label{eq:35} I_1 \leftrightarrow I_3,\quad I_2 \leftrightarrow I_4,\quad I_5 \leftrightarrow I_6 . \end{equation} We can first carry out the integrand reduction for the double-box numerators. The irreducible numerator terms have the form, \begin{equation} \label{eq:23} x_3^m y_1^n x_4^a y_4^b. \end{equation} The renormalizability condition requires that $0\leq m+a\leq 4$, $0\leq n+b \leq 4$, $0 \leq m+n+a+b \leq 6$. Furthermore, the Gr\"obner basis and polynomial division method \footnote{The package for integrand reduction can be downloaded from \url{http://www.nbi.dk/~zhang/BasisDet.html}. } \cite{Zhang:2012ce} determines that the integrand basis $\mathcal B=\mathcal B_1 \cup \mathcal B_2$ contains $32$ terms, \begin{equation} \label{eq:24} \mathcal B_1 =\{x_3^4 y_1,x_3 y_1^4,x_3^4,x_3^3 y_1,x_3 y_1^3,y_1^4,x_3^3,x_3^2 y_1,x_3 y_1^2,y_1^3,x_3^2,x_3 y_1,y_1^2,x_3,y_1,1\} \end{equation} and \begin{gather} \label{eq:25} \mathcal B_2 =\{x_4,x_3 x_4,x_3^2 x_4,x_3^3 x_4,x_4 y_1,y_4,x_3 y_4,x_3^2 y_4,x_3^3 y_4,x_3^4 y_4,y_1 y_4,x_3 y_1 y_4,y_1^2 y_4,\nonumber \\x_3 y_1^2 y_4,y_1^3 y_4,x_3 y_1^3 y_4\}.
\end{gather} Note all terms in $\mathcal B_2$ are spurious. So we focus on further reducing the $16$ terms in $\mathcal B_1$ via IBPs. We divide our algorithm in several steps, \begin{enumerate} \item Evaluate $\Omega=dD_1 \wedge \ldots \wedge dD_7$ and the local forms $[\Omega]_i$. Direct computation gives, \begin{gather} \label{eq:30} \Omega= \frac{128 s}{t^3 (s+t)^3}\bigg( (s (x_4 (y_1+y_3)-y_4 (x_1+x_3))+t (y_4 (x_2-x_1)+x_4 (y_1-y_2))) \nonumber\\(s (y_1+y_3)+t (y_1+y_2+2 y_3)) dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dy_1\wedge dy_2\wedge dy_3\nonumber\\+s y_4 (s (y_4 (x_1+x_3)-x_4 (y_1+y_3))+t (y_4 (x_1-x_2)+x_4 (y_2-y_1)))\nonumber\\ dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dy_1\wedge dy_3\wedge dy_4 \nonumber\\+s y_4 (s (y_4 (x_1+x_3)-x_4 (y_1+y_3))+t (y_4 (x_1-x_2)+x_4 (y_2-y_1))) \nonumber \\dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dy_2\wedge dy_3\wedge dy_4\nonumber\\- (s (y_4 (x_1+x_3)-x_4 (y_1+y_3))+t (y_4 (x_1+x_2+2 x_3)-x_4 (y_1+y_2+2 y_3))) \nonumber\\(s (x_1+x_3)+t (x_1-x_2)) dx_1\wedge dx_2\wedge dx_3\wedge dy_1\wedge dy_2\wedge dy_3\wedge dy_4\nonumber\\-s x_4 (s (x_4 (y_1+y_3)-y_4 (x_1+x_3))+t (x_4 (y_1+y_2+2 y_3)-y_4 (x_1+x_2+2 x_3)))\nonumber \\dx_1\wedge dx_2\wedge dx_4\wedge dy_1\wedge dy_2\wedge dy_3\wedge dy_4 \bigg). \end{gather} The canonical representative of $[\Omega]_i$ is obtained by polynomial division. For example, on the first branch, \begin{gather} \label{eq:29} [\Omega]_1=-\frac{64 s^2 y_1 (t-2 y_1)}{t^2 \ (s+t)^2} (dx_1\wedge dx_2\wedge dx_3\wedge \ dx_4\wedge dy_1\wedge dy_2\wedge dy_3\nonumber\\-dx_1\wedge dx_2\wedge \ dx_3\wedge dx_4\wedge dy_1\wedge dy_3\wedge dy_4-dx_1\wedge \ dx_2\wedge dx_3\wedge dx_4\wedge dy_2\wedge dy_3\wedge dy_4). \end{gather} \item Verify that the two conditions in Theorem \ref{congruence} hold. In this case, $k=7$ and $m=DL=8$, so $m-k=1$. On the other hand, all six branches are one-dimensional. Furthermore, define $J_i=\cap_{j=1}^i I_i$. 
Direct commutative algebra computations indicate that $J_i+I_{i+1}$ is radical, for $i=1,2,3,4,5$. \item Solve the congruence equations in the polynomial ring. Let $\eta_i$, $i=1,\ldots, 6$ be $7$-forms satisfying the following equations, \begin{equation} \label{eq:31} \left\{ \begin{array}{l l} \ [\eta_i]_j=[\Omega]_j & \quad j=i\\ \ [\eta_i]_j=0 & \quad j\not=i, \quad j=1,\ldots, 6 \end{array} \right. \end{equation} The solutions for the $\eta_i$'s can be quickly obtained by our package {\sc MathematicaM2}. For example, \begin{gather} \label{eq:32} \eta_1= -\frac{16 s (s (t (x_4+2 y_1+y_4)-2 (x_3 (2 y_1+y_4)+y_1 (x_4+2 (y_1+y_4))))-8 x_3 y_1 (y_1+y_4)) }{t^2 (s+t)^2} \nonumber \\(dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dy_1\wedge dy_2\wedge dy_3-dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dy_1\wedge dy_3\wedge dy_4\nonumber \\-dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dy_2\wedge dy_3\wedge dy_4) . \end{gather} It is easy to check that, \begin{equation} \label{eq:33} [\eta_1]_1=[\Omega]_1, \quad [\eta_1]_2=[\eta_1]_3=[\eta_1]_4=[\eta_1]_5=[\eta_1]_6=0. \end{equation} \item Find all the IBP relations generated by $f \eta_j$ according to (\ref{on_shell}), where $f\in \mathcal B$ is a term from the integrand basis. For the $4D$ double box case, the process can be sped up by using the parity symmetry. Define the following $7$-forms according to the permutation of the primary ideals, \begin{equation} \label{eq:36} v_1=\eta_1+\eta_3,\quad v_2=\eta_2+\eta_4,\quad v_3=\eta_5+\eta_6 . \end{equation} Then the $v_i$'s, $i=1,2,3$, are even under the parity symmetry. Hence, we can consider IBP relations generated by $f v_j$, where $f\in \mathcal B_1$. In this way, we avoid the redundancy from spurious terms.
For example, explicitly, \begin{gather} \label{eq:37} v_1=\frac{32s}{t^2(s+t)^2}\bigg(-(s (t (x_4+y_4)-2 (x_3 y_4+x_4 y_1+2 y_1 y_4))-8 x_3 y_1 y_4)\nonumber\\ dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dy_1\wedge dy_2\wedge dy_3-2 y_1 (s (2 (x_3+y_1)-t)+4 x_3 y_1) \nonumber\\dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dy_2\wedge dy_3\wedge dy_4-2 y_1 (s (2 (x_3+y_1)-t)+4 x_3 y_1) \nonumber\\dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dy_1\wedge dy_3\wedge dy_4 \bigg). \end{gather} Consider the form $w=y_1 v_1$. \begin{equation} \label{eq:27} dw=-\frac{32 s y_1 \left(s \left(-5 t+10 x_3+16 y_1\right)+32 x_3 y_1\right)}{t^2 (s+t)^2} \mathbf m \end{equation} Here $\mathbf m$ is the measure, $\mathbf m=dx_1 \wedge dx_2 \wedge dx_3 \wedge dx_4 \wedge dy_1 \wedge dy_2 \wedge dy_3 \wedge dy_4$. Furthermore, it is clear that $dD_i \wedge w= \sum_j f_{ij} D_j \mathbf m$. The relevant components are, \begin{gather} \label{eq:34} f_{11}=0, \quad f_{22}=0, \quad f_{33}=0,\\ f_{44}=\frac{16 s y_1 \left(s t^2-2 s t x_3-6 s t y_1-4 s x_3 y_1+8 s y_1^2-16 t x_3 y_1+16 x_3 y_1^2\right)}{t^2 (s+t)^3}\\ f_{55}=\frac{16 s y_1 }{t^3 (s+t)^3} \big(s^2 t^2-2 s^2 t x_3-6 s^2 t y_1-4 s^2 x_3 y_1-8 s^2 y_1^2-16 s t x_3 y_1\nonumber\\-16 s t y_1^2-16 s x_3 y_1^2-32 t x_3 y_1^2\big)\\ f_{66}=\frac{16 s y_1 \left(s t^2-6 s t x_3-6 s t y_1+4 s x_3 y_1+8 s y_1^2-16 t x_3 y_1+16 x_3 y_1^2\right)}{t^3 (s+t)^2}\\ f_{77}=\frac{64 s y_1 \left(s t-s x_3-3 s y_1-4 x_3 y_1\right)}{t^2 (s+t)^2} \end{gather} Using (\ref{on_shell}), we get one IBP relation, \begin{eqnarray} \label{eq:40} -4 I_{\text{dbox}}[(l_1\cdot p_4)(l_2\cdot p_1)^2] -2 s I_{\text{dbox}}[(l_1\cdot p_4)(l_2\cdot p_1)] \nonumber \\-2 s I_{\text{dbox}}[(l_2\cdot p_1)^2]+s t I_{\text{dbox}}[(l_2\cdot p_1)]+\ldots=0 \end{eqnarray} \end{enumerate} Using this algorithm, we find that $v_1$ and $v_2$ each provide $3$ IBP relations, while $v_3$ provides $6$; these relations are linearly independent.
So our method reduces the number of double box integrals from $16$ to $16-12=4$. The resulting $4$ integrals can be chosen as \begin{equation} \label{eq:38} I_{\text{dbox}}[1],\quad I_{\text{dbox}}[l_1 \cdot p_4], \quad I_{\text{dbox}}[l_2 \cdot p_1], \quad I_{\text{dbox}}[(l_1\cdot p_4)(l_2 \cdot p_1)] . \end{equation} Furthermore, the symmetry of the double box determines that, \begin{equation} \label{eq:39} I_{\text{dbox}}[l_1 \cdot p_4]= I_{\text{dbox}}[l_2 \cdot p_1]. \end{equation} So we reduce the number of independent integrals to $3$. Our $4D$ formalism misses one IBP relation, which can be obtained from the $D$-dimensional formalism, \begin{equation} \label{dbox_missing} I_{\text{dbox}}[(l_1 \cdot p_4)(l_2 \cdot p_1)]=\frac{1}{8} s t I_{\text{dbox}}[1] -\frac{3}{4} s I_{\text{dbox}}[l_1\cdot p_4] + \ldots . \end{equation} This identity arises at order $O(\epsilon)$ in a $D$-dimensional IBP relation, so it cannot be detected by the pure $4D$ IBP formalism. Including this missing IBP, all integrals for the $4D$ double box are reduced to two master integrals, \begin{equation} \label{eq:41} I_{\text{dbox}}[1],\quad I_{\text{dbox}}[l_1 \cdot p_4], \end{equation} and we verified that the result is consistent with the $4D$ limit of the output of \FIRE{}. For example, \begin{eqnarray} \label{eq:42} I_{\text{dbox}}[(l_1 \cdot p_4)^2]&=&\frac{t}{2} I_{\text{dbox}}[l_1 \cdot p_4] + \ldots, \\ I_{\text{dbox}}[(l_1 \cdot p_4)^3]&=&\frac{t^2}{4} I_{\text{dbox}}[l_1 \cdot p_4]+ \ldots, \\ I_{\text{dbox}}[(l_1 \cdot p_4)^4]&=&\frac{t^3}{8} I_{\text{dbox}}[l_1 \cdot p_4]+ \ldots , \\ I_{\text{dbox}}[(l_1 \cdot p_4)^2 (l_2 \cdot p_1)]&=&-\frac{s^2 t}{16} I_{\text{dbox}}[1]+\frac{3s^2}{8} I_{\text{dbox}}[l_1\cdot p_4] + \ldots ,\\ I_{\text{dbox}}[(l_1 \cdot p_4)^3 (l_2 \cdot p_1)]&=&\frac{s^3 t}{32} I_{\text{dbox}}[1]-\frac{3s^3}{16} I_{\text{dbox}}[l_1\cdot p_4]+ \ldots .
\end{eqnarray} \subsubsection{Comparison with the GKK method} It is interesting to see the relation between our method and the GKK method \cite{Gluza:2010ws}. The GKK method solves syzygy equations to obtain generating vectors without doubled propagators. We treat the generating vector $v$ as a dual differential form $\omega$: on each branch it is easy to find the local form of $\omega$, and finally we combine the local forms by solving congruence equations. So far, our method is limited to $4D$ and to the on-shell part, so we compare with the $4D$ on-shell part of the generating vectors for the double box from the GKK method. There are three such vectors in \cite{Gluza:2010ws} for the double box with four massless legs, namely \begin{equation} \label{eq:43} v^{(1)}_{\text{GKK}}, \quad v^{(2)}_{\text{GKK}},\quad v^{(3)}_{\text{GKK}} . \end{equation} To compare these with our result, we take the Poincar\'{e} duals of these vectors, namely $\omega^{(1)}_{\text{GKK}}$, $\omega^{(2)}_{\text{GKK}}$ and $\omega^{(3)}_{\text{GKK}}$. Then we can verify that the on-shell part is related to our result as, \begin{eqnarray} \label{eq:44} \ [\omega^{(1)}_{\text{GKK}}]&=&\frac{t^2(s+t)^2}{64 s^2} \big ([\eta_1]+[\eta_2]+[\eta_3]+[\eta_4]-[\eta_5]-[\eta_6] \big) ,\\ \ [\omega^{(2)}_{\text{GKK}}]&=&\frac{t^2(s+t)^2}{64 s} \big (-[\eta_1]+[\eta_2]-[\eta_3]+[\eta_4]-[\eta_5]-[\eta_6] \big),\\ \ [\omega^{(3)}_{\text{GKK}}]&=&\frac{t^2(s+t)^2}{64 s} \big (\frac{s+2 (l_2\cdot k_1)}{s}[\eta_1]-[\eta_2]+\frac{s+2 (l_2\cdot k_1)}{s}[\eta_3]\\ &&-[\eta_4]-\frac{s+2 (l_2\cdot k_1)}{s}[\eta_5]-\frac{s+2 (l_2\cdot k_1)}{s}[\eta_6]\big). \end{eqnarray} So, on shell, the $\omega^{(i)}_{\text{GKK}}$'s are linear combinations of the differential forms $\eta_i$. (The overall factor $t^2(s+t)^2/(64s)$ comes from the normalization and has no special significance.) The coefficients are the same for branches paired under the parity symmetry, so the spurious terms drop out of the IBP calculation.
Therefore, our method reproduces the $4D$ on-shell part of the double box result from the GKK method. \subsection{Non-planar crossed box} Our method also works for non-planar diagrams. For example, consider the $4D$ crossed box with $4$ massless legs, $p_1$, $p_2$, $p_3$ and $p_4$. The two loop momenta are $l_1$ and $l_2$. \begin{figure} \center \includegraphics[width=2.8in]{xbox.eps}\\ \caption{Non-planar crossed box with $4$ massless legs}\label{xbox} \end{figure} There are $7$ denominators for crossed box integrals, \begin{gather} D_1=(l_1+p_1)^2,\quad D_2=l_1^2,\quad D_3=(l_2+p_3)^2, \nonumber\\ D_4=l_2^2,\quad D_5=(l_2-p_4)^2, \quad D_6=(l_2-l_1+p_2+p_3)^2,\quad D_7=(l_2-l_1+p_3)^2. \end{gather} Again we use the van Neerven-Vermaseren basis, \begin{eqnarray} x_1&=&l_1 \cdot p_1,\quad x_2=l_1 \cdot p_2, \quad x_3=l_1 \cdot p_3 ,\quad x_4=l_1 \cdot \omega, \nonumber \\ y_1&=&l_2 \cdot p_1,\quad y_2=l_2 \cdot p_2, \quad y_3=l_2 \cdot p_3 ,\quad y_4=l_2 \cdot \omega , \end{eqnarray} where $\omega$ is the vector perpendicular to all external legs, with $\omega^2=t u/s$. Again, the denominators have the parity symmetry, \begin{equation} \label{xbox_parity} x_4 \leftrightarrow -x_4, \quad y_4 \leftrightarrow -y_4. \end{equation} Define the ideal $I\equiv \langle D_1, \ldots D_7 \rangle$. The ISPs are $\{x_3,x_4,y_2,y_4\}$. Integrals with numerators linear in $x_4$ or $y_4$ are spurious. This diagram has the following symmetry, \begin{gather} \label{xbox_symmetry} l_1 \to l_1-l_2+p_1+p_4, \quad l_2 \to -l_2,\\ p_1\to p_2, \quad p_2 \to p_1,\quad p_3 \to p_4,\quad p_4 \to p_3.
\end{gather} The $4D$ crossed box cut has $8$ branches, \begin{equation} I=I_1 \cap I_2 \cap I_3 \cap I_4 \cap I_5 \cap I_6 \cap I_7 \cap I_8 , \end{equation} where, \begin{gather} I_1=\langle -t+2 x_2-2 y_2,y_1+y_2,x_1,y_3,x_3+y_2,y_2+y_4,-\frac{t^2}{s}-\frac{2 t y_2}{s}-t+2 x_4\rangle,\\ I_2=\langle -t+2 x_2-2 y_2,y_1+y_2,x_1,y_3,x_3+y_2,y_4-y_2,\frac{t^2}{s}+\frac{2 t y_2}{s}+t+2 x_4\rangle,\\ I_3=\langle t+2 y_2,x_2,2 y_1-t,x_1,y_3,2 y_4-t,x_4-x_3\rangle,\\ I_4=\langle t+2 y_2,x_2,2 y_1-t,x_1,y_3,t+2 y_4,x_3+x_4\rangle,\\ I_5=\langle -t+2 x_2-2 y_2,y_1+y_2,x_1,y_3,x_3,y_2+y_4,\frac{t^2}{s}+y_2 (\frac{2 t}{s}+2)+t+2 x_4\rangle,\\ I_6=\langle -t+2 x_2-2 y_2,y_1+y_2,x_1,y_3,x_3,y_4-y_2,-\frac{t^2}{s}+y_2 (-\frac{2 t}{s}-2)-t+2 x_4\rangle,\\ I_7=\langle s +t+2 y_2,s+2 x_2,-s-t+2 y_1,x_1,y_3,-s-t+2 y_4,-s-t+2 x_3+2 x_4\rangle,\\ I_8=\langle s +t+2 y_2,s+2 x_2,-s-t+2 y_1,x_1,y_3,s+t+2 y_4,s+t-2 x_3+2 x_4\rangle . \end{gather} Under the parity symmetry (\ref{xbox_parity}), the primary ideals are permuted, \begin{equation} I_1 \leftrightarrow I_2,\quad I_3 \leftrightarrow I_4,\quad I_5 \leftrightarrow I_6, \quad I_7 \leftrightarrow I_8. \end{equation} The irreducible numerator terms have the form, \begin{equation} x_3^m y_2^n x_4^a y_4^b. \end{equation} The integrand reduction method \cite{Zhang:2012ce} determines that the integrand basis is $\mathcal B=\mathcal B_1 \cup \mathcal B_2$, where \begin{equation} \label{eq:46} \mathcal B_1=\{x_3 y_2^5,y_2^6,x_3^4 y_2,x_3 y_2^4,y_2^5,x_3^4,x_3^3 y_2,x_3 y_2^3,y_2^4,x_3^3,x_3^2 y_2,x_3 y_2^2,y_2^3,x_3^2,x_3 y_2,y_2^2,x_3,y_2,1\}, \end{equation} and \begin{gather} \label{eq:47} \mathcal B_2=\{x_4,x_3 x_4,x_3^2 x_4,x_3^3 x_4,x_4 y_2,y_4,x_3 y_4,x_3^2 y_4,x_3^3 y_4,x_3^4 y_4,y_2 y_4,x_3 y_2 y_4,y_2^2 y_4,x_3 y_2^2 y_4,y_2^3\nonumber \\ y_4,x_3 y_2^3 y_4,y_2^4 y_4,x_3 y_2^4 y_4,y_2^5 y_4\}. \end{gather} There are $19$ terms in $\mathcal B_1$.
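The permutation of the primary ideals under the parity flip can be verified mechanically: each generator of one branch maps to $\pm$ a generator of its partner. A short check in Python with sympy for the pair $I_3\leftrightarrow I_4$, using the generators listed above (the other pairs work the same way):

```python
import sympy as sp

s, t = sp.symbols('s t')
x1, x2, x3, x4, y1, y2, y3, y4 = sp.symbols('x1 x2 x3 x4 y1 y2 y3 y4')

# generators of the branches I3 and I4 of the crossed-box cut
I3 = [t + 2*y2, x2, 2*y1 - t, x1, y3, 2*y4 - t, x4 - x3]
I4 = [t + 2*y2, x2, 2*y1 - t, x1, y3, t + 2*y4, x3 + x4]

parity = {x4: -x4, y4: -y4}

# each generator of I3 maps, under the parity flip, to +/- a generator of I4
ok = all(
    any(sp.expand(g.subs(parity, simultaneous=True) - h) == 0
        or sp.expand(g.subs(parity, simultaneous=True) + h) == 0
        for h in I4)
    for g in I3
)
assert ok
```

Matching generators up to sign is sufficient here; in general one would compare Gr\"obner bases of the two substituted ideals.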
Similarly, define $\Omega=d D_1 \wedge \ldots \wedge d D_7$. By solving the congruence equations, we obtain rank-$7$ forms $\eta_i$, $i=1, \ldots 8$, such that, \begin{eqnarray} \label{eq:48} [\eta_i]_j=\delta_{ij} [\Omega]_j,\quad 1\leq i,j \leq 8. \end{eqnarray} Again, to remove the spurious terms in $\mathcal B_2$, we define, \begin{equation} v_1=\eta_1+\eta_3,\quad v_2=\eta_2+\eta_4,\quad v_3=\eta_5+\eta_6,\quad v_4=\eta_7+\eta_8 . \end{equation} We find that $v_1$ and $v_3$ each generate $4$ IBPs, while $v_2$ and $v_4$ each generate $3$ IBPs. Again these IBPs are linearly independent, so our method generates $14$ relations. Furthermore, from the symmetry (\ref{xbox_symmetry}), we have, \begin{eqnarray} \label{eq:50} 2 I_{\text{xbox}}[l_1\cdot p_3] + I_{\text{xbox}}[l_2 \cdot p_2]&=&0+ \ldots ,\\ 2 I_{\text{xbox}}[(l_1\cdot p_3)(l_2 \cdot p_2)] + I_{\text{xbox}}[(l_2 \cdot p_2)^2]&=&0+ \ldots . \end{eqnarray} These $2$ relations are independent of the $14$ IBP relations we obtained. Using these relations, we reduce the $19$ terms in $\mathcal B_1$ to $3$ terms, \begin{eqnarray} \label{eq:51} I_{\text{xbox}}[1],\quad I_{\text{xbox}}[l_1\cdot p_3], \quad I_{\text{xbox}}[(l_1\cdot p_3)(l_2\cdot p_2)] . \end{eqnarray} Again, there is one IBP relation missing in the pure $4D$ formalism. From FIRE \cite{FIRE}, we have, \begin{eqnarray} \label{eq:52} I_{\text{xbox}}[(l_1\cdot p_3)(l_2\cdot p_2)] =\frac{1}{16} (t + s) t I_{\text{xbox}}[1] - \frac{3}{8} (s + 2 t) I_{\text{xbox}}[l_1\cdot p_3] .
\end{eqnarray} Combining the $14+2+1=17$ relations, we reduce the integrand terms to two master integrals, \begin{eqnarray} I_{\text{xbox}}[1],\quad I_{\text{xbox}}[l_1\cdot p_3] . \end{eqnarray} For example, \begin{eqnarray} \label{eq:53} I_{\text{xbox}}[(l_2\cdot p_2)^2]&=&-\frac{1}{8} t(s+t) I_{\text{xbox}}[1]+\frac{3}{4} (s+2t) I_{\text{xbox}}[l_1\cdot p_3] +\ldots, \\ I_{\text{xbox}}[(l_1\cdot p_3)(l_2\cdot p_2)^2]&=& \frac{-t(s^2+3 s t+2t^2)}{32} I_{\text{xbox}}[1]\nonumber \\ & & +\frac{ (3s^2+8 s t+8 t^2)}{16} I_{\text{xbox}}[l_1\cdot p_3] +\ldots,\\ I_{\text{xbox}}[(l_2\cdot p_2)^3]&=&\frac{t(s^2+3 s t+2 t^2)}{16} I_{\text{xbox}}[1] \nonumber \\ & &-\frac{(3 s^2+8 s t+8 t^2)}{8} I_{\text{xbox}}[l_1\cdot p_3] + \ldots . \end{eqnarray} \subsection{Slashed box} Our method also works for diagrams with fewer than $DL-1$ internal lines. In these cases, the coefficients $\alpha_i$ in (\ref{ansatz}) are not scalar functions but differential forms. For example, consider the $4D$ slashed box with $4$ massless legs, $p_1$, $p_2$, $p_3$ and $p_4$. There are $5$ denominators for slashed box integrals, \begin{gather} D_1=l_1^2,\quad D_2=(l_1-p_2)^2,\quad D_3=l_2^2, \quad D_4=(l_2-p_4)^2,\quad D_5=(l_1+l_2+p_1)^2, \end{gather} \begin{figure} \center \includegraphics[width=2.2in]{slashed.eps}\\ \caption{Planar slashed box with $4$ massless legs}\label{slashedbox} \end{figure} We use the van Neerven-Vermaseren basis, \begin{eqnarray} x_1&=&l_1 \cdot p_1,\quad x_2=l_1 \cdot p_2, \quad x_3=l_1 \cdot p_4 ,\quad x_4=l_1 \cdot \omega, \nonumber \\ y_1&=&l_2 \cdot p_1,\quad y_2=l_2 \cdot p_2, \quad y_3=l_2 \cdot p_4 ,\quad y_4=l_2 \cdot \omega , \end{eqnarray} where $\omega$ is the vector perpendicular to all external legs, with $\omega^2=t u/s$. The denominators have the parity symmetry, \begin{equation} \label{slashedbox_parity} x_4 \leftrightarrow -x_4, \quad y_4 \leftrightarrow -y_4.
\end{equation} Define the ideal $I\equiv \langle D_1, \ldots D_5 \rangle$. The ISPs are $\{x_1,x_3,x_4,y_1,y_2, y_4\}$. Integrals with numerators linear in $x_4$ or $y_4$ are spurious. The integrand basis for the slashed box is $\mathcal B=\mathcal B_1 \cup \mathcal B_2$ \cite{Zhang:2012ce}, \begin{gather} \label{eq:60} \mathcal B_1=\{x_3^3 y_2,x_3^3 y_1,x_3^2 y_2^2,x_1 x_3^2 y_2,x_1 x_3^2 y_1,x_3^2 y_1^2,x_3 y_2^3,x_1 x_3 y_2^2,x_3 y_1 y_2^2,x_1^2 x_3 y_2,x_1 x_3 y_1 y_2,x_3 y_1^2 y_2,\nonumber \\x_1^2 x_3 y_1,x_1 x_3 y_1^2,x_3 y_1^3,x_1 y_2^3,x_1^2 y_2^2,x_1 y_1 y_2^2,x_1^3 y_2,x_1^2 y_1 y_2,x_1 y_1^2 y_2,x_1^3 y_1,x_1^2 y_1^2,x_1 y_1^3,x_3^3,x_3^2 y_2,x_1 x_3^2 \nonumber \\,x_3^2 y_1,x_3 y_2^2,x_1 x_3 y_2,x_3 y_1 y_2,x_1^2 x_3,x_1 x_3 y_1,x_3 y_1^2,y_2^3,x_1 y_2^2,y_1 y_2^2,x_1^2 y_2,x_1 y_1 y_2,y_1^2 y_2,x_1^3,x_1^2 y_1,x_1 y_1^2,\nonumber \\ y_1^3,x_3^2,x_3 y_2,x_1 x_3,x_3 y_1,y_2^2,x_1 y_2,y_1 y_2,x_1^2,x_1 y_1,y_1^2,x_3,y_2,x_1,y_1,1\}, \end{gather} and \begin{gather} \label{eq:63} \mathcal B_2=\{x_4,x_1 x_4,x_1^2 x_4,x_3 x_4,x_1 x_3 x_4,x_3^2 x_4,x_4 y_1,x_1 x_4 y_1,x_1^2 x_4 y_1,x_3 x_4 y_1,x_1 x_3 x_4 y_1,x_3^2 x_4 y_1, \nonumber \\x_4 y_1^2,x_1 x_4 y_1^2,x_4 y_1^3,x_4 y_2,x_1 x_4 y_2, x_1^2 x_4 y_2,x_4 y_1 y_2,x_1 x_4 y_1 y_2,x_4 y_1^2 y_2,y_4,x_1 y_4,x_1^2 y_4,x_1^3 y_4,x_3 y_4 \nonumber \\,x_1 x_3 y_4,x_1^2 x_3 y_4,x_3^2 y_4,x_1 x_3^2 y_4,x_3^3 y_4,y_1 y_4,x_1 y_1 y_4,x_1^2 y_1 y_4,x_3 y_1 y_4,x_1 x_3 y_1 y_4,x_3^2 y_1 y_4,y_1^2 y_4,x_1 y_1^2 y_4,\nonumber \\x_3 y_1^2 y_4,y_2 y_4,x_1 y_2 y_4,x_1^2 y_2 y_4,x_3 y_2 y_4,x_1 x_3 y_2 y_4,x_3^2 y_2 y_4,y_1 y_2 y_4,x_1 y_1 y_2 y_4,\nonumber \\x_3 y_1 y_2 y_4,y_2^2 y_4,x_1 y_2^2 y_4,x_3 y_2^2 y_4\}. \end{gather} There are $59$ terms in $\mathcal B_1$ and $52$ terms in $\mathcal B_2$. All terms in $\mathcal B_2$ are spurious.
This diagram has the following symmetry, \begin{gather} \label{slashed_box_symmetry} l_1 \to -l_2+p_4, \quad l_2 \to -l_1+p_2,\\ p_1\to p_3, \quad p_2 \to p_4,\quad p_3 \to p_1,\quad p_4 \to p_2. \end{gather} The $4D$ slashed box cut has $4$ branches, \begin{equation} \label{eq:26} I=I_1 \cap I_2 \cap I_3 \cap I_4 , \end{equation} where, \begin{eqnarray} \label{eq:45} I_1&=& \{x_2,y_3,x_1 (-s-t)+y_1 (-s-t)+2 x_3 y_2,y_1 (-\frac{t}{s}-1)-\frac{t y_2}{s}+y_4,\nonumber \\ &&x_1 (-\frac{t}{s}-1)-x_3+x_4\},\\ I_2&=& \{x_2,y_3,x_1 (-s-t)+y_1 (-s-t)+2 x_3 y_2,y_1 (\frac{t}{s}+1)+\frac{t y_2}{s}+y_4,\nonumber \\ &&x_1 (\frac{t}{s}+1)+x_3+x_4\},\\ I_3&=&\{x_2,y_3,x_1 y_1 (\frac{2 t}{s}+2)+\frac{2 t x_1 y_2}{s}+t x_1+t y_1+2 x_3 y_1,y_1 (\frac{t}{s}+1)+\frac{t y_2}{s}+y_4, \nonumber \\ & & x_1 (-\frac{t}{s}-1)-x_3+x_4\},\\ I_4&=&\{x_2,y_3,x_1 y_1 (\frac{2 t}{s}+2)+\frac{2 t x_1 y_2}{s}+t x_1+t y_1+2 x_3 y_1,y_1 (-\frac{t}{s}-1)-\frac{t y_2}{s}+y_4, \nonumber \\ &&x_1 (\frac{t}{s}+1)+x_3+x_4\} \end{eqnarray} Under the parity symmetry, the ideals are permuted as, \begin{equation} \label{eq:49} I_1 \leftrightarrow I_2, \quad I_3 \leftrightarrow I_4. \end{equation} We have $5$ denominators, so the $\alpha_i$'s in (\ref{ansatz}) are rank-2 differential forms.
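The constructions below rest on the antisymmetry of the wedge product: any component of $\alpha \wedge dD_1 \wedge \ldots \wedge dD_5$ with a repeated one-form factor vanishes, which is why rank-$2$ forms containing $dx_2$ or $dy_3$ can be dropped. A minimal sketch with {\tt sympy.diffgeom} on a toy two-dimensional chart (unrelated to the actual eight-dimensional loop-momentum space):

```python
from sympy import simplify
from sympy.diffgeom import Differential, WedgeProduct
from sympy.diffgeom.rn import R2

# Antisymmetry: swapping the arguments of dx ^ dy flips the sign.
w = WedgeProduct(R2.dx, R2.dy)
sign_plus = w(R2.e_x, R2.e_y)    # +1
sign_minus = w(R2.e_y, R2.e_x)   # -1

# A repeated factor annihilates the form: dD ^ dD = 0 for any scalar D.
dD = Differential(R2.x * R2.y)   # toy stand-in for a propagator differential dD_i
repeated = simplify(WedgeProduct(dD, dD)(R2.e_x, R2.e_y))
```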
We use a basis of all possible rank-2 differential forms, \begin{gather} \label{eq:55} \alpha^{(1)} = dx_1 \wedge dx_3,\quad \alpha^{(2)}=dx_1 \wedge d y_1,\quad \alpha^{(3)}=dx_1 \wedge dy_2, \quad \alpha^{(4)}=d x_3 \wedge d y_1, \nonumber \\ \alpha^{(5)} =dx_3 \wedge dy_2,\quad \alpha^{(6)} =dy_1 \wedge dy_2, \quad \alpha^{(7)}=dx_4 \wedge dy_4, \quad \alpha^{(8)}=dx_1 \wedge dx_4 \nonumber \\ \alpha^{(9)}=dx_3 \wedge dx_4, \quad \alpha^{(10)}=dy_1 \wedge dx_4, \quad \alpha^{(11)}=dy_2 \wedge dx_4, \quad \alpha^{(12)}=dx_1 \wedge dy_4\nonumber \\ \alpha^{(13)}=dx_3 \wedge d y_4, \quad \alpha^{(14)}=dy_1 \wedge dy_4, \quad \alpha^{(15)}=dy_2 \wedge dy_4 \end{gather} Note that all components of $dD_1 \wedge \ldots \wedge dD_5$ contain $d x_2 \wedge d y_3$, so we do not list rank-$2$ forms containing $d x_2$ or $d y_3$. Now we define, \begin{gather} \label{eq:57} \Omega^{(i)}=\alpha^{(i)} \wedge dD_1 \wedge \ldots \wedge dD_5, \quad 1 \leq i \leq 15 \end{gather} Then we solve congruence equations to get $60$ $7$-forms, $\omega^{(i)}_j$, $1 \leq i \leq 15$, $1 \leq j \leq 4$, such that, \begin{eqnarray} \label{eq:64} [\omega^{(i)}_j]_k= \delta_{jk} [\Omega^{(i)}]_k. \end{eqnarray} We can use the $\omega^{(i)}_j$'s to generate on-shell IBPs without doubled propagators. Again, to remove spurious terms, we define \begin{eqnarray} \label{eq:65} v_{2 i-1} &=& \omega^{(i)}_1 + \omega^{(i)}_2 \nonumber \\ v_{2 i} &=& \omega^{(i)}_3+ \omega^{(i)}_4, \quad 1 \leq i \leq 15 \end{eqnarray} Then all $v_i$'s are parity-even and we can use $f v_i$, $f \in \mathcal B_1$, to generate IBP relations. However, the new feature of this diagram is that we can use Remark~\ref{factorization} to simplify the differential forms and get more IBPs.
For example, \begin{equation} \label{eq:67} v_{13}=-\frac{16 \left(s \left(t \left(x_1+y_1\right)+2 \left(x_1+x_3\right) y_1\right)+2 t x_1 \left(y_1+y_2\right)\right)}{s^2 t^2 (s+t)}\tilde v_{13}, \end{equation} where, \begin{gather} \label{eq:66} \tilde v_{13}=(s+t) (s+2 y_2) dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dy_1\wedge dy_3\wedge dy_4\nonumber\\+(s+t) (t+2 x_3) dx_1\wedge dx_2\wedge dx_4\wedge dy_1\wedge dy_2\wedge dy_3\wedge dy_4\nonumber\\+t (s+2 y_2) dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dy_2\wedge dy_3\wedge dy_4\nonumber\\-s (t+2 x_3) dx_2\wedge dx_3\wedge dx_4\wedge dy_1\wedge dy_2\wedge dy_3\wedge dy_4 . \end{gather} We can check that \begin{equation} \label{eq:68} [dD_i \wedge \tilde v_{13}]=0,\quad 1\leq i \leq 5 \end{equation} So instead, we can use $\tilde v_{13}$ to generate IBPs. In this manner, we get more IBPs. Similarly, $v_{14}$ factorizes and we can define a new rank-7 form $\tilde v_{14}$ for IBP generation. The other $v_i$'s do not have non-trivial factorizations. Using all the $v_i$'s (and $\tilde v_i$'s), we get $51$ IBPs. Furthermore, the $\Omega^{(i)}$'s themselves also have the factorization property. For example, \begin{equation} \label{eq:69} \Omega^{(1)}=-\frac{32 x_4}{t^3 (s+t)^3} \tilde \Omega^{(1)}, \end{equation} where, \begin{gather} \label{eq:70} \tilde \Omega^{(1)}=-s (s+t) dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dy_1\wedge dy_3\wedge dy_4 \nonumber \\ (s (y_4 (t+x_1+x_3)-x_4 (y_1+y_3))+t (y_4 (x_1+x_2)-x_4 (y_1+y_2))) \nonumber \\+s t dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dy_2\wedge dy_3\wedge dy_4 \nonumber \\(s (y_4 (x_3-x_1)+x_4 (y_1-y_3))+t (x_4 (y_1+y_2)-y_4 (x_1+x_2))) \nonumber \\-t (s+t) (s (t (y_1-y_3)-2 x_1 y_3+2 x_3 y_1)+t (t (y_1+y_2)+2 (x_3 (y_1+y_2)-y_3 (x_1+x_2)))) \nonumber \\ dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dy_1\wedge dy_2\wedge dy_3 .
\end{gather} We can verify that \begin{equation} \label{eq:68b} [dD_i \wedge \tilde \Omega^{(1)}]=0,\quad 1\leq i \leq 5 \end{equation} So we can use $\tilde \Omega^{(1)}$ to generate IBPs. Similarly, $\Omega^{(6)}$, $\Omega^{(8)}$, $\Omega^{(9)}$, $\Omega^{(14)}$ and $\Omega^{(15)}$ also factorize. Using the $\tilde \Omega$ forms, we get $4$ more independent IBPs. Note that although $\Omega^{(1)}$ itself has the form $\alpha\wedge dD_1 \wedge \ldots \wedge dD_5$, where $\alpha$ is a polynomial-valued differential form, $\tilde \Omega^{(1)}$ cannot be expressed as a product of a polynomial-valued form and $dD_1 \wedge \ldots \wedge dD_5$. So $\tilde \Omega^{(1)}$ does not satisfy the conditions of Theorem~\ref{congruence}, and there is no way to solve the congruence equation, \begin{gather} \label{eq:71} [\tilde \Omega^{(1)}_j]_k = \delta_{jk} [\tilde \Omega^{(1)}]_k,\quad 1\leq k \leq 4 \end{gather} to get more differential forms. In summary, from differential forms, we get $51+4=55$ IBP relations. Furthermore, using the symmetry condition (\ref{slashed_box_symmetry}), we have, \begin{equation} \label{eq:72} I_{\text{slashed}}[l_2 \cdot p_1]=-I_{\text{slashed}}[l_1 \cdot p_3]+\frac{t}{2} I_{\text{slashed}}[1]. \end{equation} So we have $59-55-1=3$ integrals left, \begin{equation} \label{eq:73} I_{\text{slashed}}[1],\quad I_{\text{slashed}}[l_1\cdot p_1] ,\quad I_{\text{slashed}}[(l_1\cdot p_1)^2] \end{equation} From FIRE \cite{FIRE}, there are two missing IBPs, \begin{eqnarray} \label{eq:74} I_{\text{slashed}}[l_1\cdot p_1] = -\frac{s t}{2 u} I_{\text{slashed}}[1], \\ I_{\text{slashed}}[(l_1\cdot p_1)^2] = \frac{s^2 t^2}{4 u^2} I_{\text{slashed}}[1] . \end{eqnarray} So the $59$ integrand terms reduce to $1$ master integral, $I_{\text{slashed}}[1]$.
For example, \begin{eqnarray} \label{eq:75} I_{\text{slashed}}[l_1\cdot p_4] &= &-\frac{t}{2} I_{\text{slashed}}[1] ,\\ I_{\text{slashed}}[(l_1\cdot p_1) (l_1 \cdot p_4)]&=&\frac{s t^2}{4 u} I_{\text{slashed}}[1] ,\\ I_{\text{slashed}}[(l_1\cdot p_1) (l_2 \cdot p_1)]&=&\frac{s^2 t^2}{2 u^2} I_{\text{slashed}}[1] ,\\ I_{\text{slashed}}[(l_2\cdot p_1) (l_2 \cdot p_2)]&=&\frac{s^2 t}{4 u} I_{\text{slashed}}[1] . \end{eqnarray} \subsection{Turtle box} Now consider the $4D$ two-loop turtle box with $5$ massless legs, $p_1$, $p_2$, $p_3$, $p_4$ and $p_5$. This system is considerably more difficult than the $4$-point two-loop cases, since the kinematics is more complicated. \begin{figure} \center \includegraphics[width=2.8in]{dbox5.eps}\\ \caption{Planar double box with $5$ massless legs}\label{dbox} \end{figure} The two loop momenta are $l_1$ and $l_2$. There are $7$ denominators for turtle box integrals, \begin{gather} \label{eq:21} D_1=l_1^2,\quad D_2=(l_1-p_1)^2,\quad D_3=(l_1-p_1-p_2)^2, \nonumber\\ D_4=(l_2-p_5)^2,\quad D_5=(l_2-p_4-p_5)^2, \quad D_6=l_2^2,\quad D_7=(l_1+l_2)^2. \end{gather} In this case, we find that it is easier to calculate the differential forms and IBP identities in the spinor-helicity formalism, and then convert the result to the van Neerven-Vermaseren basis in the final step. Define, \begin{eqnarray} \label{eq:54} l_1^\mu&=&\alpha_1 p_1^\mu+\alpha_2 p_2^\mu+ \frac{s_{12} \alpha_3}{\langle 14 \rangle [42]} \frac{[1|\gamma^\mu|2\rangle}{2} + \frac{s_{12} \alpha_4}{\langle 24 \rangle [41]} \frac{[2|\gamma^\mu|1\rangle}{2}, \\ l_2^\mu&=&\beta_1 p_4^\mu+\beta_2 p_5^\mu+ \frac{s_{12} \beta_3}{\langle 41 \rangle [15]} \frac{[4|\gamma^\mu|5\rangle}{2} + \frac{s_{12} \beta_4}{\langle 51 \rangle [14]} \frac{[5|\gamma^\mu|4\rangle}{2}. \end{eqnarray} Furthermore, to simplify the computation, we use {\it momentum-twistor} variables \cite{Hodges:2009hk, Mason:2009qx} for $s_{ij}$, $\langle i,j\rangle $ and $[i,j]$.
The advantage is that all constraints like momentum conservation and Schouten identities are resolved in {\it momentum-twistor} variables. The ISPs are \begin{eqnarray} \label{eq:22} a=l_1 \cdot p_4,\quad b=l_1 \cdot p_5, \quad c=l_2 \cdot p_1 ,\quad d=l_2 \cdot p_2. \end{eqnarray} The integrand basis contains $32$ terms, \begin{gather} \label{eq:56} \mathcal B=\{b^4 c,b^4 d,b c d^3,b d^4,a b^3,b^4,b^3 c,b^3 d,b c d^2,b d^3,c d^3,d^4,a b^2,b^3,b^2 c,b^2 d,b c d,b d^2,\nonumber \\ c d^2,d^3,a b,a d,b^2,b c,b d,c d,d^2,a,b,c,d,1\}. \end{gather} Note that for $5$-point kinematics, there exists no vector $\omega$ perpendicular to all external legs. So it is not obvious how to find spurious terms directly from the integrand basis. However, we have the following identities, \begin{eqnarray} \label{dbox5_parity} \int \frac{d^4 l_1}{(2\pi)^2}\frac{d^4 l_2}{(2\pi)^2}\frac{\epsilon(l_1,l_2,p_1,p_2)g(l_2)}{D_1 \ldots D_7} =0, \\ \int \frac{d^4 l_1}{(2\pi)^2}\frac{d^4 l_2}{(2\pi)^2}\frac{\epsilon(l_2,l_1,p_4,p_5)f(l_1)}{D_1 \ldots D_7} =0, \end{eqnarray} because of the parity properties of the sub-diagrams. Here $f(l_1)$ and $g(l_2)$ are arbitrary Lorentz-invariant functions of $l_1$ and $l_2$, respectively. There are $6$ branches for cut solutions, \begin{equation} \label{eq:26} I=I_1 \cap I_2 \cap I_3 \cap I_4 \cap I_5 \cap I_6. \end{equation} Similarly, define $\Omega=d D_1 \wedge \ldots \wedge d D_7$. By solving congruence equations, we obtain rank-$7$ forms $\eta_i$, $i=1, \ldots, 6$ such that, \begin{eqnarray} \label{eq:48} [\eta_i]_j=\delta_{ij} [\Omega]_j,\quad 1\leq i,j \leq 6. \end{eqnarray} We find that each of the first $4$ differential forms $\eta_1, \ldots, \eta_4$ generates $3$ IBPs, while the differential forms $\eta_5$ and $\eta_6$ each generate $4$ IBPs. These relations are linearly independent, so there are $24$ IBPs in total. Furthermore, the identities (\ref{dbox5_parity}) provide two more independent identities.
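Counting independent IBP relations, as in the $24+2$ tally above, is a linear-algebra step: each relation is a row of coefficients over the integrand basis, and the number of independent relations is the rank of the resulting matrix. A small self-contained sketch over the rationals (the rows are hypothetical, not actual turtle-box relations):

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce over the rationals and count pivots."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Hypothetical IBP relations as coefficient rows over an integrand basis
# {1, b, d, b*d}; the third row is the sum of the first two, hence dependent.
relations = [
    [1, 2, 0, 0],
    [0, 1, -1, 0],
    [1, 3, -1, 0],
    [0, 0, 0, 5],
]
independent = rank(relations)  # 3 independent relations
```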
So we have $32-26=6$ integrals left, \begin{gather} \label{eq:58} I_{\text{turtle}}[1],\quad I_{\text{turtle}}[l_1 \cdot p_4],\quad I_{\text{turtle}}[l_1 \cdot p_5],\quad I_{\text{turtle}}[l_2 \cdot p_1],\nonumber \\ \quad I_{\text{turtle}}[l_2 \cdot p_2], \quad I_{\text{turtle}}[(l_1\cdot p_4)( l_2 \cdot p_2)] \end{gather} There is a subtlety for the master integrals of the turtle diagram. For the $D$-dimensional cases, there are $3$ master integrals, $I_{\text{turtle}}[1]$, $I_{\text{turtle}}[l_1 \cdot p_4]$ and $I_{\text{turtle}}[l_1 \cdot p_5]$. However, for $D=4$, there are only $2$ master integrals, $I_{\text{turtle}}[1]$ and $I_{\text{turtle}}[l_1 \cdot p_4]$, because of an integrand reduction relation in $4D$. Since we start with the $4D$ minimal integrand, this additional relation is already incorporated. Then using $4$ additional IBPs from \FIRE{} \cite{FIRE}, \begin{eqnarray} \label{eq:59} I_{\text{turtle}}[l_2 \cdot p_1]&=& I_{\text{turtle}}[l_1 \cdot p_5],\nonumber \\ I_{\text{turtle}}[l_2 \cdot p_2]&=& \frac{s_{25}}{s_{14}} I_{\text{turtle}}[l_1 \cdot p_4],\nonumber \\ I_{\text{turtle}}[(l_1\cdot p_4)(l_2 \cdot p_2)]&=&\frac{s_{12}s_{45}}{8}I_{\text{turtle}}[1] +\frac{s_{25}}{4}I_{\text{turtle}}[l_1 \cdot p_4]-\frac{s_{24}}{4} I_{\text{turtle}}[l_1 \cdot p_5], \nonumber \\ I_{\text{turtle}}[(l_1\cdot p_5)(l_2 \cdot p_2)]&=& \frac{s_{15} s_{25}}{4 s_{14}} I_{\text{turtle}}[l_1 \cdot p_4] - \frac{s_{25}}{4} I_{\text{turtle}}[l_1 \cdot p_5]. \end{eqnarray} Including these missing IBP relations, we reduce all integrand terms to the master integrals $I_{\text{turtle}}[1]$ and $I_{\text{turtle}}[l_1 \cdot p_4]$.
For example, \begin{gather} \label{eq:62} I_{\text{turtle}}[l_1\cdot p_5]=-\frac{4 s_{15} \left(s_{12}+s_{15}-s_{34}\right)}{F} I_{\text{turtle}}[(l_1\cdot p_4)] \nonumber \\ - \frac{ s_{15} \left(s_{23} s_{34}+\left(s_{15}-s_{34}\right) s_{45}+s_{12} \left(s_{15}-s_{23}+2 s_{45}\right)\right) }{F} I_{\text{turtle}}[1] +\ldots, \\ I_{\text{turtle}}[(l_1\cdot p_4)(l_2 \cdot p_1)]=-\frac{1}{2F} s_{15} \big(s_{23} s_{34}+(s_{15}-s_{34}) s_{45}+s_{12} \left(s_{15}-s_{23}+2 s_{45}\right)\big) I_{\text{turtle}}[(l_1\cdot p_4)] \nonumber \\ -\frac{1}{4F} s_{15} \left(s_{15}-s_{23}+s_{45}\right)\big(s_{23} s_{34}+\left(s_{15}-s_{34}\right) s_{45}+s_{12} \left(s_{15}-s_{23}+2 s_{45}\right)\big) I_{\text{turtle}}[1] \nonumber \\+\ldots, \\ I_{\text{turtle}}[(l_1\cdot p_4)^2(l_1 \cdot p_5)] = -s_{15} \left(s_{12}+s_{15}-s_{34}\right) \left(s_{15}-s_{23}+s_{45}\right){}^2 I_{\text{turtle}}[l_1\cdot p_4] \nonumber \\ -\frac{1}{4} s_{15} \left(s_{15}-s_{23}+s_{45}\right){}^2 \big(s_{23} s_{34}+\left(s_{15}-s_{34}\right) s_{45}+s_{12} \left(s_{15}-s_{23}+2 s_{45}\right)\big) I_{\text{turtle}}[1] +\ldots ,\\ I_{\text{turtle}}[(l_1\cdot p_4)(l_2 \cdot p_1)(l_2 \cdot p_2)] =0 + \ldots , \end{gather} where the polynomial $F$ is, \begin{equation} \label{eq:82} F= 2 \left(2 s_{15}^2+\left(-2 s_{23}-2 s_{34}+s_{45}\right) s_{15}+s_{12} \left(s_{15}-s_{23}\right)+s_{34} \left(s_{23}-s_{45}\right)\right) \end{equation} The complete result for $4D$ on-shell turtle box IBPs can be downloaded at \url{http://www.nbi.dk/~zhang/IBP/dbox5_IBP_result.nb}. It is interesting to compare our result to the result from GKK method \cite{GKK_turtle}. GKK method determines that in $D=4-2\epsilon$ dimension, there are $15$ IBP generating vectors $v^{(i)}_\text{GKK}$, $i=1,\ldots 15$, without doubled propagator. However, in the $4D$ on-shell limit, we explicitly verified that on each of the $6$ branches, for all $15$ vectors the dual form $\omega^{(i)}_\text{GKK}$ is proportional to $\Omega$. 
Hence, in the $4D$ on-shell limit, the $15$ vectors are generated by our six local forms $\eta_j$, $j=1,\ldots,6$. \section{Conclusion} \label{conclusion} In this paper, we have introduced a new method to generate integration-by-parts identities from the viewpoint of differential geometry. The generating vectors for IBP identities are reformulated as differential forms via Poincar\'{e} duality. Then, by techniques of differential geometry, the geometric meaning of generating vectors for IBPs without doubled propagators becomes clear: {\it they are dual to the normal direction of the unitarity-cut solution.} By using the wedge product and congruence equations over cut branches, suitable differential forms generating IBPs without doubled propagators are obtained. Our algorithm is realized in our computational algebraic geometry package, {\sc MathematicaM2}. We tested our algorithm on several $4D$ two-loop examples. The algorithm is very efficient in generating the analytic on-shell part of IBP identities; for example, our program obtains the analytic on-shell IBPs of the 5-point turtle diagram in about one hour on a laptop. Following these results, there are several interesting future directions. \begin{itemize} \item The extension of our formalism to $D=(4-2\epsilon)$ dimensions. Clearly, differential forms are not directly defined in non-integer dimensions, but we expect that this difficulty can be circumvented by considering our formalism in various integer dimensions and then combining the results by analytic continuation. In general, the $D$-dimensional unitarity-cut solution has a simpler structure than its $4D$ counterpart, so we expect that the discussion of the local properties of differential forms simplifies in the $D$-dimensional case. \item The beyond-on-shell part of IBPs. For the purpose of finding the contour weights in maximal unitarity \cite{Kosower:2011ty}, our algorithm is sufficient, since it targets the on-shell part.
It would be interesting to go further by releasing the cut constraints recursively. \item Combination of our differential-form method with classic IBP-generating algorithms such as Laporta's. Our method focuses on the IBP relations without doubled propagators, while other algorithms recover all the IBP relations. Even before applying the sophisticated congruence method, it is straightforward to calculate the differential form $\Omega = dD_1 \wedge \ldots \wedge dD_k$ analytically, and this form by itself generates many IBPs without doubled propagators. We expect that the ingredients of our method can be incorporated into current IBP-generating programs to speed up the computation. \end{itemize} \section*{Acknowledgement} We thank Simon Badger, Emil J. Bjerrum-Bohr, Spencer Bloch, Simon Caron-Huot, Poul Damgaard, Hjalte Frellesvig, Rijun Huang, David Kosower, Kasper Larsen and Mads S\o gaard for useful discussions on this project. We express special gratitude to Simon Caron-Huot for his participation in the early stage of this project and his careful reading of the draft. We also thank David Kosower and IPhT, Saclay for their hospitality during YZ's visit. YZ is supported by Danish Council for Independent Research-Natural Science (FNU) grant 11-107241.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \noindent In this paper we consider the classical SIR population model perturbed by random noise. The model consists of $S_t+ I_t + R_t$ individuals subject to a disease, where $S_t$, $I_t$ and $R_t$ respectively denote the numbers of susceptible, infected, and recovered individuals at time $t\in \real_+$, which are modeled as \begin{subequations} \begin{empheq}[left=\empheqlbrace]{align} \label{lsir1} dS_t&= ( \Lambda -\mu S_t -\beta S_tI_t) dt+S_{t^-} dZ_1 (t), \\[3pt] \label{lsir2} dI_t&= ( \beta S_tI_t-(\mu+\eta+\varepsilon )I_t ) dt+I_{t^-}dZ_2 (t), \\[3pt] \label{lsir3} dR_t&= ( \eta I_t-\mu R_t ) dt+R_{t^-}dZ_3 (t), \end{empheq} \end{subequations} where $Z(t) = (Z_1(t),Z_2(t),Z_3(t))$ is a $3$-dimensional stochastic process modeling the intensity of random perturbations of the system. Here, $\Lambda >0$ denotes the population influx into the susceptible component, $\beta >0$ is the transmission rate from the susceptible group $S_t$ to the infected group $I_t$, $\mu >0$ represents the natural mortality rate of the three compartments $S_t$, $I_t$ and $R_t$, $\varepsilon >0$ denotes the death rate of infected individuals induced by the disease, and $\eta>0$ is the recovery rate of the epidemic.
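It may help to see the system \eqref{lsir1}-\eqref{lsir3} in discretized form. The sketch below is a plain Euler scheme in Python in which the L\'evy noise $Z(t)$ is approximated by independent Brownian components plus a finite-activity compound-Poisson part with constant relative jump sizes $\gamma_i > -1$; all parameter values are illustrative only, and this is not the tempered stable simulation discussed later in the paper.

```python
import math
import random

def simulate_sir(T=10.0, n=20000, Lam=0.8, mu=0.1, beta=0.5, eta=0.2,
                 eps=0.1, sigma=(0.1, 0.1, 0.1), jump_rate=1.0,
                 gamma=(-0.05, -0.05, -0.05), x0=(5.0, 1.0, 0.0), seed=0):
    """Euler scheme for the jump-diffusion SIR system (illustrative sketch)."""
    rng = random.Random(seed)
    dt = T / n
    S, I, R = x0
    path = [(S, I, R)]
    for _ in range(n):
        dB = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(3)]
        # at most one jump per step: a Bernoulli(jump_rate*dt) approximation
        jumps = [1 if rng.random() < jump_rate * dt else 0 for _ in range(3)]
        dS = (Lam - mu*S - beta*S*I)*dt + sigma[0]*S*dB[0] + gamma[0]*S*jumps[0]
        dI = (beta*S*I - (mu + eta + eps)*I)*dt + sigma[1]*I*dB[1] + gamma[1]*I*jumps[1]
        dR = (eta*I - mu*R)*dt + sigma[2]*R*dB[2] + gamma[2]*R*jumps[2]
        S, I, R = S + dS, I + dI, R + dR
        path.append((S, I, R))
    return path
```

Note that, unlike the exact solution, the Euler scheme does not guarantee positivity of the discretized path; the step size must be kept small in practice.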
\\ The deterministic version of \eqref{lsir1}-\eqref{lsir3} with $Z(t) = 0$ has been the object of extensive studies, starting with \cite{kermack} and \cite{may}, where the equilibrium of \eqref{lsir1}-\eqref{lsir3} in the deterministic case has been characterized by the basic reproduction number $$ \mathcal{R}_0=\frac{\beta\Lambda}{\mu(\mu+\varepsilon +\eta)} $$ such that when $\mathcal{R}_0<1$, the system admits a Globally Asymptotically Stable (GAS) boundary equilibrium $E_0=({\Lambda}/ {\mu},0,0)$ called the disease-free equilibrium, whereas when $\mathcal{R}_0>1$ there exists a GAS positive equilibrium $$ E^\ast=(S^\ast,I^\ast,R^\ast)=\left( \frac{\mu+\varepsilon +\eta}{\beta},\frac{\mu}{\beta}(\mathcal{R}_0-1),\frac{\eta}{\beta}(\mathcal{R}_0-1) \right), $$ which is called the endemic equilibrium. In order to model random variations in population numbers, Brownian noise has been added to the deterministic system in e.g. \cite{beddington}, \cite{tornatore}, \cite{mao2011}, to better describe the continuous growth of populations in real ecological systems. \\ L\'evy jump noise has been first incorporated into the stochastic Lotka-Volterra population model in \cite{Bao1}, \cite{Bao2}, where uniform $p$th moment bounds and asymptotic pathwise estimations have been derived. Driving processes of the form \begin{equation} \nonumber Z_i(t) = \varrho_i B_i(t) + \int_0^t \int_0^\infty \gamma_i (z) \tilde{N}(ds,dz), \qquad i=1,2,3, \end{equation} where $B_1(t)$, $B_2(t)$, $B_3(t)$ are independent standard Brownian motions, $\tilde{N}(ds,dz)$ is a compensated Poisson counting process with intensity $ds\nu (dz)$ on $\real_+ \times [0,\infty )$ and $\nu (dz)$ is a finite L\'evy measure on $[0, \infty)$, have been considered in \cite{amllevy}, \cite{wulia} and \cite{xiaobing}. 
In this setting, the asymptotic behavior of solutions of \eqref{lsir1}-\eqref{lsir3} around the equilibrium of the corresponding deterministic system has been studied in \cite{amllevy}, and the threshold of this stochastic SIR model has been investigated in \cite{wulia}. The asymptotic behavior of the stochastic solution of an SIQS epidemic model for quarantine modeling with L\'evy jump diffusion term has been analyzed in \cite{xiaobing}. \\ Previously used jump models, including \cite{amllevy}, \cite{wulia} and \cite{xiaobing}, share the property of being based on a Poisson counting process $N(dt,dz)$ with finite L\'evy measure $\nu(dz)$ on $[0,\infty )$. However, this framework excludes important families of L\'evy jump processes having an infinite L\'evy measure, as well as flexible correlation between the random noise components of the system \eqref{lsir1}-\eqref{lsir3}. In particular, the increments of jump-diffusion models with finite L\'evy measures have exponential tails, see e.g. \S4.3 of \cite{cont}, and they have a limited potential to model extreme events which usually lead to sudden shifts in population numbers. \\ In this paper, we work in the general setting of finite or infinite L\'evy measures $\nu$ on $\real$, which allows us to consider heavy tailed increments having e.g. power law distributions. Indeed, empirical data shows that the jump distribution of population dynamics under sudden environmental shocks such as earthquakes, tsunamis, floods, heatwaves and so on, can follow power law distributions, see e.g. \cite{spl} and references therein. 
\\ We consider a $3$-dimensional L\'evy noise $Z(t) = (Z_1(t),Z_2(t),Z_3(t))$ with L\'evy-Khintchine representation $$ \E \big[ \re^{iu_1Z_1(t)+iu_2Z_2(t)+iu_3Z_3(t)} \big] = \exp\left( -\frac{t}{2} \langle u,\varrho u\rangle_{\real^3} + t \int_{\real^3 \setminus\{0\}} \big( \re^{i\langle u,\gamma (z)\rangle_{\real^3}} - i\langle u,\gamma (z)\rangle_{\real^3} - 1 \big) \nu (dz) \right), $$ $u=(u_1,u_2,u_3)\in \real^3$, $t\in \real_+$, where $\varrho = (\varrho_{i,j})_{1\leq i,j \leq 3}$ is a positive definite $3\times 3$ matrix, the functions $\gamma_i:\real^3 \rightarrow \real$, $i=1,2,3$ are measurable functions, and $\nu (dz)$ is a $\sigma$-finite measure of possibly infinite total mass on $\real^3\setminus \{0\}$, such that \begin{equation} \label{infinite} \int_{\real^3 \setminus\{0\}} \min ( |\gamma_i (z)|^2 , 1)\nu(dz)<\infty, \qquad i=1,2,3. \end{equation} see e.g. Theorem~1.2.14 in \cite{applebk2}. In addition, the process $Z(t) = (Z_1(t),Z_2(t),Z_3(t))$ is known to admit the representation \begin{equation} \label{sdjakl} Z_i(t) = B^\varrho_i (t)+\int_0^t \int_{\real^3 \setminus\{0\}}\gamma_i(z)\tilde{N}(ds,dz), \quad i =1,2,3, \end{equation} where $(B^\varrho_1 (t), B^\varrho_2 (t), B^\varrho_3(t))$ is a $3$-dimensional Gaussian process with independent and stationary increments and covariance matrix $\varrho = (\varrho_{i,j})_{1\leq i,j \leq 3}$, and $\tilde{N}(dt,dz) =N(dt,dz)-\nu(dz)dt$ is the compensated Poisson counting process with L\'evy measure $\nu (dz)$ on $\real^3\setminus \{0\}$. 
All processes are defined on a complete filtered probability space $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\geq0},\textbf{P})$, $N(dt,dz)$ is independent of $(B^\varrho_1 (t), B^\varrho_2 (t), B^\varrho_3(t))$, and the covariances of $(Z_1(t),Z_2(t),Z_3(t))$ are given by $$ \E [ Z_i(t) Z_j(t) ] = \varrho_{i,j} t +t \int_{\real^3 \setminus\{0\}}\gamma_i(z)\gamma_j(z) \nu (dz ), \quad t\in \real_+, \quad i,j =1,2,3, $$ which allows for the modeling of random interactions between the components $(S_t,I_t,R_t)$ of the model. \\ In order to investigate the threshold of the stochastic SIR model with finite L\'evy measures, \cite{wulia} have derived long-term estimates (Lemmas~2.1-2.2 therein) which rely on the finiteness of the quantity \begin{equation} \label{lmbda} \int_0^\infty \big( (1+\overline{\gamma}(z))^p-1-\underline{\gamma} (z) \big)\nu(dz), \quad p>1, \end{equation} where $$ \overline{\gamma} ( z) := \max ( \gamma_1(z), \gamma_2(z) , \gamma_3(z) ) \mbox{ ~and~ } \underline{\gamma} ( z) := \min ( \gamma_1(z) , \gamma_2(z) , \gamma_3(z)), \quad z\in \real. $$ In our generalized setting \eqref{sdjakl} under \eqref{infinite}, estimates on solutions are obtained by replacing \eqref{lmbda} with the expression \begin{equation} \label{lmbda2} \lambda (p):= c_p \int_{\real^3 \setminus\{0\}} \overline{\gamma}^2(z) \nu(dz) + c_p \int_{\real^3 \setminus\{0\}} \overline{\gamma}^p(z) \nu(dz), \quad p >1. \end{equation} where $c_p:= p (p-1)\max ( 2^{p-3},1 )/2$. 
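For a L\'evy measure concentrated on finitely many atoms, $\lambda(p)$ in \eqref{lmbda2} reduces to a finite sum. A small sketch (the atom list is a hypothetical toy measure with $\overline{\gamma}\geq 0$; in the infinite-activity case the two integrals must instead be evaluated by quadrature):

```python
def lambda_p(p, atoms):
    """lambda(p) for a toy finite Levy measure given as point masses
    atoms = [(mass, gamma_bar), ...] with gamma_bar >= 0 (illustrative)."""
    c_p = p * (p - 1) * max(2.0 ** (p - 3), 1.0) / 2.0
    second_moment = sum(m * g ** 2 for m, g in atoms)
    p_th_moment = sum(m * g ** p for m, g in atoms)
    return c_p * (second_moment + p_th_moment)
```

For $p=2$ one has $c_2=1$, so a single atom of mass $1$ at $\overline{\gamma}=0.5$ gives $\lambda(2)=0.25+0.25=0.5$.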
In addition, given the jump stochastic integral process \begin{equation} \nonumber K_t:= \int_0^t \int_{\real^3 \setminus \{0\}} g_s (z)\ \big(N(ds,dz)-\nu (dz)ds\big), \quad t\in \real_+, \end{equation} of the predictable integrand $(g_s(y))_{(s,y)\in \real_+ \times \real}$, we use Kunita's inequality \begin{eqnarray} \label{eq:BDGPoisson2} \lefteqn{ \E\left[\sup\limits_{0\leq s\leq t}|K_s|^p\right] } \\ \nonumber & \leq & {C}_p \ee\left[ \int_0^t\int_{\real^3 \setminus \{0\}} |g_s(z)|^p \ \nu (dz)ds \right] + {C}_p \ee\Bigg[ \left( \int_0^t \int_{\real^3 \setminus \{0\}} |g_s(z)|^2 \ \nu (dz)ds\right)^{ p/2 } \Bigg], \end{eqnarray} for all $t\in \real_+$ and $p \geq 2$, where ${C}_p := 2^{2p-2} \big( \sqrt{e^{\log_2p}/p} + 8 p^{\log_2p} \big)$, see Theorem~2.11 of \cite{kunitalevy}, Theorem~4.4.23 of \cite{applebk2}, and Corollary~2.2 in \cite{bretonprivault3}. This replaces the Burkholder-Davis-Gundy inequality for continuous martingales \begin{equation} \label{bdg0} \E\left[\sup\limits_{0\leq s\leq t}|M_s|^p\right]\leq C_p\E\big[ \langle M,M \rangle_t^{p/2} \big], \qquad p>1, \end{equation} which is used in the proof of Lemmas~2.1 and 2.2 in \cite{wulia}, where $\langle M,M \rangle_t$ is the (predictable) quadratic variation of the continuous martingale $(M_t)_{t\in \real_+}$, see e.g. Theorem~7.3 in Chapter~1 of \cite{mao2008}. Indeed, it is known, see e.g. Remark~357 in \cite{situ} and \cite{bretonprivault3}, that \eqref{bdg0} is invalid for martingales with jumps. \\ As an example, we consider the tempered stable distribution introduced in \cite{koponen} which belongs to the family of self-decomposable distributions, see \S~3.15 in \cite{sato}. 
Given $\alpha \in (0,2)$, the tempered $\alpha$-stable L\'evy measure is defined as \begin{equation}\label{measure2} \nu (A)= \int_0^\infty \frac{\re^{-r}}{r^{\alpha +1}} \int_{\real^3\setminus \{0\}} {\bf 1}_A (rx) R_\alpha (dx) dr, \end{equation} where $R_\alpha (dx)$ is a measure on $\real^3 \setminus \{0\}$ such that $$ \int_{\real^3 \setminus\{0\}} \min ( \Vert x\Vert^2_{\real^3}, \Vert x \Vert^\alpha_{\real^3}) R_\alpha (dx) <\infty, $$ see Theorem~2.3 in \cite{rosinski4}. It has been shown in \cite{sztonik}, Theorem~5, that the increments of tempered stable processes can have (heavy) power tails instead of (semi-heavy) exponential tails, see also \cite{kuchler}. Taking, e.g. $$ R_\alpha (dx) = k_- \lambda_-^\alpha \delta_{(-1/\lambda_-,-1/\lambda_-,-1/\lambda_-)}(dx) + k_+ \lambda_+^\alpha \delta_{(1/\lambda_+,1/\lambda_+,1/\lambda_+)}(dx), $$ where $k_-,k_+,\lambda_- ,\lambda_+ >0$, and $\delta_y$ denotes the Dirac measure at $y\in \real^3$, the L\'evy measure of the $3$-dimensional fully correlated tempered stable process is given by \begin{eqnarray} \label{measure2.0} \lefteqn{ \nu (A) = \int_{\real^3\setminus \{0\}} \int_0^\infty {\bf 1}_A ( r x) \frac{\re^{-r}}{r^{\alpha +1}} dr R_\alpha (dx) } \\ \nonumber & = & k_- \int_0^\infty {\bf 1}_A (-r/\lambda_-,-r/\lambda_-,-r/\lambda_-) \frac{\re^{-r}}{r^{\alpha +1}} dr + k_+ \int_0^\infty {\bf 1}_A (r/\lambda_+ , r/\lambda_+ ,r/\lambda_+ ) \frac{\re^{-r}}{r^{\alpha +1}} dr, \end{eqnarray} with $\nu (\real )=+\infty$ for all $\alpha \in (0,2)$. We note that \eqref{lmbda} is infinite when $\alpha \in [1,2)$ and $p>1$, whereas $\lambda (p)$ given by \eqref{lmbda2} remains finite whenever $p > \alpha$. \\ This paper is organised as follows. 
After stating preliminary results on the existence and uniqueness of solutions to \eqref{lsir1}-\eqref{lsir3} in Proposition~\ref{Theorem 2.1}, in the key Lemmas~\ref{l3.1} and \ref{l3.4} we derive new solution estimates by respectively using $\lambda (p)$ defined in \eqref{lmbda2} and Kunita's inequality \eqref{eq:BDGPoisson2} for jump processes. Then in Theorems~\ref{t4.1} and \ref{t4.2} we respectively deal with disease extinction and persistence in the mean for the system \eqref{lsir1}-\eqref{lsir3}. We show that the threshold behavior of the stochastic SIR system \eqref{lsir1}-\eqref{lsir3} is determined by the basic reproduction number \begin{equation} \label{rn} \mathcal{\widebar{R}}_0=\mathcal{R}_0-\frac{\beta_2}{\mu+\varepsilon +\eta} = \frac{\beta\Lambda / \mu - \beta_2}{\mu+\varepsilon +\eta} \end{equation} which differs from \cite{wulia} due to the quantity $$ \beta_2 :=\frac{1}{2}\varrho_{2,2} +\int_{\real^3 \setminus\{0\}} \left( \gamma_2(z)-\log (1+\gamma_2(z)) \right) \nu(dz). $$ In Section~\ref{sec4} we present numerical simulations based on tempered stable processes with parameter $\alpha \in (0,1)$. We show in particular that the addition of a jump component to the system \eqref{lsir1}-\eqref{lsir3} may result in the extinction of the infected and recovered populations as $\alpha \in (0,1)$ becomes large enough and the variance of random fluctuations increases, which is consistent with related observations in the literature, see e.g. \cite{ycai}. In addition, we note that this phenomenon can be observed when the noise variances are normalized to identical values, showing that the shape of the distribution alone can affect the long-term behavior of the system. Proofs that are similar to those in the literature, see \cite{wulia}, are presented in the Appendix for completeness.
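To make the threshold \eqref{rn} concrete, the following Python helper evaluates both $\mathcal{R}_0$ and $\mathcal{\widebar{R}}_0$ when the jump part of the measure is approximated by finitely many atoms; the parameter values in the usage note are illustrative, not those of the numerical section.

```python
import math

def r0_deterministic(Lam, mu, beta, eta, eps):
    """R_0 = beta*Lambda / (mu*(mu + eps + eta)) of the deterministic model."""
    return beta * Lam / (mu * (mu + eps + eta))

def r0_stochastic(Lam, mu, beta, eta, eps, rho22, atoms):
    """bar R_0 = R_0 - beta_2/(mu + eps + eta), where beta_2 combines the
    Brownian variance rho_{2,2} and the jump integral, here approximated by
    point masses atoms = [(mass, gamma2), ...] with gamma2 > -1."""
    beta2 = rho22 / 2 + sum(m * (g - math.log1p(g)) for m, g in atoms)
    return r0_deterministic(Lam, mu, beta, eta, eps) - beta2 / (mu + eps + eta)
```

With no noise ($\varrho_{2,2}=0$ and no atoms) the two numbers coincide, while any noise on the infected compartment strictly lowers $\mathcal{\widebar{R}}_0$, since $x-\log(1+x)\geq 0$ for $x>-1$.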
\section{Large time estimates} \label{sec2} For $f$ an integrable function on $[0,t]$, we denote $$ \langle f\rangle_t=\frac{1}{t}\int_0^tf(s)ds, \quad \langle f\rangle^\ast=\limsup\limits_{t\rightarrow\infty}\frac{1}{t}\int_0^tf(s)ds, \quad \langle f\rangle_\ast=\liminf\limits_{t\rightarrow\infty}\frac{1}{t}\int_0^tf(s)ds, \quad t>0. $$ In addition, we assume that the jump coefficients $\gamma_i(z)$ in \eqref{sdjakl} satisfy \bigskip \noindent \textbf{$(H_1)$} ~$\displaystyle \int_{\real^3 \setminus\{0\}}|\gamma_i(z)|^2\nu(dz)<\infty$, $i=1,2,3$, \bigskip \noindent together with the condition: \\ \noindent \textbf{$(H_2)$} ~$\gamma_i(z)>-1$, $\nu (dz)$-$a.e.$, and $\displaystyle \int_{\real^3 \setminus\{0\}} \big( \gamma_i(z) - \log ( 1 + \gamma_i(z)) \big) \nu(dz)<\infty$, \ $i=1,2,3$. \medskip \begin{prop} \label{Theorem 2.1} Under $(H_1)$-$(H_2)$, for any given initial data $(S_0,I_0,R_0)\in\real _+^3$, the system \eqref{lsir1}-\eqref{lsir3} admits a unique positive solution $(S_t,I_t,R_t)_{t\in \real_+}$ which exists in $(0,\infty )^3$ for all $t\geq 0$, almost surely. \end{prop} \begin{Proof} By Theorem~6.2.11 in \cite{applebk2} or Theorem~2.1 in \cite{Bao1}, the system \eqref{lsir1}-\eqref{lsir3} admits a unique local solution $(S_t,I_t,R_t)_{t\in(0,\tau_e]}$ up to the explosion time $\tau_e$ for any initial data $(S_0,I_0,R_0)\in\real _+^3$, since its coefficients are locally Lipschitz and $(H_1)$ holds. In addition, by $(H_2)$ we have $\gamma_i(z)>-1$, $\nu (dz)$-$a.e.$, $i=1,2,3$, hence the solution is positive. The remainder of the proof follows the lines of the proof of Theorem~1 in \cite{amllevy}, noting that the condition $(H_2)$ on page~868 therein can be replaced by $(H_2)$ above.
\end{Proof} Next, given $\lambda (p)$ defined in \eqref{lmbda2} we let $$ \Vert \varrho \Vert_\infty : = \max_{i=1,2,3} \sum_{j=1}^3 |\varrho_{i,j}|, $$ and we consider the following condition: \\ \textbf{$(H_3^{(p)})$} ~$\displaystyle \mu > \frac{p-1}{2}\Vert \varrho \Vert_\infty + \frac{\lambda (p)}{p}, \qquad p>1$. \\ \noindent The proofs of Lemmas~\ref{l3.1}-\ref{l3.4} below present several significant differences from the arguments of \cite{zhaoamc} and \cite{wulia}. First, our arguments do not require the finiteness of the L\'evy measure $\nu (dz)$. In the proof of Lemma~\ref{l3.1} we replace the Burkholder-Davis-Gundy inequality \eqref{bdg0} for continuous processes used in \cite{wulia} with the simpler bound \eqref{bd} below: indeed, \eqref{bdg0} is not valid for jump processes, and the inequality used at the beginning of the proof of Lemma~2.1 in \cite{wulia} may not hold in general because the compensated Poisson process $\tilde{N}(t)$ can have a negative drift. Second, the proof of Lemma~\ref{l3.4} uses Kunita's inequality \eqref{eq:BDGPoisson2} for jump processes instead of relying on the Burkholder-Davis-Gundy inequality \eqref{bdg0} for continuous processes. \bigskip \noindent In the sequel, we consider the condition \bigskip \textbf{$(H_4^{(p)})$} ~ $\displaystyle \int_{\real^3 \setminus\{0\}} | (1+\overline{\gamma}(z))^p-1 | \nu(dz) < \infty, \qquad p >1$. \medskip \begin{lemma}\label{l3.1} Assume that $(H_1)$-$(H_2)$ and $(H_3^{(p)})$-$(H_4^{(p)})$ hold for some $p>1$, and let $(S_t,I_t,R_t)$ be the solution of the system \eqref{lsir1}-\eqref{lsir3} with initial condition $(S_0,I_0,R_0)\in\real _+^3$. Then we have \begin{equation}\notag \lim\limits_{t\rightarrow\infty}\frac{S_t}{t}=0,\quad \lim\limits_{t\rightarrow\infty}\frac{I_t}{t}=0,\quad \lim\limits_{t\rightarrow\infty}\frac{R_t}{t}=0, \quad \mathbb{P}\mbox{-}a.s.
\\ \end{equation} \end{lemma} \begin{Proof} We let $U_t:=S_t+I_t+R_t$ and $$ H_t(z):=\gamma_1(z)S_t+\gamma_2(z)I_t+\gamma_3(z)R_t, \qquad z\in \real^3, \quad t\in \real_+, $$ with the inequality $H_t(z)\leq \overline{\gamma} (z)U_t$, $z\in \real^3$. Applying the It\^{o} formula with jumps (see e.g. Theorem 1.16 in \cite{yiteng}) to the function $x\mapsto V(x)=(1+x)^p$, we obtain \begin{eqnarray} \nonumber \lefteqn{ dV(U_t) = p(1+U_t)^{p-1}(\Lambda-\mu U_t-\varepsilon I_t)dt } \\ \nonumber & & +\frac{p(p-1)}{2}(1+U_t)^{p-2} \big(\varrho_{1,1}S_t^2+\varrho_{2,2}I_t^2+\varrho_{3,3}R_t^2 + 2 \varrho_{1,2} S_tI_t + 2 \varrho_{1,3} S_tR_t + 2 \varrho_{2,3} I_tR_t \big)dt \\ \nonumber & & +p(1+U_t)^{p-1} ( S_tdB^\varrho_1(t)+I_tdB^\varrho_2(t)+R_tdB^\varrho_3(t) ) \\ \nonumber & & +\int_{\real^3 \setminus\{0\}} ( (1+ U_{t}+H_t(z) ) ^p-(1+U_{t})^p-p(1+U_{t})^{p-1}H_t(z) ) \nu(dz)dt\\ \nonumber & & +\int_{\real^3 \setminus\{0\}} ( (1+ U_{t^-}+H_{t^-}(z) ) ^p-(1+U_{t^-})^p ) \tilde{N}(dt,dz) \\ \nonumber &=& LV(U_t)dt+p(1+U_t)^{p-1} ( S_tdB^\varrho_1(t)+I_tdB^\varrho_2(t)+R_tdB^\varrho_3(t)) \\ \label{lv0} & & +\int_{\real^3 \setminus\{0\}} \big( (1+ U_{t^-}+H_{t^-}(z) ) ^p-(1+U_{t^-})^p \big) \tilde{N}(dt,dz), \end{eqnarray} where we let \begin{eqnarray*} \lefteqn{ \! L V(U_t) := p(1+U_t)^{p-1}(\Lambda-\mu U_t-\varepsilon I_t) } \\ & & +\frac{p(p-1)}{2}(1+U_t)^{p-2} \big(\varrho_{1,1}S_t^2+\varrho_{2,2}I_t^2+\varrho_{3,3}R_t^2 + 2 \varrho_{1,2} S_tI_t + 2 \varrho_{1,3} S_tR_t + 2 \varrho_{2,3} I_tR_t \big) \\ & & +\int_{\real^3 \setminus\{0\}} ( (1+ U_{t}+H_t(z) ) ^p-(1+U_{t})^p-p(1+U_{t})^{p-1}H_t(z) ) \nu(dz). \end{eqnarray*} We note that for all $z\in \real^3$ and $t\in\real _+$ there exists $\theta \in(0,1)$ such that \begin{eqnarray*} \lefteqn{ \! \! \! \! \! \! \! \!
(1+U_t+H_t(z))^p-(1+U_t)^p-p(1+U_t)^{p-1}H_t(z) } \\ &=&(1+U_{t})^p+p(1+U_{t})^{p-1}H_t(z) +\frac{p(p-1)}{2} (1+ U_{t}+\theta H_t(z) ) ^{p-2}H_t^2(z)\\ && -(1+U_{t})^p-p(1+U_{t})^{p-1}H_t(z)\\ &=&\frac{p(p-1)}{2} (1+ U_{t}+\theta H_t(z)) ^{p-2}H_t^2(z)\\ &\leq&\frac{p(p-1)}{2}\max ( 2^{p-3},1 ) ((1+ U_{t})^{p-2}+\theta H_t^{p-2}(z) ) H_t^2(z) \\ &\leq&c_p(1+U_{t})^{p-2}H_t^2(z)+c_pH_t^p(z)\\ &\leq&c_p (1+U_{t})^{p-2}U_{t}^2 ( \overline{\gamma}^2(z)+\overline{\gamma}^p(z) ), \quad z\in \real^3, \quad t\in\real _+, \end{eqnarray*} where we used the bound $H_t(z)\leq \overline{\gamma}(z)U_t$, with $c_p:= p (p-1)\max ( 2^{p-3},1 )/2$. It then follows that \begin{eqnarray} \nonumber \lefteqn{ LV(U_t) \leq p(1+U_t)^{p-2}((1+U_t)(\Lambda-\mu U_t )-\varepsilon (1+U_t)I_t) } \\ \nonumber & & +\frac{p(p-1)}{2}(1+U_t)^{p-2} \big(\varrho_{1,1}S_t^2+\varrho_{2,2}I_t^2+\varrho_{3,3}R_t^2 + 2 \varrho_{1,2} S_tI_t + 2 \varrho_{1,3} S_tR_t + 2 \varrho_{2,3} I_tR_t \big) \\ \nonumber & & +c_p (1+U_{t})^{p-2}U_{t}^2 \int_{\real^3 \setminus\{0\}} \overline{\gamma}^2(z) \nu(dz) +c_p (1+U_{t})^{p-2}U_{t}^2\int_{\real^3 \setminus\{0\}} \overline{\gamma}^p(z) \nu(dz) \\ \nonumber &\leq & p(1+U_t)^{p-2}(-\mu U_t^2+(\Lambda-\mu)U_t+\Lambda)+ \frac{p (p-1)}{2}(1+U_t)^{p-2} \Vert \varrho \Vert_\infty U_{t}^2 \\ \nonumber &&+\frac{pc_p}{p} (1+U_t)^{p-2}U_{t}^2\int_{\real^3 \setminus\{0\}} ( \overline{\gamma}^2(z)+\overline{\gamma}^p(z) ) \nu(dz) \\ \label{lv} &=& p(1+U_t)^{p-2}(- bU_t^2+(\Lambda-\mu)U_t+\Lambda), \end{eqnarray} where $$ b:=\mu-\frac{p-1}{2}\Vert \varrho \Vert_\infty - \frac{\lambda (p)}{p}>0 $$ by $(H_3^{(p)})$. 
Next, for any $k\in\real $ it holds that \begin{eqnarray} \nonumber e^{kt}(1+U_t)^p&=& (1+U_0)^p + \int_0^t e^{ks} ( k(1+U_s)^p+LV(U_s) ) ds \\ \nonumber & & + p \int_0^t e^{ks}(1+U_s)^{p-1}( S_sdB^\varrho_1(s)+I_sdB^\varrho_2(s)+R_sdB^\varrho_3(s)) \\ \nonumber & & +\int_0^t e^{ks}\int_{\real^3 \setminus\{0\}}\big( ( 1+U_{s^-}+H_{s^-}(z))^p-(1+U_{s^-})^p \big) \tilde{N}(ds,dz), \end{eqnarray} hence by taking expectations in \eqref{lv0} and in view of \eqref{lv}, for any $k<bp$ we obtain \begin{eqnarray} \nonumber \lefteqn{ e^{kt}\E[(1+U_t)^p] = (1+U_0)^p+\E\left[ \int_0^t e^{ks} \big( k(1+U_s)^p+LV(U_s) \big) ds\right] } \\ \nonumber & \leq & (1+U_0)^p+\E\left[ \int_0^t e^{ks} \left( k (1+U_s)^p+p (1+U_s)^{p-2} (\Lambda +(\Lambda-\mu)U_s - bU_s^2 )\right) ds\right] \\ \nonumber & = & (1+U_0)^p+p \E\left[ \int_0^t e^{ks}(1+U_s)^{p-2}\left( -\left(b-\frac{k}{p}\right)U_s^2 +\left( \Lambda-\mu+\frac{2k}{p}\right)U_s+\Lambda+\frac{k}{p} \right)ds\right] \\ \nonumber & \leq & (1+U_0)^p+pM \int_0^t e^{ks} ds \\ \nonumber & = & (1+U_0)^p+\frac{pM}{k}e^{kt}, \quad t\in \real_+, \end{eqnarray} where $$ 0<M:= 1 + \sup\limits_{x\in\real _+} (1+x)^{p-2}\left( -\left( b-\frac{k}{p}\right)x^2+ \left( \Lambda-\mu+\frac{2k}{p}\right)x+\Lambda+\frac{k}{p} \right)<\infty. $$ Hence, for any $k\in(0,bp)$ we have \begin{equation}\notag \limsup\limits_{t\rightarrow\infty}\E[(1+U_t)^p]\leq\frac{pM}{k}, \end{equation} which implies that there exists $M_0>0$ such that \begin{equation}\label{dvp6} \E[(1+U_t)^p]\leq M_0,\qquad t\in \real_+.
\end{equation} Now, by \eqref{lv0} and \eqref{lv} we have \begin{eqnarray*} \lefteqn{ (1+U_t)^p-(1+U_{k\delta})^p \leq p \int_{k\delta}^t(1+U_s)^{p-2}( \Lambda +(\Lambda-\mu)U_s - bU_s^2 ) ds } \\ & & + p \int_{k\delta}^t(1+U_s)^{p-1} ( S_sdB^\varrho_1(s)+I_sdB^\varrho_2(s)+R_sdB^\varrho_3(s) ) \\ & & +\int_{k\delta}^t\int_{\real^3 \setminus\{0\}} \big( ( 1+U_{s^-}+H_{s^-}(z))^p-(1+U_{s^-})^p \big) \tilde{N}(ds,dz), \qquad t\geq k\delta , \end{eqnarray*} from which it follows that \begin{eqnarray*} \sup\limits_{k\delta\leq t\leq(k+1)\delta}(1+U_t)^p & \leq & (1+U_{k\delta})^p + p \sup\limits_{k\delta\leq t\leq(k+1)\delta}\left|\int_{k\delta}^t(1+U_s)^{p-2}( \Lambda +(\Lambda-\mu)U_s - bU_s^2)ds\right| \\ &&+ p \sup\limits_{k\delta\leq t\leq(k+1)\delta}\left|\int_{k\delta}^t(1+U_s)^{p-1} ( S_sdB^\varrho_1(s)+I_sdB^\varrho_2(s)+R_sdB^\varrho_3(s) ) \right|\\ &&+\sup\limits_{k\delta\leq t\leq(k+1)\delta}\left|\int_{k\delta}^t\int_{\real^3 \setminus\{0\}} \big( ( 1+U_{s^-}+H_{s^-}(z) )^p-(1+U_{s^-})^p \big) \tilde{N}(ds,dz)\right|.
\end{eqnarray*} Taking expectations on both sides, we obtain $$ \E\bigg[\sup\limits_{k\delta\leq t\leq(k+1)\delta}(1+U_t)^p\bigg] \leq \E[(1+U_{k\delta})^p]+J_1+J_2+J_3 \leq M_0+J_1+J_2+J_3, $$ where, for some $c_3>0$, \begin{eqnarray*} J_1&:=& p \E\bigg[ \sup\limits_{k\delta\leq t\leq(k+1)\delta}\left|\int_{k\delta}^t(1+U_s)^{p-2} ( \Lambda + (\Lambda-\mu)U_s - bU_s^2 )ds\right|\bigg] \\ &\leq & c_3\E\bigg[\sup\limits_{k\delta\leq t\leq(k+1)\delta} \int_{k\delta}^t(1+U_s)^p ds \bigg]\\ &= & c_3\E\bigg[\int_{k\delta}^{(k+1)\delta}(1+U_s)^pds\bigg]\\ & \leq & c_3\delta \E\bigg[\sup\limits_{k\delta\leq t\leq(k+1)\delta}(1+U_t)^p\bigg], \end{eqnarray*} and, for some $c_4>0$, \begin{eqnarray*} J_2&:=& p \E\bigg[ \sup\limits_{k\delta\leq t\leq(k+1)\delta}\left|\int_{k\delta}^t(1+U_s)^{p-1} \big( S_sdB^\varrho_1(s)+I_sdB^\varrho_2(s)+R_sdB^\varrho_3(s) \big) \right|\bigg] \\ &\leq & p \sqrt{32} \E\bigg[ \bigg( \int_{k\delta}^{(k+1)\delta}(1+U_s)^{2(p-1)} \big( \varrho_{1,1}S_s^2+\varrho_{2,2}I_s^2+\varrho_{3,3}R_s^2 + 2 \varrho_{1,2} S_sI_s + 2 \varrho_{1,3} S_sR_s + 2 \varrho_{2,3} I_sR_s \big) ds \bigg)^{1/2} \bigg] \\ &\leq & p \sqrt{32 \delta \Vert \varrho \Vert_\infty} \E\bigg[ \bigg( \sup\limits_{k\delta\leq t\leq(k+1)\delta}(1+U_t)^{2p} \bigg)^{1/2} \bigg] \\ & = &c_4 \sqrt{\delta} \E\bigg[\sup\limits_{k\delta\leq t\leq(k+1)\delta}(1+U_t)^p\bigg], \end{eqnarray*} where we used the Burkholder-Davis-Gundy inequality \eqref{bdg0} for continuous martingales, see Theorem~IV.4.48 page 193 of \cite{protterb2005}, or Theorem~7.3 in Chapter~1 of \cite{mao2008}.
Furthermore, we have \begin{eqnarray} \nonumber J_3&=& \E\left[ \sup\limits_{k\delta\leq t\leq(k+1)\delta}\left|\int_{k\delta}^t\int_{\real^3 \setminus\{0\}} \big( (1+ U_{s^-}+H_{s^-}(z) ) ^p-(1+U_{s^-})^p\big) \tilde{N}(ds,dz)\right|\right] \\ \nonumber & \leq & \E\left[ \int_{k\delta}^{(k+1)\delta} \int_{\real^3 \setminus\{0\}} \big| (1+ U_{s^-}+H_{s^-}(z) ) ^p-(1+U_{s^-})^p\big| {N}(ds,dz)\right] \\ \nonumber & & + \E\left[ \int_{k\delta}^{(k+1)\delta} \int_{\real^3 \setminus\{0\}} \big| (1+ U_{s^-}+H_{s^-}(z) ) ^p-(1+U_{s^-})^p\big| ds \nu (dz)\right] \\ \nonumber & = & 2 \E\left[ \int_{k\delta}^{(k+1)\delta} \int_{\real^3 \setminus\{0\}} \big| (1+ U_{s^-}+H_{s^-}(z) ) ^p-(1+U_{s^-})^p\big| ds \nu (dz)\right] \\ \nonumber &\leq & 2 \E\left[ \int_{k\delta}^{(k+1)\delta}\int_{\real^3 \setminus\{0\}} (1+U_{s^-})^p | ( 1 + \overline{\gamma}(z) )^p-1 | ds \nu (dz) \right] \\ \nonumber &= & 2 \E \left[ \int_{k\delta}^{(k+1)\delta} (1+U_s)^p ds \right] \int_{\real^3 \setminus\{0\}} | (1+\overline{\gamma}(z))^p-1 | \nu(dz) \\ \label{bd} &\leq& 2 \delta \E\Bigg[ \sup\limits_{k\delta\leq t \leq(k+1)\delta}(1+U_t)^p \Bigg] \int_{\real^3 \setminus\{0\}} | (1+\overline{\gamma}(z))^p-1 | \nu(dz) , \end{eqnarray} where the equality follows from the compensation formula for the Poisson random measure $N(ds,dz)$. Therefore, we have \begin{eqnarray} \label{Esup} \lefteqn{ \E\bigg[ \sup\limits_{k\delta\leq t\leq(k+1)\delta}(1+U_t)^p\bigg] } \\ \nonumber & \leq & \E[(1+U_{k\delta})^p] +\bigg( c_3\delta+c_4 \sqrt{\delta}+ 2 \delta \int_{\real^3 \setminus\{0\}} | (1+\overline{\gamma}(z))^p-1 | \nu(dz) \bigg) \E\bigg[\sup\limits_{k\delta\leq t\leq(k+1)\delta}(1+U_t)^p\bigg].
\end{eqnarray} Furthermore, from $(H_4^{(p)})$ we can choose $\delta>0$ such that $$ c_3\delta+c_4 \sqrt{\delta}+ 2 \delta \int_{\real^3 \setminus\{0\}} | (1+\overline{\gamma}(z))^p-1 | \nu(dz) <\frac{1}{2}, $$ and, combining \eqref{dvp6} with \eqref{Esup}, we obtain \begin{equation}\label{bofE} \E\bigg[\sup\limits_{k\delta\leq t\leq(k+1)\delta}(1+U_t)^p\bigg]\leq2\E[(1+U_{k\delta})^p] \leq 2M_0. \end{equation} Let now $\varepsilon >0$ be arbitrary. By Chebyshev's inequality, we get \begin{equation}\notag \mathbb{P}\bigg( \sup\limits_{k\delta\leq t\leq(k+1)\delta}(1+U_t)^p>(k\delta)^{1+\varepsilon } \bigg) \leq\frac{1}{(k\delta)^{1+\varepsilon }} \E\bigg[\sup\limits_{k\delta\leq t\leq(k+1)\delta}(1+U_t)^p\bigg] \leq\frac{2M_0}{(k\delta)^{1+\varepsilon }} \end{equation} for all $k \geq 1$. Then, by the Borel-Cantelli lemma (see Lemma 2.4 in Chapter~1 of \cite{mao2008}) it follows that for almost all $\omega\in\Omega$, the bound \begin{equation} \nonumber \sup\limits_{k\delta\leq t\leq(k+1)\delta}(1+U_t)^p\leq(k\delta)^{1+\varepsilon }, \end{equation} holds for all but finitely many $k$. Thus, for almost all $\omega\in\Omega$ there exists $k_0(\omega)$ such that whenever $k\geq k_0 (\omega)$ we have $$ \frac{\log (1+U_t)^p}{\log t} \leq 1+\varepsilon, \qquad \varepsilon >0, \quad k\delta\leq t\leq(k+1)\delta, $$ hence \begin{equation}\notag \limsup\limits_{t\rightarrow\infty}\frac{\log U_t}{\log t}\leq\limsup\limits_{t\rightarrow\infty}\frac{\log (1+U_t)}{\log t}\leq\frac{1}{p}, \quad \mathbb{P}\mbox{-}a.s., \quad p>1. \end{equation} In other words, for any $\xi \in (0,1-1/p)$ there exists an a.s. finite random time $\widebar{T} (\omega)$ such that \begin{equation}\notag \log U_t\leq \left( \frac{1}{p}+\xi \right)\log t, \qquad t\geq \widebar{T}. 
\end{equation} It follows that \begin{equation}\notag \limsup\limits_{t\rightarrow\infty}\frac{U_t}{t}\leq\limsup\limits_{t\rightarrow\infty}\frac{t^{\xi + 1/p}}{t}=0, \end{equation} therefore we have \begin{equation}\nonumber \limsup\limits_{t\rightarrow\infty}\frac{S_t}{t}\leq0,\quad \limsup\limits_{t\rightarrow\infty}\frac{I_t}{t}\leq0, \quad \limsup\limits_{t\rightarrow\infty}\frac{R_t}{t}\leq0, \quad \mathbb{P}\mbox{-}a.s. \end{equation} This, together with the positivity of the solution, implies that \begin{equation}\notag \lim\limits_{t\rightarrow\infty}\frac{S_t}{t}=0,\quad \lim\limits_{t\rightarrow\infty}\frac{I_t}{t}=0,\quad \lim\limits_{t\rightarrow\infty}\frac{R_t}{t}=0, \quad \mathbb{P}\mbox{-}a.s. \end{equation} \end{Proof} The next Lemma~\ref{l3.4} is proved by using Kunita's inequality \eqref{eq:BDGPoisson2} for jump processes instead of the Burkholder-Davis-Gundy inequality \eqref{bdg0} for continuous martingales. \begin{lemma} \label{l3.4} Assume that $(H_1)$-$(H_2)$ and $(H_3^{(p)})$-$(H_4^{(p)})$ hold for some $p>2$, and let $(S_t,I_t,R_t)$ be the solution of the system \eqref{lsir1}-\eqref{lsir3} with initial condition $(S_0,I_0,R_0)\in\real _+^3$. Then we have $$ \lim\limits_{t\rightarrow\infty}\frac{1}{t}\int_0^t \int_{\real^3 \setminus\{0\}} S_{r^-}\gamma_1(z)\tilde{N}(dr,dz)=0,\quad \lim\limits_{t\rightarrow\infty}\frac{1}{t}\int_0^t \int_{\real^3 \setminus\{0\}} I_{r^-}\gamma_2(z)\tilde{N}(dr,dz)=0, $$ and \begin{equation}\notag \lim\limits_{t\rightarrow\infty}\frac{1}{t}\int_0^t \int_{\real^3 \setminus\{0\}} R_{r^-}\gamma_3(z)\tilde{N}(dr,dz)=0, \quad \mathbb{P}\mbox{-}a.s.\end{equation} \end{lemma} \begin{Proof} Denote $$ X_1(t):=\int_0^t\int_{\real^3 \setminus\{0\}}S_{r^-}\gamma_1(z)\tilde{N}(dr,dz), \quad X_2(t):=\int_0^t\int_{\real^3 \setminus\{0\}}I_{r^-}\gamma_2(z)\tilde{N}(dr,dz), $$ and $$ X_3(t):=\int_0^t\int_{\real^3 \setminus\{0\}}R_{r^-}\gamma_3(z)\tilde{N}(dr,dz), \quad t\in \real_+.
$$ By Kunita's inequality \eqref{eq:BDGPoisson2}, for any $p \geq 2$ there exists a positive constant $C_p$ such that \begin{align*} & \E\bigg[\sup\limits_{0< r\leq t}|X_1(r)|^p\bigg] \\ & \leq C_p\E\bigg[\bigg(\int_0^t|S_r|^2 \int_{\real^3 \setminus\{0\}}|\gamma_1(z)|^2\nu(dz)dr\bigg)^{p/2}\bigg] +C_p\E\bigg[\int_0^t |S_r|^p \int_{\real^3 \setminus\{0\}}|\gamma_1(z)|^p\nu(dz)dr\bigg] \\ &= C_p \bigg( \int_{\real^3 \setminus\{0\}}\gamma_1^2(z)\nu(dz) \bigg)^{p/2} \E \bigg[\bigg(\int_0^t|S_r|^2dr\bigg)^{p/2}\bigg] +C_p \left( \int_{\real^3 \setminus\{0\}}\gamma_1^p(z)\nu(dz) \right) \E\bigg[\int_0^t|S_r|^p dr\bigg] \\ &\leq C_p t^{p/2} \left( \int_{\real^3 \setminus\{0\}}\gamma_1^2(z)\nu(dz)\right)^{p/2}\E\bigg[\left(\sup\limits_{0< r\leq t}|S_r|^2\right)^{p/2}\bigg] \\ & \quad +C_p \int_{\real^3 \setminus\{0\}}\gamma_1^p(z)\nu(dz) \int_0^t\E[|S_r|^p]dr \\ &\leq C_p t^{p/2} \left( \int_{\real^3 \setminus\{0\}}\gamma_1^2(z)\nu(dz) \right)^{p/2} \E\left[\sup\limits_{0< r\leq t}|S_r|^p\right] +C_pM_0t\int_{\real^3 \setminus\{0\}}\gamma_1^p(z)\nu(dz), \end{align*} where the bound \eqref{dvp6} has been used in the last inequality. Combining the above inequality with \eqref{bofE} yields \begin{eqnarray*} \lefteqn{ \E\bigg[ \sup\limits_{k\delta\leq t\leq(k+1)\delta}|X_1(t)|^p\bigg] } \\ &\leq & C_p ( (k+1)\delta )^{p/2} \left( \int_{\real^3 \setminus\{0\}}\gamma_1^2(z)\nu(dz) \right)^{p/2} \E\bigg[\sup\limits_{k\delta\leq t\leq(k+1)\delta}|S_t|^p\bigg] \\ & & +C_p M_0(k+1)\delta \int_{\real^3 \setminus\{0\}}\gamma_1^p(z)\nu(dz) \\ &\leq & C_p2M_0 ( (k+1)\delta ) ^{p/2} \left( \int_{\real^3 \setminus\{0\}}\gamma_1^2(z)\nu(dz) \right)^{p/2} +C_p \delta M_0(k+1) \int_{\real^3 \setminus\{0\}}\gamma_1^p(z)\nu(dz). \end{eqnarray*} Let $\varepsilon >0$ be arbitrary. It follows from Doob's martingale inequality (see e.g.
Theorem~3.8 in Chapter~1 of \cite{mao2008}) that \begin{eqnarray*} \lefteqn{ \mathbb{P}\bigg( \sup\limits_{k\delta\leq t\leq(k+1)\delta}|X_1(t)|^p>(k\delta)^{1+\varepsilon +p/2}\bigg) \leq (k\delta)^{-1-\varepsilon -p/2} \E\bigg[ \sup\limits_{k\delta\leq t\leq(k+1)\delta}|X_1(t)|^p\bigg] } \\ & \leq&\frac{C_p2M_0 ( (k+1)\delta )^{p/2}}{(k\delta)^{1+\varepsilon +p/2}} \left(\int_{\real^3 \setminus\{0\}}\gamma_1^2(z)\nu(dz) \right)^{p/2} +\frac{C_pM_0(k+1)\delta}{(k\delta)^{1+\varepsilon +p/2}} \int_{\real^3 \setminus\{0\}}\gamma_1^p(z)\nu(dz). \end{eqnarray*} By the Borel-Cantelli lemma it follows that for almost all $\omega\in\Omega$ the bound \begin{equation} \nonumber \sup\limits_{k\delta\leq t\leq(k+1)\delta}|X_1(t)|^p\leq(k\delta)^{1+\varepsilon +p/2} \end{equation} holds for all but finitely many $k$. Thus, for almost all $\omega\in\Omega$ there exists $k_0 (\omega)$ such that for all $k\geq k_0 (\omega)$ we have \begin{equation}\notag \frac{\log |X_1(t)|}{\log t} \leq \frac{1}{2} + \frac{1+\varepsilon}{p}, \qquad \varepsilon >0, \quad k\delta\leq t\leq(k+1)\delta, \end{equation} hence \begin{equation}\notag \limsup_{t\to \infty} \frac{\log |X_1(t)|}{\log t} \leq \frac{1}{2} + \frac{1}{p}, \qquad p>2, \end{equation} and as in the proof of Lemma~\ref{l3.1}, this yields \begin{equation}\notag \limsup\limits_{t\rightarrow\infty}\frac{|X_1(t)|}{t}\leq\limsup\limits_{t\rightarrow\infty}\frac{t^{ 1/2+1/p}}{t}=0 \end{equation} since $p>2$, which shows that \begin{equation}\notag \lim\limits_{t\rightarrow\infty}\frac{|X_1(t)|}{t}=0, \quad \mathbb{P}\mbox{-}a.s.\end{equation} By similar arguments, we also obtain \begin{equation}\notag \lim\limits_{t\rightarrow\infty}\frac{X_2(t)}{t}=0,\quad \lim\limits_{t\rightarrow\infty}\frac{X_3(t)}{t}=0, \quad \mathbb{P}\mbox{-}a.s.\end{equation} \end{Proof} \begin{remark} \label{r0} \noindent We note that by the continuity of $p\mapsto {\lambda}(p)$ in \eqref{lmbda2}, in order for $(H_3^{(p)})$ to hold for some $p>2$ it suffices 
that $(H_3^{(2)})$ be satisfied, i.e. \vspace*{0.2cm} \begin{equation}\nonumber \textbf{$(H_3^{(2)})$}: ~~\mu >\frac{\Vert \varrho \Vert_\infty }{2}+\frac{{\lambda}(2)}{2}. \end{equation} \end{remark} The next Lemma~\ref{l3.3} can be proved based on \eqref{dvp6}, by noting that the argument of Lemma~2.2 in \cite{wulia} is valid for correlated Brownian motions $(B^\varrho_1 (t), B^\varrho_2 (t), B^\varrho_3(t))$, without requiring the continuity of $(S_t,I_t,R_t)_{t\in \real_+}$. \begin{lemma} \label{l3.3} Assume that $(H_1)$-$(H_2)$ and $(H_3^{(p)})$-$(H_4^{(p)})$ hold for some $p>1$, and let $(S_t,I_t,R_t)$ be the solution of \eqref{lsir1}-\eqref{lsir3} with initial condition $(S_0,I_0,R_0)\in\real _+^3$. Then, $\mathbb{P}$-a.s., we have \begin{equation}\notag \lim\limits_{t\rightarrow\infty}\frac{1}{t}\int_0^t S_rdB^\varrho_1(r)=0,~\lim\limits_{t\rightarrow\infty}\frac{1}{t}\int_0^t I_rdB^\varrho_2(r)=0,~ \lim\limits_{t\rightarrow\infty}\frac{1}{t}\int_0^t R_rdB^\varrho_3(r)=0.\end{equation} \end{lemma} \section{Extinction and persistence} By virtue of the large time estimates for the solution of \eqref{lsir1}-\eqref{lsir3} and its diffusion and jump components obtained in Section~\ref{sec2}, in this section we determine the threshold behavior of the stochastic SIR epidemic model. \\ In Theorems~\ref{t4.1} and \ref{t4.2} below, the extinction and persistence of the disease are characterized by means of the basic reproduction number $\mathcal{\widebar{R}}_0$ in \eqref{rn}, which shows that the additional environmental noise induced by L\'evy jumps can limit the outbreak of the disease.
In the sequel, we let \begin{equation} \nonumber 0 < \beta_i :=\frac{1}{2}\varrho_{i,i}+\int_{\real^3 \setminus\{0\}} \left( \gamma_i(z)-\log (1+\gamma_i(z)) \right) \nu(dz),\quad i=1,2,3, \end{equation} which is finite under $(H_2)$, and we consider the following condition: \vspace*{0.2cm}\textbf{$(H_5)$} ~$\displaystyle \int_{\real^3 \setminus\{0\}} \big( \log (1+\gamma_i(z)) \big)^2\nu(dz)<\infty$, \ $i=1,2,3$. \bigskip \noindent We note that the basic reproduction number $ \mathcal{\widebar{R}}_0$ becomes lower in the presence of jumps. \begin{theorem}\label{t4.1} {\em (Extinction)}. Assume that $(H_1)$-$(H_2)$, $(H_3^{(p)})$-$(H_4^{(p)})$ and $(H_5)$ hold for some $p>2$. If in addition \begin{equation}\notag \mathcal{\widebar{R}}_0 :=\mathcal{R}_0-\frac{\beta_2}{\mu+\varepsilon +\eta} < 1, \end{equation} then for any initial condition $(S_0,I_0,R_0)\in\real _+^3$, the disease vanishes with probability one in large time, i.e. the solution $(S_t,I_t,R_t)$ of \eqref{lsir1}-\eqref{lsir3} satisfies \begin{equation}\notag \lim\limits_{t\rightarrow\infty}\langle S\rangle_t=\frac{\Lambda}{\mu}, \quad \lim\limits_{t\rightarrow\infty}I_t=0 \mbox{ ~and~ } \lim\limits_{t\rightarrow\infty}R_t=0, \quad \mathbb{P}\mbox{-}a.s. \end{equation} \end{theorem} The proof of Theorem~\ref{t4.1} follows the lines of the proof of Theorem~2.1 in \cite{wulia}, up to the new Condition~$(H_3^{(p)})$ which allows for infinite L\'evy measures in Lemma~\ref{l3.1}. For reference, the proof of Theorem~\ref{t4.1} is stated in the Appendix. \\ Next, we consider the persistence of the system \eqref{lsir1}-\eqref{lsir3}.
We recall that the system \eqref{lsir1}-\eqref{lsir3} is said to be persistent in the mean if \begin{equation}\notag \liminf\limits_{t\rightarrow\infty}\frac{1}{t}\int_0^tS_rdr>0,\quad \liminf\limits_{t\rightarrow\infty}\frac{1}{t}\int_0^tI_rdr>0, \quad \liminf\limits_{t\rightarrow\infty}\frac{1}{t}\int_0^tR_rdr>0, \quad \mathbb{P}\mbox{-}a.s.\end{equation} In Theorem~\ref{t4.2} we explore conditions for the disease to be endemic, in other words, sufficient conditions for the persistence in the mean of the infected population $I_t$. \begin{theorem}\label{t4.2} {\em (Persistence)}. Assume that $(H_1)$-$(H_2)$, $(H_3^{(p)})$-$(H_4^{(p)})$ and $(H_5)$ hold for some $p>2$. If in addition \begin{equation}\notag \mathcal{\widebar{R}}_0 :=\mathcal{R}_0-\frac{\beta_2}{\mu+\varepsilon+\eta} > 1, \end{equation} then for any initial condition $(S_0,I_0,R_0)\in\real _+^3$, the solution $(S_t,I_t,R_t)$ of \eqref{lsir1}-\eqref{lsir3} satisfies $$ \lim\limits_{t\rightarrow\infty}\langle S\rangle_t=S^\ast+\frac{\beta_2}{\beta}, \quad \lim\limits_{t\rightarrow\infty}\langle I\rangle_t=\frac{\mu}{\beta}(\mathcal{\widebar{R}}_0-1), \quad \lim\limits_{t\rightarrow\infty}\langle R\rangle_t=\frac{\eta}{\beta}(\mathcal{\widebar{R}}_0-1), \quad \mathbb{P}\mbox{-}a.s., $$ where $S^\ast := ( \mu+\varepsilon+\eta ) / \beta$ is the equilibrium value for the susceptible population $S_t$ in the corresponding deterministic SIR model. \end{theorem} For reference, the proof of Theorem~\ref{t4.2} is stated in the Appendix. It follows the lines of the proof of Theorem~3.1 in \cite{wulia}, up to the use of Lemma~\ref{Lemma 2.2} (in the Appendix) which extends Lemma~2 of \cite{mazhien} to discontinuous functions. \section{Numerical experiments} \label{sec4} In this section, we provide numerical simulations for the behavior of \eqref{lsir1}-\eqref{lsir3} using tempered stable processes.
The (compensated) one-dimensional tempered stable L\'evy process $$ Y(t) = \int_0^t \int_{\real \setminus\{0\}} z \tilde{N}(ds,dz), \qquad t\in \real_+, $$ is defined by its L\'evy measure on $\real \setminus \{0\}$ given by \begin{equation} \label{measure3} \nu (dz ) = \frac{k_-}{|z|^{\alpha +1}} \re^{-\lambda_- |z| } \mathbb{I}_{\{z<0\}} dz + \frac{k_+}{z^{\alpha +1}} \re^{-\lambda_+ z } \mathbb{I}_{\{z>0\}} dz, \end{equation} where $k_-,k_+,\lambda_-,\lambda_+ > 0 $ and $\alpha \in (0,2)$. As $\nu (\real )=\infty$, the tempered stable process $(Y(t))_{t\in \real_+}$ is not covered by the proof arguments of \cite{amllevy}, \cite{wulia} and \cite{xiaobing}; in particular, the quantity defined in \eqref{lmbda} is not finite in this case. \subsubsection*{Random simulations} We use the simulation algorithm of \cite{rosinski4} for the tempered stable process with L\'evy measure \eqref{measure3}. Let $(\epsilon_j)_{j\geq 1}$ be an independent and identically distributed (i.i.d.) sequence of $\{ -\lambda_-, \lambda_+\}$-valued Bernoulli random variables with distribution $(k_-/(k_-+k_+), k_+/(k_-+k_+))$, let $(\xi_j)_{j\geq 1}$ be an i.i.d. uniform $U(0,1)$ random sequence, and let $(\eta_j)_{j\geq 1}$, $(\eta^\prime_j)_{j\geq 1}$ be i.i.d. exponentially distributed random sequences with parameter $1$, with $\Gamma_j:=\eta^\prime_1+\cdots+\eta^\prime_j$, $j\geq 1$. We also let $(u_j)_{j\geq 1}$ denote an i.i.d. sequence of uniform random variables on $[0,T]$, where $T>0$, and assume that the sequences $(\epsilon_j)_{j\geq 1}$, $(\xi_j)_{j\geq 1}$, $(\eta_j)_{j\geq 1}$, $(\eta^\prime_j)_{j\geq 1}$, and $(u_j)_{j\geq 1}$ are mutually independent. By Theorem~5.3 in \cite{rosinski4}, the tempered stable process $Y (t)$ with L\'evy measure \eqref{measure3} admits the following representations.
\\ \noindent $(i)$ If $\alpha\in(0,1)$, set \begin{equation} \nonumber Y(t)=\sum_{j=1}^\infty\mathbb{I}_{(0,t]}(u_j)\min\left\{\left(\frac{T(k_-+k_+)}{\alpha\Gamma_j}\right)^{1/\alpha} \hskip-0.2cm , \frac{\eta_j}{|\epsilon_j|} \xi_j^{1/\alpha} \right\}\frac{\epsilon_j}{|\epsilon_j|}, \qquad t\in[0,T]. \end{equation} \noindent $(ii)$ If $\alpha\in(1,2)$, set \begin{equation} \nonumber Y(t) = \sum_{j=1}^\infty\left(\mathbb{I}_{(0,t]}(u_j) \min\left\{\left(\frac{k_-+k_+}{\alpha\Gamma_j/T}\right)^{1/\alpha} \hskip-0.2cm , \frac{\eta_j }{|\epsilon_j|} \xi_j^{1/\alpha} \right\}\frac{\epsilon_j}{|\epsilon_j|}- x_0 \frac{t}{T}\left(\frac{k_-+k_+}{\alpha j/T}\right)^{1/\alpha}\right)+tb_T, \end{equation} $t\in[0,T]$, with $x_0= ( k_--k_+ ) / (k_-+k_+)$, $x_1=k_+ \lambda_+^{-1-\alpha}-k_-\lambda_-^{-1-\alpha}$, and $$ b_T:= \frac{x_0}{T} \zeta\left( \frac{1}{\alpha} \right) \left( \frac{T(k_-+k_+)}{\alpha} \right)^{1/\alpha} - x_1 \Gamma(1-\alpha), $$ where $\zeta$ is the Riemann zeta function. \begin{figure}[H] \centering \hskip-0.2cm \begin{subfigure}{.48\textwidth} \centering \includegraphics[width=1.\textwidth]{alpha02} \caption{\small Tempered stable process with $\alpha =0.2$} \end{subfigure} \hskip0.5cm \begin{subfigure}{.48\textwidth} \centering \includegraphics[width=1.\textwidth]{alpha17} \caption{\small Tempered stable process with $\alpha =1.7$} \end{subfigure} \caption{\small Simulated sample paths of the tempered stable process.} \end{figure} \noindent Next, we take $\gamma_i(z):= \sigma_i z_i$ with $\sigma_i>0$, $i=1,2,3$, and we consider the system \begin{subequations} \begin{empheq}[left=\empheqlbrace]{align} \nonumber dS_t&= ( \Lambda-\beta S_tI_t-\mu S_t ) dt+S_tdB^\varrho_1 (t)+ \sigma_1 S_tdY(t), \\[7pt] \nonumber dI_t&= ( \beta S_tI_t-(\mu+\varepsilon+\eta)I_t ) dt+I_tdB^\varrho_2 (t)+ \sigma_2 I_tdY(t), \\[7pt] \nonumber dR_t&= ( \eta I_t-\mu R_t ) dt+R_tdB^\varrho_3 (t)+ \sigma_3 R_tdY(t), \end{empheq} \end{subequations} i.e. 
\eqref{sdjakl} reads $Z_i(t) = B^\varrho_i (t)+Y(t)$, $i =1,2,3$, and $(Y(t))_{t\in \real_+}$ is a one-sided tempered stable process with $k_-=0$ in \eqref{measure3}. We note that $(H_1)$-$(H_2)$, $(H_5)$ are satisfied, and that $(H_4^{(p)})$ holds for all $\alpha \in (0,1)$ and $p>1$. In addition, letting $\overline{\sigma} := \max ( \sigma_1,\sigma_2,\sigma_3)$, the quantity \begin{eqnarray*} \lambda (p) & = & c_p \overline{\sigma}^2 \int_{\real^3 \setminus\{0\}} z^2 {\nu}(dz) + c_p \overline{\sigma}^p \int_{\real^3 \setminus\{0\}} z^p {\nu}(dz) \\ & = & c_p k_+ \overline{\sigma}^2 \frac{\Gamma(2-\alpha)}{\lambda_+^{2-\alpha}} + c_p k_+ \overline{\sigma}^p \frac{\Gamma(p-\alpha)}{\lambda_+^{p-\alpha}} \end{eqnarray*} in \eqref{lmbda2} is finite when $p > \alpha$, where $\Gamma(\cdot)$ is the Gamma function. We note that the variance $k_+ t \Gamma(2-\alpha)/\lambda_+^{2-\alpha}$ of the one-sided tempered stable process $Y(t)$ increases with $\alpha$ over the values of $\alpha \in (0,1)$ and parameters considered below. \bigskip \noindent First, we take $\alpha=0.7$, $\lambda_+=1.2$, $k_+=2.8$ with the initial condition $(S_0,I_0,R_0)=(1.6,0.4,0.04)$ and the parameters $\Lambda=8$, $\mu=5.3$, $\beta=4.8$, $\eta=1$ and $\varepsilon=0.5$. The covariance matrix is set at $\varrho=10^{-2} \left(\begin{array}{ccc} 4 & 3.2 & 3.0 \\ 3.2 & 4 & 3.84 \\ 3 & 3.84 & 4.69 \end{array}\right)$, with $\sigma_1=0.2$, $\sigma_2=0.8$ and $\sigma_3=0.5$, in which case Condition~$(H_3^{(2)})$ is also satisfied by Remark~\ref{r0}.
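For concreteness, the series representation $(i)$ can be sketched numerically by truncating the series after finitely many terms. The following Python sketch is our own illustration (the function name, the truncation level \texttt{n\_terms} and the seeding are assumptions, not part of \cite{rosinski4}):

```python
import numpy as np

def tempered_stable_path(T, alpha, k_plus, lam_plus, k_minus=0.0, lam_minus=1.0,
                         n_terms=10000, seed=0):
    """Truncated series representation (i) of a tempered stable process on [0, T],
    for alpha in (0, 1); returns a function t -> Y(t)."""
    rng = np.random.default_rng(seed)
    k_tot = k_minus + k_plus
    # epsilon_j takes the values -lambda_- and lambda_+ with probabilities
    # k_-/(k_-+k_+) and k_+/(k_-+k_+)
    eps = rng.choice([-lam_minus, lam_plus], size=n_terms,
                     p=[k_minus / k_tot, k_plus / k_tot])
    xi = rng.uniform(size=n_terms)                   # xi_j ~ U(0,1)
    eta = rng.exponential(size=n_terms)              # eta_j ~ Exp(1)
    gam = np.cumsum(rng.exponential(size=n_terms))   # Gamma_j: unit Poisson arrival times
    u = rng.uniform(0.0, T, size=n_terms)            # u_j ~ U(0,T)
    # jump sizes: min of the stable term and the exponentially tempered term
    jumps = np.minimum((T * k_tot / (alpha * gam)) ** (1.0 / alpha),
                       eta / np.abs(eps) * xi ** (1.0 / alpha)) * np.sign(eps)
    return lambda t: float(jumps[u <= t].sum())

Y = tempered_stable_path(T=1.0, alpha=0.7, k_plus=2.8, lam_plus=1.2)
```

For $k_-=0$ all summands are nonnegative, so the truncated path is nondecreasing in $t$; increasing \texttt{n\_terms} refines the approximation of the small jumps.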
\vskip-0.2cm \begin{figure}[H] \centering \hskip-0.2cm \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\textwidth]{1101extI7} \caption{\small Extinction of the infected population} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\textwidth]{1101extR7} \caption{\small Extinction of the recovered population} \end{subfigure} \caption{\small Disease extinction in the epidemic population model with $\alpha = 0.7$.} \label{fig2} \end{figure} \vspace{-0.2cm} \noindent We note that the deterministic system is persistent as $\mathcal{R}_0=1.0655>1$, with the positive equilibrium value $E^\ast=(S^\ast,I^\ast,R^\ast)=(1.417, 0.0723, 0.0136)$. On the other hand, for the stochastic system with $\alpha = 0.7$ we have $\mathcal{\widebar{R}}_0=0.9976<1$, and disease extinction is induced by the jump noise with \begin{equation}\notag \lim\limits_{t\rightarrow\infty}\langle S\rangle_t=\frac{\Lambda}{\mu} \approx 1.51, \quad \lim\limits_{t\rightarrow\infty}I_t=0,\quad \lim\limits_{t\rightarrow\infty}R_t=0, \quad \mathbb{P}\mbox{-}a.s. \end{equation} according to Theorem~\ref{t4.1}, see Figures~\ref{fig2}-\ref{fig3}. \vspace{-0.4cm} \begin{figure}[H] \centering \hskip-1.4cm \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1.05\textwidth]{1101extS7} \caption{\small Dynamical behavior of $S(t)$} \end{subfigure} \hskip-0.5cm \begin{subfigure}{.5\textwidth} \centering \vskip-0.1cm \includegraphics[width=1.2\textwidth,height=7.05cm]{07junzhi} \vskip-0.5cm \caption{\small Time averages of $S(t)$, $I(t)$ and $R(t)$} \end{subfigure} \caption{\small Disease extinction in the epidemic population model with $\alpha = 0.7$.} \label{fig3} \end{figure} \vspace{-0.3cm} \noindent We also note that the tempered stable model generates jumps of large size which can model sudden disease outbreaks.
Next, we decrease the value of the index to $\alpha=0.2$ and keep the initial condition and the other parameter values unchanged, in which case Condition~$(H_3^{(2)})$ still holds true and $\mathcal{\widebar{R}}_0=1.00767>1$. \begin{figure}[H] \centering \hskip-0.2cm \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\textwidth]{1101perI2} \caption{\small Persistence of the infected population} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\textwidth]{1101perR2} \caption{\small Persistence of the recovered population} \end{subfigure} \caption{\small Persistence in the epidemic population model with $\alpha = 0.2$.} \label{fig4} \end{figure} \vspace{-0.2cm} \noindent Based on Theorem~\ref{t4.2}, the solution $(S_t,I_t,R_t)$ of the stochastic system \eqref{lsir1}-\eqref{lsir3} satisfies $\lim\limits_{t\rightarrow\infty}\langle S\rangle_t=1.49$, $\lim\limits_{t\rightarrow\infty}\langle I\rangle_t=0.0085$, $\lim\limits_{t\rightarrow\infty}\langle R\rangle_t=0.0016$. The system is persistent and the disease becomes endemic, as illustrated in Figures~\ref{fig4}-\ref{fig5}.
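The threshold values used in these experiments can be checked numerically. The following Python sketch is our own illustration; it evaluates $\beta_2$ by numerical quadrature for $\gamma_2(z)=\sigma_2 z$ under the one-sided measure \eqref{measure3} with $k_-=0$, assuming SciPy's quadrature routine is available:

```python
import math
from scipy.integrate import quad

# Model parameters from the experiments above
Lam, mu, beta, eta, eps = 8.0, 5.3, 4.8, 1.0, 0.5
k_plus, lam_plus = 2.8, 1.2
rho22, sigma2 = 0.04, 0.8    # varrho_{2,2} and sigma_2 from the covariance setup

def R0_bar(alpha):
    """Basic reproduction number R0 - beta_2/(mu+eps+eta) for gamma_2(z) = sigma2*z."""
    integrand = lambda z: ((sigma2 * z - math.log1p(sigma2 * z))
                           * k_plus * z ** (-alpha - 1.0) * math.exp(-lam_plus * z))
    jump_part, _ = quad(integrand, 0.0, math.inf)
    beta2 = 0.5 * rho22 + jump_part
    R0 = beta * Lam / (mu * (mu + eps + eta))
    return R0 - beta2 / (mu + eps + eta)

print(R0_bar(0.7))   # below 1: extinction regime
print(R0_bar(0.2))   # above 1: persistence regime
```

The output is consistent with the values $\mathcal{\widebar{R}}_0=0.9976$ for $\alpha=0.7$ and $\mathcal{\widebar{R}}_0=1.00767$ for $\alpha=0.2$ reported above.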
\vspace{-0.3cm} \begin{figure}[H] \centering \hskip-1.4cm \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1.05\textwidth]{1101perS2} \caption{\small Dynamical behavior of $S(t)$} \end{subfigure} \hskip-0.5cm \begin{subfigure}{.5\textwidth} \centering \vskip-0.2cm \includegraphics[width=1.2\textwidth,height=6.8cm]{02junzhi} \vskip-0.5cm \caption{\small Time averages of $S(t)$, $I(t)$ and $R(t)$} \end{subfigure} \caption{\small Persistence in the epidemic population model with $\alpha = 0.2$.} \label{fig5} \end{figure} \vspace{-0.2cm} \noindent Finally, we consider a pure jump model with two different values $\alpha^{(1)}$ and $\alpha^{(2)}$ and L\'evy measures $\nu^{(1)} (dz)$ and $\nu^{(2)} (dz)$ given by \eqref{measure3} as $$ \nu^{(j)} (dz ) = \frac{ k_+}{z^{\alpha^{(j)} +1}} \re^{-\lambda_+ z } dz, \qquad j = 1,2, $$ while normalizing the jump size variances $$ \big( \sigma^{(1)}_i\big)^2 \int_{\real^3 \setminus\{0\}} z^2 {\nu}^{(1)}(dz) = \big( \sigma^{(2)}_i\big)^2 \int_{\real^3 \setminus\{0\}} z^2 {\nu}^{(2)}(dz) $$ to the same level in both cases, i.e. $$ k_+ \big(\sigma^{(1)}_i\big)^2 \frac{\Gamma(2-\alpha^{(1)})}{\lambda_+^{2-\alpha^{(1)}}} = k_+ \big(\sigma^{(2)}_i\big)^2 \frac{\Gamma(2-\alpha^{(2)})}{\lambda_+^{2-\alpha^{(2)}}} $$ with $k_+=2.8$, $\lambda_+=1.2$. When $\alpha^{(1)}=0.2$ we take $\sigma_1^{(1)}=0.2$, $\sigma_2^{(1)}=0.8$ and $\sigma_3^{(1)}=0.5$, in which case we have $\mathcal{\widebar{R}}_0=1.01>1$ and both $I(t)$ and $R(t)$ are persistent.
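As a check of this normalization (our own illustration), the matched volatilities $\sigma^{(2)}_i$ can be recovered in Python from the variance factor $\Gamma(2-\alpha)/\lambda_+^{2-\alpha}$:

```python
from math import gamma

lam_plus = 1.2
a1, a2 = 0.2, 0.9   # alpha^(1) and alpha^(2)

def var_factor(alpha):
    # per-unit-time jump variance factor Gamma(2-alpha)/lambda_+^(2-alpha)
    return gamma(2.0 - alpha) / lam_plus ** (2.0 - alpha)

# sigma_i^(2) = sigma_i^(1) * sqrt(factor(alpha1)/factor(alpha2)) equalizes the variances
scale = (var_factor(a1) / var_factor(a2)) ** 0.5
sigmas_1 = (0.2, 0.8, 0.5)
sigmas_2 = tuple(round(s * scale, 4) for s in sigmas_1)
print(sigmas_2)   # (0.1857, 0.7426, 0.4641)
```

This reproduces the values $\sigma_1^{(2)}=0.1857$, $\sigma_2^{(2)}=0.7426$ and $\sigma_3^{(2)}=0.4641$ used below.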
\begin{figure}[H] \centering \hskip-0.2cm \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\textwidth]{case2perI} \caption{\small Persistence of $I(t)$ for $\alpha^{(1)} =0.2$} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\textwidth]{case2I} \caption{\small Extinction of $I(t)$ for $\alpha^{(2)} = 0.9$} \end{subfigure} \caption{\small Behavior of the infected population for two different values of $\alpha$.}\label{fig6} \end{figure} \noindent When $\alpha^{(2)}=0.9$ we take $\sigma_1^{(2)}=0.1857$, $\sigma_2^{(2)}=0.7426$ and $\sigma_3^{(2)}=0.4641$, in which case we have $\mathcal{\widebar{R}}_0=0.99<1$, and both $I(t)$ and $R(t)$ become extinct, showing that persistence and extinction can depend on the shape of the jump size distribution for a given variance level, see Figures~\ref{fig6}-\ref{fig7}. In particular, the presence of larger positive jumps for small values of $\alpha$ can result in persistence of the disease. \begin{figure}[H] \centering \hskip-0.2cm \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\textwidth]{case2perR} \caption{\small Persistence of $R(t)$ for $\alpha^{(1)} = 0.2$} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\textwidth]{case2R} \caption{\small Extinction of $R(t)$ for $\alpha^{(2)} = 0.9$} \end{subfigure} \caption{\small Behavior of the recovered population for two different values of $\alpha$.}\label{fig7} \end{figure} \section{Appendix} This section is devoted to proofs whose arguments are similar to those in the literature, see \cite{wulia}. The next Lemma~\ref{Lemma 2.2}, which extends Lemma~2 of \cite{mazhien} to possibly discontinuous functions $f$, is needed for the proof of Theorem~\ref{t4.2}. \begin{lemma}\label{Lemma 2.2} Let $f: \real_+ \to \real_+$ be a function integrable on any interval $[0,t]$, $t>0$, and let $\Phi : \real_+ \to \real$ be a function such that $\lim\limits_{t\rightarrow\infty} ( \Phi(t) / t ) =0$. 
\begin{enumerate}[i)] \item Assume that there exist constants $\rho_0 \geq 0$, $T\geq0$ and $\rho \in \real$ such that \begin{equation}\notag \log f(t)\leq\rho t-\rho_0\int_0^tf(r)dr+\Phi(t), \quad \mathbb{P}\mbox{-}a.s. \end{equation} for all $t\geq T$. Then we have $$ \left\{ \begin{array}{ll} \displaystyle \limsup\limits_{t\rightarrow\infty}\frac{1}{t}\int_0^tf(r)dr\leq\frac{\rho}{\rho_0}, \quad \mathbb{P}\mbox{-}a.s. & \mbox{if } \quad \rho\geq0; \\ \\ \displaystyle \lim\limits_{t\rightarrow\infty}f(t)=0, \quad \mathbb{P}\mbox{-}a.s. & \mbox{if } \quad \rho<0. \end{array} \right. $$ \item Assume that there exist positive constants $\rho$, $\rho_0$, and $T\geq0$ such that \begin{equation}\notag \log f(t)\geq\rho t-\rho_0\int_0^tf(r)dr+\Phi(t), \quad \mathbb{P}\mbox{-}a.s. \end{equation} for all $t\geq T$. Then we have \begin{equation}\notag \liminf\limits_{t\rightarrow\infty}\frac{1}{t}\int_0^tf(r)dr\geq\frac{\rho}{\rho_0}, \quad \mathbb{P}\mbox{-}a.s. \end{equation} \end{enumerate} \end{lemma} \begin{Proof} Define $$F(t)=\int_0^tf(r)dr.$$ By the integrability of $f$, the Lebesgue differentiation theorem (see e.g. Theorem 1.6.11 in \cite{tao}) shows that $F(t)$ is continuous and almost everywhere differentiable, with $$\frac{dF(t)}{dt}=f(t)$$ for almost every $t\geq0$. The rest of the proof is the same as that of Lemma 2 of \cite{mazhien}, and is omitted. \end{Proof} \begin{Proofy}\ref{t4.1}. The existence and uniqueness results for stochastic differential equations driven by L\'evy processes in \cite{applebk2} ensure the integrability of $S_t$, $I_t$ and $R_t$ on any bounded interval $[0,T]$. In view of \eqref{lsir1}-\eqref{lsir3}, we deduce \begin{eqnarray*} \lefteqn{ \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! 
\frac{S_t-S_0}{t}+\frac{I_t-I_0}{t} = \Lambda-\mu\langle S\rangle_t-(\mu+\varepsilon+\eta)\langle I\rangle_t+\frac{1}{t}\int_0^tS_rdB^\varrho_1(r)+\frac{1}{t}\int_0^tI_rdB^\varrho_2(r) } \\ & &+\frac{1}{t}\int_0^tS_{r^-}\int_{\real^3 \setminus\{0\}}\gamma_1(z)\tilde{N}(dr,dz) +\frac{1}{t}\int_0^tI_{r^-}\int_{\real^3 \setminus\{0\}}\gamma_2(z)\tilde{N}(dr,dz) , \end{eqnarray*} which yields \begin{equation}\label{relation} \mu\langle S\rangle_t+(\mu+\varepsilon+\eta)\langle I\rangle_t=\Lambda-\varphi (t), \end{equation} where \begin{eqnarray*} \varphi (t) & :=& \frac{S_t-S_0}{t}+\frac{I_t-I_0}{t}-\frac{1}{t}\int_0^tS_rdB^\varrho_1(r)-\frac{1}{t}\int_0^tI_rdB^\varrho_2(r)\\ & &-\frac{1}{t}\int_0^tS_{r^-}\int_{\real^3 \setminus\{0\}}\gamma_1(z)\tilde{N}(dr,dz) -\frac{1}{t}\int_0^tI_{r^-}\int_{\real^3 \setminus\{0\}}\gamma_2(z)\tilde{N}(dr,dz). \end{eqnarray*} By the It\^{o} formula for L\'evy-type stochastic integrals (see Theorem~1.16 in \cite{yiteng}) and \eqref{relation}, letting \begin{equation} \label{m2t} M_2(t):=\int_0^t\int_{\real^3 \setminus\{0\}}\log (1+\gamma_2(z))\tilde{N}(ds,dz) \end{equation} we have \begin{eqnarray} \nonumber \log I_t & = & \log I_0+\beta\int_0^tS_rdr-(\mu+\varepsilon+\eta)t-\beta_2t +B^\varrho_2(t)+M_2(t) \\ \nonumber &=&\log I_0+\left( \beta\frac{\Lambda}{\mu}-(\mu+\varepsilon+\eta + \beta_2 ) \right) t -\frac{\beta(\mu+\varepsilon+\eta)}{\mu}\int_0^tI_rdr \\ \label{lnI} & & -\frac{\beta}{\mu}t\varphi (t) +B^\varrho_2(t) + M_2(t) \\ \nonumber &\leq&\log I_0+ (\mu+\varepsilon+\eta)(\mathcal{\widebar{R}}_0-1) t -\frac{\beta}{\mu}t\varphi (t) +B^\varrho_2(t) +M_2(t). \end{eqnarray} We deduce from Lemmas~\ref{l3.1}, \ref{l3.3} and \ref{l3.4} that \begin{equation} \label{fai} \lim\limits_{t\rightarrow\infty}\varphi (t)=0, \quad \mathbb{P}\mbox{-}a.s. 
\end{equation} In addition, under $(H_5)$ we have \begin{eqnarray*} \int_0^t\frac{d\langle M_2,M_2\rangle(r)}{(1+r)^2} & = & \int_0^t\frac{1}{(1+r)^2} dr \int_{\real^3 \setminus\{0\}} \big( \log (1+\gamma_2(z)) \big)^2 \nu(dz) \\ & = & \frac{t}{1+t}\int_{\real^3 \setminus\{0\}} \big( \log (1+\gamma_2(z)) \big)^2\nu(dz)<+\infty, \quad t\in \real_+, \end{eqnarray*} hence by the law of large numbers for local martingales (see Theorem~1 in \cite{dashudinglvlevy}) we have \begin{equation}\label{yang} \lim\limits_{t\rightarrow\infty}\frac{M_2(t)}{t}=0, \quad \mathbb{P}\mbox{-}a.s. \end{equation} By the law of large numbers (see Theorem~3.4 in Chapter~1 of \cite{mao2008}) we also get \begin{equation} \label{**} \lim\limits_{t\rightarrow\infty}\frac{B^\varrho_2(t)}{t}=0, \quad \mathbb{P}\mbox{-}a.s. \end{equation} Therefore, by \eqref{lnI}, if $\mathcal{\widebar{R}}_0<1$ we have \begin{equation}\nonumber \limsup\limits_{t\rightarrow\infty}\frac{\log I_t}{t}\leq (\mu+\varepsilon+\eta)(\mathcal{\widebar{R}}_0-1)<0, \quad \mathbb{P}\mbox{-}a.s., \end{equation} which, together with the positivity of $I_t$, implies \begin{equation}\label{ID} \lim\limits_{t\rightarrow\infty}I_t=0, \quad \mathbb{P}\mbox{-}a.s. \end{equation} In other words, the disease goes to extinction with probability one. Furthermore, from \eqref{relation} we obtain \begin{equation}\notag \lim\limits_{t\rightarrow\infty}\langle S\rangle_t=\frac{\Lambda}{\mu}, \quad \mathbb{P}\mbox{-}a.s. 
\end{equation} We derive from \eqref{lsir3} that \begin{equation}\notag \frac{R_t-R_0}{t}=-\frac{\mu}{t}\int_0^tR_rdr+\frac{\eta}{t}\int_0^tI_rdr+ \frac{1}{t}\int_0^tR_rdB^\varrho_3(r)+\frac{1}{t}\int_0^tR_{r^-}\int_{\real^3 \setminus\{0\}}\gamma_3(z)\tilde{N}(dr,dz), \end{equation} and taking limits on both sides yields \begin{eqnarray}\label{limr} \nonumber \mu \lim\limits_{t\rightarrow\infty}\frac{1}{t}\int_0^tR_rdr &=&\eta \lim\limits_{t\rightarrow\infty}\frac{1}{t}\int_0^tI_rdr-\lim\limits_{t\rightarrow\infty}\frac{R_t}{t} +\lim\limits_{t\rightarrow\infty}\frac{1}{t}\int_0^tR_rdB^\varrho_3(r)\\ & & +\lim\limits_{t\rightarrow\infty}\frac{1}{t}\int_0^tR_{r^-}\int_{\real^3 \setminus\{0\}}\gamma_3(z)\tilde{N}(dr,dz), \quad \mathbb{P}\mbox{-}a.s. \end{eqnarray} Together with \eqref{ID} and the conclusions of Lemmas~\ref{l3.1}, \ref{l3.3} and \ref{l3.4}, we conclude that \begin{equation}\notag \lim\limits_{t\rightarrow\infty}R_t=0, \quad \mathbb{P}\mbox{-}a.s.\end{equation} \end{Proofy} \begin{Proofy}\ref{t4.2}. By \eqref{relation} we deduce that \begin{equation}\notag \beta\langle S\rangle_t=\frac{\beta\Lambda}{\mu}-\frac{\beta}{\mu}\varphi (t)-\frac{\beta(\mu+\varepsilon+\eta)}{\mu}\langle I\rangle_t. \end{equation} It then follows from \eqref{lnI} that \begin{eqnarray*} \log I_t&=&\log I_0+\beta\int_0^tS_rdr-(\mu+\varepsilon+\eta + \beta_2 ) t \\ & &+B^\varrho_2(t)+\int_0^t\int_{\real^3 \setminus\{0\}}\log (1+\gamma_2(z))\tilde{N}(ds,dz)\\ & = & \left( \beta\frac{\Lambda}{\mu}-(\mu+\varepsilon+\eta)-\beta_2 \right) t -\frac{\beta(\mu+\varepsilon+\eta)}{\mu}\int_0^tI_rdr + \Psi (t), \end{eqnarray*} where we denote $$ \Psi(t):=\log I_0-\frac{\beta}{\mu}t\varphi (t)+B^\varrho_2(t)+M_2(t), \qquad t\in \real_+, $$ and $M_2(t)$ is defined as in \eqref{m2t}. From \eqref{fai}, \eqref{yang} and \eqref{**} it follows that $\lim\limits_{t\rightarrow\infty}(\Psi(t)/t)=0$, hence applying Lemma \ref{Lemma 2.2} to the function $f(t):=I_t$, which is a.s. 
integrable over $[0,T]$, $T>0$, we obtain \begin{equation}\nonumber \lim\limits_{t\rightarrow\infty}\langle I\rangle_t=\frac{\mu\left(\beta \Lambda / \mu -(\mu+\varepsilon+\eta)-\beta_2 \right)}{\beta(\mu+\varepsilon+\eta)}=\frac{\mu}{\beta}(\mathcal{\widebar{R}}_0-1), \quad \mathbb{P}\mbox{-}a.s. \end{equation} Consequently, on account of \eqref{relation} and \eqref{fai} we get \begin{equation}\nonumber \lim\limits_{t\rightarrow\infty}\langle S\rangle_t=\frac{\Lambda}{\mu}-\frac{(\mu+\varepsilon+\eta)}{\beta} (\mathcal{\widebar{R}}_0-1) =S^\ast+\frac{\beta_2}{\beta}, \quad \mathbb{P}\mbox{-}a.s., \end{equation} and it follows from \eqref{limr} and Lemmas~\ref{l3.1}, \ref{l3.3} and \ref{l3.4} that \begin{equation}\nonumber \lim\limits_{t\rightarrow\infty}\langle R\rangle_t=\frac{\eta}{\beta}(\mathcal{\widebar{R}}_0-1), \quad \mathbb{P}\mbox{-}a.s. \end{equation} \end{Proofy} \subsubsection*{Conclusion} In this paper, we consider a stochastic version of the SIR epidemic model \eqref{lsir1}-\eqref{lsir3}, driven by correlated Brownian and L\'{e}vy jump components with heavy tailed increments. We present new solution estimates using the parameter $\lambda(p)$ defined in \eqref{lmbda2} and Kunita's inequality for jump processes in the key Lemmas~\ref{l3.1} and \ref{l3.4}. Our approach relaxes the restriction on the finiteness of the L\'{e}vy measure $\nu(dz)$ imposed in \cite{amllevy} and \cite{wulia}, and our definition of the parameter $\lambda(p)$ in \eqref{lmbda2} applies to a wider range of L\'{e}vy measures. In Theorems~\ref{t4.1} and \ref{t4.2} we derive the basic reproduction number $\mathcal{\widebar{R}}_0$ which characterizes the extinction and persistence properties of the stochastic epidemic system \eqref{lsir1}-\eqref{lsir3}. 
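The role of Lemma~\ref{Lemma 2.2} can also be illustrated in isolation: if $\log f(t)=\log f(0)+\rho t-\rho_0\int_0^t f(r)dr$ holds with equality (so that both parts apply, with $\Phi(t)=\log f(0)$), differentiation gives the logistic equation $f'=f(\rho-\rho_0 f)$, and for $\rho,\rho_0>0$ the time average $\frac{1}{t}\int_0^t f(r)dr$ converges to $\rho/\rho_0$. A short numerical check (Python, explicit Euler; the parameter values are illustrative):

```python
# Lemma illustration: log f(t) = log f(0) + rho*t - rho0 * int_0^t f(r) dr
# is equivalent to the logistic ODE f' = f*(rho - rho0*f); for rho > 0 the
# time average (1/T) int_0^T f(r) dr tends to rho/rho0.
rho, rho0 = 1.0, 2.0
f, F, dt, T = 0.1, 0.0, 0.01, 200.0
t = 0.0
while t < T:
    F += f * dt                       # running integral of f
    f += f * (rho - rho0 * f) * dt    # explicit Euler step
    t += dt
print(F / T)  # close to rho/rho0 = 0.5
```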
As an illustration we present numerical simulations based on tempered stable processes, showing that the additional presence of jumps and the level of the index $\alpha \in (0,1)$ can have a significant influence on the dynamical behavior of the epidemic system. 
\section{\label{sec:introduction} Introduction} In 2015, Protesescu \emph{et al.}~\cite{protesescu-15-psk} reported a new class of semiconductor nanocrystal (NC) materials, all-inorganic lead-halide perovskites CsPbX$_{3}$ (X = Cl, Br, I), having exceptional optoelectronic properties. These NCs emit and absorb strongly, are free from blinking, and the emission frequency can be tuned over the whole visible range by varying the NC size and the halide composition X (including mixtures of different halides)~\cite{protesescu-15-psk,becker-18-psk}. This has led to numerous recent applications~\cite{chaudhary-21-psk} to light-emitting diodes~\cite{li-16-psk,deng-16-psk}, lasers~\cite{yakunin-15-psk,pan-15-psk}, and single-photon sources~\cite{utzat-19-psk}, among others. The typical size range of the synthesized NCs is 6--16~nm~\cite{protesescu-15-psk,chen-17-psk,brennan-17-psk,becker-18-psk}, which is comparable to or greater than the semiconductor Bohr diameter ($2a_{B}\approx6$~nm for CsPbBr$_{3}$; see Sec.~\ref{subsec:parameters}). Consequently, excitons in these NCs are in the regime of intermediate confinement, with partially formed bound excitons having a strong correlation between electron and hole. This correlation can have important consequences. For instance, the radiative decay rate of the band-edge exciton in CsPbBr$_{3}$ is enhanced~\cite{efros-82-sqd,takagahara-87-sqd} by factors of order 3--16 (depending on NC size in the range 6--16~nm) compared to its value calculated assuming noninteracting carriers~\cite{becker-18-psk} or a mean-field treatment of the carrier-carrier Coulomb interaction~\cite{nguyen-20b-psk}. In general, special many-body techniques are necessary to treat intermediate confinement theoretically. In this paper, we revisit the question of correlated single excitons in the context of inorganic lead-halide perovskite NCs. 
A commonly used approach to compute exciton properties in NCs is configuration interaction (CI)~\cite{takagahara-87-sqd,chang-98-sqd,shumway-01-sqd,tyrrell-15-sqd}. Quantum Monte Carlo has also been used~\cite{shumway-01-sqd}. Recent applications to NCs of CsPbX$_{3}$ have involved Hartree-Fock (HF) and low orders of many-body perturbation theory (MBPT)~\cite{nguyen-20a-psk,nguyen-20b-psk}, and a one-parameter variational method~\cite{becker-18-psk,sercel-19-psk,tamarat-19-psk} introduced by Takagahara~\cite{takagahara-87-sqd}. We employ an envelope-function approach~\cite{Kira-Koch} and consider both the effective-mass approximation (EMA) and $\mathbf{k}\cdot\mathbf{p}$ models in which the valence band (VB) and conduction band (CB) are coupled, such as the $4\times4$ and $8\times8$ $\mathbf{k}\cdot\mathbf{p}$ models~\cite{efros-00-sqd}. The latter enable one to compute the ``$\mathbf{k}\cdot\mathbf{p}$ corrections'' to the EMA, which account for nonparabolic terms in the band dispersion as well as VB-CB mixing induced by the finite size of the NC~\cite{ekimov-93-sqd}. We consider approaches to correlation in which the carrier-carrier Coulomb interaction is treated to all orders of perturbation theory. We start from a CI expansion, a basic all-order approach, and then generalize this to include a more complete treatment of $\mathbf{k}\cdot\mathbf{p}$ corrections. An important aspect of our formalism is that the equations are adapted explicitly to spherical symmetry, involving 1D radial integrals and angular factors, which leads to a computationally efficient procedure in intermediate confinement. Applications are given to NCs of CsPbBr$_{3}$ and compared with recent experiments. The paper is organized as follows. In Sec.~\ref{sec:formalism}, we discuss the all-order many-body formalisms that we employ. The spherical reduction of the many-body equations to radial integrals and angular factors is given in the Appendix. 
The applications to CsPbBr$_{3}$ are then described in Sec.~\ref{sec:applications}. First, in Sec.~\ref{subsec:parameters}, we give the material parameters for bulk CsPbBr$_{3}$ needed as input to the EMA and $\mathbf{k}\cdot\mathbf{p}$ models. Some of these parameters remain rather uncertain at present. We also discuss the ``quasi-cubic'' spherical confining potential used to model the cuboid NCs of CsPbBr$_{3}$. Applications are then given to the correlation energy (Sec.~\ref{subsec:Ecorr}), the ground-state radiative lifetime (Sec.~\ref{subsec:lifetime}), the long-range Coulomb contribution to the exciton fine structure (Sec.~\ref{subsec:LR-FS}), and the one-photon absorption cross section (Sec.~\ref{subsec:1PA_xsection}). In Secs.~\ref{subsec:Ecorr}--\ref{subsec:LR-FS} we also discuss the analytical results that can be derived for the EMA in two cases: noninteracting carriers and in the limit of large NC sizes. Partly as a test of our methods, we check where possible that in the large-size limit our numerical procedures give the expected results. Our conclusions are given in Sec.~\ref{sec:Conclusions}. Throughout, all formulas are given in atomic units (a.u.), $\hbar=|e|=m_{0}=4\pi\epsilon_{0}=1$. \section{\label{sec:formalism} All-order correlated excitons} We consider a set of interacting carriers (electrons and holes) confined by a mesoscopic potential $V_{\text{ext}}(\mathbf{r})$. The bulk band structure is described by a $\mathbf{k}\cdot\mathbf{p}$ Hamiltonian $h_{\mathbf{k}\cdot\mathbf{p}}$ and carrier states are expressed in terms of products of envelope and Bloch functions~\cite{Kira-Koch}. 
The total Hamiltonian is \begin{equation} H=\sum_{ij}\{i^{\dagger}j\}\BraOperKet{i}{h_{\mathbf{k}\cdot\mathbf{p}}+V_{\text{ext}}}{j}+\frac{1}{2}\sum_{ijkl}\{i^{\dagger}j^{\dagger}lk\}\BraOperKet{ij}{g_{12}}{kl}\,,\label{eq:Hamiltonian} \end{equation} where $i,j,\ldots$, etc., refer to the electron states in the bands (valence and conduction) included in the calculation, and the notation $\{i_{1}^{\dagger}i_{2}^{\dagger}\ldots j_{1}j_{2}\ldots\}$ indicates a normally ordered product of creation (and annihilation) operators for electron states $i_{1},i_{2},\ldots$ (and $j_{1},j_{2},\ldots$). The Hamiltonian $h_{\mathbf{k}\cdot\mathbf{p}}$ can include a $\mathbf{k}\cdot\mathbf{p}$ coupling term between the VB and the CB, as in the $8\times8$ or $4\times4$ $\mathbf{k}\cdot\mathbf{p}$ models~\cite{efros-00-sqd}. Alternatively, the VB and CB can be uncoupled, as in EMA models~\cite{Kira-Koch}. We consider both cases in the following. In envelope-function formalisms, the Coulomb interaction is a sum of long-range (LR) and short-range (SR) terms~\cite{Knox}, \begin{equation} g_{12}=g_{12}^{\text{LR}}+g_{12}^{\text{SR}}\,.\label{eq:g12sum} \end{equation} The LR term has the form of the usual Coulomb interaction \begin{equation} g_{12}^{\text{LR}}=\frac{1}{\varepsilon_{\text{in}}|\mathbf{r}_{1}-\mathbf{r}_{2}|}\,,\label{eq:g12LR} \end{equation} including a suitable dielectric constant $\varepsilon_{\text{in}}$ for the semiconductor material (which should correspond to low frequencies and a length scale of the size of the NC). We will not do so in the applications here, but the LR Coulomb interaction can also be generalized using macroscopic electrostatics~\cite{Jackson} to a system of dielectrics, where induced polarization charges form near the boundaries between different materials---for example, in studies of the effect of the dielectric mismatch with the environment~\cite{karpulevich-19-sqd}. 
The SR term is a contact interaction~\cite{Knox}, \begin{equation} g_{12}^{\text{SR}}=\gamma^{\text{SR}}\delta^{3}(\mathbf{r}_{1}-\mathbf{r}_{2})\,.\label{eq:g12SR} \end{equation} In discussions of exciton fine structure, the constant $\gamma^{\text{SR}}$ is proportional to the exchange Coulomb matrix element between the Bloch states of the VB and CB \cite{Knox,[{}] [{ [Sov.\ Phys.\ JETP \textbf{33}, 108 (1971)].}] pikus-71-sqd}. The SR term $g_{12}^{\text{SR}}$ is formally of order $(L_{\text{atom}}/L_{\text{meso}})^{2}$~\cite{Knox}, where $L_{\text{meso}}$ is the mesoscopic length scale (the size of the quantum dot or the semiconductor Bohr radius) and $L_{\text{atom}}$ is the atomistic length scale. In a finite-size system, the quantity $L_{\text{atom}}/L_{\text{meso}}$ is also the small parameter that appears in $\mathbf{k}\cdot\mathbf{p}$ perturbation theory, $\mathbf{k}\cdot\mathbf{p}\sim L_{\text{atom}}/L_{\text{meso}}$, and the SR Coulomb interaction thus has a leading order $O[(\mathbf{k}\cdot\mathbf{p})^{2}]$. By contrast, the LR Coulomb interaction typically gives larger contributions of order $O(1)$ in $\mathbf{k}\cdot\mathbf{p}$ perturbation theory, with the $\mathbf{k}\cdot\mathbf{p}$ terms providing small corrections (see below). An exception can occur when the leading orders of the LR Coulomb term are suppressed for some reason. This happens in the exciton fine structure, where the leading LR Coulomb term is also $O[(\mathbf{k}\cdot\mathbf{p})^{2}]$ and the LR and SR terms thus yield contributions with a similar order of magnitude (see below and Sec.~\ref{subsec:LR-FS}). We will consider only the LR term in our applications, $g_{12}=g_{12}^{\text{LR}}$, although the general many-body formalism applies to the SR term as well. 
We choose a single-particle basis $\Ket{i}$ for the calculations satisfying \begin{equation} (h_{\mathbf{k}\cdot\mathbf{p}}+V_{\text{ext}}+U)\Ket{i}=\epsilon_{i}\Ket{i}\,,\label{eq:basis} \end{equation} where $U$ is a mean field, which is in principle arbitrary. Possible choices are an independent-particle basis $U=0$, or a HF basis $U=V_{\text{HF}}$, such as a configuration-averaged HF basis~\cite{nguyen-20a-psk,nguyen-20b-psk}, which takes account of the electron-hole ($e$-$h$) interaction in a mean-field approximation. We will take $V_{\text{ext}}$ and $U$ to be spherically symmetric. This will lead to computationally efficient procedures in which only the radial dimension needs to be treated numerically, while the angular dimensions can be handled analytically. Even though NCs of inorganic perovskites are cuboid~\cite{protesescu-15-psk}, they can be well approximated for many purposes (e.g., ground-state energies, exciton fine structure, absorption cross sections) by a spherical ``quasi-cubic'' potential~\cite{sercel-19a-psk,nguyen-20a-psk,nguyen-20b-psk,blundell-21a-psk}. Nonspherical terms in $V_{\text{ext}}$, arising from shape corrections to the NC or from the underlying crystal lattice, can in principle be treated later in the calculation as perturbations. Noninteracting or mean-field states are physically appropriate only in the strong-confinement limit of small NC sizes $R$, where the carrier-carrier Coulomb energy is small compared to the kinetic energy and thus yields only a small perturbation of the independent-particle picture. As the NC grows in size and eventually exceeds the semiconductor Bohr radius $R>a_{B}$, partially bound excitons begin to form and an exciton expressed in the basis (\ref{eq:basis}) acquires strong correlation corrections~\cite{efros-82-sqd,takagahara-87-sqd}. We are interested here in studying such a confined exciton to all orders in the Coulomb interaction $g_{12}$. 
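For the independent-particle choice $U=0$ in the EMA, with an infinite spherical well of radius $R$ as $V_{\text{ext}}$, the radial basis states of Eq.~(\ref{eq:basis}) are analytic; for $l=0$ the envelope energies are $\epsilon_{n}=(n\pi)^{2}/2m^{*}R^{2}$ in atomic units. A short numerical sketch (Python, with hypothetical effective mass and radius):

```python
import math

def well_levels_l0(m_eff, R, nmax=3):
    """l = 0 levels of an infinite spherical well, eps_n = (n*pi)^2/(2 m R^2),
    in atomic units (an illustrative EMA basis for the choice U = 0)."""
    return [(n * math.pi) ** 2 / (2.0 * m_eff * R ** 2) for n in range(1, nmax + 1)]

# Hypothetical values: m* = 0.13 m0 and R = 100 a0 (about 5.3 nm)
levels = well_levels_l0(0.13, 100.0)
print(levels)  # eps_1 : eps_2 : eps_3 = 1 : 4 : 9
```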
As a first step, we consider CI in the space of all single-exciton states~\cite{takagahara-87-sqd,chang-98-sqd,shumway-01-sqd,tyrrell-15-sqd}. In this approach, a general correlated exciton state $\Ket{\alpha}$ is written \begin{equation} \Ket{\alpha}=\sum_{eh}\mathcal{X}_{eh}^{\alpha}\Ket{\psi_{eh}}\,,\label{eq:CIexciton} \end{equation} where $\Ket{\psi_{eh}}$ is an uncorrelated single-exciton state containing an electron $e$ and a hole $h$, \begin{equation} \Ket{\psi_{eh}}=\{e^{\dagger}h\}\Ket{0}\,,\label{eq:ehexciton} \end{equation} with $\Ket{0}$ the effective vacuum (no carriers present in the NC). The amplitudes $\mathcal{X}_{eh}^{\alpha}$ in Eq.~(\ref{eq:CIexciton}) are found from the CI eigenvalue problem \begin{equation} \sum_{e'h'}\BraOperKet{\psi_{eh}}{H}{\psi_{e'h'}}\mathcal{X}_{e'h'}^{\alpha}=\omega_{\alpha}\mathcal{X}_{eh}^{\alpha}\,,\label{eq:CIeigenvalue} \end{equation} and are normalized according to \begin{equation} \sum_{eh}|\mathcal{X}_{eh}^{\alpha}|^{2}=1\,.\label{eq:CInorm} \end{equation} To determine the matrix of the Hamiltonian $H$ in Eq.~(\ref{eq:CIeigenvalue}), we add and subtract the mean field $U$ from Eq.~(\ref{eq:Hamiltonian}), \begin{eqnarray} H & = & \sum_{i}\{i^{\dagger}i\}\epsilon_{i}+\frac{1}{2}\sum_{ijkl}\{i^{\dagger}j^{\dagger}lk\}\BraOperKet{ij}{g_{12}}{kl}\nonumber \\ & & {}+\sum_{ij}\{i^{\dagger}j\}\BraOperKet{i}{(-U)}{j}\,,\label{eq:Hamiltonian2} \end{eqnarray} from which it follows that \begin{eqnarray} \BraOperKet{\psi_{eh}}{H}{\psi_{e'h'}} & = & (\epsilon_{e}-\epsilon_{h})\delta_{ee'}\delta_{hh'}\nonumber \\ & & {}+\BraOperKet{e}{(-U)}{e'}\delta_{hh'}-\BraOperKet{h'}{(-U)}{h}\delta_{ee'}\nonumber \\ & & {}-\BraOperKet{eh'}{g_{12}}{e'h}+\BraOperKet{eh'}{g_{12}}{he'}\,.\label{eq:Hmatrix} \end{eqnarray} Many-body diagrams for $\sum_{e'h'}\BraOperKet{\psi_{eh}}{H}{\psi_{e'h'}}\mathcal{X}_{e'h'}^{\alpha}$ are shown in Figs.~\ref{fig:Excitations}(a)--(e). 
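The structure of the eigenvalue problem (\ref{eq:CIeigenvalue}) with the matrix (\ref{eq:Hmatrix}) can be sketched in a toy numerical model (Python): we take $U=0$, so the $-U$ counter-terms vanish, drop the exchange term, and use a random Hermitian block as a stand-in for the direct $e$-$h$ Coulomb matrix elements.

```python
import numpy as np

rng = np.random.default_rng(0)
ne, nh = 4, 4                        # toy numbers of electron/hole basis states
eps_e = 1.0 + 0.3 * np.arange(ne)    # electron energies (arbitrary units)
eps_h = -0.2 * np.arange(nh)         # hole energies

# Direct e-h matrix elements <e h'|g|e' h>, here a random symmetric stand-in;
# pair indices are flattened as (e, h) -> e*nh + h.
npair = ne * nh
V = rng.normal(size=(npair, npair))
V = 0.05 * (V + V.T)

# H = diag(eps_e - eps_h) - V, i.e. the first and fourth terms of Eq. (Hmatrix)
H = np.diag([eps_e[e] - eps_h[h] for e in range(ne) for h in range(nh)]) - V

w, X = np.linalg.eigh(H)             # omega_alpha and amplitudes X^alpha_{eh}
print(w[0])                          # correlated ground-state energy (toy units)
```

The columns of `X` are orthonormal, so the normalization condition (\ref{eq:CInorm}) is satisfied automatically.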
To solve the eigenvalue problem (\ref{eq:CIeigenvalue}), we generate a basis set containing all basis states $\Ket{e}$ and $\Ket{h}$ up to a high energy cutoff and compute the matrix elements $\BraOperKet{\psi_{eh}}{H}{\psi_{e'h'}}$. It is then possible to extract the eigenvalues $\omega_{\alpha}$ and eigenvectors $\mathcal{X}_{eh}^{\alpha}$ for the correlated ground-state exciton, $\alpha=0$, together with as many excited exciton states $\alpha>0$ as are desired. We find that the exciton energies $\omega_{\alpha}$ are nearly independent of the choice of mean field $U=0$ or $U=V_{\text{HF}}$ (there is no difference, for practical purposes, between these two choices of $U$). The main role of a HF mean field $U=V_{\text{HF}}$ in this formalism is that fewer basis states $\Ket{e}$ and $\Ket{h}$ are required to obtain a given precision (basis-set truncation error), since the HF basis states already contain mean-field information about the $e$-$h$ interaction and are more physical. We will refer to this approach as CI singles (CIS). \begin{figure} \includegraphics[scale=0.35]{many-body} \caption{\label{fig:Excitations}Excitations present in the various approaches to correlated excitons. Diagrams (a)--(e) represent $\sum_{e'h'}\BraOperKet{\psi_{eh}}{H}{\psi_{e'h'}}\mathcal{X}_{e'h'}^{\alpha}$ and correspond (in order) to the five terms in Eq.~(\ref{eq:Hmatrix}) for the CIS method. The BSE approach is given by the four diagrams (a)--(d). The RPAE method includes diagrams (a)--(e) and additionally the diagrams (f) and (g), which represent $\sum_{e'h'}B_{eh,e'h'}\mathcal{Y}_{e'h'}^{\alpha}$ and correspond (in order) to the two terms in Eq.~(\ref{eq:Bmatrix}). 
Notation: dashed horizontal line, Coulomb interaction; thick horizontal line, all-order amplitude; horizontal line with cross, potential counter-term $-U$.} \end{figure} The matrix element of a one-body operator $M=\sum_{ij}\{i^{\dagger}j\}\BraOperKet{i}{M}{j}$ between a correlated exciton state $\Ket{\alpha}$ and the NC ground state $\Ket{0}$ in the CIS approach is \begin{equation} \BraOperKet{\alpha}{M}{0}=\sum_{eh}(\mathcal{X}_{eh}^{\alpha})^{*}\BraOperKet{e}{M}{h}\,.\label{eq:CImxel} \end{equation} For example, $M$ could be the momentum operator that enters in interband absorption and emission~\cite{Kira-Koch}. When $V_{\text{ext}}+U$ is spherically symmetric, the basis states $\Ket{e}$ and $\Ket{h}$ have exact total angular momentum quantum numbers $F_{e}$ and $F_{h}$, respectively, which couple to an exact total angular momentum $F_{\text{tot}}$ for an exciton state~\cite{ekimov-93-sqd}. Parity is also an exact quantum number. In the Appendix, we give the reduction of Eqs.~(\ref{eq:CIeigenvalue})--(\ref{eq:CImxel}) to radial integrals and angular factors for the spherical case. The computational gain in making this reduction is not only that 1D radial integrals are much faster to evaluate than 3D integrals, but also that the sums over magnetic substates can be performed analytically, so that the effective sizes of the basis set and the matrix $\BraOperKet{\psi_{eh}}{H}{\psi_{e'h'}}$ are much smaller. In the presence of nonspherical terms, these angular-momentum quantum numbers are only approximate. In envelope-function $\mathbf{k}\cdot\mathbf{p}$ approaches, when the electron band index changes at one vertex of a Coulomb interaction, the matrix element is suppressed, being formally of order $O(\mathbf{k}\cdot\mathbf{p})$ in $\mathbf{k}\cdot\mathbf{p}$ perturbation theory~\cite{Kira-Koch}. 
Small ``$\mathbf{k}\cdot\mathbf{p}$ corrections'' of this sort correspond to nonparabolic terms in the electron dispersion relation and to VB-CB mixing induced by the finite size of the confining potential $V_{\text{ext}}$~\cite{ekimov-93-sqd}. The $\mathbf{k}\cdot\mathbf{p}$ corrections to Coulomb matrix elements can be picked up straightforwardly in $\mathbf{k}\cdot\mathbf{p}$ approaches in which the VB and CB are coupled, such as the $8\times8$ or $4\times4$ $\mathbf{k}\cdot\mathbf{p}$ models~\cite{efros-00-sqd}, where they arise as cross terms between small and large components of the carrier wave functions in the expression for the matrix element~\cite{nguyen-20a-psk,nguyen-20b-psk}. Referring to Fig.~\ref{fig:Excitations}, we see that in diagram (e), which corresponds to the last term in Eq.~(\ref{eq:Hmatrix}), the band index necessarily changes (from the VB to the CB) at \emph{both} vertices of the Coulomb interaction, so that this term is formally of order $O[(\mathbf{k}\cdot\mathbf{p})^{2}]$. Diagram (e) corresponds to the $e$-$h$ exchange interaction.\footnote{Note that in atomic and molecular physics Fig.~\ref{fig:Excitations}(d) is referred to as the ``exchange'' term and Fig.~\ref{fig:Excitations}(e) as the ``direct'' term, which is the reverse of the convention used in quantum-dot literature and in this paper.} In contrast, the direct $e$-$h$ Coulomb interaction in diagram (d) has no change of band index at either vertex (for states $e$ and $e'$ in the same CB, and states $h$ and $h'$ in the same VB) and the term is therefore formally of $O(1)$ in $\mathbf{k}\cdot\mathbf{p}$ perturbation theory. It follows that to a good approximation we can drop the last (exchange) term $\BraOperKet{eh'}{g_{12}}{he'}$ in Eq.~(\ref{eq:Hmatrix}) and take only the first four terms. We will refer to this simplified approach as the particle-hole Bethe-Salpeter equation or BSE approach. 
\begin{figure} \includegraphics[scale=0.37]{ladders} \caption{\label{fig:ladders}(a) Effective two-body interaction formed by summing all particle-hole ladder diagrams (arrows pointing up indicate electron states, arrows pointing down hole states); (b) all-order vertex correction to a one-body operator (e.g., momentum operator for interband absorption); (c) first-order Coulomb exchange interaction for exciton fine structure; (d) all-order Coulomb exchange interaction.} \end{figure} The relation to the particle-hole BSE can be seen by considering the perturbative (iterative) solution of the CI eigenvalue equation (\ref{eq:CIeigenvalue}). The $O(1)$ diagram shown in Fig.~\ref{fig:Excitations}(d), when iterated, generates an effective two-body interaction given by the sum of all particle-hole ladder diagrams~\cite{Mahan}, shown in Fig.~\ref{fig:ladders}(a). Use of the correlated final-state wave function $\Ket{\alpha}$ to evaluate a matrix element of a one-body operator (\ref{eq:CImxel}) will then bring in an all-order ``vertex correction,'' shown in Fig.~\ref{fig:ladders}(b), in which the Coulomb ladders connect the ingoing and outgoing states of the one-body operator. For NCs in intermediate confinement, the vertex correction can enhance the absorption rate by large factors~\cite{efros-82-sqd,takagahara-87-sqd}, for example, by as much as 3--16 for the ground-state exciton in NCs of the inorganic perovskite CsPbBr$_{3}$~\cite{becker-18-psk,nguyen-20b-psk}. In the EMA, in which $\mathbf{k}\cdot\mathbf{p}$ corrections are absent, the BSE approach is entirely of order $O(1)$. In the following, we will denote this case BSE$_{0}$. It is also possible to use the BSE approach when the basis states are generated within a VB-CB-coupled $\mathbf{k}\cdot\mathbf{p}$ approximation, such as the $4\times4$ or $8\times8$ $\mathbf{k}\cdot\mathbf{p}$ models. 
This approach, which we denote BSE$_{\mathbf{k}\cdot\mathbf{p}}$, brings in a subset of $\mathbf{k}\cdot\mathbf{p}$ corrections associated with the basis states. Clearly, further $\mathbf{k}\cdot\mathbf{p}$ corrections can be obtained by using the CIS approach instead, which includes additionally the $O[(\mathbf{k}\cdot\mathbf{p})^{2}]$ exchange diagram in Fig.~\ref{fig:Excitations}(e). However, a still more complete treatment of $\mathbf{k}\cdot\mathbf{p}$ corrections is provided by an approach analogous to the random-phase approximation with exchange (RPAE) used in atomic and molecular physics~\cite{amusia-75-sqd} and cluster physics~\cite{guet-92-sqd}. In RPAE, the CI eigenvalue problem (\ref{eq:CIeigenvalue}) is replaced by a $2\times2$ block eigenvalue problem~\cite{amusia-75-sqd} \begin{equation} \left(\begin{array}{cc} A & B\\ B^{*} & A^{*} \end{array}\right)\left(\begin{array}{c} \bm{\mathcal{X}}^{\alpha}\\ \bm{\mathcal{Y}}^{\alpha} \end{array}\right)=\omega_{\alpha}\left(\begin{array}{cc} 1 & 0\\ 0 & -1 \end{array}\right)\left(\begin{array}{c} \bm{\mathcal{X}}^{\alpha}\\ \bm{\mathcal{Y}}^{\alpha} \end{array}\right)\,.\label{eq:RPAEeigenvalue} \end{equation} Here, $\bm{\mathcal{X}}^{\alpha}$ and $\bm{\mathcal{Y}}^{\alpha}$ are vectors of amplitudes $\mathcal{X}_{eh}^{\alpha}$ and $\mathcal{Y}_{eh}^{\alpha}$ in the space of uncorrelated excitons $(e,h)$, and the matrix $A$ is identical to that appearing in the CIS eigenvalue problem (\ref{eq:CIeigenvalue}), \begin{equation} A_{eh,e'h'}=\BraOperKet{\psi_{eh}}{H}{\psi_{e'h'}}\,,\label{eq:Amatrix} \end{equation} which is given in detail by Eq.~(\ref{eq:Hmatrix}). 
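Numerically, the block problem (\ref{eq:RPAEeigenvalue}) is equivalent (multiplying the second row by $-1$) to diagonalizing the non-Hermitian matrix with blocks $A$, $B$, $-B^{*}$, $-A^{*}$. A toy sketch (Python, real symmetric blocks chosen by hand so that the problem is stable) checking that the spectrum comes in $\pm\omega_{\alpha}$ pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Toy blocks: A dominated by a diagonal of excitation energies, B a small
# coupling, so that A +/- B remain positive definite (stable problem).
A = np.diag(1.0 + 0.2 * np.arange(n)) + 0.02 * rng.normal(size=(n, n))
A = 0.5 * (A + A.T)
B = 0.02 * rng.normal(size=(n, n))
B = 0.5 * (B + B.T)

# Real-case eigenproblem  [[A, B], [-B, -A]] (X, Y) = omega (X, Y)
M = np.block([[A, B], [-B, -A]])
w = np.linalg.eigvals(M)
assert np.allclose(w.imag, 0.0, atol=1e-8)   # stable: real spectrum
w = np.sort(w.real)
assert np.allclose(w, -w[::-1])              # eigenvalues come in +/- pairs
print(w[n:])                                 # the positive RPAE energies
```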
The matrix $B$ is \begin{equation} B_{eh,e'h'}=-\BraOperKet{ee'}{g_{12}}{h'h}+\BraOperKet{ee'}{g_{12}}{hh'}\,.\label{eq:Bmatrix} \end{equation} RPAE thus includes the dominant correlation diagrams of the BSE approach, Figs.~\ref{fig:Excitations}(a)--(d), the additional exchange diagram of CIS, Fig.~\ref{fig:Excitations}(e), and two further diagrams associated with the $B$ matrix, shown in Figs.~\ref{fig:Excitations}(f) and (g). These two diagrams are both of order $O[(\mathbf{k}\cdot\mathbf{p})^{2}]$ in $\mathbf{k}\cdot\mathbf{p}$ perturbation theory. Physically, the $B$ matrix accounts for two-particle/two-hole correlations in the ground state~\cite{amusia-75-sqd}. The RPAE eigenvector is normalized according to \begin{equation} \sum_{eh}(|\mathcal{X}_{eh}^{\alpha}|^{2}-|\mathcal{Y}_{eh}^{\alpha}|^{2})=1\,,\label{eq:RPAEnorm} \end{equation} and the matrix element (\ref{eq:CImxel}) in RPAE becomes \begin{equation} \BraOperKet{\alpha}{M}{0}=\sum_{eh}[(\mathcal{X}_{eh}^{\alpha})^{*}\BraOperKet{e}{M}{h}+(\mathcal{Y}_{eh}^{\alpha})^{*}\BraOperKet{h}{M}{e}]\,.\label{eq:RPAEmxel} \end{equation} The angular reduction of Eqs.~(\ref{eq:RPAEeigenvalue})--(\ref{eq:RPAEmxel}) for a spherically symmetric potential is given in the Appendix. Note that the many-body formalisms BSE, CIS, and RPAE, presented in Eqs.~(\ref{eq:CIexciton})--(\ref{eq:RPAEmxel}), apply to any single-particle basis $\Ket{i}$ and not just to the envelope-function basis that we use in our applications. In Eq.~(\ref{eq:basis}), the term $h_{\mathbf{k}\cdot\mathbf{p}}+V_{\text{ext}}$ can be reinterpreted as any suitable effective single-particle Hamiltonian describing states of the finite-size NC. \section{\label{sec:applications}Application to perovskite nanocrystals} \subsection{\label{subsec:parameters}Parameters and model} Inorganic lead-halide perovskites are ``inverted'' direct-gap semiconductors having a $p_{1/2}$-like CB and an $s$-like VB. 
The VB maximum ($R_{6}^{+}$) and CB minimum ($R_{6}^{-}$) lie at the $R$ point of the Brillouin zone~\cite{protesescu-15-psk,becker-18-psk}. This VB-CB pair can be described by the $4\times4$ $\mathbf{k}\cdot\mathbf{p}$ model~\cite{even-14b-psk,yang-17-psk,becker-18-psk} or used for EMA calculations. The $p_{1/2}$-like CB is split by spin-orbit coupling from a higher-lying $p_{3/2}$-like band, whose minimum ($R_{8}^{-}$) lies about 1~eV above the minimum of the $p_{1/2}$-like band~\cite{becker-18-psk}. The $s$-like VB together with the $p_{1/2}$- and $p_{3/2}$-like CBs can be described by an extended $8\times8$ $\mathbf{k}\cdot\mathbf{p}$ model~\cite{efros-00-sqd}. The bulk CsPbBr$_{3}$ material parameters that we use are summarized in Table~\ref{tab:parameters}. The bandgap $E_{g}$ and reduced effective mass $\mu^{*}=m_{e}^{*}m_{h}^{*}/(m_{e}^{*}+m_{h}^{*})$ of the $s$-like VB and $p_{1/2}$-like CB were measured by Yang \emph{et al}.~\cite{yang-17-psk}\ for the orthorhombic phase of CsPbBr$_{3}$ at cryogenic temperatures. While $\mu^{*}$ is known experimentally, the individual effective masses $m_{e}^{*}$ and $m_{h}^{*}$ are not. However, there is theoretical~\cite{becker-18-psk,protesescu-15-psk} and experimental~\cite{fu-17a-psk} evidence that $m_{e}^{*}$ and $m_{h}^{*}$ are approximately equal in inorganic lead-halide perovskites, so we will assume $m_{e}^{*}=m_{h}^{*}$. The spin-orbit coupling parameter $\Delta_{\text{soc}}$ is defined as the energy splitting of the $p_{3/2}$- and $p_{1/2}$-like CBs at the $R$ point; we estimate $\Delta_{\text{soc}}$ from a fit to the experimental absorption spectra (see Sec.~\ref{subsec:1PA_xsection}). Finally, the ``effective'' dielectric constant $\eps_{\text{eff}}$ was inferred by Yang \emph{et al.}~\cite{yang-17-psk} from the bulk exciton binding energy (see also Ref.~\cite{shcherbakov-wu-21-psk}). 
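The equal-mass assumption fixes the individual masses from the measured reduced mass alone; a one-line arithmetic check (a sketch, not part of the reported calculations):

```python
# Consistency check of the effective masses in Table (parameters): with
# the assumption me* = mh*, each mass equals twice the measured reduced
# mass mu* = me* mh* / (me* + mh*).
mu_star = 0.126          # reduced mass (units of m0), Yang et al.
m_e = m_h = 2 * mu_star  # 0.252 m0, the value listed in the table

assert abs(m_e * m_h / (m_e + m_h) - mu_star) < 1e-15
```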
Since $\eps_{\text{eff}}$ applies to a length scale of order the semiconductor Bohr radius $a_{B}$, we use $\eps_{\text{in}}=\eps_{\text{eff}}$ to screen the carrier-carrier Coulomb interactions {[}Eq.~(\ref{eq:g12LR}){]}. The Kane parameter $E_{P}$ has also not been measured. Estimates of $E_{P}$ were made in Ref.~\cite{nguyen-20a-psk} based on the $4\times4$ and $8\times8$ $\mathbf{k}\cdot\mathbf{p}$ models discussed above, together with the assumption that remote bands (those not included in the model) make zero contribution. The resulting values $E_{P}^{(4\times4)}$ and $E_{P}^{(8\times8)}$ are given in Table~\ref{tab:parameters} and can be seen to differ significantly. Given that the only difference between the two values is the inclusion of the $p_{3/2}$-like CB in $E_{P}^{(8\times8)}$, it seems likely that other remote bands may make further significant contributions to $E_{P}$. An estimate using density-functional theory (DFT) \cite{becker-18-psk} found $E_{P}=39.9$~eV, which seems quite high. As in Ref.~\cite{nguyen-20a-psk}, we take the view that $E_{P}$ is presently uncertain, a conservative range being $10\,\text{eV}\leq E_{P}\leq40\,\text{eV}$. In our applications, we will assume the value $E_{P}=20$~eV, which is intermediate between $E_{P}^{(4\times4)}$ and $E_{P}^{(8\times8)}$. \begin{table}[tb] \caption{\label{tab:parameters}Bulk material parameters for CsPbBr$_{3}$ used in this paper. $E_{P}^{(4\times4)}$ and $E_{P}^{(8\times8)}$ are estimates of the Kane parameter derived in Ref.~\protect\cite{nguyen-20a-psk} from the $4\times4$ and $8\times8$ $\mathbf{k}\cdot\mathbf{p}$ models. The parameters $\eps_{\text{eff}}$ and $\eps_{\text{opt}}$ are the effective and optical dielectric constants of CsPbBr$_{3}$, respectively, and $\eps_{\text{out}}$ is the optical dielectric constant of the surrounding medium. $\Delta_{\text{soc}}$ is the spin-orbit coupling parameter of the $8\times8$ $\mathbf{k}\cdot\mathbf{p}$ model.
Further explanation is given in Sec.~\ref{subsec:parameters}.} \begin{ruledtabular} \begin{tabular}{ld} & \multicolumn{1}{c}{CsPbBr$_{3}$}\\ \hline $\mu^{*}$ ($\times m_{0}$) & 0.126\footnotemark[1]\\ $m_{e}^{*}=m_{h}^{*}$ ($\times m_{0}$) & 0.252\\ $E_{g}$ (eV) & 2.342\footnotemark[1]\\ $E_{P}^{(4\times4)}$ (eV) & 27.9\footnotemark[2]\\ $E_{P}^{(8\times8)}$ (eV) & 16.4\footnotemark[2]\\ $E_{P}$ (eV) & 20.0\\ $\Delta_{\text{soc}}$ (eV) & 0.8\footnotemark[3]\\ $\eps_{\text{eff}}$ & 7.3\footnotemark[1]\\ $\eps_{\text{opt}}$ & 4.84\footnotemark[4]\\ $\varepsilon_{\text{out}}$ & 2.4\footnotemark[5]\\ \end{tabular} \footnotetext[1]{Reference~\cite{yang-17-psk}} \footnotetext[2]{Reference~\cite{nguyen-20a-psk}} \footnotetext[3]{From a fit to experimental absorption spectra (see Sec.~\ref{subsec:1PA_xsection}).} \footnotetext[4]{Reference~\cite{dirin-16-psk}, for a wavelength of 500~nm.} \footnotetext[5]{Applies to toluene.} \end{ruledtabular} \end{table} NCs of CsPbBr$_{3}$ are cuboid~\cite{protesescu-15-psk}. For the above material parameters, one finds $2a_{B}=6.1$~nm, implying that NCs with edge lengths $L$ in the experimentally interesting range $L=6$--16~nm are in the regime of intermediate confinement, with strongly correlated excitons. We take the confining potential to be a spherical well with infinite walls \begin{equation} V_{\text{ext}}^{\text{sph}}(r)=\left\{ \begin{matrix}0\text{, if }r<R\\ \infty\text{, otherwise} \end{matrix}\right.\,.\label{eq:sphericalWell} \end{equation} The radius $R$ is chosen to be \begin{equation} R=L/\sqrt{3}\,,\label{eq:radiusL} \end{equation} which ensures that the noninteracting ground-state energy of a carrier (hole or electron) matches that in a cubic well with edge length $L$~\cite{sercel-19a-psk,nguyen-20a-psk}. 
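The equality of ground-state energies underlying Eq.~(\ref{eq:radiusL}) can be verified directly; in the following sketch, the 9~nm edge length is an illustrative choice, and energies are in hartree atomic units.

```python
import numpy as np

# Check of the quasi-cubic prescription R = L/sqrt(3), Eq. (radiusL):
# the noninteracting ground-state energy of a carrier of mass m is
# 3 pi^2/(2 m L^2) in a cube of edge L (infinite walls) and
# pi^2/(2 m R^2) in a sphere of radius R; the two coincide for R = L/sqrt(3).
m = 0.252             # effective mass (units of m0), Table value
L = 9.0 / 0.0529177   # 9 nm edge length in bohr (illustrative size)
R = L / np.sqrt(3)

E_cube = 3 * np.pi**2 / (2 * m * L**2)
E_sphere = np.pi**2 / (2 * m * R**2)
assert np.isclose(E_cube, E_sphere)
```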
This quasi-cubic spherical potential can be shown to reproduce well other properties of a cubic well, such as the Coulomb energy of the $1S_{e}$-$1S_{h}$ exciton~\cite{nguyen-20a-psk}, the correlation energies of the trion and biexciton~\cite{nguyen-20a-psk}, and the one-~\cite{nguyen-20b-psk} and two-photon~\cite{blundell-21a-psk} absorption cross sections. \subsection{\label{subsec:Ecorr}Correlation energy of ground-state exciton} An example of a calculation of the correlation energy is given in Table~\ref{tab:Ecorr}. We here use the BSE$_{0}$ method (in the EMA) with a configuration-averaged HF basis set~\cite{nguyen-20a-psk,nguyen-20c-psk} $U=V_{\text{HF}}$. The states of the basis set are cut off at principal quantum numbers $n_{\text{max}}=12$ in each angular-momentum channel ($s$, $p_{1/2}$, $p_{3/2}$, \ldots , etc.). We then perform a series of calculations with the orbital angular momentum of the basis set truncated at $l_{\text{max}}=K$ for all $K$ in the range $1\leq K\leq12$, giving a set of total exciton energies $E_{\text{tot}}(K)$. For $K\geq2$, the partial-wave contribution $\delta E(K)$ for orbital angular momentum $K$ is defined as $\delta E(K)=E_{\text{tot}}(K)-E_{\text{tot}}(K-1)$. Since we define the correlation energy as the difference between the total energy and the configuration-averaged HF energy, $E_{\text{corr}}=E_{\text{tot}}-E_{\text{HF}}$, we can take the contribution for $K\leq1$ to be $\delta E(K\leq1)=E_{\text{tot}}(1)-E_{\text{HF}}$. \begin{table} \caption{\label{tab:Ecorr}Contribution $\delta E(K)$ of individual partial waves to the correlation energy of the ground-state $1S_{e}$-$1S_{h}$ exciton ($F_{\text{tot}}=1$), for NCs of CsPbBr$_{3}$ with edge lengths $L=9$~nm and 12~nm. Calculations are performed with the BSE$_{0}$ method using an EMA basis set. 
The correlation energy is here defined as the difference between the total exciton energy and the exciton energy in the configuration-averaged HF approximation (see Ref.~\cite{nguyen-20a-psk} for more details), $E_{\text{corr}}=E_{\text{tot}}-E_{\text{HF}}$. We find $E_{\text{HF}}=0.08756199$~Ha for $L=9$~nm and $E_{\text{HF}}=0.08640365$~Ha for $L=12$~nm. Notation: $K$ is the partial wave (see text); `extrap.'\ is the extrapolated sum of terms for $K=13$ to infinity. Units: mHa.} \begin{ruledtabular} \begin{tabular}{ldd} $K$ & \multicolumn{1}{c}{$L=9$~nm} & \multicolumn{1}{c}{$L=12$~nm}\\ \hline $\leq1$ & -0.02624 & -0.02782\\ 2 & -0.21087 & -0.21166\\ 3 & -0.06176 & -0.07328\\ 4 & -0.02343 & -0.03096\\ 5 & -0.01053 & -0.01491\\ 6 & -0.00534 & -0.00792\\ 7 & -0.00297 & -0.00455\\ 8 & -0.00177 & -0.00277\\ 9 & -0.00112 & -0.00178\\ 10 & -0.00074 & -0.00119\\ 11 & -0.00050 & -0.00082\\ 12 & -0.00036 & -0.00058\\ extrap. & -0.00123 & -0.00206\\ Total & -0.34683(1) & -0.38030(1)\\ \end{tabular} \end{ruledtabular} \end{table} From Table~\ref{tab:Ecorr}, one sees that the correlation energy of the ground-state exciton $1S_{e}$-$1S_{h}$ is dominated by the contribution $K=2$, but higher partial waves also make significant contributions. Asymptotically for large $K$, one finds $\delta E(K)\sim1/K^{3}$, which enables us to make an estimate of the contribution from $K=13$ to infinity. The estimated numerical error from the principal-quantum-number cutoff and partial-wave extrapolation is given in the final line. Note that, at the level of approximation used here (EMA and BSE$_{0}$), the two possible total angular momenta $F_{\text{tot}}=0$ and 1 of the ground-state $1S_{e}$-$1S_{h}$ exciton should be degenerate. However, it turns out that the partial-wave contributions for $F_{\text{tot}}=0$ and 1 are slightly different (e.g., for an edge length $L=9$~nm, $\delta E(2)=-0.21087$~mHa for $F_{\text{tot}}=1$ and $\delta E(2)=-0.17266$~mHa for $F_{\text{tot}}=0$). 
Nevertheless, we find that the final extrapolated energies for the two different values of $F_{\text{tot}}$ agree to within the estimated error shown in the table (which applies to $F_{\text{tot}}=1$), thus providing a useful test of the numerics. A feature of this approach is that the partial-wave expansion becomes more slowly convergent as the NC size $L$ increases. This can be seen in Table~\ref{tab:Ecorr}, where the relative contribution to the correlation energy from the extrapolated terms $K\geq13$ is greater for $L=12$~nm than for $L=9$~nm. In fact, the method eventually becomes unwieldy for edge lengths $L\agt50$~nm owing to the slow convergence, but for intermediate confinement in the experimentally interesting size range $6\,\text{nm}\leq L\leq16\,\text{nm}$, one can readily achieve energies converged to a fractional error of $10^{-3}$ or better in a few seconds of calculation on a single core. Figure~\ref{fig:conf_energy} shows the ground-state confinement energy in various many-body approximations (but all using the EMA), as a function of the NC edge length for $L=8$--30~nm. The confinement energy here is defined as \begin{equation} E_{\text{conf}}=E_{\text{tot}}-E_{g}\,.\label{eq:Econf} \end{equation} Two cases can be handled analytically in the EMA. First, for noninteracting carriers, the ground-state single-particle wave function is \begin{equation} \psi_{1S}(\mathbf{r})=(2\pi R)^{-1/2}\frac{1}{r}\sin\left(\frac{\pi r}{R}\right)\,,\label{eq:gs_wave_function} \end{equation} and the noninteracting confinement energy of a $1S_{e}$-$1S_{h}$ exciton is \begin{equation} E_{\text{non}}=\frac{\pi^{2}}{2\mu R^{2}}\,,\label{eq:e_non} \end{equation} which tends to zero in the bulk limit $R\rightarrow\infty$. \begin{figure} \includegraphics[scale=0.5]{conf_energy} \caption{\label{fig:conf_energy}Ground-state exciton confinement energy {[}Eq.~(\ref{eq:Econf}){]} for a NC of CsPbBr$_{3}$ vs.\ NC edge length, in various approximations. All approaches assume the EMA.
Continuous lines (top to bottom): `noninteracting', noninteracting carriers; HF, configuration-averaged Hartree-Fock (following Ref.~\protect\cite{nguyen-20a-psk}); MBPT(2), many-body perturbation theory up to second order (following Ref.~\protect\cite{nguyen-20a-psk}); BSE$_{0}$, particle-hole Bethe-Salpeter equation. Dashed lines (top to bottom): $E_{\text{asym}}$, the large-$R$ asymptotic form of the confinement energy, Eq.~(\ref{eq:Easym}); $E_{\infty}$, the $R\rightarrow\infty$ limit of the confinement energy, Eq.~(\ref{eq:Einfinity}).} \end{figure} The bulk limit for interacting particles can also be handled analytically in the EMA, by solving a two-body Schr\"odinger equation~\cite{Mahan}, \begin{align} & \left(-\frac{1}{2m_{e}^{*}}\nabla_{e}^{2}-\frac{1}{2m_{h}^{*}}\nabla_{h}^{2}+\frac{1}{\eps_{\text{in}}|\mathbf{r}_{e}-\mathbf{r}_{h}|}\right)\Psi(\mathbf{r}_{e},\mathbf{r}_{h})\nonumber \\ & \quad\quad\quad\quad\quad\quad\quad\quad\quad=\left(E_{\text{tot}}-E_{g}\right)\Psi(\mathbf{r}_{e},\mathbf{r}_{h})\,,\label{eq:2body_eqn} \end{align} where $\Psi(\mathbf{r}_{e},\mathbf{r}_{h})$ goes to zero at the NC boundary. In Eq.~(\ref{eq:2body_eqn}), the electron and hole are effectively regarded as different species of particle, with one particle of each type present. It follows that a CI solution of Eq.~(\ref{eq:2body_eqn}) is formally identical to the method we called BSE$_{0}$ in Sec.~\ref{sec:formalism}. In particular, since the electron and hole are regarded as distinguishable, the wave function $\Psi(\mathbf{r}_{e},\mathbf{r}_{h})$ is not required to be antisymmetric under interchange of electron and hole coordinates, and so the final exchange term in Eq.~(\ref{eq:Hmatrix}) is absent, as in the definition of the BSE$_{0}$ method. Also, as mentioned above, the spin-independence of the Hamiltonian has the consequence that the solutions for $F_{\text{tot}}=0$ and 1 are degenerate. 
In the large-$R$ limit, Eq.~(\ref{eq:2body_eqn}) can be solved using relative $\mathbf{r}=\mathbf{r}_{e}-\mathbf{r}_{h}$ and center-of-mass (CM) $\mathbf{R}_{\text{CM}}=(m_{e}^{*}\mathbf{r}_{e}+m_{h}^{*}\mathbf{r}_{h})/(m_{e}^{*}+m_{h}^{*})$ coordinates, \begin{equation} \Psi(\mathbf{r}_{e},\mathbf{r}_{h})=\psi_{\text{CM}}(\mathbf{R}_{\text{CM}})\psi_{\text{rel}}(\mathbf{r})\,.\label{eq:2body_soln} \end{equation} This solution represents a bound exciton with CM fluctuations. We are interested here in the ground state. The CM wave function $\psi_{\text{CM}}(\mathbf{R}_{\text{CM}})$ then has the same form as Eq.~(\ref{eq:gs_wave_function}), and the relative wave function $\psi_{\text{rel}}(\mathbf{r})$ is the $1s$ hydrogen-like solution for effective mass $\mu$ (and a dielectric constant $\eps_{\text{in}}$). The large-$R$ asymptotic confinement energy, including the CM contribution, is \begin{equation} E_{\text{asym}}(R)=\frac{\pi^{2}}{2(m_{e}^{*}+m_{h}^{*})R^{2}}-\frac{\mu}{2\eps_{\text{in}}^{2}}\,.\label{eq:Easym} \end{equation} In the limit $R\rightarrow\infty$, the confinement energy is the bulk $1s$ exciton binding energy, \begin{equation} E_{\infty}=-\frac{\mu}{2\eps_{\text{in}}^{2}}\,,\label{eq:Einfinity} \end{equation} which is $E_{\infty}=-32.17$~meV for the parameters in Table~\ref{tab:parameters}. Figure~\ref{fig:conf_energy} shows that configuration-averaged HF picks up only about 45\% of the exciton binding energy in the large-$R$ limit. This is improved to about 65\% using MBPT up to 2nd order. At $L=30$~nm, however, the all-order BSE$_{0}$ solution is within only 4\% of its asymptotic value $E_{\text{asym}}(R)$~(\ref{eq:Easym}); at this size, the CM energy in Eq.~(\ref{eq:Easym}) is still significant. We also see that 2nd-order MBPT gives a good account of the correlation energy for smaller NC sizes $L\alt10$~nm, including the strong-confinement limit. 
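The quoted value of $E_{\infty}$ follows directly from Eq.~(\ref{eq:Einfinity}) and the parameters of Table~\ref{tab:parameters}; a short numerical check in hartree atomic units:

```python
# Check of the bulk exciton binding energy, Eq. (Einfinity):
# E_inf = -mu/(2 eps_in^2) in hartree atomic units, with the Table
# parameters mu = 0.126 m0 and eps_in = eps_eff = 7.3.
mu, eps_in = 0.126, 7.3
hartree_meV = 27211.4  # 1 hartree in meV

E_inf = -mu / (2 * eps_in**2) * hartree_meV
print(round(E_inf, 2))  # -32.17 meV, as quoted in the text
```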
For larger sizes $10\text{ nm}\alt L\alt16\text{ nm}$ of experimental interest, however, BSE$_{0}$ is significantly different from HF, MBPT, and $E_{\text{asym}}(R)$; all-order approaches thus become the preferred method of calculation for these sizes. A discussion of $\mathbf{k}\cdot\mathbf{p}$ corrections to the correlation energy will be left to Sec.~\ref{subsec:LR-FS}. These corrections in general lift the degeneracy between the solutions for $F_{\text{tot}}=0$ or 1 and contribute to the exciton fine structure. \subsection{\label{subsec:lifetime}Radiative lifetime of the ground-state exciton} The radiative lifetime $\tau_{\alpha}$ of an exciton state $\alpha$ is given by~\cite{efros-82-sqd,nguyen-20b-psk} \begin{equation} \frac{1}{\tau_{\alpha}}=\frac{4}{9}\frac{n_{\text{out}}\omega_{\alpha}}{c^{3}}f_{\varepsilon}^{2}\left|M_{\alpha}\right|^{2}\,,\label{eq:tau_alpha} \end{equation} where $n_{\text{out}}=\sqrt{\eps_{\text{out}}}$ is the refractive index of the surrounding medium, with $\eps_{\text{out}}$ its optical dielectric constant, $\omega_{\alpha}$ is the frequency of the emitted photon (total exciton energy), and $M_{\alpha}=\RME{\alpha(F_{\text{tot}})}{M}{0}$ is the reduced momentum matrix element, which is given by Eq.~(\ref{eq:RPAEredmxel}). A selection rule for one-photon emission requires $F_{\text{tot}}=1$, otherwise $M_{\alpha}=0$. The quantity $f_{\varepsilon}$ is the dielectric screening factor, defined as the ratio of photon electric field inside the NC to that at infinity, which we take to have the value for a sphere~\cite{Jackson} (see Refs.~\cite{becker-18-psk} and \cite{nguyen-20b-psk} for further discussion), \begin{equation} f_{\varepsilon}^{\text{sph}}=\frac{3\varepsilon_{\text{out}}}{\varepsilon_{\text{opt}}+2\varepsilon_{\text{out}}}\,.\label{eq:feps-sphere} \end{equation} As in the previous section, two cases of interest can be handled analytically within the EMA. 
First, for noninteracting carriers the reduced momentum matrix element for the ground-state $1S_{e}$-$1S_{h}$ exciton is given by~\cite{nguyen-20b-psk} \begin{equation} \left|M_{\alpha}\right|^{2}=\left|\BraKet{1S_{e}}{1S_{h}}\right|^{2}\left|\RME{J_{\text{CB}}}{p}{J_{\text{VB}}}\right|^{2}=E_{P}\,.\label{eq:redmxel_non} \end{equation} Here, $\BraKet{1S_{e}}{1S_{h}}=1$ is the overlap of envelope functions, and $\RME{J_{\text{CB}}}{p}{J_{\text{VB}}}$ is the reduced interband momentum matrix element between Bloch states; this satisfies $\left|\RME{J_{\text{CB}}}{p}{J_{\text{VB}}}\right|^{2}=E_{P}$, which can be regarded as the definition of the Kane parameter $E_{P}$~\cite{nguyen-20b-psk}. Hence, for noninteracting particles the lifetime of the $1S_{e}$-$1S_{h}$ exciton is \begin{equation} \frac{1}{\tau_{\text{non}}}=\frac{4}{9}\frac{n_{\text{out}}}{c^{3}}f_{\varepsilon}^{2}\left(E_{g}+\frac{\pi^{2}}{2\mu R^{2}}\right)E_{P}\,.\label{eq:tau_non} \end{equation} \begin{figure} \includegraphics[scale=0.54]{tau} \caption{\label{fig:tau_rad}Log-log plot of the radiative lifetime of the $1S_{e}$-$1S_{h}$ ground-state exciton of a NC of CsPbBr$_{3}$ vs.\ NC edge length, in various approximations. All theories assume the EMA. Dashed lines: $\tau_{\text{asym}}$, large-$R$ asymptotic form of the lifetime {[}Eq.~(\ref{eq:tau_asym}){]}; $\tau_{\text{non}}$, lifetime for noninteracting particles {[}Eq.~(\ref{eq:tau_non}){]}. Continuous lines (top to bottom): HF, configuration-averaged Hartree-Fock; MBPT, many-body perturbation theory up to first order (following Ref.~\protect\cite{nguyen-20b-psk}); BSE$_{0}$, particle-hole Bethe-Salpeter equation.} \end{figure} The other case is the large-$R$ limit for interacting carriers. 
Here one makes use of the bound-exciton wave function $\Psi(\mathbf{r}_{e},\mathbf{r}_{h})$~(\ref{eq:2body_soln}) and generalizes the overlap $\BraKet{1S_{e}}{1S_{h}}$ in Eq.~(\ref{eq:redmxel_non}) to~\cite{efros-82-sqd} \begin{align} & \int\Psi(\mathbf{r}_{e},\mathbf{r}_{h})\delta^{3}(\mathbf{r}_{e}-\mathbf{r}_{h})\,d^{3}\mathbf{r}_{e}d^{3}\mathbf{r}_{h}\nonumber \\ & \quad\quad\quad\quad=\psi_{\text{rel}}(\bm{0})\int\psi_{\text{CM}}(\mathbf{r}')\,d^{3}\mathbf{r}'\nonumber \\ & \quad\quad\quad\quad=\frac{2\sqrt{2}}{\pi}\left(\frac{\mu R}{\eps_{\text{in}}}\right)^{3/2}\,,\label{eq:ovlp_subst} \end{align} which gives the large-$R$ asymptotic form of the lifetime~\cite{efros-82-sqd}, \begin{equation} \frac{1}{\tau_{\text{asym}}}=\frac{4}{9}\frac{n_{\text{out}}}{c^{3}}f_{\varepsilon}^{2}\left(E_{g}-\frac{\mu}{2\eps_{\text{in}}^{2}}\right)E_{P}\left(\frac{8}{\pi^{2}}\frac{\mu^{3}R^{3}}{\eps_{\text{in}}^{3}}\right)\,.\label{eq:tau_asym} \end{equation} For large $R$, the lifetime thus goes as $\tau\sim1/R^{3}$. \begin{figure} \includegraphics[scale=0.5]{tau_kp} \caption{\label{fig:tau_kp_corr}Plot of $\mathbf{k}\cdot\mathbf{p}$ corrections to the lifetime of the $1S_{e}$-$1S_{h}$ ground-state exciton of a NC of CsPbBr$_{3}$ vs.\ NC edge length, for various levels of theory. The $\mathbf{k}\cdot\mathbf{p}$ correction is defined relative to the BSE$_{0}$ value (Fig.~\ref{fig:tau_rad}), which corresponds to zero. Continuous curves (top to bottom): CIS, configuration-interaction singles; RPAE, random-phase approximation with exchange; BSE$_{\mathbf{k}\cdot\mathbf{p}}$, particle-hole Bethe-Salpeter equation using basis states of the $4\times4$ $\mathbf{k}\cdot\mathbf{p}$ model.} \end{figure} These analytical results together with various numerical results are shown in Fig.~\ref{fig:tau_rad} as a function of NC edge length. One sees that the mean-field HF approach gives almost the same lifetime as noninteracting carriers, and both tend to a constant as $L\rightarrow\infty$. 
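The overlap substitution~(\ref{eq:ovlp_subst}) can also be verified symbolically. The following sketch assumes the standard hydrogen-like $1s$ form of $\psi_{\text{rel}}$ with effective Bohr radius $a=\eps_{\text{in}}/\mu$ (hartree atomic units), together with the CM wave function of Eq.~(\ref{eq:gs_wave_function}).

```python
from sympy import Rational, integrate, pi, simplify, sin, sqrt, symbols

# Symbolic check of the overlap substitution, Eq. (ovlp_subst).
r, R, mu, eps = symbols('r R mu eps', positive=True)

# CM wave function of Eq. (gs_wave_function), integrated over the sphere:
psi_CM = (2*pi*R)**Rational(-1, 2) * sin(pi*r/R) / r
int_CM = integrate(psi_CM * 4*pi*r**2, (r, 0, R))

# 1s hydrogen-like relative wave function at the origin, with effective
# Bohr radius a = eps/mu: psi_rel(0) = 1/sqrt(pi a^3).
psi_rel_0 = 1 / sqrt(pi * (eps/mu)**3)

# The product should reproduce (2 sqrt(2)/pi) (mu R / eps)^(3/2).
lhs = psi_rel_0 * int_CM
rhs = (2*sqrt(2)/pi) * (mu*R/eps)**Rational(3, 2)
assert simplify(lhs / rhs) == 1
```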
Correlation has a large effect as $L$ increases, however. Although MBPT up to first order~\cite{nguyen-20b-psk} accounts quite well for the lifetime in the strong-confinement limit $L\ll2a_{B}=6.1$~nm, it deviates significantly from both BSE$_{0}$ and $\tau_{\text{asym}}$ in intermediate confinement and also has the wrong large-$R$ limit~\cite{nguyen-20b-psk}. In intermediate confinement, only BSE$_{0}$ gives a satisfactory description of the radiative lifetime. \begin{figure}[!b] \includegraphics[scale=0.5]{tau_expt} \caption{\label{fig:tau_expt}Comparison of theoretical and experimental lifetimes of NCs of CsPbBr$_{3}$. Continuous lines (top to bottom): HF, configuration-averaged Hartree-Fock (Ref.~\protect\cite{nguyen-20b-psk}); MBPT, many-body perturbation theory up to first order (Ref.~\protect\cite{nguyen-20b-psk}); BSE$_{0}$, particle-hole Bethe-Salpeter equation using EMA basis states; RPAE, random-phase approximation with exchange using basis states of the $4\times4$ $\mathbf{k}\cdot\mathbf{p}$ model. Experimental values: Becker \emph{et al}., Ref.~\protect\cite{becker-18-psk}; Fu \emph{et al}., Ref.~\protect\cite{fu-17-psk}; Canneson \emph{et al}., Ref.~\protect\cite{canneson-17-psk}.} \end{figure} The $\mathbf{k}\cdot\mathbf{p}$ corrections to the radiative lifetime are shown in Fig.~\ref{fig:tau_kp_corr} using various approaches within the $4\times4$ $\mathbf{k}\cdot\mathbf{p}$ model. Overall, the $\mathbf{k}\cdot\mathbf{p}$ corrections are small, up to about 5\% of the lifetime in intermediate confinement. However, the more complete RPAE approach gives significantly different results from CIS and BSE$_{\mathbf{k}\cdot\mathbf{p}}$ and, in situations where $\mathbf{k}\cdot\mathbf{p}$ corrections are of interest, is therefore to be preferred. Our results are compared with the available experimental data (at cryogenic temperatures) in Fig.~\ref{fig:tau_expt}. 
In the size range of interest, the all-order methods give significantly improved agreement with experiment and bring the theory into good agreement with the measurement of Becker \emph{et al.}~\cite{becker-18-psk}. However, the decay rate is approximately proportional to the Kane parameter $E_{P}$ {[}see, e.g., Eq.~(\ref{eq:tau_asym}){]}, whose value is presently quite uncertain. A larger value of $E_{P}$ might favor the other measurements. We also note that the measurements disagree with one another by up to a factor of 2. Further discussion of possible errors in both theory and experiment is given in Ref.~\cite{nguyen-20b-psk}. Finally, we note that, in our discussion of the large-$R$ limit of the lifetime {[}Eq.~(\ref{eq:tau_asym}){]}, we have assumed an idealized situation in which the carriers remain in a single pure quantum state (the ground state) for all NC sizes $R$. For sufficiently large sizes at finite temperature, this assumption will fail and the predicted decrease in lifetime as $1/R^{3}$ will break down~\cite{takagahara-87-sqd}. However, Fig.~\ref{fig:tau_expt} makes it clear that a strong renormalization due to correlation is observable in the experimental data for the synthesized NC sizes at cryogenic temperatures. \subsection{\label{subsec:LR-FS}Fine structure: long-range Coulomb interaction} In this section we illustrate the application of all-order methods to the ground-state exciton fine structure, focusing on one contribution, the LR Coulomb interaction~(\ref{eq:g12LR}), which has received much attention recently~\cite{ben-aich-19-psk,sercel-19-psk,ben-aich-20-psk,tamarat-19-psk}. Other contributions to the fine structure (with comparable size) include the SR Coulomb interaction~(\ref{eq:g12SR})~\cite{becker-18-psk,sercel-19-psk,ben-aich-20-psk,tamarat-19-psk}, NC shape and lattice deformations~\cite{ben-aich-19-psk,sercel-19-psk,ben-aich-20-psk}, and a possible strong Rashba interaction~\cite{becker-18-psk,sercel-19-psk,swift-21-psk}.
The leading Coulomb contribution to exciton fine structure is the exchange interaction, Figs.~\ref{fig:ladders}(c) and (d)~\cite{Knox,pikus-71-sqd}. As discussed in Sec.~\ref{sec:formalism}, this term is formally of order $O[(\mathbf{k}\cdot\mathbf{p})^{2}]$ in $\mathbf{k}\cdot\mathbf{p}$ perturbation theory. In the present approach, $\mathbf{k}\cdot\mathbf{p}$ corrections enter via the small components of the wave function in VB-CB-coupled methods such as the $4\times4$ $\mathbf{k}\cdot\mathbf{p}$ model. By keeping track of both large and small components when evaluating Coulomb matrix elements, the $\mathbf{k}\cdot\mathbf{p}$ corrections then propagate ``automatically'' to the final energy. It is thus possible to extract the fine-structure contribution directly by taking the difference of the total energy for the $F_{\text{tot}}=1$ and $F_{\text{tot}}=0$ fine-structure states of the ground-state exciton, testing this difference carefully for numerical significance. In Fig.~\ref{fig:LR_fine_structure}, we show the LR fine-structure contribution calculated this way in various many-body approximations as a function of NC size. A positive value of the fine structure indicates that the $F_{\text{tot}}=1$ state is higher in energy than the $F_{\text{tot}}=0$ state. Note that the only all-order method shown in the figure is CIS. The BSE$_{\mathbf{k}\cdot\mathbf{p}}$ method does not reproduce the fine structure accurately. This happens because BSE$_{\mathbf{k}\cdot\mathbf{p}}$ (like BSE$_{0}$) excludes the last term in Eq.~(\ref{eq:Hmatrix}) {[}Fig.~\ref{fig:Excitations}(e){]}, which is the term that generates the exchange energy {[}Figs.~\ref{fig:ladders}(c) and (d){]}. The RPAE method does include this term, but the additional fine-structure $\mathbf{k}\cdot\mathbf{p}$ corrections contained in RPAE relative to CIS are very small, giving a modification of only about 1\% or less of the CIS result over the range of sizes shown. 
As with the radiative lifetime, correlation is seen to be a very important effect in intermediate confinement, and the all-order CIS or RPAE methods give significantly different results from HF and MBPT in this regime. \begin{figure} \includegraphics[scale=0.5]{LR_fine_structure} \caption{\label{fig:LR_fine_structure}Log-log plot of LR fine structure for a NC of CsPbBr$_{3}$ vs.\ NC edge length, in various approximations. All approaches use the $4\times4$ $\mathbf{k}\cdot\mathbf{p}$ model. Dashed lines (top to bottom): `contact', large-$R$ asymptotic value calculated assuming an effective contact interaction, Eqs.~(\ref{eq:g12_LRFS}) and (\ref{eq:LRFS_asym}); `noninteracting', noninteracting carriers, Eq.~(\ref{eq:LRFS_non}). Continuous lines (top to bottom): CIS, configuration-interaction singles; MBPT, many-body perturbation theory up to first order; HF, configuration-averaged Hartree-Fock. The curve for RPAE is indistinguishable from that for CIS, while BSE$_{\mathbf{k}\cdot\mathbf{p}}$ accounts poorly for the LR fine structure and is not shown (see text for further discussion).} \end{figure} As before, it is possible to make analytical progress in two cases within the EMA. Using external-leg wave-function corrections to first order in $\mathbf{k}\cdot\mathbf{p}$ perturbation theory, it can be shown~\cite{pikus-71-sqd,takagahara-93-sqd,tong-11-sqd} that the lowest-order exchange energy, Fig.~\ref{fig:ladders}(c), is given in a spherical approximation by the matrix element of the effective contact interaction, \begin{equation} \tilde{g}_{12}=\frac{4\pi}{3}\frac{1}{\eps_{\text{in}}\omega_{\alpha}^{2}}\delta^{3}(\mathbf{r}_{1}-\mathbf{r}_{2})\mathbf{p}_{1}\cdot\mathbf{p}_{2}\,.\label{eq:g12_LRFS} \end{equation} Here, $\omega_{\alpha}$ is the exciton energy, $\delta^{3}(\mathbf{r}_{1}-\mathbf{r}_{2})$ acts only on the envelope wave functions, and $\mathbf{p}_{1}\cdot\mathbf{p}_{2}$ acts only on the Bloch functions. 
The operator~(\ref{eq:g12_LRFS}) is strictly valid only when the external states in Fig.~\ref{fig:ladders}(c) are $S$ states. For external states of higher angular momentum, a second term with quadrupole symmetry in principle also contributes~\cite{tong-11-sqd}. Evaluating the matrix element $\BraOperKet{eh}{\tilde{g}_{12}}{he}$ using the noninteracting $1S_{e}$ and $1S_{h}$ wave functions~(\ref{eq:gs_wave_function}) gives the LR fine structure for noninteracting particles~\cite{tamarat-19-psk,sercel-19-psk} \begin{equation} \mathcal{F}_{\text{non}}=\frac{4\pi}{9}\frac{E_{P}}{\eps_{\text{in}}}\left(E_{g}+\frac{\pi^{2}}{2\mu R^{2}}\right)^{-2}\frac{\xi}{R^{3}}\,,\label{eq:LRFS_non} \end{equation} where $\xi\approx0.6721$. We can use this expression to test our numerics by comparing $\mathcal{F}_{\text{non}}$ with the fine structure calculated numerically from the lowest-order exchange diagram, Fig.~\ref{fig:ladders}(c), using noninteracting $1S$ states on the external legs that were generated numerically in the $4\times4$ $\mathbf{k}\cdot\mathbf{p}$ model (including large and small components). We find agreement with Eq.~(\ref{eq:LRFS_non}) to about one part in $10^{4}$ for large $R$. This exercise fails when the external states have orbital angular momentum greater than zero, because the effective operator in Eq.~(\ref{eq:g12_LRFS}) excludes the quadrupole term. An estimate of the LR fine structure in the large-$R$ limit can also be made, by evaluating the expectation value $\BraOperKet{\Psi}{\tilde{g}_{12}}{\Psi}$ using the bound-exciton wave function $\Psi$ in Eq.~(\ref{eq:2body_soln}).
This yields \begin{equation} \mathcal{F}_{\text{asym}}=\frac{4\pi}{9}\frac{E_{P}}{\eps_{\text{in}}}\left(E_{g}-\frac{\mu}{2\eps_{\text{in}}^{2}}\right)^{-2}\left(\frac{1}{\pi}\frac{\mu^{3}}{\eps_{\text{in}}^{3}}\right)\,.\label{eq:LRFS_asym} \end{equation} $\mathcal{F}_{\text{asym}}$ is a constant, representing the LR fine-structure contribution of the bulk exciton, with a value $\mathcal{F}_{\text{asym}}=0.869$~meV for the parameters in Table~\ref{tab:parameters}. However, Eq.~(\ref{eq:LRFS_asym}) is only approximate, because the derivation implicitly applies the effective operator $\tilde{g}_{12}$~(\ref{eq:g12_LRFS}) to external states with angular momentum greater than zero, for which the quadrupole term cannot be neglected. This can be seen by expressing the diagram in Fig.~\ref{fig:ladders}(d) in terms of all-order CI amplitudes~(\ref{eq:CIexciton}), \begin{equation} \mathcal{F}_{\alpha}=\sum_{ehe'h'}(\mathcal{X}_{e'h'}^{\alpha})^{*}\mathcal{X}_{eh}^{\alpha}\BraOperKet{e'h}{g_{12}}{h'e}\,.\label{eq:LRFS_all-order} \end{equation} The sum here is over all exciton channels $(e,h)$ and $(e',h')$, implying contributions from $P$, $D$, etc., states in the Coulomb matrix element. A numerical estimate of the large-$R$ limit can instead be made using the all-order CIS method, which is set up to handle the angular couplings in full generality. From Fig.~\ref{fig:LR_fine_structure}, one sees that the LR fine structure is not quite asymptotic (constant) at $L=35$~nm and has a numerical value that is of the same order of magnitude as $\mathcal{F}_{\text{asym}}$, but about half the size. As mentioned in Sec.~\ref{sec:formalism}, the contribution of the SR Coulomb interaction to the exciton fine structure is also formally of order $O[(\mathbf{k}\cdot\mathbf{p})^{2}]$, and studies have shown that the SR and LR fine-structure contributions are comparable \cite{sercel-19-psk,tamarat-19-psk,ben-aich-19-psk}.
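To make the comparison between Eqs.~(\ref{eq:LRFS_non}) and (\ref{eq:LRFS_asym}) concrete, the two expressions can be evaluated numerically. The following Python sketch uses illustrative parameter values in Hartree atomic units; the actual values of $E_{g}$, $E_{P}$, $\mu$, and $\eps_{\text{in}}$ are those of Table~\ref{tab:parameters} and are not assumed here:

```python
import math

def lrfs_noninteracting(R, E_g, E_P, mu, eps_in, xi=0.6721):
    # Eq. (LRFS_non): LR fine structure for noninteracting 1S carriers
    # in a sphere of radius R (Hartree atomic units)
    omega = E_g + math.pi**2 / (2.0 * mu * R**2)   # noninteracting exciton energy
    return (4.0 * math.pi / 9.0) * (E_P / eps_in) / omega**2 * xi / R**3

def lrfs_asymptotic(E_g, E_P, mu, eps_in):
    # Eq. (LRFS_asym): large-R (bulk-exciton) limit, a size-independent constant
    omega = E_g - mu / (2.0 * eps_in**2)           # bound-exciton energy
    return (4.0 * math.pi / 9.0) * (E_P / eps_in) / omega**2 \
        * mu**3 / (math.pi * eps_in**3)

# Illustrative parameters only (Hartree units), not the values of Table I
E_g, E_P, mu, eps_in = 0.088, 0.74, 0.10, 5.0

f_non = [lrfs_noninteracting(R, E_g, E_P, mu, eps_in) for R in (20.0, 40.0, 80.0)]
f_asym = lrfs_asymptotic(E_g, E_P, mu, eps_in)
```

The noninteracting value falls off roughly as $R^{-3}$ once the gap dominates the confinement energy, while the bound-exciton estimate is a constant, mirroring the behaviour of the dashed curves in Fig.~\ref{fig:LR_fine_structure}.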
The leading SR fine-structure term can be included in the CIS and RPAE approaches by putting $g_{12}=g_{12}^{\text{LR}}+g_{12}^{\text{SR}}$ in the exchange term $\BraOperKet{eh'}{g_{12}}{he'}$ {[}Fig.~\ref{fig:Excitations}(e){]} in Eq.~(\ref{eq:Hmatrix}). The $O(1)$ particle-hole ladder terms, generated by iterating Fig.~\ref{fig:Excitations}(d), will then provide the vertex-renormalization terms in Fig.~\ref{fig:ladders}(d), which are also important for the SR fine structure. \subsection{\label{subsec:1PA_xsection}One-photon absorption cross section} The one-photon absorption cross section for laser frequency $\omega$ can be written~\cite{elliott-57-sqd,efros-82-sqd} \begin{equation} \sigma^{(1)}(\omega)=\sum_{\alpha}T_{\alpha}^{(1)}\Delta_{\alpha}(\omega-\omega_{\alpha})\,,\label{eq:sigma_1PA} \end{equation} where $T_{\alpha}^{(1)}$ is the one-photon ``transition strength'' to a final-state exciton $\alpha$ with energy $\omega_{\alpha}$, and $\Delta_{\alpha}(\omega-\omega_{\alpha})$ is a line-shape function for this transition. The transition strength is given by \begin{equation} T_{\alpha}^{(1)}=\frac{4\pi^{2}}{3}\frac{f_{\varepsilon}^{2}}{n_{\text{out}}c\omega}\left|M_{\alpha}\right|^{2}\,,\label{eq:T1_alpha} \end{equation} involving the same quantities as Eq.~(\ref{eq:tau_alpha}). To evaluate Eq.~(\ref{eq:sigma_1PA}), we extract a large number of correlated exciton states $\alpha$ using the CIS~(\ref{eq:CIeigenvalue}) or RPAE~(\ref{eq:RPAEeigenvalue}) eigenvalue equations, from the ground state up to a high energy cutoff (typically a few hundred eigenstates up to an excitation energy of several eV). The computed transition strengths $T_{\alpha}^{(1)}$ are then broadened using the phenomenological approach of Ref.~\cite{nguyen-20b-psk} (which should be consulted for more details). 
A Gaussian line-shape function is chosen to emphasize inhomogeneous broadening mechanisms, \begin{equation} \Delta_{\alpha}(\omega-\omega_{\alpha})=\frac{1}{\sigma_{\alpha}\sqrt{2\pi}}\exp\left[-\frac{(\omega-\omega_{\alpha})^{2}}{2\sigma_{\alpha}^{2}}\right]\,,\label{eq:line-shape} \end{equation} with width parameters \begin{equation} \sigma_{\alpha}^{2}=\left(\sigma_{\alpha}^{\text{size}}\right)^{2}+\left(\sigma_{\alpha}^{\text{other}}\right)^{2}\,.\label{eq:line_width} \end{equation} The width $\sigma_{\alpha}^{\text{size}}$ is a transition-dependent term representing the range of NC sizes present in the ensemble, which we take as $\delta L/L\approx5$\% for CsPbBr$_{3}$~\cite{chen-17-psk}. Other broadening mechanisms are represented by $\sigma_{\alpha}^{\text{other}}$, which is given the constant value 80~meV for all transitions except the ground-state exciton, for which we take $\sigma_{\alpha}^{\text{other}}=52$~meV. Because the line-shape functions are normalized to unity, $\int_{0}^{\infty}\Delta_{\alpha}(\omega-\omega_{\alpha})\,d\omega=1$, the broadening parameters do not significantly change the average value of the computed cross section $\sigma^{(1)}$; they affect only the appearance of substructure related to individual transitions $T_{\alpha}^{(1)}$. The broadening parameters above, which are physically reasonable, were chosen to approximately reproduce the resolution of individual transitions observed in the experimental absorption spectra of Ref.~\cite{chen-17-psk} for an edge length of $L=9.4$~nm (see inset to Fig.~\ref{fig:1PA}). \begin{figure} \includegraphics[scale=0.60]{1PA} \caption{\label{fig:1PA}One-photon absorption for a NC of CsPbBr$_{3}$ with edge length $L=9.4$~nm. \emph{Upper panel}: Transition strengths $T_{\alpha}^{(1)}$, Eq.~(\ref{eq:T1_alpha}), calculated within RPAE using the $8\times8$ $\mathbf{k}\cdot\mathbf{p}$ model.
The quantum numbers of some dominant transitions are indicated; $1S$ indicates a final-state exciton $1S_{e}$-$1S_{h}$, $1P$ indicates $1P_{e}$-$1P_{h}$, etc. \emph{Lower panel}: continuous lines show one-photon absorption cross sections $\sigma^{(1)}$ calculated in various approximations. HF: configuration-averaged Hartree-Fock; MBPT: many-body perturbation theory up to first order (following Ref.~\protect\cite{nguyen-20b-psk}); BSE$_{\mathbf{k}\cdot\mathbf{p}}$: particle-hole Bethe-Salpeter approach using single-particle states from the $8\times8$ $\mathbf{k}\cdot\mathbf{p}$ model. The dashed line, which is nearly coincident with the BSE$_{\mathbf{k}\cdot\mathbf{p}}$ line, indicates the RPAE approximation. \emph{Inset of lower panel}: unnormalized one-photon absorption cross section (arbitrary units) taken from the experiment of Chen \emph{et al}.~\cite{chen-17-psk}\ for a NC of edge length $L=9.4$~nm. The experimental absorption cross section is scaled so that the threshold peak ($1S_{e}$-$1S_{h}$) has the same numerical value as the theoretical BSE$_{\mathbf{k}\cdot\mathbf{p}}$ curve.} \end{figure} A study of excitons at cryogenic temperatures in bulk (CH$_{3}$NH$_{3}$)PbBr$_{3}$~\cite{tanaka-03-psk}, which may be expected to have a similar band structure to CsPbBr$_{3}$, showed a sharp absorption line at an excitation energy of 1.07~eV above threshold. This was attributed to transitions from the $s$-like VB ($R_{6}^{+})$ to the $p_{3/2}$-like CB ($R_{8}^{-})$ at the $R$ point of the Brillouin zone; higher-lying structures around 1.6~eV above threshold were attributed to transitions to the $p_{1/2}$-like CB at the $M$ point. The absorption spectra of NCs of CsPbBr$_{3}$~\cite{brennan-17-psk,chen-17-psk} show analogous features in the form of steps in the absorption cross section, the first occurring around 0.6--0.8~eV above threshold (e.g., see the inset to Fig.~\ref{fig:1PA}). 
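The broadening procedure of Eqs.~(\ref{eq:sigma_1PA}), (\ref{eq:line-shape}), and (\ref{eq:line_width}) can be sketched in a few lines of Python; the transition energies, strengths, and widths below are illustrative placeholders, not computed values:

```python
import math

def gaussian(x, sigma):
    # Unit-normalized Gaussian line shape, Eq. (line-shape)
    return math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

def cross_section(omega, transitions):
    # Eq. (sigma_1PA): sum of broadened transition strengths;
    # transitions is a list of (omega_alpha, T_alpha, sigma_alpha) tuples
    return sum(T * gaussian(omega - w, s) for w, T, s in transitions)

# Illustrative placeholders (energies and widths in eV, strengths arbitrary)
lines = [(2.43, 1.02, 0.052), (2.62, 1.38, 0.080), (2.78, 1.75, 0.080)]

# Because each line shape integrates to unity, integrating the broadened
# spectrum recovers the summed transition strength
grid_step = 0.001
integral = sum(cross_section(1.5 + grid_step * i, lines)
               for i in range(2501)) * grid_step
```

This normalization is why the choice of broadening parameters changes only the visibility of substructure in $\sigma^{(1)}$, not its average value.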
Interpreted as the threshold for absorption to the $p_{3/2}$-like CB band at the $R$ point ($R_{6}^{+}\rightarrow R_{8}^{-}$), this implies a value of about 0.6--0.8~eV for the spin-orbit coupling parameter $\Delta_{\text{soc}}$ discussed in Sec.~\ref{subsec:parameters} (with small corrections due to the confinement shifts present in the NC spectra, which have an order of magnitude of several tens of meV for an edge length of 9~nm, as shown in Fig.~\ref{fig:conf_energy}). DFT calculations for CsPbBr$_{3}$~\cite{becker-18-psk,sercel-19-psk} show a similar energy ordering of band-structure features to that observed for (CH$_{3}$NH$_{3}$)PbBr$_{3}$ in Ref.~\cite{tanaka-03-psk}, although the predicted value of $\Delta_{\text{soc}}$ is somewhat larger (e.g., $\Delta_{\text{soc}}=1.54$~eV in Ref.~\cite{sercel-19-psk}). For this calculation, we use the $8\times8$ $\mathbf{k}\cdot\mathbf{p}$ model discussed in Sec.~\ref{subsec:parameters} in order to include the secondary absorption threshold to the $p_{3/2}$-like CB, assuming $\Delta_{\text{soc}}=0.8$~eV. The $8\times8$ $\mathbf{k}\cdot\mathbf{p}$ model also involves Luttinger parameters $\gamma_{i}$ ($i=1$--3) \cite{efros-00-sqd}, which describe the effective-mass and intraband couplings within the $p_{3/2}$-like CB. The Luttinger parameters are not known for CsPbBr$_{3}$; we choose $\gamma_{i}$ such that $\tilde{\gamma}_{i}=0$, where $\tilde{\gamma}_{i}$ is the contribution to $\gamma_{i}$ from remote bands (not included in the $8\times8$ $\mathbf{k}\cdot\mathbf{p}$ model). Transition strengths obtained using RPAE are shown in Fig.~\ref{fig:1PA} (upper panel). The first few dominant transitions are to excitons of the form $1l_{e}$-$1l_{h}$ (where $l$ denotes orbital angular momentum), which give nonzero overlaps $\BraKet{1l_{e}}{1l_{h}}$ in Eq.~(\ref{eq:redmxel_non}) for the noninteracting case. 
However, correlation and $\mathbf{k}\cdot\mathbf{p}$ corrections allow numerous other transitions with reduced strength, such as $2S_{e}$-$1S_{h}$, which add to the overall absorption strength. Also, a line such as $1P_{e}$-$1P_{h}$ has a small fine structure, with components $1(P_{1/2})_{e}$-$1(P_{1/2})_{h}$, $1(P_{3/2})_{e}$-$1(P_{1/2})_{h}$, $1(P_{1/2})_{e}$-$1(P_{3/2})_{h}$, and $1(P_{3/2})_{e}$-$1(P_{3/2})_{h}$. Theoretical absorption cross sections after line broadening are shown in the lower panel of Fig.~\ref{fig:1PA} and compared to the experimental cross section of Chen \emph{et al.}~\cite{chen-17-psk} for a NC of edge length 9.4~nm. As discussed in Ref.~\cite{nguyen-20b-psk}, measurements of the absolute cross section, which have been performed in some cases for particular wavelengths, disagree with each other by up to an order of magnitude. We therefore focus here on the \emph{shape} of the absorption curve. The all-order cross sections from BSE$_{\mathbf{k}\cdot\mathbf{p}}$ and RPAE are found to be in good qualitative agreement with the measured cross section of Ref.~\cite{chen-17-psk}. Similar agreement is found with Ref.~\cite{brennan-17-psk}, where the absorption curve for $L=9.2$~nm has a similar shape near threshold, with a step around 0.6~eV above threshold. The threshold peak (at about 510~nm) is a transition to the $1S_{e}$-$1S_{h}$ exciton. A weaker transition around 470~nm is just visible in the experimental (and theoretical) curve, and is due mainly to a combination of the $1P_{e}$-$1P_{h}$, $1D_{e}$-$1D_{h}$, and $1F_{e}$-$1F_{h}$ transitions. The secondary transition to the $p_{3/2}$-like CB (around 410~nm) is perhaps more pronounced in the theoretical curve than in the measurement. 
The strength of this transition relative to the threshold transition is fixed in our approach by the $8\times8$ $\mathbf{k}\cdot\mathbf{p}$ model, in which the $4\times4$ $\mathbf{k}\cdot\mathbf{p}$ model is embedded with the ratios of coupling constants constrained by symmetry. However, the ratio of the cross section at 375~nm to its value at the threshold peak is in good agreement with experiment. Note that the theoretical $1S_{e}$-$1S_{h}$ threshold peak, which is based on a band gap measured at cryogenic temperatures (Table~\ref{tab:parameters}), is redshifted by about 0.1~eV compared to the experimental peak, which was measured at room temperature or above. This is likely due to the temperature-dependent shift of the band gap and possibly also to a change of crystal phase~\cite{yang-17-psk}. A better fit to the experiment could have been found by choosing $E_{g}$ to be about 0.1~eV larger, compensated by a value of $\Delta_{\text{soc}}$ that is about 0.1~eV smaller, so that the step in the absorption curve occurs at the same wavelength. \begin{table} \caption{\label{tab:beta}Transition strengths and renormalization factors for the principal one-photon-absorption transitions in Fig.~\ref{fig:1PA}. The columns HF, MBPT (1st-order), and RPAE give the transition strengths summed over fine-structure components, Eq.~(\ref{eq:T1_summed}), for the indicated model (units: $10^{-15}$~eV\,cm$^{2}$). 
The renormalization factors $\beta_{\text{MBPT}}$ and $\beta_{\text{RPAE}}$ are the enhancement of the transition strength relative to the HF model.} \begin{ruledtabular} \begin{tabular}{lddddd} Transition & \multicolumn{1}{c}{HF} & \multicolumn{1}{c}{MBPT} & \multicolumn{1}{c}{RPAE} & \multicolumn{1}{c}{$\beta_{\text{MBPT}}$} & \multicolumn{1}{c}{$\beta_{\text{RPAE}}$}\\ \hline 1S & 0.21 & 0.66 & 1.02 & 3.1 & 4.8\\ 1P & 0.58 & 1.46 & 1.38 & 2.5 & 2.4\\ 1D & 0.88 & 2.00 & 1.75 & 2.3 & 2.0\\ 1F & 1.12 & 2.09 & 1.73 & 1.9 & 1.6\\ \end{tabular} \end{ruledtabular} \end{table} In Fig.~\ref{fig:1PA}, the cross section for HF can be seen to be qualitatively different, rising steadily from a weak threshold peak. The cross section obtained using first-order MBPT~\cite{nguyen-20b-psk} is a partial improvement on the HF cross section. To understand this phenomenon in more detail, we consider the transition strength of the first few dominant transitions in the various approaches. We find that the various many-body treatments tend to redistribute the transition strength among the fine-structure components in different ways. Since the fine-structure lines are nearly coincident and only the sum of their transition strengths contributes to the observable cross section, for this analysis we sum the transition strength over fine-structure components, \begin{equation} \tilde{T}_{\alpha}^{(1)}=\sum_{F,F'}T_{\alpha}^{(1)}(F,F')\,.\label{eq:T1_summed} \end{equation} Summed transition strengths are given in Table~\ref{tab:beta}. We also give the enhancement or renormalization factor $\beta$ due to correlation, which is defined as the ratio of $\tilde{T}_{\alpha}^{(1)}$ in a given many-body approach to its value at HF level. The renormalization factor for the threshold $1S_e$-$1S_{h}$ transition is large, about 4.8 for $L=9.4$~nm, and increases further (approximately as $L^{3}$) for larger $L$.
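The renormalization factors in Table~\ref{tab:beta} are simple ratios of summed transition strengths and can be recomputed directly from the (rounded) table entries; small differences from the quoted $\beta$ values reflect rounding of the tabulated strengths:

```python
# Summed transition strengths from Table (tab:beta), units of 1e-15 eV cm^2
T_HF   = {"1S": 0.21, "1P": 0.58, "1D": 0.88, "1F": 1.12}
T_MBPT = {"1S": 0.66, "1P": 1.46, "1D": 2.00, "1F": 2.09}
T_RPAE = {"1S": 1.02, "1P": 1.38, "1D": 1.75, "1F": 1.73}

# Renormalization factor: ratio of correlated to HF transition strength
beta_MBPT = {k: T_MBPT[k] / T_HF[k] for k in T_HF}
beta_RPAE = {k: T_RPAE[k] / T_HF[k] for k in T_HF}
```

The ratios reproduce the tabulated pattern: a large enhancement for the threshold transition that falls toward unity for the higher transitions.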
This is the same enhancement factor that applies to the radiative decay rate of the ground-state exciton discussed in Sec.~\ref{subsec:lifetime}. However, as emphasized in Ref.~\cite{nguyen-20b-psk}, the renormalization factor decreases rapidly with increasing energy, approaching unity. In fact, while first-order MBPT underestimates the $1S_e$-$1S_{h}$ enhancement factor, it slightly overestimates the factor for the excited-state transitions. The effect of this is that the absorption cross section in an all-order approach tends to have a more prominent threshold peak, followed by a ``leveling out'' of the cross section at higher energies, bringing the shape of the absorption cross section into better agreement with experiment. \section{\label{sec:Conclusions} Conclusions} We have presented various many-body formalisms, within a $\mathbf{k}\cdot\mathbf{p}$ envelope-function approach, for treating all-order correlated single excitons in NC quantum dots. These formalisms apply both to the EMA and to $\mathbf{k}\cdot\mathbf{p}$ models in which the VB and CB are coupled, such as the $4\times4$ $\mathbf{k}\cdot\mathbf{p}$ model. The latter allow one to treat the ``$\mathbf{k}\cdot\mathbf{p}$ corrections'' to the EMA. The simplest many-body approach, valid to order $O(1)$ in $\mathbf{k}\cdot\mathbf{p}$ perturbation theory, was BSE$_{0}$ using an EMA basis set. The most complete treatment of $\mathbf{k}\cdot\mathbf{p}$ corrections was given by RPAE. The various formalisms were explicitly adapted to spherical symmetry and expressed in terms of radial integrals and angular factors. The resulting approaches are very rapid and accurate for systems in intermediate confinement, where correlation effects are in general strong (the methods typically require a few seconds of computation time on a single core). To illustrate these techniques, we considered a class of semiconductor NCs of great recent interest, inorganic lead-halide perovskites such as CsPbBr$_{3}$.
The most commonly synthesized size range of these NCs corresponds to intermediate confinement. We showed that in this size range, the all-order methods gave significant improvements in accuracy compared to mean-field methods (HF) or to perturbative methods (MBPT to first or second order), and were significantly more accurate also than asymptotic results that can be derived analytically for the infinite-size limit. This was shown to be true for the correlation energy, the radiative lifetime of the ground-state exciton, and the LR Coulomb contribution to the exciton fine structure. Partly as a test of our methods, we also checked that the all-order formalism had the expected large-size limit, where this was known. The all-order correlation formalism allows one to generate correlated excited states rigorously. We used excited states to investigate the one-photon absorption cross section, where the all-order methods were shown to give a significant improvement in the shape of the cross section (versus laser wavelength) near and just above threshold. Also, because the $\mathbf{k}\cdot\mathbf{p}$ corrections are integrated directly into the all-order formalism for VB-CB-coupled models, the approach allows one to calculate the LR exciton fine structure, an order $O[(\mathbf{k}\cdot\mathbf{p})^{2}]$ effect, by direct subtraction of the total energy of the fine-structure levels. In other problems, such as the exciton correlation energy or lifetime, the $\mathbf{k}\cdot\mathbf{p}$ corrections were found to be quite small (e.g., up to about 5\% for the lifetime). However, it was shown that, in situations where these $\mathbf{k}\cdot\mathbf{p}$ corrections are interesting, the more complete RPAE method can give significantly different $\mathbf{k}\cdot\mathbf{p}$ corrections than the other methods, such as BSE$_{\mathbf{k}\cdot\mathbf{p}}$, and RPAE is therefore to be preferred. We considered only single excitons in this paper. 
Other excitonic systems, such as trions or biexcitons, can be treated by generalized CI approaches~\cite{shumway-01-sqd,tyrrell-15-sqd} or quantum Monte Carlo~\cite{shumway-01-sqd}. \begin{acknowledgments} The authors would like to thank T.\ P.\ T.\ Nguyen and T.\ C.\ Sum for helpful discussions. S.A.B.\ is grateful to Fr\'ed\'eric Schuster and the CEA's PTC program ``Materials and Processes'' for financial support. \end{acknowledgments}
\section{Introduction} Radio galaxies are morphologically classified into Fanaroff-Riley classes I (FRI) and II (FRII), depending on the absence or presence of hotspots at the edges of their jets \citep{fanaroff}. However, there is a class of radio galaxies that does not show the typical morphology of FRI and FRII sources. These objects are called X-shaped radio galaxies (XRGs) because, apart from the bright 'primary' lobes, they exhibit a pair of weak 'secondary' lobes (or 'wings'), which are oriented at an angle that gives the structure a cross-like shape \citep{leahywilliams}. Primary lobes often host jets with hotspots, while wings never host jets. It has been estimated that XRGs represent $\sim$10\% of FRII sources \citep{leahyparma}. The wings are comparable in length to, or longer than, the primary lobes, although they can appear smaller due to projection effects. Spectral indices are typically steeper in the wings, suggesting that they are radiatively older than the lobes. However, XRGs with a flatter index in the wings than in the lobes have been detected as well \citep{lalrao2004,lalrao2019}. Empirically, it was found that the radio power of FRIIs is higher than that of FRIs \citep{fanaroff}. The radio power of XRGs is of the same order as the FRI/FRII division power $P_{1.4} =10^{25} \; \mathrm{W\cdot Hz^{-1}}$ at 1.4 GHz \citep{dennettthorpe,cheung2009}. Moreover, radio emission seems to be linked to the host galaxy properties. Indeed, XRGs are generally associated with high-ellipticity ($\varepsilon \ge 0.2$) galaxies, and it has been observed that the lobes and the wings are aligned with the host major and minor optical axes, respectively \citep{capetti}. Similar alignments are found in the X-ray band with the axes of the hot atmosphere surrounding the galaxy \citep{hodgeskluck2010}.
A study comparing a sample of XRGs to a control sample of classical radio galaxies with similar redshifts, radio luminosities, and optical luminosities found that XRGs host supermassive black holes (SMBHs) with statistically higher masses than classical sources \citep{mezcua}. Several theoretical models have been proposed to explain the nature of XRGs and the process that generates their morphology and properties; however, there is currently no general consensus on a formation scenario \citep[see][and references therein]{gopal}. Reorientation models support the idea that wings consist of fossil emission along a previous direction of the jets \citep{wirth,merrittekers,rees,dennettthorpe,liu}. According to hydrodynamical models, the backflow plasma coming from the hotspots is diverted by a strong environmental gas pressure, giving rise to the wings \citep{leahywilliams,kraft,capetti,hodgeskluck2011}. In the double active galactic nuclei (AGN) model, the lobes and wings are formed independently by two active SMBHs of a binary system \citep{lalrao2007,lalrao2019}. Finally, wings could result from the deflection of the jets after a collision with the gas present in a stellar shell \citep{gopal}. In this work we report on our study of the radio galaxy in the Abell 3670 (\object{A3670}) galaxy cluster by means of new multifrequency Karl G. Jansky Very Large Array (JVLA) data. Previous radio observations \citep{gregorini1994} revealed the peculiar morphology of this source, making A3670 an XRG candidate. We aim to accurately study the morphology of A3670 and characterise its spectral properties for the first time in order to confirm the XRG classification and investigate the origin of its wings. The paper is organised as follows: in Sect. 2 we describe the A3670 galaxy cluster and its brightest cluster galaxy (BCG); in Sect. 3 we present the new JVLA radio data and summarise the data reduction and imaging processes; in Sect.
4 we present the radio, spectral index, and radiative age maps. We also combine the radio properties with optical properties of the BCG taken from the literature in order to discuss the theoretical models proposed to explain the origin of XRGs; in Sect. 5 we compare our results to the models; in Sect. 6 we summarise our work and give our conclusions on the nature of A3670. Throughout this paper we adopt a $\Lambda$CDM cosmology with $H_0=73\;\mathrm{km\cdot s^{-1}\cdot Mpc^{-1}}$, $\Omega_{\rm M}=0.27$ and $\Omega_{\rm \Lambda}=0.73$. The spectral index $\alpha$ is defined as $S\propto \nu^{-\alpha}$, where $S$ is the flux density, and $\nu$ is the frequency. \section{The galaxy cluster Abell 3670} A3670 ($RA_{\rm J2000}$ $20^h14^{m}18^s$, $Dec_{\rm J2000}$ $-29^o44'51''$\footnote{http://ned.ipac.caltech.edu/}) is a richness-class 2 galaxy cluster located at redshift $z=0.142$ \citep{coziol}, which gives an angular scale of $1''=2.4 \; \mathrm{kpc}$ and a luminosity distance of $D_{\rm L}=645$ Mpc. Its BCG is a giant elliptical galaxy classified as a dumbbell galaxy \citep{gregorini1992,andreon}. Dumbbell galaxies are optical systems presenting two nuclei of similar magnitude, which are surrounded by a common stellar halo \citep{valentijn}. The two nuclei of A3670 are separated by $7''$, corresponding to $\simeq17$ kpc according to the adopted cosmology. The BCG has an ellipticity of $\varepsilon=0.28$ and a position angle of $PA=24^o$ measured from north to east \citep{makarov}\footnote{http://leda.univ-lyon1.fr/}. Its B-band and K-band apparent magnitudes are also available: $m_{\rm B}=17.65$ and $m_{\rm K}=12.74$. We use these parameters in Sect. \ref{radio-ottico} to analyse the connection between the optical and radio properties. No X-ray data are available for this cluster. The BCG hosts a radio galaxy (\object{MRC 2011-298}) with a peculiar shape, as shown by \citet{gregorini1994}, who observed the source at 5.5 GHz using the Very Large Array (VLA).
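The quoted angular scale, luminosity distance, and nuclear separation follow from the adopted cosmology; as a minimal check, a direct numerical integration of the comoving distance in flat $\Lambda$CDM reproduces them:

```python
import math

H0, OMEGA_M, OMEGA_L = 73.0, 0.27, 0.73   # cosmology adopted in this paper
C_KM_S = 299792.458                        # speed of light in km/s

def hubble_E(z):
    # Dimensionless Hubble parameter E(z) for flat Lambda-CDM
    return math.sqrt(OMEGA_M * (1.0 + z)**3 + OMEGA_L)

def comoving_distance(z, n=10000):
    # Line-of-sight comoving distance in Mpc, simple midpoint rule
    dz = z / n
    return (C_KM_S / H0) * sum(dz / hubble_E((i + 0.5) * dz) for i in range(n))

z = 0.142
D_C = comoving_distance(z)
D_L = D_C * (1.0 + z)                      # luminosity distance (Mpc)
D_A = D_C / (1.0 + z)                      # angular-diameter distance (Mpc)
kpc_per_arcsec = D_A * 1.0e3 * math.pi / (180.0 * 3600.0)
nuclei_sep_kpc = 7.0 * kpc_per_arcsec      # the 7'' nuclear separation in kpc
```

This recovers $D_{\rm L}\simeq645$ Mpc, a scale of $\simeq2.4$ kpc per arcsecond, and a nuclear separation of $\simeq17$ kpc.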
In fact, it exhibits a pair of bright lobes in the north-south direction and a pair of weak wings in the east-west direction, oriented at an angle of about 90$^o$. This morphology resembles that of the XRGs and makes A3670 a candidate of this class of objects. \section{Observations and data reduction} \begin{table*} \caption[]{Details of new JVLA radio data analysed in this work (PI: M. Gitti. Project code:14B-027).} \label{dati} $$ \begin{tabular}{ccccc} \hline \noalign{\smallskip} & 1.5 GHz & 5.5 GHz & 6 GHz & 9 GHz\\ & (L-band) & (C-band) & (C-band) & (X-band) \\ \noalign{\smallskip} \hline \noalign{\smallskip} Observation date & 10-Jan.-2015 & 11-Jan.-2015 & 20-Sep.-2014 & 9-Jan.-2015 \\ Frequency coverage (GHz) & 1-2 & 4.5-6.5 & 4-8 & 8-10 \\ Array configuration & CnB & CnB & DnC & CnB \\ Spectral windows & 16 & 16 & 34 & 16 \\ On source time (min) & 22 & 40 & 22 & 35 \\ \noalign{\smallskip} \hline \end{tabular} $$ \begin{tablenotes} \item {\small \textbf{Notes}. Each JVLA dataset contains a number of spectral windows, which are divided into 64 channels. Spectral windows for the datasets at 5.5, 6, and 9 GHz have 128 MHz bandwidth. Spectral windows for the dataset at 1.5 GHz have 64 MHz bandwidth.} \end{tablenotes} \end{table*} We performed new observations of the radio galaxy in A3670 with the JVLA at 1.5 (L-band), 5.5 (C-band), and 9 (X-band) GHz in CnB configuration, and at 6 (C-band) GHz in DnC configuration. The details of the observations and datasets are summarised in Table \ref{dati}. In all the observations we used 3C48 as the flux density calibrator and J2003-3251 as the phase calibrator. Data reduction was carried out with the National Radio Astronomy Observatory (NRAO) Common Astronomy Software Applications ({\ttfamily CASA}) v. 4.7.
First, we carefully edited the visibilities of the calibrators in order to remove Radio Frequency Interference (RFI), adopting both manual and automatic flagging (with the {\ttfamily RFLAG} and {\ttfamily EXTEND} modes of the {\ttfamily FLAGDATA} task). Then we performed the standard calibration procedure\footnote{The description of calibration steps can be found at https://science.nrao.edu/facilities/vla/docs/manuals/obsguide/calibration} with multiple editing iterations. Finally, we self-calibrated the phases of each dataset. After the processes of calibration and self-calibration, $\sim$15\% of the visibilities in C-band (CnB configuration) and 20\% in X-band and C-band (DnC configuration) were flagged. These fractions are consistent with the $\sim$15\% of flagged visibilities expected at these frequencies. However, we were forced to flag more than 55\% of the L-band visibilities, due to the high level of RFI in this dataset (the typical expected value is 40\%). The flagging of entire spectral windows in L-band caused the central frequency to shift from 1.5 to 1.7 GHz. The imaging process was carried out with the {\ttfamily CLEAN} task, setting the multifrequency synthesis mode for continuum emission analysis. The curvature of the sky was accounted for by setting {\ttfamily gridmode=WIDEFIELD}. In L-band, we also used {\ttfamily multiscale=[0,1,5,10]} to highlight the faint extended emission on scales from point-like to 2$\times$beam size. For each dataset, we produced three maps with different baseline weightings to study the source emission at varying resolution and sensitivity. The parameter {\ttfamily weighting=NATURAL} provides more weight to short baselines, which are sensitive to the extended emission, but it degrades the resolution. On the other hand, {\ttfamily weighting=UNIFORM} provides the same weight to short and long baselines, so sensitivity to the extended emission decreases, but the resolution improves.
The parameters {\ttfamily weighting=BRIGGS, robust=N} give an intermediate weight depending on the value of the {\ttfamily robust} parameter ({\ttfamily N=2} and {\ttfamily N=$-$2} correspond to a natural and a uniform weight, respectively). We reached the best compromise between resolution and sensitivity by fixing {\ttfamily robust=0.5}. Therefore, in the following analysis we use the {\ttfamily BRIGGS=0.5} maps only (unless otherwise specified). \section{Results} In this section we present the analysis of the multifrequency maps of the radio galaxy A3670. First, we show the radio, spectral index, and radiative age maps. Then, we analyse the connection between the radio galaxy and its host. \subsection{Radio morphology} \label{sectionmappe} \begin{figure*} \centering \includegraphics[height=8.5cm,width=8.5cm]{Lbriggs.png} \includegraphics[height=8.5cm,width=8.8cm]{C1briggs.png} \includegraphics[height=8.5cm,width=8.5cm]{C2briggs.png} \includegraphics[height=8.5cm,width=8.5cm]{Xbriggs.png} \caption{A3670 maps obtained with the {\ttfamily BRIGGS 0.5} weighting parameter. In all panels, the contour levels are $-3\sigma$, $3\sigma$, $6\sigma$, $12\sigma$, $24\sigma$, $48\sigma$, ... {\it Top left}: 1.7 GHz map at a resolution of $9.9''\times9.0''$ (RMS noise is 0.067 mJy$\cdot$ beam$^{-1}$). {\it Top right}: 5.5 GHz map at a resolution of $3.8''\times2.7''$ (RMS noise is 0.012 mJy$\cdot$ beam$^{-1}$). {\it Bottom left}: 6 GHz map at a resolution of $14.7''\times6.8''$ (RMS noise is 0.047 mJy$\cdot$ beam$^{-1}$). {\it Bottom right}: 9 GHz map at a resolution of $2.5''\times1.7''$ (RMS noise is 0.007 mJy$\cdot$ beam$^{-1}$).} \label{mappe} \end{figure*} \begin{table*} \caption{Properties derived from the A3670 radio maps in Fig. \ref{mappe}. Columns 2 and 3 list the resolution and the RMS noise of the maps. Columns 4 and 5 list the total flux density and radio power within the $3\sigma$ contour levels.
Column 6 indicates the detected components and column 7 lists their flux densities.} \label{fluxtab} \begin{tabular}{ccccccc} \hline \noalign{\smallskip} Frequency & Resolution & RMS & Total flux density & Total radio power & Component & Component flux density \\ (GHz) & (arcsec $\times$ arcsec) & (mJy$\cdot$beam$^{-1}$) & (mJy) & (W$\cdot$Hz$^{-1}$) & & (mJy) \\ \hline \noalign{\smallskip} &&&&& Primary lobes & $294\pm15$ \\ 1.7 & $9.9\times9.0$ & 0.067 & $349\pm18$ & $(1.6\pm0.1)\cdot10^{25}$ & East wing & $32\pm2$ \\ &&&&& West wing & $23\pm1$ \\ \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} &&&&& North jet & $16.2\pm0.5$ \\ &&&&& South jet & $24\pm1$ \\ 5.5 & $3.8\times2.7$ & 0.012 & $127\pm4$ & $(5.9\pm0.2)\cdot10^{24}$ &Primary lobes & $120\pm4$ \\ &&&&& East wing & $3.3\pm0.1$ \\ &&&&& West wing & $4.2\pm0.1$ \\ \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} &&&&& Primary lobes & $124\pm4$ \\ 6 & $14.7\times6.8$ & 0.047 & $140\pm4$ & $(6.5\pm0.2)\cdot10^{24}$ & East wing & $8.3\pm0.2$ \\ &&&&& West wing & $8.4\pm0.3$ \\ \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} &&&&& North jet & $11.1\pm0.3 $ \\ 9 &$2.5\times1.7$& 0.007& $76\pm2 $& $(3.5\pm0.1)\cdot10^{24}$ & South jet & $17\pm1 $ \\ &&&&& Primary lobes & $76\pm2$ \\ \noalign{\smallskip} \hline \end{tabular} \end{table*} A3670 maps are shown in Fig. \ref{mappe}. Table \ref{fluxtab} summarises the resolution, RMS noise, and flux density of each source component. The L-band map at 1.7 GHz (Fig. \ref{mappe}, top left) has a low resolution ($9.9''\times9.0''$), but it allows us to study the extended emission. The source exhibits the primary lobes along the north-south direction and a pair of weak wings, nearly perpendicular to the lobes, along the east-west direction. The flux density of the lobes is $S_{\rm lobes}=294\pm15$ mJy, while the east and the west wings have $S_{\rm Ew}=32\pm2$ mJy and $S_{\rm Ww}=23\pm1$ mJy, respectively. 
The total length of the lobes is $l_{\rm lobes}\simeq60''\simeq145$ kpc, whereas that of the wings is $l_{\rm Ew}\simeq75''\simeq180$ kpc and $l_{\rm Ww}\simeq60''\simeq145$ kpc. The ratio between the projected lengths of the wings and lobes is 2.8. Thanks to the high resolution ($3.8''\times2.7''$) reached in the C-band (array CnB) map at 5.5 GHz (Fig. \ref{mappe}, top right), we can resolve the jets and the core within the primary lobes. We observe that the jets are curved, with an S-shaped structure. No hotspots are detected. Both wings are faint ($S<5$ mJy), but the eastern one appears as diffuse emission, while the western wing is better defined. The resolution of the C-band map at 6 GHz (Fig. \ref{mappe}, bottom left) is lower ($14.7''\times6.8''$) and the extended emission becomes more evident. Although we cannot resolve the jets and the core, the wings are well defined. The X-band map at 9 GHz (Fig. \ref{mappe}, bottom right) has the highest resolution ($2.5''\times1.7''$) and shows well resolved jets and core, while it confirms the absence of hotspots. We notice that the bending of the jets occurs not only in the outer regions of the source, but also in the inner ones near the core. A faint ($S\simeq1$ mJy) diffuse emission associated with the west wing is visible at the $3\sigma$ level. The south and north jets have a flux density of $S_{\rm Sj}=17\pm1$ mJy and $S_{\rm Nj}=11.1\pm0.3$ mJy, respectively. Their lengths are similar, $l_{\rm Sj}\simeq l_{\rm Nj}\simeq18''\simeq40$ kpc. On the basis of the morphological properties, we can classify A3670 as an FRI-type XRG. 
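As a quick cross-check, the angular-to-linear conversions quoted above (e.g. $60''\simeq145$ kpc) follow from the small-angle relation. A minimal sketch; the angular-diameter distance of $\sim500$ Mpc is an assumption back-inferred here from the quoted scale, not a value stated in the text:

```python
import math

ARCSEC_IN_RAD = math.radians(1.0 / 3600.0)  # 1 arcsec in radians

def projected_length_kpc(theta_arcsec, d_a_mpc):
    """Projected linear size from an angular size via the small-angle
    relation l = theta * D_A; d_a_mpc is the angular-diameter distance
    in Mpc (assumed here, not quoted in the text)."""
    return theta_arcsec * ARCSEC_IN_RAD * d_a_mpc * 1.0e3  # Mpc -> kpc

# With an assumed D_A ~ 500 Mpc, the quoted sizes are recovered:
lobes_kpc = projected_length_kpc(60.0, 500.0)      # ~145 kpc
east_wing_kpc = projected_length_kpc(75.0, 500.0)  # ~182 kpc
```

At this distance the scale is $\simeq2.4$ kpc arcsec$^{-1}$, which reproduces all the sizes quoted for the lobes, wings, and jets.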
In order to compare the radio power of A3670 to that of classical radio galaxies and other XRGs, we measured the 1.4 GHz flux density $S_{1.4}=374\pm19$ mJy and calculated the corresponding radio power as: \begin{equation} P_{1.4}=4 \pi D_{\rm L}^{2}S_{1.4}(1+z)^{\alpha-1}=(1.7\pm0.1)\cdot10^{25} \; {\rm W\cdot Hz^{-1}} \label{power} \end{equation} where we used the mean spectral index $\alpha=0.9\pm0.1$ between 1.7 and 9 GHz. The radio power of A3670 is consistent with the typical radio power of XRGs, intermediate between that of FRIs and FRIIs \citep{dennettthorpe}. \subsection{Spectral indices} \begin{table} \caption[]{{\ttfamily CLEAN} parameters adopted to obtain the multifrequency maps to be used as input for the spectral index maps.} \label{spixtab} $$ \begin{tabular}{cccc} \hline \noalign{\smallskip} Map 1 & Map 2 & {\ttfamily UVRANGE} & {\ttfamily RESTORINGBEAM} \\ (GHz) & (GHz) & (k$\lambda$) & (arcsec $\times$ arcsec) \\ \noalign{\smallskip} \hline \noalign{\smallskip} 5.5 & 9 & 2-91 & 3.0 $\times$ 2.0 \\ 1.7 & 6 & 1-23 & 12.0 $\times$ 7.0 \\ 1.7 & 9 & 2-23 & 9.5 $\times$ 6.5 \\ \noalign{\smallskip} \hline \end{tabular} $$ \end{table} \begin{figure*} \centering \includegraphics[height=6.8cm,width=7.8cm]{spixXC1NEW.png} \includegraphics[height=6.8cm,width=7.8cm]{errspixXC1.png} \caption{Spectral index map between 5.5 and 9 GHz at resolution $3''\times2''$ ({\it left}) and associated error map ({\it right}). Contour levels are those of the lower frequency radio map. The core has an inverted spectral index $\alpha\simeq-0.5$. Jets have $0.5\le\alpha\le1$. Primary lobes have $\alpha\simeq1.5$. The inner regions of the wings have $1.5\le\alpha\le2.5$.} \label{spixXC1_noL}% \end{figure*} \begin{figure*} \centering \includegraphics[height=6.8cm,width=7.8cm]{spixC2LNEW.png} \includegraphics[height=6.8cm,width=7.8cm]{errspixC2L.png} \caption{Spectral index map between 1.7 and 6 GHz at resolution $12''\times7''$ ({\it left}) and associated error map ({\it right}). 
Contour levels are those of the lower frequency radio map. Primary lobes have $0.5\le\alpha\le0.8$. Wings have $1\le\alpha\le1.6$.} \label{spixC2L}% \end{figure*} \begin{figure*} \centering \includegraphics[height=6.8cm,width=7.8cm]{spixXLNEW.png} \includegraphics[height=6.8cm,width=7.8cm]{errspixXL_XC1L.png} \caption{Spectral index map between 1.7 and 9 GHz at resolution $9.5''\times6.5''$ ({\it left}) and associated error map ({\it right}). Contour levels are those of the lower frequency radio map. Near the core $\alpha\le0.6$. The jets have $0.7\le\alpha\le0.9$. The inner parts of the wings have $1.0\le\alpha\le1.6$.} \label{spixXL}% \end{figure*} Radio maps can be combined to obtain spectral index maps. For all the considered frequencies, we produced new maps with {\ttfamily weighting=UNIFORM} and the same uv range, dimensions, and clean beam. Table \ref{spixtab} summarises the adopted parameters. In Fig. \ref{spixXC1_noL} we report the spectral index map between 5.5 and 9 GHz. This high resolution map ($3''\times2''$) shows the resolved core and jets. The core exhibits an inverted spectral index of $\alpha\simeq-0.5$, probably due to self-absorption effects. The jets show a spectral index of $0.5\le\alpha\le1$. Although the spectral index distribution is not homogeneous, we notice a steepening along the east-west direction. In the primary lobes the spectral index is $\alpha\simeq1.5$, while in the inner regions of the wings it is $1.5\le\alpha\le2.5$. In Fig. \ref{spixC2L} we report the spectral index map between 1.7 and 6 GHz. The resolution is low ($12''\times7''$), but it allows us to study the outer parts of the wings. The jets are not resolved, and in the primary lobes $0.5\le\alpha\le0.8$. The wings show a steeper spectral index than that of the lobes, with values of $1\le\alpha\le1.6$. In Fig. \ref{spixXL} we report the map between 1.7 and 9 GHz, which clearly shows the index steepening from the centre towards the wings. 
The region near the core shows a flat spectral index of $\alpha\le0.6$, while the jets have $0.7\le\alpha\le0.9$ and the outer regions $1.0\le\alpha\le1.6$. \subsection{Radiative ages} \begin{figure*} \centering \includegraphics[height=8cm,width=8cm]{tribble.png} \includegraphics[height=8cm,width=8cm]{tribble_error.png} \caption{Radiative age ({\it left}) and associated error ({\it right}) maps obtained from the fit of a Tribble model with $B=1.8$ $\mu$G and $\Gamma=0.55$. The radiative age increases from the centre to the outer regions: the jets are $10\le t_{\rm rad}\le20$ Myr old, the primary lobes are $20\le t_{\rm rad}\le30$ Myr old, and the inner parts of the wings are $30\le t_{\rm rad}\le40$ Myr old. Typical errors are $<5$ Myr.} \label{agemap}% \end{figure*} \begin{figure*} \centering \includegraphics[height=6cm,width=6.5cm]{spixC2LcampionataNSOE.png} \smallskip \smallskip \smallskip \includegraphics[height=6cm,width=6.5cm]{ageC2LcampionataNSOE.png} \smallskip \smallskip \includegraphics[height=5.5cm,width=17.5cm]{spixNSOE.png} \smallskip \smallskip \includegraphics[height=5.5cm,width=17.5cm]{senzafanti.png} \caption{{\it Top}: Sampling of the spectral index map between 1.7 and 6 GHz ({\it left}) and age map ({\it right}) with beam-size ($12''\times7''$) elliptical regions along the lobes and wings. The region '0' indicates the centre of the source. {\it Middle}: Spectral index profile along the wings ({\it left}) and primary lobes ({\it right}) derived from the sampling above. The index $\alpha$ steepens in both directions, from the centre to the outer regions. {\it Bottom}: Radiative age profile along the wings ({\it left}) and the primary lobes ({\it right}). The red points represent the analytical age (Eq. \ref{formulaage}), while the green points represent the values derived from the age map ({\ttfamily BRATS}). We note that they are consistent within $1\sigma$ errors. 
The age increases towards the outer parts of the source and we estimate a difference $\Delta t=22\pm7$ Myr between the ages of the wings and lobes.} \label{samplingspix}% \end{figure*} The radiative age maps were obtained with the Broadband Radio Astronomy ToolS software \citep[{\ttfamily BRATS}\footnote{http://www.askanastronomer.co.uk/brats/},][]{harwood,harwood2015}. {\ttfamily BRATS} computes age maps by combining radio images at different frequencies and compares them to ageing models through different statistical tests. We considered three ageing models: KP \citep{kardashev,pacholczyk}, JP \citep{jaffe} and Tribble \citep{tribble}. The KP model assumes a constant pitch angle (i.e. the angle between the electron velocity and the magnetic field vectors), while the JP model instead considers a pitch angle averaged over the radiative lifetime of an electron. Both models use a constant magnetic field. The Tribble model adopts JP-like radiative losses, but also assumes a Gaussian spatial distribution for the magnetic field. The input parameters necessary for the fit are the magnetic field $B$ (which is the average value of the Gaussian distribution in the case of the Tribble model) and the injection index $\Gamma$ (i.e. the initial spectral index). We used the 1.7, 5.5, and 9 GHz maps as input and we set $B=1.8$ $\mu$G (fixed as the equipartition magnetic field that we estimated for A3670) and $\Gamma=0.55$ (calculated by the {\ttfamily BRATS findinject} task). None of the models was rejected at the $68\%$ confidence level. The KP and JP models gave the highest $\chi_{\rm red}^2=1.44$ and the best $\chi_{\rm red}^2=1.21$, respectively, while the Tribble one gave an intermediate $\chi_{\rm red}^2=1.33$. The Tribble model describes a more general case than the KP and JP ones, because a varying pitch angle and magnetic field appear more plausible than constant values in every region of the source. 
Thus, we consider the Tribble model as our best fit; the corresponding radiative age map is shown in Fig. \ref{agemap}. The radiative age of the jets is $10\le t_{\rm rad}\le20$ Myr, while the primary lobes are $20\le t_{\rm rad}\le30$ Myr old. The inner parts of the wings are $30\le t_{\rm rad}\le40$ Myr old. These results confirm the radial ageing along the direction of the wings and show that the lobes are younger than the wings. The radiative age map in Fig. \ref{agemap} does not allow us to evaluate the outer parts of the wings at distances $>50$ kpc from the centre. However, as a first approximation, it is possible to use the following analytical expression (derived in Appendix A under simple assumptions) for the radiative age of a source observed at two frequencies as a function of the injection and spectral indices $\Gamma$ and $\alpha$: \begin{equation} t_{\rm rad}=A\frac{\sqrt{B}}{\left( B^2+B^2_{\rm CMB}\right)\sqrt{1+z}}\sqrt{\frac{\left(\alpha-\Gamma\right)\ln\left(\frac{\nu_1}{\nu_2}\right)}{\nu_1-\nu_2}} \; {\rm Myr} \label{formulaage} \end{equation} where $B$ and $B_{\rm CMB}=3.25(1+z)^2$ (the equivalent Cosmic Microwave Background magnetic field) are expressed in $\mu$G, while the observing frequencies $\nu_1$ and $\nu_2$ are expressed in GHz. We adopt the constant $A=1590$ \citep[e.g.][]{slee,murgia}. In order to obtain a local spectral index in the outer regions, we sampled the spectral index map between 1.7 and 6 GHz with beam-size ($12''\times7''$) elliptical regions, in both north-south and east-west directions, following the shape of the jets and the wings (Fig. \ref{samplingspix}, top left). The corresponding spectral index profiles (Fig. \ref{samplingspix}, middle) show that $\alpha$ steepens from region '0' (the centre of the source) to the outer regions of the wings and the lobes. In Eq. \ref{formulaage} we assumed $\Gamma=0.55$ and the local $\alpha$ from the sampling above (Fig. \ref{samplingspix}). 
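For reference, Eq. \ref{formulaage} is straightforward to evaluate numerically. A minimal sketch, using $B=1.8$ $\mu$G and $\Gamma=0.55$ from the text; the redshift value below is an illustrative assumption, as $z$ is not quoted in this section:

```python
import math

def radiative_age_myr(b_ug, z, alpha, gamma, nu1_ghz, nu2_ghz, a_const=1590.0):
    """Analytical radiative age (Myr) of Eq. (formulaage):
    t = A*sqrt(B) / ((B^2 + B_CMB^2)*sqrt(1+z))
        * sqrt((alpha - Gamma) * ln(nu1/nu2) / (nu1 - nu2)),
    with B and B_CMB in microgauss and frequencies in GHz."""
    b_cmb = 3.25 * (1.0 + z) ** 2  # equivalent CMB magnetic field (microgauss)
    prefactor = a_const * math.sqrt(b_ug) / ((b_ug**2 + b_cmb**2) * math.sqrt(1.0 + z))
    # for nu1 < nu2, both ln(nu1/nu2) and (nu1 - nu2) are negative,
    # so the ratio under the square root stays positive
    ratio = (alpha - gamma) * math.log(nu1_ghz / nu2_ghz) / (nu1_ghz - nu2_ghz)
    return prefactor * math.sqrt(ratio)

# e.g. a steep wing region (alpha = 1.3) between 1.7 and 6 GHz,
# with an assumed z = 0.14: gives a few tens of Myr
t_wing = radiative_age_myr(1.8, 0.14, 1.3, 0.55, 1.7, 6.0)
```

As expected from the equation, the age vanishes for $\alpha\rightarrow\Gamma$ and grows monotonically with the steepening of the local spectral index.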
As a first check, we verified that the estimated local age is consistent with that of the map in Fig. \ref{agemap}, by sampling it with the same regions (Fig. \ref{samplingspix}, top right). Then we produced the radiative age profiles shown in Fig. \ref{samplingspix}, bottom. The red points represent the analytical age from Eq. \ref{formulaage}, while the green points represent the sampled age of the map (from {\ttfamily BRATS}). We note that they are consistent within $1\sigma$ errors. We observe an increasing ageing towards the outer parts of the source. In particular, the estimated ages of regions '-9' and '-1' are $t_{-9}=49\pm7$ Myr and $t_{-1}=27\pm2$ Myr. Therefore we conclude that the wings are $\Delta t=22\pm7$ Myr older than the lobes. We caution that equipartition conditions are ruled out by various inverse Compton observations, with the equipartition magnetic field $B_{\rm eq}$ likely being an overestimate of the true field strength $B_{\rm true}$ \citep[e.g.][]{ineson}. Owing to this, the spectral ages obtained above should be considered as lower limits for the true radiative ages. However, in the case of A3670 the inequality $B_{\rm true} < B_{\rm eq} < B_{\rm CMB}$ holds, thus the inverse Compton losses dominate anyway and the true ages are not expected to be significantly different from those we estimate. \subsection{The radio galaxy and its host properties} \label{radio-ottico} \begin{figure} \centering \includegraphics[height=6.5cm,width=7cm]{dssNEW.png} \smallskip \includegraphics[height=6.5cm,width=7cm]{positionangleNEW.png} \caption{{\it Top}: A3670 optical image (R-band, DSS) with overlaid 9 GHz radio contours. The primary and secondary optical nuclei are indicated with 'A' and 'B', respectively. No radio emission associated with the secondary nucleus is detected. {\it Bottom}: A3670 optical image (R-band, DSS) with overlaid 1.7 GHz radio contours. The white lines indicate the directions of the optical axes. 
The major axis is aligned with the primary lobes, while the minor one is nearly aligned with the wings.} \label{ottico}% \end{figure} In order to compare the radio properties of A3670 to the optical properties of its host, in Fig. \ref{ottico} we overlay the radio contours at 1.7 GHz and 9 GHz on the R-band Digitized Sky Survey (DSS\footnote{http://archive.eso.org/dss/dss}) image of the host galaxy. As already mentioned, A3670 is a dumbbell galaxy. In the top panel of Fig. \ref{ottico}, the secondary optically fainter nucleus 'B' does not show radio emission, and only the primary one 'A' is currently active. This result is in agreement with \citet{mchardy}, who found that only the brightest nucleus of a multiple optical system is usually active in the radio band. The ellipticity and position angle of the host galaxy are $\varepsilon=0.28$ and $PA=24^o$ (measured from north to east). The cross in the bottom panel of Fig. \ref{ottico} indicates the direction of its major and minor optical axes. The major axis is aligned with the radio lobes, while the minor axis is nearly aligned with the wings. These results are consistent with \citet{capetti}, who found that XRG host galaxies have high ellipticities, and with \citet{gillone}, who reports radio-optical alignments similar to the ones we observe in A3670. The SMBH masses of XRGs are found to be statistically higher than those of classical radio galaxies \citep{mezcua}. Several previous works \citep{kormendy,magorrian,ferraresemerritt} demonstrated that a strong correlation exists between the host galaxy and its central SMBH. We adopted these correlations to estimate the SMBH mass of A3670. By using the apparent magnitudes, we calculated the absolute magnitudes $M_{\rm B}=-21.75$ and $M_{\rm K}=-26.33$. 
Then we used the following mass-magnitude relations \citep{graham}: \begin{equation} \log\left({\frac{M_{\rm BH}}{M_\odot}}\right)=-0.40^{+0.05}_{-0.05} \left(M_{\rm B}+19.50\right)+8.27^{+0.08}_{-0.08}=9.17^{+0.03}_{-0.03} \label{massaB} \end{equation} \begin{equation} \log\left({\frac{M_{\rm BH}}{M_\odot}}\right)=-0.33^{+0.09}_{-0.09} \left(M_{\rm K}+24\right)+8.33^{+0.15}_{-0.15}=9.10^{+0.06}_{-0.06} \label{massaK} \end{equation} Eqs. \ref{massaB} and \ref{massaK} both give a SMBH mass of $M_{\rm BH}\sim10^9 \; M_{\odot}$, which is consistent with the results from the statistical analysis of \citet{mezcua}. \section{Discussion} In this section we briefly present the theoretical models proposed to explain the origin of XRGs and we discuss them as applied to the specific case of A3670, in order to determine the most plausible one. \subsection{Slow precession model} In some cases, radio sources associated with a dumbbell galaxy exhibit primary and secondary lobes like typical XRGs, but their wings are offset with respect to the centre, and therefore these objects are classified as Z-shaped radio galaxies \citep{gopal2003}. This morphology could be induced by a slow precession of the jets, due to the tidal interaction with a companion galaxy \citep{wirth,dennettthorpe}. A precession caused by the interaction with the secondary optical nucleus may be responsible for the S-shape of the jets in A3670. However, we do not observe any offset in the wings; therefore, it is unlikely that this mechanism could have produced the wings, given the different morphologies of the X-shaped and Z-shaped sources. \subsection{Reorientation models} In reorientation models, the spin of an active SMBH may abruptly change its direction ('spin-flip'). As a consequence, the jets are reoriented and the primary lobes evolve along the new direction, while the wings are fossil emission from previous jets. 
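Returning briefly to the SMBH mass estimate: the mass--magnitude relations of Eqs. \ref{massaB} and \ref{massaK} are linear in the absolute magnitude and can be verified with a few lines. A sketch using the central values of the fit coefficients; the helper function is ours, not part of any published code:

```python
def log_mbh(abs_mag, slope, pivot_mag, intercept):
    """log10(M_BH / M_sun) from a linear mass-magnitude relation:
    log M = slope * (abs_mag - pivot_mag) + intercept."""
    return slope * (abs_mag - pivot_mag) + intercept

# B-band relation (Eq. massaB): log M = -0.40*(M_B + 19.50) + 8.27
log_m_b = log_mbh(-21.75, -0.40, -19.50, 8.27)  # -> 9.17
# K-band relation (Eq. massaK): log M = -0.33*(M_K + 24) + 8.33
log_m_k = log_mbh(-26.33, -0.33, -24.0, 8.33)   # -> ~9.10
```

Both bands give $\log(M_{\rm BH}/M_\odot)\simeq9.1$--$9.2$, i.e. $M_{\rm BH}\sim10^9\;M_\odot$, as quoted in the text.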
This phenomenon can occur after the coalescence with another SMBH \citep{merrittekers} or after the interaction between a SMBH (or a binary black hole system) and unstable regions of the accretion disk \citep{rees,dennettthorpe,liu}. These models can explain why the wings never host jets and can also reproduce XRG morphologies with very extended wings. Moreover, they are consistent with a steep spectral index in the wings and with the high SMBH mass, which is caused by the coalescence or the presence of a non-resolved binary system. However, these models cannot explain the high ellipticity of the host galaxy and the observed radio-optical alignments \citep[e.g.][]{hodgeskluck2010,gopal}. A3670 exhibits extended wings and a high SMBH mass that would be consistent with reorientation models. Nevertheless, they cannot account for the radio-optical alignments we find. The majority of XRGs have an FRII morphology, and FRI-type XRGs are rarer \citep{saripalli}. It is suggested that this evidence may be due to a different interaction between the SMBH and the accretion disk in the two classes of radio galaxies \citep{liu}. In fact, FRIIs typically have a radiatively efficient accretion disk, while FRIs have a radiatively inefficient one \citep{ghisellini}, which weakly interacts with the SMBH. This result makes the spin reorientation less likely and may well explain the lower number of FRI-type XRGs \citep{liu}. In the case of the XRG in A3670, which is an FRI, it appears implausible that a disk-SMBH interaction could have reoriented the jets by $\simeq90^o$, but we cannot exclude that a past coalescence may be responsible. \subsection{Hydrodynamical models} Hydrodynamical models consider FRII-type jets aligned with the major axis of a high-ellipticity galaxy, so that the ambient gas pressure is stronger along the major axis than along the minor axis. 
The backflow plasma coming from the hotspots is therefore redirected towards the minor axis, where the minimum resistance of the gas allows the formation of the wings. In the buoyant backflow model \citep{leahywilliams,kraft} the wing plasma is driven by the buoyancy force and evolves at subsonic speed. In a variant of this model \citep{capetti}, strong backflows may form a cocoon, which becomes over-pressured with respect to the surrounding gas and ejects plasma outflows at supersonic speed along the steepest pressure gradient (i.e. the minor axis), thus producing more extended wings. Three-dimensional numerical simulations suggest a supersonic origin and a subsonic evolution of the wings \citep{hodgeskluck2011}. Hydrodynamical scenarios are supported by the direct observation of the high ellipticity of the host and the multiwavelength alignments in some well-studied XRGs \citep[e.g.][]{kraft}. The ellipticity and the radio-optical alignments in A3670 are consistent with this evidence. A steep spectral index in the wings would also be explained, because the wing plasma travels a longer distance than that in the lobes. However, hydrodynamical models cannot reproduce very extended wings, such as those of A3670, and do not account for the high mass of its SMBH. Moreover, they usually require an FRII jet with hotspots in order to produce strong backflows driving the formation of the wings at subsonic or supersonic speed \citep{capetti,saripalli,hodgeskluck2011}. \citet{saripalli} consider the possibility of a jet evolution from FRII to FRI, in which the wings would have been produced by a similar hydrodynamical process during a past phase of activity, when the jets still had hotspots. The intermediate radio power of XRGs would support this evolution. Thus, we cannot exclude these scenarios, but they can hardly explain the extended wings in A3670. 
\subsection{Double AGN model} In the double AGN model, the lobes and the wings are independently produced by two active SMBHs in a non-resolved binary system \citep{lalrao2007,lalrao2019}. This scenario can explain the XRGs with both steep and flat spectral indices in the wings. The estimated high mass of the XRG black hole is consistent with this model, because it would be the sum of the masses of two non-resolved SMBHs. Nevertheless, given the observed low occurrence of dual AGN \citep{burke}, it seems unlikely that the two SMBHs are active at the same time. Furthermore, this model cannot explain why the jets are never oriented along the wings and does not account for the alignments with the host galaxy. On the other hand, we cannot exclude that the primary optical nucleus of A3670 hosts a non-resolved SMBH binary system. It is possible that one of the two SMBHs generated the wings in a previous phase of activity and then switched off. After a time of $t\simeq20$ Myr (i.e. the estimated difference between the ages of the wings and the lobes), the other one switched on and produced the lobes. \subsection{Jet-stellar shell interaction model} \begin{figure} \centering \includegraphics[height=6.5cm,width=7cm]{uniform.png} \caption{A3670 9 GHz map obtained with the {\ttfamily UNIFORM} weighting parameter (resolution is $2.0''\times1.2''$, RMS noise is 0.012 mJy$\cdot$ beam$^{-1}$; contour levels are $30\sigma$, $35\sigma$, $40\sigma$, $45\sigma$). The northern jet is first deflected at $\simeq10$ kpc from the centre of the galaxy. } \label{uniform}% \end{figure} According to the jet-stellar shell interaction model, the gas present in a stellar shell deflects the radio jets, thus producing the wings \citep{gopal}. This scenario predicts that a recent merger with a gas-rich disk galaxy has both activated the SMBH and produced a system of stellar shells. 
Stellar shells are rotating arc-shaped structures that have been found in $\sim10\%$ of local elliptical galaxies, roughly aligned with their optical major axis \citep{malin}. In particular, the interaction between the jets in \object{Centaurus A} and its shells at kpc scales \citep{gopalsaripalli} suggests that a similar phenomenon can occur in XRGs. For instance, in the shells of Centaurus A both neutral and molecular hydrogen are found, with an estimated mass of $M_{\rm H}\simeq4\cdot10^7 \; M_\odot$ and an average density of $n_{\rm H}\simeq4\cdot10^{-2}$ cm$^{-3}$ \citep{gopal2003}. Since the shells are aligned with the host major axis, the interaction can take place exclusively along this direction and the jet plasma is deflected towards the minor axis. In this model, the size of the wings depends on the duration of the interaction between the jet and the shells, which is determined by physical parameters such as density and velocity; therefore, very extended structures can be produced. Moreover, this scenario can account for high SMBH masses, because it is plausible to assume that a binary black hole system has formed or a coalescence has occurred, as a consequence of the galaxy merger. Although this model can reproduce the observed XRG features, stellar shells have so far been confirmed in only two XRGs: \object{3C 403} \citep{almeida} and \object{4C+00.58} \citep{hodgeskluck2010b}. Observations and numerical simulations suggest that BCGs are the result of several galaxy mergers \citep[e.g.][]{delucia,rasmussen}. The observed high SMBH masses and extended stellar halos in BCGs strongly support this formation scenario. A3670 exhibits the common properties of the dominant cluster galaxies and therefore a system of shells may have been produced after one of these mergers, as found in some other BCGs \citep{kluge}. In Fig. \ref{uniform} we report the radio contours of the high resolution ($2.0''\times1.2''$) 9 GHz map obtained with the {\ttfamily UNIFORM} weighting. 
We notice that the northern jet begins to be deflected at $\simeq 10$ kpc from the galaxy centre, on scales similar to those at which shells are typically detected. Moreover, the rotational motion of the shells could explain the S-shape of the jets in A3670. \section{Summary and conclusions} In this work we investigated the properties and origin of the candidate XRG in the A3670 cluster. To this aim, we processed and analysed new JVLA radio data at 1.5, 5.5, 6, and 9 GHz. Here we summarise our results. We obtained radio maps at different frequencies in order to accurately study the target. Given the absence of hotspots in the primary lobes, we classify A3670 as an FRI-type XRG. We measure a 1.4 GHz radio power of $P_{1.4}=(1.7\pm0.1)\cdot10^{25} \; {\rm W\cdot Hz^{-1} }$. The spectral index maps show a steepening from the centre to the outer regions of the source. The radiative age map suggests a progressive ageing from the centre, but it does not allow us to cover the whole source. Therefore, after checking its consistency with the age map, we used an approximate analytical expression to derive the radiative age as a function of the spectral index, which was measured out to the external regions. We estimate that the wings are $\Delta t=22\pm7$ Myr older than the lobes. Comparison between radio and optical data allowed us to verify some properties typically observed in XRGs. From literature data, we find a high ellipticity ($\varepsilon=0.28$) of the radio-loud component of the dumbbell system ('A' in Fig. \ref{ottico}). We report the alignment of the major and minor axes of the 'A' component with the radio lobes and wings, respectively. We estimate a high SMBH mass of $M_{\rm BH}\sim10^9\;M_{\odot}$. Finally, we discussed the theoretical models proposed to explain the origin of XRGs. 
The slow precession, reorientation, hydrodynamical, and double AGN models cannot completely explain the observed properties of A3670, whereas the jet-stellar shell interaction model can account for them. Future multiwavelength data would allow us to better constrain the formation and evolution mechanism of the XRG in A3670; in particular, further optical data are needed to confirm the presence of shells in the BCG of A3670. \begin{acknowledgements} We thank the referee for the comments and suggestions, which have significantly improved the presentation of the paper. LB thanks A. Ignesti for his useful suggestions. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We acknowledge the usage of the HyperLeda database (http://leda.univ-lyon1.fr). Based on photographic data obtained using The UK Schmidt Telescope. The UK Schmidt Telescope was operated by the Royal Observatory Edinburgh, with funding from the UK Science and Engineering Research Council, until 1988 June, and thereafter by the Anglo-Australian Observatory. Original plate material is copyright (c) of the Royal Observatory Edinburgh and the Anglo-Australian Observatory. The plates were processed into the present compressed digital form with their permission. The Digitized Sky Survey was produced at the Space Telescope Science Institute under US Government grant NAG W-2166. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} There has been a rekindling of interest in low surface brightness (LSB) galaxies with the recent discovery of ultra-diffuse galaxies (UDGs) in the Coma cluster. UDGs are extended LSB galaxies with effective radii $R_{\rm e} \gtrsim 1.5$~kpc, central surface brightnesses $\mu(g,0) \gtrsim 24~\mbox{mag arcsec$^{-2}$}$, and exponential light profiles, i.e. \Sersic/ index, $n\unsim1$. The first $47$ UDGs were catalogued by \citet{vanDokkum_2015}, mostly outside the cluster core, using the remarkable Dragonfly Telephoto Array \citep{Abraham_2014}. Several groups have since reported the discovery of more UDGs in the Coma cluster \citep[e.g.,][]{Koda_2015, Yagi_2016,Ruiz_2018, Zaritsky_2019, Chilingarian_2019}. To date, $\unsim30$ Coma cluster UDGs have been spectroscopically confirmed as cluster members \citep{Kadowaki_2017, Alabi_2018, Ruiz_2018, Chilingarian_2019}. While the debate about the true origin and nature of these enigmatic galaxies still persists in the literature, we note that most of the Coma cluster UDGs still lack colour information: a fundamental diagnostic in understanding the formation and evolution of galaxies \citep{Renzini_2006}. This omission may be directly linked to the fact that most of the UDG discoveries were made from analysis of single-band photometry and that faint photometry is challenging. Only $52$ UDGs out of the $854$ Coma cluster LSB galaxies in the catalogue of \citet{Yagi_2016} have $B-R$ colour measurements, and all within a $\unsim0.7$~deg$^2$ area\footnote{This is only $\unsim10$~per cent of the area defined by the projected virial radius of the Coma cluster, i.e. $\unsim2.9$~Mpc \citep{Kubo_2007}}. The faint LSB galaxy catalogue of \citet{Adami_2006} with $B-R$ colour covers a similarly small central $\unsim0.6$~deg$^2$ area within the Coma cluster. 
The recently published SMUDGes catalogue \citep{Zaritsky_2019}, which extends beyond the virial radius of the Coma cluster, has colour measurements available for only $43$ UDGs from the \citet{Yagi_2016} catalogue. The small sample size and limited radial coverage of Coma cluster UDGs with known colours therefore make a systematic photometric study targeted at UDGs an urgent necessity. While the environmental variation of galaxy colours with projected distance from the centre of the Coma cluster is well established in the literature for bright galaxies \citep{Terlevich_2001, Mahajan_2011} and dwarf galaxies \citep{Secker_1997}, the situation for UDGs, and in general, LSB galaxies within the Coma cluster, remains unclear. For example, \citet{Adami_2006} did not find any significant variation in colour with clustercentric radius in their faint LSB galaxy sample, perhaps due to the radially limited nature of their data. \citet{Terlevich_2001} attributed the blueing of mean galaxy colours with projected distance from the centre of the Coma cluster to an age effect, i.e., the cluster core is dominated by redder galaxies with older stellar ages, while the outskirts region is dominated by bluer galaxies with younger stellar ages. This is consistent with results from the spectroscopic studies of \citet{Smith_2009} and \citet{Smith_2011}, although \citet{Carter_2002} claimed that the environmental colour trends could also be a metallicity effect. \citet{Smith_2011} additionally showed that low mass galaxies have steeper radial age trends compared to their more massive counterparts. There are indications that UDGs (and by extension LSB galaxies) may have clustercentric trends similar to those of dwarf galaxies \citep{Roman_2017, Alabi_2018, Ferre_2018, Mancera_2019, Zaritsky_2019, Chilingarian_2019}, but the true behaviour within the Coma cluster is still unknown due to the paucity of UDGs with optical colour data. 
Furthermore, while the vast majority of cluster UDGs are quiescent and lie on the red-sequence of the colour--magnitude relation \citep{Koda_2015}, a few UDGs with significant deviations from the red-sequence --mostly bluewards-- have been reported \citep{Roman_2017, Chilingarian_2019, Zaritsky_2019}, mostly in low density environments including the cluster outskirts \citep{Alabi_2018}. \citet{Adami_2006} previously identified faint LSB galaxies in the Coma cluster with colours redder or bluer than the red-sequence galaxies. They interpreted these faint LSB galaxies as recent cluster infalls from less--dense environments where they have been ``pre-processed'' to varying degrees \citep{Zabludoff_1998}. Identifying galaxies that are bluer or redder relative to the red-sequence region from coherent photometry is therefore useful in understanding the origin of present-day cluster LSB galaxies. In this work, we present a catalogue of galaxies within $\unsim4$~deg$^2$ of the Coma cluster. We probe the same region of the Coma cluster as in the $R$-band LSB galaxy studies of \citet{Koda_2015} and \citet{Yagi_2016}, hitherto the most comprehensive works on Coma cluster LSB galaxies, employing an additional $V$-band filter in order to obtain colours and securely discriminate against contamination from background galaxies. We present a description of our imaging data in Section~\ref{sec:data}. We give details of our data analysis in Sections~\ref{sec:obj_detect} and \ref{sec:gal_model} including galaxy detection, galaxy modelling with \texttt{GALFIT}, and removal of confirmed contaminants from our catalogue. We present our Coma cluster colour--magnitude diagram in Section~\ref{sec:cmd} and discuss the residual contamination in our final catalogue. Finally, we present our final catalogue and summarize our results in Section~\ref{sec:summ}. 
Throughout this work, we adopt a distance of $100$~Mpc, a redshift of $0.023$, a distance modulus of $(m-M)_{\rm 0} = 35$ \citep{Carter_2008}, a virial radius of $\unsim2.9$~Mpc \citep{Kubo_2007}, a position angle of $71.5$~deg \citep{Plionis_1994}, and central co-ordinates RA: 12:59:48.75 and Dec: +27:58:50.9 (J2000) for the Coma cluster. We also adopt the following cosmology: $\rm \Omega_m=0.3$, $\rm \Omega_{\Lambda}=0.7$, and $\rm H_{0}=70\ \mbox{km s$^{-1}$}\ Mpc^{-1}$. \section{Data} \label{sec:data} We analyse the Coma cluster Subaru/Suprime-Cam \citep{Miyazaki_2002} $V$ and $R$ band imaging data previously reduced and published in the weak-lensing study of \citet{Okabe_2014}. The imaging is made up of a mosaic of $18$ individual pointings with $2\hbox{$^\prime$}$ overlap between adjacent fields and a total sky coverage of $\unsim4$~deg$^2$. Details of the exposure times and seeing per pointing can be found in \citet{Okabe_2014}, alongside a summary of the data reduction process. Average exposure times and seeing FWHM are $14$~minutes and $\unsim1\hbox{$^{\prime\prime}$}$ in the $V$ band, and $26$~minutes and $\unsim0.7\hbox{$^{\prime\prime}$}$ in the $R$ band. We note that our $R$-band imaging is slightly different from that used by \hypertarget{Y16}{\citet[][hereafter \hyperlink{Y16}{Y16}]{Yagi_2016}}, who included additional imaging data with worse seeing in $4$ pointings in their analysis. Figure~\ref{fig:outline} shows the mosaic of the $18$ pointings and the overlapping regions between them. \begin{figure} \includegraphics[width=0.48\textwidth]{figures/COMA_IMG.pdf}\hspace{0.01\textwidth}\\ \caption{\label{fig:outline} Mosaic of the $18$ pointings used to observe the Coma cluster. The final mosaic covers a $\unsim4.2$~deg$^2$ region of the Coma cluster. Galaxy IDs in the final catalogue are prefixed with the names of their originating pointing as shown in the plot.
Multiple galaxy detections from the overlapping regions between adjacent pointings are used later in the text to quantify reliable uncertainties on the modelled galaxy structural parameters. As a guide to the physical scale, we show a circle with a projected radius of $1$~Mpc within which we have marked the positions of the three cD galaxies: NGC~4889 (green square), NGC~4874 (red circle), and NGC~4839 (blue pentagon).} \end{figure} We used the publicly available code of \citet{Kelly_2014} to verify the zero-point magnitudes originally used for photometric calibration by \citet{Okabe_2014}, based on bright stars in common with the Sloan Digital Sky Survey (SDSS) catalogue \citep{Ahn_2012}. Our zero-points are similar to those obtained by \citet{Okabe_2014}. We apply Galactic extinction corrections on a per-pointing basis to the magnitude and surface brightness measurements, using values from the dust extinction maps of \citet{Schlafly_2011}. This is because the extinction variation from object to object within the same pointing is very small compared to the estimated photometric uncertainties (see Section~\ref{sec:est_unc} for more details). The applied corrections vary from $0.021$--$0.033$~mag and $0.017$--$0.026$~mag in the $V$ and $R$ bands, respectively. We also applied ``K-corrections'' of $0.03$ and $0.02$~mag to the $V$ and $R$-band photometry, respectively, determined using the K-corrections calculator \citep{Chilingarian_2010, Chilingarian_2012}. All magnitudes hereafter are in the AB system, and are extinction and K-corrected. \section{Object Detection} \label{sec:obj_detect} We perform initial object detection on both $V$ and $R$-band images with \texttt{SExtractor} \citep{Bertin_1996}, adjusting the configuration criteria to maximize the detection of galaxies from the Coma cluster low surface brightness catalogue of \protect\hyperlink{Y16}{Y16}.
We run \texttt{SExtractor} in the ``dual-image'' mode, using the $R$-band imaging for object detection, and use the following criteria, similar to those used in \protect\hyperlink{Y16}{Y16}, to identify galaxy candidates in our \texttt{SExtractor} catalogue: \begin{itemize} \item non-zero \texttt{SExtractor} Petrosian RADIUS, \item \texttt{SExtractor} FWHM $\geq 5$~pixels ($\sim 0.5$~kpc at the distance of the Coma cluster), \item \texttt{SExtractor} FLAG parameter $< 4$ (objects with defects such as saturated pixels or truncated isophotes are excluded), \item \texttt{SExtractor} CLASS\_STAR parameter $< 0.5$ (objects with CLASS\_STAR~$\sim1$ are foreground stars), \item $R$-band magnitude uncertainty $< 0.2$ (this corresponds to a minimum signal-to-noise ratio (SNR) of $\sim5$). \end{itemize} This initial list of criteria helps to exclude spurious detections, foreground stars, unresolved compact sources (globular clusters), and some background galaxies from our catalogue, reducing the size of our \texttt{SExtractor} source list by a factor of $\sim4$ to $\sim200,000$ objects. Next, we use the peak surface brightness--magnitude diagram as a diagnostic to further remove contaminants from our sample of galaxy candidates. As shown in Figure~\ref{fig:bkgrd_sep}, galaxies that belong to the Coma cluster tend to occupy a distinct region compared to the fainter background galaxies. We compile and show a sample of confirmed Coma cluster galaxies from the literature, which we match to our object catalogue. These galaxies range from giants to dwarfs and are selected from the spectroscopic studies of \citet{Mobasher_2001}, \citet{Edwards_2002}, \citet{Aguerri_2005}, \citet{Smith_2009}, the NASA/IPAC Extragalactic Database (NED)\footnote{http://ned.ipac.caltech.edu/} Coma cluster galaxy list, and SDSS. The compilation, however, contains only bright galaxies, i.e. $R\leq19$~mag, which have been the traditional subjects of spectroscopic studies.
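For concreteness, the selection cuts listed above can be expressed as a simple filter over catalogue rows. This is an illustrative sketch only: the dictionary keys follow standard \texttt{SExtractor} output column names, and the two example rows are hypothetical.

```python
def is_galaxy_candidate(obj):
    """Apply the SExtractor-based cuts used to build the initial
    candidate list (sketch; keys follow SExtractor output names)."""
    return (
        obj["PETRO_RADIUS"] > 0       # resolved, non-zero Petrosian radius
        and obj["FWHM_IMAGE"] >= 5    # >= 5 pixels (~0.5 kpc at Coma)
        and obj["FLAGS"] < 4          # no saturation / truncated isophotes
        and obj["CLASS_STAR"] < 0.5   # reject star-like sources
        and obj["MAGERR_AUTO"] < 0.2  # R-band mag error < 0.2 (SNR >~ 5)
    )

# Two hypothetical catalogue rows: a plausible galaxy and a star-like source
catalogue = [
    {"PETRO_RADIUS": 3.1, "FWHM_IMAGE": 6.2, "FLAGS": 0,
     "CLASS_STAR": 0.02, "MAGERR_AUTO": 0.05},
    {"PETRO_RADIUS": 1.0, "FWHM_IMAGE": 2.5, "FLAGS": 0,
     "CLASS_STAR": 0.97, "MAGERR_AUTO": 0.01},
]
candidates = [o for o in catalogue if is_galaxy_candidate(o)]
```

The conjunction of cuts mirrors the itemized list: a source is kept only if it passes every criterion.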
We extend this sample to fainter magnitudes ($R\sim24$~mag) by including the LSB galaxies from \hyperlink{Y16}{Y16}, which have been shown to be mostly cluster members in various studies \citep[e.g.][]{Kadowaki_2017, Alabi_2018, RL_2018, Chilingarian_2019}. The inclined line shows the expected peak surface brightness at various magnitudes for a galaxy with $R_{\rm e} \sim 1.2\hbox{$^{\prime\prime}$}$ (the worst FWHM seeing, in the $V$-band, equivalent to $0.6$~kpc at the distance of the Coma cluster), assuming an exponential profile, i.e. \Sersic/ index $n=1$. We therefore exclude galaxy detections smaller than this size limit from subsequent analysis, since they are most likely background sources, ultracompact dwarfs, or imaging defects such as false detections in the halos of bright stars. Likewise, we use the LSB catalogue of \protect\hyperlink{Y16}{Y16} to define the faint surface brightness limit of our catalogue. We adopt a peak surface brightness limit of $\mu_{max,R}\sim26.1$~\mbox{mag arcsec$^{-2}$}, which is equivalent to a mean effective surface brightness of $27.2$~\mbox{mag arcsec$^{-2}$}. We also exclude all galaxy candidates ($209$) with saturated central pixels, seen around $\mu_{max,R}\sim18.1$~\mbox{mag arcsec$^{-2}$}. With the application of these thresholds, the size of our catalogue is reduced to $27,437$ galaxies, which is better suited for subsequent \texttt{GALFIT} analysis, including $1,305$ galaxies detected in more than one pointing. Likewise, out of the $854$ LSB galaxies published in \protect\hyperlink{Y16}{Y16}, our catalogue contains $757$ galaxies with detections in both bands, alongside $181$ duplicate detections. We later use these duplicate detections to estimate the \textit{true} uncertainties on the structural parameters of our galaxies.
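The inclined dividing line described above can be computed analytically for an ideal (unconvolved) exponential profile: the peak surface brightness exceeds the total magnitude by $2.5\log_{10}(2\pi h^{2})$, where $h = R_{\rm e}/1.678$ is the exponential scale length in arcsec. A minimal sketch (illustrative only; seeing convolution is neglected):

```python
import math

# b_n for a Sersic n=1 (exponential) profile: R_e = b_1 * h
B1 = 1.6783469

def peak_sb_exponential(total_mag, re_arcsec):
    """Peak (central) surface brightness [mag/arcsec^2] of a pure
    exponential profile with the given total magnitude and effective
    radius, since total flux = 2*pi*I0*h^2 for I(R) = I0*exp(-R/h)."""
    h = re_arcsec / B1
    return total_mag + 2.5 * math.log10(2.0 * math.pi * h * h)

# Offset of the dividing line for Re ~ 1.2": mu_peak ~ m_R + 1.27
offset = peak_sb_exponential(0.0, 1.2)
```

Sources falling below this line at a given magnitude are more compact than the seeing-limited $R_{\rm e} \sim 1.2\hbox{$^{\prime\prime}$}$ threshold.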
\begin{figure} \includegraphics[width=0.48\textwidth]{figures/Bkgrd_R.png}\hspace{0.01\textwidth}\\ \caption{\label{fig:bkgrd_sep} Peak surface brightness--$R$-band magnitude diagram showing Coma cluster galaxy candidates (brown dots) from our \texttt{SExtractor} analysis. We also show spectroscopically confirmed Coma cluster galaxies from various sources in the literature (magenta X's). At any surface brightness, likely Coma cluster galaxies have brighter apparent magnitudes. The inclined line shows the expected peak surface brightness for a galaxy with $R_{\rm e} \sim 1.2\hbox{$^{\prime\prime}$}$ (the worst FWHM seeing in our imaging data equivalent to $\sim0.6$~kpc at the distance of Coma cluster), while the horizontal line is our faint peak surface brightness limit ($\unsim26.1$~\mbox{mag arcsec$^{-2}$}), defined using the low surface brightness catalogue of \citet[][Y16]{Yagi_2016} (red dots). Galaxy candidates with saturated central pixels around $\mu_{max,R}\sim18.1$~\mbox{mag arcsec$^{-2}$} are excluded from subsequent analysis. The magnitudes shown here and hereafter have all been corrected for Galactic extinction and are K-corrected.} \end{figure} \begin{figure} \includegraphics[width=0.48\textwidth]{figures/duplicates.png}\hspace{0.01\textwidth}\\ \caption{\label{fig:duplicates} Variation in \texttt{GALFIT} $V$-band magnitude measurements in the $1,305$ galaxies with duplicate detections. We show the low surface brightness galaxies from the catalogue of \protect\hyperlink{Y16}{Y16} as red circles, while the remaining cluster galaxies are shown as smaller brown circles. 
The corresponding error-bars are the standard deviations determined in magnitude bins of width $\Delta V = 2$~mag; they increase up to a maximum of $\sim0.5$~mag at very faint magnitudes.} \end{figure} \section{Galaxy modelling with GALFIT} \label{sec:gal_model} The projected intensity profile, $I(R)$, of a galaxy can be described with a \Sersic/ function \citep{Sersic_1968,Graham_2005} of the form: \begin{equation} I(R) = I_{e}\exp\left[-b_{n}\left(\left(\frac{R}{R_e}\right)^{1/n} - 1\right)\right] \end{equation} where $I_{e}$ is the intensity at the effective radius $R_{\rm e}$, which encloses half the total galaxy luminosity, $n$ is the \Sersic/ index, also known as the profile shape parameter, and $b_{n}$ is a complicated function of $n$. Bright, centrally concentrated elliptical galaxies typically follow de Vaucouleurs profiles \citep{deV_1948, deV_1959} with $n\sim4$, while the less centrally concentrated spiral disks and LSB galaxies have exponential profiles with $n\sim1$. To obtain the structural properties of all the galaxies in our catalogue, we fit \Sersic/ functions in both $V$ and $R$ bands independently with \texttt{GALFIT}\footnote{http://users.obs.carnegiescience.edu/peng/work/galfit} \citep{Peng_2010}. We make postage-stamp cutouts for each galaxy candidate using the positions and size estimates from our initial \texttt{SExtractor} analysis as a basis. Each cutout is centred on the central pixel coordinates from \texttt{SExtractor}, with dimensions set to $10$ times $R_{\rm e}$ to allow for good sky estimation. We then identify and mask out, in an automated way, all bright sources within the frame that do not belong to the target galaxy. We jointly fit for the position, magnitude, $R_{\rm e}$, $n$, axis ratio ($q$), and position angle ($PA$) of the target galaxy, as well as the sky background value.
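For reference, the \Sersic/ profile of equation (1) can be evaluated directly. The sketch below uses a standard asymptotic approximation for $b_{n}$ (accurate to well below our photometric uncertainties for $n \gtrsim 0.5$); \texttt{GALFIT} performs the actual fitting in our analysis.

```python
import math

def b_n(n):
    """Asymptotic approximation to b_n, the constant that makes R_e
    enclose half the total luminosity (standard series expansion)."""
    return 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n) + 46.0 / (25515.0 * n**2)

def sersic_intensity(R, I_e, R_e, n):
    """Sersic surface-intensity profile I(R) of equation (1)."""
    return I_e * math.exp(-b_n(n) * ((R / R_e) ** (1.0 / n) - 1.0))
```

By construction, $I(R_{\rm e}) = I_{e}$ for any $n$, and for $n=1$ the profile reduces to an exponential with $b_{1} \approx 1.678$.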
Point-spread functions (PSFs) used in the fitting process are constructed for each pointing from a sample of bright, unsaturated stars (on average, we use $\sim50$ stars per pointing), pre-selected from Section~\ref{sec:obj_detect}. \subsection{Quality Control} \label{sec:qty_ctrl} In order to identify galaxies with reliable structural parameters, we compare the formal uncertainties on the magnitudes obtained from \texttt{GALFIT} with estimates of their systematic errors. We obtain these systematic errors from the sample of galaxies with duplicate observations and require that, for a good fit, the formal uncertainties should be less than the systematic errors \citep[][]{Haussler_2007}. This requirement implies that the uncertainties on the magnitude measurements are never dominated by pixel noise. Figure~\ref{fig:duplicates} shows how $V$-band magnitude measurements differ between duplicate detections of the same galaxies. At fainter magnitudes and in the LSB galaxies, where the SNR is significantly reduced and galaxy edges are difficult to identify, the differences between duplicate measurements are significantly increased, reaching $\unsim0.5$~mag. We use the standard deviations of these distributions, i.e., $\sigma_{\Delta V}\unsim0.16$; $\sigma_{\Delta R}\unsim0.20$, as limits in determining good and acceptable fits. This, in addition to a visual examination of the fits, reduces the size of our Coma cluster catalogue to $11,496$ galaxies. A visual examination of all the discarded galaxy candidates reveals that most of them are either spurious detections, typically with extremely large $n$ values and small $R_{\rm e}$, or faint sources in heavily crowded fields.
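A minimal sketch of this acceptance criterion follows. The band-wise limits are the duplicate-based standard deviations quoted above; the function names and the example bin are ours, for illustration only.

```python
import statistics

# Duplicate-based systematic limits quoted in the text (V and R bands)
SIGMA_SYS = {"V": 0.16, "R": 0.20}

def duplicate_sigma(delta_mags):
    """Standard deviation of magnitude differences between duplicate
    detections of the same galaxies (one magnitude bin)."""
    return statistics.pstdev(delta_mags)

def is_good_fit(formal_err, band):
    """Accept a GALFIT model when its formal magnitude uncertainty is
    below the duplicate-based systematic limit for that band."""
    return formal_err < SIGMA_SYS[band]
```

For example, a fit with a formal $V$-band error of $0.05$~mag passes, while one with a formal $R$-band error of $0.3$~mag is rejected.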
\subsection{Correction to total magnitudes} \label{sec:mag_corr} Since the galaxy--background boundary is not as well-defined in LSB galaxies compared to their HSB counterparts \citep[e.g.][]{Trujillo_2001, Adami_2006, Haussler_2007, Wittmann_2017}, we investigate how robust the total magnitudes from the \texttt{GALFIT} analysis are by also obtaining their curve-of-growth (COG) total magnitudes. We fix the position, $q$, and $PA$ to parameter values from our previous \texttt{GALFIT} analysis and estimate the total magnitudes within consecutive isophotes, starting from $0.25R_{\rm e}$ and moving outwards in steps of $3$~pixels, i.e., $0.3$~kpc, stopping when the magnitude difference between successive isophotes is $<0.02$~mag. We exclude galaxies that have nearby neighbours within a radius of $7R_{\rm e}$ from this analysis. This exercise shows that the total integrated magnitude from \texttt{GALFIT} is systematically fainter than the asymptotic magnitude from the COG analysis, with the corrections becoming significant in both $V$ and $R$ bands, i.e., $\geq 0.2$~mag, only when the mean galaxy surface brightness is fainter than $\unsim24$~\mbox{mag arcsec$^{-2}$}. We show the mean magnitude corrections as a function of mean effective surface brightness in Figure~\ref{fig:mag_corr} and fit linear functions, which we apply to the total magnitudes from the \texttt{GALFIT} analysis. \begin{align} \begin{aligned} \Delta V &= 0.037 \times \langle \mu_{\rm eff,V} \rangle - 0.67 \\ \Delta R &= 0.044 \times \langle \mu_{\rm eff,R} \rangle - 0.82 \end{aligned} \label{eq:mag_corr} \end{align} \subsection{Estimation of true uncertainties on structural parameters} \label{sec:est_unc} As already mentioned in Section~\ref{sec:qty_ctrl}, the formal errors from \texttt{GALFIT} are significantly lower than the systematic errors obtained from duplicate detections, at least for the total magnitudes.
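The linear corrections of equation~(\ref{eq:mag_corr}) can be applied as in the sketch below. The sign convention is our reading of the text: since the \texttt{GALFIT} totals are systematically fainter than the COG values, the fitted offset brightens them.

```python
# Best-fitting slopes and intercepts of the V- and R-band corrections
COEFFS = {"V": (0.037, -0.67), "R": (0.044, -0.82)}

def cog_correction(mean_sb_eff, band):
    """Offset between GALFIT total and COG asymptotic magnitudes at a
    given mean effective surface brightness (mag/arcsec^2)."""
    slope, intercept = COEFFS[band]
    return slope * mean_sb_eff + intercept

def corrected_total_mag(m_galfit, mean_sb_eff, band):
    # GALFIT totals are systematically fainter, so brighten them
    return m_galfit - cog_correction(mean_sb_eff, band)
```

At $\langle \mu_{\rm eff,V} \rangle = 24$~mag~arcsec$^{-2}$ the $V$-band correction is $\unsim0.22$~mag, consistent with the $\geq0.2$~mag threshold quoted above.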
The formal uncertainties are based entirely on Poisson pixel noise and may be unrealistically small when the adopted model does not adequately describe the galaxy light profile, or when the boundaries of galaxies are difficult to identify due to low SNR, a rapidly varying sky background, or crowding from nearby neighbours. Figure~\ref{fig:err_est} shows how the deviations between repeat measurements of model parameters for the same galaxies vary as a function of mean surface brightness in both $V$ and $R$-bands. We fit linear functions of the form \begin{equation} {\rm log}~\sigma = \alpha \times \langle \mu_{\rm eff} \rangle + \beta \label{eq:err_est} \end{equation} to the distribution of the 68th percentile of the difference of each model parameter in bins of mean surface brightness, with $\alpha$ and $\beta$ the fit coefficients. The best-fitting $\alpha$ and $\beta$ coefficients are summarized in Table~\ref{tab:coeff} and are used to compute $\sigma$ for any measured mean surface brightness. Our final adopted error estimates, shown in Table~\ref{tab:complete_tab}, are obtained by adding $\sigma$ in quadrature to the \texttt{GALFIT} formal uncertainties. \begin{figure} \includegraphics[width=0.48\textwidth]{figures/cmp_mag_COG.png}\hspace{0.01\textwidth}\\ \caption{\label{fig:mag_corr} Mean offset between total magnitudes from \texttt{GALFIT} and asymptotic magnitudes from curve-of-growth analysis as a function of mean effective surface brightness for galaxies in our sample that are relatively isolated.
The magnitude correction factor increases as galaxies become fainter, reaching a maximum of $\unsim0.4$~mag at $\unsim27$~\mbox{mag arcsec$^{-2}$}, where the galaxy--background boundaries become vague.} \end{figure} \begin{figure*} \includegraphics[width=0.96\textwidth]{figures/duplicates_V2.png}\hspace{0.01\textwidth}\\ \caption{\label{fig:err_est} Variation of the structural parameters obtained from \texttt{GALFIT} modelling of galaxies with duplicate detections. We show on the x-axis $V$ and $R$-band mean surface brightness within the effective radius (\textit{left-hand panels} and \textit{right-hand panels}, respectively) and on the y-axis, magnitude, effective radius, \Sersic/ index, and axial ratio. The dashed lines are fits to the 68th percentile of the corresponding ordinate parameter (in log) and are used to estimate the true model uncertainties as described in the text.} \end{figure*} \begin{table} \centering \begin{tabular}{@{}l c c c c} \hline \hline & \multicolumn{2}{c}{$V$} & \multicolumn{2}{c}{$R$}\\ \hline Parameter & $\alpha$ & $\beta$ & $\alpha$ & $\beta$ \\ \hline mag & $0.137$ & $-4.393$ & $0.042$ & $-2.11$ \\ $R_{\rm e}$ & $0.183$ & $-5.603$ & $0.096$ & $-3.326$ \\ \Sersic/~$n$ & $0.18$ & $-5.178$ & $0.129$ & $-3.734$ \\ $q$ & $0.248$ & $-4.318$ & $0.085$ & $-3.219$ \\ $PA$ & $0.299$ & $-4.438$ & $0.189$ & $-3.831$ \\ \hline \end{tabular} \caption{Summary of the coefficients used in eq. \ref{eq:err_est} to estimate the systematic errors unaccounted for in the \texttt{GALFIT} analysis.} \label{tab:coeff} \end{table} \subsection{Removal of spectroscopic contaminants from the Cluster catalogue} \label{sec:fin_clean} As a final step in minimizing contaminants in our final catalogue, we retrieve spectroscopically confirmed foreground stars and background galaxies (including quasi-stellar objects (QSO)) in the direction of the Coma cluster from the literature. 
This compilation comes mostly from SDSS and NED ($266$ foreground stars, $208$ QSOs, and $1042$ background galaxies), supplemented with $31$ and $20$ background galaxies from \citet{Adami_2009} and \citet{Chiboucas_2010}, respectively. We also include $2$ galaxies from the \hyperlink{Y16}{Y16} LSB catalogue known to be background galaxies \citep{Kadowaki_2017, Alabi_2018}. This sample, which we compare with our catalogue, is such that the foreground stars have $-300 \leq V_{\rm los}\ [\mbox{km s$^{-1}$}] \leq300$, the QSOs have $V_{\rm los} \geq 30000$~$\mbox{km s$^{-1}$}$, and the background galaxies have $V_{\rm los} \geq 11000$~$\mbox{km s$^{-1}$}$, all with photometric $V$-band magnitudes ranging from $14.5$ to $22$. While our catalogue contains none of the spectroscopic foreground stars, we find a QSO\footnote{This QSO is SDSS\ J125712.28+280543.3 at $z=1.29$ with $V$-band $R_{\rm e}\unsim5.4\hbox{$^{\prime\prime}$}$.} with a $V$-band magnitude of $17.4$, as well as $251$ background galaxies in our catalogue, as shown in Figure~\ref{fig:CMD}. Half of these spectroscopic background galaxies were catalogued as Coma cluster galaxy candidates in \citet{GMP_1983}. Appendix~\ref{sec:lit_cmp} contains an extensive comparison of our structural parameters with the literature after removing all the known contaminants. \begin{figure*} \includegraphics[width=0.94\textwidth]{figures/CMD.png}\hspace{0.01\textwidth}\\ \caption{\label{fig:CMD} Colour--magnitude diagram of Coma cluster galaxies (brown circles) detected and analysed in this work. Galaxies from the low surface brightness catalogue of \protect\hyperlink{Y16}{Y16}, which mostly belong to the Coma cluster, are shown as red circles. Spectroscopically confirmed background galaxies in the direction of the Coma cluster from the literature (filled squares) have been colour-coded by their redshifts. Also shown is a spectroscopically confirmed background quasi-stellar object (star marker with red edges, colour-coded by its redshift).
We overlay the red-sequence (black solid line), obtained from a linear fit to the bright, spectroscopically confirmed cluster galaxies, extrapolated to faint magnitudes. We also show the $1\sigma$ intrinsic scatter (black, dashed lines) around the red-sequence. The intrinsic scatter increases from $\unsim0.06$~mag at $V=15$~mag to $\unsim0.28$~mag for the faintest LSB galaxies. Most of the low surface brightness galaxies from \protect\hyperlink{Y16}{Y16} have colours consistent with the red-sequence and fall within the limits of the $1\sigma$ intrinsic scatter.} \includegraphics[width=0.9\textwidth]{figures/CMD_bkgrd_cleanV2.png}\hspace{0.01\textwidth} \caption{\label{fig:CMD_bkgrd_II} Histograms of the colour--magnitude distribution of galaxies in the direction of the Coma cluster (\textit{left panel}), and in a control field -- the Subaru Deep Field (\textit{middle panel}), overlaid with the red-sequence of known Coma cluster galaxies. The solid and dashed black lines in all the panels are the same as in Figure~\ref{fig:CMD}. The background contaminants are mostly faint galaxies with $V\unsim22.5$~mag and very red colours, $V-R\unsim0.7$. We summarize the number of galaxy candidates on and off the red-sequence in the left panel and the corresponding contamination levels in the middle panel. The \textit{right panel} shows the colour--magnitude distribution of the remaining $6,891$ Coma cluster galaxies after statistically correcting for the background contaminants. The colour-bar shows the number density of the galaxies.} \end{figure*} \section{Colour--magnitude diagram} \label{sec:cmd} We show the colour--magnitude diagram (CMD) of all galaxies with detections in both $V$ and $R$-bands in Figure~\ref{fig:CMD}. The $V-R$ colours shown here (and used in all subsequent analyses) are from the \texttt{SExtractor} dual-image mode analysis described in Section~\ref{sec:obj_detect}.
We measure the colours within matched apertures based on $2.5~\times$ the Kron radius \citep{Kron_1980} determined from the $R$-band imaging, rather than from the $V$ and $R$ total magnitudes. We note that these matched-aperture colours are generally $0.2$--$0.3$~mag redder than colours obtained from the total magnitudes. The black solid line, obtained from a linear fit to the bright ($13 \leq V\ \rm{[mag]} \leq 18$), spectroscopically confirmed Coma cluster galaxies in our catalogue, highlights the red-sequence of galaxies. We extrapolate the red-sequence to fainter magnitudes and show in Figure~\ref{fig:CMD} that the faint LSB galaxies from \protect\hyperlink{Y16}{Y16} are mostly consistent with the red-sequence, in agreement with a previous result from \citet{Koda_2015}. The $1\sigma$ intrinsic scatter around the red-sequence was obtained in magnitude bins of width $2$~mag and is shown as dashed lines in Figure~\ref{fig:CMD}. The scatter increases from $\unsim0.06$~mag at $V=15$~mag to $\unsim0.28$~mag for the faintest LSB galaxies. For subsequent analyses and discussion, we define three regions in our CMD: galaxies with colours within the limits defined by the $1\sigma$ scatter around the best-fit line to red-sequence galaxies (RSG); galaxies with colours redder than the $1\sigma$ limit ($>$RSG), i.e. redder than the average RSG; and galaxies with colours bluer than the $1\sigma$ limit ($<$RSG), i.e. bluer than the average RSG. In the context of cluster photometry, the red-sequence is normally expected to be populated by quiescent galaxies that are bound to the host cluster. However, our analysis in Section~\ref{sec:fin_clean} shows that $\unsim10$~per cent of the RSG with magnitudes brighter than $V \mathord{\sim} 19$~mag do not belong to the Coma cluster.
These bright background galaxies, which have redshifts $z\unsim0.2$, are known in the literature to be notoriously difficult to isolate with photometry alone \citep{Adami_2006, Adami_2009, Mahajan_2011}. As mentioned earlier, we have excluded these background galaxies from our catalogue and are left with $1,564$ cluster galaxies with $V < 19$~mag. At fainter apparent magnitudes, i.e., $19 \leq V \rm{\ [mag]} \leq 22$, spectroscopic background galaxies in the direction of the Coma cluster from the literature have elevated redshifts, $z \geq 0.3$, and are almost exclusively $>$RSG. A careful examination of these fainter background galaxies reveals that they generally have \Sersic/ index $n>2$. We assume that all other faint $>$RSG galaxies with $n>2$ are background galaxies and therefore exclude them from our catalogue. This brings the number of remaining galaxies in our catalogue to $9,179$, with $64$, $27$, and $9$ per cent being RSG, $>$RSG, and $<$RSG, respectively. \subsection{Control field} \label{sec:SDF} It should be noted that the spectroscopic sample used to isolate background galaxy contaminants is limited to high surface brightness galaxies, and as we have already shown, the red-sequence and surface brightness--magnitude diagnostics are both inadequate for weeding out all background galaxies. We therefore seek to know, statistically, the residual contamination level in our final catalogue. To this end, we analyse the $V$ and $R$ band observations of the Subaru Deep Field \citep[SDF;][]{Kashikawa_2004} as a control field. The SDF is at a similar Galactic latitude to the Coma cluster and was observed under similar photometric conditions (FWHM seeing $\unsim0.98\hbox{$^{\prime\prime}$}$). It has a similar field of view and limiting surface brightness to each of our Coma pointings, although it reaches a limiting magnitude that is $\unsim1.5$~mag deeper than our Coma imaging data.
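The statistical correction based on the control field reduces to an area scaling, sketched below. The numbers in the usage example are hypothetical, and the sketch assumes uniform background counts on the sky; in practice the control field must first be cut to the same detection and quality-control limits as the Coma fields, as described in the text.

```python
def expected_background(n_control, area_control, area_coma):
    """Scale control-field counts to the Coma survey area, assuming
    background number counts are uniform on the sky (areas in deg^2)."""
    return n_control * (area_coma / area_control)

def contamination_fraction(n_control, area_control, n_coma, area_coma):
    """Fraction of a CMD region expected to be background interlopers,
    capped at 1 for sparsely populated regions."""
    n_bkgd = expected_background(n_control, area_control, area_coma)
    return min(n_bkgd / n_coma, 1.0)

# Hypothetical example: 100 control-field sources in 0.25 deg^2,
# scaled to a 4 deg^2 survey containing 8000 candidates
frac = contamination_fraction(100, 0.25, 8000, 4.0)
```

Applying this per CMD region (RSG, $>$RSG, $<$RSG) yields the region-by-region contamination levels quoted below.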
Applying the same galaxy detection and quality control constraints (Sections~\ref{sec:obj_detect} \& ~\ref{sec:cmd}) to the control field, and assuming that it is representative of our Coma fields, we expect $2,840$ background galaxies within our combined Coma fields, mostly at fainter magnitudes, i.e., $V \mathord{\sim} 22$~mag, and redder colours, i.e., $V-R \mathord{\sim} 0.7$, as shown in Figure~\ref{fig:CMD_bkgrd_II}. Our control field analysis suggests that a minimum contamination level of $\unsim0.02$ per cent is realised in our photometric catalogue below the red-sequence ($<$RSG). Along the red-sequence, i.e. in the RSG group, the residual contamination is $\unsim20$ per cent, mostly among the fainter ($V > 19$~mag) LSB galaxies. Above the red-sequence ($>$RSG), the residual contamination level is very high, reaching $\unsim60$ per cent. We summarize the residual contamination levels along and off the red-sequence in Figure~\ref{fig:CMD_bkgrd_II} and note that the total number of Coma cluster galaxies in our catalogue drops to $6,891$ after applying the statistical background-galaxy correction. \begin{figure*} \includegraphics[width=0.64\textwidth]{figures/N4911_field.pdf}\hspace{0.01\textwidth}\\ \caption{\label{fig:NGC4911} An example field illustrating the wide range of galaxies with structural parameters in our final catalogue. We show the $V$-band Subaru/Suprime-Cam imaging of a field at $\unsim0.6$~Mpc from the centre of the Coma cluster, with dimensions $120$~kpc~$\times$~$120$~kpc. The galaxies in our final catalogue are marked with white $1R_{\rm e}$ isophotes. This field contains the giant spiral galaxy NGC~4911, several dwarf galaxies, and the ultra-diffuse galaxy Y48 ($\rm COMA\_12\_3561$). It also includes a faint, disrupting galaxy ($\rm COMA\_11\_4120$) that is $\unsim75$~kpc away from the NGC~4911--NGC~4911A interacting pair.
North is up and East is left.} \includegraphics[width=0.44\textwidth]{figures/size_mag_A.png}\hspace{0.01\textwidth} \includegraphics[width=0.46\textwidth]{figures/SB_mag_B.png}\hspace{0.01\textwidth}\\ \caption{\label{fig:summ_cat} \textit{Left panel}: Size--magnitude distribution of Coma cluster galaxies. The brown circles are the $9,179$ galaxies successfully modelled with a single \Sersic/ function as described in the text. The dashed lines correspond to constant mean effective surface brightness. We detect Coma cluster galaxies as small as $\unsim0.6$~kpc and with mean surface brightness within the effective radius as faint as $\unsim27.5$~\mbox{mag arcsec$^{-2}$}. For context, we show the regions where the low surface brightness galaxies in the catalogue of \citet{Yagi_2016} may be found (red parallelogram), as well as that of the ultra-diffuse galaxies (black, dashed parallelogram). We also show and label some Local Group dwarf galaxies (black crosses) from the compilation of \citet{McConnachie_2012} that would fall within our detection limits, assuming they were observed at the distance of the Coma cluster. These are the dwarf irregular galaxy Wolf-Lundmark-Melotte (WLM), the Fornax dwarf spheroidal galaxy, and the dwarf elliptical galaxy NGC~205. \textit{Right panel}: Mean surface brightness within the effective radius of Coma cluster galaxies versus their $V$-band apparent magnitudes. The dashed lines correspond to constant effective radius. The outlines and the black crosses are the same as shown in the left panel.} \end{figure*} \begin{figure*} \includegraphics[width=0.96\textwidth]{figures/hist_summ.png}\hspace{0.01\textwidth} \caption{\label{fig:summ_hist} Distribution of the structural parameters from the $V$-band \texttt{GALFIT} analysis of Coma cluster galaxies in bins of high (HSB), intermediate (ISB), and low (LSB) mean surface brightness, as defined in the text.
The \textit{top panels} show the distributions of the $V$-band magnitudes, physical sizes, and \Sersic/ indices, while the \textit{bottom panels} show the axial ratios and the folded position angles of Coma cluster galaxies. Within the size and surface brightness limits of our catalogue, the most likely Coma cluster galaxy has $V\unsim21$~mag, $R_{\rm e}\unsim0.8$~kpc, $n\unsim1$, and $q\unsim0.8$. The solid vertical line in the second subpanel is the fiducial size limit for UDGs. There is no obvious discontinuity at this size limit that would mark UDGs as a distinct galaxy subpopulation. In the position angle subpanel, we have highlighted the major axis of the cluster with the black line and also show the $PA$ of NGC~4874, the central cD galaxy, with the gray line. We note that there is a remarkable drop in the number of galaxies with $PA\unsim120\hbox{$^\circ$}$ in all surface brightness bins.} \end{figure*} \section{Summary of Results} \label{sec:summ} From our analysis thus far, we have obtained structural parameters (total magnitudes, $R_{\rm e}$, $n$, $q$, and $PA$ in both $V$ and $R$-bands) for $9,179$ galaxies in the direction of the Coma cluster. We show an example in Table~\ref{tab:complete_tab} and make the complete catalogue available online. Due to the depth of our imaging data, we also identified $233$ galaxies with faint tidal features such as plumes, shells, rings, etc. (see Figure~\ref{fig:NGC4911}). These features are the telltale signatures of the hierarchical nature of structure formation in the Universe. Table~\ref{tab:complete_tab} contains a summary of galaxies with such tidal features. As an example to highlight the wide range of galaxies in our catalogue, we show a $120$~kpc~$\times$~$120$~kpc field near NGC~4911 in Figure~\ref{fig:NGC4911}. This field contains galaxies ranging from the giant spiral galaxy NGC~4911 to the ultra-diffuse galaxy Y48, as well as several dwarf galaxies.
In addition, the field also contains a disrupting galaxy ($\rm COMA\_11\_4120$), which as far as we know has not been previously catalogued in the literature. Figure~\ref{fig:summ_cat} shows the size--magnitude distribution of the Coma cluster galaxies. As expected, galaxy sizes correlate with their luminosities. In the $V$-band, we detect galaxies with mean surface brightness within the effective radius, $\langle \mu_{\rm eff,V} \rangle$, ranging from $20$ to $27.5$~\mbox{mag arcsec$^{-2}$}, effective radius, $R_{\rm e}$, from $\unsim0.6$ to $\unsim15$~kpc, and apparent magnitudes from $12$ to $24$~mag. We therefore span a region of parameter space that contains a diversity of galaxies suitable for a systematic exploration of the Coma cluster. As shown in both panels of Figure~\ref{fig:summ_cat}, our catalogue contains a large sample of low surface brightness galaxies ($\langle \mu_{\rm eff,V} \rangle \geq 24$~\mbox{mag arcsec$^{-2}$}) with small sizes ($R_{\rm e} \leq 1$~kpc) structurally similar to some Local Group dwarf galaxies. Our catalogue also contains galaxies with intermediate surface brightness, $22 \leq \langle \mu_{\rm eff,V} \rangle \leq 24.5$~\mbox{mag arcsec$^{-2}$}, and $R_{\rm e} \geq 2.5$~kpc, a region of the parameter space relatively unexplored and often misunderstood as being devoid of galaxies (compare our Figure~\ref{fig:summ_cat} with fig.~$4$ from \citealt{Koda_2015}). This confirms the previous result of \citet{Danieli_2019}, who also reported that this region is populated, although their sample is incomplete below $\unsim2$~kpc. We split our final catalogue into high surface brightness (HSB), intermediate surface brightness (ISB), and LSB galaxies using the following mean surface brightness limits -- HSB: $\langle \mu_{\rm eff,V} \rangle < 23$~\mbox{mag arcsec$^{-2}$}; ISB: $23 \leq \langle \mu_{\rm eff,V} \rangle < 24.5$~\mbox{mag arcsec$^{-2}$}; and LSB: $\langle \mu_{\rm eff,V} \rangle \geq 24.5$~\mbox{mag arcsec$^{-2}$}. 
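In code, these surface-brightness categories reduce to a simple threshold function (a trivial sketch; the function and variable names are ours):

```python
def sb_class(mu_eff_v):
    """Surface-brightness category from the V-band mean surface
    brightness within the effective radius, <mu_eff,V>, in
    mag arcsec^-2, using the HSB/ISB/LSB limits quoted in the text."""
    if mu_eff_v < 23.0:
        return "HSB"  # high surface brightness
    if mu_eff_v < 24.5:
        return "ISB"  # intermediate surface brightness
    return "LSB"      # low surface brightness
```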
These limits, which are in line with the categorization scheme used in \citet{Martin_2019}, are simply for convenience and we do not ascribe any particular astrophysical meaning to them. The bright limit of the LSB category is consistent with the LSB limit used in the \protect\hyperlink{Y16}{Y16} catalogue. Our catalogue contains $2,290$~HSB, $3,833$~ISB, and $3,056$~LSB galaxies as defined above. This implies a factor of $3$ increase in the number of LSB galaxies in the Coma cluster relative to the \protect\hyperlink{Y16}{Y16} catalogue after accounting for background galaxy contamination. We show the distributions of the structural parameters in the $3$ surface brightness categories in Figure~\ref{fig:summ_hist} and note that regardless of surface brightness, the most likely galaxy in our catalogue has $V$-band magnitude~$\unsim21$~mag, $R_{\rm e}\unsim0.8$~kpc, $n\unsim1$, $q\unsim0.8$, and tends to be aligned along the $PA$ of the cluster, i.e. $\unsim71$~deg \citep{Plionis_1994}. The second subpanel shows that there is no obvious discontinuity in the sizes of the LSB galaxy subpopulation at $1.5$~kpc that would mark UDGs as a distinct galaxy population. Lastly, we note that \citet{Forbes_2020} used $V-R$ colours from this work, which were based on the difference between the total $V$ and total $R$ magnitudes, to explore the possible trends of globular cluster specific frequency with host UDG colour. Here, we have used a different approach to calculate the $V-R$ colours of UDGs, using colours from matched apertures as discussed in Section~\ref{sec:cmd}. This results in a similar specific frequency vs UDG colour trend to that found by \citet{Forbes_2020}. \begin{figure*} \includegraphics[width=0.96\textwidth]{figures/newUDGs.png}\hspace{0.01\textwidth}\\ \caption{\label{fig:new_UDGs} $V$-band Subaru/Suprime-Cam thumbnails of the $29$ newly discovered Coma cluster ultra-diffuse galaxies. 
These UDGs have a typical $R_{\rm e}\unsim2$~kpc and $\langle \mu_{\rm eff,V} \rangle\unsim25$~\mbox{mag arcsec$^{-2}$}. Each thumbnail is $10\times10$~kpc. North is up and East is to the left.} \end{figure*} \subsection{New ultra-diffuse galaxies in the Coma cluster} \label{sec:new_udg} Having established that our catalogue is rich in LSB galaxies, we conduct a further search for new ultra-diffuse galaxies within the outlined UDG region shown in Figure~\ref{fig:summ_cat}. This region contains $110$ galaxies after excluding all the UDGs already identified in the literature, but only $29$ satisfy all the commonly used UDG criteria, i.e. $R_{\rm e} \geq 1.5$~kpc; mean surface brightness within $R_{\rm e}$,~$\langle \mu_{\rm eff,V} \rangle \geq24.5$~\mbox{mag arcsec$^{-2}$}\ (combining criterion 6 from \protect\hyperlink{Y16}{Y16} and using the mean UDG colour $V-R\unsim0.5$); and $\mu_{\rm eff,V} - \langle \mu_{\rm eff,V} \rangle \leq 0.8$ (see criterion 7 in \protect\hyperlink{Y16}{Y16}), where $\mu_{\rm eff,V}$ is the surface brightness at $R_{\rm e}$. We note that this last condition limits the sample of acceptable UDGs to those with \Sersic/ index, $n \leq 1.25$, i.e. exponential light profiles. Two factors may be responsible for these new detections: our data, while covering the same sky area as that used in \protect\hyperlink{Y16}{Y16}, have better seeing, and we have applied the detection methods originally introduced in \protect\hyperlink{Y16}{Y16} to multi-band data, making our measurements more reliable. We show these newly discovered UDGs in Figure~\ref{fig:new_UDGs} and present their structural parameters in Table~\ref{tab:UDGs}. Out of these newly catalogued UDGs, $23$ lie along the red-sequence of Coma cluster galaxies, i.e. they are RSG as discussed in Section~\ref{sec:cmd}, while $2$ have colours redder than the red-sequence region. 
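As an illustration, the size and mean-surface-brightness parts of these UDG criteria can be applied as below. The mean surface brightness within the effective radius follows from $\langle\mu\rangle_{\rm eff} = m + 2.5\log_{10}(2\pi R_{\rm e}^{2})$ with $R_{\rm e}$ in arcsec; the physical scale of $\sim0.47$~kpc~arcsec$^{-1}$ at the Coma distance is an assumed value, and the concentration (\Sersic/ index) criterion is omitted for brevity:

```python
import math

KPC_PER_ARCSEC = 0.473  # assumed physical scale at the Coma distance (~100 Mpc)

def mean_eff_sb(m_tot, re_kpc):
    """Mean surface brightness within the effective radius:
    <mu>_eff = m + 2.5 log10(2 pi Re^2), with Re in arcsec."""
    re_arcsec = re_kpc / KPC_PER_ARCSEC
    return m_tot + 2.5 * math.log10(2.0 * math.pi * re_arcsec ** 2)

def is_udg(m_v, re_kpc):
    """Size and mean-surface-brightness cuts from the text:
    Re >= 1.5 kpc and <mu_eff,V> >= 24.5 mag arcsec^-2."""
    return re_kpc >= 1.5 and mean_eff_sb(m_v, re_kpc) >= 24.5
```

For example, DF44 ($\rm COMA\_10\_2157$ in Table~\ref{tab:complete_tab}, $V=19.37$, $R_{\rm e}=3.43$~kpc) passes both cuts, with $\langle\mu_{\rm eff,V}\rangle\approx25.7$, close to the tabulated value of $25.62$.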
\begin{figure*} \includegraphics[width=0.96\textwidth]{figures/COL_RAD_ult.png}\hspace{0.01\textwidth}\\ \caption{\label{fig:col_rad} Mean residual $V-R$ colours of Coma cluster galaxies, after subtracting off the global red-sequence fit, versus the projected clustercentric radius. We show results for the high (HSB), intermediate (ISB), and low (LSB) surface brightness galaxies, using the limits discussed in the text. Ultra-diffuse galaxies (UDGs), represented by the solid red line, have mean residual colour trends similar to the LSB galaxies out to the virial radius of the Coma cluster ($\unsim2.9$~Mpc). We do not include galaxies that are redder than the $1\sigma$ scatter around the best-fit red-sequence line in this plot due to their high background contamination rate. The dashed horizontal line corresponds to galaxies with mean colours consistent with the red-sequence and is shown to guide the eye. LSB galaxies within the cluster core have redder mean residual colours relative to the red-sequence relation at the $2\sigma$ level, and they show a more dramatic transition at $\unsim0.6$~Mpc than galaxies with higher surface brightness; beyond this radius their mean residual colour flattens out before rising in the cluster outskirts.} \end{figure*} \subsection{Environmental trends in clustercentric colour distribution} \label{sec:rad_udg} We now revisit the issue of environmental colour trends in the Coma cluster with emphasis on the LSB galaxies in our photometric catalogue. To recap, our catalogue contains $3,056$~LSB galaxies, most of which lie along the red-sequence, with $154$ and $631$ galaxies bluer and redder than the red-sequence region of the colour--magnitude diagram, respectively, as defined in Section~\ref{sec:cmd}. 
Our control field analysis in Section~\ref{sec:SDF} already shows that we can expect very little background galaxy contamination in the subsample that is bluer than the red-sequence, and up to $\unsim60$~per cent contamination in the LSB galaxy subsample that is redder than the red-sequence. Along the red-sequence, the maximum background galaxy contamination level is $\unsim20$~per cent. Galaxy clusters such as Coma are well known to show a decreasing colour gradient with projected clustercentric distance \citep{Terlevich_2001}, which is believed to be directly linked to a corresponding age gradient \citep{Smith_2009, Smith_2011}. The densest region of the cluster is observed to be dominated by redder galaxies, while bluer galaxies dominate the less dense outskirts. Within the Coma cluster, the correlation of galaxy colours with their local environment has been previously observed in dwarf galaxies \citep{Secker_1997}, although \citet{Adami_2006} reported no radial trends in their sample of faint LSB galaxies. \citet{Alabi_2018}, however, found hints that UDGs within the cluster core may be redder than those in the cluster outskirts, although the analysis was severely limited by incomplete and inhomogeneously sourced colour data. More generally, LSB galaxies are expected to have formed their stars rapidly at earlier epochs \citep{Martin_2019}, while those observed in dense environments may have experienced enhanced star-formation quenching via ram-pressure stripping and tidal puffing-up effects from their host clusters \citep{Moore_1996, Johansson_2009}. Imprints of such physical processes are expected to be seen in the radial distribution of the galaxy colours \citep{Jiang_2019}. 
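The binned mean residual colours behind Figure~\ref{fig:col_rad} can be sketched as follows (a minimal illustration with hypothetical inputs; the red-sequence fit and the exclusion of unconfirmed red outliers are assumed to have been applied upstream):

```python
import math

def mean_residual_colour(radius_mpc, colour_residual, edges):
    """Mean (V-R) colour residual, relative to the red-sequence fit,
    in bins of projected clustercentric radius (bin edges in Mpc)."""
    sums = [0.0] * (len(edges) - 1)
    counts = [0] * (len(edges) - 1)
    for r, dc in zip(radius_mpc, colour_residual):
        for i in range(len(edges) - 1):
            if edges[i] <= r < edges[i + 1]:
                sums[i] += dc
                counts[i] += 1
                break
    # return NaN for empty bins rather than raising
    return [s / c if c else math.nan for s, c in zip(sums, counts)]
```

Splitting the sample at the $\unsim0.6$~Mpc transition radius discussed below then amounts to choosing bin edges that bracket that radius.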
\citet{Sales_2019} recently used the IllustrisTNG cosmological simulations to show that UDGs that were accreted early into present-day massive clusters ($\unsim10^{14}\rm M_\odot$) are found mostly within the cluster cores, while those accreted more recently from the field environments dominate in the cluster outskirts. Figure~\ref{fig:col_rad} shows how the mean residual $V-R$ colour distribution with respect to the best-fit red-sequence line varies with projected clustercentric radius out to the cluster virial radius, i.e. $\unsim2.9$~Mpc \citep{Kubo_2007}. In estimating the mean colour residuals, we have excluded all galaxies redder than the $1\sigma$ scatter around the best-fit red-sequence line that lack spectroscopic confirmation as Coma cluster members, due to their high background galaxy contamination rate. Relative to the red-sequence, LSB galaxies (and UDGs) within the cluster core have redder mean colours than ISB and HSB galaxies. Their mean residual colour is $0.034\pm0.014$~mag redder than that of higher surface brightness galaxies. This result, which is significant at the $2\sigma$ level, suggests that LSB galaxies are most vulnerable to the severe star-formation quenching effects of the cluster-core environment. Outside the cluster core, the mean residual colour trends are similar out to $\unsim1.8$~Mpc for the various surface brightness subsamples. The high mean residual colour at larger projected clustercentric radii, which is more evident in the LSB galaxies, can be attributed to an increase in the contribution from the redder, fainter background contamination galaxies to our final catalogue (see Section~\ref{sec:SDF}). There is a noticeable transition at $\unsim0.6$~Mpc ($\unsim0.3R_{200}$), which is more pronounced in the LSB galaxies, as already hinted at in \citet{Alabi_2018}. 
This transition radius corresponds to the projected radius within which UDGs that formed from cluster tidal effects dominate the UDG clustercentric number density profile in the cosmological simulations of \citet[][see their fig. 7]{Sales_2019}. The observed transition radius ($\unsim0.6$~Mpc) is similar to the projected scale radius ($r_{\rm s}\unsim27\hbox{$^\prime$}$ or $\unsim0.8$~Mpc at the distance of the Coma cluster) of the Coma cluster dark matter halo \citep{Okabe_2014}. Similar transition radii have been observed in the Virgo cluster \citep{Chung_2009} and more recently in galaxy clusters in the SAMI Galaxy Survey \citep{Owers_2019}. In both studies, the transition radius, although inferred from different parameters, was linked to environmental quenching effects, which are most pronounced in the cluster cores. For example, spiral galaxies within $\unsim0.5$~Mpc ($\unsim0.3R_{200}$) of the centre of the Virgo cluster have HI disks that are smaller than their stellar disks, while in the SAMI Galaxy Survey, galaxies with strong Balmer absorption features but no recent star-formation episodes were found exclusively within $\unsim0.6R_{200}$ of the cluster centres. \section{Conclusion} In this work, we have obtained structural parameters simultaneously in the $V$ and $R$-bands for $9,179$ galaxies within an area of $\unsim4$~deg$^2$ in the Coma cluster using Subaru/Suprime-Cam data. Importantly, we have homogeneously obtained the $V-R$ colours of the ultra-diffuse galaxies in the Coma cluster, and of the more general class of low surface brightness galaxies, out to $\unsim2.6$~Mpc from the centre of the cluster. Our catalogue contains galaxies with magnitudes as faint as $V\unsim24$~mag, effective radius, $R_{\rm e}$, as small as $\unsim0.6$~kpc, and mean surface brightness within the effective radius, $\langle \mu_{\rm eff,V} \rangle$, as low as $\unsim27.5$~\mbox{mag arcsec$^{-2}$}. 
Our catalogue contains an unprecedented $3,056$ LSB galaxies in the direction of the Coma cluster with mean effective surface brightness fainter than $24$~\mbox{mag arcsec$^{-2}$}~in the $V$-band. Out of these Coma cluster LSB galaxies, we found $29$ new UDGs, previously uncatalogued in the literature. In addition, we identified galaxies with faint tidal features within the Coma cluster. We make this catalogue publicly available. We have confirmed earlier results that most Coma cluster UDGs lie along the red-sequence of the colour--magnitude relation, but we found subpopulations of UDGs outside the red-sequence region. We also investigated clustercentric trends in galaxy colours in order to understand how the locally varying environment within the cluster affects the LSB galaxies compared to co-spatial higher surface brightness galaxies within the Coma cluster. An important result is the detection of a cluster transition radius at a projected radius of $\unsim0.6$~Mpc, within which the LSB galaxies are on average redder than galaxies with higher surface brightness at the $2\sigma$ level. This transition radius is similar to that reported by \citet{Sales_2019}, based on the IllustrisTNG cosmological simulations of massive galaxy clusters, within which ancient infalls dominate the UDG population. \newpage \section*{Acknowledgements} We thank the anonymous referee for the thoughtful reading of the manuscript and for the valuable feedback. DAF thanks the ARC for financial support via DP160101608. JPB and AJR acknowledge financial support from AST-1616598, AST-1518294 and AST-1616710. AJR was supported by the Research Corporation for Science Advancement as a Cottrell Scholar. This paper was based in part on data collected at the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. 
This research made use of the ``K-corrections calculator'' service available at http://kcor.sai.msu.ru/, \texttt{TOPCAT} software available at http://www.starlink.ac.uk/topcat/ \citep{Taylor_2005}, and \texttt{SEP} software \citep{Barbary_2016}. \begin{sidewaystable*} \scriptsize \begin{tabular}{@{}l c c c c c c c c c c c c c c c l} \hline ID & RA & Dec & $V$ & $R_{\rm e}{_{_{V}}}$ & $\langle \mu{_{\rm {eff},V}} \rangle$ & $n{_{_{V}}}$ & $q{_{_{V}}}$ & $PA{_{_{V}}}$ & $R$ & $R_{\rm e}{_{_{R}}}$ & $\langle \mu{_{\rm {eff},R}} \rangle$ & $n{_{_{R}}}$ & $q{_{_{R}}}$ & $PA{_{_{R}}}$ & $V-R$ & comment \\ & [Degree] & [Degree] & [mag] & [kpc] & [\mbox{mag arcsec$^{-2}$}] & & & [deg] & [mag] & [kpc] & [\mbox{mag arcsec$^{-2}$}] & & & [deg] & [mag] & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & (13) & (14) & (15) & (16) & (17)\\ \hline COMA\_10\_1339 & $195.41327 $ & $27.160273 $ & $18.81\pm0.21$ & $4.31\pm 0.3 $ & $25.55\pm 0.26$ & $11.7 \pm50.62$ & $0.89 \pm 0.14$ & $80.79 $ & $18.41\pm0.23$ & $5.87 \pm 0.4 $ & $25.81 \pm 0.27$ & $13.42\pm 0.69 $ & $0.88 \pm 0.14$ & $80.58 $ & $0.81$ & $\rm LSB, >RS, GMP1928 $ \\ COMA\_10\_2134 & $195.34398 $ & $26.830603 $ & $19.36\pm0.18$ & $2.64\pm 0.22$ & $25.04\pm 0.25$ & $1.02 \pm 0.51$ & $0.7 \pm 0.14$ & $146.28$ & $18.7 \pm0.16$ & $2.3 \pm 0.19$ & $24.08 \pm 0.24$ & $0.43 \pm 0.55 $ & $0.49 \pm 0.12$ & $141.41$ & $0.71$ & $\rm LSB, Y16 $ \\ COMA\_10\_2157 & $195.24165 $ & $26.97631 $ & $19.37\pm0.22$ & $3.43\pm 0.24$ & $25.62\pm 0.27$ & $0.62 \pm 0.58$ & $0.56 \pm 0.15$ & $147.71$ & $18.43\pm0.19$ & $3.74 \pm 0.23$ & $24.86 \pm 0.23$ & $0.66 \pm 0.59 $ & $0.6 \pm 0.13$ & $148.88$ & $0.71$ & $\rm LSB, Y11, DF44 $ \\ COMA\_10\_2222 & $195.31602 $ & $27.210728 $ & $19.41\pm0.2 $ & $2.93\pm 0.23$ & $25.31\pm 0.26$ & $0.76 \pm 0.54$ & $0.57 \pm 0.14$ & $9.26 $ & $18.58\pm0.17$ & $3.05 \pm 0.21$ & $24.57 \pm 0.23$ & $0.77 \pm 0.58 $ & $0.6 \pm 0.13$ & $9.08 $ & $0.62$ & $\rm LSB, Y13,DFX1, GMP2175 $ \\ 
COMA\_10\_3142 & $195.19983 $ & $27.00997 $ & $19.9 \pm0.16$ & $1.77\pm 0.21$ & $24.71\pm 0.3 $ & $1.6 \pm 0.48$ & $0.84 \pm 0.13$ & $111.51$ & $19.69\pm0.18$ & $1.91 \pm 0.22$ & $24.66 \pm 0.3 $ & $1.4 \pm 0.58 $ & $1.0 \pm 0.13$ & $20.13 $ & $0.38$ & $\rm LSB $ \\ COMA\_10\_3630 & $195.32954 $ & $27.054224 $ & $20.11\pm0.25$ & $2.92\pm 0.26$ & $26.0 \pm 0.31$ & $0.82 \pm 0.63$ & $0.73 \pm 0.15$ & $16.89 $ & $19.69\pm0.21$ & $2.61 \pm 0.25$ & $25.35 \pm 0.3 $ & $0.82 \pm 0.61 $ & $0.87 \pm 0.13$ & $178.5 $ & $0.73$ & $\rm LSB, Y14, DF42 $ \\ COMA\_10\_4551 & $195.27219 $ & $27.15981 $ & $20.51\pm0.17$ & $1.42\pm 0.21$ & $24.85\pm 0.37$ & $0.78 \pm 0.49$ & $0.8 \pm 0.13$ & $149.52$ & $19.75\pm0.16$ & $1.43 \pm 0.19$ & $24.1 \pm 0.33$ & $0.81 \pm 0.56 $ & $0.8 \pm 0.12$ & $145.8 $ & $0.6$ & $\rm LSB, Y12 $ \\ COMA\_10\_4682 & $195.44453 $ & $27.1667 $ & $20.57\pm0.18$ & $1.51\pm 0.22$ & $25.03\pm 0.36$ & $1.01 \pm 0.51$ & $0.97 \pm 0.14$ & $0.6 $ & $19.81\pm0.17$ & $1.52 \pm 0.2 $ & $24.29 \pm 0.33$ & $0.98 \pm 0.56 $ & $0.98 \pm 0.13$ & $53.72 $ & $0.55$ & $\rm LSB, Y18 $ \\ COMA\_10\_5089 & $195.197 $ & $26.78314 $ & $20.73\pm0.27$ & $2.48\pm 0.27$ & $26.26\pm 0.36$ & $0.73 \pm 0.67$ & $0.66 \pm 0.16$ & $141.95$ & $20.07\pm0.22$ & $2.39 \pm 0.27$ & $25.53 \pm 0.32$ & $0.71 \pm 0.62 $ & $0.64 \pm 0.14$ & $140.62$ & $0.47$ & $\rm LSB, Y8,DF46 $ \\ COMA\_10\_5457 & $195.18846 $ & $27.113832 $ & $20.87\pm0.2 $ & $1.53\pm 0.23$ & $25.36\pm 0.39$ & $0.99 \pm 0.55$ & $0.89 \pm 0.14$ & $112.1 $ & $20.2 \pm0.18$ & $1.47 \pm 0.21$ & $24.6 \pm 0.36$ & $0.9 \pm 0.58 $ & $0.83 \pm 0.13$ & $114.37$ & $0.46$ & $\rm LSB, Y6 $ \\ COMA\_10\_5633 & $195.5619 $ & $27.14568 $ & $20.94\pm0.32$ & $2.75\pm 0.3 $ & $26.7 \pm 0.4 $ & $1.6 \pm 0.74$ & $0.69 \pm 0.17$ & $54.99 $ & $20.25\pm0.22$ & $2.39 \pm 0.28$ & $25.71 \pm 0.34$ & $1.21 \pm 0.63 $ & $0.77 \pm 0.14$ & $59.63 $ & $0.58$ & $\rm LSB, Y22 $ \\ COMA\_10\_5695 & $195.40529 $ & $27.006954 $ & $20.97\pm0.18$ & $1.27\pm 0.22$ & 
$25.06\pm 0.42$ & $1.84 \pm 0.52$ & $0.79 \pm 0.14$ & $21.01 $ & $20.96\pm0.2 $ & $1.29 \pm 0.24$ & $25.08 \pm 0.45$ & $1.43 \pm 0.6 $ & $0.82 \pm 0.13$ & $25.41 $ & $0.38$ & $\rm LSB $ \\ COMA\_10\_5842 & $195.53276 $ & $26.87714 $ & $21.03\pm0.16$ & $1.07\pm 0.21$ & $24.75\pm 0.46$ & $1.0 \pm 0.48$ & $0.84 \pm 0.13$ & $70.96 $ & $20.93\pm0.18$ & $1.04 \pm 0.21$ & $24.59 \pm 0.48$ & $0.99 \pm 0.58 $ & $0.85 \pm 0.13$ & $70.04 $ & $0.41$ & $\rm LSB $ \\ COMA\_10\_5902 & $195.27608 $ & $27.168835 $ & $21.05\pm0.16$ & $1.04\pm 0.21$ & $24.7 \pm 0.47$ & $1.56 \pm 0.47$ & $0.96 \pm 0.13$ & $44.88 $ & $20.82\pm0.17$ & $0.99 \pm 0.2 $ & $24.37 \pm 0.47$ & $1.52 \pm 0.57 $ & $0.73 \pm 0.13$ & $102.32$ & $0.51$ & $\rm LSB $ \\ COMA\_10\_6090 & $195.30705 $ & $26.866377 $ & $21.12\pm0.15$ & $0.92\pm 0.2 $ & $24.51\pm 0.5 $ & $0.27 \pm 0.46$ & $0.52 \pm 0.13$ & $123.13$ & $20.33\pm0.14$ & $0.9 \pm 0.17$ & $23.66 \pm 0.43$ & $0.61 \pm 0.54 $ & $0.89 \pm 0.12$ & $101.31$ & $0.94$ & $\rm LSB, >RS $ \\ COMA\_10\_6392 & $195.49966 $ & $26.873947 $ & $21.24\pm0.16$ & $0.94\pm 0.21$ & $24.67\pm 0.51$ & $0.89 \pm 0.47$ & $0.87 \pm 0.13$ & $8.58 $ & $20.89\pm0.17$ & $0.94 \pm 0.2 $ & $24.31 \pm 0.49$ & $0.9 \pm 0.56 $ & $0.92 \pm 0.13$ & $39.74 $ & $0.59$ & $\rm LSB $ \\ COMA\_10\_6422 & $195.42538 $ & $26.661331 $ & $21.25\pm0.15$ & $0.88\pm 0.2 $ & $24.55\pm 0.53$ & $1.03 \pm 0.46$ & $0.79 \pm 0.13$ & $3.59 $ & $21.1 \pm0.17$ & $0.83 \pm 0.2 $ & $24.27 \pm 0.54$ & $1.06 \pm 0.56 $ & $0.81 \pm 0.13$ & $18.33 $ & $0.41$ & $\rm LSB $ \\ COMA\_10\_6462 & $195.58606 $ & $27.102407 $ & $21.26\pm0.25$ & $1.7 \pm 0.26$ & $25.98\pm 0.41$ & $0.79 \pm 0.63$ & $0.46 \pm 0.15$ & $14.57 $ & $20.42\pm0.2 $ & $1.78 \pm 0.25$ & $25.24 \pm 0.36$ & $0.76 \pm 0.61 $ & $0.5 \pm 0.13$ & $11.53 $ & $0.62$ & $\rm LSB, Y24 $ \\ COMA\_10\_6536 & $195.48225 $ & $27.10293 $ & $21.29\pm0.15$ & $0.86\pm 0.2 $ & $24.54\pm 0.54$ & $0.47 \pm 0.46$ & $0.8 \pm 0.13$ & $174.8 $ & $20.93\pm0.16$ & $0.85 \pm 0.19$ & 
$24.15 \pm 0.51$ & $0.72 \pm 0.56 $ & $0.89 \pm 0.13$ & $5.25 $ & $0.59$ & $\rm LSB $ \\ COMA\_10\_6577 & $195.130359$ & $26.793341 $ & $21.31\pm0.16$ & $0.96\pm 0.21$ & $24.77\pm 0.51$ & $0.79 \pm 0.48$ & $0.43 \pm 0.13$ & $20.89 $ & $20.76\pm0.16$ & $0.93 \pm 0.19$ & $24.16 \pm 0.47$ & $1.02 \pm 0.56 $ & $0.45 \pm 0.13$ & $17.05 $ & $0.8$ & $\rm LSB, >RS $ \\ COMA\_10\_6622 & $195.5833 $ & $27.209324 $ & $21.32\pm0.23$ & $1.52\pm 0.25$ & $25.8 \pm 0.43$ & $0.58 \pm 0.61$ & $0.72 \pm 0.15$ & $57.73 $ & $20.57\pm0.19$ & $1.48 \pm 0.23$ & $24.99 \pm 0.39$ & $0.57 \pm 0.6 $ & $0.75 \pm 0.13$ & $55.81 $ & $0.6$ & $\rm LSB, Y25 $ \\ COMA\_10\_6670 & $195.3495 $ & $26.75686 $ & $21.34\pm0.23$ & $1.47\pm 0.25$ & $25.73\pm 0.43$ & $0.83 \pm 0.6 $ & $0.67 \pm 0.15$ & $28.37 $ & $20.62\pm0.19$ & $1.47 \pm 0.24$ & $25.02 \pm 0.4 $ & $0.83 \pm 0.6 $ & $0.73 \pm 0.13$ & $27.93 $ & $0.47$ & $\rm LSB, Y15 $ \\ COMA\_10\_6731 & $195.36357 $ & $26.667072 $ & $21.38\pm0.22$ & $1.41\pm 0.25$ & $25.69\pm 0.44$ & $0.38 \pm 0.59$ & $0.58 \pm 0.15$ & $26.1 $ & $21.24\pm0.21$ & $1.3 \pm 0.26$ & $25.38 \pm 0.48$ & $0.45 \pm 0.62 $ & $0.67 \pm 0.13$ & $29.89 $ & $0.49$ & $\rm LSB $ \\ COMA\_10\_6914 & $195.12947 $ & $27.020304 $ & $21.45\pm0.29$ & $1.93\pm 0.28$ & $26.45\pm 0.43$ & $0.64 \pm 0.7 $ & $0.75 \pm 0.16$ & $155.83$ & $20.73\pm0.23$ & $2.0 \pm 0.28$ & $25.8 \pm 0.38$ & $0.81 \pm 0.64 $ & $1.0 \pm 0.14$ & $152.38$ & $0.58$ & $\rm LSB, Y3 $ \\ COMA\_10\_7073 & $195.24005 $ & $26.781855 $ & $21.52\pm0.19$ & $1.1 \pm 0.23$ & $25.29\pm 0.49$ & $0.5 \pm 0.54$ & $0.73 \pm 0.14$ & $80.52 $ & $21.2 \pm0.18$ & $1.0 \pm 0.22$ & $24.76 \pm 0.51$ & $0.68 \pm 0.59 $ & $0.76 \pm 0.13$ & $75.55 $ & $0.53$ & $\rm LSB $ \\ COMA\_10\_7100 & $195.54 $ & $27.135769 $ & $21.53\pm0.19$ & $1.04\pm 0.23$ & $25.2 \pm 0.51$ & $0.82 \pm 0.53$ & $0.66 \pm 0.14$ & $60.87 $ & $21.26\pm0.19$ & $1.04 \pm 0.23$ & $24.91 \pm 0.51$ & $0.9 \pm 0.59 $ & $0.62 \pm 0.13$ & $58.82 $ & $0.53$ & $\rm LSB $ \\ 
COMA\_10\_7154 & $195.144787$ & $27.16266 $ & $21.55\pm0.17$ & $0.91\pm 0.22$ & $24.9 \pm 0.54$ & $0.88 \pm 0.5 $ & $0.89 \pm 0.14$ & $146.76$ & $21.42\pm0.18$ & $0.83 \pm 0.21$ & $24.57 \pm 0.58$ & $1.02 \pm 0.58 $ & $0.66 \pm 0.13$ & $113.28$ & $0.41$ & $\rm LSB $ \\ COMA\_10\_7241 & $195.57613 $ & $26.69556 $ & $21.59\pm0.27$ & $1.62\pm 0.27$ & $26.21\pm 0.45$ & $0.22 \pm 0.67$ & $0.69 \pm 0.16$ & $64.54 $ & $20.97\pm0.22$ & $1.68 \pm 0.27$ & $25.67 \pm 0.42$ & $0.17 \pm 0.63 $ & $0.75 \pm 0.14$ & $65.6 $ & $0.4$ & $\rm LSB, Y23 $ \\ COMA\_10\_7270 & $195.244732$ & $27.152234 $ & $21.61\pm0.16$ & $0.8 \pm 0.21$ & $24.68\pm 0.59$ & $0.3 \pm 0.47$ & $0.64 \pm 0.13$ & $76.16 $ & $20.98\pm0.15$ & $0.76 \pm 0.18$ & $23.95 \pm 0.54$ & $0.38 \pm 0.55 $ & $0.66 \pm 0.12$ & $75.74 $ & $0.93$ & $\rm LSB, >RS $ \\ COMA\_10\_7279 & $195.54225 $ & $26.934872 $ & $21.61\pm0.19$ & $1.02\pm 0.23$ & $25.23\pm 0.52$ & $0.95 \pm 0.54$ & $0.91 \pm 0.14$ & $39.0 $ & $21.22\pm0.19$ & $1.1 \pm 0.23$ & $25.0 \pm 0.5 $ & $1.05 \pm 0.6 $ & $0.93 \pm 0.13$ & $54.9 $ & $0.63$ & $\rm LSB, >RS $ \\ COMA\_10\_7296 & $195.25078 $ & $26.677841 $ & $21.62\pm0.17$ & $0.9 \pm 0.22$ & $24.96\pm 0.56$ & $1.17 \pm 0.5 $ & $0.53 \pm 0.14$ & $6.42 $ & $20.97\pm0.16$ & $0.86 \pm 0.19$ & $24.22 \pm 0.52$ & $1.4 \pm 0.56 $ & $0.53 \pm 0.13$ & $7.34 $ & $0.85$ & $\rm LSB, >RS $ \\ COMA\_10\_7323 & $195.48868 $ & $26.91578 $ & $21.63\pm0.15$ & $0.76\pm 0.2 $ & $24.59\pm 0.61$ & $0.96 \pm 0.46$ & $0.85 \pm 0.13$ & $57.95 $ & $21.24\pm0.16$ & $0.7 \pm 0.19$ & $24.03 \pm 0.6 $ & $1.0 \pm 0.55 $ & $0.87 \pm 0.12$ & $59.36 $ & $0.62$ & $\rm LSB $ \\ COMA\_10\_7374 & $195.10555 $ & $27.184435 $ & $21.65\pm0.18$ & $0.92\pm 0.22$ & $25.03\pm 0.55$ & $0.62 \pm 0.51$ & $0.5 \pm 0.14$ & $35.87 $ & $21.02\pm0.17$ & $0.93 \pm 0.2 $ & $24.44 \pm 0.51$ & $0.75 \pm 0.57 $ & $0.54 \pm 0.13$ & $38.77 $ & $0.5$ & $\rm LSB, Y2 $ \\ COMA\_10\_7441 & $195.551728$ & $27.151625 $ & $21.69\pm0.15$ & $0.74\pm 0.21$ & $24.62\pm 0.62$ 
& $0.39 \pm 0.47$ & $0.52 \pm 0.13$ & $5.03 $ & $21.12\pm0.15$ & $0.68 \pm 0.18$ & $23.86 \pm 0.59$ & $0.56 \pm 0.55 $ & $0.46 \pm 0.12$ & $178.49$ & $0.87$ & $\rm LSB, >RS $ \\ COMA\_10\_7508 & $195.283688$ & $27.076344 $ & $21.72\pm0.15$ & $0.73\pm 0.21$ & $24.6 \pm 0.63$ & $0.1 \pm 0.46$ & $0.74 \pm 0.13$ & $48.81 $ & $21.43\pm0.16$ & $0.71 \pm 0.19$ & $24.24 \pm 0.62$ & $0.06 \pm 0.56 $ & $0.85 \pm 0.13$ & $58.59 $ & $0.57$ & $\rm LSB $ \\ COMA\_10\_7564 & $195.32216 $ & $26.761515 $ & $21.74\pm0.2 $ & $1.01\pm 0.23$ & $25.34\pm 0.54$ & $1.02 \pm 0.55$ & $0.57 \pm 0.14$ & $159.69$ & $21.73\pm0.19$ & $0.82 \pm 0.23$ & $24.86 \pm 0.63$ & $1.11 \pm 0.59 $ & $0.67 \pm 0.13$ & $140.72$ & $0.52$ & $\rm LSB $ \\ COMA\_10\_7634 & $195.30801 $ & $26.792051 $ & $21.77\pm0.15$ & $0.69\pm 0.2 $ & $24.54\pm 0.66$ & $1.37 \pm 0.46$ & $0.73 \pm 0.13$ & $171.36$ & $21.39\pm0.16$ & $0.65 \pm 0.19$ & $24.03 \pm 0.64$ & $1.06 \pm 0.55 $ & $0.79 \pm 0.12$ & $161.72$ & $0.68$ & $\rm LSB, >RS $ \\ $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\ \hline \hline \end{tabular} \caption{Catalogue of Coma cluster galaxies with structural parameters from \texttt{GALFIT} analysis. This is an example page from the complete catalogue, which is available online. The columns are (1): Galaxy ID, written to indicate the corresponding field, (2) and (3): Galaxy position (RA and Dec, respectively) in degrees (J2000), (4)--(9) and (10)--(15): the total AB magnitude, circularized effective radius, mean surface brightness within the effective radius, \Sersic/ index, axial ratio (all with corresponding uncertainties derived following the description in the text), and position angle from the $V$ and $R$-band analysis, respectively. We have applied K-correction and Galactic extinction correction to our photometry. 
$PA$ is measured East from North, (16): Galaxy colour obtained from \texttt{SExtractor} analysis, (17): comments about the galaxy, starting with its surface brightness category as defined in Section~\ref{sec:summ} (HSB, ISB or LSB); galaxies with colours redder than the $1\sigma$ intrinsic scatter around the red-sequence are flagged with $>$RS as explained in the text; identifiers from the literature (from \protect\hyperlink{Y16}{Y16} and from \citealt{GMP_1983}); and tidal features (I: interacting; D: disturbed morphology; JF: jellyfish; R: rings; P: plumes; S: shells).} \label{tab:complete_tab} \end{sidewaystable*} \begin{table*} \begin{tabular}{@{}l c c c c c c c c c} \hline ID & RA & Dec & $V$ & $R_{\rm e}{_{_{V}}}$ & $\langle \mu{_{\rm {eff},V}} \rangle$ & $n{_{_{V}}}$ & $q{_{_{V}}}$ & $PA{_{_{V}}}$ & $V-R$\\ & [Degree] & [Degree] & [mag] & [kpc] & [\mbox{mag arcsec$^{-2}$}] & & & [deg] & [mag] \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) \\ \hline COMA\_14\_1859 & $195.21524$ & $28.871029$ & $19.21 \pm 0.16$ & $2.41 \pm 0.21$ & $24.69 \pm 0.25$ & $0.51 \pm 0.47$ & $0.95 \pm 0.13$ & $ 74$ & $ 0.52$\\ COMA\_23\_2058 & $194.72876$ & $28.65372 $ & $19.32 \pm 0.16$ & $2.3 \pm 0.21$ & $24.7 \pm 0.25$ & $0.32 \pm 0.47$ & $0.36 \pm 0.13$ & $156$ & $ 0.47$\\ COMA\_30\_2126 & $194.51813$ & $27.056221$ & $19.36 \pm 0.15$ & $2.1 \pm 0.2 $ & $24.54 \pm 0.26$ & $1.11 \pm 0.46$ & $1.0 \pm 0.13$ & $ 83$ & $0.46$\\ COMA\_33\_2937 & $194.4494 $ & $28.542898$ & $19.79 \pm 0.17$ & $2.03 \pm 0.22$ & $24.9 \pm 0.29$ & $1.24 \pm 0.5 $ & $0.94 \pm 0.14$ & $ 15$ & $0.44$\\ COMA\_14\_3083 & $195.15186$ & $28.930553$ & $19.87 \pm 0.17$ & $1.91 \pm 0.21$ & $24.84 \pm 0.29$ & $1.05 \pm 0.49$ & $0.71 \pm 0.13$ & $179$ & $0.47$\\ COMA\_32\_3240 & $194.50377$ & $27.88233 $ & $19.95 \pm 0.18$ & $2.08 \pm 0.22$ & $25.11 \pm 0.3 $ & $0.95 \pm 0.52$ & $0.91 \pm 0.14$ & $ 56$ & $0.39$\\ COMA\_12\_3266 & $195.17593$ & $28.164309$ & $19.96 \pm 0.17$ & $1.84 \pm 0.21$ 
& $24.85 \pm 0.3 $ & $1.21 \pm 0.49$ & $0.84 \pm 0.13$ & $ 21$ & $0.45$\\ COMA\_42\_3597 & $193.80339$ & $27.786472$ & $20.1 \pm 0.16$ & $1.62 \pm 0.21$ & $24.71 \pm 0.32$ & $1.01 \pm 0.48$ & $0.76 \pm 0.13$ & $ 92$ & $0.46$\\ COMA\_41\_3676 & $193.95773$ & $27.398582$ & $20.13 \pm 0.19$ & $2.04 \pm 0.23$ & $25.25 \pm 0.31$ & $1.13 \pm 0.54$ & $0.99 \pm 0.14$ & $ 80$ & $0.36$\\ COMA\_24\_3717 & $194.97047$ & $28.7967 $ & $20.15 \pm 0.15$ & $1.5 \pm 0.21$ & $24.6 \pm 0.34$ & $1.02 \pm 0.46$ & $0.7 \pm 0.13$ & $ 88$ & $0.5$\\ COMA\_31\_3861 & $194.49976$ & $27.619015$ & $20.21 \pm 0.17$ & $1.74 \pm 0.22$ & $24.98 \pm 0.32$ & $1.08 \pm 0.5 $ & $0.99 \pm 0.14$ & $120$ & $0.46$\\ COMA\_32\_3906 & $194.32854$ & $28.154984$ & $20.23 \pm 0.16$ & $1.53 \pm 0.21$ & $24.72 \pm 0.34$ & $1.0 \pm 0.48$ & $0.81 \pm 0.13$ & $ 63$ & $0.38$\\ COMA\_30\_3930 & $194.60199$ & $26.745077$ & $20.24 \pm 0.16$ & $1.58 \pm 0.21$ & $24.81 \pm 0.34$ & $0.86 \pm 0.49$ & $0.9 \pm 0.13$ & $ 12$ & $0.55$\\ COMA\_22\_4042 & $194.72652$ & $28.08718 $ & $20.29 \pm 0.17$ & $1.64 \pm 0.22$ & $24.93 \pm 0.34$ & $0.84 \pm 0.5 $ & $0.92 \pm 0.14$ & $157$ & $0.48$\\ COMA\_12\_4296 & $195.2982 $ & $28.145302$ & $20.41 \pm 0.22$ & $2.11 \pm 0.24$ & $25.6 \pm 0.33$ & $1.04 \pm 0.58$ & $0.71 \pm 0.15$ & $ 47$ & $0.46$\\ COMA\_22\_4410 & $195.06532$ & $28.029459$ & $20.45 \pm 0.17$ & $1.5 \pm 0.22$ & $24.9 \pm 0.36$ & $1.08 \pm 0.5 $ & $0.69 \pm 0.14$ & $ 37$ & $0.54$\\ COMA\_34\_4521 & $194.25638$ & $28.87275 $ & $20.5 \pm 0.18$ & $1.58 \pm 0.22$ & $25.06 \pm 0.35$ & $0.85 \pm 0.51$ & $0.75 \pm 0.14$ & $ 36$ & $1.18$\\ COMA\_32\_4544 & $194.46745$ & $28.105352$ & $20.51 \pm 0.18$ & $1.58 \pm 0.22$ & $25.07 \pm 0.35$ & $0.83 \pm 0.52$ & $0.92 \pm 0.14$ & $ 70$ & $0.33$\\ COMA\_41\_4570 & $193.86931$ & $27.684788$ & $20.52 \pm 0.18$ & $1.55 \pm 0.22$ & $25.05 \pm 0.36$ & $0.75 \pm 0.51$ & $0.62 \pm 0.14$ & $158$ & $0.41$\\ COMA\_22\_4600 & $194.71304$ & $28.13771 $ & $20.54 \pm 0.18$ & $1.58 \pm 0.22$ & $25.1 
\pm 0.36$ & $0.63 \pm 0.52$ & $0.84 \pm 0.14$ & $ 72$ & $0.52$\\ COMA\_30\_4629 & $194.23543$ & $27.045115$ & $20.55 \pm 0.17$ & $1.48 \pm 0.22$ & $24.96 \pm 0.36$ & $0.76 \pm 0.5 $ & $0.99 \pm 0.14$ & $ 60$ & $0.64$\\ COMA\_22\_4735 & $194.77484$ & $27.97042 $ & $20.6 \pm 0.18$ & $1.56 \pm 0.22$ & $25.13 \pm 0.36$ & $0.29 \pm 0.52$ & $0.46 \pm 0.14$ & $ 21$ & $0.55$\\ COMA\_22\_5227 & $194.9666 $ & $27.82323 $ & $20.79 \pm 0.21$ & $1.71 \pm 0.24$ & $25.52 \pm 0.37$ & $0.44 \pm 0.57$ & $0.33 \pm 0.14$ & $ 42$ & $0.71$\\ COMA\_31\_5314 & $194.59314$ & $27.38581 $ & $20.82 \pm 0.2 $ & $1.53 \pm 0.23$ & $25.31 \pm 0.38$ & $0.78 \pm 0.54$ & $0.76 \pm 0.14$ & $100$ & $0.45$\\ COMA\_12\_5647 & $195.48859$ & $28.138306$ & $20.95 \pm 0.22$ & $1.73 \pm 0.25$ & $25.71 \pm 0.38$ & $1.03 \pm 0.6 $ & $0.95 \pm 0.15$ & $161$ & $0.53$\\ COMA\_23\_5706 & $194.8023 $ & $28.353645$ & $20.97 \pm 0.23$ & $1.8 \pm 0.25$ & $25.82 \pm 0.38$ & $1.06 \pm 0.61$ & $0.99 \pm 0.15$ & $ 39$ & $0.29$\\ COMA\_13\_6651 & $195.3371 $ & $28.539967$ & $21.34 \pm 0.25$ & $1.7 \pm 0.26$ & $26.05 \pm 0.42$ & $0.7 \pm 0.64$ & $0.77 \pm 0.16$ & $ 46$ & $0.4$\\ COMA\_32\_6911 & $194.60907$ & $27.867151$ & $21.45 \pm 0.25$ & $1.55 \pm 0.26$ & $25.96 \pm 0.44$ & $1.01 \pm 0.63$ & $0.98 \pm 0.15$ & $122$ & $0.28$\\ COMA\_22\_6931 & $194.78117$ & $28.13321 $ & $21.46 \pm 0.27$ & $1.73 \pm 0.27$ & $26.21 \pm 0.43$ & $1.02 \pm 0.67$ & $0.81 \pm 0.16$ & $133$ & $0.5$\\ \hline \hline \end{tabular} \caption{Newly discovered ultra-diffuse galaxies in the Coma cluster catalogued here for convenience. Columns are the same as in Table~\ref{tab:complete_tab} although we only show parameters from the $V$-band.} \label{tab:UDGs} \end{table*} \section*{Data Availability} The data underlying this article are available in the article and in its online supplementary material. \bibliographystyle{mnras}
\section{Introduction} \label{section::introduction} The impact of macroeconomic news on the stock market has been the subject of a considerable amount of research during the past thirty years. Asset pricing theory suggests that news about macroeconomic factors such as employment or the level of output should influence financial markets, since it carries information about the aggregate investment opportunity set of the economy (see \cite{Merton1973} and \cite{Breeden1979}). Despite this, empirical evidence supporting market effects of real economic news remains weak and surprisingly mixed\footnote{Previous studies found statistically-significant associations between stock market returns and various monetary variables such as different interest rates (e.g. \cite{chen1986economic}, \cite{chen1991financial}, \cite{chan1998risk}) and monetary aggregates (e.g. \cite{cornell1983money}, \cite{pearce1983reaction}, \cite{pearce}).}. In many cases, important macroeconomic news is found to have no effect at all. For example, \cite{flannery} examine the market impact of $17$ macroeconomic announcements and find that news relating to industrial production, unemployment and the real GNP appears to have no significant impact on stock prices. Similarly, earlier studies such as, for example, \cite{pearce} and \cite{Jain1988} find that the markets largely discount statistical announcements about industrial production and unemployment, even after taking prior expectations into account. \cite{Ghent2010} also fails to find a significant market reaction to GDP and unemployment news. The different stages of the business cycle can change the context within which macroeconomic signals are interpreted, and the market impact of news becomes more apparent once the current economic conditions are taken into account. 
\cite{Boyd2005} and \cite{mcqueen} find that news about rising unemployment has a negative effect on stock prices during economic contractions, since it can indicate lower corporate earnings and dividends, but a positive effect during expansions, as it may signal a greater likelihood of lower interest rates. More recently, \cite{Birz2011} find that after controlling for both the market expectations and the stages of the business cycle, the market reacts significantly to GDP and unemployment announcements. While the economic conditions associated with the market's interpretation of and reaction to news are becoming better understood, interestingly, no studies appear to focus on the news characteristics associated with the greatest market impact. For example, controlling for economic conditions and expectations, does unusually-good or unusually-bad news lead to a stronger market response? If so, is there asymmetry in this relationship? In other words, is the market effect of economic news non-linear and asymmetric? The aim of this paper is to fill this gap. We re-assess the market effect of macroeconomic news using a relatively-new statistical methodology based on the theory of conditional copulas, and a new comprehensive measure of macroeconomic news that is derived from the indexing of news wires. Our measure quantifies the polarity, intensity and volume of news related to U.S. employment, industrial activity, housing and construction, and the energy market. In addition to news that relates to scheduled releases of economic statistics, our measure captures qualitative signals such as, for example, comments made by senior U.S. policy officials, policy developments, as well as man-made and natural disasters that may have an economic impact, and is arguably the broadest measure of economic news used in the literature to date. By adopting the copula approach, we are able to construct a flexible model that allows for non-linear and asymmetric market effects of economic news. 
To our knowledge, this represents the first application of copulas to the analysis of the news-stock market relationship. Using the copula model, we map the association between economic news and the stock market in high detail and find that, controlling for expectations, surprises associated with releases of economic data, and prevailing economic conditions, the market impact of news is heavily skewed toward the most unfavorable announcements, which tend to be associated with significant market declines in all stages of the business cycle. In other words, it appears that it is only when the news is all doom and gloom that the market really listens. Finally, since we are able to isolate the data-revealing component of news, we show that the qualitative economic news that our index captures does have a significant market effect. This may have implications for policy makers seeking to minimize potential disruptions associated with their announcements and comments. This paper is organized as follows. Section \ref{section::news_variable} details the construction of our news variable, which we call the Macroeconomic News Index. The index is based on manual review and classification of news-wire releases, and we review the data sources, indexing methodology, and the resulting index series in this section. Section \ref{section::methodology} introduces the copula approach and details our modeling methodology. Section \ref{section::empirical_results} presents empirical results relating to the market impact of economic news. Lastly, a discussion of the results is provided in Section \ref{section::discussion}. \section{Our news variable: the macroeconomic news index} \label{section::news_variable} Recent evidence suggests that the stock market reaction to economic news is not limited to responses to scheduled statistical releases. 
For example, while many earlier studies failed to find any significant relationship between GDP announcements and the stock market, \cite{Birz2011} document substantial market reaction to output-related news using a more comprehensive news measure based on the classification of newspaper headlines of \cite{Lott2004}. We develop a new quantitative measure of U.S. macroeconomic news that is based on full review and classification of releases carried by major business news wires. In terms of economic coverage, professional news wire services differ from newspapers in several important ways, and we begin by detailing our news data sources and the indexing methodology used here. \subsection{Sources of economic news} We use the Thomson Reuters Newswires (TRN) and the Dow Jones Energy Service (DJES) as the sources of public information that underpin the news variable that we construct. Thomson Reuters is one of the largest news providers in the world, and maintains several newswires that cover a variety of U.S. business and economic developments. The Dow Jones Energy Service is a specialized source of news relating to the global energy industry, with coverage ranging from market information and significant firm-specific news to geopolitical events and energy policy. Since both are professional services, they emphasize timeliness of coverage and are usually among the first to break a story. Unlike newspaper coverage, newswire releases tend to be highly-condensed and factual, making interpretation easier in most cases. They also contain few journalists' opinions, which substantially reduces the political bias of economic coverage that has been found to exist in newspaper stories (e.g. see \cite{Mullainathan2005}, \cite{Groseclose2005}, \cite{dyck2003media} and \cite{Lott2004}). 
Their position at the top of the news chain, breadth of coverage and unambiguous format make the newswires an excellent source of information for indexing that is comprehensive, reduces the chance of misinterpretation, and contains little noise arising from the repetition of news by multiple outlets. Since our aim is to capture and quantify the inflow of news relating to U.S. macroeconomic conditions, we restrict attention to releases that follow four key themes usually associated with economic activity. In particular, we focus on stories covering U.S. industrial production, national and regional labor markets, housing markets and the construction industry, and energy policy and markets. Industrial production, employment, and construction activity are often viewed as important indicators that carry signals about aggregate output, consumption, and return to capital. Energy costs can affect factor productivity and corporate profitability, which motivates our choice of energy news as one of the index components. We use Dow Jones Factiva as the source of all historical newswire releases. Factiva captures and stores all messages carried by TRN and DJES in full, exactly as they appeared at the time of the release, without edits or revisions that may have been made to the content at a later date. Updates and corrections to earlier releases are typically issued by TRN and DJES separately, and are also included in the index. The overall pool of stories transmitted during our sample period is extremely large, and only a minority of wires are related to U.S. macroeconomic conditions. We utilize the Dow Jones Intelligent Indexing (DJII) service to identify relevant releases that fit one of our four selected themes. \subsection{Indexing methodology} For every month in our sample, the construction of the index proceeds as follows. 
First, we collect all economic stories that were carried by TRN and DJES during the month and sort them into one of the four groups based on economic theme. For example, during April 2014 the DJII search yields $27$ newswires relating to U.S. employment, $17$ wires covering U.S. industrial production, $7$ wires on housing markets and the construction industry, and a further $27$ releases focusing on energy markets and policy that were carried by TRN and DJES. Once the relevant stories are identified, we review the content of every story in full and classify the release as ``positive", ``negative", or ``neutral". A story is viewed as ``positive" if it contains information indicating improvement in current or future macroeconomic conditions, which for energy news is interpreted as easing of market conditions and a likely decline in the price of key energy carriers such as oil or liquefied natural gas. ``Neutral" stories are those that signal neither deterioration nor improvement. A large portion of industrial production, employment, and housing and construction-related newswires is driven by scheduled releases of macroeconomic statistics issued, among others, by the U.S. Department of Commerce, the Bureau of Labor Statistics and the Federal Reserve Banks, or by revisions to their previously-released figures. When interpreting such releases, it becomes important to account for prior expectations since, for example, a solid gain of $50,000$ in non-farm payrolls may in fact be viewed as negative news in times when higher growth was expected. Fortunately, in addition to the headline number, most data-related TRN releases also include a measure of expectations based on the most recent Reuters poll of forecasters. We therefore classify such releases in relation to expectations and treat the release as ``positive" if it exceeds expectations, ``negative" if it falls short of expectations, and ``neutral" if it meets expectations. 
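The classification rule for data-driven releases can be sketched as follows (a minimal illustration; the function name, the tolerance argument and the $75,000$ consensus figure are our own, not part of the indexing software):

```python
def classify_release(actual, expected, tolerance=0.0):
    """Classify a data-driven newswire release against the poll
    expectation: above -> "positive", below -> "negative",
    in line with expectations -> "neutral"."""
    if actual > expected + tolerance:
        return "positive"
    if actual < expected - tolerance:
        return "negative"
    return "neutral"

# A 50,000 payrolls gain against a (hypothetical) 75,000 consensus
# is classified as a "negative" release despite the headline gain.
print(classify_release(50_000, 75_000))  # negative
```

The `tolerance` argument allows a release that matches expectations to within a small margin to be treated as ``neutral".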
The part of the index that is driven by scheduled releases of core macroeconomic data therefore captures ``economic surprises", or the unexpected component of macroeconomic news. Revisions to past economic data are another major component of data-driven news. For example, growth estimates issued by the U.S. Bureau of Economic Analysis tend to be revised multiple times, and revisions can be substantial and occur several months after the initial issue. Many historical macroeconomic data that are available today are therefore different from what was reported at the time of the release. The flow of such revisions represents important information that is absent from much of the contemporary data, and we include all news relating to revisions in the index. The remaining, non-data-driven industrial production, employment, and housing and construction-related stories, along with the bulk of stories covering the energy markets, are qualitative in nature and include comments made by senior officials and policy-makers such as, for example, members of the Board of Governors of the Federal Reserve System, the U.S. Secretary of Commerce or the Secretary of Energy, important geo-political developments such as sanctions or energy cartel quota decisions, as well as natural disasters with potential for economic impact. We describe the content of TRN and DJES economic releases along with specific classification guidelines in detail in the rest of this section. The total number of newswire releases reviewed during the construction of the index across the four news groups is presented in Table \ref{table::releasecountsummary}. 
\begin{table}[htbp] \begin{center} \begin{tabular}{lrrrr} \hline \textit{News group} & \multicolumn{1}{l}{\textit{Positive}} & \multicolumn{1}{l}{\textit{Negative}} & \multicolumn{1}{l}{\textit{Neutral}} & \multicolumn{1}{l}{\textit{Total}} \\ \hline \textit{Employment} & 3328 & 3220 & 479 & 7027 \\ \textit{Housing} & 982 & 830 & 126 & 1938 \\ \textit{Industry} & 1407 & 946 & 107 & 2460 \\ \textit{Energy} & 3830 & 3203 & 1281 & 8314 \\ \hline \textit{Total} & 9547 & 8199 & 1993 & 19739 \\ \hline \end{tabular} \end{center} \caption{Number of Reuters and Dow Jones Energy Service newswires by category, January 1999 to April 2014.} \label{table::releasecountsummary} \end{table} Since our classification is similar to the grouping of responses to the University of Michigan Consumer Sentiment Index, we adopt its methodology and use the difference in percentage of positive and negative stories as the basis for the index. An analogous indexing approach is used in \cite{Birz2011}. To calculate the monthly value of the Macroeconomic News Index we first find, for each of the four news groups, the difference between the numbers of positive and negative stories expressed as a share of all stories in the group. For example, Table \ref{table::subindexexample} shows the raw news counts and the resulting four \textit{sub-indexes} for April 2014. This yields a set of four indexes capturing the polarity of U.S. employment, housing, industrial production, and energy-related news that are interesting in their own right, and are reviewed in more detail in Section \ref{section::macronewsindex}. The final value of the Macroeconomic News Index for the month is calculated by finding the average of the four sub-indexes, which amounts to $0.03$ during April 2014. 
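The sub-index arithmetic can be sketched as follows (a minimal illustration using the April 2014 counts from Table \ref{table::subindexexample}; the helper name is ours):

```python
def sub_index(positive, negative, neutral):
    # Difference between positive and negative story counts,
    # expressed as a share of all stories in the group.
    total = positive + negative + neutral
    return (positive - negative) / total

# April 2014 counts: (positive, negative, neutral) per news group.
counts = {"employment": (17, 10, 0),
          "housing": (0, 5, 2),
          "industry": (12, 3, 2),
          "energy": (13, 13, 1)}
subs = {group: sub_index(*c) for group, c in counts.items()}
# Rounded to two decimals these match the table: 0.26, -0.71, 0.53, 0.0;
# the monthly index is then the average of the four sub-indexes.
```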
Positive values of the sub-indexes and of the final index indicate that a given month was dominated by positive news stories, while the magnitude of the reading is proportional to the prevalence of a particular type of news, with $1$ and $-1$ representing entirely positive and entirely negative news-months, respectively. \begin{table}[htbp] \begin{center} \begin{tabular}{lrrrr} \hline & \textit{Positive} & \textit{Negative} & \textit{Neutral} & \textit{Sub-index} \\ \hline \textit{Employment} & 17 & 10 & 0 & 0.26 \\ \textit{Housing} & 0 & 5 & 2 & -0.71 \\ \textit{Industry} & 12 & 3 & 2 & 0.53 \\ \textit{Energy} & 13 & 13 & 1 & 0 \\ \hline \end{tabular} \end{center} \caption{Raw news counts and resulting news sub-indexes, April 2014.} \label{table::subindexexample} \end{table} The indexing methodology adopted here, as arguably any other approach to building such an index, has some limitations. Here, the averaging of the four sub-indexes places equal weight on employment, housing, industry and energy-related news. Alternative weighting is clearly possible. To allow for alternative index definitions, we make our disaggregated data available in addition to the final index series. Next, we review the four sub-indexes and the resulting macroeconomic news index in more detail. \subsection{Employment news sub-index} \label{section::macronewsindex} Nearly two thirds of employment-related wires cover scheduled releases and revisions to economic statistics by the agencies of the U.S. Department of Labor, with a large share of releases issued by the Bureau of Labor Statistics. These include statistics on the national labor force, participation rates, employment and unemployment rates, non-farm payrolls, job openings and labor turnover, as well as jobless claims reports. 
State-level statistical releases also make up a substantial part of the labor news flow and are included in the index, but their share relative to the national stories declines substantially closer to the end of our sample. Such data-driven releases contain the least ambiguity, and interpretation is straightforward in most cases. For example, on August 1, 2014 Reuters wrote \begin{quote} ``U.S. job growth slowed a bit in July and the unemployment rate unexpectedly rose, pointing to slack in the labor market ... Nonfarm payrolls increased 209,000 last month after surging by 298,000 in June, the Labor Department said on Friday. Economists had expected a 233,000 job gain", \end{quote} which we recorded as ``negative" due to the indication of rising unemployment and lower-than-expected job growth. Most non-data-driven employment-related releases contain comments by senior officials on the state of the U.S. labor market. For example, on July 15, 2014, Reuters issued a release citing the Chair of the Federal Reserve: \begin{quote} ``Labor force participation appears weaker than one would expect based on the ageing of the population and the level of unemployment. These and other indications that significant slack remains in labor markets are corroborated by the continued slow pace of growth in most measures of hourly compensation", \end{quote} which we also classify as ``negative". While such comments clearly indicate a negative view of the labor market, they may be perceived as good news in other contexts. For example, financial markets may view this as a signal suggesting a greater likelihood of monetary easing. The interaction between the market impact of employment news and other economic variables is documented in, for example, \cite{Boyd2005}, where negative unemployment news is found to have a negative effect on equity prices during economic contractions, but a positive effect during expansions. 
Such non-linear effects can be captured using appropriate econometric models and controls, as we do in Section \ref{section::empirical_results}, rather than by contextual indexing of news. Here, we make a particular effort to classify releases from the standpoint of the underlying real economic signals, rather than their market interpretation or impact. Figure \ref{fig::eni} of the Appendix shows the monthly values of the employment news sub-index and the total number of employment-related news releases carried by Reuters between January 1999 and April 2014. While the index remained positive during most of the past decade and a half, the two periods of time dominated by persistent negative news coincide with the $2001-2002$ and $2007-2009$ U.S. NBER recessions. The volume of labor market news varies substantially during our sample period, reaching a peak of $83$ releases in October 2002, with peak volume occurring roughly in the aftermath of both recessions, and averaging $38$ releases per month. \subsection{Housing and construction news sub-index} As with labor market news, most housing and construction releases are data-driven, and cover statistics issued by the U.S. Department of Commerce, the National Association of Home Builders, the National Association of Realtors, the American Institute of Architects, as well as the U.S. Department of Housing and Urban Development. Such releases typically include data on new and existing home sales, new home completions, building permits and housing starts, home financing costs, mortgage issue and refinance rates, home affordability and rents, and even architectural billings. Some data, such as home sales, building permits and completions, are periodically revised, and we include news of revisions in the index. A substantial number of releases cover related commodity-market news such as, for example, significant changes in the price of construction timber. 
The remaining, non-data-driven releases include news about major policy initiatives aimed at stimulating the construction industry and occasional extreme weather events that adversely affect home building. The monthly values of the housing and construction news sub-index and the volume of housing-related news releases that appear in our sample are shown in Figure \ref{fig::housingnewssubindex}. As with the labor market news, the $1999 - 2014$ period was dominated by generally positive stories, with the exception of the five-year span surrounding the U.S. housing crisis. The number of housing and construction-related releases carried by Reuters declines steadily and on average amounts to only $10$ releases per month, which is the lowest among the four news groups in the sample. \subsection{Industry news sub-index} The majority of signals about U.S. industrial activity that receive news coverage are in the form of commentary, special reports, and statistical releases issued by the Federal Reserve Banks, mainly those of Chicago, Philadelphia and Richmond, industry associations such as the Institute for Supply Management (ISM) and the Manufacturers Alliance, as well as research circulated by major financial firms. These include data on industrial output, durable goods orders and non-farm productivity, along with a range of indexes of national and state-level manufacturing activity such as, for example, the Chicago Fed Midwest Manufacturing Index or the Purchasing Managers' Index (PMI). Periodic polls of analysts' expectations by Reuters as well as data revisions also receive significant attention and are frequently mentioned in the news. As with employment and construction-related news, non-data-driven industry releases mostly contain commentary by senior officials on the state of U.S. manufacturing and other industrial indicators. The number of industry-related Reuters releases and the monthly values of the industry news sub-index are shown in Figure \ref{fig::ini}. 
As with employment news, periods of persistently-negative industry news appear to roughly coincide with the two most recent NBER recessions. Interestingly, the most recent span of negative industry news occurred in late $2011$ and early $2012$ -- well past the June $2009$ NBER recession end date, which supports the ``double-dip" view of the latest U.S. recession. \subsection{Energy news sub-index} Among the four groups of news included in the index, energy news contains the greatest number and by far the widest range of stories. Data-driven releases represent a minority of energy news and, in addition to announcements of major price changes for energy carriers such as crude oil or liquefied natural gas, include production figures, quota assignment and compliance rates by members of OPEC, statistics on the U.S. Strategic Petroleum Reserve, petroleum balances and other items tracked by the U.S. Energy Information Administration, as well as data released by the International Energy Agency. The non-data component of energy news is also very broad and contains news of natural and man-made disasters that adversely affect energy supply, such as, for example, a major hurricane in the Gulf of Mexico, an oil spill, or a pipeline breakdown that interrupts oil production and refining, international and regional conflicts, as well as policy developments that lead to supply restrictions or easing such as sanctions or rules that permit new extraction. Figure \ref{fig::enni} shows the number of energy-related DJES releases along with the values of the energy news sub-index. Set against the backdrop of steadily rising energy prices, combined with political turmoil and military action that involved some of the largest energy producers in the world, the energy news sub-index remained negative throughout most of the past fifteen years, reflecting both growing supply risks and global energy demand. 
\subsection{Combined index} Final values of the Macroeconomic News Index, as well as the combined number of processed Reuters and DJES releases, are shown in Figure~\ref{fig::mni}. The index appears to correctly reflect the inflow of poor economic news during the two NBER recessions of the $1999-2014$ period and the prevalence of good news indicating periods of growth in between. \section{Our methodology: the conditional copula approach} \label{section::methodology} \subsection{A copula approach} To gain a deeper understanding of the relationship between macroeconomic news and equity returns, we adopt a relatively-new statistical approach that is based on the theory of copulas. Consider a pair of random variables $X$ and $Y$, and let $F(x;\theta_x)$ and $G(y;\theta_y)$ represent their marginal distribution functions with parameters $\theta_x$ and $\theta_y$, and $H(x,y)$ be the joint CDF. Following a result by \cite{sklar}, the joint CDF can be expressed as \begin{equation} \label{eq::sklars} H(x,y) = C[F(x;\theta_x),G(y;\theta_y);\theta_c],\hspace{5mm} (x,y) \in \mathbb{R}^2, \end{equation} where $C:[0,1]^2\rightarrow[0,1]$ is the so-called copula of $X$ and $Y$, and $\theta_c$ is a vector containing copula parameters. Letting $u = F(x;\theta_x)$ and $v = G(y;\theta_y)$, it is evident that the copula is simply the joint CDF of $(u,v)$, which we can write as $C(u,v;\theta_c) = H[F^{-1}(u;\theta_x),G^{-1}(v;\theta_y)]$. Copulas are becoming central to the analysis of dependence as they provide a complete description of the association between $X$ and $Y$, and this description is unique when the variables are continuous. Different families of copulas represent a variety of dependence structures, with parameters in $\theta_c$ measuring the strength of association. 
For example, some of the better-known copula families include the Gaussian copula that captures linear correlation, Gumbel and Clayton copulas that measure asymmetric dependence that may be stronger among larger or smaller values of the data, as well as $t$ and Symmetrized Joe-Clayton (SJC) copulas that allow for tail dependence, or dependence among data extremes. Many well-known measures of association can be represented in terms of the copula. For example, rank-correlation measures such as Kendall's $\tau$ and Spearman's $\rho$ can be expressed in terms of $C$ as \begin{equation} \tau = 4 \int_{0}^1 \int_{0}^1 C(u,v) dC(u,v) - 1 \end{equation} and \begin{equation} \rho=12 \int_{0}^1 \int_{0}^1 C(u,v) dudv -3. \end{equation} A major advantage of the copula approach is that it enables the separate modelling of the marginal behaviour of $X$ and $Y$ and of the dependence structure embedded in $C$, meaning that a rich model of association that is free from limitations imposed by marginals $F$ and $G$ can be specified. For an introduction to copulas see \cite{Joe1997} and \cite{Nelsen2006}, and \cite{Cherubini2004} and \cite{Patton2009} for applications of copulas in finance. When the marginal models include other control variables, this amounts to the \textit{conditional copula} approach of \cite{Patton2006}. In this case, the copula captures dependence after ``netting out" the effects of variables included into the marginals as controls. Our aim here is to explore the nature of dependence between our macroeconomic news variable and aggregate equity returns by specifying and fitting an accurate copula model to our data. 
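The rank-correlation identities above can be checked numerically. For the Gaussian copula with correlation parameter $\rho$, Kendall's $\tau$ has the well-known closed form $\tau = (2/\pi)\arcsin\rho$; the following sketch (illustrative only, not part of our estimation code) verifies this by simulation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.5

# Sample from a Gaussian copula: draw bivariate normals and apply
# the standard normal CDF to each margin (probability integral transform).
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=50_000)
u, v = stats.norm.cdf(z[:, 0]), stats.norm.cdf(z[:, 1])

# Empirical Kendall's tau versus the closed form (2/pi) * arcsin(rho).
tau_hat, _ = stats.kendalltau(u, v)
tau_theory = (2 / np.pi) * np.arcsin(rho)  # = 1/3 for rho = 0.5
```

Because Kendall's $\tau$ is invariant under monotone transformations of the margins, the same value is obtained whether it is computed on $(x,y)$ or on the copula scale $(u,v)$.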
While much work has been done to assess the impact of economic news on equity returns, our particular interest is in probing for non-linearities in this relationship such as tail dependence, which refers to dependence among extremes, or the tendency of very large (or small) values of one variable to be associated with very large (or small) values of another. In other words, in addition to gaining a better understanding of the overall association between macroeconomic news and security returns using the copula approach, our goal is to also measure the market impact of extreme news events. Such extreme dependence is usually studied through the so-called upper- and lower-tail dependence coefficients, denoted $\lambda_u$ and $\lambda_l$ respectively, and defined as \begin{eqnarray} \lambda_u &=& \lim_{u\rightarrow 1^{-}} Pr[F(x)\geq u | G(y) \geq u] = \lim_{u \rightarrow 1^{-}} \frac{1 - 2u + C(u,u)}{1-u}, \\ \lambda_l &=& \lim_{u\rightarrow 0^+} Pr[F(x)\leq u | G(y) \leq u] = \lim_{u \rightarrow 0^+} \frac{C(u,u)}{u}. \end{eqnarray} Greater values of $\lambda_u$ ($\lambda_l$) indicate a stronger tendency of large (small) extremes of the variables to co-occur. One of our objectives is therefore to obtain estimates of the coefficients $\lambda_u$ and $\lambda_l$ between our economic news variable and security market returns. \subsection{Estimation} Differentiating equation (\ref{eq::sklars}) reveals that the joint PDF of $X$ and $Y$ can be represented as \begin{equation} f(x,y;\theta_x,\theta_y,\theta_c) = f(x;\theta_x) g(y;\theta_y) c(u,v;\theta_c), \end{equation} where $f(x;\theta_x)$ and $g(y;\theta_y)$ are the marginal densities and $c(u,v;\theta_c)$ is the so-called \textit{copula density} defined as \begin{eqnarray} c(u,v;\theta_c) = \frac{\partial^2 C(u,v;\theta_c)}{\partial u \partial v}. 
\end{eqnarray} Parameter estimates $\hat{\theta}_x$, $\hat{\theta}_y$ and $\hat{\theta}_c$ can therefore be obtained by maximizing the corresponding log-likelihood function \begin{equation} \log(f(x,y;\theta_x,\theta_y,\theta_c)) = \log(f(x;\theta_x)) + \log(g(y;\theta_y)) + \log(c(u,v;\theta_c)), \end{equation} where $l_x = \log(f(x;\theta_x))$ and $l_y = \log(g(y;\theta_y))$ are the log-likelihoods for the marginal models and $l_c = \log(c(u,v;\theta_c))$ is the \textit{copula log-likelihood}. \cite{Joe1996} propose a two-step method in which the marginal models are first estimated independently by maximum likelihood in order to obtain the values of $u$ and $v$, and estimates of the copula parameters are then obtained by maximizing $l_c$, given the marginal estimates. When the marginals are specified parametrically, this method is known as Inference Functions for the Margins (IFM). When the marginals are non-parametric, the procedure is known as Canonical Maximum Likelihood (CML). Consistency and asymptotic normality of the IFM estimator under a set of regularity conditions are shown in \cite{Joe1997}. The estimator is also known to be highly efficient (for example, see \cite{Michelis2010}), and we use it as the main estimation method here, along with a delete-one jack-knife procedure for estimating coefficient variances, as in \cite{Joe1996}. \section{Empirical results} \label{section::empirical_results} We use the monthly dividend-inclusive return to a value-weighted portfolio of NYSE, NASDAQ and NYSE Arca-listed securities as our measure of aggregate U.S. equity returns, with all data taken from the Center for Research in Security Prices (CRSP) monthly stock files. Next, we specify and estimate the marginal models for security returns and the macroeconomic news index, and the copula model of dependence. \subsection{Series diagnostic tests} Our estimation approach requires that all series be stationary. 
In addition, to ensure validity of the copula estimates, the marginal models must accurately capture any serial dependence such as autocorrelation or conditional heteroskedasticity that may be present in the series. We therefore begin our analysis by conducting diagnostic tests of our news variable $MNI_t$, value-weighted market returns $R_t$, Employment News Sub-Index $ENI_t$, Industry News Sub-Index $INI_t$, Energy News Sub-Index $ENNI_t$ and Housing and Construction News Sub-Index series $HNI_t$. Table \ref{table::diagnostictests} shows rejection decisions for the null hypotheses of a unit root based on the ADF test, conditional heteroskedasticity obtained with the ARCH test of \cite{Engle1982}, and serial correlation based on the Ljung-Box Q-test, for $12$ monthly lags of the series, all carried out at the $5\%$ significance level. \begin{table}[htbp] \begin{center} \begin{tabular}{lccc} \hline & \multicolumn{ 3}{c}{Reject null at $5\%$} \\ \hline & Unit Root & Heteroskedasticity & Autocorrelation \\ \hline $R_t$ & Reject & Reject & Do not reject \\ $MNI_t$ & Reject & Reject & Reject \\ $ENI_t$ & Reject & Reject & Reject \\ $INI_t$ & Reject & Do not reject & Reject \\ $ENNI_t$ & Reject & Reject & Reject \\ $HNI_t$ & Reject & Reject & Reject \\ \hline \end{tabular} \end{center} \caption{Diagnostic tests for news variables and security returns.} \label{table::diagnostictests} \end{table} We find that none of the series appears to have a unit root, that conditional heteroskedasticity is present in all series with the exception of $INI_t$, and that all series except $R_t$ are serially-correlated. \subsection{The marginal models} \label{section::marginalmodels} We begin with the marginal model for security returns. 
Following \cite{Birz2011}, we include an economic surprise variable in the marginal model for market returns to control for the effect of surprises associated with releases of economic statistics. Since the impact of news can depend on current economic conditions, we also include the current unemployment rate to control for such interactions. As the returns are not serially correlated, we do not include any lags. Our marginal model for security returns is then specified as follows: \begin{eqnarray} \label{eq::returnsmodel} R_t &=& \beta_0 + \beta_1 X_t + \beta_2 L_t + \epsilon_{r,t}, \\ \sigma^2_{r,t} &=& \theta_0 + \theta_1 \sigma^2_{r,t-1} + \theta_2 \epsilon^2_{r,t-1}\text{, }\epsilon_{r,t}/\sigma_{r,t} \sim N(0,1), \end{eqnarray} where $R_t$ is the value-weighted market return during month $t$, $X_t$ measures economic data surprises, $L_t$ is the current de-trended U.S. unemployment rate, $\epsilon_{r,t}$ is the error term with variance $\sigma^2_{r,t}$, and $N(0,1)$ is the standard normal distribution. We therefore model $R_t$ as conditionally normal, with a time-varying mean that depends on current economic conditions and real surprises, and a variance that follows a $GARCH(1,1)$ process to capture volatility clusters. We use the U.S. economic surprise index of \cite{scotti2013surprise}, which shows accumulated differences between expected and actual economic data releases, as our measure of $X_t$. The controls for economic data surprises and current economic conditions imply that our analysis of the association between market returns and our economic news variable will be net of market reactions to surprises in statistical releases, and will not be driven by interactions between the markets and the different stages of the business cycle.
In other words, the dependence between equity markets and economic news that we assess in the next section is not driven by the data-revealing component of economic news, and is not due to the market effects of real economic fluctuations, but represents short-term market reactions to the various news-wire signals that we index. Maximum likelihood estimates of the marginal model in (\ref{eq::returnsmodel}) are shown in Table \ref{table::returnmodelestimates}. \begin{table}[htbp] \begin{center} \begin{tabular}{lccccccc} \hline & $\hat{\beta}_0$ & $\hat{\beta}_1$ & \multicolumn{1}{c}{$\hat{\beta}_2$} & $\hat{\theta}_0$ & $\hat{\theta}_1$ & $\hat{\theta}_2$ & $R^2$ \\ \hline \textit{Estimate} & 0.008 & 0.006 & 0.004 & 0.001 & 0.760 & 0.224 & 0.021 \\ \textit{t-Ratio} & \textbf{(2.65)} & (1.04) & (1.21) & (0.85) & \textbf{(8.92)} & \textbf{(2.56)} & - \\ \hline \end{tabular} \end{center} \caption{Maximum likelihood estimates of the returns marginal model. Bold indicates significance at the $5\%$ s.l.} \label{table::returnmodelestimates} \end{table} As expected, the $GARCH(1,1)$ terms are highly significant, confirming the presence of conditional heteroskedasticity in the return series. Much in line with previous literature, economic data surprises and unemployment are borderline significant but still improve the overall model fit. The marginal model for our economic news variable is specified as follows: \begin{eqnarray} \label{eq::mnimodel} MNI_t &=& \alpha_0 + \alpha_1 MNI_{t-1} + \alpha_2 MNI_{t-2} + \epsilon_{m,t},\\ \sigma^2_{m,t} &=& \gamma_0 + \gamma_1 \sigma^2_{m,t-1} + \gamma_2 \epsilon^2_{m,t-1}\text{, }\epsilon_{m,t}/\sigma_{m,t} \sim N(0,1), \end{eqnarray} where $MNI_t$ is the value of the macroeconomic news index for month $t$ and $\epsilon_{m,t}$ is the error term that follows a $GARCH(1,1)$ process as before.
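Both marginal models share the same $GARCH(1,1)$ variance recursion. As an illustration (a sketch of the standard filter, not the estimation code used in the paper), the conditional variances and the Gaussian log-likelihood can be computed as follows, initializing $\sigma^2_1$ at the unconditional variance $\theta_0/(1-\theta_1-\theta_2)$:

```python
import math

def garch_loglik(eps, theta0, theta1, theta2):
    # filter sigma^2_t = theta0 + theta1 * sigma^2_{t-1} + theta2 * eps^2_{t-1}
    # and accumulate the Gaussian log-likelihood of eps_t ~ N(0, sigma^2_t)
    sigma2 = theta0 / max(1e-12, 1.0 - theta1 - theta2)  # unconditional variance
    loglik = 0.0
    sig_path = []
    for e in eps:
        sig_path.append(sigma2)
        loglik += -0.5 * (math.log(2.0 * math.pi * sigma2) + e * e / sigma2)
        sigma2 = theta0 + theta1 * sigma2 + theta2 * e * e
    return loglik, sig_path

# illustrative call with the point estimates from the returns model above
ll, path = garch_loglik([0.1, -0.2, 0.3], 0.001, 0.760, 0.224)
```

Maximizing this log-likelihood over $(\beta, \theta)$ jointly with the mean equation is what the maximum likelihood estimation in the tables performs.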
The number of lagged months of the news variable included in the model was determined by first estimating an autoregressive model of order $6$ and then eliminating insignificant lags. Parameter estimates for model (\ref{eq::mnimodel}) are collected in Table \ref{table::mnimarginalmodelestimates}. \begin{table}[htbp] \begin{center} \begin{tabular}{lccccccc} \hline & $\hat{\alpha}_0$ & $\hat{\alpha}_1$ & $\hat{\alpha}_2$ & $\hat{\gamma}_0$ & $\hat{\gamma}_1$ & $\hat{\gamma}_2$ & $R^2$ \\ \hline \textit{Estimate} & 0.025 & 0.243 & 0.279 & 0.019 & 0.497 & 0.088 & 0.19 \\ \textit{t-Ratio} & (1.51) & \textbf{(3.31)} & \textbf{(3.56)} & (0.69) & (0.73) & (0.698) & - \\ \hline \end{tabular} \end{center} \caption{Maximum likelihood estimates of the marginal model for the macroeconomic news index. Bold indicates significance at the $5\%$ s.l.} \label{table::mnimarginalmodelestimates} \end{table} Highly significant estimates of the autoregressive coefficients indicate a substantial degree of persistence in economic news, which is perhaps unsurprising given the persistent nature of business cycles. \subsection{Goodness of fit tests for marginal models} Before specifying and estimating the copula model, it is important to ensure that the marginals are correctly specified. To this end, we use the Kolmogorov-Smirnov (K-S), ARCH and Ljung-Box Q (LBQ) tests to probe for normality, homoskedasticity and absence of serial correlation in the residuals from models (\ref{eq::returnsmodel}) and (\ref{eq::mnimodel}), and find that we cannot reject any of the nulls at the $5\%$ significance level for either model, indicating a good fit. Note also that the probability transforms $u = F(x;\theta_x)$ and $v = G(y;\theta_y)$ should be uniformly distributed on $[0,1]$.
Following \cite{Patton2006}, we calculate the transforms of the series $R_t$ and $MNI_t$ as $\hat{u}_t = \Phi(\hat{\epsilon}_{r,t} / \hat{\sigma}_{r,t})$ and $\hat{v}_t = \Phi(\hat{\epsilon}_{m,t} / \hat{\sigma}_{m,t})$, where $\Phi(\cdot)$ is the standard normal CDF and $\hat{\epsilon}_{r,t} / \hat{\sigma}_{r,t}$ and $\hat{\epsilon}_{m,t} / \hat{\sigma}_{m,t}$ are the standardized residuals from the marginal models. As an additional goodness of fit check, we test the transforms for uniformity using the K-S test, and find that we cannot reject the null in either case, with p-values close to one. \subsection{Empirical copula table} To gain an initial understanding of the nature of association between security returns and economic news in our data, we construct the so-called \textit{empirical copula table} (for other examples see \cite{knight2005diversification} and \cite{ning2010dependence}). The copula table gives an overview of the structure of dependence, and is often used as the first step of the copula model selection process. First, we arrange the probability transforms $\hat{u}_t$ and $\hat{v}_t$ of the return and news index series in ascending order and sort them evenly into four bins, so that the first bin contains the smallest $25\%$ of observations and the fourth bin contains the largest $25\%$ of observations in the sample. We then construct a frequency table with four rows and four columns, where the cell in row $j$ and column $i$ shows the total number of observation pairs falling in the $j$'th bin of the news index series and the $i$'th bin of the security return series. For example, the number in cell $(4,4)$ of this table shows the total number of observation pairs in the sample that are in the top $25\%$ for both variables.
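The binning procedure just described reduces to cross-tabulating rank-based quartile bins. A minimal sketch (ours, with hypothetical inputs standing in for the CRSP returns and news index transforms):

```python
def empirical_copula_table(u, v, bins=4):
    # assign each observation of u and v to a rank-based quantile bin,
    # then cross-tabulate the bin pairs into a bins x bins frequency table
    n = len(u)

    def bin_of(series):
        order = sorted(range(n), key=lambda t: series[t])
        b = [0] * n
        for rank, t in enumerate(order):
            b[t] = min(bins - 1, rank * bins // n)
        return b

    bu, bv = bin_of(u), bin_of(v)
    table = [[0] * bins for _ in range(bins)]
    for t in range(n):
        table[bv[t]][bu[t]] += 1  # row = news bin, column = return bin
    return table

# perfectly dependent toy data: all mass lands on the diagonal cells
u_toy = [i / 160 for i in range(160)]
toy_table = empirical_copula_table(u_toy, u_toy)
```

By construction each row and column sums to $n/4$, so under independence every cell count should be close to $n/16$.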
It should be evident that such a table represents the joint frequency distribution of the probability transforms $\hat{u}_t$ and $\hat{v}_t$, and since these are uniform, the frequency counts should be evenly distributed among the table cells when the variables are independent. That is, if security returns are independent of economic news, conditional on our controls, we should expect to see roughly $11$ observations in each of the $16$ cells, since our monthly series spanning January 1999 to December 2013 contain a total of $180$ observations. A count substantially greater than $11$ indicates a tendency of our variables to ``cluster'' together in a particular cell. For example, a number in cell $(4,4)$ of this table that is substantially larger than $11$ would indicate that the largest $25\%$ of news index values tend to be associated with the largest $25\%$ of market returns, and so on. \begin{table}[htbp] \begin{center} \begin{tabular}{c|cccc} \hline \multicolumn{1}{c|}{\textit{Bin}} & 1 & 2 & 3 & 4 \\ \hline 1 & \textbf{16} & 6 & 14 & 12 \\ 2 & 10 & 8 & 12 & 10 \\ 3 & 14 & 9 & 14 & 11 \\ 4 & 8 & 12 & 13 & 11 \\ \hline \end{tabular} \end{center} \caption{Empirical copula for security returns and macroeconomic news index.} \label{table::empirical_copula_table} \end{table} We present the empirical copula table for our news variable and security market returns in Table \ref{table::empirical_copula_table}, with the largest deviation from the count expected under independence highlighted in bold. Interestingly, the greatest deviation from the expected count occurs among the bottom $25\%$ of news-return pairs, where the observed count exceeds the expected count by almost $50\%$, indicating that the dependence between returns and economic news appears to be heavily skewed toward the lower tail of the joint distribution. The count in cell $(4,4)$, showing clustering of observations in the top $25\%$ for both variables, equals that expected under independence, suggesting that little association exists among the largest returns and news index values.
In other words, the empirical copula table appears to suggest that unusually bad macroeconomic news tends to lead to substantial market declines, while equally unusually good news shows no market effects. \subsection{Copula model selection, estimation, and main result} Asymmetric tail dependence and nonlinear dependence between financial series have attracted some recent attention. For example, \cite{Patton2006} and \cite{Michelis2010} use copulas to study asymmetries in exchange rate dependence, and \cite{Ning2009} use a copula model to probe for tail dependence between security returns and trading volume. Here, since our marginal models include controls, our modelling approach is similar to the conditional copula models of \cite{Patton2006} and \cite{Michelis2010}. While our initial results indicate dependence that is skewed toward the lower distribution tail, we begin formal model selection by fitting several bi-variate copulas with a variety of dependence structures to the news index and market returns using the IFM method and assessing their fit. We find that among the Gaussian, Gumbel, Clayton, t and symmetrized Joe-Clayton (SJC) copula families, the one-parameter Clayton copula yields the best fit on the basis of the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). The bi-variate Clayton copula is defined as \begin{equation} \label{eqn::copulamodel} C(u,v;\theta) = \left [ \max(u^{-\theta} + v^{-\theta} - 1,0) \right ]^{-1/\theta}\text{, }(u,v)\in [0,1]^2, \end{equation} where $\theta \geq 0$ is the dependence parameter, such that greater values of $\theta$ indicate stronger association between the variables, and the case $\theta = 0$ corresponds to independence. The Clayton copula features lower- but not upper-tail dependence, with tail-dependence coefficients given by $\lambda_l = 2^{-1/\theta}$ and $\lambda_u = 0$. Significant positive values of $\theta$ therefore also indicate the presence of lower-tail dependence.
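As a quick numerical check of the tail-dependence formulas (our illustration, using an arbitrary $\theta = 2$), the Clayton CDF satisfies $C(q,q;\theta)/q \to \lambda_l = 2^{-1/\theta}$ as $q \to 0$:

```python
def clayton_cdf(u, v, theta):
    # C(u,v;theta) = [max(u^-theta + v^-theta - 1, 0)]^(-1/theta)
    s = u ** -theta + v ** -theta - 1.0
    return s ** (-1.0 / theta) if s > 0 else 0.0

def lower_tail_coeff(theta):
    # closed form for the Clayton family: lambda_l = 2^(-1/theta);
    # the upper-tail coefficient lambda_u is identically 0
    return 2.0 ** (-1.0 / theta)

theta = 2.0
lam = lower_tail_coeff(theta)                     # 2^(-1/2), about 0.707
approx = clayton_cdf(1e-6, 1e-6, theta) / 1e-6    # C(q,q)/q for small q
```

At the estimate $\hat{\theta}=0.134$ reported in the next subsection, the implied lower-tail coefficient is $2^{-1/0.134}\approx 0.006$; small in absolute terms, but strictly positive, in contrast to the upper tail.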
The superior fit of the Clayton copula here supports our findings from the empirical copula table. \begin{table}[htbp] \begin{center} \begin{tabular}{lccccc} \hline & \textit{Estimate} & \textit{t-ratio} & \textit{AIC} & \textit{BIC} & \textit{LogL} \\ \hline $\hat{\theta}$ & \multicolumn{1}{c}{0.134} & \multicolumn{1}{c}{\textbf{(1.899)}} & \multicolumn{1}{c}{-2.504} & \multicolumn{1}{c}{0.667} & \multicolumn{1}{c}{2.264} \\ \hline \end{tabular} \end{center} \caption{IFM estimation results of the bi-variate conditional Clayton copula model. Bold indicates significance at the $5\%$ s.l.} \label{table::copularesult} \end{table} IFM estimation results of the conditional Clayton copula model using our macroeconomic news index, value-weighted market returns and the marginal models specified in Section \ref{section::marginalmodels} are shown in Table \ref{table::copularesult}. The estimate of the Clayton dependence parameter is significant at the $5\%$ s.l., indicating that, controlling for economic data surprises and persistence of news, macroeconomic news has a significant effect on security returns, and that this effect is skewed toward the lower distribution tail. In other words, we find that markets react strongly and negatively to extremely poor economic news, but show no similar tendency to respond to good news, and that this relationship is not driven by the data-revealing content of economic news releases. \subsection{Robustness checks} In this section, we perform robustness checks of our result in Table \ref{table::copularesult}. First, since our Macroeconomic News Index is calculated at a monthly frequency, journalists writing the news-wires can observe market returns within the month, which may affect the tone of their reporting and create the potential for reverse causation. Fortunately, TRN and DJES news-wires tend to be condensed and factual and contain little in terms of journalist opinions or subjective context.
Classification of the news-wires is therefore hardly affected by the reporter's tone. The index, however, reflects comments made by policy officials, and these present a greater potential for reverse causation, since such comments may be induced by market events in the first place. In other words, officials may issue commentary in response to market movements, which then gets objectively reported by TRN or DJES and reflected in the index. Following \cite{Birz2011}, we note that if news is indeed driven by market activity, we should see more news releases during high-activity months. To test this, we regress the absolute values of market returns on the de-trended volume of news, measured by the total number of news-wire releases in a given month, and find the coefficient on news volume to be insignificant, with a p-value close to one. Market activity is therefore not associated with the volume of news. As a further test, we also re-estimate the marginal model for market returns given in (\ref{eq::returnsmodel}), but include the de-trended number of news-wires as an additional control. As before, we find the news volume to be insignificant. As a final check, we keep the de-trended news volume in (\ref{eq::returnsmodel}) and re-estimate the entire Clayton copula model with IFM. Unsurprisingly, our estimation results change very little, and the Clayton dependence parameter remains significant with a new t-ratio of $1.91$. We therefore conclude that reverse causation between market returns and economic news is highly unlikely. \section{Discussion} \label{section::discussion} It is interesting to note that since the probability transforms $\hat{u}_t$ and $\hat{v}_t$ are serially uncorrelated, the asymmetric market effect of news that we document here is not driven by any particular period of time, such as the two NBER recessions in our sample, but is persistent throughout the business cycle.
This may have implications for policy makers who have the potential to issue financially disruptive comments. From the stock market standpoint, upbeat talk seems cheap, while economic pessimism can be costly at all times, not only during crises. The news variable that we construct here may have some broader applications. For example, \cite{Engle2008} develop a model for low-frequency volatility that is driven by macroeconomic causes. Our news index may simplify the estimation of such models, since it provides a quantitative estimate of broader economic news that is available at a higher frequency than many scheduled statistical releases. \bibliographystyle{elsarticle-harv}
\section{Introduction} Let $(R,\mathfrak{m})$ be a Henselian excellent Noetherian local ring, $f=(f_1,\ldots,f_r)$ a system of polynomials in $Y=(Y_1,\ldots,Y_m)$ over $R$ and $\hat{y}$ a zero of $f$ in the completion $\hat{R}$ of $R$. \begin{thm} (Popescu \cite{P}, \cite{P2}, Swan \cite{S})\label{po} For every $c\in \mathbb{N}$ there exists a zero $y$ of $f$ in $R$ such that $y\equiv \hat{y}$ modulo $\mathfrak{m}^c$. \end{thm} M. Artin proved in \cite[Theorem 1.10]{Ar0} the most important case of this theorem, namely the case when $R$ is the ring of algebraic power series in $x=(x_1,\ldots,x_n)$ over a field $\mathbb{K}$. Theorem \ref{po} is usually restated by saying that excellent Henselian local rings have the Artin approximation property. Now suppose that $\hat{R}$ is the formal power series ring in $x=(x_1,\ldots,x_n)$ over a field $\mathbb{K}$ and that some components of $\hat{y}$ satisfy certain constraints, namely they depend only on some of the variables $x_j$. M. Artin asked if it is possible to find $y\in R^m$ such that the corresponding components depend on the same variables $x_j$ (see \cite[Question 4]{Ar}). More precisely, we have the following question. For a set $J\subset [n]$ we denote by $\mathbb{K}[\![x_J]\!]$ the ring of formal power series in the $x_j$ for $j\in J$. \begin{question}\label{q} (Artin Approximation with constraints \cite[Problem 1, page 68]{R}) Let $R$ be an excellent local subring of $\mathbb{K}[\![x]\!]$, $x=(x_1,\ldots,x_n)$, such that the completion of $R$ is $\mathbb{K}[\![x]\!]$, and let $f\in R[Y]^r$, $Y=(Y_1,\ldots,Y_m)$. Assume that there exists a formal solution $\hat{y}\in \mathbb{K}[\![x]\!]^m$ of $f=0$ such that $\hat{y}_i\in \mathbb{K}[\![x_{J_i}]\!]$ for some subsets $J_i\subset [n]$, $i\in [m]$. Is it possible to approximate $\hat{y}$ by a solution $y\in R^m$ of $f=0$ such that $y_i\in R\cap \mathbb{K}[\![x_{J_i}]\!]$, $i\in [m]$?
\end{question} If $R$ is the ring of algebraic power series in $x=(x_1,x_2,x_3)$ over $\mathbb{C}$ then Becker \cite{Be} gave a counterexample. If the sets $(J_i)$ are totally ordered by inclusion, the case of so-called Nested Artin Approximation, then this question has a positive answer by \cite{P}, \cite[Corollary 3.7]{P2} (see also \cite[Theorem 3.1]{CPR} for an easy proof in the linear case). However, when $R$ is the ring of convergent power series in $x=(x_1,x_2,x_3)$ over $\mathbb{C}$, Gabrielov \cite{Ga} gave a counterexample (see also \cite{Iz} for a general account of this problem). A field extension $\mathbb{K}\subset \mathbb{K}'$ is algebraically pure (see \cite{P1}, \cite{BNP}) if every finite system of polynomial equations with coefficients in $\mathbb{K}$ has a solution in $\mathbb{K}$ whenever it has one in $\mathbb{K}'$. Any field extension of an algebraically closed field is algebraically pure \cite{P1}. In connection with Question \ref{q} the following theorem was proved. \begin{thm}(Kosar-Popescu \cite[Theorem 9]{KP})\label{t} Let $\mathbb{K} \to \mathbb{K}'$ be an algebraically pure morphism of fields and $x=(x_1, \ldots , x_n)$. Let $J_i$, $i\in [m]$, be subsets of $[n]$, and let $A_i=\mathbb{K}\langle x_{J_i}\rangle$, resp. $A'_i=\mathbb{K}'\langle x_{J_i}\rangle$, $i\in [m]$, be the rings of algebraic power series in $x_{J_i}$ over $\mathbb{K}$, resp. $\mathbb{K}'$. Set $\mathcal{N}= A_1 \times \cdots \times A_m$ and $\mathcal{N'}=A'_1 \times \cdots \times A'_m$. Let $f$ be a system of polynomials from $\mathbb{K}\langle x\rangle [Y]$, $Y=(Y_1,\ldots,Y_m)$, and ${\hat y}\in \mathcal{N'}$ such that $f({\hat y})=0$. Then there exists $y\in \mathcal{N}$ such that $f(y)=0$ and $\text{ord}( y_i)=\text{ord} (\hat y_i)$ for $i\in [m]$.
\end{thm} The goal of our paper is to replace, to some extent, the algebraic power series in Theorem \ref{t} by formal power series (see Theorem \ref{q2}) and to state a certain strong Artin approximation property with constraints for the formal power series ring in $x$ over a field $\mathbb{K}$ which is $\aleph_0$-complete (see Corollary \ref{cor}). This condition on $\mathbb{K}$ is necessary (see Remarks \ref{r1}, \ref{r2}). Finally, we apply these results to extend approximation results due to J. Denef and L. Lipshitz for differential equations with coefficients in the ring of univariate polynomials to the case of several indeterminates (see Corollaries \ref{cor1} and \ref{cor2}). Finite fields, uncountable algebraically closed fields and ultraproducts of fields over $\mathbb{N}$ are $\aleph_0$-complete (see Theorem \ref{t0}). If $(\mathbb{K}_n)_n$ is a sequence of fields and $\mathcal F$ is an ultrafilter on $\mathbb{N}$ we denote by $(\mathbb{K}_n)^*$ the ultraproduct (over the natural numbers) defined as $\left(\prod_{n\in\mathbb{N}}\mathbb{K}_n\right)/\mathcal F$, that is, the quotient of $\prod_{n\in\mathbb{N}}\mathbb{K}_n$ by the ideal $\{(x_n)_{n\in \mathbb{N}}\in \prod_{n\in\mathbb{N}}\mathbb{K}_n :\{n\in \mathbb{N}:x_n=0\}\in \mathcal{F}\}$. When $\mathbb{K}$ is a single field, $\mathbb{K}^*$ denotes the ultrapower $\left(\prod_{n\in\mathbb{N}}\mathbb{K}\right)/\mathcal F$. \section{Solutions of countable systems of polynomial equations} \begin{defi} Let $\mathbb{K}$ be a field. We say that $\mathbb{K}$ is \emph{$\aleph_0$-complete} if every countable system $\mathcal S$ of polynomial equations (in a countable number of indeterminates) has a solution in $\mathbb{K}$ if and only if every finite sub-system of $\mathcal S$ has a solution in $\mathbb{K}$. \end{defi} \begin{thm}\label{t0} The following fields are $\aleph_0$-complete: \begin{enumerate} \item[a)] Every finite field. \item[b)] Every uncountable algebraically closed field.
\item[c)] Every ultraproduct of fields over the natural numbers. \end{enumerate} \end{thm} \begin{rem} Every ultraproduct is either finite or uncountable. So every algebraically closed field which is an ultraproduct is necessarily uncountable. \end{rem} \begin{proof} Let $\mathcal S$ be a system of countably many polynomial equations with coefficients in a field $\mathbb{K}$. We list the polynomial equations of $\mathcal S$ as $P_1,\ldots, P_n,\ldots$, which depend on the variables $x_1,\ldots, x_l,\ldots$.\\ For any $N\in\mathbb{N}$ let $D_N$ be an integer such that the polynomials $P_i$, for $i\leq N$, depend only on the $x_j$ for $j\leq D_N$.\\ Let us define the canonical projection maps: $$\pi_{l,k} : \mathbb{K}^l=\mathbb{K}^{k}\times \mathbb{K}^{l-k}\longrightarrow \mathbb{K}^k \ \ \forall l\geq k\geq 1$$ that send the vector $(x_1,\ldots, x_l)$ onto $(x_1,\ldots, x_k)$. We also define the projection maps $$\pi_k :\mathbb{K}^\mathbb{N}\longrightarrow \mathbb{K}^k\ \ \ \forall k\geq 1$$ that send the sequence $(x_1,\ldots, x_n,\ldots)$ onto $(x_1,\ldots, x_k)$.\\ Let $$V_\infty:=\{x=(x_n)_n\in\mathbb{K}^\mathbb{N}\mid P_i(x)=0\ \forall i\in\mathbb{N}\}$$ and $$V_{N}:=\{x=(x_n)_n\in\mathbb{K}^\mathbb{N}\mid P_1(x)=\ldots=P_N(x)=0\} \ \ \forall N\in\mathbb{N}.$$ Then we have that $V_\infty=\cap_{N\in\mathbb{N}}V_N$. By the definition of $D_N$, for every integer $N\geq 1$ we have that $$V_N=\pi_{D_N}(V_N)\times\mathbb{K}^{\mathbb{N}\setminus \{1,\ldots,D_N\}}.$$ For all positive integers $N$ and $k$ we define $$C_N^k=\pi_{k}(V_N).$$ Now set $$C^k:=\cap_{N\in\mathbb{N}}C_N^k.$$ We claim that, if $C^k\neq \emptyset$ for every $k$, then $\mathcal S$ has a solution; indeed, by construction $(x_1,\ldots, x_k)\in C^k$ if and only if for every $N$ there exists $(x_{k+1},x_{k+2},\ldots)$ such that $(x_1,\ldots, x_k,x_{k+1},\ldots)\in V_N$. In particular $\pi_{k+1,k}(C^{k+1})=C^k$ for every $k$.\\ Now let $x_1\in C^1$.
Then there exists $x_2\in\mathbb{K}$ such that $(x_1,x_2)\in C^2$. By induction we can find a sequence of elements $x_n\in\mathbb{K}$ such that for every $k$ $$(x_1,\ldots, x_k)\in C^k.$$ Thus the sequence $x=(x_n)_n$ belongs to $V_N$ for every $N$, so it belongs to $V_\infty$. Hence $\mathcal S$ has a solution.\\ \\ a) Let us assume that $\mathbb{K}$ is a finite field.\\ Then the $C_N^k$ are finite non-empty subsets of $\mathbb{K}^k$ (non-empty because every finite sub-system of $\mathcal S$ has a solution). Since $V_{N+1}\subset V_N$ for every $N$, the sequence $(C_N^k)_N$ is decreasing, so it stabilizes. Therefore $C^k\neq \emptyset$ and $\mathcal S$ has a solution.\\ \\b) Now let us assume that $\mathbb{K}$ is an uncountable algebraically closed field. We have that $$C_N^k=\pi_k(V_N)=\pi_{D_N,k}\left(\{x=(x_1,\ldots, x_{D_N})\in\mathbb{K}^{D_N}\mid P_1(x)=\ldots=P_N(x)=0\}\right).$$ Thus the $C_N^k$ are constructible subsets of $\mathbb{K}^k$ since $\mathbb{K}$ is algebraically closed (by Chevalley's Theorem). Let us recall that a constructible set is a finite union of sets of the form $X\backslash Y$ where $X$ and $Y$ are Zariski closed subsets of $\mathbb{K}^{k}$.\\ Thus the sequence $(C_N^k)_N$ is a decreasing sequence of constructible subsets of $\mathbb{K}^k$. Let $F_N^k$ denote the Zariski closure of $C_N^k$. Then the sequence $(F_N^k)_N$ is a decreasing sequence of Zariski closed subsets of $\mathbb{K}^k$. By Noetherianity this sequence stabilizes, i.e. $F_{N}^k=F_{N_0}^k$ for every $N\geq N_0$ and some positive integer $N_0$. By assumption $C_{N_0}^k\neq \emptyset$, so $F_{N_0}^k\neq \emptyset$. Let $F$ be an irreducible component of $F_{N_0}^k$. \\ Since $C_N^k$ is constructible, $C_N^k=\cup_i\left(X^N_i\backslash Y^N_i\right)$ for a finite number of Zariski closed sets $X^N_i$ and $Y^N_i$, with $X^N_i\backslash Y^N_i\neq \emptyset$ and each $X^N_i$ irreducible. Since $X_i^N$ is irreducible, the Zariski closure of $X^N_i\backslash Y^N_i$ is $X_i^N$.
Therefore for $N\geq N_0$ we have that $$F_{N_0}^k=F_N^k=\cup_i X^N_i.$$ But $F$ being irreducible, for every $N\geq N_0$ one of the $X_i^N$ has to be equal to $F$. Thus for every $N\geq N_0$ there exists a proper closed subset $Y_N\subset F$ such that $$F\backslash Y_N\subset C_N^k\ \ \forall N\geq N_0.$$ Since $\mathbb{K}$ is uncountable $$\bigcup_{N\geq N_0}Y_N\subsetneq F.$$ This is a well known fact (see for instance Exercise 5.10, \cite{L} p. 76). This implies that $C^k\neq \emptyset$ and $\mathcal S$ has a solution.\\ \\ Finally, c) follows as in Lemma 2.17 of \cite{P1}. \hfill\ \end{proof} \begin{rem} It is quite straightforward to prove that a field $\mathbb{K}$ that is $\aleph_1$-saturated is $\aleph_0$-complete (for the definition of a saturated model see \cite[Section 2.3]{CK}). One can prove that the three classes of fields of Theorem \ref{t0} are $\aleph_1$-saturated, providing an alternative proof of the fact that these fields are $\aleph_0$-complete. \end{rem} \begin{example} Let $\mathbb{K}=\overline\mathbb{Q}$ be the algebraic closure of $\mathbb{Q}$. Since $\overline\mathbb{Q}$ is countable we may list its elements as $\a_2$, $\a_3$, \ldots, $\a_l$, \ldots. Let $\mathcal S$ be the system of equations: $$P_1=0,\ P_l=\left(x_1-\a_{l}\right)x_{l}-1= 0 \ \ \forall l\geq 2.$$ For every integer $N\geq 2$ the vector $$\left(\a_N, \frac{1}{\a_N-\a_{2}}, \ldots, \frac{1}{\a_N-\a_{N-1}}\right)\in\mathbb{K}^{N-1}$$ is a solution of $$P_1=\cdots= P_{N-1}=0.$$ But $\mathcal S$ has no solution. Indeed, if $x=(x_1,\ldots, x_n,\ldots)\in\mathbb{K}^\mathbb{N}$ were a solution of $\mathcal S$ then we would have that \begin{equation}\label{eq}(x_1-\a_{l})x_{l}=1\ \ \ \forall l\geq 2.\end{equation} But $x_1\in\overline\mathbb{Q}$, so $x_1=\a_{l_0}$ for some $l_0\geq 2$. Thus \eqref{eq} for $l=l_0$ would give $$0=(x_1-\a_{l_0})x_{l_0}=1$$ which is impossible. So $\overline\mathbb{Q}$ is not an $\aleph_0$-complete field.
\end{example} \begin{example} Let $\mathbb{K}=\mathbb{R}$ be the field of real numbers. Let $\mathcal S$ be the system of equations: $$P_1=0,\ P_l=x_l^2-(x_1-l)=0\ \ \forall l\geq 2.$$ Then a given $x_1\in\mathbb{R}$ extends to a solution $(x_1,\ldots,x_l)$ of $P_1=\cdots =P_l=0$ if and only if $x_1\geq l$, so every finite sub-system of $\mathcal S$ has a solution.\\ In particular $\mathcal S$ has no solution. So $\mathbb{R}$ is not an $\aleph_0$-complete field. \end{example} \section{Approximation with constraints} We recall some elementary facts on algebraically pure field extensions, referring to \cite{P1} and \cite[(2.3)]{BNP} for details. \begin{rem} \begin{enumerate} \item If $\mathbb{K}\longrightarrow \L$ is a field extension of real closed fields then it is algebraically pure. \item If $\mathbb{K}$ is an infinite field and $x=(x_1,\ldots,x_n)$ then $\mathbb{K}\longrightarrow \mathbb{K}(x)$ is algebraically pure \cite{P1}. \item If $\mathbb{K}$ is a field and $x=(x_1,\ldots, x_n)$, we denote by $\mathbb{K}\lg\!\lg x\rangle\!\rangle$ the field of algebraic power series, and by $\mathbb{K}\{\!\{x\}\!\}$ the field of convergent power series (when $\mathbb{K}$ is a complete valued field). Then $\mathbb{K}\lg\!\lg x\rangle\!\rangle\longrightarrow \mathbb{K}\{\!\{x\}\!\}$ and $\mathbb{K}\{\!\{x\}\!\}\longrightarrow \mathbb{K}(\!(x)\!)$ are algebraically pure by Artin's approximation theorem \cite{Ar0}. \item If $\mathbb{K}_1\longrightarrow \mathbb{K}_2$ and $\mathbb{K}_2\longrightarrow \mathbb{K}_3$ are algebraically pure then $\mathbb{K}_1\longrightarrow \mathbb{K}_3$ is algebraically pure \cite{P1}. \end{enumerate} \end{rem} \begin{lem}\label{lem_ultra}\cite{BNP} Let $\mathbb{K}$ be a field and let $\mathbb{K}^*$ be an ultrapower of $\mathbb{K}$. Then the morphism $\mathbb{K}\longrightarrow \mathbb{K}^*$ sending every element $a\in\mathbb{K}$ onto the constant sequence $(a,\ldots, a,\ldots)$ is algebraically pure.
\end{lem} \begin{proof} Let $\mathcal S=(P_i)_{i\in I}$ be a finite system of polynomial equations with coefficients in $\mathbb{K}$ in the indeterminates $Y_1,\ldots, Y_m$. Let us assume that there exists $y^*\in(\mathbb{K}^*)^m$ such that $$P_i(y^*)=0\ \ \ \forall i\in I.$$ Let $(y_n)_{n\in\mathbb{N}}\in(\mathbb{K}^m)^\mathbb{N}$ be a sequence whose image in $(\mathbb{K}^*)^m$ is $y^*$. Then for every $i\in I$ there exists $\mathcal U_i\in \mathcal F$ (here $\mathcal F$ denotes the ultrafilter such that $\mathbb{K}^*=\mathbb{K}^\mathbb{N}/\mathcal F$) such that $$\forall n\in \mathcal U_i,\ \ P_i(y_n)=0.$$ Since $I$ is finite, the intersection $\mathcal U:=\cap_{i\in I}\mathcal U_i$ belongs to $\mathcal F$. Thus for every $n\in \mathcal U$ we have that $$P_i(y_n)=0 \ \ \forall i\in I.$$ Hence $\mathcal S$ has a solution in $\mathbb{K}^m$. Therefore $\mathbb{K}\longrightarrow \mathbb{K}^*$ is algebraically pure. \end{proof} \begin{prop}\label{prop2} Let $\mathbb{K}$ be an $\aleph_0$-complete field. Let $x=(x_1,\ldots, x_n)$, $Y=(Y_1,\ldots, Y_m)$, $f=(f_1,\ldots,f_r)\in \mathbb{K}[\![x]\!][Y]^r$ and $J_i\subset [n]$, $i\in [m]$.\\ If for every $c\in\mathbb{N}$ there exists $y^{(c)}\in\mathbb{K}[\![x]\!]^m$, with $y^{(c)}_i\in \mathbb{K}[\![x_{J_i}]\!]$ for every $i$, such that $$f(y^{(c)})\equiv 0\ \mbox{modulo}\ (x)^c$$ then there exists $y\in\mathbb{K}[\![x]\!]^m$, with $y_i\in \mathbb{K}[\![x_{J_i}]\!]$ for every $i$, such that $$f(y)=0.$$ \end{prop} \begin{proof} Let us set $$B_i:=\mathbb{N}^{\varepsilon_{1,i}}\times\cdots \times \mathbb{N}^{\varepsilon_{n,i}}$$ where $\varepsilon_{k,i}=1$ if $k\in J_i$ and $\varepsilon_{k,i}=0$ if $k\notin J_i$, and $$Y_i=\sum_{\a\in B_i}Y_{i,\a}x^\a \ \ \ \forall i=1,\ldots, m.$$ We denote by $P_{k,\b}$ the coefficient of $x^\b$ in $f_k(\sum_{\a\in B_1}Y_{1,\a}x^\a,\ldots,\sum_{\a\in B_m}Y_{m,\a}x^\a)$.
Let us denote by $\mathcal S$ the system of polynomial equations \begin{equation}\label{2}P_{k,\b}=0,\ k\in[r],\ \b\in \mathbb{N}^n,\end{equation} depending on the variables $Y_{i,\a}$ for $i\in[m]$ and $\a\in B_i$.\\ Every finite sub-system of $\mathcal S$ has a solution in $\mathbb{K}$: such a sub-system involves only finitely many equations $P_{k,\b}$, all with $|\b|<c$ for some $c\in\mathbb{N}$, and the coefficients of $y^{(c)}$ solve these equations, since $f(y^{(c)})\equiv 0$ modulo $(x)^c$. Since $\mathbb{K}$ is an $\aleph_0$-complete field, $\mathcal S$ therefore has a solution $(y_{i,\a})_{i\in[m],\a\in B_i}$ with coefficients in $\mathbb{K}$. Thus if $y=(y_1,\ldots, y_m)$ with $$y_i=\sum_{\a\in B_i} y_{i,\a}x^\a$$ then we have that $f(y)=0$. \hfill\ \end{proof} \begin{example} In \cite{BDLD} two examples are given showing that this statement is no longer true without the condition that $\mathbb{K}$ be $\aleph_0$-complete: the first one is a system of polynomial equations over the algebraic closure of $\mathbb F_p$ (see Example (i) p. 200 \cite{BDLD}) and the second one is an example of polynomial equations over $\mathbb{Q}$ (see Example (ii) p. 200 \cite{BDLD}).\\ \end{example} \begin{thm}\label{q2} Let $\mathbb{K}\subset \mathbb{K}'$ be an algebraically pure field extension where $\mathbb{K}$ is $\aleph_0$-complete. We set $x=(x_1,\ldots,x_n)$ and $f\in \mathbb{K}[\![x]\!][Y]^r$, $Y=(Y_1,\ldots,Y_m)$.\\ Assume that there exists a solution $\hat{y}\in \mathbb{K}'[\![x]\!]^m$ of $f=0$ such that $$\hat{y}_i\in \mathbb{K}'[\![x_{J_i}]\!]$$ for some subsets $J_i\subset [n]$, $i\in [m]$. Then there is a solution $y\in \mathbb{K}[\![x]\!]^m$ of $f=0$ such that $y_i\in \mathbb{K}[\![x_{J_i}]\!]$ and $\text{ord} (y_i)=\text{ord} (\hat{y}_i)$, $i\in [m]$. \end{thm} \begin{proof} Let us write $\hat y_i=\sum_{\a\in B_i}\hat y_{i,\a}x^\a$ where $B_i\subset \mathbb{N}^n$ denotes the support of $\hat y_i$.
We have that $$f(\hat y)=0\Longleftrightarrow f_k(\hat y )=0 \ \ \forall k=1,\ldots, r$$ $$\Longleftrightarrow \forall k, \ \forall \b\in\mathbb{N}^n \text{ the coefficient of } x^\b \text{ in } f_k(\hat y) \text{ is } 0.$$ Let us denote by $P_{k,\b}$ the coefficient of $x^\b$ in $f_k$ after replacing each $Y_i$ by the term $\sum_{\a\in B_i}Y_{i,\a}x^\a$, and let $\mathcal S$ be the system of equations $$P_{k,\b}=0\ \ \forall k\in[r], \ \forall \b\in\mathbb{N}^n $$ in the indeterminates $Y_{i,\a}$ for $i=1,\ldots, m$ and $\a\in B_i$. Since $\mathcal S$ has a solution in $\mathbb{K}'$, every finite sub-system of $\mathcal S$ has a solution in $\mathbb{K}'$ and, since $\mathbb{K}\longrightarrow \mathbb{K}'$ is algebraically pure, every finite sub-system of $\mathcal S$ has a solution in $\mathbb{K}$. Then, since $\mathbb{K}$ is an $\aleph_0$-complete field, the system $\mathcal S$ has a solution $(y_{i,\a})_{i\in[m],\a\in B_i}$ with coefficients in $\mathbb{K}$. This means that if $y=(y_1,\ldots, y_m)$ with $$y_i=\sum_{\a\in B_i} y_{i,\a}x^\a$$ then $f(y)=0$. Since $B_i$ is the support of $\hat y_i$, the support of $y_i$ is included in the support of $\hat y_i$ for every $i$. In particular we have that $\text{ord}(\hat y_i)\leq \text{ord}(y_i)$ for every $i$.\\ Now let us assume moreover that $\text{ord}(\hat y_i)=c_i$ and that, for every $i=1,\ldots, m$, $\hat y_{i,\a_i}\neq 0$ with $|\a_i|=c_i$ (here for $\b=(\b_1,\ldots, \b_n)$ we set $|\b|:=\b_1+\cdots+\b_n$). Then there exists, for $i=1,\ldots, m$, an element $\hat z_i\in\mathbb{K}'$ such that $$\hat y_{i,\a_i}\hat z_i=1, \ \forall i=1,\ldots, m.$$ Let us add the equations \begin{equation}\label{eq}Y_{i,\a_i}Z_i=1, \ \forall i=1,\ldots, m,\end{equation} to the system $\mathcal S$. The enlarged system has the solution $(\hat y_{i,\a},\hat z_i)$ in $\mathbb{K}'$, so the same argument as before yields a solution $(y_{i,\a},z_i)$ in $\mathbb{K}$; in particular $y_{i,\a_i}\neq 0$ for every $i$. Thus $$\text{ord}(y_i)=c_i=\text{ord}(\hat y_i)\ \ \forall i=1,\ldots,m$$ and the theorem is proven.
\hfill\ \end{proof} \begin{rem}\label{r1} By Lemmas 5.1 and 5.2 of \cite{R} every system $\mathcal T$ of partial polynomial differential equations with coefficients in $\mathbb{K}[\![x]\!]$ (with $x=(x_1,\ldots, x_n)$) and indeterminates $Y_1$, \ldots, $Y_m$ provides a system $\mathcal S$ of polynomial equations with coefficients in $\mathbb{K}[\![x]\!][t]$ (with $t=(t_1,\ldots, t_l)$) and indeterminates $Y_1$, \ldots, $Y_m$, $Z_{1}$,\ldots, $Z_{k}$ such that $y\in\mathbb{K}[\![x]\!]^m$ is a solution of $\mathcal T$ if and only if there exists $z\in\mathbb{K}[\![x,t]\!]^k$ such that $(y,z)$ is a solution of $\mathcal S$ and $z$ satisfies some constraint conditions as in Proposition \ref{prop2}.\\ By Corollary 4.7 of \cite{DL} there exists a system of partial differential equations $\mathcal T$ defined over $\overline\mathbb{Q}$ having a solution whose components are in $\mathbb{C}[\![x]\!]$ but no solution whose components are in $\overline\mathbb{Q}[\![x]\!]$. It follows that there exists a system of polynomial equations $\mathcal S$ with coefficients in $\overline\mathbb{Q}[x]$ which has no solution $y\in\overline\mathbb{Q}[\![x]\!]^m$ such that $y_i\in\overline\mathbb{Q}[\![x_{J_i}]\!]$ for every $i$ for some $J_i\subset [n]$, but has a solution $y'\in\mathbb{C}[\![x]\!]^m$ such that $y'_i\in\mathbb{C}[\![x_{J_i}]\!]$ for every $i$.\\ This shows that Theorem \ref{q2} is no longer true in general if $\mathbb{K}$ is not $\aleph_0$-complete.\\ Moreover, since this system $\mathcal S$ has a solution with coefficients in $\mathbb{C}$ satisfying the constraint conditions and since $\overline\mathbb{Q}\longrightarrow \mathbb{C}$ is algebraically pure, for every $c\in\mathbb{N}$ there exists $y^{(c)}\in\overline\mathbb{Q}[\![x]\!]^m$ (satisfying the constraint conditions) such that $f(y^{(c)})\in (x)^c$. But there is no $y\in\overline\mathbb{Q}[\![x]\!]^m$ (satisfying the constraint conditions) such that $f(y)=0$.
This also provides an example showing that Proposition \ref{prop2} is not true if $\mathbb{K}=\overline\mathbb{Q}$. \end{rem} \begin{cor}\label{cor} Let $\mathbb{K}$ be an $\aleph_0$-complete field. Let us set $x=(x_1,\ldots,x_n)$, $f=(f_1,\ldots,f_r)\in \mathbb{K}[\![x]\!][Y]^r$, $Y=(Y_1,\ldots,Y_m)$ and $J_i\subset [n]$, $i\in [m]$. Then there exists a map $\nu:\mathbb{N}^m\to \mathbb{N} $ such that if $y'=(y'_1,\ldots,y'_m)$, $y'_i\in \mathbb{K}[\![x_{J_i}]\!]$, $i\in [m]$, satisfies $f(y')\equiv 0$ modulo $(x)^{\nu(c)}$ for some $c=(c_1,\ldots,c_m)\in \mathbb{N}^m$ with $\text{ord} (y'_i)=c_i$, $i\in [m]$, then there exists $y_i\in \mathbb{K}[\![x_{J_i}]\!]$ for all $ i\in [m]$ such that $y=(y_1,\ldots,y_m)$ is a zero of $f$ and $\text{ord}( y_i)=c_i$ for all $i\in [m]$. \end{cor} \begin{proof} Suppose, seeking a contradiction, that the statement fails for some $c\in\mathbb{N}^m$: then for each $q\in \mathbb{N}$ there exists ${\hat y}_q\in \mathbb{K}[\![x]\!]^m$ with $f({\hat y}_q)\equiv 0$ modulo $(x)^{q}$, ${\hat y}_{q,i}\in \mathbb{K}[\![x_{J_i}]\!]$ and $\text{ord} (\hat y_{q,i})=c_i$, but there exists no $y\in \mathbb{K}[\![x]\!]^m$ with $f(y)=0$, $y_{i}\in \mathbb{K}[\![x_{J_i}]\!]$ and $\text{ord} (y_{i})=c_i$. Then let us define $y^*_i=[(\hat y_{q,i})_q]\in \mathbb{K}[\![x_{J_i}]\!]^*$. So we have that $f(y^*)\in \cap_q (x)^q\mathbb{K}[\![x]\!]^*$. Set $\bar {y}= y^*$ modulo $\cap_q (x)^q\mathbb{K}[\![x]\!]^*$, which corresponds to an element in $\mathbb{K}^*[\![x]\!]$ with $f(\bar{y})=0$ (see Lemma 3.4 of \cite{BDLD}), $\text{ord} (\bar {y}_i)=c_i$ and $\bar{y}_i\in \mathbb{K}^*[\![x_{J_i}]\!]$. By Lemma \ref{lem_ultra} and Theorem \ref{q2} there exists $y\in \mathbb{K}[\![x]\!]^m$ with $f(y)=0$, $\text{ord} (y_i)=c_i$ and $y_i\in \mathbb{K}[\![x_{J_i}]\!]$. We obtain a contradiction, so the corollary is proven. \hfill\ \end{proof} \begin{rem} \label{r2} In Example (iii) p.
201 \cite{BDLD} an example of a system of polynomial equations over $\mathbb{C}$ with constraints is given for which the following is shown: there is no $\nu\in \mathbb{N}$ such that if there exists $\hat y\in \mathbb{C}[\![x]\!]^m $ with $f(x,\hat y)\in (x)^\nu$ satisfying the given constraints then there exists a solution $y\in\mathbb{C}[\![x]\!]^m$ of $f=0$ satisfying the same constraints and such that $y\equiv \hat y$ modulo $(x)$. \end{rem} \section{Approximation for differential equations} \begin{cor}\label{cor1} Let $\mathbb{K}$ be an $\aleph_0$-complete field. Let $F$ be a system of polynomial equations in $z_1,\ldots,z_q$ and some of their partial derivatives $\partial^{|j_1|} z_{i_1}/\partial x^{j_1},\ldots, \partial^{|j_s|} z_{i_s}/\partial x^{j_s}$, $i_1,\ldots,i_s\in [q]$, and $j_1,\ldots,j_s\in \mathbb{N}^n$, with coefficients in $\mathbb{K}[\![x]\!]$. If $F=0$ has approximate solutions up to any order then $F=0$ has a solution with coefficients in $\mathbb{K}[\![x]\!]$. \end{cor} \begin{proof} Exactly as in Remark \ref{r1}, Lemmas 5.1 and 5.2 of \cite{R} show that for such a system $F=0$ there is a system of polynomial equations $G=0$ with coefficients in $\mathbb{K}[\![x]\!][t]$ (with $t=(t_1,\ldots, t_l)$) and indeterminates $Y_1$, \ldots, $Y_m$, $Z_{1}$,\ldots, $Z_{k}$ such that $y\in\mathbb{K}[\![x]\!]^m$ is a solution of $F=0$ if and only if there is $z\in\mathbb{K}[\![x,t]\!]^k$ such that $(y,z)$ is a solution of $G=0$ with constraints.\\ Moreover, $y\in\mathbb{K}[\![x]\!]^m$ is an approximate solution of $F=0$ up to order $c$ if and only if there is $z\in\mathbb{K}[\![x,t]\!]^k$ such that $(y,z)$ is an approximate solution of $G=0$ up to order $c$ with constraints. This shows that Proposition \ref{prop2} implies Corollary \ref{cor1}.
\hfill\ \end{proof} \begin{rem} This result has been proven in \cite{DL} in the case of a single indeterminate $x$ under different hypotheses on $\mathbb{K}$, namely that $\mathbb{K}$ is a field of characteristic zero which is either algebraically closed, real closed or Henselian valued. The authors of \cite{DL} also remark that this result is quite easy to prove when $\mathbb{K}=\mathbb{C}$. \\ Moreover, \cite{DL} gives an example of a system of partial differential equations with coefficients in $\mathbb{R}[\![x_1,\ldots,x_n]\!]$ for $n\geq 2$ having approximate solutions up to any order, but no exact solution (see Corollary 4.10 of \cite{DL}), and Corollary 4.7 of \cite{DL} provides an analogous example in the case where $\mathbb{K}=\overline\mathbb{Q}$. These examples show that the univariate case and the case of several variables $x$ are different. \end{rem} \begin{cor}\label{cor2} Let $\mathbb{K}$ be an $\aleph_0$-complete field. Let $F$ be a system of differential equations in $z_1,\ldots,z_q$ and some of their partial derivatives $\partial^{|j_1|} z_{i_1}/\partial x^{j_1},\ldots, \partial^{|j_s|} z_{i_s}/\partial x^{j_s}$, $i_1,\ldots,i_s\in [q]$, and $j_1,\ldots,j_s\in \mathbb{N}^n$, with coefficients in $\mathbb{K}[\![x]\!]$.
Then there exists a map $\tau:\mathbb{N}^{q+s}\to \mathbb{N} $ such that if $z'=(z'_1,\ldots,z'_q)$ satisfies $$F(z',\partial^{|j_1|} z'_{i_1}/\partial x^{j_1},\ldots, \partial^{|j_s|} z'_{i_s}/\partial x^{j_s})\equiv 0 \ \mbox{modulo}\ (x)^{\tau(c)}$$ for some $c=(c_1,\ldots,c_q,c_{i_1,j_1},\ldots,c_{i_s,j_s})\in \mathbb{N}^{q+s}$ with $\text{ord} (z'_i)=c_i$, $i\in [q]$, and $$\text{ord} \left(\frac{\partial^{|j_k|} z'_{i_k}}{\partial x^{j_k}}\right)=c_{i_k,j_k},\ \ k\in [s],$$ then there exists a solution $z=(z_1,\ldots,z_q)\in \mathbb{K}[\![x]\!]^q$ of $F=0$ such that $\text{ord}( z_i)=c_i$ for all $i\in [q]$ and $$\text{ord} \left(\frac{\partial^{|j_k|} z_{i_k}}{\partial x^{j_k}}\right)=c_{i_k,j_k}, \ \ k\in [s].$$ \end{cor} \begin{proof} Let $f\in \mathbb{K}[\![x]\!][Y]^r$, $Y=(Y_1,\ldots,Y_m)$, $m>q+s$, be the transformation of $F$ into an algebraic system of equations with constraints as done in the proof of Corollary \ref{cor1}. Assume that $z_i$ corresponds to $Y_i$ and $\partial^{|j_k|} z_{i_k}/\partial x^{j_k} $ corresponds to $Y_{q+k}$. Then, applying Corollary \ref{cor} to $f$, we get a function $\tau:\mathbb{N}^{q+s}\to \mathbb{N}$ which also works for $F$. \hfill\ \end{proof} {\bf Acknowledgements:} We thank the referee for their relevant and helpful comments.\\ This work has been partially elaborated in the frame of the International Research Network ECO-Math.
\section{Introduction} The {\sc Phojet} event generator is a Monte Carlo implementation of the two-component Dual Parton Model (DPM). This model combines results obtained within Regge theory, Gribov's reggeon calculus \cite{Gribov67a-e,Gribov68c-e} and Abramovsky-Gribov-Kancheli (AGK) cutting rules \cite{Abramovski73-e} with perturbative QCD predictions for hard interaction processes (see for example \cite{Innocente88,Hahn90,Aurenche92a}; a review is given in \cite{Capella94a}). The Dual Parton Model describes high-mass diffractive hadron production in terms of enhanced graphs like the triple-pomeron graph \cite{Kaidalov79}. As already discussed in \cite{Kaidalov74a}, within this approach diffractive processes can be considered as collisions of a color neutral object, the pomeron, with hadrons, photons or other pomerons. However, it is important to note that the pomeron cannot be considered as an ordinary hadron. It is only a theoretical object providing an effective description of the important degrees of freedom of a certain sum of Feynman diagrams in the Regge limit (i.e.\ the available c.m.\ energy is large compared to the momentum transfer characterizing the scattering process). In this sense, pomeron-hadron or pomeron-pomeron interactions can only be discussed in the framework of collisions of other particles like hadrons or photons. Experimental data support the interpretation of diffractive particle production in terms of pomeron-hadron or pomeron-pomeron collisions. It was found that high-mass diffraction dissociation exhibits features similar to non-diffractive hadron production, where the mass of the diffractively produced system corresponds to the collision energy in non-diffractive interactions \cite{Goulianos83,Bernard86a}.
Furthermore, it is well known that, in order to obtain a reasonable Monte Carlo description of non-diffractive hadron production, multiple partonic interactions between projectile and target are needed at high energies \cite{Aurenche84a,Sjostrand87b,Aurenche92a}. Given the striking similarities between diffractive and non-diffractive multiparticle production, one may expect that multiple soft as well as hard interactions also play an important role in pomeron-hadron/photon/pomeron interactions. Another consequence of the triple-pomeron graph interpretation is an energy-dependent normalization of the pomeron flux. Experimentally, the pomeron flux in target diffraction dissociation is given by the cross section for rapidity gap events divided by the corresponding pomeron-target cross section. However, since at high energies additional projectile-target interactions are likely to fill the rapidity gap with hadrons, the experimentally observable pomeron flux is smaller than the flux implied by the triple-pomeron graph. Furthermore, it may depend not only on projectile but also on target properties. In the case of a large rapidity gap between the jets in two-jet events, as observed at the TEVATRON \cite{Abachi96b,Abe98a}, at least one pomeron has a large virtuality and standard Regge phenomenology cannot be applied. Therefore, one has to introduce a new kind of process like soft color reconnection (SCR) \cite{Buchmuller95a,Buchmuller95b,Ingelman95a,Amundson96a,Eboli97a} or perturbative gluon-ladder exchange \cite{Mueller92a,DelDuca93a}. The {\sc Phojet} Monte Carlo is a first attempt to build a model which accounts for both effects, multiple soft and hard interactions between the constituents of the projectile and target as well as multiple interactions in the pomeron-projectile, pomeron-target, and pomeron-pomeron scattering subprocesses. The model also includes SCR in hard scattering processes similar to \cite{Eboli97a}.
\section{The Model\label{the-model}} The realization of the DPM in {\sc Phojet} \cite{Engel95a,Engel95d} with a hard and a soft component is similar to the event generator {\sc Dtujet} \cite{Aurenche92a,Bopp94a} for $p$--$p$ and $\bar p$--$p$ collisions. Interactions of hadrons are described in terms of reggeon (${I\!\!R}$) and pomeron (${I\!\!P}$) exchanges. The pomeron exchange is artificially subdivided into {\it soft} processes and processes with at least one large momentum transfer ({\it hard} processes). This allows us to use the predictive power of the QCD-improved Parton Model with lowest-order QCD matrix elements \cite{Combridge77,Duke82a} and parton distribution functions (PDFs). Practically, soft and hard processes are distinguished by applying a transverse momentum cutoff $p_\perp^{\mbox{\scriptsize cutoff}}$ of about 3 GeV/$c$ to the scattered partons. The pomeron is considered as a two-component object with the Born graph cross section for pomeron exchange given by the sum of hard and soft cross sections. \begin{figure}[!htb] \centerline{\epsfig{file=enh-gra-gen.eps,width=11cm}} \medskip \caption{Enhanced pomeron exchange graphs considered in the model: a) triple-pomeron, b) loop-pomeron, and c) double-pomeron graphs. The zig-zag lines represent pomeron propagators. \label{enh-gra} } \end{figure} High-mass diffraction dissociation is described in the following way. In order to get an efficient parametrization of Born graph cross sections describing diffraction within Gribov's reggeon calculus, we calculate the triple-, loop- and double-pomeron graphs shown in Fig.~\ref{enh-gra} using a renormalized pomeron intercept $\alpha_{\tilde{I\!\!P}} = 1+\Delta_{\tilde{I\!\!P}} = 1.08$. The corresponding formulae are given in \cite{Engel96e}. 
Shadowing corrections are approximated by unitarizing the enhanced graphs together with the leading one-pomeron exchange (including soft and hard contributions) in a two-channel eikonal model \cite{Aurenche92a,Engel95a,Engel95d}. In case of diffractive multiparticle production we have to consider, in addition to the shadowing contribution from multiple pomeron exchange between projectile and target, also rescattering effects in pomeron-hadron and pomeron-pomeron interactions. For the cross section calculation the introduction of a renormalized pomeron trajectory takes this effect into account. However, for the calculation of particle production, a model for the hadronic final states which correspond to the unitarity cut of such a renormalized pomeron propagator is needed. {}Following Refs.~\cite{Cardy74a,Kaidalov86d} we assume that the pomeron-pomeron coupling can be described by the formation of an intermediate hadronic system $h^\star$ the pomerons couple to. Assuming that this intermediate hadronic system has properties similar to a pion, the $n$-$m$ pomeron coupling $g_{n-m}$ reads \cite{Kaidalov86d} \begin{equation} g_{n-m} = G \prod_{i=1}^{n+m-2} g_{h^\star{I\!\!P}} \label{n-m-coupling} \end{equation} with $g_{h^\star{I\!\!P}} = g_{\pi{I\!\!P}}$ being the pomeron-pion coupling. $G$ is a scheme-dependent constant. Hence, pomeron-hadron and pomeron-pomeron scattering should exhibit features similar to pion-hadron and pion-pion scattering, respectively. To introduce hard interactions in diffraction dissociation, the impact parameter amplitude of the exchanged (renormalized) pomerons in pomeron--hadron and pomeron--pomeron scattering is again interpreted as the eikonalized amplitude of soft and hard interactions \begin{equation} a_{A{I\!\!P}}(M_{\rm D},\vec B) \approx \frac{i}{2}\ G\ \left\{ 1 - \exp\left[-\chi_{\rm S}^{\rm diff}(M_{\rm D},\vec B) -\chi_{\rm H}^{\rm diff}(M_{\rm D},\vec B) \right] \right\}\ . 
\label{two-comp-diff0} \end{equation} The diffractive eikonal functions read \begin{eqnarray} \chi_{\rm S}^{\rm diff}(M_{\rm D},\vec B) &=& \frac{g_{A{I\!\!P}}^0 g_{h^\star{I\!\!P}}^0 (M_D^2/s_0)^{\Delta_{I\!\!P}}}{8 \pi b_{I\!\!P}(M_D^2)} \exp\left( -\frac{\vec{B}^2}{4 b_{I\!\!P}(M_D^2)}\right) \label{two-comp-diff1} \\ \chi_{\rm H}^{\rm diff}(M_{\rm D},\vec B) &=& \frac{\sigma_{\rm hard}^{A{I\!\!P}}(M_D^2)}{8 \pi b_{\rm h,diff}}\exp\left( -\frac{\vec{B}^2}{4 b_{\rm h,diff}}\right), \label{two-comp-diff2} \end{eqnarray} where $\sigma_{\rm hard}^{A{I\!\!P}}$ is the parton model cross section for hard pomeron--$A$ scattering ($A$ can be a hadron, photon or pomeron). In all calculations the pomeron PDFs proposed by Capella, Kaidalov, Merino, and Tran (CKMT) \cite{Capella95a,Capella96a} with a hard gluon component are used. To estimate the sensitivity of the model results to non-factorizing coherent pomeron contributions as proposed in \cite{Collins93,Collins95a}, we optionally also use a toy model with a direct pomeron-quark coupling \cite{Kniehl94}. In this case, the pomeron is treated similarly to a photon, with a flavor-independent, unknown quark coupling $\lambda$. The corresponding matrix elements are given in \cite{Engel97d}. \section{Soft color reconnection} Both the CDF and D0 collaborations have found dijet production by color--singlet exchange \cite{Abachi96b,Abe98a}. These Jet--gap--Jet (JgJ) events are not due to traditional diffractive processes. The two jets separated by a rapidity gap are back-to-back correlated in polar angle. In double-diffractive events, where the diffractively produced systems on both sides of the gap are described by pomeron--hadron scattering, we would certainly also find jets, but these jets would not be back-to-back correlated. Therefore, we have to consider these events as mainly due to a new mechanism of hard pomeron exchange.
To describe these events within the {\sc Phojet} Monte Carlo, we introduce SCR between hard scattered partons in nondiffractive events following Eboli, Gregores and Halzen \cite{Eboli97a}. This mechanism is quite similar to the soft color interaction mechanism described by Ingelman \cite{Ingelman98}. We use the following SCR probabilities \cite{Eboli97a} in {\sc Phojet} \begin{equation} {}F_{qq}:F_{qg}:F_{gg}=\frac{1}{9}:\frac{1}{24}:\frac{1}{64}. \end{equation} The simplest hard q--q event in which SCR leads to a rapidity gap between two jets is an event with a single hard valence-quark--valence-quark scattering. In normal events in the Dual Parton Model we get two color strings, each stretched between one scattered quark and the diquark of the other hadron; no rapidity gap is present in such events. In events with SCR the exchange of soft gluons causes a color reconnection: the color strings now connect the hard scattered quark and the diquark of the same hadron, and these are events with a rapidity gap. The simplest hard g--g event in which SCR leads to a rapidity gap between two jets is an event with just one hard g--g scattering. In normal events we again get two color strings, each connecting the (soft) valence quark of one hadron via a hard scattered gluon to the (soft) diquark of the other hadron. In events with SCR the color strings are stretched from the (soft) valence quark of one hadron via one hard scattered gluon to the (soft) diquark of the same hadron. Such events might have a rapidity gap. In most events we have multiple soft and hard interactions; even if a rapidity gap appears in one of the multiple collisions, the gap might be filled by hadrons resulting from the other collisions. The Monte Carlo simulation of complete events incorporates this effect; in this way {\sc Phojet} already accounts for the gap survival probability \cite{Bjorken93a,Bopp97a}.
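As a minimal illustration of how such relative probabilities can enter an event loop, the sketch below (not {\sc Phojet} code; treating the quoted $F$ values directly as per-scattering reconnection probabilities is our simplifying assumption) tags individual hard scatterings as colour-reconnected:

```python
import random

# Sketch (not PHOJET code) of tagging individual hard scatterings as
# soft-colour-reconnected.  The relative probabilities are the F values
# quoted in the text; using them directly as per-scattering reconnection
# probabilities is a simplifying assumption made here.
F_SCR = {("g", "g"): 1.0 / 64.0,
         ("g", "q"): 1.0 / 24.0,
         ("q", "q"): 1.0 / 9.0}

def is_reconnected(parton_a, parton_b, rng=random):
    """Return True if a hard scattering of the two partons ('q' or 'g')
    undergoes soft color reconnection."""
    return rng.random() < F_SCR[tuple(sorted((parton_a, parton_b)))]

# A gap can only survive if every sub-collision of the event either
# reconnects or stays out of the gap region; a full simulation would
# loop over all soft and hard interactions of the event here.
random.seed(1)
frac = sum(is_reconnected("q", "q") for _ in range(100000)) / 100000
print(frac)  # statistically close to 1/9
```

In a complete simulation the gap survival suppression discussed above emerges automatically, because every additional soft or hard sub-collision of the same event can refill the gap.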
\section{Previous comparisons with hadron--hadron and photon--hadron data\label{comparison-diff}} \subsection{Diffractive cross sections} Studying diffractive cross sections is not the primary concern of this paper. Results on diffractive cross sections were already presented using the {\sc Dtujet} model in Refs.~\cite{Aurenche92a,Engel92a,Bopp94a} and using the present {\sc Phojet} model in Refs.~\cite{Engel95a,Engel96e}; we include updated results for these cross sections here. In Fig.~\ref{ppdif}.a data on single diffractive cross sections \cite{Chapman74,Schamberger75,Albrow76,Armitage82,Ansorge86,Robinson89,Amos90a,Amos93a,Abe94c} are compared with our model results ($M_{\rm D}^2 < 0.05 s$). It is to be noted that the data on single diffractive cross sections at collider energies are subject to large uncertainties. Nevertheless, the rise of the cross section from ISR energies to the energies of the CERN and {}FERMILAB colliders is less steep than expected from the Born-level expression, the triple-pomeron formula. However, within our model a renormalized pomeron flux as proposed in \cite{Goulianos95a} is not needed. There are two reasons for this: (i) The eikonal unitarization procedure in the model suppresses the rapidity gap survival probability. This effect is well known (see for example \cite{Capella76,Gotsman95a}) and can be tested directly by comparing diffraction dissociation in deep inelastic scattering and photoproduction at HERA \cite{Bopp97a}. (ii) The graph for double-pomeron scattering has cuts which correspond to single diffraction dissociation. Due to the negative sign of these contributions, the diffractive cross section is significantly reduced at high energies as compared to a model with an eikonalized triple-pomeron graph only \cite{PhD-RE}.
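The damping of the Born-level growth by eikonal unitarization invoked in point (i) can be illustrated with a short numerical sketch. The script below is not {\sc Phojet} code: the coupling, slope and scale values are invented placeholders, and only the soft eikonal of the functional form of Eq.~(\ref{two-comp-diff1}) is kept, unitarized as in Eq.~(\ref{two-comp-diff0}).

```python
import math

# Toy illustration (not PHOJET code): eikonal unitarization damps the
# power-like Born growth of a diffractive amplitude.  The coupling g,
# slope b and scale s0 below are placeholder values, not model parameters.
DELTA = 0.08           # supercritical pomeron, alpha(0) - 1
g, b, s0 = 10.0, 4.0, 1.0

def chi_soft(m2, B):
    # Soft eikonal ~ g^2 (M_D^2/s0)^Delta / (8 pi b) * exp(-B^2 / (4 b))
    return g * g * (m2 / s0) ** DELTA / (8.0 * math.pi * b) \
        * math.exp(-B * B / (4.0 * b))

def sigma(m2, unitarized, b_max=30.0, n=3000):
    # Integrate 2 Im a(B) over d^2B = 2 pi B dB, with
    # Im a = (1 - exp(-chi)) / 2 (unitarized) or chi / 2 (Born term).
    total, dB = 0.0, b_max / n
    for i in range(n):
        B = (i + 0.5) * dB
        x = chi_soft(m2, B)
        im_a = 0.5 * (1.0 - math.exp(-x)) if unitarized else 0.5 * x
        total += 2.0 * im_a * 2.0 * math.pi * B * dB
    return total

# The unitarized cross section stays below the Born term, and the
# suppression grows with the diffractive mass:
for m2 in (1e2, 1e4, 1e8):
    print(m2, sigma(m2, False), sigma(m2, True))
```

Since $(1-e^{-\chi})/\chi$ decreases as $\chi$ grows, the ratio of the unitarized to the Born integral falls with increasing diffractive mass, which is the qualitative origin of the flattened energy dependence discussed above.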
\begin{figure}[thb] \centering \begin{center} \unitlength1mm \begin{picture}(135,62) \put(0,0){\epsfig{figure=sad971.ps,width=6.0cm}} \put(2,3){a)} \put(65,0){\epsfig{figure=sad972.ps,width=6.2cm}} \put(67,3){b)} \end{picture} \end{center} \vspace*{-3mm} \medskip \caption{(a) Single and double diffractive $p-\bar p$ cross sections as a function of the center-of-mass energy $\protect\sqrt s$. Model results are compared to data on single diffractive cross sections \protect\cite{Chapman74,Schamberger75,Albrow76,Armitage82,Ansorge86,Robinson89,Amos90a,Amos93a,Abe94c}. In addition, some experimental estimates for the cross section on double diffraction dissociation \protect\cite{Ansorge86,Robinson89} are shown (triangles). (b) The energy dependence of the central diffraction cross section. We compare the cross section as obtained from \protect{\sc Phojet} with unitarization and a supercritical pomeron with the cross section obtained by Streng \protect\cite{Streng86a} without unitarization and with a critical pomeron. Both cross sections are for the same two kinematic cuts: $M_{\rm CD}>2$ GeV/c${}^2$ and Feynman-$x$ of the scattered hadron $x_F >0.95$ (upper curves) and $0.97$ (lower curves). \label{ppdif} } \end{figure} In Fig.~\ref{ppdif}.b we compare, as a function of the energy, the central diffraction cross section in proton-proton collisions obtained from {\sc Phojet} with the cross section calculated by Streng \cite{Streng86a}. In {\sc Phojet} we use a supercritical pomeron with $\Delta_{{\tilde{I\!\!P}}}$ = 0.08 whereas Streng \cite{Streng86a} uses a critical pomeron with $\Delta_{{I\!\!P}}$ = 0. Note that in Born approximation the double-pomeron cross section also grows with $s$ like $\sim s^{2\Delta_{\tilde{I\!\!P}}}$. This rapid increase is damped in {\sc Phojet} by the unitarization procedure. At high energies, contributions from multiple interactions become important.
The rapidity gaps are filled with hadrons due to inelastic rescattering and the cross section for central diffraction gets strongly reduced. In contrast, Streng calculates only the Born term cross section. {}Figure~\ref{ppdif}.b illustrates the differences obtained using different theoretical methods. We stress that both methods use the measured single diffractive cross sections to extract the triple-pomeron coupling. \subsection{Diffractive hadron- and jet production} There are some experiments on diffractive particle production at colliders which we have studied previously using {\sc Phojet} \cite{Engel97d,Engel95c}. Generally, we have found good agreement. We do not present these comparisons again here. Among others, the following experiments have studied hadron production in single diffraction dissociation at the CERN SPS and DESY HERA colliders: \begin{enumerate} \item The UA--4 Collaboration \cite{Bozzo84b,Bernard86a,Bernard87b} measured pseudorapidity distributions of charged hadron production for different masses of the diffractive system. We have already twice compared earlier versions of the Dual Parton Model \cite{Ranft87c,Roesler93} to these data. New in the present model are hard diffraction and multiple interactions in diffractive hadron production; therefore we have again compared to these data and we find reasonable agreement \cite{Engel97d}. In the model, multiple interactions and minijets lead to a rising rapidity plateau in pomeron--proton collisions in a similar way as observed in hadron--hadron collisions. \item Hard diffractive proton--antiproton interactions were investigated by the UA--8 Collaboration \cite{Brandt92}. In this experiment the existence of a hard component of diffraction was demonstrated for the first time. Because of the importance of these findings, we already compared our model to them in \cite{Engel95c} and found the model to be consistent with this experiment. Therefore we will not repeat this comparison here.
\item Results on single photon diffraction dissociation and in particular hard single diffraction were presented by both experiments at the HERA electron--proton collider \cite{Ahmed95a,Aid95b,Derrick95a,Derrick95h,Derrick95i}. The ZEUS Collaboration \cite{Derrick95h} has presented differential and integrated jet pseudorapidity cross sections for jets with $E^{\rm jet}_{\perp} >$ 8 GeV. The absolute normalization of these data is given. This allows a stringent check of the model. In \cite{Engel97d} we have compared the differential jet pseudorapidity cross sections from ZEUS \cite{Derrick95h} to the model. \end{enumerate} \section{Comparing hadron production in diffractive processes to non-diffractive particle production in $p$--$p$ and $\gamma$--$\gamma$ reactions\label{comparison-channel}} \begin{figure}[thb] \centering \begin{center} \unitlength1mm \begin{picture}(135,62) \put(0,0){\epsfig{figure=sad975.ps,width=6.0cm}} \put(2,3){a)} \put(65,0){\epsfig{figure=sad976.ps,width=6.0cm}} \put(67,3){b)} \end{picture} \end{center} \vspace*{-3mm} \medskip \caption{(a) Jet transverse energy distributions in non-diffractive $p$--$p$ and $\gamma$--$\gamma$ collisions compared with the jet transverse energy distribution in central diffraction (pomeron--pomeron collisions). For the latter channel we give the distributions separately for the full model, the model without multiple interactions (s) and the model with a direct pomeron coupling (d). The distributions were generated with \protect{\sc Phojet}; the c.m.\ energy / diffractive mass is 100 GeV in all cases. (b) Jet pseudorapidity distributions in non-diffractive $p$--$p$ and $\gamma$--$\gamma$ collisions compared with the jet pseudorapidity distribution in single diffraction (pomeron--$p$ scattering).
The distributions were generated with \protect{\sc Phojet}; again the c.m.\ energy / diffractive mass is 100 GeV in all cases, but the pseudorapidities given for the collisions with pomerons refer to the $\protect\sqrt s$ = 2 TeV $p$--$p$ collisions used to generate the diffractive events. \label{pt100jpopo} } \end{figure} In Sec.~\ref{the-model} we have already pointed out that our model for particle production in pomeron--hadron/photon collisions and pomeron--pomeron collisions has the same structure, characterized by multiple soft collisions and multiple minijets, as models for hadron production in non-diffractive hadron--hadron collisions. Therefore, we again expect the main differences in comparison to other channels to appear in the hard component, due to the differences between the pomeron and hadron structure functions and due to the existence or nonexistence of a direct pomeron--quark coupling. The differences in the parton structure functions of protons, photons and pomerons lead to quite different energy dependences of the hard cross sections. In all processes where pomerons are involved, single diffraction and central diffraction, hard processes become important already at lower energies. For pomeron--pomeron scattering at low energy the hard cross section is about a factor of 100 bigger than that of $p$--$\bar p$ collisions. At high energies the opposite happens: the hard cross sections in all processes where pomerons are involved rise less steeply with the energy than in purely hadronic or photonic processes. The reason for this is the different low-$x$ behavior of the parametrizations of the structure functions used. However, nothing is known at present from experiment about the low-$x$ behavior of the pomeron structure function. In Fig.~\ref{pt100jpopo}.a we compare jet transverse energy distributions in $p$--$p$ and $\gamma$--$\gamma$ collisions with the ones in ${I\!\!P}$--${I\!\!P}$ collisions.
In the channels with pomerons we present again the distributions according to our full model, according to the model without multiple interactions, and according to the model with a direct pomeron--quark coupling. In all non-diffractive collisions we have $\sqrt s$ = 100 GeV and the diffractive events are generated in $\sqrt s $ = 2 TeV collisions with $M_D = 100$ GeV/$c^2$. The differences in the jet transverse energy distributions between the channels are, as to be expected, more pronounced than in the hadron $p_{\perp}$ distributions. We observe an important reduction in the jet distributions in the model without multiple interactions. The effect of the direct pomeron coupling is as dramatic as the effect due to the direct photon coupling. The $E_{\perp}$ distributions in the ${I\!\!P}$--$\gamma$ and ${I\!\!P}$--${I\!\!P}$ channels extend up to the kinematic boundary. In the latter two cases, as in the case of $\gamma$--$\gamma$ collisions, the entries at large $E_{\perp}$ come only from direct processes. In Fig.~\ref{pt100jpopo}.b we compare jet pseudorapidity distributions in $p$--$p$, $\gamma$--$\gamma$ and ${I\!\!P}$--$p$ collisions; again all collisions are at $\sqrt s$ = 100 GeV, with the diffractive events generated in $\sqrt s $ = 2 TeV collisions. {}For the jets we observe substantial differences in the shape of the pseudorapidity distributions. \section{Single diffraction and central diffraction at TEVATRON} In Figs.~\ref{fdndm} to \ref{fdndetaj}.b we present some cross sections calculated using {\sc Phojet} at TEVATRON energy. The distributions are mass distributions in single and central diffraction (Fig.~\ref{fdndm}) and jet pseudorapidity distributions in single and central diffraction using $E_{\perp}$ thresholds of 5 and 15 GeV (Figs.~\ref{fdndetaj}.a and .b). In all figures we give the plots for three different cuts on the {}Feynman-$x$ of the diffractive nucleons, $x_F>$ 0.9, 0.95 and 0.97. It is obvious that all distributions and cross sections depend strongly on these cuts.
One of the results obtained by the D0 Collaboration is the ratio of double--pomeron exchange (DPE)\footnote{ In \protect\cite{Albrow97a} the term double--pomeron exchange is used instead of central diffraction.} to non--diffractive (ND) dijet events \cite{Albrow97a}: \begin{equation} \left(\frac{\sigma({\rm DPE})}{\sigma({\rm ND})}\right)_{E_{\perp}^{\rm jet}>15~{\rm GeV}} \approx 10^{-6} \end{equation} Within the {\sc Phojet} model one gets the following cross sections:\\ \hspace*{1cm}Non-diffractive interactions (ND): $\sigma ({\rm ND}) = $ 45.2 mb,\\ \hspace*{1cm}Both-side single diffraction dissociation (SD): $\sigma ({\rm SD}) =$ 11.2 mb,\\ \hspace*{1cm}Central diffraction (CD): $\sigma ({\rm CD}) =$ 0.64 mb.\\ {}From these cross sections, together with figures like Fig.~\ref{fdndetaj}, we get (always calculated for $E_{\perp}^{\rm jet}$ larger than 15 GeV):\\ \hspace*{1cm}(CD)/(ND)$\approx 2 \times 10^{-6}$,\\ \hspace*{1cm}(SD)/(ND)$\approx 4 \times 10^{-3}$,\\ \hspace*{1cm}(CD)/(SD)$\approx 0.5 \times 10^{-3}$.\\ Despite the fact that no experimental acceptance has been considered for these {\sc Phojet} results, it is interesting to find the (CD)/(ND) ratio so close to the D0 value given above. \begin{figure}[thb] \centering \epsfig{figure=fnal11.ps,width=10.0cm} \medskip \caption{ Distribution of the diffractive mass in single diffraction dissociation (pomeron--proton) and central diffraction (pomeron--pomeron) at TEVATRON with $\sqrt s = 1.8$ TeV for three different cuts on the Feynman-$x$ of the diffractive nucleons.
\label{fdndm} } \end{figure} \begin{figure}[thb] \centering \begin{center} \unitlength1mm \begin{picture}(135,62) \put(0,0){\epsfig{figure=fnal12.ps,width=6.0cm}} \put(2,3){a)} \put(65,0){\epsfig{figure=fnal15.ps,width=6.0cm}} \put(67,3){b)} \end{picture} \end{center} \vspace*{-3mm} \medskip \caption{(a) Pseudorapidity distribution of jets with $E_{\perp}$ larger than 5 GeV and 15 GeV in (one side) single diffraction (Pom--p) at TEVATRON for three different cuts on the Feynman-$x$ of the diffractive nucleon. The upper curves with the same plotting symbol are generally {}for $E_{\perp}$ = 5 GeV, the lower curves for $E_{\perp}$ = 15 GeV. (b) Pseudorapidity distribution of jets with $E_{\perp}$ larger than 5 GeV in central diffraction (Pom--Pom) at TEVATRON for three different cuts on the Feynman-$x$ of the diffractive nucleons. \label{fdndetaj} } \end{figure} \section{Diffractive dijet production at TEVATRON} Data on dijet production in single diffraction dissociation using a rapidity gap trigger were published by the CDF Collaboration \cite{Abe97a}. Same side ($\eta^{{\rm jet}1} \times \eta^{{\rm jet}2} > 0$) dijets were selected with $E^{\rm jet}_{\perp}>$ 20 GeV in the jet pseudorapidity window 1.8 $<|\eta^{\rm jet}|<$ 3.5. The gap trigger demanded no charged hadrons in the range 3.2 $<|\eta |<$ 5.9 opposite to the jets and no calorimeter hit above 1.5 GeV in the range 2.4 $<|\eta |<$ 4.2 opposite to the jets. The ratio of dijet events with gap (JJg) to dijets without gap (JJ) was found to be \begin{equation} R_{\rm JJg-CDF} = \frac{\rm (JJg)}{\rm (JJ)} = (0.75 \pm 0.05 \pm 0.09) \%\ . \end{equation} Using {\sc Phojet}, we have so far obtained good statistics only for $E^{\rm jet}_{\perp}>$ 10 GeV with the CDF pseudorapidity restrictions. We obtained the cross sections $\sigma_{\rm JJ}$ = 50.4 $\mu$b and $\sigma_{\rm JJg}$ = 0.107 $\mu$b. This gives the ratio $R_{\rm JJg- PHOJET}$ = 0.21$\%$.
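As a quick sanity check on the arithmetic (purely illustrative, not part of the original analysis), the quoted {\sc Phojet} cross sections indeed reproduce the 0.21\% ratio:

```python
# Sanity check of the quoted PHOJET gap fraction (E_T^jet > 10 GeV,
# CDF pseudorapidity restrictions); both cross sections are taken from the text.
sigma_jj = 50.4    # microbarn, dijets without gap (JJ)
sigma_jjg = 0.107  # microbarn, dijets with gap (JJg)

r_jjg = sigma_jjg / sigma_jj
print(f"R_JJg = {100 * r_jjg:.2f}%")  # R_JJg = 0.21%
```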
There are two possible reasons for this ratio being smaller than the one found by CDF: (i) the different $E_{\perp}$ cut and (ii) the CKMT pomeron structure functions \cite{Capella95a,Capella96a} used in the calculation, which might not contain enough hard gluons. \begin{figure}[thb] \centering \begin{center} \unitlength1mm \begin{picture}(135,62) \put(0,0){\epsfig{figure=jjgdndetj.ps,width=6.0cm}} \put(2,3){a)} \put(65,0){\epsfig{figure=jjgdndphij12.ps,width=6.0cm}} \put(67,3){b)} \end{picture} \end{center} \vspace*{-3mm} \medskip \caption{(a) $E^{\rm jet}_{\perp}$ distributions in JJ and JJg events obtained in {\sc Phojet} using the CDF trigger. (b) $\phi^{{\rm jet}1}$ - $\phi^{{\rm jet}2}$ distributions in JJ and JJg events obtained with {\sc Phojet} using the CDF trigger. \label{jjgdndetj} } \end{figure} In Fig.~\ref{jjgdndetj}.a we present $E_{\perp}^{\rm jet}$ distributions calculated from {\sc Phojet} for the JJ and JJg events. Within the statistics of the Monte Carlo calculation both distributions seem to have the same shape. In Fig.~\ref{jjgdndetj}.b we present $\phi^{{\rm jet}1}-\phi^{{\rm jet}2}$ distributions for the JJ and JJg events. Again, within the statistics both distributions seem to be quite similar. However, in the JJ events we find additional jet pairs more often than in the JJg events. Therefore we would expect a narrower correlation of the two jets in the JJg events. \section{Dijet production by color--singlet exchange at TEVATRON} We will refer here only to the data on dijet production by color--singlet exchange published by the CDF and D0 Collaborations \cite{Abachi96b,Abe98a}. More preliminary data have been presented at this meeting. D0 \cite{Abachi96b} finds opposite side ($\eta^{{\rm jet}1}\times \eta^{{\rm jet}2} < 0$) dijets with $E^{\rm jet}_{\perp}>$ 30 GeV and $|\eta^{\rm jet}|>$ 2. The pseudorapidity gap is at $|\eta|<$ 1.3.
The fraction of JgJ events is found to be \begin{equation} R_{\rm JgJ-D0} = \frac{\rm (JgJ)}{\rm (JJ)} = (1.07 \pm 0.10 ^{+ 0.25}_{-0.13}) \%\ . \end{equation} CDF \cite{Abe98a} uses opposite side jets with $E^{\rm jet}_{\perp}>$ 20 GeV and $3.5 >|\eta^{\rm jet}|>$ 1.8 with a gap at $|\eta|<$ 1.0. The gap fraction is found to be \begin{equation} R_{\rm JgJ-CDF} = \frac{\rm (JgJ)}{\rm (JJ)} = (1.13 \pm 0.12 \pm 0.11) \%\ . \end{equation} {}Furthermore, the jets are found to be back--to--back correlated in $\phi^{{\rm jet}1}-\phi^{{\rm jet}2}$. In {\sc Phojet}, using the SCR model as described in Section III, we find with the D0 trigger \begin{equation} R_{\rm JgJ-PHOJET-D0} = \frac{\rm (JgJ)}{\rm (JJ)} = 0.43 \%\ . \end{equation} Here, 0.1\% background JgJ events with only an accidental gap were subtracted; this background was determined in a run without the use of SCR. With the CDF trigger we find \begin{equation} R_{\rm JgJ-PHOJET-CDF} = \frac{\rm (JgJ)}{\rm (JJ)} = 0.50 \% \end{equation} where 0.5\% background JgJ events had to be subtracted. \begin{figure}[thb] \centering \begin{center} \unitlength1mm \begin{picture}(135,56) \put(0,0){\epsfig{figure=wjetjj20.ps,width=6.0cm}} \put(2,3){a)} \put(65,0){\epsfig{figure=wjetjj30.ps,width=6.0cm}} \put(67,3){b)} \end{picture} \end{center} \vspace*{-3mm} \medskip \caption{(a) Monte Carlo predictions for the fractions of g--g, g--q and q--q hard scatterings in the JJ events (without gap trigger), JgJ events obtained with SCR and background (bg) JgJ events (obtained without SCR) for the CDF trigger. (b) The fractions of g--g, g--q and q--q events for JJ events (without gap trigger), JgJ events obtained with SCR and background (bg) JgJ events (obtained without SCR) for the D0 trigger. \label{wjetjj} } \end{figure} In the {\sc Phojet} Monte Carlo we can subdivide the hard scattering events into g--g, g--q and q--q scatterings.
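For orientation, a trivial arithmetic sketch of the background subtraction described above; note that the raw (pre-subtraction) gap fractions are inferred here and are not stated in the text:

```python
# Hedged reconstruction of the background subtraction described above:
# corrected = raw - accidental-gap background (all values in percent).
# The raw values below are implied by the quoted numbers, not quoted directly.
corrected = {"D0 trigger": 0.43, "CDF trigger": 0.50}   # quoted, after subtraction
background = {"D0 trigger": 0.10, "CDF trigger": 0.50}  # quoted accidental-gap background
raw = {k: round(corrected[k] + background[k], 2) for k in corrected}
print(raw)  # {'D0 trigger': 0.53, 'CDF trigger': 1.0}
```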
In Fig.~\ref{wjetjj} we plot for the CDF and D0 triggers the fractions of g--g, g--q and q--q events for JJ events (without gap trigger), JgJ events obtained with SCR and background JgJ events (obtained without SCR). For both triggers we find that q--q scattering dominates the JgJ events, but g--q and g--g scattering also contribute. {}For the q--q events the fraction of JgJ events due to SCR (background subtracted) is always smaller than 1\% of the q--q scatterings without gap. In principle we could calculate from this the gap survival probabilities separately for q--q, g--q and g--g scatterings; however, with the present statistics of JgJ events we prefer not to give such numbers yet. In JgJ events we generally find only one or two pairs of jets, whereas in events without a large gap the average number of jets is definitely larger than this. In Fig.~\ref{fdndphij12}.a we present the $\phi^{{\rm jet}1}-\phi^{{\rm jet}2}$ distribution calculated with {\sc Phojet} for the JJ and JgJ events for the CDF trigger together with the corresponding distribution published by CDF \cite{Abe98a}. All distributions are rather similar. However, with bigger statistics we expect to see a narrower correlation in the JgJ events, in which further jets are much less frequent than in the JJ events. In Fig.~\ref{fdndphij12}.b we present the calculated $E_{\perp}^{\rm jet}$ distributions. Within the statistics of the Monte Carlo runs we do not find differences between the distributions corresponding to the JJ and JgJ events. \begin{figure}[thb] \centering \begin{center} \unitlength1mm \begin{picture}(135,62) \put(0,0){\epsfig{figure=fdndphij12.ps,width=6.0cm}} \put(2,3){a)} \put(65,0){\epsfig{figure=fdndetjjgj.ps,width=6.0cm}} \put(67,3){b)} \end{picture} \end{center} \vspace*{-3mm} \medskip \caption{(a) The $\phi^{{\rm jet}1}-\phi^{{\rm jet}2}$ distributions corresponding to the CDF trigger \protect\cite{Abe98a}.
(b) The $E_{\perp}^{\rm jet}$ distributions corresponding to the CDF trigger. \label{fdndphij12} } \end{figure} \begin{figure}[thb] \centering \epsfig{figure=rjgj.ps,width=10.0cm} \medskip \caption{ The change of $R_{\rm JgJ}$ with $E_{\perp}^{\rm jet}$. Preliminary data from the D0 Collaboration \protect\cite{Abbott97} are compared to the {\sc Phojet} results obtained with SCR. \label{rjgj} } \end{figure} The change of $R_{\rm JgJ}$ with $E_{\perp}^{\rm jet}$ was studied by the D0 Collaboration \cite{Abbott97}. A modest rise of the color singlet fraction with $E_{\perp}^{\rm jet}$ was found. In Fig.~\ref{rjgj} we compare the {\sc Phojet} results on $R_{\rm JgJ}$ with these preliminary data. The {\sc Phojet} predictions exhibit a flat $E^{\rm jet}_{\perp}$ dependence, still compatible with the data\footnote{Very recent data of the CDF Collaboration \protect\cite{Abe98a} on a similar ratio also show a flat $E^{\rm jet}_{\perp}$ dependence.}. The D0 Collaboration \cite{Abbott97} also found $R_{\rm JgJ}$ at $\sqrt s$ = 630 GeV to be a factor 2.6$\pm $ 0.6 larger than at $\sqrt s$ = 1.8 TeV. The ratio of $R_{\rm JgJ}$ at these energies calculated with {\sc Phojet} is consistent with the data, see Fig.~\ref{rjgj}. \section{Conclusions and summary\label{summary}} The processes implemented in {\sc Phojet} allow the study of hard and soft diffraction (e.g.\ ${I\!\!P}$--$p$, ${I\!\!P}$--$\gamma$ and ${I\!\!P}$--${I\!\!P}$ collisions) in many channels. We hope that this tool and forthcoming data on hard and soft single diffraction dissociation and central diffraction from TEVATRON and HERA will help to answer important questions like: (i) Is soft color reconnection the correct mechanism to describe color singlet exchange processes between jets? Could this mechanism be responsible for other features of diffractive processes as well? (ii) Can hard diffraction consistently be described by pomeron structure functions?
What is the low-$x$ behaviour of the pomeron structure function? (iii) Are there multiple soft and multiple hard collisions in diffraction, as in $p$--$p$, $p$--$\gamma$ or $\gamma$--$\gamma$ interactions? (iv) Does a super-hard component of the pomeron exist (e.g.\ can the data be interpreted with a direct pomeron--quark coupling)? Single-inclusive jet pseudorapidity distributions show only a small sensitivity to a possible super-hard structure of the pomeron (see Fig.~\ref{pt100jpopo}.b). By contrast, it is possible to use the jet transverse momentum distribution to explore a super-hard pomeron structure. Furthermore, in the model, diffractive events containing jets produced by direct pomeron--parton scattering exhibit much less soft hadronic background than other JJg events. This soft-underlying-event feature might allow for a crucial test of the models of a super-hard pomeron. However, without pomeron PDFs tuned to the latest HERA data, reliable predictions for TEVATRON energies cannot be obtained. All features of the JgJ events observed at TEVATRON so far are qualitatively well described in the Monte Carlo implementation of the SCR model \cite{Eboli97a}. However, first comparisons with data indicate that the ratio JgJ/JJ obtained with a simple SCR model is too small. Further investigations, higher statistics in the Monte Carlo events and more precise data are required to draw definite conclusions. \noindent {\bf Acknowledgments}\\ The authors are grateful to S.~Roesler for many discussions. One author (R.E.) thanks T.K.~Gaisser for helpful comments and remarks. The work of R.E.\ is supported by the U.S.\ Department of Energy under Grant DE-FG02-91ER40626.
\section{Samples} \label{sec:samples} The cubic phases of Ge-based B20 alloys can only be stabilized under high-pressure/high-temperature conditions \cite{Tsvyashchenko2012}. The samples used in this study have thus been synthesized under 8 GPa in a toroidal high-pressure apparatus by a melting reaction of Mn, Fe and Ge at the Institute of High Pressure Physics (Troitsk, Moscow, Russia). Pellets of well-mixed powdered constituents were placed in rock-salt pipe ampoules and directly electrically heated to $T \approx 1600^\circ$C. The samples were then quenched to room temperature before releasing the applied pressure. The total mass of each batch is $\approx 100-150\,\text{mg}$.\\ As a consequence of the synthesis procedure, the samples have a metastable crystal structure and are obtained in polycrystalline form, with crystallite sizes larger than $10-100$ $\mu$m. X-ray powder diffraction \cite{Dyadkin2014,Valkovskiy2016} confirmed the B20 structure of the samples used in the small-angle neutron scattering (SANS) experiments presented in the main text. Previous SANS studies of these samples revealed the helimagnetic ordering of the compounds at low temperatures and yielded a preliminary definition of the critical temperatures (see Refs. \onlinecite{Altynbaev2014,Altynbaev2016a,Altynbaev2016b} for more details).\\ \section{Small-angle neutron scattering data analysis} \label{sec:sans} \subsection{Lineshape analysis} \label{sec:sans_data_ijkl} In a small-angle neutron scattering (SANS) experiment, the scattered intensity is recorded using a two-dimensional position-sensitive detector. Examples of such maps are presented in Figs. \ref{fig:IvQ_30}a--c.
In order to perform a quantitative analysis of the observed (temperature- and field-dependent) magnetic structures, the intensity is radially averaged and the resulting $I\,\text{vs.}\,Q$ curves are described using the following function: \begin{equation} \label{eq:1} I(Q) = I_{bckg} + \, A \cdot \underbrace{\frac{\kappa_s / \pi}{\kappa_s^2 + \left(Q - k_s\right)^2}}_{L_s} + \, B \cdot \underbrace{\frac{\kappa_A / \pi}{\kappa_A^2 + \left(Q - k_A\right)^2}}_{L_A} + \, C \cdot I_{abn}(Q) \quad , \end{equation} where: \begin{itemize} \item $I_{bckg}$ is a Q-independent background level, \item $A \cdot L_s$ is a Lorentzian profile centered at $Q = k_s$ with a half-width at half-maximum $\kappa_s$, which corresponds to the scattering due to the (incompletely reoriented) spin spirals, \item $B \cdot L_A$ is a Lorentzian profile centered at $Q = k_A$ with a half-width at half-maximum $\kappa_A$, which corresponds to the scattering due to the SK lattices stabilized within the A-phase, \item $C \cdot I_{abn}$ is a smeared Heaviside function centered at $Q = k_s$, which describes phenomenologically the inelastic scattering denoted as "abnormal" in Ref. \onlinecite{Altynbaev2014}. \end{itemize} As a general trend, the Lorentzian widths and positions are found to be field-independent. In what follows, the parameters $\kappa_{s,A}$ and $k_{s,A}$ are thus kept constant for the analysis of the fixed-temperature field scans. \begin{figure} \includegraphics[width=.9\textwidth]{Fig1s.pdf} \caption{\textbf{(a--c)} Small-angle scattering maps, taken at different fields at $T = 100$ K for Mn$_{0.7}$Fe$_{0.3}$Ge. The white sectors were used for radial averages of the intensity. \textbf{(d)} $I \text{ vs. } Q$ plots in the direction perpendicular to the external field. Best fits of Eq. \ref{eq:1} to the data for $H = 0.6$ T (helical phase) and $H = 1.0$ T (A-phase) are shown as blue and black solid lines, respectively. They essentially consist of a single Lorentzian profile.
On the other hand, the experimental curve for $H=0.8$ T --which sits on the lower edge of the A-phase-- is better described by the sum of two Lorentzian functions centered at the same positions as for the lower and higher field values (shown with black and blue dashed lines, respectively). The resulting fit curve is shown as the red solid line.} \label{fig:IvQ_30} \end{figure} \subsection{Determination of the A-phase borders} \label{sec:sans_data_30} \begin{figure} \includegraphics[width=.9\textwidth]{Fig2s.pdf} \caption{SANS maps, taken at $T = 100$ K for Mn$_{0.9}$Fe$_{0.1}$Ge at fields $H=1.2$ T \textbf{(a)} and $H=1.6$ T \textbf{(b)}. The white sectors in \textbf{(a--b)} have been used for performing the radial averages. \textbf{(c)} $I\,\text{vs.}\,Q$ plots in the direction perpendicular to the external field. Solid black and blue lines are fits of Eq. \ref{eq:1} to the data collected at applied magnetic fields of $H = 1.2$ T and 1.6 T, respectively. At the field of 1.2 T, the experimental profile is well described using a single Lorentzian, sitting on top of a diffuse signal associated with the "abnormal" scattering of Eq. \ref{eq:1} (magenta dashed line). On the other hand, the description of the data at $H = 1.6$ T requires an additional Lorentzian profile (red dashed line), centered at a Q value that is smaller than the one corresponding to the helical ordering (black dashed line).} \label{fig:IvQ_10} \end{figure} The critical fields $H_{\rm a1}$ and $H_{\rm a2}$ are defined as the lower and upper borders of the A-phase, respectively (see, {\it e.g.}, Fig. 3 of main text). They were accurately determined by analyzing the $I\,\text{vs.}\,Q$ curves, obtained after radial integration of the scattered intensity in the direction perpendicular to the external field (see sectors in Fig. \ref{fig:IvQ_30}a--c). Examples of such plots are given in Fig.
\ref{fig:IvQ_30}d for Mn$_{0.7}$Fe$_{0.3}$Ge.\\ At fields smaller than $H_{a1}$, the reflection coming from the incompletely reoriented helical structure is always described using a single Lorentzian profile $L_s$, setting $B = 0$ in Eq. \ref{eq:1}. As the field increases, the scattering peak broadens and its center of gravity shifts to lower values of momentum transfer. As illustrated in Fig. \ref{fig:IvQ_30}d, it is actually best described using two Lorentzian profiles.\\ \begin{figure} \includegraphics[width=\textwidth]{Fig3s.pdf} \caption{Small-angle scattering intensity maps, taken at $T = 100$ K for Mn$_{0.9}$Fe$_{0.1}$Ge at fields $H=2.6$ T \textbf{(a)} and $H=1.6$ T \textbf{(b)}, and the corresponding $I \text{ vs. } \phi$ plots \textbf{(c)}. The sectors used for averaging the intensity are marked in white on the SANS maps \textbf{(a--b)}. For better visibility, the isotropic intensity level was set to 0 and 0.2 for the field values $H = 1.6$ T and $H = 2.6$ T, respectively. The red solid line is a guide to the eye, marking the intensity peaks at azimuthal angles of 0, 90, 180 and 270 degrees for the experimental curve taken at $H=1.6$ T.} \label{fig:IvPhi_10} \end{figure} This suggests the emergence of an additional magnetic phase, coexisting with the conical state but showing a different periodicity. In analogy with the vast majority of cubic chiral magnets, we treat this "extra" intensity as the signature of the A-phase, populated with magnetic SKs. This is justified owing to the selection rule for magnetic neutron scattering, which dictates that only the component of the magnetic moment perpendicular to the scattering vector contributes to the scattered intensity.
Here, it implies that the additional intensity reflects the spatial modulation of the longitudinal magnetization ({\it i.e.}, the component oriented along the applied field).\\ With further increase of the magnetic field, the first Lorentzian disappears completely ($A = 0$ in Eq. \ref{eq:1}) while only the second one remains ($B \neq 0$ in Eq. \ref{eq:1}) up to $H_{\rm a2}$, thereby defining the upper border of the A-phase. \subsection{The Mn$_{0.9}$Fe$_{0.1}$Ge case} \label{sec:sans_data_10} While the signal associated with the A-phase is clearly seen on the SANS maps for the Mn$_{0.8}$Fe$_{0.2}$Ge and Mn$_{0.7}$Fe$_{0.3}$Ge samples (see Fig. \ref{fig:IvQ_30} of this supplement and Figs. 1 and 2 of the main text), it is much weaker in the case of Mn$_{0.9}$Fe$_{0.1}$Ge. However, applying the analysis strategy described above allows retrieving the (H,T) borders of the A-phase in this particular case. This is illustrated in Fig. \ref{fig:IvQ_10}, where a double peak is evidenced in the intermediate field range. A fit of Eq. \ref{eq:1} to the data indeed reveals two distinct periodicities, similar to the example given in Sec. \ref{sec:sans_data_30}.\\ \begin{figure} \includegraphics[width=0.6\textwidth]{Fig4s.pdf} \caption{Field-dependence of the integrated intensities of the peaks from the helical structure (black circles) and from the A-phase (red triangles) for Mn$_{0.9}$Fe$_{0.1}$Ge.} \label{fig:IvsH_all} \end{figure} As a cross-check for the existence of an A-phase signal in Mn$_{0.9}$Fe$_{0.1}$Ge, it is also interesting to consider the azimuthal dependence of the intensity $I\,\text{vs.}\,\phi$ (Fig. \ref{fig:IvPhi_10}c). In the purely conical state ({\it e.g.}, $H = 2.6\,\text{T}$ in Fig. \ref{fig:IvPhi_10}c), the latter is composed of two peaks, centered around $\phi = 0$ and $180^{\circ}$, {\it i.e.} parallel and antiparallel to the applied field.
On the other hand, for fields where a double peak is observed in the $I\,\text{vs.}\,Q$ plots, some additional intensity appears at angles $\phi = 90$ and $270^{\circ}$, {\it i.e.} perpendicular to the applied field ({\it e.g.}, $H = 1.6\,\text{T}$ in Fig. \ref{fig:IvPhi_10}c).\\ \subsection{Building the (H,T) phase diagrams} \label{sec:sans_data_mnop} In order to construct the (H,T) phase diagrams presented in Fig. 3 of the main text for all studied compositions, the field evolutions of the $I\,\text{vs.}\,Q$ curves are considered for both the parallel and perpendicular directions with respect to the applied magnetic field. This allows obtaining the critical fields attributed to the conical (along the field) and SK (perpendicular to the field) phases, by plotting the H-dependence of the fit parameters $A$ and $B$ of Eq. \ref{eq:1} (Fig. \ref{fig:IvsH_all}). Namely, the region of existence of the A-phase corresponds to the field range within which $B \neq 0$. On the other hand, the maximum of $A$ marks the first (conical) critical field $H_{\rm C1}$, while $H_{\rm C2}$ is defined through a linear extrapolation of $A \rightarrow 0$ at the largest fields.\\ \bibliographystyle{apsrev}
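As a purely illustrative sketch of the lineshape analysis, the model of Eq. \ref{eq:1} can be written out numerically; note that the functional form (and direction) of the smeared Heaviside term is an assumption here, since the text only specifies a smeared step centered at $k_s$:

```python
import math

def lorentzian(q, k, kappa):
    """Unit-area Lorentzian centered at k with HWHM kappa (L_s and L_A in Eq. 1)."""
    return (kappa / math.pi) / (kappa**2 + (q - k)**2)

def smeared_step(q, k_s, width):
    """Smeared Heaviside centered at k_s; the erf smearing and its direction
    are illustrative assumptions, not specified in the text."""
    return 0.5 * (1.0 + math.erf((q - k_s) / width))

def intensity(q, i_bckg, a, k_s, kappa_s, b, k_a, kappa_a, c, width):
    """Full lineshape of Eq. (1): background + two Lorentzians + smeared step."""
    return (i_bckg
            + a * lorentzian(q, k_s, kappa_s)
            + b * lorentzian(q, k_a, kappa_a)
            + c * smeared_step(q, k_s, width))
```

In the helical phase one would set $b = 0$, while within the A-phase $a = 0$, mirroring the fitting strategy described above.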
\section{Introduction}\label{sec:intro} Policymakers and project managers regularly conduct randomized experiments (``A/B tests'') to assess potential changes to policy or product. A key metric is the \emph{average treatment effect (ATE)}, the difference in the population-average outcome when everyone or no one is treated. ATEs are easily estimated by differences in the sample-average outcome within treatment groups, barring interference. Estimation from observational data is also possible under appropriate assumptions, \eg, unconfoundedness \citep{imbens2015causal}. Identifying an individual's outcome with their utility -- as we will throughout this paper -- the ATE is the difference in social welfare in these two counterfactual scenarios. By linearity, this coincides with the population-average of each \emph{individual}'s treatment effect, the difference in their own utility in the two counterfactual scenarios. It is widely recognized, however, that treatment effects can vary widely between individuals \citep{heckman1997making,crump2008nonparametric}. Thus, even if the ATE is positive, there is a \emph{risk} that many individuals are harmed by the proposed change. Crucially, \emph{distributional} treatment effects (DTEs), which compare the two counterfactual utility distributions beyond their means, \emph{cannot} capture this risk. Indeed, \citet{imbens2009recent} note ``quantile effects are defined as differences between quantiles of the two marginal potential outcome distributions, and not as quantiles of the unit level effect.'' They nonetheless advocate for the former because policy ``choice should be governed by preferences of the policymaker over these distributions.'' However, such rational-decision-making framing presumes a policymaker facing a choice between lotteries drawing at random from individual outcomes. Instead, concerned with equity beyond social welfare, we should worry about the individuals, not the policymaker. 
Hypothetically, harm to some individuals is possible even when the ``treat-all'' utility distribution first-order-dominates ``treat-none'' so that \emph{any} expected-increasing-utility-function-maximizer would choose ``treat-all.'' One way to gain further insight into heterogeneity and hence inequities is to consider conditional ATEs (CATEs) given pre-treatment covariates. For example, if we observe a discrete sensitive attribute (\eg, race), we can simply compare the CATE in each attribute-value group.\footnote{We may still make some inferences on these even if we do not observe such attributes; see \citealp{chen2019fairness,kallus2021assessing}.} But it may not always be clear what are relevant such attributes and whether we are omitting important ones. Given rich and continuous covariates, we can still reliably learn the CATE function by leveraging recent advances in causal machine learning \citep{slearner,xlearner,drlearner,rlearner,causaltree,causalforest}. It may still not be clear, nonetheless, whether the covariates are relevant for fairness considerations, what groups are captured in this way, and/or how to summarize the many individual predictions of complex machine-learned CATEs. It is therefore particularly appealing to focus directly on the distribution of \emph{individual} treatment effects (ITEs), such as the average effects among the worst-affected 10\%, 20\%, \etc, corresponding to the conditional value at risk (CVaR) of this distribution. The challenge is that no ITE can ever be observed -- the so-called Fundamental Problem of Causal Inference. Nonetheless, regardless of whether covariates are meaningful for fairness considerations, if they control for heterogeneity, CATE may predict ITE well. In this paper, we leverage this to proxy these important but unidentifiable treatment-effect risk measures. Specifically, we provide the tightest-possible upper and lower bounds given by CATE on the CVaR of ITE. 
By construction these are functions of distributions of observables. What remains is inference from data, whether experimental or observational. Since the CATE function can be high-dimensional, especially if we use a lot of covariates to control for heterogeneity, inference is difficult and na\"ive plug-in approaches fail. We design debiased estimators and confidence intervals for our bounds that overcome this challenge by being exceedingly robust: given rough, machine-learned estimates of CATE and other nuisances, they behave as though we used perfect estimates; they remain consistent even when some nuisances are mis-estimated; and surprisingly they remain valid as bounds even when CATE is mis-estimated. We conclude by using our tools to illustrate treatment-effect risk in a case study of job-search-assistance benefits. \section{Problem Set Up and Definitions} Each individual in the population is associated with two potential outcomes, $Y^*(0),\,Y^*(1)\in\Rl$, corresponding to individual utility under ``treat-all'' and ``treat-none,'' respectively, and baseline covariates (observable characteristics), $X\in\mathcal X$. The ITE, ATE, and CATE are, respectively, \begin{align*} \delta&=Y^*(1)-Y^*(0), \qquad\bar\tau=\E[Y^*(1)]-\E[Y^*(0)]=\E\delta=\E\catef(X)\\ \catef(X)&=\E[\delta\mid X]=\mu(X,1)-\mu(X,0),\quad\text{where $\mu(X,a)=\E[Y^*(a)\mid X]$}. \end{align*} We assume $\E\delta^2<\infty$ throughout. Of interest is the average effect among the $(100\times\alpha)\%$-worst affected, formalized by $\cvarat\alpha(\delta)$, where for any $Z$ \citep{rockafellar2000optimization}\footnote{CVaR is sometimes defined for the right tail, corresponding to our $-\cvarat\alpha(-Z)$.} \begin{equation}\label{eq:cvar}\cvarat\alpha(Z)=\sup_\beta\prns{\beta+\frac1\alpha\E(Z-\beta)_-},\end{equation} where $(u)_-=u\wedge0$. 
The $\sup$ is attained by $\beta$ equal to the $\alpha$-quantile: \begin{equation}\label{eq:quantile} F_Z^{-1}(\alpha)=\inf\fbraces{\beta:F_Z(\beta)\geq\alpha}, \quad\text{where}~F_Z(z)=\Prb{Z\leq z}. \end{equation} Provided $F_Z(F_Z^{-1}(\alpha))=\alpha$ (\eg, $Z$ continuous), then $\cvarat\alpha(Z)=\Eb{Z\mid Z\leq F_Z^{-1}(\alpha)}$. Otherwise, $\cvarat\alpha(Z)\in[\Eb{Z\mid Z< F_Z^{-1}(\alpha)},\,\Eb{Z\mid Z\leq F_Z^{-1}(\alpha)}]$, and, unlike these two endpoints, $\cvarat\alpha(Z)$ is continuous in $\alpha$ and coherent \citep{artzner1999coherent}. It is therefore the \emph{correct} generalization of ``average of the $(100\times\alpha)\%$-lowest values'' when ambiguous due to discontinuities. We consider data from a randomized experiment or observational study. Each individual is associated with a treatment $A\in\{0,1\}$, and we observe the \emph{factual} outcome $Y=Y^*(A)$ (never $Y^*(1-A)$). The data is $(X_i,A_i,Y_i)\sim(X,A,Y)$, $1\leq i\leq n$. We assume unconfoundedness throughout: $Y^*(a)\indep A\mid X$.\footnote{And $Y=Y^*(A)$ assumes non-interference \citep{rubin1986comment}.} Randomized experiments (our focus) ensure this by design (often with $X\indep A$). Our results nonetheless extend to observational settings assuming unconfoundedness. Under unconfoundedness, ATE and CATE are identifiable, \ie, are functions of the $(X,A,Y)$-distribution: $\mu(X,a)=\E[Y\mid X,A=a]$, $\catef(X)=\mu(X,1)-\mu(X,0)$, $\bar\tau=\E\catef(X)$ ($=\E[Y\mid A=1]-\E[Y\mid A=0]$ if $X\indep A$). Define also the propensity score $e(X)=\Prb{A=1\mid X}$ and marginal-outcome regression $\bar\mu(X)=\E[Y\mid X]=e(X)\mu(X,1)+(1-e(X))\mu(X,0)$. We now illustrate treatment-effect risk and its \emph{un}identifiability, which motivates us to consider the tightest-possible \emph{identifiable} bounds (\cref{sec:bounds}) and inference thereon (\cref{sec:inference}).
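A minimal empirical sketch of definitions \eqref{eq:cvar}--\eqref{eq:quantile} (illustrative only, not the debiased estimator developed in \cref{sec:inference}): for a finite sample, the objective in \eqref{eq:cvar} is concave and piecewise linear in $\beta$, so the supremum is attained at a sample point.

```python
def cvar(z, alpha):
    """Empirical CVaR_alpha(Z) via the sup formula in Eq. (2).

    The objective beta + E[(Z - beta)_-]/alpha is concave piecewise-linear
    in beta, so it suffices to evaluate it at the sample points.
    """
    n = len(z)
    return max(beta + sum(min(zi - beta, 0.0) for zi in z) / (alpha * n)
               for beta in z)

# With alpha * n an integer, this is the mean of the alpha * n lowest values:
print(cvar(list(range(1, 11)), 0.2))  # 1.5

# Closed form for Z ~ N(mu, sigma): CVaR_alpha(Z) = mu - sigma * pdf(q) / alpha
# with q = Phi^{-1}(alpha); for alpha = 0.1 the constant is about 1.755.
from statistics import NormalDist
nd = NormalDist()
print(round(nd.pdf(nd.inv_cdf(0.1)) / 0.1, 3))  # 1.755
```

The Gaussian constant computed in the last line is the factor $1.75$ appearing in \cref{ex:simple}.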
\begin{example}[Simple Example]\label{ex:simple} Suppose $$\begin{pmatrix}Y^*(0)\\Y^*(1)\end{pmatrix}\sim\mathcal N\prns{\begin{pmatrix}\mu(0)\\\mu(1)\end{pmatrix},\,\begin{pmatrix}1&\rho\\\rho&1\end{pmatrix}},~\mu(1)\geq\mu(0),~\rho\in[-1,1].$$ If $\bar\tau=\mu(1)-\mu(0)>0$, the $Y^*(1)$-distribution \emph{first-order-dominates} $Y^*(0)$. If $\mu(1)=\mu(0)$, the distributions are \emph{indistinguishable}. However, the ITE-distribution depends on $\rho$: $\delta\sim\mathcal N(\mu(1)-\mu(0),\sqrt{2-2\rho})$, $\cvarat{0.1}(\delta)=\bar\tau-1.75 \sqrt{2-2\rho}$. The unidentifiability of $\cvarat{0.1}(\delta)$ follows because the $(A,Y)$-distribution is fixed given just $\mu(0),\mu(1),\Prb{A=1}$ while $\cvarat{0.1}(\delta)$ varies with $\rho$. \end{example} \begin{remark}[Covariate-conditional policies] Treat (\ie, rollout to) all or none is often the choice faced by project managers, but given covariates we can learn covariate-conditional treatment policies \citep{kallus2018balanced,athey2017efficient,qian2011performance,zhao2012estimating,kallus2021minimax, kitagawa2018should}. Learning aside, treating only when $\catef(X)>0$ ensures all covariate-defined groups have nonnegative group-average effects.\footnote{However, even this ideal can induce disparate impacts \citep{kallus2019assessing}.} Personalizing on all available covariates is however generally infeasible due to operational, non-stationarity, and/or ethical/reputational concerns. Nonetheless, given any policy $\pi:\mathcal X\to\{0,1\}$, we may simply redefine ITE as $Y(\pi(X))-Y(0)$ and our results still apply. This is especially relevant when $\pi$ personalizes on some covariates and the rest explain heterogeneity conditionally thereon. 
\end{remark} \begin{remark}[Risk of observed vs unobserved variables] CVaR is an example of a coherent risk measure \citep{artzner1999coherent}; such measures are used to assess distributions beyond expectations and are equivalent to distributionally-robust worst-case expectations \citep{ruszczynski2006optimization}. For example, CVaR is the worst-case expectation among distributions with Radon-Nikodym derivative to the given distribution bounded by $1/\alpha$. Other distributional divergences can also define ambiguity sets \citep[\eg,][]{ben2013robust,bertsimas2018robust,esfahani2018data}. Alternative approaches limit the \emph{complexity} of subpopulations \citep{NEURIPS2020_07fc15c9,kearns2018preventing}. In finance \citep{krokhmal2002portfolio}, distributionally-robust supervised learning \citep{bagnell2005robust}, demographics-free fair learning \citep{NEURIPS2020_07fc15c9}, and CVaR-DTEs \citep{kallus2019localized} alike, the variable whose risk is of interest is \emph{always observed}. \Eg, model loss on each training example is observed. In contrast, we consider risk of an \emph{unobserved variable}, hence we study bounds in \cref{sec:bounds}. For inference, we are uniquely concerned with risk of an \emph{unknown function}, hence we develop learning-robust methods in \cref{sec:inference}. \end{remark} \section{Bounds}\label{sec:bounds} \subsection{Upper Bound: The CATE-CVaR} An upper bound on $\cvarat\alpha(\delta)$ is crucial: if it is negative or substantially below the ATE, the change poses certifiable risk or inequity to a $(100\times\alpha)\%$-subpopulation. \begin{theorem}[Upper Bound by CATE-CVaR]\label{thm:cvarbound} \begin{equation}\label{eq:cvarbound} \cvarat\alpha(\delta)\leq\cvarat\alpha(\catef(X)). \end{equation} Moreover, given any $X$-distribution and integrable $\tau:\mathcal X\to\Rl$, some $(X,\delta)$-distribution has the given $X$-marginal, $\catef(X)=\E[\delta\mid X]$, and \cref{eq:cvarbound} holding with equality.
\end{theorem} Since $\catef(X)$ represents our \emph{best guess} for $\delta$ (in squared error), imputing the unknown $\delta$ with $\catef(X)$ seems reasonable. \Cref{thm:cvarbound} shows this in fact provides an upper bound.\footnote{\label{footnote:coherentrisk}\Cref{eq:cvarbound} extends to any coherent risk by writing $\delta=\catef(X)+(\delta-\catef(X))$ and using sub-additivity.} If $\catef(X)$ is continuous, $\cvarat\alpha(\catef(X))=\Efb{\delta\mid \catef(X)\leq F_{\catef(X)}^{-1}(\alpha)}$, and \cref{eq:cvarbound} is intuitive: $\cvarat\alpha(\delta)$ is the worst average effect among \emph{all} $(100\times\alpha)\%$-subpopulations, while $\cvarat\alpha(\catef(X))$ is the worst only among $X$-defined subpopulations. This bound is also tight: given just $\catef(X)$, it cannot be improved.\footnote{The bound need not be tight given the $(X,A,Y)$-distribution, which characterizes more than the mean of the $(\delta\mid X)$-distribution, as described by the Fr\'echet-Hoeffding bounds. We focus on best bounds given just by CATE, which is the common tool to understand effect heterogeneity in practice.} \Cref{thm:cvarbound} implies an ordering: \begin{equation}\label{eq:ordering}\cvarat{\alpha_1}(\delta)\leq\cvarat{\alpha_2}(\delta)\leq\cvarat{\alpha_2}(\catef(X))\leq\cvarat{\alpha_3}(\catef(X))\leq\bar\tau~~~\forall~0<\alpha_1\leq\alpha_2\leq\alpha_3\leq1.\end{equation} \begin{remark}[CVaR as summary of CATE]\label{remark:summary} Aside from being a bound, $\cvarat\alpha(\catef(X))$ is of independent interest as a summary of effect heterogeneity along meaningful covariates $X$ of explicit equity concern. When $X$ is more than a few discrete groups, understanding the many facets of estimated heterogeneity is challenging, both interpretationally and statistically.
We could test for $X$-heterogeneity \citep{crump2008nonparametric,sawilowsky1990nonparametric,gail1985testing,davison1992treatment}.\footnote{There are also tests for heterogeneity \emph{not} explained by $X$ \citep{ding2019decomposing,ding2016randomization}. These, like us, leverage bounds on unidentifiable quantities.} \Eg, omnibus test $H_0:0\in\argmin_\gamma\E(\catef(X)-\bar\tau-\gamma^\top (X-\E X))^2$ \citep{chernozhukov2018generic}. This, however, may detect minor heterogeneity in small subpopulations, may not assess magnitude or direction, and may be inappropriate if we expect heterogeneity. In contrast, $\cvarat\alpha(\catef(X))$ is a simple, meaningful summary of $\catef(X)$. Inference, however, is a challenge. We tackle this in \cref{sec:inference}. \end{remark} \begin{remark}[Inter-quantile averages of CATE]\label{remark:gate} CVaR of CATE can in fact permit us to summarize average effects in the middle, not just the tails. Consider any $0<\alpha<\alpha'<1$. Provided that $F_{\catef(X)}(F_{\catef(X)}^{-1}(\alpha))=\alpha$, $F_{\catef(X)}(F_{\catef(X)}^{-1}(\alpha'))=\alpha'$ (\eg, $\catef(X)$ is continuous), we have that \begin{equation}\label{eq:interquantile} \Eb{Y^*(1)-Y^*(0)\mid F_{\catef(X)}^{-1}(\alpha)<\catef(X)\leq F_{\catef(X)}^{-1}(\alpha')} = \frac{\alpha'\cvarat{\alpha'}(\catef(X))-\alpha\cvarat{\alpha}(\catef(X))}{\alpha'-\alpha}. \end{equation} \Cref{eq:interquantile} is the average effect among individuals with CATE between the $\alpha$- and $\alpha'$-quantiles. A similar but different quantity is considered in \citet{chernozhukov2018generic}: the average effect among individuals in inter-quantile ranges of an \emph{estimate} of CATE fit on a split sample, rather than the true CATE. 
They consider averaging this over splits, but that average still need not correspond to \cref{eq:interquantile}, and this approach is \emph{not} robust to errors in the CATE estimate, meaning these errors will propagate to non-negligible terms in the estimate and its variance. In contrast, by leveraging the unique optimization structure of CVaR, in \cref{sec:inference} we provide an estimator that \emph{is} robust to such errors, allowing us to estimate the CVaR of the true CATE, rather than a split-sample-estimated CATE. By writing \cref{eq:interquantile} using CVaR, we can then leverage these results to get robust estimates for inter-quantile averages, as we will explain in \cref{remark:comparing}. \end{remark} \begin{remark}[\emph{Who} is negatively affected?] Suppose we find $\cvarat\alpha(\catef(X))<0$ while $\bar\tau>0$, where $\alpha$ is ``substantial'' -- the social-welfare benefit of the proposal is borne by some substantial negatively-impacted subpopulation. While that may already cool enthusiasm for the proposal, we may wonder \emph{who} are the harmed individuals, \eg, to help design a new, better treatment. Assuming continuity, $\cvarat\alpha(\catef(X))$ is the ATE among individuals with $\catef(X)\leq F_{\catef(X)}^{-1}(\alpha)$ -- an \emph{identifiable} group. One remaining question is interpretation. This is easy if $\catef(X)$ is linear or a tree (or estimated using such models, which still gives a bound per \cref{thm:doublyvalid}). We can also consider summaries of this group, \eg, fraction belonging to sensitive groups, or learn simpler models to explain membership \citep{lakkaraju2019faithful,ribeiro2016should}. Alternatively, given we detect substantial inequities, we can \emph{separately} investigate which variables negatively modulate treatment effect by, \eg, studying $\argmin_\gamma\E(\catef(X)-\bar\tau-\gamma^\top X)^2$ \citep{drlearner,chernozhukov2018generic}.
\end{remark} \subsection{Lower Bounds under Limited Residual Heterogeneity Range}\label{sec:lowerboundbounded} However much we try to control for heterogeneity, disparate effect-predictiveness of covariates may mean some negative ITEs are averaged out and hidden while others are singled out. A remedy when concerned about disproportionate predictiveness among sensitive groups (\eg, race) would be to include these (or proxies) within $X$. But, we may always worry about missing something. A lower bound can provide assurances about what the upper bound may be missing. This depends on how much residual heterogeneity remains. Our first set of lower bounds limits the range of residual heterogeneity, \ie, almost-sure bounds on $\delta-\catef(X)$, while our second set of lower bounds limits its variance, \ie, bounds on $\op{Var}(\delta\mid X)=\E(\delta-\catef(X))^2$. \begin{theorem}\label{thm:cvarlb2} Suppose $\abs{\catef(X)-\delta}\leq b$. Then \begin{equation}\label{eq:cvarlb2} \cvarat\alpha(\delta)\geq\sup_\beta\prns{\beta+\frac1{2\alpha}\E[(\catef(X)-b-\beta)_-]+\frac1{2\alpha}\E[(\catef(X)+b-\beta)_-]}. \end{equation} Moreover, given any $X$-distribution and integrable $\tau:\mathcal X\to\Rl$, some $(X,\delta)$-distribution has the given $X$-marginal, $\catef(X)=\E[\delta\mid X]$, $\abs{\catef(X)-\delta}\leq b$, and \cref{eq:cvarlb2} holding with equality. \end{theorem} The right-hand side of \cref{eq:cvarlb2} is the $\alpha$-CVaR of the equal-mixture distribution of $\catef(X)-b$ and $\catef(X)+b$. It reduces to $\cvarat\alpha(\catef(X))$ when $b=0$ (equivalent to $\delta=\catef(X)$). When $\alpha=1$, it becomes $\bar\tau$ for any $b\geq0$ (as necessary for tightness). The lower bound is established via weak semi-infinite duality and its tightness by exhibiting the equal-mixture distribution. Since $(\catef(X)\pm b-\beta)_-\geq (\catef(X)-\beta)_--b$, \cref{eq:cvarlb2} upper bounds $\cvarat\alpha(\catef(X))-b$.
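To make \cref{thm:cvarlb2} concrete, the following sketch (our illustration, not the paper's replication code) evaluates the bound on simulated CATE draws by exploiting the equal-mixture interpretation noted above; the CATE distribution and the constants $\alpha,b$ are arbitrary choices.

```python
import numpy as np

def empirical_cvar(z, alpha):
    # Empirical alpha-CVaR via the quantile-based variational formula.
    beta = np.quantile(z, alpha)
    return beta + np.mean(np.minimum(z - beta, 0.0)) / alpha

def range_lower_bound(tau, b, alpha):
    # RHS of eq. (cvarlb2): the alpha-CVaR of the equal-weight mixture of
    # tau(X) - b and tau(X) + b, computed here by pooling shifted samples.
    pooled = np.concatenate([tau - b, tau + b])
    return empirical_cvar(pooled, alpha)

rng = np.random.default_rng(1)
tau = rng.normal(0.5, 1.0, size=50_000)   # hypothetical CATE draws
alpha, b = 0.1, 0.2

ub = empirical_cvar(tau, alpha)           # CATE-CVaR (upper bound on ITE-CVaR)
lb = range_lower_bound(tau, b, alpha)     # range-based lower bound
```

Consistent with the discussion in the text, the computed bound lies in $[\cvarat\alpha(\catef(X))-b,\,\cvarat\alpha(\catef(X))]$ and recovers the CATE-CVaR at $b=0$.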
This simpler bound is tight if we only assume a one-sided-bounded range. \begin{theorem}\label{thm:cvarlb1} Suppose $\catef(X)-\delta\leq b$. Then \begin{equation}\label{eq:cvarlb1} \cvarat\alpha(\delta)\geq\cvarat\alpha(\catef(X))-b. \end{equation} Moreover, for $\alpha<1$, given any $\varepsilon>0$, $X$-distribution, and integrable $\tau:\mathcal X\to\Rl$, some $(X,\delta)$-distribution has the given $X$-marginal, $\catef(X)=\E[\delta\mid X]$, $\catef(X)-\delta\leq b$, and \cref{eq:cvarlb1} holding with equality up to $\varepsilon$-error. \end{theorem} The lower bound is immediate and its tightness given by exhibiting a skewed two-point-mass distribution. For $\alpha=1$, \cref{eq:cvarlb1} simply reads $\bar\tau\geq\bar\tau-b$, but for \emph{any} $\alpha<1$, \cref{eq:cvarlb1} is actually \emph{tight}. \subsection{Lower Bounds under Limited Residual Heterogeneity Variance}\label{sec:lowerboundvariance} Limiting residual heterogeneity within a range may be implausible, or plausible only with large constants, yielding a weak bound. We next explore the implication of the residual ITE-variance after controlling for $X$, which we can bound given observables. \begin{theorem}\label{thm:cvarlb3} Suppose $\op{Var}(\delta\mid X)\leq \bar\sigma^2(X)$ for some integrable $\bar\sigma^2:\mathcal X\to\Rl_+$. Then \begin{equation}\label{eq:cvarlb3} \cvarat\alpha(\delta)\geq\sup_\beta\prns{\beta+\frac1{2\alpha}\Eb{\catef(X)-\beta-\sqrt{(\catef(X)-\beta)^2+\bar\sigma^2(X)}}}. \end{equation} Moreover, given any $\varepsilon>0$, $X$-distribution, and integrable $\tau:\mathcal X\to\Rl$, some $(X,\delta)$-distribution has the given $X$-marginal, $\catef(X)=\E[\delta\mid X]$, $\op{Var}(\delta\mid X)\leq \bar\sigma^2(X)$, and \cref{eq:cvarlb3} holding with equality up to $\varepsilon$-error. \end{theorem} The proof of \cref{thm:cvarlb3} leverages strong duality for convex semi-infinite optimization. 
Note \cref{eq:cvarlb3} equals $\cvarat\alpha(\catef(X))$ whenever $\bar\sigma^2(X)=0$ and $\bar\tau$ whenever $\alpha=1$. Since $\abs{\delta-\catef(X)}\leq b\implies\op{Var}(\delta\mid X)\leq b^2$, plugging $\bar\sigma^2(X)=b^2$ into \cref{eq:cvarlb3} must be looser than \cref{eq:cvarlb2} by tightness. This can be verified directly: $\sum_{\pm}(\catef(X)\pm b-\beta)_-=\catef(X)-\beta-\frac12\sum_{\pm}\abs{\catef(X)\pm b-\beta}\geq \catef(X)-\beta-\sqrt{(\catef(X)-\beta)^2+b^2}$. A residual-variance bound is both more plausible and easier to calibrate than an absolute bound. Letting $\rho(X)=\op{Corr}(Y^*(0),Y^*(1)\mid X)\in[-1,1]$, we have \begin{align}\label{eq:vardecomp} \op{Var}(\delta\mid X)=\op{Var}(Y\mid X,A=0)+\op{Var}(Y\mid X,A=1)-2\rho(X)\op{Var}^{1/2}(Y\mid X,A=0)\op{Var}^{1/2}(Y\mid X,A=1), \end{align} where all terms but $\rho(X)$ are identifiable. Thus, postulating different potential-outcome correlations, we obtain different bounds. \Cref{eq:vardecomp} is maximized for $\rho(X)=-1$, which is tight, as all correlations are realizable. Thus, plugging $\bar\sigma^2(X)=(\op{Var}^{1/2}(Y\mid X,A=0)+\op{Var}^{1/2}(Y\mid X,A=1))^2$ into \cref{eq:cvarlb3} yields a tight lower bound on ITE-CVaR, given conditional expectations and variances. We may obtain better bounds if we postulate larger $\rho(X)$. \Cref{thm:cvarlb3} also implies a simpler but looser bound. \begin{corollary}\label{cor:cvarlb4} \begin{align} \label{eq:cvarlb4a} 0\leq\cvarat\alpha(\catef(X))-&\cvarat\alpha(\delta)\leq\frac{1}{2\alpha}\Eb{\op{Var}^{1/2}(\delta\mid X)}\\ \label{eq:cvarlb4b} &\leq\frac{1}{2\alpha}\Eb{\op{Var}^{1/2}(Y\mid X,A=0)+\op{Var}^{1/2}(Y\mid X,A=1)}\\ \label{eq:cvarlb4c} & \leq\frac{1}{2\alpha}\sqrt{\Eb{(Y-\mu(X,A))^2\mid A=0}}+\frac{1}{2\alpha}\sqrt{\Eb{(Y-\mu(X,A))^2\mid A=1}} . \end{align} \end{corollary} \Cref{eq:cvarlb4a} more transparently bounds the slack in \cref{eq:cvarbound} in terms of residual effect variance.
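The variance-based bound of \cref{thm:cvarlb3} is also straightforward to evaluate numerically: the objective is concave in $\beta$, so a one-dimensional maximization suffices. The sketch below (our illustration; the CATE draws and variance bound are arbitrary) does this on a sample.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def empirical_cvar(z, alpha):
    beta = np.quantile(z, alpha)
    return beta + np.mean(np.minimum(z - beta, 0.0)) / alpha

def variance_lower_bound(tau, sigma2, alpha):
    # RHS of eq. (cvarlb3) on a sample of CATE draws tau_i with postulated
    # residual-variance bound(s) sigma2; the objective is concave in beta.
    def neg_objective(beta):
        d = tau - beta
        return -(beta + np.mean(d - np.sqrt(d ** 2 + sigma2)) / (2 * alpha))
    res = minimize_scalar(neg_objective, method="bounded",
                          bounds=(tau.min() - 10.0, tau.max() + 10.0))
    return -res.fun

rng = np.random.default_rng(2)
tau = rng.normal(0.5, 1.0, size=50_000)   # hypothetical CATE draws
alpha = 0.1

ub = empirical_cvar(tau, alpha)                 # CATE-CVaR
lb = variance_lower_bound(tau, 0.25, alpha)     # with sigma_bar^2(X) = 0.25
```

Setting `sigma2 = 0` recovers the CATE-CVaR, matching the observation that \cref{eq:cvarlb3} equals $\cvarat\alpha(\catef(X))$ when $\bar\sigma^2(X)=0$; the slack from the CATE-CVaR is at most $\bar\sigma/(2\alpha)$, as in \cref{eq:cvarlb4a}.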
However, it is not tight, as can be seen for $\alpha=1$. \Cref{eq:cvarlb4c} is even looser but appealing as it avoids $\op{Var}(Y\mid X,A)$, depending only on the root-mean-squared error of regressing $Y$ on $X$ for each $A\in\{0,1\}$ (\ie, the numerator of nonparametric $R^2$). \section{Inference}\label{sec:inference} We next turn to estimating the bounds developed in \cref{sec:bounds} and constructing confidence intervals. Recall our data $(X_i,A_i,Y_i)\sim(X,A,Y)$, $1\leq i\leq n$, may be experimental or observational. The only relevant technical difference between these two cases is whether propensity, $e(X)=\Prb{A=1\mid X}$, is known or not. While the distinction does not matter here, note that $e(X)$ is usually constant in experiments ($A\indep X$). In observational settings $e(X)$ may be estimated. We focus here on inference on CATE-CVaR. We provide analogous procedures for the lower bounds of \cref{thm:cvarlb1,thm:cvarlb2,thm:cvarlb3,cor:cvarlb4} in \cref{apx:lbest}. Fix $\alpha$. Our inferential target is $$\Psi=\cvarat\alpha(\catef(X))=\beta^*+\frac1{\alpha}\E(\catef(X)-\beta^*)_-,\quad\text{where}~\beta^*=F_{\catef(X)}^{-1}(\alpha)=\inf\fbraces{\beta:\Prb{\catef(X)\leq\beta}\geq\alpha}.$$ Since $\catef(X)$ is not directly observed, the first step is fitting it. Fortunately, recent advances in causal machine learning provide excellent tools for this \citep{slearner,xlearner,drlearner,rlearner,causaltree,causalforest}. Given an estimate $\hat\tau$, we might consider a plug-in approach: $\hat\Psi^\text{plug-in}=\sup_\beta\fprns{\beta+\frac1{n\alpha}\sum_{i=1}^n(\hat\tau(X_i)-\beta)_-}$. Unfortunately, the statistical behavior of $\hat\Psi^\text{plug-in}$ depends heavily on that of $\hat\tau$: if $\hat\tau$ converges slowly and/or has non-negligible bias, as occurs when fit by flexible machine-learning methods, both estimation rates and valid inference may be imperiled for $\hat\Psi^\text{plug-in}$. \begin{algorithm}[t!]
\caption{Point estimate and confidence interval for $\cvarat\alpha(\catef(X))$}\label{alg:est} \begin{algorithmic}[1] \STATEx\textbf{Input:} Level $\alpha\in(0,1)$, data $\{(X_i,A_i,Y_i):i=1,\dots,n\}$, number of folds $K$, $e,\mu,\tau$-estimators \FOR{$k=1,\dots,K$} \STATE{Estimate $\hat e^{(k)},\hat\mu^{(k)},\hat\tau^{(k)}$ using data $\{(X_i,A_i,Y_i):i\not\equiv k-1~\text{(mod $K$)}\}$} \STATE{Set $\hat\beta^{(k)}=\inf\fbraces{\beta:\sum_{i\not\equiv k-1~\text{(mod $K$)}}\fprns{\findic{\hat\tau^{(k)}(X_i)\leq \beta}-\alpha}\geq0}$\label{alg:est betastep}} \STATE{\textbf{for}~$i\equiv k-1~\text{(mod $K$)}$~\textbf{do}~set $\phi_{i}=\phi(X_i,A_i,Y_i;\hat e^{(k)},\hat \mu^{(k)},\hat \tau^{(k)},\hat \beta^{(k)}) $\label{alg:est phistep}} \ENDFOR \STATE{Set $\hat\Psi=\frac1n\sum_{i=1}^n\phi_{i}$, $\hat{\op{se}}=\sqrt{\frac1{n(n-1)}\sum_{i=1}^n(\phi_{i}-\hat\Psi)^2}$\label{alg:est psistep}} \STATE{Return $\hat\Psi$ as point estimate and $[\hat\Psi\pm \Phi^{-1}((1+\gamma)/2)\hat{\op{se}}]$ as $\gamma$-confidence intervals} \end{algorithmic} \end{algorithm} Instead, we develop a debiasing approach that is \emph{insensitive} to CATE-estimation, accommodating both misspecified parametric models and flexible-but-imprecise machine-learning CATE-estimators. The main challenge is estimating $\beta^*$, which cannot be expressed by an estimating equation in $X,Y^*(0),Y^*(1)$, so its efficient/orthogonal estimation is unclear, unlike the case of quantile/CVaR treatment effects \citep{kallus2019localized,belloni2017program,firpo2007efficient}. Fortunately, we care only about $\Psi$, not $\beta^*$, and special optimization structure in $\Psi$ gives robustness to perturbations, so even rough estimates suffice. Our approach is therefore unique: we treat both $\tau$ and $\beta^*$ as nuisance parameters, together with $e,\mu$, and ensure simultaneous orthogonality to all four nuisances. \Cref{alg:est} summarizes our procedure.
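A minimal sketch of \cref{alg:est} for the experimental setting with known constant propensity may help fix ideas. Here we take $\hat\mu^{(k)}=0$ (allowed when $e$ is known, per \cref{sec:caterobust}) and fit CATE by a pseudo-outcome linear regression; these nuisance choices are simple illustrative defaults, not the only ones our theory allows.

```python
import numpy as np
from scipy.stats import norm

def cvar_cate_estimate(X, A, Y, alpha, e=0.5, K=5, level=0.9):
    # Cross-fitted point estimate and CI for CVaR_alpha(tau(X)) in a
    # randomized experiment with known constant propensity e.
    # Illustrative nuisance choices: hat-mu = 0 and a pseudo-outcome
    # linear regression for tau.
    n = len(Y)
    phi = np.empty(n)
    Xb = np.column_stack([np.ones(n), X])          # design with intercept
    D = (A - e) / (e * (1 - e)) * Y                # pseudo-outcome, E[D|X]=tau(X)
    for k in range(K):
        out = np.arange(n) % K == k                # fold being scored
        ins = ~out                                 # training folds
        coef, *_ = np.linalg.lstsq(Xb[ins], D[ins], rcond=None)
        tau_hat = Xb @ coef                        # CATE estimate
        beta = np.quantile(tau_hat[ins], alpha)    # quantile over training folds
        tail = (tau_hat[out] <= beta).astype(float)
        # the score phi with hat-mu = 0 and known propensity:
        phi[out] = beta + tail / alpha * (D[out] - beta)
    psi = phi.mean()
    se = phi.std(ddof=1) / np.sqrt(n)
    z = norm.ppf((1 + level) / 2)
    return psi, (psi - z * se, psi + z * se)
```

For instance, with $X\sim\mathcal N(0,1)$, $e=1/2$, and $Y=AX+0.1\varepsilon$ so that $\catef(x)=x$, the estimate concentrates near $\cvarat{0.25}(\catef(X))\approx-1.27$.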
It proceeds by approximating $\Psi=\E\,\phi(X,A,Y;e,\mu,\tau,\beta^*)$ by a sample average, where we define \begin{equation}\label{eq:phi} \phi(X,A,Y;\check e,\check\mu,\check\catef,\check\beta)=\check\beta+\frac1\alpha\indic{\check\catef(X)\leq\check\beta}\prns{\check\mu(X,1)-\check\mu(X,0)+\frac{A-\check e(X)}{\check e(X)(1-\check e(X))}\prns{Y-\check\mu(X,A)}-\check\beta}. \end{equation} We first estimate the unknown $(e,\mu,\tau,\beta^*)$. We do so using ``cross-fitting'' over $K$ equally-sized folds so that nuisance estimates are independent of samples where applied \citep{schick1986,doubleML,zheng2011cross}.\footnote{We may avoid cross-fitting and fit nuisances once on the whole sample if we assume estimates belong to a Donsker class with probability tending to 1; we omit this option for brevity.} As we discuss in detail in \cref{sec:cateest}, we treat $\tau$ as a separate nuisance even though $\tau(x)=\mu(x,1)-\mu(x,0)$. For one, this enables the use of specialized CATE-learners. We also treat $\beta^*$ as a separate nuisance (not as a parameter as in \citealp{kallus2019localized}) and fit it as the quantile of $\hat\tau(X)$ in the out-of-fold data. As simple regressions, $e$ and $\mu$ can be fit by parametric regression or standard machine-learning methods such as random forests, gradient boosting, neural networks, \etc. \begin{remark}[Comparing different levels and inter-quantile averages]\label{remark:comparing} To assess disparities, we may want to compare $\cvarat{\alpha}(\catef(X))$ to ATE (equivalently, $\cvarat1(\catef(X))$). To get good confidence intervals on $\cvarat{\alpha'}(\catef(X))-\cvarat{\alpha}(\catef(X))$, we can replace $\phi_i$ in \cref{alg:est phistep} of \cref{alg:est} with the difference of $\phi_i$'s for $\alpha'$ and $\alpha$ (using the same nuisances except $\hat\beta^{(k)}$). Setting $\alpha'=1$, this will, in particular, correctly yield smaller confidence intervals on $\bar\tau-\cvarat{\alpha}(\catef(X))$ for $\alpha$ near $1$.
Similarly, if we want confidence intervals on inter-quantile average effects as in \cref{remark:gate}, then per \cref{eq:interquantile} we may simply replace $\phi_i$ in \cref{alg:est phistep} of \cref{alg:est} with the difference of $\phi_i$'s for $\alpha'$ and $\alpha$, weighted by $\frac{\alpha'}{\alpha'-\alpha}$ and $\frac{\alpha}{\alpha'-\alpha}$, respectively. We may also consider covariances of $\phi_i$'s corresponding to many $\alpha$-levels for constructing simultaneous intervals. \end{remark} \begin{remark}[Partial-identification intervals]\label{remark:pi} While \cref{alg:est} focuses on CATE-CVaR, which upper bounds ITE-CVaR, in \cref{apx:lbest} we provide inference procedures for lower bounds on ITE-CVaR. These can be combined to construct intervals containing ITE-CVaR with probability $\gamma$. By union bound, we can simply combine the one-sided $(1+\gamma)/2$-confidence intervals for the lower and upper bounds. But coverage may be conservative ($>\gamma$) for the partial-identification interval given by the bounds. For calibrated $\gamma$-coverage (asymptotically), we must account for correlation between lower- and upper-bound estimates, given by the correlation between $\phi_i$'s for each procedure. Then, we can construct calibrated intervals following Appendix A.4 of \citet{kallus2021assessing}. \end{remark} \begin{remark}[Monotonicity]\label{remark:rearrangement} While $\cvarat\alpha(\catef(X))$ is monotone in $\alpha$, \cref{alg:est}'s output for different $\alpha$ may fail to be monotone due to estimation errors. We can post-process to ensure monotonicity using rearrangement \citep{hardy1952inequalities}, which only improves estimation and does not affect inference \citep{chernozhukov2010quantile}. We use this in \cref{sec:casestudy}. \end{remark} \subsection{Local Robustness and Confidence Intervals}\label{sec:localrobust} We now establish favorable guarantees for \cref{alg:est}.
First, we show it is insensitive to slow but consistent estimation of nuisances, having first-order behavior as if we used true values. We will need some minimal regularity. \begin{assumption}[Regularity]\label{asm:regularity} $\bar e\leq e\leq1-\bar e$ and $\abs{Y}\leq B$ for positive constants $\bar e,\,B>0$.\break $F_{\catef(X)}$ is continuously differentiable at $F^{-1}_{\catef(X)}(\alpha)$. \end{assumption} The first condition ensures that the $X$-distributions of experimental groups \emph{overlap}. It is usually guaranteed in randomized experiments by setting $e(X)$ constant ($A\indep X$). In unconfounded observational studies, it is a standard assumption. The second condition requires bounded outcomes and is largely technical to make analysis tractable. The third condition prohibits degeneracy of the quantile. The same condition is needed for asymptotic normality of sample quantiles of \emph{observed} variables. If $\catef(X)$ is discrete, the condition may be replaced by $\exists \varepsilon>0:F^{-1}_{\catef(X)}(\alpha-\varepsilon)=F^{-1}_{\catef(X)}(\alpha+\varepsilon)$, yielding superefficient quantile estimation. The only problematic case is multiplicity of $\{\beta:F_{\catef(X)}(\beta)=\alpha\}$, but only finitely-many such ``bad'' $\alpha$'s exist. Since the focus is on $X$ being rich, we focus on the continuous case and the condition in \cref{asm:regularity}. We first show how, under \cref{asm:regularity}, estimation rates for $\hat\tau^{(k)}$ translate to rates for $\hat\beta^{(k)}$. \begin{lemma}\label{lemma:betalemma} Suppose \cref{asm:regularity} holds. Then, for each $k=1,\dots,K$, $\hat\beta^{(k)}$ in \cref{alg:est betastep} of \cref{alg:est} satisfies $$ \fabs{\hat\beta^{(k)}-\beta^*}=O_p(n^{-1/2}\vee \fmagd{\hat\tau^{(k)}-\tau}_r^{\frac{r}{r+1}})\quad\forall r\in[1,\infty]. $$ \end{lemma} We now show robust oracle-like behavior for $\hat\Psi$.
\begin{theorem}\label{thm:asympnormal} Suppose \cref{asm:regularity} holds and that for $k=1,\dots,K$, $\fmagd{\hat e^{(k)}-e}_2=o_p(1)$, $\fmagd{\hat\mu^{(k)}-\mu}_2=o_p(1)$, $\fmagd{\hat e^{(k)}-e}_2\fmagd{\hat\mu^{(k)}-\mu}_2=o_p(n^{-\frac{1}{2}})$, $\fmagd{\hat\tau^{(k)}-\tau}_\infty=o_p(n^{-\frac{1}{4}})$, $\fPrb{\fmagd{\hat\mu^{(k)}}_\infty\leq B}\to1$, and $\fPrb{\bar e\leq \hat e^{(k)} \leq1-\bar e}\to1$. Then $\hat\Psi,\,\hat{\op{se}}$ in \cref{alg:est psistep} of \cref{alg:est} satisfy \begin{align*} &\hat\Psi=\frac1n\sum_{i=1}^n\phi(X,A,Y;e,\mu,\tau,\beta^*)+o_p(n^{-1/2})=\Psi+O_p(n^{-1/2}),\\ &\fPrb{\Psi\in[\hat\Psi\pm \Phi^{-1}((1+\gamma)/2)\hat{\op{se}}]}\to\gamma~~\forall \gamma. \end{align*} \end{theorem} The rate assumptions on $e$ and $\mu$ are lax: it suffices to have $o_p(n^{-1/4})$-rates on both or no rate on $\mu$ at all if $e$ is known. This parallels standard conditions in double-machine-learning ATE-estimation, achievable by a variety of machine-learning methods \citep{doubleML}. We explore the condition on $\tau$ in \cref{sec:cateest}. \subsection{Double Robustness and Double Validity}\label{sec:caterobust} \Cref{thm:asympnormal} guarantees good performance if all nuisances are estimated slowly, but still consistently. But even if nuisances are inconsistent, we perform well. First, we establish a property mirroring doubly-robust ATE-estimation \citep{RRZDoubleRobust}: even if $e$ or $\mu$ are inconsistent, we remain consistent, provided $\tau$ is consistently estimated, albeit slowly. \begin{theorem}[Double robustness]\label{thm:doublyrobust} Fix any $\tilde e,\tilde\mu$ with $\bar e\leq \tilde e\leq 1-\bar e$, $\fmagd{\tilde\mu}_\infty\leq B$. Let $r_n\to0$ be a deterministic sequence. 
Suppose \cref{asm:regularity} holds and that for $k=1,\dots,K$, $\fmagd{\hat e^{(k)}-\tilde e}_2=o_p(1)$, $\fmagd{\hat\mu^{(k)}-\tilde\mu}_2=o_p(1)$, $\fmagd{\hat\tau^{(k)}-\tau}_\infty=O_p(r^{1/2}_n)$, $\fPrb{\fmagd{\hat\mu^{(k)}}_\infty\leq B}\to1$, $\fPrb{\bar e\leq \hat e^{(k)} \leq1-\bar e}\to1$, and $$\text{either}\quad\fmagd{\hat e^{(k)}-e}_2=O_p(r_n)\quad\text{or}\quad\fmagd{\hat\mu^{(k)}-\mu}_2=O_p(r_n).$$ Then $\hat\Psi$ in \cref{alg:est psistep} of \cref{alg:est} satisfies: $$ \hat\Psi=\Psi+O_p(r_n\vee n^{-1/2}). $$ \end{theorem} \Cref{thm:doublyrobust} is particularly strong in experiments ($e$ known): we can get away with $\hat\mu^{(k)}=0$. We need only estimate CATE at $o_p(n^{-1/4})$-rates to ensure $O_p(n^{-1/2})$-consistency. It would appear we must consistently estimate CATE to have hope of estimating its CVaR. While true, we next show that \emph{even} if we mis-estimate CATE \emph{and also} one of $e,\mu$, we \emph{still} get an upper bound on CATE-CVaR (hence on ITE-CVaR). This appears to be only the second documented instance of a double-validity property, the first arising in sensitivity analysis \citep{dorn2021doubly}. We first establish the population-level bound behavior and then state the implication for estimation. \begin{lemma}\label{lemma:popdoublevalid} Fix any $\tilde\catef:\mathcal X\to\Rl$. Let $\tilde\beta=F_{\tilde\catef(X)}^{-1}(\alpha)$. Suppose \cref{asm:regularity} holds with $\tau$ replaced with $\tilde\catef$. Then: \begin{equation}\label{eq:popvalid} \cvarat\alpha(\catef(X))\leq \tilde\beta+\frac1\alpha\Efb{\findic{\tilde\catef(X)\leq\tilde\beta}(\catef(X)-\tilde\beta)}. \end{equation} \end{lemma} \begin{theorem}[Double validity]\label{thm:doublyvalid} Fix any $\tilde e,\tilde\mu,\tilde\catef$ with $\bar e\leq \tilde e\leq 1-\bar e$, $\fmagd{\tilde\mu}_\infty\leq B$, $\fmagd{\tilde\catef}_\infty\leq 2B$. Let $r_n\to0$ be a deterministic sequence.
Suppose \cref{asm:regularity} holds with $\tau$ replaced with $\tilde\catef$ and that for $k=1,\dots,K$, $\fmagd{\hat e^{(k)}-\tilde e}_2=o_p(1)$, $\fmagd{\hat\mu^{(k)}-\tilde\mu}_2=o_p(1)$, $\fmagd{\hat\tau^{(k)}-\tilde\catef}_\infty=O_p(r_n)$, $\fPrb{\fmagd{\hat\mu^{(k)}}_\infty\leq B}\to1$, $\fPrb{\bar e\leq \hat e^{(k)} \leq1-\bar e}\to1$, and $$\text{either}\quad\fmagd{\hat e^{(k)}-e}_2=O_p(r_n)\quad\text{or}\quad\fmagd{\hat\mu^{(k)}-\mu}_2=O_p(r_n).$$ Then $\hat\Psi$ in \cref{alg:est psistep} of \cref{alg:est} satisfies: $$ \hat\Psi\geq\Psi-O_p(r_n\vee n^{-1/2}). $$ \end{theorem} \Cref{thm:doublyvalid} guarantees extensive robustness and suggests a practical, blackbox-free approach in experimental settings: set $\hat\mu^{(k)}=0$ and use simple, possibly \emph{misspecified}, parametric models (\eg, linear) for CATE-estimation, and we still estimate a valid ITE-CVaR bound at fast $O_p(n^{-1/2})$-rates. \subsection{CATE-Estimation and Rates}\label{sec:cateest} \Cref{alg:est} accepts separate learners for \emph{both} $\mu$ \emph{and} $\tau$. So, while $\tau(x)=\mu(x,1)-\mu(x,0)$, we need \emph{not} have $\hat\tau^{(k)}(x)=\hat\mu^{(k)}(x,1)-\hat\mu^{(k)}(x,0)$, and in fact we should not. Recent work advocates and provides specialized methods for \emph{directly} estimating CATE \citep{causaltree,causalforest,xlearner,slearner,rlearner,drlearner}. This is important because \cref{alg:est} uses the $\mu$- and $\tau$-estimates differently and, correspondingly, our theoretical results impose different assumptions on each. The $\tau$-estimate is used for approximating the event $\indic{\tau(X)\leq\beta^*}$, which is crucial for targeting CVaR correctly. In contrast, the $\mu$-estimate is used only to estimate a weighted-average treatment effect, given the weights $\indic{\tau(X)\leq\beta^*}$, and is therefore interchangeable with propensity.
We next review different options for CATE-estimation and how these ensure the conditions of \cref{thm:asympnormal,thm:doublyrobust,thm:doublyvalid}. We emphasize that these need not be understood as an exhaustive list of learners to use: practically, the nuisance-estimation rates are high-level assumptions that generally say one may safely plug in black-box machine-learning estimators to \cref{alg:est}: no restrictions are imposed beyond rates (no metric-entropy conditions), estimators can be flexible/nonparametric in that rates can be much slower than ``parametric'' $O_p(n^{-1/2})$-rates, and results are exceedingly robust to inconsistent estimation. \subsubsection{Experimental settings}\label{sec:pseudoexp} A major issue with CATE-estimation by differencing outcome regressions is that effect signals are easily lost. CATE is generally simpler and less variable than baseline mean outcomes, $\mu(X,0),\mu(X,1)$. For example, many variables often help predict outcomes, but few modulate the treatment effect. It is therefore imperative to learn CATE directly. In experimental settings ($e$ known) we can construct a pseudo-outcome $\Delta=\frac{A-e(X)}{e(X)(1-e(X))}Y$ and, since $\catef(X)=\Eb{\Delta\mid X}$, learn CATE by regressing $\Delta$ on $X$, using any supervised-learning method. One case that theoretically ensures $\fmagd{\hat \tau^{(k)}-\tau}_\infty=o_p(n^{-1/4})$ is when $\tau(x)$ is more-than-$d/2$-smooth in $x\in\R d$ \citep[Theorem 1]{stoneglobal}. Another option is $\tau(x)$ linear with $o(\sqrt{n}/\log d)$-nonzero coefficients \citep{belloni2017program}. Note this works \emph{regardless} of $\mu$ being nice. Or, we may avoid black-box models (and cross-fitting) altogether by using simple linear regression of $\Delta$ on $X$ to obtain a valid bound per \cref{thm:doublyvalid}. To satisfy the other conditions, for \cref{thm:doublyrobust,thm:doublyvalid} we can set $\mu=0$, and for \cref{thm:asympnormal} we need only estimate $\mu$ consistently without rate.
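The pseudo-outcome identity $\catef(X)=\Eb{\Delta\mid X}$ is easy to verify by simulation. The following check is purely illustrative: the data-generating process (a hypothetical linear CATE with a known constant propensity) is our own choice.

```python
import numpy as np

# Monte-Carlo check of E[Delta | X] = tau(X) for the experimental
# pseudo-outcome Delta = (A - e) / (e (1 - e)) * Y.
# Hypothetical DGP: mu(x,0) = 2x, tau(x) = 1 + 2x, constant e = 0.3.
rng = np.random.default_rng(3)
n = 200_000
X = rng.uniform(-1.0, 1.0, n)
e = 0.3                                       # known treatment probability
A = rng.binomial(1, e, n)
Y = 2.0 * X + A * (1.0 + 2.0 * X) + rng.normal(0.0, 1.0, n)

Delta = (A - e) / (e * (1 - e)) * Y
# Since E[Delta | X] = tau(X) = 1 + 2X is linear here, ordinary least
# squares of Delta on (1, X) recovers the CATE coefficients.
Z = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(Z, Delta, rcond=None)
```

The fitted coefficients approach $(1,2)$ even though the baseline outcome regression $\mu(x,0)=2x$ is never modeled, illustrating that the pseudo-outcome targets the effect signal directly.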
We can either estimate $\mu$ directly or only estimate $\bar\mu(X)=\Eb{Y\mid X}$ and set $\hat\mu^{(k)}(X,A)=\hat{\bar\mu}^{(k)}(X)+(A-e(X))\hat\tau^{(k)}(X)$. Consistency for either is immediate from $\E Y^2<\infty$ \citep{gyorfi2002distribution}. \subsubsection{Observational settings} When $e$ is unknown, the pseudo-outcome-construction needs refinement. One option is the DR-learner \citep{drlearner}: regress $\Delta=\hat\mu(X,1)-\hat\mu(X,0)+\frac{A-\hat e(X)}{\hat e(X)(1-\hat e(X))}(Y-\hat\mu(X,A))$ on $X$, where $\hat e,\hat \mu$ are appropriately cross-fitted. Another is the R-learner \citep{rlearner}: let $\hat\tau$ minimize the average of $\prns{Y-\hat{\bar\mu}(X)-(A-\hat e(X))\hat\tau(X)}^2$, where $\hat e,\hat{\bar\mu}$ are appropriately cross-fitted. \citet[Corollary 3]{drlearner} provides rates for local-polynomial R-learners: if $e(x)$ is $s_e$-smooth in $x\in\R d$, $\bar\mu(x)$ $s_\mu$-smooth, and $\tau(x)$ more-than-$d/2$-smooth, then we obtain $o_p(n^{-1/4})$-rate pointwise error, provided $s_e\geq s_\mu,\,\frac{s_e+s_\mu}{2}>\frac{d}{8}$. To convert pointwise-error bounds to sup-norm-error bounds, we may follow the discretization approach of \citet{stoneglobal}, incurring only logarithmic factors. Or, we can simply use linear R- or DR-learners and get a valid bound per \cref{thm:doublyvalid}.
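A sketch of the cross-fitted DR-learner pseudo-outcome may be useful (again our illustration, not the paper's replication code): here $\hat e$ is a hand-rolled Newton logistic regression and $\hat\mu$ is a deliberately simple linear model, so when the outcome regression is misspecified but the propensity model is consistent, regressing $\Delta$ on $X$ still targets CATE.

```python
import numpy as np

def fit_logistic(Z, A, steps=25):
    # Newton-Raphson logistic regression (illustrative propensity model).
    w = np.zeros(Z.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Z @ w))
        grad = Z.T @ (A - p)
        hess = (Z * (p * (1 - p))[:, None]).T @ Z
        w = w + np.linalg.solve(hess, grad)
    return w

def dr_pseudo_outcome(X, A, Y, K=2):
    # Cross-fitted DR-learner pseudo-outcome: nuisances are fit on the
    # complementary folds, then Delta is formed on the held-out fold.
    n = len(Y)
    Zb = np.column_stack([np.ones(n), X])
    M = np.column_stack([np.ones(n), X, A, A * X])   # simple linear outcome model
    Delta = np.empty(n)
    for k in range(K):
        out = np.arange(n) % K == k
        ins = ~out
        w = fit_logistic(Zb[ins], A[ins])
        e_hat = np.clip(1.0 / (1.0 + np.exp(-Zb[out] @ w)), 0.05, 0.95)
        b, *_ = np.linalg.lstsq(M[ins], Y[ins], rcond=None)
        m = out.sum()
        mu1 = np.column_stack([np.ones(m), X[out], np.ones(m), X[out]]) @ b
        mu0 = np.column_stack([np.ones(m), X[out], np.zeros(m), np.zeros(m)]) @ b
        mu_a = np.where(A[out] == 1, mu1, mu0)
        Delta[out] = (mu1 - mu0
                      + (A[out] - e_hat) / (e_hat * (1 - e_hat)) * (Y[out] - mu_a))
    return Delta
```

Regressing `Delta` on $X$ then learns CATE; with a consistent $\hat e$, a misspecified linear $\hat\mu$ does not bias this regression, in line with the robustness discussion above.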
\section{Case Study}\label{sec:casestudy} \begin{figure}[t!]\centering% \begin{minipage}{0.333\textwidth} \centering \includegraphics[width=\linewidth]{job_cvar} \vspace{-1.5em}\caption{$\cvarat\alpha(\catef(X))$}\label{fig:job_cvar} \end{minipage}\hfill% \begin{minipage}{0.333\textwidth} \centering \includegraphics[width=\linewidth]{job_cvarvsate} \vspace{-1.5em}\caption{$\cvarat\alpha(\catef(X))-\bar\tau$}\label{fig:job_cvarvsate} \end{minipage}\hfill% \begin{minipage}{0.333\textwidth} \centering \includegraphics[width=\linewidth]{job_cvar_bad} \vspace{-1.5em}\caption{$\cvarat\alpha(\tau_1(X_1))$}\label{fig:job_cvar_bad} \end{minipage}\\[0.75\baselineskip]% \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=\linewidth]{job_bounded_bounds} \vspace{-1.5em}\caption{Bounds based on\break residual-heterogeneity range}\label{fig:job_bounded_bounds} \end{minipage}\hfill% \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=\linewidth]{job_condvar_bounds} \vspace{-1.5em}\caption{Bounds based on\break residual-heterogeneity variance}\label{fig:job_condvar_bounds} \end{minipage} \end{figure} We now demonstrate our bounds and inference.\footnote{Replication code is available at \url{https://github.com/CausalML/TreatmentEffectRisk}.} While we consider a program-evaluation example, we believe our results are also particularly relevant to A/B testing on online platforms, where, after testing, product innovations are usually either scrapped/reworked or broadly rolled out, and where ATEs are often small, creating an opportunity for many users to be negatively impacted despite positive average effects. Little such data is public, however. \subsection{Background and Setup} \citet{behaghel2014private} analyze a large-scale randomized experiment comparing assistance programs offered to French unemployed individuals.
They compare three arms: individuals in the ``control'' arm receive the standard services of the Public Employment Services, those in ``public'' receive an intensive counseling program run by a public agency, and those in ``private'' a similar program run by private agencies. We consider a hypothetical scenario where the private-run counseling program ($A=0$) is currently being offered to the unemployed and the change to a public-run program ($A=1$) is being considered.\footnote{Some individuals assigned to the additional counseling refused it. We nonetheless restrict our attention to intent-to-treat interventions, considering hypothetically making available either the public-run or private-run counseling to unemployed individuals, who may decline it.} We take reemployment within six months as our (binary) outcome. The ATE is $1.22$ percentage points (90\%-CI $[-0.35, 2.8]$), a $4.9\%$ increase in reemployment. This suggests a positive/neutral effect, so a policymaker might hypothetically consider this an acceptable policy change, \eg, if the public-run program provided cost savings.\footnote{\citet[section IV]{behaghel2014private} discuss why public-run programs fare better.} To apply our methodology, we consider all pre-treatment covariates in table~2 of \citet{behaghel2014private}, except we treat as numeric (rather than dichotomize) age, number of children, years of experience, salary target, assignment timing, and number of unemployment spells. Other variables quantify education, employment level and type, gender, marital status, national origin, region, unemployment reason, and long-term-unemployment risk. The propensity score is constant. As recommended in \cref{sec:pseudoexp}, we fit the CATE using a pseudo-outcome linear regression. We estimate $\mu$ using cross-fitted gradient-boosting machines. \subsection{Upper bounds} \Cref{fig:job_cvar} presents inference on CATE-CVaR using \cref{alg:est} for $\alpha\in\{0.01,0.02,\dots,1\}$.
The line represents our point estimate, after rearrangement as recommended in \cref{remark:rearrangement},\footnote{We present the figure without rearrangement in \cref{apx:casestudy}.} and the shaded region represents point-wise 90\%-confidence intervals. Note that uncertainty grows for smaller $\alpha$. We see that the ATE estimate (right-most point) is positive with an interval containing zero. We find, however, that some 56\%-sized $X$-defined subpopulation has a negative effect at 90\% confidence.\footnote{Since the outcome is binary, the \emph{largest} fraction that can have a negative effect is $(50\times(1-\bar\tau))\%$, so either $\bar\tau<0$ or at most half may be negatively affected. The ATE interval indeed contains zero with confidence only 90\%.} This strongly suggests that the change, if enacted, could materially negatively impact a large portion of the population, despite the positive/neutral ATE. Thus, considering treatment effect \emph{risk} provides a crucial metric not reflected in the ATE. This risk is also \emph{not} reflected in DTEs: the binary potential-outcome distributions are \emph{fully} specified by just $\E[Y(0)],\,\E[Y(1)]$.\footnote{In particular, the $\alpha$-quantile DTE is uselessly \emph{zero} for \emph{all} $\alpha\in[0,1]\backslash\{1-\E[Y(0)],1-\E[Y(1)]\}$ and the $\alpha$-CVaR DTE is $\frac1\alpha(\E[Y(1)]-1+\alpha)_+-\frac1\alpha(\E[Y(0)]-1+\alpha)_+$, which is not even monotonic. For illustration we plot it in \cref{apx:casestudy}.} In \cref{fig:job_cvarvsate} we focus on comparing CATE-CVaR to the ATE following \cref{remark:comparing}. The only difference from \cref{fig:job_cvar} is a slight vertical shift and that the confidence intervals (correctly) shrink to a point as $\alpha\to1$, enabling more confident conclusions comparing subpopulations to the population.
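For intuition, the lower-tail average behind the CVaR point estimates, and the correlation decomposition used later for the variance-based lower bounds, can be sketched as follows (a simplified plug-in illustration, not the debiased estimator of \cref{alg:est}; we assume \cref{eq:vardecomp} is the standard correlation decomposition of the conditional effect variance, and the function names are ours):

```python
import math

def cvar_lower_tail(values, alpha):
    """Empirical lower-tail alpha-CVaR: the average of the worst
    (smallest) alpha-fraction of CATE values, with fractional weight
    on the boundary observation.  At alpha = 1 this is the plain mean,
    matching the right-most (ATE) point of the curve."""
    v = sorted(values)
    k = alpha * len(v)                # possibly fractional tail size
    full = int(k)                     # observations counted with weight 1
    tail = sum(v[:full])
    if full < len(v):
        tail += (k - full) * v[full]  # fractional boundary weight
    return tail / k

def effect_variance_bound(var1, var0, rho):
    """Conditional treatment-effect variance under an assumed cross-arm
    correlation rho in [-1, 1]:
        var1 + var0 - 2 * rho * sqrt(var1 * var0).
    rho = -1 gives the assumption-free (largest) value."""
    return var1 + var0 - 2.0 * rho * math.sqrt(var1 * var0)
```

The curve over $\alpha$ produced by the first function would then be rearranged per \cref{remark:rearrangement} (omitted here).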
In \cref{fig:job_cvar_bad} we consider CATE-CVaR when we capture less heterogeneity, using only age, high-school dropout, African national origin, and Paris-region residence as covariates ($X_1$). This detects no significant risk. \subsection{Lower bounds} While the upper bounds show that a significant subpopulation can be negatively affected, being only bounds, they leave open whether that subpopulation can be harmed even more or whether an even larger subpopulation can be harmed. Lower bounds help us understand how much greater the risk might be. In \cref{fig:job_bounded_bounds} we consider our lower bounds (vs the ATE) when limiting the residual-heterogeneity range, as given by Theorems \ref{thm:cvarlb2} (two-sided range) and \ref{thm:cvarlb1} (one-sided range). Since it may be hard to justify and calibrate a limited range, in \cref{fig:job_condvar_bounds} we consider the lower bounds given by \cref{thm:cvarlb3,cor:cvarlb4}, obtained by limiting residual-heterogeneity variance. For the former, we fit $\op{Var}(Y\mid A,X)$ using gradient-boosting machines and construct $\bar\sigma^2(X)$ per \cref{eq:vardecomp} by varying constant values of $\rho(X)=\rho\in[-1,1]$. Recall that $\rho=-1$ always yields an assumption-free bound. We use the same model to estimate the right-hand side of \cref{eq:cvarlb4b}. We compute the cross-validated root-mean-squared prediction error to estimate the right-hand side of \cref{eq:cvarlb4c}. We observe that assuming perfectly-conditionally-correlated potential outcomes yields a lower bound very close to the upper bound. The bounds of \cref{cor:cvarlb4} appear loose; indeed they are not tight. \section{Concluding Remarks} We study the average effect on those worst-affected by a proposed change as a measure of its \emph{risk}, how to tightly bound it given covariates that explain some heterogeneity, and how to make robust inferences on these bounds even when this heterogeneity is roughly estimated.
This provides very practical tools for assessing policy and product changes beyond their ATE and DTEs. We can safely use flexible yet biased/slow-to-converge machine learning, or we can avoid black-box models and easily obtain good bounds by considering only linear projections of heterogeneity. In the hypothetical case study, this detected that what appeared to be a positive/neutral change could actually very negatively impact a substantial subpopulation. We focused on experimental (or unconfounded observational) settings without interference, where risk is already unidentifiable \emph{despite} randomization. A future direction is to consider the impact of interference \citep{johari2022experimental,athey2018exact} or confounding \citep{tan2006}, where even ATEs are unidentifiable and fairness is harder to assess \citep{kilbertus2020sensitivity,jung2020bayesian,kallus2018residual}. Interestingly, for partial identification under the model of \citet{tan2006}, the $X$-conditional outcome-CVaR plays a crucial role \citep{dorn2021doubly}. Another direction may be to consider other risk measures, such as those given by Kullback-Leibler ambiguity sets \citep{ahmadi2012entropic}. Per \cref{footnote:coherentrisk}, the tight upper bound is still the risk measure applied to the CATE, but it remains to compute lower bounds and design robust inference methods. \begin{acks} I thank Netflix's Dar\'io Garc\'ia Garc\'ia, Molly Jackman, Danielle Rosenberg, William Nelson, and Martin Tingley for very helpful conversations. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section*{Introduction} Purity for modules over general rings was defined in \cite{Coh959} and many relative versions of purity have been considered since then. We consider those purities which, like the original one, are determined by classes $S$ of finitely presented modules. We present a number of characterisations of $S$-pure-exact sequences and of the associated classes of relatively projective and relatively injective modules. We also show the relation between the purity for left modules which is determined by $S$ and the purity for right modules determined by $S$; this is said most directly in terms of the matrices presenting the modules in $S$. Al-Kawarit and Cauchot \cite{AlkaCo11} gave conditions under which purities determined by matrices of certain sizes are different. We obtain related results over semiperfect rings and we also consider this question in detail over finite-dimensional algebras. Over finite-dimensional algebras we give a description of the $S$-pure-injective modules in terms of the type-definable category generated by $\tau S$ and, in the case of tame hereditary algebras, using results from \cite{Pre09} and \cite{Rin98}, we give a complete description of these modules. Finally we give a number of characterisations of rings whose indecomposable modules are $S$-pure-injective. All rings in this paper are associative with unity, and all modules are unital. We write $_{R}M$ $(M_{R})$ to indicate a left (right) $R$-module, and we use $R$-Mod to denote the category of all left $R$-modules. The endomorphism ring of a module $M$ is denoted by {\rm End}$_{R}(M)$. We use Add$(T)$ $($resp., add$(T))$ to denote the class of all modules that are direct summands of direct sums $($resp.
finite direct sums$)$ of modules from $T.$ Also, we use Prod$(T)$ to denote the class of all modules that are direct summands of direct products of modules from $T.$ We use the notation $\underset{n\times m}{M}(R)$ for the set of all $n\times m$ matrices over $R$. All matrices in this paper are matrices with finitely many rows and finitely many columns and all classes of modules are closed under isomorphisms. A module is said to be finitely presented if it is the factor module of a free module of rank $n$ modulo an $m$-generated submodule, for some $n,m\in\mathbb{Z}^{+}.$ Let $S$ be a class of left $R$-modules. Following Warfield \cite{War69(b)}, an exact sequence $0\rightarrow A\overset{f}{\rightarrow}B\overset{g}{\rightarrow}C\rightarrow0$ of left $R$-modules is said to be $S$-pure if the sequence $0\rightarrow \textup{Hom}_{R}\left(M,A\right)\rightarrow \textup{Hom}_{R}\left(M,B\right)\rightarrow\textup{Hom}_{R}\left(M,C\right)\rightarrow0$ is exact, for all $M\in S;$ in this case $f$ is said to be an $S$-pure monomorphism and $g$ is said to be an $S$-pure epimorphism. Note that $S$-pure=$S\cup \{_RR\}$-pure. If $S$=$R$-Mod then a short exact sequence of modules is $S$-pure if and only if it is pure. A module $M$ is said to be $S$-pure-injective $($resp. $S$-pure-projective$)$ if $M$ is injective $($resp. projective$)$ relative to every $S$-pure exact sequence of modules. Clearly the class of $S$-pure-injective $($resp. $S$-pure-projective$)$ modules is closed under direct summands and direct products $($resp. direct sums$).$ This paper contains five sections. In section 1, many characterizations and properties of $S$-purity, $S$-pure-injectivity and $S$-pure-projectivity are given. For example, we prove that, if $S$ is a class of finitely presented modules, then a module $M$ is $S$-pure-projective if and only if it is projective relative to every $S$-pure exact sequence $0\rightarrow K\rightarrow E\rightarrow F\rightarrow0$ where $E$ is $S$-pure-injective.
Dually, $M$ is $S$-pure-injective if and only if $M$ is injective relative to every $S$-pure exact sequence $0\rightarrow K\rightarrow P\rightarrow L\rightarrow0$ where $P$ is $S$-pure-projective. In \cite{PuPrRo99} purity and $S$-purity are compared. In particular, it is proved that $S$-purity and purity are equivalent if and only if $S$-pure-injectivity and pure-injectivity are equivalent if and only if $R$-mod$\subseteq$add$(S\cup \{_RR\})$ \cite[Theorem 2.5, p.2136]{PuPrRo99}. In section 2 of this paper we compare $S$-purity and $T$-purity for arbitrary classes $S$ and $T$ of finitely presented left $R$-modules. For example, in Theorem~\ref{thm:2.1(2.15),(2.17),cor(2.18),cor(1.6),2report,p.14-16,p.5} we prove that if $S$ and $T$ are classes of finitely presented left $R$-modules, then the following statements are equivalent: $\left(1\right)$ every $T$-pure short exact sequence of left $R$-modules is $S$-pure; $\left(2\right)$ $S\subseteq \textup{add}(T\cup\{_{R}R\});$ $\left(3\right)$ $D^*_{\mathcal G}\subseteq \textup{Prod}((D_{\mathcal H}\cup\{R_{R}\})^*)$, where $^*$ denotes the dual of a module; $\left(4\right)$ the corresponding assertions for right modules. Also, in Proposition~\ref{thm:2.7(3.20),blue,p.5} we prove that if each indecomposable direct summand of a module in $T$ has a local endomorphism ring and each module in $T$ is a direct sum of indecomposable modules, then every $S$-pure short exact sequence of modules is $T$-pure if and only if each indecomposable direct summand of a module in $T$ is a direct summand of a module in $S\cup\{_{R}R\}.$ In section 3, we study $(n,m)$-purity over semiperfect rings.
In Theorem~\ref{thm:3.13(3.13),2report,p.20} we give a generalization of \cite[Theorem 3.5(1), p.3888]{AlkaCo11} in which we prove that if $(n,m)$ and $(r,s)$ are any two pairs of positive integers such that $n\neq r$ and if one of the following two conditions is satisfied: $\left(a\right)$ $R$ is semiperfect and there exists an ideal $I$ of $R$ with $\textup{gen}(I_R)=\textup{max}\{n,r\}$ and $I\subseteq e_{j}R$ for some local idempotent $e_{j}$; $\left(b\right)$ $R$ is Krull-Schmidt and there exists a right ideal $I$ of $R$ with $\textup{gen}(I)=\textup{max}\{n,r\}$ and $I\subseteq e_{j}R$ for some local idempotent $e_{j}$, then: $\left(1\right)$ $(m,n)$-purity and $(s,r)$-purity of short exact sequences of left $R$-modules are not equivalent; $\left(2\right)$ $(n,m)$-purity and $(r,s)$-purity of short exact sequences of right $R$-modules are not equivalent. In section 4, we study purity over finite-dimensional algebras. Firstly, we compare purities over the Kronecker algebra over an algebraically closed field $k$. In Proposition~\ref{prop:4.4(new)} we prove that if $R$ is a finite-dimensional algebra over a field $k$ and it is not of finite representation type, then for every $r\in \mathbb{Z}^{+}$, there is $n>r$ such that $(\aleph_{0},n)$-purity$\neq (\aleph_{0},r)$-purity for left $R$-modules. Let $\mathcal H$ be a set of matrices over a tame hereditary finite-dimensional algebra $R$ over a field $k$. Conditions under which the generic module is $L_{\mathcal H}$-pure-injective are given in Proposition~\ref{prop:4.10(prop 4.49, p.44)}. Finally, we give a complete description of the full support topology closure of any class of indecomposable finite-dimensional modules over a tame hereditary finite-dimensional algebra $R$ over a field $k$. In the last section we give a condition on a left $R$-module $M$ such that every $S$-pure submodule of $M$ is a direct summand and prove that such a module is a direct sum of indecomposable submodules.
As a corollary of this result we give a characterization of rings over which every indecomposable left $R$-module is $S$-pure-projective. \section{Purities} \subparagraph*{\textmd{Let $n,m\in \mathbb Z^{+}.$ An $R$-module $M$ is said to be $(n,m)$-presented if it is the factor module of the module $R^n$ modulo an $m$-generated submodule. Let $H$ be an $n\times m$ matrix over $R$. Then right $($resp. left$)$ multiplication by $H$ determines a homomorphism $\rho_{H}:{_RR}^{n}\rightarrow{_RR}^{m}$ $($resp. $\lambda_{H}:R_{R}^{m}\rightarrow R_{R}^{n}).$ Then $H$ determines the $(m,n)$-presented left $R$-module $R^{m}/$im$(\rho_H)$; we will denote it by $L_{H}$. Also, $H$ determines the $(n,m)$-presented right $R$-module $R^{n}/$im$(\lambda_H);$ we will denote it by $D_{H}$. Let ${ \mathcal H }$ be a set of matrices over a ring $R$; we will denote by $L_{ \mathcal H }$ the class of left $R$-modules $\{L_{H}\mid H\in { \mathcal H } \}$ and by $D_{ \mathcal H }$ the class of right $R$-modules $\{D_{H}\mid H\in { \mathcal H } \}.$ In view of Proposition~\ref{cor:1.18(2.5),2report,p.11} below we may, where convenient, interpret $L_{\emptyset}$ as $\{_RR\}$ and $D_{\emptyset}$ as $\{R_R\}$.}} \subparagraph*{\textmd{ The following theorem collects together and extends results from the literature $($in particular see \cite{Fac98} and \cite{Wis91}$)$. A proof can be found in the author's thesis \cite{Meh}.}} \begin{thm}\label{thm:(1.3),2report,p3} Let R be an algebra over a commutative ring K and let E be an injective cogenerator for K-modules. Let $S$ be a class of finitely presented left $R$-modules, let ${ \mathcal H }$ be a set of matrices over $R$ such that $L_{ \mathcal H }$-purity=S-purity and let $\Sigma:$$0\rightarrow A\overset{f}{\rightarrow}B\overset{g}{\rightarrow}C\rightarrow0$ be an exact sequence of left $R$-modules. Then the following statements are equivalent. $\left(1\right)$ $\Sigma$ is an $S$-pure exact sequence of left R-modules.
$\left(2\right)$ For any two positive integers n,m, for any $n\times m$ matrix H in $ \mathcal H$ and for all $\bar{a}\in A^{n},$ if the matrix equation H$\bar{x}$=f$\bar{a}$ has a solution in $B^{m}$ then the equation H$\bar{x}$=$\bar{a}$ has a solution in $A^{m}.$ $\left(3\right)$ The sequence $0\rightarrow M\otimes_{R}A\rightarrow M\otimes_{R}B\rightarrow M\otimes_{R}C\rightarrow0$ is exact, for all $M\in D_ \mathcal H.$ $\left(4\right)$ For any two positive integers n,m, for any $n\times m$ matrix H in $ \mathcal H$ and for all $\bar{c}\in C^{m},$ if $H\bar{c}$=$0,$ then there is $\bar{b}\in B^{m}$ with $g(\bar{b})$=$\bar{c}$ and $H\bar{b}$=$0.$ $\left(5\right)$ For any two positive integers n,m and for any $n\times m$ matrix H in $ \mathcal H,$ for every commutative diagram of left R-modules \[\begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=3em, column sep=3em] { \ &R^n&R^m \\ 0 &A & B \\ }; \path[->] (m-1-2) edge node[auto] {$\scriptstyle\ ^{\rho_{H}}$} (m-1-3); \path[->] (m-1-2) edge node[left] {$\scriptstyle\alpha$} (m-2-2) (m-2-1) edge node[auto] {$\scriptstyle$} (m-2-2); \path[->] (m-2-2) edge node[auto] {$\scriptstyle\ f$} (m-2-3); \path[->] (m-1-3) edge node[left] {$\scriptstyle$} (m-2-3); \end{tikzpicture}\] there exists a homomorphism $\beta:R^{m}\rightarrow A$ such that $\alpha$=$\beta \rho_H.$ $\left(6\right)$ The dual exact sequence of right R-modules $0\rightarrow C^{*}\rightarrow B^{*}\rightarrow A^{*}\rightarrow0$ is $D_{\mathcal H}$-pure, where $M^{*}$={\rm Hom}$_{K}(M,E).$\end{thm} We retain the notation $M^{*}$ for the dual of a module with respect to $_{K}E$ as above. Let $T$ be a class of left $R$-modules. Note that if $S\subseteq T\subseteq R$-Mod then every $T$-pure exact sequence of left $R$-modules is $S$-pure, so $S$-pure-injective implies $T$-pure-injective and $S$-pure-projective implies $T$-pure-projective. 
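To illustrate the matrix formalism and condition $\left(2\right)$ of Theorem~\ref{thm:(1.3),2report,p3} with a standard example $($not drawn from the results of this paper$)$: over $R=\mathbb{Z}$ take the $1\times1$ matrix $H=(2)$, so that $L_{H}=\mathbb{Z}/2\mathbb{Z}=D_{H}$, and consider the exact sequence $0\rightarrow\mathbb{Z}\overset{f}{\rightarrow}\mathbb{Z}\rightarrow\mathbb{Z}/2\mathbb{Z}\rightarrow0$ in which $f$ is multiplication by $2$. For $a=1$ the equation $2x=f(1)=2$ has the solution $x=1$ in $B=\mathbb{Z}$, while $2x=1$ has no solution in $A=\mathbb{Z}$, so by condition $\left(2\right)$ the sequence is not $L_{H}$-pure; equivalently, by condition $\left(3\right)$, tensoring with $D_{H}=\mathbb{Z}/2\mathbb{Z}$ does not preserve its exactness.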
\begin{prop}\label{cor:1.18(2.5),2report,p.11}(as \cite[Proposition 1, p.700]{War69(a)}) Let $S$ be a class of finitely presented left R-modules and let M be a left R-module. Then: $\left(1\right)$ There exists an $S$-pure exact sequence of left R-modules $0\rightarrow K\rightarrow F\rightarrow M\rightarrow0$ with F being a direct sum of copies of modules in $S\cup\{_{R}R\}.$ $\left(2\right)$ \textup{Add}$(S\cup\{_{R}R\})$ is the class of $S$-pure-projective left R-modules. \end{prop} \begin{cor}\label{cor:1.19(2.6),2report,p.11} Let $S$ be a class of finitely presented left R-modules and let $ \mathcal H$ be a set of matrices over R such that $L_{ \mathcal H }$-purity=S-purity. Then for any left R-module N there is an $S$-pure monomorphism $\alpha:N\rightarrow F^{*}$ such that F is a direct sum of copies of modules in $D_{\mathcal H}\cup\{R_{R}\}.$ In particular $($see Theorem~\ref{thm:28(4.27),black,p.22}$)$, $F^{*}$ is $S$-pure-injective.\end{cor} \begin{proof} Let $N$ be any left $R$-module. By the right hand version of Proposition~\ref{cor:1.18(2.5),2report,p.11}, there is a $D_{\mathcal H}$-pure exact sequence of right $R$-modules $0\rightarrow G\overset{f}{\rightarrow}F\overset{g}{\rightarrow}N^{*}\rightarrow0$ where $F$ is a direct sum of copies of modules in $D_{\mathcal H}\cup\{R_{R}\}.$ By the right hand version of Theorem~\ref{thm:(1.3),2report,p3}, the dual exact sequence of left $R$-modules $0\rightarrow N^{**}\overset{g^{*}}{\rightarrow}F^{*}\overset{f^{*}}{\rightarrow}G^{*}\rightarrow0$ is $L_{\mathcal H}$-pure. The canonical monomorphism $\varphi_{N}:N\rightarrow N^{**}$ is pure $($see, e.g., \cite[Corollary 1.30, p.17]{Fac98}$)$ and hence it is $L_{\mathcal H}$-pure. Since a composition of $L_{\mathcal H}$-pure monomorphisms clearly is $L_{\mathcal H}$-pure, $g^{*}\varphi_{N}:N\rightarrow F^{*}$ is an $L_{\mathcal H}$-pure monomorphism.\end{proof} Let $S$ be a class of left $($or right$)$ $R$-modules.
We use $S^*$ to denote the class $\{M^*\mid M\in S\}$. \begin{thm}\label{thm:28(4.27),black,p.22}(as \cite[Theorem 1]{War69(b)}) Let S be a class of finitely presented left R-modules and let ${ \mathcal H }$ be a set of matrices over R such that $L_{ \mathcal H }$-purity=S-purity, then \textup{Prod}$((D_{\mathcal H}$ $\cup\{R_{R}\})^{*})$ is the class of $S$-pure-injective left R-modules. \end{thm} \begin{proof} Let $M$ be any $S$-pure-injective left $R$-module. By Corollary~\ref{cor:1.19(2.6),2report,p.11}, there exists an $S$-pure, hence split, monomorphism $\alpha:M\rightarrow F^{*}$ where $F=\underset{i\in I}{\bigoplus}F_{i}$ with $F_{i}\in D_{\mathcal H}\cup\{R_{R}\}.$ Since $F^{*}=(\underset{i\in I}{\bigoplus}F_{i})^{*}\simeq \underset{i\in I}{\prod}F_{i}^{*}$ it follows that $M\in \textup{Prod}((D_{\mathcal H}\cup\{R_{R}\})^{*}).$ Conversely, let $H\in\mathcal H$ and let $\Sigma:$ $0\rightarrow A\rightarrow B\rightarrow C\rightarrow0$ be any $L_H$-pure exact sequence of left $R$-modules. By Theorem~\ref{thm:(1.3),2report,p3}, the sequence $D_{H}\otimes_{R}\Sigma:$$0\rightarrow D_{H}\otimes_{R}A\rightarrow D_{H}\otimes_{R}B\rightarrow D_{H}\otimes_{R}C\rightarrow0$ is exact. Since $E$ is an injective $K$-module, the sequence $0\rightarrow {\rm Hom}_{K}(D_{H}\otimes_{R}C,E)\rightarrow {\rm Hom}_{K}(D_{H}\otimes_{R}B,E)\rightarrow {\rm Hom}_{K}(D_{H}\otimes_{R}A,E)\rightarrow0$ is exact. This is isomorphic to the sequence $0\rightarrow {\rm Hom}_{R}(C,{\rm Hom}_{K}(D_{H},E))\rightarrow {\rm Hom}_{R}(B,{\rm Hom}_{K}(D_{H},E))\rightarrow {\rm Hom}_{R}(A,{\rm Hom}_{K}(D_{H},E))\rightarrow0$. That is, the sequence $0\rightarrow {\rm Hom}_{R}(C,D_{H}^{*})\rightarrow {\rm Hom}_{R}(B,D_{H}^{*})\rightarrow {\rm Hom}_{R}(A,D_{H}^{*})\rightarrow0$ is exact. Therefore, $D^{*}_{H}$ is $L_H$-pure-injective. By, for instance, \cite[Theorem 3.2.9, p.77]{EnJe00}, $R_R^*$ is injective and thus each module in $(D_{\mathcal H}\cup\{R_{R}\})^{*}$ is $S$-pure-injective. 
It follows that every module in \textup{Prod}$((D_{\mathcal H}$ $\cup\{R_{R}\})^{*})$ is $S$-pure-injective. \end{proof} \begin{prop}\label{prop:26(2.13),2report,p.13} Let S be a class of finitely presented left R-modules, let ${ \mathcal H }$ be a set of matrices over R such that $L_{ \mathcal H }$-purity=S-purity and let $\Sigma:$ $0\rightarrow A\rightarrow B\rightarrow C\rightarrow0$ be any exact sequence of left R-modules. Then the following statements are equivalent: $\left(1\right)$ $\Sigma$ is $S$-pure. $\left(2\right)$ Every $S$-pure-injective left R-module is injective relative to $\Sigma.$ $\left(3\right)$ $D^{*}_{H}$ is injective relative to $\Sigma,$ for all $H\in{ \mathcal H }.$ $\left(4\right)$ Every $S$-pure-projective left R-module is projective relative to $\Sigma.$ \end{prop} \begin{proof} $\left(1\right)\Rightarrow\left(2\right)$ and $\left(1\right)\Rightarrow\left(4\right)$ are obvious and $\left(2\right)\Rightarrow\left(3\right)$ is immediate from Theorem~\ref{thm:28(4.27),black,p.22}. $\left(3\right)\Rightarrow\left(1\right)$ Let $H\in\mathcal H.$ By hypothesis, the sequence\[ 0\rightarrow \textup{Hom}_{R}(C,\textup{Hom}_{K}(D_{H},E))\rightarrow \textup{Hom}_{R}(B,\textup{Hom}_{K}(D_{H},E))\rightarrow \textup{Hom}_{R}(A,\textup{Hom}_{K}(D_{H},E))\rightarrow0,\] equivalently, the sequence\[ 0\rightarrow \textup{Hom}_{K}(D_{H}\otimes_{R}C,E)\rightarrow\textup{Hom}_{K}(D_{H}\otimes_{R}B,E)\rightarrow \textup{Hom}_{K}(D_{H}\otimes_{R}A,E)\rightarrow0\] is exact. Since $E$ is an injective cogenerator for $K$-modules, it follows $($see \cite[Lemma 3.2.8, p.77]{EnJe00}$)$ that the sequence $0\rightarrow D_{H}\otimes_{R}A\rightarrow D_{H}\otimes_{R}B\rightarrow D_{H}\otimes_{R}C\rightarrow0$ is exact. Thus $\Sigma$ is $S$-pure. 
$\left(4\right)\Rightarrow\left(1\right)$ This is immediate from Proposition~\ref{cor:1.18(2.5),2report,p.11}, and the definition of $S$-pure exact sequence.\end{proof} \begin{thm}\label{thm:20(2.7),2report,p.11} Let $S$ be a class of finitely presented left $R$-modules. Then for a left R-module M: $\left(1\right)$ M is S-pure-projective if and only if it is projective relative to every $S$-pure exact sequence $0\rightarrow K\rightarrow E\rightarrow F\rightarrow0$ of left R-modules where E is $S$-pure-injective; $\left(2\right)$ M is S-pure-injective if and only if M is injective relative to every $S$-pure exact sequence $0\rightarrow K\rightarrow P\rightarrow L\rightarrow0$ of left R-modules where P is $S$-pure-projective.\end{thm} \begin{proof} $\left(1\right)$ $(\Rightarrow)$ is obvious. $\left(\Leftarrow\right)$ Let $0\rightarrow A\overset{\mu}{\rightarrow}B\overset{\nu}{\rightarrow}C\rightarrow0$ be any $S$-pure exact sequence of left $R$-modules. By Corollary~\ref{cor:1.19(2.6),2report,p.11} and Theorem~\ref{thm:28(4.27),black,p.22}, there is an $S$-pure exact sequence $0\rightarrow B\overset{\lambda}{\rightarrow}P\overset{\rho}{\rightarrow}N\rightarrow0$ of left $R$-modules where $P$ is $S$-pure-injective. 
We have the following pushout diagram: \[\begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=2em, column sep=2em] { & & 0 & 0& \\ 0& A & B & C&0 \\ 0& A & P & D&0 \\ & & N & N \\& & 0 & 0& \\ }; { [start chain] \; \chainin (m-2-2); { [start branch=B] \chainin (m-3-2) [join={node[right] {$\scriptstyle I_A$}}];} } { [start chain] \; \chainin (m-1-4); { [start branch=B] \chainin (m-2-4) [join={node[right] {$$}}];} ; \chainin (m-2-4); { [start branch=B] \chainin (m-3-4) [join={node[right] {$\scriptstyle \varphi$}}];} ; \chainin (m-3-4); { [start branch=B] \chainin (m-4-4) [join={node[right] {$\scriptstyle \delta$}}];} ; \chainin (m-4-4); { [start branch=B] \chainin (m-5-4) [join={node[right] {$$}}];} } { [start chain] \; \chainin (m-1-3); { [start branch=B] \chainin (m-2-3) [join={node[right] {$$}}];} ; \chainin (m-2-3); { [start branch=B] \chainin (m-3-3) [join={node[right] {$\scriptstyle \lambda$}}];} ; \chainin (m-3-3); { [start branch=B] \chainin (m-4-3) [join={node[right] {$\scriptstyle \rho$}}];} ; \chainin (m-4-3); { [start branch=B] \chainin (m-5-3) [join={node[right] {$$}}];} } { [start chain] \chainin (m-2-1); \chainin (m-2-2) [join={node[above] {}}]; \chainin (m-2-3) [join={node[above] {$\scriptstyle\mu$}}]; \chainin (m-2-4) [join={node[above] {$\scriptstyle\nu$ }}]; \chainin (m-2-5) [join={node[above] {}}]; } { [start chain] \chainin (m-3-1); \chainin (m-3-2) [join={node[above] {}}]; \chainin (m-3-3) [join={node[above] {$\scriptstyle\alpha$}}]; \chainin (m-3-4) [join={node[above] {$\scriptstyle\beta$ }}]; \chainin (m-3-5) [join={node[above] {}}]; } { [start chain] \chainin (m-4-3); \chainin (m-4-4) [join={node[above] {$\scriptstyle I_N $ }}]; } \end{tikzpicture}\] Since $\mu$ and $\lambda$ are $S$-pure $R$-monomorphisms, so is $\lambda\mu$. Since $\alpha$=$\lambda\mu,$ the exact sequence $0\rightarrow A\overset{\alpha}{\rightarrow}P\overset{\beta}{\rightarrow}D\rightarrow0$ is $S$-pure.
Let $\psi\in {\rm Hom}_{R}(M,C).$ By hypothesis, there is $\gamma\in {\rm Hom}_{R}(M,P)$ such that $\beta\gamma$=$\varphi\psi.$ We have $\rho\gamma$=$\delta\beta\gamma$=$\delta\varphi\psi =0$ so im$(\gamma)\subseteq$ ker$(\rho)$=im$(\lambda)$ and hence $\gamma$=$\lambda\gamma^{^{\prime}}$ for some $\gamma^{^{\prime}}\in {\rm Hom}_{R}(M,B).$ Then we have $\varphi\nu\gamma^{^{\prime}}$=$\beta\lambda\gamma^{^{\prime}}$=$\beta\gamma$=$\varphi\psi.$ Since $\varphi$ is a monomorphism, $\nu\gamma^{^{\prime}}$=$\psi.$ Hence $M$ is $S$-pure-projective. $\left(2\right)$ The proof is dual to that of $\left(1\right).$ \end{proof} \begin{cor}\label{cor:22(3.25),blue,p.10} Let $S$ be a class of finitely presented left $R$-modules. Then the following statements are equivalent: $\left(1\right)$ For every $S$-pure exact sequence $0\rightarrow N\rightarrow M\rightarrow K\rightarrow0$ of left R-modules, if M is $S$-pure-projective, then N is $S$-pure-projective. $\left(2\right)$ For every $S$-pure exact sequence $0\rightarrow N\rightarrow M\rightarrow K\rightarrow0$ of left R-modules, if M is $S$-pure-injective, then K is $S$-pure-injective.\end{cor} \begin{proof} $\left(1\right)\Rightarrow\left(2\right)$ Let \textit{$0\rightarrow N\overset{\nu}{\rightarrow}M\overset{\mu}{\rightarrow}K\rightarrow0$ }be any $S$-pure exact sequence of left $R$-modules where $M$ is $S$-pure-injective. Let \textit{$0\rightarrow A\overset{\alpha}{\rightarrow}B\overset{\beta}{\rightarrow}C\rightarrow0$ }be any\textit{ }$S$-pure exact sequence of left $R$-modules where $B$ is $S$-pure-projective. By hypothesis, $A$ is $S$-pure-projective. Let $f:A\rightarrow K$ be any $R$-homomorphism. 
Thus there is an $R$-homomorphism $g:A\rightarrow M$ such that $\mu g=f.$ Since $M$ is $S$-pure-injective, there is an $R$-homomorphism $h:B\rightarrow M$ such that $h\alpha=g.$ Put $\lambda=\mu h,$ thus $\lambda\alpha=(\mu h)\alpha=\mu(h\alpha)=\mu g=f.$ Hence $K$ is injective relative to every $S$-pure exact sequence \textit{$0\rightarrow A\rightarrow B\rightarrow C\rightarrow0$ } where $B$ is $S$-pure-projective. By Theorem~\ref{thm:20(2.7),2report,p.11}, $K$\textit{ }is $S$-pure-injective. $\left(2\right)\Rightarrow\left(1\right)$ Let \textit{$0\rightarrow N\overset{\nu}{\rightarrow}M\overset{\mu}{\rightarrow}K\rightarrow0$ }be any $S$-pure exact sequence of left $R$-modules where $M$ is $S$-pure-projective. Let \textit{$0\rightarrow A\overset{\alpha}{\rightarrow}B\overset{\beta}{\rightarrow}C\rightarrow0$ }be any\textit{ }$S$-pure exact sequence of left $R$-modules where $B$ is $S$-pure-injective. By hypothesis, $C$ is $S$-pure-injective. Let $f:N\rightarrow C$ be any $R$-homomorphism. Thus there is an $R$-homomorphism $g:M\rightarrow C$ such that $g\nu=f.$ Since $M$ is $S$-pure-projective, there is an $R$-homomorphism $h:M\rightarrow B$ such that $\beta h=g.$ Put $\lambda=h\nu,$ thus $\beta\lambda=\beta h\nu=g\nu=f.$ Hence $N$ is projective relative to every $S$-pure exact sequence \textit{$0\rightarrow A\rightarrow B\rightarrow C\rightarrow0$ }of left $R$-modules where $B$ is $S$-pure-injective. By Theorem~\ref{thm:20(2.7),2report,p.11}, $N$\textit{ }is $S$-pure-projective. \end{proof} \section{Comparing purities} \begin{thm}\label{thm:2.1(2.15),(2.17),cor(2.18),cor(1.6),2report,p.14-16,p.5} Let S and T be classes of finitely presented left R-modules and let $\mathcal{G}$ and $\mathcal{H}$ be sets of matrices over R such that $L_{ \mathcal G}$-purity=S-purity and $L_{ \mathcal H }$-purity=T-purity. Then the following statements are equivalent. $\left(1\right)$ Every T-pure short exact sequence of left R-modules is S-pure. 
$\left(2\right)$ Every T-pure exact sequence $0\rightarrow A\rightarrow B\rightarrow C\rightarrow0$ of left R-modules where B is T-pure-injective is S-pure. $\left(3\right)$ Every S-pure-projective left R-module is T-pure-projective. $\left(4\right)$ $S\subseteq \textup{add}(T\cup\{_{R}R\}).$ $\left(5\right)$ Every T-pure exact sequence $0\rightarrow A\rightarrow B\rightarrow C\rightarrow0$ of left R-modules where B is T-pure-projective is S-pure. $\left(6\right)$ Every S-pure-injective left R-module is T-pure-injective. $\left(7\right)$ $D^*_{\mathcal G}\subseteq \textup{Prod}((D_{\mathcal H}\cup\{R_{R}\})^*).$ $\left(8\right)$ The corresponding assertions for right modules.\end{thm} \begin{proof} $\left(1\right)\Rightarrow\left(2\right)$ and $\left(1\right)\Rightarrow\left(5\right)$ are obvious. $\left(2\right)\Rightarrow\left(3\right)$ Let $M$ be any $S$-pure-projective left $R$-module and let $\Sigma:$ $0\rightarrow A\rightarrow B\rightarrow C\rightarrow0$ be any $T$-pure exact sequence of left $R$-modules where $B$ is $T$-pure-injective. By hypothesis, $\Sigma$ is $S$-pure and hence the sequence $0\rightarrow \textup{Hom}_{R}\left(M,A\right)\rightarrow \textup{Hom}_{R}\left(M,B\right)\rightarrow \textup{Hom}_{R}\left(M,C\right)\rightarrow0$ is exact. Thus $M$ is projective relative to every $T$-pure exact sequence $\Sigma:$ $0\rightarrow A\rightarrow B\rightarrow C\rightarrow0$ of left $R$-modules where $B$ is $T$-pure-injective. By Theorem~\ref{thm:20(2.7),2report,p.11}, $M$ is $T$-pure-projective. $\left(3\right)\Rightarrow\left(4\right)$ This follows by Proposition~\ref{cor:1.18(2.5),2report,p.11}. $\left(4\right)\Rightarrow\left(1\right)$ Let $\Sigma:$ $0\rightarrow A\rightarrow B\rightarrow C\rightarrow0$ be any $T$-pure exact sequence of left $R$-modules and let $M\in S$. By assumption and Proposition~\ref{cor:1.18(2.5),2report,p.11}, $M$ is $T$-pure-projective.
Thus the sequence $0\rightarrow \textup{Hom}_{R}\left(M,A\right)\rightarrow \textup{Hom}_{R}\left(M,B\right)\rightarrow \textup{Hom}_{R}\left(M,C\right)\rightarrow0$ is exact. Therefore $\Sigma$ is $S$-pure. $\left(5\right)\Rightarrow\left(6\right)$ Let $M$ be any $S$-pure-injective left $R$-module and let $\Sigma:$ $0\rightarrow A\rightarrow B\rightarrow C\rightarrow0$ be any $T$-pure exact sequence of left $R$-modules where $B$ is $T$-pure-projective. By hypothesis, $\Sigma$ is $S$-pure and hence the sequence $0\rightarrow \textup{Hom}_{R}\left(C,M\right)\rightarrow \textup{Hom}_{R}\left(B,M\right)\rightarrow \textup{Hom}_{R}\left(A,M\right)\rightarrow0$ is exact. It follows by Theorem~\ref{thm:20(2.7),2report,p.11} that $M$ is $T$-pure-injective. $\left(6\right)\Rightarrow\left(7\right)$ Let $M\in D^*_{\mathcal G},$ thus $M$ is an $S$-pure-injective left $R$-module $($by Theorem~\ref{thm:28(4.27),black,p.22}$)$. By hypothesis, $M$ is $T$-pure-injective, so by Theorem~\ref{thm:28(4.27),black,p.22} we have that $M\in\textup{Prod}((D_{\mathcal H}\cup\{R_{R}\})^*)$. $\left(7\right)\Rightarrow\left(1\right)$ Let $\Sigma:$ $0\rightarrow A\rightarrow B\rightarrow C\rightarrow0$ be any $T$-pure exact sequence of left $R$-modules. Let $G\in\mathcal G,$ thus by hypothesis, $D^*_{G}\in\textup{Prod}((D_{\mathcal H}\cup\{R_{R}\})^*),$ hence $D^*_{G}$ is $T$-pure-injective, in particular $D^*_{G}$ is injective relative to $\Sigma$. By Proposition~\ref{prop:26(2.13),2report,p.13}, $\Sigma$ is $S$-pure. $\left(1\right)\Rightarrow\left(8\right)$ Let $\Sigma:$ $0\rightarrow A\rightarrow B\rightarrow C\rightarrow0$ be any $D_\mathcal{H}$-pure exact sequence of right $R$-modules. By the right-hand version of Theorem~\ref{thm:(1.3),2report,p3}, the exact sequence of left $R$-modules $\Sigma^{*}:$ $0\rightarrow C^{*}\rightarrow B^{*}\rightarrow A^{*}\rightarrow0$ is $T$-pure.
By hypothesis, $\Sigma^{*}$ is $S$-pure and hence by Theorem~\ref{thm:(1.3),2report,p3} again, $\Sigma$ is $D_{\mathcal G}$-pure. $\left(8\right)\Rightarrow\left(1\right)$ This follows by right/left symmetry. \end{proof} The following corollary is immediately obtained from Theorem~\ref{thm:2.1(2.15),(2.17),cor(2.18),cor(1.6),2report,p.14-16,p.5}. \begin{cor}\label{cor:2.2(2.15(a)) general case of (cor.(2.19),2report,p16)} Let S and T be classes of finitely presented left R-modules and let $\mathcal{G}$ and $\mathcal{H}$ be sets of matrices over R such that $L_{\mathcal G}$-purity=S-purity and $L_{\mathcal H}$-purity=T-purity. Then the following statements are equivalent: $\left(1\right)$ T-purity=S-purity for short exact sequences of left R-modules. $\left(2\right)$ S-pure-projectivity=T-pure-projectivity for left R-modules. $\left(3\right)$ \textup{add}$(S\cup\{_{R}R\})$=\textup{add}$(T\cup\{_{R}R\}).$ $\left(4\right)$ S-pure-injectivity=T-pure-injectivity for left R-modules. $\left(5\right)$ \textup{Prod}$(\{D^*_{G}\mid G\in\mathcal G\cup\{\underset{1\times1}{0}\}\})$=\textup{Prod}$(\{D^*_{H}\mid H\in\mathcal{H}\cup\{\underset{1\times1}{0}\}\}).$ $\left(6\right)$ The corresponding assertions on the right.\end{cor} A short exact sequence $\left(\Sigma\right)$ of left $($resp. right$)$ $R$-modules is called $(m,n)$-pure if it remains exact when tensored with any $(m,n)$-presented right $($resp. left$)$ $R$-module. A left $R$-module $M$ is said to be $(m,n)$-pure-projective $($resp. $(m,n)$-pure-injective$)$ if it is projective $($resp. injective$)$ relative to every $(m,n)$-pure exact sequence of left $R$-modules. A short exact sequence $\left(\Sigma\right)$ of left $($or right$)$ $R$-modules is called $(\aleph_{0},n)$-pure exact $($resp. $(m,\aleph_{0})$-pure exact$)$ if, for each positive integer $m$ $($resp. $n)$, $\left(\Sigma\right)$ is $(m,n)$-pure \cite{AlkaCo11}.
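To fix intuition for these definitions, the following standard illustration over $R=\mathbb{Z}$ may help (it is an illustration we supply here, not one of the results above).

```latex
% A sketch (standard example over R = Z, supplied for illustration only).
% The right Z-module Z/2Z is (1,1)-presented: one generator x, one relation
% 2x = 0, i.e. there is an exact sequence  Z --(2)--> Z --> Z/2Z --> 0.
% Consider the short exact sequence of left Z-modules given by multiplication by 2:
\[
0\rightarrow \mathbb{Z}\overset{2}{\rightarrow}\mathbb{Z}\rightarrow
\mathbb{Z}/2\mathbb{Z}\rightarrow 0.
\]
% Tensoring with Z/2Z turns the first map into the zero map:
\[
\mathbb{Z}/2\mathbb{Z}\overset{0}{\rightarrow}\mathbb{Z}/2\mathbb{Z}\rightarrow
\mathbb{Z}/2\mathbb{Z}\rightarrow 0,
\]
% which is not exact on the left. Hence the sequence is not (1,1)-pure and, in
% particular, not (m,n)-pure for all m,n, i.e. not pure in the classical sense.
```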
Observe that the $(m,n)$-pure exact sequences of left $R$-modules are exactly the $L_{\mathcal H}$-pure exact sequences, where $\mathcal H=M_{m\times n}(R)$, and the $(n,m)$-pure exact sequences of right $R$-modules are exactly the $D_{\mathcal H}$-pure exact sequences of right modules. Also, the $(\aleph_{0},n)$-pure exact sequences of left $R$-modules are exactly the $L_{\mathcal H}$-pure exact sequences, where $\mathcal H=\underset{m\in\mathbb{Z^+}}{\bigcup} M_{m\times n}(R)$, and the $(n,\aleph_{0})$-pure exact sequences of right $R$-modules are exactly the $D_{\mathcal H}$-pure exact sequences. Note that, for left modules, $(n,m)$-presented implies $(m,n)$-pure-projective, whereas, for right modules, $(n,m)$-presented implies $(n,m)$-pure-projective. For all $n,m,s,t\in \mathbb Z^{+}$ with $n\geq s$ and $m\geq t$, since every $(t,s)$-presented right $R$-module is $(m,n)$-presented, it follows that every $(m,n)$-pure exact sequence of left $R$-modules is $(t,s)$-pure. \begin{cor}\label{cor:2.3(2.15(b)),p.3} Let $n,m,s,t\in \mathbb Z^{+}$. Then the following statements are equivalent: $\left(1\right)$ Every $(m,n)$-pure short exact sequence of left $R$-modules is $(s,t)$-pure. $\left(2\right)$ Every $(n,m)$-pure short exact sequence of right $R$-modules is $(t,s)$-pure. $\left(3\right)$ Every $(s,t)$-pure-projective $($resp. $(s,t)$-pure-injective$)$ left $R$-module is $(m,n)$-pure-projective $($resp. $(m,n)$-pure-injective$)$.
$\left(4\right)$ Every $(t,s)$-presented left $R$-module is in \textup{add}$(\{M \mid M$ is an $(n,m)$-presented left $R$-module$\}).$ $\left(5\right)$ Every $(s,t)$-presented right $R$-module is in \textup{add}$(\{M \mid M$ is an $(m,n)$-presented right $R$-module$\}).$\end{cor} \begin{proof} Take $S=L_{\mathcal{G}}$ and $T=L_{\mathcal{H}}$ where $\mathcal G$=${M_{s\times t}}(R)$ and $\mathcal H$=${M_{m\times n}}(R)$ and apply Theorem~\ref{thm:2.1(2.15),(2.17),cor(2.18),cor(1.6),2report,p.14-16,p.5}.\end{proof} \begin{prop}\label{thm:2.7(3.20),blue,p.5} Let S and T be classes of finitely presented left R-modules. Consider the following statements: $\left(1\right)$ Every S-pure short exact sequence of left $R$-modules is T-pure. $\left(2\right)$ Each indecomposable direct summand of a module in T is in \textup{add}$(S\cup\{_{R}R\}).$ $\left(3\right)$ Each indecomposable direct summand of a module in T is a direct summand of a module in $S\cup\{_{R}R\}.$ Then $\left(1\right)$ implies $\left(2\right)$ and $\left(a\right)$ If each indecomposable direct summand of a module in T has local endomorphism ring then $\left(2\right)$ implies $\left(3\right).$ $\left(b\right)$ If each module in T is a direct sum of indecomposable modules then $\left(3\right)$ implies $\left(1\right).$ \end{prop} \begin{proof} $\left(1\right)\Rightarrow\left(2\right)$ This follows by Theorem~\ref{thm:2.1(2.15),(2.17),cor(2.18),cor(1.6),2report,p.14-16,p.5}. 
$\left(a\right)$ Assume that each indecomposable direct summand $M$ of a module in $T$ has local endomorphism ring; thus, by hypothesis, $M\in$add$(S\cup\{_{R}R\}).$ Suppose that $M$ is a direct summand of $\underset{i\in I}{\bigoplus}F_{i},$ where $I$ is a finite set and $F_{i}\in S\cup\{_{R}R\}$ for all $i\in I,$ and let $B$ be a submodule of $\underset{i\in I}{\bigoplus}F_{i}$ such that $M\oplus B=\underset{i\in I}{\bigoplus}F_{i}.$ Since End$_{R}(M)$ is local we have $($see, e.g., \cite[Theorem 2.8, p.37]{Fac98}$)$ that $M$ has the finite exchange property. So $($see, e.g., \cite[Lemma 2.7, p.37]{Fac98}$)$ there is an index $j\in I$ and a direct sum decomposition $F_{j}=B_{j}\oplus C_{j}$ of $F_{j}$ with $M\simeq C_{j}$. Hence $M$ is a direct summand of a module in $S\cup\{_{R}R\}.$ $\left(b\right)$ This follows directly using Proposition~\ref{prop:26(2.13),2report,p.13}. \end{proof} A ring $R$ is said to be Krull-Schmidt if every finitely presented left $($or right$)$ $R$-module is a direct sum of modules with local endomorphism rings $($see \cite[p.97]{Fac98}$)$. \begin{cor}\label{cor:2.8(3.21),blue,p.6} Let R be a left Krull-Schmidt ring and let n,m be positive integers. Then the following statements are equivalent: $\left(1\right)$ $(m,n)$-purity=$(\aleph_{0},n)$-purity for short exact sequences of left $R$-modules. $\left(2\right)$ For each $s\in \mathbb Z^{+},$ each indecomposable $(n,s)$-presented left R-module is a direct summand of an $(n,m)$-presented left R-module. $\left(3\right)$ $(n,m)$-purity=$(n,\aleph_{0})$-purity for short exact sequences of right $R$-modules. $\left(4\right)$ For each $s\in \mathbb Z^{+},$ each indecomposable $(s,n)$-presented right R-module is a direct summand of an $(m,n)$-presented right R-module.
\end{cor} \begin{proof} Put $S=L_\mathcal G$ and $T=L_\mathcal H$, where $\mathcal G$=${M_{m\times n}}(R)$ and $\mathcal{H}=\underset{t\in\mathbb{Z^+}}{\bigcup} M_{t\times n}(R).$ Since $R$ is Krull-Schmidt, each indecomposable direct summand of a module in $T$ has local endomorphism ring and each module in $T$ is a direct sum of indecomposable modules. Hence the result follows on applying Proposition~\ref{thm:2.7(3.20),blue,p.5} and Corollary~\ref{cor:2.3(2.15(b)),p.3}. \end{proof} \section{$(m,n)$-Purity over semiperfect rings} \subparagraph*{\textmd{Let $M$ be a finitely presented left $($or right$)$ $R$-module. We denote by gen$(M)$ its minimal number of generators and by rel$(M)$ the minimal number of relations on these generators, so that there is an exact sequence $R^{\textup{rel}(M)}\rightarrow R^{\textup{gen}(M)}\rightarrow M\rightarrow0$; it follows easily that rel$(M)$ is the minimal number of relations on any generating set of $M$.}} \begin{rem}\label{rem:3.1(3.1),2report,p.16} Let M be a finitely presented left R-module and let N be a direct summand of M. Then it is easy to see that {\rm gen}$(N)\leq${\rm gen}$(M)$ and {\rm rel}$(N)\leq${\rm rel}$(M)+${\rm gen}$(M).$ \end{rem} \begin{prop}\label{prop:3.2(3.2),2report,p.16} Let H be any matrix over a ring R such that {\rm End}$_{R}(L_{H})$ is local and $L_H$ is not projective. Set $\mathcal H=\bigcup\{M_{r\times q}(R)\mid q<{\rm gen}(L_{H})$ or $r+q<{\rm rel}(L_{H})\}.$ Then $L_{H}$ is an $L_H$-pure-projective left R-module which is not $L_{\mathcal H}$-pure-projective and hence not $L_{\mathcal G}$-pure-projective for any ${\mathcal G}\subseteq{\mathcal H}$.
In particular $L_H$-purity and $L_{\mathcal H}$-purity are not equivalent.\end{prop} \begin{proof} By Proposition~\ref{cor:1.18(2.5),2report,p.11}, $L_{H}$ is $L_H$-pure-projective and, if $L_H$ is $L_{\mathcal H}$-pure-projective, then $L_{H}\in$ add$(L_{\mathcal H}\cup\{_{R}R\}).$ Since End$_{R}(L_H)$ is local, $L_H$ is, as in Proposition~\ref{thm:2.7(3.20),blue,p.5}, a direct summand of a module in $L_{\mathcal H}\cup\{_{R}R\}.$ Thus either $L_H$ is a direct summand of $L_G,$ where $\underset{r\times q}{G}\in\mathcal H,$ or $L_H$ is projective. If $L_H$ is a direct summand of $L_G$ then, by Remark~\ref{rem:3.1(3.1),2report,p.16}, {\rm gen}$(L_H)\leq$gen$(L_G)\leq q$ and rel$(L_H)\leq$rel$(L_G)+$gen$(L_G)\leq r+q$, which contradicts $G\in\mathcal H$. \end{proof} Note that if $M$ is a left $R$-module, $I$ is a left ideal of $R$ and $\alpha\in {\rm End}_{R}(M)$ then there is an induced homomorphism $\overline{\alpha}:M/IM\rightarrow M/IM$ which is an isomorphism if $\alpha$ is an isomorphism. Let $R$ be a ring and let $J$ be its Jacobson radical. Recall that $R$ is semiperfect if $R/J$ is semisimple and idempotents lift modulo $J$. Say that an idempotent $e\in R$ is local if $eRe$ is a local ring. We have $($e.g., \cite[42.6, p.375]{Wis91}$)$ that $R$ is semiperfect if and only if $R=e_{1}R\oplus e_{2}R\oplus\cdots\oplus e_{n}R$ for orthogonal local idempotents $e_{i}$. \begin{lem}\label{lem:3.10(3.10),2report,p.18} Let $m\in \mathbb Z^{+}$. Suppose that one of the following two conditions is satisfied. $\left(1\right)$ The ring $R$ is semiperfect and $I$ is a nonzero ideal with $\textup{gen}(I_R)=m$ and $I\subseteq e_{j}R$ for some local idempotent $e_j$ of R. $\left(2\right)$ The ring $R$ is Krull-Schmidt and $I$ is a nonzero right ideal with $\textup{gen}(I)=m$ and $I\subseteq e_{j}R$ for some local idempotent $e_j$ of R.
Then $e_{j}R/I$ is a finitely presented right R-module with {\rm gen}$(e_{j}R/I)$=1, {\rm rel}$(e_{j}R/I)$=m and ${\rm End}_{R}(e_{j}R/I)$ is a local ring.\end{lem} \begin{proof} Let $P=e_{j}R.$ Then {\rm gen}$(P/I)=1$ and clearly {\rm rel}$(P/I)={\rm gen}(I)=m.$ In case $\left(1\right)$: Since ${\rm End}_{R}(e_{j}R)\simeq e_{j}Re_{j}$ it follows that ${\rm End}_{R}(P)$ is a local ring. Let $\alpha\in {\rm End}_{R}(P/I)$ and consider the following diagram: \[\begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=3em, column sep=3em] { \ P&P/I \\ P &P/I \\ }; \path[->] (m-1-1) edge node[auto] {$\scriptstyle\pi$} (m-1-2); \path[->] (m-1-2) edge node[auto] {$\scriptstyle\alpha$} (m-2-2) (m-2-1) edge node[auto] {$\scriptstyle\pi$} (m-2-2); \path[dashed ][->] (m-1-1) edge node[left] {$\scriptstyle\alpha^{'}$} (m-2-1); \end{tikzpicture}\] where $\pi$ is the natural epimorphism. By projectivity of $P,$ there exists an $R$-homomorphism $\alpha^{'}:P\rightarrow P$ such that $\pi \alpha^{'}=\alpha \pi$; note that then $\alpha^{'}(I)\subseteq I.$ Since ${\rm End}_{R}(P)$ is a local ring, either $\alpha^{'}$ or $1_{P}-\alpha^{'}$ is an isomorphism. The inverse of that isomorphism will, as noted above, induce an isomorphism on $P/I=P/PI$ which will be an inverse of $\alpha$ or $1_{(P/I)}-\alpha$, as appropriate. Hence ${\rm End}_{R}(e_{j}R/I)$ is a local ring. In case $\left(2\right)$: Since $e_{j}R$ is a local right $R$-module, every homomorphic image of $e_{j}R$ is indecomposable \cite[Proposition 4.1, p.246]{War71}. Hence $e_{j}R/I$ is indecomposable. Since $R$ is Krull-Schmidt, ${\rm End}_{R}(e_{j}R/I)$ is a local ring. \end{proof} Let $R$ be any ring and $M$ be any finitely presented right $R$-module. An Auslander-Bridger dual of $M$ is denoted by $D(M)$ and defined as follows. Choose an exact sequence $Q\overset{\phi}{\rightarrow}P\rightarrow M\rightarrow0$ in which $P$ and $Q$ are finitely generated projective right $R$-modules.
Define $D(M)$ to be the cokernel of the homomorphism $\phi^{+}:P^{+}\rightarrow Q^{+}$ where $X^{+}={\rm Hom}_{R}(X,R_{R})$, for any right $R$-module $X$ \cite{War75}. Although $D(M)$ depends on the choice of exact sequence, if $D^{\prime}(M)$ is another such dual of $M$ then $D(M)\oplus A\simeq D^{\prime}(M)\oplus B$ for some finitely generated projective modules $A,B.$ \begin{lem}\label{lem:3.12(3.12),2report,p.19} Let $m\in \mathbb Z^{+}$ and let M be any $($1,m$)$-presented right R-module. Then $D(M)$ is an $(n,m)$-pure-projective left R-module, for all $n\in \mathbb Z^{+}.$ \end{lem} \begin{proof} Applying Hom$_R(-,R_R)$ to a presentation $R_{R}^{m}\overset{\lambda_H}{\longrightarrow}R_{R}^{1}\rightarrow M\rightarrow0$ of $M$ gives the presentation $_RR^{1}\overset{\rho_H}{\longrightarrow}{}_RR^{m}\rightarrow D(M)\rightarrow0$ of $D(M)$. Thus $D(M)$ is $(m,1)$-presented, hence $(1,m)$-pure-projective, hence $(n,m)$-pure-projective for all $n\geq 1$. \end{proof} \begin{prop}\label{prop:3.12(3.12(3)),2report,p.19} Let $m\in \mathbb Z^{+}$. Suppose that one of the following two conditions is satisfied. $\left(1\right)$ The ring $R$ is semiperfect and $I$ is a nonzero ideal with $\textup{gen}(I_R)=m+1$ and $I\subseteq e_{j}R$ for some local idempotent $e_j$ of R. $\left(2\right)$ The ring $R$ is Krull-Schmidt and $I$ is a nonzero right ideal with $\textup{gen}(I)=m+1$ and $I\subseteq e_{j}R$ for some local idempotent $e_j$ of R. \noindent Then $D(e_{j}R/I)$ is not an $L_{\mathcal H}$-pure-projective left R-module, where $\mathcal H=\bigcup\{\underset{s\times t}{M}(R)\mid s,t\in \mathbb Z^{+}$ with $t<m+1\}.$ \end{prop} \begin{proof} By Lemma~\ref{lem:3.10(3.10),2report,p.18}, ${\rm End}_{R}(e_{j}R/I)$ is a local ring and hence ${\rm End}_{R}(D(e_{j}R/I))$ is local \cite[Theorem 2.4, p.196]{War75}.
Since {\rm gen}$(e_{j}R/I)=1$ and {\rm rel}$(e_{j}R/I)=m+1$ $($by Lemma~\ref{lem:3.10(3.10),2report,p.18}$),$ it follows easily that {\rm gen}$(D(e_{j}R/I))=m+1$ and {\rm rel}$(D(e_{j}R/I))=1.$ Note that $e_jR/I$ is not projective: since $\textup{gen}(I)=m+1\geq2$ we have $I\neq e_{j}R$, so if $e_{j}R/I$ were projective then the sequence $0\rightarrow I\rightarrow e_{j}R\rightarrow e_{j}R/I\rightarrow0$ would split, contradicting the indecomposability of $e_{j}R$. Hence $D(e_{j}R/I)$ is not projective either and so, by Proposition~\ref{prop:3.2(3.2),2report,p.16}, $D(e_{j}R/I)$ is not $L_{\mathcal H}$-pure-projective. \end{proof} The following theorem is a generalization of \cite[Theorem 3.5(1), p.3888]{AlkaCo11}. \begin{thm}\label{thm:3.13(3.13),2report,p.20} Let $(n,m)$ and $(r,s)$ be any two pairs of positive integers such that $n\neq r$. Suppose that one of the following two conditions is satisfied: $\left(a\right)$ $R$ is semiperfect and there exists an ideal I of R with $\textup{gen}(I_R)=\textup{max}\{n,r\}$ and $I\subseteq e_{j}R$ for some local idempotent $e_{j};$ $\left(b\right)$ $R$ is Krull-Schmidt and there exists a right ideal I of R with $\textup{gen}(I)=\textup{max}\{n,r\}$ and $I\subseteq e_{j}R$ for some local idempotent $e_{j}$. \noindent Then: $\left(1\right)$ $(m,n)$-purity and $(s,r)$-purity of short exact sequences of left R-modules are not equivalent; $\left(2\right)$ $(n,m)$-purity and $(r,s)$-purity of short exact sequences of right R-modules are not equivalent.\end{thm} \begin{proof} $\left(1\right)$ Without loss of generality, we can assume that $n<r$. By Lemma~\ref{lem:3.12(3.12),2report,p.19} and Proposition~\ref{prop:3.12(3.12(3)),2report,p.19}, $D(e_{j}R/I)$ is $(s,r)$-pure-projective and not $(m,n)$-pure-projective. Thus $(m,n)$-pure-projectivity and $(s,r)$-pure-projectivity of left $R$-modules are not equivalent and hence, by Corollary~\ref{cor:2.3(2.15(b)),p.3}, $(m,n)$-purity and $(s,r)$-purity for left $R$-modules are not equivalent. $\left(2\right)$ By $\left(1\right)$ and Corollary~\ref{cor:2.3(2.15(b)),p.3}.
\end{proof} \begin{cor}\label{cor:3.14(3.14),blue,p.20(a)} Let $R$ be a local ring and let $I$ be a finitely generated ideal of R with $\textup{gen}(I_R)=r$. Then for all $n<r$ and for all m,s: $\left(1\right)$ $(m,n)$-purity and $(s,r)$-purity for left R-modules are not equivalent. $\left(2\right)$ $(n,m)$-purity and $(r,s)$-purity for right R-modules are not equivalent. \end{cor} \begin{proof} Since $R$ is local, it is semiperfect and $1$ is a local idempotent. By Theorem~\ref{thm:3.13(3.13),2report,p.20}, the result holds.\end{proof} Let $M$ be a finitely generated left module over a semiperfect ring $R.$ Warfield in \cite{War75} defined Gen$(M)$ to be the number of summands in a decomposition of $M/JM$ as a direct sum of simple modules, where $J=J(R).$ If $M$ is a finitely presented left module over a semiperfect ring $R$ and $f:P\rightarrow M$ is a projective cover with $K=$ker$(f),$ then Warfield defined Rel$(M)$ by Rel$(M)=$Gen$(K).$ If $M$ is a left $R$-module and $x\in M$, we say $x$ is a local element if $Rx$ is a local module. The number of elements in any minimal generating set of local elements of $M$ is exactly Gen$(M)$ \cite[Lemma 1.11]{War75}. One may use these to obtain similar results, for example the following.
\begin{prop}\label{prop:3.20(3.27),blue,p.12} Let H be a matrix over a semiperfect ring R such that $L_H$ is not projective and \textup{End}$_{R}(L_{H})$ is a local ring, and let $\mathcal H=\{K\mid K$ is a matrix with \textup{Gen}$(L_{H})>\textup{Gen}(L_{K})$ or \textup{Rel}$(L_{H})>\textup{Rel}(L_{K})\}.$ Then $L_{H}$ is not $L_{\mathcal H}$-pure-projective.\end{prop} \begin{proof} Assume that $L_{H}$ is $L_{\mathcal H}$-pure-projective; thus, by Proposition~\ref{cor:1.18(2.5),2report,p.11}, $L_{H}\in$ add$(L_{\mathcal H}\cup\{_{R}R\}).$ Since End$_{R}(L_H)$ is a local ring, $L_H$ is, as in Proposition~\ref{thm:2.7(3.20),blue,p.5}, a direct summand of a module in $L_{\mathcal H}\cup\{_{R}R\}.$ Thus either $L_H$ is a direct summand of $L_{D},$ where $D\in\mathcal{H},$ or $L_H$ is a direct summand of $_{R}R$. Since $L_H$ is not projective, $L_H$ is a direct summand of $L_{D};$ thus, by \cite[Lemma 1.10, p.192]{War75}, Gen$(L_H)\leq$Gen$(L_{D})$ and Rel$(L_H)\leq$Rel$(L_{D})$, which contradicts ${D}\in\mathcal{H}$. Therefore, $L_{H}$ is not $L_{\mathcal{H}}$-pure-projective.
\end{proof} \begin{rem*}\label{cor:3.21(3.28),blue,p.13} Since, if $K$ is an $r\times q$ matrix, we have \textup{Gen}$(R)\cdot q\geq$\textup{Gen}$(R)\cdot$\textup{gen}$(L_K)\geq \textup{Gen}(L_K)$ and similarly for relations, if $H$ is as in Proposition~\ref{prop:3.20(3.27),blue,p.12} then $L_H$ is not $L_{{\mathcal H}_{i}}$-pure-projective for any of the sets of matrices: ${\mathcal H}_{1}=\{K\mid$ \textup{Gen}$(L_{H})>$\textup{Gen}$(R)\cdot$\textup{gen}$(L_{K})$ or \textup{Rel}$(L_{H})>$\textup{Gen}$(R)\cdot$\textup{rel}$(L_{K})\};$ ${\mathcal H}_{2}=\{\underset{r\times q}{K}\mid r,q\in \mathbb Z^{+}$ such that \textup{Gen}$(L_{H})>q\,$\textup{Gen}$(R)$ or \textup{Rel}$(L_{H})>$\textup{Rel}$(L_{K})\};$ ${\mathcal H}_{3}=\bigcup\{\underset{r\times q}{M}(R)\mid r,q\in \mathbb Z^{+}$ such that \textup{Gen}$(L_{H})>q\,$\textup{Gen}$(R)$ or \textup{Rel}$(L_{H})>r\,$\textup{Gen}$(R)\}.$ \end{rem*} \section{Purity over finite-dimensional algebras} \subparagraph*{\textmd{In this section we assume some knowledge of the representation theory of finite-dimensional algebras; see, for example, \cite{AsSiSk06} and \cite{AuReSm95}. Let $R$ be a Krull-Schmidt ring and let $M$ be any finitely presented left $R$-module. We will use ind$(M)$ to denote the class of $($isomorphism types of$)$ indecomposable direct summands of $M$. If $S$ is a class of finitely presented left $R$-modules, we define ind$(S)=\underset{M\in S}{\bigcup}$ind$(M).$ }} \begin{prop}\label{prop:4.1(4.12)(a),p.7(a)} Let R be a Krull-Schmidt ring and let S be a class of finitely presented left R-modules. Then the following statements are equivalent for a left R-module M: $\left(1\right)$ M is S-pure-projective. $\left(2\right)$ M is \textup{ind}$(S)$-pure-projective.
$\left(3\right)$ M is isomorphic to a direct sum of modules in \textup{ind}$(S\cup\{_{R}R\}).$ \end{prop} \begin{proof} Since $R$ is a Krull-Schmidt ring, each element in $S\cup\{_{R}R\}$ is a direct sum of modules in ind$(S\cup\{_{R}R\})$, so this follows by Proposition~\ref{cor:1.18(2.5),2report,p.11}.\end{proof} The following corollary is immediate from Proposition~\ref{prop:4.1(4.12)(a),p.7(a)} and Theorem~\ref{thm:2.1(2.15),(2.17),cor(2.18),cor(1.6),2report,p.14-16,p.5}. \begin{cor}\label{cor:4.2(4.13),p.8} Let R be a Krull-Schmidt ring and let S and T be two classes of finitely presented left R-modules. Then T-purity implies S-purity if and only if $\textup{ind}(S)\subseteq \textup{ind}(T\cup\{_{R}R\}).$ \end{cor} Let $R=k\tilde{A}_{1}$ be the Kronecker algebra over an algebraically closed field $k.$ Left $R$-modules may be viewed as representations of the quiver $\underset{\beta}{\overset{\alpha}{_{1}\bullet\leftleftarrows\bullet_{2}}}$. The preinjective and preprojective indecomposable finite-dimensional left $R$-modules are, up to isomorphism, uniquely determined by their dimension vectors. For $n\in\mathbb{N}$ we will denote by $I_{n}$ $($resp. $P_{n})$ the finite-dimensional indecomposable preinjective $($resp. preprojective$)$ left $R$-module with dimension vector $(n,n+1)$ $($resp. $(n+1,n)).$ Also, for $n\in \mathbb{Z}^{+}$ we will use $R_{\lambda,n}$ to denote the finite-dimensional indecomposable regular left $R$-module with dimension vector $(n,n)$ and parameter $\lambda\in k\cup\{\infty\}$, where $R_{\lambda,1}$ is the module $\underset{\lambda}{\overset{1}{k\leftleftarrows k}}$ for $\lambda \in k$ and $R_{\infty,1}=\underset{1}{\overset{0}{k\leftleftarrows k}}$.
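Before the example, it may help to sketch how gen and rel are computed for a small Kronecker module; the following is a standard computation which we assume rather than prove, and it is consistent with the bound rel$(M)\leq 2n+1$ used in the example below.

```latex
% A sketch for the simple injective I_0 (dimension vector (0,1)).
% The indecomposable projectives are P_0 and P_1, with dimension vectors (1,0)
% and (2,1), and {}_RR \simeq P_0 \oplus P_1. The projective cover of I_0 is
% P_1 --> I_0, with kernel of dimension vector (2,1) - (0,1) = (2,0), i.e. P_0^2:
\[
0\rightarrow P_{0}^{2}\rightarrow P_{1}\rightarrow I_{0}\rightarrow 0.
\]
% Hence gen(I_0) = 1. For rel one must use free modules: the kernel of a
% surjection R = P_0 \oplus P_1 --> I_0 is P_0 \oplus P_0^2 \simeq P_0^3, whose
% top is S_1^3, so three relations are needed and rel(I_0) = 3. This matches
% the bound rel(M) <= 2n+1 in the case n = 1, M = I_0 \in S_2 below.
```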
\begin{example}\label{example:4.3((Example 4.14, p.8) and (New version of Proposition 4.32, p.27))} Let $R=k\tilde{A}_{1}$ be the Kronecker algebra over an algebraically closed field $k.$ Let $n,r\in\mathbb{Z}^{+}$ and let $S_{1}=\{P_{i}\mid i\leq n\},$ $S_{2}=\{I_{i}\mid i\leq n-1\},$ $S_{3}=\{R_{\lambda,i}\mid i\leq n$ and $\lambda\in k\cup\{\infty\}\}$ and $S_{4}=\{R_{\lambda,1}\mid\lambda\in k\cup\{\infty\}\}\cup\{P_{0},P_{1}\}.$ Then: $\left(i\right)$ $S_{1}\cup S_{2}\cup S_{3}$-purity$=(\aleph_{0},n)$-purity$=(2n+1,n)$-purity, for short exact sequences of left $R$-modules. $\left(ii\right)$ $S_{4}$-purity$=(1,1)$-purity for left R-modules. $\left(iii\right)$ $(1,1)$-purity is not equivalent to $(\aleph_{0},n)$-purity for left $R$-modules. $\left(iv\right)$ $(n,1)$-purity is not equivalent to $(r,2)$-purity for left $R$-modules. \end{example} \begin{proof} $\left(i\right)$ Let $\mathcal H =\underset{m\in\mathbb{Z}^{+}}{\bigcup}\underset{m\times n}{M}(R)$. It follows directly from the description of the finite-dimensional indecomposable modules and Remark~\ref{rem:3.1(3.1),2report,p.16} that ind$(L_{\mathcal H})=S_1\cup S_2\cup S_3$. Thus, by Proposition~\ref{prop:4.1(4.12)(a),p.7(a)} we have that $S_1\cup S_2\cup S_3$-purity=$L_{\mathcal H}$-purity=$(\aleph_{0},n)$-purity for left $R$-modules. Let $M\in S_{1}\cup S_{2}\cup S_{3}.$ It can be checked that, if $M\in S_1$ then rel$(M)\leq 2n-1$, if $M\in S_2$ then rel$(M)\leq 2n+1$ and if $M\in S_3$ then rel$(M)\leq 2n$, and hence rel$(M)\leq 2n+1$ in all cases. Since gen$(M)\leq n,$ each module in $S_{1}\cup S_{2}\cup S_{3}$ is $(n,2n+1)$-presented. Thus $(2n+1,n)$-purity=$S_{1}\cup S_{2}\cup S_{3}$-purity$=(\aleph_{0},n)$-purity. $\left(ii\right)$ Let $\lambda\in k\cup \{\infty\}$ and let $M=R_{\lambda,1}\oplus P_0$.
Since the sequence $_{R}R\overset{(\alpha+\lambda\beta)\times-}{\longrightarrow}{}_{R}R\rightarrow M\rightarrow0$ $($with $\alpha+\lambda\beta$ read as $\beta$ when $\lambda=\infty)$ is exact, $M$ is $(1,1)$-presented and hence $R_{\lambda,1}$ is a direct summand of a $(1,1)$-presented module. Thus every module in $S_4$ is a direct summand of a $(1,1)$-presented module. Conversely, let $N$ be any indecomposable direct summand of a $(1,1)$-presented left $R$-module; thus gen$(N)=1$ and rel$(N)\leq 2$ $($by Remark~\ref{rem:3.1(3.1),2report,p.16}$)$ and hence either $N=P_0$ or $N=P_1$ or $N=R_{\lambda,1}$ for some $\lambda\in k\cup \{\infty\}$. Thus $N$ is a direct summand of a module in $S_4\cup \{_RR\}$. By Proposition~\ref{thm:2.7(3.20),blue,p.5}, $S_{4}$-purity$=(1,1)$-purity. $\left(iii\right)$ Assume that $(1,1)$-purity=$(\aleph_{0},n)$-purity for some $n\in\mathbb{Z}^{+}$. Thus, by $\left(i\right)$ and $\left(ii\right)$ above, we have that $S_{4}$-purity=$S_{1}\cup S_{2}\cup S_{3}$-purity. This contradicts Corollary~\ref{cor:4.2(4.13),p.8}, because $I_0 \in S_{1}\cup S_{2}\cup S_{3}$ and $I_0 \notin S_4$. $\left(iv\right)$ Note that $R_{R}=e_{1}R \oplus e_{2}R$, where $e_1R$ $($resp. $e_2R)$ is the preprojective right $R$-module of dimension vector $(0,1)$ $($resp. $(1,2))$. Let $I=J(e_2R)$; since $I=\alpha R\oplus\beta R$, it follows that gen$(I_R)=2.$ By Theorem~\ref{thm:3.13(3.13),2report,p.20} we have that $(n,1)$-purity and $(r,2)$-purity for left $R$-modules are not equivalent. \end{proof} \begin{prop}\label{prop:4.4(new)} Let $R$ be a finite-dimensional algebra over a field $k$. If $R$ is not of finite representation type, then for every $r\in \mathbb{Z}^{+}$, there is $n>r$ such that $(\aleph_{0},n)$-purity$\neq (\aleph_{0},r)$-purity for left R-modules. \end{prop} \begin{proof} Suppose that $R$ is not of finite representation type. Assume that there is $r\in \mathbb{Z}^{+}$ such that $(\aleph_{0},n)$-purity$=(\aleph_{0},r)$-purity for left $R$-modules for all $n>r$.
Since $R$ is a finite-dimensional algebra and is not of finite representation type, it follows from \cite[Corollary 1.5, p.194]{AuReSm95} that there is a finitely generated indecomposable left $R$-module $M$ such that gen$(M)\geq r+1$. By assumption, $(\aleph_{0},\textup{gen}(M))$-purity$=(\aleph_{0},r)$-purity for left $R$-modules and hence, by Corollary~\ref{cor:4.2(4.13),p.8}, $M\in$ind$(\{(r,s)$-presented left $R$-modules $\mid s\in \mathbb{Z}^{+}\})$, which is a contradiction. \end{proof} Let $R$ be an algebra over a field $k$. From now on, we use $M^{*}$ to denote Hom$_k(M,k)$ for any $R$-module $M$. \begin{prop}\label{prop:4.4(Proposition 4.36, p.33 and Proposition 4.48(a), p.43(a))} Let R be a finite-dimensional algebra over a field $k$ and let $\mathcal H$ be a set of matrices over R. Then a left R-module M is $L_{\mathcal H}$-pure-injective if and only if M is a direct summand of a direct product of modules in $\textup{ind}(\{D_{H}^{*}\mid H\in\mathcal H\cup\{\underset{1\times1}{0}\}\}).$ \end{prop} \begin{proof} This follows by Theorem~\ref{thm:28(4.27),black,p.22} since each module $D_{H}^{*}$ is a finite direct sum of indecomposable modules. \end{proof} We now describe these modules in terms of ind$(\{L_H\mid H\in\mathcal H\cup\{\underset{1\times1}{0}\}\})$. \begin{thm}\label{thm:4.5} Let R be a finite-dimensional algebra over a field $k$ and let $S$ be a set of indecomposable finite-dimensional modules. Then the $S$-pure-injective left $R$-modules are the direct summands of direct products of modules in $\tau S\cup R$-\textup{inj}, where $\tau$ is the Auslander-Reiten translate and R-\textup{inj} denotes the set of indecomposable injective left R-modules. \end{thm} \begin{proof} The Auslander-Reiten translate of a module $M$ is given by the formula $\tau M=(DM)^*$ where $DM$ is the Auslander-Bridger dual $(=$transpose$)$ of $M$ obtained from a minimal projective resolution of $M$.
In particular $\tau L_H=(D_H)^*$, so this follows from Proposition~\ref{prop:4.4(Proposition 4.36, p.33 and Proposition 4.48(a), p.43(a))}. \end{proof} \begin{cor}\label{cor:4.5(Proposition 4.48, p.43)} Let R be a finite-dimensional algebra over a field $k$ and let $\mathcal H$ be a set of matrices over R. If $\textup{ind}(\{\textup{Hom}_{k}(D_{H},k)\mid H\in\mathcal H\cup\{\underset{1\times1}{0}\}\})$ is finite then it is the set of indecomposable $L_{\mathcal H}$-pure-injective left R-modules and every $L_{\mathcal H}$-pure-injective module is a direct sum of copies of these modules.\end{cor} \begin{proof} This follows since if $M$ is indecomposable of finite length over its endomorphism ring then every product of copies of $M$ is a direct sum of copies of $M$ $($see, for example, \cite[Theorem 4.4.28, p.180]{Pre09}$)$. \end{proof} Recall $($see \cite[3.4.7]{Pre09}$)$ that a subclass $T$ of $R$-Mod is said to be definable if it is closed under direct products, direct limits and pure submodules. A class $T$ of pure-injective modules closed under direct products, direct summands and isomorphisms is definable if and only if each direct sum of modules in $T$ is pure-injective, that is, if and only if each element in $T$ is $\Sigma$-pure-injective $($see, for example, \cite[4.4.12]{Pre09}$)$. In this case every module in $T$ is a direct sum of indecomposable modules. Let $S$ be a class of finitely presented left $R$-modules. We denote by $S$-Pinj the class of $S$-pure-injective left $R$-modules. \begin{cor}\label{cor:4.7(Lemma 4.44, p.40)} Let R be a finite-dimensional algebra over a field k and let $\mathcal H$ be a set of matrices over R. Then $L_{\mathcal H}$-Pinj is a definable subclass of $R$-Mod if and only if each direct sum of modules in $\textup{ind}(\{D_{H}^{*}\mid H\in\mathcal H\})$ is pure-injective. \end{cor} \begin{proof} Let $T=\textup{ind}(\{D_{H}^{*}\mid H\in\mathcal H\})$ and $T^\prime=T\cup R$-inj.
One direction follows from the remarks above and Proposition~\ref{prop:4.4(Proposition 4.36, p.33 and Proposition 4.48(a), p.43(a))}. $(\Leftarrow)$ By hypothesis, each direct sum of modules in $T$ is pure-injective. Since $R$ is a left Noetherian ring, each direct sum of modules in $R$-inj is injective. Thus each direct sum of modules in $T^\prime$ is pure-injective, and hence each module in $T^\prime$ is $\Sigma$-pure-injective. Let $M\in L_{\mathcal H}$-Pinj. By Proposition~\ref{prop:4.4(Proposition 4.36, p.33 and Proposition 4.48(a), p.43(a))}, there exists a subfamily $\{M_i\}_{i\in I}$ of $T^\prime$ such that $M$ is a direct summand of $\underset{i\in I}{\prod}M_{i}$. By the above, $\underset{i\in I}{\bigoplus}M_{i}$ is $\Sigma$-pure-injective. Since $\underset{i\in I}{\prod}M_{i}$ is in the definable subcategory generated by $\underset{i\in I}{\bigoplus}M_{i}$, it follows from \cite[Proposition 4.4.12, p.176]{Pre09} that $\underset{i\in I}{\prod}M_{i}$ is $\Sigma$-pure-injective. It follows that $M$ is $\Sigma$-pure-injective and hence each element of $L_{\mathcal H}$-Pinj is $\Sigma$-pure-injective. Therefore $L_{\mathcal H}$-Pinj is a definable subclass of $R$-Mod. \end{proof} Every finite-dimensional module is $\Sigma$-pure-injective and by \cite[Theorem 4.6, p.750]{Len83} every direct sum of preinjective modules is $\Sigma$-pure-injective. The equivalence of (1) and (2) in the next result therefore follows from the description of the $\Sigma$-pure-injective modules in \cite[Theorem 2.1, p.847]{PrPu02}, and the equivalence with (3) follows since the duality Hom$_k(-,k)$ interchanges preprojective and preinjective modules and sends regular modules to regular modules. \begin{prop}\label{prop:4.8(prop 4.44(a), p.41)} Let $R$ be a tame hereditary finite-dimensional algebra over a field $k$ and let $\mathcal H$ be a set of matrices over $R.$ Then the following statements are equivalent. $\left(1\right)$ $L_{\mathcal H}$-Pinj is a definable subclass of $R$-Mod.
$\left(2\right)$ The set of preprojective or regular modules in $\textup{ind}(\{D_{H}^{*}\mid H\in\mathcal H\})$ is finite. $\left(3\right)$ The set of preinjective or regular modules in $\textup{ind}(\{D_{H}\mid H\in\mathcal H\})$ is finite. \end{prop} Let $_{R}$pinj be the set of isomorphism classes of indecomposable pure-injective left $R$-modules and let $T\subseteq R$-ind, the class of all finitely presented indecomposable left $R$-modules. We use fsc$(T)$ $($resp. $\overline{T})$ to denote the closure of $T$ in the full support topology $($resp. the Ziegler topology$)$. Recall that fsc$(T)$ is the class Prod$(T)\cap{}_{R}$pinj. See for instance \cite[Sections 5.1.1 and 5.3.7]{Pre09} for details about the Ziegler topology and the full support topology. \begin{prop}\label{prop:4.10(prop 4.49, p.44)} Let $R$ be a tame hereditary finite-dimensional algebra over a field $k$. Let $\mathcal H$ be a set of matrices over $R$ such that $L_{\mathcal H}$-Pinj is definable. Then the following statements are equivalent. $\left(1\right)$ The generic module is $L_{\mathcal H}$-pure-injective. $\left(2\right)$ The set of preinjective left $R$-modules in $\textup{ind}(\{D_{H}^{*}\mid H\in\mathcal H\})$ is infinite. $\left(3\right)$ All but at most $n(R)-2$ Pr\"{u}fer modules are $L_{\mathcal H}$-pure-injective, where $n(R)$ is the number of isomorphism classes of simple $R$-modules. $\left(4\right)$ At least one Pr\"{u}fer $R$-module is $L_{\mathcal H}$-pure-injective. \end{prop} \begin{proof} $\left(1\right)\Rightarrow\left(2\right).$ Let $T=\textup{ind}(\{D_{H}^{*}\mid H\in\mathcal H\})$. Assume that the set of preinjective left $R$-modules in $T$ is finite. Since $L_{\mathcal H}$-Pinj is definable it follows from Proposition~\ref{prop:4.8(prop 4.44(a), p.41)} that $T$ is finite. By Corollary~\ref{cor:4.5(Proposition 4.48, p.43)}, the generic module cannot be $L_{\mathcal H}$-pure-injective.
$\left(2\right)\Rightarrow\left(3\right).$ Let $X$ be the class of all indecomposable $L_{\mathcal H}$-pure-injective modules. Since $L_{\mathcal H}$-Pinj is definable it follows from \cite[Theorem 5.1.1, p.211]{Pre09} that $X$ is a closed set in the Ziegler topology. Since $X$ contains infinitely many non-isomorphic preinjective modules, by \cite[Corollary, p.113]{Rin98}, all but at most $n(R)-2$ Pr\"{u}fer modules belong to $X,$ where $n(R)$ is the number of isomorphism classes of simple $R$-modules. $\left(3\right)\Rightarrow\left(4\right).$ This is obvious. $\left(4\right)\Rightarrow\left(1\right).$ Assume that there is a Pr\"{u}fer module which is $L_{\mathcal H}$-pure-injective. As noted in the proof of $\left(2\right)\Rightarrow\left(3\right)$, $X$ is a closed set in the Ziegler topology and by hypothesis, it contains at least one module which is not of finite length. By \cite[Theorem, p.106]{Rin98}, the generic module belongs to $X$.\end{proof} \begin{rem}\label{rem:4.11(Corollary 4.50, p.45)} If $R$ is the Kronecker algebra over a field $k$ then condition $\left(3\right)$ above becomes: $\left(3\right)$ Every Pr\"{u}fer module is $L_{\mathcal H}$-pure-injective. \end{rem} \begin{lem}\label{lem:4.12} Let $T\subseteq R$-ind. If \emph{Prod}$(T)$ is definable then $\overline{T}=$\emph{fsc}$(T).$\end{lem} \begin{proof} Suppose that Prod$(T)$ is definable. It is clear that fsc$(T)\subseteq\overline{T}.$ Since $T\subseteq$Prod$(T)$ it follows that $D(T)\subseteq$Prod$(T),$ where $D(T)$ is the definable subcategory generated by $T.$ Thus $\overline{T}=D(T)\cap{}_{R}$pinj$\subseteq$Prod$(T)\cap{}_{R}$pinj$=$fsc$(T)$ and hence $\overline{T}=$fsc$(T).$\end{proof} \begin{rem}\label{rem:4.13} Let $T$ be a class of pure-injective left $R$-modules and let $S\subseteq T.$ If \emph{Prod}$(T)$ is a definable subclass of $R$-Mod then so is \emph{Prod}$(S).$\end{rem} \begin{cor}\label{cor:4.14} Let $R$ be a tame hereditary finite-dimensional algebra over a field $k$ and let $\textbf{I}_{1}$ be a class of indecomposable preinjective left $R$-modules.
Then \emph{fsc}$(\textbf{I}_{1})=\overline{\textbf{I}_{1}}.$ \end{cor} \begin{proof} By \cite[Theorem 3.2, p.351]{Zay97}, Prod$(\emph{\textbf{I}})$ is definable, where $\emph{\textbf{I}}$ is the class of all indecomposable preinjective left $R$-modules. Since $\emph{\textbf{I}}_{1}\subseteq \emph{\textbf{I}}$ it follows from Remark~\ref{rem:4.13} that Prod$(\emph{\textbf{I}}_{1})$ is definable. By Lemma~\ref{lem:4.12}, fsc$(\textbf{I}_{1})=\overline{\textbf{I}_{1}}.$\end{proof} \begin{rem}\label{rem:4.15} Let $R$ be a finite-dimensional algebra over a field $k$ and let $T\subseteq R$-\emph{ind}. Then $T$ is the class of all indecomposable finite-dimensional left $R$-modules in \emph{Prod}$(T).$ This follows from \cite[Corollary 5.3.33, p.250]{Pre09}. \end{rem} The following fact is known; it can be found stated in \cite[p.47]{Rin00}. We include a proof here. \begin{prop}\label{prop:4.16(Proposition 6.5, p.4)} Let $R$ be a tame hereditary finite-dimensional algebra over a field $k$ and let $\textbf{P}_{1}$ be a class of indecomposable preprojective left $R$-modules. Then \emph{fsc}$(\textbf{P}_{1})=\textbf{P}_{1}.$ \end{prop} \begin{proof} Let $M\in$fsc$(\emph{\textbf{P}}_{1})$. Thus $M$ is a direct summand of $\underset{i\in I}{\prod}P_{i}$ where $P_{i}\in \emph{\textbf{P}}_{1}.$ Choose a non-zero element $a\in M,$ so $a_{j}\neq0$ for some $j\in I,$ where $a_{j}$ is the $j$th component of $a.$ Define $\alpha:M\rightarrow P_{j}$ by $\alpha=\pi_{j}\iota$ where $\iota:M\rightarrow\underset{i\in I}{\prod}P_{i}$ is the inclusion and $\pi_{j}:\underset{i\in I}{\prod}P_{i}\rightarrow P_{j}$ is the projection. Since $\alpha(a)=a_{j}\neq0$ it follows that Hom$_{R}(M,\emph{\textbf{P}})\neq0,$ where $\emph{\textbf{P}}$ is the class of all indecomposable preprojective left $R$-modules.
By \cite[Lemma 1, p.46]{Cra98}, $M$ has a preprojective direct summand, hence $M$ is finite-dimensional, and therefore, by Remark~\ref{rem:4.15}, $M\in \emph{\textbf{P}}_{1}.$ \end{proof} \begin{lem}\label{lem:4.17(Lemma 6.9, p.7)} Let $R$ be a tame hereditary finite-dimensional algebra over a field $k$ and let $\textbf{R}_{1}$ be a class of indecomposable regular left $R$-modules. Then: $\left(1\right)$ The generic module does not belong to \emph{fsc}$(\textbf{R}_{1})$. $\left(2\right)$ There is no Pr\"{u}fer $R$-module in \emph{fsc}$(\textbf{R}_{1})$.\end{lem} \begin{proof} $\left(1\right)$ Assume that the generic module $G\in$fsc$(\emph{\textbf{R}}_{1}),$ thus $G\in$Prod$(\emph{\textbf{R}}_{1}).$ As in the proof of Proposition~\ref{prop:4.16(Proposition 6.5, p.4)} it follows that Hom$_{R}(G,\emph{\textbf{R}}_{1})\neq 0,$ contradicting \cite[p.46]{Rin00}. Therefore $G\notin$fsc$(\emph{\textbf{R}}_{1})$. $\left(2\right)$ Assume that there is a Pr\"{u}fer module $M$ such that $M\in$fsc$(\emph{\textbf{R}}_{1}).$ By \cite[Proposition 3, p.110]{Rin98}, the generic module $G$ is a direct summand of $M^{I}$ for some set $I$, so $G\in$Prod$(\emph{\textbf{R}}_{1})$ and this contradicts $\left(1\right)$ above. Thus there is no Pr\"{u}fer module in fsc$(\emph{\textbf{R}}_{1})$. \end{proof} Let $R$ be a tame hereditary finite-dimensional algebra over a field $k$ and let $S$ be a simple regular left $R$-module $($that is, a module which is simple in the category of regular modules$)$. We use $S[\infty]$ $($resp. $\hat{S})$ to denote the Pr\"{u}fer $($resp. adic$)$ left $R$-module corresponding to $S;$ see \cite[p.106]{Rin98} for the definitions of these modules.
Also, we use $T_{S}$ to denote the class $T_{S}=\{M\mid M$ is an indecomposable regular left $R$-module with Hom$_{R}(M,S)\neq0\}.$ \begin{thm}\label{thm:4.18(Theorem 6.10, p.8)} Let $R$ be a tame hereditary finite-dimensional algebra over a field $k.$ Let $\textbf{R}_{1}$ be a class of indecomposable regular left $R$-modules and let $S$ be a simple regular left $R$-module. Then the following statements are equivalent: $\left(1\right)$ $\hat{S}\in$\emph{fsc}$(\textbf{R}_{1}).$ $\left(2\right)$ $\textbf{R}_{1}\cap T_{S}$ is infinite. $\left(3\right)$ $\hat{S}\in$\emph{fsc}$(\textbf{R}_{1}\cap T_{S}).$ \end{thm} \begin{proof} $\left(1\right)\Rightarrow\left(2\right).$ Suppose that $\hat{S}\in$fsc$(\emph{\textbf{R}}_{1}).$ Assume that $\emph{\textbf{R}}_{1}\cap T_{S}$ is finite. Let $D=\{M\mid$Hom$_{R}(M,S)=0\}.$ By \cite[Examples, p.42]{Cra98}, $D$ is a definable subclass of $R$-Mod and hence $C=D\cap{}_{R}$pinj is a closed set in the Ziegler topology. Since $\emph{\textbf{R}}_{1}\cap T_{S}$ is a finite class of finite-dimensional indecomposable modules it follows from \cite[2.5]{Cra98} that $\emph{\textbf{R}}_{1}\cap T_{S}$ is a closed set in the Ziegler topology, and hence so is $C\cup(\emph{\textbf{R}}_{1}\cap T_{S}).$ Thus $C\cup(\emph{\textbf{R}}_{1}\cap T_{S})$ is a closed set in the full support topology. Since $\emph{\textbf{R}}_{1}\subseteq C\cup(\emph{\textbf{R}}_{1}\cap T_{S})$ it follows that fsc$(\emph{\textbf{R}}_{1})\subseteq$fsc$(C\cup(\emph{\textbf{R}}_{1}\cap T_{S}))=C\cup(\emph{\textbf{R}}_{1}\cap T_{S}).$ Since Hom$_{R}(\hat{S},S)\neq0$ it follows that $\hat{S}\notin C\cup(\emph{\textbf{R}}_{1}\cap T_{S})$ and hence $\hat{S}\notin$fsc$(\emph{\textbf{R}}_{1}),$ and this contradicts the hypothesis. Thus $\emph{\textbf{R}}_{1}\cap T_{S}$ is infinite. $\left(2\right)\Rightarrow\left(3\right).$ Suppose that $\emph{\textbf{R}}_{1}\cap T_{S}$ is infinite, so that $(\emph{\textbf{R}}_{1}\cap T_{S})^{*}$ is an infinite class of regular right $R$-modules.
Let $X\in(\emph{\textbf{R}}_{1}\cap T_{S})^{*},$ thus $X=M^{*}$ for some $M\in \emph{\textbf{R}}_{1}\cap T_{S}.$ Hence Hom$_{R}(M,S)\neq0.$ Thus Hom$_{R}(S^{*},X)\neq0$ for all $X\in(\emph{\textbf{R}}_{1}\cap T_{S})^{*}.$ By \cite[Proposition 1, p.107]{Rin98}, $S^{*}[\infty]$ is the direct limit of a chain of monomorphisms $X_{1}\rightarrow X_{2}\rightarrow X_{3}\rightarrow\cdots$ with $X_{i}\in(\emph{\textbf{R}}_{1}\cap T_{S})^{*}.$ Therefore, by \cite[Proposition 2.1, p.736]{Len83}, there is a pure exact sequence $0\rightarrow N\rightarrow\underset{j\in J}{\bigoplus}Y_{j}\rightarrow S^{*}[\infty]\rightarrow0$ with $Y_{j}\in(\emph{\textbf{R}}_{1}\cap T_{S})^{*}.$ Therefore the exact sequence $0\rightarrow(S^{*}[\infty])^{*}\rightarrow(\underset{j\in J}{\bigoplus}Y_{j})^{*}\rightarrow N^{*}\rightarrow0$ is split. By \cite[Examples, p.44]{Cra98}, $\hat{S}=(S^{*}[\infty])^{*}$ and hence $\hat{S}$ is a direct summand of $(\underset{j\in J}{\bigoplus}Y_{j})^{*}\cong\underset{j\in J}{\prod}Y_{j}^{*}.$ Thus $\hat{S}\in$Prod$(\emph{\textbf{R}}_{1}\cap T_{S})$ and this implies that $\hat{S}\in$fsc$(\emph{\textbf{R}}_{1}\cap T_{S}).$ $\left(3\right)\Rightarrow\left(1\right)$. This is obvious. \end{proof} \begin{cor}\label{cor:4.19(Corollary (6.11), p.10)} Let $R$ be a tame hereditary finite-dimensional algebra over a field $k$ and let $\textbf{R}_{1}$ be a class of indecomposable regular left $R$-modules. Then \emph{fsc}$(\textbf{R}_{1})=\textbf{R}_{1}\cup\{\hat{S}\mid \textbf{R}_{1}\cap T_{S}$ is infinite$\}.$ \end{cor} \begin{proof} This follows by Theorem~\ref{thm:4.18(Theorem 6.10, p.8)}, Remark~\ref{rem:4.15} and Lemma~\ref{lem:4.17(Lemma 6.9, p.7)}. \end{proof} In the following corollary we give a complete description of the closure of any subclass of $R$-ind in the full support topology and hence, by Theorem~\ref{thm:4.5}, a description of the indecomposable $S$-pure-injective modules for any purity defined by a class $S$ of finitely presented modules.
\begin{cor}\label{cor:4.20(Corollary (6.12), p.10)} Let $R$ be a tame hereditary finite-dimensional algebra over a field $k.$ Let $\textbf{I}_{1}$ $($resp. $\textbf{P}_{1},$ resp. $\textbf{R}_{1})$ be a class of indecomposable preinjective $($resp. preprojective, resp. regular$)$ left R-modules. Then \emph{fsc}$(\textbf{I}_{1}\cup \textbf{P}_{1}\cup \textbf{R}_{1})=\overline{\textbf{I}_{1}}\cup \textbf{P}_{1}\cup \textbf{R}_{1}\cup\{\hat{S}\mid \textbf{R}_{1}\cap T_{S}$ is infinite$\}.$\end{cor} \begin{proof} This follows from Corollary~\ref{cor:4.14}, Proposition~\ref{prop:4.16(Proposition 6.5, p.4)} and Corollary~\ref{cor:4.19(Corollary (6.11), p.10)}.\end{proof} \section{Rings whose indecomposable modules are $S$-pure-projective.} \subparagraph*{\textmd{Let $T$ be a set. A family $F$ of subsets of $T$ is said to be directed if for any $U,V\in F,$ there exists $W\in F$ such that $U\subseteq W$ and $V\subseteq W.$ }} \subparagraph*{\textmd{By using Theorem~\ref{thm:(1.3),2report,p3}, we can prove the following lemma.}} \begin{lem}\label{lem:5.1(Lemma 4.16, p.10)} Let S be a class of finitely presented left R-modules and let $\{N_{i}\}_{i\in I}$ be any directed family of $S$-pure submodules of a left R-module M. Then $N=\underset{i\in I}{\bigcup}N_{i}$ is an $S$-pure submodule of M.\end{lem} Let $N$ be a submodule of a left $R$-module $M$ and let $T$ be a set of submodules of $M.$ We will use $N(T)$ to denote the submodule $N(T)=N+\underset{A\in T}{\sum}A.$ The next lemma follows using Lemma~\ref{lem:5.1(Lemma 4.16, p.10)}. \begin{lem}\label{lem:5.2(Lemma 4.18, p.12)} Let S be a class of finitely presented left R-modules, let $N$ be a submodule of a left $R$-module $M$ and let $T$ be a set of submodules of $M.$ If $N(F)$ is an S-pure submodule of M for all finite subsets F of T, then $N(T)$ is an S-pure submodule of M. 
\end{lem} \begin{defn}\label{defn:5.3(Definition 4.19, p.13)} Let $S$ be a class of finitely presented left $R$-modules, let $N$ be a submodule of a left $R$-module $M$ and let $T_{0}$ be the set of all indecomposable submodules of $M.$ A subclass $T\subseteq T_{0}$ is said to be $S$-$N$-independent $($in $M)$ if $N(T)=N\oplus(\underset{B\in T}{\sum}\oplus B)$ and $N(T)$ is an $S$-pure submodule of $M.$ This will be the case if and only if every finite subset of $T$ is $S$-$N$-independent in $M$. \end{defn} \begin{thm}\label{thm:5.5(Theorem 4.22, p.16)} Let $S$ be any set of finitely presented left $R$-modules and let $M$ be a left $R$-module. Suppose that every $S$-pure submodule $M_{0}$ of $M$ for which $M/M_{0}$ is indecomposable is a direct summand of $M$. Then every $S$-pure submodule of $M$ is a direct summand of $M$ and $M$ is a direct sum of indecomposable submodules. \end{thm} \begin{proof}$($The following proof is based on an argument in \cite[Proposition 1.13, p.53]{Azu92}$.)$ Let $N$ be any $S$-pure submodule of $M.$ If $N=M$ then $N$ is a direct summand of $M.$ Assume that $N\neq M,$ so there is $x\in M\setminus N.$ Let $F=\{K\mid N\subseteq K,$ $x\notin K$ and $K$ is an $S$-pure submodule of $M\}.$ Since $N\in F$ it follows that $F$ is a non-empty family. Let $\{M_{i}\}_{i\in I}$ be any directed subfamily of $F$ and let $A=\underset{i\in I}{\bigcup}M_{i}.$ It is clear that $N\subseteq A$ and $x\notin A.$ By Lemma~\ref{lem:5.1(Lemma 4.16, p.10)}, $A$ is an $S$-pure submodule of $M$ and hence $A\in F.$ By Zorn's lemma, $F$ has a maximal element, say $M_{0},$ so $M_{0}$ is an $S$-pure submodule of $M$ with $N\subseteq M_{0}$ and $x\notin M_{0}.$ We will prove that $M/M_{0}$ is indecomposable.
Assume that $M/M_{0}$ is not indecomposable, thus there are two non-zero submodules $M_{1}/M_{0},$ $M_{2}/M_{0}$ of $M/M_{0}$ such that $M/M_{0}=(M_{1}/M_{0})\oplus(M_{2}/M_{0}).$ Therefore $M_{0}\underset{\neq}{\subset}M_{1}$, $M_{0}\underset{\neq}{\subset}M_{2}$ and $M_{1}\cap M_{2}=M_{0}.$ Since $M_{1}/M_{0}$ and $M_{2}/M_{0}$ are direct summands of $M/M_{0}$ they are $S$-pure submodules of $M/M_{0}.$ Since $M_{0}$ is an $S$-pure submodule of $M$ it follows from \cite[33.3(4), p.276]{Wis91} that $M_{1}$ and $M_{2}$ are $S$-pure submodules of $M.$ Thus, by maximality of $M_{0},$ we have that $x\in M_{1}\cap M_{2}$ and this is a contradiction. Hence $M/M_{0}$ is a non-zero indecomposable left $R$-module. By assumption, $M_{0}$ is a direct summand of $M,$ say $M=N_{0}\oplus M_{0}.$ Thus $N_{0}\simeq M/M_{0}$ is a non-zero indecomposable submodule of $M$ with $N+N_{0}=N\oplus N_{0}.$ Since $N$ is an $S$-pure submodule of $M$ and $N\subseteq M_{0}\subseteq M$ it follows that $N$ is an $S$-pure submodule of $M_{0}$ and hence $N\oplus N_{0}$ is an $S$-pure submodule of $N_{0}\oplus M_{0}=M.$ Thus, for any proper $S$-pure submodule $N$ of $M,$ there exists a non-zero indecomposable submodule $N_{0}$ of $M$ such that $N\cap N_{0}=0$ and $N\oplus N_{0}$ is an $S$-pure submodule of $M.$ Let $T$ be the family of all $S$-$N$-independent subsets in $M.$ Since $\{0\}\in T$ it follows that $T$ is non-empty. Let $D$ be any directed subfamily of $T$ and let $U$ be the union of all members of $D.$ Then $U\in T$ since every finite subset of $U$ is $S$-$N$-independent. By Zorn's lemma, $T$ has a maximal element, say $W.$ Now we will prove that $N(W)=M.$ Assume that $N(W)\neq M,$ thus $N(W)$ is a proper $S$-pure submodule of $M.$ Hence there exists a non-zero indecomposable submodule $B$ of $M$ such that $N(W)\cap B=0$ and $N(W)+B=N(W)\oplus B=N\oplus(\underset{A\in W}{\sum}\oplus A)\oplus B$ is an $S$-pure submodule of $M,$ as seen above.
Hence $W\cup\{B\}$ properly contains $W$ and is $S$-$N$-independent in $M.$ This contradicts the maximality of $W$ in $T.$ Therefore, $N(W)=M.$ Since $N(W)=N\oplus(\underset{A\in W}{\sum}\oplus A)$ it follows that $N$ is a direct summand of $M$ and $M/N\simeq\underset{A\in W}{\sum}\oplus A$ is a direct sum of indecomposable submodules. If we take $N=0$ then we see that $M$ is a direct sum of indecomposable submodules.\end{proof} \begin{cor}\label{cor:5.6(Theorem 4.24, proposition 4.25, Corollary 4.25(a), p.21(a))} Let S be any set of finitely presented left R-modules. Then the following statements are equivalent: $\left(1\right)$ Every indecomposable left R-module is S-pure-projective. $\left(2\right)$ For any left R-module M, every S-pure submodule of M is a direct summand of M. $\left(3\right)$ Every left R-module is S-pure-projective. $\left(4\right)$ Every left R-module is S-pure-injective. $\left(5\right)$ Every left R-module is a direct sum of modules in \textup{ind}$(S\cup\{_{R}R\})$. \end{cor} \begin{proof} $\left(1\right)\Rightarrow\left(2\right).$ Let $M$ be any left $R$-module and let $N$ be any $S$-pure submodule of $M$ such that $M/N$ is indecomposable. By hypothesis, $M/N$ is $S$-pure-projective hence the $S$-pure exact sequence $0\rightarrow N\overset{i}{\rightarrow}M\overset{\pi}{\rightarrow}M/N\rightarrow0$ splits and hence $N$ is a direct summand of $M.$ By Theorem~\ref{thm:5.5(Theorem 4.22, p.16)}, every $S$-pure submodule of $M$ is a direct summand of $M$. $\left(2\right)\Rightarrow\left(3\right).$ Let $M$ be any left $R$-module and let $\Sigma:0\rightarrow L\overset{f}{\rightarrow}N\overset{g}{\rightarrow}M\rightarrow0$ be any $S$-pure exact sequence of left $R$-modules. By hypothesis, im$(f)$ is a direct summand of $N$ and hence $\Sigma$ is split so $M$ is $S$-pure-projective. $\left(3\right)\Rightarrow\left(5\right).$ Assume that every left $R$-module is $S$-pure-projective, thus every left $R$-module is pure-projective. 
By \cite[Proposition 4.4, p.73]{Azu92}, $R$ is a left Artinian ring and hence $R$ is Krull-Schmidt, by \cite[p.164]{Pre09}. Let $M$ be any left $R$-module. By hypothesis and Proposition~\ref{prop:4.1(4.12)(a),p.7(a)}, $M$ is isomorphic to a direct sum of modules in ind$(S\cup\{_{R}R\}).$ Thus every left $R$-module is a direct sum of modules in ind$(S\cup\{_{R}R\})$. $\left(5\right)\Rightarrow\left(1\right).$ Assume that every left $R$-module is a direct sum of modules in ind$(S\cup\{_{R}R\})$. Let $M$ be an indecomposable left $R$-module, thus $M$ is a direct sum of modules in ind$(S\cup\{_{R}R\})$. Since each module in ind$(S\cup\{_{R}R\})$ is $S$-pure-projective and the class of $S$-pure-projective left $R$-modules is closed under direct sums (by \cite[p.278]{Wis91}) it follows that $M$ is $S$-pure-projective. Hence every indecomposable left $R$-module is $S$-pure-projective. $\left(2\right)\Leftrightarrow\left(4\right).$ This follows from \cite[33.7, p.279]{Wis91}. \end{proof} \subparagraph*{Acknowledgements:} The results of this paper will be part of my doctoral thesis at the University of Manchester. I wish to express my sincere thanks to my supervisor, Professor Mike Prest, for his valuable help and suggestions and for his help in the preparation of this paper. I would also like to express my thanks and gratitude to the Iraqi government for sponsoring me and giving me the opportunity to study for a PhD at the University of Manchester.
\section{Introduction} The transition from hadronic matter to the deconfined state of free quarks and gluons is a prime subject of current research in elementary particle physics. From the theoretical point of view two of the most important questions concern the transition temperature and the order of the transition at zero and finite chemical potential. In this context the role of the related chiral symmetry restoration is of special interest. A perturbative treatment of the plasma at the transition temperature is not valid; thus lattice QCD is the preferred tool to study the transition. Most of the state-of-the-art simulations so far have been performed using staggered fermions, which have the advantage of being numerically cheap compared to other fermion discretisations. Recent results with 2+1 flavours of dynamical quarks can be found in refs.~\cite{hotQCD,BW-group}. However, there are conceptual problems concerning the staggered approach to lattice QCD (see e.g. the discussions in \cite{stagg}) and a cross-check of the staggered results is needed using other fermionic discretisations. Several groups have already started to perform simulations with $\mathcal{O}(a)$ improved Wilson fermions of the Sheikholeslami-Wohlert type \cite{QCDSF,WhotQCD} as well as with maximally twisted mass fermions \cite{tmft}. All these simulations still suffer from unphysically large pion masses and lack continuum extrapolations. In this proceedings article we give an update on the results concerning our study of the $N_f=2$ phase transition, using non-perturbatively $\mathcal{O}(a)$-improved Wilson fermions, lattices with $N_t\geq12$ and pion masses lower than 600 MeV. We aim to extract the transition temperature and the order of the transition in the chiral limit, a question which remains unsettled to this day.
There are two scenarios \cite{scen1,scen2}: In the first scenario, the chiral critical line in the $\{m_{u,d},m_{s},T\}$-parameter space never reaches the $m_{u,d}=0$ axis, while the second one implies the existence of a tricritical point at $m_{u,d}=0$, which extends in the direction of finite chemical potential as a critical line. With our $N_f=2$ simulation, we therefore address a question which is important for the enlarged phase diagram of the $N_f=2+1$ theory as well as for the phase diagram at finite density. \section{Setup of the simulations} \begin{table}[t] \centering \begin{tabular}{c|cccccccc} \hline scan & Lattice & Block size & $\kappa$ & $\beta$-range & $\tau$ & $\tau_{int}[P]$ & $f_{meas}$ & Statistics \\ \hline $A$ & $12\times24^3$ & $6^4$ & 0.13595 & $5.270-5.320$ & 2.0 & $\mathcal{O}(30)$ & 1 & $\mathcal{O}(25000)$ \\ $B$ & $16\times32^3$ & $8^4$ & 0.13650 & $5.400-5.575$ & 2.0 & $\mathcal{O}(10)$ & 2 & $\mathcal{O}(5000)$ \\ \hline \end{tabular} \caption{Run parameters for scans $A$ and $B$. We show the DDHMC block size, the Monte Carlo time $\tau$ of the trajectories, the measurement frequency $f_{meas}$ and the integrated autocorrelation time $\tau_{int}$ of the plaquette $P$.} \label{tab1} \end{table} We employ two degenerate flavours of nonperturbatively $\mathcal{O}(a)$ improved Wilson fermions, using the Sheikholeslami-Wohlert lattice Dirac operator \cite{SW-action} \begin{equation} \label{eq-sw-op} D_{SW} = D_W + c_{SW} \: \frac{i\:a\:\kappa}{4} \: \sigma_{\mu\nu} \: \hat{F}^{\mu\nu} \; . \end{equation} Here $D_W$ is the usual Wilson Dirac operator, $\kappa$ is the hopping parameter, $\sigma_{\mu\nu}=\frac{i}{2}[\gamma_{\mu},\gamma_{\nu}]$ the antisymmetrised product of Dirac matrices and $\hat{F}^{\mu\nu}$ the 'clover leaf' representation of the gluonic field strength tensor on the lattice. The clover coefficient $c_{SW}$ is tuned with $\beta$, using the interpolation formula from \cite{npcsw}.
The gauge configurations are generated using the deflation-accelerated DDHMC algorithm introduced by L\"uscher \cite{DDHMC}. This algorithm is also used intensively in the context of the CLS effort \cite{CLS_wiki} for simulations at zero temperature \cite{CLS,CLS_POS}. To scan the temperature, we vary the lattice spacing $a$ via the bare lattice coupling $\beta$, which is connected to the temperature by $T=1/[N_t\:a(\beta)]$. This method enables us to get a fine resolution around the critical temperature, in contrast to the fixed scale approach, and to use the modified Multi-Histogram method as introduced in \cite{ownpos}. The scale is set after the determination of the critical coupling $\beta_c$ by an additional run at $T=0$. \begin{figure}[t] \centering \includegraphics[angle=-90, width=.38\textwidth]{polb_k0_13595_su.pdf} \caption{Plot of the results for the Polyakov loop susceptibility of scan $A$.} \label{fig0} \end{figure} To investigate the properties of the finite temperature transition we look at the behaviour of the average plaquette $\ev{P}$, the real part of the Polyakov loop $\textnormal{Re}\left[\ev{L}\right]$ and the chiral condensate $\ev{\bar{\psi}\psi}$. We also find it beneficial to compute the Polyakov loop in an APE-smeared version since the smearing leads to a more pronounced signal for the phase transition, as already observed in \cite{hw-89}. We define the generalised susceptibilities $\chi(O)$ by \begin{equation} \label{susz} \chi(O) \equiv N_s^3 \: \left( \ev{O^2} - \ev{O}^2 \right) , \end{equation} where $O$ is any of the observables above and $N_s$ the spatial lattice size. These generalised susceptibilities should show a notable peak at the transition point. In addition, the behaviour of the peak under a change in the spatial volume is governed by the corresponding critical exponents, encoding information about the order of the transition.
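As a simple illustration (not part of the original analysis; the numbers below are invented, not lattice output), the generalised susceptibility $\chi(O) = N_s^3(\langle O^2\rangle - \langle O\rangle^2)$ can be estimated from a Monte Carlo time series of measurements of $O$ as follows:

```python
import numpy as np

def susceptibility(samples, N_s):
    """Generalised susceptibility chi(O) = N_s^3 (<O^2> - <O>^2),
    estimated from a series of Monte Carlo measurements of O."""
    o = np.asarray(samples, dtype=float)
    return N_s**3 * (np.mean(o**2) - np.mean(o)**2)

# Synthetic illustration: fluctuations of the observable broaden near the
# transition, so the susceptibility peaks there.
rng = np.random.default_rng(1)
away = susceptibility(rng.normal(0.55, 0.001, 20000), N_s=32)
near = susceptibility(rng.normal(0.55, 0.010, 20000), N_s=32)
print(near > away)  # True: broader fluctuations give a larger susceptibility
```

In practice one would also account for autocorrelations (cf. $\tau_{int}$ in table \ref{tab1}) when attaching errors to such an estimate.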
\section{Simulation results} \label{results} \begin{figure}[t] \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[angle=-90, width=.85\textwidth]{pol_k0_13650_ew.pdf} \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[angle=-90, width=.85\textwidth]{pol_k0_13650_su.pdf} \end{minipage} \caption{Plot of the results for scan $B$. {\bf Left:} Real part of the Polyakov loop in the unsmeared and APE-smeared version. The smeared results are rescaled such that we could show them in a single plot. {\bf Right:} The susceptibility of the APE-smeared Polyakov loop, together with a gaussian fit to the points around the peak position.} \label{fig1} \end{figure} \begin{figure}[h] \centering \includegraphics[angle=-90, width=.38\textwidth]{plaq_k0_13650_su.pdf} \caption{The susceptibility of the plaquette from scan $B$.} \label{fig2} \end{figure} So far our simulations were performed on two different lattices, with simulation parameters as listed in table \ref{tab1}. Scan $A$ was designed as a test run for the algorithm and the measurement routines at $T\neq0$. We therefore set the hopping parameter $\kappa$ to the value at which the phase transition in the real part of the Polyakov loop was observed on the $\beta=5.29$ lattice of \cite{QCDSF}. We show the signal of the Polyakov loop susceptibility in figure \ref{fig0}. The data is consistent with a phase transition at $\beta_c=5.301(3)$, where we see a strong increase in the signal and a peak in the susceptibility. The resulting transition temperature is slightly higher than the one obtained in \cite{QCDSF}, but broadly consistent once one takes into account that at a pion mass of roughly 600 MeV the transition is not a sharp phase transition but rather a broad crossover. For more details see \cite{ownpos}. Scan $B$ is our first scan at a somewhat lighter pion mass and $N_t=16$.
We show the behaviour of $\textnormal{Re}\left[\ev{L}\right]$ together with the smeared version $\textnormal{Re}\left[\ev{L}_{sm}\right]$, and the susceptibility of the latter, in figure \ref{fig1}. Compared to \cite{ownpos} we enhanced the resolution around the transition point and increased statistics. In the right panel of figure \ref{fig1} we also show a Gaussian fit to the 5 points around the transition point, from which we obtain the peak position $\beta_c=5.499(2)$. The value of $\chi^2/\textnormal{d.o.f.}$ for the fit is around 1. Fortunately there already exists a $T=0$ run with parameters $\beta=5.50$ and $\kappa=0.13650$ \cite{CLS_POS}, leading to a transition temperature in physical units of roughly $T_c(m_{\pi}=510\:\textnormal{MeV},\,a=0.053\:\textnormal{fm})\approx233$ MeV. The determination of the scale is still in progress and thus the estimate of $T_c$ in physical units must be considered preliminary. Indeed the new scale determination in \cite{CLS_POS} changed the temperature by around 10\% compared to \cite{ownpos}. Therefore one should keep in mind that the systematic error might still be large. The peak in the susceptibility of the Polyakov loop is reproduced by the other observables as well, and we show the behaviour of the susceptibility of the plaquette in figure \ref{fig2}. It is important to note that the susceptibility of the plaquette shows a general decrease due to the behaviour of the corresponding expectation value. \section{Conclusions and outlook} In this proceedings article, we give an update of our effort to obtain the QCD deconfinement transition temperature for two dynamical flavours in the chiral limit. Compared to \cite{ownpos}, we have refined the resolution of scan $B$ around the transition point and enlarged statistics.
The new scale determination in \cite{CLS_POS} changed the transition temperature for scan $B$ to 233 MeV, which is of the order of the transition temperatures from the twisted mass simulations \cite{tmft} at comparable physical pion mass. The peak in the susceptibilities is reproduced by all observables, as shown for the example of the plaquette in figure \ref{fig2}. We are currently extending scan $B$ to obtain a finer resolution around the peak and to employ the Multi-Histogram method discussed in \cite{ownpos}. In addition, we are enlarging the set of scans at $N_t=16$ towards lighter pion masses and larger volumes. \section*{Acknowledgments} The simulations were done on the WILSON cluster at the University of Mainz (see \cite{CLS}) and on JUGENE at FZ Juelich under NIC Grant Nr. 3330. We are indebted to the institutes for these facilities. We would also like to thank H.B. Meyer for many fruitful discussions. B.B. is funded by the DFG via SFB 443. L.Z. is supported by DFG PH 158/3-1.
\section{Introduction} Weil heights have an important role in Diophantine geometry, and particular Weil heights with nice properties, called canonical heights, are sometimes very useful. The theory of canonical heights has had deep applications throughout the field of Arithmetic geometry. Over abelian varieties $A$ defined over a number field $K$, N\'{e}ron and Tate constructed canonical height functions $\hat{h}_L: A(\bar{K}) \rightarrow \mathbb{R}$ with respect to symmetric ample line bundles $L$ which enjoy nice properties, and can be used to prove the Mordell-Weil theorem for the rational points of the variety. More generally, in [4], Call and Silverman constructed canonical height functions on projective varieties $X$ defined over a number field which admit a morphism $f:X \rightarrow X$ with $f^*(L) \cong L^{\otimes d}$ for some line bundle $L$ and some $d >1$. In another direction, Silverman [19] constructed canonical height functions on certain $K3$ surfaces $S$ with two involutions $\sigma_1, \sigma_2$ (called Wehler's $K3$ surfaces) and developed an arithmetic theory analogous to the arithmetic theory on abelian varieties. It was an idea of Kawaguchi [10] to consider polarized dynamical systems of several maps: given a projective variety $X/K$, morphisms $f_1,...,f_k:X \rightarrow X$ defined over $K$, an invertible sheaf $\mathcal{L}$ on $X$ and a real number $d>k$ so that $f_1^*\mathcal{L}\otimes ... \otimes f_k^*\mathcal{L} \cong \mathcal{L}^{\otimes d}$, he constructed a canonical height function associated to the polarized dynamical system $(X, f_1,..., f_k, \mathcal{L})$ that generalizes the earlier constructions mentioned above. In the case of Wehler's $K3$ surfaces above, for example, the canonical height defined by Silverman arises from the system formed by $(\sigma_1, \sigma_2)$ by Kawaguchi's method.
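For orientation, Kawaguchi's canonical height attached to such a system can be sketched (see [10] for the precise statement and hypotheses) as a limit of averaged Weil heights over all compositions of length $n$,
\begin{equation*}
\hat{h}_{\mathcal{L}}(P)=\lim_{n\rightarrow\infty}\frac{1}{d^{n}}\sum_{i_1,\ldots,i_n=1}^{k} h_{\mathcal{L}}\big((f_{i_1}\circ\cdots\circ f_{i_n})(P)\big),
\end{equation*}
which satisfies $\hat{h}_{\mathcal{L}}=h_{\mathcal{L}}+O(1)$ and the functional equation $\sum_{i=1}^{k}\hat{h}_{\mathcal{L}}(f_i(P))=d\,\hat{h}_{\mathcal{L}}(P)$.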
Given a smooth projective variety $X/\mathbb{C}$ and a dominant rational map $f:X \dashrightarrow X$ inducing $f^*:$NS$(X)_{\mathbb{R}} \rightarrow$NS$(X)_{\mathbb{R}}$ on the N\'{e}ron-Severi group, the dynamical degree is defined as $\delta_f := \lim_{n \rightarrow \infty} \rho((f^n)^*)^{\frac{1}{n}}$, where $\rho$ denotes the spectral radius of a given linear map, that is, the largest among the absolute values of its eigenvalues. This limit converges and is a birational invariant that has been much studied over the last decades; see [12] for a list of references. In [12], Kawaguchi and Silverman studied an analogous arithmetic degree for $X$ and $f$ defined over $\bar{\mathbb{Q}}$, at points with well-defined forward orbit over $\bar{\mathbb{Q}}$. Namely, $\alpha_f(P):= \lim_{n \rightarrow \infty} h^+_X(f^n(P))^{\frac{1}{n}}$, where $h_X$ is a Weil height relative to an ample divisor and $h^+_X= \max \{1, h_X\}$. This degree measures the arithmetic complexity of the orbit of $P$ under $f$, and $\log \alpha_f(P)$ has been interpreted as a measure of the arithmetic entropy of the orbit $\mathcal{O}_f(P)$. It is shown in [12] that the arithmetic degree determines the height counting function for points in orbits, and that the arithmetic complexity of the $f$-orbit of an algebraic point never exceeds the geometrical-dynamical complexity of the map $f$, along with further arithmetic consequences. We ask whether this kind of research can be carried out in the setting of general dynamical systems as treated by Kawaguchi, with several maps, as in the case of Wehler's $K3$ surfaces. This is the first subject of this work. Given a projective variety $X/K$, rational maps $f_1,...,f_k:X \dashrightarrow X$, and $\mathcal{F}_n=\{f_{i_1} \circ ... 
\circ f_{i_n} ; i_j =1,...,k \}$, we define a more general dynamical degree of a system of maps as $\delta_{\mathcal{F}}=\lim \sup_{n\rightarrow \infty}\max_{f \in \mathcal{F}_n} \rho (f^*)^{\frac{1}{n}}$, and extend the definition of the arithmetic degree to $\alpha_{\mathcal{F}}(P)= \frac{1}{k} \lim_{n \rightarrow \infty}\{ \sum_{f \in \mathcal{F}_n} h_X^+(f(P))\}^{\frac{1}{n}}$, obtaining the convergence of $\delta_{\mathcal{F}}$, and that $\alpha_{\mathcal{F}}(P) \leq \delta_{\mathcal{F}}$ when $\alpha_{\mathcal{F}}(P)$ exists. Motivated by [12], we give an elementary proof that our new arithmetic degree is related to height counting functions in orbits, when $\alpha_{\mathcal{F}}(P)$ exists, by: \begin{center} $\lim_{B \rightarrow \infty} \dfrac{ \# \{ n \geq 0 ; \sum_{f \in \mathcal{F}_n} h_X(f(P)) \leq B\}}{ \log B}= \dfrac{1}{ \log (k. \alpha_{\mathcal{F}}(P))} $, \end{center} \begin{center} $\lim \inf _{B \rightarrow \infty} (\# \{Q \in \mathcal{O}_{\mathcal{F}}(P); h_X(Q) \leq B \})^{\frac{1}{\log B}} \geq k^{ \frac{1}{ \log (k. \alpha_{\mathcal{F}}(P))}}$. \end{center} We are able to extend theorem 1 of [12], showing explicitly how the dynamical degree of a system with several maps offers a uniform upper bound for heights of iterates of points in orbits, when $K$ is a number field or a one-variable function field. Precisely, for every $\epsilon >0$, there exists a positive constant $C=C(X, h_X, \mathcal{F}, \epsilon)$ such that for all $P \in X_{\mathcal{F}}(\bar{K})$ and all $n \geq 0$, \begin{center} $\sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq C. k^n.(\delta_{\mathcal{F}} + \epsilon)^n . h^+_X(P).$ \end{center} In particular, $h^+_X(f(P)) \leq C. k^n.(\delta_{\mathcal{F}} + \epsilon)^n . h^+_X(P)$ for all $f \in \mathcal{F}_n.$ This theorem becomes a tool for proving the second main theorem of this work. 
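To illustrate the shape of this bound, here is a small numerical sanity check in a toy setting entirely of our own choosing (it is not one of the systems studied in this paper): the maps $f_1(x)=x^2$ and $f_2(x)=x^3$ on $\mathbb{P}^1$ over $\mathbb{Q}$ with $P=2$, for which every composition in $\mathcal{F}_n$ multiplies $h(P)$ by the product of the degrees of its factors, so that $k=2$ and $\delta_{\mathcal{F}}=3$.

```python
import math
from itertools import product

# Toy check (our own example) of the shape of the bound
#   sum_{f in F_n} h^+(f(P)) <= C * k^n * (delta_F + eps)^n * h^+(P)
# for f1(x) = x^2, f2(x) = x^3 on P^1 over Q and P = 2.
# Here h(2^m) = m*log(2) and deg(f_{i_1} o ... o f_{i_n}) = prod_j deg(f_{i_j}),
# so sum over F_n of h(f(P)) = (2 + 3)^n * h(P), while k = 2 and delta_F = 3.
degs = [2, 3]                # degrees of f1, f2
hP = math.log(2)             # h(P) for P = 2
k, delta = len(degs), max(degs)

def height_sum(n):
    """Sum of h(f(P)) over all k^n compositions f in F_n."""
    return sum(math.prod(w) for w in product(degs, repeat=n)) * hP

for n in range(1, 8):
    assert height_sum(n) <= (k * delta) ** n * hP   # C = 1, eps = 0 already suffice
```

Of course this only illustrates the shape of the estimate in a case where everything is computable by hand; the content of the theorem is that such a constant $C$ exists for arbitrary dominant rational self-maps.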
As we have seen, for a pair $(X/K, f_1,...,f_k, L)$ with $k$ self-morphisms on $X$ over $K$, and $L$ a divisor satisfying a linear equivalence $\otimes^k_{i=1}f^*_i(L) \sim L^{\otimes d}$ for $d>k$, there is a well known theory of canonical heights developed by Kawaguchi in [10]. We are now partially able to generalize this to cover the case where $\otimes^k_{i=1}f^*_i(L) \equiv L^{\otimes d}$ is only an algebraic equivalence. In this setting, the limit \begin{center} $\hat{h}_{L,\mathcal{F}}(P)= \lim_{n \rightarrow \infty}\dfrac{1}{d^n}\sum_{ f \in \mathcal{F}_n} h_L(f(P))$ \end{center} converges for certain eigendivisor classes relative to the algebraic relation. For $L$ ample and $K$ a number field, we obtain: \begin{center} $\hat{h}_{L,\mathcal{F}}(P)=0 \iff P$ has finite $\mathcal{F}$-orbit. \end{center} This kind of generalization was first done for a single morphism by Y. Matsuzawa in [15], extending Call and Silverman's theory of canonical heights in [4], and we work it out for several maps in the present work. \section{Notation, and first definitions} Throughout this work, $K$ will be either a number field or a one-dimensional function field of characteristic $0$. We let $\bar{K}$ be an algebraic closure of $K$. The tuple $ (X, f_1,...,f_k)$ is called a dynamical system, where either $X$ is a smooth projective variety and $f_i:X \dashrightarrow X$ are dominant rational maps all defined over $K,$ or $X$ is a normal projective variety and $f_i:X \rightarrow X$ are dominant morphisms. We denote by $ h_X:X(\bar{K})\rightarrow [0, \infty)$ the absolute logarithmic Weil height function relative to an ample divisor $A$ of $X$, and for convenience we set $h_X^+(P)$ to be $ \max \{1, h_X(P)\}.$ The sets of iterates of the maps in the system are denoted by $\mathcal{F}_0=\{ $Id$\}, \mathcal{F}_1= \mathcal{F} =\{f_1,...,f_k\}$, and $\mathcal{F}_n=\{f_{i_1} \circ ... 
\circ f_{i_n} ; i_j =1,...,k \}$, giving rise to the forward $\mathcal{F}$-orbit of $P$, namely $\mathcal{O}_{\mathcal{F}}(P)=\{ f(P); f \in \bigcup_{n \in \mathbb{N}} \mathcal{F}_n \}.$ A point $P$ is said to be preperiodic when its $\mathcal{F}$-orbit is a finite set. We write $ I_{f_i}$ for the indeterminacy locus of $f_i$, i.e., the set of points at which $f_i$ is not well-defined, and $ I_{\mathcal{F}}$ for $ \bigcup_{i=1}^k I_{f_i}$. We also define $ X_{\mathcal{F}}(\bar{K})$ as the set of points $P \in X(\bar{K})$ whose forward orbit is well-defined, in other words, $\mathcal{O}_{\mathcal{F}}(P) \cap I_{\mathcal{F}} = \emptyset$. The set of Cartier divisors on $X$ is denoted by Div$(X)$, while Pic$(X)$ denotes the Picard group of $X$, and NS$(X)=\mbox{Pic}(X)/\mbox{Pic}^{0}(X)$ is called the N\'{e}ron-Severi group of $X$. Equality in this group is denoted by the symbol $\equiv$, which is called algebraic equivalence. Given a rational map $f:X \dashrightarrow X$, the linear map induced on the tensorized N\'{e}ron-Severi group NS$(X)_{\mathbb{R}}=$ NS$(X) \otimes \mathbb{R}$ is denoted by $f^*$. So, when working with a dynamical system $(X,\mathcal{F})$, it is convenient for us to use the notation $\rho(\mathcal{F}_n):= \max_{f \in \mathcal{F}_n} \rho (f^*,$NS$(X)_{\mathbb{R}})$. For definitions and properties of Weil height functions, we refer to [8]. Next, we define the dynamical degree of a set of rational maps on a complex variety, which is a measure of the geometric complexity of the iterates of the maps in the set, when it exists. This is a generalization, for several morphisms, of the dynamical degree appearing in [12]. \newline {\bf Definition 2.1: } \textit{Let $X/ \mathbb{C}$ be a (smooth) projective variety and let $\mathcal{F}$ be as above. 
The dynamical degree of $\mathcal{F}$, when it exists, is defined by} \begin{center} $\delta_{\mathcal{F}}=\lim \sup_{n\rightarrow \infty}\rho(\mathcal{F}_n)^{\frac{1}{n}}.$ \end{center} In the same spirit, we also generalize the second definition in the introduction of [12], introducing now the arithmetic degree of a system of maps $\mathcal{F}$ at a point $P$. This degree measures the growth rate of the heights of the $n$-th iterates of the point by maps of the system as $n$ grows, and so it is a measure of the arithmetic complexity of $\mathcal{O}_{\mathcal{F}}(P)$.\newline {\bf Definition 2.2: } \textit{Let $P \in X_{\mathcal{F}}(\bar{K}).$ The arithmetic degree of $\mathcal{F}$ at $P$ is the quantity} \begin{center} $\alpha_{\mathcal{F}}(P)= \frac{1}{k} \lim_{n \rightarrow \infty}\{ \sum_{f \in \mathcal{F}_n} h_X^+(f(P))\}^{\frac{1}{n}} $ \end{center} \textit{assuming that the limit exists.}\newline {\bf Definition 2.3: } \textit{In the absence of convergence, we define the upper and the lower arithmetic degrees as} \begin{center} $\bar{\alpha}_{\mathcal{F}}(P)= \frac{1}{k} \lim \sup_{n \rightarrow \infty} \{ \sum_{f \in \mathcal{F}_n} h_X^+(f(P))\}^{\frac{1}{n}} $ $\underline{\alpha}_{\mathcal{F}}(P)= \frac{1}{k} \lim \inf_{n \rightarrow \infty} \{ \sum_{f \in \mathcal{F}_n} h_X^+(f(P))\}^{\frac{1}{n}} $ \end{center} {\bf Remark 2.4: } Let $X$ be a projective variety and $D$ a Cartier divisor. If \newline $f:X \rightarrow X $ is a surjective morphism, then $f^*D$ is a Cartier divisor. In the case where $X$ is smooth and $f:X \dashrightarrow X$ is merely a rational map, we take a smooth projective variety $\tilde{X}$ and a birational morphism $\pi: \tilde{X} \rightarrow X$ such that $\tilde{f} := f \circ \pi: \tilde{X} \rightarrow X$ is a morphism, and we define $f^*D:=\pi_*(\tilde{f}^*D).$ It is not hard to verify that this definition is independent of the choice of $\tilde{X}$ and $\pi$. This is done in section 1 of [12], for example. 
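As a quick numerical illustration of Definition 2.2 in the single-map case $k=1$ (a toy example of our own, not taken from the paper), consider $f(x)=x^2$ on $\mathbb{P}^1$ over $\mathbb{Q}$ and $P=2$: then $f^n(P)=2^{2^n}$, so $h_X(f^n(P))=2^n\log 2$ and $\alpha_f(P)=2=\deg f$.

```python
import math

# Toy illustration (k = 1, our own example): f(x) = x^2 on P^1, P = 2.
# h([a : b]) = log max(|a|, |b|) for a rational point written in lowest terms.
def weil_height(num, den=1):
    return math.log(max(abs(num), abs(den)))

x, roots = 2, []
for n in range(1, 12):
    x = x * x                                   # one application of f
    roots.append(weil_height(x) ** (1.0 / n))   # h(f^n(P))^(1/n)
# roots is an increasing sequence approaching alpha_f(P) = 2
```

The convergence is slow (the $n$-th term is $2(\log 2)^{1/n}$), which is why the definitions above are stated as limits rather than finite-stage quantities.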
\section{First properties for the arithmetic degree} In this section we check that the upper and lower degrees defined at the end of the previous section are independent of the Weil height function chosen for $X$, and so they are well defined. Some examples of these degrees are computed in this section as well. We also present and prove our first counting result for points in orbits for several maps, and state an elementary and useful lemma from linear algebra.\newline {\bf Proposition 3.1: } \textit{The upper and lower arithmetic degrees $\bar{\alpha}_{\mathcal{F}}(P)$ and $ \underline{\alpha}_{\mathcal{F}}(P)$ are independent of the choice of the height function $h_X$.} \begin{proof} If the $\mathcal{F}$-orbit of $P$ is finite, then the limit $\alpha_{\mathcal{F}}(P)$ exists and is equal to 1, by definition of the limit, whatever the choice of $h_X$ is. So we consider the case when $P$ is not preperiodic, which allows us to replace $h_X^+$ with $h_X$ when taking limits. Let $h$ and $h^{\prime}$ be the heights induced on $X$ by ample divisors $D$ and $D^{\prime}$ respectively, and let the respective arithmetic degrees be denoted by $\bar{\alpha}_{\mathcal{F}}(P)$, $\underline{\alpha}_{\mathcal{F}}(P)$, ${\bar{\alpha}}^{\prime}_{\mathcal{F}}(P)$, ${\underline{\alpha}}^{\prime}_{\mathcal{F}}(P)$. 
By the definition of ampleness, there is an integer $m$ such that $mD- D^{\prime}$ is ample, and thus the functorial properties of height functions imply the existence of a non-negative constant $C$ such that: \begin{center} $mh(Q) \geq h^{\prime}(Q) - C $ for all $ Q \in X(\bar{K}).$ \end{center} We can choose a sequence of indices $\mathcal{N} \subset \mathbb{N}$ such that: \begin{center} $\bar{\alpha}^{\prime}_{\mathcal{F}}(P)= \frac{1}{k} \lim \sup_{n \rightarrow \infty} \{ \sum_{f \in \mathcal{F}_n} h^{\prime}(f(P))\}^{\frac{1}{n}} = \frac{1}{k} \lim_{n \in \mathcal{N}} \{ \sum_{f \in \mathcal{F}_n} h^{\prime}(f(P))\}^{\frac{1}{n}}$ \end{center} Then \newline \newline $\bar{\alpha}^{\prime}_{\mathcal{F}}(P)=\frac{1}{k} \lim_{n \in \mathcal{N}} \{ \sum_{f \in \mathcal{F}_n} h^{\prime}(f(P))\}^{\frac{1}{n}} \newline \newline \leq \frac{1}{k} \lim_{n \in \mathcal{N}} \{ \sum_{f \in \mathcal{F}_n}m h(f(P))+C \}^{\frac{1}{n}} ~~\newline \newline \leq \frac{1}{k} \lim \sup_{n \rightarrow \infty} \{ \sum_{f \in \mathcal{F}_n}m h(f(P))+C \}^{\frac{1}{n}} ~ \newline \newline = \frac{1}{k} \lim \sup_{n \rightarrow \infty} \{ m(\sum_{f \in \mathcal{F}_n} h(f(P)))+Ck^n \}^{\frac{1}{n}} \newline \newline = \frac{1}{k} \lim \sup_{n \rightarrow \infty} \{ \sum_{f \in \mathcal{F}_n} h(f(P))\}^{\frac{1}{n}} \newline \newline = \bar{\alpha}_{\mathcal{F}}(P). $ \newline This proves the inequality for the upper arithmetic degrees. Reversing the roles of $h$ and $h^{\prime}$ in the calculation above we also prove the opposite inequality, which demonstrates that $\bar{\alpha}_{\mathcal{F}}(P)={\bar{\alpha}}^{\prime}_{\mathcal{F}}(P).$ In the same way we prove that $\underline{\alpha}_{\mathcal{F}}(P)={\underline{\alpha}}^{\prime}_{\mathcal{F}}(P).$ \end{proof} Our next lemma says that points belonging to a fixed orbit have their upper and lower arithmetic degrees bounded from above by the respective arithmetic degrees of the given orbit generator point. 
\newline \newline {\bf Lemma 3.2: } \textit{Let $\mathcal{F}=\{f_1,..., f_k\}$ be a set of self-rational maps on $X$ defined over $\bar{K}$. Then, for all $P \in X_{\mathcal{F}}(\bar{K})$, all $l \geq 0$, and all $g \in \mathcal{F}_l$, } \begin{center} $\bar{\alpha}_{\mathcal{F}}(g(P)) \leq \bar{\alpha}_{\mathcal{F}}(P)$ and ~$\underline{\alpha}_{\mathcal{F}}(g(P)) \leq \underline{\alpha}_{\mathcal{F}}(P)$ \end{center} \begin{proof} We calculate \newline \newline $\bar{\alpha}_{\mathcal{F}}(g(P)) = \frac{1}{k} \lim \sup_{n \rightarrow \infty} \{ \sum_{f \in \mathcal{F}_n} h_X^+(f(g(P)))\}^{\frac{1}{n}}\newline \newline = \frac{1}{k} \lim \sup_{n \rightarrow \infty} \{ \sum_{f \in \mathcal{F}_n , g^{\prime} \in \mathcal{F}_l } h_X^+(f(g^{\prime}(P))) - \sum_{f \in \mathcal{F}_n , g^{\prime} \in \mathcal{F}_l -\{g\}} h_X^+(f(g^{\prime}(P)))\}^{\frac{1}{n}} \newline \newline \leq \frac{1}{k} \lim \sup_{n \rightarrow \infty} \{ [\sum_{f \in \mathcal{F}_{n+l}} h_X^+(f(P))]+ O(1).k^{n+l}\}^{\frac{1}{n}}\newline \newline =\frac{1}{k} \lim \sup_{n \rightarrow \infty} \{ \sum_{f \in \mathcal{F}_{n+l}} h_X^+(f(P))\}^{\frac{1}{n}}\newline \newline =\frac{1}{k} \lim \sup_{n \rightarrow \infty} \{ \sum_{f \in \mathcal{F}_{n+l}} h_X^+(f(P))\}^{\frac{1}{n+l} . (1+ \frac{l}{n})} \newline \newline =\frac{1}{k} \lim \sup_{n \rightarrow \infty} \{ \sum_{f \in \mathcal{F}_{n+l}} h_X^+(f(P))\}^{\frac{1}{n+l}} \newline \newline =\bar{\alpha}_{\mathcal{F}}(P)$ \newline \newline The proof for $\underline{\alpha}_{\mathcal{F}}(P)$ is similar. 
\end{proof} \newpage Here are some examples: \newline $\bf{Example \: 3.3}$: Let $S$ be a $K3$ surface in $\mathbb{P}^2 \times \mathbb{P}^2$ given by the intersection of two hypersurfaces of bidegrees (1,1) and (2,2) over $\overline{\mathbb{Q}}$, and assume that NS$(S) \cong \mathbb{Z}^2,$ generated by $L_i:= p_i^*O_{\mathbb{P}^2}(1), i=1,2$, where $p_i : S \rightarrow \mathbb{P}^2$ is the projection to the $i$-factor for $i=1,2.$ These induce noncommuting involutions $\sigma_1, \sigma_2 \in $ Aut$(S)$. By [19, Lemma 2.1], we have \begin{center} $\sigma_i^*L_i \cong L_i, \sigma_i^*L_j \cong 4L_i - L_j,$ for $ i \neq j.$\end{center} The line bundle $L:= L_1 + L_2$ is ample on $S$ and satisfies $\sigma_1^*L + \sigma^*_2L \cong 4L$, and thus $h:= \hat{h}_{L, \{ \sigma_1, \sigma_2\}}$ exists on $S(\overline{\mathbb{Q}})$ by [10, theorem 1.2.1]. Noting that $$ \sigma_1^* \sim \begin{bmatrix} 1 & 4 \\ 0 & -1 \end{bmatrix}, \sigma_2^* \sim \begin{bmatrix} -1 & 0 \\ 4 & 1 \end{bmatrix}, (\sigma_1 \circ \sigma_2)^* \sim \begin{bmatrix} -1 & -4 \\ 4 & 15 \end{bmatrix}, (\sigma_2 \circ \sigma_1)^* \sim \begin{bmatrix} 15 & 4 \\ -4 & -1 \end{bmatrix} , $$ $$ (\sigma_1 \circ \sigma_2 \circ \sigma_1)^* \sim \begin{bmatrix} 15 & 56 \\ -4 & -15 \end{bmatrix} , (\sigma_2 \circ \sigma_1 \circ \sigma_2)^* \sim \begin{bmatrix} -15 & -4 \\ 56 & 15 \end{bmatrix}, $$ we calculate that \begin{center}$\rho(\sigma_1^*)= 2+ \sqrt{3}, \rho( \sigma_2^*)= 2 + \sqrt{3}, \rho((\sigma_1 \circ \sigma_2 )^*)= 7 + 4 \sqrt{3}, \newline \rho(( \sigma_2 \circ \sigma_1)^*) = 7 + 4 \sqrt{3}, \rho((\sigma_1 \circ \sigma_2 \circ \sigma_1)^*)= 1, \rho((\sigma_2 \circ \sigma_1 \circ \sigma_2)^*)=1.$ \end{center} This gives that $\delta_{\{\sigma_1, \sigma_2\}}= 2 + \sqrt{3}. $ Furthermore, since $h$ is a Weil Height with respect to an ample divisor, \begin{center}$\alpha_{\{\sigma_1, \sigma_2\}}(P) = (1/2) . 
\lim_{n \rightarrow \infty } [\sum_{f \in \{ \sigma_1, \sigma_2 \}_n} h(f(P))]^{\frac{1}{n}}=1/2.[4^n.h(P)]^{ \frac{1}{n}}=2 $ \end{center} for all $P \in S(\bar{\mathbb{Q}})$ non-preperiodic, i.e., $P$ such that $h(P) \neq 0.$ Observe that in this case $\bar{\alpha}_{\{\sigma_1, \sigma_2\}}(P)=2 \leq 2 + \sqrt{3}= \delta_{\{\sigma_1, \sigma_2\}},$ an inequality which we will prove in Corollary 1.16 to hold under our general hypotheses. \newline $\bf{Example \: 3.4}$: Let $S$ be a $K3$ surface in $\mathbb{P}^2 \times \mathbb{P}^2$, as in example 1.4.5 of [10], given by the intersection of two hypersurfaces of bidegrees (1,2) and (2,1) over $\overline{\mathbb{Q}}$, and assume that NS$(S) \cong \mathbb{Z}^2,$ generated by $L_i:= p_i^*O_{\mathbb{P}^2}(1), i=1,2$, where $p_i : S \rightarrow \mathbb{P}^2$ is the projection to the $i$-th factor for $i=1,2.$ These induce noncommuting involutions $\sigma_1, \sigma_2 \in $ Aut$(S)$. By similar computations we have $\sigma_i^*L_i \cong L_i, \sigma_i^*L_j \cong 5L_i - L_j,$ for $ i \neq j.$ The line bundle $L:= L_1 + L_2$ is ample on $S$ and satisfies $\sigma_1^*L + \sigma^*_2L \cong 5L$, and thus $h:= \hat{h}_{L, \{ \sigma_1, \sigma_2\}}$ exists on $S(\overline{\mathbb{Q}})$ by [10, theorem 1.2.1]. Proceeding in the same way as in the previous example, we have that \begin{center} $\bar{\alpha}_{\{\sigma_1, \sigma_2\}}(P) =5/2 \leq \sqrt{\dfrac{23 + 5 \sqrt{21}}{2}}={\delta}_{\{\sigma_1, \sigma_2\}}.$ \end{center} $\bf{Example \: 3.5}$: Let $S$ be a hypersurface of tridegree (2,2,2) in $\mathbb{P}^1 \times \mathbb{P}^1 \times \mathbb{P}^1$ over $\overline{\mathbb{Q}}$, as in example 1.4.6 of [10]. 
For $i=1,2,3,$ let $p_i:S \rightarrow \mathbb{P}^1 \times \mathbb{P}^1$ be the projection to the $(j,k)-$th factor with $\{i,j,k\}= \{1,2,3\}.$ Since $p_i$ is a double cover, it gives an involution $\sigma_i \in $ Aut$(S).$ Also, let $q_i:S \rightarrow \mathbb{P}^1$ be the projection to the $i-$th factor, and set $L_i := q_i^* O_{\mathbb{P}^1}(1)$ and $L:= L_1 + L_2 + L_3$, which is ample; we assume that NS$(S) = <L_1,L_2,L_3> \cong \mathbb{Z}^3.$ By similar computations as above we have \begin{center} $ \sigma_i^*(L_i) \cong -L_i +2L_j + 2 L_k $ for $ \{i,j,k\}= \{ 1,2,3\}\newline \sigma_j^*(L_i) \cong L_i$ for $i \neq j.$ \end{center} Then $\sigma_1^*L + \sigma_2^*L + \sigma_3^*L \cong 5L,$ which gives us the existence of $h := \hat{h}_{L, \{ \sigma_1, \sigma_2, \sigma_3\}}$ by [10, theorem 1.2.1]. We note that if $h(P) \neq 0$, then a similar computation as in the previous examples yields $\alpha_{\{ \sigma_1,\sigma_2,\sigma_3\}}(P)= 5/3 $. On the other hand, we can also calculate that: $$ (\sigma_3 \circ \sigma_2 \circ \sigma_1)^* \sim \begin{bmatrix} 1 & -2 & -2 \\ 2 & 3 & 10 \\ 2 & 6 & 15 \end{bmatrix} $$ whose largest eigenvalue is approximately $\rho( (\sigma_3 \circ \sigma_2 \circ \sigma_1)^*) \approx 18.3808$. As $(18.3808)^{1/3} \approx 2.639$, we have that $\delta_{\{\sigma_1,\sigma_2,\sigma_3\}} \geq 2.63 > 5/3=\alpha_{\{ \sigma_1,\sigma_2,\sigma_3\}}(P).$ \newline $\bf{Example \: 3.6}$: Let $A$ be an abelian variety over $\bar{\mathbb{Q}}$, $L$ a symmetric ample line bundle on $A$. Let $f= (F_0:...:F_N): \mathbb{P}^N \rightarrow \mathbb{P}^N$ be a morphism defined by homogeneous polynomials $F_0,..., F_N$ of the same degree $d >1$ such that $0$ is the only common zero of $F_0,..., F_N.$ Set $X= A \times \mathbb{P}^N, g_1=[2] \times \mbox{id}_{\mathbb{P}^N},$ and $g_2= \mbox{id}_A \times f.$ Put $M:= p_1^*L \otimes p_2^* O_{\mathbb{P}^N}(1),$ where $p_1$ and $p_2$ are the obvious projections. Then \begin{center} $\stackrel{(d-1) ~ \text{times}}{\overbrace{g_1^*(M) \otimes... 
\otimes g_1^*(M)}} \otimes g_2^*(M) \otimes g_2^*(M) \otimes g_2^*(M) \cong M^{\otimes (4d-1)}. $ \end{center} This gives us that the canonical height $h:= \hat{h}_{\{g_1,...,g_1,g_2,g_2,g_2\}}$ exists by [10, theorem 1.2.1]. Again, if $h(P) \neq 0$, then $ \alpha_{\{g_1,...,g_1,g_2,g_2,g_2\}}(P)=\dfrac{4d-1}{d+2}$, and we can also see that $\delta_{\{g_1,...,g_1,g_2,g_2,g_2\}}= \max \{\delta_f, \delta_{[2]} \}= \max \{d, 4 \}$, so that, as in the previous examples, the arithmetic degree is strictly smaller than the dynamical degree, since $\dfrac{4d-1}{d+2} <\max \{d, 4 \}.$ \newline The next proposition is a counting result for orbit points in the case of a system with possibly several maps. It describes the growth of the height counting function of the orbit of $P$, as given below.\newline {\bf Proposition 3.7: } \textit{Let $ P \in X_{\mathcal{F}}(\bar{K})$ whose $\mathcal{F}$-orbit is infinite, and such that the arithmetic degree $\alpha_{\mathcal{F}}(P)$ exists. Then} \begin{center} $\lim_{B \rightarrow \infty} \dfrac{ \# \{ n \geq 0 ; \sum_{f \in \mathcal{F}_n} h_X(f(P)) \leq B\}}{ \log B}= \dfrac{1}{ \log(k. \alpha_{\mathcal{F}}(P))} $ \end{center} \textit{and in particular,} \begin{center} $\lim \inf _{B \rightarrow \infty} (\# \{Q \in \mathcal{O}_{\mathcal{F}}(P); h_X(Q) \leq B \})^{\frac{1}{\log B}} \geq k^{ \frac{1}{ \log(k. 
\alpha_{\mathcal{F}}(P))}}$ \end{center} \begin{proof} Since $\# \mathcal{O}_{\mathcal{F}}(P)= \infty$, it suffices to prove the same claim with $h_X^+$ in place of $h_X.$ For each $\epsilon >0$, there exists an $n_0(\epsilon)$ such that \begin{center} $(1- \epsilon) \alpha_{\mathcal{F}}(P) \leq \dfrac{1}{k} (\sum_{f \in \mathcal{F}_n} h^+_X(f(P)))^{\frac{1}{n}} \leq (1 + \epsilon) \alpha_{\mathcal{F}}(P) $ for all $n \geq n_0(\epsilon).$ \end{center} It follows that \begin{center} $\{ n \geq n_0(\epsilon): (1 + \epsilon) \alpha_{\mathcal{F}}(P) \leq \dfrac{B^{\frac{1}{n}}}{k} \} \subset \{ n \geq n_0(\epsilon):\sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq B \}$ \end{center} and \begin{center} $ \{ n \geq n_0(\epsilon):\sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq B \} \subset \{ n \geq n_0(\epsilon): (1 - \epsilon) \alpha_{\mathcal{F}}(P) \leq \dfrac{B^{\frac{1}{n}}}{k} \}$ \end{center} Counting the number of elements in these sets yields \begin{center} $\dfrac{ \log B}{ \log (k (1 + \epsilon) \alpha_{\mathcal{F}}(P))} - n_0(\epsilon) -1 \leq \# \{ n \geq 0 : \sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq B \}$ \end{center} and \begin{center} $ \# \{ n \geq 0 : \sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq B \} \leq \dfrac{ \log B}{ \log (k (1 - \epsilon) \alpha_{\mathcal{F}}(P))} + n_0(\epsilon) +1 $ \end{center} Dividing by $\log B$ and letting $B \rightarrow \infty$ gives \begin{center} $ \dfrac{ 1}{ \log (k (1 + \epsilon) \alpha_{\mathcal{F}}(P))} \leq \lim \inf_{ B \rightarrow \infty} \dfrac{\# \{ n \geq 0 : \sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq B \} }{ \log B} $ \end{center} and \begin{center} $ \lim \sup_{ B \rightarrow \infty} \dfrac{\# \{ n \geq 0 : \sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq B \} }{ \log B} \leq \dfrac{ 1}{ \log (k (1 - \epsilon) \alpha_{\mathcal{F}}(P))}$ \end{center} Since the choice of $\epsilon$ is arbitrary, and the $ \lim \inf$ is less than or equal to the $\lim \sup$, this finishes the proof that \begin{center} $ \lim_{ B 
\rightarrow \infty} \dfrac{\# \{ n \geq 0 : \sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq B \} }{ \log B}= \dfrac{ 1}{ \log (k .\alpha_{\mathcal{F}}(P))}$ \end{center} Moreover, we also have that \begin{center} $\{ n \geq 0 : \sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq B \} \subset \{ n \geq 0 : h^+_X(f(P)) \leq B $ for all $ f \in \mathcal{F}_n \}$ \end{center} and thus \begin{center} $\dfrac{ \log B}{ \log (k (1 + \epsilon) \alpha_{\mathcal{F}}(P))} - n_0(\epsilon) -1 \leq \# \{ n \geq 0 : h^+_X(f(P)) \leq B $ for all $ f \in \mathcal{F}_n \}$ \end{center} This implies that \begin{center} $\dfrac{ k^{\frac{ \log B}{ \log (k (1 + \epsilon) \alpha_{\mathcal{F}}(P))} - n_0(\epsilon)} -1}{k-1} \leq \# \{Q \in \mathcal{O}_{\mathcal{F}}(P); h_X^+(Q) \leq B \}$ \end{center} Taking $\frac{1}{\log B}$-roots and letting $B \rightarrow \infty$ gives \begin{center} $ k^{ \frac{1}{ \log (k. \alpha_{\mathcal{F}}(P))}} \leq \lim \inf _{B \rightarrow \infty} (\# \{Q \in \mathcal{O}_{\mathcal{F}}(P); h_X^+(Q) \leq B \})^{\frac{1}{\log B}}.$ \end{center} \end{proof} We finish this section by stating the following elementary lemma from linear algebra. This lemma will be useful in the following sections. \newline {\bf Lemma 3.8:} \textit{Let $A=(a_{ij}) \in M_r( \mathbb{C})$ be an $r$-by-$r$ matrix. Let $||A||=\max |a_{ij}|$, and let $\rho (A)$ denote the spectral radius of $A$. Then there are constants $c_1$ and $c_2$, depending on $A$, such that} \begin{center} $c_1\rho (A)^n \leq ||A^n|| \leq c_2 n^r \rho (A)^n$ for all $n \geq 0.$ \end{center} \textit{In particular, we have $\rho (A) = \lim_{n \rightarrow \infty} ||A^n||^{ \frac{1}{n}}$. } \begin{proof} See [12, lemma 14] \end{proof} \section{Some divisor and height inequalities for rational maps} We let $h,g :X \dashrightarrow X$ be rational maps, and $ f \in \mathcal{F}_n$ for $\mathcal{F}=\{f_1,...,f_k\}$ a dynamical system of self-rational maps on $X$. The aim of this section is mainly to prove the next result below. 
It states that the action of $f \in \mathcal{F}_n$ on the vector space NS$(X)_{\mathbb{R}}$ is related to the actions of the maps $f_1,...,f_k$ by certain inequalities. This result guarantees, for instance, that the dynamical degree exists, and it will also be important later in proving that $h^+_X(f(P)) \leq O(1).k^n.(\delta_{\mathcal{F}} + \epsilon)^n h^+_X(P)$ for all $f \in \mathcal{F}_n$. {\bf Proposition 4.1:} \textit{Let $X$ be a smooth projective variety, and fix a basis $D_1,..., D_r$ for the vector space NS$(X)_{\mathbb{R}}$. A dominant rational map $h: X \dashrightarrow X$ induces a linear map on NS$(X)_{\mathbb{R}}$, and we write} \begin{center} $h^*D_j \equiv \sum_{i=1}^r a_{ij}(h)D_i$ \textit{and} $A(h)=(a_{ij}(h)) \in M_r(\mathbb{R}).$ \end{center} \textit{We let $||.||$ denote the sup norm on $M_r(\mathbb{R}).$ Then there is a constant $C \geq 1$ depending on $D_1,...,D_r$ such that for any dominant rational maps $h,g :X \dashrightarrow X,$ any $n \geq 1$, and any $ f \in \mathcal{F}_n$ we have} \begin{center} $\quad \quad \quad|| A( g \circ h)|| \leq C ||A(g)|| . ||A(h)||$ \newline $||A(f)|| \leq C.(r . \max_{i=1,...,k}||A(f_i)||)^n.$ \end{center} The proof of this result is given below. An immediate corollary is the convergence of the limit defining the dynamical degree. 
\newline {\bf Corollary 4.2:} \textit{The limit superior $\delta_{\mathcal{F}}=\lim \sup_{n\rightarrow \infty}\rho(\mathcal{F}_n)^{\frac{1}{n}}$ is finite.} \begin{proof} With notation as in the statement of proposition 4.1, we have \begin{center} $\rho(\mathcal{F}_n)= \max_{f \in \mathcal{F}_n} \rho (f^*,$NS$(X)_{\mathbb{R}})= \max_{f \in \mathcal{F}_n} \rho (A(f))$ \end{center} Writing $||A(\mathcal{F}_n)|| = \max_{f \in \mathcal{F}_n}||A(f)||$, proposition 4.1 gives us that \begin{center} $\log ||A(\mathcal{F}_{n+m})|| \leq \log ||A(\mathcal{F}_{m})|| + \log ||A(\mathcal{F}_{n})||+ O(1)$ \end{center} Using this subadditivity estimate, we can see that $\dfrac{1}{n}\log ||A(\mathcal{F}_n)||$ converges. Indeed, if a sequence $(d_n)_{n \in \mathbb{N}}$ of nonnegative real numbers satisfies $d_{i+j} \leq d_i + d_j$, then after fixing an integer $m$ and writing $n=mq+r$ with $0 \leq r \leq m-1,$ we have \begin{center} $\dfrac{d_n}{n} = \dfrac{d_{mq+r}}{n} \leq \dfrac{(qd_m+d_r)}{n}= \dfrac{d_m}{m} \dfrac{1}{(1+ r/mq)} + \dfrac{d_r}{n} \leq \dfrac{d_m}{m} + \dfrac{d_r}{n}.$ \end{center} Now take the limsup as $n \rightarrow \infty$, keeping in mind that $m$ is fixed and \newline $ r \leq m-1$, so $d_r$ is bounded. This gives \begin{center} $\lim \sup_{n \rightarrow \infty} \dfrac{d_n}{n} \leq \dfrac{d_m}{m}.$ \end{center} Taking the infimum over $m$ shows that \begin{center} $\lim \sup_{ n \rightarrow \infty} \dfrac{d_n}{n} \leq \inf_{ m \geq 1} \dfrac{d_m}{m} \leq \lim \inf_{ m \rightarrow \infty} \dfrac{d_m}{m} ,$ \end{center} and hence all three quantities must be equal. As the sequence $(||A(\mathcal{F}_{n})||^{1/n})_{n \in \mathbb{N}}$ is convergent and therefore bounded, lemma 3.8 guarantees that the sequence $(\rho(\mathcal{F}_n)^{1/n})_{n \in \mathbb{N}}$ is bounded as well. 
\end{proof} We also conjecture that the limit $\lim_{n\rightarrow \infty}\rho(\mathcal{F}_n)^{\frac{1}{n}}$ exists and is a birational invariant. The proof for dynamical degrees of systems with only one map given in [6, prop. 1.2] should extend naturally to our present definition of the degree with several maps. In that article, the dynamical degree is first defined using currents, and that definition is then shown to coincide with the one using the limit of roots of spectral radii. This may be worked out in a future paper. Thus, from now on, we assume that \begin{center} $\delta_{\mathcal{F}}:=\lim_{n\rightarrow \infty}\rho(\mathcal{F}_n)^{\frac{1}{n}}$, \end{center} and that it exists. We start the proof of proposition 4.1 by stating the following auxiliary proposition and lemmas, whose proofs can be found in [12]:\newline {\bf Proposition 4.3:} \textit{Let $X^{(0)}, X^{(1)}, ..., X^{(m)}$ be smooth projective varieties of the same dimension $N$, and let $f^{(i)}: X^{(i)} \dashrightarrow X^{(i-1)}$ be dominant rational maps for $1 \leq i \leq m.$ Let $D$ be a nef divisor on $X^{(0)}$. Then for any nef divisor $H$ on $X^{(m)}$, we have} \begin{center} $(f^{(1)} \circ f^{(2)} \circ ... \circ f^{(m)})^*D . H ^{N-1} \leq (f^{(m)})^*... (f^{(2)})^* (f^{(1)})^*D.H^{N-1}.$ \end{center} \begin{proof} See [12, Prop. 17] \end{proof} For the lemmas, we need to set the following notation: \newline \begin{itemize} \item $N:$ The dimension of $X$, which we assume to be at least 2. \newline \item $\mbox{Amp}(X)$: The ample cone in NS$(X)_{\mathbb{R}}$ of all ample $\mathbb{R}-$divisors. \newline \item $\mbox{Nef}(X)$: The nef cone in NS$(X)_{\mathbb{R}}$ of all nef $\mathbb{R}-$divisors. \newline \item $\mbox{Eff}(X)$: The effective cone in NS$(X)_{\mathbb{R}}$ of all effective $\mathbb{R}-$divisors. \newline \item $\overline{\mbox{Eff}}(X): $ The $\mathbb{R}-$closure of Eff$(X)$. 
\newline \end{itemize} As described in [5, section 1.4], we have the facts \begin{center} Nef$(X)=\overline{\mbox{Amp}}(X)$ and Amp$(X)=$ int$($Nef$(X)).$ \end{center} In particular, since Amp$(X) \subset $ Eff$(X)$, it follows that Nef$(X) \subset \overline{\mbox{Eff}}(X).$\newline {\bf Lemma 4.4:} \textit{With notation as above, let $D \in \overline{\mbox{Eff}}(X) - \{0\}$ and $H \in $ Amp$(X).$ Then } \begin{center} $D.H^{N-1} > 0.$ \end{center} \begin{proof} See [12, lemma 18] \end{proof} \newpage {\bf Lemma 4.5:} \textit{Let $H \in $ Amp$(X)$, and fix some norm $|.|$ on the $\mathbb{R}-$vector space NS$(X)_{\mathbb{R}}$. Then there are constants $C_1, C_2 > 0$ such that} \begin{center} $C_1|v| \leq v.H^{N-1} \leq C_2|v|$ for all $v \in \overline{\mbox{Eff}}(X).$ \end{center} \begin{proof} See [12, lemma 19] \end{proof} Now we start the proof of proposition 4.1. We fix a norm $|.|$ on the $\mathbb{R}-$vector space NS$(X)_{\mathbb{R}}$ as before. Additionally, for any $A:$ NS$(X)_{\mathbb{R}} \rightarrow $ NS$(X)_{\mathbb{R}}$ linear transformation, we set \begin{center} $||A||^{\prime} = \sup_{ v \in \mbox{Nef} - \{0\}} \dfrac{|Av|}{|v|},$ \end{center} which exists because the set $\overline{\mbox{Eff}}(X) \cap \{ w \in \mbox{NS}(X)_{\mathbb{R}} : |w| = 1 \}$ is compact. 
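The finiteness of such cone-restricted suprema can be visualized numerically in a toy model (entirely our own construction, with NS$(X)_{\mathbb{R}}$ replaced by $\mathbb{R}^2$ and the nef cone by the closed first quadrant): sampling the compact unit slice of the cone approximates $||A||^{\prime}$ from below, and the result is bounded by the usual operator norm, since the supremum runs over a smaller set of vectors.

```python
import numpy as np

# Toy model (purely illustrative): NS(X)_R ~ R^2, Nef(X) ~ closed first quadrant.
# ||A||' = sup over nonzero nef v of |Av|/|v|, approximated on the unit slice
# {v in cone : |v| = 1}, which is compact, so the supremum is finite.
A = np.array([[1.0, 4.0], [0.0, -1.0]])        # the matrix of (sigma_1)^* from Example 3.3

theta = np.linspace(0.0, np.pi / 2, 10001)     # parametrize unit vectors in the cone
V = np.vstack([np.cos(theta), np.sin(theta)])  # shape (2, 10001), each column has |v| = 1
cone_norm = np.linalg.norm(A @ V, axis=0).max()  # approximates ||A||' from below

op_norm = np.linalg.norm(A, 2)                 # full operator (spectral) norm
# cone_norm <= op_norm always; for this particular A the two agree,
# since the maximizing direction happens to lie inside the cone
```

This is only a finite-dimensional caricature, but it mirrors the compactness argument above: restricting the supremum to a closed cone intersected with the unit sphere is what makes $||\cdot||^{\prime}$ finite.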
\newline We note that for linear maps $A,B \in $ End(NS$(X)_{\mathbb{R}})$ and $c \in \mathbb{R}$ we have \begin{center} $|| A + B||^{\prime} \leq ||A||^{\prime} + ||B||^{\prime}$ and $||cA||^{\prime}=|c|||A||^{\prime}.$ \end{center} Further, since Nef$(X)$ generates NS$(X)_{\mathbb{R}}$ as an $\mathbb{R}-$vector space, we have $||A||^{\prime}=0$ if and only if $A=0.$ Thus $||.||^{\prime}$ is an $\mathbb{R}-$norm on End(NS$(X)_{\mathbb{R}}).$ \newline Similarly, for any linear map $A:$ NS$(X)_{\mathbb{R}} \rightarrow $NS$(X)_{\mathbb{R}},$ we set \begin{center} $||A||^{ \prime \prime}= \sup_{ w \in \overline{\mbox{Eff}}(X) - \{0\}} \dfrac{|Aw|}{|w|},$ \end{center} then $||.||^{ \prime \prime}$ is an $\mathbb{R}-$norm on End(NS$(X)_{\mathbb{R}}).$ We note that $ \overline{\mbox{Eff}}(X)$ is preserved by $f^*$ for $f$ a self-rational map on $X$, and that Nef$(X) \subset \overline{\mbox{Eff}}(X).$ Thus if $v \in $Nef$(X),$ then $ g^*v$ and $h^*v$ belong to $ \overline{\mbox{Eff}}(X).$ This allows us to compute \newline \newline $||(g \circ h)^*||^{\prime}=\sup_{ v \in \mbox{Nef}(X) - \{0\}} \dfrac{|(g \circ h)^*v|}{|v|} \newline \newline \leq C_1^{-1}\sup_{ v \in \mbox{Nef}(X) - \{0\}} \dfrac{(g \circ h)^*v.H^{N-1}}{|v|} $ from lemma 4.5 \newline \newline $ \leq C_1^{-1}\sup_{ v \in \mbox{Nef}(X) - \{0\}} \dfrac{(h^* g^*v).H^{N-1}}{|v|} $ from proposition 4.3 \newline \newline $ = C_1^{-1}\sup_{ v \in \mbox{Nef}(X) - \{0\}, g^*v \neq 0} \dfrac{(h^* g^*v).H^{N-1}}{|v|} \newline \newline = C_1^{-1}(\sup_{ v \in \mbox{Nef}(X) - \{0\}, g^*v \neq 0} \dfrac{(h^* g^*v).H^{N-1}}{|g^*v|} . 
\dfrac{|g^*v|}{|v|})~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \newline \newline \leq C_1^{-1}(\sup_{ v \in \mbox{Nef}(X) - \{0\}, g^*v \neq 0} \dfrac{(h^* g^*v).H^{N-1}}{|g^*v|}) .(\sup_{ v \in \mbox{Nef} - \{0\}} \dfrac{|g^*v|}{|v|})~~~~~~~~~~~~~~ \newline \newlin = C_1^{-1}(\sup_{ v \in \mbox{Nef}(X) - \{0\}, g^*v \neq 0} \dfrac{(h^* g^*v).H^{N-1}}{|g^*v|}) . || g^*||^{\prime}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \newline \newline \leq C_1^{-1}(\sup_{ w \in \overline{\mbox{Eff}}(X) - \{0\}} \dfrac{ (h^*w).H^{N-1}}{|w|}) . || g^*||^{\prime}$ since $ g^*v \in \overline{\mbox{Eff}}(X)~~~ ~~~~~~~~~~~~~~\newline \newline \leq C_1^{-1} C_2 (\sup_{ w \in \overline{\mbox{Eff}}(X) - \{0\}} \dfrac{ |h^*w|}{|w|}) . || g^*||^{\prime} $ from lemma 4.5~~~~~~~~~~~~~~~~~~~~~~~~~ \newline \newline $= C_1^{-1} C_2 ||h^*||^{\prime \prime}. ||g^*||^{\prime}.~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$ \newline We remember that we defined $||.||$ to be the sup norm on $M_r(\mathbb{R})=$ End$($NS$(X)_{\mathbb{R}}$, where the identification is via the given basis $D_1,..., D_r$ of NS$(X)_{\mathbb{R}}$. We thus have three norms $||.||, ||.||^{\prime}$ and $||.||^{\prime \prime}$ on End$($NS$(X)_{\mathbb{R}}$, so there are positive constants $C_3^{\prime}, C_4^{\prime}, C_3^{\prime \prime}$ and $ C_4^{\prime \prime}$ such that \begin{center} $C_3^{\prime}|| \gamma|| \leq || \gamma||^{\prime} \leq C_4^{\prime}|| \gamma||$ and $C_3^{\prime \prime}|| \gamma|| \leq || \gamma||^{\prime \prime} \leq C_4^{\prime \prime}|| \gamma||$ l $\forall \gamma \in $ End$($NS$(X)_{\mathbb{R}}.$ \end{center} Hence \newline \newline $||A(g \circ h)||=||(g \circ h)^*|| \leq C_3^{\prime -1}||(g \circ h)^*||^{\prime}~~~~~~~ \newline \newline \leq C_3^{\prime -1} C_1^{-1} C_2 ||h^*||^{\prime \prime}. ||g^*||^{\prime} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\newline \newline \leq C_3^{\prime -1} C_1^{-1} C_2 C_4^{\prime} C_4^{\prime \prime} ||h^*||. 
||g^*|| ~~~~~~~~~~~~~~~~~~~~~~~~\newline \newline = C_3^{\prime -1} C_1^{-1} C_2 C_4^{\prime} C_4^{\prime \prime} ||A(h)||. ||A(g)||.~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~$ \newline \newline Similarly, if $v \in $ Nef$(X), f := f_{i_1} \circ ... \circ f_{i_n}\in \mathcal{F}_n$, then $f^*v \in \overline{\mbox{Eff}}(X).$ A similar calculation gives \newline \newline $||f^*||^{\prime}=\sup_{ v \in \mbox{Nef}(X) - \{0\}} \dfrac{|f^*v|}{|v|} ~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\newline \newline \leq C_1^{-1}\sup_{ v \in \mbox{Nef}(X) - \{0\}} \dfrac{(f^*v).H^{N-1}}{|v|} $ from lemma 4.5 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\newline \newline $= C_1^{-1}\sup_{ v \in \mbox{Nef}(X) - \{0\}} \dfrac{( f_{i_1} \circ ... \circ f_{i_n})^*v.H^{N-1}}{|v|} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~\newline \newline \leq C_1^{-1}\sup_{ v \in \mbox{Nef}(X) - \{0\}} \dfrac{((f_{i_n})^*... (f_{i_1})^*v).H^{N-1}}{|v|} $ from proposition 4.3~~~~~~~~~~~~~~~~~\newline \newline $ \leq C_1^{-1} C_2 (\sup_{ v \in \mbox{Nef}(X) - \{0\}} \dfrac{ |(f_{i_n})^*... (f_{i_1})^*v|}{|v|}) $ from lemma 4.5 ~~~~~~~~~~~ ~~~~~~~~~~~~~~\newline \newline $= C_1^{-1} C_2. ||(f_{i_n})^*... (f_{i_1})^*||^{\prime}.~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~ ~~~~~~~~~~~~~~~~~~~~~~~~$ \newline\newline Hence \newline \newline $||A(f)||=||f^*|| \leq C_3^{\prime -1}||f^*||^{\prime}~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~ \newline \newline \leq C_3^{\prime -1} C_1^{-1} C_2 ||(f_{i_n})^*... (f_{i_1})^*||^{\prime} ~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~\newline \newline \leq C_3^{\prime -1} C_1^{-1} C_2 C_4^{\prime} C_4^{\prime \prime} ||(f_{i_n})^*... (f_{i_1})^*|| ~~~~~~~~~~~~~ ~~~~~~~~~~~\newline \newline \leq C_3^{\prime -1} C_1^{-1} C_2 C_4^{\prime} C_4^{\prime \prime} r^n ||(f_{i_n})^*||... ||(f_{i_1})^*||~~~~~~~ ~~~~~~~~~~~ \newline \newline \leq C_3^{\prime -1} C_1^{-1} C_2 C_4^{\prime} C_4^{\prime \prime}.[r. 
\max_{i=1,...,k.}||A(f_i)||]^n$, ~~~~~~~~~~~~~~~~~~~~~~~~~~~\newline \newline As we wanted to show. \newline As it was said in the beginning of this section, the next proposition is a height inequality for rational maps, with eyes towards future applications. \newline {\bf Proposition 4.6:} \textit{Let $X/\bar{K}$ and $ Y/\bar{K}$ be smooth projective varieties, \newline let $f:Y \dashrightarrow X$ be a dominant rational map defined over $\bar{K}$, let $D \in$ Div$(X)$ be an ample divisor, and fix Weil height functions $h_{X,D}$ and $h_{Y,f^*D}(P)$ associated to $D$ and $f^*D.$ Then } \begin{center} $h_{X,D} \circ f(P) \leq h_{Y,f^*D}(P) + O(1)$ \textit{for all} $P \in (Y - I_f)(\bar{K}),$ \end{center} \textit{where the $O(1)$ bound depends on $X,Y, f,$ and the choice of height functions, but is independent of $P$.} \begin{proof} See [12, Prop. 21]. \end{proof} \section{A bound for the sum of heights on iterates} This section is devoted for the proof of a quantitative upper bound for $\sum_{f \in \mathcal{F}_n} h^+_X(f(P))$ in terms of the dynamical degree $\delta_{\mathcal{F}}$ of the system.This is one of the main results of this work, and is stated below. As a corollary, we see that the arithmetic degree of any point is upper bounded by the dynamical degree of the system. \newline {\bf Theorem 5.1:} \textit{Let $K$ be a number field or a one variable function field of characteristic $0$ , let $\mathcal{F}=\{f_1,...,f_k\}$ be a set of dominant self rational maps on $X$ defined over $K$ as stated before, let $h_X$ be a Weil height on $X(\bar{K})$ relative to an ample divisor, let $h^+_X= \max \{h_X, 1 \}$, and let $\epsilon >0$. Then there exists a positive constant $C=C(X, h_X, f, \epsilon)$ such that for all $P \in X_{\mathcal{F}}(\bar{K})$ and all $n \geq 0$,} \begin{center} $\sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq C. k^n.(\delta_{\mathcal{F}} + \epsilon)^n . h^+_X(P).$ \end{center} \textit{In particular, $h^+_X(f(P)) \leq C. 
k^n.(\delta_{\mathcal{F}} + \epsilon)^n . h^+_X(P)$ for all $f \in \mathcal{F}_n.$} \newline \newline Before proving the theorem, we note that it implies the fundamental inequality $\bar{\alpha}_{\mathcal{F}}(P) \leq \delta_{\mathcal{F}}.$\newline {\bf Corollary 5.2:} \textit{Let $P \in X_{\mathcal{F}}(\bar{K}).$ Then} \begin{center} $\bar{\alpha}_{\mathcal{F}}(P) \leq \delta_{\mathcal{F}}.$ \end{center} \begin{proof} Let $ \epsilon >0.$ Then \begin{center} $\quad \quad\bar{\alpha}_{\mathcal{F}}(P) = \frac{1}{k} \lim \sup_{n \rightarrow \infty} \{\sum_{f \in \mathcal{F}_n} h^+_X(f(P))\}^{\frac{1}{n}} $ by definition of $\bar{\alpha}_{\mathcal{F}}~ ~~~~~~~\newline \newline \leq \lim \sup_{n \rightarrow \infty} ( C. (\delta_{\mathcal{F}} + \epsilon)^n . h^+_X(P))^{\frac{1}{n}} $ from theorem 5.1 ~~~~~~~~~~~~~~~ ~~~\newline \newline $ =\delta_{\mathcal{F}} + \epsilon.\quad \quad \quad \quad \quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~$ \end{center} This holds for all $\epsilon>0$, which proves that $\bar{\alpha}_{\mathcal{F}}(P) \leq \delta_{\mathcal{F}}.$ \end{proof} {\bf Lemma 5.3:} \textit{Let $E \in$ Div$(X)_{\mathbb{R}}$ be a divisor that is algebraic equivalent to 0, and fix a height function $h_E$ associated to $E.$ Then there is a constant $C=C(h_X, h_E)$ such that} \begin{center} $|h_E(P)| \leq C \sqrt{h_X^+(P)} $ \textit{for all} $P \in X(\bar{K}).$ \end{center} \begin{proof} See for example the book of Diophantine Geometry of Hindry-Silverman[8, Theorem B.5.9]. 
\end{proof} Theorem 5.1 will be a consequence of the following slightly weaker result.\newline {\bf Theorem 5.4:} \textit{Let $K$ be a number field or a one-variable function field of characteristic $0$, let $\mathcal{F}=\{f_1,...,f_k\}$ be a set of dominant self-rational maps on $X$ defined over $K$, let $h_X$ be a Weil height on $X(\bar{K})$ relative to an ample divisor, let $h^+_X= \max \{h_X, 1 \}$, and let $\epsilon >0$. Then there exist a positive constant $C=C(X, h_X, \mathcal{F}, \epsilon)$ and a positive integer $t$ such that for all $P \in X_{\mathcal{F}}(\bar{K})$ and all $n \geq 0$,} \begin{center} $\sum_{f \in \mathcal{F}_{nt}} h^+_X(f(P)) \leq C. k^{nt}.(\delta_{\mathcal{F}} + \epsilon)^{nt} . h^+_X(P).$ \end{center} Before proving it and then deducing theorem 5.1, we state and prove two short auxiliary lemmas.\newline {\bf Lemma 5.5:} \textit{In the situation above, there is a constant $C \geq 1$ such that \begin{center} $\sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq k^n.C^n . h^+_X(P)$ \end{center} for all $P \in X_{\mathcal{F}}(\bar{K})$.} \begin{proof} We take an ample divisor $H$ on $X$, a height function $h_H \geq 1$ and height functions $ h_{f_i^*H}$ associated to $H$ and $f_i^*H$ respectively, so that \begin{center} $h_H(f_i(P)) \leq h_{f_i^*H}(P) + O(1) $ \end{center} for all $P \in X_{\mathcal{F}}(\bar{K})$, with $O(1)$ depending on $H, f_i, f_i^*H, h_H, h_{f_i^*H},$ but not on $P$. Then, for $C$ large enough, we find that $h_{f_i^*H}(P) + O(1) \leq C h_H(P)$, and so $h_H(f_i(P)) \leq C h_H(P)$ for all $P \in X_{\mathcal{F}}(\bar{K})$, which yields \begin{center}$\sum_{f \in \mathcal{F}_n} h_H(f(P)) \leq k^n.C^n . h_H(P).$ \end{center} The proof is finished since $h_H$ and $h_X$ are associated with ample divisors, and are therefore commensurate. 
\end{proof} {\bf Lemma 5.6:} \textit{Let $\mathcal{A}_0:=\{a_0\}$ with $a_0 \geq 1$, let $k$ be fixed, and for each $l \in \mathbb{N}$ let $\mathcal{A}_l$ be a set of $k^l$ positive real numbers such that \begin{center} $\sum_{a \in \mathcal{A}_n} a \leq \sum_{a \in \mathcal{A}_{n-1}} a + C_1(\sum_{a \in \mathcal{A}_{n-1}} \sqrt{a} + \sum_{a \in \mathcal{A}_{n-1}} \sqrt{a + C_2})$ for all $n \geq 1,$ \end{center} where $C_1, C_2$ are non-negative constants. Then there exists a positive constant $C$ depending only on $C_1, C_2$ and $k$ such that \begin{center} $\sum_{a \in \mathcal{A}_n} a \leq k^{n-1}.C.n^2. a_0$ for all $n \geq 1.$ \end{center}} \begin{proof} We have
\begin{align*}
\sum_{a \in \mathcal{A}_n} a &\leq \sum_{a \in \mathcal{A}_{n-1}} a + C_1\Big(\sum_{a \in \mathcal{A}_{n-1}} \sqrt{a} + \sum_{a \in \mathcal{A}_{n-1}} \sqrt{a + C_2}\Big) = \sum_{a \in \mathcal{A}_{n-1}} \big[a + C_1( \sqrt{a} + \sqrt{a + C_2})\big] \\
&\leq \sum_{a \in \mathcal{A}_{n-1}} \Big[a + C_1\sqrt{a}\Big( 1 + \sqrt{1 + \tfrac{C_2}{a}}\Big)\Big] \leq \sum_{a \in \mathcal{A}_{n-1}} \big[a + C_1\sqrt{a}( 1 + \sqrt{1 + C_2})\big] = \sum_{a \in \mathcal{A}_{n-1}} \big[a + C_3\sqrt{a}\,\big],
\end{align*}
with $C_3:= C_1(1+ \sqrt{1+ C_2})$. Thus we have $\sum_{a \in \mathcal{A}_1}a \leq \sum_{a \in \mathcal{A}_{0}} [a + C_3\sqrt{a}\,]=a_0 + C_3\sqrt{a_0} \leq a_0(1 + C_3) \leq a_0.C = a_0.C.k^0,$ where $C:=\max \{ \dfrac{C_3^2.k}{4}, 1 + C_3\}$, and we prove by induction that $\sum_{a \in \mathcal{A}_n}a \leq Ck^{n-1}n^2a_0.$ Using the Cauchy--Schwarz inequality $\sum_{a \in \mathcal{A}_n}\sqrt{a} \leq k^{n/2}\sqrt{\sum_{a \in \mathcal{A}_n}a}$ and the fact that $C_3 \leq 2\sqrt{C/k}$, we compute
\begin{align*}
\sum_{a \in \mathcal{A}_{n+1}}a &\leq \sum_{a \in \mathcal{A}_{n}} \big[a + C_3\sqrt{a}\,\big] = \sum_{a \in \mathcal{A}_{n}} a+ C_3\sum_{a \in \mathcal{A}_{n}} \sqrt{a} \leq \sum_{a \in \mathcal{A}_{n}}a + C_3 k^{n/2}\sqrt{\sum_{a \in \mathcal{A}_{n}}a} \\
&\leq k^{n-1}Cn^2a_0 + C_3 k^{n/2}\sqrt{k^{n-1}Cn^2a_0} \leq k^{n-1}Cn^2a_0+ 2k^{(n-1)/2}\sqrt{C}\sqrt{k^{n-1}Cn^2a_0} \\
&\leq k^{n-1}Ca_0\Big(n^2+\dfrac{2n}{\sqrt{a_0}}\Big) \leq k^{n-1}Ca_0(n^2+2n) \leq k^{n}Ca_0(n+1)^2,
\end{align*}
which completes the induction. \end{proof} Now we start the proof of theorem 5.4. \newline \textit{Proof of theorem 5.4:} We take very ample divisors $D_1,...,D_r$ forming a basis of NS$(X)_{\mathbb{R}}$, and $H \equiv \sum c_i D_i$ ample with $c_i \geq 0$ such that $H+ D_i$ and $H-D_i$ are all ample. We consider a resolution of indeterminacy $p:Y \rightarrow X$, a sequence of blow-ups working for each $f_i$, such that $g_i:=f_i \circ p$ is a morphism for each $i \leq k$, and write Exc$(p)$ for the exceptional locus of $p$. For each $j\leq k$ and $i \leq r$, we take effective divisors $\tilde{D}_i^{(j)}$ on $X$ with $\tilde{D}_i^{(j)}$ linearly equivalent to $D_i$, and such that none of the components of $g_j^*\tilde{D}_i^{(j)}$ is contained in Exc$(p)$. The divisor $Z_i^{(j)}:=p^*p_*g_j^*\tilde{D}_i^{(j)}- g_j^*\tilde{D}_i^{(j)}$ on $Y$ is effective and has support contained in Exc$(p)$. We denote $F_i^{(j)}:=g_j^*D_i$ for $i=1,...,r$, and take divisors $F_{r+1}^{(j)},...,F_s^{(j)}$ so that $F_1^{(j)},..., F_s^{(j)}$ form a basis for NS$(Y)_{\mathbb{R}}$. For $i \leq r$, we can see that $p^*p_*F_i^{(j)}-F_i^{(j)}$ and $Z_i^{(j)}$ are linearly equivalent. By [7, prop. 7.10], we can find an ample $H^{\prime} \in$ Div$(Y)_{K}$ so that $p^*H - H^{\prime}$ is effective with support contained in Exc$(p)$. We write $g_j^*D_i \equiv \sum_{m \leq s } a^{(j)}_{mi}F_m^{(j)}$ for $i=1,...,r$, with $A^{(j)}:=(a^{(j)}_{mi})_{m,i}$ the corresponding $s \times r$ matrix. We also write $p_*F_i^{(j)} \equiv \sum_{l \leq r } b^{(j)}_{li}D_l$ for $i=1,...,s,$ with $B^{(j)}:=(b^{(j)}_{li})_{l,i}$ the corresponding $r \times s$ matrix. We see that $B^{(j)}A^{(j)}$ is a matrix representing $f_j^*$ with respect to the basis $D_1,...,D_r$. 
Let us fix some notation: $\vec{D}:=(D_1,...,D_r), \vec{F}^{(j)}:=(F_1^{(j)},..., F_s^{(j)}), \vec{Z}^{(j)}:=(Z_1^{(j)},..., Z_s^{(j)}), \vec{c}:=(c_1,...,c_r), \newline E^{(j)}:=g_j^*H-<A^{(j)} \vec{c}, \vec{F}^{(j)}>, \vec{E^{\prime}}^{(j)}= ({E^{\prime}_1}^{(j)},...,{E^{\prime}_s}^{(j)}):=p_*\vec{F}^{(j)} - {B^{(j)}}^T\vec{D}. $ We note that $E^{(j)}$ and the ${E^{\prime}_i}^{(j)}$ are numerically trivial divisors for each $j$. We choose height functions $h_{D_1},..., h_{D_r}$ for $D_1,...,D_r$ respectively, and $h_H \geq 1$ with respect to $H$ such that $h_H \geq |h_{D_i}|$ for each $i \leq r$. All of these functions are independent of $\mathcal{F}$. For $i=1,...,r$ we define $h_{F^{(j)}_i}:=h_{D_i}\circ g_j$, height functions associated with $F_i^{(j)}$. For $i=r+1,...,s$ we fix height functions $h_{p_*F^{(j)}_i}$ with respect to the divisors $p_*F^{(j)}_i$, and we denote: $h_{\vec{D}}:=(h_{D_1},...,h_{D_r}), h_{\vec{F}^{(j)}}:=(h_{F_1^{(j)}},...,h_{F_s^{(j)}}), \newline h_{p_*\vec{F}^{(j)}}:=(h_{p_*F_1^{(j)}},...,h_{p_*F_s^{(j)}}), h_{\vec{E^{\prime}}^{(j)}}:=(h_{{E^{\prime}_1}^{(j)}},...,h_{{E^{\prime}_s}^{(j)}}) = h_{p_*\vec{F}^{(j)}}-{B^{(j)}}^Th_{\vec{D}}, \newline h_{\vec{Z}^{(j)}}:= (h_{Z_1^{(j)}},...,h_{Z_s^{(j)}})=h_{p_*\vec{F}^{(j)}}\circ p - h_{\vec{F}^{(j)}},$ where $h_{Z_i^{(j)}}$ and $h_{{E^{\prime}_i}^{(j)}}$ are height functions associated with the divisors $Z_i^{(j)}$ and ${E^{\prime}_i}^{(j)}$. Also, define \begin{center} $h_{E^{(j)}}:=h_H \circ g_j - <A^{(j)} \vec{c}, h_{\vec{F^{(j)}}}>$. \end{center} We can suppose that $h_{Z_i^{(j)}} \geq 0$ on $Y-Z_i^{(j)}$. We fix a height function $h_{H^{\prime}} \geq 1$ related to $H^{\prime}$, and a height function $h_{p^*H - H^{\prime}}$ related to $p^*H - H^{\prime}$ satisfying $h_{p^*H - H^{\prime}} \geq 0$ on $Y-$Exc$(p)$. 
Since $E^{(j)}$ and ${E_i^{\prime}}^{(j)}$ are numerically equivalent to zero, there exists a positive constant $C$ such that $|h_{{E}^{(j)}}| \leq C \sqrt{h_{H^{\prime}}}$ and $|h_{{E_i^{\prime}}^{(j)}}|\leq C \sqrt{h_H}$. Also, there exists a constant $\gamma \geq 0$ such that $h_H \circ p \geq h_{p^*H - H^{\prime}} + h_{H^{\prime}}- \gamma$ on $Y(\bar{K})$. Finally, we denote by $M(f_j)$ the matrix representing $f_j^*$, the linear map on NS$(X)_{\mathbb{R}},$ with respect to the basis $D_1,...,D_r$, by $||M(f_j)||$ the maximum absolute value of its coefficients (the sup norm of a matrix), and we set $||M(\mathcal{F}_n)||:= \max_{f \in \mathcal{F}_n}||M(f)||$. For $P \in X_{\mathcal{F}}(\bar{K})$ and $n \geq 1$, we compute:
\begin{align*}
\sum_{f \in \mathcal{F}_n} h_H(f(P)) &= \sum_{i \leq k}\sum_{f \in \mathcal{F}_{n-1}}h_H(f_i(f(P))) \\
&= \sum_{i \leq k,f \in \mathcal{F}_{n-1}}\big[(h_H \circ g_i)(p^{-1}f(P))-<A^{(i)}\vec{c}, h_{p_*F^{(i)}}\circ p>(p^{-1}f(P)) +<A^{(i)}\vec{c}, h_{p_*F^{(i)}}>(f(P))\big] \\
&= \sum_{i\leq k,f \in \mathcal{F}_{n-1}}\big[<A^{(i)}\vec{c}, h_{F^{(i)}}-h_{p_*F^{(i)}}\circ p>(p^{-1}f(P)) + h_{E^{(i)}}(p^{-1}f(P)) \\
&\qquad\qquad + <B^{(i)}A^{(i)}\vec{c},h_{\vec{D}}>(f(P)) +<A^{(i)}\vec{c},h_{{E^{\prime}}^{(i)}}>(f(P))\big] \\
&= \sum_{i\leq k,f \in \mathcal{F}_{n-1}}\big[<\vec{c},-h_{Z^{(i)}}>(p^{-1}f(P))+h_{E^{(i)}}(p^{-1}f(P)) \\
&\qquad\qquad + <B^{(i)}A^{(i)}\vec{c},h_{\vec{D}}>(f(P))+ <\vec{c},{A^{(i)}}^Th_{{E^{\prime}}^{(i)}}>(f(P))\big] \\
&\leq \sum_{i\leq k,f \in \mathcal{F}_{n-1}}\big[h_{E^{(i)}}(p^{-1}f(P)) +<B^{(i)}A^{(i)}\vec{c},h_{\vec{D}}>(f(P)) +<\vec{c},{A^{(i)}}^Th_{{E^{\prime}}^{(i)}}>(f(P))\big] \\
&\leq \sum_{i\leq k,f \in \mathcal{F}_{n-1}}\big[r^2||\vec{c}\,||\,||B^{(i)}A^{(i)}||\, h_H(f(P)) +r||\vec{c}\,||C\sqrt{h_H(f(P))} +C \sqrt{h_{H^{\prime}}(p^{-1}f(P))}\big] \\
&\leq \sum_{i\leq k,f \in \mathcal{F}_{n-1}}\big[r^2||\vec{c}\,||\,||B^{(i)}A^{(i)}||\, h_H(f(P))+r||\vec{c}\,||C\sqrt{h_H(f(P))} +C \sqrt{h_H(f(P)) + \gamma}\big],
\end{align*}
where the last inequality follows because $h_H \circ p \geq h_{p^*H - H^{\prime}} + h_{H^{\prime}}- \gamma$ on $Y(\bar{K})$ and $h_{p^*H - H^{\prime}} \geq 0$ on $Y-$Exc$(p)$.\newline Denoting $R:=\max_i \{1, r^2||\vec{c}\,||\,||B^{(i)}A^{(i)}||\}$ and dividing the whole inequality above by $R^n$, we obtain \begin{center} $\dfrac{1}{R^{n}}\sum_{f \in \mathcal{F}_n} h_H(f(P)) \leq k.\Big[\sum_{f \in \mathcal{F}_{n-1}} \dfrac{h_H(f(P))}{R^{n-1}}+r||\vec{c}\,||C \sum_{f \in \mathcal{F}_{n-1}} \sqrt{\dfrac{h_H(f(P))}{R^{n-1}}}+C\sum_{f \in \mathcal{F}_{n-1}} \sqrt{\dfrac{h_H(f(P))}{R^{n-1}}+ \gamma}\Big],$ \end{center} which, by lemma 5.6, implies that \begin{center} $\sum_{f \in \mathcal{F}_n} h_H(f(P))\leq C_1 k^n n^2 R^n h_H(P),$ \end{center} for a positive constant $C_1.$ Fixing a real number $\epsilon >0$, let $\delta_{\mathcal{F}}= \limsup_n \rho(\mathcal{F}_n)^{1/n}$. By 3.8, we can check that $\delta_{\mathcal{F}}\geq \lim_n ||M(\mathcal{F}_n)||^{1/n},$ and hence there is a positive integer $l$ such that $\dfrac{||M(\mathcal{F}_l)||}{(\delta_{\mathcal{F}}+ \epsilon)^l}\, r^2 ||\vec{c}\,|| < 1.$ We fix such an $l$ and apply the computations above to the system $\mathcal{F}_l$, using $\mathcal{F}_{ln}=(\mathcal{F}_l)_n$, to conclude that there exists a constant $C_1$ such that \begin{center} $\sum_{f \in \mathcal{F}_{nl}} h_H(f(P))\leq C_1 k^{ln} n^2\dfrac{R^{n}}{(\delta_{\mathcal{F}}+ \epsilon)^{nl}}(\delta_{\mathcal{F}}+ \epsilon)^{nl}h_H(P),$ \end{center} with $R \leq \max \{1, r^2||\vec{c}\,||\,||M(\mathcal{F}_l)||\}$. Thus, there is a constant $C_2$ such that $C_1 n^2\dfrac{R^n}{(\delta_{\mathcal{F}}+ \epsilon)^{nl}} \leq C_2$ for all $n$. So we find that \begin{center} $\sum_{f \in \mathcal{F}_{nl}} h^+_X(f(P)) \leq C_2. k^{nl}.(\delta_{\mathcal{F}} + \epsilon)^{nl} . h^+_X(P)$ \end{center} for all $n$, showing theorem 5.4. 
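To illustrate the role of Lemma 5.6 in the estimate above, the following Python sketch (an illustration only, not part of the proof) iterates the worst case permitted by the Cauchy--Schwarz step, $S_n = S_{n-1} + C_3 k^{(n-1)/2}\sqrt{S_{n-1}}$ with $C_3 = C_1(1+\sqrt{1+C_2})$, and checks the bound $\sum_{a \in \mathcal{A}_n} a \leq C k^{n-1} n^2 a_0$ numerically; the constant $C = \max\{C_3^2 k/4,\ 1+C_3\}$ is a safe choice that makes the induction step of the lemma go through.

```python
import math

def worst_case_sums(a0, k, C3, N):
    """Iterate S_n = S_{n-1} + C3 * k^((n-1)/2) * sqrt(S_{n-1}), the
    Cauchy-Schwarz worst case of the recursion in Lemma 5.6."""
    S = [a0]
    for n in range(1, N + 1):
        prev = S[-1]
        S.append(prev + C3 * k ** ((n - 1) / 2) * math.sqrt(prev))
    return S

def check_bound(a0, k, C3, N):
    # Safe constant: C >= C3^2 * k / 4 gives C3 <= 2*sqrt(C/k), and
    # C >= 1 + C3 handles the base case n = 1 (using a0 >= 1).
    C = max(C3 ** 2 * k / 4, 1 + C3)
    S = worst_case_sums(a0, k, C3, N)
    return all(S[n] <= C * k ** (n - 1) * n ** 2 * a0
               for n in range(1, N + 1))

assert check_bound(a0=1.0, k=2, C3=1.0, N=40)
assert check_bound(a0=5.0, k=3, C3=2.5, N=40)
```

The quadratic factor $n^2$ is exactly what absorbs the $2n/\sqrt{a_0}$ correction appearing in the induction step.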
\textit{Proof that theorem 5.4 implies theorem 5.1:} We proved that for any $\epsilon >0$ there are a positive integer $l$ and a positive constant $C$ so that \begin{center} $\sum_{f \in \mathcal{F}_{nl}} h^+_X(f(P)) \leq C. k^{nl}.(\delta_{\mathcal{F}} + \epsilon)^{nl} . h^+_X(P)$ \end{center} for all $n$ and all $P \in X_{\mathcal{F}}(\bar{K})$. Given an integer $n$, there are $q\geq 0$ and $0 \leq t<l$ such that $n=lq +t$. Let also $C_1$ be the constant of lemma 5.5. For $P \in X_{\mathcal{F}}(\bar{K})$, we calculate
\begin{align*}
\sum_{f \in \mathcal{F}_n} h^+_X(f(P)) &\leq C. k^{lq}.(\delta_{\mathcal{F}} + \epsilon)^{lq}. \sum_{f \in \mathcal{F}_t} h^+_X(f(P)) \\
&\leq C. k^{lq}.(\delta_{\mathcal{F}} + \epsilon)^{lq}.C_1^t.k^t. h^+_X(P) \\
&\leq CC_1^{l-1}k^n(\delta_{\mathcal{F}} + \epsilon)^{n}h^+_X(P),
\end{align*}
as we wanted to show. \newline \section{New canonical heights} In this final section, we show that the canonical height limit proposed and constructed by S. Kawaguchi in [10, theorem 1.2.1] converges for certain eigendivisor classes relative to algebraic equivalence, instead of the linear equivalence case treated by Kawaguchi. 
The theorem is also an extension of theorem 5 of [12], where the eigensystem in the hypothesis consists of just one morphism.\newline {\bf Theorem 6.1:} \textit{Assume that $\mathcal{F}=\{f_1,...,f_k\}$ consists of morphisms $f_i:X \rightarrow X$, and let $D \in $Div$(X)_{\mathbb{R}}$ satisfy the algebraic relation} \begin{center} $\sum^k_{i=1} f^*_iD \equiv \beta D$\textit{ for some real number} $\beta >\sqrt{\delta_{\mathcal{F}}}k,$ \end{center} \textit{where $\equiv$ denotes algebraic equivalence in NS$(X)_{\mathbb{R}}.$ Then }\newline \textit{(a) For all $P \in X(\bar{K})$, the following limit converges:} \begin{center} $\hat{h}_{D,\mathcal{F}}(P)= \lim_{n \rightarrow \infty}\dfrac{1}{\beta^n}\sum_{ f \in \mathcal{F}_n} h_D(f(P)).$ \end{center} \textit{(b) The canonical height in (a) satisfies } \begin{center} $\sum^k_{i=1} \hat{h}_{D,\mathcal{F}}(f_i(P))=\beta \hat{h}_{D,\mathcal{F}}(P)$ and $\hat{h}_{D,\mathcal{F}}(P)= h_D(P) + O(\sqrt{h^+_X(P)}). $ \end{center} \textit{(c) If $\hat{h}_{D,\mathcal{F}}(P) \neq 0$, then $\underline{\alpha}_{\mathcal{F}}(P) \geq \beta/k.$}\newline \textit{(d) If $\hat{h}_{D,\mathcal{F}}(P) \neq 0$ and $\beta=\delta_{\mathcal{F}}k,$ then $\alpha_{\mathcal{F}}(P)= \delta_{\mathcal{F}}.$}\newline \textit{(e) Assume that $D$ is ample and that $K$ is a number field. Then} \begin{center} $\hat{h}_{D,\mathcal{F}}(P)=0 \iff P$ \textit{is preperiodic, i.e., has finite $\mathcal{F}$-orbit.} \end{center} \begin{proof} (a) Theorem 5.1 says that for every $\epsilon >0$ there is a constant $C_1=C_1(X,h_X,\mathcal{F}, \epsilon)$ such that \begin{center} $\sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq C_1.k^n. (\delta_{\mathcal{F}} + \epsilon)^n . 
h^+_X(P)$ for all $n \geq 0.$ \end{center} We are given that $\sum^k_{i=1} f^*_iD \equiv \beta D.$ Applying lemma 5.3 with $E=\sum^k_{i=1} f^*_iD - \beta D,$ we find a positive constant $C_2=C_2(D, \mathcal{F}, h_X)$ such that \begin{center} $|h_{\sum^k_{i=1} f^*_iD}(Q) - \beta h_D(Q)| \leq C_2 \sqrt{h^+_X(Q)} $ for all $Q \in X(\bar{K}).$ \end{center} Since we assumed that the $f_i$ are morphisms, standard functoriality of Weil heights gives \begin{center} $ h_{\sum^k_{i=1} f^*_iD} = \sum^k_{i=1} h_D \circ f_i + O(1),$ \end{center} so the above inequality can be reformulated as follows: \begin{center} (**) $| \sum^k_{i=1} h_D(f_i(Q)) - \beta h_D(Q)| \leq C_3 \sqrt{h^+_X(Q)} $ for all $Q \in X(\bar{K}).$ \end{center} For $N \geq M \geq 0$ we estimate a telescoping sum,
\begin{align*}
\Big|\beta^{-N} &\sum_{f \in \mathcal{F}_N} h_D(f(P)) - \beta^{-M} \sum_{f \in \mathcal{F}_M} h_D(f(P))\Big| \\
&=\Big|\sum^{N}_{n=M+1}\beta^{-n}\Big[ \sum_{f \in \mathcal{F}_n} h_D(f(P))- \beta \sum_{f \in \mathcal{F}_{n-1}} h_D(f(P))\Big]\Big| \\
&\leq \sum^{N}_{n=M+1}\beta^{-n}\Big|\sum_{f \in \mathcal{F}_n} h_D(f(P))- \beta \sum_{f \in \mathcal{F}_{n-1}} h_D(f(P))\Big| \\
&\leq \sum^{N}_{n=M+1}\beta^{-n} \sum_{f \in \mathcal{F}_{n-1}}\Big|\sum^k_{i=1} h_D(f_i(f(P))) - \beta h_D(f(P))\Big| \\
&\leq \sum^{N}_{n=M+1}\beta^{-n} \sum_{f \in \mathcal{F}_{n-1}} C_3 \sqrt{h^+_X(f(P))} && \text{by (**)} \\
&\leq \sum^{N}_{n=M+1}\beta^{-n}.k^{(n-1)/2}. C_3 . \sqrt{\sum_{f \in \mathcal{F}_{n-1}}h^+_X(f(P))} && \text{by Cauchy--Schwarz} \\
&\leq \sum^{N}_{n=M+1}\beta^{-n}.k^{n-1}. C_3 .C. (\delta_{\mathcal{F}} + \epsilon)^{(n-1)/2}.\sqrt{h_X^+(P)} && \text{by theorem 5.1} \\
&\leq CC_3 \sqrt{h_X^+(P)}\sum_{n=M+1}^{\infty} \Big[ \dfrac{k^2(\delta_{\mathcal{F}}+ \epsilon)}{\beta^{2}}\Big]^{n/2}.
\end{align*}
Moreover, \begin{center} $\sum_{n=M+1}^{\infty} \Big[ \dfrac{k^2(\delta_{\mathcal{F}}+ \epsilon)}{\beta^{2}}\Big]^{n/2} < \infty \iff \dfrac{k^2(\delta_{\mathcal{F}}+ \epsilon)}{\beta^{2}} < 1.$ \end{center} Since $\beta > \sqrt{\delta_{\mathcal{F}}k^2},$ we can choose $0< \epsilon < \frac{\beta^{2}}{k^2} - \delta_{\mathcal{F}}$, which implies $\frac{k^2(\delta_{\mathcal{F}}+ \epsilon)}{\beta^{2}} < 1$ and the desired convergence. We also obtain the following estimate (***): \begin{center} $ |\beta^{-N} \sum_{f \in \mathcal{F}_N} h_D(f(P)) - \beta^{-M} \sum_{f \in \mathcal{F}_M} h_D(f(P))| \leq CC_3\Big[ \dfrac{k^2(\delta_{\mathcal{F}}+ \epsilon)}{\beta^{2}}\Big]^{M/2} \sqrt{h_X^+(P)}.$ \end{center} (b) The formula \begin{center} $\sum^k_{i=1} \hat{h}_{D,\mathcal{F}}(f_i(P))=\beta \hat{h}_{D,\mathcal{F}}(P)$ \end{center} follows immediately from the limit defining $\hat{h}_{D,\mathcal{F}}$ in part (a). Next, letting $N \rightarrow \infty$ and setting $M=0$ in (***) gives \begin{center} $ |\hat{h}_{D,\mathcal{F}}(P)- h_D(P)|= O(\sqrt{h^+_X(P)}),$ \end{center} which completes the proof of (b). 
(c) We are assuming that $\hat{h}_{D,\mathcal{F}}(P) \neq 0.$ If $\hat{h}_{D,\mathcal{F}}(P) <0,$ we change $D$ to $-D,$ so we may assume $\hat{h}_{D,\mathcal{F}}(P)>0.$ Let $H \in $ Div$(X)$ be an ample divisor such that $H+D$ is also ample (this can always be arranged by replacing $H$ with $mH$ for a sufficiently large $m$). Since $H$ is ample, we may assume that the height function $h_H$ is non-negative. We compute
\begin{align*}
\sum_{f \in \mathcal{F}_n}h_{D+H}(f(P)) &= \sum_{f \in \mathcal{F}_n}h_{D}(f(P)) + \sum_{f \in \mathcal{F}_n}h_{H}(f(P)) + O(k^n) \\
&\geq \sum_{f \in \mathcal{F}_n}h_{D}(f(P)) + O(k^n) && \text{since } h_H \geq 0 \\
&= \sum_{f \in \mathcal{F}_n}\hat{h}_{D,\mathcal{F}}(f(P)) + O\Big( \sum_{f \in \mathcal{F}_n}\sqrt{h^+_{X}(f(P))} \Big) && \text{by (b)} \\
&=\beta^{n}\hat{h}_{D,\mathcal{F}}(P) + O\Big( \sum_{f \in \mathcal{F}_n}\sqrt{h^+_{X}(f(P))} \Big) && \text{by (b)} \\
&\geq \beta^{n}\hat{h}_{D,\mathcal{F}}(P) + O\Big( k^{n/2}\sqrt{\sum_{f \in \mathcal{F}_n}h^+_{X}(f(P))} \Big) && \text{by Cauchy--Schwarz} \\
&= \beta^{n}\hat{h}_{D,\mathcal{F}}(P) + O\Big( \sqrt{Ck^{2n}(\delta_{\mathcal{F}} + \epsilon)^nh_X^+(P)}\Big) && \text{by theorem 5.1.}
\end{align*}
This estimate holds for every $\epsilon >0$, where $C$ depends on $\epsilon.$ Using the assumption that $\beta > \sqrt{\delta_{\mathcal{F}}}k$, we can choose $\epsilon >0$ such that $k^2(\delta_{\mathcal{F}} + \epsilon) < \beta^{2}.$ This gives \begin{center} $\sum_{f \in \mathcal{F}_n}h_{D+H}(f(P)) \geq \beta^{n}\hat{h}_{D,\mathcal{F}}(P) + o(\beta^n),$ \end{center} so taking $n^{th}$ roots, using the assumption that $\hat{h}_{D,\mathcal{F}}(P) >0,$ and letting $n \rightarrow \infty$ yields \begin{center} $\underline{\alpha}_{\mathcal{F}}(P)=\liminf_{n \rightarrow \infty} \dfrac{1}{k}\Big\{\sum_{f \in \mathcal{F}_n}h_{D+H}(f(P)) \Big\}^{1/n} \geq \dfrac{\beta}{k}.$ \end{center} (d) From (c) we get that $\underline{\alpha}_{\mathcal{F}}(P) \geq \dfrac{\beta}{k} = \dfrac{\delta_{\mathcal{F}}.k}{k}=\delta_{\mathcal{F}},$ while corollary 5.2 gives $\bar{\alpha}_{\mathcal{F}}(P) \leq \delta_{\mathcal{F}}.$ Hence the limit defining $\alpha_{\mathcal{F}}(P)$ exists and is equal to $\delta_{\mathcal{F}}.$ (e) First suppose that $\# \mathcal{O}_{\mathcal{F}}(P) < +\infty.$ Since $D$ is ample and the orbit of $P$ is finite, we have that $h_D \geq 0$, $\hat{h}_{D,\mathcal{F}}(P) \geq 0$, and there is a constant $C>0$ such that $h_D(f(P)) \leq C $ for all $f \in \cup_{l \geq0} \mathcal{F}_l$. This gives \begin{center} $ |\hat{h}_{D,\mathcal{F}}(P)| \leq \lim_{n \rightarrow \infty}\dfrac{1}{\beta^n}\sum_{ f \in \mathcal{F}_n} |h_D(f(P))| \leq \lim_{n \rightarrow \infty} C.\dfrac{k^n}{\beta^n}=0,$ \end{center} since $\beta > k.$ For the other direction, suppose that $\hat{h}_{D,\mathcal{F}}(P)=0.$ Then for any $n \geq 0$ and $g \in \mathcal{F}_n,$ we apply part (b) to obtain \begin{center} $0=\beta^n\hat{h}_{D,\mathcal{F}}(P)=\sum_{f \in \mathcal{F}_n}\hat{h}_{D,\mathcal{F}}(f(P)) \geq \hat{h}_{D,\mathcal{F}}(g(P)) \geq h_D(g(P)) - c\sqrt{h_D(g(P))}.$ \end{center} This gives $h_D(g(P)) \leq c^2,$ where $c$ does not depend on $P$ or $n.$ This shows that $\mathcal{O}_{\mathcal{F}}(P)$ is a set of bounded height with respect to an ample height. Since $\mathcal{O}_{\mathcal{F}}(P)$ is contained in $X(K(P))$ and since we have assumed that $K$ is a number field, we conclude that $\mathcal{O}_{\mathcal{F}}(P)$ is finite. \end{proof} {\bf Remark 6.2:} As pointed out in remark 29 of [12], when $f_1,...,f_k$ are morphisms there is always a divisor class $D \in$ NS$(X)_{\mathbb{R}}$ such that $\sum^k_{i=1} f^*_iD \equiv \beta D$, where $\beta$ is the spectral radius of the linear map $\sum_{i \leq k} A(f_i)$ on NS$(X)_{\mathbb{R}}$. It remains to check whether it satisfies $\beta >k\sqrt{\delta_{\mathcal{F}}}$. This works for the non-trivial examples 3.3, 3.4, 3.5 and 3.6, where the above height coincides with the height constructed by Kawaguchi and Silverman.
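As a toy illustration of the limit in Theorem 6.1(a) (our own example, chosen for checkability and not taken from [10] or [12]): on $X=\mathbb{P}^1$ with $\mathcal{F}=\{x \mapsto x^2,\ x \mapsto x^3\}$ and $D$ the hyperplane class, we have $\sum_i f_i^*D \equiv (2+3)D$, so $\beta = 5$, while $k=2$; taking the standard height $h([x:1]) = \log|x|$ on integers $x \geq 1$ and the point $P=2$, the normalised sums $\beta^{-n}\sum_{f \in \mathcal{F}_n} h(f(P))$ are already constant, so $\hat{h}_{D,\mathcal{F}}(2) = \log 2$.

```python
import math

# Points of the form 2^e are tracked by their exponent e: the maps
# x -> x^2 and x -> x^3 act by e -> 2e and e -> 3e, and h(2^e) = e*log 2.
degrees = [2, 3]
beta = sum(degrees)       # sum_i f_i^* D = (2+3) D on P^1, so beta = 5
log2 = math.log(2)

exponents = [1]           # start at P = 2 = 2^1
for n in range(1, 7):
    # one composition step: F_n-values arise from the F_{n-1}-values
    exponents = [d * e for d in degrees for e in exponents]
    S_n = sum(e * log2 for e in exponents)        # sum_{f in F_n} h(f(P))
    assert abs(S_n / beta ** n - log2) < 1e-9     # beta^{-n} S_n = log 2
```

Here the telescoping error term of the proof vanishes identically because the relation $\sum_i h(f_i(Q)) = \beta\, h(Q)$ holds exactly, not just up to $O(\sqrt{h^+})$.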
\section{Algorithms} \label{sec:algorithms} The feasibility conditions formulated above enable us to develop two methods for addressing optimization task~\eqref{eq:optA}: The first, derived from condition $\textsc{C}_1^*$, leads to a \emph{mixed-integer linear program (MILP)} that finds the smallest feasible sets $\mathcal{C}$ and $\mathcal{M}$, provided that $\mathbf{M} = \mathbf{I}$. Since $\textsc{C}_1^*$ is necessary for $\textsc{C}'_1$ but not sufficient, the obtained sets $\mathcal{C}$ and $\mathcal{M}$ may be too small to be feasible. While we often obtained feasible results anyway, the algorithm can also be used to generate a good initial solution for the second approach. The second method for solving problem~\eqref{eq:optA} is designed for all possible measurement matrices $\mathbf{M}$. It is a \emph{greedy} procedure based on hill climbing (HC) and condition $\textsc{C}_2$. Recalling that condition $\textsc{C}_2$ is sufficient for $\textsc{C}_1$ but not necessary, the obtained sets $\mathcal{C}$ and $\mathcal{M}$ may possibly be too large, but are guaranteed to be feasible. \subsection{MILP-Based Approach} \label{subsec:milpSearch} In this section, we develop a MILP for finding the smallest feasible sets $\mathcal{C}$ and $\mathcal{M}$ based on condition $\textsc{C}_1^*$, provided that $\mathbf{M} = \mathbf{I}$. The key is to formulate condition $\textsc{C}_1^*$ as a set of linear inequalities that holds for all choices of sets $\mathcal{C}$ and $\mathcal{M}$. To this end, consider the binary decision variables $\vlu_{\slc} \in \{0,1\}^N$ and $\vlu_{\slm} \in \{0,1\}^N$, defined element-wise as \begin{equation*} u_{\mathrm{c} j} = \begin{cases} 1 & j \in \mathcal{C} \\ 0 & \text{else} \end{cases} , \ u_{\mathrm{m} j} = \begin{cases} 1 & j \in \mathcal{M} \\ 0 & \text{else} \end{cases}, \end{equation*} for $j \in \{1,...,N\}$. The decision variables $\vlu_{\slc}$ and $\vlu_{\slm}$ encode the elements of $\mathcal{C}$ and $\mathcal{M}$, respectively. 
Finding the smallest number of elements of $\mathcal{C}$ and $\mathcal{M}$ is thus equivalent to minimizing the cost $\|\vlu_{\slc}\|_1 + \gamma \|\vlu_{\slm}\|_1$. Let $\tilde{\vlx}^i \in \bsym{\mcX}$, $i = 1,\ldots,K$, be defined element-wise as \begin{equation*} \tilde{x}^i_j = \begin{cases} \overline{x}_j & A_{ij} \geq 0 \\ \underline{x}_j & \text{else} \end{cases}. \end{equation*} $\tilde{\vlx}^i$ is the $\mathbf{A}^i$-optimal analytical solution of~\eqref{eq:maxAfi} for the case when all variables are assumed to be free. Moreover, for any given set of controlled variables $\mathcal{C}$, the elements of $C^A(\bsym{\mcX}_\slf)$ can be identified with $\tilde{\vlx}_\slf^i = \tilde{\vlx}^i \circ (\bsym{1} - \vlu_{\slc})$, where $\bsym{1}$ is a vector of ones of appropriate dimension and $\circ$ denotes the Hadamard product. Since we assume here that $\mathbf{M} = \mathbf{I}$, we can further partition the free variables into monitored and unmonitored variables, i.e., we can write $\bsym{1} - \vlu_{\slc} = \vlu_{\slm} + \vlu_{\slu}$, with $\vlu_{\slu} \in \{0,1\}^N$ being the binary vector that encodes the unmonitored variables. The $\mathbf{A}^i$-optimal corners of $\bsym{\mcX}_\slf$ can then be identified with $\tilde{\vlx}^i \circ \vlu_{\slu}$, $i=1,\ldots,K$. In analogy to $\tilde{\vlx}^i \circ \vlu_{\slm}$, a vector of length $N$ whose non-monitored entries are zero, we now consider an associated control vector $\tilde{\vlx}_\slc^i \in \bsym{\mcX}$ for which \begin{equation*} \underline{\mathbfit{x}} \circ \vlu_{\slc} \leq \tilde{\vlx}_\slc^i \leq \overline{\mathbfit{x}} \circ \vlu_{\slc}, \end{equation*} i.e., $\tilde{\vlx}_\slc^i$ is a vector of length $N$ whose non-controlled entries are zero.
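The analytic corner solutions $\tilde{\vlx}^i$ and their masked versions $\tilde{\vlx}^i \circ (\bsym{1} - \vlu_{\slc})$ can be sketched in a few lines of numpy; all matrices below are hypothetical illustration data:

```python
import numpy as np

def optimal_corners(A, x_lo, x_hi):
    """Row-wise A^i-optimal corners of the box [x_lo, x_hi]:
    pick x_hi[j] where A[i, j] >= 0 and x_lo[j] otherwise."""
    return np.where(A >= 0, x_hi, x_lo)  # shape (K, N)

# Hypothetical data: K = 2 constraints, N = 3 variables
A = np.array([[1.0, -2.0, 0.5],
              [-1.0, 1.0, 1.0]])
x_lo = np.array([-1.0, -1.0, -1.0])
x_hi = np.array([2.0, 2.0, 2.0])

X_tilde = optimal_corners(A, x_lo, x_hi)   # rows are the corners x~^i
u_c = np.array([0, 1, 0])                  # variable 1 is controlled
X_f = X_tilde * (1 - u_c)                  # zero out the controlled entries
```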
Condition $\textsc{C}_1^*$ states that for all $i = 1,\ldots,K$, the control vector $\tilde{\vlx}_\slc^i$ for the $\mathbf{A}^i$-optimal corner $\tilde{\vlx}_\slf^i$ of $\bsym{\mcX}_\slf$ should be valid and identical to the control vector for all other corners in $C^A(\bsym{\mcX}_\slf)$ that cannot be distinguished given the measurements. We can thus consider only the worst case of the unknown elements and write compactly \begin{equation*}\begin{aligned} \mathbf{A} \tilde{\vlx}_\slc^i + \tilde{\mxA}_\slm^i \vlu_{\slm} + \obar{\mxA}_\slu \vlu_{\slu} \leq \mathbfit{b}, \end{aligned}\end{equation*} with $\tilde{\mxA}_\slm^i$ and $\obar{\mxA}_\slu$ defined element-wise, for $k = 1,\ldots,K$, as \begin{equation*}\begin{aligned} \tilde{A}^i_{\mathrm{m} , kj} = A_{kj} \tilde{x}^i_j, \quad \overline{A}_{\mathrm{u} , kj} = A_{kj} \tilde{x}^k_j. \end{aligned}\end{equation*} Thus, the mixed-integer linear program that solves~\eqref{eq:optA} when $\mathbf{M} = \mathbf{I}$ reads \beql{MILPMI} &\min_{\substack{\vlu_{\slc},\vlu_{\slm},\vlu_{\slu}, \tilde{\vlx}_\slc^i}} {\|\vlu_{\slc}\|_1 + \gamma \|\vlu_{\slm}\|_1} \\ \text{s.t. } & \mathbf{A} \tilde{\vlx}_\slc^i + \tilde{\mxA}_\slm^i \vlu_{\slm} + \obar{\mxA}_\slu \vlu_{\slu} \leq \mathbfit{b}, & \forall i = 1,\ldots,K,\\ & \underline{\mathbfit{x}} \circ \vlu_{\slc} \leq \tilde{\vlx}_\slc^i \leq \overline{\mathbfit{x}} \circ \vlu_{\slc}, & \forall i = 1,\ldots,K,\\ & \vlu_{\slc}+\vlu_{\slm}+\vlu_{\slu} = \bsym{1}. \eeq \begin{remark} \label{reductionA} Note that, particularly in large-scale applications, there may be several constraints, i.e., rows of $\mathbf{A}$ and corresponding entries of $\mathbfit{b}$, that cannot be violated for any realization of $\mathbfit{x}$. Hence, when optimizing~\eqref{eq:MILPMI}, we only take into account the rows of $\mathbf{A}$ for which a violation of~\eqref{eq:X*} is possible, i.e., where $\mathbf{A}^i \tilde{\vlx}^i - \mathbfit{b}^i > 0$.
This preprocessing is also utilized by the greedy search proposed below. \end{remark} \subsection{Greedy Approach} \label{subsec:hill} In this section, we first show how to check condition $\textsc{C}_2$ efficiently via a linear program (LP) for fixed sets $\mathcal{C}$ and $\mathcal{M}$. Thereafter, we describe an iterative algorithm that chooses and adapts these sets in order to find minimal feasible sets. For given $\mathcal{C}$ and $\mathcal{M}$, condition $\textsc{C}_2$ requires checking whether there exists a valid affine-linear control law that makes the system feasible for every possible value $\vlx_\slf \in \bsym{\mcX}_\slf$. More precisely, there should exist an affine-linear control law defined via $\mathbf{S}$ and $\mathbfit{w}$ such that for all $\vlx_\slf\in \bsym{\mcX}_\slf$ we have \beql{SM1} \underbrace{ \begin{bmatrix} \mxA_\slc\mathbf{S}\mxM^\slm_\slf + \mxA_\slf \\ \mathbf{S}\mxM^\slm_\slf\\ -\mathbf{S}\mxM^\slm_\slf \end{bmatrix}}_{\hat{\mxA}(\mathbf{S})} \vlx_\slf + \underbrace{\begin{bmatrix} \mxA_\slc \\ \mathbf{I} \\ -\mathbf{I} \end{bmatrix}}_{\mathbf{F}} \mathbfit{w} - \underbrace{\begin{bmatrix} \mathbfit{b} \\ \obar{\vlx}_\slc \\ -\ubar{\vlx}_\slc \end{bmatrix}}_{\mathbfit{l}} \leq \eta \underbrace{\begin{bmatrix} \bsym{1} \\ \cdot \\ \cdot \end{bmatrix}}_{\mathbfit{v}}, \eeq where we introduce $\eta \in \mathbb{R}$ as an indicator of how far the system is from being infeasible. A control law is valid if $\eta \leq 0$. To tackle condition~\eqref{eq:SM1} for all $\vlx_\slf\in\bsym{\mcX}_\slf$, we only need to consider the maximum of the left-hand side expression. Let $\hat{K} = K + 2|\mathcal{C}|$ be the number of rows of $\hat{\mxA}(\mathbf{S})$ and $N_\mathrm{f} = N - |\mathcal{C}|$ the number of free variables.
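The box maximization used in this step rests on a simple fact: the maximum of $\mathbfit{a}^\mathsf{T}\vlx$ over a box $\underline{\mathbfit{x}} \leq \vlx \leq \overline{\mathbfit{x}}$ is attained at a corner and equals $\sum_j \max(a_j \underline{x}_j, a_j \overline{x}_j)$. A minimal sketch with hypothetical data, cross-checked against brute-force corner enumeration:

```python
import numpy as np
from itertools import product

def box_max(a, x_lo, x_hi):
    """Maximize a @ x over the box x_lo <= x <= x_hi.

    The maximum is attained at a corner and equals
    sum_j max(a_j * x_lo_j, a_j * x_hi_j)."""
    return np.maximum(a * x_lo, a * x_hi).sum()

# Hypothetical data
a = np.array([2.0, -1.0, 0.0])
x_lo = np.array([-1.0, -1.0, -1.0])
x_hi = np.array([3.0, 2.0, 2.0])

# Cross-check against brute-force enumeration of all 2^3 corners
brute = max(a @ np.array(c) for c in product(*zip(x_lo, x_hi)))
assert np.isclose(box_max(a, x_lo, x_hi), brute)
```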
We can introduce an upper bound on $\hat{\mxA}(\mathbf{S})\vlx_\slf$ via a matrix $\mathbf{H} \in \mathbb{R}^{\hat{K} \times N_\mathrm{f}}$, whose entries fulfill \beql{SM5} H_{ij} \geq \hat{A}_{ij}(\mathbf{S}) \overline{x}_{\mathrm{f} j}, \\ H_{ij} \geq \hat{A}_{ij}(\mathbf{S}) \underline{x}_{\mathrm{f} j}, \eeq for all $i = 1,\ldots,\hat{K}$ and $j = 1,\ldots,N_\mathrm{f}$. The upper bound on $\hat{\mxA}(\mathbf{S})\vlx_\slf$ is then given by $\mathbf{H}\bsym{1}$ and condition \eqref{eq:SM1} is equivalent to \beql{SM4} \mathbf{H} \bsym{1} + \mathbf{F} \mathbfit{w} - \mathbfit{l} \leq \eta \mathbfit{v}. \eeq Putting these results together allows us to compute the minimum possible value of $\eta$ for given $\mathcal{M}$ and $\mathcal{C}$ via the following linear program: \beql{optTheta} & \min_{\substack{\eta, \mathbf{H}, \mathbfit{w}, \mathbf{S}}} \; \eta \\ \text{s.t. } & \mathbf{H} \bsym{1} + \mathbf{F} \mathbfit{w} - \mathbfit{l} \leq \eta \mathbfit{v},\\ & H_{ij} \geq \hat{A}_{ij}(\mathbf{S}) \overline{x}_{\mathrm{f} j}, \forall i = 1,\ldots,\hat{K}, \forall j = 1,\ldots,N_\mathrm{f},\\ & H_{ij} \geq \hat{A}_{ij}(\mathbf{S}) \underline{x}_{\mathrm{f} j}, \forall i = 1,\ldots,\hat{K}, \forall j = 1,\ldots,N_\mathrm{f}. \eeq The algorithm described above for testing the validity of $\textsc{C}_2$ for fixed $\mathcal{C}$ and $\mathcal{M}$ can now be used as a subroutine to minimize over the sets $\mathcal{C}$ and $\mathcal{M}$ as well. To do this, we proceed iteratively from initial sets $\mathcal{C}$ and $\mathcal{M}$, adapting them one element at a time. Since we want to measure the optimization progress also for infeasible combinations of $\mathcal{C}$ and $\mathcal{M}$, we extend the minimization objective to \beql{costHC} J(\mathcal{C},\mathcal{M}) = |\mathcal{C}| + \gamma|\mathcal{M}| + \mu \max(\eta, 0), \eeq where $\eta$ is the feasibility indicator obtained from solving problem~\eqref{eq:optTheta}.
$\mu > 0$ is a weighting factor that penalizes the infeasibility of $\mathcal{C}$ and $\mathcal{M}$. We choose $\mu \gg 1$ to steer the iteration quickly towards feasible solutions. The cost function~\eqref{eq:costHC} is minimized via a greedy hill climbing procedure. In each iteration we compute the objective value for all sets $\mathcal{M}'$ or $\mathcal{C}'$ that can be generated by adding one element to either $\mathcal{M}$ or $\mathcal{C}$. We then choose the step which yields the largest improvement of the objective value~\eqref{eq:costHC}. As soon as the sets of controllers and measurements are feasible, we stop the iteration. It is well known that the solution of such a greedy approach depends on the choice of the starting point. A natural option is to start with empty sets, selecting the most important controllers and measurements during the first iterations. Alternatively, we propose to use the MILP formulation~\eqref{eq:MILPMI} to generate an initial guess. More specifically, we first solve the MILP~\eqref{eq:MILPMI} for $\mathbf{M} = \mathbf{I}$. We then use the found controller set $\mathcal{C}$ as a starting point for the greedy approach, while disregarding the found measurements; instead, we start with an empty $\mathcal{M}$. In this way, the critical controllers are already identified, while measurements resulting from a general $\mathbf{M}$, which potentially allow for more compact control systems than identity measurements, can still be integrated. \begin{remark} Since general MILP has exponential worst-case time complexity, this is an upper bound on the complexity of our first approach \eqref{eq:MILPMI}. In contrast, LP as used in our second approach \eqref{eq:optTheta} is known to have polynomial worst-case time complexity, and the hill climbing procedure only adds polynomial factors. However, for the realistic examples discussed in the next section we found the MILP approach to be more efficient than the hill climbing procedure.
The latter's computation time depends strongly on the starting point. For the examined medium to large problem instances, it allowed finding small solutions for $\mathcal{M}$ and $\mathcal{C}$ that are guaranteed to be feasible, with very reasonable effort. For cases when $\mathbf{M} = \mathbf{I}$, the MILP solution could often be verified to be feasible (and thus also optimal) by solving the small LP~\eqref{eq:optTheta} only once, without further adaptation of $\mathcal{M}$ or $\mathcal{C}$. We thus see both algorithms as important contributions for solving real control design problems with state constraints. \end{remark} \section{Feasible Sets of Controllers and Measurements} \label{sec:optCM} \subsection{Feasibility Conditions \& Problem Statement} The power flow model of the previous section can be abstracted as follows: let $\mathbfit{x} \in \bsym{\mcX} \subseteq \mathbb{R}^N$ be the variables that can be set externally. $\bsym{\mcX}$ is assumed to be a product of intervals, i.e., $\bsym{\mcX} = [\underline{x}_1,\overline{x}_1] \times \cdots \times [\underline{x}_N,\overline{x}_N]$. The variables $\mathbfit{x}$ can be partitioned into the \emph{controlled} variables $\vlx_\slc$, for which we will design a controller in the following, and the \emph{free} variables $\vlx_\slf$, which are left to be determined either by other users, cooperative or malicious, by fixed external conditions such as the weather, or at random. The index set of the controlled variables is denoted by $\mathcal{C}$ and the corresponding partitions of $\bsym{\mcX}$ by $\bsym{\mcX}_\slc$ and $\bsym{\mcX}_\slf$.
We assume that the variables $\mathbfit{x}$ determine the system state uniquely and that the set of feasible system states $\bsym{\mcX}^*$ can be characterized via a set of linear inequalities, \beql{X*} \bsym{\mcX}^* = \{ \mathbfit{x} \in \boldsymbol{\mathcal{X}} : \mathbf{A} \mathbfit{x} \leq \mathbfit{b} \}, \eeq where $\mathbf{A} \in \mathbb{R}^{K \times N}$ and $\mathbfit{b} \in \mathbb{R}^K$. Similarly, we assume a set of possible measurements $\mathbfit{y} \in \mathbb{R}^{L}$ to be linearly related to the system state, i.e., $\mathbfit{y} = \mathbf{M} \mathbfit{x}$ with $\mathbf{M} \in \mathbb{R}^{L \times N}$. We partition these possible measurements into the \emph{monitored} measurements $\vly^\slm$, which are used as inputs to the control law, and the \emph{unmonitored} measurements $\vly^\slu$, which are not required for the controller and may or may not be recorded in practice. The index set of the monitored measurements is denoted by $\mathcal{M}$. The defined partitions of $\mathbfit{x}$ and $\mathbfit{y}$ allow us to partition the matrices $\mathbf{A}$ and $\mathbf{M}$ along their columns or rows as well, yielding $\mathbf{A}\mathbfit{x} = \mxA_\slc\vlx_\slc + \mxA_\slf\vlx_\slf$ and $\vly^\slm = \mxM^\slm \mathbfit{x} = \mxM^\slm_\slc\vlx_\slc + \mxM^\slm_\slf\vlx_\slf$. The aim of this paper is to determine the minimal set of controllers $\mathcal{C}$ and measurements $\mathcal{M}$ that allows for the design of a control law $\vlx_\slc(\vly^\slm)$ that can guarantee a feasible system state, independently of the state of the free variables $\vlx_\slf$. This can be formalized as follows. \begin{definition}[\textbf{Condition} $\textsc{C}_1$] \label{def:A} Sets $\mathcal{C}$ and $\mathcal{M}$ are feasible if \begin{align} \exists \vlx_\slc : \mxM^\slm_\slf(\bsym{\mcX}_\slf) \rightarrow \bsym{\mcX}_\slc \text{ s.t. 
} \forall \vlx_\slf \in \bsym{\mcX}_\slf: \\ \mxA_\slc \vlx_\slc(\vly^\slm_\slf) + \mxA_\slf \vlx_\slf \leq \mathbfit{b}, \nonumber \end{align} where $\vly^\slm_\slf = \mxM^\slm_\slf \vlx_\slf$. \end{definition} The idea behind this definition is that the control value $\vlx_\slc(\vly^\slm_\slf)$ chosen for $\vly^\slm_\slf$ should be valid for every $\vlx_\slf$ from which $\vly^\slm_\slf$ can originate. Note that we consider only control values for the steady state of the system in this paper; we do not examine whether and how this state can be reached from arbitrary initial conditions. Moreover, in order to simplify the notation of the involved sets, we have used only a part of $\vly^\slm$ as input to the control law $\vlx_\slc(\vly^\slm_\slf)$. However, since $\vly^\slm = \mxM^\slm_\slc \vlx_\slc + \vly^\slm_\slf$, one could easily rewrite the controller in the form $\vlx_\slc(\vly^\slm)$, i.e., directly using the measurements that are actually available to the controller. Since $\vlx_\slf$ uniquely determines $\vly^\slm_\slf$, we can also express the control law as $\vlx_\slc(\vlx_\slf)$. The formulation $\vlx_\slc(\vly^\slm_\slf)$ implies that $\vlx_\slc$ will attain the same value for all values of $\vlx_\slf$ that lead to the same measurements. We thus obtain the following equivalent condition. \begin{definition}[\textbf{Condition} $\textsc{C}_1'$] \label{def:B} Sets $\mathcal{C}$ and $\mathcal{M}$ are feasible if \begin{align} \exists \vlx_\slc : \bsym{\mcX}_\slf \rightarrow \bsym{\mcX}_\slc \text{ s.t. } \forall \vlx_\slf,\vlx_\slf' \in \bsym{\mcX}_\slf: \\ \mxA_\slc \vlx_\slc(\vlx_\slf) + \mxA_\slf \vlx_\slf \leq \mathbfit{b} \; \wedge \nonumber \\ \vlx_\slc(\vlx_\slf) = \vlx_\slc(\vlx_\slf') \text{ if } \mxM^\slm_\slf\vlx_\slf = \mxM^\slm_\slf\vlx_\slf'. \nonumber \end{align} \end{definition} These definitions allow us to state the optimization task we aim to solve in this work.
\begin{problem} Find the set of controllers $\mathcal{C}$ and measurements $\mathcal{M}$ that solves \beql{optA} & \min_{\mathcal{C},\mathcal{M}}\;{|\mathcal{C}| + \gamma |\mathcal{M}|} \\ \text{s.t.}\; & \mathcal{C} \text{ and } \mathcal{M}\text{ are feasible w.r.t. $\textsc{C}_1$ or $\textsc{C}_1'$}. \eeq $|\mathcal{C}|$ and $|\mathcal{M}|$ denote the cardinalities of $\mathcal{C}$ and $\mathcal{M}$. The cost of placing a sensor is weighted by $0 \leq \gamma \leq 1$, since it will typically be smaller than the cost of implementing a full actuator. One could additionally incorporate into the objective the varying efforts and costs for controlling certain elements or acquiring certain measurements. Instead of just weighting the total numbers of controllers and measurements, we would then assign an individual weight to each element separately. While we do not pursue this idea below, all algorithms could be adapted straightforwardly. \end{problem} \subsection{Related Conditions} \label{subsec:relatedCons} Verifying conditions $\textsc{C}_1$ and $\textsc{C}_1'$ based on their definitions requires checking infinitely many values of $\vly^\slm_\slf$ or $\vlx_\slf$, respectively. We therefore derive two related conditions that are testable with finite computational resources. The relation of all derived conditions is presented in \figref{relation}. In the next section we then show how to exploit them to solve our problem efficiently. Condition $\textsc{C}_1$ requests the existence of a mapping $\vlx_\slc: \mxM^\slm_\slf(\bsym{\mcX}_\slf) \rightarrow \bsym{\mcX}_\slc$ yielding valid control values. One possibility is that this mapping is affine-linear. \begin{definition}[\textbf{Condition} $\textsc{C}_2$] \label{def:C_2} Condition $\textsc{C}_2$ is fulfilled if \begin{align} \exists \mathbf{S} \in \mathbb{R}^{|\mathcal{C}| \times |\mathcal{M}|}, \mathbfit{w} \in \mathbb{R}^{|\mathcal{C}|} \text{ s.t. 
} \forall \vlx_\slf \in \bsym{\mcX}_\slf: &\\ \vlx_\slc(\vly^\slm_\slf) \in \bsym{\mcX}_\slc \; \wedge \; \mxA_\slc \vlx_\slc(\vly^\slm_\slf) + \mxA_\slf \vlx_\slf &\leq \mathbfit{b}, \nonumber \end{align} where $\vly^\slm_\slf = \mxM^\slm_\slf \vlx_\slf$ and $\vlx_\slc(\vly^\slm_\slf) = \mathbf{S} \vly^\slm_\slf + \mathbfit{w}$. \end{definition} Condition $\textsc{C}_2$ is obviously sufficient for $\textsc{C}_1$. It is, however, not necessary, as can be shown by counterexamples in which piecewise-linear control laws allow for fewer sensors and controllers. The condition is testable with finite effort, as we show in \secref{algorithms}. The conditions presented so far are continuous in the sense that testing their validity requires checking an infinite set of possible realizations of $\vlx_\slf$ or $\vly^\slm_\slf$. However, since the possible values of $\vlx_\slf$ and $\vly^\slm_\slf$ are restricted to bounded polytopes, i.e., $\bsym{\mcX}_\slf$ and $\mxM^\slm_\slf(\bsym{\mcX}_\slf)$, we can derive a necessary condition for $\textsc{C}'_1$ based only on the \emph{corners} of these polytopes. In contrast to $\textsc{C}_2$, this necessary condition does not assume the control law $\vlx_\slc(\vly^\slm_\slf)$ to be affine-linear. \begin{definition}[\textbf{Corner}] $\mathbfit{z} \in \mathcal{Z}$ is an \emph{extreme point} or \emph{corner} of the convex set $\mathcal{Z}$ if there are no two distinct points $\mathbfit{z}_1,\mathbfit{z}_2 \in \mathcal{Z}$ and $\lambda \in (0,1)$ such that $\mathbfit{z} = \lambda \mathbfit{z}_1 + (1-\lambda)\mathbfit{z}_2$. \end{definition} \begin{figure} \centering \def\svgwidth{150pt} \input{conditions.pdf_tex} \caption{Relation of the desired conditions $\textsc{C}_1$ and $\textsc{C}_1'$ to the conditions $\textsc{C}_1^*$ and $\textsc{C}_2$, which are testable with finite resources.} \label{fig:relation} \end{figure} Denote by $C(\bsym{\mcX}_\slf)$ the set containing the corners of $\bsym{\mcX}_\slf$.
The number of corners of $\bsym{\mcX}_\slf$, denoted by $|C(\bsym{\mcX}_\slf)|$, is finite, but grows exponentially with the number of free variables. A condition based on all corners of $\bsym{\mcX}_\slf$ would therefore be computationally prohibitive for larger dimensions of $\bsym{\mcX}_\slf$. Instead, we focus on a subset of corners only, namely those that have the maximum impact on the constraints $\mathbf{A} \mathbfit{x} \leq \mathbfit{b}$. Denote this subset by $C^A(\bsym{\mcX}_\slf)$. Let $\mathbf{A}^i$ be the $i$-th row of $\mathbf{A}$, with $i \in \{1,\ldots,K\}$. Then, a point $\vlx_\slf$ belongs to $C^A(\bsym{\mcX}_\slf)$ if $\vlx_\slf \in C(\bsym{\mcX}_\slf)$ and if there exists $i \in \{1,\ldots,K\}$ such that $\vlx_\slf$ is an optimal solution of \beql{maxAfi} \max_{\vlx_\slf \in C(\bsym{\mcX}_\slf)}\;{\mxA_\slf^i\vlx_\slf}. \eeq \begin{remark} \label{rem:Bc2Bcrr} Note that the optimization problem \eqref{eq:maxAfi} defining the elements of $C^A(\bsym{\mcX}_\slf)$ can be solved analytically for row $\mathbf{A}^i$ as \beql{XfCA} x_{j} = \begin{cases} \overline{x}_{j} & A_{ij} \geq 0 \\ \underline{x}_{j} & \text{else} \end{cases}, j \not\in \mathcal{C}. \eeq The optimal values thus depend only on the signs of the corresponding elements of $\mathbf{A}$. In many cases the optimal vectors for different rows of $\mathbf{A}$ will therefore coincide, and the cardinality of $C^A(\bsym{\mcX}_\slf)$ is then even smaller than its maximum possible value $K$. \end{remark} \begin{definition}[\textbf{Condition} $\textsc{C}_1^*$] \label{def:C_1*} Condition $\textsc{C}_1^*$ is fulfilled if \begin{align} \exists \vlx_\slc : C^A(\bsym{\mcX}_\slf) \rightarrow \bsym{\mcX}_\slc \text{ s.t. } \forall \vlx_\slf,\vlx_\slf' \in C^A(\bsym{\mcX}_\slf): \\ \mxA_\slc \vlx_\slc(\vlx_\slf) + \mxA_\slf \vlx_\slf \leq \mathbfit{b} \; \wedge \; \nonumber \\ \vlx_\slc(\vlx_\slf) = \vlx_\slc(\vlx_\slf') \text{ if } \mxM^\slm_\slf\vlx_\slf = \mxM^\slm_\slf\vlx_\slf'. 
\nonumber \end{align} \end{definition} Conditions $\textsc{C}_1/\textsc{C}_1'$ straightforwardly imply $\textsc{C}_1^*$ since $C^A(\bsym{\mcX}_\slf) \subseteq \bsym{\mcX}_\slf$. The reverse is not always true, as can be shown by counterexample. However, we will show below that this condition can be exploited for a very efficient computation of approximate sets $\mathcal{C}$ and $\mathcal{M}$, at least for the case when the set of possible measurements consists of the power set points at each node, i.e., when $\mathbf{M}$ is an identity matrix of appropriate dimensions, here denoted by $\mathbf{I}$. For nodes with zero droop constant, e.g., typical loads or small-scale generators, the measurement of the power set points is equivalent to measuring nodal power injections. Minimizing the objective $|\mathcal{C}| + \gamma|\mathcal{M}|$ with respect to condition $\textsc{C}_1^*$ or $\textsc{C}_2$ will provide a lower or upper bound, respectively, for the optimal solution of problem \eqref{eq:optA}. In our experiments we found that for minimal sets $\mathcal{C}$ and $\mathcal{M}$ fulfilling $\textsc{C}_1^*$ it was often possible to determine valid affine-linear control realizations by testing $\textsc{C}_2$ for such sets, i.e., the upper and lower bounds coincided. In this case, $\mathcal{C}$ and $\mathcal{M}$ are optimal solutions of \eqref{eq:optA}. \section{Linear Power Flow} \label{sec:lpf} We analyze an electrical network with $N$ electrical buses connected by $T$ transmission lines under the common \textit{DC power flow} assumptions \cite{kundur1994power}.
The voltage phase angles $\bsym{\theta}\in\mathbb{R}^N$ determine the nodal active power injections $\mathbfit{p}_\textsc{I} \in \mathbb{R}^N$ and the active power line flows $\mathbfit{p}_\textsc{F} \in \mathbb{R}^T$ as \beql{p2theta} \mathbfit{p}_\textsc{I} = \mathbf{B}_\textsc{I} \bsym{\theta}, \quad \mathbfit{p}_\textsc{F} = {\mathbf{B}}_\textsc{F} {\bsym{\theta}}, \eeq where the entries of $\mathbf{B}_\textsc{I} \in \mathbb{R}^{N \times N}$ and $\mathbf{B}_\textsc{F} \in \mathbb{R}^{T \times N}$ are defined element-wise as $B_{\textsc{I},jk} = -b_{jk}$ if $j \neq k$, $B_{\textsc{I},jj} = \sum_{k} b_{jk}$ and $B_{\textsc{F},jk} = b_{jk}$, with $b_{jk}$ the susceptance of the line connecting buses $j$ and $k$. Without loss of generality, we assume that exactly one generator or load is connected to each bus, with an externally defined active power set point $x_i$. If the sum of the set points in the grid is not balanced, a \textit{droop-based primary control} scheme \cite{kundur1994power} adjusts the power injections $\mathbfit{p}_\textsc{I}$ by adapting the frequency until balance is achieved, such that in steady state we obtain \beql{x2p} \mathbfit{p}_\textsc{I} = \mathbfit{x} - \mathbfit{k} \Delta \omega. \eeq Here, $\mathbfit{k} \in \mathbb{R}^N$ represents the vector of droop constants, with $k_i \geq 0$ and $\sum_i k_i > 0$, and $\Delta \omega \in \mathbb{R}$ the frequency deviation with respect to its nominal value. This common setup implies that the measurable quantities $\mathbfit{p}_\textsc{I}$, $\mathbfit{p}_\textsc{F}$, and $\Delta \omega$ are linearly determined by the controllable quantities $\mathbfit{x}$. For connected graphs, the kernel of the Laplacian matrix $\mathbf{B}_\textsc{I}$ contains only the constant vectors, that is, a constant shift of the phase angles has no impact on $\mathbfit{p}_\textsc{I}$. We thus fix $\theta_1 = 0$ and delete the first column of $\mathbf{B}_\textsc{I}$ to obtain $\tilde{\mathbf{B}}_\textsc{I}$.
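The droop balance \eqref{eq:x2p} combined with the lossless DC model (which enforces $\bsym{1}^\mathsf{T}\mathbfit{p}_\textsc{I} = 0$) yields $\Delta\omega = \sum_i x_i / \sum_i k_i$. A minimal numpy sketch of this primary control balance, using hypothetical set points and droop constants:

```python
import numpy as np

def droop_steady_state(x, k):
    """Steady state of droop-based primary control: p_I = x - k * dw.

    The lossless DC model requires sum(p_I) = 0, which yields
    dw = sum(x) / sum(k)."""
    dw = x.sum() / k.sum()
    p_I = x - k * dw
    return p_I, dw

# Hypothetical 4-bus example: 1 MW surplus, total droop 10 MW/Hz
x = np.array([2.0, 1.0, -5.0, 3.0])   # set points (MW)
k = np.array([0.0, 0.0, 0.0, 10.0])   # droop constants (MW/Hz)
p_I, dw = droop_steady_state(x, k)
# dw = 0.1 Hz; only the droop-controlled unit at bus 4 is adjusted
```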
The remaining dimensions of $\bsym{\theta}$ are denoted by $\tilde{\bsym{\theta}}$. We similarly reduce $\mathbf{B}_\textsc{F}$ to $\tilde{\mathbf{B}}_\textsc{F}$. The image of $\tilde{\mathbf{B}}_\textsc{I}$ moreover contains all vectors with balanced nodal injections. To handle unbalanced set points $\mathbfit{x}$, we add $\mathbfit{k}$ as the last column. This lets us compute, for all $\mathbfit{x}$ and with $\cdot$ denoting zero entries, \beql{pL2p} \begin{bmatrix} \mathbfit{p}_\textsc{I} \\ \mathbfit{p}_\textsc{F} \\ \Delta \omega \end{bmatrix} = \begin{bmatrix} \tilde{\mathbf{B}}_\textsc{I} & \cdot \\ \tilde{\mathbf{B}}_\textsc{F} & \cdot \\ \cdot & 1 \end{bmatrix} \begin{bmatrix} \tilde{\bsym{\theta}} \\ \Delta \omega \end{bmatrix} = \begin{bmatrix} \tilde{\mathbf{B}}_\textsc{I} & \cdot \\ \tilde{\mathbf{B}}_\textsc{F} & \cdot \\ \cdot & 1 \end{bmatrix} \inv{\begin{bmatrix}\tilde{\mathbf{B}}_\textsc{I} & \mathbfit{k} \end{bmatrix}} \mathbfit{x}. \eeq In real systems the nodal injections $\mathbfit{p}_\textsc{I}$ will be limited above and below by the technical capabilities of the connected generator or load. Valid set points $\mathbfit{x}$ might be restricted to even smaller intervals than the $\mathbfit{p}_\textsc{I}$, to leave some margin for the power adjustments scheduled by the primary controller. Similarly, line power flows $\mathbfit{p}_\textsc{F}$ and the frequency deviation $\Delta \omega$ are typically subject to upper and lower bounds. \section{Outlook} \label{sec:conclusions} The theoretical framework and the algorithms developed in this work allow for the efficient identification of critical controllers and measurements in complex power systems with uncertain producers and consumers. Unlike previous work, we take the specific power limitations of lines, generators, and loads into account. This step strongly improves the applicability in practice, where our approach will help reduce control costs and effort and increase power systems' resilience.
While we have only considered active power in this work, the approach can straightforwardly be applied to linearized power flow models that also take reactive power and voltages into account. Developing a MILP formulation for condition $\textsc{C}_2$ is also possible, but our experiments so far have not yielded satisfying run times. \section{Introduction} Volatile renewable energies are transforming classical power grids with few large generators into complex cyber-physical networks. These contain a large number of distributed generators and controllable loads, and power lines are often operated close to their limits. In this context, we ask: What is the smallest set of generators and/or loads that must be controlled based on the values of a minimal number of measurements, such that (s.t.) the entire system state is feasible for all possible values of the remaining elements? Being able to identify the (optimally small) set of critical elements in complex power grids reduces the cost and effort of their control. Moreover, it is an important ingredient for reducing such systems' potentially high vulnerability with respect to (w.r.t.) natural disasters or cyber-attacks \cite{Abedi2019}, enhancing their operational resilience. An increased protection status could be mandated for the identified critical elements, to keep the number of outages and failures in this group at a minimum; see \cite{Yuan2016}, where the hardening of power systems to minimize system damage in case of disasters is examined. Our research question is an instance of the well-known \emph{optimal input/output selection problem}, also known as the \emph{optimal actuator/sensor placement problem}. Starting with classical work on controllability \cite{Kalman1960}, this problem has attracted long-term research attention, in particular for linear time-invariant systems. It has recently become very active again in the study of complex networks, see, e.g., \cite{Li2018}.
While most formulations of the problem are NP-hard due to their combinatorial nature, finding only the minimum set of actuators is possible in polynomial time \cite{Pequito2016}. This finding is based on structural controllability theory \cite{Lin1974} and can be used to develop distributed algorithms for finding the minimum number of controlled and measured nodes \cite{Li2018}. Structural controllability theory can also be used to analyze cyber-security aspects in distributed power grids \cite{Pasqualetti2015}, e.g., for evaluating the detectability and identifiability of hacked nodes. Another line of research aims at designing control structures that minimize the control effort, using controllability metrics derived from the controllability Gramian of the system \cite{Lindmark2018}. Many of the related input/output selection problems are submodular, which implies that greedy algorithms using these metrics, e.g., for the optimal placement of high-voltage direct current lines in a simplified model of the European power transmission network, have provable suboptimality bounds~\cite{Summers2016}. Time-varying minimal configurations of sensors and actuators can be computed with the help of semi-definite programming \cite{Taha2019}. All these works are valid for linear (dynamical, algebraic) systems without state or input/output restrictions. In this contribution, we propose an alternative, novel approach that is based only on the steady-state representation of the system, but considers constrained variables. This is an important step towards real applications, where power injections and line flows are always subject to physical limits. Our approach extends current work on the distributed control of power systems \cite{Molzahn2017}.
For instance, the robust optimal power flow algorithm of \cite{Mesanovic2018} allows computing set points and droop constants for some generators while guaranteeing feasible grid operation for all power injections of other, uncertain producers and consumers. While we use similar modeling, we focus on identifying the minimal sets of controllers and measurements that are required for computing such set points. The rest of the paper is organized as follows. \secref{lpf} introduces the employed linear power flow model. The feasibility of a given set of controllers and sensors is defined in \secref{optCM}. There, we also give a formal problem statement as well as further, computationally advantageous conditions for testing feasibility. In \secref{algorithms}, we exploit those conditions to develop two efficient algorithms that minimize the number of controllers and sensors. In \secref{results}, we apply the proposed algorithms to find the smallest number of controllers and sensors for 1) a simple microgrid consisting of 4 buses and 2) a modified version of the IEEE 118 bus test case. Finally, concluding remarks and an outlook on future research are provided in \secref{conclusions}. \section{Numerical Examples} \label{sec:results} The algorithms developed in Sections~\ref{subsec:milpSearch} and~\ref{subsec:hill} are now applied to find the minimal feasible configuration of controllers and measurements for two exemplary power systems. We first demonstrate our setup and typical effects on a simple microgrid of 4 buses connected in a line. Subsequently, a modified version of the IEEE 118 bus test case is addressed. The experiments were performed on an i5 notebook with 8 GB of RAM. The algorithms were implemented in Matlab R2018b, using YALMIP \cite{Lofberg2004} as modeling language and CPLEX 12.9 as LP and MILP solver.
\subsection{Simple Microgrid} \label{subsec:simpleGrid} \figref{simpleGrid} shows the considered microgrid consisting of three generators supplying a demand of 5 MW. It gives the topology of the grid together with the capacity limits of each transmission line and each generator/load. The generator located at bus 4 provides primary reserve, initially with a droop of 12 MW/Hz and later with 4 MW/Hz. The maximum allowed frequency deviation is $\pm 0.1$ Hz. We first assume that all transmission lines have a power transfer capacity of $\pm 10$ MW, which is adequate to avoid grid limitations. In scenario (d) we add an active line constraint in the middle. \begin{figure*}[h] \centering \begin{subfigure}[h]{\textwidth} \centering \includegraphics[width=0.5\columnwidth]{simpleGridCM1_V1} \hspace{3mm} \includegraphics[width=0.23\columnwidth]{simpleGridCM1_V1_Map_dF} \subcaption{ $\vlx_\slc = x_4; \; \vlx_\slf = \begin{bmatrix} x_1 \! & x_2 \! & x_3 \end{bmatrix}^\textsc{T}; \; \mathbf{S} = \emptyset; \; \mxM^\slm_\slf = \emptyset; \; \mathbfit{w} = 4.1.$ } \vspace{3mm} \label{subfig:simpleGrid1} \end{subfigure} \\ \begin{subfigure}[h]{\textwidth} \centering \includegraphics[width=0.5\columnwidth]{simpleGridCM2_V1} \hspace{3mm} \includegraphics[width=0.23\columnwidth]{simpleGridCM2_V1_Map_dF} \subcaption{ $\vlx_\slc = x_4; \; \vlx_\slf = \begin{bmatrix} x_1 \! & x_2 \! & x_3 \end{bmatrix}^\textsc{T}; \; \mathbf{S} = - 0.81 \begin{bmatrix} 1 & \! 1 \end{bmatrix}; \; \mxM^\slm_\slf = \begin{bmatrix} 1 \!& 0 \!& 0\\ 0 \!& 1 \!& 0 \end{bmatrix}; \; \mathbfit{w} = 5.$ } \vspace{3mm} \label{subfig:simpleGrid2} \end{subfigure} \\ \begin{subfigure}[h]{\textwidth} \centering \includegraphics[width=0.5\columnwidth]{simpleGridCM3_V1} \hspace{3mm} \includegraphics[width=0.23\columnwidth]{simpleGridCM2_V1_Map_dF} \subcaption{ $\vlx_\slc = x_4; \; \vlx_\slf = \begin{bmatrix} x_1 \! & x_2 \! 
& x_3 \end{bmatrix}^\textsc{T}; \; \mathbf{S} = -0.81; \; \mxM^\slm_\slf = \begin{bmatrix} 1 \!& 1 \!& 0 \end{bmatrix}; \; \mathbfit{w} = 5.$ } \vspace{3mm} \label{subfig:simpleGrid3} \end{subfigure} \\ \begin{subfigure}[h!]{\textwidth} \centering \includegraphics[width=0.5\columnwidth]{simpleGridCM4_V1} \hspace{3mm} \includegraphics[width=0.23\columnwidth]{simpleGridCM4_V1_Map_L23_dF} \subcaption{ $\vlx_\slc = \begin{bmatrix} x_1 \\ x_4 \end{bmatrix}; \vlx_\slf = \begin{bmatrix} x_2 \\ x_3 \end{bmatrix}; \mxM^\slm_\slf = \begin{bmatrix} 1 \!& 0 \end{bmatrix}; \mathbf{S} = - \begin{bmatrix} 0.98 \\ 0.34 \end{bmatrix}; \mathbfit{w} = \begin{bmatrix} 0.98 \\ 4.29 \end{bmatrix}.$ } \label{subfig:simpleGrid4} \end{subfigure} \caption{ Minimal sets of $\mathcal{C}$ and $\mathcal{M}$ for a simple microgrid. The gray squares represent potential controller/measurement locations. The selected controllers and measurements are highlighted in red and green, respectively. Scenarios (a) and (b) have $\mathbf{M}=\mathbf{I}$, whereas line flows and frequency deviation can also be measured in (c) and (d). For each scenario, the resulting (non-unique) affine-linear control realization is provided below together with the behavior of the potentially active constraints for all $\vlx_\slf\in\bsym{\mcX}_\slf$ on the right. For scenario (c) with multiple, equivalent optimal solutions, the colored frames denote alternative optimal solutions. The rationale behind the scenarios is as follows: In (b) the primary control droop is reduced compared to (a). In (c) we allow for additional measurements. In (d) we reduce the transfer capacity of the middle link to form an additional active constraint. } \label{fig:simpleGrid}% \end{figure*} In scenario (a) where only the power set point at each bus may be measured, it is sufficient to control the large generator located at bus 4 for achieving feasible grid operation. 
The set points of the remaining smaller generators can be chosen freely and no additional measurement devices are required. In scenario (b) we reduce the droop of the generator at bus 4 to 4 MW/Hz. This makes the measurement of the power injections at buses 1 and 2 necessary. Although the power injections at buses 1 and 2 can be chosen arbitrarily, they must be monitored so that the power produced by the generator located at bus 4 can be set appropriately to balance the system within the given frequency tolerance. In scenarios (a) and (b), where $\mathbf{M} = \mathbf{I}$, the solutions of the MILP were feasible (and optimal) without further adaptation of $\mathcal{M}$ and $\mathcal{C}$, and the greedy approach, starting from empty sets $\mathcal{M}$ and $\mathcal{C}$, produced the same results. In scenario (c), the measurement of the line flows and the grid frequency is added to the set of potential measurements when performing the greedy optimization. This makes it possible to reduce the number of measurements to one. For this scenario the solution is not unique: one possibility is to take the measurement of the frequency deviation as controller input, yielding an adapted primary control scheme. An alternative solution, shown in the figure, is to monitor the sum of the outputs of generators 1 and 2 by measuring the line flow between buses 2 and 3 and to use it for controlling the set point of the generator at bus 4. This situation will be very common in future active distribution grids, where individual small-scale loads or generators cannot violate local grid constraints on their own, but their aggregated effect is important to the system. Since the load is fixed, measuring the line between buses 3 and 4 would be equally informative. The feasibility of all these solution candidates was verified via LP \eqref{eq:optTheta}, obtaining valid affine-linear control realizations in all cases.
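The verification step admits a compact numerical sketch: since all constraints are affine in the free injections, their worst-case values over box-shaped uncertainty sets can be evaluated coordinate-wise. The following minimal example checks a scenario-(b)-like affine law; the droop, demand, gain, and injection ranges are illustrative assumptions, not the data of the test case.

```python
# Sketch: verify that an affine-linear control law x4 = w + S*(x1 + x2)
# keeps the frequency deviation within bounds for all free injections in
# a box. All numbers (droop k, demand d, gain S, ranges lo/hi) are
# illustrative assumptions, not taken from the paper's test case.

def worst_case(coeffs, lo, hi):
    """Max and min of sum_i coeffs[i]*x[i] over the box lo <= x <= hi."""
    wmax = sum(c * (h if c > 0 else l) for c, l, h in zip(coeffs, lo, hi))
    wmin = sum(c * (l if c > 0 else h) for c, l, h in zip(coeffs, lo, hi))
    return wmin, wmax

S, w, k, d = -0.81, 5.0, 4.0, 5.0
# df = (x1 + x2 + x3 + x4 - d) / k = ((1 + S)*(x1 + x2) + x3 + w - d) / k,
# which is affine in the free injections (x1, x2, x3).
coeffs = [(1 + S) / k, (1 + S) / k, 1.0 / k]
offset = (w - d) / k
lo, hi = [0.0, 0.0, 0.0], [0.8, 0.8, 0.05]   # assumed injection ranges

df_min, df_max = worst_case(coeffs, lo, hi)
df_min, df_max = df_min + offset, df_max + offset
feasible = -0.1 <= df_min and df_max <= 0.1
print(feasible, round(df_min, 4), round(df_max, 4))
```

In the paper, this role is played by the LP \eqref{eq:optTheta}; the coordinate-wise evaluation above works because each constraint is optimized independently over a box.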
In scenario (d), we constrain the capacity of the transmission line connecting buses 2 and 3 to the interval $[-1,1]$ MW. This represents an active grid constraint if the generators at buses 1 and 2 produce at maximum power. The solution obtained via hill climbing optimization consists of additionally controlling the power injection at bus 1. Again, several alternative solutions are possible. In scenarios (a), (b), and (c), the frequency deviation represents an active constraint on the operation of the system. Observe in \figref{simpleGrid} how in each case the resulting affine-linear control law keeps the frequency deviation inside the feasible region for all values of the non-controlled injections. In scenario (d), the designed controller also ensures feasible system operation despite the limited power capacity of the middle line. While all solutions for this example can readily be verified manually, it already indicates that the situation becomes much more complex in larger grids. The topological location of generators and loads in the grid matters, as do their capacities and their neighborhoods. An automated algorithm for selecting critical elements to control and/or measure is thus very beneficial for complex networks with distributed generation and transmission lines that are operated close to their technical limits. In scenarios (c) and (d), the use of the greedy approach is required to deal with $\mathbf{M} \not= \mathbf{I}$. Using the MILP solution as an initial guess for $\mathcal{C}$ or starting with empty sets led to the same optimal objective function value. The solutions for $\mathcal{M}$ and $\mathcal{C}$ did not always agree exactly, but could be shown to be equally optimal. The total solver time for all scenarios is shown in Table I. As expected, the MILP optimization performs faster than the hill climbing optimization for the same instances.
When computing the optimal sets for scenarios (a) and (b), the MILP algorithm was more than 2 times faster than the hill climbing with empty sets. It was also 1.2 times faster than the hill climbing that uses the MILP solution for $\mathcal{C}$ as initial guess, which corroborates the benefits of such a concatenated optimization procedure. \begin{table}% \centering \begin{tabular}{|c|c|c|c|} \hline Scenario & MILP & HC (empty sets) & HC ($\mathcal{C}$ from MILP)\\ \hline (a) & 77 & 160 & 100 \\ (b) & 78 & 233 & 135 \\ (c) & - & 277 & 171 \\ (d) & - & 330 & 176 \\ \hline \end{tabular} \caption{Total solver time, in milliseconds, for the proposed optimization algorithms applied to the simple microgrid.} \label{tab:solverTimesMicrogrid} \end{table} \subsection{IEEE 118 Bus Test Case} We now analyze the modified version of the IEEE 118 bus test case, see \figref{IEEE118Grid}. This power system is composed of 54 generators, 99 loads, and 186 transmission lines. The topology of the power system, the load values, and the line and generator capacities were taken from \cite{Al-Roomi2015}. We assume that each generator can be scheduled in the range of 10\%--90\% of its available capacity. In addition, we admit 10\% uncertainty of each load in both directions. The maximum allowed frequency deviation is taken as $\pm 0.2$ Hz. \begin{figure}% \centering \begin{subfigure}[b]{\columnwidth} \includegraphics[width=\columnwidth]{IEEE118_MILP} \subcaption{} \label{subfig:IEEE118_MILP} \end{subfigure} \begin{subfigure}[b]{\columnwidth} \includegraphics[width=\columnwidth]{IEEE118_HC} \subcaption{} \label{subfig:IEEE118_HC} \end{subfigure} \caption{ Minimal sets of controllers and measurements for the modified IEEE 118 bus test case. The selected controllers and measurements are highlighted in red and green, respectively. (a) Only the nodal power set points may be measured. In this scenario, only 12 controllers and 20 sensors are required to guarantee feasible grid operation.
(b) The measurements of line power flows and grid frequency deviation are additionally considered. In this case only 3 sensors are required. } \label{fig:IEEE118Grid}% \end{figure} We first consider the case when only the power set points may be measured, i.e., $\mathbf{M} = \mathbf{I}$, see \subfigref{IEEE118_MILP}. We obtain an optimal set of 12 controllers and 20 measurement devices to guarantee feasible grid operation. The remaining 96 injections can be left to operate freely and/or be manipulated deliberately and do not require any monitoring equipment. To obtain this result, we first use the MILP algorithm and then validate its solution via LP~\eqref{eq:optTheta}. The obtained $\eta$ is smaller than zero, thereby proving the feasibility and optimality of the MILP solution. When we initialize the greedy search with empty sets, we obtain a feasible solution consisting of 23 controllers and 9 measurements. As expected, the solution obtained in this case is larger than the one provided via MILP optimization. This confirms that taking the MILP solution as initial guess is beneficial for the greedy search. We now add the measurements of the line flows and the frequency deviation to the set of potential measurements, see \subfigref{IEEE118_HC}. This yields in total 305 possible sensor devices. We first apply the MILP algorithm and then the greedy one, starting with the controllers identified via the MILP. As expected, the solution is much sparser than before. The total number of required sensors is reduced from 20 to 3. The selected line flows convey a large amount of information that helps avoid grid capacity violations. It is insightful to observe the progress of the hill climbing procedure: buses with major generators connected are selected as controlled nodes first. The procedure thus initially reduces the impact of the free variables on the system by controlling the largest uncertain injections first.
Once enough controlled nodes have been selected, the choice of measurements starts to contribute significantly to the minimization of the cost. Selected measurements are often related to nodes connected either to large non-controlled generators or to highly uncertain loads. The remaining buses with smaller injections are mostly left unobserved. Table II shows the obtained solver time for all studied cases. The solution for case (a) using MILP optimization was found in about 2.57 minutes. Observe that the solution for case (b) was computed in about 28 minutes for the concatenated execution of both algorithms, compared to the roughly 154 minutes needed by the solver when starting hill climbing with empty sets. A single verification step using LP~\eqref{eq:optTheta} took less than a second. The computation time could be improved further, e.g., by testing not all possible set extensions in each step of the greedy search but only a representative subset, selected by proximity in the graph. Another idea would be to add more than one element in each iteration. For the control design task described in this paper, however, the achieved computation time seemed acceptable even without these extensions. \begin{table}% \centering \begin{tabular}{|c|c|c|c|} \hline Scenario & MILP & HC (empty sets) & HC ($\mathcal{C}$ from MILP) \\ \hline (a) & 2.57 & 13 & 5.78 \\ (b) & - & 154 & 28 \\ \hline \end{tabular} \caption{Total solver time, in minutes, for the proposed optimization algorithms applied to the modified IEEE 118 bus test case.} \label{tab:solverTimesIEEE118} \end{table}
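The greedy extension procedure used above can be summarized in a short skeleton: starting from empty sets, repeatedly add the single controller or measurement that most reduces an infeasibility cost, stopping once the configuration is feasible. The cost oracle below is a toy stand-in for the LP-based infeasibility measure $\eta$; the candidate sets and cost values are purely illustrative.

```python
# Generic hill-climbing skeleton for minimizing |C| + |M|. The `cost`
# argument is a placeholder for the LP-based infeasibility measure; a
# configuration is feasible when the cost reaches zero.

def hill_climb(candidates_c, candidates_m, cost):
    C, M = set(), set()
    while cost(C, M) > 0:
        best, best_cost = None, cost(C, M)
        for c in candidates_c - C:           # try extending the controllers
            v = cost(C | {c}, M)
            if v < best_cost:
                best, best_cost = ('C', c), v
        for m in candidates_m - M:           # try extending the measurements
            v = cost(C, M | {m})
            if v < best_cost:
                best, best_cost = ('M', m), v
        if best is None:                     # no single extension helps
            raise RuntimeError("stuck in a local minimum")
        kind, elem = best
        (C if kind == 'C' else M).add(elem)
    return C, M

# Toy cost: infeasibility drops with each controlled "large" bus and each
# measured "uncertain" bus (illustrative numbers only).
large, uncertain = {1, 4}, {2, 3}
def toy_cost(C, M):
    return len(large - C) + len(uncertain - M)

C, M = hill_climb({1, 2, 3, 4}, {1, 2, 3, 4}, toy_cost)
print(sorted(C), sorted(M))
```

The one-element-at-a-time extension is exactly what the proposed speed-ups (testing only a representative candidate subset, or adding several elements per iteration) would modify.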
\section{Introduction} Agent-based models (ABMs) are an attempt to understand how macroscopic regularities may emerge through processes of self-organization in systems of interacting agents. The approach is first and foremost a computational methodology and the mathematical formalization of ABMs is still in its infancy. This is probably due to the fact that a major motivation in the development of ABMs has been to relax a series of unrealistic assumptions made in other modeling frameworks just in order to keep mathematical tractability; namely, rationality, perfect information, and agent homogeneity, among others. The other side of the coin is that the focus on computer models and algorithms makes the comparison of different models difficult and also complicates a rigorous analysis of model behavior. In fact, the problems of code verification and model comparison, including the discussion of standards for the replication of ABMs, have nowadays become an area of research in their own right (see, e.g., \cite{Hales2003,Grimm2006,Galan2009}). Many of those issues would probably be resolved with a sound mathematical formulation of an ABM. On the other hand, it is also clear that the precise mathematical specification of a high-dimensional system of heterogeneous interacting agents along with their update mechanisms can be cumbersome. Agent-based systems are dynamical systems. Typically implemented on a computer, the time evolution is computed as an iterative process -- an algorithm -- in which agents are updated according to the specified rules. ABMs usually also involve a certain amount of stochasticity, because the agent choice and sometimes also the choice among different behavioral options is random. This is why Markov chain theory is a good candidate for the mathematical formalization of ABMs. To the authors' knowledge, the first systematic approach to the development of a mathematical formalism for ABMs in general is due to Laubenbacher and co-workers. Ref.
\cite{Laubenbacher2009} reviews existing formal frameworks that have the potential to model ABMs, such as cellular automata and finite dynamical systems, and argues for the latter as an appropriate mathematical framework. The possibility of using Markov chains in the analysis of ABMs has been pointed out in \cite{Izquierdo2009}. The main idea is to consider all possible configurations of the agent system as the state space ${\bf \Sigma}$ of a huge Markov chain. While Ref. \cite{Izquierdo2009} mainly relies on numerical computations to estimate the stochastic transition matrices of the models, we have shown how to derive the transition probabilities $\hat{P}$ explicitly in terms of the update function ${\bf u}$ and a probability distribution $\omega$ accounting for the stochastic parts of the model (\cite{Banisch2012son,Banisch2013eccs}). Proceeding from general ABMs to a particular class of models we refer to as single-step dynamics, this paper discusses in detail how to derive a microscopic Markov chain description (micro chain). It turns out that ABMs with a sequential update scheme can be conceived as random walks on regular graphs. This, in turn, hints at the possibility of reducing the state space of the microscopic Markov chain by systematically exploiting the dynamical symmetries that an ABM gives rise to. Namely, the existence of non-trivial automorphisms of the micro chain tells us that certain sets of micro configurations can be interchanged without changing the probability structure of the random walk. These sets of micro states can be aggregated or lumped into a single macro state and the resulting macro-level process is still a Markov chain. In Markov chain theory, such a state space reduction by which no information about the dynamical behavior is lost is known as lumpability \cite{Burke1958,Rosenblatt1959,Kemeny1976,Rogers1981,Buchholz1994,Goernerup2008}.
Throughout the paper we use the Voter Model (VM from now on) as a simple paradigmatic example (\cite{Kimura1964,Castellano2009}, among many others). In the VM, agents can adopt two different states, which we may denote as white $\square$ and black $\blacksquare$. The attribute could account for the opinion of an agent regarding a certain issue, such as its approval or disapproval of a certain attitude. In an economic context, $\blacksquare$ and $\square$ could encode two different behavioral strategies, or, in a biological context, the occurrence of mutants in a population of individuals. The iteration process implemented by the VM is very simple. At each time step, an agent $i$ is chosen at random along with one of its neighboring agents $j$ and one of them imitates the state of the other (by convention we assume the first to imitate the second). In the long run, the model leads to a configuration in which all agents have adopted the same state (either $\square$ or $\blacksquare$). In the context of biological evolution, this has been related to the fixation or extinction of a mutant in a population. The VM has also been interpreted as a simplistic form of a social influence process by which a shared convention is established in the entire population. This paper is organized as follows. In Section 2, we discuss the general structure of ABMs in form of a graph of the possible model transitions. Markovianity of the ABM process on the graph can be established by a so-called random mapping representation (Section 3). In Section \ref{cha:3.SSD}, a class of models giving rise to single-step dynamics and therefore to random walks on regular graphs is discussed. Section 5 relates the symmetries in those graphs to partitions of the micro process with respect to which the chain is lumpable, and Section 6 illustrates this with the example of a single-step model with $N$ agents that can be in three different states. We summarize these results in Section 7.
\section{The Grammar of an ABM} Let us consider an abstract ABM with finite configuration space ${\bf \Sigma} = {\bf S}^N$ (meaning that there are $N$ agents with attributes $x_i \in {\bf S}$). Any iteration of the model (any run of the ABM algorithm) maps a configuration ${\bf x} \in {\bf \Sigma}$ to another configuration ${\bf y} \in {\bf \Sigma}$. In general, the case that no agent changes, such that ${\bf x} = {\bf y}$, is also possible. Let us denote such a mapping by $F_z: {\bf \Sigma} \rightarrow {\bf \Sigma}$ and denote the set of all possible mappings by $\mathcal{F}$. Notice that any element of $\mathcal{F}$ can be seen as a word of length $|{\bf \Sigma}|$ over an $|{\bf \Sigma}|$-ary alphabet, and there are $|{\bf \Sigma}|^{|{\bf \Sigma}|}$ such words \cite[p.~3]{Flajolet1990}. Any $F_z \in \mathcal{F}$ induces a directed graph $({\bf \Sigma},F_z)$ the nodes of which are the elements in ${\bf \Sigma}$ (i.e., the agent configurations) and the edges the set of ordered pairs $({\bf x},F_z({\bf x})), \forall {\bf x}\in{\bf \Sigma}$. Such a graph is called the functional graph of $F_z$ because it displays the functional relations of the map $F_z$ on ${\bf \Sigma}$. That is, it represents the logical paths induced by $F_z$ on the space of configurations for any initial configuration ${\bf x}$. Each iteration of an ABM can be thought of as a stochastic choice out of a set of deterministic options. For an ABM in a certain configuration ${\bf x}$, there are usually several options (several ${\bf y}$) to which the algorithm may lead with a well-defined probability (see Fig. \ref{fig:AllAgentsChoicesVM}). Therefore, in an ABM, the transitions between the different configurations ${\bf x},{\bf y},\ldots \in {\bf \Sigma}$ are not defined by one single map $F_z$; rather, there is a subset $\mathcal{F}_Z \subset \mathcal{F}$ of maps out of which one map is chosen at each time step with a certain probability.
Let us assume we know all the mappings $\mathcal{F}_Z = \{F_1,\ldots,F_z,\ldots,F_n\}$ that are realized by the ABM of our interest. With this, we are able to define a functional graph representation by $({\bf \Sigma},\mathcal{F}_Z)$ which takes as nodes all elements of ${\bf \Sigma}$ (all agent configurations), and an arc $({\bf x},{\bf y})$ exists if there is at least one $F_z \in \mathcal{F}_Z$ such that $F_z({\bf x}) = {\bf y}$. This graph defines the ``grammar'' of the system, for it displays all the logically possible transitions between any pair of configurations of the model. Consider the VM with three agents as an example. In the VM, agents have two possible states (${\bf S} = \{\square,\blacksquare\}$) and the configuration space for a model of three agents is ${\bf \Sigma} = \{\square,\blacksquare\}^3$. In the iteration process, one agent $i$ is chosen at random along with one of its neighbors $j$, and agent $i$ imitates the state of $j$. This means that $y_i = x_j$ after the interaction event. Notice that once an agent pair $(i,j)$ is chosen, the update is defined by a deterministic map ${\bf u}: {\bf S}^2 \rightarrow {\bf S}$. Stochasticity enters first because of the random choice of $i$ and second through the random choice of one agent in the neighborhood. Let us look at an example with three agents in the configuration ${\bf x} = (\square \blacksquare \blacksquare)$. If the first agent is chosen ($i = 1$ and $x_1 = \square$), then this agent will certainly change state to $y_1 = \blacksquare$ because it will in any case meet a black agent. For the second and the third agent ($i = 2$ or $i = 3$), the update result depends on which of the two neighbors is chosen because they are in different states. Notably, different agent choices may lead to the same configuration. Here, this is the case if the agent pair $(2,3)$ or $(3,2)$ is chosen, in which case the agent ($2$ or $3$) does not change its state because $x_2 = x_3$.
Therefore we have ${\bf y} = {\bf x}$ and there are two paths realizing that transition. \begin{figure}[htbp] \centering \includegraphics[width=0.50\textwidth]{AllAgentsChoicesVM.eps} \caption{Possible paths from configuration ${\bf x} = (\square \blacksquare \blacksquare)$ in a small VM of three agents.} \label{fig:AllAgentsChoicesVM} \end{figure} In practice, the explicit construction of the entire functional graph may rapidly become a tedious task due to the huge dimension of the configuration space and the fact that one needs to check whether $F_z({\bf x}) = {\bf y}$ for each mapping $F_z \in \mathcal{F}_Z$ and all pairs of configurations ${\bf x},{\bf y}$. On the other hand, the main interest here is a theoretical one, because, as a matter of fact, a representation as a functional graph of the form $\Gamma = ({\bf \Sigma},\mathcal{F}_Z)$ exists for any model that comes in the form of an iterated computer algorithm. It is therefore a quite general way of formalizing ABMs and, as we will see in the sequel, it allows us, under some conditions, to verify the Markovianity of the models at the micro level. \section{From Functional Graphs to Markov Chains} A functional graph $\Gamma = ({\bf \Sigma},\mathcal{F}_Z)$ defines the ``grammar'' of an ABM in the sense that it shows all possible transitions enabled by the model. It is the first essential step in the construction of the Markov chain associated with the ABM at the micro level because there is a non-zero transition probability only if there is an arrow in the functional graph. Consequently, all that is missing for a Markov chain description is the computation of the respective transition probabilities. For a class of models, including the VM, this is relatively simple because we can derive a random mapping representation \cite[pp.~6--7]{Levin2009} directly from the ABM rules.
Namely, if $F_{z_1}, F_{z_2},\ldots$ is a sequence of independent random maps, each having the same distribution $\omega$, and $S_0 \in {\bf \Sigma}$ has distribution $\mu_0$, then the sequence $S_0, S_1,\ldots$ defined by \begin{equation} S_t= F_{z_t} (S_{t-1}), \quad t \geq 1 \label{RMR1} \end{equation} is a Markov chain on ${\bf \Sigma}$ with transition matrix $\hat{P}$: \begin{equation}\label{RMR2} \hat{P}({\bf x},{\bf y}) = {\bf{Pr}_{\omega}}[z, F_z ({\bf x}) = {\bf y}]; \quad {\bf x},{\bf y} \in {\bf \Sigma}. \end{equation} Conversely \cite{Levin2009}, any Markov chain has a random map representation (RMR). Therefore, in that case, (\ref{RMR1}) and (\ref{RMR2}) may be taken as an equivalent definition of a Markov chain. This is particularly useful in our case, because it shows that an ABM which can be described as above is, from a mathematical point of view, a Markov chain. This includes several models described in \cite{Izquierdo2009}. For general ABMs, the explicit construction of an RMR can be cumbersome because it requires the dissection of the stochastic and deterministic elements of the iteration procedure of the model. In the VM, this separation is clear-cut and therefore an RMR is obtained easily. In the VM, the random part consists of the choice of two connected agents $(i,j)$. Once this choice is made, we know that $y_i = x_j$ by the interaction rule. This is sufficient to derive the ``grammar'' of the VM, because we only need to check, one by one for all possible choices $(i,j)$, which transitions this choice induces on the configuration space. For a system of three agents, with all agents connected to the other two, the set of functions $\mathcal{F}_Z = \{F_1,\ldots,F_z,\ldots,F_n\}$ is specified in Table \ref{tab:RMR_VM3}. Notice that with three agents, there are 8 possible configurations, indexed here by $a,b,\ldots,h$. Moreover, there are 6 possible choices for $(i,j)$ such that $\mathcal{F}_Z$ consists of $n=6$ mappings.
\begin{table}[h] \centering \begin{tabular}{|c|c|c c c c c c c c|} \hline $z$ & $(i,j)$ & a & b & c & d & e & f & g & h\\ & & $\blacksquare\blacksquare\blacksquare$ & $\blacksquare\blacksquare\square$ & $\blacksquare\square\blacksquare$ & $\square\blacksquare\blacksquare$ & $\blacksquare\square\square$ & $\square\blacksquare\square$ & $\square\square\blacksquare$ & $\square\square\square$\\ \hline $1$ & $(1,2)$ & a & b & g & a & h & b & g & h\\ $2$ & $(1,3)$ & a & f & c & a & h & f & c & h\\ $3$ & $(2,1)$ & a & b & a & g & b & h & g & h\\ $4$ & $(3,1)$ & a & a & c & f & c & f & h & h\\ $5$ & $(2,3)$ & a & e & a & d & e & h & d & h\\ $6$ & $(3,2)$ & a & a & e & d & e & d & h & h\\ \hline \end{tabular} \caption{$\mathcal{F}_Z$ for the VM with three agents.} \label{tab:RMR_VM3} \end{table} Each row of the table represents a mapping $F_z: {\bf \Sigma} \rightarrow {\bf \Sigma}$ by listing to which configuration ${\bf y}$ the respective map takes the configurations $a$ to $h$. The first row, to make an example, represents the choice of the agent pair $(1,2)$. The changes this choice induces depend on the actual agent configuration ${\bf x}$. Namely, for any ${\bf x}$ with $x_1 = x_2$ we have $F_1({\bf x}) = F_{(1,2)}({\bf x}) = {\bf x}$. So the configurations $a,b,g,h$ are not changed by $F_{(1,2)}$. For the other configurations it is easy to see that $(\blacksquare\square\blacksquare)\rightarrow(\square\square\blacksquare)$ ($c \rightarrow g$), $(\square\blacksquare\blacksquare)\rightarrow(\blacksquare\blacksquare\blacksquare)$ ($d \rightarrow a$), $(\blacksquare\square\square)\rightarrow(\square\square\square)$ ($e \rightarrow h$), and $(\square\blacksquare\square)\rightarrow(\blacksquare\blacksquare\square)$ ($f \rightarrow b$). Notice that the two configurations $(\square\square\square)$ and $(\blacksquare\blacksquare\blacksquare)$ with all agents equal are not changed by any map and correspond therefore to the final configurations of the VM. 
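The mappings of Table \ref{tab:RMR_VM3} can also be generated mechanically. The following sketch assumes the complete graph on three agents, enumerates the maps $F_{(i,j)}$, and then assembles the micro-level transition matrix under the additional assumption of a uniform choice probability $\omega(i,j) = 1/6$, anticipating the transition probabilities discussed next.

```python
from itertools import product

# Sketch: generate the maps F_(i,j) of the table for the 3-agent VM on a
# complete graph, and assemble the micro-level transition matrix by
# summing omega(i, j) over all pairs with F_(i,j)(x) = y, assuming the
# uniform choice probability omega(i, j) = 1/6.

N = 3
configs = list(product('BW', repeat=N))          # 'B' = black, 'W' = white
pairs = [(i, j) for i in range(N) for j in range(N) if i != j]

def F(pair, x):
    """Agent i imitates agent j: y_i = x_j."""
    i, j = pair
    y = list(x)
    y[i] = x[j]
    return tuple(y)

omega = 1.0 / len(pairs)                          # uniform: 1/6 per pair
P = {x: {y: 0.0 for y in configs} for x in configs}
for x in configs:
    for pair in pairs:
        P[x][F(pair, x)] += omega

# Rows are stochastic, and the two consensus states are absorbing:
assert all(abs(sum(row.values()) - 1) < 1e-12 for row in P.values())
print(P[('B',) * N][('B',) * N])                  # approx. 1.0 (absorbing)
```

One can check individual entries against the table, e.g. the transition $b \rightarrow a$ occurs for the two choices $(3,1)$ and $(3,2)$ and hence has probability $2/6$.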
In the RMR, we can use the possible agent choices $(i,j)$ in Table \ref{tab:RMR_VM3} directly to index the collection of maps $F_{(i,j)} \in \mathcal{F}_Z$. We denote by $\omega(i,j)$ the probability of choosing the agent pair $(i,j)$, which corresponds to choosing the map $F_{(i,j)}$. It is clear that we can proceed in this way in all models where the stochastic part concerns only the choice of agents. Then, the distribution $\omega$ is independent of the current system configuration and the same for all times ($\omega(z_t) = \omega(z)$). In this case, we obtain for the transition probabilities \begin{equation} \hat{P}({\bf x},{\bf y}) = {\bf{Pr}_{\omega}}[(i,j), F_{(i,j)} ({\bf x}) = {\bf y}] = \sum\limits_{\substack{(i,j):\\ F_{(i,j)}({\bf x}) = {\bf y}}}^{} \omega(i,j). \label{eq:PhatVM01} \end{equation} That is, the probability of a transition from ${\bf x}$ to ${\bf y}$ is the conjoint probability $\sum\omega(i,j)$ of choosing an agent pair $(i,j)$ such that the corresponding map takes ${\bf x}$ to ${\bf y}$ (i.e., $F_{(i,j)} ({\bf x}) = {\bf y}$). \section{Single-Step Dynamics and Random Walks on Regular Graphs} \label{cha:3.SSD} In the sequel, we focus on a class of models which we refer to as \emph{single-step dynamics}. They are characterized by the fact that only one agent changes at a time step. Notice that this is very often the case in ABMs with a sequential update scheme and that sequential update is, as a matter of fact, the most typical iteration scheme in ABMs. In terms of the ``grammar'' of these models, this means that non-zero transition probabilities are only possible between system configurations that differ in at most one position. And this gives rise to random walks on regular graphs. Consider a set of $N$ agents, each one characterized by individual attributes $x_i$ that are taken from a finite list of possibilities ${\bf S} = \{1,\ldots,\delta\}$. In this case, the space of possible agent configurations is ${\bf \Sigma} = {\bf S}^N$.
Consider further a deterministic update function ${\bf u}: {\bf S}^r \times \Lambda \rightarrow {\bf S}$ which takes configuration ${\bf x} \in {\bf \Sigma}$ at time $t$ to configuration ${\bf y} \in {\bf \Sigma}$ at $t+1$ by \begin{equation} y_i = {\bf u}(x_i,x_j,\ldots,x_k,\lambda). \label{eq:3.Update01} \end{equation} To go from one time step to the next in agent systems, usually an agent $i$ is chosen first to perform a step. The decision of $i$ then depends on its current state $(x_i)$ and the attributes of its neighbors $(x_j,\ldots,x_k)$. The finite set $\Lambda$ accounts for a possible stochastic part in the update mechanism, such that different behavioral options are implemented by different update functions ${\bf u}(\ldots,\lambda_1)$, ${\bf u}(\ldots,\lambda_2)$, etc. Notice that for the case in which the attributes of the agents $(x_i,x_j,\ldots,x_k)$ uniquely determine the agent decision, we have ${\bf u}: {\bf S}^r \rightarrow {\bf S}$, which strongly resembles the update rules implemented in cellular automata (CA). As opposed to classical CA, however, a sequential update scheme is used in the class of models considered here. In the iteration process, first, a random choice of agents along with a $\lambda$ to index the possible behavioral options is performed with probability $\omega(i,j,\ldots,k,\lambda)$. This is followed by the application of the update function, which leads to the new state of agent $i$ by Eq. (\ref{eq:3.Update01}). Due to the sequential application of an update rule of the form ${\bf u}: {\bf S}^r \times \Lambda \rightarrow {\bf S}$, only one agent (namely agent $i$) changes at a time, so that all elements in ${\bf x}$ and ${\bf y}$ are equal except the element which corresponds to the agent that was updated during the step from ${\bf x}$ to ${\bf y}$. Therefore, $x_j = y_j, \forall j \neq i$ and $x_i \neq y_i$. We call ${\bf x}$ and ${\bf y}$ adjacent and denote this by ${\bf x} \stackrel{i}{\sim} {\bf y}$.
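The adjacency relation just defined can be made concrete with a few lines of code. The sketch below checks, for an assumed example with $\delta = 3$ attribute values and $N = 4$ agents, that every configuration has exactly $(\delta-1)N$ adjacent configurations.

```python
from itertools import product

# Sketch: single-step adjacency x ~ y means the configurations differ in
# exactly one position. For delta attribute values and N agents, every
# configuration should then have (delta - 1) * N such neighbours, i.e.
# the underlying graph is the Hamming graph H(N, delta).

def adjacent(x, y):
    return sum(a != b for a, b in zip(x, y)) == 1

delta, N = 3, 4                                   # assumed example sizes
S = range(delta)
configs = list(product(S, repeat=N))

degrees = {x: sum(adjacent(x, y) for y in configs) for x in configs}
assert all(d == (delta - 1) * N for d in degrees.values())
print(len(configs), (delta - 1) * N)              # 81 configurations, degree 8
```

For $\delta = 2$ the same check reproduces the $N$-dimensional hypercube structure of the VM configuration space.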
It is then also clear that a transition from ${\bf x}$ to ${\bf y}$ is possible only if ${\bf x} \sim {\bf y}$. Therefore, the adjacency relation $\sim$ defines the ``grammar'' $\Gamma_{SSD}$ of the entire class of single-step models. Namely, the existence of a map $F_z$ that takes ${\bf x}$ to ${\bf y}$, ${\bf y} = F_z({\bf x})$, implies that ${\bf x} \stackrel{i}{\sim} {\bf y}$ for some $i \in \{1,\ldots,N\}$. This means that any ABM that belongs to the class of single-step models performs a walk on $\Gamma_{SSD}$ or on a subgraph of it. Let us briefly consider the structure of the graph $\Gamma_{SSD}$ associated with the entire class of single-step models. From ${\bf x} \stackrel{i}{\sim} {\bf y}$ for $i = 1,\ldots,N$ we know that for any ${\bf x}$, there are $(\delta-1) N$ different vectors ${\bf y}$ which differ from ${\bf x}$ in a single position, where $\delta$ is the number of possible agent attributes. Therefore, $\Gamma_{SSD}$ is a regular graph of degree $(\delta-1) N + 1$, because in our case the system may loop by $y_i = x_i$. As a matter of fact, our definition of adjacency as ``different in one position of the configuration'' is precisely the definition of so-called Hamming graphs, which tells us that $\Gamma_{SSD} = H(N,\delta)$ (with loops). In the case of the VM, where $\delta = 2$, we find $H(N,2)$, which corresponds to the $N$-dimensional hypercube. As before, the transition probability matrix of the micro chain is denoted by $\hat{P}$, with $\hat{P}({\bf x},{\bf y})$ being the probability for the transition from ${\bf x}$ to ${\bf y}$. The previous considerations tell us that non-zero transition probabilities can exist only between two configurations that are linked in $H(N,\delta)$, plus the loop ($\hat{P}({\bf x},{\bf x})$). Therefore, each row of $\hat{P}$ contains no more than $(\delta-1) N +1$ non-zero entries. In the computation of $\hat{P}$ we concentrate on pairs of adjacent configurations.
For ${\bf x} \stackrel{i}{\sim} {\bf y}$ with $x_i \neq y_i$ we have \begin{equation} \hat{P}({\bf x},{\bf y}) = \sum\limits_{\substack{(i,j,\ldots,k,\lambda):\\ y_i = {\bf u}(x_i,x_j,\ldots,x_k,\lambda)}} \omega(i,j,\ldots,k,\lambda), \label{eq:PhatSSD} \end{equation} which is the joint probability to choose agents and a rule $(i,j,\ldots,k,\lambda)$ such that the $i$th agent changes its attribute by $y_i = {\bf u}(x_i,x_j,\ldots,x_k,\lambda)$. For the probability that the model remains in ${\bf x}$, $\hat{P}({\bf x},{\bf x})$, we have \begin{equation} \hat{P}({\bf x},{\bf x}) = 1 - \sum_{{\bf y} \sim {\bf x}} \hat{P}({\bf x},{\bf y}). \label{eq:PhatSSD02} \end{equation} Eq. (\ref{eq:PhatSSD}) makes clear that the probability distribution $\omega$ plays the crucial role in the computation of the elements of $\hat{P}$, a fact that has been exploited in Ref. \cite{Banisch2013acs}. \section{Graph Symmetries and Markov Chain Aggregation} Markov chain aggregation concerns the question of what happens when the micro-level process -- defined by the micro chain $({\bf \Sigma},\hat{P})$ -- is projected onto a coarser partition of the state space ${\bf \Sigma}$. Such a situation naturally arises if the ABM is observed not at the micro level of ${\bf \Sigma}$, but rather in terms of a measure $\phi$ on ${\bf \Sigma}$ by which all configurations in ${\bf \Sigma}$ that give rise to the same measurement are mapped into the same macro state, say $X_k \in {\bf X}$. The first important question then concerns the \emph{lumpability} of the micro chain with respect to the partition ${\bf X}$. In the case of lumpability, the resulting macro-level process is still a Markov chain and the transition probabilities $P$ can be obtained in a relatively simple way from the microscopic transition matrix $\hat{P}$. Fig. \ref{fig:ProjectionGeneral} illustrates such a projection construction.
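As a concrete instance of this construction, the following Python sketch assembles $\hat{P}$ from Eq.~(\ref{eq:PhatSSD}) for a binary voter model with homogeneous mixing (an illustrative choice of $\omega$ and ${\bf u}$; the code is ours) and then checks both the sparsity structure derived above and the lumpability condition for the partition by attribute frequencies:

```python
from itertools import product

N = 3
configs = list(product((0, 1), repeat=N))      # micro state space, 2^N configs
idx = {x: k for k, x in enumerate(configs)}
w = 1.0 / (N * (N - 1))                        # omega: uniform ordered pairs (i, j)

# assemble hat{P}: agent i copies agent j (voter rule, homogeneous mixing)
P = [[0.0] * len(configs) for _ in configs]
for x in configs:
    for i in range(N):
        for j in range(N):
            if i != j:
                y = x[:i] + (x[j],) + x[i + 1:]
                P[idx[x]][idx[y]] += w

# rows sum to one, and transitions occur only along edges of H(N,2) or loops
for kx, x in enumerate(configs):
    assert abs(sum(P[kx]) - 1.0) < 1e-12
    for ky, y in enumerate(configs):
        if P[kx][ky] > 0:
            assert sum(a != b for a, b in zip(x, y)) <= 1

# lumpability test (Kemeny/Snell): hat{p}_{x X_l} must agree for all x in X_k
blocks = {k: [x for x in configs if sum(x) == k] for k in range(N + 1)}
for Xk in blocks.values():
    for Xl in blocks.values():
        vals = {round(sum(P[idx[x]][idx[y]] for y in Xl), 12) for x in Xk}
        assert len(vals) == 1                  # common value -> P(X_k, X_l)
```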
\begin{figure}[hbtp] \centering \includegraphics[width=0.95\linewidth]{ProjectionGeneral.eps} \caption{A micro process (${\bf x},{\bf y}, {\bf z} \in {\bf \Sigma}$) is observed ($\phi$) at a higher level and this observation defines another macro-level process ($X_k,X_l,X_m \in {\bf X}$). The micro process is a Markov chain with transition matrix $\hat{P}$. The macro process is a Markov chain (with $P$) only in the case of lumpability.} \label{fig:ProjectionGeneral} \end{figure} Necessary and sufficient conditions for lumpability are provided by Thm. 6.3.2 in \cite{Kemeny1976}. Let us denote by $\hat{p}_{{\bf x} Y}$ the probability for ${\bf x}$ to move to an element ${\bf y} \in Y$, where $Y \subseteq {\bf \Sigma}$ is a subset of the configuration space. Thm. 6.3.2 in \cite{Kemeny1976} states that a Markov chain $(\hat{P},{\bf \Sigma})$ is \emph{lumpable} with respect to a partition ${\bf X} = (X_1,\ldots,X_P)$ if for every two subsets $X_k$ and $X_l$ the sum $\hat{p}_{{\bf x} X_l} = \sum\limits_{{\bf y} \in X_l}^{} \hat{p}_{{\bf x} {\bf y}}$ is equal for all ${\bf x} \in X_k$. Moreover, these common values form the transition probabilities $P(X_k,X_l)$ of a new chain $(P,{\bf X})$. Since ABM micro chains can be seen as random walks on regular graphs, it is convenient to state lumpability conditions in terms of the symmetry structure of the micro chain. Let us restate the respective result (previously introduced in a similar form in \cite{Banisch2012son}, Prop. 3.1): \begin{theorem} \label{thm:symmetry} Let $(\bf{\Sigma}, \hat{P})$ be a Markov chain and ${\bf x},{\bf y}$ elements of ${\bf \Sigma}$. Consider a partition ${\bf X}$ of ${\bf \Sigma}$ and, along with ${\bf X}$, a transformation group $\mathcal{G}$ acting on ${\bf \Sigma}$ that generates ${\bf X}$. (That is, the orbits of $\mathcal{G}$ on ${\bf \Sigma}$ form ${\bf X}$.)
If the Markov transition probability $\hat{P}$ is symmetric with respect to $\mathcal{G}$, \begin{equation}\label{eq:symmetry_lumpability} {\hat P} ({\bf x},{\bf y}) = {\hat P} ({\hat{\sigma}}({\bf x}),{\hat{\sigma}}({\bf y})) \quad \forall {\hat{\sigma}} \in {\mathcal{G}}, \end{equation} the partition ${\bf X} = (X_1,\dots, X_{n})$ is (strongly) lumpable. \end{theorem} \begin{proof} For the proof it is sufficient to show that any two configurations ${\bf x}$ and ${\bf x}'$ with ${\bf x}' = \hat{\sigma}({\bf x})$ satisfy \begin{equation} \hat{p}_{{\bf x} Y} = \sum\limits_{{\bf y} \in Y}^{} \hat{P}({\bf x},{\bf y}) = \sum\limits_{{\bf y} \in Y}^{} \hat{P}({\bf x}',{\bf y}) = \hat{p}_{{\bf x}' Y} \label{eq:3.macroEquivalence01} \end{equation} for all $Y \in {\bf X}$. Consider any two subsets $X,Y \in {\bf X}$ and take ${\bf x} \in X$. Because $\mathcal{G}$ preserves the partition, it is true that ${\bf x}' \in X$. Now we have to show that Eq. (\ref{eq:3.macroEquivalence01}) holds. First, the probability for ${\bf x}' = \hat{\sigma}({\bf x})$ to go to an element ${\bf y} \in Y$ is \begin{equation} \hat{p}_{\hat{\sigma}({\bf x}) Y} = \sum\limits_{{\bf y} \in Y}^{} \hat{P}(\hat{\sigma}({\bf x}),{\bf y}). \end{equation} Because the $\hat{\sigma}$ are bijections and preserve ${\bf X}$, we have $\hat{\sigma}(Y) = Y$ and there is for every ${\bf y} \in Y$ exactly one $\hat{\sigma}({\bf y}) \in Y$. Therefore we can substitute \begin{equation} \hat{p}_{\hat{\sigma}({\bf x}) Y} = \sum\limits_{{\bf y} \in Y}^{} \hat{P}(\hat{\sigma}({\bf x}),\hat{\sigma}({\bf y})) = \sum\limits_{{\bf y} \in Y}^{} \hat{P}({\bf x},{\bf y}) = \hat{p}_{{\bf x} Y}, \end{equation} where the second equality follows from the symmetry condition (\ref{eq:symmetry_lumpability}) that $\hat{P}({\bf x},{\bf y}) = \hat{P}(\hat{\sigma}({\bf x}),\hat{\sigma}({\bf y}))$. \end{proof} The usefulness of the lumpability conditions stated in Thm.
\ref{thm:symmetry} becomes apparent recalling that ABMs can be seen as random walks on regular graphs defined by the functional graph or ``grammar'' of the model $\Gamma = ({\bf \Sigma},\mathcal{F}_Z)$. The full specification of the micro process $({\bf \Sigma}, \hat{P})$ is obtained by assigning transition probabilities to the connections in $\Gamma$, and we can interpret this as a weighted graph. The regularities of $({\bf \Sigma}, \hat{P})$ are captured by a number of non-trivial automorphisms which, in the case of ABMs, reflect the symmetries of the models. In fact, Thm. \ref{thm:symmetry} allows one to systematically exploit the symmetries of an agent model in the construction of partitions with respect to which the micro chain is lumpable. Namely, the symmetry requirement in Thm. \ref{thm:symmetry}, that is, Eq. (\ref{eq:symmetry_lumpability}), corresponds precisely to the usual definition of automorphisms of $({\bf \Sigma}, \hat{P})$. The set of all permutations $\hat{\sigma}$ that satisfy (\ref{eq:symmetry_lumpability}) then corresponds to the automorphism group of $({\bf \Sigma}, \hat{P})$. \begin{lemma} Let $\mathcal{G}$ be the automorphism group of the micro chain $({\bf \Sigma}, \hat{P})$. The orbits of $\mathcal{G}$ define a lumpable partition ${\bf X}$ such that every pair of micro configurations ${\bf x},{\bf x}' \in {\bf \Sigma}$ for which $\exists\hat{\sigma} \in \mathcal{G}$ such that ${\bf x}' = \hat{\sigma}({\bf x})$ belong to the same subset $X_i \in {\bf X}$. \label{thm:Lambda} \end{lemma} \begin{note} Lemma \ref{thm:Lambda} actually applies to any $\mathcal{G}$ that is a subgroup of the automorphism group of $({\bf \Sigma}, \hat{P})$. The basic requirement for such a subset $\mathcal{G}$ to be a group is that it be closed under the group operation, which establishes that $\hat{\sigma}(X_i) = X_i$. With the closure property, it is easy to see that any such subgroup $\mathcal{G}$ defines a lumpable partition in the sense of Thm. \ref{thm:symmetry}.
\end{note} \section{Groups of Automorphisms, Macro Chains and System Properties} In this section we illustrate the previous ideas with the example of three-state single-step dynamics. Consider a system of $N$ agents, each one characterized by an attribute $x_i \in \{a,b,c\}$, that is, $\delta = 3$. As discussed in Section \ref{cha:3.SSD}, the corresponding graph $\Gamma$ encoding all the possible transitions is the Hamming graph $H(N,3)$. The nodes ${\bf x},{\bf y}$ in $H(N,3)$ correspond to all possible agent combinations and are written as vectors ${\bf x} = (x_1,\ldots,x_N)$ with symbols $x_i \in \{a,b,c\}$. The automorphism group of $H(N,3)$ is composed of two groups generated by operations changing the order of elements in the vector (agent permutations) and by permutations acting on the set of symbols ${\bf S}=\{a,b,c\}$ (agent attributes). Namely, it is given by the direct product \begin{equation} Aut(H(N,\delta)) = \mathcal{S}_N \otimes \mathcal{S}_{\delta} \label{eq:AutHNdelta} \end{equation} of the symmetric group $\mathcal{S}_N$ acting on the agents and the group $\mathcal{S}_{\delta}$ acting on the agent attributes. Let us first look at a very small system of $N = 2$ agents and $\delta = 3$ states. The corresponding microscopic structure -- the graph $H(2,3)$ -- is shown on the l.h.s. of Fig. \ref{fig:MicroVM.2agents3states.AutW}. It also illustrates the action of $\mathcal{S}_N$ on the ${\bf x},{\bf y} \in {\bf \Sigma}$, that is, the bijection induced on the configuration space by permuting the agent labels. Notably, in the case of $N=2$ there is only one alternative ordering of agents, denoted here as $\hat{\sigma}_{\omega}({\bf x})$, which takes $(x_1,x_2) \stackrel{\hat{\sigma}_{\omega}}{\longleftrightarrow} (x_2,x_1)$.
The respective group $\mathcal{S}_{N=2}$ therefore induces a partition in which all configurations ${\bf x},{\bf y}$ with the same number of attributes $a,b,c$ are \emph{lumped} into the same set, which we may denote as $X_{\langle k_a,k_{b},k_c\rangle}$. See r.h.s. of Fig. \ref{fig:MicroVM.2agents3states.AutW}. \begin{figure}[htbp] \centering \includegraphics[width=0.80\textwidth]{MicroVM.2agents3states.AutW.eps} \caption{$H(2,3)$ and the reduction induced by $\mathcal{S}_N$.} \label{fig:MicroVM.2agents3states.AutW} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=.90\textwidth]{StateTopologyAndTransitions.T3.N8.eps} \caption{Transition structure (l.h.s) and state topology (r.h.s) of the VM with three attributes for $N=8$.} \label{fig:StateTopologyAndTransitions.T3.N8} \end{figure} More generally in the case of $N$ agents and $\delta$ agent attributes the group $\mathcal{S}_{N}$ induces a partition of the configuration space ${\bf \Sigma}$ by which all configurations with the same attribute frequencies are collected in the same macro set. Let us define $N_s ({\bf x})$ to be the number of agents in the configuration ${\bf x}$ with attribute $s$, $s = 1,\ldots,\delta$, and then $X_{\langle k_1, k_2, \dots, k_{\delta}\rangle} \subset {\bf \Sigma}$ as \begin{equation} \begin{split} X_{\langle k_1, \dots, k_s, \dots, k_{\delta} \rangle} = \left\{ \vphantom{\sum_{s=1}^{\delta}}{\bf x} \in {\bf \Sigma} \ : N_1({\bf x}) = k_1, \dots, N_s({\bf x}) = k_s, \dots\right.\\\left.\ldots, N_{\delta} ({\bf x}) = k_{\delta} \mbox{ and } \ \sum_{s=1}^{\delta} k_{s} = N\right\}. \end{split} \label{eq:X< >} \end{equation} Each $X_{\langle k_1, k_2, \dots, k_{\delta}\rangle}$ contains all the configurations ${\bf x}$ in which exactly $k_s$ agents hold attribute $s$ for any $s$. We use the notation $\langle k_1, k_2, \dots, k_{\delta }\rangle$ to indicate that $\sum_{s=1}^{\delta } k_{s} = N$. 
Therefore, the reduced state space is organized as a $\delta$-simplex lattice, see Fig. \ref{fig:StateTopologyAndTransitions.T3.N8}. For a model with $N=8$ and $\delta = 3$ the resulting reduced state space is shown in Fig. \ref{fig:StateTopologyAndTransitions.T3.N8}. The transition structure depicted in Fig. \ref{fig:StateTopologyAndTransitions.T3.N8} corresponds to the VM. The number of $a$, $b$ and $c$ agents is denoted by (respectively) $k$, $l$ and $m$, so that ${\bf X} =\{ X_{\langle k,l,m \rangle} : 0 \leq k,l,m \leq N, k+l+m = N \}$. The number of states for a system with $N$ agents is $S = \sum_{i = 0}^N (i+1) = \frac{(N + 1)(N + 2)}{2}$. For Voter-like models -- used, for instance, as models of opinion and social dynamics -- it is not unusual to study the dynamical behavior by looking at the time evolution of the respective attribute frequencies. It is important to notice, however, that the resulting partition is lumpable only if the transition matrix $\hat{P}$ is symmetric with respect to the action of $\mathcal{S}_N$ on ${\bf \Sigma}$, namely, if Thm. \ref{thm:symmetry} holds for $\mathcal{S}_N$. We have shown in \cite{Banisch2012son} that this is only true for homogeneous mixing, and the case of inhomogeneous interaction topologies is discussed in~\cite{Banisch2013acs}. \begin{figure}[htbp] \centering \includegraphics[width=0.95\textwidth]{MicroVM.2agents3states.AutD.eps} \caption{$H(2,3)$ and the reductions induced by $\mathcal{S}_{\delta}$.} \label{fig:MicroVM.2agents3states.AutD} \end{figure} Let us now consider $\mathcal{S}_{\delta}$. On the l.h.s. of Fig. \ref{fig:MicroVM.2agents3states.AutD} the graph $H(2,3)$ is shown along with the bijections on it induced by the permutation of attributes $a$ and $c$, $abc \stackrel{\hat{\sigma}_{\delta_1}}{\longleftrightarrow} cba$. Effectively, this corresponds to the situation of looking at ``one attribute ($b$) against the other two ($x = a \cup c$)''.
Notably, taking that perspective (see the graph in the middle of Fig. \ref{fig:MicroVM.2agents3states.AutD}) corresponds to a reduction of $H(2,3)$ to $H(2,2)$ or, more generally, of $H(N,3)$ to the hypercube $H(N,2)$. This means that, under the assumption of agent rules that are symmetric with respect to the attributes, single-step models with $\delta$ states are reducible to the binary case. Moreover, even the binary case allows for further reduction (see r.h.s. of Fig. \ref{fig:MicroVM.2agents3states.AutD}). Namely, we may assume the additional symmetry $bx \stackrel{\hat{\sigma}_{\delta_2}}{\longleftrightarrow} xb$, corresponding in a binary setting to the simultaneous flip of all agent states $x_i \rightarrow \bar{x}_i, \forall i$. The VM is a nice example in which, independent of the interaction topology, $\hat{P}({\bf x},{\bf y}) = \hat{P}(\bar{{\bf x}},\bar{{\bf y}})$. This reduces the state space to one half of $H(N,2)$, which we shall denote as $H_{1/2}(N,2)$. \begin{figure}[htbp] \centering \includegraphics[width=0.95\textwidth]{AutProjectionScheme.eps} \caption{Different levels of description are associated to different symmetry groups of $H(N,3)$.} \label{fig:AutProjectionScheme} \end{figure} The most interesting reductions can be reached by the combination of $\mathcal{S}_N$ and $\mathcal{S}_{\delta}$. Fig. \ref{fig:AutProjectionScheme} shows possible combinations and the resulting macroscopic state spaces starting from $H(N,3)$. For instance, partitioning $H(N,3)$ by using the set of agent permutations $\mathcal{S}_N$ leads to a state space organized as a triangular lattice (see also Fig. \ref{fig:StateTopologyAndTransitions.T3.N8}). Lumpability of the micro process $({\bf \Sigma},\hat{P})$ on $H(N,3)$ with respect to this state space rests upon the symmetry of the agent interaction probabilities with respect to all agent permutations (\cite{Banisch2013acs}, see \cite{Banisch2014acscodym} for a discussion of the non-lumpable case).
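The topology-independence of the flip symmetry is easy to check numerically. The Python sketch below (with an arbitrarily chosen ring network and the binary voter rule, both our illustrative assumptions) verifies $\hat{P}({\bf x},{\bf y}) = \hat{P}(\bar{{\bf x}},\bar{{\bf y}})$ for all off-diagonal pairs:

```python
from itertools import product

N = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]       # ring topology (arbitrary choice)
pairs = edges + [(j, i) for i, j in edges]     # ordered pairs of linked agents
w = 1.0 / len(pairs)                           # omega: uniform over linked pairs

def phat(x, y):
    # hat{P}(x, y) for x != y: some agent i copies a network neighbour j
    return w * sum(1 for i, j in pairs if y == x[:i] + (x[j],) + x[i + 1:])

def flip(x):
    # the simultaneous state flip x_i -> 1 - x_i for all agents
    return tuple(1 - s for s in x)

for x in product((0, 1), repeat=N):
    for y in product((0, 1), repeat=N):
        if x != y:
            # the flip is a symmetry of the chain, whatever the topology
            assert abs(phat(x, y) - phat(flip(x), flip(y))) < 1e-12
```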
From the triangular structure shown on the upper right in Fig. \ref{fig:AutProjectionScheme}, a further reduction can be obtained by taking into account the symmetry of the interaction rules with respect to (at least) one pair of attributes, which we have denoted as $\hat{\sigma}_{\delta_1}$. The resulting macro process on ${\bf X} = (X_0,\ldots,X_N)$ is a random walk on the line with $N+1$ states, known as the Moran process for the VM interaction (after \cite{Moran1958}). In a binary setting, the macro states $X_k$ collect all micro configurations with $k$ agents in state $\square$ (and therefore $N-k$ agents in $\blacksquare$). Notice that a Markov projection to the Moran process is possible also for $\delta > 3$ if the micro process is symmetric with respect to permutations of (at least) $\delta-1$ attributes. The group of transformations associated to this partition may be written as $\mathcal{S}_N \otimes \mathcal{S}_{\delta-1} \subset Aut(H(N,\delta))$. The reduction obtained by using the full automorphism group of $H(N,3)$ is shown at the bottom of Fig. \ref{fig:AutProjectionScheme}. With respect to the Moran process on ${\bf X} = (X_0,\ldots,X_N)$, it means that the pairs $\{X_k,X_{(N-k)}\}$ are lumped into the same state $Y_k$. This can be done if we have, for any $k$, $P(X_k,X_{k \pm 1}) = P(X_{(N-k)},X_{(N-k) \mp 1})$. As a matter of fact, this description still captures the number of agents in the same state, but now the information about which state they are in is omitted. This is only possible (lumpable) if the model implements completely symmetric interaction rules. \section{Summary} This paper analyses the probabilistic structure of a class of agent-based models (ABMs). In an ABM in which $N$ agents can be in $\delta$ different states there are $\delta^N$ possible agent configurations, and each iteration of the model takes one configuration into another one.
It is therefore convenient to conceive of the agent configurations as the nodes of a huge directed graph and to link two configurations ${\bf x},{\bf y}$ whenever the application of the ABM rules to ${\bf x}$ may lead to ${\bf y}$ in one step. If a model operates with a sequential update scheme by which one agent is chosen to update its state at a time, transitions are only allowed between system configurations that differ with respect to a single element (agent). The graph associated to those single-step models is the Hamming graph $H(N,\delta)$. The fact that a single-step ABM corresponds to a random walk on a regular graph allows for a systematic study of the symmetries in the dynamical structure of an ABM. Namely, the existence of non-trivial automorphisms of the ABM micro chain tells us that certain sets of agent configurations can be interchanged without changing the probability structure of the random walk. These sets of micro states can be aggregated or lumped into a single macro state and the resulting macro-level process is still a Markov chain. If the microscopic rules are symmetric with respect to agent ($\mathcal{S}_N$) and attribute ($\mathcal{S}_{\delta}$) permutations, the full automorphism group of $H(N,\delta)$ is realized and allows for a reduction from $\delta^N$ micro to around $N/2$ macro states. Moreover, different combinations of subgroups of automorphisms and the reductions they imply are rather meaningful in terms of observables and system properties. Notice finally that other update schemes (beyond single-step dynamics) -- even the case of synchronous update\footnote{The author thanks J\"{u}rgen Jost for this suggestion.} -- do not necessarily affect the symmetries of the micro process. The described approach may be applied to these cases as well. Extending the framework to models with continuous agent attributes is another challenging issue to be addressed by future work.
\section{Introduction} Numerical solution of the time-dependent Maxwell equations is an important computational problem arising in various scientific and engineering fields such as photonic crystal modeling, the gas and oil industry, biomedical simulations, and astrophysics. Rather often the application environment suggests that the Maxwell equations have to be solved many times, for instance, for different source functions or different medium parameters~\cite{BotchevHanseUppu2018}. The size of the spatial computational domain as well as the necessity to solve the equations many times make this task very demanding in terms of computational costs. Therefore, advanced computational techniques have to be applied, such as modern finite element discretizations~\cite{Descombes_ea2013_DGTD,Sarmany_ea2013} in space and efficient integration schemes in time. Along with multirate and implicit time integration schemes~\cite{VerwerBotchev09,Descombes_ea2016}, exponential time integration schemes~\cite{HochbruckOstermann2010} have recently been shown promising for solving the Maxwell equations in time~\cite{Hochbruck_Pazur_ea2015,Botchev2016,BotchevHanseUppu2018}. Exponential time integration schemes, which are essentially based on the notion of the matrix exponential, are attractive not only because of their excellent stability and accuracy properties but also due to their efficiency and potential for parallelism in time~\cite{PARAEXP,Kooij_ea2017}. Most frequently, especially when the spatially discretized Maxwell operator $\mathcal{A}$ (defined in~\eqref{mxw1} below) is not a skew-symmetric matrix, the actions of the matrix exponential in exponential time integration schemes are evaluated by Krylov subspace methods.
To be efficient, Krylov subspace methods often need to rely on rational approximations~\cite{DruskinKnizh98,MoretNovati04,EshofHochbruck06,PhD_Guettel} (so that the Krylov subspace is built up for a rational function of $\mathcal{A}$ rather than for $\mathcal{A}$ itself) and on the so-called restarting techniques~\cite{PhD_Niehoff,TalEzer2007,Afanasjew_ea08,PhD_Guettel,Eiermann_ea2011} (to keep the Krylov subspace dimension restricted). A popular variant of the rational Krylov subspace methods is the shift-and-invert (SAI) method~\cite{MoretNovati04,EshofHochbruck06}. Rational Krylov subspace methods and, in particular, the SAI Krylov subspace method, as well as implicit time integration schemes, involve the solution of linear systems with the matrix $I+\gamma\mathcal{A}$, with $\gamma>0$ being a given parameter (which is, in the case of implicit time stepping, the time step size). Despite the significant progress achieved in recent decades in sparse direct linear system solvers, for three-dimensional (3D) problems iterative linear system solvers remain the methods of choice. The task of solving linear systems with the matrix $I+\gamma\mathcal{A}$ when $\mathcal{A}$ is a spatially discretized Maxwell operator is especially challenging for the Maxwell equations. This is caused not only by the fact that the matrix $\mathcal{A}$ has a saddle point structure but also by the special nonreflecting boundary conditions which are often imposed for the Maxwell equations. In this paper a preconditioner is proposed to solve iteratively linear systems with the matrix $\mathcal{A}$ when the popular perfectly matched layers (PML) boundary conditions are imposed. In this case the matrix $\mathcal{A}$ has the so-called double saddle point structure~\cite{BeikBenzi2018}. This paper is organized as follows. The problem is set up in Section~2. In Section~3 we present the nested Schur complement solver. Other possible preconditioners for these problems are discussed in Section~4.
Section~5 is devoted to numerical experiments. Finally, the conclusions are drawn in the last section, whereas some background material is given in two appendices. \section{Problem formulation} We are interested in solving a system of time-dependent three-dimensional (3D) Maxwell equations \begin{equation} \label{mxw} \left\{ \begin{alignedat}{4} &\mu \partial_t H \,=\,& - \sigma_1 &H & \,-\, \nabla\times &E & \,+\, &J_H, \\ &\varepsilon \partial_t E \,=\,& \nabla\times &H & \,-\, \sigma_2 &E & \,+\, &J_E, \\ \end{alignedat} \right. \end{equation} where $H=H(x,y,z,t)$ and $E=E(x,y,z,t)$ are the magnetic and electric fields, respectively, $\mu$ is the magnetic permeability (as typical for photonics and gas-and-oil exploration applications, $\mu\equiv 1$ for all the tests considered in this paper; however, in general one can have $\mu=\mu(x,y,z)$) and $\varepsilon=\varepsilon(x,y,z)>0$ is the electric permittivity. Furthermore, $\sigma_{1,2}=\sigma_{1,2}(x,y,z)\geqslant 0$ are the conduction terms, such that $\sigma_2$ contains the real physical conductivity as well as additional artificial conductivity related to nonreflective boundary conditions (in this work we use the stretched-coordinate formulation of the perfectly matched layers, PML, boundary conditions~\cite{Johnson2010_PML}), whereas $\sigma_1$ normally contains artificial PML conductivity values only. The functions $J_{H,E}=J_{H,E}(x,y,z,t)$ represent given source terms.
If, for the moment, we assume that the homogeneous Dirichlet boundary conditions are supplied with~\eqref{mxw}, then a standard Yee finite difference discretization on a staggered Cartesian mesh results in a time-continuous space-discretized system \begin{equation} \label{mxw0} \begin{bmatrix} M_\mu & 0 \\ 0 & M_\varepsilon \end{bmatrix} \begin{bmatrix} \bm{h}' \\ \bm{e}' \end{bmatrix} = -\begin{bmatrix} M_{\sigma_1} & K \\ -K^T & M_{\sigma_2} \end{bmatrix} \begin{bmatrix} \bm{h} \\ \bm{e} \end{bmatrix} + \begin{bmatrix} \bm{j}_H \\ \bm{j}_E \end{bmatrix}, \end{equation} where the vector functions $\bm{h}(t)$ and $\bm{e}(t)$ contain the mesh values of the unknown fields, $M_\mu$, $M_\varepsilon$, and $M_{\sigma_{1,2}}$ are diagonal matrices containing the values of $\mu$, $\varepsilon$, and $\sigma_{1,2}$, respectively, $K$ and $K^T$ are discrete curl operators and $\bm{j}_{H,E}(t)$ are the mesh values of the source functions $J_{H,E}$. Note that a semidiscrete system of ordinary differential equations (ODEs), which is very similar to~\eqref{mxw0}, is also obtained when the standard Whitney-N\'ed\'elec vector finite elements are employed (see e.g.~\cite{RodrigueWhite01,BotchevVerwer09,VerwerBotchev09}). In this case $M_\mu$, $M_\varepsilon$, and $M_{\sigma_{1,2}}$ are the mass matrices. 
It is convenient to rewrite the system~\eqref{mxw0} as \begin{equation} \label{mxw1} \begin{gathered} \begin{bmatrix} \bm{h}' \\ \bm{e}' \end{bmatrix} = -A \cdot \begin{bmatrix} \bm{h} \\ \bm{e} \end{bmatrix} + \begin{bmatrix} M_\mu^{-1}\bm{j}_H(t) \\ M_\varepsilon^{-1}\bm{j}_E(t) \end{bmatrix}, \\ A = \begin{bmatrix} M_\mu^{-1} & 0 \\ 0 & M_\varepsilon^{-1} \end{bmatrix} \begin{bmatrix} M_{\sigma_1} & K \\ -K^T & M_{\sigma_2} \end{bmatrix} = \begin{bmatrix} M_{1} & K_1 \\ -K_2^T & M_{2} \end{bmatrix} \in\mathbb{R}^{n\times n}, \end{gathered} \end{equation} where $M_1=M_\mu^{-1}M_{\sigma_1}$, $K_1=M_\mu^{-1}K$, $M_2=M_\varepsilon^{-1}M_{\sigma_2}$, $K_2^T=M_\varepsilon^{-1}K^T$, and the inverse mass matrices are computed explicitly only if they are diagonal or block diagonal. The latter is the case if discontinuous Galerkin finite elements are used, see e.g.~\cite{Sarmany_ea2013}. We denote the size of the ODE system in~\eqref{mxw1} by $n$, and let $n=n_1+n_2$, where $n_{1,2}$ are the numbers of degrees of freedom associated with magnetic and electric fields, respectively. Employment of the nonreflective PML boundary conditions~\cite{Johnson2010_PML,PML94} means that auxiliary variables are added to the Maxwell system~\eqref{mxw} which, after space discretization, enter the semidiscrete system~\eqref{mxw1} as well. These additional variables are nonzero only in the so-called PML region (a region just outside the boundary of the domain of interest). 
Incorporation of the PML boundary conditions into~\eqref{mxw},\eqref{mxw1} (for a detailed derivation we refer to~\cite{deCloetMarissenWestendorp2015}) leads to the resulting semi-discrete ODE system of an extended size \begin{equation} \label{mxw2} y'(t)=-\mathcal{A} y(t) + g(t), \qquad \mathcal{A} = \begin{bmatrix} A & B_1^T \\ -B_2 & 0 \end{bmatrix}\in\mathbb{R}^{N\times N}, \end{equation} where $N=m+n$, with $m$ being the number of space-discretized auxiliary PML variables ($m$ is proportional to the number of mesh points in the PML region), and the matrices $B_{1,2}\in\mathbb{R}^{m\times n}$ couple these variables to the main variables $\bm{h}$ and $\bm{e}$. For representative values of $m$ and $n$ see Table~\ref{t:mesh} in the numerical experiment section below. The matrices $B_{1,2}$ are defined in more detail below in~\ref{app1}. \section{Nested Schur complement solver} Exponential time integration based on rational shift-and-invert Krylov subspace methods~\cite{EshofHochbruck06,Botchev2016} as well as implicit time integration~\cite{VerwerBotchev09} of systems~\eqref{mxw2} involves solution of linear systems \begin{equation} \label{ls} (I+\gamma\mathcal{A}) x = b, \end{equation} where $\mathcal{A}$ is defined in~\eqref{mxw2} and $\gamma>0$ is a given parameter, related (or equal) to the time step size. Matrices having nested saddle point structure\footnote{Formally speaking, $I+\gamma\mathcal{A}$ gets a saddle point structure when we switch the sign in the second block row of a linear system $(I+\gamma\mathcal{A}) x =b$.} as the matrix $\mathcal{A}$ are recently called double saddle point problems~\cite{BeikBenzi2018}. Our starting point in construction of preconditioners for matrices of this type is the observation that an efficient preconditioner should involve a Schur complement (see e.g.~\cite{MurphyGolubWathen2000,ActaNumerKKT2005,ElmanSilvesterWathen:book}). 
In particular, for matrices $$ \begin{bmatrix} \mathtt{A} & \mathtt{B} \\ \mathtt{C} & \mathtt{D} \end{bmatrix}, $$ good Schur complement-based block-diagonal preconditioners are $$ \begin{bmatrix} \mathtt{A} & \mathtt{O} \\ \mathtt{O} & \mathtt{D}-\mathtt{C}\mathtt{A}^{-1}\mathtt{B} \end{bmatrix} \quad\text{or}\quad \begin{bmatrix} \mathtt{A}-\mathtt{B}\mathtt{D}^{-1}\mathtt{C} & \mathtt{O} \\ \mathtt{O} & \mathtt{D} \end{bmatrix}. $$ For modern Krylov subspace methods such as GMRES, these preconditioners guarantee convergence in three iterations~\cite{MurphyGolubWathen2000}. Applying them to our matrix $I+\gamma\mathcal{A}$ means that linear systems with $$ \begin{aligned} \text{either (option~1)}\quad & \mathtt{D}-\mathtt{C}\mathtt{A}^{-1}\mathtt{B}= I+\gamma^2B_2(I+\gamma A)^{-1}B_1^T \\ \text{or (option~2)}\quad & \mathtt{A}-\mathtt{B}\mathtt{D}^{-1}\mathtt{C}= I+\gamma A+\gamma^2B_1^TB_2 \end{aligned} $$ have to be solved efficiently. Comparing these two possible options, we choose option~2 because of the simpler structure of the matrix. Furthermore, assuming for the moment that the systems with the matrix $I+\gamma A$ can be solved efficiently, and taking into account that $B_1^TB_2$ is of a low rank, we may expect that $I+\gamma A$ can be a good preconditioner when solving systems with the matrix $I+\gamma A+\gamma^2B_1^TB_2$. This expectation is confirmed in practice: the number of iterations preconditioned by $I+\gamma A$ needed to solve systems with $I+\gamma A+\gamma^2B_1^TB_2$ remains approximately constant as the spatial discretization mesh gets finer (see Table~\ref{t:B1B2}). Moreover, as shown by formula~\eqref{B1B2} in the appendix below, the matrix $\gamma^2B_1^TB_2$ depends on the mesh size in a similar way as the matrix $I+\gamma A$ does: only the $(1,2)$ and $(2,1)$ blocks in this matrix depend on the mesh size as $\sim 1/h$ (assuming $h=h_x=h_y=h_z$).
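To illustrate why option~2 suffices, here is a minimal dense Python/NumPy sketch (random blocks standing in for the actual discretization, and a direct solve standing in for the preconditioned iteration) showing that a solve with $I+\gamma\mathcal{A}$ reduces to a single solve with the option-2 matrix $I+\gamma A+\gamma^2B_1^TB_2$ plus two cheap block updates:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, gamma = 8, 3, 0.1                      # illustrative sizes and step size

A  = rng.standard_normal((n, n))             # stand-in for the Maxwell block A
B1 = rng.standard_normal((m, n))             # stand-ins for the PML couplings
B2 = rng.standard_normal((m, n))
calA = np.block([[A, B1.T], [-B2, np.zeros((m, m))]])

b = rng.standard_normal(n + m)
b1, b2 = b[:n].copy(), b[n:]

# eliminate the PML block from the right-hand side
b1 -= gamma * (B1.T @ b2)
# Schur complement solve (dense direct solve as a stand-in for the
# iteration preconditioned by I + gamma*A)
S = np.eye(n) + gamma * A + gamma**2 * (B1.T @ B2)
x1 = np.linalg.solve(S, b1)
# recover the auxiliary PML variables
x2 = b2 + gamma * (B2 @ x1)
x = np.concatenate([x1, x2])

# the assembled vector solves the full double saddle point system
assert np.allclose((np.eye(n + m) + gamma * calA) @ x, b)
```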
Now a question arises whether and how the systems with the matrix \begin{equation} \label{I_gA} I+\gamma A = \begin{bmatrix} I + \gamma M_1 & \gamma K_1 \\ -\gamma K_2^T & I+\gamma M_2 \end{bmatrix} \end{equation} can be solved efficiently. We proceed in a similar way as for the matrix $I+\gamma\mathcal{A}$ and explore the two possible options for a Schur complement-based preconditioner: \begin{align} \label{sch1} \text{option 1:}\quad & I+\gamma M_2 + \gamma^2K_2^T(I+\gamma M_1)^{-1}K_1, \\ \text{option 2:}\quad & I+\gamma M_1 + \gamma^2K_1(I+\gamma M_2)^{-1}K_2^T. \notag \end{align} In many applications involving the Maxwell equations (such as, e.g., photonics and gas-and-oil exploration) the permeability $\mu$ is usually constant ($\mu\equiv 1$), whereas $\varepsilon$ is not and can be a strongly varying function. Since the motivation for this work is photonics modeling, we choose option~1, where the matrix $\gamma^2K_2^T(I+\gamma M_1)^{-1}K_1$ has a simpler structure than the matrix $\gamma^2K_1(I+\gamma M_2)^{-1}K_2^T$ in option~2. It is convenient to rewrite the chosen Schur complement~\eqref{sch1} in the form \begin{equation} \label{sch2} I+\gamma M_2 + \gamma^2K_2^T(I+\gamma M_1)^{-1}K_1 = M_\varepsilon^{-1} \left[ M_\varepsilon +\gamma M_{\sigma_2} + \gamma^2 K^T(M_\mu + \gamma M_{\sigma_1})^{-1}K \right], \end{equation} which has the advantage that the matrix in brackets is symmetric positive definite. Further inspection of the bracketed matrix reveals that it is similar in structure to a shifted Laplacian, where the shift is given by $M_\varepsilon +\gamma M_{\sigma_2}$. For this reason a large variety of solvers is available for solving systems with this matrix. These include (i)~sparse direct factorization solvers (on coarse meshes); (ii)~multigrid solvers (which should not be too difficult in implementation since only one field is involved); (iii)~algebraic multigrid methods; (iv)~preconditioned conjugate gradients (CG).
In this paper we use the CG solver preconditioned by the incomplete Cholesky IC(0) preconditioner~\cite{ICCG}. The described nested Schur complement approach can be used either as a Schur complement based preconditioner or as a ``direct'' solver, computing the inverse action $(I+\gamma\mathcal{A})^{-1}$ with the inner two-level iterative solver for the Schur complement. In Figure~\ref{f:alg} we outline the outer part of the introduced nested Schur complement solver for the case where the action of $(I+\gamma\mathcal{A})^{-1}$ is computed. The inner part, corresponding to the action of $(I+\gamma A)^{-1}$, can then be computed in a similar fashion, as described above. \begin{figure} \centering{\begin{minipage}{0.8\linewidth} \fbox{Given $I+\gamma\mathcal{A}\in\mathbb{R}^{N\times N}$ and $b\in\mathbb{R}^N$ (cf.~\eqref{mxw1},\eqref{mxw2}), solve $(I+\gamma\mathcal{A})x=b$} \\ 1.~Partition $b$ into $b=\begin{bmatrix}b_1\\b_2\end{bmatrix}$, with $b_1\in\mathbb{R}^n$ and $b_2\in\mathbb{R}^m$. \\ 2.~Set $b_1:=b_1-\gamma B_1^Tb_2$. \\ 3.~The outer solver: solve $(I+\gamma A + \gamma^2 B_1^TB_2)x_1=b_1$ iteratively,\\$\phantom{\text{3.~}}$preconditioned by $I+\gamma A$. \\ 4.~Set $x_2:=b_2 + \gamma B_2x_1$ and $x:=\begin{bmatrix}x_1\\x_2\end{bmatrix}$. \end{minipage}} \caption{Algorithm description of the outer part of the nested Schur complement solver} \label{f:alg} \end{figure} \section{Other possible preconditioners} A variety of other preconditioners is available for solving saddle point problems of type~\eqref{ls}, see e.g.~\cite{ActaNumerKKT2005,ElmanSilvesterWathen:book} and the recent work~\cite{BeikBenzi2018}. However, a general problem with linear solvers employed in implicit and exponential time integrators is that the additional computational work spent on solving linear systems has to be paid off by an increase in the time step.
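For concreteness, steps 1--4 of the outer solver in Figure~\ref{f:alg} can be condensed into a short NumPy sketch. Dense random matrices stand in for $A$, $B_1$, $B_2$, and a dense solve stands in for the preconditioned iteration of step~3 (all sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, gamma = 8, 5, 0.012
A  = rng.standard_normal((n, n))
B1 = rng.standard_normal((m, n))   # B1^T couples the m PML unknowns into the field block
B2 = rng.standard_normal((m, n))

# I + gamma*calA with block structure [[I+gamma A, gamma B1^T], [-gamma B2, I]]
calA = np.block([[np.eye(n) + gamma * A, gamma * B1.T],
                 [-gamma * B2,           np.eye(m)  ]])
b = rng.standard_normal(n + m)

b1, b2 = b[:n].copy(), b[n:]                                # step 1
b1 -= gamma * (B1.T @ b2)                                   # step 2
x1 = np.linalg.solve(np.eye(n) + gamma * A
                     + gamma**2 * (B1.T @ B2), b1)          # step 3
x2 = b2 + gamma * (B2 @ x1)                                 # step 4
x = np.concatenate([x1, x2])

assert np.allclose(calA @ x, b)
```

The cheap steps 2 and 4 exploit the identity block of $I+\gamma\mathcal{A}$, so only the Schur complement system of step 3 requires an iterative solve.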
Assume, for instance, that approximately ten iterations with a basic preconditioner have to be done per time step, such that the costs of a preconditioned matrix--vector product (matvec) are approximately equal to the costs of an unpreconditioned matvec (which can be achieved by Eisenstat's trick~\cite{EisenTrick}). Then the time step size has to be increased at least by a factor of ten to compensate for the increased costs. Such a time step increase, however, is not always possible due to accuracy restrictions, especially for mildly stiff problems such as the Maxwell equations with PML boundary conditions. This makes the choice of a proper preconditioner difficult and significantly restricts the variety of possible options~\cite{VerwerBotchev09}. \begin{figure} \begin{center} \includegraphics[width=0.8\linewidth]{sparsity_A_prec_adi_10x10x6.png} \\ \includegraphics[width=0.8\linewidth]{sparsity_A_prec_field_10x10x6.png} \end{center} \caption{Sparsity patterns of the matrix $I+\gamma A$ (left column) and its factors $I+\gamma A_{1,2}$ in the ADI (top row) and FS (bottom row) preconditioners for a coarse mesh $10\times 10\times 6$. The matrices $I+\gamma A$ and $I+\gamma A_{1,2}$ occupy the first 5082 rows and columns of the matrices $I+\gamma\mathcal{A}$ and $I+\gamma\mathcal{A}_{1,2}$, respectively.} \label{f:prec} \end{figure} Recently an efficient alternating direction implicit (ADI) preconditioner was proposed and analyzed for solving time-dependent Maxwell equations~\cite{Hochbruck_ea2015_ADI} discretized in space by finite differences. In~\cite{deCloetMarissenWestendorp2015} de~Cloet, Marissen and Westendorp compared the performance of this ADI preconditioner with another preconditioner based on field splitting (i.e., a splitting into the magnetic and electric fields). Their conclusion is that this field splitting (FS) preconditioner outperforms the ADI preconditioner in terms of the CPU time.
Unlike the ADI preconditioner, the FS preconditioner is not restricted to finite difference approximations on Cartesian meshes. The linear system~\eqref{ls}, with either the FS or ADI preconditioner applied from the right, can be written as \begin{equation} \label{pls} \widetilde{\mathcal{A}}\widetilde{x} = b, \quad \widetilde{\mathcal{A}}=(I+\gamma\mathcal{A})\mathcal{M}^{-1}, \quad \widetilde{x} = \mathcal{M} x, \end{equation} where \begin{equation} \label{fsp} \mathcal{M}=(I+\gamma\mathcal{A}_1)(I+\gamma\mathcal{A}_2), \quad \mathcal{A}_1+\mathcal{A}_2 = \mathcal{A}. \end{equation} In the FS preconditioner the matrices $\mathcal{A}_{1,2}$ are defined as \begin{equation} \label{FS} \begin{gathered} \mathcal{A}_1 = \begin{bmatrix} A_1 & B_{1,H}^T \\ -B_{2,H} & 0 \end{bmatrix}, \quad \mathcal{A}_2 = \begin{bmatrix} A_2 & B_{1,E}^T \\ -B_{2,E} & 0 \end{bmatrix}, \\ A = A_1 + A_2, \quad A_1 = \begin{bmatrix} M_1 & K_1 \\ 0 & 0 \end{bmatrix}, \quad A_2 = \begin{bmatrix} 0 & 0 \\ -K_2^T & M_2 \end{bmatrix}, \end{gathered} \end{equation} where the matrices $B_{j,H}$, $B_{j,E}$, $j=1,2$, form splittings of the PML blocks $B_{1,2}$, $$ B_1=B_{1,H}+ B_{1,E}, \quad B_2=B_{2,H}+ B_{2,E}, $$ defined in Appendix~\ref{app2}. For a complete definition of the ADI preconditioner we refer to \cite[Section~5.1]{deCloetMarissenWestendorp2015} and \cite{Hochbruck_ea2015_ADI}. In both the ADI and FS preconditioners the factors $I+\gamma\mathcal{A}_{1,2}$ are not triangular matrices (see Figure~\ref{f:prec}) and sparse LU~factorizations have to be carried out to implement the preconditioner actions.
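The effect of a factored preconditioner of the form~\eqref{fsp} can be illustrated with a generic splitting $A=A_1+A_2$ (dense random stand-ins; in practice the factors are sparse and LU-factored once). The sketch verifies that $\mathcal{M}$ matches $I+\gamma A$ up to an $O(\gamma^2)$ term and that applying $\mathcal{M}^{-1}$ amounts to two factor solves:

```python
import numpy as np

rng = np.random.default_rng(3)
n, gamma = 6, 0.012
# Generic splitting A = A1 + A2 (stand-ins for the FS field blocks).
A1 = rng.standard_normal((n, n))
A2 = rng.standard_normal((n, n))
A = A1 + A2
I = np.eye(n)

M = (I + gamma * A1) @ (I + gamma * A2)
# The factored preconditioner agrees with I + gamma*A up to an O(gamma^2) term:
assert np.allclose(M, I + gamma * A + gamma**2 * (A1 @ A2))

# Applying M^{-1} costs two (in practice sparse, LU-factored) solves:
v = rng.standard_normal(n)
w = np.linalg.solve(I + gamma * A2, np.linalg.solve(I + gamma * A1, v))
assert np.allclose(M @ w, v)
```

The $O(\gamma^2)$ splitting error is what the outer Krylov iteration has to remove, which is why such preconditioners work best for small $\gamma$, i.e., small time steps.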
\begin{figure} \centering{% \includegraphics[width=0.35\linewidth]{Ritz_vv24_adi} ~~ \includegraphics[width=0.35\linewidth]{Ritz_vv24_field}} \caption{Ritz values on the complex plane computed after 24~iterations of the FOM iterative method with the ADI (left) and FS (right) preconditioners} \label{Rvv} \end{figure} Numerical tests presented in~\cite{deCloetMarissenWestendorp2015} demonstrate that both the ADI and FS preconditioners require approximately the same number of iterations to converge. However, the FS preconditioner is faster than ADI in terms of the CPU time. This is caused by another attractive property of the FS preconditioner observed in~\cite{deCloetMarissenWestendorp2015}: sparse LU factorizations of the FS factors $I+\gamma\mathcal{A}_{1,2}$ do not yield any additional fill-in. This is not the case for the ADI preconditioner, see \cite[Section~6.2]{deCloetMarissenWestendorp2015}, where the fill-in varies from 25\% on coarse meshes to 65\% on the mesh $80\times 80\times 48$. Ritz values obtained after 24~iterations of the FOM iterative method~\cite{GMRES} with both the ADI and FS preconditioners are plotted in Figure~\ref{Rvv}. There we see that both preconditioners yield preconditioned matrices with effectively real spectra. \section{Numerical experiments} In the test problem a 3D photonic crystal is considered. At the $x$- and $y$-boundaries of the spatial domain $[1,4]\times [1,4]\times [0,3]$ the PML boundary conditions are imposed, whereas homogeneous Dirichlet (perfectly conducting) boundary conditions are posed on the $z$-boundaries. The PML regions extend the total computational domain along the $x$- and $y$-walls to $[0,5]\times [0,5]\times [0,3]$. The crystal consists of $3\times 3\times 3$ spheres of radius $0.4$ centered at points $(x_i,y_j,z_k)=(2.5+i,2.5+j,1.5+k)$, $i,j,k=-1,0,1$.
The magnetic permeability $\mu\equiv 1$ in the whole domain, whereas the electric permittivity $\varepsilon$ is set to~$8.9$ inside the spheres and to~$1$ everywhere else in the domain. \begin{table} \caption{The number of degrees of freedom for the meshes used in the tests} \label{t:mesh} \centering\begin{tabular}{cc} \hline\hline mesh & system size \\ $n_x \times n_y \times n_z$ & $N=n+m$ \\ \hline $20\times 20\times 12$ & $ 45\,565= 34\,398+11\,167$ \\ $40\times 40\times 24$ & $333\,425=252\,150+81\,275$ \\ $80\times 80\times 48$ & $2\,548\,441=1\,928\,934+619\,507$ \\ $160\times 160\times 96$ & $ 19\,922\,345 = 15\,086\,022 + 4\,836\,323$ \\\hline \end{tabular} \end{table} We consider matrices $\mathcal{A}$ resulting from the standard Yee finite difference approximation; see Table~\ref{t:mesh} for the mesh sizes used in the tests. The parameter $\gamma$ is chosen as explained in~\cite{Botchev2016} and set to $\gamma=0.012$ in all the tests. The tests are run in Matlab on a Linux PC with eight Intel Xeon E5504 2.00GHz CPUs. \begin{table} \caption{Iteration numbers and residual norms for solving linear systems with $I+\gamma A + \gamma^2 B_1^TB_2$ preconditioned by $I+\gamma A$, and the norms of the symmetric and skew-symmetric parts of $\gamma^2B_1^TB_2$, $\mathtt{H}=\frac{\gamma^2}2(B_1^TB_2+(B_1^TB_2)^T)$ and $\mathtt{S}=\frac{\gamma^2}2(B_1^TB_2-(B_1^TB_2)^T)$.} \label{t:B1B2} \centerline{\begin{tabular}{cccc} \hline\hline mesh & \#~iter, resid.~norm & $\|\mathtt{H}\|_1 $ & $\|\mathtt{S}\|_1$ \\ \hline $20\times 20 \times 12$ & 21, \texttt{2.80e-07} & 1968 & 9.8 \\ $40\times 40 \times 24$ & 21, \texttt{4.73e-07} & 1572 & 19.6 \\ $60\times 60 \times 36$ & 21, \texttt{5.71e-07} & 1458 & 29.4 \\ \hline \end{tabular}} \end{table} In Table~\ref{t:B1B2} we illustrate the fact that linear systems with the matrix $I+\gamma A + \gamma^2 B_1^TB_2$ can be efficiently solved iteratively using $I+\gamma A$ as a preconditioner.
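The mesh-independence observed in Table~\ref{t:B1B2} reflects the fact that the preconditioned matrix is a low-rank perturbation of the identity. This structural property can be checked directly on random stand-ins (the rank $m$ of the coupling is chosen small for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, gamma = 50, 3, 0.012   # m = rank of the coupling term (illustrative)
A  = rng.standard_normal((n, n))
B1 = rng.standard_normal((m, n))
B2 = rng.standard_normal((m, n))

P = np.eye(n) + gamma * A                       # preconditioner
K = P + gamma**2 * (B1.T @ B2)                  # system matrix
T = np.linalg.solve(P, K)                       # preconditioned matrix P^{-1} K

# T = I + P^{-1}(low-rank term): identity plus a rank-m perturbation,
# so a Krylov method needs at most m+1 iterations in exact arithmetic.
assert np.linalg.matrix_rank(T - np.eye(n), tol=1e-10) == m

# Finite termination: the Krylov vectors r0, T r0, ..., T^{m+1} r0 are
# linearly dependent (the minimal polynomial of T has degree <= m+1).
r0 = rng.standard_normal(n)
Kry = np.column_stack([np.linalg.matrix_power(T, k) @ r0 for k in range(m + 2)])
assert np.linalg.matrix_rank(Kry, tol=1e-8) <= m + 1
```

In the actual problem the rank of $B_1^TB_2$ grows with the number of PML unknowns, so convergence is not literally finite, but the clustering of the preconditioned spectrum around one persists.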
The preconditioner actions here are carried out with the help of a sparse LU factorization (UMFPACK in Matlab), which is why in this case we cannot use a fine mesh. In this test the exact solution vectors are taken to have normally distributed random entries with zero mean and variance one. This is done to make the test difficult, so that the solver cannot profit from any solution smoothness. The iteration counts listed there are for the BiCGstab(2) iterative solver (the standard built-in solver in Matlab) run to satisfy a stopping criterion tolerance of $10^{-6}$. In the same table we also list the norms of the symmetric and skew-symmetric parts of $\gamma^2 B_1^TB_2$. The values show that the field of values of this matrix is bounded and confirm the expectation given by relation~\eqref{B1B2}: only the off-diagonal blocks (related to the skew-symmetric part) of this matrix depend on the mesh size, and this dependence is linear. In Figure~\ref{f:I_gA} we plot 24~Ritz values of the preconditioned matrix $I+(I+\gamma A)^{-1}\gamma^2 B_1^TB_2$. As we see, the Ritz values are real and well clustered, which means that the preconditioner is efficient and damps the skew-symmetric part of the system matrix well.
\begin{figure} \centering{\includegraphics[width=0.7\linewidth]{rval_prec_I_gA}} \caption{Ritz values of the preconditioned matrix $I+(I+\gamma A)^{-1}\gamma^2 B_1^TB_2$ on the complex plane for the mesh $20\times 20\times 12$ (top) and $40\times 40\times 24$ (bottom)} \label{f:I_gA} \end{figure} \begin{table} \caption{Results for the nested Schur complement solver for the ``difficult'' test case, with random exact solution vector.} \label{t:NSC} \centering{\begin{tabular}{cccc} \hline\hline mesh & residual & iterations & CPU \\ & norm & outer (inner)& time \\\hline $20\times 20\times 12$ & \texttt{7.50e-11} & 31 (68) & 3.59~s \\ $40\times 40\times 24$ & \texttt{6.67e-11} & 32 (108) & 17.3~s \\ $80\times 80\times 48$ & \texttt{6.84e-11} & 32 (145) & 135.6~s \\ $160\times 160\times 96$ & \texttt{7.01e-11} & 31 (234) & 1258~s \\\hline \end{tabular}} \end{table} We now test our nested Schur complement solver. The linear systems in this test again have exact solutions that are normally distributed random vectors with zero-mean, unit-variance entries. This is done to prevent the solver from profiting from the solution smoothness. The solver is applied in its direct form, as outlined in Figure~\ref{f:alg}, with GMRES(10) as the outer solver and ICCG(0) as the inner solver. The stopping criterion tolerance in both solvers is set to $10^{-10}$. We see that the number of outer iterations remains constant, independently of the mesh size, as expected. The number of inner iterations grows because the CG solver is used with the simple IC(0) preconditioner. We note that the number of inner iterations changes from one outer iteration to another; the inner iteration count reported in the table is the maximum number of inner iterations (required in all cases at the last outer iteration). \begin{table} \caption{Comparison results of the nested Schur complement solver and the FS preconditioner.
The former is employed with GMRES(10) and ICCG(0) as the outer and inner solvers, respectively. The FS preconditioner is applied with nonrestarted GMRES.} \label{t:vs} \centering{\begin{tabular}{cclc} \hline\hline mesh, & method & CPU time, & residual \\ tolerance & & iter outer(inner) & norm \\ \hline $40\times 40\times 24$ & FS prec. & 0.49~s, 7~~~& \texttt{7.58e-12}\\ \texttt{9.64e-11} & nested Schur & 0.54~s, 2(8)& \texttt{1.23e-13}\\ \hline $80\times 80\times 48$ & FS prec. & 3.84~s, 8~~~& \texttt{5.95e-10}\\ \texttt{8.09e-09} & nested Schur & 3.51~s, 2(8)& \texttt{2.09e-09}\\ \hline $160\times160\times 96$& FS prec. & 59.7~s, 14~~~&\texttt{3.97e-09} \\ \texttt{4.23e-09} & nested Schur & 45.8~s, 2(19)&\texttt{3.77e-09}\\ \hline \end{tabular}} \end{table} Finally, in Table~\ref{t:vs} we present a comparison of the nested Schur complement solver with the FS preconditioner. The comparison is done on linear systems arising in the time integration carried out by an exponential integrator based on a rational shift-and-invert exponential Krylov subspace method. Therefore the stopping criterion tolerance varies between the tests and is reported in the table. We see that the nested Schur complement solver outperforms the FS preconditioner on fine meshes. This is expected because the FS preconditioner does not converge mesh-independently. \section{Conclusions and an outlook to further research} A nested Schur complement solver has been proposed for iterative linear system solution within exponential and implicit time integration of the Maxwell equations. The solver exhibits mesh-independent convergence and outperforms other preconditioners, such as the ADI (alternating direction implicit) and FS (field splitting) preconditioners. Different aspects of the proposed concept require further investigation and possible improvement. In the future we plan to test the nested Schur complement solver in combination with a more robust (and mesh-independent) inner iterative solver.
Another interesting research question is which form of the solver is more efficient: the direct form, as tested in this paper, or the iterative form, as a three-level iterative solver.
\section{APPROACHING A HUMAN} \label{sec:approaching} Before a robot approaches a human of interest, the robot needs to know where the human is and where to move in relation to the human, based on the environment model. The pose of the human is provided to the robot by an external pose detection system. Defining the robot goal pose by a fixed offset to the human is not robust because obstacles could make the goal unreachable. As in human-human interaction, there is a variable set of possible poses for HRI. This set of possible poses greatly increases the probability of finding a reachable robot goal, which is exploited by the new method we developed. All possible poses describe an area around the human which is delimited by the operating range of the user (Section \ref{sec:searcharea}). During the approach to the human, the robot needs to continuously update the whole calculation of the goal area (Section \ref{sec:calc}) because of the dynamics in the environment (e.g., the user is moving) and the limited field of view. The robot may not always perceive the goal area. Our implementation solves these issues by using a dynamic recalculation of a set of goal positions based on the grid map of the robot environment. The grid-based calculation efficiently rates multiple goal cells in the search area. This allows us to continuously update this procedure during the approach of the user and avoids the previously mentioned problems. The temporarily best-rated cell becomes the goal pose (Section \ref{sec:best}), extended by the orientation from this cell to the center of the human. \subsection{Defining Search Area} \label{sec:searcharea} The search area (Fig. \ref{fig:searchsteps}a) is the area that humans can reach with their arms to interact with the robot. The operating area of a human $P(x,y,\varphi)$ is limited. A minimum radius $r_{min}$ and a maximum radius $r_{max}$ around the human limit the operating distances.
Moreover, angle constraints limit the operating orientation range of the human. For each valid radius, the software calculates a circle around the human inside the costmap and checks the angle conditions using Algorithm~\ref{alg:searcharea}. \begin{algorithm} \label{alg:searcharea} \caption{Defining goal grid cells for HRI} \textsc{DefiningSearchArea($gridmap,P(x,y,\varphi)$)} \begin{algorithmic} \STATE{Set seedpoint at $P(x,y,\varphi)$} \FOR{$r=r_{min}$ \TO $r_{max}$} \STATE{calculate circle in gridmap:} \FOR{$cell$ in $circle$} \IF{angle conditions are TRUE} \STATE{Add $cell$ to $S$} \ENDIF \ENDFOR \ENDFOR \STATE{\textbf{Return} $S$} \end{algorithmic} \end{algorithm} Flexible parameters define the angle constraints. If the human is standing, the robot tries to approach from the front into the unidirectional search area, defined by $\alpha_1$. If the human is sitting, the robot tries to approach from the sides into the bidirectional search area, defined by $\alpha_2$. All suitable cells $i$ are stored inside a container $S$. \subsection{Calculating Costs} \label{sec:calc} All positions inside the search area are suitable for human-robot interaction but may be unreachable for the robot. The idea behind this calculation is to quantify the suitability by costs and to reveal unreachable cells inside the container. The calculation uses the latest costmap of the robot. The suitability rating covers the costmap and the path planning for the robot as well as the distance and angle error for the human. The sum of these four influences indicates the overall quality of each cell. \subsubsection{Costmap Cost} \label{sec:cm} The first calculation step checks the value of the costmap at the position $C(x_i, y_i)$ of each cell inside the container (see Fig. \ref{fig:searchsteps}b). The value of the costmap multiplied by an influence factor $m_{cm}$ assigns the costmap cost to the cells. Further calculations ignore occupied cells.
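The search-area construction of Algorithm~\ref{alg:searcharea} can be sketched in Python as follows. Grid resolution, sector logic and parameter names are illustrative assumptions, not the exact implementation:

```python
import math

def defining_search_area(person, r_min, r_max, alpha, res=0.05, bidirectional=False):
    """Collect the grid cells of the HRI search area around a person.

    person = (x, y, phi); alpha is the half-opening angle of the sector(s);
    res is the grid resolution in meters. Names and defaults are illustrative."""
    px, py, phi = person
    cells = set()
    r = r_min
    while r <= r_max:
        # sample the circle of radius r finely enough to hit every grid cell
        n_steps = max(8, int(2 * math.pi * r / res) + 1)
        for k in range(n_steps):
            theta = 2 * math.pi * k / n_steps
            # direction from the person to the cell, relative to the person's heading
            rel = (theta - phi + math.pi) % (2 * math.pi) - math.pi
            if bidirectional:
                ok = abs(abs(rel) - math.pi / 2) <= alpha   # sectors at +-90 degrees
            else:
                ok = abs(rel) <= alpha                      # frontal sector
            if ok:
                cells.add((round((px + r * math.cos(theta)) / res),
                           round((py + r * math.sin(theta)) / res)))
        r += res
    return cells

# 0.45 m .. 0.9 m ring, frontal sector of +-45 degrees, person at (1, 1) facing +x
area = defining_search_area((1.0, 1.0, 0.0), 0.45, 0.9, math.pi / 4)
```

The actual implementation rasterizes circles directly in the costmap grid; the angular sampling above is just a compact way to obtain the same cell set.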
In real environments, this step efficiently reduces the number of cells. \begin{equation*} c_{i,cm} = C(x_i, y_i) \cdot m_{cm} \end{equation*} \subsubsection{Path Planning Cost} \label{sec:path} Even if the costmap is not occupied, the robot may not be able to find a path to a cell, i.e., the cell is unreachable. This calculation step verifies and rates the path planning of all cells in one shot. The path search starts at the robot position, exploring all neighbors using the breadth-first search (BFS) algorithm of Lee [9], which is very efficient. The search ends after reaching all goal cells or if the exploring depth is disproportionate to the distance. Removing all unreached cells from the container reduces the further calculation (see Fig. \ref{fig:searchsteps}c). In contrast to common path planning algorithms like A*, which provides the optimum for just one goal, the Lee algorithm provides the optimum for all goal cells in one shot. The overall length of the path between the robot and a goal cell, $l_i$, multiplied by an influence factor $m_{path}$ indicates the cost of the remaining cells. \begin{equation*} c_{i,path} = l_i \cdot m_{path} \end{equation*} \subsubsection{Distance Cost} \label{sec:dist} For every human the optimal operating distance to the robot is different. The distance cost $c_{i,dist}$ describes this influence by assigning a radial cost function starting at the center of the human, e.g., increasing linearly with radius $r_i$ as shown in Fig.~\ref{fig:searchsteps}d. This personalization helps to prefer the optimal user interaction distance. \begin{equation*} c_{i,dist} = r_i \cdot m_{dist} \end{equation*} \subsubsection{Angle Error Cost} \label{sec:angle} The angle error cost $c_{i, angle}$ is similar to the previous cost but focuses on the robot orientation in relation to the human. The assumption is that the mean angle $\alpha_{mean}$ of the search area(s) is the optimal orientation for HRI.
For every cell, the angle difference between the mean angle $\alpha_{mean}$ and the cell angle $\alpha_{i}$, multiplied by an influence factor $m_{angle}$, defines the angle error cost (see Fig.~\ref{fig:searchsteps}e). \begin{equation*} c_{i,angle} = |\alpha_{mean}-\alpha_{i}| \cdot m_{angle} \end{equation*} \subsection{Finding the Best Pose} \label{sec:best} The four influences assign costs to every reachable cell of the goal area of the robot. The sum of all four costs represents the overall weight of every cell. The overall weight is adjustable by adapting the influence factors of each cost. The best robot position for HRI is the cell $c_{best}$ with the lowest overall cost, i.e., the cell minimizing the sum of costs (see Fig.~\ref{fig:searchsteps}f) \begin{equation*} c_{best} = \min_{\forall i \in S} (c_{i,cm} + c_{i,path} + c_{i,dist} + c_{i,angle}) \end{equation*} \begin{figure}[!h] \subfigure[Definition Search Area]{ \includegraphics[width=0.18\textwidth]{figures/search.pdf}} \subfigure[Costmap Cell Weights]{\includegraphics[width=0.24\textwidth]{figures/search1.png}} \subfigure[Path Planning Cell Weights]{\includegraphics[width=0.24\textwidth]{figures/search4.png}} \subfigure[Distance Cell Weights]{\includegraphics[width=0.24\textwidth]{figures/search3.png}} \subfigure[Radius Error Cell Weights]{\includegraphics[width=0.24\textwidth]{figures/search2.png}} \subfigure[Overall Cell Weights]{\includegraphics[width=0.24\textwidth]{figures/search5.png}} \caption{Different stages of costs for HRI visualized on the gridmap} \label{fig:searchsteps} \end{figure} For HRI the robot has to look in the direction of the human, which defines the robot orientation $\alpha_r$. The 2D pose $g$ is the robot goal that is sent to the mobile base. The best robot pose is updated continuously during approaching. This recalculation allows adapting dynamically to the environment.
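The combination of the four cost terms and the selection of the best cell can be condensed into a short sketch. Data structures and helper names are illustrative stand-ins for the actual grid-based implementation:

```python
import math

def best_goal_pose(cells, costmap, path_len, person, alpha_mean,
                   m_cm=1.0, m_path=1.0, m_dist=1.0, m_angle=1.0):
    """Rate every candidate cell by the four cost terms and return the goal
    pose (x, y, heading towards the person). `costmap` maps cells to occupancy
    cost, `path_len` maps cells to BFS path length (None if unreachable);
    all names and weights are illustrative stand-ins."""
    px, py, phi = person
    best, best_cost = None, math.inf
    for (cx, cy) in cells:
        if path_len.get((cx, cy)) is None:      # unreachable: dropped by BFS
            continue
        r = math.hypot(cx - px, cy - py)
        ang = math.atan2(cy - py, cx - px) - phi
        ang = (ang + math.pi) % (2 * math.pi) - math.pi
        cost = (costmap.get((cx, cy), 0) * m_cm          # costmap cost
                + path_len[(cx, cy)] * m_path            # path planning cost
                + r * m_dist                             # distance cost
                + abs(alpha_mean - ang) * m_angle)       # angle error cost
        if cost < best_cost:
            best, best_cost = (cx, cy), cost
    cx, cy = best
    heading = math.atan2(py - cy, px - cx)      # face the person
    return (cx, cy, heading)

# Two reachable candidates around a person at the origin facing +x;
# the closer, better-aligned cell with the shorter path wins.
cells = [(1.0, 0.0), (0.0, 1.5)]
pose = best_goal_pose(cells, {}, {(1.0, 0.0): 3, (0.0, 1.5): 5},
                      (0.0, 0.0, 0.0), alpha_mean=0.0)
```

Since each term is a simple per-cell lookup or arithmetic expression, the whole rating can be recomputed at the 2 Hz update rate used during approaching.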
\begin{equation*} g = (c_{best}(x), c_{best}(y), \alpha_r) \end{equation*} with $ \alpha_r = \alpha_{h} - \arctan\left(\dfrac{y_{cm} - y_{h}}{x_{cm} - x_{h}}\right) + \pi$ \bigbreak \section{CONCLUSIONS AND OUTLOOK} \label{sec:conclusion} The functional design of MobiKa enables versatile user interaction by using a height-adjustable tablet which allows multimodal communication while standing, sitting and lying down (e.g. after a fall). In combination with low-cost components, the functional design helps to minimize the cost. MobiKa can navigate autonomously through a pre-mapped environment. For approaching the human, we developed an efficient and robust algorithm. The approaching was evaluated in laboratory tests, which indicated human-aware navigation. Additionally, it was tested in a care home to activate elderly people with dementia. The open infrastructure enables universal expansion options. Due to the positive feedback and the economic potential of MobiKa, its development will continue in two directions. On the one hand, we will extend its functionalities. On the other hand, we will work on the commercialization of our platform to make it available to end users. Moreover, we will integrate the approaching algorithm into the navigation stack to re-use it for other robots. \section{Evaluation} \label{sec:results} \subsection{General Evaluation} \label{sec:general} After the robot was set up, the basic functionality was verified in our lab. Using the 2D laser scanner in combination with a 3D sensor, MobiKa is able to navigate safely in a known environment with the Fraunhofer IPA navigation software. It was further checked that MobiKa's height-adjustable tablet can adapt to standing, sitting and even lying people (see Fig. \ref{fig:mobika_adaptive}). During the publicly funded project EmAsIn \cite{emasin}, the partners could successfully connect their software components (e.g.
speech recognition and graphical user interface) to the flexible software framework of MobiKa. Customized apps as well as third-party apps for entertainment were successfully tested. In typical usage, the battery of MobiKa can power the robot for more than eight hours without charging. In general it could be verified that the low-cost components used for MobiKa provide the necessary functionality and robustness. \subsection{Analyzing Approaching a Human in the Lab}\label{sec:app} For the evaluation of the approaching strategy of the robot, we analyzed the final HRI poses within a lab at Fraunhofer IPA (see Fig.~\ref{fig:mobika-rviz}). After mapping the environment, we set the poses of five humans statically on the map so that errors originating from imprecise camera detection could be excluded. The robot had to approach the two people on the sofa unidirectionally from the front ($\alpha_{mean} = \SI{0}{\degree}$). Moreover, the robot had to approach the three people sitting at the table bidirectionally from the sides ($\alpha_{mean} =\SI{60}{\degree}$), even if the best orientation would be to approach from the front \cite{Dautenhahn2006, Kheng2007}. We set the minimum search radius $r_{min}$ to $\SI{0.45}{\metre}$ and $r_{max}$ to $\SI{0.9}{\metre}$ to stay above the intimate distance and still inside the working space of the human arms \cite{hall1966hidden, KRUSE20131726}. The angle $\alpha_{1} = \alpha_{2}$ is set to $\SI{90}{\degree}$. We defined an approaching sequence for all persons within a simple state machine. The robot autonomously navigated ten rounds, approaching all people successfully. During the approaching, the robot goal pose $g$ was updated at $\SI{2}{Hz}$. The final robot poses, as well as the poses of the people, are visualized in Fig.~\ref{fig:mobika-rviz}. Here the small arrows indicate the final robot poses and the bigger arrows indicate the person poses. Fig.
\ref{fig:results} shows the final robot distance and orientation in relation to the center of the human's head, split by unidirectional and bidirectional search, for 50 poses. The distance ranges from $\SI{0.57}{\metre}$ to $\SI{0.92}{\metre}$, while the orientation $\alpha$ for the unidirectional search was $\SI{0}{\degree} - \SI{10}{\degree}$ and the orientation for the bidirectional search was $\SI{79}{\degree} - \SI{103}{\degree}$. In comparison to the table poses, the distances of the sofa poses are higher (see Fig. \ref{fig:results}) because the users' legs and the sofa itself prevented the robot from approaching closer. Moreover, the robot always chose the shortest path, indicated by the side from which it approached the persons at the table (see Fig.~\ref{fig:mobika-rviz}). \begin{figure}[ht!] \centering \includegraphics[width=0.4\textwidth]{figures/map.pdf} \caption{Gridmap with HRI poses} \label{fig:mobika-rviz} \end{figure} \begin{figure}[ht!] \centering \subfigure[Results Unidirectional Search]{\includegraphics[width=0.4\textwidth]{figures/results_poses_uni.jpg}} \subfigure[Results Bidirectional Search]{ \includegraphics[width=0.39\textwidth]{figures/results_poses_bi.jpg}} \caption{Final Pose of the Robot in Relation to the Human} \label{fig:results} \end{figure} \subsection{User-Feedback and Observations During Tests in an Elderly Care Home} \label{sec:test} In addition to the lab tests, we tested the robot in a real-life scenario. In the EmAsIn project, a system consisting of three main components, namely a server, Kinect sensors and MobiKa, was used to activate eight residents with dementia in the group room, thus making everyday life more varied (Fig.~\ref{fig:mobika-teningen}). The portfolio of activations consisted of games, quizzes, picture galleries, and karaoke. For the evaluation, observations by the involved scientists were collected. In addition, questionnaires from four elderly people and three care workers were evaluated.
All respondents answered the question of whether the mobile robot platform is too human-like in the negative. In addition, the speed of the robot and the approach behaviour were considered adequate by all respondents. Both the size and the shape of the mobile robot platform had a pleasant effect on the residents. About $\SI{85}{\%}$ of the respondents said that the final pose for interaction stayed well within the social distance and was not too close. \begin{figure}[ht!] \centering \includegraphics[width=0.45\textwidth]{figures/teningen.jpg} \caption{Interaction of MobiKa with elderly people} \label{fig:mobika-teningen} \end{figure} MobiKa successfully activated elderly people by approaching them. We observed and received the feedback that the robot behavior successfully maintains human comfort; e.g., the approach was a good trade-off between moving close enough for user interaction and not scaring persons. It was observed that approaching a person already motivated them to interact with the robot. This is a big advantage compared to simple tablet solutions without the robot. The usage of robots, especially the touchscreen, was new for most of the elderly people. Therefore, the users needed a short introduction from supervisors. Activities such as the quiz also engaged nearby people, which led to group activities. \bigbreak \section{ROBOT DESIGN} \label{sec:robotdesign} MobiKa is designed as a mobile communication assistant. While designing the robot, the main idea was to create an affordable system optimized for Human-Robot Interaction. Therefore we chose a functional design which helps us to reduce the price and illustrates that the robot's capabilities are far from those of a human. To make it affordable, we based the design on open-source software and low-cost hardware.
The goal of the development was to cover functions such as: \begin{itemize} \item General communication tasks via multi-modal interfaces (speech and visual) \item Entertainment functions (games and services on display, activating the user) \item Reaction to users falling; connection with stationary sensors and networks to allow detection of medical emergencies and contacting an external service provider (robot guides to the fallen person) \item Reminding of appointments and taking medication \item Telepresence and telemedicine \item Simple transport tasks (user places objects on the robot) \item Guiding persons \item Open infrastructure for third-party apps, e.g., for medical services \end{itemize} With these functions, MobiKa can support elderly persons in staying independent and living longer in their homes. It can also assist rehabilitation patients so that they can return to their normal everyday routine earlier. \subsection{Hardware Design}\label{sec:hw} MobiKa is built on a compact mobile base in which the main components reside. The dimensions of the robot were derived from the intended user interaction; to interact with standing persons, MobiKa needs a minimum height of $\SI{1.1}{\metre}$. To also allow interaction with sitting and lying persons (Fig. \ref{fig:mobika_adaptive}), the screen needs to be adjustable in height. Therefore a belt-driven linear axis was designed that allows adjusting an Android tablet to the pose of the user. Length, width and mass were kept as low as possible, which required concentrating the mass close to the ground to maintain stability during travel. \begin{figure} \subfigure[MobiKa as an emergency assistant]{\includegraphics[width=0.24\textwidth]{figures/fall.jpg}} \subfigure[MobiKa as a social assistant]{\includegraphics[width=0.24\textwidth]{figures/scene.jpg}} \caption{Different use cases of MobiKa.
Thanks to its adjustable tablet height, MobiKa can interact with people while they are standing, sitting and lying (e.g. after a fall).} \label{fig:mobika_adaptive} \end{figure} The design is based on low-cost components. While the differential drive and also the tablet axis are formed by simple DC kit motors and motor controllers, the $\SI{24}{\V}$ battery originates from an e-bike. For the tablet axis, belts and pulleys from 3D printer supply were chosen. Selecting the processing unit was also fundamental since it should be low-cost and energy-efficient but performant. That is why we picked the octa-core Odroid XU4 computing device with eMMC flash storage. We found that it is capable of running the software that the robot needs with minimal power consumption. Other components include DC-DC converters and a Wi-Fi bridge. As it became clear very soon that the outer shell of the robot is also costly (e.g., when 3D-printed or molded in small quantities), a design was chosen which limits covers to the mobile base, while the structure to support the tablet is made of metal tubes. To keep the vertical axis simple, all cabling between tablet and robot base was avoided. The tablet only connects to charging contacts in its lowest position. Communication is handled via Wi-Fi. MobiKa's sensors consist of a low-cost LIDAR sensor, which provides distance data in a $\SI{360}{\degree}$ angle in a horizontal plane, and a 3D sensor, which looks downward at a $\SI{45}{\degree}$ angle along the front of the robot and allows detecting persons, tables and small obstacles on the ground. In the future, the camera will be attached to an additional axis which will enable it to look forward (e.g., to recognize faces of a standing person) and also backward (e.g., for docking to a battery charger). For safety reasons, the robot is also equipped with a small bumper to detect collisions.
\subsection{Software Design} \subsubsection{Software Structure} MobiKa needs several independent software components interacting with each other in real time. ROS (Robot Operating System) \cite{quigley2009ros} running on Ubuntu 16.04 is responsible for the communication between these components. Thanks to ROS, we could create a decentralized and modular software system, meaning that none of the packages depends on a central application other than the ROS master. This is crucial for complex robotic systems: if there is a failure in one of the software components or hardware drivers, we can directly diagnose the failed component and fix the issue without affecting the other parts of the robot. The same holds for updating individual software components. \subsubsection{Virtual Model} The software should be aware of the links and joints of the robot. To support this, we create a virtual model of the robot using URDF (Unified Robot Description Format), an XML-based robot description format. Thanks to URDF along with the CAD design, we can visualize the robot in our software. \subsubsection{Navigation} The navigation software stack is one of the most critical parts of MobiKa's software. It enables the robot to navigate safely to a person inside a known environment. The environment is represented by a 2D gridmap that is created initially by the robot using the open-source GMapping package \cite{grisetti6gmapping}. During navigation, a simultaneously updated costmap inflates obstacles from the gridmap and from the sensor data to limit the operating area for collision avoidance. Since the laser scanner scans the environment only horizontally at its mounting height, obstacles at other heights are not visible. This is dangerous because the robot could collide with tables or other objects that are not fully detectable by the laser scanner.
The 3D sensor at the top of the robot solves this issue by projecting a 3D point cloud onto the ground plane as a virtual scan. Using the virtual scan as an additional input to the costmap ensures that the robot is able to navigate safely without collisions. The navigation makes use of an EKF (Extended Kalman Filter) to localize the robot inside the gridmap. The EKF is part of the Fraunhofer IPA navigation stack, which uses the wheel odometry and laser scanner data as input. When we launch the robot, the last known robot pose is set as the initial pose. Afterwards, the wheel odometry updates the pose incrementally. When map features can be associated with the laser scanner data, the pose is corrected; this is necessary to compensate for odometry drift. \section{RELATED WORK} \label{sec:relwork} In health care, service robots have the potential to help humans in different ways: physically, emotionally, socially and cognitively. A typical service robot includes a user interface, several sensors, e.g. cameras and a laser scanner, a mobile base and sometimes a pair of arms. Care-O-bot \cite{kittmann2015let}, Homemate \cite{zhao2014octree}, PR2, Toyota HSR \cite{Yamamoto:2018:HSR:3214907.3233972} and TIAGo \cite{tiago2019} are good examples of fully capable service robots. They can fetch and carry objects for people, navigate autonomously indoors, perceive a person with their cameras and approach them for interaction. Since the arms should be safe and able to carry a payload, the technical requirements are high. The arms make the robot multifunctional, but also complex and expensive, hence today not affordable for end users. In contrast to such multifunctional robots, there are also more specialized robots with fewer functionalities. One example is Pepper \cite{pandey2018mass, ahn2018development}, whose arms serve only to carry out gestures that reinforce the expressiveness of user interaction.
Other robots like the Robotic Service Assistant \cite{baumgarten2016robotic} or the SMOOTH demonstrator \cite{juel2018smooth} do not have arms but are still able to carry out physical tasks. The Robotic Service Assistant serves people drinks by driving to the user and handing out drinks. The SMOOTH demonstrator assists elderly people by transporting objects. Additionally, some robots cannot physically support persons. These robots consist of auditive and visual I/O, e.g. Kompai \cite{kompai2010}, SCITOS \cite{scitos2007} or RP-VITA \cite{rpvita2019}, to interact with people. However, they can observe people and communicate with them to remind them to take pills, to socialize, or to monitor their health. ElliQ \cite{elliq2019} is an example of a stationary companion which allows people to make phone calls and play cognitive games. There are also social robots like iSocioBot \cite{tan2018isociobot, tan2015designing}. This robot socializes with people by observing them and producing facial expressions and speech. Since it is a research platform, its size is enormous compared to Zenbo and Buddy \cite{milliez2018buddy}. Another example of a specialized robot is MobiNa \cite{mobina2015}, a low-cost navigating robot for emergency assistance. For Human-Robot Interaction, robots need to know how to approach a person by using human-aware navigation to maintain human comfort \cite{Ramirez2016, KRUSE20131726, lee2017}. The process consists of different stages \cite{Satake2009}: finding a person for interaction, interacting at public distance, and initiating conversation at social distance. \section{INTRODUCTION} \label{sec:intro} Due to demographic change, the number of elderly people will increase in most industrialized countries in the coming years. Robot technology can help these people to live self-determined and independent lives in their homes as long as possible and to reduce the need for ambulant or stationary care, e.g.
by providing means of communication, detecting anomalies and emergencies, guiding people and fetching objects. Service robots can also support other people with reduced mobility, such as rehabilitation patients. Private households are highly dynamic environments which are primarily designed for humans. Therefore, a mobile robot has to cope with narrow passages and needs a design that supports safe HRI. To be affordable, robots need to be available at low cost but at the same time offer considerable functionality. In this paper, we introduce MobiKa (Mobile Communication Assistant), depicted in Fig. \ref{fig:mobika-interacting}. Our vision is to solve the aforementioned issues by developing an affordable multi-purpose mobile service robot focusing on communication. MobiKa can navigate autonomously within a pre-mapped environment. For Human-Robot Interaction, MobiKa is able to robustly approach the user. MobiKa is easy to use, even for non-technical users. With the use of a functional hardware design and a modular software architecture, we provide a highly adaptable robot platform. This paper is organized as follows. In Section~\ref{sec:relwork}, the related work is introduced to the reader. Our robot's hardware and software designs are explained in detail in Section~\ref{sec:robotdesign}. Next, the proposed approach for approaching humans is described in Section~\ref{sec:approaching}, and the experiments and their evaluation are introduced in Section~\ref{sec:results}. Lastly, the paper is briefly concluded in Section~\ref{sec:conclusion} with a short outlook. \begin{figure}[ht!]
\centering \includegraphics[width=0.48\textwidth]{figures/scene_fog.jpg} \caption{MobiKa interacting with a sitting human} \label{fig:mobika-interacting} \end{figure} \section{ACKNOWLEDGMENT} This work has received funding from the German Ministry of Education and Research (BMBF) under grant agreement No 16SV7362 for the EmAsIn project, as well as funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No 721619 for the SOCRATES project.
\section{Introduction} It is now evident that the majority of matter in our universe is dark, in the sense that it interacts at most weakly with the fields of the Standard Model (SM).~ Nevertheless, despite an impressive array of experimental efforts dedicated to probing the particle properties of this dark matter, its fundamental nature remains a mystery. It is still possible that a conclusive discovery --- at the LHC, at one of the many direct-detection experiments currently in operation or under construction, at one of the telescopes sensitive to indirect-detection signatures of dark-matter annihilation or decay, or at one of the experiments dedicated to the detection of axions and/or axion-like particles --- will revolutionize our understanding of dark matter within the next few years. However, such a breakthrough is far from assured. Indeed, most experimental strategies for probing the particle properties of the dark matter rely on the assumption that the dark matter has appreciable non-gravitational interactions with the particles of the SM, but there is no guarantee that the dark matter possesses such interactions. For this reason, it is crucial to explore other possible methods for probing the particle properties of the dark matter --- methods which do not rely on its interactions with SM particles. One characteristic of the dark matter which could reveal information about both its particle properties and its production mechanism in the early universe is its primordial velocity distribution $f(v,t)$. This distribution is conventionally described in terms of {\il{f(v)\equiv f(v,t_{\mathrm{now}})}}, {\it i.e.}\/, the distribution obtained by redshifting $f(v,t)$ from some early time $t$ to the present time $t_{\mathrm{now}}$, while ignoring effects such as virialization. This dark-matter velocity distribution plays a crucial role in determining the structure of the present-day universe. 
In fact, many important quantities follow from the form of $f(v)$, such as the non-linear matter power spectrum and the halo-mass function $dn/d\log M$, where $n$ is the number density of matter halos with mass $M$. Unfortunately, the relationship between $f(v)$ and these resulting quantities is anything but straightforward. The spectrum of primordial density perturbations initially established during the epoch of cosmic inflation evolves with time according to the Einstein-Boltzmann equations --- equations which depend both on $f(v)$ and on other aspects of the background cosmology. While these perturbations are sufficiently small at early times that their evolution may be reliably modeled using a linearized-gravity approach, this approach remains valid only until non-linear feedback becomes significant and perturbation theory becomes less reliable. As a result, the time-evolution of the density perturbations is complicated (and not even invertible), and one must adopt a different strategy for understanding the mass density at late times. Such strategies typically involve approaching the problem numerically, using $N$-body or hydrodynamic simulations, or employing approximate analytic models. It nevertheless remains difficult to work backwards and extract meaningful information about the dark-matter velocity distribution in the early universe from the distribution of matter observed at late times. In this paper, we propose a simple technique for extracting information about the primordial dark-matter velocity distribution from the spatial distribution of dark matter within the present-day universe. In particular, we posit an empirical conjecture for reconstructing $f(v)$ directly from the shape of $dn/d\log M$, and we demonstrate within the context of an illustrative model that this reconstruction conjecture is quite robust. 
Indeed our reconstruction conjecture is capable of reproducing the salient features of $f(v)$, even in situations in which this velocity distribution is non-thermal and even multi-modal. This conjecture has an older sibling. In Ref.~\cite{Dienes:2020bmn}, we proposed a conjecture which related the dark-matter velocity distribution to the {\it linear matter power spectrum}\/ $P(k)$. As such, this earlier conjecture did not confront all of the non-linearities associated with relating $P(k)$ to actual halo-mass distributions. In this paper, by contrast, we are proposing a conjecture which relates $f(v)$ directly to $dn/d\log M$ --- one which, as we shall show, operates with the same level of success as the former. Moreover, this conjecture is wholly independent of the previous conjecture, and does not depend on it in any way. This paper is organized as follows. In Sect.~\ref{sec:HaloMassFunction}, we review the process through which the velocity distribution of dark-matter particles within the early universe impacts the present-day distribution of dark-matter halos. In Sect.~\ref{sec:ConjAll}, we formulate a conjecture which allows us to invert this process and thereby reconstruct the underlying dark-matter velocity distribution directly from the shape of the halo-mass function. In Sect.~\ref{sec:Results}, we test our reconstruction conjecture by applying it to the dark-matter velocity distributions that arise in the context of our illustrative model --- distributions which take a variety of forms and can be highly non-thermal and even multi-modal. Finally, in Sect.~\ref{sec:Conclusions}, we conclude with a summary of our results and discuss possible directions for future work. Further details concerning the theoretical underpinnings of our conjecture can be found in Appendix~\ref{app:TheMap}. 
\section{From Dark-Matter Velocity Distributions to Halo-Mass Functions\label{sec:HaloMassFunction}} Our principal aim in this paper is to formulate a method for extracting information about the dark-matter velocity distribution $f(v)$ from the halo-mass function $dn/d\log M$. In this section, we shall begin by reviewing the process whereby one can begin with the primordial dark-matter velocity distribution $f(v)$ and then determine the corresponding $dn/d\log M$. Throughout, we shall assume an otherwise standard background cosmology. In Sect.~\ref{sec:ConjAll} we will then present a heuristic conjecture which provides a quick method of {\it inverting}\/ this process and reconstructing $f(v)$ directly from $dn/d\log M$. The distribution of matter halos which follows from any particular cosmological model arises due to spatial variations in the density of matter in the early universe. Such variations can be characterized by the fractional overdensity $\delta(\vec{\mathbf{x}},t)$, while point-to-point correlations in $\delta(\vec{\mathbf{x}},t)$ are given by the two-point correlation function $\xi(\vec{\mathbf{r}},t)$. For a universe which is homogeneous and isotropic on large scales, {\il{\xi(\vec{\mathbf{r}},t) = \xi(r,t)}} depends only on the magnitude $r$ of the displacement vector. Given these assumptions, the Fourier transform of $\xi(r,t)$, which is commonly referred to as the matter power spectrum, may be written in the form \begin{equation} P(k,t) ~\equiv~ 4\pi \int dr\, r^2 \frac{\sin(kr)}{kr} \xi(r,t)~. \end{equation} In the following we shall evaluate $P(k,t)$ using linear perturbation theory (thereby producing the linear matter power spectrum), and we shall adopt the shorthand {\il{P(k) \equiv P(k,t_{\mathrm{now}})}}. The velocity distribution of dark-matter particles in the early universe affects the manner in which $P(k,t)$ evolves with time. 
For example, dark-matter particles with sufficiently large velocities can free-stream out of overdense regions which might have otherwise collapsed into halos, thereby suppressing the growth of structure on small scales. The impact that these effects have on the shape of the linear matter power spectrum at late times may reliably be assessed numerically by means of Einstein-Boltzmann solvers such as the \texttt{CLASS} software package~{\mbox{\cite{Lesgourgues:2011re,Blas:2011rf,Lesgourgues:2011rg,Lesgourgues:2011rh}}}. In this way, under standard cosmological assumptions, a given dark-matter velocity distribution $f(v)$ gives rise to a particular form for $P(k)$. Having summarized the relationship between $f(v)$ and $P(k)$, we now discuss the relationship between $P(k)$ and $dn/d\log M$. In relating these two quantities, we follow the approach originally posited by Press and Schechter~\cite{Press:1973iz} and subsequently justified using the excursion-set formalism of Bond {\it et al.}~\cite{Bond:1990iw}. At late times, regions of space with sufficiently large average overdensity collapse under their own gravity and form compact, virialized objects --- {\it i.e.}\/, matter halos. The probability that a randomly chosen spherical region of space with radius $R$ will collapse prior to a given cosmological time $t$ depends on the statistical properties of $\delta(\vec{\mathbf{x}},t)$. The crucial quantity in this regard is the spatial average $\sigma^2(t,R)$ of the variance of $\delta(\vec{\mathbf{x}},t)$ within the same region. This spatial average may be written as \begin{equation} \sigma^2(t,R) ~\equiv~ \int_{0}^{\infty}d\log k\,\, W^2(k,R) \frac{k^3 P(k,t)}{2\pi^2}~, \label{eq:VarianceOfDensPertAvgd} \end{equation} where $W(k,R)$ is the Fourier transform in $k$-space of the position-space top-hat function {\il{W(r,R) \equiv \Theta(1-r/R)}}, where $\Theta(x)$ denotes the Heaviside function. 
This enforces the condition that only points at distances {\il{r \leq R}} away from the center of the region are included in the average. However, this definition of $\sigma^2(t,R)$ may also be generalized to include other functional forms for $W(k,R)$. In this paper, we shall instead adopt a window function which is a top-hat function in $k$-space~\cite{Bertschinger:2006nq}: \begin{equation} W(k,R) ~=~ \Theta(1-kR)~. \label{eq:Windowfn} \end{equation} One well-known advantage of the window function in Eq.~(\ref{eq:Windowfn}) is that its flatness in $k$-space allows $\sigma^2(t,R)$ to be sensitive to the natural shape of the matter power spectrum itself, rather than that of $W(k,R)$~\cite{Schneider:2013ria}. This window function also has other advantages. One of these is that only density perturbations with wavenumbers {\il{k \leq R^{-1}}} have any effect on $\sigma^2(t,R)$. The primary drawback of this form of $W(k,R)$, however, is that the precise mathematical relationship between the value of $R$ associated with a halo and the corresponding halo mass $M$ is not well defined. Nevertheless, since symmetry considerations dictate that {\il{M \propto R^3}}, the relationship between $M$ and $R$ may be parametrized as \begin{equation} M ~\equiv~ \frac{4\pi}{3}\overline{\rho} (c_W R)^3~, \label{eq:RtoMmap} \end{equation} where the value of the coefficient $c_W$ may be obtained from the results of numerical simulations. Following Ref.~\cite{Schneider:2014rda}, we take {\il{c_W\approx 2.5}}. Given that a well-defined one-to-one relationship exists between $M$ and $R$, the spatially-averaged variance $\sigma^2(t,M)$ may also be viewed as a function of the halo mass $M$. 
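As a concrete illustration, $\sigma^2(t,R)$ with the sharp $k$-space window of Eq.~(\ref{eq:Windowfn}), together with the $M$--$R$ map of Eq.~(\ref{eq:RtoMmap}), can be evaluated numerically as in the following sketch. The power-law $P(k)$ and the unit mean density are placeholder assumptions for illustration only; $c_W = 2.5$ is the value quoted in the text.

```python
import numpy as np

# Sketch: sigma^2(R) with the sharp k-space window W(k,R) = Theta(1 - kR),
# and the M <-> R map M = (4 pi/3) rho_bar (c_W R)^3 with c_W = 2.5.
# The power-law P(k) below is a toy stand-in, not a realistic spectrum.

C_W = 2.5          # calibration coefficient from simulations (see text)
RHO_BAR = 1.0      # mean matter density, arbitrary units (assumption)

def _trapz(y, x):
    """Simple trapezoid rule (avoids NumPy-version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def P_toy(k):
    return k**-2.0  # illustrative toy spectrum only

def sigma2(R, kmin=1e-4, n=4000):
    # Sharp window: integrate d(log k) k^3 P(k)/(2 pi^2) only up to k = 1/R.
    logk = np.linspace(np.log(kmin), np.log(1.0 / R), n)
    k = np.exp(logk)
    integrand = k**3 * P_toy(k) / (2.0 * np.pi**2)
    return _trapz(integrand, logk)

def mass_from_radius(R):
    """Halo mass corresponding to smoothing radius R, Eq. (RtoMmap)."""
    return 4.0 * np.pi / 3.0 * RHO_BAR * (C_W * R)**3
```

As expected, shrinking $R$ admits more modes and increases $\sigma^2$, while $M$ scales as $R^3$.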
Within the Press-Schechter formalism, the present-day halo-mass function which follows from any particular $P(k)$ profile takes the form \begin{equation} \frac{dn}{d\log M} ~=~ \frac{\overline{\rho}}{2M}\eta(M)\frac{d\log\nu(M)}{d\log M}~, \label{eq:PressSchechter} \end{equation} where {\il{\nu(M) \equiv \delta_c^2/\sigma^2(t_{\mathrm{now}},M)}}, with {\il{\delta_c\approx 1.686}} denoting the critical overdensity, and where the function $\eta(M)$, which depends on $M$ only through the function $\nu(M)$, represents the probability density of obtaining an averaged fractional overdensity at a given location. For the window function in Eq.~(\ref{eq:Windowfn}), regardless of the form of $\eta(M)$, the expression for $dn/d\log M$ in Eq.~(\ref{eq:PressSchechter}) simplifies to \begin{eqnarray} \frac{dn}{d\log M} ~=~ \frac{\overline{\rho}}{12\pi^2 M} \nu(M) \eta(M) \frac{P\big(1/R(M)\big)}{\delta_c^2 R^3(M)}~,~~~ \label{eq:PressSchechterSimp} \end{eqnarray} where $R(M)$ is the particular value of $R$ which corresponds to a given halo mass $M$ through Eq.~(\ref{eq:RtoMmap}). A variety of possible forms for the function $\eta(M)$ have been proposed, based either on purely theoretical grounds or based on the results of $N$-body or hydrodynamic simulations~\cite{Press:1973iz,Sheth:1999mn,Jenkins:2000bv,Warren:2005ey,Tinker:2008ff, Crocce:2009mg,Bhattacharya:2010wy,Watson:2012mt,Seppi:2020isf}. In what follows, we adopt the form~\cite{Sheth:1999mn} \begin{equation} \eta(M) ~=~ \frac{\sqrt{2\nu(M)}}{\pi} A\left[1 + \nu^{-\alpha}(M)\right]e^{-\nu(M)/2}~, \label{eq:ShethTormanf} \end{equation} where {\il{A \approx 0.3222}} and {\il{\alpha = 0.3}}. This form for $\eta(M)$ is mathematically simple and accords reasonably well with the results of numerical simulations. We shall discuss the way in which alternative functional forms for $\eta(M)$ could affect the results of our analysis in Sect.~\ref{sec:Conclusions}. 
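A minimal numerical sketch of the simplified mass function in Eq.~(\ref{eq:PressSchechterSimp}) with the Sheth-Tormen form of Eq.~(\ref{eq:ShethTormanf}) might look as follows; the inputs $\nu(M)$, $R(M)$ and $P(1/R)$ are assumed to be supplied by a separate power-spectrum computation, and the unit mean density is an illustrative assumption.

```python
import numpy as np

# Sketch of the Sheth-Tormen multiplicity function eta(nu) and the
# simplified Press-Schechter mass function for the sharp k-space window.

A_ST = 0.3222      # Sheth-Tormen normalization (see text)
ALPHA = 0.3        # Sheth-Tormen exponent (see text)
DELTA_C = 1.686    # critical overdensity (see text)

def eta(nu):
    """Sheth-Tormen probability density eta as a function of nu."""
    return (np.sqrt(2.0 * nu) / np.pi) * A_ST \
        * (1.0 + nu**-ALPHA) * np.exp(-nu / 2.0)

def dn_dlogM(M, R, nu, P_at_1_over_R, rho_bar=1.0):
    """Simplified halo-mass function: rho_bar/(12 pi^2 M) nu eta P(1/R)/(delta_c^2 R^3)."""
    return rho_bar / (12.0 * np.pi**2 * M) * nu * eta(nu) \
        * P_at_1_over_R / (DELTA_C**2 * R**3)
```

The exponential suppression of $\eta(\nu)$ at large $\nu$ reproduces the familiar high-mass cutoff of the halo-mass function.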
In order to quantify the extent to which $dn/d\log M$ deviates from the corresponding result $(dn/d\log M)_{\rm CDM}$ that we would obtain for purely cold dark matter (CDM), we shall henceforth define the dimensionless {\it structure-suppression function}\/ according to the relation \begin{equation} S(M) ~\equiv~ \sqrt{ \frac{dn/d\log M }{ (dn/d\log M)_{\rm CDM} }}~. \label{eq:StrucSuppFn} \end{equation} This function may be viewed as playing an analogous role with respect to the halo-mass function that the so-called transfer function {\il{T(k) \equiv \sqrt{P(k)/P_{\rm CDM}(k)}}} plays with respect to the matter power spectrum. Moreover, for the Press-Schechter halo-mass function in Eq.~(\ref{eq:PressSchechterSimp}), we find that $S^2(M)$ and $T^2(k)$ are related in any given dark-matter model by \begin{equation} S^2(M) ~=~ \frac{\nu(M) \eta(M)}{\nu_{\rm CDM}(M)\eta_{\rm CDM}(M)} T^2(1/R(M))~, \label{eq:StrucSuppFnSimp} \end{equation} where $\nu(M)$ and $\eta(M)$ are obtained from the corresponding matter power spectrum $P(k)$ for the dark-matter model in question. \section{Reconstruction Conjecture\label{sec:ConjAll}} We now propose a method for inverting the procedure outlined in the previous section and reconstructing $f(v)$ directly from information contained in $S^2(M)$. For this purpose, it turns out to be more convenient to characterize $f(v)$ in terms of the quantity {\il{g_v(v) \equiv v^3 f(v)}}. Moreover, as discussed in Appendix~\ref{app:TheMap}, we can define a change of variables from $v$ to $M$ via the functional map \begin{equation} M(v) ~\equiv~ \frac{4\pi\bar{\rho}c_W^3}{3\xi^3} \left[\int_0^1 \frac{da}{Ha^2}\frac{\gamma v}{\sqrt{\gamma^2 v^2 + a^2}}\right]^3~, \label{eq:vtoMmap} \end{equation} where {\il{\gamma \equiv (1-v^2)^{-1/2}}} is the usual relativistic factor, where $a$ is the scale factor, where {\il{H\equiv \dot{a}/a}} is the Hubble parameter, and where $\xi$ is an $\mathcal{O}(1)$ constant whose value shall be discussed later. 
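The map in Eq.~(\ref{eq:vtoMmap}) can be evaluated numerically as in the sketch below. The Hubble rate $H(a)$ assumes a flat $\Lambda$CDM background with illustrative parameter values (in units with $H_0 = 1$), and we take $\xi = 9/4$, the value adopted later in our analysis; the mass units are arbitrary.

```python
import numpy as np

# Numerical sketch of the v -> M map: M(v) proportional to the cube of
# int_0^1 da / (H a^2) * (gamma v) / sqrt(gamma^2 v^2 + a^2).
# Flat LambdaCDM H(a) with illustrative parameters, H0 = 1 (assumption).

OMEGA_M, OMEGA_R = 0.31, 9e-5
OMEGA_L = 1.0 - OMEGA_M - OMEGA_R
C_W, XI, RHO_BAR = 2.5, 9.0 / 4.0, 1.0   # RHO_BAR in arbitrary units

def hubble(a):
    return np.sqrt(OMEGA_M / a**3 + OMEGA_R / a**4 + OMEGA_L)

def _trapz(y, x):
    """Simple trapezoid rule (avoids NumPy-version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def mass_from_velocity(v, n=20000):
    """Free-streaming mass scale M(v) (arbitrary mass units)."""
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    a = np.linspace(1e-8, 1.0, n)
    integrand = gamma * v / (hubble(a) * a**2
                             * np.sqrt(gamma**2 * v**2 + a**2))
    comoving = _trapz(integrand, a)
    return 4.0 * np.pi * RHO_BAR * C_W**3 / (3.0 * XI**3) * comoving**3
```

Faster particles free-stream over larger comoving distances, so $M(v)$ is monotonically increasing in $v$, as the sketch confirms.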
The relationship between $g_v(v)$ and the corresponding velocity distribution $g_M(M)$ in $M$-space is then \begin{equation} g_v(v) ~=~ g_M(M)\left|\frac{d\log M}{d\log v}\right|~, \label{eq:gMMdefBody} \end{equation} where $M$ is understood to represent the function of $v$ in Eq.~(\ref{eq:vtoMmap}) and where $|d\log M/d\log v|$ is simply the Jacobian factor which follows from the change of variables from $M$ to $v$. The primary advantage of defining $g_M(M)$ in this way is that it allows us to characterize the fraction of dark matter particles which are ``hot'' relative to the mass scale $M$ --- {\it i.e.}\/, capable of free-streaming out of regions which would collapse into halos with masses below $M$ --- in a straightforward manner. Indeed, this {\it hot-fraction function} is \begin{equation} F(M) ~\equiv~ \frac{1}{{\cal N}} \int_{\log M}^\infty d \log M'\, g_M(M')~, \label{eq:hotfrac} \end{equation} where \begin{equation} {\cal N} ~\equiv~ \int_{-\infty}^\infty d\log M\, g_M(M)~. \label{eq:SqcriptN} \end{equation} The hot-fraction function $F(M)$ will play a crucial role in our reconstruction conjecture. In order to formulate this conjecture, we begin by restricting our attention to cases in which {\il{d^2\log S^2(M)/d(\log M)^2 \leq 0}} for all $M$. We then posit --- as the underlying core of our conjecture --- that there is a direct relationship between $F(M)$ and $S^2(M)$, or more precisely between $F(M)$ and the first {\it derivative}\/ $d\log S^2(M)/d\log M$. Moreover, we find that this relationship is well approximated by the simple empirical equation \begin{equation} \frac{d \log S^2(M)}{d\log M} ~\approx~ \frac{7}{10} \, F^2(M)~. \label{eq:empir} \end{equation} A statement about the functional form of $g_M(M)$ may then be obtained in a straightforward manner from Eq.~(\ref{eq:empir}). 
Taking the logarithmic derivative of both sides of this relation and using the definition of the hot-fraction function in Eq.~(\ref{eq:hotfrac}) to relate $dF(M)/dM$ to $g_M(M)$, we find that \begin{equation} \frac{ g_M(M)}{{\cal N}} ~\approx~ \sqrt{ \frac{5}{14}} \left( \frac{d \log S^2(M)}{d\log M}\right)^{-1/2}\, \left| \frac{d^2 \log S^2(M)}{(d\log M)^2}\right|~. \label{eq:recconjrough} \end{equation} This is the basic form of our conjecture which allows us to reconstruct the salient features of the dark-matter velocity distribution $g_M(M)$ directly from the first and second logarithmic derivatives of the structure-suppression function. As stated above, our conjecture in Eq.~(\ref{eq:recconjrough}) holds under the assumption that $d^2 \log\, S^2(M)/(d\log M)^2$ is always either zero or negative --- {\it i.e.}\/, that $\log \,S^2(M)$ is always either a straight line or concave-down when plotted versus $\log\, M$. However, as we shall see, there exist realistic halo-mass distributions for which $S^2(M)$ is occasionally concave-{\it up}. In order to address this possibility, we shall further posit that {\il{g_M(M)\approx 0}} whenever {\il{d^2 \log\, S^2(M)/(d\log\, M)^2 > 0}}. We can therefore extend our conjecture to take the more general form \begin{eqnarray} \frac{ g_M(M)}{{\cal N}} &~\approx~& \sqrt{ \frac{5}{14}} \left( \frac{d \log S^2(M)}{d\log M}\right)^{-1/2}\nonumber \\ && ~~~\times \left| \min\left(0, \frac{d^2 \log S^2(M)}{(d\log M)^2}\right) \right|~.~~~ \label{eq:recconj} \end{eqnarray} This, then, is the complete statement of our reconstruction conjecture. Several important caveats must be borne in mind regarding our conjecture. First, we emphasize that this conjecture is not meant to be a precise mathematical statement.
Indeed, given the rather complicated nature of the Einstein-Boltzmann evolution equations which connect $g_M(M)$ to $S(M)$, we do not expect a relation of the simple form in Eq.~(\ref{eq:recconj}) to provide a precise inverse (except perhaps under some limiting approximations and simplifications). Rather, this conjecture is intended merely as an approximate practical guide --- a back-of-the-envelope method for reproducing the rough characteristics of $g_M(M)$ given a particular structure-suppression function $S(M)$. Second, as discussed in more detail in Appendix~\ref{app:TheMap}, our map between $v$ and $M$ in Eq.~(\ref{eq:vtoMmap}) has been formulated under the assumption that the dark matter is relativistic at the time at which it is produced. When this is not the case, we expect that a more appropriate map between these two variables will depend on further details such as the time at which the dark matter is produced, and hence will carry a sensitivity to the particular dark-matter production scenario envisaged. However, in the vast majority of situations in which this assumption is violated and a significant population of dark-matter particles is non-relativistic at the time of production, this population of non-relativistic dark-matter particles is typically sufficiently cold that it has no effect on $S^2(M)$ for $M$ within our regime of interest. While it is possible to engineer situations in which the map in Eq.~(\ref{eq:vtoMmap}) might require modification while free-streaming effects on $S^2(M)$ are non-negligible, such situations require a somewhat unusual dark-matter cosmology --- a cosmology in which a significant non-relativistic yet ``lukewarm'' population of dark-matter particles is generated at exceedingly late times by some dynamics that contributes significantly to $f(v)$ within a particular range of velocities. Third, our procedure for calculating $P(k)$ from a given $f(v)$ implicitly incorporates certain assumptions.
One of these assumptions is that the background cosmology does not deviate significantly from that of the standard cosmology. For example, it is assumed that the time $t_{\mathrm{MRE}}$ of matter-radiation equality is the same as in the standard cosmology and that the universe is effectively radiation-dominated at all times from the end of the reheating epoch until $t_{\mathrm{MRE}}$. It is also assumed that the primordial spectrum of density perturbations produced after inflation is Gaussian-random. Another of these assumptions is that the velocity distribution of dark-matter particles has ceased evolving, except as a consequence of redshifting effects, by some very early time deep within the radiation-dominated epoch. This implies not only that the production of the dark matter is effectively complete by that time, but also that the effect of scattering and decay processes involving dark-matter particles is negligible thereafter. Of course, the above assumptions do not necessarily imply limitations on our conjecture {\it per se}\/ in these regimes. While it is possible that our conjecture ceases to provide accurate results for cosmological histories wherein the above assumptions are relaxed, it is also possible that our conjecture remains robust even in the presence of these deviations. The restrictions implied by these caveats are not severe. Indeed, as we shall demonstrate in Sect.~\ref{sec:Results}, our conjecture as stated here will still allow us to resurrect the salient features of $g_M(M)$ --- and hence also of $f(v)$ --- for a wide variety of different dark-sector scenarios. \section{Testing the Conjecture\label{sec:Results}} Having stated our conjecture, we now assess the extent to which it enables us to reconstruct the underlying dark-matter velocity distribution from the halo-mass function. 
In particular, we shall test this conjecture within the context of an illustrative dark-matter model in which $g_M(M)$ deviates significantly from that of purely cold dark matter in a variety of ways within different regions of model-parameter space. For a set of illustrative points in that parameter space, we will then reconstruct $g_M(M)$ using our conjecture and compare it with the corresponding ``true'' $g_M(M)$ distribution. The model which we shall adopt for purposes of illustration --- a model which was introduced in Ref.~\cite{Dienes:2020bmn} --- is one in which the cosmological abundance of dark matter is generated non-thermally, via the decays of unstable dark-sector particles. The details of this model are summarized in Sect.~IV of Ref.~\cite{Dienes:2020bmn}.~ However, for our present purposes, these details are unimportant. We are simply interested in the dark-matter velocity distributions to which the model gives rise --- velocity distributions which we shall view solely as test functions for our conjecture. Indeed, by varying the parameters of this model --- in particular, two parameters $r$ and $s$ which govern the coupling structure of the dark-sector particles --- we are able to realize a variety of qualitatively different dark-matter velocity distributions in a straightforward way. For any given $g_M(M)$ distribution we obtain in this model, we first calculate the linear matter power spectrum $P(k)$ numerically using \texttt{CLASS}.~ We then evaluate the corresponding structure-suppression function $S^2(M)$ using the Press-Schechter formalism, as encapsulated in Eq.~(\ref{eq:PressSchechter}), in order to obtain both $dn/d\log M$ and $(dn/d\log M)_{\rm CDM}$. We then test our conjecture by using it to reconstruct $g_M(M)$ from $S^2(M)$, and assess how well this reconstructed $g_M(M)$ matches the $g_M(M)$ test function with which we started. 
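Concretely, the reconstruction step of Eq.~(\ref{eq:recconj}) can be sketched as a short finite-difference computation on a uniform $\log M$ grid; the derivative stencils and the small-derivative guard below are illustrative numerical choices, not part of the conjecture itself.

```python
import numpy as np

# Sketch of the reconstruction conjecture: recover g_M(M)/N (up to overall
# normalization) from log S^2 sampled on a log M grid, via the first and
# second logarithmic derivatives, zeroing contributions where log S^2 is
# concave-up.

def reconstruct_gM(logM, logS2):
    """Apply the conjecture pointwise; inputs are arrays of log M and log S^2."""
    d1 = np.gradient(logS2, logM)      # d log S^2 / d log M
    d2 = np.gradient(d1, logM)         # d^2 log S^2 / (d log M)^2
    d1 = np.clip(d1, 1e-12, None)      # guard the inverse square root
    return np.sqrt(5.0 / 14.0) * d1**-0.5 * np.abs(np.minimum(0.0, d2))
```

Applied to a smooth, everywhere concave-down suppression curve (e.g. a log-sigmoid test input), this returns a single-peaked, non-negative profile, as the conjecture predicts.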
In Fig.~\ref{fig:BradyBunch}, we display the results of our analysis for nine different combinations of the model parameters $r$ and $s$ defined in Sect.~IV of Ref.~\cite{Dienes:2020bmn}. These parameter combinations have been chosen such that the corresponding dark-matter velocity distributions exhibit a wide variety of profiles. The blue curve displayed in each panel represents the ``true'' velocity distribution $g_M(M)$ for the corresponding choice of model parameters. Indeed, we see that the set of $g_M(M)$ functions obtained for this set of parameter combinations includes unimodal distributions as well as a variety of multi-modal distributions. Thus, the velocity distributions shown in Fig.~\ref{fig:BradyBunch} will collectively provide a thorough ``stress-test'' of how well our conjecture performs when applied to qualitatively different kinds of dark-matter scenarios. \begin{figure*}[h!] \centering \includegraphics[clip, width=0.99\textwidth]{reconstruction.pdf}~ \caption{An explicit test of our reconstruction conjecture for a variety of different dark-matter phase-space distributions $g_M(M)$, including distributions which are uni-modal, bi-modal, and even multi-modal, exhibiting complex configurations of peaks and troughs. These distributions are drawn from the particular dark-matter model discussed in Ref.~\cite{Dienes:2020bmn} and represent different choices of the parameters $(r,s)$ defined therein. For our purposes, however, these distributions may be regarded simply as test functions through which we may assess the validity of our conjecture. The blue curve shown in each panel represents the original dark-matter velocity distribution $g_M(M)$. The black curve represents the corresponding structure-suppression function $S^2(M)$ to which it gives rise. The green curve represents the reconstruction of $g_M(M)$ from $S^2(M)$ using Eq.~(\ref{eq:recconj}). 
In all cases, we see that our conjecture successfully reproduces the salient features of the original velocity distribution. \label{fig:BradyBunch}} \end{figure*} The black curve appearing in each panel of Fig.~\ref{fig:BradyBunch} represents the structure-suppression function $S^2(M)$ which corresponds to the velocity distribution $g_M(M)$. The green curve, on the other hand, represents the reconstructed $g_M(M)$ obtained solely from information contained in $S^2(M)$ using Eq.~(\ref{eq:recconj}). In performing this test, we have chosen the value of the proportionality constant in Eq.~(\ref{eq:vtoMmap}) to be {\il{\xi = 9/4}}, as this tends to horizontally align the original and reconstructed dark-matter velocity distributions with each other. We observe that in each case shown, our reconstruction conjecture indeed reproduces the salient features of the original velocity distribution. In particular, we see that our conjecture allows us to reconstruct not only the approximate locations of the peaks in $g_M(M)$, but also the relative areas under those peaks to an impressive degree of accuracy across the entire range of $M$ shown. Thus, while our conjecture of course does not reproduce the detailed shapes of the features in $g_M(M)$ with perfect fidelity, the results in Fig.~\ref{fig:BradyBunch} attest that the simple relation in Eq.~(\ref{eq:recconj}) nevertheless provides a versatile tool for extracting meaningful information about the properties of the dark matter directly from the shape of the halo-mass function alone. \section{Conclusions\label{sec:Conclusions}} In this paper, we have proposed a conjecture which can be used to reconstruct the salient features of the primordial dark-matter velocity distribution $f(v)$ directly from the shape of the halo-mass function $dn/d\log M$. This reconstruction conjecture requires essentially no additional information about the properties of the dark matter beyond what is imprinted on $dn/d\log M$ itself. 
Moreover, we have shown that our conjecture successfully reproduces the salient features of the underlying dark-matter velocity distribution even in situations in which that distribution is complicated and even multi-modal. Several comments are in order. First of all, the precise mathematical statement of our reconstruction conjecture in Eq.~(\ref{eq:recconj}) is predicated on a number of theoretical assumptions concerning the form of the halo-mass function, the window function $W(k,R)$, {\it etc}.\/~ For example, in our analysis, we have adopted the functional form for $\eta(M)$ in Eq.~(\ref{eq:ShethTormanf}). However, as discussed in Sect.~\ref{sec:HaloMassFunction}, there exist a number of alternatives we could have chosen for $\eta(M)$.~ Likewise, while our choice of the window function in Eq.~(\ref{eq:Windowfn}) allows us to formulate the map between $k$ and $M$ in an unambiguous way, it is certainly possible to consider alternatives for $W(k,R)$. It is possible that such modifications would change the precise empirical relation in Eq.~(\ref{eq:empir}). However, for well-behaved window functions --- {\it i.e.}\/, functions which have sufficiently flat tops and which decay sufficiently rapidly for {\il{k \gtrsim R^{-1}}} --- we expect that these modifications will not alter the fundamental connection between $d\log S^2(M)/d\log M$ and $F(M)$ which underlies our conjecture. This issue merits further exploration. One interesting feature of our reconstruction conjecture is that it makes no particular assumption about the mass of the dark-matter state(s). As such, it is not restricted to cases in which the dark-matter velocity distribution $f(v)$ receives contributions from a single dark-matter species with a well-defined mass. 
Indeed, in the case in which multiple particle species contribute to the present-day dark-matter abundance (an extreme example of which occurs in the Dynamical Dark Matter framework~{\mbox{\cite{Dienes:2011ja,Dienes:2011sa}}}), the distribution $g_M(M)$ can still be obtained via Eq.~(\ref{eq:recconj}). The corresponding distribution $g_v(v)$ in this case represents the aggregate velocity distribution for all particle species which contribute to the present-day dark-matter abundance. Indeed, while our reconstruction conjecture is capable of distinguishing between different dark-matter velocity distributions, it does not distinguish between single-particle and multi-particle dark-matter scenarios which yield the same $f(v)$ and therefore the same $S^2(M)$. It is also important to note the similarities and differences between our conjecture for reconstructing $f(v)$ from the shape of $S^2(M)$ and the similar proposal that we advanced in Ref.~\cite{Dienes:2020bmn} for extracting information about the dark-matter phase-space distribution from the linear matter power spectrum. First, as emphasized in the Introduction, the conjecture in Eq.~(\ref{eq:recconj}) does not rely on this previous proposal in any way. Moreover, the conjecture in Eq.~(\ref{eq:recconj}) permits one to extract information about $f(v)$ at much lower velocities. Measurements of $P(k)$ based on data obtained at low redshifts are currently reliable up to around {\il{k\lesssim 0.05}} -- $0.1\mbox{~Mpc}^{-1}$. Information from Lyman-$\alpha$-forest measurements can provide additional information about $P(k)$ at wavenumbers up to around {\il{k \sim 1\mbox{~Mpc}^{-1}}}. While future measurements of the 21-cm line of neutral hydrogen could in principle yield information about $P(k)$ at significantly higher redshifts, the present state of our knowledge of $P(k)$ permits us to reconstruct $f(v)$ only down to {\il{v \sim 5\times 10^{-7}}} using the methods of Ref.~\cite{Dienes:2020bmn}. 
By contrast, our reconstruction conjecture in Eq.~(\ref{eq:recconj}) relies solely on information contained within the halo-mass function in order to reconstruct $f(v)$. Thus, one could in principle use this conjecture to probe $f(v)$ down to {\il{v \sim 10^{-9}}} or lower. Determining the halo-mass function from observation, of course, presents its own challenges. Significant theoretical uncertainties exist in the relationship between the relevant astrophysical observables and halo mass. Moreover, statistical fluctuations in the measured values of these observables can introduce a so-called Eddington bias~\cite{Eddington:1913}. Furthermore, the accuracy to which the mass of an individual halo can be measured is limited both by the number density of background source images and by uncertainties in the shapes of foreground halos. Nevertheless, a number of strides have been made toward measuring $dn/d\log M$ on the basis of observational data. Strong gravitational lensing provides a tool which can be used to probe the halo-mass function and corresponding subhalo-mass function on mass scales {\il{M \sim 10^{6}}} -- $10^{8}$~$M_\odot$. Analyses of small existing samples of strongly-lensed objects have already yielded meaningful constraints on these functions~{\mbox{\cite{Vegetti:2014lqa,Hsueh:2019ynk,Gilman:2019nap}}}. Moreover, a significant number of additional strong-lensing candidates have been identified within Sloan Digital Sky Survey (SDSS) data~\cite{Talbot:2020arv}. Methods have also been proposed for obtaining information about $dn/d\log M$ at higher values of $M$ from CMB data~\cite{Murray:2013sna}, galaxy-cluster number counts~\cite{Castro:2016jmw}, weak-lensing measurements~{\mbox{\cite{Dong:2019mch,Sonnenfeld:2019}}}, and the line widths of neutral hydrogen emitted by galaxies~\cite{Li:2019zvm}. 
Thus, despite the challenges involved in determining the halo-mass function from observation, larger data sets and an improved understanding of the theoretical relationship between astrophysical observables and halo mass could significantly reduce the uncertainties in $dn/d\log M$ in the near future. As such, a calculational tool of the sort we have proposed in this paper could be a valuable addition to the toolbox of the dark-matter cosmologist. \begin{acknowledgments} KM wishes to thank the EXCEL Scholars Program for Undergraduate Research at Lafayette College, which helped to facilitate this research. The research activities of KRD are supported in part by the Department of Energy under Grant DE-FG02-13ER41976 (DE-SC0009913) and by the National Science Foundation through its employee IR/D program. FH is supported in part by the National Natural Science Foundation of China (NSFC) under Grants 11690022, 12047503, 11875003, and 12022514 and is also supported by the Strategic Priority Research Program and Key Research Program of Frontier Science of the Chinese Academy of Sciences under Grants XDB21010200, XDB23000000, and ZDBS-LY-7003. The research activities of JK are supported in part by the Science and Technology Facilities Council (STFC) under the Consolidated Grant ST/T00102X/1. The research activities of KM and BT are supported in part by the National Science Foundation under Grants PHY-1720430 and PHY-2014104. The opinions and conclusions expressed herein are those of the authors, and do not represent those of any funding agencies. \end{acknowledgments}
\section{Introduction} \label{intro} We fix throughout a field $\mathbb{F}$ of characteristic zero. All Lie algebras and representations considered in this paper are assumed to be finite dimensional over $\mathbb{F}$, unless explicitly stated otherwise. Given a Lie algebra $\g$ and a $\g$-module $V$, the \emph{socle series} of $V$, namely \[ 0=\mathrm{soc}^{0}(V)\subset\mathrm{soc}^{1}(V)\subset \cdots\subset \mathrm{soc}^{m}(V)=V \] is inductively defined by declaring $\mathrm{soc}^{i}(V)/\mathrm{soc}^{i-1}(V)$ to be the socle of $V/\mathrm{soc}^{i-1}(V)$, that is, the sum of all irreducible submodules of $V/\mathrm{soc}^{i-1}(V)$, $1\leq i\leq m$. By definition, $V$ is \emph{uniserial} if the \emph{socle factors} $\mathrm{soc}^{i}(V)/\mathrm{soc}^{i-1}(V)$ are irreducible for all $1\leq i\leq m$. In other words, $V$ is uniserial if its socle series is a composition series, or equivalently if its submodules are totally ordered by inclusion. Uniserial and serial modules or rings are very important in the context of associative algebras and there is an extensive literature devoted to them. For instance, the class of serial rings and algebras includes discrete valuation rings, Nakayama algebras, triangular matrix rings over a skew field and Artinian principal ideal rings (see \cite{EG,Pu}). In particular, every proper factor ring of a Dedekind domain is serial. Also, serial algebras occur as group algebras in characteristic $p$ (see, for instance, \cite{Sr}). In \cite{BH-Z}, among other things, a characterization of algebras of finite uniserial type is given. In contrast, there are only a few papers devoted to the study of these concepts for Lie algebras. This work is a new step in a project aiming to systematically investigate the uniserial representations of Lie algebras. 
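As a concrete warm-up (our illustration, not drawn from the references above), the simplest uniserial module arises when the one-dimensional abelian Lie algebra $\mathbb{F}x$ acts on $\mathbb{F}^n$ through a single nilpotent Jordan block $N$: the submodules are exactly the kernels $\ker(N^k)$, hence totally ordered by inclusion, and every socle factor is 1-dimensional, so irreducible. The following short Python sketch checks this numerically for $n=4$.

```python
# Minimal illustration (ours): the 1-dimensional abelian Lie algebra F*x
# acting on F^n through a single nilpotent Jordan block N is uniserial.
# The socle series is soc^i = ker(N^i), so each socle factor has
# dimension dim ker(N^i) - dim ker(N^{i-1}); for a single Jordan block
# every such factor is 1-dimensional, i.e. irreducible.
import numpy as np

def nilpotent_jordan_block(n):
    # N e_1 = 0 and N e_i = e_{i-1} for i > 1 (ones on the superdiagonal).
    return np.eye(n, k=1)

def socle_factor_dims(N):
    n = N.shape[0]
    dims, prev, i = [], 0, 1
    while prev < n:
        kerdim = int(n - np.linalg.matrix_rank(np.linalg.matrix_power(N, i)))
        dims.append(kerdim - prev)
        prev, i = kerdim, i + 1
    return dims

dims = socle_factor_dims(nilpotent_jordan_block(4))
# dims == [1, 1, 1, 1]: the socle series is a composition series,
# which is precisely the uniserial condition.
```

Any other action with a repeated Jordan block would produce a socle factor of dimension greater than 1, and the module would fail to be uniserial.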
Here, we classify all uniserial $\g$-modules for $\g=\sl(2)\ltimes \h_{n}$, where $\h_{n}$ is the Heisenberg Lie algebra of dimension $2n+1$ and $\sl(2)$ acts on $\h_{n}$ so that both the center $\z$ of $\h_{n}$ and $\h_{n}/\z$ are irreducible $\sl(2)$-modules. More precisely, given an integer $a\geq 0$, let $V(a)$ be the irreducible $\sl(2)$-module with highest weight $a$. Thus, \[ \h_n\simeq V(m)\oplus V(0),\quad m=2n-1, \] as $\sl(2)$-modules. The Lie algebra $\g$ is a conformal Galilei algebra and it is an important object in mathematical physics. Galilei algebras and their representations attract considerable attention (see \cite{AIK}, \cite{LMZ} and references therein). Previously, we obtained a classification of all uniserial $\g$-modules when $\g=\sl(2)\ltimes V(a)$, $a\geq 1$, over the complex numbers (see \cite{CS}), as well as when $\g$ is abelian, over a sufficiently large perfect field (see \cite{CS_Can}). In the first case the classification turns out to be equivalent to determining all non-trivial zeros of the Racah-Wigner $6j$-symbol within certain parameters, while in the second a sharpened version of the Primitive Element Theorem plays a central role, especially over finite fields. \medskip In this article we focus on $\g=\sl(2)\ltimes \h_{n}$. Since every non-trivial ideal of $\g$ contains $\z$, it follows that any non-faithful representation of $\g$ is obtained from a representation of $\sl(2)\ltimes V(m)$. Therefore, the classification of all non-faithful uniserial $\g$-modules follows from \cite{CS}, while the classification of all faithful uniserial $\g$-modules is given by the following theorem, which is the main result of the paper. \begin{theorem}\label{thm.main} All faithful uniserial $\g$-modules have length 3. Moreover, the socle factors of a faithful uniserial $\g$-module of length 3 must be one of the following: \medskip \noindent \begin{tabular}{ll} $m=1:$ & $V(a),V(a+1),V(a)$ or $V(a+1),V(a),V(a+1)$, with $a\ge0$. 
\\[1mm] $m=3:$ & $V(0),V(3),V(0)$ or $V(1),V(4),V(1)$ or $V(1),V(2),V(1)$ or $V(4),V(3),V(4)$.\\[1mm] $m\geq 5:$ & $V(0),V(m),V(0)$ or $V(1),V(m+1),V(1)$ or $V(1),V(m-1),V(1)$. \end{tabular} \medskip \noindent Furthermore, each of these sequences arises from one and only one isomorphism class of uniserial $\g$-modules. \end{theorem} \begin{remark} It follows from this theorem that for a given $n>2$, $\sl(2)\ltimes\h_{n}$ has only 3 isomorphism classes of faithful uniserial representations (if $n = 2$, it has 4), whereas it has infinitely many classes of non-faithful ones. \end{remark} Theorem \ref{thm.main} is a direct consequence of Theorems \ref{uno}, \ref{dos} and \ref{no5} below. Explicit realizations of these modules are given in \S\ref{sec.construction}. Let us say a few words about Theorem \ref{thm.main}. Suppose $\g=\s\ltimes\n$, with $\s$ simple, $\n$ nilpotent, and assume that $\n$ is generated as a Lie algebra by an $\s$-submodule $\n_0\subset\n$. By general results of the theory, in order to obtain a faithful uniserial $\g$-module of length $\ell$ with socle factors $V_i$, $1\le i\le\ell$, the following is required: \begin{enumerate}[(1)] \item for $1\le i\le\ell$, a matrix representation $R_i:\s\to\gl(d_i)$ corresponding to the $\s$-module $V_i$; at least one $R_i$ must be non-trivial. \item for $2\le i\le\ell$, a faithful matrix presentation $X_i:\n_0\to M_{d_{i-1},d_i}(\mathbb{F})$ corresponding to an $\s$-module homomorphism $\n_0\to \text{Hom}(V_i,V_{i-1})$; \item for the linear map $R:\s\oplus\n_0\to\gl(\sum d_i)$, defined by $$ R(s+u)=\left( \begin{matrix} R_1(s) & X_2(u) & 0 & \cdots & 0\\ 0 & R_2(s) & X_3(u) & \cdots & 0 \\ \vdots & & \ddots & & \vdots\\ 0 & 0 & \cdots & R_{\ell-1}(s) & X_{\ell}(u) \\ 0 & 0 & \cdots & 0& R_\ell(s)\\ \end{matrix} \right),\quad s\in\s,\; u\in \n_0, $$ the matrix Lie algebra $\tilde \n$ generated by $R(\n_0)$ must be isomorphic to $\n$ (note that $\tilde \n$ consists of block upper triangular matrices). 
\end{enumerate} \medskip When $\g=\sl(2)\ltimes \n$, the following fact describes what happens with the second superdiagonal of $\tilde \n$, namely $[R(\n_0),R(\n_0)]$, which should be isomorphic to $[\n_0,\n_0]\subset\n$ as $\s$-modules. \medskip \noindent \textbf{Fact:} Generically speaking, it turns out that the block $(i-2,i)$, $3\le i\le\ell$, of $\tilde \n$ consists of \textbf{all} irreducible $\s$-submodules of $\Lambda^2\n_0$ that also appear in $\text{Hom}(V_i,V_{i-2})$. Nevertheless, in some curious cases, some of these constituents do not appear in $[R(\n_0),R(\n_0)]$, as the following example shows. \medskip \noindent \textbf{Example:} Let $\g=\sl(2)\ltimes\h_2$, so that $\n_0=V(3)$ and $[\n_0,\n_0]=V(0)$. Assume that $\{v_0,v_1,v_2,v_3\}$ is a standard basis of $\n_0$, as defined in \S\ref{wdos}. Proceeding as above, we obtain a faithful representation of $\g$ with socle factors $V(4)$, $V(3)$, $V(4)$ via: \[ R\big(\textstyle\sum_{i=0}^3a_iv_i\big)=\left( \begin{array}{c|c|c} \qquad 0\qquad & \begin{smallmatrix} -6a_1&6a_0&0&0\\ -3a_2&0&3a_0&0\\ -a_3&-3a_2&3a_1&a_0\\ 0&-3a_3&0&3a_1\\ 0&0&-6a_3&6a_2 \end{smallmatrix} & \\[5mm] \hline & 0 &\begin{smallmatrix} \\[1mm] 3a_2&-6a_1&3a_0&0&0\\ a_3&0&-3a_1&2a_0&0\\ 0&2a_3&-3a_2&0&a_0\\ 0&0&3a_3&-6a_2&3a_1 \end{smallmatrix} \\[5mm] \hline & \begin{smallmatrix}\\[6mm] \end{smallmatrix} & 0 \end{array} \right). \] It turns out that, by ``some miracle'', $[R(\n_0),R(\n_0)]$ is just $V(0)$, as opposed to the expected result of $V(0)\oplus V(4)$ (note that $V(4)$ is indeed a constituent of both $\text{Hom}(V(4),V(4))$ and $\Lambda^2\n_0$). This ``miracle'', which is due to the exceptional zero $\left\{\begin{matrix} \frac{4}{2} \; \frac{3}{2} \; \frac{3}{2} \\[.6mm] \frac{3}{2} \; \frac{4}{2} \; \frac{3}{2} \end{matrix}\right\}=0$ of the $6j$-symbol, produces an unexpected uniserial $\g$-module. 
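This cancellation can be checked directly. The Python sketch below (our addition; the two non-zero blocks $X$ and $Y$ are transcribed from the display above) verifies that $X(u)Y(v)-X(v)Y(u)$ is a \emph{scalar} $5\times 5$ matrix for generic $u,v$, so the block $(1,3)$ carries only the trivial constituent $V(0)$; the scalar agrees with the Heisenberg bracket $[v_i,v_{m-i}]=(-1)^i\binom{m}{i}z$ for $m=3$ up to an overall factor of $6$ (a normalization we read off from these particular matrices, not stated in the text).

```python
# Check of the example above: for sl(2) acting on h_2, the (1,3) block of
# [R(u), R(v)], namely X(u)Y(v) - X(v)Y(u), is a *scalar* 5x5 matrix, so
# [R(n_0), R(n_0)] realizes only V(0) and the expected V(4) constituent
# cancels.  X and Y are transcribed from the displayed representation.
import numpy as np

def X(a):
    a0, a1, a2, a3 = a
    return np.array([
        [-6*a1,  6*a0,     0,     0],
        [-3*a2,     0,  3*a0,     0],
        [  -a3, -3*a2,  3*a1,    a0],
        [    0, -3*a3,     0,  3*a1],
        [    0,     0, -6*a3,  6*a2],
    ], dtype=float)

def Y(a):
    a0, a1, a2, a3 = a
    return np.array([
        [3*a2, -6*a1,  3*a0,     0,     0],
        [  a3,     0, -3*a1,  2*a0,     0],
        [   0,  2*a3, -3*a2,     0,    a0],
        [   0,     0,  3*a3, -6*a2,  3*a1],
    ], dtype=float)

rng = np.random.default_rng(0)
u, v = rng.normal(size=4), rng.normal(size=4)
C = X(u) @ Y(v) - X(v) @ Y(u)

# The scalar matches the pairing [v_i, v_{3-i}] = (-1)^i binom(3,i) z,
# scaled by 6 (a normalization read off from these matrices):
lam = 6.0 * ((u[0]*v[3] - u[3]*v[0]) - 3.0*(u[1]*v[2] - u[2]*v[1]))
# C equals lam * I_5: only the trivial sl(2)-constituent V(0) survives.
```

Had the $V(4)$ constituent survived, $C$ would have contained a non-scalar part, and the matrix Lie algebra generated by $R(\n_0)$ would have been larger than $\h_2$.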
\medskip In general, if $\g=\sl(2)\ltimes \n$ then the exceptional zeros of the $6j$-symbol control when the matrix Lie algebra generated by $R(\n_0)$ is isomorphic to $\n$. Item (3) above might be very difficult to determine for other simple Lie algebras $\s$. \section{Preliminaries} \subsection{Matrix recognition of uniserial representations}\label{sec.matrix_recognition} In this subsection we recall from \cite{CS} some basic facts about uniserial representations of a Lie algebra $\g$ with solvable radical $\r$ and fixed Levi decomposition $\g=\s\ltimes \r$. Given a representation $T:\g\to\gl(V)$ and a basis $B$ of $V$ we let $M_B:\g\to\gl(d)$, $d=\dim(V)$, stand for the corresponding matrix representation. Since $\s$ is semisimple, it follows that there exist irreducible $\s$-submodules $W_i$, $1\le i\le n$, such that \begin{equation}\label{eq.comp_series} 0\subset W_1\subset W_1\oplus W_2\subset W_1\oplus W_2\oplus W_3\subset \cdots\subset W_1\oplus\cdots \oplus W_n=V \end{equation} is the composition series of $V$. Let $B=B_1\cup\cdots\cup B_n$ be a basis of $V$, where each $B_i$ is a basis of $W_i$. We say that $B$ is \emph{adapted} to the composition series \eqref{eq.comp_series}. If $B$ is adapted to a composition series, then $M_B(s)$ is block diagonal for all $s\in\s$. It is well-known \cite[Ch. I, \S 5, Th. 1]{Bo} that $[\g,\r]$ annihilates every irreducible $\g$-module. Therefore, if $B$ is adapted to a composition series, then $M_B(r)$ is strictly block upper triangular for all $r\in [\g,\r]$. The following result, proven in \cite[Theorem 2.4]{CS} over $\C$, remains valid over $\mathbb{F}$. \begin{theorem}\label{thm.CS_2.4} \label{fed} The $\g$-module $V$ is uniserial if and only if, given any basis $B$ adapted to a composition series, none of the blocks in the first superdiagonal of $M_B(\r)$ is identically 0. 
Moreover, if $[\g,\r]=\r$ and there exists a basis $B$ adapted to a composition series such that none of the blocks in the first superdiagonal of $M_B(\r)$ is identically 0, then $V$ is uniserial. \end{theorem} \subsection{Uniserial representations of \texorpdfstring{$\sl(2)\ltimes V(m)$}{sl(2)xV(m)}}\label{wdos} Recall that $V(a)$ is the irreducible $\sl(2)$-module with highest weight $a\ge0$. We fix a basis $\{v_0,\dots,v_a\}$ of $V(a)$ relative to which $e,h,f\in\sl(2)$ act as follows: $$ hv_i=(a-2i)v_i, $$ $$ ev_i=(a-(i-1))v_{i-1}, $$ $$ fv_i=(i+1)v_{i+1}, $$ where $0\leq i\leq a$ and $v_{-1}=0=v_{a+1}$. Such a basis of $V(a)$ will be called \emph{standard}. We write $R_a:\sl(2)\to\gl(a+1)$ for the corresponding matrix representation. The following theorem, proved in \cite{CS}, provides a classification, up to isomorphism, of all the uniserial representations of the Lie algebra $\sl(2)\ltimes V(m)$, $m\ge1$, when the underlying field is $\C$. Nevertheless, the classification remains true over $\mathbb{F}$. \begin{theorem}\label{thm.CS_Classification} Up to a reversing of the order, the following are the only possible sequences of socle factors of uniserial representations of $\sl(2)\ltimes V(m)$: \noindent \begin{tabular}{ll} \\[-2mm] Length 1. & $V(a)$. \\[2mm] Length 2. & $V(a),V(b)$, where $a+b\equiv m\mod 2$ and $0\le b-a\leq m\leq a+b$. \\[2mm] Length 3. & $V(a),V(a+m),V(a+2m)$; or \\[1mm] & $V(0),V(m),V(c)$, where $c\equiv 2m \mod 4$ and $c\leq 2m$. \\[2mm] Length 4. & $V(a),V(a+m),V(a+2m),V(a+3m)$; or \\[1mm] & $V(0),V(m),V(m),V(0)$, where $m\equiv 0\mod 4$. \\[2mm] Length $\geq 5$. & $V(a),V(a+m),\dots,V(a+s m)$, where $s\geq 4$. \\[2mm] \end{tabular} \noindent Moreover, each of these sequences arises from only one isomorphism class of uniserial $\g$-modules, except for the sequence $V(0),V(m),V(m),V(0)$, $m\equiv 0\mod 4$. The isomorphism classes of uniserial $\g$-modules associated to this sequence are parametrized by $\mathbb{F}$. 
\end{theorem} Explicit realizations of these modules can be found in \cite{CS}. \subsection{The Lie algebra \texorpdfstring{$\g=\sl(2)\ltimes \h_{n}$}{}}\label{lieg} We fix an integer $n\ge1$. Let $\h_{n}$ be the Heisenberg Lie algebra of dimension $2n+1$ and set $m=2n-1$. Of all Lie algebras of a given dimension (which, a fortiori, must be odd), $\h_{n}$ is characterized by the fact that its center, say $\z=\mathbb{F} z$, is 1-dimensional and $[\h_n,\h_n]=\z$. We know that $\sl(2)$ acts via derivations on $\h_{n}$ in such a way that \[ \h_{n}\simeq V(m)\oplus \z \] as $\sl(2)$-modules, where $\z\simeq V(0)$. There is a unique $\sl(2)$-invariant skew-symmetric bilinear form on $V(m)$, up to scaling. Thus, the bracket on $V(m)$ is uniquely determined, up to scaling. We fix $[v_0,v_m]=z$ and obtain \[ [v_i,v_{m-i}]=(-1)^i\binom{m}{i}z,\quad 0\leq i\leq m. \] Let $\g=\sl(2)\ltimes \h_{n}$. \subsection{The \texorpdfstring{$6j$}{}-symbol} Given three half-integers, $j_1$, $j_2$ and $j_3$, we say that they satisfy the triangle condition if $j_1+j_2+j_3\in\Z$ and there is a triangle with sides $j_1$, $j_2$ and $j_3$; that is \[ |j_1-j_2|\le j_3\le j_1+j_2. \] In particular, $j_1$, $j_2$ and $j_3$ must be non-negative. If either $|j_1-j_2|=j_3$ or $j_3=j_1+j_2$ we say that the triple $(j_1,j_2, j_3)$ is a degenerate triangle. The Clebsch-Gordan formula states that $\dim_{\mathbb{F}}\Hom_{\sl(2)}(V(k), V(a)\otimes V(b))=1$ if $(\frac{a}{2},\frac{b}{2},\frac{k}{2})$ satisfies the triangle condition and 0 otherwise. \medskip We recall from \cite[Chapter 9]{VMK} some of the main properties of the $6j$-symbol. 
\begin{enumerate}[(1)] \item\label{it.2} Given six half-integers $j_1$, $j_2$, $j_3$, $j_4$, $j_5$ and $j_6$ the $6j$-symbol $\left\{ \begin{matrix} j_1 \; j_2 \; j_3 \\ j_4 \; j_5 \; j_6 \end{matrix} \right\}$ is a real number that is, by definition, zero if one of the following four triples \begin{equation} \label{triples} (j_1,j_2, j_3),\; (j_1, j_5, j_6),\; (j_4, j_2, j_6),\; (j_4, j_5, j_3) \end{equation} does not satisfy the triangle condition. In particular, $\left\{ \begin{matrix} j_1 \; j_2 \; j_3 \\ j_4 \; j_5 \; j_6 \end{matrix} \right\}=0$ if some $j_i<0$. In contrast, if all four triples (\ref{triples}) satisfy the triangle condition and one of them is a degenerate triangle then $\left\{ \begin{matrix} j_1 \; j_2 \; j_3 \\ j_4 \; j_5 \; j_6 \end{matrix} \right\}\ne0$ (see \cite[\S9.5.2]{VMK}). \item\label{it.3} The Biedenharn-Elliott identity yields, in particular, the following three-term recurrence relation (see \cite[\S9.6.2]{VMK} or \cite[pag. 1963]{SG}) \begin{align*} i_1E(i_1+1) \left\{\begin{matrix} i_1\!\!+\!\!1 \; i_2 \; i_3 \\ \;\; i_4 \;\; i_5 \; i_6 \end{matrix} \right\} +F(i_1) \left\{\begin{matrix} i_1 \; i_2 \; i_3 \\ i_4 \; i_5 \; i_6 \end{matrix} \right\} +(i_1+1)E(i_1) \left\{\begin{matrix} i_1\!\!-\!\!1 \; i_2 \; i_3 \\ \;\; i_4\;\; i_5 \; i_6 \end{matrix} \right\}=0 \end{align*} where \begin{align*} F(i_1)= (2i_1 + 1)\big(& i_1(i_1+1)(-i_1(i_1+1) + i_2(i_2+1) + i_3(i_3+1)) \\ + &i_5(i_5+1)( i_1(i_1+1) + i_2(i_2+1) - i_3(i_3+1)) \\ + &i_6(i_6+1)( i_1(i_1+1) - i_2(i_2+1) + i_3(i_3+1)) \\ - &2i_1(i_1+1)i_4(i_4+1) \big) \end{align*} and \begin{align*} E(i_1)\!=\! \sqrt{\big(i_1^2 - (i_2-i_3)^2\big)\big((i_2+i_3+1)^2 - i_1^2\big) \big(i_1^2 - (i_5-i_6)^2\big)\big((i_5+i_6+1)^2 - i_1^2\big)}. \end{align*} \item\label{it.4} The $6j$-symbol is invariant under the permutation of any two columns. It is also invariant if upper and lower arguments are interchanged in any two columns. 
\end{enumerate} \begin{prop}\label{prop.6j_non-zero} Let $j_1$, $j_2$, $j_3$, $j_4$, $j_5$ and $j_6$ be non-negative half-integers such that $j_1=j_5+j_6\ge3$, $j_2=j_3$ and all the triples \begin{equation}\label{eq.4triples} (h, j_2, j_3),\; (h, j_5, j_6),\; (j_4, j_2, j_6),\; (j_4, j_5, j_3) \end{equation} satisfy the triangle condition for $h=j_1$, $h=j_1-1$. If $ \left\{\begin{matrix} j_1\!-\!1 \; j_2 \; j_3 \\ \;\;j_4 \;\; \; j_5 \; j_6 \end{matrix} \right\}=0 $ then $ \left\{\begin{matrix} j_1\!-\!2 \; j_2 \; j_3 \\ \;\;j_4 \;\; \; j_5 \; j_6 \end{matrix} \right\}\ne0 $ and $ \left\{\begin{matrix} j_1\!-\!3 \; j_2 \; j_3 \\ \;\;j_4 \;\; \; j_5 \; j_6 \end{matrix} \right\}\ne0 $. In particular, the triples \eqref{eq.4triples} satisfy the triangle condition for $h=j_1-2$ and $h=j_1-3$. \end{prop} \begin{proof} Fix $(i_2,i_3,i_4,i_5,i_6)=(j_2,j_3,j_4,j_5,j_6)$. Since $j_2=j_3$, we have \begin{align*} E(i_1)\!&=\! \sqrt{i_1^2\big((2j_2+1)^2 - i_1^2\big) \big(i_1^2 - (j_5-j_6)^2\big)\big((j_5+j_6+1)^2 - i_1^2\big)}. \\ F(i_1)&= -(2i_1 + 1)i_1(i_1+1) \\ &\hspace{1cm}\times( i_1(i_1+1) - 2j_2(j_2+1) - j_5(j_5+1) - j_6(j_6+1) + 2j_4(j_4+1) ). \end{align*} As the triangle condition is satisfied by $(j_1-1, j_5, j_6)$, we get $|j_5-j_6|\le j_1-1$ and thus $|j_5-j_6|<j_1$. Likewise, as the triangle condition is satisfied by $(j_1, j_2, j_2)$, we get $j_1<2j_2+1$. Moreover, by hypothesis, $j_1=j_5+j_6$, so $j_1<j_5+j_6+1$. Recalling that $j_1>0$, these inequalities imply that \[ E(j_1)\ne0. \] Observe next that $F(j_1)=0$. Indeed, $(j_1+1,j_5,j_6)$ does not satisfy the triangle condition and, by hypothesis, $ \left\{\begin{matrix} j_1\!-\!1 \; j_2 \; j_3 \\ \;\;j_4 \;\; \; j_5 \; j_6 \end{matrix} \right\}=0. $ It follows from Property \eqref{it.3} applied to $i_1=j_1$ that \begin{equation} \label{uo} F(j_1) \left\{\begin{matrix} j_1 \; j_2 \; j_3 \\ j_4 \; j_5 \; j_6 \end{matrix} \right\}=0. 
\end{equation} But the second factor is non-zero since all four triples (\ref{triples}) taken from (\ref{uo}) satisfy the triangle condition and $(j_1,j_5,j_6)$ is a degenerate triangle. Thus $F(j_1)=0$. We next claim that $F(j_1-2)\ne0$. Indeed, from $j_1>0$ and $F(j_1)=0$ we obtain \begin{equation*}\label{eq.Cond1} j_1(j_1+1) - 2j_2(j_2+1) - j_5(j_5+1) - j_6(j_6+1) + 2j_4(j_4+1)=0. \end{equation*} If $F(j_1-2)=0$ then $j_1>2$ implies $j_1(j_1+1) = (j_1-2)(j_1-1)$, so $j_1=\frac{1}{2}$, a contradiction. This proves that $F(j_1-2)\ne0$. We apply Property \eqref{it.3} to $i_1=j_1-1$. By the above, $(j_1-1)E(j_1) \left\{\begin{matrix} j_1 \; j_2 \; j_3 \\ \;\;j_4 \;\; \; j_5 \; j_6 \end{matrix} \right\}\neq 0 $ and, by hypothesis, $ \left\{\begin{matrix} j_1\!-\!1 \; j_2 \; j_3 \\ \;\;j_4 \;\; \; j_5 \; j_6 \end{matrix} \right\}= 0 $. We infer $ \left\{\begin{matrix} j_1\!-\!2 \; j_2 \; j_3 \\ \;\;j_4 \;\; \; j_5 \; j_6 \end{matrix} \right\}\ne 0$. We finally apply Property \eqref{it.3} to $i_1=j_1-2$. By hypothesis, $ \left\{\begin{matrix} j_1\!-\!1 \; j_2 \; j_3 \\ \;\;j_4 \;\; \; j_5 \; j_6 \end{matrix} \right\}= 0 $, while $F(j_1-2)\left\{\begin{matrix} j_1\!-\!2 \; j_2 \; j_3 \\ \;\;j_4 \;\; \; j_5 \; j_6 \end{matrix} \right\}\ne 0$. It follows that $\left\{\begin{matrix} j_1\!-\!3 \; j_2 \; j_3 \\ \;\;j_4 \;\; \; j_5 \; j_6 \end{matrix} \right\}\ne0$. \end{proof} \begin{remark} Proposition \ref{prop.6j_non-zero} is not true without the hypothesis $j_2=j_3$ and, indeed, there are many examples showing this. 
For instance, if $(j_1,j_2,j_3,j_4,j_5,j_6)$ is either \[ (3, 3, 2, 2, 1, 2),\;(4,3/2,7/2,3/2,3,1),\;(6,5/2,13/2,3,9/2,3/2) \] then $j_1=j_5+j_6\ge3$, the triples \eqref{eq.4triples} satisfy the triangle condition for $h=j_1$, $h=j_1-1$; $\left\{\begin{matrix} j_1\!-\!1 \; j_2 \; j_3 \\ \;\;j_4 \;\; \; j_5 \; j_6 \end{matrix} \right\} =0$ (this can be verified with an on-line calculator or from the explicit formula for the $6j$-symbol given in \cite[\S9.2.1]{VMK}) but $(j_1-3,j_2,j_3)$ does not satisfy the triangle condition and thus $\left\{\begin{matrix} j_1\!-\!3 \; j_2 \; j_3 \\ \;\;j_4 \;\; \; j_5 \; j_6 \end{matrix} \right\}=0$. \end{remark} \section{Uniqueness of faithful uniserial \texorpdfstring{$\g$}{}-modules of length 3} \begin{theorem}\label{uno} Suppose $V$ is a faithful uniserial $\g$-module of length 3 with socle factors $V(a),V(b),V(c)$. Then $c=a$. Moreover, \begin{enumerate}[(i)] \item If $m=1$ then either $b=a+1$, or $a\geq 1$ and $b=a-1$. \item If $m\geq 3$ then either $a=0$ and $b=m$, or $a=1$ and $b=m+1$, or $a=1$ and $b=m-1$, or $m=3$, $a=4$ and $b=3$. \end{enumerate} Furthermore, in all cases the isomorphism type of $V$ is completely determined by that of its socle factors. \end{theorem} \begin{proof} Let $0=V_0\subset V_1\subset V_2\subset V_3=V$ be the only composition series of $V$ as a $\g$-module. As $\sel(2)$ is semisimple, there exist $\sel(2)$-submodules $W_2$ and $W_3$ of $V$ such that $V_2=V_1\oplus W_2$ and $V_3=V_2\oplus W_3$. Here $V_1\simeq V(a)$, $W_2\simeq V(b)$ and $W_3\simeq V(c)$. Let $B_1,B_2,B_3$ be bases of $V_1,W_2,W_3$, respectively, so that $B=B_1\cup B_2\cup B_3$ is a basis of $V$ adapted to this composition series. 
Since $\h_n=[\g,\h_n]$, it follows from \S\ref{sec.matrix_recognition} that there is a block upper triangular matrix representation $R:\g\to\gl(d)$, $d=a+b+c+3$, relative to $B$, of the form \begin{equation} \label{zeta} R(s+h)=\left( \begin{array}{ccc} R_1(s) & X(h) & Z(h) \\ 0 & R_2(s) & Y(h) \\ 0 & 0 & R_3(s) \\ \end{array} \right),\quad s\in\sl(2),\; h\in\h_n. \end{equation} We may view $\gl(d)$ as a $\g$-module via $x\cdot A=[R(x),A]$. Note that the 6 upper triangular blocks, say $M_{11}, M_{22}, M_{33}, M_{12}, M_{23}, M_{13}$, become $\sel(2)$-submodules of $\gl(d)$. Moreover, $X:\h_n\to M_{12}$, $Y:\h_n\to M_{23}$, $Z:\h_n\to M_{13}$ are $\sel(2)$-homomorphisms and $M_{12}\simeq\Hom(V(b),V(a))$, $M_{23}\simeq\Hom(V(c),V(b))$, $M_{13}\simeq\Hom(V(c),V(a))$ as $\sel(2)$-modules. By (\ref{zeta}), $R(\z)=R([\h_n,\h_n])\subseteq M_{13}$. Thus, $X$ and $Y$ vanish on $\z$. Since $R$ is faithful, $Z$ does not vanish on $\z$. Hence $V(0)$ enters $V(c)\otimes V(a)$ and this implies, by the Clebsch-Gordan formula, that $c=a$ and $Z(\z)$ consists of scalar operators. Moreover, since $m\not\equiv 2a\mod 2$, $Z$ must vanish on $V(m)$ and therefore $Z$ is completely determined by $X$ and $Y$, whose restrictions to $V(m)$ are non-trivial by Theorem \ref{thm.CS_2.4}. Conjugating by a suitable block diagonal matrix, with each block a scalar matrix, we can arbitrarily scale all blocks in the first superdiagonal. This shows that $V$ is uniquely determined by its socle factors (cf. \cite[Proposition 3.2]{CS}). Furthermore, since $V(m)$ enters $V(a)\otimes V(b)$, we obtain (i). We assume for the remainder of the proof that $m\geq 3$. Consider the $\sl(2)$-homomorphism $\Lambda^2 V(m)\to M_{13}$ given by $u\wedge v\mapsto X(u)Y(v)-X(v)Y(u)=Z[u,v]$. By the above, its image, say $\mathcal J$, is isomorphic to $V(0)$. Set $r=\min\{2m-2,2a\}$ if $a$ is even, and $r=\min\{2m-2,2a-2\}$ if $a$ is odd. 
Suppose first \begin{equation} \label{6j} \left\{\begin{matrix} \frac{m}{2} \; \frac{r}{2} \; \frac{m}{2} \\[.6mm] \frac{a}{2} \;\, \frac{b}{2} \;\, \frac{a}{2} \end{matrix} \right\} \end{equation} is non-zero. Then, according to \cite[Corollary 9.2]{CS}, $V(r)$ enters $\mathcal J$. Therefore $r=0$. Recalling that $m\geq 3$, and taking into account that $V(m)$ enters $V(a)\otimes V(b)$, we obtain $a=0$ with $b=m$ if $a$ is even, and $a=1$ with $b=m \pm 1$ if $a$ is odd. Suppose next (\ref{6j}) is zero. We deal first with the case when $a$ is even. If $r=2a$ then all four triples (\ref{triples}) taken from (\ref{6j}) satisfy the triangle condition and $(\frac{a}{2},\frac{r}{2},\frac{a}{2})$ is a degenerate triangle, so Property \eqref{it.2} yields that (\ref{6j}) is non-zero, a contradiction. Therefore, we must have $r=2m-2$ with $m-1<a$. Set \begin{align*} j_1&=m,& j_2=j_3=\frac{a}{2},\\ j_4&=\frac{b}{2},& j_5=j_6=\frac{m}{2}. \end{align*} From the fact that (\ref{6j}) is zero, it follows from Property \eqref{it.4} that \begin{equation} \label{6j2} \left\{\begin{matrix} j_1\!-\!1 \; j_2 \; j_3 \\[.6mm] \;\; j_4 \;\;\; j_5 \; j_6 \end{matrix} \right\}=0. \end{equation} Moreover, all four triples (\ref{triples}) taken from (\ref{6j2}) satisfy the triangle condition. Furthermore, since $m\leq a$, all four triples (\ref{triples}) taken from $(j_1,j_2,j_3,j_4,j_5,j_6)$ satisfy the triangle condition. Thus, all hypotheses of Proposition \ref{prop.6j_non-zero} are met. We obtain $\left\{\begin{matrix} j_1\!-\!3 \; j_2 \; j_3 \\ \;\;j_4 \;\; \; j_5 \; j_6 \end{matrix} \right\}\ne0. $ Making use of \cite[Corollary 9.2]{CS} and Property \eqref{it.4}, we infer that $V(r-4)$ appears in $\mathcal J$. Thus $r=4$, that is, $m=3$. We now need to find out the possible values of $a$ and $b$. 
The fact that (\ref{6j}) is zero becomes \begin{equation} \label{te0} \left\{\begin{matrix} \frac{4}{2} \; \frac{3}{2} \; \frac{3}{2} \\[.6mm] \frac{b}{2} \; \frac{a}{2} \; \frac{a}{2} \end{matrix} \right\}=\left\{\begin{matrix} \frac{3}{2} \; \frac{4}{2} \; \frac{3}{2} \\[.6mm] \frac{a}{2} \; \frac{b}{2} \; \frac{a}{2} \end{matrix} \right\}=0. \end{equation} Now \begin{equation} \label{te} \left\{\begin{matrix} \frac{4}{2}\!+\!{\scriptstyle 1} \; \frac{3}{2} \; \frac{3}{2} \\[.6mm] \;\;\;\frac{b}{2} \;\;\; \frac{a}{2} \; \frac{a}{2} \end{matrix} \right\} \ne0, \end{equation} since $a\geq 3$ and therefore all four triples (\ref{triples}) taken from (\ref{te}) satisfy the triangle condition and we have the degenerate triangle $(\frac{6}{2},\frac{3}{2},\frac{3}{2})$. On the other hand, \begin{equation} \label{te1} \left\{\begin{matrix} \frac{4}{2}\!+\!{\scriptstyle 2} \; \frac{3}{2} \; \frac{3}{2} \\[.6mm] \;\;\;\frac{b}{2}\;\;\; \frac{a}{2} \; \frac{a}{2} \end{matrix} \right\} =0 \end{equation} as $(\frac{8}{2},\frac{3}{2},\frac{3}{2})$ does not satisfy the triangle condition. It follows from Property \eqref{it.3} applied to $(i_2,i_3,i_4,i_5,i_6)=(\frac{3}{2},\frac{3}{2},\frac{b}{2},\frac{a}{2},\frac{a}{2})$ that $F(3)=0$. The definition of $F(3)$ now gives \begin{equation}\label{eq.ab} a(a+2)=b(b+2)+9, \end{equation} and the only pair $(a,b)$ of non-negative integers satisfying \eqref{eq.ab} is $(a,b)=(4,3)$. The final case, when $a$ is odd, is impossible. Indeed, set \begin{align*} j_1&=\frac{r}{2}+1=\min\{m,a\},& j_2=j_3=\max\left\{\frac{m}{2},\frac{a}{2}\right\},\\ j_4&=\frac{b}{2},& j_5=j_6=\min\left\{\frac{m}{2},\frac{a}{2}\right\}. \end{align*} Then Proposition \ref{prop.6j_non-zero} applies to give $r=4$. As above, this gives $m=4$ or $a=4$, which is impossible since $m$ and $a$ are both odd. 
\end{proof} \section{Construction of faithful uniserial \texorpdfstring{$\g$}{}-modules of length 3}\label{sec.construction} \begin{theorem}\label{dos} In all cases below there is a faithful uniserial $\g$-module of length 3 with socle factors $V(a),V(b),V(a)$. \begin{enumerate}[(i)] \item $m=1$ with $b=a+1$ or $b=a-1$ (in the latter case $a>0$). \item $m\geq 3$, with $(a,b)=(0,m)$ or $(a,b)=(1,m+1)$ or $(a,b)=(1,m-1)$ or $(m,a,b)=(3,4,3)$. \end{enumerate} \end{theorem} \begin{proof} We will give an explicit faithful uniserial representation $R:\g\to\gl(d)$, $d=2a+b+3$, in every case listed above. In each case, $$ R(s+v+\lambda z)=\left( \begin{array}{ccc} R_a(s) & X(v) & Z(\lambda z) \\ 0 & R_b(s) & Y(v) \\ 0 & 0 & R_a(s) \\ \end{array} \right),\quad s\in\sl(2),\; v\in V(m),\; \lambda\in\C. $$ Here $X:V(m)\to\Hom(V(b),V(a))$ and $Y:V(m)\to\Hom(V(a),V(b))$ are $\sl(2)$-homomorphisms given in matrix form relative to standard bases of $V(a)$ and $V(b)$, and $R_a$ and $R_b$ are as given in \S\ref{lieg}. Moreover, $Z:\z\to\gl(V(a))$ is an $\sl(2)$-homomorphism given by scalar matrices. It is straightforward to see (cf. \cite[\S 3]{CS}) that such $R$ is indeed a Lie homomorphism (and hence a faithful uniserial representation by Theorem \ref{thm.CS_2.4}) provided $Z\ne0$ and \begin{equation} \label{funca} X(v_i)Y(v_j)-X(v_j)Y(v_i)=Z([v_i,v_j]),\quad 0\leq i,j\leq m. \end{equation} We leave it to the reader to verify (\ref{funca}) in each case, recalling from \S\ref{lieg} that $$ [v_i,v_j]=0\text{ if }i+j\neq m,\text{ while } [v_i,v_{m-i}]=(-1)^i\binom{m}{i}z. $$ Let $A'$ stand for the transpose of a matrix $A$ and set $v=\underset{0\leq i\leq m}\sum a_iv_i\in V(m)$, $a_i\in\C$. \medskip \begin{enumerate}[(1)] \item\label{it.0m} $m\geq 1$ and $V$ has socle factors $V(0),V(m),V(0)$. \medskip \noindent $Z(z)=(2)$. \medskip \noindent \[ X(v)=\begin{pmatrix} -\binom{m}{m}a_{m} & \binom{m}{m-1}a_{m-1} & \cdots & \binom{m}{2}a_{2} & -\binom{m}{1}a_{1} & \binom{m}{0}a_{0} \end{pmatrix}.
\] \medskip \noindent \[ Y(v)= \begin{pmatrix} a_{0} & a_{1} & \cdots & a_{m-1} & a_{m} \end{pmatrix}'. \] \item\label{it.1m} $m\geq 1$ and $V$ has socle factors $V(1),V(m+1),V(1)$. \medskip \noindent $Z(z)=\begin{pmatrix} m+2 & 0 \\ 0 & m+2 \end{pmatrix} $. \medskip \noindent \[ X(v)=\begin{pmatrix} -\binom{m}{m}a_{m} & \binom{m}{m-1}a_{m-1} & \cdots & \binom{m}{2}a_{2} & -\binom{m}{1}a_{1} & \binom{m}{0}a_{0} & 0 \\[2mm] 0 & -\binom{m}{m}a_{m} & \binom{m}{m-1}a_{m-1} & \cdots & \binom{m}{2}a_{2} & -\binom{m}{1}a_{1} & \binom{m}{0}a_{0} \end{pmatrix}. \] \medskip \noindent \[ Y(v)=\begin{pmatrix} (m+1) a_{0} & ma_{1} & \cdots & 2a_{m-1} & a_{m} & 0 \\[2mm] 0 & a_{0} & 2a_{1} & \cdots & ma_{m-1} & (m+1)a_{m} \end{pmatrix}'. \] \item\label{it.1m-1} $m\geq 1$ and $V$ has socle factors $V(1),V(m-1),V(1)$. \medskip \noindent $Z(z)=I_2$. \medskip \noindent \[ X(v)=\begin{pmatrix} \binom{m-1}{m-1}a_{m-1} & -\binom{m-1}{m-2}a_{m-2} & \cdots & \binom{m-1}{2}a_{2} & -\binom{m-1}{1}a_{1} & \binom{m-1}{0}a_{0} \\[2mm] \binom{m-1}{m-1}a_{m} & -\binom{m-1}{m-2}a_{m-1} & \binom{m-1}{m-3}a_{m-2}& \cdots & -\binom{m-1}{1}a_{2} & \binom{m-1}{0}a_{1} \end{pmatrix}. \] \medskip \noindent \[ Y(v)=\begin{pmatrix} a_{1} & a_{2} & \cdots & a_{m-1} & a_{m} \\[2mm] -a_{0} & -a_{1} & -a_{2} & \cdots & -a_{m-1} \end{pmatrix}'. \] \item\label{it.a} $m=1$ and $V$ has socle factors $V(a)$, $V(a+1)$, $V(a)$. Let $I_k$ be the identity matrix of size $k$, let $0_k$ be the zero column matrix with $k$ rows. Let $J^+_k,J^-_k$ be the diagonal matrices of size $k$ given by \[ J_k^+= \begin{pmatrix} k & 0 & \cdots & 0 \\ 0 & k-1 & \cdots & 0 \\[-1mm] 0 & 0 & \ddots & 0 \\ 0 & 0 & \cdots & 1 \end{pmatrix},\qquad J_k^-= \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 2 & \cdots & 0 \\[-1mm] 0 & 0 & \ddots & 0 \\ 0 & 0 & \cdots & k \end{pmatrix}. \] \medskip \noindent $Z(z)=(a+2)I_{a+1}$. 
\medskip \noindent \[ X(v_0)=\begin{pmatrix} 0_{a+1} & I_{a+1} \end{pmatrix}\quad\text{ and }\quad X(v_1)=\begin{pmatrix} -I_{a+1} & 0_{a+1} \end{pmatrix}. \] \medskip \noindent \[ Y(v_0)= \begin{pmatrix} J^+_{a+1} \\[3mm] 0_{a+1}' \end{pmatrix}\quad\text{ and }\quad Y(v_1)=\begin{pmatrix} 0_{a+1}' \\[3mm] J^-_{a+1} \end{pmatrix}. \] \item\label{it.a1} $m=1$ and $V$ has socle factors $V(a+1)$, $V(a)$, $V(a+1)$. \medskip \noindent $Z(z)=-(a+1)I_{a+2}$. \medskip \noindent \[ X(v_0)=\begin{pmatrix} J^+_{a+1} \\[3mm] 0_{a+1}' \end{pmatrix}\quad\text{ and }\quad X(v_1)=\begin{pmatrix} 0_{a+1}' \\[3mm] J^-_{a+1} \end{pmatrix}. \] \medskip \noindent \[ Y(v_0)=\begin{pmatrix} 0_{a+1} & I_{a+1} \end{pmatrix}\quad\text{ and }\quad Y(v_1)=\begin{pmatrix} -I_{a+1} & 0_{a+1} \end{pmatrix}. \] \item\label{it.434} $m=3$ and $V$ has socle factors $V(4)$, $V(3)$, $V(4)$. \medskip \noindent $Z(z)=6I_{5}$. \medskip \noindent The matrices $X(v_0),X(v_1),X(v_2),X(v_3)$, for the standard basis vectors $v_0,v_1,v_2,v_3$ of $V(m)$, are respectively: {\footnotesize \[ \begin{pmatrix} 0&6&0&0\\ 0&0&3&0\\ 0&0&0&1\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}, \quad \begin{pmatrix} -6&0&0&0\\ 0&0&0&0\\ 0&0&3&0\\ 0&0&0&3\\ 0&0&0&0\end{pmatrix}, \quad \begin{pmatrix} 0&0&0&0\\ -3&0&0&0\\ 0&-3&0&0\\ 0&0&0&0\\ 0&0&0&6\end{pmatrix}, \quad \begin{pmatrix} 0&0&0&0\\ 0&0&0&0\\ -1&0&0&0\\ 0&-3&0&0\\ 0&0&-6&0\end{pmatrix}. \]} \medskip \noindent The matrices $Y(v_0),Y(v_1),Y(v_2),Y(v_3)$ are respectively: {\footnotesize \[ \begin{pmatrix} 0&0&3&0&0\\ 0&0&0&2&0\\ 0&0&0&0&1\\ 0&0&0&0&0\end{pmatrix}, \quad \begin{pmatrix} 0&-6&0&0&0\\ 0&0&-3&0&0\\ 0&0&0&0&0\\ 0&0&0&0&3\end{pmatrix}, \quad \begin{pmatrix} 3&0&0&0&0\\ 0&0&0&0&0\\ 0&0&-3&0&0\\ 0&0&0&-6&0\end{pmatrix}, \quad \begin{pmatrix} 0&0&0&0&0\\ 1&0&0&0&0\\ 0&2&0&0&0\\ 0&0&3&0&0\end{pmatrix}.
\]} \end{enumerate} \end{proof} \section{Non-existence of faithful uniserial \texorpdfstring{$\g$}{}-modules of length \texorpdfstring{$\geq 4$}{}} \begin{prop}\label{no4} There are no faithful uniserial $\g$-modules of length 4. \end{prop} \begin{proof} Suppose, if possible, that $V$ is a faithful uniserial $\g$-module of length 4, with socle factors $V(a),V(b),V(c),V(d)$. Then $V$ has a uniserial submodule $W_1$ with socle factors $V(a),V(b),V(c)$ and a uniserial quotient $W_2=V/V(a)$ with socle factors $V(b),V(c),V(d)$. Four cases arise, depending on whether $W_1,W_2$ are faithful or not. The case when $W_2$ is faithful, but $W_1$ is not, follows by duality (see \cite[Lemma 2.6]{CS}) from the case when $W_1$ is faithful but $W_2$ is not. \medskip \noindent{\sc Case 1.} $W_1$ is faithful and $m\geq 3$. By Theorem \ref{uno}, $(a,b,c)$ must be one of $(0,m,0),(1,m+1,1),(1,m-1,1),(4,3,4)$, where in the latter case $m=3$. If $W_2$ is faithful, a second application of Theorem \ref{uno}, this time to $(b,c,d)$, leaves no possible value for $b$. If $W_2$ is not faithful we appeal to the classification of uniserial $\sl(2)\ltimes V(m)$-modules of length 3 given in Theorem \ref{thm.CS_Classification}. It forces $c=m$ to be in $\{0,1,4\}$, or $(b,c,d)$ to be an arithmetic progression of step $\pm m$, which is impossible. \medskip \noindent{\sc Case 2.} $W_1$ is faithful and $m=1$. Suppose first $W_2$ is also faithful. From Theorem \ref{uno} we deduce that $(a,b,c,d)=(a,a+1,a,a+1)$ or $(a,b,c,d)=(b+1,b,b+1,b)$. Let us deal with the case $(a,b,c,d)=(a,a+1,a,a+1)$ first. Consider a basis $B=B_1\cup B_2\cup B_3\cup B_4$ of $V$ adapted to the composition (socle) series.
We choose $B_i$ so that the matrix representation $R:\g\to\gl(d)$, $d=4a+6$, relative to $B$ has the form \begin{equation} \label{rept} R(s+h)=\left( \begin{array}{cccc} R_a(s) & A(h) & D(h) & F(h) \\ 0 & R_{a+1}(s) & B(h) & E(h) \\ 0 & 0 & R_a(s) & C(h) \\ 0 & 0 & 0 & R_{a+1}(s) \\ \end{array} \right),\quad s\in\sl(2),\;h\in\h_n. \end{equation} Here $A,C,F:\h_n\to\Hom(V(a+1),V(a))$, $B:\h_n\to\Hom(V(a),V(a+1))$, $D:\h_n\to\Hom(V(a),V(a))$ and $E:\h_n\to\Hom(V(a+1),V(a+1))$ are $\sl(2)$-homomorphisms given in matrix form and are unique up to scaling. Therefore, since both $W_1$ and $W_2$ are faithful and uniserial (and $B$ is part of the definition of both modules), it follows from Theorem \ref{uno} that $A,B,C,D,E$ are exactly as given in \S\ref{sec.construction}. In particular $D(z)=(a+2)I_{a+1}$ and $E(z)=-(a+1)I_{a+2}$. Now $R([v_1,z])=0$, whereas block (1,4) of $[R(v_1),R(z)]$ is $-(2a+3) \begin{pmatrix} 0_{a+1} & I_{a+1} \end{pmatrix},$ a contradiction. This shows that this case is impossible for all $a$. It is easy to see that the case $(a,b,c,d)=(b+1,b,b+1,b)$ is dual to the previous one (see also \cite[Lemma 2.6]{CS}) and is therefore impossible. Suppose next $W_2$ is not faithful. Appealing to Theorem \ref{thm.CS_Classification} we deduce that $(a,b,c,d)=(a,a+1,a,a-1)$ or $(a,b,c,d)=(b+1,b,b+1,b+2)$. Arguing as above we find that block (1,4) of $[R(v_1),R(z)]$ is $-(a+2)\begin{pmatrix} J^+_{a} \\[3mm] 0_{a}' \end{pmatrix}$ in the first case and $(b+1)\begin{pmatrix} 0_{b+2} & I_{b+2} \end{pmatrix}$ in the second one. Both cases are impossible. \medskip \noindent{\sc Case 3.} Neither $W_1$ nor $W_2$ is faithful. Since $V$ is faithful, then $V(0)$ must enter $\Hom(V(d),V(a))$, whence $d=a$. We appeal, again, to the classification of uniserial $\sl(2)\ltimes V(m)$-modules of length 3 given in Theorem \ref{thm.CS_Classification}. 
Since $d=a$, we see that $(a,b,c)$ cannot form an arithmetic progression of step $\pm m$, which forces $(a,b,c)$ to be $(0,m,c)$ or $(a,m,0)$. In the latter case $(b,c,d)=(m,0,a)$, against Theorem \ref{thm.CS_Classification}. In the former, $(b,c,d)=(m,m,0)$ by Theorem \ref{thm.CS_Classification}. But this is only possible when $m\equiv 2m\mod 4$, which is not the case. \end{proof} \begin{theorem}\label{no5} There are no faithful uniserial $\g$-modules of length $\ell >3$. \end{theorem} \begin{proof} By induction on $\ell$. The base case $\ell=4$ is proven in Proposition \ref{no4}. Suppose $V$ is a uniserial $\g$-module of length $\ell>4$ and there are no faithful uniserial $\g$-modules of length $\ell-1$. Let $V(a_1),\dots,V(a_\ell)$ be the socle factors of $V$. Then $V$ has a submodule $W_1$ with socle factors $V(a_1),\dots,V(a_{\ell-1})$ and a quotient module $V/V(a_1)$ with socle factors $V(a_2),\dots,V(a_\ell)$. By inductive hypothesis, these uniserial $\g$-modules are not faithful. Therefore, they are uniserial $\sl(2)\ltimes V(m)$-modules. The classification of uniserial $\sl(2)\ltimes V(m)$-modules of length $\geq 4$ given in Theorem \ref{thm.CS_Classification} forces $a_1,\dots,a_\ell$ to be an arithmetic progression of step $\pm m$. Thus, $V(0)$ does not enter $\Hom(V(a_j),V(a_i))$ for any $i<j$, so $\z$ acts trivially on $V$. \end{proof}
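As a sanity check on the construction of \S\ref{sec.construction}, the commutation constraint \eqref{funca} can be verified mechanically for case (1) there (socle factors $V(0),V(m),V(0)$, $m$ odd). The following is a minimal numerical sketch, with our own helper names and the matrices transcribed from that section:

```python
from math import comb

def case1_maps(m):
    # X(v_j): 1x(m+1) row with the single entry (-1)^j C(m,j) in column m-j
    # (matching the printed row (-C(m,m)a_m, C(m,m-1)a_{m-1}, ..., C(m,0)a_0));
    # Y(v_j): (m+1)x1 column e_j; Z(z) = (2)
    def X(j):
        row = [0] * (m + 1)
        row[m - j] = (-1) ** j * comb(m, j)
        return row
    def Y(j):
        col = [0] * (m + 1)
        col[j] = 1
        return col
    return X, Y, 2  # the 2 is the 1x1 scalar block Z(z)

def bracket_relation_holds(m):
    # check X(v_i)Y(v_j) - X(v_j)Y(v_i) = Z([v_i, v_j]) for all i, j,
    # with [v_i, v_{m-i}] = (-1)^i C(m, i) z and all other brackets zero
    X, Y, Zz = case1_maps(m)
    for i in range(m + 1):
        for j in range(m + 1):
            lhs = sum(x * y for x, y in zip(X(i), Y(j)))
            lhs -= sum(x * y for x, y in zip(X(j), Y(i)))
            rhs = (-1) ** i * comb(m, i) * Zz if i + j == m else 0
            if lhs != rhs:
                return False
    return True
```

For odd $m$ the relation holds identically; for even $m$ the bracket $[v_i,v_{m-i}]=(-1)^i\binom{m}{i}z$ would not be skew-symmetric, consistent with $m$ being odd throughout.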
\section{Introduction} There has been increasing evidence in the literature that radio-quiet Active Galactic Nuclei (AGN) are radio emitters at some level \citep[e.g.][]{HU01, Nagar2000}. Several works \citep[e.g.][]{HU01, Nagar2000, GirolettiPanessa2009,PG13,Baldi2018} have been dedicated to understanding the physical origin of radio emission in Low-Luminosity AGN (LLAGN), a category comprising low-luminosity Seyferts, LINERs and transition nuclei \citep{Nagar2000}, and defined as AGN with $L_{H\alpha}\le$10$^{40}$\,ergs\,s$^{-1}$ by \citet{HoFilippenkoSargent1997a}. Although differences between the two classes can be found in the X-ray and radio bands, they share some properties, e.g. the absence of a Big Blue Bump in the Optical/UV, which clearly make them distinguishable from the classical Seyferts and Quasars \citep{Ho2008}. The faintness of their radio emission \citep[radio luminosities between 10$^{32}$ and 10$^{40}$\,ergs\,s$^{-1}$,][]{Baldi2018} requires high-sensitivity observations to carry out a robust study. LLAGN often show flat-spectrum radio cores, usually explained in terms of free-free emission/absorption or Advection-Dominated Accretion Flow \citep[ADAF,][]{NarayanYi1994}, synchrotron self-absorbed emission from a scaled-down version of a more powerful AGN jet \citep[e.g.][]{Falcke1999,Nagar2000}, possibly coupled with an ADAF \citep{FalckeMarkoff2000}, or a standard thin accretion disk \citep{Ghisellini2004}. A flat spectrum is also observed from Star-Forming (SF) regions, accompanied by diffuse and low surface brightness emission \citep{OrientiPrieto2010}. Synchrotron emission from shocked regions in poorly collimated outflows has been claimed as the origin of radio emission in quasars \citep{Zakamska2014} and, in a few cases, the outflow structure has been mapped at high resolution in the local Universe \citep{Kharb2006}.
Observations with very-long baseline interferometry (VLBI) have established the occurrence of compact cores in a fraction of LLAGN \citep[e.g.][]{AndersonUlvestad2005,WrobelHo2006} down to milli-arcsecond (mas) scales, with high brightness temperatures ($T_B\ge$10$^{8}$\,K), and extended features resembling jet-like outflows \citep[e.g.][]{Falcke2000,Bontempi+2012}. High brightness temperatures coupled with flat-spectrum radio cores at mas scales have been interpreted as signatures of non-thermal synchrotron self-absorbed (SSA) emission originating from the base of a jet. However, some high-resolution observations led to different conclusions, as in the cases of NGC~1068 and NGC~4477, in which the nuclear radio emission has been attributed to thermal free-free emission from an X-ray heated corona \citep{Gallimore2004,Bontempi+2012}. In the last two decades, a strong effort has been devoted to the understanding of the origin of the radio emission (which typically traces the ejected material) and its relation to the X-ray emission (which traces the accreting disc-corona system) in black holes at different luminosity levels, leading to the formulation of the 'Fundamental Plane' of black-hole activity \citep{Merloni2003,FalckeKordingMarkoff2004}, which suggests a disc-jet coupling from low- to high-mass black holes. Moreover, \citet{Panessa2007} found a correlation between the nuclear $2-10$\,keV X-ray luminosity and the radio luminosity (at 2, 6 and 20\,cm) for a number of local Seyfert galaxies, again suggesting a strong physical coupling. These relations are interesting in light of a possible AGN - X-ray binaries (XRB) analogy. According to this analogy, LLAGN, similarly to 'hard-state' XRBs, would follow an inefficient accretion track \citep[e.g.][]{FalckeKordingMarkoff2004}.
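For orientation, the 'Fundamental Plane' is a linear relation in logarithmic space between the 5-GHz core radio luminosity, the $2-10$\,keV X-ray luminosity and the black-hole mass. A minimal sketch, using the best-fit coefficients quoted by \citet{Merloni2003} (the numerical values are reproduced from memory of that work and should be treated as indicative):

```python
def fundamental_plane_log_lr(log_lx, log_mbh,
                             xi_x=0.60, xi_m=0.78, const=7.33):
    # predicted log10 of the 5 GHz core radio luminosity (erg/s) from the
    # 2-10 keV luminosity (erg/s) and the black-hole mass (solar masses),
    # log L_R = xi_x log L_X + xi_m log M_BH + const
    return xi_x * log_lx + xi_m * log_mbh + const
```

For example, a nucleus with $\log L_X=40$ and $\log M_{\rm BH}=7$ would sit at $\log L_R\simeq36.8$ on this relation.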
This analogy has been suggested by \citet{GuCao2009} for a sample of LINERs and local Seyferts and, more recently, by \citet{PG13}, who found, for a sample of local Seyfert galaxies, a slope of the X-ray - radio luminosity relation comparable with the $\sim\,$0.7 found by \citet{Gallo2003} in low/hard state XRBs. In spite of the hints provided by the above-mentioned efforts, however, a conclusive picture has not yet emerged. Moreover, a number of LLAGN have remained undetected \citep[e.g.][]{HU01}, posing intriguing questions about the nature of their nuclei. In this work we present multi-frequency NSF's Karl G. Jansky Very Large Array (VLA) observations for ten Seyfert galaxies belonging to the complete, distance-limited sample of \citet{Cappietal06}, refined by \citet{PG13}, for which radio nuclei were not detected in previous radio studies. For these ten sources, we take advantage of the improved sensitivity of the upgraded VLA, reaching flux limits as low as $\sim$27 ${\rm \mu}$Jy\,beam$^{-1}$ (3$\,\sigma$), and characterise their radio spectra and the morphology of their radio emission. Having completed the radio information for the whole sample of \citet{Cappietal06}, we provide detection rates and investigate the correlation between the X-ray and radio luminosity down to very low regimes. The paper is organised as follows. In Section 2, we introduce the sample used for our analysis; in Section 3, we describe the observations and the data reduction procedure; in Section 4, we show the results obtained in the data analysis; in Section 5, we discuss the results and in Section 6 we summarise our findings and considerations. Throughout the paper, we use a flat $\Lambda-$CDM cosmological model with $\Omega_{\rm{M}}$=0.3, $\Omega_\Lambda=$0.7 and a Hubble constant of 70 km s$^{-1}$ Mpc$^{-1}$ \citep{Bennett2003}. \section{The Sample} \begin{table*} \caption{The targets used in this study.
\textit{Columns:} (1) Target name; (2) $\&$ (3) J2000 optical positions from \citet{HU01}; (4) optical classification, S1 are Seyfert 1 sources, S2 are Seyfert 2 sources and T are transition objects (The ':' indicates an uncertain classification); (5) distance in Mpc. Columns 4 and 5 were obtained from \citet{Panessa2006}; the positions of NGC~4639 were obtained from \citet{Hakobyan+12}.} \begin{tabular}{ccccc} \hline \hline \multicolumn{1}{c}{Name} & RA & Dec & Seyfert Type & Distance \\ & (J2000) & (J2000) & & (Mpc) \\ (1) & (2) & (3) & (4) & (5) \\ \hline NGC~676 & 01 48 57.38 $\pm$ 0.11 & +05 54 25.7 $\pm$ 1.7 & S2: & 19.5 \\ NGC~1058 & 02 43 30.24 $\pm$ 0.22 & +37 20 27.2 $\pm$ 2.6 & S2 & 9.1 \\ NGC~2685 & 08 55 34.79 $\pm$ 0.39 & +58 44 01.6 $\pm$ 3.1 & S2/T2: & 16.2 \\ NGC~3185 & 10 17 38.66 $\pm$ 0.12 & +21 41 17.2 $\pm$ 1.7 & S2: & 21.3 \\ NGC~3486 & 11 00 24.10 $\pm$ 0.20 & +28 58 31.6 $\pm$ 2.7 & S2 & 7.4 \\ NGC~3941 & 11 52 55.42 $\pm$ 0.21 & +36 59 10.5 $\pm$ 2.5 & S2: & 12.2 \\ NGC~4477 & 12 30 02.22 $\pm$ 0.18 & +13 38 11.3 $\pm$ 2.6 & S2 & 16.8 \\ NGC~4639 & 12 42 52.36 $\pm$ 0.03 & +13 15 26.5 $\pm$ 0.1 & S1 & 22.9 \\ NGC~4698 & 12 48 22.98 $\pm$ 0.16 & +08 29 14.8 $\pm$ 2.4 & S2 & 16.8 \\ NGC~4725 & 12 50 26.69 $\pm$ 0.16 & +25 30 02.3 $\pm$ 2.2 & S2: & 13.2 \\ \hline \end{tabular} \label{tab:targets} \end{table*} In this work we present results of VLA A-configuration observations for a group of ten Seyferts ('Reference sample', Table \ref{tab:targets}), nine Type~2 and one Type~1, the faintest members of the sample of 28 Seyfert galaxies in \citet{Cappietal06} ('Parent sample'). Indeed, the sources in the Reference Sample are characterised by an average Eddington ratio $\langle\log\,L_{2-10\,\rm{keV}}/L_{\rm{Edd}}\,\rangle\,\sim\,$-5.6, nearly an order of magnitude lower than the average Eddington ratio for the Parent sample ($\,\sim\,-4.7$). 
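The X-ray Eddington ratios quoted above follow from $L_{\rm Edd}=1.26\times10^{38}\,(M_{\rm BH}/M_\odot)$\,ergs\,s$^{-1}$. A minimal sketch (our own helper, not code from the paper; no bolometric correction is applied, matching the $L_{2-10\,\rm keV}/L_{\rm Edd}$ ratios used here):

```python
from math import log10

def log_x_eddington_ratio(log_lx_2_10, log_mbh_msun):
    # log10(L_X / L_Edd), with L_Edd = 1.26e38 (M_BH / M_sun) erg/s;
    # inputs are log10 of the 2-10 keV luminosity (erg/s) and of the
    # black-hole mass in solar masses
    return log_lx_2_10 - (log10(1.26e38) + log_mbh_msun)
```

For instance, $\log L_X=38.5$ with $\log M_{\rm BH}=7$ gives $\log L_X/L_{\rm Edd}\simeq-6.6$, in the regime of the faint nuclei discussed above.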
However, the original sample initially comprised only 27 sources: NGC~3982 was excluded because of the lack of XMM-\textit{Newton} observations. \citet{PG13} updated the sample to 28 galaxies including NGC~3982. This sample, which we call the 'Parent sample', is complete and distance limited (D$\le$23$\,$Mpc); its sources have been chosen as the nearest ones in the optically selected sample of 52 Seyfert galaxies given in \citet{HoFilippenkoSargent1997a}, that was extracted from the Palomar optical spectroscopic survey of nearby galaxies \citep{FilippenkoSargent1985,HoFilippenkoSargent1995}, comprising 486 northern ($\delta$>$0\,^\circ$) galaxies provided with homogeneous spectral classification \citep{HoFilippenkoSargent1997a}. This sample is complete to $B_T=$12.0\,mag and 80 per cent complete to $B_T=$12.5\,mag \citep{Sandage1979}, and it also has several other advantages, concerning selection biases and the wide range of radio luminosities covered; see \citet{HU01} for a deeper discussion. Recently, \citet{Baldi2018}, in their LeMMINGs 1.5-GHz parsec-scale survey, have revised the optical classification of \citet{HoFilippenkoSargent1997} using the spectroscopic diagnostic diagrams based on criteria by \citet{Kewley2006} and \citet{Buttiglione2010}. The basic difference in this classification is in the transition from Seyferts to LINERs, as low [O~III]/H$\beta$ Seyferts are classified as LINERs, and the use of a more stable index, which is the average of low-ionisation line ratios. Moreover, they removed the 'Transition Galaxies' class \citep[for the details see][]{Baldi2018}. We reconsidered the Parent and Reference sample optical classification in the light of the revised classification of \citet{Baldi2018}, finding that approximately 40 per cent of sources in the Parent sample (11/28) can be classified as LINERs, and the same percentage holds if we select only the Reference sample (4/10), i.e., NGC~3941, NGC~1058, NGC~4477 and NGC~4639.
Note, however, that these objects populate a region very close to the Seyfert/LINER boundaries. Considering the related uncertainties, we prefer to keep the Seyfert classification, in order to be consistent with previous works of our group and the literature \citep[e.g.][]{PG13}. The Palomar sample has been the subject of extensive observational campaigns, through the VLA \citep[e.g.][]{Nagar2000,Nagar2005,HU01}, VLBI \citep[e.g.][]{Falcke2000,Nagar2005,PG13} and, more recently, the LeMMINGs 1.5-GHz parsec-scale survey \citep[][D. R. A. Williams' thesis]{Baldi2018} and the 15-GHz survey performed by \citet{Saikia2018}. In particular, \citet{HU01} conducted a radio continuum survey at 6 and 20\,cm with the VLA (in configuration B and A, respectively) at $\sim$1\,arcsec resolution for the sample of 52 Seyfert galaxies in \citet{HoFilippenkoSargent1997a}. They detected radio emission in 85 per cent of cases at 6\,cm and in 71 per cent at 20\,cm for the whole sample, but when considering the subsample of 28 sources in the Parent sample, the VLA detection rates are 82 and 64 per cent at 6 and 20\,cm, respectively \citep{PG13}. The morphology is predominantly compact on arcsecond scales, either unresolved or slightly resolved, and the nuclear spectral indices range from steep to flat/inverted. Among the sources of the Reference sample listed in Table \ref{tab:targets}, five, namely NGC~0676, NGC~1058, NGC~2685, NGC~3486 and NGC~4725, have not been detected by \citet{HU01} at the 3$\sigma$ sensitivity threshold of $\sim$0.12$-$0.14\,${\rm m}$Jy\,beam$^{-1}$ at either 20 or 6\,cm. The remaining sources have not been detected at 20\,cm; at 6\,cm they are considered marginal detections, with peak intensities of $\sim$3.5$-$6$\,\sigma$, and their morphology is classified as ambiguous.
At VLBI milli-arcsecond scales, \citet{GirolettiPanessa2009} analysed five sources among those in the Parent sample, NGC~4051, NGC~4388, NGC~4501, NGC~5033 and NGC~5273, detecting radio emission at the $\sim$100\,${\rm \mu}$Jy\,beam$^{-1}$ level, except for NGC~5273, with physical parameters consistent with different underlying physical processes. \citet{Bontempi+2012} performed a dual-frequency, 1.7 and 5\,GHz, analysis with the European VLBI Network (EVN) of the faintest Seyfert nuclei in the Parent sample. They did not detect radio emission at the 3$\sigma$ sensitivity level of $\sim$20$-$120\,${\rm \mu}$Jy\,beam$^{-1}$ for the four sources NGC~3185, NGC~3941, NGC~4639 and NGC~4698, which had been identified by \citet{HU01} as marginal detections. Only NGC~4477 has been detected at 5 GHz, although with a low significance, exhibiting a flux density compatible with that quoted by \citet{HU01}, and physical parameters which would rule out SSA as the emission mechanism in favour of thermal free-free emission. Moreover, \citet{PG13} conducted a census of physical properties of these 28 galaxies using published VLA data (at 20 and 6\,cm, mostly from \citealt{HU01}) and new and published VLBI data (at 20 and 6\,cm) for 23/28 galaxies. They found that the brightness temperature as derived from VLBI observations is higher, on average, in type 1 than in type 2 objects, and this evidence has been interpreted as a hint of free-free domination in type 2 nuclei. They also found a significant radio - X-ray luminosity relation at VLA scales, but no correlation at VLBI scales.
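For reference, the brightness temperatures discussed above follow, for an elliptical Gaussian component in the Rayleigh-Jeans regime, from $T_B\simeq1.22\times10^{9}\,S_{\rm mJy}/(\nu_{\rm GHz}^2\,\theta_{\rm maj}\theta_{\rm min})$\,K with the angular sizes in mas. A minimal sketch (our own helper, not code used in the papers cited):

```python
def brightness_temperature_k(s_mjy, nu_ghz, theta_maj_mas, theta_min_mas):
    # T_B [K] of an elliptical Gaussian component: flux density in mJy,
    # observing frequency in GHz, FWHM axes in mas (Rayleigh-Jeans regime)
    return 1.22e9 * s_mjy / (nu_ghz ** 2 * theta_maj_mas * theta_min_mas)
```

A 1\,mJy component unresolved at the 1\,mas level at 5\,GHz thus has $T_B\sim5\times10^{7}$\,K, well above typical thermal values and in the regime where non-thermal interpretations become viable.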
In the present work we present new VLA A-configuration observations for the ten faintest members of the \citet{Cappietal06} sample (the 'Reference sample') and, combining them with previous data available for the remaining sources in the \citet{Cappietal06} sample, we establish, for the first time, radio detection rates for a complete, distance-limited sample of nearby Seyfert galaxies with homogeneous observations. \section{Observations and Data Reduction} \label{sec:procedure} \begin{center} \centering \begin{table}\footnotesize \centering \caption{List of calibrators (flux and phase) per observation group. \textit{Columns:} (1) Target name; (2) Observation date; (3) Absolute flux density scale calibrator; (4) Phase calibrator} \begin{tabular}{cccc} \hline & & \multicolumn{2}{c}{Calibrators} \\ \cline{3-4} \\ Target & Obs Date & Flux & Phase \\ & (dd/mm/yy) & & \\ (1) & (2) & (3) & (4) \\ \hline \hline NGC~0676 & 09/11/12 & 3C48 & J0149+0555 \\ NGC~1058 & 09/11/12 & 3C48 & J0230+4032 \\ \hline NGC~2685 & 05/01/13 & 3C286 & J0854+5757 \\ NGC~3185 & 05/01/13 & 3C286 & J1014+2301 \\ \hline NGC~3486 & 30/12/12 & 3C286 & J1102+2757 \\ NGC~3941 & 30/12/12 & 3C286 & J1146+3958 \\ \hline NGC~4477 & 17/12/12 & 3C286 & J1254+1141 \\ NGC~4639 & 17/12/12 & 3C286 & J1254+1141 \\ \hline NGC~4698 & 17/12/12 & 3C286 & J1239+0730 \\ NGC~4725 & 17/12/12 & 3C286 & J1221+2813 \\ \hline \end{tabular} \label{tab:radObs} \end{table} \end{center} \begin{table*} \caption{Total observing time on each target per frequency band L (1.5 GHz), C (5.5 GHz), X (9 GHz) and Ku (14 GHz), and the expected (theoretical) sensitivity.} \centering \begin{tabular}{ccccccccc} \hline & \multicolumn{2}{c}{L-band} & \multicolumn{2}{c}{C-band} & \multicolumn{2}{c}{X-band} & \multicolumn{2}{c}{Ku-band} \\ Target & Time & $\sigma_{\mathrm{th}}$ & Time & $\sigma_{\mathrm{th}}$& Time & $\sigma_{\mathrm{th}}$& Time & $\sigma_{\mathrm{th}}$ \\ & (min) & (${\rm \mu}$Jy\,beam$^{-1}$) & (min) & (${\rm \mu}$Jy\,beam$^{-1}$)
& (min) & (${\rm \mu}$Jy\,beam$^{-1}$) & (min) & (${\rm \mu}$Jy\,beam$^{-1}$)\\ \hline \hline NGC~0676 & 6 & 25 & 7 & 9 & 8 & 9 & 8 & 11\\ NGC~1058 & 6 & 25 & 7 & 9 & 8 & 9 & 8 & 11\\ NGC~2685 & 6 & 25 & 6 & 10 & 8 & 9 & 8 & 11\\ NGC~3185 & 6 & 25 & 6 & 10 & 8 & 9 & 8 & 11\\ NGC~3486 & 6 & 25 & 7 & 9 & 8 & 9 & 8 & 11\\ NGC~3941 & 8 & 20 & 8 & 9 & 8 & 9 & 8 & 11\\ NGC~4477 & 8 & 20 & 8 & 9 & 8 & 9 & 10 & 10\\ NGC~4639 & 8 & 20 & 8 & 9 & 8 & 9 & 10 & 10\\ NGC~4698 & 6 & 25 & 7 & 9 & 8 & 9 & 8 & 11\\ NGC~4725 & 6 & 25 & 7 & 9 & 8 & 9 & 8 & 11\\ \hline \hline \end{tabular} \label{table:rmsTable} \end{table*} A total of 10 hours of observations, divided over four days in November and December 2012 and January 2013, were obtained with the VLA in A array configuration. The ten targets of the Reference sample were observed in five groups, with 2 hours dedicated to each group. Table~\ref{tab:radObs} lists the observations, sub-divided by group (based upon observing date). The observations were conducted in the L (1.5 GHz), C (5.5 GHz), X (9 GHz) and Ku (14 GHz) bands; each source has been observed for a total of 32 minutes, while the flux density calibrator has been observed for a total of 18 minutes per group. The scheduling blocks have been organised as follows. Within a single observing block, the observations at the different frequencies alternated between the two sources, in order to optimise the \textit{(u,v)}-plane coverage. Therefore, the exposure time of a source at a given frequency has been split into several scans, each one bracketed by observations of the phase calibrator, which is thus observed every 2$-$3 min depending on the scheduling block. Table \ref{table:rmsTable} lists, for each source and for each frequency, the Time-On-Source (TOS) together with the expected theoretical sensitivity.
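The theoretical sensitivities in Table \ref{table:rmsTable} follow from the standard radiometer equation for an array of identical antennas. A minimal sketch (the SEFD value below is an indicative assumption, not a number taken from this paper):

```python
from math import sqrt

def point_source_rms_jy(sefd_jy, n_ant, bandwidth_hz, t_int_s,
                        n_pol=2, eta_c=1.0):
    # expected image rms [Jy/beam] for an array of n_ant identical
    # antennas of system-equivalent flux density sefd_jy, dual
    # polarisation by default, correlator efficiency eta_c
    return sefd_jy / (eta_c * sqrt(n_pol * n_ant * (n_ant - 1)
                                   * bandwidth_hz * t_int_s))

# C band, 7 min on source, 27 antennas, 2.048 GHz bandwidth,
# assuming SEFD ~ 310 Jy: close to the ~9 uJy/beam quoted in the table
sigma_c = point_source_rms_jy(310.0, 27, 2.048e9, 7 * 60)
```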
The total observing bandwidth was 1024\,MHz for L band, while 2048\,MHz were used for the C, X and Ku bands. All bands were subdivided into 16 spectral windows of 64 channels. The data calibration and reduction were performed using the Common Astronomy Software Applications package (\textsc{casa} version 5.3.0, \citealt{McMullin+07}). The full, unaveraged datasets (five in total) were downloaded from the NRAO science data archive\footnote{\url{https://archive.nrao.edu/archive/archiveproject.jsp}} as SDM-BDF datasets with the flags generated during the observations applied. Each dataset contained the calibrators (flux, phase and bandpass) and two targets for the four frequency bands. The calibration was performed using the \textsc{casa} calibration pipeline 5.1.1-5. After calibration, the resulting plots of the calibrators were inspected for RFI and the different bands were split into separate MS files. Since the observations covered wide bandwidths, the calibrators were imaged using the multi-frequency synthesis (mfs) algorithm \citep{CCW90, RC11}, with nterms=2. The integrated flux densities of the flux calibrators (3C48 for dataset 1, and 3C286 for all other datasets) were measured using the \textsc{casa} task \textsc{imfit}, and compared to the modelled flux densities given by \citet{PerleyButler13}. All measured flux densities were found to be within 5 per cent of the tabulated flux densities, so we adopted an average 5 per cent flux calibration error. The positional accuracy of the phase calibrators is typically of the order of $\sim$1 mas, so our estimates of the radio positions in the four bands are dominated by the uncertainties associated with the estimation of the peak of the elliptical Gaussian fit performed by the \textsc{casa} task \textsc{imfit}. We adopted different imaging strategies for the L band and for the C, X and Ku bands, performed using the \textsc{casa} task \textsc{tclean}.
For the L-band data, we used the \citet{Hogbom+1974} deconvolution algorithm in \textsc{tclean} with an image size of 8192 pixels (0.26 arcsec per pixel); we adopted the wide-field imaging technique \textsc{w-projection} in order to take into account the non-coplanarity of the baselines, and we applied a primary-beam correction in non-interactive mode. The L-band maps were then visually inspected and, in cases in which artifacts due to outlier sources were still present, we re-ran the clean algorithm with outlier fields in interactive mode. The positions of the outliers were checked against the FIRST survey\footnote{Faint Images of the Radio Sky at Twenty cm \citep{Becker1995}}. For the C, X and Ku bands, each target field was imaged in \textsc{tclean} using the multi-term, multi-frequency synthesis algorithm (with nterms=2) and an image size of 2048 pixels (0.07, 0.04 and 0.03 arcsec per pixel for the C, X and Ku bands, respectively). As before, in cases in which artifacts were still present in the final maps, we re-ran the clean algorithm with outlier fields. In all four frequency bands, the initial imaging of each target field was performed at the full angular resolution of the array (i.e. no tapering), with Briggs \citep{Briggs1995} weighting and a robust parameter of 0.5. The positions, peak intensities, integrated flux densities, deconvolved sizes and position angles (PA) of the sources were estimated by fitting a two-dimensional Gaussian in the image plane via the \textsc{casa} task \textsc{imfit}. We determined the rms noise of each map from a source-free annular region around the source. The resulting average rms in the four bands is approximately 40, 12, 10 and 9 ${\rm \mu}$Jy\,beam$^{-1}$ in the L, C, X and Ku bands, respectively. The uncertainties in the final flux density measurements combine the fitting errors from \textsc{imfit} and the 5 per cent flux calibration error, added in quadrature.
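The error model just described (the \textsc{imfit} fitting error combined in quadrature with the 5 per cent calibration term) amounts to the following; this is a minimal illustrative sketch, not the actual reduction script, and the numbers in the example are hypothetical:

```python
import math

def flux_error(fit_err, s_total, cal_frac=0.05):
    """Combine the imfit fitting error with a fractional flux-calibration
    error (5 per cent here) in quadrature, as described in the text."""
    return math.sqrt(fit_err**2 + (cal_frac * s_total)**2)

# e.g. a 10 uJy fitting error on a 400 uJy source:
# the 5 per cent scale error (20 uJy) dominates, giving ~22.4 uJy in total
err = flux_error(10.0, 400.0)
```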
The positional accuracy of the detected radio nuclei is limited by the positional accuracy of the phase calibrators, typically a few mas, and by the accuracy of the Gaussian fit to the source brightness distribution performed by \textsc{imfit}. The uncertainties on the radio positions quoted in columns 8 and 9 of Table \ref{table:fluxTable} are therefore the sum in quadrature of the two contributions. \section{Results} \begin{table*} \caption{Imaging results. \textit{Columns:} (1) Target name; (2) Frequency band; (3) Image rms noise (${\rm \mu}$Jy\,beam$^{-1}$); (4) Integrated flux density (${\rm \mu}$Jy); (5) Peak intensity (${\rm \mu}$Jy\,beam$^{-1}$); (6) Deconvolved FWHM dimensions (major $\times$ minor axis) of the fitted source, determined from an elliptical Gaussian fit (arcsec); (7) Source position angle (deg); (8) $\&$ (9) Detected source position in epoch J2000 (hh:mm:ss and $^{o}\,$:$^{\prime}\,$:$^{\prime\prime}\,$).} \centering \begin{adjustbox}{width=1\textwidth,center=\textwidth} \begin{tabular}{ccccccccc} \hline \hline Target & Band & $\sigma_{\rm image}$ & F$_{\mathrm{total}}$ & F$_{\mathrm{peak}}$ & $\theta_{\mathrm{M}} \times \theta_{\mathrm{m}}$ & P.A.
& $\alpha_{\rm J2000}$ & $\delta_{\rm J2000}$\\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9)\\ \hline
NGC0676 & L & 89 & $\cdots$ & $<$267 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\
& C & 18 & $\cdots$ & $<$54 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\
& X & 10.5 & $\cdots$ & $<$31.5 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\
& Ku & 9 & $\cdots$ & $<$27 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\ \hline
NGC1058 & L & 32 & $\cdots$ & $<$96 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\
& C & 11 & $\cdots$ & $<$33 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\
& X & 9 & $\cdots$ & $<$27 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\
& Ku & 9 & $\cdots$ & $<$27 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\ \hline
NGC2685 & L & 36 & $\cdots$ & $<$108 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\
& C & 11 & $\cdots$ & $<$33 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\
& X & 9 & $\cdots$ & $<$27 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\
& Ku & 9.5 & $\cdots$ & $<$28.5 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ \\ \hline
NGC3185 & L & 27 & 9250$\pm$470 & 1130$\pm$96 & 5.05$\times$3.8 & $\cdots$ & $\cdots$ & $\cdots$\\
& C & 16 & 3240$\pm$160 & 583$\pm$48 & 3.7$\times$3.5 & $\cdots$ & $\cdots$ & $\cdots$\\
& X & 14 & 668$\pm$47 & 116$\pm$34 & 3.7$\times$1.9 & $\cdots$ & $\cdots$ & $\cdots$ \\
& Ku & 10 & $\cdots$ & $<$30 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\ \hline
NGC3486 & L & 34 & $\cdots$ & $<$102 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\
& C & 10 & $\cdots$ & $<$30 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\
& X & 8 & $\cdots$ & $<$24 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\
& Ku & 9 & $\cdots$ & $<$27 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\ \hline
NGC3941 & L & 30 & 320$\pm$70 & 221$\pm$30 & 1.50$\pm$0.59$\times$0.12$\pm$0.56 & 120$\pm$28 & 11:52:55.397$\pm$0.149 & +36.59.10.854$\pm$0.073 \\
& C & 11 & 46$\pm$16 & 54$\pm$9 & $<$0.8$\times$0.55 & $\cdots$ & 11:52:55.364$\pm$0.161 & +36.59.10.861$\pm$0.05 \\
& X & 12 & $\cdots$ & $<$36
& $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ \\
& Ku & 14 & $\cdots$ & $<$42 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\ \hline
NGC4477 & L & 44 & 210$\pm$76 & 209$\pm$42 & $<$0.60 $\times$0.5 & $\cdots$ & 12:30:02.23$\pm$0.141 & +13.38.11.686$\pm$0.085 \\
& C & 10 & 91$\pm$15 & 72$\pm$7 & $<$0.17 $\times$ 0.15 & $\cdots$ & 12:30:02.197$\pm$0.017 & +13.38.11.546$\pm$0.018\\
& X & 9 & 67$\pm$9 & 84$\pm$5 & $<$0.10 $\times$ 0.09 & $\cdots$ & 12:30:02.195$\pm$0.004 & +13.38.11.570$\pm$0.004 \\
& Ku & 8 & 61$\pm$9 & 93$\pm$7 & $<$0.07 $\times$ 0.06 & $\cdots$ & 12:30:02.196$\pm$0.003 & +13.38.11.547$\pm$0.003 \\ \hline
NGC4639 & L & 27 & 353$\pm$55 & 303$\pm$28 & $<$0.60 $\times$0.56 & $\cdots$ & 12:42:52.380$\pm$0.057 & +13.15.26.735$\pm$0.044\\
& C & 10 & 404$\pm$17 & 393$\pm$10 & $<$0.17$\times$0.15 & $\cdots$ & 12:42:52.378$\pm$0.003 & +13.15.26.730$\pm$0.004\\
& X & 9 & 488$\pm$15 & 477$\pm$9 & $<$0.10 $\times$ 0.09 & $\cdots$ & 12:42:52.378$\pm$0.002 & +13.15.26.735$\pm$0.002\\
& Ku & 8 & 586$\pm$13 & 610$\pm$8 & $<$0.07 $\times$ 0.06 & $\cdots$ & 12:42:52.379$\pm$0.001 & +13.15.26.734$\pm$0.001\\ \hline
NGC4698 & L & 38 & 150$\pm$53 & 119$\pm$33 & $<$0.78 $\times$ 0.58 & $\cdots$ & 12:48:22.902$\pm$0.167 & +08.29.14.711$\pm$0.047\\
& C & 10 & 238$\pm$17 & 241$\pm$10 & $<$0.21 $\times$ 0.15 & $\cdots$ & 12:48:22.910$\pm$0.008 & +08.29.14.667$\pm$0.006\\
& X & 9 & 259$\pm$15 & 275$\pm$9 & $<$0.13 $\times$ 0.10 & $\cdots$ & 12:48:22.909$\pm$0.003 & +08.29.14.673$\pm$0.003\\
& Ku & 9 & 258$\pm$16 & 267$\pm$9 & $<$0.08 $\times$ 0.06 & $\cdots$ & 12:48:22.909$\pm$0.003 & +08.29.14.664$\pm$0.002 \\ \hline
NGC4725 & L & 39 & $\cdots$ & $<$117 & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\
& C & 11 & 66$\pm$15 & 84$\pm$10 & $<$0.18 $\times$ 0.15 & $\cdots$ & 12:50:26.570$\pm$0.016 & +25.30.02.691$\pm$0.014\\
& X & 9 & 118$\pm$14 & 133$\pm$9 & $<$0.10 $\times$ 0.10 & $\cdots$ & 12:50:26.569$\pm$0.007 & +25.30.02.749$\pm$0.004\\
& Ku & 8 & 81$\pm$11 & 105$\pm$8 & $<$0.07
$\times$ 0.06 & $\cdots$ & 12:50:26.569$\pm$0.005 & +25.30.02.748$\pm$0.003\\ \hline \hline \end{tabular} \end{adjustbox} \label{table:fluxTable} \end{table*} We detected radio emission in six out of ten sources: NGC~3185, NGC~3941, NGC~4477, NGC~4639, NGC~4698 and NGC~4725. Images of the detected sources (contours and coloured maps) are shown in Figs \ref{fig:contourMaps3185} - \ref{fig:contourMaps4725}. A source is considered detected if its peak intensity is $\ge3\,\sigma$; a source exhibiting a peak intensity $3\,\sigma\,\le\,S_{\rm P}\,<\,6\,\sigma$ is considered a marginal detection, following the same criterion as \citet{HU01}. Following this criterion, the detection rates are 5/10 in the L band (with NGC~4477 and NGC~4698 as marginal detections), 6/10 in the C band (with NGC~3941 as a marginal detection), 5/10 in the X band (with NGC~3185 as a marginal detection) and 4/10 in the Ku band. As can be seen from the images shown in Figs \ref{fig:contourMaps3185} - \ref{fig:contourMaps4725}, four of the six detected sources (NGC~4477, NGC~4639, NGC~4698 and NGC~4725) are not resolved in our VLA observations, and for them we could only set upper limits on their sizes, corresponding to half the beam size, in analogy with \citet{HU01}. Four sources, NGC~0676, NGC~1058, NGC~2685 and NGC~3486, were not detected at any wavelength, at 3$\,\sigma$ sensitivity thresholds ranging from 270 ${\rm \mu}$Jy\,beam$^{-1}$ in the L band down to 27 ${\rm \mu}$Jy\,beam$^{-1}$ in the Ku band; in these cases we provide upper limits on the peak intensities, defined as three times the rms noise. Initially, we detected NGC~3185 and NGC~3941 only in the L band (at 1.5 GHz), with the un-tapered maps exhibiting diffuse radio emission (see Figs. \ref{fig:contourMaps3185} and \ref{fig:contourMaps3941}).
In order to further investigate this, the C, X and Ku datasets of these sources were re-imaged with a uv-taper equal to the L-band clean beam size (via the \textsc{uvtaper} parameter with sub-parameter \textsc{outertaper}=['1.21arcsec','1.15arcsec','-74.63deg'] in \textsc{tclean}), so as to match the corresponding L-band observation and thus be more sensitive to extended radio emission. The final uv-tapered images were then smoothed to the resolution of the L-band beam (via the \textsc{restoringbeam} parameter of the \textsc{casa} task \textsc{tclean}). In the case of NGC~3941, which is fainter than NGC~3185, we used \textsc{briggs} weighting with a robust parameter of 2: this value brings the weighting close to natural weighting, which provides better surface-brightness sensitivity at the cost of degraded resolution and is therefore more appropriate for diffuse emission. We thus recovered radio emission in the C band for NGC~3185 and NGC~3941 at a level above 5$\sigma$, and in the X band for NGC~3185 at a level of $\sim$3.5$\sigma$ (images are shown in Figs. \ref{fig:contourMaps3185} and \ref{fig:contourMaps3941}). In the case of NGC~3185, the source parameters were estimated within interactively defined boundaries using the \textsc{casa viewer}. The uncertainties associated with the peak and integrated flux densities are calculated as $\sigma_S=\sqrt{N\,({\rm rms})^2+(0.05\,S)^2}$, where $N$ is the number of beam areas covered by a source of flux density $S$; the second term accounts for the 5 per cent uncertainty in the absolute flux density scale. Our findings are summarised in Table \ref{table:fluxTable}.
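The quoted uncertainty formula for extended emission can be written down directly. A sketch, using the NGC~3185 L-band flux density and rms from Table \ref{table:fluxTable}; the value of $N$ below is hypothetical and chosen only for illustration:

```python
import math

def extended_source_error(s_int, rms, beam_areas, cal_frac=0.05):
    """Uncertainty on an integrated flux density S measured over N beam
    areas: sigma_S = sqrt(N * rms^2 + (0.05 * S)^2), as in the text."""
    return math.sqrt(beam_areas * rms**2 + (cal_frac * s_int)**2)

# NGC 3185-like inputs: S = 9250 uJy, rms = 27 uJy/beam, N = 4 (hypothetical)
sigma = extended_source_error(9250.0, 27.0, 4)  # ~466 uJy; the scale term dominates
```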
\subsection{Detection rates} In Table \ref{table:Tab} we list the sources belonging to the Parent sample, and in boldface we indicate the sources of the Reference sample, for which we provided new estimates of the parameters based on the new VLA data. \begin{table*}\footnotesize \caption{The Parent sample of 28 sources. Objects in boldface are sources in the Reference sample, for which we report new VLA observations.} \centering \begin{adjustbox}{width=0.95\textwidth,center=\textwidth} \begin{tabular}{ccccccccccccc} \hline Target & D & Seyfert Type & Hubble Type & $L_{5}$ & $L_{1.4}$ & $L_{X}$ & $\log{R_X}$ & $\alpha$ & $M_{BH}$ & $L_X/L_{Edd}$ \\ & (Mpc) & & & VLA & VLA & & & VLA & ($M_{\odot}$) & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) \\ \hline \hline \textbf{NGC~676} & 19.5 & S2: & S0/a:spin & \textbf{<35.1} & \textbf{<35.2} & 40.8 & \textbf{<-5.7} & $-$ & $-$ & $-$ \\ \textbf{NGC~1058} & 9.1 & S2 & Sc & \textbf{<34.2} & \textbf{<34.1} & <37.5 & \textbf{$\lessgtr$-3.35} & $-$ & 4.9 & -5.4\\ NGC~1068 & 14.4 & S1.9 & Sb & 38.8 & 38.6 & 42.8 & -4.0 & 0.7 & 7.2 & -2.5\\ \textbf{NGC~2685} & 16.2 & S2/T2: & SB0+ pec & \textbf{<34.7} & \textbf{<34.7} & 39.9 & \textbf{-5.2} & $-$ & 7.1 & -5.3\\ NGC~3031 & 3.5 & S1.5 & Sab & 36.8 & 36.15 & 40.2 & -3.5 & -0.16 & 7.8 & -5.6\\ NGC~3079 & 17.3 & S2 & SBc spin & 38.2 & 37.6 & 42.6 & -4.4 & -0.18 & 7.7 & -3.13\\ \textbf{NGC~3185} & 21.3 & S2: & Sbc & $-$ & $-$ & 40.8 & $-$ & <-0.12 & 6.1 & -3.4\\ NGC~3227 & 20.6 & S1.5 & SABa pec & 37.7 & 37.6 & 41.7 & -4.0 & 0.82 & 7.6 & -3.9\\ \textbf{NGC~3486} & 7.4 & S2 & SABc & \textbf{<34.0} & \textbf{<34.0} & 38.9 & \textbf{<-4.9} & $-$ & 6.1 & -5.4\\ \textbf{NGC~3941} & 12.2 & S2: & SB0 & \textbf{34.7} & \textbf{34.7} & 38.9 & \textbf{-4.2} & \textbf{+1.52} & 8.2 & -7.4\\ NGC~3982 & 20.5 & S1.9 & SABb: & 36.7 & 36.3 & 41.2 & -4.5 & 0.39 & 6.1 & -3.0\\ NGC~4051 & 17.0 & S1.2 & SABbc & 36.6 & 36.3 & 41.3 & -4.7 & 0.55 & 6.1 & -2.9\\ NGC~4138 & 13.8 & S1.9 & 
S0+ & 35.6 & 35.2 & 41.3 & -5.3 & -0.32 & 7.6 & -4.6\\ NGC~4151 & 20.3 & S1.5 & SABab: & 38.3 & 38.1 & 42.5 & -4.2 & 0.6 & 7.2 & -2.8\\ NGC~4258 & 7.2 & S1.9 & SABbc & 35.8 & 35.2 & 40.9 & -5.1 & -0.01 & 7.6 & -4.8\\ NGC~4388 & 16.7 & S1.9 & Sb: spin & 37.0 & 36.8 & 41.7 & -4.7 & 0.69 & 6.8 & -3.2\\ NGC~4395 & 2.6 & S1 & Sm: & 34.4 & 34.2 & 39.8 & -5.4 & 0.66 & 5.0 & -3.3\\ NGC~4472 & 16.7 & S2:: & E2 & 37.5 & 37.2 & <39.3 & >-1.8 & 0.42 & 8.8 & -7.6\\ \textbf{NGC~4477} & 16.8 & S2 & SB0? & \textbf{35.1} & \textbf{35.0} & 39.6 & \textbf{-4.5} & \textbf{+0.52} & 7.9 & -6.4\\ NGC~4501 & 16.8 & S1.9 & Sb & 36.2 & 35.9 & 39.6 & -3.4 & 0.44 & 7.9 & -6.4\\ NGC~4565 & 9.7 & S1.9 & Sb? spin & 36.2 & 35.5 & 39.4 & -3.2 & -0.19 & 7.7 & -6.4\\ NGC~4579 & 16.8 & S1 & SABb & 37.8 & 36.9 & 41.0 & -3.2 & 0.56 & 7.8 & -4.8\\ \textbf{NGC~4639} & 22.9 & S1.0 & SABbc & \textbf{36.1} & \textbf{35.4} & 40.2 & \textbf{-4.1} & \textbf{-0.40} & 6.9 & -4.7\\ \textbf{NGC~4698} & 16.8 & S2 & Sab & 35.6 & 34.7 & 39.2 & \textbf{-3.6} & \textbf{-0.13} & 7.3 & -6.2\\ \textbf{NGC~4725} & 13.2 & S2: & SABab pec & \textbf{34.9} & \textbf{34.5} & 38.9 & \textbf{-4.0} & <-0.5 & 7.5 & -6.7\\ NGC~5033 & 18.7 & S1.5 & Sc & 36.8 & 36.5 & 41.1 & -4.3 & 0.51 & 7.3 & -4.3\\ NGC~5194 & 8.4 & S2 & Sbc pec & 35.6 & 35.4 & 40.9 & -5.3 & 0.6 & 7.0 & -4.14\\ NGC~5273 & 16.5 & S1.5 & S0 & 36.2 & 35.9 & 41.4 & -5.2 & 0.49 & 6.5 & -3.2\\ \hline \hline \multicolumn{11}{l}{Column (1), name of the source; Column (2), distance of the source; Columns (3) and (4); Seyfert and Hubble type; Columns (5) and (6),} \\ \multicolumn{11}{l}{log VLA 5 and 1.4 GHz peak luminosities in erg/s, respectively; Column (7), log 2-10 keV X-ray luminosity; Column (8), radio-loudness} \\ \multicolumn{11}{l}{parameter as defined by \citet{TerashimaWilson2003}; Column (9), spectral index defined as $S_{\nu}\,\propto\,\nu^{-\alpha}$; Column (10), BH} \\ \multicolumn{11}{l}{mass; Column (11), Eddington ratio.} \\ \multicolumn{11}{l}{Columns (2), 
(3), (4), (7) and (10) are taken from \citet{Panessa2006}. } \\ \multicolumn{11}{l}{For NGC~4725 and NGC~3185 we kept the upper limit on the spectral index provided by \citet{PG13}.} \\ \end{tabular} \end{adjustbox} \label{table:Tab} \end{table*} We provide the detection rates at 5 and 1.4 GHz for the Parent sample, together with average values of other relevant quantities, in Table \ref{tab:full_sample}. The Parent sample consists of 19 type-2 Seyferts and 9 type-1 Seyferts. We found detection rates at 1.5 GHz of 15/19 (79 per cent) and 9/9 (100 per cent) for type 2 and type 1 objects, respectively, while at 5 GHz we found detection rates of 16/19 (84 per cent) for type 2 objects and 9/9 (100 per cent) for type 1 objects. Similar detection rates are obtained when considering detections at both frequencies. Considering either frequency, the detection rate for the Parent sample is 24/28 (86 per cent). Indeed, as already shown in Section 4, four sources, namely NGC~0676, NGC~1058, NGC~2685 and NGC~3486, remain undetected at our tens of ${\rm \mu}$Jy\,beam$^{-1}$ sensitivity level. Given the new detections, these rates are higher than those of \citet{PG13}. All sources in the Parent sample, except NGC~1068, NGC~2685, NGC~3185, NGC~3486, NGC~4477, NGC~4501, NGC~4639, NGC~4698 and NGC~4725, are part of the 15-GHz survey performed by \citet{Saikia2018} at sub-arcsec resolution with the VLA in A configuration. Only five sources, namely NGC~3982, NGC~4051, NGC~4395, NGC~5194 and NGC~5273, were detected by them, with peak intensities ranging from 0.17\,${\rm m}$Jy\,beam$^{-1}${} (NGC~4395) to 0.5\,${\rm m}$Jy\,beam$^{-1}${} (NGC~3982). If we consider only the Reference sample, the three sources in common, NGC~0676, NGC~1058 and NGC~3941, were not detected by \citet{Saikia2018}, and their upper limits on the peak intensities are in agreement with our limits listed in Table \ref{table:fluxTable}.
\begin{table*}\footnotesize \caption{Summary of statistical results for the Parent sample.} \centering \begin{adjustbox}{width=1\textwidth,center=\textwidth} \begin{tabular}{cccccccc} \hline Target & Det rate 1.4 GHz & Det rate 5 GHz & Det. rate at either freq. & $\langle\,L_5^{VLA}\,\rangle$ & $\langle\,\alpha\,\rangle$ & $\alpha$ flat:steep & $\langle\,\log{R_X}\,\rangle$ \\ \hline \hline Full & 24/28 (86\%) & 24/28 (86\%) & 24/28 (86\%) & 36.1$\pm$0.3$^{*}$ & 0.30$\pm$0.10 & 12:11 & -4.40$\pm$0.15$^{*}$ \\ Type 1 & 9/9 (100\%) & 9/9 (100\%) & 9/9 (100\%) & 36.7$\pm$0.4 & 0.40$\pm$0.13 & 1:2 & -4.3$\pm$0.2 \\ Type 2 & 15/19 (79\%) & 16/19 (84\%) & 15/19 (79\%) & 35.8$\pm$0.3$^{*}$ & 0.28$\pm$0.13 & 9:5 & -4.5$\pm$0.2$^{*}$ \\ \hline \hline \multicolumn{7}{l}{Notes: $^{*}$, cases in which the mean may be ill-defined, see Section 4.2} \\ \multicolumn{7}{l}{In the calculation of mean values we neglected NGC~3185; in the calculation of mean radio-loudness parameter we} \\ \multicolumn{7}{l}{excluded NGC~1058 and NGC~4472, for which we have upper limits on X-ray luminosities.} \\ \multicolumn{7}{l}{The flat-to-steep threshold for the spectral index is 0.5.} \\ \end{tabular} \end{adjustbox} \label{tab:full_sample} \end{table*} \subsection{Radio Spectra} \begin{figure*}\scriptsize \centering \subfloat{\includegraphics[scale=0.35]{radio_spectra/NGC3185.pdf}} \subfloat{\includegraphics[scale=0.35]{radio_spectra/NGC3941.pdf}}\\ \subfloat{\includegraphics[scale=0.35]{radio_spectra/NGC4477.pdf}} \subfloat{\includegraphics[scale=0.35]{radio_spectra/NGC4639.pdf}}\\ \subfloat{\includegraphics[scale=0.35]{radio_spectra/NGC4698.pdf}} \subfloat{\includegraphics[scale=0.35]{radio_spectra/NGC4725.pdf}} \caption{Radio spectra of LLAGNs in the Reference sample that were detected in at least two bands. 
The flux densities and their corresponding errors are listed in Table~\ref{table:fluxTable}; upper limits are represented by arrows and, when possible, a power-law ${S_\nu}\,{\propto}\,{{\nu}^{-\alpha}}$ fit is performed. The black solid lines are power-law least-squares fits to the data (for NGC~4639 the 1.5 GHz point has not been considered).} \label{fig:radiospectra} \end{figure*} We built radio spectra for the sources detected in two or more bands: NGC~3185, NGC~3941, NGC~4477, NGC~4639, NGC~4698 and NGC~4725 (Fig. \ref{fig:radiospectra}; upper limits are represented as arrows), considering all the available data points. We performed a weighted linear least-squares fit using the \textsc{Sci-Py} \citep{scipy} routine \textsc{curve{\textunderscore}fit}, weighting by the uncertainties in the flux densities. In three cases, NGC~4477, NGC~4639 and NGC~4698, the spectra are well described by a single power law ${S_{\nu}}\,{\propto}\,{\nu^{-\alpha}}$\footnote{We define a steep spectrum as one having $\alpha\,\ge\,$0.5 and a flat one as $\alpha\,<\,$0.5, following \citet{PG13}, with $S_{\nu}\,\propto\,{\nu}^{-\alpha}$. We further define an inverted spectrum as one having $\alpha\,<\,$0.}. For NGC~4639 we considered only the 5.5, 9 and 14 GHz data points, as the 1.5 GHz point deviates from the power-law behaviour, being probably associated with a different emitting component that dominates at lower frequencies. We found inverted spectral indices for NGC~4639 and NGC~4698 of $\alpha\,=\,-0.40\pm0.01$ and $\alpha\,=\,-0.13\pm0.07$, respectively, and a steep spectral index of $\alpha\,=\,+0.52\pm0.09$ for NGC~4477.
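The weighted power-law fit described above can be sketched as follows. This is an illustrative reconstruction rather than the actual analysis script, using the NGC~4639 integrated flux densities from Table \ref{table:fluxTable} and, as in the text, excluding the 1.5 GHz point from the fit:

```python
import numpy as np
from scipy.optimize import curve_fit

# Power law in the paper's convention, S_nu proportional to nu^{-alpha}
def power_law(nu, amp, alpha):
    return amp * nu ** (-alpha)

# NGC 4639 integrated flux densities (uJy) at the four band centres (GHz)
nu = np.array([1.5, 5.5, 9.0, 14.0])
s = np.array([353.0, 404.0, 488.0, 586.0])
sig = np.array([55.0, 17.0, 15.0, 13.0])

# Weighted least squares on the three higher-frequency points only
popt, pcov = curve_fit(power_law, nu[1:], s[1:], p0=(400.0, 0.0),
                       sigma=sig[1:], absolute_sigma=True)
alpha = popt[1]                # ~ -0.4, i.e. an inverted spectrum
alpha_err = np.sqrt(pcov[1, 1])
```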
We computed the spectral index also for NGC~3941, using only the two frequency points at 1.4 and 5 GHz, and found a spectral slope of +1.52$\pm$0.33, where the uncertainty on the slope has been estimated as $\sqrt{(\sigma_{f_1}/S_{f_1})^2+(\sigma_{f_2}/S_{f_2})^2}/\ln(f_2/f_1)$, with $\sigma_{f_{1,2}}$ and $S_{f_{1,2}}$ the flux density uncertainties and the flux densities at the two frequencies \citep{HU01}. We remark that this source was initially detected only at 1.4 GHz and that we recovered emission at 5 GHz by applying a uv-taper equal to the L-band clean beam size. The spectral index for NGC~3185 has not been computed, as the radio emission from this source has a complex morphology and may be due to other components, such as star formation. In Table \ref{tab:full_sample} we list the average values of the slope for the objects in the Parent sample, in order to test whether there are significant differences between the general population and the type-1 and type-2 sub-populations. We used the routine \textsc{KMESTM} in the Astronomy Survival Analysis software package \citep[ASURV,][]{Isobe1990,Lavalley1992}, which gives the Kaplan-Meier estimator for the distribution function of a randomly censored sample\footnote{We caution that such averaged values may suffer from statistical biases when the lowest value in the sample is an upper limit; for details see the ASURV documentation.}. Using the Kaplan-Meier estimator, we computed an average spectral index and the flat-to-steep ratio for the full sample and for the type-1-only and type-2-only sub-populations. We find that the Parent sample exhibits a nearly equal number of flat and steep spectral indices (12:11 flat-to-steep ratio), with an average spectral slope compatible with being flat (0.30$\pm$0.10).
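As a consistency check, the two-point slope and its propagated uncertainty for NGC~3941 can be reproduced from the tabulated flux densities (a re-derivation for illustration, not the original script):

```python
import math

def two_point_alpha(s1, e1, f1, s2, e2, f2):
    """Two-point spectral index (S_nu ~ nu^-alpha) between frequencies f1 < f2,
    with the uncertainty propagated as in the text (Ho & Ulvestad 2001)."""
    alpha = math.log(s1 / s2) / math.log(f2 / f1)
    err = math.sqrt((e1 / s1) ** 2 + (e2 / s2) ** 2) / math.log(f2 / f1)
    return alpha, err

# NGC 3941: 320 +/- 70 uJy at 1.4 GHz and 46 +/- 16 uJy at 5 GHz
alpha, err = two_point_alpha(320.0, 70.0, 1.4, 46.0, 16.0, 5.0)
# alpha ~ +1.52, err ~ 0.32: consistent with the quoted +1.52 +/- 0.33
```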
However, if we consider type 1 and type 2 objects separately, the two mean values are consistent with each other within the errors: 0.40$\pm$0.13 and 0.28$\pm$0.13, respectively. \subsection{The nuclear radio and X-ray luminosity correlation} In order to study the relation between the nuclear radio and X-ray luminosities, we correlated the 5 GHz VLA peak luminosity with the 2-10 keV X-ray luminosity (corrected for Compton-thin and Compton-thick absorption) for the Parent sample \citep{PG13}, including our new data. Given the presence of doubly censored data, we used Schmitt's linear regression algorithm to compute the correlation slope and a generalised Kendall's tau test for the significance of the correlation, both from the ASURV package. In Fig. \ref{fig:radio_x} we show the 5 GHz VLA luminosity as a function of the 2-10 keV luminosity. NGC~3185 has been excluded from this fit because of its complex morphology, which is not ascribable to core emission (see the L-band map in Fig. \ref{fig:contourMaps3185}). The slope of Schmitt's regression is $\log{L_{5\,\rm{GHz}}[\rm{erg\,s^{-1}}]}=(0.8\pm0.1)\,\log{L_{2-10\,\rm{keV}}[\rm{erg\,s^{-1}}]}$. The generalised Kendall's tau test yields a probability that no correlation is present of P$\,=\,$1.8$\times$10$^{-6}$ (z-value$\,\sim\,$4.8, $\ge\,$4.5$\,\sigma$). Even though our sources are all nearby (d$\,\le\,$23$\,$Mpc), in order to eliminate distance effects we also performed the correlation test on the associated flux-flux relation, finding P$\,\sim\,$2.6$\times$10$^{-6}$ for the null hypothesis (z-value$\,\sim\,4.7$, $\ge\,$4.5$\,\sigma$); the correlation is therefore significant. For three out of the four undetected sources, the derived upper limits are nearly an order of magnitude smaller than the fluxes predicted by the luminosity-luminosity relation.
\begin{figure*}\scriptsize \centering \includegraphics[scale=0.42]{plots/radio5x.pdf} \caption{The 5 GHz VLA luminosity as a function of the 2-10$\,$keV X-ray luminosity for the sources in the Parent sample (NGC~3185 is excluded). The solid black line is Schmitt's regression line for our sample; the black dashed line is the slope from \citet{PG13}. Filled circles represent sources detected in this work; sources that have not been detected and for which we provide upper/lower limits are indicated by arrows; empty circles represent objects not in the Reference sample but present in the Parent sample.} \label{fig:radio_x} \end{figure*} \begin{figure*}\scriptsize \centering \subfloat{\includegraphics[scale=0.42]{plots/rxmbh.pdf}} \subfloat{\includegraphics[scale=0.42]{plots/rxedd.pdf}}\\ \caption{Left panel: the VLA radio-loudness parameter $R_X$, as defined by \citet{TerashimaWilson2003}, versus the BH mass. Right panel: the same parameter versus the Eddington ratio (NGC~3185 and NGC~0676 are excluded). In both panels, the black line is the \citet{TerashimaWilson2003} limit of $R_X=-4.5$, while the blue dashed line is the $R_X=-2.755$ limit for LLAGN derived by \citet{Panessa2006}.
Filled circles represent sources detected in this work; sources that have not been detected and for which we provide upper/lower limits are indicated by arrows; empty circles represent objects not studied in the Reference sample but present in the Parent sample.} \label{fig:radio_x2} \end{figure*} \subsection{Radio loudness and Accretion efficiency} The distribution of the VLA X-ray radio-loudness parameter\footnote{$\log{R_X}=\log{L(5\,\rm{GHz})/L(2-10\,\rm{keV})}$, as defined by \citet{TerashimaWilson2003}.} for the detected sources in the Parent sample shows a prevalence of radio-loud sources, based on the radio-loudness limit $\log{R_X}\,=\,-4.5$ of \citet{TerashimaWilson2003}; however, if we consider the -2.75 limit of \citet{Panessa2007}, then all our sources (except NGC~4472) are radio-quiet. We note that the BH masses, as in \citet{Panessa2006}, are taken from the literature and have therefore been derived with different methods: maser kinematics, gas kinematics, stellar kinematics, reverberation mapping and the mass-velocity dispersion relation, of which reverberation mapping and stellar kinematics have been identified by \citet{WooUrry2002} as the most reliable. As noted in \citet{Panessa2006}, the BH mass estimates are affected by a typical 0.3 - 0.5 dex uncertainty, mainly due to the scatter in the mass-velocity dispersion relation. In Fig. \ref{fig:radio_x2} we show $R_X$ as a function of BH mass and Eddington ratio ($\log{L_{2-10\,\rm{keV}}/L_{\rm{Edd}}}$), in the left and right panels, respectively. We performed a Kendall's tau test for the significance of the correlation between the radio-loudness parameter and the black-hole mass (Fig. \ref{fig:radio_x2}, left panel), finding a weak trend (z-value$\sim$3.17 and probability P$\sim$0.0015), while no significant correlation is found with the Eddington ratio (z-value$\sim$1.28 and P$\sim$0.2).
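The radio-loudness parameter defined in the footnote reduces to a difference of logarithms when the luminosities are already expressed in log units; for example, with the Table \ref{table:Tab} values for NGC~4477:

```python
def log_rx(log_l5, log_lx):
    """X-ray radio-loudness parameter of Terashima & Wilson (2003):
    log R_X = log L(5 GHz) - log L(2-10 keV), both luminosities in erg/s."""
    return log_l5 - log_lx

# NGC 4477 (Table values): log L5 = 35.1, log LX = 39.6
rx = log_rx(35.1, 39.6)   # -4.5, right at the radio-loud/radio-quiet divide
```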
We also investigated the correlations of the spectral index with the black-hole mass and the Eddington ratio, as recently reported by \citet{LaorBaldiBehar2019}; however, we do not find a significant correlation with either quantity. A larger sample of sources would allow a more statistically robust study of this issue. The population of sources lying between the two radio-loudness thresholds (see Fig. \ref{fig:radio_x2}) could be referred to as Radio-Intermediate (RI) AGN, see for instance \citet{FalckeSherwoodPatnaik1996}. For the PG quasar sample, \citet{FalckeSherwoodPatnaik1996} found that, at intermediate values of the radio-loudness parameter R (in the Kellerman definition), an approximately equal fraction of flat-spectrum sources emerges, and \citet{Kellerman1994} showed that these flat-spectrum radio-intermediate sources are compact on VLA scales. Interestingly, in our Parent sample, 12 of the 23 sources having both $R_X$ and spectral index information can be considered Radio-Intermediate, with 7 of them exhibiting a flat spectrum together with a compact VLA component (with the exception of NGC 3079). If we consider the $\log{R_X}<-4.5$ sources (9 out of 23), then only three are both compact and flat-spectrum. In Table \ref{tab:full_sample} we list the average values of the 5 GHz luminosity and of the X-ray radio-loudness parameter $\log{R_X}=\log{L(5\,\rm{GHz})/L(2-10\,\rm{keV})}$ (as defined by \citealt{TerashimaWilson2003}). Considering the average 5 GHz VLA luminosity and the average radio-loudness parameter for type 1 and type 2 Seyferts separately, no significant difference is found. \section{Discussion} We detected radio emission in six out of ten sources, with detection rates of 5/10 in the L band, 6/10 in the C band, 5/10 in the X band and 4/10 in the Ku band.
The new VLA A-configuration observations allowed us to reach sensitivity levels down to $\sim\,$270 ${\rm \mu}$Jy\,beam$^{-1}$ in the L band (highest rms, for NGC~0676) and $\sim\,$27 ${\rm \mu}$Jy\,beam$^{-1}$ in the Ku band, which translate into radio powers of $L\,\sim\,10^{19}$ and $L\,\sim\,10^{18}$ W\,Hz$^{-1}$ at 1.5 and 14 GHz, respectively. This allows us to increase the detection rates for the 28 sources of the Parent sample compared to \citet{PG13}: from 64 to 86 per cent in the L band and from 82 to 86 per cent in the C band (see Table \ref{tab:full_sample}), thus sampling among the lowest radio luminosities for AGN. In the C band, we find detection rates of 84 per cent for the type-2 Seyferts and 100 per cent for the type 1s. Clues on the origin of the radio emission can be obtained from observational information such as the spectral slope and the morphology. In two cases, namely NGC~4639 and NGC~4698, we found spectral slopes of -0.40$\pm$0.01 and -0.13$\pm$0.07, respectively. Inverted or slightly inverted radio spectra from very compact sources (as in these cases) are usually associated with optically thick synchrotron emission from the base of a jet \citep[e.g.][]{Blandford1979,Reynolds1982}, in which multiple self-absorbed components, peaking at different frequencies, overlap and flatten the overall spectrum with respect to the canonical 2.5 synchrotron self-absorption slope \citep{FalckeBiermann1995}. Alternatively, thermal processes such as free-free emission may also produce flat/inverted spectral slopes \citep{Bontempi+2012}. The other detected sources exhibit different spectra: NGC~4477 shows a spectral index intermediate between flat and steep (0.52$\pm$0.09), while NGC~3941 has a very steep spectrum ($\alpha\,$=$+1.52\,\pm\,0.33$), both compatible with optically thin synchrotron emission. NGC~4725 exhibits a peculiar spectrum, and further observations are needed to derive meaningful information for this source.
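The conversion from a flux-density limit to the radio powers quoted above is the standard $L=4\pi D^2 S$ (no K-correction, which is negligible at these distances); a sketch using the NGC~0676 3$\sigma$ L-band limit and the distance from Table \ref{table:Tab}:

```python
import math

MPC_M = 3.086e22   # metres per megaparsec
JY_SI = 1.0e-26    # W m^-2 Hz^-1 per jansky

def radio_power(flux_jy, dist_mpc):
    """Monochromatic radio power L = 4*pi*D^2*S in W/Hz."""
    d = dist_mpc * MPC_M
    return 4.0 * math.pi * d ** 2 * flux_jy * JY_SI

# 3-sigma L-band limit for NGC 676: 267 uJy/beam at D = 19.5 Mpc
p = radio_power(267e-6, 19.5)   # ~1.2e19 W/Hz, i.e. L ~ 10^19 as quoted
```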
The above considerations could be strengthened via brightness-temperature arguments. Following \citet{Falcke1999}, the natural boundary between thermal and non-thermal processes is roughly $T_B\,\sim\,10^{6}$\,K, with a brightness temperature much greater than this value suggesting a non-thermal origin. However, we cannot put strong limits on the brightness temperature from our VLA observations, given the loose upper limits on the deconvolved sizes. The only VLBI detection in the Reference sample is that of NGC~4477 at 5 GHz \citep{Bontempi+2012}, for which a very inverted spectrum and a brightness temperature intermediate between thermal and non-thermal processes ($\log{T_B}[K]\sim$6.5) were found. The derived physical parameters, under the assumption that the spectrum is due to synchrotron self-absorption, would agree with thermal emission originating in a compact, hot corona surrounding the accretion disc, as proposed in the coronal models of \citet{LaorBehar2008}. \begin{figure*}\scriptsize \centering \subfloat{\includegraphics[scale=0.3]{plots/NGC3185_L_HST.pdf}} \subfloat{\includegraphics[scale=0.295]{plots/NGC3185_C_HST.pdf}}\\ \caption{Left panel: L-band radio contours superimposed on the HST image of NGC~3185; right panel: C-band radio contours superimposed on the HST image of NGC~3185. The archival HST image was taken with the WFPC2 detector and the F450W filter.} \label{fig:3185_HST} \end{figure*} We detected radio emission from six sources; while the radio morphology in four of the six cases is compact and predominantly unresolved on $\le\,$arcsecond scales, for NGC~3185 and NGC~3941 we find a more complex morphology. The estimated 1.5 GHz radio luminosities are L$\,\sim\,5\,\times\,10^{20}$ and $\sim\,6\,\times\,10^{18}$\,W\,Hz$^{-1}$ for NGC~3185 and NGC~3941, respectively, well below the Radio-Loud/Radio-Quiet threshold of L$\,\sim\,10^{23}$\,W\,Hz$^{-1}$ defined by \citet{Condon1992}.
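To see why the VLA data cannot usefully constrain $T_B$: for a Gaussian beam, the Rayleigh-Jeans brightness temperature is $T_B\,\approx\,1.222\times10^{3}\,I/(\nu^2\,\theta_{\rm maj}\theta_{\rm min})$, with $I$ in mJy\,beam$^{-1}$, $\nu$ in GHz and the beam axes in arcsec (the coefficient follows from $S=2k\nu^2 T_B\Omega/c^2$ for a Gaussian beam). The input values below are illustrative, not measurements from this work:

```python
def brightness_temp(peak_mjy_beam, freq_ghz, bmaj_as, bmin_as):
    """Rayleigh-Jeans brightness temperature for a Gaussian beam:
    T_B [K] ~ 1.222e3 * I[mJy/beam] / (nu[GHz]^2 * theta_maj * theta_min [arcsec])."""
    return 1.222e3 * peak_mjy_beam / (freq_ghz ** 2 * bmaj_as * bmin_as)

# A 0.5 mJy/beam peak filling a 0.1 arcsec region at 9 GHz only requires
# T_B of a few hundred K, far below the ~1e6 K non-thermal threshold
tb = brightness_temp(0.5, 9.0, 0.1, 0.1)
```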
Considering NGC~3185, the structure observed in L-band has a size of $\sim$5$\,\times\,$3.8 arcsec, which translates into a linear scale of $\sim\,$0.5$\,\times\,$0.4 kpc\footnote{If instead we use the z-independent distance of 23.2 Mpc provided in the NASA/IPAC Extragalactic Database (NED), then the linear scale would be $\sim\,$0.56$\,\times\,$0.42 kpc, so our linear-scale estimates are affected by a $\le\,10$ per cent error, which does not affect our considerations.}. Analogous considerations can be made for the C and X bands, leading to linear scales of $\sim\,$0.38$\,\times\,$0.36 kpc. There is evidence for radio emission spread over host-galaxy scales for a number of Seyferts when observed with adequate angular resolution and sensitivity \citep{Orienti2015}. In particular, circumnuclear starburst rings with knots of star formation are observed. \citet{OrientiPrieto2010} observed diffuse radio emission with knots of star formation for a number of Seyferts on scales smaller than 1 kpc (NGC~5506, NGC~7469 and NGC~7582). Analogously, the morphology of the radio emission of NGC~3185 as observed in our radio maps may be consistent with emission from circumnuclear rings of star formation. In order to check this hypothesis, we overlaid the L-band and C-band radio contours on an archival HST image\footnote{Taken from the Hubble Legacy Archive (HLA), \url{https://hla.stsci.edu/}} of NGC~3185, as shown in Fig. \ref{fig:3185_HST}. The radio contours overlap with the optical emission, from which a nearly circular, ring-like structure emerges. However, further work is needed to understand the link between radio emission and star formation in the form of circumnuclear rings in this source. In particular, radio observations at intermediate angular resolution (such as those of e-MERLIN) combined with H$\alpha$ maps would allow us to confirm the star-formation hypothesis.
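The angular-to-linear conversions above are small-angle arithmetic; a minimal sketch reproducing the footnote's numbers for NGC~3185 at the NED $z$-independent distance of 23.2 Mpc:

```python
import math

ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)  # ~4.848e-6 rad per arcsec

def linear_scale_kpc(theta_arcsec, distance_mpc):
    """Projected linear size (kpc) subtended by theta_arcsec at distance_mpc,
    in the small-angle approximation."""
    return theta_arcsec * ARCSEC_TO_RAD * distance_mpc * 1.0e3  # Mpc -> kpc

# NGC 3185: the ~5 x 3.8 arcsec L-band structure at 23.2 Mpc
major = linear_scale_kpc(5.0, 23.2)
minor = linear_scale_kpc(3.8, 23.2)
print(round(major, 2), round(minor, 2))  # 0.56 0.43 -- matching the footnote to within rounding
```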
If we consider the Parent sample with our new data, then we find that the average spectral slope is compatible with flat (0.30$\pm$0.10), with a nearly equal number of flat and steep slopes. If we consider the two sub-populations of type-1 and type-2 Seyferts, then no significant differences are found with respect to the average spectral index and the average radio-loudness parameter, the only difference being that type-1 sources are slightly more luminous. Considering the X-ray radio-loudness parameter, the black-hole mass and the Eddington ratio, we do not find significant differences between the type-1 and type-2 sub-classes. We note that, even though the sources in the Reference sample have an Eddington ratio nearly an order of magnitude smaller than the average for the Parent sample, the X-ray radio-loudness parameter does not exhibit anomalous values. We also investigated the relation between the 5 GHz luminosity and the nuclear 2-10 keV X-ray luminosity. The latter is usually considered a tracer of the accretion luminosity, while the former may have both a core and a jet contribution. Nevertheless, we performed this analysis in order to understand whether there is an interplay between the emitting components. We have found a significant correlation between the two quantities, $\log L_\mathrm{5\ GHz} \propto (0.8\pm0.1) \log L_\mathrm{2-10\ keV}$, at the scales traced by the VLA, in agreement with the considerations of \citet{Panessa2007}. The increased sensitivity of our observations allows us to put stringent limits on the derived flux densities at very low luminosities, resulting in a steepening of the correlation slope with respect to the previous estimate of \citet{PG13}, i.e., $\log L_\mathrm{5\ GHz} \propto (0.67\pm0.01) \log L_\mathrm{2-10\ keV}$. However, our value is still in agreement with the general $\sim$ 0.7 slope of \citet{Gallo2003} found for `low-hard' state XRBs.
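Our correlation analysis accounts for censored data (upper limits); purely as an illustration of the underlying log--log regression, the sketch below fits synthetic luminosities (not our measurements) with an ordinary least-squares slope, ignoring censoring:

```python
import random

def ols_slope(x, y):
    """Ordinary least-squares slope of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# Synthetic detections only (no upper limits): luminosities drawn so that
# log L_5GHz = 0.8 log L_X + const, plus 0.3 dex of scatter
random.seed(0)
log_lx = [38.0 + 4.0 * random.random() for _ in range(50)]
log_lr = [0.8 * lx - 12.0 + random.gauss(0.0, 0.3) for lx in log_lx]
slope = ols_slope(log_lx, log_lr)
print(round(slope, 2))  # recovers a value near the input slope of 0.8
```

With real censored samples, survival-analysis estimators replace the plain least-squares step.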
During their outburst activity, XRBs experience transitions between different accretion and ejection states, likely depending on the efficiency of the accretion flow \citep{Fender2004}. In their low-hard state, XRBs show a radio spectrum which is flat/slightly inverted, and which can be interpreted as synchrotron self-absorbed emission from an optically thick core, probably the base of a jet, as may be occurring in AGN \citep{Blandford1979}. The fact that LLAGNs exhibit a slope of the radio--X-ray correlation comparable to that of low-hard state XRBs has been interpreted as a symptom of a common underlying physics \citep[e.g.][]{FalckeKordingMarkoff2004}. The two classes of low-accreting sources would therefore be associated with a radiatively inefficient accretion flow (RIAF) coupled with a jet \citep{Ho2008}, although there have been successful attempts at modelling them in terms of jet-only and RIAF-only models \citep[e.g.][]{Kording2006}. Indeed, the observed low Eddington ratios favour an inefficient accretion regime for our sources. However, we note that the jet origin of the radio emission, associated with the low/hard state of XRBs, cannot be confirmed for the sources in our sample. Indeed, a possible non-negligible contribution to the radio emission might come from central star-formation regions (as in the case of NGC~3185). Moreover, the interpretation of spectral indices is not unique: different processes, such as low-power jets, outflows or star formation, could produce steep or flat spectra, depending on the physical conditions \citep[for a review see][]{Panessa2019}. \begin{table}\footnotesize \centering \caption{Spectral index for the four sources NGC~3941, NGC~4477, NGC~4639 and NGC~4698.
\textit{Columns}: (1) target name; (2) spectral index.} \begin{tabular}{cc} \hline Target & Spectral index \\ & $\alpha$ \\ (1) & (2) \\ \hline \hline NGC~3941 & +1.52$\pm$0.33 \\ NGC~4477 & +0.52$\pm$0.09 \\ NGC~4639 & -0.40$\pm$0.01 \\ NGC~4698 & -0.13$\pm$0.07 \\ \hline \end{tabular} \label{tab:spec_ind} \end{table} \section{Conclusions} We present new multi-frequency (L, C, X and Ku bands), high-resolution (A configuration) VLA observations of ten Seyferts (Reference sample), which are the faintest members of the complete, distance-limited sample of \citet{Cappietal06} (Parent sample). Below, we summarize our results: \\ - We detected radio emission in six out of ten sources; the remaining four, namely NGC~0676, NGC~1058, NGC~2685 and NGC~3486, remain radio-silent at 3$\,\sigma$ sensitivity levels ranging from $\sim\,$270 $\mu$Jy/beam (L-band) down to $\sim\,$27 $\mu$Jy/beam (Ku-band); this translates into upper limits on the luminosities of L$\,\sim\,10^{19}$ and L$\,\sim\,10^{18}$\,W\,Hz$^{-1}$ at 1.5 and 14 GHz, respectively.\\ - The increased sensitivity of the new observations translates into a higher detection rate in the Parent sample, from 64 to 86 per cent in the L-band and from 82 to 86 per cent in the C-band, with respect to previous works \citep{PG13}, suggesting that all nuclei should be radio emitters when observed with adequate sensitivity. \\ - For the detected sources we computed radio spectral slopes, with the spectral index defined as $S_\nu\,\propto\,\nu^{-\alpha}$, and we found: $\alpha\,=\,$+0.52 for NGC~4477, compatible with both a steep and a flat spectrum; $\alpha\,=\,$-0.40 for NGC~4639 and $\alpha\,=\,$-0.13 for NGC~4698, slightly inverted spectra compatible with both compact optically-thick emission and thermal (e.g. free-free) emission; and $\alpha\,=\,$+1.52 for NGC~3941, a steep spectrum. NGC~4725 exhibits a peculiar spectrum that deserves further investigation.
\\ - We studied the morphology of the radio emission for the ten faintest Seyferts, and we found it to be predominantly compact on scales $\le$ arcsec, corresponding to linear scales smaller than $\sim$\,100\,pc, except in two cases, NGC~3185 and NGC~3941, which show complex morphologies at sub-kpc scales, probably due to star-formation processes. \\ - We did not find particular trends with either the black-hole mass or the Eddington ratio when considering the Parent sample. Even though the sources in the Reference sample have an Eddington ratio an order of magnitude lower than the average for the Parent sample, they do not exhibit any specific trend in the X-ray radio-loudness parameter \citep[as defined by][]{TerashimaWilson2003}. \\ - Considering the type-1 and type-2 sub-populations of the Parent sample, we did not find a clear dichotomy in the average radio-loudness parameter $\log{R_X}=\log{L(5\,\rm{GHz})/L(2-10\,\rm{keV})}$ or the average spectral index, the only difference being that type-1 Seyferts show a prevalence of steep spectra with respect to type-2s. \\ - We correlated the radio 5 GHz luminosity with the 2-10 keV luminosity for the Parent sample, taking censored data into account. The slope we find, 0.8$\pm$0.1, is consistent with the slope of $0.7$ found by \citet{Gallo2003} for low-hard state XRBs, which would suggest a possible common inefficient accretion physics.\\ These faint nuclei will constitute one of the dominant source populations in the Square Kilometre Array (SKA) sky, allowing us to investigate the different physical processes that possibly produce radio emission in these nuclei. \section*{Acknowledgements} We thank the referee for the comments, which significantly contributed to improving the quality of the paper. EC thanks D.R.A. Williams for comments that improved the manuscript.
EC acknowledges the National Institute of Astrophysics (INAF) and the University of Rome - Tor Vergata for the PhD scholarship in the XXXIII PhD cycle. GB acknowledges financial support under the INTEGRAL ASI-INAF agreement 2013-025-R.1. FP and MG acknowledge support from the PRIN-INAF SKA-CTA 2016 grant. FT acknowledges support from the Programma per Giovani Ricercatori – anno 2014 ``Rita Levi Montalcini''. \bibliographystyle{mnras}
\section{Introduction} The eigenvalues of random matrices are known to have far-reaching implications in various scientific disciplines. Finite-dimensional properties of the eigenvalues of Wishart-type random matrices are of paramount importance in classical multivariate analysis \cite{Anderson, Robb}, whereas recent multivariate statistical investigations have focused on establishing the asymptotic properties of the eigenvalues \cite{Imj, Alex}. Various links between the eigenvalues of random matrices and statistical physics, combinatorics and integrable systems have been established over the last few decades (see, e.g., \cite{PJF, Mehta} and references therein). Apart from these areas, random matrices, especially matrices with complex Gaussian elements, have also found new applications in signal processing and wireless communications \cite{Telatar, Verdu}. The majority of those studies focus on random matrix ensembles derived from zero-mean Gaussian matrices. However, random matrices derived from non-zero-mean Gaussian matrices have traditionally been an area of interest in multivariate analysis \cite{Anderson, Cons, James, Robb}. Moreover, mathematical objects such as zonal polynomials \cite{Hua, James} and hypergeometric functions of matrix arguments \cite{Herz, Khatri} have been introduced in the multivariate analysis literature to facilitate further analysis of such non-central random matrices. Interestingly, these non-central matrices have also been referred to as random matrices with external sources in the physics literature \cite{Brezin1, Brezin2, Brezin4, Justin}. In this respect, the classical orthogonal polynomial based characterization of the eigenvalues of random matrices \cite{Mehta} has been further extended to encompass multiple orthogonal polynomials in \cite{Bleher2, Bleher1}.
Alternatively, capitalizing on a contour-integral approach due to Kazakov \cite{Kazakov}, the authors in \cite{Arous, Brezin1, Brezin2} have introduced a double contour integral representation for the correlation kernel of the eigenvalue point process of non-central random matrices. Some recent contributions on this matter include \cite{Forrester1, Peter}. One of the salient features common to those latter studies is that they focus exclusively on either a spiked correlation or a spiked mean model. It is noteworthy that these two models are mathematically related to each other \cite{Bleher3}. As is well known, the characterization of the joint eigenvalue distribution of non-central random matrices\footnote{Here the term ``non-central random matrices" refers to non-central Gaussian and Wishart matrices.} involves the hypergeometric function of two matrix arguments \cite{James}. It turns out that one of the argument matrices has reduced rank in the presence of a spiked mean/correlation model. Specifically, when the spike is of rank one, an alternative representation of the hypergeometric function of two matrix arguments has recently been discovered independently by Mo \cite{Mo}, Wang \cite{Wang} and Onatski \cite{Alex}. The key contribution amounts to the representation of the hypergeometric function of two matrix arguments with a rank-$1$ argument matrix in terms of an infinite series involving a single contour integral. This representation has subsequently been used to further characterize the asymptotic behavior of the eigenvalues of non-central random matrices \cite{Mo, Wang}. In this paper, by employing this alternative contour integral representation, we analyze three problems pertaining to the eigenvalues of a finite-dimensional complex non-central Wishart matrix with a rank-$1$ mean matrix\footnote{This is also known as the shifted mean chiral Gaussian ensemble with $\beta=2$ (i.e., the complex case) \cite{Peter}.}.
Let $0<\lambda_1\leq \lambda_2\leq \cdots\leq \lambda_n$ be the ordered eigenvalues of an $n\times n$ complex non-central Wishart matrix $\mathbf{W}$ with $m$ degrees of freedom and a rank-$1$ mean. We are interested in the following three problems. \begin{enumerate} \item The characterization of the cumulative distribution function (c.d.f.) of the minimum eigenvalue of $\mathbf{W}$ as the determinant of a square matrix, the size of which depends on the difference between the degrees of freedom and $n$ (i.e., $m-n$). \item The statistical characterization of the random quantity $\frac{\text{tr}(\mathbf{W})}{\lambda_1}$ with $\text{tr}(\cdot)$ denoting the trace of a square matrix. \item The statistical average of the reciprocal of the characteristic polynomial $\det[z\mathbf{I}_n+\mathbf{W}], \;|\arg z|<\pi$, with $\det[\cdot]$ and $\mathbf{I}_n$ denoting the determinant of a square matrix and the $n\times n$ identity matrix, respectively. \end{enumerate} The first problem has a straightforward solution in the form of the determinant of a square matrix of size $n\times n$ \cite{Matthesis}. This stems from the determinant representation of the hypergeometric function of two matrix arguments due to Khatri \cite{Khatri}. However, in certain cases, it is convenient to have an expression with the determinant of a square matrix of size $m-n$ (e.g., when $m=n$). Therefore, in this work, by leveraging the knowledge of classical orthogonal polynomials, we derive an alternative expression for the c.d.f. of the minimum eigenvalue which involves the determinant of a square matrix of size $m-n+1$. This new form is highly desirable when the difference between $m$ and $n$ is small, irrespective of their individual magnitudes. In such a situation, this new expression circumvents the analytical complexities associated with the straightforward solution above, which requires evaluating the determinant of an $n\times n$ square matrix.
This key representation, in turn, facilitates the further analysis of the so-called microscopic limit of the minimum eigenvalue (i.e., the limit as $m,n\to\infty$ with $m-n$ fixed), which is known to have a determinantal form involving the Bessel kernel \cite{Arous}. The random quantity of our interest in the second problem is commonly known as the Demmel condition number in the numerical analysis literature \cite{Demmel}. As opposed to the case of central Wishart matrices \cite{Robb}, $\text{tr}(\mathbf{W})$ and $\frac{\lambda_1}{\text{tr}(\mathbf{W})}$ are no longer statistically independent. Furthermore, a direct Laplace transform relationship between $\frac{\lambda_1}{\text{tr}(\mathbf{W})}$ and the probability density of the minimum eigenvalue of $\mathbf{W}$ does not seem to exist, whereas such a relationship exists among these random quantities in the case of central Wishart matrices \cite{PrathaSIAM, Krish}. Therefore, we introduce a moment generating function (m.g.f.) based framework to solve the second problem. In particular, using a classical orthogonal polynomial approach, we derive the m.g.f. of the random variable of our interest in terms of a single integral involving the determinant of a square matrix of size $m-n+1$. Upon taking the direct Laplace inversion of the m.g.f., we then obtain an exact expression for the probability density function (p.d.f.). The remarkable fact of having the determinant of a square matrix of size $m-n+1$ makes this form suitable when the relative difference between $m$ and $n$ is small. For instance, in the special case of $m=n$, the p.d.f. simplifies to an expression involving a single infinite summation. A generalized framework based on the duality between certain matrix ensembles has been proposed in \cite{Patric} to solve certain problems involving the averages of the reciprocals of characteristic polynomials pertaining to non-central Wishart matrices.
However, the third problem of our interest does not seem to be consistent with that framework, since the specific parameters associated with our problem do not satisfy the requirements in \cite{Patric}. Also, it is worth mentioning that this particular problem has not been addressed in the more recent work of Forrester \cite{Forrester1} on the averages of characteristic polynomials for shifted mean chiral Gaussian ensembles. Therefore, again following the classical orthogonal polynomial approach, here we derive a new expression for this particular average. The resultant expression turns out to involve a single infinite series. This is not surprising, since in the case of a central Wishart matrix the corresponding answer depends only on the number of characteristic polynomials rather than the size of the random matrix \cite{Boro, Patric, Fodorov}. The rest of this paper is organized as follows. We begin in Section 2 by deriving a new p.d.f. for the eigenvalues of a complex non-central Wishart matrix with a rank-$1$ mean. In Section 3 we use the new joint eigenvalue p.d.f. to derive the c.d.f. of the minimum eigenvalue in terms of the determinant of a square matrix of size $m-n+1$. Section 4 addresses the problem of the statistical characterization of the random quantity $\frac{\text{tr}(\mathbf{W})}{\lambda_1}$ by deriving the corresponding m.g.f. and p.d.f. expressions. Section 5 is dedicated to deriving the average of the reciprocal of the characteristic polynomial $\det[z\mathbf{I}_n+\mathbf{W}], \;|\arg z|<\pi$. \section{New Joint Density of the Eigenvalues of a Complex Non-central Wishart Matrix with a Rank-$1$ Mean} Let us first define the p.d.f. of a complex non-central Wishart matrix. \begin{defn} Let $\mathbf{X}\in\mathbb{C}^{m\times n}$ be distributed as $\mathcal{CN}_{n,m}\left(\mathbf{M},\mathbf{I}_m\otimes\mathbf{I}_n\right)$ where $\mathbf{M}\in\mathbb{C}^{m\times n}$ with $m\geq n$.
Then $\mathbf{W}=\mathbf{X}^\dagger \mathbf{X}$ has a complex non-central Wishart distribution $\mathcal{W}_n\left(m,\mathbf{I}_n,\mathbf{M}^\dagger\mathbf{M}\right)$ with p.d.f.\footnote{Henceforth, we use $(\cdot)^\dagger$ to denote the conjugate transpose of a matrix.} \begin{align} \label{den} f_{\mathbf{W}}(\mathbf{W})=\frac{e^{-\rm{tr}(\mathbf{M}^\dagger \mathbf{M})}}{\widetilde \Gamma_n(m)}|\mathbf{W}|^{m-n}e^{-\rm{tr}\left(\mathbf{W}\right)} {}_0\widetilde F_1\left(m;\mathbf{M}^\dagger\mathbf{M}\mathbf{W}\right) \end{align} where $\widetilde \Gamma_n(m)=\displaystyle \pi^{\frac{n(n-1)}{2}}\prod_{i=1}^n\Gamma(m-i+1)$ and ${}_0\widetilde F_1\left(\cdot;\cdot\right)$ denotes the complex hypergeometric function of one matrix argument. In particular, for a Hermitian positive definite $n\times n$ matrix $\mathbf{A}$, we have \cite{James} \begin{equation*} {}_0\widetilde F_1\left(p;\mathbf{A}\right)=\sum_{k=0}^\infty \frac{1}{k!}\sum_{\kappa}\frac{C_\kappa(\mathbf{A})}{[p]_\kappa} \end{equation*} where $C_\kappa(\cdot)$ is the complex zonal polynomial\footnote{The specific definition of the zonal polynomial is not given here as it is not required in the subsequent analysis. More details of the zonal polynomials can be found in \cite{James, Takemura}.}, which depends on the argument matrix $\mathbf{A}$ only through its eigenvalues, and $\kappa=(k_1,k_2,\ldots,k_n)$, with the $k_i$'s being non-negative integers, is a partition of $k$ such that $k_1\geq k_2\geq\ldots\geq k_n\geq 0$ and $\sum_{i=1}^nk_i=k$. Also $[a]_\kappa=\prod_{i=1}^n(a-i+1)_{k_i}$ with $(a)_n=a(a+1)\ldots(a+n-1)$ denoting the Pochhammer symbol. \end{defn} The following theorem is due to James \cite{James}.
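Before stating the theorem, the definition above admits a quick numerical sanity check: for unit-variance complex Gaussian entries, $\mathbb{E}[\mathbf{W}]=m\mathbf{I}_n+\mathbf{M}^\dagger\mathbf{M}$, so $\mathbb{E}[\mathrm{tr}(\mathbf{W})]=mn+\mu$ with $\mu=\mathrm{tr}(\mathbf{M}^\dagger\mathbf{M})$. A minimal Monte Carlo sketch (the parameter values are arbitrary illustrative choices):

```python
import random

random.seed(7)
m, n, mu = 4, 3, 2.0  # degrees of freedom, matrix size, tr(M^dag M)

def sample_trace():
    """One draw of tr(X^dag X) = sum_ij |X_ij|^2, where X is m x n with
    unit-variance complex Gaussian entries and a rank-1 mean matrix M
    whose only non-zero entry is M[0][0] = sqrt(mu)."""
    sigma = 0.5 ** 0.5  # real and imaginary parts each carry variance 1/2
    total = 0.0
    for i in range(m):
        for j in range(n):
            mean = mu ** 0.5 if (i == 0 and j == 0) else 0.0
            re = mean + random.gauss(0.0, sigma)
            im = random.gauss(0.0, sigma)
            total += re * re + im * im
    return total

est = sum(sample_trace() for _ in range(20000)) / 20000
print(est)  # close to m*n + mu = 14
```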
\begin{thm} The joint density of the ordered eigenvalues $0<\lambda_1\leq \lambda_2\leq\ldots\leq\lambda_n$ of $\mathbf{W}$ is given by \cite{James} \begin{align} \label{eig} f_{\boldsymbol{\Lambda}}\left(\lambda_1,\lambda_2,\ldots,\lambda_n\right)=K_{m,n}e^{-\rm{tr}\left(\mathbf{M}^\dagger\mathbf{M}\right)}&\Delta_n^2(\boldsymbol{\lambda})\prod_{i=1}^n\lambda_i^{m-n}e^{-\lambda_i} {}_0\widetilde F_1\left(m;\boldsymbol{\Lambda},\mathbf{M}^\dagger\mathbf{M}\right) \end{align} where \begin{equation*} K_{m,n}=\frac{1}{\prod_{i=1}^n\Gamma(m-i+1)\Gamma(n-i+1)}, \end{equation*} $\boldsymbol{\Lambda}=\rm{diag}(\boldsymbol{\lambda})$ with $\boldsymbol{\lambda}=\left(\lambda_1,\lambda_2,\ldots,\lambda_n\right)$ and $\Delta_n(\boldsymbol{\lambda})=\prod_{1\leq i<k\leq n}\left(\lambda_k-\lambda_i\right)$. Moreover, ${}_0\widetilde F_1(\cdot;\cdot,\cdot)$ denotes the complex hypergeometric function of two matrix arguments. For Hermitian positive definite $n\times n$ matrices $\mathbf{A}$ and $\mathbf{B}$, we have \begin{align*} {}_0\widetilde F_1\left(m;\mathbf{A},\mathbf{B}\right)=\sum_{k=0}^\infty \frac{1}{k!} \sum_{\kappa} \frac{C_\kappa(\mathbf{A})C_\kappa(\mathbf{B})}{[m]_\kappa C_\kappa(\mathbf{I}_n)}. \end{align*} \end{thm} Now let us focus on simplifying the hypergeometric function in the case of a rank-$1$ mean matrix (i.e., the matrix $\mathbf{M}$ is rank-$1$). To this end, we expand the hypergeometric function term in (\ref{eig}) to yield \begin{equation} \label{zonal} {}_0\widetilde F_1\left(m;\boldsymbol{\Lambda},\mathbf{M}^\dagger\mathbf{M}\right)= \sum_{k=0}^\infty\frac{1}{k!}\sum_{\kappa}\frac{C_\kappa(\boldsymbol{\Lambda})C_\kappa(\mathbf{M}^\dagger\mathbf{M})}{[m]_\kappa C_\kappa(\mathbf{I}_n)}. \end{equation} Since $\mathbf{M}$ is of rank one, clearly the product $\mathbf{M}^\dagger\mathbf{M}$ contains only one non-zero eigenvalue (say $\mu$). 
This, along with \cite[Corollary 7.2.4]{Robb}, in turn gives that $C_\kappa(\mathbf{M}^\dagger\mathbf{M})=0$ for all partitions of $k$ having more than one non-zero part. Therefore, only partitions of the form $(k,0,\ldots,0)$, which we simply denote by $k$, contribute to the summation. In light of this observation, we can simplify (\ref{zonal}) to obtain \begin{equation} \label{zonal_simple} {}_0\widetilde F_1\left(m;\boldsymbol{\Lambda},\mathbf{M}^\dagger\mathbf{M}\right)= \sum_{k=0}^\infty\frac{1}{(m)_k k!}\left(\prod_{i=0}^{k-1}\frac{1+i}{n+i}\right)C_k(\boldsymbol{\Lambda})\mu^k \end{equation} where we have used the fact that $C_k(\mathbf{I}_n)=\prod_{i=0}^{k-1}\frac{n+i}{1+i}$. Now following Wang \cite{Wang}, we have \begin{align} \label{taylor} \frac{1}{k!}\left(\prod_{i=0}^{k-1}(1+i)\right)C_k(\boldsymbol{\Lambda})=\frac{1}{2\pi \mathrm{i}}\oint_0\prod_{j=1}^n \frac{1}{\left(1-z\lambda_j\right)}\frac{{\rm d}z}{z^{k+1}} \end{align} where the contour is taken to be a small circle around $0$ with the points $\frac{1}{\lambda_i}\;(i=1,2,\ldots,n)$ lying outside the contour and $\mathrm{i}=\sqrt{-1}$. Substituting (\ref{taylor}) back into (\ref{zonal_simple}) and exchanging the summation and the integration then gives \begin{align*} {}_0\widetilde F_1\left(m;\boldsymbol{\Lambda},\mathbf{M}^\dagger\mathbf{M}\right)=\frac{1}{2\pi \mathrm{i}}\oint_0\prod_{j=1}^n \frac{1}{\left(1-z\lambda_j\right)} \sum_{k=0}^\infty\frac{1}{(m)_k(n)_k}\frac{\mu^k}{z^{k+1}} {\rm d}z \end{align*} where we have used the relation $\prod_{i=0}^{k-1}(n+i)=(n)_k$.
Setting $N=n-1$, we can rewrite the above equation as \begin{align*} {}_0\widetilde F_1\left(m;\boldsymbol{\Lambda},\mathbf{M}^\dagger\mathbf{M}\right) & =\frac{N!(m-1)!}{2\pi \mathrm{i}}\oint_0\prod_{j=1}^n \frac{1}{\left(1-z\lambda_j\right)} \sum_{k=N}^\infty\frac{1}{\Gamma(m+k-N)k!}\frac{\mu^{k-N}}{z^{k-N+1}} {\rm d}z\nonumber\\ &=\frac{N!(m-1)!}{(m-n)!}\frac{1}{2\pi \mathrm{i}}\oint_0\prod_{j=1}^n \frac{1}{\left(1-z\lambda_j\right)}\left\{ \sum_{k=0}^\infty\frac{1}{k!(m-N)_k}\frac{\mu^{k-N}}{z^{k-N+1}} \right.\nonumber\\ & \hspace{5.5cm} -\left.\sum_{k=0}^{N-1}\frac{1}{k!(m-N)_k}\frac{\mu^{k-N}}{z^{k-N+1}} \right\}{\rm d}z. \end{align*} Clearly, the second summation evaluates to zero, since the corresponding integrand is an analytic function of $z$. Therefore, we can further simplify the above equation to yield \begin{align*} {}_0\widetilde F_1\left(m;\boldsymbol{\Lambda},\mathbf{M}^\dagger\mathbf{M}\right) &=\frac{N!(m-1)!}{(m-n)!\;\mu^N}\frac{1}{2\pi \mathrm{i}}\oint_0\prod_{j=1}^n \frac{1}{\left(1-z\lambda_j\right)}z^{N-1} {}_0F_1\left(m-N;\frac{\mu}{z}\right){\rm d}z, \end{align*} from which, after the change of variable $z\to 1/s$, we obtain \begin{align} \label{hyp_simp} {}_0\widetilde F_1\left(m;\boldsymbol{\Lambda},\mathbf{M}^\dagger\mathbf{M}\right)=\frac{N!(m-1)!}{(m-n)!\;\mu^N}\frac{1}{2\pi \mathrm{i}}\oint_\infty\prod_{j=1}^n \frac{1}{\left(s-\lambda_j\right)} {}_0F_1\left(m-N;\mu s\right){\rm d}s \end{align} where all the $\lambda_i$'s lie inside the contour, while the origin lies outside it. Finally, using (\ref{hyp_simp}) in (\ref{eig}) gives the new contour integral representation of the joint p.d.f.
of $\lambda_1,\lambda_2,\ldots,\lambda_n$ as \begin{align*} f_{\boldsymbol{\Lambda}}\left(\lambda_1,\lambda_2,\ldots,\lambda_n\right) =K_{m,n}\frac{N!(m-1)!}{(m-n)!} \frac{e^{-\mu}}{\mu^N} \frac{1}{2\pi \mathrm{i}}\oint_\infty {}_0F_1\left(m-N;\mu s\right) \prod_{j=1}^n \frac{\lambda_j^{m-n}}{\left(s-\lambda_j\right)}e^{-\lambda_j} \Delta_n^2(\boldsymbol{\lambda}){\rm d}s. \end{align*} One can evaluate the above contour integral to obtain the following new representation for the distribution of the eigenvalues of $\mathbf{W}$. \begin{cor} Let $\mathbf{W}\sim \mathcal{W}_n\left(m,\mathbf{I}_n,\mathbf{M}^\dagger\mathbf{M}\right)$, where $\mathbf{M}$ is rank-$1$ and $\mathrm{tr}(\mathbf{M}^\dagger\mathbf{M})=\mu$. Then the joint density of the eigenvalues of $\mathbf{W}$ is given by \begin{align} \label{newden} &f_{\boldsymbol{\Lambda}}\left(\lambda_1,\lambda_2,\ldots,\lambda_n\right)=\mathcal{K}_{n,\alpha} \frac{e^{-\mu}}{\mu^{n-1}} \prod_{i=1}^n\lambda_i^{\alpha}e^{-\lambda_i} \Delta_n^2(\boldsymbol{\lambda}) \sum_{k=1}^n\frac{{}_0F_1\left(\alpha+1;\mu \lambda_k\right)}{\displaystyle\prod_{\substack{i=1\\i\neq k}}^n\left(\lambda_k-\lambda_i\right)} \end{align} where \begin{equation*} \mathcal{K}_{n,\alpha}=K_{n+\alpha,n}\frac{(n-1)!(n+\alpha-1)!}{\alpha!} \end{equation*} with $\alpha=m-n$. \end{cor} Let us now see how to derive a new expression for the c.d.f. of the minimum eigenvalue of a complex non-central Wishart matrix with a rank-$1$ mean starting from the joint p.d.f. given above. Before proceeding, it is worth mentioning the following preliminary results and definitions. \begin{defn} For $\rho>-1$, the generalized Laguerre polynomial of degree $M$, $L^{(\rho)}_M(z)$, is given by \cite{Szego} \begin{equation} \label{lagdef} L^{(\rho)}_M(z)=\frac{(\rho+1)_M}{M!}\sum_{j=0}^{M}\frac{(-M)_j}{(\rho+1)_j}\frac{z^j}{j!}, \end{equation} with the $k$th derivative satisfying \begin{equation} \label{lagderi} \frac{d^k}{dz^k}L^{(\rho)}_M(z)=(-1)^kL^{(\rho+k)}_{M-k}(z).
\end{equation} Also $L^{(\rho)}_M(z)$ satisfies the following contiguous relationship \begin{equation} \label{contg} L^{(\rho-1)}_M(z)=L^{(\rho)}_M(z)-L^{(\rho)}_{M-1}(z). \end{equation} \end{defn} \begin{defn} For a negative integer $-M$, we have the following relation \begin{equation} \label{poch} (-M)_j=\left\{\begin{array}{cc} (-1)^j\frac{M!}{(M-j)!}& \text{for $j\leq M$}\\ 0 & \text{for $j>M$}. \end{array}\right. \end{equation} \end{defn} \begin{lem}\label{lag} Following \cite[Eq. 7.414.7]{Grad} and \cite[Corollary 2.2.3]{Askey}, for $j,k\in\{0,1,2,\cdots\}$, we can establish \begin{equation*} \int_0^\infty x^j e^{-x} L^{(k)}_M(x){\rm d} x=\frac{j!}{M!}(k-j)_M. \end{equation*} \end{lem} The following compact notation has been used to represent the determinant of an $M\times M$ block matrix: \begin{equation*} \det\left[a_{i,1}\;\;\; b_{i,j-1}\right]_{\substack{i=1,2,\cdots,M\\ j=2,3,\cdots,M}}=\left|\begin{array}{ccccc} a_{1,1} & b_{1,1} & b_{1,2}& \cdots & b_{1,M-1}\\ a_{2,1} & b_{2,1} & b_{2,2}& \cdots & b_{2,M-1}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ a_{M,1} & b_{M,1} & b_{M,2}& \cdots & b_{M,M-1} \end{array} \right|. \end{equation*} \section{Cumulative Distribution of the Minimum Eigenvalue} Here we derive a new expression for the c.d.f. of the minimum eigenvalue $\lambda_{\min}$ of $\mathbf{W}$ with a rank-$1$ mean. By definition, the c.d.f. of $\lambda_{\min}$ is given by \begin{equation} \label{cdf} F_{\lambda_{\min}}(x)=\Pr\left(\lambda_1<x\right)=1-\Pr\left(\lambda_1\geq x\right) \end{equation} where \begin{equation} \label{multi_integral_one} \Pr\left(\lambda_1\geq x\right)=\int_{x\leq \lambda_1\leq \lambda_2\leq \cdots\leq \lambda_n<\infty} f_{\boldsymbol{\Lambda}}\left(\lambda_1,\lambda_2,\ldots,\lambda_n\right) {\rm d}\lambda_1{\rm d}\lambda_2\cdots{\rm d}\lambda_n. \end{equation} The following theorem gives the c.d.f. of $\lambda_{\min}$. 
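Before proceeding to the theorem, the Laguerre facts just listed — the series definition (\ref{lagdef}), the contiguous relation (\ref{contg}) and Lemma \ref{lag} — can be checked numerically directly from the definitions, using the exact moments $\int_0^\infty x^p e^{-x}\,{\rm d}x=p!$. This is an independent sketch, not part of the derivation:

```python
from math import factorial

def poch(a, j):
    """Pochhammer symbol (a)_j = a (a+1) ... (a+j-1), with (a)_0 = 1."""
    r = 1
    for t in range(j):
        r *= a + t
    return r

def laguerre_coeffs(M, rho):
    """Coefficients c_j of L_M^{(rho)}(z) = sum_j c_j z^j from the
    series definition (lagdef)."""
    pref = poch(rho + 1, M) / factorial(M)
    return [pref * poch(-M, j) / (poch(rho + 1, j) * factorial(j))
            for j in range(M + 1)]

def laguerre(M, rho, z):
    return sum(c * z**j for j, c in enumerate(laguerre_coeffs(M, rho)))

# Contiguous relation (contg): L_M^{(rho-1)} = L_M^{(rho)} - L_{M-1}^{(rho)}
z, M, rho = 1.7, 5, 3
assert abs(laguerre(M, rho - 1, z)
           - (laguerre(M, rho, z) - laguerre(M - 1, rho, z))) < 1e-10

# Lemma: int_0^inf x^j e^{-x} L_M^{(k)}(x) dx = (j!/M!) (k-j)_M, evaluated
# exactly term by term via int_0^inf x^p e^{-x} dx = p!
def lemma_lhs(j, k, M):
    return sum(c * factorial(j + p) for p, c in enumerate(laguerre_coeffs(M, k)))

def lemma_rhs(j, k, M):
    return factorial(j) / factorial(M) * poch(k - j, M)

assert abs(lemma_lhs(2, 1, 3) - lemma_rhs(2, 1, 3)) < 1e-9  # both vanish
assert abs(lemma_lhs(1, 0, 1) - lemma_rhs(1, 0, 1)) < 1e-9  # both equal -1
print("checks passed")
```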
\begin{thm} Let $\mathbf{W}\sim \mathcal{W}_n\left(m,\mathbf{I}_n,\mathbf{M}^\dagger\mathbf{M}\right)$, where $\mathbf{M}$ is rank-$1$ and $\mathrm{tr}(\mathbf{M}^\dagger\mathbf{M})=\mu$. Then the c.d.f. of the minimum eigenvalue of $\mathbf{W}$ is given by \begin{align} \label{cdffinal} F_{\lambda_{\min}}(x)& =1-(n+\alpha-1)!\;e^{-nx}\det\left[(-\mu)^{i-1} \psi_i(\mu, x) \;\;\; L_{n+i-j}^{(j-2)}(-x)\right]_{\substack{i=1,2,\cdots,\alpha+1\\j=2,3,\cdots,\alpha+1}} \end{align} where $\alpha=m-n$, \begin{align*} \psi_i(\mu,x)=\frac{1}{(\alpha+i+n-2)!}\sum_{k=0}^\infty \frac{(x\mu)^k{}_1F_1\left(\alpha+k;\alpha+n+i+k-1;-\mu\right)}{k!(\alpha+i+n-1)_k}, \end{align*} and ${}_1F_1(a;c;z)$ is the confluent hypergeometric function of the first kind. \end{thm} {\bf{Proof:}} Since the joint p.d.f. is symmetric in $\lambda_1,\lambda_2,\cdots,\lambda_n$, we can write (\ref{multi_integral_one}) as \begin{equation*} \Pr\left(\lambda_1\geq x\right)=\frac{1}{n!}\int_{[x,\infty)^n}f_{\boldsymbol{\Lambda}}\left(\lambda_1,\lambda_2,\ldots,\lambda_n\right) {\rm d}\lambda_1{\rm d}\lambda_2\cdots{\rm d}\lambda_n \end{equation*} from which we obtain upon using the variable transformation $\lambda_i\to\lambda_i+x$, $i=1,2,\ldots,n$, \begin{equation*} \Pr\left(\lambda_1\geq x\right)=\frac{1}{n!}\int_{[0,\infty)^n}f_{\boldsymbol{\Lambda}}\left(\lambda_1+x,\lambda_2+x,\ldots,\lambda_n+x\right) {\rm d}\lambda_1{\rm d}\lambda_2\cdots{\rm d}\lambda_n. \end{equation*} Now it is convenient to use (\ref{newden}) to arrive at \begin{align} \label{eqsum} \Pr\left(\lambda_1\geq x\right)= \frac{\mathcal{K}_{n,\alpha}}{n!}\frac{e^{-\mu-nx}}{\mu^{n-1}} \sum_{k=1}^n \int_{[0,\infty)^n}& \frac{{}_0F_1\left(\alpha+1;\mu (\lambda_k+x)\right)}{\prod_{\substack{i=1\\i\neq k}}^n\left(\lambda_k-\lambda_i\right)} \nonumber\\ & \qquad \times \prod_{i=1}^n(\lambda_i+x)^{\alpha}e^{-\lambda_i} \Delta_n^2(\boldsymbol{\lambda}) {\rm d}\lambda_1{\rm d}\lambda_2\cdots{\rm d}\lambda_n.
\end{align} Since each term in the above summation contributes the same amount, we may write (\ref{eqsum}) as \begin{align*} \Pr\left(\lambda_1\geq x\right)= \frac{\mathcal{K}_{n,\alpha}}{(n-1)!}\frac{e^{-\mu-nx}}{\mu^{n-1}} \int_{[0,\infty)^n} & \frac{{}_0F_1\left(\alpha+1;\mu (\lambda_1+x)\right)}{\prod_{i=2}^n\left(\lambda_1-\lambda_i\right)} \nonumber\\ & \qquad \qquad\times \prod_{i=1}^n(\lambda_i+x)^{\alpha}e^{-\lambda_i} \Delta_n^2(\boldsymbol{\lambda}) {\rm d}\lambda_1{\rm d}\lambda_2\cdots{\rm d}\lambda_n. \end{align*} Noting the fact that \begin{equation*} \Delta_n^2(\boldsymbol{\lambda})=\prod_{i=2}^n(\lambda_1-\lambda_i)^2 \Delta_{n-1}^2(\boldsymbol{\lambda}) \end{equation*} with $\Delta_{n-1}^2(\boldsymbol{\lambda})=\prod_{2\leq k<j\leq n}(\lambda_j-\lambda_k)^2$, we can rewrite the above multiple integral after some algebraic manipulation as \begin{align*} \Pr\left(\lambda_1\geq x\right)=& \frac{\mathcal{K}_{n,\alpha}}{(n-1)!}\frac{e^{-\mu-nx}}{\mu^{n-1}} \int_{[0,\infty)} {}_0F_1\left(\alpha+1;\mu (\lambda_1+x)\right) (\lambda_1+x)^\alpha e^{-\lambda_1}\nonumber\\ & \qquad \times \left(\int_{[0,\infty)^{n-1}}\prod_{i=2}^n(\lambda_1-\lambda_i)(\lambda_i+x)^{\alpha}e^{-\lambda_i} \Delta_{n-1}^2(\boldsymbol{\lambda}) {\rm d}\lambda_2\cdots{\rm d}\lambda_n\right) \; {\rm d}\lambda_1. \end{align*} Now it is convenient to relabel the variables as $\lambda_1=\lambda$ and $\lambda_i=y_{i-1}, i=2,3,\cdots,n$, to obtain \begin{align} \label{cdfin} \Pr\left(\lambda_1\geq x\right)= \frac{\mathcal{K}_{n,\alpha}}{(n-1)!}\frac{e^{-\mu-nx}}{\mu^{n-1}} \int_{[0,\infty)}& {}_0F_1\left(\alpha+1;\mu (\lambda+x)\right) (\lambda+x)^\alpha e^{-\lambda} \nonumber\\ & \qquad \qquad \quad \times (-1)^{(n-1)\alpha}Q_{n-1}\left(\lambda,-x,\alpha\right) {\rm d}\lambda \end{align} where \begin{equation} \label{Qintdef} Q_{n}\left(a,b,\alpha\right):=\int_{[0,\infty)^n}\prod_{i=1}^{n}(a-y_i)(b-y_i)^{\alpha}e^{-y_i} \Delta_{n}^2(\boldsymbol{y}) {\rm d}y_1{\rm d} y_2\cdots{\rm d}y_n. 
\end{equation} As shown in the Appendix, we can solve the above multiple integral in closed form giving \begin{align} \label{q1} Q_{n}\left(a,b,\alpha\right) =\frac{\overline{\mathcal{K}}_{n,\alpha}}{(b-a)^\alpha} \det\left[L_{n+i-1}^{(0)}(a)\;\;\; L_{n+i+1-j}^{(j-2)}(b)\right]_{\substack{i=1,2,\cdots,\alpha+1\\j=2,3,\cdots,\alpha+1}} \end{align} where \begin{equation*} \overline{\mathcal{K}}_{n,\alpha}=(-1)^{n+\alpha(n+\alpha)}\frac{\prod_{i=1}^{\alpha+1}(n+i-1)!\prod_{i=0}^{n-1}i!(i+1)!}{\prod_{i=1}^{\alpha-1}i!}. \end{equation*} Therefore, using (\ref{q1}) in (\ref{cdfin}) with some algebraic manipulation, we have \begin{align*} \Pr\left(\lambda_1\geq x\right)=(-1)^{n+1}\frac{(n+\alpha-1)!}{\alpha!}\frac{e^{-\mu-nx}}{\mu^{n-1}}&\int_0^\infty {}_0F_1\left(\alpha+1;\mu (\lambda+x)\right)e^{-\lambda}\nonumber\\ & \times \det\left[ L_{n+i-2}^{(0)}(\lambda)\;\;\; L_{n+i-j}^{(j-2)}(-x)\right]_{\substack{i=1,2,\cdots,\alpha+1\\j=2,3,\cdots,\alpha+1}} {\rm d}\lambda. \end{align*} Observing that only the first column of the determinant contains the variable $\lambda$, we can further simplify the above integral to yield \begin{align} \label{mineigsplit} \Pr\left(\lambda_1\geq x\right)&=(-1)^{n+1}\frac{(n+\alpha-1)!}{\alpha!}\frac{e^{-\mu-nx}}{\mu^{n-1}} \det\left[\zeta_i(x)\;\;\; L_{n+i-j}^{(j-2)}(-x)\right]_{\substack{i=1,2,\cdots,\alpha+1\\j=2,3,\cdots,\alpha+1}} \end{align} where \begin{align*} \zeta_i(x)=\int_0^\infty {}_0F_1\left(\alpha+1;\mu (\lambda+x)\right)e^{-\lambda} L_{n+i-2}^{(0)}(\lambda){\rm d}\lambda. \end{align*} The remaining task is to evaluate the above integral, which does not seem to have a simple closed-form solution. Therefore, we expand the hypergeometric function with its equivalent power series and use some algebraic manipulation to arrive at \begin{align*} \zeta_i(x)=\sum_{l=0}^\infty\sum_{k=0}^\infty \frac{\mu^{l+k} x^{k}}{k!l!(\alpha+1)_{l+k}}\int_0^\infty \lambda^l e^{-\lambda} L_{n+i-2}^{(0)}(\lambda){\rm d}\lambda. 
\end{align*} This integral can be solved with the help of Lemma \ref{lag} to yield \begin{align*} \zeta_i(x)=\sum_{l=0}^\infty\sum_{k=0}^\infty \frac{\mu^{l+k} x^{k}}{k!(\alpha+1)_{l+k}}\frac{(-l)_{n+i-2}}{(1)_{n+i-2}}. \end{align*} Following (\ref{poch}), we observe that the quantity $(-l)_{n+i-2}$ is non-zero only when $l\geq n+i-2$. Therefore, we shift the summation index $l$ with some algebraic manipulation to yield \begin{align} \label{zetaans} \zeta_i(x)=\frac{(-1)^{n+i}\mu^{n+i-2}\alpha !}{(n+i+\alpha-2)!} \sum_{k=0}^\infty \frac{(x\mu)^k}{k! (n+i+\alpha-1)_k} {}_1F_1(n+i-1;n+\alpha+i+k-1;\mu) \end{align} where we have used the relation \begin{align*} (\alpha+1)_{k+i+n+l-2}=\frac{(\alpha+i+n-2)!}{\alpha!}(\alpha+i+n+k-1)_l (\alpha+i+n-1)_k. \end{align*} Substituting (\ref{zetaans}) back into (\ref{mineigsplit}) with some algebra then gives \begin{align} \label{hypeq} \Pr\left(\lambda_1\geq x\right) & = (n+\alpha-1)!e^{-nx}\det\left[(-\mu)^{i-1}\psi_i(\mu,x) \;\;\; L_{n+i-j}^{(j-2)}(-x)\right]_{\substack{i=1,2,\cdots,\alpha+1\\j=2,3,\cdots,\alpha+1}} \end{align} where we have used the Kummer relation \cite{Erdelyi} \begin{equation*} {}_1F_1(a;c;z)=e^z {}_1F_1(c-a;c;-z). \end{equation*} Finally, using (\ref{hypeq}) in (\ref{cdf}) gives the c.d.f. of the minimum eigenvalue, which concludes the proof. \begin{rk} Alternatively, we can express $\psi_i(\mu, x)$ as \begin{align} \psi_i(\mu,x)=\frac{e^{-\mu}}{(\alpha+i+n-2)!}\Phi_3\left(n+i-1,n+\alpha+i-1;\mu,x\mu\right) \end{align} where \begin{equation*} \Phi_3(a,c;x,y)=\sum_{i=0}^\infty\sum_{j=0}^\infty \frac{(a)_i}{(c)_{i+j} i!j!} x^iy^j \end{equation*} is the confluent hypergeometric function of two variables \cite[Eq. 5.7.1.23]{Erdelyi}.
\end{rk} In the special case of $\alpha=0$ (i.e., $m=n$), (\ref{cdffinal}) admits the following simple form \begin{align} \label{cdfjmva} F_{\lambda_{\min}}(x)& =1-e^{-nx}\sum_{k=0}^\infty \frac{(x\mu)^k}{k!(n)_k}{}_1F_1(k;n+k;-\mu)\nonumber\\ &=1-e^{-\mu-nx}\Phi_3\left(n,n;\mu,x\mu\right) \end{align} which coincides with what we have derived in \cite[Eq. 32/39]{Pratha} purely based on a matrix integral approach.\footnote{Since the results given in \cite{Pratha} are valid for an arbitrary covariance matrix with $\alpha=0$, one has to assume the identity covariance to obtain the above results.} In addition, it is not difficult to show that, for $\mu=0$, (\ref{cdffinal}) simplifies to \begin{equation} F_{\lambda_{\min}}(x) =1-e^{-nx}\det\left[L_{n+i-j}^{(j-1)}(-x)\right]_{i,j=1,2,\cdots,\alpha}. \end{equation} \begin{rk} Although we can show that the microscopic limit of the above expression (\ref{cdffinal}) for the c.d.f. of the minimum eigenvalue takes the form of a determinant of size $\alpha$ involving the Bessel kernel, we omit a detailed analysis here as this particular asymptotic limit is well known in the literature \cite{Arous}. \end{rk} Having analyzed the behavior of the minimum eigenvalue of $\mathbf{W}$, let us now move on to determine the distribution of the random variable $\frac{\rm{tr}(\mathbf{W})}{\lambda_1}$. \section{The Distribution of $\frac{\rm{tr}(\mathbf{W})}{\lambda_1}$} Here we study the distribution of the quantity \begin{equation} \label{vint} V=\frac{\rm{tr}(\mathbf{W})}{\lambda_1}=\frac{\sum_{j=1}^n\lambda_j}{\lambda_1}. \end{equation} It turns out that this quantity is intimately related to the distribution of the minimum eigenvalue of $\mathbf{W}$ given the constraint $\rm{tr}(\mathbf{W})=1$ (i.e., fixed trace) \cite{Profchen}. To be precise, the latter is distributed as $\frac{1}{V}$. Apart from that, the most notable application of the distribution of $V$ is the so-called ``smoothed analysis of condition numbers'' \cite{Spielman}.
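As an independent numerical cross-check of the two equivalent series in (\ref{cdfjmva}), which are tied together by the Kummer transformation, the following pure-Python sketch truncates both expansions at a fixed order and compares them at an arbitrary parameter point; the helper routines below are our own naive series implementations, not library calls.

```python
import math

def poch(a, k):
    """Rising factorial (Pochhammer symbol) (a)_k."""
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def hyp1f1(a, c, z, terms=80):
    """Truncated power series for the confluent hypergeometric 1F1(a;c;z)."""
    return sum(poch(a, k) / poch(c, k) * z**k / math.factorial(k)
               for k in range(terms))

def phi3(a, c, x, y, terms=60):
    """Truncated double series Phi_3(a,c;x,y) = sum (a)_i x^i y^j / ((c)_{i+j} i! j!)."""
    return sum(poch(a, i) / poch(c, i + j) * x**i * y**j
               / (math.factorial(i) * math.factorial(j))
               for i in range(terms) for j in range(terms))

n, mu, x = 3, 1.2, 0.4  # arbitrary test point in the alpha = 0 (m = n) case

# first line of the c.d.f.: sum over k of (x*mu)^k / (k! (n)_k) * 1F1(k; n+k; -mu)
series_1f1 = sum((x * mu)**k / (math.factorial(k) * poch(n, k))
                 * hyp1f1(k, n + k, -mu) for k in range(60))
# second line: exp(-mu) * Phi_3(n, n; mu, x*mu)
series_phi3 = math.exp(-mu) * phi3(n, n, mu, x * mu)
print(series_1f1, series_phi3)  # the two representations agree
```

Both truncations agree to machine precision, and the resulting value $1-e^{-nx}\times(\text{series})$ lies in $[0,1]$, as a c.d.f. value must.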
For a given function $g:\mathbb{C}^{m\times n}\to \mathbb{R}_+$ (e.g., the $2$-norm condition number), let $\mathbf{A}\sim \mathcal{CN}_{m,n}(\mathbf{M}, \sigma^2\mathbf{I}_m\otimes \mathbf{I}_n)$ with $0<\sigma\leq 1$, where $\mathbf{M}\in\mathbb{C}^{m\times n}$ is arbitrary subject to either $\rm{tr}\left(\mathbf{M}^\dagger\mathbf{M}\right)=1$ or $||\mathbf{M}||_2\leq \sqrt{n}$. Under the smoothed analysis framework, a typical problem is to study the behavior of \cite{Mario, Sankar, Cucker, Burg} \begin{equation} \label{smooth} \sup_{\mathbf{M}}\mathrm{E}_{\mathbf{A}}\left(g(\mathbf{A})\right) \end{equation} where $\rm{E}_{\mathbf{A}}(\cdot)$ and $||\cdot||_2$ denote the mathematical expectation with respect to $\mathbf{A}$ and the $2$-norm, respectively. For mathematical tractability, it is sometimes assumed that the matrix $\mathbf{M}$ is of rank one \cite{Mario}. Bounds on the quantity (\ref{smooth}) have been derived in the literature when $g(\mathbf{A})$ defines various condition numbers (see, e.g., \cite{Sankar, Cucker, Burg} and references therein). Among these condition numbers, the one introduced by James Demmel \cite{Demmel} plays an important role in understanding the behavior of other condition numbers arising in different contexts. For a rectangular matrix $\mathbf{X}\in\mathbb{C}^{m\times n}$, the function $g$ defined by \cite{Cucker} \begin{equation} \label{dem_mat} g(\mathbf{X})=||\mathbf{X}||_F||\mathbf{X}^*||_2 \end{equation} with $||\cdot||_F$ denoting the Frobenius norm and $\mathbf{X}^*$ denoting the Moore-Penrose inverse, gives the Demmel condition number\footnote{This is the extension of the condition number definition given in \cite{Demmel} to $m\times n$ rectangular matrices.}.
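To make (\ref{dem_mat}) concrete, the following pure-Python sketch evaluates both norms for a fixed full-rank $3\times 2$ complex matrix, forming the Moore-Penrose inverse explicitly as $\mathbf{X}^*=(\mathbf{X}^\dagger\mathbf{X})^{-1}\mathbf{X}^\dagger$ (valid under full column rank); all routines are our own illustrations. The final comparison uses the fact that $||\mathbf{X}||_F^2\,||\mathbf{X}^*||_2^2=\mathrm{tr}(\mathbf{W})/\lambda_{\min}(\mathbf{W})$ with $\mathbf{W}=\mathbf{X}^\dagger\mathbf{X}$.

```python
import math

X = [[1 + 2j, 0.5 - 1j],
     [0.3 + 0.4j, 2 - 0.5j],
     [-1 + 0.2j, 1 + 1j]]  # fixed full-rank 3x2 example

def dagger(A):
    """Conjugate transpose of a matrix stored as a list of rows."""
    return [[A[i][j].conjugate() for i in range(len(A))] for j in range(len(A[0]))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def eig2_hermitian(H):
    """Eigenvalues of a 2x2 Hermitian matrix via trace and determinant."""
    tr = (H[0][0] + H[1][1]).real
    det = (H[0][0] * H[1][1] - H[0][1] * H[1][0]).real
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr - disc) / 2, (tr + disc) / 2

W = matmul(dagger(X), X)                      # Gram matrix X^dagger X
lam_min, lam_max = eig2_hermitian(W)
fro2 = sum(abs(X[i][j])**2 for i in range(3) for j in range(2))  # ||X||_F^2 = tr(W)

detW = (W[0][0] * W[1][1] - W[0][1] * W[1][0]).real
Winv = [[W[1][1] / detW, -W[0][1] / detW],
        [-W[1][0] / detW, W[0][0] / detW]]
P = matmul(Winv, dagger(X))                   # Moore-Penrose inverse X^*
norm2_sq = eig2_hermitian(matmul(P, dagger(P)))[1]  # ||X^*||_2^2

V = (lam_min + lam_max) / lam_min             # tr(W)/lambda_min
print(fro2 * norm2_sq, V)                     # squared Demmel product equals V
```

The agreement between the two routes (explicit pseudo-inverse versus eigenvalue ratio) illustrates why the minimum eigenvalue of the Gram matrix governs this condition number.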
In particular, for the matrix of our interest $\mathbf{X}\sim \mathcal{CN}_{m,n}(\mathbf{M},\mathbf{I}_m\otimes \mathbf{I}_n)$ with $m\geq n$, the square of (\ref{dem_mat}) specializes to \begin{equation*} g^2(\mathbf{X})=\frac{\sum_{j=1}^n\lambda_j}{\lambda_1}=V \end{equation*} where $\lambda_1\leq \lambda_2\leq\cdots\leq \lambda_n$ are the ordered eigenvalues of $\mathbf{W}=\mathbf{X}^\dagger\mathbf{X}$, since $||\mathbf{X}||_F^2=\sum_{j=1}^n\lambda_j$ and $||\mathbf{X}^*||_2=\lambda_1^{-1/2}$. In light of these developments, we can clearly see that the distribution of $V$ is of great importance in performing the smoothed analysis on the Demmel condition number. Having understood the importance of the variable $V$ in (\ref{vint}), we now focus on deriving its p.d.f. when the matrix $\mathbf{W}$ has a rank-$1$ mean. For this purpose, here we adopt an approach based on the m.g.f. of $V$. We have the following key result. \begin{thm} Let $\mathbf{W}\sim\mathcal{W}_n\left(m,\mathbf{I}_n,\mathbf{M}^\dagger\mathbf{M}\right)$, where $\mathbf{M}$ is rank-$1$ and $\mathrm{tr}(\mathbf{M}^\dagger\mathbf{M})=\mu$. Then the p.d.f. of $V$ is given by \begin{align} \label{pdfexact} f^{(\alpha)}_V(v) =(n-1)! \frac{e^{-\mu}}{v^{n(n+\alpha)}} & \mathcal{L}^{-1}\Biggl\{ \frac{e^{-ns}}{s^{(n-1)(n+\alpha+1)}} \det\left[\left(-\frac{\mu}{ sv}\right)^{i-1} \phi_i(\mu,s,v) \;\;\; L^{(j)}_{n+i-1-j}(-s) \right]_{\substack{i=1,2,\cdots,\alpha+1\\ j=2,3,\cdots,\alpha+1}} \Biggr\} \end{align} where \begin{align*} \phi_i(\mu,s,v)&= \sum_{k=0}^\infty \frac{a_i(k)}{k!} \left(\frac{\mu}{s v}\right)^k {}_1F_1\left(n^2+n\alpha+k+i-1;n+i+k+\alpha-1;\frac{\mu}{v}\right)\\ a_i(k)&=(n+i-1)\frac{(n^2+n\alpha+i-2)!}{(n+i+\alpha-2)!}\frac{(n+i)_k(n+i-2)_k (n^2+n\alpha+i-1)_k}{(n+i-1)_k(n+i+\alpha-1)_k} \end{align*} and $\mathcal{L}^{-1}(\cdot)$ denotes the inverse Laplace transform. \end{thm} {\bf{Proof:}} By definition, the m.g.f.
of $V$ can be written as \begin{align*} \mathfrak{M}_V(s)={\rm{E}}_{\boldsymbol{\Lambda}}\left(e^{-s\frac{\sum_{j=1}^n\lambda_j}{\lambda_1}}\right),\;\;\; \Re(s)\geq 0, \end{align*} which has the following multiple integral representation \begin{align*} \mathfrak{M}_V(s)=e^{-s}\int_{0\leq \lambda_1\leq\lambda_2\leq\cdots\leq \lambda_n<\infty} e^{-s\frac{\sum_{j=2}^n\lambda_j} {\lambda_1}} f_{\boldsymbol{\Lambda}}(\lambda_1,\lambda_2,\cdots,\lambda_n) {\rm d}\lambda_1{\rm d}\lambda_2\cdots {\rm d}\lambda_n. \end{align*} Since the argument of the exponential function is symmetric in $\lambda_2,\cdots,\lambda_n$, it is convenient to introduce the substitution $\lambda_1=x$ and rewrite the multiple integral, keeping the integration with respect to $x$ last, as \begin{align} \label{int_split} \mathfrak{M}_V(s)=e^{-s}\int_0^\infty \int_{x\leq\lambda_2\leq\cdots\leq \lambda_n<\infty} e^{-s\frac{\sum_{j=2}^n\lambda_j} {x}} f_{\boldsymbol{\Lambda}}(x,\lambda_2,\cdots,\lambda_n) {\rm d}\lambda_2\cdots {\rm d}\lambda_n{\rm d}x. \end{align} To be consistent with the above setting, we may restructure the joint p.d.f. of $\boldsymbol{\Lambda}$ given in (\ref{newden}) as \begin{align} \label{res_den} f_{\boldsymbol{\Lambda}}(x,\lambda_2,\cdots,\lambda_n)=\mathcal{K}_{n,\alpha} \frac{e^{-\mu}}{\mu^{n-1}} x^\alpha e^{-x}& \prod_{i=2}^n\lambda_i^{\alpha}e^{-\lambda_i}(x-\lambda_i)^2 \Delta_{n-1}^2(\boldsymbol{\lambda})\nonumber\\ &\hspace{-4mm} \times \left(\frac{{}_0F_1\left(\alpha+1;\mu x\right)}{\displaystyle\prod_{i=2}^n\left(x-\lambda_i\right)}+\sum_{k=2}^n\frac{{}_0F_1\left(\alpha+1;\mu \lambda_k\right)}{(\lambda_k-x)\displaystyle\prod_{\substack{i=2\\i\neq k}}^n\left(\lambda_k-\lambda_i\right)}\right) \end{align} where we have used the decomposition $\Delta_n^2(\boldsymbol{\lambda})=\prod_{i=2}^n(x-\lambda_i)^2 \Delta_{n-1}^2(\boldsymbol{\lambda})$.
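For instance, for $n=3$ (with $\lambda_1=x$), the decomposition used above reads as follows:

```latex
\Delta_3^2(\boldsymbol{\lambda})
  =(x-\lambda_2)^2(x-\lambda_3)^2(\lambda_3-\lambda_2)^2
  =\prod_{i=2}^{3}(x-\lambda_i)^2\,\Delta_2^2(\boldsymbol{\lambda}),
```

where $\Delta_2^2(\boldsymbol{\lambda})=(\lambda_3-\lambda_2)^2$ involves only the remaining variables $\lambda_2,\lambda_3$.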
Now we use (\ref{res_den}) in (\ref{int_split}) with some algebraic manipulation to obtain \begin{equation} \label{mgffact} \mathfrak{M}_V(s)=\mathfrak{P}(s)+\mathfrak{S}(s) \end{equation} where \begin{align} \label{Pdef} \mathfrak{P}(s)&=\mathcal{K}_{n,\alpha} \frac{e^{-\mu-s}}{\mu^{n-1}}\int_0^\infty e^{-x}x^\alpha{}_0F_1\left(\alpha+1;\mu x\right)\nonumber\\ & \qquad \quad\times \left( \int_{x\leq \lambda_2\leq\cdots\leq \lambda_n<\infty} \prod_{i=2}^n e^{-\left(1+\frac{s}{x}\right)\lambda_i}\lambda_i^\alpha (x-\lambda_i)\Delta_{n-1}^2(\boldsymbol{\lambda}) {\rm d}\lambda_2\cdots{\rm d}\lambda_n\right)\; {\rm d}x \end{align} and \begin{align} \label{Sdef} \mathfrak{S}(s)=\mathcal{K}_{n,\alpha} \frac{e^{-\mu-s}}{\mu^{n-1}} \int_0^\infty e^{-x}x^{\alpha}& \Biggl(\int_{x\leq \lambda_2\leq\cdots\leq \lambda_n<\infty} \sum_{k=2}^n\frac{{}_0F_1\left(\alpha+1;\mu \lambda_k\right)}{(\lambda_k-x)\displaystyle\prod_{\substack{i=2\\i\neq k}}^n\left(\lambda_k-\lambda_i\right)}\nonumber\\ & \qquad \qquad \left.\times \prod_{i=2}^n\lambda_i^{\alpha}e^{-\lambda_i}(x-\lambda_i)^2 \Delta_{n-1}^2(\boldsymbol{\lambda}){\rm d}\lambda_2\cdots{\rm d}\lambda_n\right) {\rm d}x. \end{align} The remainder of this proof is focused on evaluating the above two multiple integrals. Since the two integrals do not share a common structure, in what follows, we will evaluate them separately. Let us begin with (\ref{Pdef}). Clearly, the inner multiple integral is symmetric in $\lambda_2,\cdots,\lambda_n$. Thus, we can remove the ordered region of integration to yield \begin{align*} \mathfrak{P}(s)=\frac{\mathcal{K}_{n,\alpha}}{(n-1)!} \frac{e^{-\mu-s}}{\mu^{n-1}}&\int_0^\infty e^{-x}x^\alpha{}_0F_1\left(\alpha+1;\mu x\right)\nonumber\\ & \qquad \times \left(\int_{[x,\infty)^{n-1}} \prod_{i=2}^n e^{-\left(1+\frac{s}{x}\right)\lambda_i}\lambda_i^\alpha (x-\lambda_i) \Delta_{n-1}^2(\boldsymbol{\lambda}){\rm d}\lambda_2\cdots{\rm d}\lambda_n\right)\; {\rm d}x. 
\end{align*} Now we apply the change of variables $y_{i-1}=\frac{(x+s)}{x}(\lambda_i-x),\; i=2,3,\cdots,n$, to the inner $(n-1)$ fold integral with some algebraic manipulation to obtain \begin{align*} \mathfrak{P}(s)=(-1)^{(n-1)(1+\alpha)}\frac{\mathcal{K}_{n,\alpha}}{(n-1)!} \frac{e^{-\mu-sn}}{\mu^{n-1}} \int_0^\infty e^{-nx}x^{n(n-1+\alpha)}&\frac{{}_0F_1\left(\alpha+1;\mu x\right)}{(x+s)^{(n+\alpha)(n-1)}}\nonumber\\ & \qquad \times R_{n-1}(-(s+x),\alpha) {\rm d}x \end{align*} where \begin{align*} R_n(a,\alpha)=\int_{[0,\infty)^n}\prod_{j=1}^n e^{-y_j}y_j(a-y_j)^\alpha \Delta^2_n(\mathbf{y}){\rm d}y_1{\rm d}y_2\cdots{\rm d}y_n. \end{align*} Following \cite[Section 22.2.2]{Mehta}, we can solve the above integral to yield\footnote{Specific steps pertaining to this evaluation are not given here as the detailed steps of solving an analogous integral have been given in \cite{PrathaSIAM}.} \begin{align} R_n(a,\alpha)=(-1)^{n\alpha}\prod_{j=0}^{n-1}(j+1)!(j+1)!\prod_{j=0}^{\alpha-1}\frac{(n+j)!}{j!} \det\left[L^{(j)}_{n+i-j}(a)\right]_{i,j=1,2,\cdots,\alpha}. \end{align} Therefore, we obtain \begin{align} \label{Ppartans} \mathfrak{P}(s)&=(-1)^{(n-1)}\frac{(n-1)!}{\alpha!} \frac{e^{-\mu-sn}}{\mu^{n-1}} \nonumber\\ &\qquad \times \int_0^\infty e^{-nx}x^{n(n-1+\alpha)}\frac{{}_0F_1\left(\alpha+1;\mu x\right)}{(x+s)^{(n+\alpha)(n-1)}} \det\left[L^{(j)}_{n+i-j-1}(-x-s)\right]_{i,j=1,2,\cdots,\alpha} {\rm d}x. \end{align} Although further manipulation in this form is feasible, it is convenient to leave the solution in the current form. Next we focus on solving the multiple integral given in (\ref{Sdef}).
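Before doing so, it is worth recording a quick consistency check on the closed form for $R_n(a,\alpha)$ (our own verification, not part of the original derivation): the smallest nontrivial case $n=\alpha=1$ can be integrated directly,

```latex
R_1(a,1)=\int_0^\infty y\,(a-y)\,e^{-y}\,{\rm d}y=a\,\Gamma(2)-\Gamma(3)=a-2,
```

while the determinant formula gives $(-1)^{1}\,(1!)^2\,\frac{1!}{0!}\,\det\left[L^{(1)}_{1}(a)\right]=-(2-a)=a-2$, in agreement, since $L^{(1)}_{1}(a)=2-a$.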
By symmetry, we convert the ordered region of integration in (\ref{Sdef}) to an unordered region to yield \begin{align*} \mathfrak{S}(s)=\frac{\mathcal{K}_{n,\alpha}}{(n-1)!} \frac{e^{-\mu-s}}{\mu^{n-1}} \int_0^\infty e^{-x}x^{\alpha} &\Biggl(\int_{[x,\infty)^{n-1}} \sum_{k=2}^n\frac{{}_0F_1\left(\alpha+1;\mu \lambda_k\right)}{(\lambda_k-x)\displaystyle\prod_{\substack{i=2\\i\neq k}}^n\left(\lambda_k-\lambda_i\right)}\nonumber\\ & \qquad \quad \left.\times \prod_{i=2}^n\lambda_i^{\alpha}e^{-\lambda_i}(x-\lambda_i)^2 \Delta_{n-1}^2(\boldsymbol{\lambda}){\rm d}\lambda_2\cdots{\rm d}\lambda_n\right)\; {\rm d}x. \end{align*} Since each term in the above summation contributes the same amount, we can further simplify the multiple integral giving \begin{align*} \mathfrak{S}(s)=\frac{\mathcal{K}_{n,\alpha}}{(n-2)!} \frac{e^{-\mu-s}}{\mu^{n-1}} \int_0^\infty e^{-x}x^{\alpha} &\Biggl(\int_{[x,\infty)^{n-1}} \frac{{}_0F_1\left(\alpha+1;\mu \lambda_2\right)}{(\lambda_2-x)\displaystyle\prod_{i=3}^n\left(\lambda_2-\lambda_i\right)}\nonumber\\ & \times \prod_{i=2}^n\lambda_i^{\alpha}e^{-\lambda_i}(x-\lambda_i)^2 \Delta_{n-1}^2(\boldsymbol{\lambda}){\rm d}\lambda_2\cdots{\rm d}\lambda_n\Biggr)\; {\rm d}x, \end{align*} from which we obtain after using the decomposition $\Delta_{n-1}^2(\boldsymbol{\lambda})=\prod_{j=3}^n(\lambda_2-\lambda_j)^2\Delta_{n-2}^2(\boldsymbol{\lambda})$, \begin{align*} \mathfrak{S}(s) &=\frac{\mathcal{K}_{n,\alpha}}{(n-2)!} \frac{e^{-\mu-s}}{\mu^{n-1}} \int_0^\infty e^{-x}x^{\alpha}\left\{\int_x^\infty \lambda_2^\alpha (\lambda_2-x) {}_0F_1\left(\alpha+1;\mu \lambda_2\right)e^{-\left(1+\frac{s}{x}\right)\lambda_2}\right.\nonumber\\ & \times\left. \left( \int_{[x,\infty)^{n-2}} \prod_{i=3}^n\lambda_i^{\alpha}e^{-\left(1+\frac{s}{x}\right)\lambda_i}(\lambda_2-\lambda_i)(x-\lambda_i)^2 \Delta_{n-2}^2(\boldsymbol{\lambda}){\rm d}\lambda_3\cdots{\rm d}\lambda_n\right) {\rm d}\lambda_2\right\} {\rm d}x. 
\end{align*} Now we apply the variable transformations \begin{align*} y &=\lambda_2-x\\ y_{i-2}&=\frac{(x+s)}{x}(\lambda_i-x),\;\; i=3,4,\cdots,n \end{align*} in the above multiple integral to yield \begin{align*} \mathfrak{S}(s) &=(-1)^{n\alpha}\frac{\mathcal{K}_{n,\alpha}}{(n-2)!} \frac{e^{-\mu-sn}}{\mu^{n-1}} \int_0^\infty \frac{e^{-xn}x^\alpha}{\left(1+\frac{s}{x}\right)^{(n-2)(n+\alpha+1)}} \left\{\int_0^\infty y (y+x)^\alpha e^{-\left(1+\frac{s}{x}\right)y} \right.\nonumber\\ & \hspace{3.5cm} \times{}_0F_1\left(\alpha+1;\mu( y+x)\right) T_{n-2}\left(y\left(1+\frac{s}{x}\right),-s-x,\alpha\right) {\rm d}y\Biggr\} {\rm d}x \end{align*} where \begin{align} T_n(a,b,\alpha):=\int_{[0,\infty)^n}\prod_{i=1}^n(a-y_i)(b-y_i)^\alpha e^{-y_i}y_i^2 \Delta_n^2(\mathbf{y}) {\rm d}y_1{\rm d}y_2\cdots{\rm d}y_n. \end{align} It is not difficult to observe that $T_n(a,b,\alpha)$ and $Q_n(a,b,\alpha)$ defined in (\ref{Qintdef}) share a common structure up to a certain Laguerre weight. Therefore, we can readily follow similar arguments as shown in the Appendix with the modified monic orthogonal polynomials given by $\mathsf{P}_k(x)=(-1)^k k! L_k^{(2)}(x)$ to arrive at \begin{align*} T_n(a,b,\alpha)=\frac{(-1)^{n+\alpha(n+\alpha)}\widetilde{\mathcal{K}}_{n,\alpha}}{(b-a)^\alpha}\det\left[L^{(2)}_{n+i-1}(a)\;\;\; L_{n+i+1-j}^{(j)}(b)\right]_{\substack{i=1,2,\cdots,\alpha+1\\j=2,3,\cdots,\alpha+1}} \end{align*} where \begin{align} \widetilde{\mathcal{K}}_{n,\alpha}=\frac{\prod_{j=1}^{\alpha+1}(n+j-1)! \prod_{j=0}^{n-1}(j+1)!(j+2)!}{\prod_{j=0}^{\alpha-1}j!}.
\end{align} This in turn gives \begin{align*} \mathfrak{S}(s) =(-1)^{n}\frac{(n-1)!}{\alpha!} \frac{e^{-\mu-sn}}{\mu^{n-1}}& \int_0^\infty \frac{e^{-xn}x^{\alpha}}{\left(1+\frac{s}{x}\right)^{(n-1)(n+\alpha)-2}} \left\{\int_0^\infty y e^{-\left(1+\frac{s}{x}\right)y} {}_0F_1\left(\alpha+1;\mu( y+x)\right) \right.\nonumber\\ & \times\det\left[L^{(2)}_{n+i-3}\left(y\left(1+\frac{s}{x}\right)\right)\;\;\; L_{n+i-1-j}^{(j)}\left(-x-s\right)\right]_{\substack{i=1,2,\cdots,\alpha+1\\j=2,3,\cdots,\alpha+1}} {\rm d}y\Biggr\} {\rm d}x \end{align*} from which we obtain after the variable transformation $y\left(1+\frac{s}{x}\right)=t$ \begin{align} \label{demmel_decom} \mathfrak{S}(s) =(-1)^{n}\frac{(n-1)!}{\alpha!} \frac{e^{-\mu-sn}}{\mu^{n-1}}& \int_0^\infty \frac{e^{-xn}x^{\alpha}}{\left(1+\frac{s}{x}\right)^{(n-1)(n+\alpha)}} \nonumber\\ & \times \det\left[\varrho_i(s,x) \;\;\; L_{n+i-1-j}^{(j)}\left(-x-s\right)\right]_{\substack{i=1,2,\cdots,\alpha+1\\j=2,3,\cdots,\alpha+1}} {\rm d}x \end{align} where \begin{align} \label{aux_def} \varrho_i(s,x)=\int_0^\infty t e^{-t} {}_0F_1\left(\alpha+1;\mu\left(x+\frac{t}{1+\frac{s}{x}}\right)\right) L^{(2)}_{n+i-3}(t){\rm d}t \end{align} and we have used the fact that only the first column of the determinant depends on the variable $t$. The integral in (\ref{aux_def}) does not seem to have a simple closed-form solution. Therefore, to facilitate further analysis, we write the hypergeometric function with its equivalent power series expansion and use Lemma \ref{lag} to arrive at \begin{align*} \varrho_i(s,x)&=\frac{1}{(n+i-3)!}\sum_{p=0}^\infty \sum_{k=0}^p\frac{\mu ^p x^{p-k} (k+1)!}{k!(p-k)! (\alpha+1)_p}\frac{(1-k)_{n+i-3}}{\left(1+\frac{s}{x}\right)^k}\nonumber\\ &= \frac{1}{(n+i-3)!}\sum_{k=0}^\infty \sum_{p=0}^\infty\frac{\mu ^{p+k} x^{p} (k+1)!}{k!p! (\alpha+1)_{p+k}}\frac{(1-k)_{n+i-3}}{\left(1+\frac{s}{x}\right)^k}.
\end{align*} The behavior of the Pochhammer symbol $(1-k)_{n+i-3}$ with respect to $k$ deserves special attention at this juncture. As such, we can observe that \begin{equation*} (1-k)_{n+i-3}=\left\{\begin{array}{cc} (n+i-3)! & \text{for $k=0$}\\ 0 & \text{for $k=1$}\\ (1-k)_{n+i-3} & \text{for $k\geq 2$}, \end{array}\right. \end{equation*} which enables us to decompose the terms corresponding to the summation index $k$ into two parts. As a result, after some algebra, we obtain \begin{align} \label{sigma_decom} \varrho_i(s,x)&= {}_0F_1\left(\alpha+1;\mu x\right) +\frac{\sigma_i(s,x)}{(n+i-3)!} \end{align} where \begin{align} \label{sigma_def} \sigma_i(s,x)=\sum_{k=0}^\infty \sum_{p=0}^\infty\frac{\mu ^{p+k+2} x^{p} (k+3)!}{(k+2)!p! (\alpha+1)_{p+k+2}}\frac{(-1-k)_{n+i-3}}{\left(1+\frac{s}{x}\right)^{k+2}}. \end{align} Now we substitute (\ref{sigma_decom}) into (\ref{demmel_decom}) and further simplify the resultant determinant using the multilinear property to obtain \begin{align} \label{multi} \mathfrak{S}(s) &=(-1)^{n}\frac{(n-1)!}{\alpha!} \frac{e^{-\mu-sn}}{\mu^{n-1}} \int_0^\infty e^{-xn}x^{n(n+\alpha-1)}\frac{{}_0F_1\left(\alpha+1;\mu x\right)}{\left(x+s\right)^{(n-1)(n+\alpha)}} \nonumber\\ & \hspace{6.5cm}\times \det\left[ 1\;\;\;L_{n+i-1-j}^{(j)}\left(-x-s\right)\right]_{\substack{i=1,2,\cdots,\alpha+1\\j=2,3,\cdots,\alpha+1}} {\rm d}x\nonumber\\ &\qquad +(-1)^{n}\frac{(n-1)!}{\alpha!} \frac{e^{-\mu-sn}}{\mu^{n-1}}\int_0^\infty \frac{e^{-xn}x^{\alpha}}{\left(1+\frac{s}{x}\right)^{(n-1)(n+\alpha)}}\nonumber\\ & \hspace{5.5cm}\times \det\left[\frac{\sigma_i(s,x)}{(n+i-3)!}\;\;\; L_{n+i-1-j}^{(j)}\left(-x-s\right)\right]_{\substack{i=1,2,\cdots,\alpha+1\\j=2,3,\cdots,\alpha+1}} {\rm d}x. \end{align} Let us now focus on further simplification of the determinant in the first integral.
To this end, we subtract the $(i-1)$th row from the $i$th row, for $i=\alpha+1,\alpha,\cdots,2$ in that order, and expand the resultant determinant using its first column to obtain \begin{align} \det\left[ 1\;\;\;L_{n+i-1-j}^{(j)}\left(-x-s\right)\right]_{\substack{i=1,2,\cdots,\alpha+1\\j=2,3,\cdots,\alpha+1}}& =\det\left[ L_{n+i-1-j}^{(j)}\left(-x-s\right)\right]_{i,j=1,2,\cdots,\alpha} \end{align} where we have used the contiguous relation given in (\ref{contg}). Therefore, in view of (\ref{Ppartans}), we can clearly identify the first term in (\ref{multi}) as $-\mathfrak{P}(s)$. This key observation along with (\ref{mgffact}) gives \begin{align*} \mathfrak{M}_V(s) =(-1)^{n}\frac{(n-1)!}{\alpha !} \frac{e^{-\mu-sn}}{\mu^{n-1}} & \int_0^\infty \frac{e^{-xn}x^{\alpha}}{\left(1+\frac{s}{x}\right)^{(n-1)(n+\alpha)}}\nonumber\\ & \times \det\left[\frac{\sigma_i(s,x)}{(n+i-3)!} \;\;\; L_{n+i-1-j}^{(j)}\left(-x-s\right)\right]_{\substack{i=1,2,\cdots,\alpha+1\\j=2,3,\cdots,\alpha+1}} {\rm d}x. \end{align*} The remaining task at hand is to further simplify $\sigma_i(s,x)$ given in (\ref{sigma_def}). To this end, following (\ref{poch}), we find that $(-1-k)_{n+i-3}$ is non-zero for $k\geq n+i-4$. Therefore, we shift the index $k$ with some algebraic manipulation to obtain the m.g.f. of $V$ as \begin{align} \mathfrak{M}_V(s)&=(n-1)! e^{-\mu-sn}\int_0^\infty \frac{e^{-xn}x^{n(n+\alpha)-1}}{\left(x+s\right)^{(n-1)(n+\alpha+1)}}\nonumber\\ & \qquad \qquad \times \det\left[ \left(-\frac{\mu x}{x+s}\right)^{i-1}\vartheta_i(x\mu,x+s) \;\;\; L_{n+i-1-j}^{(j)}\left(-x-s\right)\right]_{\substack{i=1,2,\cdots,\alpha+1\\j=2,3,\cdots,\alpha+1}} \hspace{-1cm}{\rm d}x \end{align} where \begin{align*} \vartheta_i(w,z)=\frac{(n+i-1)}{(n+\alpha+i-2)!} \sum_{k=0}^\infty \frac{(n+i)_k(n+i-2)_k\; {}_0F_1\left(\alpha+n+i+k-1;w\right) w^k}{k!(\alpha+n+i-1)_k(n+i-1)_k z^k}. \end{align*} Finally, we take the inverse Laplace transform of the above to yield the p.d.f.
of $V$, which concludes the proof. Although further simplification of (\ref{pdfexact}) seems intractable for general matrix dimensions $m$ and $n$, we can obtain a relatively simple expression in the important case of square matrices (i.e., $m=n$), which is given in the following corollary. \begin{cor} For $\alpha=0$, (\ref{pdfexact}) becomes \begin{align} f^{(0)}_V(v) &=n(n^2-1) e^{-\mu} (v-n)^{n^2-2}v^{-n^2}\nonumber\\ & \times \sum_{k=0}^\infty \frac{(n^2)_k}{(n)_k k!} \left(\frac{\mu}{v}\right)^k {}_3F_3\left(n+1,n-1,n^2+k;n,n+k,n^2-1;\mu\left(1-\frac{n}{v}\right)\right) H(v-n) \end{align} where $H(z)$ denotes the unit step function and ${}_3F_3(a_1,a_2,a_3;c_1,c_2,c_3;z)$ is the generalized hypergeometric function \cite{Erdelyi}. \end{cor} It is also worth pointing out that, for $\mu=0$ (i.e., when the matrix $\mathbf{W}$ is a central Wishart matrix), (\ref{pdfexact}) simplifies to \begin{align} f^{(\alpha)}_V(v)=\frac{n!(n^2+n\alpha-1)!}{(n+\alpha-1)! v^{n(n+\alpha)}} \mathcal{L}^{-1}\left\{\frac{e^{-ns}}{s^{(n-1)(n+\alpha+1)}}\det\left[L^{(j+1)}_{n+i-j-1}(-s)\right]_{i,j=1,2,\cdots,\alpha}\right\} \end{align} which coincides with the corresponding result given in \cite[Corollary 3.2]{PrathaSIAM}. \section{The Average of the Reciprocal of a Certain Characteristic Polynomial} Here we consider the problem of determining the average of the reciprocal of a certain characteristic polynomial with respect to a complex non-central Wishart density with a rank-one mean. It is noteworthy that the corresponding problem for complex central Wishart matrices has been solved in \cite{Mehta, Fodorov}. A general framework to derive such averages based on duality relations has been proposed in \cite{Patric}. However, the duality relation given in \cite[Proposition 8]{Patric} does not seem to apply here, since the stringent technical requirements for the validity of that formula are not satisfied by the parameters in our model of interest.
Moreover, this particular case has not been considered in a recent detailed analysis on the averages of characteristic polynomials for Gaussian and chiral Gaussian matrices with an external source \cite{Peter}. Therefore, in what follows, we derive the average of one of the basic forms of the reciprocal of the characteristic polynomial. The most general form, however, is not investigated here. Let us consider the following average \begin{equation} \label{average} {\rm{E}}_{\mathbf{W}}\left(\frac{1}{\det[z\mathbf{I}_n+\mathbf{W}]}\right)={\rm{E}}_{\boldsymbol{\Lambda}}\left(\prod_{j=1}^n\frac{1}{z+\lambda_j}\right),\; |\arg{z}|<\pi, \end{equation} the value of which is given in the following theorem. \begin{thm} Let $\mathbf{W}\sim\mathcal{W}_n\left(m,\mathbf{I}_n,\mathbf{M}^\dagger\mathbf{M}\right)$, where $\mathbf{M}$ is rank-$1$ and $\mathrm{tr}(\mathbf{M}^\dagger\mathbf{M})=\mu$. Then (\ref{average}) is given by \begin{align} \label{reciprocal} {\rm{E}}_{\mathbf{W}}\left(\frac{1}{\det[z\mathbf{I}_n+\mathbf{W}]}\right)= z^\alpha\sum_{k=0}^\infty (-\mu)^k \Psi(k+n+\alpha;\alpha+1;z),\;\; |\arg{z}|<\pi \end{align} where $\Psi(a;c;z)$ is the confluent hypergeometric function of the second kind. \end{thm} {\bf{Proof:}} Due to symmetry, we have \begin{equation*} {\rm{E}}_{\boldsymbol{\Lambda}}\left(\prod_{j=1}^n\frac{1}{z+\lambda_j}\right)=\frac{1}{n!}\int_{[0,\infty)^n} \frac{f_{\boldsymbol{\Lambda}}(\lambda_1,\lambda_2,\cdots,\lambda_n)}{\prod_{j=1}^n (z+\lambda_j)} {\rm d}\lambda_1{\rm d}\lambda_2\cdots{\rm d}\lambda_n. \end{equation*} Now it is convenient to apply partial fraction decomposition to yield \begin{align*} {\rm{E}}_{\boldsymbol{\Lambda}}\left(\prod_{j=1}^n\frac{1}{z+\lambda_j}\right)=\frac{1}{n!}\sum_{j=1}^n \int_{[0,\infty)^n} \frac{1}{\displaystyle \prod_{\substack{i=1\\ i\neq j}}^n(\lambda_i-\lambda_j)} \frac{f_{\boldsymbol{\Lambda}}(\lambda_1,\lambda_2,\cdots,\lambda_n)}{ (z+\lambda_j)} {\rm d}\lambda_1{\rm d}\lambda_2\cdots{\rm d}\lambda_n.
\end{align*} Since each integral in the above summation contributes the same amount, we can simplify it as \begin{align*} {\rm{E}}_{\boldsymbol{\Lambda}}\left(\prod_{j=1}^n\frac{1}{z+\lambda_j}\right)=\frac{1}{(n-1)!} \int_{[0,\infty)^n} \frac{1}{\prod_{j=2 }^n(\lambda_j-\lambda_1)} \frac{f_{\boldsymbol{\Lambda}}(\lambda_1,\lambda_2,\cdots,\lambda_n)}{ (z+\lambda_1)} {\rm d}\lambda_1{\rm d}\lambda_2\cdots{\rm d}\lambda_n. \end{align*} To facilitate further analysis, we use (\ref{newden}) with some rearrangements to write \begin{align} \label{chaeq} {\rm{E}}_{\boldsymbol{\Lambda}}\left(\prod_{j=1}^n\frac{1}{z+\lambda_j}\right)=\Omega_1(z)+\Omega_2(z) \end{align} where \begin{align} \label{omega1} \Omega_1(z)&= \frac{(-1)^{n-1}}{\alpha !}\frac{e^{-\mu}}{\mu^{n-1}} \int_0^\infty \frac{{}_0F_1\left(\alpha+1;\mu \lambda_1\right)}{z+\lambda_1} \lambda_1 ^{\alpha} e^{-\lambda_1}{\rm d}\lambda_1 \end{align} and \begin{align} \label{omega2} \Omega_2(z)& =\frac{\mathcal{K}_{n,\alpha}}{(n-1)!}\frac{e^{-\mu}}{\mu^{n-1}} \int_0^\infty \frac{\lambda_1^\alpha e^{-\lambda_1}}{z+\lambda_1}\left( \sum_{k=2}^n \int_{[0,\infty)^{n-1}} \frac{{}_0F_1\left(\alpha+1;\mu \lambda_k\right)}{\prod_{\substack{j=1\\ j\neq k}}^n (\lambda_j-\lambda_k)}\right.\nonumber\\ & \hspace{5.5cm}\left.\times \prod_{j=2}^n\frac{\lambda_j^\alpha e^{-\lambda_j}}{ (\lambda_j-\lambda_1)} \Delta_n^2(\boldsymbol{\lambda}){\rm d}\lambda_2 {\rm d}\lambda_3\cdots{\rm d}\lambda_n \right) {\rm d}\lambda_1. \end{align} Since further simplification of (\ref{omega1}) seems an arduous task, we leave it in its current form and focus on (\ref{omega2}).
Noting that each term inside the summation contributes the same amount due to symmetry in $\lambda_2,\lambda_3,\cdots,\lambda_n$, we can further simplify (\ref{omega2}) to yield \begin{align*} \Omega_2(z)& =\frac{(-1)^n\mathcal{K}_{n,\alpha}}{(n-2)!}\frac{e^{-\mu}}{\mu^{n-1}} \int_0^\infty \frac{\lambda_1^\alpha e^{-\lambda_1}}{z+\lambda_1} \int_0^\infty {}_0F_1\left(\alpha+1;\mu \lambda_2\right) \lambda_2^\alpha e^{-\lambda_2}\left( \int_{[0,\infty)^{n-2}} \frac{1}{(\lambda_1-\lambda_2)^2}\right.\nonumber\\ & \hspace{7cm}\left.\times \prod_{j=3}^n\frac{\lambda_j^\alpha e^{-\lambda_j}}{ (\lambda_j-\lambda_1)(\lambda_j-\lambda_2)} \Delta_n^2(\boldsymbol{\lambda}) {\rm d}\lambda_3\cdots{\rm d}\lambda_n \right){\rm d}\lambda_2 {\rm d}\lambda_1. \end{align*} We now use the decomposition $\Delta_n^2(\boldsymbol{\lambda})=\prod_{j=2}^n (\lambda_1-\lambda_j)^2\prod_{j=3}^n(\lambda_2-\lambda_j)^2\Delta^2_{n-2}(\boldsymbol{\lambda})$ followed by the variable transformation $y_{j-2}=\lambda_{j},\;j=3,4,\cdots,n$, to obtain \begin{align*} \Omega_2(z)= \frac{(-1)^n\mathcal{K}_{n,\alpha}}{(n-2)!}\frac{e^{-\mu}}{\mu^{n-1}} \int_0^\infty \frac{\lambda_1^\alpha e^{-\lambda_1}}{z+\lambda_1}\left( \int_0^\infty {}_0F_1\left(\alpha+1;\mu \lambda_2\right) \lambda_2^\alpha e^{-\lambda_2} U_{n-2}(\lambda_1,\lambda_2,\alpha) {\rm d}\lambda_2 \right) {\rm d}\lambda_1 \end{align*} where \begin{align} U_n(r_1,r_2,\alpha):=\int_{[0,\infty)^{n}} \prod_{j=1}^n\prod_{i=1}^2(r_i-y_j)y^\alpha_j e^{-y_j} \Delta^2_n({\bf{y}}) {\rm d}y_1 {\rm d}y_2\cdots {\rm d}y_n. \end{align} The above integral can be solved using \cite[Eqs. 22.4.2, 22.4.11]{Mehta} and the Appendix with the choice of $\mathsf{P}_k(x)=(-1)^kk!L_k^{(\alpha)}(x)$ to yield \begin{align} U_n(r_1,r_2,\alpha)=(-1)n!(n+1)!\prod_{j=0}^{n-1}(j+1)!(j+\alpha)!\frac{\det\left[L^{(\alpha)}_{n+i-1}(r_j)\right]_{i,j=1,2}}{(r_2-r_1)}.
\end{align} This in turn gives \begin{align} \label{omega2int} \Omega_2(z)& = (-1)^{n+1}\frac{(n-1)!}{\alpha ! (n+\alpha-2)!}\frac{e^{-\mu}}{\mu^{n-1}}\nonumber\\ & \times \int_0^\infty \frac{\lambda_1^\alpha e^{-\lambda_1}}{z+\lambda_1}\left( \int_0^\infty {}_0F_1\left(\alpha+1;\mu \lambda_2\right) \frac{\det\left[L^{(\alpha)}_{n+i-3}(\lambda_j)\right]_{i,j=1,2}}{(\lambda_2-\lambda_1)}\lambda_2^\alpha e^{-\lambda_2} {\rm d}\lambda_2 \right) {\rm d}\lambda_1. \end{align} Further manipulation of the above integral in its current form is highly undesirable due to the term $\lambda_2-\lambda_1$ in the denominator. To circumvent this difficulty, we employ the following form of the Christoffel-Darboux formula \begin{align*} \frac{\det\left[L^{(\alpha)}_{n+i-3}(\lambda_j)\right]_{i,j=1,2}}{(\lambda_2-\lambda_1)}& = \frac{L^{(\alpha)}_{n-1}(\lambda_2)L^{(\alpha)}_{n-2}(\lambda_1)-L^{(\alpha)}_{n-1}(\lambda_1)L^{(\alpha)}_{n-2}(\lambda_2)}{\lambda_2-\lambda_1}\nonumber\\ &= (-1)\frac{(n+\alpha-2)!}{(n-1)!}\sum_{j=0}^{n-2}\frac{j!}{(j+\alpha)!}L^{(\alpha)}_{j}(\lambda_1)L^{(\alpha)}_{j}(\lambda_2) \end{align*} in (\ref{omega2int}) to obtain \begin{align*} \Omega_2(z) = \frac{(-1)^n}{\alpha !}\frac{e^{-\mu}}{\mu^{n-1}} \sum_{j=0}^{n-2}\frac{j!}{(j+\alpha)!}& \int_0^\infty \frac{L^{(\alpha)}_{j}(\lambda_1)}{z+\lambda_1}\lambda_1^\alpha e^{-\lambda_1}{\rm d}\lambda_1\\ &\qquad \quad \times \int_0^\infty {}_0F_1\left(\alpha+1;\mu \lambda_2\right) L^{(\alpha)}_{j}(\lambda_2)\lambda_2^\alpha e^{-\lambda_2} {\rm d}\lambda_2.
\end{align*} The second integral can be solved using Lemma \ref{lag} to obtain \begin{align} \int_0^\infty {}_0F_1\left(\alpha+1,\mu \lambda_2\right) L^{(\alpha)}_{j}(\lambda_2)\lambda_2^\alpha e^{-\lambda_2} {\rm d}\lambda_2= \frac{\alpha!}{j!}(-\mu)^je^{\mu} \end{align} which in turn gives \begin{align*} \Omega_2(z)= \frac{(-1)^{n}}{\alpha !}\frac{e^{-\mu}}{\mu^{n-1}} \sum_{j=0}^{n-2}\frac{(-\mu)^j\alpha !}{(j+\alpha)!}\;e^{\mu} \int_0^\infty \frac{L^{(\alpha)}_{j}(\lambda_1)}{z+\lambda_1}\lambda_1^\alpha e^{-\lambda_1}{\rm d}\lambda_1. \end{align*} In order to further simplify the above integral, we rearrange the summation with respect to the index $j$, giving \begin{align} \label{omega2int1} \Omega_2(z) = \frac{(-1)^{n}}{\alpha !}\frac{e^{-\mu}}{\mu^{n-1}} & \left( \sum_{j=0}^{\infty}\frac{(-\mu)^j\alpha !}{(j+\alpha)!}\;e^{\mu} \int_0^\infty \frac{L^{(\alpha)}_{j}(\lambda_1)}{z+\lambda_1}\lambda_1^\alpha e^{-\lambda_1}{\rm d}\lambda_1\right.\nonumber\\ & \hspace{2.6cm}-\left. \sum_{j=n-1}^{\infty}\frac{(-\mu)^j\alpha !}{(j+\alpha)!}\;e^{\mu} \int_0^\infty \frac{L^{(\alpha)}_{j}(\lambda_1)}{z+\lambda_1}\lambda_1^\alpha e^{-\lambda_1}{\rm d}\lambda_1\right). \end{align} Let us now focus on the first infinite summation. Using (\ref{lagdef}) with some algebraic manipulation, we get \begin{align} \sum_{j=0}^{\infty}\frac{(-\mu)^j\alpha !}{(j+\alpha)!}L^{(\alpha)}_{j}(\lambda_1)& ={}_0F_1(\alpha+1;\mu \lambda_1) e^{-\mu}. \end{align} Therefore, (\ref{omega2int1}) simplifies to \begin{align*} \Omega_2(z)= \frac{(-1)^{n}}{\alpha !}\frac{e^{-\mu}}{\mu^{n-1}}&\left( \int_0^\infty \frac{{}_0F_1(\alpha+1;\mu \lambda_1)}{z+\lambda_1}\lambda_1^\alpha e^{-\lambda_1}{\rm d}\lambda_1\right.\\ &\hspace{3cm}-\left. \sum_{j=n-1}^{\infty}\frac{(-\mu)^j\alpha !}{(j+\alpha)!}\;e^{\mu} \int_0^\infty \frac{L^{(\alpha)}_{j}(\lambda_1)}{z+\lambda_1}\lambda_1^\alpha e^{-\lambda_1}{\rm d}\lambda_1\right).
\end{align*} From this, in view of (\ref{omega1}), we obtain \begin{align*} \Omega_2(z)& =-\Omega_1(z)+ (-1)^{n+1}\frac{1}{\mu^{n-1}} \sum_{j=n-1}^{\infty}\frac{(-\mu)^j}{(j+\alpha)!} \int_0^\infty \frac{L^{(\alpha)}_{j}(\lambda_1)}{z+\lambda_1}\lambda_1^\alpha e^{-\lambda_1}{\rm d}\lambda_1. \end{align*} Finally, we shift the initial value of the summation index to zero and use \cite[Eq. 6.15.2.16]{Erdelyi} with (\ref{chaeq}) to yield (\ref{reciprocal}), which concludes the proof. \section*{Acknowledgment} The author would like to thank Yang Chen and Matthew McKay for insightful discussions. This work was supported by a National Science Foundation grant. \section*{Appendix} Following \cite[Eqs. 22.4.2, 22.4.11]{Mehta}, we begin with the integral \begin{align} \label{qbeg} \int_{[0,\infty)^n}\prod_{j=1}^ne^{-y_j}\prod_{i=1}^{\alpha+1}(r_i-y_j)&\Delta_n^2(\mathbf{y}) {\rm d}y_1 {\rm d}y_2\cdots{\rm d}y_n = \prod_{i=0}^{n-1}(i+1)!i!\;\frac{\det\left[\mathsf{P}_{n+i-1}(r_j)\right]_{i,j=1,2,\cdots,\alpha+1}}{\Delta_{\alpha+1}(\mathbf{r})}, \end{align} where the $\mathsf{P}_{k}(x)$ are monic polynomials orthogonal with respect to $e^{-x}$, over $0\leq x<\infty$. As such, we choose $\mathsf{P}_k(x)=(-1)^kk!L_k^{(0)}(x)$, which upon substituting into the above equation gives \begin{align} & \int_{[0,\infty)^n}\prod_{j=1}^ne^{-y_j}\prod_{i=1}^{\alpha+1}(r_i-y_j)\Delta_n^2(\mathbf{y}) {\rm d}y_1 {\rm d}y_2\cdots{\rm d}y_n \nonumber\\ &=(-1)^{(n-1)(\alpha+1)}\prod_{i=0}^{n-1}(i+1)!i!\prod_{i=1}^{\alpha+1}(-1)^i(n+i-1)!\; \frac{\det\left[L^{(0)}_{n+i-1}(r_j)\right]_{i,j=1,2,\cdots,\alpha+1}}{\Delta_{\alpha+1}(\mathbf{r})}. \end{align} In general, the $r_i$'s in the above formula are distinct parameters. However, for our purpose, we have to choose them in such a manner that the left side of (\ref{qbeg}) becomes $Q_n(a,b,\alpha)$.
To this end, we select $r_i$'s such that \begin{equation*} r_i=\left\{\begin{array}{ll} a & \text{if $i=1$}\\ b & \text{if $i=2,3,\cdots,\alpha+1$.} \end{array}\right. \end{equation*} This direct substitution in turn gives a $\frac{0}{0}$ indeterminate form for the right side of (\ref{qbeg}). To circumvent this problem, instead of direct substitution, we evaluate the following limit \begin{align} \label{qdef} Q_n(a,b,\alpha)&=(-1)^{(n-1)(\alpha+1)}\prod_{i=0}^{n-1}(i+1)!i!\prod_{i=1}^{\alpha+1}(-1)^i(n+i-1)!\nonumber\\ & \hspace{2cm} \times \lim_{r_2,r_3,\cdots,r_{\alpha+1}\to b}\frac{\det\left[L^{(0)}_{n+i-1}(a)\;\;\;L^{(0)}_{n+i-1}(r_j)\right]_{\substack{i=1,2,\cdots,\alpha+1\\ j=2,3,\cdots,\alpha+1}}}{\det[a^{i-1}\;\;\; r_j^{i-1}]_{\substack{i=1,2,\cdots,\alpha+1\\ j=2,3,\cdots,\alpha+1}}}. \end{align} The limit on the right can be evaluated based on an approach given in \cite{Khatri} to yield \begin{align} \label{qlim} &\lim_{r_2,r_3,\cdots,r_{\alpha+1}\to b}\frac{\det\left[L^{(0)}_{n+i-1}(a)\;\;\;L^{(0)}_{n+i-1}(r_j)\right]_{\substack{i=1,2,\cdots,\alpha+1\\ j=2,3,\cdots,\alpha+1}}}{\det[a^{i-1}\;\;\; r_j^{i-1}]_{\substack{i=1,2,\cdots,\alpha+1\\ j=2,3,\cdots,\alpha+1}}}\nonumber\\ &\hspace{5.5cm}=\frac{\det\left[L^{(0)}_{n+i-1}(a)\;\;\;\displaystyle \frac{{\rm d}^{j-2}}{{\rm d}b^{j-2}}L^{(0)}_{n+i-1}(b)\right]_{\substack{i=1,2,\cdots,\alpha+1\\ j=2,3,\cdots,\alpha+1}}}{\det\left[a^{i-1}\;\;\; \displaystyle \frac{{\rm d}^{j-2}}{{\rm d}b^{j-2}}b^{i-1}\right]_{\substack{i=1,2,\cdots,\alpha+1\\ j=2,3,\cdots,\alpha+1}}}. \end{align} The denominator of (\ref{qlim}) gives \begin{equation} \label{denom} \det\left[a^{i-1}\;\;\; \displaystyle \frac{{\rm d}^{j-2}}{{\rm d}b^{j-2}}b^{i-1}\right]_{\substack{i=1,2,\cdots,\alpha+1\\ j=2,3,\cdots,\alpha+1}}=\prod_{i=1}^{\alpha-1} i!\;(b-a)^\alpha.
\end{equation} The numerator can be simplified using (\ref{lagderi}) to yield \begin{align} \label{num} &\det\left[L^{(0)}_{n+i-1}(a)\;\;\;\displaystyle \frac{{\rm d}^{j-2}}{{\rm d}b^{j-2}}L^{(0)}_{n+i-1}(b)\right]_{\substack{i=1,2,\cdots,\alpha+1\\ j=2,3,\cdots,\alpha+1}}\nonumber\\ &\hspace{3.5cm} = (-1)^{\frac{1}{2}\alpha(\alpha-1)}\det\left[L^{(0)}_{n+i-1}(a)\;\;\;L^{(j-2)}_{n+i+1-j}(b)\right]_{\substack{i=1,2,\cdots,\alpha+1\\ j=2,3,\cdots,\alpha+1}}. \end{align} Substituting (\ref{denom}) and (\ref{num}) into (\ref{qlim}) and then the result into (\ref{qdef}) gives (\ref{q1}).
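As a numerical sanity check of the Laguerre generating-function identity used in the proof above, $\sum_{j\geq 0}(-\mu)^j\,\alpha!/(j+\alpha)!\,L^{(\alpha)}_{j}(\lambda)={}_0F_1(\alpha+1;\mu\lambda)\,e^{-\mu}$, the following Python sketch truncates the series; this is our own illustration (the function names are ours), not part of the derivation:

```python
from math import exp, factorial

def laguerre(k, alpha, x):
    # generalized Laguerre polynomial L_k^{(alpha)}(x) via the
    # standard three-term recurrence
    l0, l1 = 1.0, 1.0 + alpha - x
    if k == 0:
        return l0
    for j in range(1, k):
        l0, l1 = l1, ((2 * j + 1 + alpha - x) * l1 - (j + alpha) * l0) / (j + 1)
    return l1

def hyp0f1(b, z, terms=60):
    # 0F1(;b;z) = sum_m z^m / ((b)_m m!), summed term by term
    s, t = 0.0, 1.0
    for m in range(terms):
        s += t
        t *= z / ((b + m) * (m + 1))
    return s

def lhs(mu, alpha, lam, terms=80):
    # truncated sum_{j>=0} (-mu)^j alpha!/(j+alpha)! L_j^{(alpha)}(lam)
    return sum((-mu) ** j * factorial(alpha) / factorial(j + alpha)
               * laguerre(j, alpha, lam) for j in range(terms))

mu, alpha, lam = 0.7, 2, 1.3
print(lhs(mu, alpha, lam) - hyp0f1(alpha + 1, mu * lam) * exp(-mu))
```

The difference printed is at round-off level for any moderate $\mu$, $\alpha$ and $\lambda$, since the factorials in the denominator make the tail of the series negligible.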
\section{Introduction} It is well known that chemically peculiar stars of the Ap and Am types rotate more slowly than their normal counterparts (e.g. North 1994). The question then arises whether slow rotation is acquired during the main sequence life of the star, or before its arrival on the ZAMS, i.e. during the proto-stellar phase. Havnes \& Conti (1971) had suggested that magnetic stars undergo magnetic braking during their main sequence lifetime, due to mass accretion from the interstellar medium, while Strittmatter \& Norris (1971) proposed the same, but due to mass loss. These theoretical considerations seemed to gain support from observational evidence when Wolff (1975, 1981), Stift (1976) and Abt (1979) found some correlation between the radii or ages of Ap stars and their rotational period obtained from their photometric or spectroscopic variation. On the contrary, Hartoog (1977) concluded that magnetic Ap stars in young clusters do not rotate faster than those in older clusters, and this conclusion was also reached by North (1984a, b, 1985, 1986, 1987), Borra et al. (1985) and Klochkova \& Kopylov (1985). The apparent correlation between radius and rotational period has been commented on by Hensberge et al. (1991), who conclude that this correlation is real but possibly due to a detection bias depending on the inclination angle, and by Stepien (1994), who concluded on the contrary that this correlation simply does not exist, if spurious rotational periods are duly excluded. Using the $\log P_{\rm rot}$ vs. $\log g$ diagram for field stars, North (1985, 1986, 1992) showed that for Si stars there is indeed a trend towards longer periods for low-gravity stars, which can, however, be entirely explained by conservation of angular momentum as the star evolves with increasing radius within the main sequence. In this note, we revisit the $\log P_{\rm rot}$ vs.
$\log g$ diagram for field stars having both a rotational period in the literature and a reliable surface gravity, the latter being either spectroscopic or obtained from Hipparcos data. \section{The sample} \subsection{Spectroscopic surface gravities} The sample has been built from two parts. First, we considered the list of silicon stars for which North \& Kroll (1989, hereafter NK89) give a spectroscopic estimate of $\log g$ based on the profile of the H$_\beta$ line. The estimate given in column 5 of their Table 1 was adopted, and corrected for a constant shift \begin{equation} \log g({\rm corrected}) = \log g({\rm H}_\beta) + 0.14 \end{equation} which takes into account the systematic error displayed in their Fig. 16, although not exactly according to their Eq. 16, which would imply too large $\log g$ values for some stars. The intersection of this list with all stars having a known rotational period in the literature was then taken, using an updated version of the database of Renson et al. (1991) which is a digital version of the catalogue of Renson (1991). Two stars in the list of NK89 which had no known period have been added, since their period has now been determined thanks to the Hipparcos mission (Perryman et al. 1997): they are HD 154856 ($P=1.9525$~days) and HD 161841 ($P=3.21048$~days). The original sample of NK89 was biased in favour of low {\it photometric} gravities, hence also of low {\it spectroscopic} (and hopefully real) gravities, since there is a loose correlation between them. Most stars of this sample are not very bright ($V\sim 7 - 8$), nor close enough for their parallax to be significant, even for Hipparcos. This sample contains 40 stars. \subsection{Hipparcos surface gravities} Second, the list of Si, SiCr or Cr stars with a Hipparcos parallax larger than 7~mas was defined and those with a known rotational period were retained. One star had no period in the literature but has a new one from Hipparcos (HD 74067, $P=3.113$~days).
Their mass has been interpolated in theoretical evolutionary tracks (Schaller et al. 1992) from $T_{\rm eff}$ and $M_{\rm bol}$. The effective temperature has been computed using the $X$ and $Y$ parameters of Geneva photometry calibrated by K\"unzli et al. (1997), and corrected according to the formula $T_{\rm eff}= -230 + 0.941\times T(X,Y)$ (Hauck \& K\"unzli 1996) which replaces Eq. 1 of Hauck \& North (1993) and where $T(X,Y)$ results from the calibration. The bolometric correction was interpolated in Table 6 of Lanz (1984) and corrected by $\delta_{\rm BC}$ plotted in his Fig. 4a. Contrary to the previous sample, this one is not biased regarding the distribution of the surface gravities, at least not {\it a priori}: it is a volume-limited sample which, although surely affected by a Malmquist-like bias, should be representative of field stars with a more or less uniform distribution of ages. Therefore, it contains a majority of stars which are rather close to the ZAMS in the HR diagram, just because stellar evolution is slower there than near the core-hydrogen exhaustion phase. If there is no {\it a priori} bias regarding the evolutionary state, one may say, nevertheless, that the $\log g$ distribution is biased towards high values, compared to a uniform distribution (which, then, would be strongly biased towards large ages). This sample contains 56 stars. \subsubsection{Lutz-Kelker correction} The absolute magnitudes have been corrected following Lutz \& Kelker (1973), but the correction was not applied in its original form, which assumes a constant stellar density.
Indeed, the distances involved are not negligible compared with the density scale height perpendicular to the galactic disk, so the following generalized formulae were adopted: \begin{eqnarray} N(r)dr \propto& r^2 \cos b \exp\left(-\frac{r\sin |b|}{\beta(M_V)}\right) dr \\ G(Z,\pi_{\rm o},b)=& Z^{-4}\times\exp\left(\frac{\sin |b|}{\beta(M_V) \pi_{\rm o}}\left[1-\frac{1}{Z}\right]\right) \nonumber \\ & \times\exp\left(-\frac{(Z-1)^2}{2(\sigma/\pi_{\rm o})^2}\right) \\ Z \equiv \frac{\pi}{\pi_{\rm o}}& \end{eqnarray} Let us recall that the correction on the absolute magnitude then reads: \begin{equation} <\Delta M(\epsilon)> = \frac{5\int_{\epsilon}^{\infty}\log Z G(Z,\pi_{\rm o},b)dZ} {\int_{\epsilon}^{\infty}G(Z,\pi_{\rm o},b)dZ} \end{equation} where $\epsilon = 0.2$, $\beta(M_V)$ is the scale height of the star density above the galactic plane tabulated by Allen (1976), $\pi$ is the true parallax and $\pi_{\rm o}$ is the observed parallax affected by a gaussian error $\sigma$. \subsubsection{Visual absorption} The absolute magnitude also had to be corrected for the visual absorption, even though it remains negligible in most cases. Since Cramer (1982) found that the colour excess $E[U-B]$ defined in the Geneva system was almost the same for Bp, Ap members of clusters as for normal B, A members -- and with a smaller dispersion than $E[B-V]$ -- this colour excess was used, corrected using Cramer's relation $E[U-B](Bp,Ap) = E[U-B](X,Y)-0.009$ where $E[U-B](X,Y)$ is the colour excess obtained using the intrinsic colours (of normal stars) of Cramer (1982). $A_V$ is then obtained through $E(B-V)=1.28 E[U-B]$, since $E[U-B] = 0.658 E[B-V]$ (Cramer 1994) and $E(B-V) = 0.842 E[B-V]$ (Cramer 1984), and $R = A_V/E(B-V) = 3.25 + 0.25 (B-V)_{\rm o} + 0.05 E(B-V)$ (Olson 1975). 
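In code, the chain of extinction relations above reads as follows; this is a minimal sketch (the function name and the illustrative input values are ours):

```python
def visual_absorption(e_ub, bv0):
    """A_V from the Geneva colour excess E[U-B], following the chain of
    relations quoted above (Cramer 1994, 1984; Olson 1975).

    e_ub : colour excess E[U-B] in the Geneva system
    bv0  : intrinsic Johnson colour (B-V)_0
    """
    ebv = 1.28 * e_ub                    # E(B-V) = (0.842/0.658) E[U-B]
    r = 3.25 + 0.25 * bv0 + 0.05 * ebv   # R = A_V/E(B-V), Olson (1975)
    return r * ebv

# a lightly reddened early-type star (illustrative numbers only)
print(round(visual_absorption(0.02, -0.10), 3))
```

For such small colour excesses the resulting $A_V$ is below 0.1~mag, which is why the correction remains negligible in most cases.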
\subsubsection{Fundamental parameters from Hipparcos parallaxes} Once the effective temperature and bolometric correction are determined from photometry and the absolute magnitude from Hipparcos parallaxes as described above, it becomes possible to pinpoint the star on a theoretical HR diagram, the luminosity being obtained from \begin{equation} \log(L/L_\odot) = -0.4 (M_V - 4.72 + B.C.) \end{equation} Then, we assume that Bp stars follow standard, solar-composition evolutionary tracks (since the chemical peculiarities are limited to superficial layers only) and the mass can be interpolated from $T_{\rm eff}$ and $\log (L/L_\odot)$ (using successive 3rd-degree splines in luminosity, $T_{\rm eff}$ and overall metallicity $Z$, with $Z=0.018$) whenever there is a one-to-one relation between these quantities. The latter condition is not fulfilled near the core-hydrogen exhaustion phase, when $T_{\rm eff}$ increases, then decreases again, and in this domain we always assumed the star to lie on the lower, continuous branch of the evolutionary track, which also corresponds to the slowest evolution, hence to the highest probability. This assumption, if violated, will lead to a mass overestimate no larger than five percent. The radius is directly obtained from \begin{equation} \log(R/R_\odot) =\frac{1}{2}\log(L/L_\odot)-2\log (T_{\rm eff}/T_{\rm eff\odot}) \end{equation} and the surface gravity from \begin{eqnarray} \log g = \log\left({M \over M_\odot}\right) + 4 \log\left({T_{\rm eff} \over T_{\rm eff\odot}}\right)\nonumber \\ - \log\left({L \over L_\odot}\right) + 4.44 \end{eqnarray} The latter equation shows how strongly $\log g$ depends on $T_{\rm eff}$, which remains a crucial quantity. The error on it was generally assumed to be 5 percent. The errors on the other quantities are estimated using the usual, linearized propagation formulae, but taking care of the correlations between $L$, $T_{\rm eff}$ and $M$. The results are displayed in Table 1.
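The three relations above can be combined into a short numerical routine; the sketch below (variable names are ours) assumes $T_{\rm eff,\odot}=5777$~K and uses the bolometric zero point $M_{\rm bol,\odot}=4.72$ appearing in the luminosity equation:

```python
from math import log10

TEFF_SUN = 5777.0  # K, assumed solar effective temperature

def fundamental(mv, bc, teff, mass):
    """log(L/Lsun), log(R/Rsun) and log g from the three relations above.

    mv   : absolute visual magnitude M_V
    bc   : bolometric correction B.C.
    teff : effective temperature in K
    mass : mass in solar units (interpolated from evolutionary tracks)
    """
    log_l = -0.4 * (mv - 4.72 + bc)
    log_r = 0.5 * log_l - 2.0 * log10(teff / TEFF_SUN)
    log_g = log10(mass) + 4.0 * log10(teff / TEFF_SUN) - log_l + 4.44
    return log_l, log_r, log_g

# consistency check: a 1 Msun star with solar Teff and M_V + B.C. = 4.72
print(fundamental(4.72, 0.0, TEFF_SUN, 1.0))
```

By construction, the solar input returns $\log(L/L_\odot)=0$, $\log(R/R_\odot)=0$ and $\log g = 4.44$.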
\begin{table*} \caption{Fundamental parameters of the Si and He-weak stars derived from the Hipparcos parallaxes. The masses were obtained by interpolation in the evolutionary tracks of Schaller et al. (1992). Note that the errors are multiplied by a factor 1000 for $T_{\rm eff}$, 100 for Mass, $\log (L/L_\odot)$ and $\log g$, and 10 for $R$. The rotational period from the literature (or from Hipparcos photometry in three cases, see text) is given in the last column. ``LK'' means ``Lutz-Kelker correction'' and is expressed in magnitudes.} \begin{center} \footnotesize \begin{tabular}{rrlllllrlll}\hline HD&M$_{\rm V}$&Mass [M$_\odot$]&$\log T_{\rm eff}$&$\log(L/L_\odot)$&$\log g$ & R [R$_\odot$]&d [pc] &$\sigma(\pi)/\pi$&LK [mag]& P$_{\rm rot}$ [days] \\ \hline 4778& 1.18& 2.24$\pm$ 9& 3.972$\pm$ 14& 1.51$\pm$ 7& 4.12$\pm$ 9& 2.2$\pm$ 2& 93& 0.07&-0.046& 2.5616 \\ 9484& 1.00& 2.34$\pm$ 12& 3.987$\pm$ 22& 1.59$\pm$ 9& 4.12$\pm$ 13& 2.2$\pm$ 3& 128& 0.09&-0.070& 0.7 ? \\ 9531& 0.35& 2.85$\pm$ 15& 4.039$\pm$ 20& 1.96$\pm$10& 4.19$\pm$ 13& 2.7$\pm$ 4& 126& 0.10&-0.103& 0.67 \\ 9996& 0.68& 2.47$\pm$ 15& 3.987$\pm$ 23& 1.72$\pm$12& 4.01$\pm$ 14& 2.6$\pm$ 4& 149& 0.12&-0.143&8395 (23 y)?\\ 10221& -0.28& 3.12$\pm$ 12& 4.030$\pm$ 20& 2.19$\pm$ 9& 3.95$\pm$ 11& 3.6$\pm$ 5& 141& 0.08&-0.069& 3.18 \\ 11502& -0.15& 2.87$\pm$ 8& 3.989$\pm$ 18& 2.05$\pm$ 6& 3.76$\pm$ 9& 3.7$\pm$ 4& 63& 0.05&-0.026& 1.60920 \\ 12767& -0.60& 3.65$\pm$ 18& 4.111$\pm$ 20& 2.39$\pm$ 9& 4.14$\pm$ 12& 3.2$\pm$ 4& 114& 0.09&-0.059& 1.9 \\ 14392& 0.26& 3.07$\pm$ 14& 4.078$\pm$ 20& 2.04$\pm$ 8& 4.29$\pm$ 11& 2.4$\pm$ 3& 112& 0.08&-0.056& 4.189 \\ 18296& -0.54& 3.32$\pm$ 15& 4.036$\pm$ 20& 2.31$\pm$11& 3.89$\pm$ 12& 4.1$\pm$ 6& 125& 0.11&-0.109& 2.8842 \\ 19832& 0.25& 3.16$\pm$ 17& 4.095$\pm$ 20& 2.04$\pm$10& 4.36$\pm$ 12& 2.3$\pm$ 3& 119& 0.10&-0.096& 0.7278972 \\ 24155& 0.18& 3.39$\pm$ 20& 4.132$\pm$ 20& 2.11$\pm$12& 4.48$\pm$ 13& 2.1$\pm$ 3& 145& 0.12&-0.138& 2.535 \\ 25267& -0.30& 3.35$\pm$ 15& 4.080$\pm$ 
20& 2.26$\pm$ 7& 4.11$\pm$ 11& 3.1$\pm$ 4& 103& 0.07&-0.041& 1.210 \\ 27309& 0.34& 3.06$\pm$ 12& 4.079$\pm$ 13& 2.02$\pm$ 8& 4.32$\pm$ 9& 2.4$\pm$ 3& 99& 0.07&-0.052& 1.569 \\ 29305& -0.39& 3.33$\pm$ 10& 4.064$\pm$ 14& 2.29$\pm$ 5& 4.02$\pm$ 7& 3.5$\pm$ 3& 54& 0.03&-0.006& 2.94 \\ 32549& -0.98& 3.29$\pm$ 17& 3.985$\pm$ 23& 2.37$\pm$12& 3.47$\pm$ 14& 5.5$\pm$10& 131& 0.12&-0.144& 4.64 \\ 32650& 0.05& 3.08$\pm$ 13& 4.059$\pm$ 20& 2.10$\pm$ 7& 4.15$\pm$ 11& 2.9$\pm$ 4& 117& 0.06&-0.037& 2.73332 \\ 34452& -0.42& 3.95$\pm$ 21& 4.160$\pm$ 20& 2.42$\pm$10& 4.34$\pm$ 12& 2.6$\pm$ 4& 144& 0.10&-0.097& 2.4660 \\ 40312& -1.05& 3.38$\pm$ 8& 3.997$\pm$ 13& 2.42$\pm$ 6& 3.49$\pm$ 7& 5.5$\pm$ 5& 54& 0.04&-0.018& 3.6190 \\ 49976& 1.13& 2.21$\pm$ 11& 3.955$\pm$ 23& 1.51$\pm$ 8& 4.04$\pm$ 13& 2.3$\pm$ 3& 104& 0.08&-0.067& 2.976 \\ 54118& 0.35& 2.73$\pm$ 9& 4.022$\pm$ 17& 1.89$\pm$ 5& 4.03$\pm$ 9& 2.7$\pm$ 3& 87& 0.04&-0.015& 3.28 \\ 56455& 0.02& 3.25$\pm$ 14& 4.096$\pm$ 20& 2.13$\pm$ 7& 4.29$\pm$ 11& 2.5$\pm$ 3& 133& 0.07&-0.045& 1.93 \\ 72968& 1.06& 2.25$\pm$ 9& 3.960$\pm$ 14& 1.55$\pm$ 7& 4.04$\pm$ 9& 2.4$\pm$ 3& 84& 0.07&-0.047& 11.305 \\ 74067& 0.45& 2.57$\pm$ 7& 3.988$\pm$ 13& 1.82$\pm$ 6& 3.93$\pm$ 7& 2.9$\pm$ 3& 87& 0.05&-0.024& 3.11299 \\ 74521& -0.02& 3.01$\pm$ 16& 4.033$\pm$ 20& 2.11$\pm$10& 4.04$\pm$ 13& 3.3$\pm$ 5& 131& 0.11&-0.103& 7.0501 \\ 89822& 0.76& 2.57$\pm$ 9& 4.025$\pm$ 16& 1.72$\pm$ 6& 4.18$\pm$ 9& 2.2$\pm$ 2& 93& 0.05&-0.020& 7.5586 \\ 90044& 0.71& 2.51$\pm$ 12& 4.002$\pm$ 22& 1.73$\pm$ 8& 4.07$\pm$ 12& 2.4$\pm$ 3& 110& 0.08&-0.054& 4.379 \\ 92664& -0.37& 3.86$\pm$ 17& 4.154$\pm$ 20& 2.38$\pm$ 8& 4.35$\pm$ 11& 2.5$\pm$ 3& 146& 0.07&-0.053& 1.673 \\ 103192& -0.55& 3.36$\pm$ 15& 4.044$\pm$ 20& 2.33$\pm$10& 3.90$\pm$ 12& 4.0$\pm$ 6& 117& 0.10&-0.091& 2.34 \\ 112381& 1.40& 2.26$\pm$ 12& 3.999$\pm$ 24& 1.45$\pm$ 9& 4.29$\pm$ 13& 1.8$\pm$ 3& 105& 0.09&-0.076& 2.8 \\ 112413& 0.24& 3.00$\pm$ 9& 4.060$\pm$ 14& 2.04$\pm$ 5& 4.21$\pm$ 8& 2.6$\pm$ 2& 34& 0.04&-0.011& 
5.46939 \\ 114365& 0.83& 2.80$\pm$ 13& 4.069$\pm$ 20& 1.81$\pm$ 8& 4.45$\pm$ 11& 1.9$\pm$ 3& 108& 0.08&-0.055& 1.27 \\ 115735& 0.48& 2.55$\pm$ 7& 3.990$\pm$ 13& 1.80$\pm$ 6& 3.96$\pm$ 7& 2.8$\pm$ 3& 85& 0.05&-0.024& 0.77 ? \\ 116458& -0.17& 2.95$\pm$ 11& 4.012$\pm$ 22& 2.09$\pm$ 8& 3.81$\pm$ 11& 3.5$\pm$ 5& 146& 0.08&-0.063& 147.9 \\ 119419& 1.09& 2.62$\pm$ 13& 4.048$\pm$ 20& 1.69$\pm$ 9& 4.45$\pm$ 12& 1.9$\pm$ 3& 116& 0.08&-0.071& 2.6006 \\ 124224& 0.42& 3.03$\pm$ 17& 4.084$\pm$ 13& 1.97$\pm$ 8& 4.37$\pm$ 9& 2.2$\pm$ 2& 82& 0.07&-0.046& 0.52068 \\ 125248& 1.19& 2.24$\pm$ 9& 3.972$\pm$ 14& 1.51$\pm$ 8& 4.12$\pm$ 9& 2.2$\pm$ 3& 93& 0.08&-0.063& 9.2954 \\ 125823& -1.27& 5.69$\pm$ 30& 4.248$\pm$ 20& 3.07$\pm$10& 4.20$\pm$ 12& 3.7$\pm$ 5& 134& 0.10&-0.089& 8.817744 \\ 126515& 1.03& 2.29$\pm$ 17& 3.970$\pm$ 24& 1.57$\pm$15& 4.06$\pm$ 16& 2.3$\pm$ 5& 155& 0.15&-0.200& 129.95 \\ 129174& -0.39& 3.49$\pm$ 14& 4.094$\pm$ 14& 2.33$\pm$ 9& 3.98$\pm$ 9& 3.2$\pm$ 4& 100& 0.09&-0.067& 2.24 ? \\ 133652& 0.68& 3.05$\pm$ 14& 4.113$\pm$ 12& 1.89$\pm$ 9& 4.57$\pm$ 9& 1.8$\pm$ 2& 99& 0.09&-0.082& 2.304 \\ 133880& 0.09& 3.17$\pm$ 18& 4.079$\pm$ 20& 2.12$\pm$11& 4.22$\pm$ 13& 2.7$\pm$ 4& 134& 0.11&-0.116& 0.877485 \\ 140728& 0.48& 2.58$\pm$ 7& 3.998$\pm$ 13& 1.81$\pm$ 6& 3.99$\pm$ 7& 2.7$\pm$ 2& 98& 0.05&-0.020& 1.29557 \\ 142301& -0.57& 4.41$\pm$ 36& 4.193$\pm$ 20& 2.59$\pm$18& 4.35$\pm$ 17& 2.7$\pm$ 6& 161& 0.17&-0.311& 1.459 \\ 142884& 0.53& 3.45$\pm$ 22& 4.160$\pm$ 20& 2.03$\pm$13& 4.67$\pm$ 14& 1.7$\pm$ 3& 133& 0.13&-0.176& 0.803 \\ 149822& 0.61& 2.58$\pm$ 14& 4.010$\pm$ 22& 1.78$\pm$10& 4.07$\pm$ 13& 2.5$\pm$ 4& 140& 0.11&-0.101& 1.459 \\ 152308& 0.68& 2.43$\pm$ 12& 3.976$\pm$ 22& 1.71$\pm$11& 3.97$\pm$ 13& 2.7$\pm$ 4& 146& 0.11&-0.111&1.10 (or 0.92)?\\ 166469& 0.49& 2.62$\pm$ 15& 4.012$\pm$ 22& 1.81$\pm$10& 4.04$\pm$ 13& 2.6$\pm$ 4& 140& 0.10&-0.112& 2.9 \\ 170000& 0.21& 2.99$\pm$ 10& 4.058$\pm$ 14& 2.03$\pm$ 6& 4.21$\pm$ 8& 2.7$\pm$ 2& 89& 0.04&-0.017& 1.71649 \\ 170397& 1.07& 
2.35$\pm$ 9& 3.993$\pm$ 13& 1.57$\pm$ 8& 4.16$\pm$ 9& 2.1$\pm$ 2& 89& 0.07&-0.055& 2.1912 \\ 175362& -0.52& 5.17$\pm$ 31& 4.249$\pm$ 20& 2.78$\pm$12& 4.45$\pm$ 14& 2.6$\pm$ 4& 140& 0.12&-0.152& 3.67375 \\ 183806& -0.22& 2.89$\pm$ 14& 3.976$\pm$ 22& 2.07$\pm$11& 3.68$\pm$ 13& 4.1$\pm$ 7& 142& 0.16&-0.126& 2.9 \\ 187474& 0.27& 2.70$\pm$ 11& 4.004$\pm$ 22& 1.90$\pm$ 9& 3.94$\pm$ 12& 2.9$\pm$ 4& 108& 0.09&-0.075& 2345 \\ 199728& 0.43& 3.00$\pm$ 20& 4.078$\pm$ 20& 1.97$\pm$13& 4.35$\pm$ 14& 2.3$\pm$ 4& 143& 0.14&-0.177& 2.2 \\ 203006& 0.99& 2.36$\pm$ 8& 3.989$\pm$ 13& 1.60$\pm$ 6& 4.12$\pm$ 8& 2.2$\pm$ 2& 58& 0.05&-0.024& 2.122 \\ 221006& 0.27& 3.38$\pm$ 14& 4.135$\pm$ 20& 2.08$\pm$ 7& 4.51$\pm$ 11& 2.0$\pm$ 2& 118& 0.06&-0.033& 2.3 \\ 223640& 0.08& 3.21$\pm$ 15& 4.089$\pm$ 13& 2.12$\pm$10& 4.27$\pm$ 10& 2.5$\pm$ 3& 103& 0.10&-0.090& 3.735239 \\ \hline \end{tabular} \end{center} \end{table*} \subsection{Comparison between different sources of $\log g$} A comparison between photometric and spectroscopic $\log g$ values was already shown by NK89. Fig. 1 shows how photometric and Hipparcos values compare, for Si and HgMn stars lying closer than 100~pc to the Sun. The diagrams look exactly the same as in the comparison of photometric vs. spectroscopic values, i.e. the Si stars are strongly scattered ($\sigma_{\rm res}=0.273$~dex) while the HgMn stars follow the one-to-one relation much more closely ($\sigma_{\rm res}=0.080$~dex), with the exception of HD 129174, a visual double which was excluded from the fit. \begin{figure}[th!] \infig{8.4}{fig1.ps}{8.4} \caption{Comparison between photometric and Hipparcos $\log g$ values for Si (left, full dots) and HgMn (right, open triangles) stars closer than 100~pc. The continuous line is the one-to-one relationship, while the dotted line is a least-squares fit which takes into account similar errors on both axes. 
The discrepant point on the right panel is HD 129174, a visual double excluded from the fit.} \end{figure} The nice behaviour of the HgMn stars in this diagram inspires confidence in the value of Hipparcos gravities. The comparison between spectroscopic and Hipparcos gravities for Si stars is shown in Fig. 2, where all stars of the list of NK89 having Hipparcos parallaxes with $\sigma(\pi)/\pi\leq 0.14$ are plotted (please note that some of them do not appear in Table 1 because they have $\pi < 7$~mas). Unfortunately, only six objects fulfill this criterion; among them, four are on the equality line within the errors (at least within $2 \sigma$), while two are clearly below. \begin{figure}[th!] \infig{8.8}{fig2.ps}{8.8} \caption{Comparison between spectroscopic and Hipparcos $\log g$ values for Si stars with $\sigma(\pi)/\pi\leq 0.14$. The continuous line is the one-to-one relationship.} \end{figure} The two outliers are HD 147010 and HD 199728. Interestingly, these stars have the largest photometric amplitude, as shown in Table 2 where the peak-to-peak amplitude in Str\"omgren's $u$ band (or Geneva $[U]$ band) is given with its source. This suggests that photometry overestimates $T_{\rm eff}$ in cases of extreme peculiarities\footnote{Interestingly, Abt \& Morrell (1995) classify HD 199728 as F0:Vp while it is surely hotter than 10000~K, even though photometry tends to overestimate its effective temperature.}, and is quite consistent with the fact that, in Table 1, some stars have $\log g$ values (determined from Hipparcos luminosities) around 4.5, which is about 0.2 dex more than the theoretical ZAMS value. This is probably due to an overestimate of their effective temperature.
It seems that those Ap stars having a more or less fundamental $T_{\rm eff}$ value have on average less extreme peculiarities than those having a good rotational period (hence a large photometric amplitude) and considered here, so that the photometric calibration tends to overestimate $T_{\rm eff}$ for some of the latter. Nevertheless, no systematic correction will be made to $\log g$ in this sample, because the bias strongly depends on the individual stars. \begin{table} \caption{Peak-to-peak amplitudes of the 6 stars having both a spectroscopic and a Hipparcos $\log g$ value.} \begin{center} \begin{tabular}{rcl}\hline \multicolumn{1}{c}{HD}&$u$ or $[U]$& Source \\ & total ampl.& \\ \hline 49976 & 0.055 & Catalano \& Leone (1994) \\ 90044 & 0.060 & Manfroid \& Renson (1994) \\ 94660 & 0.035 & Hensberge (1993) \\ 147010& 0.080 & North (1984c) \\ 164258& 0.016 & Catalano \& Leone (1994) \\ 199728& 0.127 & Renson (1978) \\ \hline \end{tabular} \end{center} \end{table} \subsection{Hipparcos radii versus $v\sin i$} In order to test the validity of the radii obtained using Hipparcos parallaxes, a comparison between the observed projected rotational velocities and equatorial velocities obtained from the formula of the oblique rotator model \begin{equation} V_{\rm eq}\,[{\rm km\,s}^{-1}] = 50.6\times R\,[R_\odot]/P\,[{\rm days}] \end{equation} is shown in Fig. 3. The sources of $v\sin i$ are Abt \& Morrell (1995), Levato et al. (1996), Renson (1991) and Uesugi \& Fukuda (1981). \begin{figure}[th!] \infig{8.8}{fig3.ps}{8.8} \caption{Comparison between the observed $v\sin i$ and the equatorial velocity computed from the period and from the Hipparcos radius. The continuous line is the one-to-one relationship. Stars lying above this line are labeled by their HD number.
Arrows indicate cases where only an upper limit to $v\sin i$ is known.} \end{figure} Most stars fall below the equality line, as expected from $\sin i \leq 1$; therefore, the test appears rather successful, statistically speaking. However, seven of them are above, at least two of them simply because of the uncertainty on the $v\sin i$ determination (HD 126515 and HD 187474, with $V_{\rm eq}\sim 0$). HD 199728 is only slightly above, but this may well be due to an underestimate of its radius linked with an overestimate of its effective temperature (see Subsection 2.3 and Fig. 2 above). This is also the case of HD 24155, HD 142884 and HD 221006 (which have $\log g = 4.48$, 4.67 and 4.51 respectively, suggesting an underestimated radius), although the radius of the latter star would need to be strongly underestimated. The star HD 14392 (and possibly HD 221006 too) lies so high above the equality line that its rotational period may be questioned. Indeed, Pyper \& Adelman (1985) proposed a period of 1.3102 days (following Winzer 1974), instead of 4.189 days proposed later by e.g. Adelman \& Knox (1994). The photometric curves of HD 14392 are so scattered that the shorter period may be the right one after all; magnetic and spectroscopic observations should be done to settle the matter. The rotational period of HD 221006 has been found to lie around 2.31 days by Renson (1978) and this was confirmed by Manfroid \& Mathys (1985) and by Leone et al. (1995). There seems to be no reason to question this value; therefore, we are left with two possibilities: either $v\sin i=69$~km\,s$^{-1}$ (Uesugi \& Fukuda 1970) is overestimated, or the radius is underestimated by more than 30 percent. The latter appears doubtful, since $T_{\rm eff}= 13275$~K has been estimated in a quasi-fundamental way (with the IR Flux Method) by M\'egessier (1988) and is only 370~K lower than our photometric estimate: such a difference does not imply an increase of $R$ by more than 3 percent.
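The constant 50.6 in the oblique rotator relation above is simply the solar circumference in km covered per day; a quick numerical check (the solar radius value adopted here is our assumption):

```python
from math import pi

R_SUN_KM = 6.957e5  # solar radius in km (assumed value)
DAY_S = 86400.0     # one day in seconds

# constant C in V_eq [km/s] = C * R [R_sun] / P [days]
C = 2.0 * pi * R_SUN_KM / DAY_S

def v_eq(radius_rsun, period_days):
    """Equatorial velocity in km/s from the oblique rotator relation."""
    return C * radius_rsun / period_days

print(round(C, 1))
```

With the Hipparcos radii of Table 1, `v_eq` gives the ordinate values plotted in Fig. 3 against the observed $v\sin i$.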
Finally, HD 142884 has a reliable period and its radius must be underestimated by about 20 percent, as suggested by its very large $\log g$ (4.67). An independent estimate of its $T_{\rm eff}$ would be extremely welcome. \section{The $\log P_{\rm rot}$ vs. $\log g$ diagram} Fig. 4 shows the distribution of stars according to their rotational period and surface gravity. There is of course an intrinsic scatter, but on average the width of the period distribution is relatively narrow and there are clearly longer periods among the more evolved stars. Stars with both a small $\log g$ and a very short period are lacking. There are two stars falling below the lower envelope in a significant way: HD~115599 and HD~150035. HD~115599 was measured photometrically by Moffat (1977) only once a night near culmination, so that the published period might very well be an alias of the real one. The photometric measurements of HD 150035 made by Borra et al. (1985) do not seem very precise, judging from the low S/N lightcurve they published. The period of this star appears to remain highly uncertain. It is interesting to consider the case of CU Vir (HD 124224), because in the literature a very small $\log g$ is sometimes quoted: for instance, Hiesberger et al. (1995) quote values as small as 3.45 to 3.60 (obtained from spectrophotometric scans), but also 4.2 and 3.71. The latter two values come from the same $uvby\beta$ photometric indices but through two different calibrations. The Hipparcos data, together with $T_{\rm eff} = 12130$~K obtained from Geneva photometry, point to $\log g = 4.37\pm 0.09$, i.e. the star is very close to the ZAMS. If a higher effective temperature is adopted, like $T_{\rm eff} = 13000$~K, the result becomes worse, with $\log g = 4.50\pm 0.08$ (the error on $\log g$ was computed assuming an error of only 400~K on $T_{\rm eff}$).
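The shift in $\log g$ quoted for CU Vir when $T_{\rm eff}$ is raised from 12130 to 13000~K is dominated by the $4\log T_{\rm eff}$ term of the gravity relation of Sect. 2.2; holding mass and luminosity fixed (a simplification on our part; the small remainder comes from the accompanying change in interpolated mass and luminosity) recovers almost all of the 0.13 dex:

```python
from math import log10

def dlogg_teff_term(teff_old, teff_new):
    # dominant Teff dependence of log g = log M + 4 log Teff - log L + const,
    # at fixed mass and luminosity (a deliberate simplification)
    return 4.0 * log10(teff_new / teff_old)

print(round(dlogg_teff_term(12130.0, 13000.0), 2))
```

The term alone gives about +0.12 dex, compared with the +0.13 dex quoted above.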
The conclusion that CU Vir is unevolved seems inescapable and is consistent with the fact that no Bp or Ap star has a rotational period significantly shorter than 0.5 days (the record is held by HD 60431, with $P_{\rm rot} = 0.47552$~days, see North et al. 1988). This may bear some importance in view of the fact that CU Vir is the only Ap star for which a period change has been unambiguously identified (Pyper et al. 1998). Any explanation for this intriguing discovery will have to take into account the unevolved state of the star. The full and broken lines drawn in Fig. 4 are in effect evolutionary tracks: assuming an initial period of 0.5~days (respectively 4.0~days), they show how a star rotating as a rigid body will evolve, if no loss of angular momentum occurs. These lines essentially reflect how the moment of inertia changes with evolution for stars having 2.5 and 5~M$_\odot$. They depend in a negligible way on the mass and are entirely compatible with the observations. They were established starting from the conservation of angular momentum: \begin{equation} I\omega = I_0\omega_0 \end{equation} where $\omega$ is the star's angular velocity, $I$ the moment of inertia and the subscript $0$ indicates initial value (i.e. on the ZAMS). For the period, one has \begin{equation} P = \frac{2\pi}{\omega}\, \Rightarrow \,P = P_0 \frac{I}{I_0} \, \Rightarrow \, \log P = \log P_0 + \log \frac{I}{I_0} \end{equation} How the moment of inertia changes with evolution is provided by the models of Schaller et al. (1992), through a code kindly provided by Dr. Georges Meynet. The two steep, straight dotted lines illustrate the extreme case of conservation of angular momentum in concentric shells which would rotate rigidly but glide one over the other without any viscosity, i.e. without the least radial exchange of angular momentum.
In such a case, the moment of inertia of each shell of mass $\delta m$ and radius $r$ reads \begin{equation} I = \frac{2}{3} \delta m r^2 \end{equation} and in particular, the outermost shell having $r=R$ and being the only one observed, one gets \begin{eqnarray} P = P_0 \left(\frac{R}{R_0}\right)^2 = P_0 \frac{g_0}{g} \\ \log P = \log P_0 + \log g_0 -\log g \end{eqnarray} This case is certainly idealized and not very realistic, but it is shown for illustrative purposes. Do Si stars undergo any rotational braking during their life on the Main Sequence? Because of the decreasing number of stars with decreasing $\log g$, the statistics remain rather limited, and doubling the number of stars in the range $\log g < 3.8$ would be very useful. Nevertheless, the data are entirely compatible with nothing more than conservation of angular momentum for a rigidly rotating star. They may be marginally consistent with the dotted lines whose slope is 1 (conservation of angular momentum for independent spherical shells): if these lines are interpreted as betraying some {\it loss} of angular momentum through some braking mechanism yet to be understood, then this loss cannot increase the period by more than about \begin{equation} \log P = \log P_0 + 0.325 (\log g_0 -\log g) \end{equation} meaning a relative increase of no more than 82 percent during the whole Main Sequence lifetime. This is only a fraction of the increase due to angular momentum conservation alone (for a rigid sphere). The whole reasoning has been applied to a mix of stars with various masses (between 2.2 and 5.7~M$_\odot$), but if any magnetic braking exists, its efficiency might well be a sensitive function of mass. Then, one would need a larger sample, allowing $\log P$ vs $\log g$ diagrams to be built separately for stars in narrow mass ranges.
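As a sanity check on the 82 percent figure quoted above, the following short Python sketch evaluates the relation $\log P = \log P_0 + 0.325\,(\log g_0 - \log g)$. The total Main-Sequence decrease of about 0.8 dex in $\log g$ is an assumed value (consistent with ZAMS gravities near 4.4 and evolved stars at $\log g < 3.8$), not one stated explicitly in the text.

```python
def period_increase(slope, delta_log_g):
    """Relative period increase implied by log P = log P0 + slope * (log g0 - log g)."""
    return 10 ** (slope * delta_log_g) - 1

# Slope 0.325 from the text; assume a total Main-Sequence decrease of ~0.8 dex in log g.
increase = period_increase(0.325, 0.8)
print(f"relative period increase: {increase:.0%}")  # about 82 percent
```

With these inputs the increase comes out at just over 80 percent, matching the bound quoted in the text.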
The sample as a whole would not need to be enlarged in an unrealistic way: it is especially the evolved stars which are crucial for the test, so increasing their number from 13 (for $\log g < 3.8$) to about 50 or 70 would probably be enough to answer the question on firmer grounds. Spectroscopic observations would be needed to estimate $\log g$ (and hopefully $T_{\rm eff}$!) and photometric ones to determine the periods. \begin{figure}[th!] \infig{8.8}{fig4.ps}{8.8} \caption{Rotational period versus surface gravity. Full symbols represent stars with a reliable period, open symbols are for possibly ambiguous periods. Round dots (and triangles) represent stars with a spectroscopic value of $\log g$, while diamonds are for stars with $\log g$ determined from Hipparcos data. The three triangles are for stars with a rotational period newly determined from Hipparcos photometry (the upside-down triangle has $\log g$ determined from Hipparcos, the others from spectroscopy). The continuous and broken lines represent the evolution of the period predicted from that of the moment of inertia, under the assumption of rigid-body rotation and for initial periods of 0.5 and 4 days. The dotted lines show the ideal case of conservation of angular momentum in independent spherical shells.} \end{figure} \section{Conclusion} New surface gravities of magnetic Bp and Ap stars obtained from the Hipparcos parallaxes, as well as homogeneous spectroscopic gravities, have been used to reconsider how the rotational period of such stars varies with age. The result is entirely consistent with previous works suggesting that field Si stars do not undergo any significant magnetic braking during their life on the Main Sequence; it is also more firmly based than earlier studies made on field Ap stars. Therefore, the slow rotation of these objects must be a property acquired {\it before} they arrive on the ZAMS. 
How this occurs has just been explored by Stepien (1998), but further investigations remain worthwhile. On the other hand, this study has shown that $\log g$ values obtained from Hipparcos luminosities may be overestimated by up to 0.2 dex for some extreme Ap stars, probably through an overestimate of their $T_{\rm eff}$. This shows how badly fundamental determinations of $T_{\rm eff}$ are needed for these stars. \acknowledgements{This work was supported in part by the Swiss National Foundation for Scientific Research. I thank Fabien Carrier for his update of the database of Ap stars, which was widely used, and Dr. Georges Meynet for the code which allowed me to compute stellar moments of inertia. Daniel Erspamer computed the visual absorptions and Dr. Laurent Eyer provided a list of new periods of Ap stars obtained with Hipparcos. This research has made use of the Simbad database, operated at CDS, Strasbourg, France. I thank the referee for his constructive criticism.}
\section{Introduction}\label{sec:intro} The concept of magnetic monopoles (MMs) goes back to the origin of magnetism. At the beginning of the 19th century there were discussions concerning the magnetic content of matter and the possible existence of isolated magnetic charges. In 1931 Dirac introduced the MM in order to explain the quantization of the electric charge~\cite{dirac}. He established the relation between the elementary electric charge $e$ and a basic magnetic charge $g$: $eg=n\hbar c/2= ng_{D}$, where $n$ is an integer, $n=1,2,\ldots$; $g_D=\hbar c/2e = 68.5 e$ is the unit Dirac charge. The existence of magnetic charges and of magnetic currents would symmetrize in form Maxwell's equations, but the symmetry would not be perfect since $e \neq g$ (but the couplings could be energy dependent and could merge in a common value at high energies)~\cite{derujula}. There was no prediction for the MM mass; a rough estimate, obtained by assuming that the classical monopole radius is equal to the classical electron radius, yields $m_M \simeq \frac{g^{2}m_e}{e^{2}} \simeq n \ 4700\ m_e \simeq n \ 2.4\ GeV/c^{2}$. From 1931 onward, searches for \textit{``classical Dirac monopoles"} were carried out at every new accelerator using simple setups, and recently also large collider detectors~$^{3-7}$. \par Electric charge is naturally quantized in Grand Unified Theories (GUT) of the basic interactions; they imply the existence of \textit{GUT monopoles} with calculable properties. The MMs appear in the Early Universe at the phase transition corresponding to the breaking of the unified group into subgroups, one of which is U(1)~\cite{thooft}. The MM mass is related to the mass of the X, Y carriers of the unified interaction, $ m_{M}\ge m_{X}/G$, where G is the dimensionless unified coupling constant at energies E $\simeq m_{X}$. If $m_{X}\simeq 10^{14}-10^{15}$ GeV and $G\simeq0.025$, $m_{M}>10^{16}-10^{17}$ GeV.
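The two mass estimates quoted above can be verified with elementary arithmetic; the following Python sketch uses only numbers given in the text ($g_D = 68.5\,e$, $m_e \simeq 0.511$ MeV$/c^2$, $m_X \sim 10^{15}$ GeV, $G \simeq 0.025$).

```python
# Arithmetic check of the monopole mass estimates quoted above; all input
# numbers are taken from the text.
m_e_GeV = 0.511e-3        # electron mass in GeV/c^2 (standard value)
g_over_e = 68.5           # unit Dirac charge, g_D = 68.5 e

# Classical estimate: m_M ~ (g_D^2 / e^2) m_e
ratio = g_over_e ** 2               # ~4692, the "4700 m_e" of the text
m_classical_GeV = ratio * m_e_GeV   # ~2.4 GeV/c^2

# GUT estimate: m_M >= m_X / G, with m_X ~ 1e15 GeV and G ~ 0.025
m_gut_GeV = 1e15 / 0.025            # ~4e16 GeV, within the quoted 1e16-1e17 range

print(round(ratio), round(m_classical_GeV, 1), f"{m_gut_GeV:.0e}")
```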
This is an enormous mass: MMs cannot be produced at any man--made accelerator, existing or conceivable. They may have been produced only in the first instants of our Universe. \par Larger MM masses are expected if gravity is brought into the unification picture, and in some SuperSymmetric models. \par \textit{Multiply charged Intermediate Mass Monopoles }(IMMs) may have been produced in later phase transitions in the Early Universe, when a semisimple gauge group yields a U(1) group~\cite{lazaride}. IMMs with m$_M$ $\sim 10^{7} \div 10^{13}$ GeV may be accelerated to relativistic velocities in one galactic magnetic field domain. Very energetic IMMs could yield the highest energy cosmic rays~\cite{bhatta}. \par The lowest mass MM is stable, since magnetic charge is conserved like electric charge. Thus the poles produced in the Early Universe should still exist as cosmic relics; their kinetic energy was affected by the expansion of the Universe and by travel through galactic and intergalactic magnetic fields. \par GUT poles are best searched for underground in the penetrating cosmic radiation (CR). IMMs may be searched for at high altitude laboratories. \par In this lecture we review the experimental situation on MM searches and briefly discuss the searches for nuclearites~\cite{nucleariti} and Q-balls~\cite{qballs}. \section{Properties of magnetic monopoles}\label{sec:prop-mm} The main properties of MMs are obtained from the Dirac relation. \par \noindent - If $n$~=1 and the basic electric charge is that of the electron, then the {\it basic magnetic charge} is $ g_D =\hbar c/ 2e=137e/2$. The magnetic charge is larger if $n>1$ and if the basic electric charge is $e/3$. \noindent - In analogy with the fine structure constant, $\alpha =e^{2}/\hbar c\simeq 1/137$, the {\it dimensionless magnetic coupling constant} is $ \alpha_g=g^{2}_{D}/ \hbar c \simeq 34.25$; since it is $>1$, perturbative calculations cannot be used.
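The value $\alpha_g \simeq 34.25$ follows directly from the Dirac relation, since $g_D = (137/2)\,e$ implies $\alpha_g = g_D^2/\hbar c = 137/4$; a one-line numerical check:

```python
# Check that alpha_g = 137/4 follows from the Dirac relation:
# g_D = hbar c / 2e = (137/2) e, hence alpha_g = g_D^2 / (hbar c) = 137/4.
inv_alpha = 137                        # 1/alpha = hbar c / e^2 (rounded, as in the text)
g_D_over_e = inv_alpha / 2             # 68.5, the unit Dirac charge in units of e
alpha_g = g_D_over_e ** 2 / inv_alpha  # (137/2)^2 / 137 = 137/4
print(alpha_g)  # 34.25
```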
\par \noindent - {\it Energy W acquired in a magnetic field B}:~ $ W = ng_{D} B\ell = n \ 20.5$ keV/G~cm. Over one coherent galactic field length ($\ell\simeq 1$ kpc, $B\simeq 3~\mu$G), the energy gained by a MM with $ g=g_{D}$ is $ W \simeq 1.8\times 10^{11}$ GeV. Classical poles and IMMs in the CR may be accelerated to relativistic velocities. GUT poles should have low velocities, $10^{-4}<\beta<10^{-1}$. \par \noindent- {\it MMs may be trapped in ferromagnetic materials} by an image force, which could reach values of $\sim 10$ eV/\AA. \par \noindent- Electrically charged monopoles (dyons) may arise as quantum--mechanical excitations or as M--p, M--nucleus composites.\par \noindent- The interaction of a MM magnetic charge with a nuclear magnetic dipole could lead to the formation of a M--nucleus bound system. A monopole--proton bound state may be produced via radiative capture. Monopole--nucleus bound states may exist for nuclei with large gyromagnetic ratios. \par \noindent- {\it Energy losses of fast poles.} A fast MM with magnetic charge $g_D$ and velocity $v=\beta c$ behaves like an electric charge $(ze)_{eq}=g_D\beta$, Fig.\ \ref{fig:perdita-di-energia}.\par \noindent - {\it Energy losses of slow poles} ($10^{- 4}<\beta<10^{-2}$) may be due to ionization or excitation of atoms and molecules of the medium (``electronic'' energy loss) or to recoiling atoms or nuclei (``atomic'' or ``nuclear'' energy loss). Electronic energy loss predominates for $\beta>10^{-3}$. \par \noindent - {\it Energy losses at very low velocities.} MMs with $v<10^{-4}c$ may lose energy in elastic collisions with atoms or with nuclei.
The energy is released to the medium in the form of elastic vibrations and/or infra--red radiation~\cite{derkaoui1}.\par Fig.\ \ref{fig:perdita-di-energia} shows the energy loss in liquid hydrogen of a $g=g_D$ MM vs $\beta$~\cite{gg+lp}.\par \begin{figure}[ht] \begin{center} \includegraphics[width=0.8\textwidth]{perdita-di-energia.eps} \end{center} \caption{The energy losses, in MeV/cm, of $g=g_D$ MMs in liquid hydrogen vs ${ \beta}$. Curve a) corresponds to elastic monopole--hydrogen atom scattering; curve b) to interactions with level crossings; curve c) describes the ionization energy loss.} \label{fig:perdita-di-energia} \end{figure} \noindent - {\it Energy loss of MMs in celestial bodies.} For $\beta$ $<10^{-4}$ the dE/dx in the Earth is due to pole--atom elastic scattering, eddy currents, and nuclear stopping power. MMs may be stopped by celestial bodies if they have:\\ \noindent Moon: $\beta\leq 5\times {10^{-5}}$,\quad Earth: $\beta \leq 10^{-4}$, \quad Sun: $\beta \leq 10^{-3}.$\par \section{Monopole detectors}\label{sec:mm-det} Monopole detectors are based on MM properties given by Dirac's relation. \par \noindent - {\it Superconducting induction devices are sensitive to MMs of any velocity~\cite{gg1}.} A moving MM induces in a ring an electromotive force and a current change ($\Delta i$). For a coil with N turns and inductance {\it L}, $ \Delta i=4\pi N ng_D/L=2\Delta i_o$, where $\Delta i_o$ is the current change corresponding to a change of one unit of the flux quantum of superconductivity. This method of detection is based only on the long--range electromagnetic interaction between the magnetic charge and the macroscopic quantum state of a superconducting ring. \par \noindent - {\it Scintillation counters} for MMs have a threshold $\beta \sim 10^{-4}$, above which the light signal is larger than that of a minimum ionizing particle~\cite{derkaoui1,macro1}. \noindent - {\it Gaseous detectors } of various types have been used. 
MACRO used a gas mixture of 73\% helium and 27\% n--pentane~\cite{macro1}. This allows exploitation of the Drell~\cite{drell} and Penning effects~\cite{gg1}: a MM leaves a helium atom in a metastable state (He*) with an excitation energy of $\simeq 20$ eV. The ionization potential of n--pentane is $\simeq$~10 eV; the excitation energy of the He* is converted into ionization of the n--pentane molecule (Penning effect). \par \noindent - {\it Nuclear track detectors (NTDs).} The formation of an etchable track in a NTD is related to the Restricted Energy Loss (REL), the fraction of the energy loss localized in a cylindrical region of 10 nm diameter around the particle trajectory. It was shown that both the electronic and the nuclear energy losses are effective in producing etchable tracks in the CR39 NTD, which has a threshold at $z/\beta \simeq5$~\cite{cr39}; it is the most sensitive NTD and allows searches for MMs with $g=g_D$ for $\beta$ around $10^{-4}$ and $>10^{-3}$, and over the whole $\beta$-range $4 \times 10^{-5}<\beta< 1$ for MMs with $g \geq 2 g_D$~\cite{derkaoui1}. The Lexan and Makrofol polycarbonates are sensitive for $z/\beta \geq 50$~\cite{barcellona}. \section{``Classical Dirac monopoles''} \noindent - {\it Accelerator searches.} If MMs are produced at high--energy accelerators, they would be relativistic and would ionize heavily. Examples of \textit{direct searches} are the experiments performed with scintillators or NTDs. Experiments at the Fermilab $\overline p p$ collider established cross section limits of $\sim 2\times 10^{-34}$~cm$^2$ for MMs with $m_M<850$ GeV~\cite{bertani}. Searches at $e^{+}e^{-}$ colliders excluded masses up to 45 GeV and later in the 45-102 GeV range ($\sigma<5\times 10^{-37}$~cm$^2$).
Recently, a few high-energy general-purpose detectors have used some of their subdetectors to search for Dirac MMs~\cite{opal}.\par Fig.\ \ref{fig:mmclass2} summarizes the cross section limits vs MM mass obtained by direct and indirect experiments (solid lines and dashed lines) at the Fermilab $\overline p p$ collider, $e^{+}e^{-}$ colliders, the ISR $p p$ collider~\cite{gg+lp}. Most searches are sensitive to poles with magnetic charges $g =n g_{D}/q$ with $0.5<n<5$.\par Examples of indirect searches are those performed at the CERN SPS and at Fermilab: the protons interacted in ferromagnetic targets; later the targets were placed in front of a superconducting solenoid with a field $B>100$ kG, large enough to extract and accelerate the MMs, to be detected in scintillators and in NTD sheets~\cite{gg1}. An indirect experiment performed at the $\bar{p}p$ Tevatron collider assumed that produced MMs could stop, be trapped and bound in the matter surrounding a collision region~\cite{kalbfleish}. Small Be and Al samples were passed through the 10 cm diameter bore of two superconducting coils, and the induced charge was measured by SQUIDs. Limits of m$_M>285$ GeV were published for $g=g_D$ poles. It is difficult to establish the validity of the hypotheses made to interpret these results.\par \vspace{2mm} \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{mmclass2.eps} \end{center} \vspace{-5mm} \caption{Classical Dirac MM cross section upper limits vs MM mass obtained from direct accelerator searches (solid lines) and indirect searches (dashed lines).} \label{fig:mmclass2} \end{figure} \noindent - {\it Multi--$\gamma$ events.} Five peculiar photon showers found in emulsion plates exposed to high--altitude CRs are characterized by an energetic narrow cone of tens of photons, without any incident charged particle~\cite{multigamma}. The total energy of the photons is $\sim 10^{11}$ GeV. The small radial spread of photons suggested a c.m. $\gamma=(1-\beta^{2})^{-1/2}>10^3$.
The energies of the photons are too small to have $\pi^o$ decays as their source. One possible explanation: a high--energy $\gamma$--ray, with energy $>10^{12}$ eV, produced a pole--antipole pair, which suffered bremsstrahlung and annihilation producing the final multi--$\gamma$ events. Searches for multi-$\gamma$ events were performed in $pp$ collisions at the ISR at $\sqrt{s}=53$ GeV, at the $\bar{p}p$ 1.8 TeV collider and in $e^{+}e^{-}$ collisions at LEP (Fig.\ \ref{fig:mmclass2}). The D0 experiment searched for $\gamma$ pairs with high transverse energies; virtual pointlike MMs may rescatter pairs of nearly real photons into the final state via a box monopole diagram; they set a 95\% CL limit of 870 GeV~\cite{kalbfleish}. At LEP the L3 coll. searched for $Z\rightarrow \gamma\gamma\gamma$ events; no deviation from QED predictions was observed, setting a 95\% CL limit of 510 GeV~\cite{kalbfleish}. Many authors studied the effects from virtual monopole loops~\cite{derujula,ginzburg}. The authors of Ref.~\cite{anti-d0} criticized the underlying theory and believe that no significant limit can be obtained from present experiments. \par \noindent - {\it Searches in bulk matter.} Classical MMs could be produced by CRs and could stop at the Earth's surface, where they may be trapped in ferromagnetic materials. Bulk matter searches used hundreds of kg of material, including meteorites, schists, ferromanganese nodules, iron ore and others. A superconducting coil through which the material was passed yielded a monopole/nucleon ratio in the samples $<1.2\times 10^{-29}$ at 90\% CL~\cite{gg1}. \par Ruzicka and Zrelov summarized all searches for classical poles performed before 1980~\cite{ruzicka}. A more recent bibliography is given in Ref.~\cite{biblio}.
Possible effects arising from low mass MMs have been reported in Ref.~\cite{oscuro}.\par \section{GUT monopoles} As already stated, GUT theories of the electroweak and strong interactions predict the existence of superheavy MMs produced in the Early Universe (EU) when the GUT gauge group breaks into separate groups, one of which is U(1). Assuming that the GUT group is SU(5) (which is excluded by proton decay experiments) one should have the following transitions: \begin{equation} \footnotesize \begin{array}{ccccc} {} & 10^{15}\ GeV & {} & 10^{2}\ GeV & {} \\ SU(5) & \longrightarrow & SU(3)_{C}\times \left[ SU(2)_{L}\times U(1)_{Y}\right] & \longrightarrow & SU(3)_{C}\times U(1)_{EM} \\ {} & \small10^{-35}s & {} & \small10^{-9}s & {} \end{array} \end{equation} MMs would be generated as topological point defects in the GUT phase transition, about one pole for each causal domain. In the standard cosmology this leads to too many poles (the monopole problem). Inflation would defer the GUT phase transition until after a large supercooling; in its simplest version the number of generated MMs would be very small. However, the flux depends critically on several parameters, such as the pole mass, the reheating temperature, etc. If the reheating temperature is large enough one would have MMs produced in high energy collisions, like $e^{+}e^{-}\rightarrow M\bar{M}$. \\ Fig.\ \ref{fig:gut} shows the structure of a GUT MM: a very small core, an electroweak region, a confinement region, a fermion--antifermion condensate (which may contain 4--fermion baryon--number--violating terms); for $r\geq 3$ fm it behaves as a point particle generating a field $B=g/r^{2}$~\cite{picture}. \begin{figure} \begin{center} \includegraphics[width=0.74\textwidth]{gut.eps} \end{center} \vspace{-3mm} \caption{Structure of a GUT pole.
The 4 regions correspond to: (i) Grand Unification ($r \sim 10^{-29}$ cm; inside this core one finds virtual $X$, $Y$ particles); (ii) electroweak unification ($r \sim 10^{-16}$ cm; inside one finds virtual $W^{\pm}$ and $Z^0$); (iii) confinement region ($r \sim 10^{-13}$ cm; inside one finds virtual $\gamma$, gluons, fermion-antifermion pairs and possibly 4-fermion virtual states); (iv) for $r>$ few fm one has the field of a point magnetic charge.} \label{fig:gut} \end{figure} A flux of cosmic GUT MMs may reach the Earth with a velocity spectrum in the range $4 \times 10^{-5} <\beta <0.1$, with possible peaks corresponding to the escape velocities from the Earth, the Sun and the Galaxy. Searches for such MMs in the CR performed with superconducting induction devices yielded a combined 90\%~CL limit of $2 \times 10^{-14}~$cm$^{-2}$~s$^{-1}$~sr$^{-1}$, independent of $\beta$~\cite{gg+lp}. Direct searches were performed above ground and underground~$^{4, 25-27}$. MACRO performed a search with different types of detectors (liquid scintillators, limited streamer tubes and NTDs) with an acceptance of $\sim$ 10,000 m$^2$sr for an isotropic flux. No MM was detected; the 90\% CL flux limits, shown in Fig.\ \ref{fig:global2} vs $\beta$ for $g=g_D$, are at the level of $1.4\times 10^{-16}$~cm$^{-2}$~s$^{-1}$~sr$^{-1}$ for $\beta > 4 \times 10^{-5}$~\cite{mm_macro}. The figure also shows the limits from the Ohya~\cite{ohya}, Baksan, Baikal, and AMANDA experiments~\cite{baksan}. \begin{figure} \begin{center} \includegraphics[width=0.78\textwidth]{trieste04.eps} \end{center} \caption{The 90\% CL MACRO direct upper limits vs $\beta$ for GUT $g=g_D$ poles in the penetrating CR, and direct limits from other experiments (see text).} \label{fig:global2} \end{figure} The interaction of the GUT monopole core with a nucleon can lead to a reaction in which the nucleon decays (monopole catalysis of nucleon decay), e.g. \( M + p \rightarrow M + e^+ + \pi^0\).
The cross section for this process is very small, of the order of magnitude of the core size; but the catalysis process could proceed via the Rubakov-Callan mechanism with a $\sigma$ of the order of the strong interaction cross section~\cite{rubakov}. MACRO performed a dedicated search for nucleon decays induced by the passage of a GUT pole in the streamer tube system. The flux limits obtained, $3-8 \times 10^{-16}$~cm$^{-2}$~s$^{-1}$~sr$^{-1}$, depend on the MM velocity and on the catalysis cross section~\cite{catalisi}. Previous limits were at levels $10^{-15}$~cm$^{-2}$~s$^{-1}$~sr$^{-1}$~\cite{catalisi}, except the Baikal limit, which is $6 \times 10^{-17}$~cm$^{-2}$~s$^{-1}$~sr$^{-1}$ for $\beta \simeq 10^{-5}$~\cite{baksan}.\par Indirect GUT MM searches used ancient mica, which has a high threshold. It is assumed that a pole passing through the Earth captures an Al nucleus and drags it through subterranean mica, causing a trail of lattice defects, which survive as long as the mica is not reheated. Only small sheets were analyzed ($13.5$ and $18$ cm$^2$), but they should have been recording tracks for $4\div9\times 10^8$ years. The flux limits are $10^{-17} ~\mbox{cm}^{-2}~ \mbox{s}^{-1} $sr$^{-1}$ for $10^{- 4}<\beta<10^{-3}$~\cite{price}. There are reasons why these indirect experiments might not be sensitive: if MMs have a positive electric charge or protons attached, then Coulomb repulsion could prevent the capture of heavy nuclei.\par \section{Cosmological and astrophysical bounds} Rough upper limits for a GUT monopole flux in the CR were obtained on the basis of cosmological and astrophysical considerations.\par \noindent - {\it Limit from the mass density of the universe:} it is obtained by requiring that the present MM mass density be smaller than the critical density $\rho_c$ of the universe. For $m_M\simeq 10^{17}$ GeV one has the limit: $F={n_Mc\over 4\pi}\beta<3\times 10^{-12}h^2_0\beta~(\mbox{cm}^{-2}\mbox{s}^{-1} \mbox{sr}^{-1})$.
It is valid for poles uniformly distributed in the universe. If poles are clustered in galaxies the limit is larger~\cite{gg1}. \noindent - {\it Limit from the galactic magnetic field (Parker limit).} The $\sim 3\ \mu$G magnetic field in our Galaxy is probably due to the non--uniform rotation of the Galaxy, which generates a field with a time--scale of the order of the rotation period of the Galaxy $(\tau\sim 10^8$ yr). An upper bound for the MM flux is obtained by requiring that the kinetic energy gained per unit time by MMs be less than the magnetic energy generated by the dynamo effect: $F<10^{- 15}~\mbox{cm}^{-2}~\mbox{s}^{-1}$ sr$^{-1}$~\cite{parker}; taking into account the almost chaotic nature of the field, with domains of $\ell\sim 1$ kpc, the limit becomes mass dependent~\cite{parker}. An extended ``Parker bound", obtained by considering the survival of an early seed field~\cite{adams}, yields $ F\leq 1.2 \times 10^{-16}(m_M/10^{17}GeV)~\mbox{cm}^{- 2}~\mbox{s}^{-1}~ \mbox{sr}^{-1}$. \par \noindent - {\it Limit from the intergalactic (IG) magnetic field.} If $B_{IG}\sim 3\times 10^{-8}~G$ with a regeneration time $\tau_{IG}\sim 10^9~y$, a more stringent bound is obtained; the limit is less reliable because the IG field is less well known. \par \noindent - {\it Limits from peculiar A4 stars and from pulsars} may be stringent, but the assumptions made are not clear (see the pulsar PSR 1937+214)~\cite{gg1,gg+lp}. \section{Intermediate mass magnetic monopoles} IMMs may appear as topological point defects at a later time in the Early Universe; e.g.
the SO(10) GUT group would not directly yield a U(1) group \begin{equation} \footnotesize \begin{array}{ccccc} {} & 10^{15}\ GeV & {}& 10^{9}\ GeV & \\ SO(10) & \longrightarrow & SU(4)\times SU(2)\times SU(2) & \longrightarrow & SU(3)\times SU(2)\times U(1) \\ {} & \small10^{-35}s & {} & \small10^{-23}s & {} \end{array} \end{equation} \noindent This would lead to MMs with masses of $\sim 10^{10}$ GeV; they would survive inflation, be stable, ``doubly charged'' ($g=2g_D$) and would not catalyze nucleon decay~\cite{lazaride}. The structure of an IMM would be similar to that of a GUT MM, but the core would be larger (since R $\sim$ 1/$m_M$) and the outer cloud would not contain 4--fermion baryon--number--violating terms. \par Relativistic IMMs, $10^7<m_M<10^{13}$ GeV, could be present in the cosmic radiation and could be accelerated to large $\gamma$ in one coherent domain of the galactic field. Thus one would have to look for MMs with $\beta\ge0.1$.\par Detectors at the Earth's surface could detect MMs coming from above if they have $m_M>10^5-10^6$ GeV~\cite{derkaoui1}; lower mass MMs may be searched for with detectors located at high mountain altitudes, on balloons and on satellites. \par Few experimental results are available. Fig.\ \ref{fig:imm1} shows the situation on the flux upper limits for IMMs~\cite{gg+lp}. The Cherenkov neutrino telescopes under ice and underwater are sensitive to fast ($\gamma \gg 1$) MMs coming from above. \begin{figure} \begin{center} \includegraphics[width=0.750\textwidth]{imm-trieste04.eps} \end{center} \caption{Experimental 90\% CL upper limits for a flux of IMMs with mass $m_M=10^{10}$ GeV plotted versus $\beta$.} \label{fig:imm1} \end{figure} The SLIM experiment, which searches for IMMs with NTDs at the Chacaltaya high altitude lab (5290 m a.s.l.)~\cite{slim}, is sensitive to $g=2g_D$ MMs in the whole range $4 \times 10^{-5}<\beta <1$.
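As an illustration of why IMMs in this mass range can be relativistic, one can combine the magnetic-field energy gain quoted in Sec.~2 ($W = n\,g_D B \ell$, i.e. $20.5$ keV/G~cm, giving $\sim 1.8\times 10^{11}$ GeV over one kpc-scale domain with $B \simeq 3\ \mu$G) with a $10^{10}$ GeV monopole mass. The sketch below assumes, for simplicity, that the whole field energy is converted into kinetic energy.

```python
import math

keV_per_gauss_cm = 20.5      # energy gain per unit field and path (from the text)
B_gauss = 3e-6               # galactic field, ~3 microgauss
ell_cm = 3.086e21            # 1 kpc in cm (standard conversion)

# Energy gained over one coherent field domain, in GeV:
# ~1.9e11 GeV, consistent with the ~1.8e11 GeV quoted in the text.
W_GeV = keV_per_gauss_cm * B_gauss * ell_cm * 1e3 / 1e9

# Lorentz factor of a 1e10 GeV IMM, assuming all of the gained energy is kinetic
m_GeV = 1e10
gamma = 1 + W_GeV / m_GeV
beta = math.sqrt(1 - 1 / gamma**2)
print(f"W = {W_GeV:.1e} GeV, gamma = {gamma:.0f}, beta = {beta:.4f}")
```

The resulting $\beta$ is close to 1, well inside the $\beta \ge 0.1$ regime mentioned above.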
\section{Nuclearites and Q-balls} Strange Quark Matter (SQM) should consist of aggregates of \textit{u, d} and \textit{s} quarks in almost equal proportions; the number of \textit{s} quarks should be lower than the number of \textit{u} or \textit{d} quarks and the SQM should have a positive integer charge. The overall neutrality of SQM is ensured by an electron cloud which surrounds it, forming a sort of atom (see Fig.\ \ref{fig:qpict}). SQM should have a constant density $\rho_N = M_N /V_N\simeq 3.5 \times 10^{14}$~g~cm$^{-3}$, larger than that of atomic nuclei, and it should be stable for all baryon numbers in the range between ordinary heavy nuclei and neutron stars (A $\sim 10^{57}$). Lumps of SQM with baryon number $A<10^6-10^7$ are usually called ``strangelets''; the word ``nuclearite'' was introduced to indicate large lumps of SQM which could be present in the CR~\cite{nucleariti}. SQM lumps could have been produced shortly after the Big Bang and may have survived as remnants; they could also appear in violent astrophysical processes, such as neutron star collisions. SQM could contribute to the cold dark matter. The main energy loss mechanism for low velocity nuclearites is elastic or quasi-elastic collisions with the ambient atoms. The energy loss is large; therefore nuclearites should be easily detected in scintillators and CR39 NTDs~\cite{macro-nucl}. Nuclearites should have typical galactic velocities, $\beta\sim10^{-3}$, and for masses larger than 0.1 g could traverse the Earth. Most nuclearite searches were obtained as byproducts of CR MM searches; the flux limits are similar to those for MMs. \begin{figure} \centerline{\epsfxsize=4.1in\epsfbox{nucleariti.eps}} \caption{Nuclearite structure. Dimensions of the quark bag (radius $R_N$) and of the core+electron system; the black points are the electrons (the border of the core~+~electron cloud for small masses is indicated by the dashed lines).
For masses smaller than $10^{9}$ GeV, the electrons are outside the quark bag, and the core+electron system has a size of $\sim 10^5$ fm; for $ 10^9 < M_N < 10^{15}$ GeV the $e^{-}$ are partially inside the core; for $M_N>10^{15}$ GeV all electrons are inside the core. \label{fig:qpict}} \end{figure} The most relevant direct flux limits for nuclearites come from three large area experiments: the first two used CR39 NTDs; one experiment was performed at mountain altitude (Mt. Norikura at 2770 m a.s.l.)~\cite{nakamura}, the second at a depth of $10^4$~g~cm$^{-2}$ in the Ohya mine~\cite{ohya}; the third experiment, MACRO, at an average depth of 3700 hg~cm$^{-2}$, used liquid scintillators besides NTDs~\cite{gg02}. A fourth experiment (SLIM) is deployed at high altitude. Indirect searches with old mica samples could yield the lowest limits, but they are affected by several uncertainties. Some exotic cosmic ray events were interpreted as due to incident nuclearites, e.g. the ``Centauro'' events and the anomalous massive particles, but the interpretation is not unique~\cite{polacchi}. Supermassive nuclearites (M $\sim$ 1 ton) passing through Earth could induce epilinear earthquakes~\cite{nucleariti,terremoti}. Fig.\ \ref{fig:nuclearites} shows a compilation of limits for a flux of downgoing nuclearites compared with the dark matter (DM) limit, assuming a velocity at ground level of $\beta = 10^{-3}$, corresponding to nuclearites of galactic or extragalactic origin. The MACRO limit is extended above the DM bound to show the transition to an isotropic flux for $M_n>0.1$~g ($\sim 10^{23}$ GeV). Some possible positive indications are discussed in Ref.~\cite{polacchi}. \begin{figure}[ht] \begin{center} \includegraphics[width=0.7\textwidth]{nucleariti-trieste04.eps} \end{center} \vspace{-5mm} \caption{ 90\% CL flux upper limits versus mass for nuclearites with $\beta= 10^{-3}$ at ground level. These nuclearites could have galactic or extragalactic origin.
The limits are from Refs.~$^{26,35,36}$.} \label{fig:nuclearites} \end{figure} {\it Q-balls} should be aggregates of squarks $\tilde{q}$, sleptons $\tilde {l}$ and Higgs fields~\cite{qballs}. The scalar condensate inside a Q-ball core has a global baryon number Q (and possibly also a lepton number). Protons, neutrons and possibly electrons could be absorbed in the condensate. There could exist neutral and charged Q-balls. Supersymmetric Electrically Neutral Solitons (SENS) are generally massive and may catalyse proton decay. SENS may obtain a positive electric charge by absorbing a proton in their interactions with matter, yielding SECS (Supersymmetric Electrically Charged Solitons), which have a core electric charge, have generally lower masses, and for which the Coulomb barrier could prevent the capture of nuclei. SECS have only integer charges because they are color singlets. A SENS which enters the Earth's atmosphere could absorb a nitrogen nucleus, which would give it a positive charge of +7 (SECS with $z=7$). Other nuclear absorptions are prevented by Coulomb repulsion. If the Q-ball can absorb electrons at the same rate as protons, the positive charge of the absorbed nucleus may be neutralized by the charge of absorbed electrons. If, instead, the absorption of electrons is slow or impossible, the Q-ball carries a positive electric charge after the capture of the first nucleus in the atmosphere. Q-balls may be cold DM candidates. SECS with $\beta \simeq 10 ^{-3}$ and $M_Q < 10^{13}$ GeV could reach an underground detector from above, SENS also from below. SENS may be detected by their continuous emission of charged pions (energy loss $\sim$ 100 GeV g$^{-1}$cm$^2$); SECS may be detected by scintillators, NTDs and ionization detectors.\par Note that we did not consider here the possibility of strongly interacting (colored) MMs, nuclearites~\cite{wick} and Q-balls. \section{Conclusions.
Outlook} Direct and indirect accelerator searches for classical Dirac MMs have placed limits at the level $m_M > 850$ GeV, with cross-section upper limits as shown in Fig.\ \ref{fig:mmclass2}. Future improvements may come from experiments at the LHC~\cite{moedal}. \\ \indent Many searches were performed for GUT poles in the penetrating cosmic radiation. The 90\% CL flux limits are at $\sim 1.4 \times 10^{-16} $~cm$^{-2}$~s$^{-1}$~sr$^{-1}$ for $\beta \ge 4 \times 10^{-5}$. It may be difficult to do much better, since one would require refined detectors of considerably larger area. \par Present limits on Intermediate Mass Monopoles with high $\beta$ are relatively poor. Experiments at high altitudes and at neutrino telescopes should improve the situation. In particular, stringent limits may be obtained by large neutrino telescopes for IMMs with $\beta > 0.5$ coming from above. \par As a byproduct of GUT MM searches, some experiments obtained stringent limits on nuclearites and on Q-balls. Future experiments at neutrino telescopes and at high altitudes should perform searches for nuclearites and Q-balls of smaller masses. \section{Acknowledgements} We acknowledge the cooperation of many colleagues, in particular S. Cecchini, M. Cozzi, M. Giorgini, G. Mandrioli, S. Manzoor, V. Popa, M. Spurio, and others. We thank Ms. Giulia Grandi for typing the manuscript.
\section{Introduction} Nanostructured carbon has emerged over the last two decades as one of the most promising materials available to mankind. The discovery of fullerenes \cite{Kroto85,Kroto87}, followed by that of carbon nanotubes \cite{Iijima91} and graphene \cite{Geim,Novoselov}, sparked interest in low-dimensional materials. The fascinating electronic and mechanical properties of single-atom-thick surfaces and structures are believed to offer unprecedented opportunities for innovative applications, ranging from next-generation electronics to pharmacology, batteries, and solar cells \cite{Gupta,Mannix,Mas-Balleste}. New findings are emerging at an ever-increasing pace, cutting across Materials Science, Physics, and Chemistry, and extending from fundamental science to novel applications \cite{Dresselhaus11,Morris13}. Carbon nanotubes are long, hollow structures showing cylindrical symmetry \cite{Dresselhaus92}. Their walls consist of a single (or multiple) one-atom-thick layer of carbon atoms forming {\it $sp^2$ covalent} bonds \cite{Clayden12} arranged in a hexagonal pattern. This molecular structure is responsible for remarkable mechanical properties: Carbon nanotubes are presently among the strongest and stiffest known materials, with a nominal Young's modulus \cite{KDEYT, TEG} of 1 TPa and ideal strength greater than 100 GPa \cite{Arroyo05}. In addition, they show electrical and thermal conductivity, chemical sensitivity, transparency, light weight, and environmental friendliness \cite{Tuukanen}. Nanotubes can be visualized as the result of rolling up a patch of a regular hexagonal lattice. Depending on the different possible realizations of this rolling-up, different topologies may arise, giving rise to {\it zigzag, armchair}, and {\it chiral} nanotubes. These topologies are believed to have a specific impact on the mechanical and electronic properties of the nanotube, which can range from highly conducting to semiconducting \cite{Cao07,Charlier98}. 
In contrast to the ever-growing materials knowledge, the rigorous mathematical description of two-dimensional carbon systems is considerably less developed. Ab initio atomistic models are believed to accurately describe some features of the carbon nanotube geometry and mechanics \cite{Li07,Rochefort99,Yakobson96}. These methods are nevertheless computational in nature and cannot handle a very large number of atoms due to the rapid increase in computational complexity. On the other hand, a number of continuum mechanics approaches have been proposed where carbon nanotubes are modeled as rods \cite{Poncharal99}, shells \cite{Arroyo05,Bajaj13,Favata12,Ru01}, or solids \cite{Wang05}. These bring the advantage of possibly dealing with long structures, at the price, however, of a less accurate description of the detailed microscopic behavior. The unique mechanical behavior of nanotubes under {\it stretching} is a crucial feature of these structures. As such, it has attracted attention from the theoretical \cite{Bajaj13, Favata14, Ru01, Zhang08}, the computational \cite{Agrawal, Cao07, Han14, Jindal}, and the experimental side \cite{Demczyk, KDEYT,Warner11, YFAR}. Still, a reliable description of nanotubes under stretching requires correctly resolving the atomic scale and, simultaneously, rigorously dealing with the whole structure. We hence resort to the classical framework of Molecular Mechanics \cite{Allinger,Lewars,Rappe}, which identifies carbon nanotubes with point configurations $\{x_1, \dots, x_n\}\in \mathbb{R}^{3n}$ corresponding to their atomic positions. The atoms interact via a {\it configurational energy} $E=E(x_1, \dots, x_n)$ given in terms of classical potentials and taking into account both attractive-repulsive {\it two-body} interactions, minimized at a certain bond length, and {\it three-body} terms favoring specific angles between bonds \cite{Brenner90,Stillinger-Weber85,Tersoff}. 
The $sp^2$-type covalent bonding implies that each atom has exactly three first neighbors and that bond angles of $2\pi/3$ are energetically preferred \cite{Clayden12}. The Reader is referred to \cite{Davoli15,E-Li09, FPS, Mainini-Stefanelli12, stable} for a collection of results on local and global minimizers in this setting and to \cite{Smereka15,cronut} for additional results on carbon structures. The focus of this paper is to show the local minimality of periodic configurations, both in the unstretched case and under the effect of small stretching. More specifically, we prove that, by applying a small stretching to a zigzag nanotube, the energy $E$ is locally strictly minimized by a specific periodic configuration where all atoms see the same local configuration (Theorem \ref{th: main3}). Local minimality is here checked with respect to {\it all} small perturbations in $\mathbb{R}^{3n}$, namely without restricting {\it a priori} to periodic perturbations. On the contrary, periodicity is proved here to emerge as an effect of the global variational nature of the problem. The novelty of this result is threefold. First, given the mentioned periodicity of local minimizers, the actual configuration in $\mathbb{R}^{3n}$ can be determined by solving a simple minimization problem in $\mathbb{R}^2$, which consists in identifying two specific bond lengths between neighboring atoms. This is indeed the standpoint of a number of contributions, see \cite{Agrawal,Budyka,Favata15,Favata16,Jiang,Jindal,Kanamitsu,Kurti} among many others, where nevertheless periodicity is a priori {\it assumed}. In this regard, our result offers a justification for these lower-dimensional approaches. Our assumptions on $E$ are kept fairly general in order to possibly include the menagerie of different possible choices for energy terms which have been implemented in Computational Chemistry codes \cite{Brooks83,Clark89,Gunsteren87,Mayo90,Weiner81}. 
A by-product of our results is hence the cross-validation of these choices in view of their capability of describing carbon nanotube geometries. Secondly, we rigorously check that, also in the presence of small stretching, the geometrical model obtained via local minimization corresponds neither to the classical {\it Rolled-Up} model \cite{Dresselhaus92,Dresselhaus-et-al95, Jishi93}, where two out of three bond angles at each atom are $2\pi/3$, nor to the {\it Polyhedral} model \cite{Cox-Hill07,Cox-Hill08,Lee}, where all bond angles are equal. The optimal configuration lies between these two (Proposition \ref{th: main2}), a fact which remarkably corresponds to measurements on very thin carbon nanotubes \cite{Zhao}. Moreover, in accordance with the results in \cite{Jindal}, local minimizers are generically characterized by two different bond lengths. Finally, our result proves the validity of the so-called {\it Cauchy-Born rule} for carbon nanotubes: By imposing a small tension, the periodicity cell deforms correspondingly and global periodicity is preserved. This fact lies at the basis of a possible elastic theory for carbon nanotubes. As a matter of fact, such periodicity is invariably {\it assumed} in a number of different contributions, see \cite{Bajaj13,Favata14,Han14,Zhang08} among others, and then exploited in order to compute tensile strength as well as stretched geometries. Here again our results provide a theoretical justification of such approaches. Although the Cauchy-Born rule plays a pivotal role in Mechanics \cite{Ericksen08, Ericksen84,Zanzotto92}, rigorous results are few. Among these we mention \cite{Friesecke02,Conti06}, which assess its validity within two- and $d$-dimensional cubic mass-spring systems, respectively. 
More general interactions are considered in \cite{E07,E07b}, where the Cauchy-Born rule is investigated under a specific ellipticity condition applying to the triangular and hexagonal lattice, both in the static and the dynamic case. Our result is, to the best of our knowledge, the first one dealing with a three-dimensional structure which is neither a subset of a Bravais lattice nor of a multilattice. Note though the Saint Venant principle in \cite{Monneau14}, which corresponds to the validity of an approximate version of the Cauchy-Born rule, up to a small error. However, the setting of \cite{Monneau14} is quite different from the present one, where long-range purely two-body interactions are considered. This work is the culmination of a series on the geometry and mechanics of nanotubes \cite{MMPS,MMPS-new}. The theoretical outcomes of this paper have been computationally anticipated in \cite{MMPS}, where the stability of periodic configurations has been investigated with Monte Carlo techniques, both for zigzag and armchair topologies under moderate displacements. A first step toward a rigorous analytical result has been obtained in \cite{MMPS-new} for both zigzag and armchair topologies under no stretching. In \cite{MMPS-new}, stability is checked against a number of non-periodic perturbations fulfilling a specific structural constraint, which is related to the nonplanarity of the hexagonal cells induced by the local geometry of the nanotube. Here, we remove such constraint and consider all small perturbations, even in the presence of stretching. Indeed, removing the structural assumption and extending the result of \cite{MMPS-new} to the present fully general setting requires a considerably deeper analysis. In a nutshell, one has to reduce to a cell problem and solve it. The actual realization of this program poses, however, substantial technical challenges and relies on a combination of perturbative and convexity techniques. 
Whereas the proof in \cite{MMPS-new} was essentially based on the convexity of the energy given by the three bond angles at one atom, in the present context we have to reduce to a {\it cell} which includes eight atoms and is slightly nonplanar. The convexity of cell energies for various Bravais lattices has already been investigated in the literature \cite{Conti06, FriedrichSchmidt:2014.1, Friesecke02, Schmidt:2009}, particularly for problems related to the validation of the Cauchy-Born rule. In our setting, however, we need to deal with an almost planar structure embedded in three-dimensional space and therefore, to confirm convexity of the cell energy, a careful analysis in terms of the nonplanarity is necessary, see Section \ref{sec: convexity} and Theorem \ref{th: cell convexity3}. In this context, an additional difficulty lies in the fact that the reference configuration of the cell is not a stress-free state. The convexity is then crucially exploited in order to obtain a quantitative control of the \emph{energy defect} in terms of the \emph{symmetry defect} produced by symmetrizing a cell (Theorem \ref{th: Ered}). On the other hand, a second quantitative estimate provides a bound on the defect in the nonplanarity of the cell (called \emph{angle defect}) with respect to the symmetry defect of the cell (Lemma \ref{lemma: sum}). The detailed combination of these two estimates and a convexity and monotonicity argument (Proposition \ref{th: mainenergy}) proves that ground states necessarily have symmetric cells, from which our stability result follows (Theorem \ref{th: main3}). The validation of the Cauchy-Born rule essentially relies on the application of a slicing technique which has also been used in \cite{FriedrichSchmidt:2014.1} in a more general setting: One reduces the problem to a chain of cells along the diameter of the structure and shows that identical deformation of each cell is energetically favorable. 
In the present context, however, additional slicing arguments along the cross sections of the nanotube are necessary in order to identify correctly the nonplanarity of each hexagonal cell. The paper is organized as follows. In Section \ref{Fsection} we introduce the objects under study and the mathematical setting. Section \ref{sec: mainresults} collects our main results. In Section \ref{sec: main proof} we present the proof strategy, the essential auxiliary statements (Lemma \ref{lemma: sum} - Theorem \ref{th: Ered}), and the proof of Theorem \ref{th: main3}. The proofs of the various necessary ingredients are postponed to Sections \ref{sec: angles}-\ref{sec: cellenery}. \section{Carbon-nanotube geometry}\label{Fsection} The aim of this section is to present the objects under study, together with the relevant notation. Let us start by introducing the mathematical setting as well as some preliminary observations. As mentioned above, carbon nanotubes (nanotubes, in the following) are modeled by {\it configurations} of atoms, i.e. collections of points in $\mathbb{R}^3$ representing the atomic sites. Nanotubes are very long structures, measuring up to $10^7$ times their diameter. As such, we shall not be concerned with describing the fine nanotube geometry close to their ends. We thus restrict to periodic configurations, i.e. configurations that are invariant with respect to a translation of a certain period in the direction of the nanotube axis. Without loss of generality we consider only nanotubes with axis in the $e_1:=(1,0,0)$ direction. Therefore, a nanotube is identified with a configuration $$\mathcal{C}:=C_n+Le_1\mathbb{Z}$$ where $L>0$ is the {\it period} of $\mathcal{C}$ and $C_n:=\{x_1,\dots,x_n\}$ is a collection of $n$ points $x_i\in\mathbb{R}^3$ such that $x_i\cdot e_1\in [0,L)$. 
In the following, we will refer to $C_n$ as the {\it $n$-cell} of $\mathcal{C}$, and since $\mathcal{C}$ is characterized by its $n$-cell $C_n$ and its period $L$, we will systematically identify the periodic configuration $\mathcal{C}$ with the couple $(C_n, L)$, i.e. $\mathcal{C}=(C_n, L)$. \subsection{Configurational energy} We now introduce {\it the configurational energy $E$ of a nanotube $\mathcal{C}$}, and we detail the hypotheses which we assume on $E$ throughout the paper. We aim here at minimal assumptions in order to possibly include in the analysis most of the many different choices for energy terms which have been successfully implemented in Computational Chemistry codes \cite{Brooks83,Clark89,Gunsteren87,Mayo90,Weiner81}. The energy $E$ is given by the sum of two contributions, accounting for {\it two-body and three-body interactions among particles}, modulated by the potentials $v_2$ and $v_3$, respectively, see \eqref{E}. We assume that the {\it two-body potential} $v_2:\mathbb{R}^+\to[-1,\infty)$ is smooth and attains its minimum value only at $1$ with $v_2(1) = -1$ and $v''_2(1)>0$. Moreover, we ask $v_2$ to be {\it short-ranged}, that is, to vanish shortly after $1$. For the sake of definiteness, let us define $v_2(r)=0$ for $r\ge 1.1$. These assumptions reflect the nature of covalent atomic bonding in carbon, favoring a specific interatomic distance, here normalized to $1$. We say that two particles $x,y\in\mathcal{C}$ are {\it bonded} if $|x-y|<1.1$, and we refer to the graph formed by all the bonds as the {\it bond graph} of $\mathcal{C}$. Taking into account periodicity, this amounts to considering two particles $x_i$ and $x_j$ of the $n$-cell $C_n$ of $\mathcal{C}$ as bonded if $|x_i-x_j|_L<1.1$, where $|\cdot|_L$ is the {\it distance modulo $L$} defined by $$ |x_i-x_j|_L:=\min_{t\in\{-1,0,+1\}}|x_i-x_j+Lte_1| $$ for every $x_i,x_j\in C_n$. 
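The distance modulo $L$ is straightforward to compute. As a concrete illustration, here is a minimal Python sketch (the function name is ours, chosen for illustration):

```python
import math

def dist_mod_L(xi, xj, L):
    """Distance modulo L along the e1-axis: the minimum over the
    shifts t in {-1, 0, +1} of |x_i - x_j + L t e_1|."""
    return min(
        math.dist((xi[0] + L * t, xi[1], xi[2]), xj)
        for t in (-1, 0, 1)
    )
```

In this way two atoms near opposite ends of the cell are correctly recognized as close: `dist_mod_L((0.05, 0, 0), (2.95, 0, 0), 3.0)` evaluates to $0.1$ rather than $2.9$.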
Let us denote by $\mathcal{N}$ the set of all couples of indices corresponding to bonded particles, i.e. $$ \mathcal{N}:=\{(i,j)\,:\,\, \textrm{$x_i$, $x_j\in C_n$, $i\neq j$, and $|x_i-x_j|_L<1.1$}\}. $$ The {\it three-body potential} $v_3: [0,2\pi]\to[0,\infty)$ is assumed to be smooth and symmetric around $\pi$, namely $v_3(\alpha)=v_3(2\pi{-}\alpha)$. Moreover, we suppose that the minimum value $0$ is attained only at $2\pi/3$ and $4\pi/3$ with $v_3''(2\pi/3)>0$. Let $\mathcal{T}$ be the index set of triples corresponding to first-neighboring particles, i.e. $$ \mathcal{T}:=\{(i,j,k)\,:\,\, \textrm{$i\neq k$, $(i,j)\in\mathcal{N}$ and $(j,k)\in\mathcal{N}$}\}. $$ For all triples $(i,j,k)\in\mathcal{T}$ we denote by $\alpha_{ijk} \in [0,\pi]$ the {\it bond angle} formed by the vectors $x_i-x_j$ and $x_k-x_j$. The assumptions on $v_3$ reflect the basic geometry of carbon bonding in a nanotube: Each atom presents three $sp^2$-hybridized orbitals, which tend to form $2\pi/3$ angles. The configurational energy $E$ of a nanotube $\mathcal{C}=(C_n, L)$ is now defined by \begin{equation}\label{E} E(\mathcal{C})=E(C_n,L):=\frac12\sum_{(i,j)\in \mathcal{N}}v_2(|x_i{-}x_j|_L) + \frac12\sum_{(i,j,k)\in\mathcal{T}}v_3(\alpha_{ijk}). \end{equation} Let us mention that the smoothness assumptions on $v_2$ and $v_3$ are for the sake of simplicity rather than generality and could be weakened. Observe that our assumptions are generally satisfied by classical interaction potentials for carbon (see \cite{Stillinger-Weber85,Tersoff}). Since the energy $E$ is clearly rotationally and translationally invariant, in the following we will tacitly assume that all statements are to be considered up to isometries. We say that a nanotube $\mathcal{C}=(C_n,L)$ is {\it stable} if $(C_n,L)$ is a strict local minimizer of the interaction energy $E$. 
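For concreteness, the energy \eqref{E} can be evaluated numerically from an $n$-cell and a period. The following Python sketch is our own illustration (any admissible potentials $v_2$, $v_3$ can be passed in; the analysis above is potential-agnostic): it enumerates the bond set $\mathcal{N}$ and the triple set $\mathcal{T}$, and sums the two contributions with their factors $\tfrac12$.

```python
import math

def min_image(xi, xj, L):
    """Bond vector x_i - x_j, with the shift t in {-1, 0, +1} along e1
    chosen so that the vector realizes the distance modulo L."""
    cands = [(xi[0] - xj[0] + L * t, xi[1] - xj[1], xi[2] - xj[2])
             for t in (-1, 0, 1)]
    return min(cands, key=lambda v: math.hypot(*v))

def energy(cell, L, v2, v3, cutoff=1.1):
    """E(C_n, L) of eq. (E): half the sum of v2 over ordered bonded pairs
    plus half the sum of v3 over the bond angles alpha_ijk at each atom j."""
    n = len(cell)
    nbrs = {j: [] for j in range(n)}
    E = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                r = math.hypot(*min_image(cell[i], cell[j], L))
                if r < cutoff:                 # (i, j) belongs to N
                    nbrs[j].append(i)
                    E += 0.5 * v2(r)
    for j in range(n):                         # triples (i, j, k) in T, i != k
        for i in nbrs[j]:
            for k in nbrs[j]:
                if i != k:
                    a = min_image(cell[i], cell[j], L)
                    b = min_image(cell[k], cell[j], L)
                    c = sum(p * q for p, q in zip(a, b))
                    c /= math.hypot(*a) * math.hypot(*b)
                    E += 0.5 * v3(math.acos(max(-1.0, min(1.0, c))))
    return E
```

For instance, a dimer $\{(0,0,0),(1,0,0)\}$ with $L=3$ contributes exactly $v_2(1)=-1$ and generates no triple.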
\subsection{Geometry of zigzag nanotubes} We now introduce a specific two-parameter family of nanotubes which will play a crucial role in the following. This is the family of so-called {\it zigzag nanotubes} having the \emph{minimal period} $\mu>0$. The term {\it zigzag} refers to a specific topology of nanotubes, see Figure \ref{Figure1}, which is here chosen for the sake of definiteness. Let us however note that the other classical choice, the so-called {\it armchair} topology, could be considered as well. The Reader is referred to \cite{MMPS-new} for some results on unstretched armchair geometries. \begin{figure}[h] \pgfdeclareimage[width=0.9\textwidth]{tube}{tube.pdf} \pgfuseimage{tube} \caption{Zigzag nanotube.} \label{Figure1} \end{figure} We let $\ell \in \mathbb{N}$, $\ell >3$, and define the family $\mathscr{F}(\mu)$ as the collection of all configurations that, up to isometries, coincide with \begin{align} &\Bigg\{\left(k (\lambda_1+\sigma) + j (2\sigma+2\lambda_1) + l(2\sigma + \lambda_1), \rho \cos \left(\frac{\pi(2i+k)}{\ell}\right) , \rho\sin \left(\frac{\pi(2i+k)}{\ell}\right) \right) \ \Big| \ \nonumber \\ &\hspace{80mm} i=1,\dots,\ell, \ j \in \mathbb{Z}, \ k,l \in \lbrace 0,1 \rbrace \Bigg\}\label{zigzagfamilydefinition} \end{align} for some choice of $$\lambda_1 \in (0,\mu/2),\ \ \ \lambda_2 \in (0,\mu/2),\ \ \ \sigma \in (0,\mu/2), \ \ \ \text{and} \ \ \ \rho\in \left(0,\,\frac{\mu}{4\sin(\pi/(2\ell))}\right)$$ such that \begin{align}\label{eq: basic constraints} 2\sigma + 2\lambda_1 = \mu, \ \ \ \ \ \ \ \ \ \sigma^2 +4\rho^2\sin^2\left(\frac{\pi}{2\ell}\right) =\lambda_2^2. \end{align} Of course, the configurations in $\mathscr{F}(\mu)$ are periodic with minimal period $\mu$. The parameter $\rho$ indicates the radius of the tube and $\lambda_1$, $\lambda_2$ are the two possibly different lengths of the covalent bonds in each hexagon of the tube, where the bonds of length $\lambda_1$ are oriented in the $e_1$ direction. 
These configurations are {\it objective} \cite{James}: They are obtained as orbits of two points under the action of a prescribed isometry group. The latter group is generated by a translation combined with a rotation along the axis and by a simple translation. Notice that our definition slightly differs from the one adopted in \cite{MMPS,MMPS-new} in the sense that for fixed $i$, $k$ the points identified by the quadruples $(i,j,k,l)$ for $j \in \mathbb{Z}$, $l \in \lbrace 0,1 \rbrace$ lie on a line parallel to $e_1$ (see Figure \ref{quadruples}). For fixed $\mu >0$, $\mathscr{F}(\mu)$ is a two-parameter smooth family of configurations since each configuration in $\mathscr{F}(\mu)$ is uniquely determined by $\lambda_1$ and $\lambda_2$ by taking relation \eqref{eq: basic constraints} into account. Later we will consider different values of the minimal period $\mu$ in order to model nanotubes under stretching. We state the following basic geometric properties of configurations in $\mathscr{F}(\mu)$. The analogous properties in the case $\lambda_1 = \lambda_2 = 1$ have already been discussed in \cite{MMPS}. \begin{proposition}[Geometric structure of zigzag nanotubes]\label{basiczigzag} Let $\mathcal{F}\in\mathscr{F}(\mu)$. Then \begin{enumerate} \item[\rm (a)] Atoms in $\mathcal{F}$ lie on the surface of a cylinder with radius $\rho$ and axis $e_1$. \item[\rm (b)] Atoms in $\mathcal{F}$ are arranged in planar {sections}, perpendicular to $e_1$, obtained by fixing $j$, $k$, and $l$ in \eqref{zigzagfamilydefinition}. Each section contains exactly $\ell$ atoms, arranged at the vertices of a regular $\ell$-gon. For each section, the two closest sections are at distance $\sigma$ and $\lambda_1$, respectively. \item[\rm (c)] The configuration $\mathcal{F}$ is invariant under a rotation of $2\pi/\ell$ around $e_1$, under the translation $\mu e_1$, and under a rototranslation of angle $\pi/\ell$ along the vector $(\lambda_1+\sigma)e_1$. 
\item [\rm (d)] Let $i\in\{1,\ldots,\ell\} $, $j\in\mathbb{Z}$ and $k,l\in\{0,1\}$: the quadruple $(i,j,k,l)$ identifies points of $\mathcal{F}$, denoted by $x_{i,k}^{j,l}$, where $(0,j,k,l)$ identifies with $(\ell,j,k,l)$. Given $x_{i,0}^{j,0}\in \mathcal{F}$, the two points $x_{i,1}^{j-1,1}$, $x_{i-1, 1}^{j-1,1}$ have distance $\lambda_2$ and $x_{i, 0}^{j-1,1}$ has distance $\lambda_1$ from $x_{i,0}^{j,0}$. For $x_{i,0} ^{j,1}$, the distance of $x_{i,1}^{j,0}$ and $x_{i-1,1}^{j,0}$ is $\lambda_2$ and the distance from $x_{i,0}^{j+1,0}$ is $\lambda_1$. See {\rm Figure \ref{quadruples}} for the analogous notation of $x_{i,1}^{j,0}$ and $x_{i,1}^{j,1}$. \end{enumerate} \end{proposition} \begin{figure}[htp] \pgfdeclareimage[width=0.7\textwidth]{quadruples}{quadruples.pdf} \pgfuseimage{quadruples} \caption{ Configuration points are identified by quadruples $(i,j,k,l)$ for $i=1,\dots,\ell$, $j \in \mathbb{Z}$, and $k,l \in \lbrace 0,1 \rbrace$.} \label{quadruples} \end{figure} Notice that for fixed $\lambda_1$ and $\lambda_2$ the other parameters range between two degenerate cases: $\rho=0$ (the cylinder is reduced to its axis) and $\sigma=0$ (sections collide). We shall however impose further restrictions, since each atom should have three bonds. In particular, the only three bonds per atom should be the ones identified in point (d) of Proposition \ref{basiczigzag}. Recalling that two particles are bonded if their distance is less than the reference value $1.1$, and since the distance between two consecutive sections is either $\lambda_1$ or $\sigma$, we require $\lambda_1 > 0.9$ and $\sigma>0.2$. Additionally, we require $\lambda_1,\lambda_2 < 1.1$, which also implies $\sigma < 1.1$ by \eqref{eq: basic constraints}. On the other hand, on each section, the edge of the regular $\ell$-gon should be greater than $1.1$. This length is given by $ 2\rho\sin\gamma_\ell, $ where $\gamma_\ell$ is the internal angle of a regular $2\ell$-gon, i.e. 
\begin{equation}\label{gamma} \gamma_\ell:=\pi\left(1-\frac{1}{\ell}\right). \end{equation} Therefore, we need to impose $\rho>\rho^-:=0.55/\sin\gamma_\ell$. With these restrictions we have the following \begin{proposition}[Parametrization of the family]\label{betaproperties} Let $ \mathcal{F}\in\mathscr{F}(\mu)$ with $\rho > \rho^-$, $\sigma >0.2$ and $\lambda_1,\lambda_2 \in (0.9, 1.1)$. Then, all atoms in $\mathcal{F}$ have exactly three (first-nearest) neighbors, two at distance $\lambda_2$ and one at distance $\lambda_1$, where the bond corresponding to the latter neighbor is oriented in the $e_1$ direction. Among the corresponding three bond angles smaller than $\pi$, two have amplitude $\alpha$ (the ones involving atoms in three different sections), and the third has amplitude $\beta$, where $\alpha\in(\pi/2,\pi)$ is obtained from \begin{equation}\label{alphars} \sin\alpha=\sqrt{1-(\sigma/\lambda_2)^2}=2 (\rho/\lambda_2)\sin\left(\frac{\pi}{2\ell}\right) \end{equation} and $\beta\in(\pi/2,\pi)$ is given by \begin{equation}\label{betaz} \beta=\beta(\alpha,\gamma_\ell):=2\arcsin\left(\sin\alpha\sin\frac{\gamma_\ell}{2}\right). \end{equation} \end{proposition} The proof for the case $\lambda_1 = \lambda_2=1$ was detailed in \cite{MMPS}. The extension to our setting is a straightforward adaptation and is therefore omitted. As already mentioned, the collection $\mathscr{F}(\mu)$ is a two-parameter family where all configurations are uniquely determined by the specification of $\lambda_1$ and $\lambda_2$. The corresponding element will be denoted by $\mathcal{F}_{\lambda_1,\lambda_2,\mu}$. Restricting the minimal period $\mu$ to the interval $(2.6,3.1)$, we observe by \eqref{eq: basic constraints} and an elementary computation that the constraints $\lambda_1,\lambda_2 \in (0.9,1.1)$ and $\ell >3$ automatically imply $0.2 < \sigma <0.65$ and $\rho>\rho^-$. Therefore, the assumptions of Proposition \ref{betaproperties} hold. 
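The family \eqref{zigzagfamilydefinition} and the relations of Propositions \ref{basiczigzag}-\ref{betaproperties} are easy to verify numerically. The sketch below (our illustration; the function name is ours) generates the $j=0$ portion of a zigzag nanotube from $(\lambda_1,\lambda_2,\mu,\ell)$, recovering $\sigma$ and $\rho$ from the constraints \eqref{eq: basic constraints}:

```python
import math

def zigzag_cell(lam1, lam2, mu, ell):
    """Points of the zigzag family for j = 0 (4*ell atoms), indexed by
    (i, k, l); sigma and rho are recovered from the two constraints
    2*sigma + 2*lam1 = mu  and  sigma^2 + 4 rho^2 sin^2(pi/(2 ell)) = lam2^2."""
    sigma = mu / 2.0 - lam1
    rho = math.sqrt(lam2 ** 2 - sigma ** 2) / (2.0 * math.sin(math.pi / (2 * ell)))
    pts = {}
    for i in range(1, ell + 1):
        for k in (0, 1):
            for l in (0, 1):
                x1 = k * (lam1 + sigma) + l * (2 * sigma + lam1)
                phi = math.pi * (2 * i + k) / ell
                pts[(i, k, l)] = (x1, rho * math.cos(phi), rho * math.sin(phi))
    return pts, sigma, rho
```

For instance, with $\lambda_1=\lambda_2=1$, $\mu=2.9$, $\ell=10$ one can check that the bond between $x_{i,0}^{0,1}$ and $x_{i,1}^{0,0}$ has length $\lambda_2$, and that the angle between the two $\lambda_2$-bonds at an atom coincides with $\beta$ from \eqref{betaz}.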
\section{Main results}\label{sec: mainresults} In this section we collect our main results. The corresponding proofs will then be presented in Sections \ref{sec: main proof}-\ref{sec: cellenery}. For fixed integer $\ell>3$, let us consider a configuration $\mathcal{F}$ in the family $\mathscr{F}(\mu)$. Of course it is periodic, and then it identifies with the couple $(F_n, L)$, where $F_n$ is the corresponding $n$-cell ($n=4m\ell$ for some $m\in \mathbb{N}$), and \begin{equation}\label{zigzagperiod} L = L^\mu_m:= m\mu \end{equation} is the period parameter, corresponding to the cell length (notice that for $m=1$ we get the minimal period of the configuration). In view of \eqref{E} and the properties stated in Proposition \ref{betaproperties}, the energy can be written as \begin{align}\label{basicenergy} E(\mathcal{F}) = E(F_n, L^\mu_m) = \frac{n}{2} \big( v_2(\lambda_1) + 2v_2(\lambda_2) \big) + n\big(2v_3(\alpha) + v_3(\beta(\alpha,\gamma_\ell))\big). \end{align} \subsection{Unstretched nanotubes} A first natural problem to be considered is the energy minimization restricted to the families $\mathscr{F}(\mu)$, with the values of $\mu$ in the reference interval $\mu\in (2.6, 3.1)$. Let us denote by $\mathcal{F}_{\lambda_1,\lambda_2,\mu}$ an element of $\mathscr{F}(\mu)$ with bond lengths $\lambda_1,\lambda_2$. If we minimize among nanotubes $\mathcal{F}_{\lambda_1,\lambda_2,\mu}$ with respect to $\mu\in (2.6,3.1)$ and $\lambda_1,\lambda_2$ in a neighborhood of $1$, we reduce to the case $\lambda_1=\lambda_2=1$. Indeed, we can replace $\lambda_1,\lambda_2$ by $1$, leave $\alpha$ unchanged, and choose $\mu$ according to \eqref{eq: basic constraints} and \eqref{alphars} such that the energy \eqref{basicenergy} decreases. We notice that $\lbrace \mathcal{F}_{1,1,\mu}| \ \mu \in (2.6,3.1) \rbrace $ is a one-parameter family. 
It follows from Proposition \ref{betaproperties} and \eqref{eq: basic constraints} that this family can also be parametrized in terms of the bond angle $\alpha$ introduced in Proposition \ref{betaproperties} using the relation $\mu = 2 (1-\cos\alpha)$. We indicate these configurations by $\mathcal{G}_\alpha$. As already discussed in \cite{MMPS}, there are two specific angles $\alpha^{\rm ch}_\ell < \alpha^{\rm ru}$ corresponding to the \emph{Polyhedral} \cite{Cox-Hill07,Cox-Hill08} and \emph{Rolled-up} \cite{Dresselhaus92,Dresselhaus-et-al95} configuration, respectively, with $\alpha^{\rm ru} = 2\pi/3$ and $\alpha^{\rm ch}_\ell$ being the unique solution of the equation $\beta(\alpha^{\rm ch}_\ell, \gamma_\ell) = \alpha^{\rm ch}_\ell$ in $(\arccos(-0.4), \arccos(-0.6))$. The one-variable minimization problem for the map $\alpha\mapsto E(\mathcal{G}_\alpha)$ has been investigated in \cite[Theorem 4.3]{MMPS}: \begin{proposition}[Existence and uniqueness of minimizer: Unstretched case]\label{eq: old main result} There exist an open interval $A$ and $\ell_0 \in \mathbb{N}$ only depending on $v_3$ such that the following holds for all $\ell \ge \ell_0$: There is a unique angle $\alpha^{\rm us}_\ell \in A$ such that $\mathcal{G}_{\alpha^{\rm us}_\ell}$ minimizes the energy $E$ in the class $\lbrace \mathcal{G}_\alpha| \ \alpha \in A \rbrace$. Moreover, one has $\alpha^{\rm us}_\ell \in (\alpha^{\rm ch}_\ell,\alpha^{\rm ru}) \subset A$. \end{proposition} Let us report the idea of the proof. Exploiting the monotonicity properties of $v_3$ and $\beta$ (the latter being decreasing as a function of $\alpha$), one derives that the minimum is attained for $\alpha$ in a small left neighborhood $I$ of $2\pi/3$. Using in addition the convexity of $v_3$ and the concavity of $\beta$, it follows that $\alpha \mapsto E(\mathcal{F}) = -3n/2 + n\big(2v_3(\alpha) + v_3(\beta(\alpha,\gamma_\ell))\big)$ is strictly convex in $I$, which implies the assertion. 
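Proposition \ref{eq: old main result} is easy to explore numerically. The sketch below uses the toy angular potential $v_3(\theta)=(\cos\theta+1/2)^2$ — our choice for illustration only; it is smooth, symmetric about $\pi$, and minimized exactly at $2\pi/3$ and $4\pi/3$, as required — and locates both $\alpha^{\rm ch}_\ell$ (by bisection on $\beta(\alpha)=\alpha$) and the minimizer of the angular part of the energy:

```python
import math

ell = 10
gamma = math.pi * (1 - 1 / ell)                  # internal angle, eq. (gamma)

def v3(theta):                                   # toy angle potential (our choice)
    return (math.cos(theta) + 0.5) ** 2          # minima at 2*pi/3 and 4*pi/3

def beta(alpha):                                 # eq. (betaz)
    return 2 * math.asin(math.sin(alpha) * math.sin(gamma / 2))

def f(alpha):                                    # angular energy per atom
    return 2 * v3(alpha) + v3(beta(alpha))

# alpha_ch: fixed point beta(alpha) = alpha, found by bisection
lo, hi = math.acos(-0.4), math.acos(-0.6)
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if beta(mid) > mid:                          # beta(alpha) - alpha is decreasing
        lo = mid
    else:
        hi = mid
alpha_ch, alpha_ru = lo, 2 * math.pi / 3

# minimize f on [alpha_ch, alpha_ru] by a fine grid search
N = 20000
grid = [alpha_ch + (alpha_ru - alpha_ch) * t / N for t in range(N + 1)]
alpha_us = min(grid, key=f)
```

For this toy potential the grid minimizer indeed falls strictly between $\alpha^{\rm ch}_\ell$ and $\alpha^{\rm ru}=2\pi/3$, in agreement with the proposition.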
The result in particular shows that neither the Polyhedral nor the Rolled-up configuration is a local minimizer of the energy $E$. The corresponding minimal period of the nanotube is given by \begin{align}\label{eq:muellneu} \mu^{\rm us}_\ell := 2 - 2\cos\alpha^{\rm us}_\ell, \end{align} cf. \eqref{eq: basic constraints} and \eqref{alphars}, and we notice $\mathcal{G}_{\alpha^{\rm us}_\ell} = \mathcal{F}_{1,1,\mu^{\rm us}_\ell}$. Nanotubes with $\mu=\mu_\ell^{\rm us}$ will be referred to as {\it unstretched} nanotubes. The aim of \cite{MMPS,MMPS-new} was to prove that $\mathcal{G}_{\alpha^{\rm us}_\ell}$ is a local minimizer. This has been illustrated numerically in \cite{MMPS} and checked analytically in \cite{MMPS-new} for a restricted class of perturbations. Our stability result, Theorem \ref{th: main3} below, delivers an analytical proof of stability with respect to {\it all} small perturbations. As such, it generalizes and improves known results, even in the unstretched case. \subsection{Nanotubes under stretching} Let us now move on to the case of {\it stretched} nanotubes. This corresponds to choosing $\mu \neq \mu^{\rm us}_\ell$. Indeed, we impose a tensile or compressive stress on the nanotube by simply modifying its minimal period. Given the role of periodicity in the definition of the energy $E$, see \eqref{E}, this has the net effect of stretching/compressing the structure. Note that this action on the structure is very general. In particular, it includes, but is not limited to, imposed Dirichlet boundary conditions, where only the first coordinate of the boundary atoms is prescribed. For fixed $\mu \in (2.6,3.1)$ we consider the minimization problem \begin{align}\label{min2} E_{\rm min}(\mu) = \min\big\{ E(\mathcal{F}_{\lambda_1,\lambda_2,\mu})| \ \mathcal{F}_{\lambda_1,\lambda_2,\mu} \in \mathscr{F}(\mu), \ \lambda_1,\lambda_2 \in (0.9,1.1) \big\}. \end{align} We obtain the following existence result. 
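The reduced problem \eqref{min2} is two-dimensional and can be probed numerically via the per-atom form of \eqref{basicenergy}. The sketch below uses toy potentials (our choice, for illustration only): $v_2(r)=(r-1)^2-1$, which has the correct well at $1$ although it is not short-ranged, and $v_3(\theta)=(\cos\theta+1/2)^2$; the period $\mu=3.0$ is a toy value representing a slight stretch.

```python
import math

ell, mu = 10, 3.0                               # toy choices for illustration
gamma = math.pi * (1 - 1 / ell)
v2 = lambda r: (r - 1.0) ** 2 - 1.0             # toy pair potential, min -1 at 1
v3 = lambda t: (math.cos(t) + 0.5) ** 2         # toy angle potential

def e_atom(lam1, lam2):
    """Energy per atom from eq. (basicenergy):
    (v2(lam1) + 2 v2(lam2))/2 + 2 v3(alpha) + v3(beta)."""
    sigma = mu / 2.0 - lam1                     # first constraint
    alpha = math.acos(-sigma / lam2)            # eq. (alphars), alpha in (pi/2, pi)
    beta = 2 * math.asin(math.sin(alpha) * math.sin(gamma / 2))   # eq. (betaz)
    return 0.5 * (v2(lam1) + 2 * v2(lam2)) + 2 * v3(alpha) + v3(beta)

# grid search over admissible bond lengths near 1
ls = [0.95 + 0.001 * s for s in range(101)]
e_min, l1_star, l2_star = min((e_atom(l1, l2), l1, l2) for l1 in ls for l2 in ls)
```

For this toy model the grid minimizer has both bond lengths above $1$, consistent with the elongation of bonds under stretching discussed below.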
\begin{theorem}[Existence and uniqueness of minimizer: General case]\label{th: main1} There exist $\ell_0 \in \mathbb{N}$ and, for each $\ell \ge \ell_0$, an open interval $M^\ell$ only depending on $v_2$, $v_3$, and $\ell$, with $\mu^{\rm us}_\ell \in M^\ell$, such that for all $\mu \in M^\ell$ there is a unique pair of bond lengths $(\lambda^\mu_1,\lambda^\mu_2)$ such that $\mathcal{F}_{\lambda^\mu_1,\lambda^\mu_2,\mu}$ is a solution of the problem \eqref{min2}. \end{theorem} In the following the minimizer is denoted by $\mathcal{F}_\mu^*$. Note that we have $\mathcal{F}_{\mu^{\rm us}_\ell}^* = \mathcal{G}_{\alpha^{\rm us}_\ell}$ by Proposition \ref{eq: old main result}. Our aim is to investigate the local stability of $\mathcal{F}_\mu^*$. To this end, we consider \emph{general} small perturbations $\tilde{\mathcal{F}}$ of $\mathcal{F}_\mu^*$ with the same bond graph, i.e.\ each atom keeps three and only three bonds, and we can identify the three neighboring atoms of the perturbed configuration with the ones of the configuration $\mathcal{F}_\mu^*$. Denote by $F^\mu_n = \lbrace x^\mu_1,\ldots, x^\mu_n \rbrace$ the $n$-cell of $\mathcal{F}_\mu^*$, so that $\mathcal{F}_\mu^* = (F_n^\mu, L^\mu_m)$ with $L^\mu_m$ as defined in \eqref{zigzagperiod} for $m \in \mathbb{N}$ with $n = 4m\ell$. We define \emph{small perturbations} $\mathscr{P}_\eta(\mu)$ of $\mathcal{F}_\mu^*$ by \begin{align}\label{eq: bc} \begin{split} \mathscr{P}_\eta(\mu) = \lbrace \tilde{\mathcal{F}} = (F_n,L^\mu_m)| \ F_n := \lbrace x_1,\ldots,x_n \rbrace \ \text{ with } |x_i - x_i^\mu| \le \eta \rbrace. \end{split} \end{align} The parameter $\eta>0$ will always be chosen sufficiently small such that the topology of the bond graph remains invariant; in general, $\eta$ will also depend on $\ell$. Moreover, we recall $E(\tilde{\mathcal{F}}) = E(F_n, L^\mu_m)$. We now state our main result, concerning local stability under small stretching.
\begin{theorem}[Local stability of minimizers]\label{th: main3} There exist $\ell_0 \in \mathbb{N}$ and for each $\ell \ge \ell_0$ some $\mu^{\rm crit}_\ell > \mu^{\rm us}_\ell$ and $\eta_\ell >0$ only depending on $v_2$, $v_3$, and $\ell$ such that for all $\mu \in [\mu_\ell^{\rm us},\mu_\ell^{\rm crit}]$ we have $$E(\tilde{\mathcal{F}})>E(\mathcal{F}_\mu^*) $$ for any nontrivial perturbation $\tilde{\mathcal{F}} \in \mathscr{P}_{\eta_\ell}(\mu)$ of the configuration $\mathcal{F}_\mu^*$. \end{theorem} The theorem asserts that, under prescribed and small stretchings, local minimizers are periodic. In particular, they belong to the family $\mathscr{F}(\mu)$. This amounts to a validation of the Cauchy-Born rule in this specific setting. Notably, the result justifies the reduction of the $3n$-dimensional minimization problem $\min \{E(\mathcal{F}) | \ \mathcal{F} \in \mathscr{P}_{\eta_\ell}(\mu)\}$ to the two-dimensional problem \eqref{min2}. In the following statement we collect the main properties of the local minimizer. \begin{proposition}[Properties of minimizer]\label{th: main2} There exist $\ell_0 \in \mathbb{N}$ and for each $\ell \ge \ell_0$ an open interval $M^\ell$ only depending on $v_2$, $v_3$, and $\ell$, with $\mu^{\rm us}_\ell \in M^\ell$, such that: \begin{itemize} \item[1.] The mapping $\mu \mapsto E(\mathcal{F}_\mu^*) = E_{\rm min}(\mu)$ is smooth, strictly convex on $M^\ell$, and attains its minimum at $\mu_\ell^{\rm us}$. In particular, $\frac{d^2}{d\mu^2} E_{\rm min}(\mu_\ell^{\rm us}) \ge cn$ for $c>0$ only depending on $v_2$, $v_3$. \item[2.] The lengths $\lambda^\mu_1,\lambda^\mu_2$ increase continuously for $\mu \in M^\ell$. In particular, we have $\lambda^\mu_1,\lambda^\mu_2>1$ for $\mu > \mu_\ell^{\rm us}$ and $\lambda^\mu_1,\lambda^\mu_2<1$ for $\mu < \mu_\ell^{\rm us}$. \item[3.]
The angle $\alpha^\mu$ corresponding to $\lambda^\mu_1,\lambda^\mu_2$ given by the relations \eqref{eq: basic constraints} and \eqref{alphars} satisfies $\alpha^\mu \in (\alpha^{\rm ch}_\ell,\alpha^{\rm ru})$ for all $\mu \in M^\ell$. \item[4.] Whenever $v_2''(1) \neq 6v_3''(2\pi/3)$, the radius $\rho^\mu$ corresponding to $\lambda^\mu_1,\lambda^\mu_2$ given by relation \eqref{eq: basic constraints} is continuously increasing for $\mu \in M^\ell$ if $v_2''(1) < 6v_3''(2\pi/3)$ and continuously decreasing if $v_2''(1) > 6v_3''(2\pi/3)$. \end{itemize} \end{proposition} Properties 1 and 2 imply that the nanotubes show elastic response for small extension and compression. Property 3 reconfirms that neither the Polyhedral nor the Rolled-up configuration is a local minimizer of the energy, for all $\mu$ near $\mu^{\rm us}_\ell$. Finally, Property 4 implies that under stretching or compressing the radius of the nanotube {\it generically} changes. In particular, if $v_2''(1) > 6v_3''(2\pi/3)$, the radius of the nanotube decreases, as changing the angles is energetically more convenient. Notice that Theorem \ref{th: main3} provides a stability result only for the case of expansion $\mu \ge \mu^{\rm us}_\ell$ and for values $\mu$ near $\mu^{\rm us}_\ell$. The situation for compression is more subtle from an analytical point of view and our proof techniques do not apply in this case. However, we expect stability of nanotubes also for small compression and refer to \cite{MMPS} for some numerical results in this direction. Let us complete the picture in the tension regime by \BBB briefly discussing the fact that for larger stretching cleavage \EEE along a cross-section is energetically favored. More precisely, we have the following result.
\begin{theorem}[Fracture]\label{fracture} Let \RRR $\mathcal{H}_\mu$ \EEE be the configuration \begin{align*} x_{i,k}^{j,l} = \begin{cases} \bar{x}_{i,k}^{j,l} & \RRR j \EEE \in [0,m/2) + m\mathbb{Z}, \\ \bar{x}_{i,k}^{j,l} + m(\mu - \mu_\ell^{\rm us}) e_1 & \text{else} \end{cases} \end{align*} for $i=1,\ldots,\ell$ and $k,l \in \lbrace 0,1 \rbrace$, where $\bar{x}_{i,k}^{j,l}$ denote the atomic positions of the configuration $\mathcal{F}_{1,1,\mu^{\rm us}_\ell}$ \RRR (see Proposition \ref{basiczigzag}(d)). \EEE Then there \BBB are an open interval $M^\ell$ containing $\mu_\ell^{\rm us}$ and a constant $c>0$ only depending on $v_2$ and $v_3$ such that for all $\mu \in M^\ell$, $\mu \ge \mu^{\rm frac}_{\ell,m} := \mu_\ell^{\rm us} + c/\sqrt{m}$, \EEE one has $E(\mathcal{H}_\mu) < E(\mathcal{F}^*_\mu)$. \end{theorem} Notice that the configuration $\mathcal{H}_\mu$ corresponds to a brittle nanotube cleaved along a cross-section. The energy is given by $E(\mathcal{H}_\mu) = E(\mathcal{F}_{1,1,\mu^{\rm us}_\ell}) + \NNN 4\ell \EEE $ since in the configuration $\mathcal{H}_\mu$ there are $4\ell$ fewer active bonds {\BBB per $n$-cell} than in $\mathcal{F}_{1,1,\mu^{\rm us}_\ell}$. Moreover, $\mathcal{H}_\mu$ is a stable configuration in the sense of Theorem \ref{th: main3} \BBB for all $\mu \ge \mu^{\rm us}_\ell$, \EEE which can be seen by applying Theorem \ref{th: main3} separately on the two parts of $\mathcal{H}_\mu$, \RRR consisting of the points $x_{i,k}^{j,l}$ with $j < m/2$ and $j \ge m/2$, respectively. \EEE As mentioned, nanotubes are long structures. In particular, $m$ should be expected to be many orders of magnitude larger than $\ell$. The case of large $m$ is hence a sensible one, and for $m$ large enough we have $\mu^{\rm frac}_{\ell,m} < \mu^{\rm crit}_\ell$, with $\mu^{\rm crit}_\ell$ from Theorem \ref{th: main3}.
Hence, by combining Theorem \ref{th: main3} with Theorem \ref{fracture}, for {\it all} $\mu \ge \mu^{\rm us}_\ell$ we obtain a stability result for an elastically stretched or cleaved nanotube, respectively. The proof of Theorem \ref{fracture} is elementary and relies on the fact that the difference of the energies associated with $\mathcal{H}_\mu$ and $\mathcal{F}^*_\mu$ can be expressed as \begin{align*} E(\mathcal{H}_\mu) - E(\mathcal{F}^*_\mu)& = 4\ell + E(\mathcal{F}_{1,1,\mu^{\rm us}_\ell}) - E(\mathcal{F}^*_\mu) = 4\ell + E_{\rm min}(\mu^{\rm us}_\ell) - E_{\rm min}(\mu)\\ & = 4\ell - \frac{1}{2}\frac{d^2}{d\mu^2}E_{\rm min}(\mu^{\rm us}_\ell) (\mu -\mu^{\rm us}_\ell)^2 + {\rm O}((\mu -\mu^{\rm us}_\ell)^3)\\ & \le 4\ell - \frac{1}{4} \frac{d^2}{d\mu^2}E_{\rm min}(\mu^{\rm us}_\ell) (\mu -\mu^{\rm us}_\ell)^2 \le 4\ell - m\ell c (\mu -\mu^{\rm us}_\ell)^2 \end{align*} for $\mu$ in a small neighborhood around $\mu^{\rm us}_\ell$, where we used Property 1 in Proposition \ref{th: main2} \RRR and $n = 4m\ell$. \EEE In particular, the right-hand side becomes negative as soon as $\mu - \mu^{\rm us}_\ell > 2/\sqrt{cm}$, which explains the scaling of the threshold $\mu^{\rm frac}_{\ell,m}$. We close the section by noting that the scaling of $\mu^{\rm frac}_{\ell, m} - \mu^{\rm us}_\ell$ in $m$ is typical for atomistic systems with pair interaction of Lennard-Jones type and has also been obtained in related models, cf. \cite{Braides-Lew-Ortiz:06, FriedrichSchmidt:2011, FriedrichSchmidt:2014.1}. \section{Existence and stability: Proof of Theorem \ref{th: main1} and Theorem \ref{th: main3}}\label{sec: main proof} In this section we consider small perturbations $\tilde{\mathcal{F}}$ of configurations in $\mathscr{F}(\mu)$ with the same bond graph, {\BBB as defined in \eqref{eq: bc}}. The atomic positions of $\tilde{\mathcal{F}}$ will be indicated by $x_{i,k}^{j,l}$ and are labeled as for a configuration in $\mathscr{F}(\mu)$, cf. Proposition \ref{basiczigzag}(d). We first introduce some further notation needed for the proof of our main result.
In particular, we introduce a \emph{cell energy} corresponding to the energy contribution of a specific {\BBB basic} cell. \noindent \textbf{Centers and dual centers.} We introduce the \emph{cell centers} \begin{align}\label{eq: centers} z_{i,j,k} = \frac{1}{2}\Big(x_{i,k}^{j,0} + x_{i,k}^{j,1}\Big) \end{align} and the \emph{dual cell centers} $$z^{\rm dual}_{i,j,k} = \frac{1}{2}\Big(x_{i,k}^{j,1} + x_{i,k}^{j+1,0}\Big).$$ Note that for a configuration in $\mathscr{F}(\mu)$ and fixed $j$ the $2\ell$ points $z_{i,j,0}$ and $z^{\rm dual}_{i,j-1,1}$ for $i=1,\ldots,\ell$ lie in a plane perpendicular to $e_1$. Likewise, $z_{i,j,1}$ and $z^{\rm dual}_{i,j,0}$ for $i=1,\ldots,\ell$ lie in a plane perpendicular to $e_1$. \noindent \textbf{Cell energy.} The main strategy of our proof will be to reduce the investigation of \eqref{min2} to a cell problem. {\BBB In order to correctly capture the contribution of all bond lengths and angles to the energy, } it is not enough to consider a hexagon as a basic cell, but two additional atoms have to be taken into account. \begin{figure}[htp] \begin{center} \pgfdeclareimage[width=0.7\textwidth]{cell}{cell.pdf} \pgfuseimage{cell} \caption{ Notation for the points and the centers in the basic cell.} \label{cell} \end{center} \end{figure} Given a center $z_{i,j,k}$, number the atoms of the corresponding hexagon by $x_1 = x_{i,k}^{j,0}$, $ x_2 = x_{i,k}^{j,1}$, and the remaining ones clockwise by $x_3,x_4,x_5,x_6$ as indicated in Figure \ref{cell}, {\BBB such that $x_3$ is consecutive to $x_1$,} see also \eqref{kink} below. Additionally, the atoms bonded to $x_1$ and $x_2$, respectively, which are not contained in the hexagon, are denoted by $x_7$ and $x_8$. Note that $z_{i,j-1,k}^{\rm dual} = (x_7 + x_1)/2$ and $z_{i,j,k}^{\rm dual} = (x_2 + x_8)/2$.
For $i=1,\ldots,6$ we define the bond lengths $b_i$ as indicated in Figure \ref{cellangles} and $b_7 = |x_1 - x_7|$, $b_8 = |x_2 - x_8|$, where $$2|z_{i,j-1,k}^{\rm dual} - x_1| = b_7, \ \ \ \ 2|z_{i,j,k}^{\rm dual} - x_2| = b_8.$$ By $\varphi_i$ we denote the interior angle of the hexagon at $x_i$. By $\varphi_7,\varphi_8$ we denote the remaining two angles at $x_1$ and by $\varphi_9,\varphi_{10}$ we denote the remaining two angles at $x_2$, see again Figure \ref{cellangles}. \begin{figure}[htp] \begin{center} \pgfdeclareimage[width=0.45\textwidth]{cellangles}{cellangles.pdf} \pgfuseimage{cellangles} \caption{ Notation for the bond lengths and angles in the basic cell.} \label{cellangles} \end{center} \end{figure} We define the \emph{cell energy} by \begin{align} E_{{\rm cell}}(z_{i,j,k}) & = \frac{1}{4} \big(v_2(b_1) + v_2(b_2) \big) + \frac{1}{2}\sum_{i=3}^6v_2(b_i) + \frac{1}{4} \big( v_2(b_7) + v_2(b_8) \big) \nonumber\\ & + v_3(\varphi_1) + v_3(\varphi_2) + \frac{1}{2}\sum_{i=3}^6 v_3(\varphi_i) +\frac{1}{2} \sum_{i=7}^{10} v_3(\varphi_i). \label{eq: cell} \end{align} To derive convexity properties of $E_{{\rm cell}}$ it is convenient to also take the contribution of the angles $\varphi_7, \ldots, \varphi_{10}$ into account. Observe that \begin{align}\label{eq: sumenergy} E(\tilde{\mathcal{F}}) = \sum_{i=1}^\ell \sum_{j=1}^m \sum_{k=0,1} E_{{\rm cell}}(z_{i,j,k}). \end{align} Indeed, each bond not (approximately) parallel to $e_1$ is contained in exactly two cells. Each bond (approximately) parallel to $e_1$ is contained in four cells, twice in the form of a bond in a hexagon, once as a bond to the left of a hexagon, and once as a bond to the right of a hexagon. Moreover, angles with index $\lbrace 1,2\rbrace$ are contained in exactly one cell and angles with index $\lbrace 3,\ldots,10\rbrace$ are contained in exactly two cells. As a consistency check, note that each of the $2m\ell$ cells carries a total bond weight of $2\cdot\frac14 + 4\cdot\frac12 + 2\cdot\frac14 = 3$ and a total angle weight of $2 + 4\cdot\frac12 + 4\cdot\frac12 = 6$, so that the sum in \eqref{eq: sumenergy} accounts for $6m\ell = 3n/2$ bonds and $12m\ell = 3n$ angles, each with total weight one, in accordance with every atom having exactly three bonds. \noindent \textbf{Symmetrization of cells.} \BBB A basic cell is a configuration of eight points of $\mathbb{R}^3$.
By $\boldsymbol{x}_{\rm kink}^\ell\in\mathbb{R}^{3\times 8}$ we denote the \emph{unstretched kink configuration}: a basic cell as found in the unstretched configuration $\mathcal{G}_{\alpha_\ell^{\rm us}}$ from Section \ref{sec: mainresults}, see \eqref{kink} below for the exact definition. \BBB Notice that the coordinates given in \eqref{kink} correspond to a convenient choice of a new reference orthonormal system in $\mathbb{R}^3$, which will often be tacitly used when working with a basic cell. Indeed, consider a cell of the nanotube $\mathcal{G}_{\alpha_\ell^{\rm us}}$, where the eight points are ordered from $x_1$ to $x_8$ according to the convention of the previous subsection (see Figure \ref{cell}); \RRR in particular, the points $x_3,x_4,x_5,x_6$ are numbered clockwise with respect to an observer lying in the interior of the tube. \EEE \BBB We fix a new reference coordinate system as follows: we let the center of the cell be the origin, $e_1$ (axis direction) be the direction of $x_2-x_1$, $e_2$ the direction of $x_3-x_6$, and $e_3=e_1\wedge e_2$. Sometimes we will write $\mathbb{R}^2\times\{0\}$ for the plane generated by $e_1,e_2$. If $\boldsymbol{x}\in \mathbb{R}^{3\times 8}$ denotes a generic cell, possibly after a rigid motion we may always assume that, {\BBB with respect to the new reference system}, the second and third components of \RRR $(x_1+x_7)/2$, $(x_2+x_8)/2$ are zero \EEE and the points $x_4$, $x_5$ lie in a plane parallel to $\mathbb{R}^2 \times \lbrace 0 \rbrace$. A key step in our analysis will be to show that the minimization of the cell energy \eqref{eq: cell} can be reduced to a special situation with high symmetry. To this end, we introduce the \emph{symmetrization} of a cell. For $y = (y^1, y^2,y^3) \in \mathbb{R}^3$ we let $r_1 (y) := (-y^1,y^2,y^3)$ and $r_2 (y) := (y^1,-y^2,y^3)$.
For the generic cell $\boldsymbol{x}= (x_1,\ldots,x_8) \in \mathbb{R}^{3 \times 8}$ we define the reflections \begin{equation}\label{reflexion} \begin{aligned} S_1(\boldsymbol{x})& = ( r_2(x_1) \, | \, r_2(x_2) \, | \, r_2(x_6) \, | \, r_2(x_5) \, | \, r_2(x_4) \, | \, r_2(x_3) \, | \, r_2( x_7) \, | \, r_2( x_8)),\\ S_2(\boldsymbol{x}) &= ( r_1( x_2) \, | \, r_1( x_1) \, | \, r_1(x_4 )\, | \, r_1( x_3) \, | \,r_1( x_6) \, | \, r_1(x_5) \, | \, r_1( x_8) \, | \, r_1( x_7)). \end{aligned} \end{equation} $S_1$ interchanges $x_3$ with $x_6$ and $x_4$ with $x_5$, and changes the sign of the second components of all points. On the other hand, $S_2$ interchanges $x_1$ with $x_2$, $x_3$ with $x_4$, $x_5$ with $x_6$, and $x_7$ with $x_8$, and changes the sign of the first components of all points. \EEE {\BBB We let \begin{equation}\label{s1s2} \boldsymbol{x}_{S_1}:=\boldsymbol{x}_{\rm kink}^\ell+S_1(\boldsymbol{x}-\boldsymbol{x}_{\rm kink}^\ell),\quad \boldsymbol{x}_{S_2} : = \boldsymbol{x}_{\rm kink}^\ell + S_2(\boldsymbol{x}- \boldsymbol{x}_{\rm kink}^\ell). \end{equation} If $\boldsymbol{x}$ is seen as a perturbation of $\boldsymbol{x}_{\rm kink}^\ell$, then $\boldsymbol{x}_{S_1}$ (resp. $\boldsymbol{x}_{S_2}$) is the symmetrized perturbation with respect to the plane generated by $e_1, e_3$ (resp. $e_2, e_3$). The symmetry of the configurations therefore implies $E_{\rm cell}(\boldsymbol{x}_{S_2} ) = E_{\rm cell}(\boldsymbol{x}_{S_1} ) = E_{\rm cell}(\boldsymbol{x})$.
} We define \begin{subequations}\label{reflection2} \begin{align} \boldsymbol{x}' := \boldsymbol{x}_{\rm kink}^\ell + \frac{1}{2} \Big((\boldsymbol{x} - \boldsymbol{x}_{\rm kink}^\ell) + S_1(\boldsymbol{x}- \boldsymbol{x}_{\rm kink}^\ell) \Big),\label{reflection2-a}\\ \mathcal{S}(\boldsymbol{x}): = \boldsymbol{x}_{\rm kink}^\ell + \frac{1}{2} \Big((\boldsymbol{x}' - \boldsymbol{x}_{\rm kink}^\ell) + S_2(\boldsymbol{x}'- \boldsymbol{x}_{\rm kink}^\ell) \Big).\label{reflection2-b} \end{align} \end{subequations} We also introduce the \emph{symmetry defect} \begin{align}\label{delta} \Delta(z_{i,j,k}) := |\boldsymbol{x} - \boldsymbol{x}'|^2 + |\boldsymbol{x}' - \mathcal{S}(\boldsymbol{x})|^2. \end{align} {\BBB We remark that for a basic cell $\boldsymbol{x}$ with center $z_{i,j,k}$ the quantity $|z^{\rm dual}_{i,j,k} - z^{\rm dual}_{i,j-1,k}|$ does not change when passing to $\mathcal{S}(\boldsymbol{x})$, \RRR since the second and third components of $z^{\rm dual}_{i,j,k}$ and $z^{\rm dual}_{i,j-1,k}$ are assumed to be zero. \EEE } Below we will see that the difference of the cell energies of $\mathcal{S}(\boldsymbol{x})$ and $\boldsymbol{x}$ can be controlled in terms of $\Delta(z_{i,j,k})$ due to the strict convexity of the energy. \noindent \textbf{Angles between planes.} For each $x = x_{i,k}^{j,l}$ we denote by $x_1,x_2,x_3$ the three atoms that are bonded to $x$, where the three points are numbered such that $x_3 - x$ is (approximately) parallel to {\BBB the axis direction} $e_1$. Let $\theta = \theta(x)\le\pi$ denote the angle between the planes defined by $\{x_3 x x_1\}$ and $\{x_3 x x_2\}$. More precisely, let $n_{13}$, $n_{23}$ denote unit normal vectors to the planes $\{x_3 x x_1\}$ and $\{x_3 x x_2\}$, respectively. Then we have \begin{align}\label{eq: thetaangle} \theta(x) = \max \big\{\pi - \arccos ( n_{13} \cdot n_{23} ), \ \arccos ( n_{13} \cdot n_{23} ) \big\} \end{align} as represented in Figure \ref{angletheta}.
With these preparations we will now define angles corresponding to centers and dual centers. Given a center $z_{i,j,k} = \frac{1}{2}(x_{i,k}^{j,0} + x_{i,k}^{j,1})$ of a hexagon, we denote as before the points of the hexagon by $x_1,\ldots,x_6$. By $\theta_l(z_{i,j,k})$ we denote the angle between the planes $\{x_1 x_3 x_4\}$ and $\{x_1 x_6 x_5\}$. By $\theta_r(z_{i,j,k})$ we denote the angle between the planes $\{x_3 x_4 x_2\}$ and $\{x_2 x_5 x_6\}$. For a dual center $z^{\rm dual}_{i,j,k} = (x_{i,k}^{j,1} + x_{i,k}^{j+1,0})/2$ we introduce $\theta_l(z^{\rm dual}_{i,j,k}) = \theta(x_{i,k}^{j,1})$ and $\theta_r(z^{\rm dual}_{i,j,k}) = \theta(x_{i,k}^{j+1,0})$. \begin{figure}[htp] \begin{center} \pgfdeclareimage[width=0.5\textwidth]{angletheta}{angletheta.pdf} \pgfuseimage{angletheta} \caption{ The angle between the planes $\{x_3 x x_1\}$ and $\{x_3 x x_2\}$ is denoted by $\theta(x)$. } \label{angletheta} \end{center} \end{figure} In Section \ref{sec: angles} we prove the following lemma, which provides a linear control of the oscillation of the plane angles of a perturbed configuration $\tilde{\mathcal{F}}$ \RRR with respect to those of a configuration in $\mathscr{F}(\mu)$ \EEE in terms of the symmetry defect from \eqref{delta}. \begin{lemma}[Symmetry defect controls angle defect]\label{lemma: sum} There is a universal constant $c>0$ such that, for $\eta>0$ small enough \RRR and all $\tilde{\mathcal{F}}$ with $\Delta(z_{i,j,k}) \le \eta$ for all centers $z_{i,j,k}$, \EEE we have \begin{align*} \sum_{j=1}^m\sum_{i=1}^\ell \sum_{k=0,1}\Big(\theta_l(z_{i,j,k}) + \theta_l(z^{\rm dual}_{i,j,k}) & + \theta_r(z_{i,j,k}) + \theta_r(z^{\rm dual}_{i,j,k}) \Big) \\ & \le 4m(2\ell - 2)\pi + c\sum_{j=1}^m\sum_{i=1}^\ell\sum_{k=0,1} \Delta(z_{i,j,k}). \end{align*} \end{lemma} Note that the sum on the left equals exactly $4m(2\ell - 2)\pi$ if $\tilde{\mathcal{F}} \in \mathscr{F}(\mu)$.
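The constant $4m(2\ell-2)\pi$ can be checked by direct computation: for a configuration in $\mathscr{F}(\mu)$ each of the plane angles equals $\gamma_\ell = \pi(1-1/\ell)$, cf. \eqref{gamma}, and the sum comprises four terms for each of the $2m\ell$ centers, whence
\begin{align*}
8m\ell \, \gamma_\ell = 8m\ell\,\pi\Big(1-\frac{1}{\ell}\Big) = 8m\pi(\ell-1) = 4m(2\ell-2)\pi.
\end{align*}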
\noindent \textbf{Reduced energy.} A key step in our analysis will be to show that the minimization of the cell energy \eqref{eq: cell} can be reduced to a special situation with high symmetry. As represented in Figure \ref{reducedenergy}, this corresponds to the conditions \begin{equation}\label{sym-assumption} \begin{aligned} &b_1 = b_2 = \lambda_1, \ \ \ \ b_3 = b_4 = b_5 = b_6 = \lambda_2, \ \ \ \ b_7 = b_8 = \lambda_3,\\ &z^{\rm dual}_{i,j,k} - z^{\rm dual}_{i,j-1,k} = {\BBB \widetilde\mu }e_1, \ \ \ \ x_2-x_1 = \lambda_4 e_1,\\ &\varphi_1 = \varphi_2 = \beta, \ \ \ \ \varphi_3 = \varphi_4 = \varphi_5 = \varphi_6 = \alpha_1, \ \ \ \ \varphi_7 = \varphi_8 = \varphi_9= \varphi_{10} = \alpha_2, \\ &{\theta_l}(z_{i,j,k}) = {\theta_r}(z_{i,j,k}) = \gamma_1, \ \ \ \ \ {\theta_l}(z^{\rm dual}_{i,j,k}) = {\theta_r}(z^{\rm dual}_{i,j-1,k}) = \gamma_2 \end{aligned} \end{equation} with $\lambda_1,\lambda_2,\lambda_3 \in (0.9,1.1)$, $\lambda_4 \in (0.9,3.3)$, ${\BBB \widetilde\mu} \in (2.6,3.1)$, $\alpha_1,\alpha_2,\beta \in (\arccos(-0.4),\arccos(-0.6))$, $\gamma_1,\gamma_2 \in [\frac{3}{4}\pi,\pi]$. Note that ${\theta_r}(z^{\rm dual}_{i,j-1,k})= \theta(x_1)$ and ${\theta_l}(z^{\rm dual}_{i,j,k})= \theta(x_2)$ with the angles introduced in \eqref{eq: thetaangle}. {\BBB The notation $\widetilde\mu$ reflects the fact that indeed $\widetilde \mu=\mu$ for a basic cell of a nanotube in $\mathscr{F}(\mu)$.} {\BBB Under \eqref{sym-assumption}}, arguing along the lines of Proposition \ref{betaproperties}, we obtain \begin{align}\label{eq: constraint2} \beta= \beta(\alpha_1,\gamma_1) = 2\arcsin\left(\sin\alpha_1\sin\frac{\gamma_1}{2}\right) = \beta(\alpha_2,\gamma_2) = 2\arcsin\left(\sin\alpha_2\sin\frac{\gamma_2}{2}\right). \end{align} By elementary trigonometry, cf. Figure \ref{reducedenergy}, we also get \begin{align}\label{lambda4} \lambda_4 = \lambda_1 - 2\lambda_2\cos\alpha_1.
\end{align} We now introduce the \emph{symmetric energy} by \begin{align}\label{symmetric-cell} \begin{split} E_{\mu,\gamma_1,\gamma_2}^{{\rm sym}}(\lambda,\alpha_1,\alpha_2) &= 2v_2(\lambda) + \frac{1}{2} {v}_2 \big(\mu/2 + \lambda\cos\alpha_1 \big) + \frac{1}{2} {v}_2 \big(\mu/2 + \lambda\cos\alpha_2 \big) \\ & \ \ \ \ + 2 v_3(\alpha_1) + 2 v_3(\alpha_2) + v_3(\beta(\alpha_1,\gamma_1)) + v_3(\beta(\alpha_2,\gamma_2)). \end{split} \end{align} {\BBB Notice that $E_{\rm cell}(z_{i,j,k})=E^{\rm sym}_{\widetilde\mu,\gamma_1,\gamma_2}(\lambda,\alpha_1,\alpha_2)$ if the conditions \eqref{sym-assumption} hold with $\alpha_1=\alpha_2$, $\gamma_1=\gamma_2$, \RRR $\lambda_1=\lambda_3= \widetilde\mu/2 + \lambda\cos\alpha_1$, and $\lambda_2=\lambda$. In general, } we show that, up to a small perturbation, the symmetric energy {\BBB $E_{\widetilde\mu,\gamma_1,\gamma_2}^{{\rm sym}}$ delivers a lower bound for $E_{\rm cell}$ for cells satisfying \eqref{sym-assumption}}. \begin{figure}[htp] \begin{center} \pgfdeclareimage[width=0.7\textwidth]{reducedenergy}{reducedenergy.pdf} \pgfuseimage{reducedenergy} \caption{Half of a cell configuration kinked at the plane $\pi$ and satisfying conditions \eqref{sym-assumption}. The other half of the cell configuration can be determined by symmetry with respect to the plane $\pi$.} \label{reducedenergy} \end{center} \end{figure} \begin{lemma}[Cell energy and symmetric energy]\label{lemma: sym-energy} There exist a constant $c_0>0 $ and $\ell_0 \in \mathbb{N}$ only depending on $v_2$ and $v_3$ \RRR such that for each $\tilde{\mathcal{F}}$ and all centers $z_{i,j,k}$ satisfying conditions \eqref{sym-assumption} with $|\lambda_1 - 1| + |\lambda_3 - 1| \le \ell^{-4}$ and $|\gamma_1 - \gamma_2| \le \ell^{-2}$ \EEE we have $$ E_{{\rm cell}}(z_{i,j,k}) \ge E_{{\BBB \widetilde\mu},\gamma_1,\gamma_2}^{{\rm sym}}(\lambda_2,\alpha_1,\alpha_2) - c_0 \ell^{-4} (\gamma_1 - \gamma_2)^2. $$ \end{lemma} This lemma will be proved in Section \ref{sec: reduced-energy}.
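To make the identification behind \eqref{symmetric-cell} explicit, assume \eqref{sym-assumption} with $\alpha_1 = \alpha_2$, $\gamma_1 = \gamma_2$, $\lambda_2 = \lambda$, and $\lambda_1 = \lambda_3$. Then the cell energy \eqref{eq: cell} collapses term by term:
\begin{align*}
E_{\rm cell}(z_{i,j,k}) = \underbrace{2\cdot\tfrac14\, v_2(\lambda_1) + 2\cdot\tfrac14\, v_2(\lambda_3)}_{= v_2(\lambda_1)} + \underbrace{4\cdot \tfrac12\, v_2(\lambda_2)}_{=2v_2(\lambda)} + 2v_3(\beta) + \underbrace{4\cdot\tfrac12\, v_3(\alpha_1) + 4\cdot\tfrac12\, v_3(\alpha_2)}_{=2v_3(\alpha_1)+2v_3(\alpha_2)},
\end{align*}
which coincides with $E^{\rm sym}_{\widetilde\mu,\gamma_1,\gamma_2}(\lambda,\alpha_1,\alpha_2)$ once $\lambda_1 = \widetilde\mu/2 + \lambda\cos\alpha_1$ is inserted and $\beta = \beta(\alpha_1,\gamma_1) = \beta(\alpha_2,\gamma_2)$ is taken into account.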
The idea in the proof is to express $\lambda_3$ in terms of the relations \eqref{sym-assumption} and \eqref{lambda4} to find $\lambda_3 = {\BBB \widetilde\mu} - \lambda_1 + 2\lambda\cos\alpha_1 + {\rm O}( (\gamma_1 - \gamma_2)^2)$, where we set $\lambda=\lambda_2$. Here the term $ {\rm O}( (\gamma_1 - \gamma_2)^2)$ appears as the points $x_7,x_1,x_2,x_8$ in general do not lie on a line. Likewise, we obtain $\lambda_1 = {\BBB \widetilde\mu} - \lambda_3 + 2\lambda\cos{\BBB \alpha_2} + {\rm O}( (\gamma_1 - \gamma_2)^2)$. Finally, we use $v_2(\lambda_1) + v_2(\lambda_3) \ge 2v_2( (\lambda_1 + \lambda_3)/2 )$ by convexity of $v_2$. We also introduce the \emph{reduced energy} \begin{align}\label{red} E_{\rm red}(\mu,\gamma_1,\gamma_2) &= \min\lbrace E_{\mu,\gamma_1,\gamma_2}^{{\rm sym}}(\lambda,\alpha_1,\alpha_2)| \ \lambda \in (0.9,1.1), \ \alpha_1,\alpha_2 \in (\arccos(-0.4),\arccos(-0.6)) \rbrace. \end{align} Since $E_{\mu,\gamma_1,\gamma_2}^{{\rm sym}}$ is symmetric in $(\alpha_1,\gamma_1)$ and $(\alpha_2,\gamma_2)$, we observe that $E_{\rm red}$ is symmetric in $\gamma_1$ and $\gamma_2$, i.e.\ $E_{\rm red}(\mu,\gamma_1,\gamma_2) = E_{\rm red}(\mu,\gamma_2,\gamma_1)$. The following result, which is proved in Section \ref{sec: reduced-energy}, collects the fundamental properties of $E_{\rm red}$. \begin{proposition}[Properties of $E_{\rm red}$]\label{th: mainenergy} There exist $\ell_0 \in \mathbb{N}$ and, for each $\ell \ge \ell_0$, open intervals $M^\ell$, $G^\ell$ only depending on $v_2$, $v_3$, and $\ell$ with $\mu^{\rm us}_\ell \in M^\ell$, $\gamma_\ell \in G^\ell$ such that the following holds: \begin{itemize} \item[1.] \BBB (Unique minimizer) For each $(\mu, \gamma_1,\gamma_2) \in M^\ell \times G^\ell \times G^\ell$ there exists a unique triple $(\lambda^\mu, \alpha^\mu_1,\alpha^\mu_2)$ solving the minimization problem \eqref{red}.
{\BBB {Moreover, $\alpha_1^\mu=\alpha_2^\mu$ if $\gamma_1=\gamma_2$.}} \EEE \RRR (For simplicity, the dependence of the triple on $\gamma_1,\gamma_2$ is not included in the notation.)\EEE \item[2.] (Strict convexity) $E_{\rm red}$ is strictly convex on $M^\ell \times G^\ell \times G^\ell$, in particular there is a constant $c_0'>0$ only depending on $v_2$ and $v_3$ such that $$E_{\rm red}(\mu,\gamma_1,\gamma_2) \ge E_{\rm red}(\mu,\bar{\gamma},\bar{\gamma}) + c_0'\ell^{-2} (\gamma_1 - \gamma_2)^2$$ with $\bar{\gamma} = (\gamma_1 + \gamma_2)/2$ for all $\mu \in M^\ell$ and $\gamma_1,\gamma_2 \in G^\ell$. \item[3.] (Monotonicity in $\gamma$) For each $\mu \in M^\ell$, the mapping $g(\gamma):= E_{\rm red}(\mu,\gamma,\gamma)$ is decreasing on $G^\ell$ with $|g'(\gamma)| \le C\ell^{-3}$ for all $\gamma \in G^\ell$ for some $C>0$ depending only on $v_3$. \item[4.] (Monotonicity in $\mu$) The mapping $h(\mu):= E_{\rm red}(\mu,\gamma_\ell,\gamma_\ell)$ is strictly convex on $M^\ell$ with $h''(\mu^{\rm us}_\ell)>0$ and strictly increasing on $M^\ell \cap \lbrace \mu \ge \mu^{\rm us}_\ell \rbrace$. \item[5.] \BBB (Minimization) For each $\mu \in M^\ell$ and $\gamma_1 = \gamma_2 = \gamma_\ell$, letting $\lambda_1^\mu = \mu/2 + \lambda^\mu \cos\alpha^\mu_1$ and $\lambda_2^\mu = \lambda^\mu$ with $\lambda^\mu$ and $\alpha^\mu_1$ from 1., \EEE the configuration $\mathcal{F}_{\lambda_1^\mu,\lambda_2^\mu,\mu}$ is the unique minimizer of the problem \eqref{min2} with $$E(\mathcal{F}_\mu^*) = E(\mathcal{F}_{\lambda_1^\mu,\lambda_2^\mu,\mu}) = 2m\ell E_{\rm red}(\mu,\gamma_\ell,\gamma_\ell).$$ \end{itemize} \end{proposition} \RRR \noindent\textbf{Proof of Theorem \ref{th: main1} and Theorem \ref{th: main3}.} We postpone the proofs of the auxiliary results Lemma \ref{lemma: sum}, Lemma \ref{lemma: sym-energy}, and Proposition \ref{th: mainenergy} to the next sections and now proceed with the proof of Theorem \ref{th: main1} and Theorem \ref{th: main3}. 
For the proof of Proposition \ref{th: main2} we refer to Section \ref{sec: reduced-energy}. From the properties of the reduced energy $E_{\rm red}$, we directly obtain Theorem \ref{th: main1}. \begin{proof}[Proof of Theorem \ref{th: main1}] Theorem \ref{th: main1} follows from Property 5 of Proposition \ref{th: mainenergy}. \end{proof} We denote the unique minimizer again by $\mathcal{F}_\mu^*$ and recall the definition of small perturbations $\mathscr{P}_\eta(\mu)$ in \eqref{eq: bc}. Based on the properties of the reduced energy $E_{\rm red}$, \EEE we are able to show that, up to a linear perturbation in terms of the symmetry defect $\Delta$ defined in \eqref{delta}, $E_{\rm red}$ bounds the cell energy $E_{\rm cell}$ from below. More precisely, we have the following. \begin{theorem}[Energy defect controls symmetry defect]\label{th: Ered} There exist $C>0 $ and $\ell_0 \in \mathbb{N}$ only depending on $v_2$ and $v_3$, and for each $\ell \ge \ell_0$ there are $\eta_\ell > 0$ and an open interval $M^\ell$ containing $\mu^{\rm us}_\ell $ such that for all $\mu \in M^\ell$, $\tilde{\mathcal{F}} \in \mathscr{P}_{\eta_\ell}(\mu)$, and centers $z_{i,j,k}$ we have \begin{align*} E_{{\rm cell}}(z_{i,j,k}) \ge E_{\rm red}\big(| z^{\rm dual}_{i,j,k} - z^{\rm dual}_{i,j-1,k}| , \bar{\theta}(z_{i,j,k}), \bar{\theta}(z_{i,j,k})\big) + C \ell^{-2}\Delta(z_{i,j,k}), \end{align*} where $\bar{\theta}(z_{i,j,k}) := \big(\theta_l(z_{i,j,k}) + \theta_r(z_{i,j,k}) + \theta_l(z^{\rm dual}_{i,j,k}) + \theta_r(z^{\rm dual}_{i,j-1,k}) \big)/4$. \end{theorem} \RRR We postpone the proof of Theorem \ref{th: Ered} to Section \ref{sec: cellenery} and close this section with the proof of our main stability result Theorem \ref{th: main3}.
\EEE \begin{proof}[Proof of Theorem \ref{th: main3}] Let $M^\ell$ be an open interval containing $\mu^{\rm us}_\ell$ such that Proposition \ref{th: mainenergy} and Theorem \ref{th: Ered} hold for all $\mu \in M^\ell$ and let $G^\ell$ be the interval from Proposition \ref{th: mainenergy}. Then choose $\mu^{\rm crit}_\ell > \mu^{\rm us}_\ell$ such that $[\mu^{\rm us}_\ell, \mu^{\rm crit}_\ell] \subset \subset M^\ell$. Let $\ell \ge \ell_0$ and $\mu \in [\mu^{\rm us}_\ell, \mu^{\rm crit}_\ell]$ be given. Consider a nontrivial perturbation $\tilde{\mathcal{F}} \in \mathscr{P}_{\eta_\ell}(\mu)$ with $\eta_\ell$ as in Theorem \ref{th: Ered}. We denote the atomic positions by $x_{i,k}^{j,l}$ and the centers by $z_{i,j,k}$, $z_{i,j,k}^{\rm dual}$ as introduced at the beginning of the section, see \eqref{eq: centers} and Figure \ref{cell}. Define \begin{align}\label{thetabar} \bar{\theta}(z_{i,j,k}) = \frac{1}{4}\big(\theta_l(z_{i,j,k}) + \theta_r(z_{i,j,k})+\theta_l(z^{\rm dual}_{i,j,k}) + \theta_r(z^{\rm dual}_{i,j-1,k}) \big) \end{align} and also $$\bar{\mu} = \frac{1}{2m\ell}\sum_{j=1}^m\sum_{i=1}^\ell \sum_{k=0,1} |z^{\rm dual}_{i,j,k} - z^{\rm dual}_{i,j-1,k}|, \ \ \ \ \ \bar{\theta} = \frac{1}{2m\ell}\sum_{j=1}^m\sum_{i=1}^\ell \sum_{k=0,1} \bar{\theta}(z_{i,j,k}).$$ Possibly passing to a smaller $\eta_\ell$, we get $|z^{\rm dual}_{i,j,k} - z^{\rm dual}_{i,j-1,k}| \in M^\ell$ and $\bar{\theta}(z_{i,j,k}) \in G^\ell$ for all $i,j,k$. By Theorem \ref{th: Ered} we have for each cell \begin{align}\label{mainproof1} E_{{\rm cell}}(z_{i,j,k}) \ge E_{\rm red}\Big( |z^{\rm dual}_{i,j,k} - z^{\rm dual}_{i,j-1,k}|, \bar{\theta}(z_{i,j,k}), \bar{\theta}(z_{i,j,k}) \Big) + C \ell^{-2}\Delta(z_{i,j,k}) \end{align} if $\ell_0$ is chosen sufficiently large. 
Then, taking the sum over all cells and using Property {\it 2.} of Proposition \ref{th: mainenergy}, we get by \eqref{eq: sumenergy} \begin{align*} E(\tilde{\mathcal{F}}) &= \sum_{i=1}^\ell \sum_{j=1}^m \sum_{k=0,1} E_{{\rm cell}}(z_{i,j,k})\ge 2m\ell E_{\rm red}(\bar{\mu}, \bar{\theta},\bar{\theta}) + C \ell^{-2}\sum_{i=1}^\ell \sum_{j=1}^m \sum_{k=0,1} \Delta(z_{i,j,k}). \end{align*} \RRR Possibly passing to a smaller $\eta_\ell$, we can assume that $\Delta(z_{i,j,k}) \le \eta$ for all centers with $\eta$ from Lemma \ref{lemma: sum}. Then \EEE using Lemma \ref{lemma: sum} and recalling \eqref{thetabar} we find $$\bar{\theta} \le \frac{1}{8m\ell}\Big( 4m(2\ell - 2)\pi + c\sum_{j=1}^m\sum_{i=1}^\ell\sum_{k=0,1} \Delta(z_{i,j,k}) \Big) \le \gamma_\ell + \frac{c}{2m\ell}\sum_{j=1}^m\sum_{i=1}^\ell\sum_{k=0,1} \Delta(z_{i,j,k}),$$ where in the last step we have used the fact that $\gamma_\ell = \pi(1-1/\ell)$, see \eqref{gamma}. This together with Property 3 of Proposition \ref{th: mainenergy} yields \begin{align*} E(\tilde{\mathcal{F}}) \ge 2m\ell E_{\rm red}(\bar{\mu}, \gamma_\ell,\gamma_\ell) + \big(C \ell^{-2} - C'\ell^{-3} \big)\sum_{j=1}^m \sum_{i=1}^\ell \sum_{k=0,1} \Delta(z_{i,j,k}) \end{align*} for some $C'>0$ only depending on $v_3$. Recalling the constraint in definition \eqref{eq: bc}, we get for fixed $i$ and $k$ that $$m\mu = L^\mu_m = \Big|\sum_{j=1}^m \big(z^{\rm dual}_{i,j,k} - z^{\rm dual}_{i,j-1,k}\big) \Big| \le \sum_{j=1}^m |z^{\rm dual}_{i,j,k} - z^{\rm dual}_{i,j-1,k}| $$ and therefore, by taking the sum over all $i$ and $k$, we get $\bar{\mu} \ge \mu \ge \mu^{\rm us}_\ell$.
Then we derive by Properties 4 and 5 of Proposition \ref{th: mainenergy} \begin{align} E(\tilde{\mathcal{F}}) &\ge 2m\ell E_{\rm red}(\mu,\gamma_\ell,\gamma_\ell) + C''\ell^{-2}\sum_{i=1}^\ell \sum_{j=1}^m \sum_{k=0,1} \Delta(z_{i,j,k})\nonumber \\ &= E(\mathcal{F}_\mu^*) + C''\ell^{-2}\sum_{i=1}^\ell \sum_{j=1}^m \sum_{k=0,1} \Delta(z_{i,j,k}) \label{mainproof3} \end{align} for $\ell_0$ sufficiently large and a possibly smaller constant $C''>0$. Note that in this step of the proof we have crucially used that $\mu \ge \mu^{\rm us}_\ell$, i.e. that the nanotube is stretched, so that a monotonicity argument can be applied. It remains to confirm the strict inequality $E(\tilde{\mathcal{F}}) > E(\mathcal{F}_\mu^*)$. If $\Delta(z_{i,j,k})>0$ for some center $z_{i,j,k}$, this follows directly from the previous estimate. Otherwise, as $\tilde{\mathcal{F}}$ is a nontrivial perturbation, one of the angles in \eqref{thetabar} or one of the lengths $|z^{\rm dual}_{i,j,k} - z^{\rm dual}_{i,j-1,k}|$ does not coincide with the corresponding mean value, and then at least one of the inequalities \eqref{mainproof1}-\eqref{mainproof3} is strict due to the strict convexity and monotonicity of the mappings considered in Proposition \ref{th: mainenergy}. \end{proof} \section{Symmetry defect controls angle defect: Proof of Lemma \ref{lemma: sum}}\label{sec: angles} This short section is devoted to the proof of Lemma \ref{lemma: sum}. Recall the definition of the centers in \eqref{eq: centers}, the angles \eqref{eq: thetaangle}, and the symmetry defect \eqref{delta}. \begin{proof}[Proof of Lemma \ref{lemma: sum}] \RRR Let $\tilde{\mathcal{F}}$ be given, a small perturbation of a configuration $\mathcal{F}' \in \mathscr{F}(\mu)$, with $\Delta(z_{i,j,k}) \le \eta$ for all centers $z_{i,j,k}$.
\EEE Due to the symmetry of the problem it suffices to show $$ \sum_{j=1}^m\sum_{i=1}^\ell \Big(\theta_l(z_{i,j,0}) + \theta_l(z^{\rm dual}_{i,j-1,1}) \Big) \le m(2\ell - 2)\pi + c\sum_{j=1}^m\sum_{i=1}^\ell\sum_{k=0,1} \Delta(z_{i,j,k}). $$ For brevity we write $\theta'_i = \theta_l(z_{\frac{i+1}{2},j,0})$ for $i =1,3,\ldots, 2\ell-1$ and $\theta'_i = \theta_l(z^{\rm dual}_{\frac{i}{2},j-1,1})$ for $i=2,4,\ldots,2\ell$. (Note that for convenience we do not include the index $j$ in the notation.) Let $n_i,n_{i+1}$ be unit normal vectors as introduced before \eqref{eq: thetaangle} such that $n_i \cdot n_{i+1}$ is near $1$ and $\sphericalangle(n_i,n_{i+1}) = \pi - \theta'_i$ for $i= 1,3,\ldots, 2\ell-1$. For a suitable ordering of $n_i$ and $n_{i+1}$ we then also obtain $\sphericalangle(n_{i},n_{i+1}) = \pi- \theta'_i$ for $i=2,4,\ldots,2\ell$. Fix a center $x_0 \in \mathbb{R}^3$ and let $P$ be the $2\ell$-gon with vertices $v_i := x_0 + n_i$, $i=1,\ldots,2\ell$. Denote the interior angles accordingly by $\varphi_i$. Note that each edge of $P$, together with $x_0$, forms a triangle with angles $\pi - \theta'_i$, $\psi_i^1$, and $\psi_i^2$, where $\psi_i^1$ is the angle at the vertex $v_i$ and $\psi_{i}^2$ is the angle at $v_{i+1}$. The key ingredient in the proof is now the observation that there exists a universal $c>0$ such that \begin{subequations}\label{psi-phi} \begin{align} &\psi_{i+1}^1 + \psi_{i}^2 - \varphi_{i+1} \le c\Delta(z_{\frac{i+1}{2},j,0}) + c\Delta(z_{\frac{i+3}{2},j,0}), \label{psi-phi-a} \\ &\psi_{i}^1 + \psi_{i-1}^2 - \varphi_{i} \le c\Delta(z_{\frac{i-1}{2},j,0}) + c\Delta(z_{\frac{i+1}{2},j,0})\label{psi-phi-b} \end{align} \end{subequations} for $i=1,3,\ldots,2\ell-1$, {\BBB where it is understood that $\psi_0^2=\psi^2_{2\ell}$ and $z_{0,j,0}=z_{\ell,j,0}$}. We defer the derivation of this property to the end of the proof.
Notice that $\theta_i' = \psi_i^1 + \psi_i^2$ for $i=1,\ldots, 2\ell$ and that $\sum_{i=1}^{2\ell} \varphi_i \le (2\ell-2) \pi$ since $P$ is a $2\ell$-gon. We now obtain by \eqref{psi-phi} \begin{align*} \sum_{i=1}^{2\ell} \theta_i' = \sum_{i=1}^{2\ell} (\psi_i^1 + \psi_i^2) \le (2 \ell - 2)\pi + c \sum_{i=1}^\ell \Delta(z_{i,j,0}). \end{align*} The assertion then follows by taking the sum over all $j=1,\ldots,m$. It remains to confirm \eqref{psi-phi}. Fix $i=1,3,\ldots,2\ell-1$ and let $N_{i+1}$ be the plane containing the points $v_i, v_{i+1}$, and $v_{i+2}$. By $d_{i+1}$ we denote the distance of $x_0$ from $N_{i+1}$ and by $n'_{i+1}$ the orthogonal projection of the vector $n_{i+1}$ onto $N_{i+1}$. Note that $d_{i+1} \le \delta$ for $\delta$ small, depending only on the choice of $\eta$, and that $|n_{i+1}'| = |n_{i+1}| + {\rm O}(d_{i+1}^2)$. The pairs of segments $v_{i+2} - v_{i+1}$, $n'_{i+1}$ and $v_{i}-v_{i+1}$, $n_{i+1}'$ enclose two angles, denoted by $\hat{\psi}_{i+1}^1$ and $\hat{\psi}_{i}^2$, respectively, so that $\varphi_{i+1} = \hat{\psi}_{i+1}^1 + \hat{\psi}^2_{i}$. Observe that $\hat{\psi}_{i+1}^1$ and $\hat{\psi}_{i}^2$ are the projections of $\psi_{i+1}^1$, $\psi_{i}^2$, respectively, onto $N_{i+1}$. For notational convenience suppose $(v_{i+2} - v_{i+1}) \cdot n_{i+1}'>0$ and $(v_{i+2} - v_{i+1}) \cdot n_{i+1}>0$, which holds after possibly changing the signs of the vectors. Using that $(v_{i+2} - v_{i+1}) \cdot (n_{i+1} - n_{i+1}') = 0$ and recalling that $d_{i+1}$ is small, we calculate by a Taylor expansion \begin{align*} \hat{\psi}_{i+1}^1 & = \arccos\Big( \frac{(v_{i+2} - v_{i+1}) \cdot n_{i+1}'}{|v_{i+2} - v_{i+1}||n_{i+1}'|} \Big) = \arccos\Big( \frac{(v_{i+2} - v_{i+1}) \cdot n_{i+1}}{|v_{i+2} - v_{i+1}|(|n_{i+1}| + {\rm O}(d_{i+1}^2))} \Big) \\& = \psi_{i+1}^1 + {\rm O}(d_{i+1}^2), \end{align*} where ${\rm O}(\cdot)$ is universal. \RRR Likewise, we have $\hat{\psi}_i^2 = \psi_i^2 + {\rm O}(d_{i+1}^2)$.
Since $\varphi_{i+1} = \hat{\psi}_{i+1}^1 + \hat{\psi}^2_{i}$, \EEE to conclude \eqref{psi-phi-a}, it therefore remains to show \begin{align}\label{di+1} d^2_{i+1} \le c\big(\Delta(z_{\frac{i+1}{2},j,0}) + \Delta(z_{\frac{i+3}{2},j,0}) \big) \end{align} for a universal constant $c>0$. To see this, we first note that we have $d_{i+1} = 0 $ whenever $\Delta(z_{\frac{i+1}{2},j,0}) + \Delta(z_{\frac{i+3}{2},j,0}) = 0$. Indeed, if $\Delta(z_{\frac{i+1}{2},j,0}) + \Delta(z_{\frac{i+3}{2},j,0}) = 0$, the high symmetry of the atoms in the cells with centers $z_{\frac{i+1}{2},j,0}$ and $z_{\frac{i+3}{2},j,0}$ (cf. \eqref{delta}) implies that the three normal vectors $n_i$, $n_{i+1}$, and $n_{i+2}$ are coplanar. Thus, $x_0$ is contained in $N_{i+1}$ and therefore $d_{i+1} =0$. \BBB Note that $d^2_{i+1}$, $\Delta(z_{\frac{i+1}{2},j,0})$, and $\Delta(z_{\frac{i+3}{2},j,0})$ are functions of the positions of the atoms contained in the adjacent cells with center $z_{\frac{i+1}{2},j,0}, z_{\frac{i+3}{2},j,0}$, denoted by $\tilde{\boldsymbol{y}}=(\tilde{y}_1,\ldots,\tilde{y}_{14}) \in \mathbb{R}^{3 \times 14}$. By \eqref{delta} we find that $\Delta(z_{\frac{i+1}{2},j,0}) + \Delta(z_{\frac{i+3}{2},j,0}) = (\tilde{\boldsymbol{y}} - {\boldsymbol{y}}^0)^T \mathcal{Q} (\tilde{\boldsymbol{y}} - {\boldsymbol{y}}^0) $ is quadratic with $\mathcal{Q}\in \mathbb{R}^{42 \times 42}$, where ${\boldsymbol{y}}^0$ denotes the atomic positions of \RRR $\mathcal{F}' \in \mathscr{F}(\mu)$. \EEE \BBB Moreover, the fact that $d^2_{i+1}$ is smooth as a function in $\tilde{\boldsymbol{y}}$, a Taylor expansion, and $d_{i+1} \le \delta $ yield $d^2_{i+1} \le C |\tilde{\boldsymbol{y}} - {\boldsymbol{y}}^0|^2$ for a universal constant $C>0$. Now \eqref{di+1} follows from the property that \EEE $d_{i+1} = 0 $ whenever $\Delta(z_{\frac{i+1}{2},j,0}) + \Delta(z_{\frac{i+3}{2},j,0}) = 0$. The second estimate \eqref{psi-phi-b} can be shown along similar lines. This concludes the proof. 
\end{proof} \section{Properties of the reduced energy: Proof of Lemma \ref{lemma: sym-energy}, Proposition \ref{th: mainenergy}, and Proposition \ref{th: main2}}\label{sec: reduced-energy} In this section we investigate the properties of the symmetric energy and the reduced energy as introduced in \eqref{symmetric-cell} and \eqref{red}, respectively. \subsection{Proof of Lemma \ref{lemma: sym-energy}} We {\RRR start with} the relation of the cell energy \eqref{eq: cell} and the symmetric energy \eqref{symmetric-cell}. \begin{proof}[Proof of Lemma \ref{lemma: sym-energy}] {\BBB In the proof we let $\lambda=\lambda_2$}. Given the cell energy, the symmetric energy, and the constraints \eqref{sym-assumption}-\eqref{eq: constraint2}, we observe that it suffices to show \begin{align}\label{v_2--lambda_3} v_2(\lambda_1) + v_2(\lambda_3) \ge 2{v}_2 \big(\widetilde\mu/2 +2\lambda\cos\alpha_i \big) -c_0 \ell^{-4} (\gamma_1 - \gamma_2)^2 \ \ \ \text{for} \ \ \ i=1,2 \end{align} for a constant $c_0$ only depending on $v_2$ and $v_3$. 
First, with the notation of \eqref{sym-assumption}, particularly recalling $\lambda_3 = | x_8 - x_2| = |2(z^{\rm dual}_{i,j,k} - x_2)|$, we see $$ \lambda^2_3 = (\widetilde\mu - \lambda_4)^2 + 4|(x_2 - z^{\rm dual}_{i,j,k}) \cdot e_2|^2 + 4|(x_2 - z^{\rm dual}_{i,j,k}) \cdot e_3|^2.$$ Since in the special case $\gamma_1 = \gamma_2$ the points $x_1,x_2,z^{\rm dual}_{i,j,k}$ are contained in one line and thus the latter two terms vanish, we obtain by a Taylor expansion $\lambda_3 = \widetilde\mu - \lambda_4+ {\rm O}((\gamma_1 - \gamma_2)^2)$, which together with \eqref{lambda4} gives $$ \lambda_1 + \lambda_3 = \widetilde\mu + 2\lambda\cos\alpha_1 + {\rm O}((\gamma_1 - \gamma_2)^2).$$ By a similar argument, interchanging the roles of $\lambda_1$ and $\lambda_3$, we also get $$\lambda_1 + \lambda_3 = \widetilde\mu + 2\lambda\cos\alpha_2 + {\rm O}((\gamma_1 - \gamma_2)^2).$$ \RRR Recall that $|\lambda_1 - 1| + |\lambda_3 - 1| \le \ell^{-4}$ and $|\gamma_1 - \gamma_2| \le \ell^{-2}$ by assumption. \EEE Then by the convexity of $v_2$ in a neighborhood of $1$ and a Taylor expansion we derive \begin{align*} v_2(\lambda_1) + v_2(\lambda_3) &\ge 2v_2(\widetilde\mu/ 2 + \lambda\cos\alpha_i + {\rm O}((\gamma_1 - \gamma_2)^2)) \\ & \ge 2v_2(\widetilde\mu/ 2 + \lambda\cos\alpha_i) - C|v'_2(\widetilde\mu/2 + \lambda\cos\alpha_i)| (\gamma_1 - \gamma_2)^2 - C(\gamma_1 - \gamma_2)^4 \end{align*} for $i=1,2$. We recall that $|v'_2(\widetilde\mu/2 + \lambda\cos\alpha_i)| = {\rm O}(\ell^{-4})$ since $|\lambda_1 - 1| + |\lambda_3 - 1| + |\gamma_1 - \gamma_2|^2 \le 2\ell^{-4}$, and $v_2$ is smooth and attains its minimum at $1$. Moreover, observe that by $|\gamma_1 - \gamma_2| \le \ell^{-2}$ we get $|\gamma_1 - \gamma_2|^4 \le \ell^{-4}|\gamma_1 - \gamma_2|^2$. This concludes the proof of \eqref{v_2--lambda_3}. \end{proof} \subsection{Convexity of the reduced energy} Let us now concentrate on the symmetric energy $E_{\mu,\gamma_1,\gamma_2}^{{\rm sym}}$ introduced in \eqref{symmetric-cell}.
We recall the definition of the angle $\beta=\beta(\alpha,\gamma) = 2\arcsin\left(\sin\alpha\sin\frac{\gamma}{2}\right)$ in \eqref{eq: constraint2} and for later use we note that the function $\beta$ is smooth on $[\frac{1}{2}\pi,\frac{3}{4}\pi] \times [\frac{3}{4}\pi,\pi]$ and satisfies \begin{subequations}\label{zigder2} \begin{align} \partial_\alpha\beta(2\pi/3,\pi)&=-2,\quad \partial^2_{\alpha\alpha}\beta(2\pi/3,\pi)=0, \quad \partial_\gamma\beta(2\pi/3,\pi)=0, \label{zigder2-a}\\ \partial^2_{\gamma\gamma}\beta(2\pi/3,\pi)&=-\sqrt{3}/2, \quad \partial^2_{\alpha\gamma} \beta(2\pi/3,\pi)=0. \label{zigder2-b} \end{align} \end{subequations} More precisely, a Taylor expansion also shows \begin{equation}\label{zigder3} \lim_{\ell \to \infty} \ell\partial_\gamma\beta(2\pi/3,\gamma_\ell)= \frac{\sqrt{3}}{2} \pi, \ \ \ \ \ \ \lim_{\ell \to \infty} \ell^2\partial^2_{\alpha\alpha}\beta(2\pi/3,\gamma_\ell)= -2\sqrt{3} \pi^2, \end{equation} where $\gamma_\ell$ is as in \eqref{gamma}. For the exact expressions of the derivatives of the function $\beta$ we refer the reader to \cite[Section 4]{MMPS-new}. Recall the definition of $\alpha_\ell^{\rm us}$ in Proposition \ref{eq: old main result}. \begin{lemma}[Angles of unstretched nanotubes]\label{aus} There are $0 < c_1 < c_2$ and $\ell_0 \in \mathbb{N}$ only depending on $v_3$ such that for all $\ell \ge \ell_0$ $$ \alpha^{\rm us}_\ell, \beta(\alpha_\ell^{\rm us}, { \RRR \gamma_\ell \EEE }) \in (2\pi/3 - c_2\ell^{-2} , 2\pi/3 - c_1\ell^{-2}). $$ \end{lemma} \begin{proof} By Proposition \ref{eq: old main result} and the fact that $\alpha \mapsto \beta(\alpha,\gamma_\ell)$ is decreasing, we obtain \RRR $\alpha_\ell^{\rm us} \ge \alpha_\ell^{\rm ch}$ and \EEE $\beta(\alpha^{\rm us}_\ell,\gamma_\ell) \le \alpha^{\rm us}_\ell \le 2\pi/3$. \RRR By \cite[(11)]{MMPS} we have $2\pi/3 - \alpha_\ell^{\rm ch} = {\rm O}(\ell^{-2})$.
Moreover, in view of \eqref{gamma}, \eqref{betaz} and a Taylor expansion, we find $ \alpha^{\rm us}_\ell - \beta(\alpha^{\rm us}_\ell,\gamma_\ell) \ge C\ell^{-2}$. \RRR Summarizing, we get \begin{align}\label{sumatjunction} \RRR 2\pi/3 - \alpha^{\rm us}_\ell \le C \ell^{-2}, \ \ \ \ \ \ 2\pi - 2\alpha^{\rm us}_\ell - \beta(\alpha^{\rm us}_\ell,\gamma_\ell)\ge C\ell^{-2} \end{align} \EEE for some universal $C>0$. As $2v_3(\alpha) + v_3(\beta(\alpha,\gamma_\ell))$ is minimized at $\alpha = \alpha^{\rm us}_\ell$ (see Proposition \ref{eq: old main result}), we get $2v'_3( \alpha^{\rm us}_\ell) + v'_3(\beta( \alpha^{\rm us}_\ell,\gamma_\ell))\partial_\alpha \beta( \alpha^{\rm us}_\ell,\gamma_\ell) = 0$. Using \eqref{zigder2-a} and a Taylor expansion of \RRR $v'_3$ \EEE around $2\pi/3$, we deduce that for $\ell_0$ large enough and all $\ell \ge \ell_0$ $$ \frac{2\pi/3 - \alpha^{\rm us}_\ell}{ 2\pi/3 - \beta( \alpha^{\rm us}_\ell,\gamma_\ell)} \in [C',1]$$ for a constant $0 < C'<1$ only depending on $v_3$. This together with \eqref{sumatjunction} concludes the proof. \end{proof} Recall the minimization problem \eqref{red} for the symmetric energy introduced in \eqref{symmetric-cell}. We proceed with the identification of the minimizers of \eqref{red}. \begin{proposition}[Existence and uniqueness of minimizers]\label{prop1} There exists $\delta>0$ depending only on $v_2$, $v_3$ such that, for any fixed $\mu\in[3-\delta, 3+\delta]$ and $\gamma = (\gamma_1,\gamma_2)\in [\pi-\delta,\pi]^2$, the minimization problem \eqref{red} has a unique solution $(\lambda^*(\mu,\gamma),\alpha_1^*(\mu,\gamma), \alpha_2^*(\mu,\gamma))$, which satisfies \begin{align}\label{eq: firstorder-opt} \nabla E_{\mu,\gamma_1,\gamma_2}^{{\rm sym}}(\lambda^*(\mu,\gamma),\alpha_1^*(\mu,\gamma), \alpha_2^*(\mu,\gamma)) = 0, \end{align} where $\nabla$ denotes the derivative with respect to $(\lambda, \alpha_1,\alpha_2)$. 
\end{proposition} \begin{proof} We start the proof with a direct computation of the derivatives. Replace $E_{\mu,\gamma_1,\gamma_2}^{{\rm sym}}$ by $\tilde{E}$ for notational convenience. We obtain \begin{subequations}\label{first order} \begin{align}\displaystyle {\partial_{\lambda}}\tilde{E}(\lambda, \alpha_1, \alpha_2)&= 2v_2'(\lambda) + \sum_{i=1,2} \Big( \frac{1}{2} \cos\alpha_i \, v_2'(\mu/2 + \lambda\cos\alpha_i) \Big) \label{first order-a} \\\displaystyle {\partial_{\alpha_i}}\tilde{E}(\lambda,\alpha_1, \alpha_2)&=- \frac{1}{2}\lambda \sin\alpha_i \, v_2'(\mu/2 + \lambda\cos\alpha_i) \notag \\ & \ \ \ + v'_3(\beta(\alpha_i,\gamma_i))\partial_\alpha\beta(\alpha_i,\gamma_i) + 2 v'_3(\alpha_i), \ \ \ \ \ \ i =1,2.\label{first order-b} \end{align} \end{subequations} Moreover, for $i=1,2$ \[\begin{aligned} {\partial^2_{\lambda\lambda}}\tilde{E}(\lambda,\alpha_1,\alpha_2)&= 2v_2''(\lambda) + \sum_{j=1,2} \Big( \frac{1}{2}\cos^2\alpha_j \ v_2''(\mu/2 + \lambda\cos\alpha_j) \Big),\\ {\partial^2_{\alpha_i\alpha_i}}\tilde{E}(\lambda,\alpha_1,\alpha_2)&= \frac{1}{2}\lambda^2 \sin^2\alpha_i \, v_2''(\mu/2 + \lambda\cos\alpha_i) - \frac{1}{2}\lambda \cos\alpha_i \, v_2'(\mu/2 + \lambda\cos\alpha_i) + 2 v_3''(\alpha_i)\\ & \ \ \ + v''_3(\beta(\alpha_i,\gamma_i))\,(\partial_\alpha\beta(\alpha_i,\gamma_i))^2 + v'_3(\beta(\alpha_i,\gamma_i))\partial^2_{\alpha\alpha}\beta(\alpha_i,\gamma_i), \\ {\partial^2_{\lambda\alpha_i}}\tilde{E}(\lambda,\alpha_1,\alpha_2)&= -\frac{1}{2}\sin\alpha_i \, v_2'(\mu/2 + \lambda\cos\alpha_i) - \frac{1}{2}\lambda \sin\alpha_i \cos\alpha_i \, v_2''(\mu/2 + \lambda\cos\alpha_i ), \\ {\partial^2_{\alpha_1\alpha_2}}\tilde{E}(\lambda,\alpha_1,\alpha_2)&= 0. \end{aligned} \] For notational convenience we define $s_{\rm ref} :=(1,2\pi/3,2\pi/3)$. Recall that $\partial_\alpha \beta(2\pi/3,\pi)= -2 $ by \eqref{zigder2-a}, $\beta(2\pi/3,\pi)=2\pi/3$ by \eqref{eq: constraint2}, $v_3'(2\pi/3) =0$, $\cos(2\pi/3) = - 1/2$, $\sin(2\pi/3) =\sqrt{3}/2$. 
At the planar reference configuration $\mu = 3$, $\gamma_1=\gamma_2 = \pi$, $\alpha_1= \alpha_2=2\pi/3$, $\lambda = 1$ the derivative then reads after some computation \begin{align*} {\partial^2_{\lambda\lambda}}E_{3,\pi,\pi}^{{\rm sym}}(s_{\rm ref}) &= \frac{9}{4} v_2''(1), \ \ \ \ \ \ {\partial^2_{\alpha_i\alpha_i}}E_{3,\pi,\pi}^{{\rm sym}}(s_{\rm ref})=\frac{3}{8} v_2''(1) + 6 v''_3(2\pi/3), \ \ i=1,2,\\ {\partial^2_{\lambda\alpha_i}}E_{3,\pi,\pi}^{{\rm sym}}(s_{\rm ref}) &= \frac{\sqrt{3}}{8}v_2''(1), \ \ i=1,2,\ \ \ \ \ {\partial^2_{\alpha_1\alpha_2}}E_{3,\pi,\pi}^{{\rm sym}}(s_{\rm ref})= 0. \end{align*} We shall check the positivity of the Hessian matrix in a neighborhood of the reference configuration. Since \begin{align*} {\rm det} \Big( D^2_{\alpha_1\alpha_2}E_{3,\pi,\pi}^{{\rm sym}}(s_{\rm ref}) \Big) &= \big({\partial^2_{\alpha_1\alpha_1}}E_{3,\pi,\pi}^{{\rm sym}}(s_{\rm ref})\big)^2, \\ {\rm det} \big(D^2E_{3,\pi,\pi}^{{\rm sym}}(s_{\rm ref})\big) &= \big({\partial^2_{\alpha_1\alpha_1}}E_{3,\pi,\pi}^{{\rm sym}}(s_{\rm ref})\big)^2 {\partial^2_{\lambda\lambda}}E_{3,\pi,\pi}^{{\rm sym}}(s_{\rm ref}) \\ &- 2\big({\partial^2_{\lambda\alpha_1}}E_{3,\pi,\pi}^{{\rm sym}}(s_{\rm ref})\big)^2 {\partial^2_{\alpha_1\alpha_1}}E_{3,\pi,\pi}^{{\rm sym}}(s_{\rm ref}) \end{align*} are positive, the principal minors of the Hessian matrix $D^2E_{3,\pi,\pi}^{{\rm sym}}(1,2\pi/3,2\pi/3)$ are positive. 
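For the reader's convenience, we make the positivity explicit; this is a direct computation from the values above. Writing $a = v_2''(1)>0$ and $b = v_3''(2\pi/3)>0$, we find \begin{align*} {\rm det} \big(D^2E_{3,\pi,\pi}^{{\rm sym}}(s_{\rm ref})\big) = \Big( \frac{3}{8}a + 6b \Big)\Big( \frac{9}{4}a \Big( \frac{3}{8}a + 6b \Big) - 2\cdot\frac{3}{64}a^2\Big) = \Big( \frac{3}{8}a + 6b \Big)\Big( \frac{3}{4}a^2 + \frac{27}{2}ab\Big) > 0, \end{align*} and the positivity of the remaining principal minors is immediate from $a,b>0$.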
Due to the smoothness of the potentials $v_2$, $v_3$ and the mapping $(\alpha,\gamma) \mapsto \beta(\alpha,\gamma)$, we get that for $\delta'>0$ sufficiently small the principal minors of the Hessian matrix $D^2E_{\mu,\gamma_1,\gamma_2}^{{\rm sym}}(\lambda,\alpha_1,\alpha_2)$ are positive for all $(\lambda,\alpha_1,\alpha_2) \in D_{\delta'}$ and for all $\mu\in[3-\delta',3+\delta']$, $(\gamma_1,\gamma_2) \in [\pi-\delta',\pi]^2$, where $$D_{\delta'} := [1-\delta',1+\delta'] \times [2\pi/3-\delta',2\pi/3 + \delta']^2.$$ Since we have shown that $E_{\mu,\gamma_1,\gamma_2}^{{\rm sym}}$ is strictly convex on $D_{\delta'}$, it follows that it has a unique minimizer $(\lambda^*(\mu,\gamma),\alpha_1^*(\mu,\gamma), \alpha_2^*(\mu,\gamma) )$ for all $\mu\in[3-\delta',3+\delta']$ and $\gamma = (\gamma_1,\gamma_2) \in [\pi-\delta',\pi]^2$. Moreover, a continuity argument shows that \begin{align}\label{eq:continuity} (\lambda^*(\mu,\gamma),\alpha_1^*(\mu,\gamma), \alpha_2^*(\mu,\gamma)) & \to (\lambda^*(3,\pi,\pi), \alpha_1^*(3,\pi,\pi), \alpha_2^*(3,\pi,\pi)) = (1,2\pi/3, 2\pi/3) \end{align} as $\gamma \to (\pi,\pi)$ and $\mu \to 3$. \BBB Recalling \eqref{symmetric-cell} and the fact that $v_2$ and $v_3$ attain their minimum exactly at $1$ and $2\pi/3$, respectively, we find $\inf_{(\lambda,\alpha_1,\alpha_2)\notin D_{\delta'}}E_{\mu,\gamma_1,\gamma_2}^{{\rm sym}}(\lambda,\alpha_1,\alpha_2) > - 3$. On the other hand, by \eqref{eq: constraint2}, \eqref{symmetric-cell}, and \eqref{eq:continuity} we get $E_{\mu,\gamma_1,\gamma_2}^{{\rm sym}}(\lambda^*(\mu,\gamma),\alpha_1^*(\mu,\gamma), \alpha_2^*(\mu,\gamma)) \to -3$ as $\gamma \to (\pi,\pi)$ and $\mu \to 3$. This shows that for all $\mu\in[3-\delta'',3+\delta'']$ and $\gamma \in [\pi-\delta'',\pi]^2$, for some small $\delta''>0$, the triple $(\lambda^*(\mu,\gamma),\alpha_1^*(\mu,\gamma), \alpha_2^*(\mu,\gamma))$ is the unique solution of the minimization problem \eqref{red}. 
Moreover, if $\delta''>0$ is chosen small enough, the triple \EEE lies in the interior of $D_{\delta'}$ and the first order optimality conditions \eqref{eq: firstorder-opt} follow. We conclude the proof by setting $\delta = \min\lbrace \delta',\delta''\rbrace$. \end{proof} We now study convexity properties of the reduced energy $E_{\rm red}$ defined in \eqref{red}. Recall the definition of $\gamma_\ell$ in \eqref{gamma} and the \RRR definition of $\mu^{\rm us}_\ell$ in \eqref{eq:muellneu}. \EEE \begin{proposition}[Convexity of reduced energy]\label{convexenergy} There exists $\ell_0 \in \mathbb{N}$ and for each $\ell \ge \ell_0$ there exits $\varepsilon=\varepsilon(\ell)>0$ such that $E_{\rm red}$ is strictly convex on $D^\ell_\varepsilon :=[\mu_\ell^{\rm us} - \varepsilon, \mu_\ell^{\rm us}+ \varepsilon] \times [\gamma_\ell - \varepsilon,\gamma_\ell+\varepsilon]^2$. Moreover, there is $c_0'>0$ depending only on $v_2$ and $v_3$ such that for all $\ell \ge \ell_0$ and $(\mu,\gamma_1,\gamma_2) \in D_\varepsilon^\ell$ \begin{align}\label{eq:convexenergy} E_{\rm red}(\mu,\gamma_1,\gamma_2) = E_{\rm red}(\mu,\gamma_2,\gamma_1) \ge E_{\rm red} \Big( \mu, \frac{\gamma_1 + \gamma_2}{2},\frac{\gamma_1 + \gamma_2}{2} \Big) + c_0' \ell^{-2} (\gamma_1-\gamma_2)^2. \end{align} \end{proposition} \begin{proof} Choosing $\ell$ sufficiently large and $\varepsilon>0$ small we can suppose that $D^\ell_\varepsilon \subset [3- \delta, 3+ \delta] \times [\pi- \delta,\pi]^2$ with $\delta$ from Proposition \ref{prop1} since $\mu^{\rm us}_\ell = 2-2\cos\alpha^{\rm us}_\ell \to 3$ as $\ell \to\infty$. \RRR Then \eqref{eq: firstorder-opt} holds for $(\mu,\gamma_1,\gamma_2) \in D^\ell_\varepsilon$. \EEE We drop the brackets $(\mu,\gamma_1,\gamma_2)$ and indicate the unique solution at $(\mu,\gamma_1,\gamma_2)$ by $(\lambda^*,\alpha_1^*, \alpha_2^*)$ for notational convenience. 
Taking the partial derivatives and making use of the first order optimality conditions \eqref{eq: firstorder-opt}, we get \begin{align}\label{derivative1} \partial_\mu E_{\rm red}(\mu,\gamma_1,\gamma_2)&=\frac d{d\mu}E_{\mu,\gamma_1,\gamma_2}^{{\rm sym}}(\lambda^*,\alpha_1^*, \alpha_2^*)\notag\\ &=\frac{\partial E_{\mu,\gamma_1,\gamma_2}^{{\rm sym}}}{\partial \mu}(\lambda^*,\alpha_1^*, \alpha_2^*) + \nabla E_{\mu,\gamma_1,\gamma_2}^{{\rm sym}}(\lambda^*,\alpha_1^*, \alpha_2^*)\cdot (\partial_\mu\lambda^*,\partial_\mu\alpha_1^*, \partial_\mu\alpha_2^*)\notag\\ &=\frac{\partial E_{\mu,\gamma_1,\gamma_2}^{{\rm sym}}}{\partial \mu}(\lambda^*,\alpha_1^*, \alpha_2^*) =\sum_{j=1,2}\frac{1}{4}{v}'_2 \big(\mu/2+\lambda^*\cos\alpha^*_j \big), \end{align} where $\nabla$ denotes the derivative with respect to $(\lambda,\alpha_1,\alpha_2)$. Likewise, we get for $i=1,2$ \begin{equation}\label{derivative1.2} \begin{aligned} \partial_{\gamma_i} E_{\rm red}(\mu,\gamma_1,\gamma_2) &=\frac{\partial E_{\mu,\gamma_1,\gamma_2}^{{\rm sym}}}{\partial \gamma_i}(\lambda^*,\alpha_1^*, \alpha_2^*) = v'_3(\beta(\alpha_i^*,\gamma_i))\, \partial_\gamma \beta(\alpha_i^*,\gamma_i). 
\end{aligned} \end{equation} Next we compute the second derivatives and obtain \begin{align} \partial^2_{\mu\mu} E_{\rm red}(\mu,\gamma_1,\gamma_2)&=\sum_{j=1,2}\frac{1}{4} {v}''_2 \big(\mu/2 +\lambda^*\cos\alpha^*_j \big) \, w_{j,\mu}(\mu,\gamma_1,\gamma_2), \label{convexity7-a}\\ \partial^2_{\gamma_i\gamma_i} E_{\rm red}(\mu,\gamma_1,\gamma_2)&= v'_3(\beta(\alpha_i^*,\gamma_i))\, \big( \partial^2_{\gamma\gamma} \beta(\alpha_i^*,\gamma_i) + \partial^2_{\gamma\alpha} \beta(\alpha_i^*,\gamma_i) \,\partial_{\gamma_i} \alpha^*_i \big)\notag\\ & \hspace{-10mm}+ v''_3(\beta(\alpha_i^*,\gamma_i))\, \partial_\gamma \beta(\alpha_i^*,\gamma_i) \cdot \big(\partial_\gamma\beta(\alpha_i^*,\gamma_i) + \partial_\alpha\beta(\alpha_i^*,\gamma_i)\, \partial_{\gamma_i} \alpha_i^* \big), \ i=1,2,\label{convexity7-b}\\ \partial^2_{\mu\gamma_i} E_{\rm red}(\mu,\gamma_1,\gamma_2)&= \sum_{j=1,2}\frac{1}{4} {v}''_2 \big(\mu/2 + \lambda^*\cos\alpha^*_j \big) \, w_{j,\gamma_i}(\mu,\gamma_1,\gamma_2), \ \ i=1,2,\label{convexity7-c}\\ \partial^2_{\gamma_1\gamma_2} E_{\rm red}(\mu,\gamma_1,\gamma_2) & = v'_3(\beta(\alpha_1^*,\gamma_1))\, \partial^2_{\gamma\alpha} \beta(\alpha_1^*,\gamma_1) \,\partial_{\gamma_2} \alpha^*_1 \notag \\ & \ \ \ + v''_3(\beta(\alpha_1^*,\gamma_1))\, \partial_\gamma \beta(\alpha_1^*,\gamma_1) \, \partial_\alpha\beta(\alpha_1^*,\gamma_1)\, \partial_{\gamma_2} \alpha_1^*,\label{convexity7-d} \end{align} where for brevity we have introduced \begin{subequations}\label{convexity2} \begin{align} w_{j,\mu}(\mu,\gamma_1,\gamma_2) & = 1/2 + \partial_\mu\lambda^*\cos\alpha^*_j - \lambda^*\sin\alpha^*_j\,\partial_\mu \alpha^*_j, \ \ \ \ j =1,2, \label{convexity2-a}\\ w_{j, \gamma_i}(\mu,\gamma_1,\gamma_2) & = \partial_{\gamma_i}\lambda^*\cos\alpha^*_j - \lambda^*\sin\alpha^*_j\, \partial_{\gamma_i} \alpha^*_j, \ \ \ \ i,j=1,2. 
\label{convexity2-b} \end{align} \end{subequations} We now exploit the identity $\nabla E^{\rm sym}_{\mu,\gamma_1,\gamma_2}(\lambda^*,\alpha_1^*, \alpha_2^*) =0$: differentiating \eqref{first order} with respect to $\mu$, $\gamma_1$ or $\gamma_2$, respectively, we obtain \begin{align} 0&=2v_2''(\lambda^*) \, \partial_X \lambda^* + \sum_{j=1,2} \Big( -\frac{1}{2}\sin\alpha^*_j \, \partial_X \alpha^*_j \, v_2'(\mu/2 + \lambda^*\cos\alpha^*_j) \Big) \notag \\ & \ \ \ + \sum_{j=1,2} \Big( \frac{1}{2}\cos\alpha^*_j \, v_2''(\mu/2 + \lambda^*\cos\alpha^*_j) \, w_{j,X}(\mu,\gamma_1,\gamma_2) \Big), \label{eq: long-est-a} \\ 0 & = -\frac{1}{2}v_2'(\mu/2 + \lambda^*\cos\alpha^*_j) \Big( \partial_X\lambda^* \sin\alpha^*_j + \lambda^* \cos\alpha^*_j \partial_X \alpha_j^* \Big)\notag \\ & \ \ \ -\frac{1}{2}\lambda^* \, \sin\alpha^*_j \, v_2''(\mu/2 + \lambda^*\cos\alpha^*_j)\, w_{j,X}(\mu,\gamma_1,\gamma_2) + v'_3(\beta(\alpha^*_j,\gamma_j))\partial^2_{\alpha\alpha}\beta(\alpha^*_j,\gamma_j)\partial_X\, \alpha^*_j \notag\\ & \ \ \ + v''_3(\beta(\alpha^*_j,\gamma_j))\big(\partial_\alpha \beta(\alpha^*_j,\gamma_j)\big)^2\, \partial_X \alpha^*_j + 2v''_3(\alpha^*_j) \, \partial_X \alpha^*_j + z_{j,X}(\mu,\gamma_1,\gamma_2), \ \ \ j=1,2, \label{eq: long-est-b} \end{align} where $X \in \lbrace \mu, \gamma_1, \gamma_2 \rbrace$ and where we have defined for brevity \begin{align*} z_{j,\gamma_j}(\mu,\gamma_1,\gamma_2) &= v'_3(\beta(\alpha^*_j,\gamma_j))\partial_{\alpha\gamma}\beta(\alpha^*_j,\gamma_j) + v''_3(\beta(\alpha^*_j,\gamma_j))\partial_\alpha\beta(\alpha^*_j,\gamma_j) \partial_\gamma \beta(\alpha^*_j,\gamma_j),\\ z_{j,\gamma_i}(\mu,\gamma_1,\gamma_2) &= z_{j,\mu}(\mu,\gamma_1,\gamma_2) = 0, \ \ \ i \neq j. \end{align*} \RRR For brevity let $t_{\rm ref}^\ell := (\mu^{\rm us}_\ell,\gamma_\ell,\gamma_\ell)$ and $t_{\rm ref} := (3,\pi,\pi)$. Observe that $t_{\rm ref}^\ell \to t_{\rm ref}$ as $\ell \to \infty$ by \eqref{gamma}, \eqref{eq:muellneu}, and Lemma \ref{aus}. 
Moreover, by \eqref{eq:continuity} we get that the unique solution of the problem \eqref{red} corresponding to $t_{\rm ref}^\ell$ converges to $(1,2\pi/3,2\pi/3)$, in particular $\alpha^*_j(t_{\rm ref}^\ell) \to 2\pi/3$ for $j=1,2$. We also recall $\beta(\alpha^*_j(t_{\rm ref}^\ell),\gamma_\ell) \to 2\pi/3$ for $j=1,2$ (see \eqref{eq: constraint2}). \EEE Using $v_2'(1) = v_3'(2\pi/3) = 0$, $\cos(2\pi/3) = -1/2$, $\sin(2\pi/3)= \sqrt{3}/2$ and \eqref{zigder2} we then deduce from \eqref{eq: long-est-a}-\eqref{eq: long-est-b} \begin{subequations}\label{convexity1} \begin{align} 0 &= 2 v_2''(1)\, \partial_X \lambda^*({\RRR t_{\rm ref} \EEE})- \frac{1}{4} v_2''(1)\sum_{j=1,2}w_{j,X}({\RRR t_{\rm ref} \EEE}), \label{convexity1-a}\\ 0 & = -v_2''(1) \, w_{j,X}({\RRR t_{\rm ref} \EEE}) + 8\sqrt{3} v_3''(2\pi/3) \, \partial_X\alpha^*_j({\RRR t_{\rm ref} \EEE}), \ \ \ j=1,2, \label{convexity1-b} \end{align} \end{subequations} as $\ell \to \infty$ , where $X \in \lbrace \mu, \gamma_1, \gamma_2 \rbrace$. Inserting the identities into \eqref{convexity2}, we obtain, after some elementary but tedious calculations, \begin{subequations}\label{convexity3} \begin{align} &w_{1,\mu}({\RRR t_{\rm ref} \EEE }) = w_{2,\mu}({\RRR t_{\rm ref} \EEE }) = 4/K, \ \ \ w_{1,\gamma_i}({\RRR t_{\rm ref} \EEE }) = w_{2,\gamma_i}({\RRR t_{\rm ref} \EEE })=0, \ \ i=1,2,\\ & \partial_\mu \lambda^*({\RRR t_{\rm ref} \EEE }) = 1/K, \ \ \ \partial_\mu \alpha_1^*({\RRR t_{\rm ref} \EEE }) =\partial_\mu \alpha_2^*({\RRR t_{\rm ref} \EEE })= v_2''(1) / (2\sqrt{3}K v_3''(2\pi/3)), \label{convexity3-b} \end{align} \end{subequations} where $K:= 9 + v_2''(1)/(2 v_3''(2\pi/3))$. In particular, the last two equalities of the first line together with \eqref{convexity1} yield that $\partial_{\gamma_i}\lambda^*$, $\partial_{\gamma_i} \alpha^*_1$, and $\partial_{\gamma_i} \alpha^*_2$ vanish at ${\RRR t_{\rm ref} \EEE }$. 
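Let us briefly indicate how the values in \eqref{convexity3} arise from \eqref{convexity1}. By symmetry we have $w_{1,\mu}(t_{\rm ref}) = w_{2,\mu}(t_{\rm ref}) =: w$ and $\partial_\mu\alpha_1^*(t_{\rm ref}) = \partial_\mu\alpha_2^*(t_{\rm ref}) =: \partial_\mu\alpha$, and \eqref{convexity2-a} evaluated at $t_{\rm ref}$ reads $w = \frac{1}{2} - \frac{1}{2}\partial_\mu\lambda^*(t_{\rm ref}) - \frac{\sqrt{3}}{2}\partial_\mu\alpha$. On the other hand, \eqref{convexity1} with $X = \mu$ gives $\partial_\mu\lambda^*(t_{\rm ref}) = w/4$ and $\partial_\mu\alpha = v_2''(1)w/(8\sqrt{3}v_3''(2\pi/3))$. Combining the three identities we obtain $$ w\Big(\frac{9}{8} + \frac{v_2''(1)}{16v_3''(2\pi/3)}\Big) = \frac{1}{2}, $$ i.e. $w = 4/K$ with $K = 9 + v_2''(1)/(2v_3''(2\pi/3))$, and then also $\partial_\mu\lambda^*(t_{\rm ref}) = 1/K$ and $\partial_\mu\alpha = v_2''(1)/(2\sqrt{3}Kv_3''(2\pi/3))$.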
Thus, by a Taylor expansion in terms of $1/\ell$ the limits $w_{j,\gamma_i}^\infty := \lim_{\ell \to \infty} \ell w_{j, \gamma_i}({\RRR t^\ell_{\rm ref} \EEE })$, $\partial_{\gamma_i}\lambda^\infty := \lim_{\ell \to \infty} \ell \partial_{\gamma_i} \lambda^*({\RRR t^\ell_{\rm ref} \EEE })$, and $\partial_{\gamma_i} \alpha^\infty_j := \lim_{\ell \to \infty} \ell \partial_{\gamma_i} \alpha^*_j({\RRR t^\ell_{\rm ref} \EEE })$ for $i,j=1,2$ exist {\BBB and are finite}. By Lemma \ref{aus} and the fact that $v_3$ is smooth with minimum at $2\pi/3$ we note that one has $|v'_3(\beta(\alpha^{\rm us}_\ell,\gamma_\ell))| \le C\ell^{-2}$ for a constant only depending on $v_3$. Consequently, multiplying the estimates in \eqref{eq: long-est-a}-\eqref{eq: long-est-b} by $\ell$ and letting $\ell \to \infty$ we get using \eqref{zigder2} and \eqref{zigder3} \begin{align* \begin{split} 0 & = 2 v_2''(1) \partial_{\gamma_i} \, \lambda^\infty - \frac{1}{4} v_2''(1)\sum_{j=1,2}w_{j,{\gamma_i} }^\infty, \ \ \ i=1,2,\\ 0&= - \frac{1}{4} v_2''(1)w_{j,\gamma_i}^\infty + 2\sqrt{3} v_3''(2\pi/3) \, \partial_{\gamma_i} \alpha^\infty_j - v_3''(2\pi/3)\pi \, \delta_{ij}, \ \ i,j=1,2, \end{split} \end{align*} where $\delta_{ij}$ denotes the Kronecker delta. 
As before, inserting the identities into \eqref{convexity2-b}, we obtain after some tedious calculations \begin{subequations}\label{convexity6} \begin{align} \sum_{j=1,2}w_{j,\gamma_i}^\infty & = - \frac{2\pi}{K}, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \sum_{j=1,2} \partial_{\gamma_i} \alpha_j^\infty = \frac{\pi}{2\sqrt{3}} - \frac{\pi v_2''(1)}{4\sqrt{3}K v_3''(2\pi/3)},\\ \partial_{\gamma_i} \alpha_i^\infty & = \frac{\pi}{2\sqrt{3}} - \frac{\pi v_2''(1)}{4\sqrt{3}K v_3''(2\pi/3)} - \frac{\pi}{KK^\infty}, \ \ \ \ \ \ \ \ \ \partial_{\gamma_i} \alpha_j^\infty =\frac{\pi}{KK^\infty}, \ \ i \neq j, \end{align} \end{subequations} for $i=1,2$ with $K$ as defined after \eqref{convexity3} and $K^\infty := 64\sqrt{3} v_3''(2\pi/3)/v_2''(1) + 4\sqrt{3}$. Moreover, we notice that by \eqref{zigder2-b} {\BBB and Lemma \ref{aus} there holds} $$v'_3(\beta(\alpha_\ell^{\rm us}, \gamma_\ell)) \partial^2_{\gamma\gamma} \beta(\alpha_\ell^{\rm us},\gamma_\ell) \ge 0$$ for $\ell$ sufficiently large. 
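The first identity in \eqref{convexity6} can be obtained along the same lines as \eqref{convexity3}. Writing $S_w := \sum_{j=1,2} w_{j,\gamma_i}^\infty$ and $S_\alpha := \sum_{j=1,2} \partial_{\gamma_i}\alpha_j^\infty$, the limit of \eqref{convexity2-b} gives $S_w = -\partial_{\gamma_i}\lambda^\infty - \frac{\sqrt{3}}{2}S_\alpha$, while the first equation of the system above yields $\partial_{\gamma_i}\lambda^\infty = S_w/8$ and the second equation, summed over $j=1,2$, yields $2\sqrt{3}v_3''(2\pi/3)S_\alpha = \frac{1}{4}v_2''(1)S_w + \pi v_3''(2\pi/3)$. Eliminating $\partial_{\gamma_i}\lambda^\infty$ and $S_\alpha$ leads to $S_w\big(\frac{9}{8} + \frac{v_2''(1)}{16 v_3''(2\pi/3)}\big) = -\frac{\pi}{4}$, i.e. $S_w = -2\pi/K$; the remaining values in \eqref{convexity6} follow similarly.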
With this at hand, we go back to \eqref{convexity7-a}-\eqref{convexity7-d} and derive as $\ell \to \infty$ by \eqref{zigder2}, \eqref{zigder3}, \eqref{convexity3}, and \eqref{convexity6} \begin{align} \partial^2_{\mu\mu} E_{\rm red}({\RRR t^\ell_{\rm ref} \EEE })&=\frac{2v''_2(1)}{K}+ {\rm O}(\ell^{-1}),\label{convexity-last-a}\\ \partial^2_{\gamma_i\gamma_i} E_{\rm red}({\RRR t^\ell_{\rm ref} \EEE })& \ge \ell^{-2} \Big( v''_3(2\pi/3) \frac{3}{4}\pi^2 - v''_3(2\pi/3) \,\sqrt{3}\pi \partial_{\gamma_i}\alpha^\infty_i \Big)+ {\rm O}(\ell^{-3}) \notag \\ & \RRR = \EEE \ell^{-2} v''_3(2\pi/3)\pi^2 \Big( \frac{1}{4} + \frac{v_2''(1)}{4Kv_3''(2\pi/3)} + \frac{\sqrt{3}}{KK^\infty}\Big) + {\rm O}(\ell^{-3}), \ i=1,2, \notag\\ \partial^2_{\mu\gamma_i} E_{\rm red}({\RRR t^\ell_{\rm ref} \EEE })&= - \ell^{-1}\frac{\pi v_2''(1)}{2K} + {\rm O}(\ell^{-2}), \ \ i=1,2, \notag \\ \partial^2_{\gamma_1\gamma_2} E_{\rm red}({\RRR t^\ell_{\rm ref} \EEE })& = -\ell^{-2} v''_3(2\pi/3)\, \sqrt{3}\pi \partial_{\gamma_1} \alpha_2^\infty + {\rm O}(\ell^{-3}) = -\ell^{-2} v''_3(2\pi/3)\, \frac{\sqrt{3}\pi^2}{KK^\infty} + {\rm O}(\ell^{-3}). \notag \end{align} We now check the positivity of the Hessian $D^2 E_{\rm red}$ by considering the minors $H_1 = \partial^2_{\gamma_2\gamma_2} E_{\rm red}$, $H_2 = \det (D^2_{\gamma_1\gamma_2} E_{\rm red} )$ and $H_3 = \det (D^2 E_{\rm red})$. 
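The factorization of $H_3$ used below relies on the symmetric structure of the Hessian: at $t^\ell_{\rm ref}$ we have $\partial^2_{\gamma_1\gamma_1} E_{\rm red} = \partial^2_{\gamma_2\gamma_2} E_{\rm red}$ and $\partial^2_{\mu\gamma_1} E_{\rm red} = \partial^2_{\mu\gamma_2} E_{\rm red}$, so that $D^2 E_{\rm red}$ is of the form $$ \begin{pmatrix} P & q & q \\ q & R & S \\ q & S & R \end{pmatrix}, \qquad \det\begin{pmatrix} P & q & q \\ q & R & S \\ q & S & R \end{pmatrix} = (R-S)\big(P(R+S) - 2q^2\big), $$ as one checks by expanding the determinant along the first row.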
First, we get for $\ell \in \mathbb{N}$ sufficiently large \begin{align*} &H_1({\RRR t^\ell_{\rm ref} \EEE }) \ge \ell^{-2} v''_3(2\pi/3) \frac{\pi^2}{4}>0, \ \ \ H_2({\RRR t^\ell_{\rm ref} \EEE }) \ge \ell^{-4} (v''_3(2\pi/3))^2 \pi^4 (1/4)^2 >0 \end{align*} and finally for $\ell$ large enough \begin{align*} H_3({\RRR t^\ell_{\rm ref} \EEE }) &= \Big(\partial^2_{\gamma_2\gamma_2} E_{\rm red} - \partial^2_{\gamma_1\gamma_2} E_{\rm red}\Big) \cdot \Big( \partial^2_{\mu\mu} E_{\rm red}\big(\partial^2_{\gamma_2\gamma_2} E_{\rm red} + \partial^2_{\gamma_1\gamma_2} E_{\rm red}\big) - 2\big( \partial^2_{\mu\gamma_1} E_{\rm red}\big)^2 \Big)\\ & \ge \ell^{-4} v''_3(2\pi/3) \frac{\pi^2}{4} \Big( \frac{\pi^2 v_2''(1) v_3''(2\pi/3)}{2K} + \frac{\pi^2(v_2''(1))^2}{2K^2} - 2 \frac{\pi^2(v_2''(1))^2}{4K^2}\Big)>0. \end{align*} Due to the smoothness of the potentials $v_2$, $v_3$, the mapping $(\alpha,\gamma) \mapsto \beta(\alpha,\gamma)$, and the solutions $(\lambda^*,\alpha^*_1,\alpha_2^*)$ as functions of $(\mu,\gamma_1,\gamma_2)$, for $\ell \in \mathbb{N}$ sufficiently large and $\varepsilon>0$ small (depending on $\ell$) we have $H_i(\mu,\gamma_1,\gamma_2) >0$, $i=1,2,3$, for all $(\mu,\gamma_1,\gamma_2) \in [\mu^{\rm us}_\ell - \varepsilon, \mu^{\rm us}_\ell + \varepsilon] \times [\gamma_\ell - \varepsilon, \gamma_\ell + \varepsilon]^2 $. It remains to confirm \eqref{eq:convexenergy}. The first identity is a consequence of the fact that $E^{\rm sym}_{\mu,\gamma_1,\gamma_2}$ is symmetric in $(\alpha_1,\gamma_1)$ and $(\alpha_2,\gamma_2)$. \BBB Recalling \eqref{convexity-last-a} and the fact that $D^2E_{\rm red}$ is positive definite, we can control the eigenvalues of $\ell^2 D^2E_{\rm red}$ from below and find $\ell^2 D^2E_{\rm red} \ge 8c_0' \mathbf{I} + {\rm O}(\ell^{-1})$ for some constant $c_0'$ depending only on $v_2''(1)$ and $v_3''(2\pi/3)$, where $\mathbf{I}$ denotes the identity matrix. This implies the second estimate of \eqref{eq:convexenergy}.
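As a numerical cross-check of the minor computation, one can assemble the leading-order Hessian, verify the closed form of $H_3$ against the determinant, and confirm positive definiteness. In the sketch below the values of $v_2''(1)$, $v_3''(2\pi/3)$, and $\ell$ are hypothetical samples:

```python
import numpy as np

# hypothetical curvatures and a moderate cell number (assumptions for illustration)
v2pp, v3pp, ell = 1.7, 0.9, 100
K = 9 + v2pp / (2 * v3pp)
Kinf = 64 * np.sqrt(3) * v3pp / v2pp + 4 * np.sqrt(3)

a = 2 * v2pp / K                                              # mu-mu entry
b = -np.pi * v2pp / (2 * K) / ell                             # mu-gamma_i entries
c = v3pp * np.pi**2 * (0.25 + v2pp/(4*K*v3pp) + np.sqrt(3)/(K*Kinf)) / ell**2
d = -v3pp * np.sqrt(3) * np.pi**2 / (K * Kinf) / ell**2       # gamma_1-gamma_2 entry

H = np.array([[a, b, b], [b, c, d], [b, d, c]])
# the minors used in the text, to leading order
H1 = c
H2 = c * c - d * d
H3 = (c - d) * (a * (c + d) - 2 * b * b)
assert H1 > 0 and H2 > 0 and H3 > 0
assert abs(H3 - np.linalg.det(H)) < 1e-12          # H3 is exactly det(H)
assert np.all(np.linalg.eigvalsh(H) > 0)           # positive definiteness
```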
\EEE \end{proof} \subsection{Proof of Proposition \ref{th: mainenergy} and Proposition \ref{th: main2}} We are now in a position to show the main properties of $E_{\rm red}$. \begin{proof}[Proof of Proposition \ref{th: mainenergy}] Property 2 follows directly from Proposition \ref{convexenergy} if the intervals $M^\ell, G^\ell$ are chosen appropriately depending on $\varepsilon$, with $\varepsilon$ from Proposition \ref{convexenergy}. In Proposition \ref{prop1} we have seen that for given $(\mu,\gamma_1,\gamma_2) \in M^\ell \times G^\ell \times G^\ell$ there is a unique solution $(\lambda^*,\alpha_1^*, \alpha_2^*)$ of the minimization problem \eqref{red}. In particular, if $\gamma_1 = \gamma_2$ we obtain $\alpha^* := \alpha_1^* = \alpha_2^*$ as then \eqref{red} is completely symmetric in $\alpha_1$ and $\alpha_2$. \RRR This shows Property 1. \EEE We now specifically consider the case $\gamma_1 = \gamma_2 = \gamma_\ell$ and denote the minimizer in \eqref{red} by $(\lambda^\mu,\alpha^\mu, \alpha^\mu)$. We observe that $\lambda_1^\mu := \mu/2 +\lambda^\mu \cos\alpha^\mu$, $\lambda_2^\mu := \lambda^\mu$, and $\sigma^\mu := -\lambda^\mu\cos\alpha^\mu$ satisfy the relations \eqref{eq: basic constraints} and \eqref{alphars}. Then by \eqref{basicenergy}, \eqref{symmetric-cell}, and the fact that $n= 4m\ell$ we derive \begin{align*} E_{\rm red}(\mu,\gamma_\ell,\gamma_\ell) &= 2v_2(\lambda^\mu) + {v}_2 \big(\mu/2 +\lambda^\mu\cos\alpha^\mu \big) + 4 v_3(\alpha^\mu) + 2v_3(\beta(\alpha^\mu,\gamma_\ell))\\ & = 2v_2(\lambda^\mu_2) + v_2(\lambda^\mu_1) + 4 v_3(\alpha^\mu) + 2v_3(\beta(\alpha^\mu,\gamma_\ell)) = \frac{1}{2m\ell} E(\mathcal{F}_{\lambda_1^\mu,\lambda_2^\mu,\mu}), \end{align*} which concludes the proof of Property 5. To see Property 3, we introduce $g(\gamma) = E_{\rm red}(\mu,\gamma,\gamma)$ for $\mu \in M^\ell$.
By \eqref{derivative1.2} we have $$g'(\gamma) = \sum_{i=1,2}\partial_{\gamma_i} E_{\rm red}(\mu,\gamma,\gamma) = 2v_3'(\beta(\alpha^*,\gamma)) \partial_\gamma \beta(\alpha^*,\gamma),$$ where $\alpha^* = \alpha^*(\mu,\gamma,\gamma)$. Using \eqref{zigder3} and the fact that $v_3'(\beta(\alpha^*,\gamma)) < 0$ since $\beta(\alpha^*,\gamma) < 2\pi/3$, we get $g'(\gamma) < 0$. Moreover, taking again \eqref{zigder3} and Lemma \ref{aus} into account, a Taylor expansion shows $|g'(\gamma)| \le C\ell^{-3}$ for some $C>0$ only depending on $v_3$. This shows Property 3. Finally, we show Property 4. The strict convexity of $\mu \mapsto E_{\rm red}(\mu,\gamma_\ell,\gamma_\ell)$ follows from \eqref{convexity-last-a} and a continuity argument, exactly as in the proof of Proposition \ref{convexenergy}. To show that the mapping is \RRR strictly \EEE increasing for $\mu > \mu^{\rm us}_\ell$, we have to show that for $\mu > \mu^{\rm us}_\ell$ \begin{align}\label{larger than one} \mu/2 + \lambda^\mu \cos\alpha^\mu >1 \end{align} as then the property follows from \eqref{derivative1}. Using the monotonicity properties of $v_2$ we see that the first-order optimality conditions \eqref{eq: firstorder-opt} and \eqref{first order-a} imply \begin{align}\label{eq: sgn} \begin{split} \mu/2 + \lambda^\mu\cos\alpha^\mu> 1 \ \ \ \Leftrightarrow \ \ \ \lambda^\mu >1. \end{split} \end{align} We prove \eqref{larger than one} by contradiction. Suppose $\lambda^\mu \le 1$. This together with the fact $\mu > \mu^{\rm us}_\ell = 2 - 2\cos\alpha^{\rm us}_\ell$ \RRR (see \eqref{eq:muellneu}) \EEE and $\cos\alpha^\mu<0$ would imply by \eqref{eq: sgn} \begin{align}\label{eq: sgn2} 2\cos\alpha^\mu - 2\cos\alpha^{\rm us}_\ell +1 = \mu^{\rm us}_\ell -1 +2\cos\alpha^\mu < \mu -1 +2\lambda^\mu\cos\alpha^\mu \le 1 \end{align} and thus $\alpha^\mu > \alpha^{\rm us}_\ell$. 
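The mechanism behind \eqref{eq: sgn2} can be illustrated with concrete numbers. In the sketch below all values ($\alpha^{\rm us}_\ell$, $\alpha^\mu$, $\lambda^\mu$, $\mu$) are hypothetical samples chosen only to satisfy the hypotheses of the contradiction argument ($\lambda^\mu \le 1$, $\mu > \mu^{\rm us}_\ell$, $\cos\alpha^\mu < 0$):

```python
from math import cos

# hypothetical sample values (assumptions for illustration only)
alpha_us = 2.0                       # alpha^us_ell, with cos(alpha_us) < 0
mu_us = 2 - 2 * cos(alpha_us)        # by (eq:muellneu)
mu = mu_us + 0.1                     # mu > mu^us_ell
lam, alpha_mu = 0.95, 2.2            # supposed lambda^mu <= 1, cos(alpha_mu) < 0

lhs = mu_us - 1 + 2 * cos(alpha_mu)
# first equality of (eq: sgn2)
assert abs(lhs - (2 * cos(alpha_mu) - 2 * cos(alpha_us) + 1)) < 1e-12
# strict inequality: mu^us < mu and 2cos(a_mu) <= 2*lam*cos(a_mu) for lam <= 1
assert lhs < mu - 1 + 2 * lam * cos(alpha_mu)
# if additionally mu - 1 + 2*lam*cos(a_mu) <= 1 (from (eq: sgn)), then
# cos(alpha_mu) < cos(alpha_us), i.e. alpha_mu > alpha_us on (0, pi)
assert (cos(alpha_mu) < cos(alpha_us)) == (alpha_mu > alpha_us)
```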
By the optimality condition in the unstretched case (see \eqref{first order-b} and recall that bond lengths are all equal to $1$) we get $$v_3'(\beta(\alpha^{\rm us}_\ell,\gamma_\ell))\, \partial_\alpha \beta(\alpha^{\rm us}_\ell,\gamma_\ell) + 2v_3'(\alpha^{\rm us}_\ell) = 0.$$ Consider the mapping $\alpha \mapsto v_3'(\beta(\alpha,\gamma_\ell))\, \partial_\alpha \beta(\alpha,\gamma_\ell) + 2v_3'(\alpha)$ and observe that the derivative reads as $$v_3'(\beta(\alpha,\gamma_\ell))\, \partial^2_{\alpha\alpha} \beta(\alpha,\gamma_\ell) + v_3''(\beta(\alpha,\gamma_\ell))\, (\partial_\alpha \beta(\alpha,\gamma_\ell))^2 + 2v_3''(\alpha).$$ Thus, the mapping is strictly increasing in a left neighborhood of $2\pi/3$ by \eqref{zigder3} and the fact that $\beta(\alpha,\gamma_\ell) < 2\pi/3$. Since $\alpha^\mu > \alpha^{\rm us}_\ell$, this gives $$ v'_3(\beta(\alpha^\mu,\gamma_\ell)) \, \partial_\alpha\beta(\alpha^\mu,\gamma_\ell) + 2 v'_3(\alpha^\mu) >0. $$ In view of \eqref{first order-b} and the first order optimality conditions \eqref{eq: firstorder-opt}, we get $\mu/2 + \lambda^\mu\cos\alpha^\mu > 1$, which contradicts the last inequality in \eqref{eq: sgn2}. Consequently, \eqref{larger than one} holds, which concludes the proof. \end{proof} \RRR We close this section with the proof of Proposition \ref{th: main2}.\EEE \begin{proof}[Proof of Proposition \ref{th: main2}] Let $M^\ell$ be the interval given by Proposition \ref{th: mainenergy}. The strict convexity of the mapping $\mu \mapsto E_{\rm min}(\mu)$ on $M^\ell$ as well as $\frac{d^2}{d\mu^2} E_{\rm min}(\mu_\ell^{\rm us}) \ge c2m\ell \ge cn$ follow from Properties 4 and 5 of Proposition \ref{th: mainenergy}. The fact that the energy minimum is attained at $\mu^{\rm us}_\ell$ follows from the definition of $\mu^{\rm us}_\ell$, see Proposition \ref{eq: old main result} \RRR and \eqref{eq:muellneu}. \EEE This shows Property 1. Now consider Property 2. 
We define $\lambda^\mu_1=\mu/2 +\lambda^\mu \cos\alpha^\mu$, $\lambda^\mu_2 = \lambda^\mu$ \RRR with $\lambda^\mu$, $\alpha^\mu$ being the solution of \eqref{red} for $\mu$ and $\gamma_1 = \gamma_2 = \gamma_\ell$ (cf. Proposition \ref{th: mainenergy}(v)) \EEE and use \eqref{convexity3-b} to obtain $\partial_\mu \lambda_2^\mu({\RRR t_{\rm ref} \EEE }) = \partial_\mu \lambda^*({\RRR t_{\rm ref} \EEE }) = 1/K$ and $\partial_\mu \lambda_1^\mu({\RRR t_{\rm ref} \EEE }) = 1/2 - \partial_\mu \lambda^*({\RRR t_{\rm ref} \EEE })/2- \sqrt{3}\partial_\mu \alpha^*_1({\RRR t_{\rm ref} \EEE })/2 = 4/K$ with $K = 9 + v_2''(1)/(2 v_3''(2\pi/3))$. \RRR (Recall the definition $t_{\rm ref} = (3,\pi,\pi)$.) \EEE Consequently, by a standard continuity argument we see that $\lambda^\mu_1$ and $\lambda^\mu_2$ increase continuously for $\mu \in M^\ell$, possibly passing to a smaller (not relabeled) open interval $M^\ell$ containing $\mu^{\rm us}_\ell$. The proof of the fact that $\mu > \mu^{\rm us}_\ell$ implies $\lambda^\mu_1,\lambda^\mu_2 >1$ is already contained in the proof of Proposition \ref{th: mainenergy}, see particularly \eqref{larger than one} and \eqref{eq: sgn}. The fact that $\mu < \mu^{\rm us}_\ell$ implies $\lambda^\mu_1,\lambda^\mu_2 <1$ can be proved along similar lines. To see Property 3, recall that by Proposition \ref{eq: old main result} we have $ \RRR \alpha^{\rm us}_\ell \EEE = \alpha^{\mu^{\rm us}_\ell} \in (\alpha^{\rm ch}_\ell,\alpha^{\rm ru})$ in the unstretched case. By a continuity argument we particularly obtain the convergence of minimizers, i.e. $\alpha^\mu \to \alpha^{\mu^{\rm us}_\ell}$ as $\mu \to \mu_\ell^{\rm us}$. Consequently, again possibly passing to a smaller interval $M^\ell$, Property 3 follows. We finally concern ourselves with Property 4. 
Recall by \eqref{alphars} that the radius of the nanotube is given by $$\rho^\mu = \lambda_2^\mu \sin\alpha^\mu / (2\sin(\pi/(2\ell))).$$ We compute the derivative and obtain $$\partial_\mu \rho^\mu = \big(\lambda_2^\mu \cos\alpha^\mu \, \partial_\mu \alpha^\mu + \partial_\mu \lambda_2^\mu\sin\alpha^\mu \big)/(2\sin(\pi/(2\ell))). $$ By \eqref{convexity3-b} the derivative at the unstretched planar reference configuration reads as \begin{align*} \lim_{\ell \to \infty} \partial_\mu \rho^{\mu^{\rm us}_\ell} \cdot (2\sin(\pi/(2\ell))) & = - \frac{1}{2} \partial_\mu \alpha_1^*({\RRR t_{\rm ref} \EEE }) + \frac{1}{2}\sqrt{3}\partial_\mu \lambda^*({\RRR t_{\rm ref} \EEE }) = \frac{\sqrt{3}}{2K} \Big( 1 - \frac{v_2''(1)}{6v_3''(2\pi/3)} \Big). \end{align*} Consequently, whenever $v_2''(1) \neq 6v_3''(2\pi/3)$, by a continuity argument the sign of $\partial_\mu \rho^{\mu}$ for $\ell \in \mathbb{N}$ large in a small neighborhood of $\mu^{\rm us}_\ell$ only depends on the sign of $v_2''(1) - 6v_3''(2\pi/3)$. \end{proof} \section{Energy defect controls symmetry defect: Proof of Theorem \ref{th: Ered}}\label{sec: cellenery} This section is devoted to the proof of Theorem \ref{th: Ered}. The fact that the minimum of the cell energy is attained for a special configuration with high symmetry (see \eqref{sym-assumption}) essentially relies on convexity properties of the cell energy $E_{\rm cell}$ defined in \eqref{eq: cell}. Throughout the section we consider a cell consisting of eight points $\boldsymbol{x} = (x_1,\ldots,x_8) \in \mathbb{R}^{3 \times 8}$ as defined before \eqref{eq: cell}, see Figure \ref{cell}. Likewise, the bond lengths are again denoted by $b_1,\ldots,b_8$ and the angles by $\varphi_1,\ldots, \varphi_{10}$, \RRR see Figure \ref{cellangles}. \EEE With a slight abuse of notation we denote the cell energy for a given configuration $\boldsymbol{x}$ by $E_{\rm cell}(\boldsymbol{x})$.
\subsection{Relation between atomic positions, bonds, and angles} We will investigate the convexity properties of $E_{{\rm cell}}$ near the \emph{planar reference configuration} $\boldsymbol{x}^0 = (x^0_1,\ldots,x^0_8) \in \mathbb{R}^{3 \times 8}$ defined by \begin{align*} \begin{split} &x_1^0 = (-1,0,0), \ \ \ x^0_2 = (1,0,0), \ \ \ x_3^0 =(-1/2,\sqrt{3}/2,0), \\ & x_4^0 =(1/2,\sqrt{3}/2,0), \ \ \ x_5^0 =(1/2,-\sqrt{3}/2,0), \ \ x_6^0 =(-1/2,-\sqrt{3}/2,0), \\& x_7^0 = (-2,0,0), \ \ x_8^0 = (2,0,0). \end{split} \end{align*} Moreover, we introduce the \emph{unstretched kink configuration} $\boldsymbol{x}^\ell_{\rm kink} = (x^{\rm kink}_1,\ldots,x^{\rm kink}_8) \in \mathbb{R}^{3 \times 8}$ by \begin{equation}\label{kink} \begin{aligned} &x_1^{\rm kink} = (-1/2-\sigma^{\rm us},0,0), \\ & x^{\rm kink}_2 = (1/2 + \sigma^{\rm us},0,0), \\ & x_3^{\rm kink} =(-1/2,\sin\alpha^{\rm us}_\ell \sin(\gamma_\ell/2),\sin\alpha^{\rm us}_\ell \cos(\gamma_\ell/2)), \\ & x_4^{\rm kink} =(1/2,\sin\alpha^{\rm us}_\ell \sin(\gamma_\ell/2),\sin\alpha^{\rm us}_\ell \cos(\gamma_\ell/2)), \\ & x_5^{\rm kink} =(1/2,-\sin\alpha^{\rm us}_\ell \sin(\gamma_\ell/2),\sin\alpha^{\rm us}_\ell \cos(\gamma_\ell/2)), \\ & x_6^{\rm kink} =(-1/2,-\sin\alpha^{\rm us}_\ell \sin(\gamma_\ell/2),\sin\alpha^{\rm us}_\ell \cos(\gamma_\ell/2)), \\& x_7^{\rm kink} = (-3/2 - \sigma^{\rm us},0,0), \\ & x_8^{\rm kink} = (3/2 + \sigma^{\rm us},0,0), \end{aligned} \end{equation} where $\gamma_\ell=\pi(1-1/\ell)$ and $\sigma^{\rm us} = - \cos\alpha^{\rm us}_\ell$ with $\alpha^{\rm us}_\ell$ as given by Proposition \ref{eq: old main result} (cf. also \eqref{alphars}). Note that $\boldsymbol{x}^\ell_{\rm kink}$ represents the mutual position of atoms in a cell for the unstretched nanotube $\mathcal{G}_{\alpha^{\rm us}_\ell}$ found in Proposition \ref{eq: old main result}. 
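As a quick consistency check, all eight bonds of the kink configuration \eqref{kink} have unit length, for any choice of $\alpha^{\rm us}_\ell$ and $\gamma_\ell$. In the sketch below the value of $\alpha^{\rm us}_\ell$ is a hypothetical sample close to $2\pi/3$, and the bond list reflects our reading of Figure \ref{cell} (both assumptions):

```python
import numpy as np

ell = 20
gamma = np.pi * (1 - 1 / ell)
alpha = 2 * np.pi / 3 - 0.01          # hypothetical sample for alpha^us_ell
sigma = -np.cos(alpha)
s = np.sin(alpha) * np.sin(gamma / 2)
h = np.sin(alpha) * np.cos(gamma / 2)

# atoms of x^ell_kink as in (kink)
x = {1: (-0.5 - sigma, 0, 0), 2: (0.5 + sigma, 0, 0),
     3: (-0.5, s, h), 4: (0.5, s, h), 5: (0.5, -s, h), 6: (-0.5, -s, h),
     7: (-1.5 - sigma, 0, 0), 8: (1.5 + sigma, 0, 0)}
x = {k: np.array(v) for k, v in x.items()}

# assumed bond list (hexagon x1-x3-x4-x2-x5-x6 plus the two outer bonds)
bonds = [(7, 1), (1, 3), (3, 4), (4, 2), (2, 8), (2, 5), (5, 6), (6, 1)]
lengths = [np.linalg.norm(x[i] - x[j]) for i, j in bonds]
assert np.allclose(lengths, 1.0)      # T^b(x_kink) = (1,...,1)
```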
For later use we note that by Lemma \ref{aus} and a Taylor expansion we find \begin{align}\label{eq: distance} |\boldsymbol{x}^0 -\boldsymbol{x}^\ell_{\rm kink}| \le C \ell^{-1} \end{align} for some universal $C>0$ large enough. In order to discuss the convexity properties of $E_{\rm cell}$ we need to introduce a specific basis of $\mathbb{R}^{3 \times 8}$, i.e. the space of cell configurations. This will consist of three collections of vectors, denoted by $\mathcal{V}_{\rm degen}$, $\mathcal{V}_{\rm good}$, and $\mathcal{V}_{\rm bad}$, where the sets are defined as follows: We introduce the translations and infinitesimal rotations \begin{align*} \mathcal{V}_{\rm trans} &= \Big\{ ( {e}_1,\ldots, {e}_1), ( e_2,\ldots, {e}_2), ( {e}_3,\ldots, {e}_3)\Big\} \subset \mathbb{R}^{3 \times 8} \\ \mathcal{V}_{\rm rot} &= \left\{ \boldsymbol{v}_1 := \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \, \boldsymbol{x}^0, \ \boldsymbol{v}_2 :=\begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix} \, \boldsymbol{x}^0, \ \boldsymbol{v}_3 :=\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix} \, \boldsymbol{x}^0 \right\} \subset \mathbb{R}^{3 \times 8} \end{align*} and set $\mathcal{V}_{\rm degen}= \mathcal{V}_{\rm trans} \cup \mathcal{V}_{\rm rot}$. 
The family $\mathcal{V}_{\rm good}$ contains the 13 vectors \begin{align*} \boldsymbol u_1=&( -1, 0, 0 | 1 , 0 , 0 | - 1/2, \sqrt{3}/2 , 0 | 1/2 , \sqrt{3}/2, 0 | 1/2 , -\sqrt{3}/2 , 0| - 1/2 , -\sqrt{3}/2 , 0 | 0 , 0 , 0 | 0 , 0 , 0),\\ \boldsymbol u_2=& (0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 1/2 , \sqrt{3}/2 , 0 \, | \, - 1/2 , \sqrt{3}/2 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0),\\ \boldsymbol u_3=&(0 , 0 , 0 \, | \, 1 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 1 , 0 , 0 \, | \, 1 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0),\\ \boldsymbol u_4=&(0 , 0 , 0 \, | \, 1/2 , -\sqrt{3}/2 , 0 \, | \, 1/2 , \sqrt{3}/2 , 0 \, | \, - 1/2 , \sqrt{3}/2 , 0 \, | \, 1 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0),\\ \boldsymbol u_5=&(0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, -1 , 0 , 0 \, | \, 0 , 0 , 0),\\ \boldsymbol u_6=&(0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, -1 , 0 , 0 \, | \, 1 , 0 , 0),\\ \boldsymbol u_7=&(\sqrt{3} , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 1 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , -1 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0),\\ \boldsymbol u_8=&(0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, \sqrt{3}/2 , - 1/2 , 0 \, | \, \sqrt{3}/2 , 1/2 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0),\\ \boldsymbol u_9=&(\sqrt{3}/2, 1/2 , 0 \, | \, -\sqrt{3}/2 , 1/2 , 0 \, | \, 0 , 1 , 0 \, | \, 0 , 1 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0),\\ \boldsymbol u_{10}=&(0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 1 , 0 \, | \, 0 , 0 , 0),\\ \boldsymbol u_{11}=&(0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 1 , 0 \, | \, 0 , 1 , 0),\\ \boldsymbol u_{12}=&(1 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 
, 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0),\\ \boldsymbol u_{13}=&(0 , 1 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0). \end{align*} The first 6 vectors keep the angles fixed and modify only the bonds, see Figure \ref{Eigenvectors_angle_fixed}. The vectors $\boldsymbol u_8, \dots , \boldsymbol u_{11}$ keep the bond lengths fixed \RRR to first order \EEE and change the angles, see Figure \ref{Eigenvectors_sides_fixed}. Eventually, the remaining vectors $\boldsymbol u_{12}$ and $ \boldsymbol u_{13}$ modify both angles and bonds as in Figure \ref{Eigenvectors_nofixed}. \begin{figure}[htp] \begin{center} \pgfdeclareimage[width=0.9\textwidth]{Eigenvectors_angle_fixed}{Eigenvectors_angle_fixed.pdf} \pgfuseimage{Eigenvectors_angle_fixed} \caption{Vectors $\boldsymbol u_1, \dots, \boldsymbol u_6$ in $\mathcal{V}_{\rm good}$ keep the angles fixed (ordered from left to right both in the first and in the second line).} \label{Eigenvectors_angle_fixed} \end{center} \end{figure} \begin{figure}[tp] \begin{center} \pgfdeclareimage[width=0.8\textwidth]{Eigenvectors_sides_fixed}{Eigenvectors_sides_fixed.pdf} \pgfuseimage{Eigenvectors_sides_fixed} \caption{Vectors $\boldsymbol u_{7}, \dots, \boldsymbol u_{11}$ in $\mathcal{V}_{\rm good}$ keep the bond lengths fixed (ordered from left to right both in the first and in the second line).} \label{Eigenvectors_sides_fixed} \end{center} \end{figure} \begin{figure}[htp] \begin{center} \pgfdeclareimage[width=0.45\textwidth]{Eigenvectors_nofixed}{Eigenvectors_nofixed.pdf} \pgfuseimage{Eigenvectors_nofixed} \caption{Vectors $\boldsymbol u_{12} $ and $\boldsymbol u_{13}$ in $\mathcal{V}_{\rm good}$ keep neither angles nor bond lengths fixed (ordered from left to right).} \label{Eigenvectors_nofixed} \end{center} \end{figure} By $\mathcal{V}_{\rm bad}$ we denote the collection of the vectors \begin{align*} 
& (0 , 0 , 1 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0),\\ &(0 , 0 , 1 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 1 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0),\\ &(0 , 0 , 1 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 1 \, | \, 0 , 0 , 1 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0),\\ &(0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 1 \, | \, 0 , 0 , 0),\\ &(0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 0 \, | \, 0 , 0 , 1 \, | \, 0 , 0 , 1). \end{align*} It is elementary to check that the vectors $\mathcal{V}_{\rm degen} \cup \mathcal{V}_{\rm good} \cup \mathcal{V}_{\rm bad}$ are linearly independent and thus form a basis of $\mathbb{R}^{3 \times 8}$. Note that the vectors in $\mathcal{V}_{\rm good}$ are perpendicular to the vectors in $\mathcal{V}_{\rm bad}$. Clearly, the cell energy is strictly convex as a function of the bond lengths and angles by the assumptions on the potentials $v_2$ and $v_3$. Our goal is to show that the same property holds if the cell energy is given as a function of the atomic positions. To this end, we introduce the mapping $T = (T^a,T^b): \mathbb{R}^{3 \times 8} \to \mathbb{R}^{18}$ defined by \begin{align*} T^a_i(\boldsymbol{x}) = \varphi_{i} \ \text{ for } i= 1,\ldots,10, \ \ \ \ T^b_i(\boldsymbol{x}) &= b_i \ \text{ for } i= 1,\ldots,8. \end{align*} Then the cell energy reads as \begin{align}\label{eq: cell-energy} E_{{\rm cell}}(\boldsymbol{x}) = \sum_{i=1}^8 \kappa^b_i v_2(T^b_i(\boldsymbol{x})) +\sum_{i=1}^{10} \kappa^a_i v_3(T^a_i(\boldsymbol{x})) \end{align} with the factors $\kappa^b_1 = \kappa^b_2 = \kappa^b_7= \kappa^b_8 = 1/4$, $\kappa^b_3 = \kappa^b_4 = \kappa^b_5= \kappa^b_6 = 1/2$, $\kappa^a_1 = \kappa^a_{2} = 1$, $\kappa^a_{3} = \ldots = \kappa^a_{10} = 1/2$. 
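The linear independence of the $24$ vectors in $\mathcal{V}_{\rm degen} \cup \mathcal{V}_{\rm good} \cup \mathcal{V}_{\rm bad}$ and the orthogonality of $\mathcal{V}_{\rm good}$ and $\mathcal{V}_{\rm bad}$ can indeed be verified by an elementary (if tedious) computation; the following numerical sketch transcribes the vectors from the definitions above:

```python
import numpy as np

r3 = np.sqrt(3)
# planar reference configuration x^0 (columns = atoms 1..8)
X0 = np.array([(-1, 0, 0), (1, 0, 0), (-0.5, r3/2, 0), (0.5, r3/2, 0),
               (0.5, -r3/2, 0), (-0.5, -r3/2, 0), (-2, 0, 0), (2, 0, 0)]).T

def flat(*triples):            # helper: 8 displacement triples -> vector in R^24
    return np.array(triples, dtype=float).reshape(-1)

O = (0, 0, 0)
V_trans = [flat(*[e] * 8) for e in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]]
skews = [np.array(A) for A in ([[0, 1, 0], [-1, 0, 0], [0, 0, 0]],
                               [[0, 0, 1], [0, 0, 0], [-1, 0, 0]],
                               [[0, 0, 0], [0, 0, 1], [0, -1, 0]])]
V_rot = [(A @ X0).T.reshape(-1) for A in skews]

V_good = [
 flat((-1,0,0),(1,0,0),(-.5,r3/2,0),(.5,r3/2,0),(.5,-r3/2,0),(-.5,-r3/2,0),O,O),
 flat(O,O,(.5,r3/2,0),(-.5,r3/2,0),O,O,O,O),
 flat(O,(1,0,0),O,(1,0,0),(1,0,0),O,O,O),
 flat(O,(.5,-r3/2,0),(.5,r3/2,0),(-.5,r3/2,0),(1,0,0),O,O,O),
 flat(O,O,O,O,O,O,(-1,0,0),O),
 flat(O,O,O,O,O,O,(-1,0,0),(1,0,0)),
 flat((r3,0,0),O,(0,1,0),O,O,(0,-1,0),O,O),
 flat(O,O,(r3/2,-.5,0),(r3/2,.5,0),O,O,O,O),
 flat((r3/2,.5,0),(-r3/2,.5,0),(0,1,0),(0,1,0),O,O,O,O),
 flat(O,O,O,O,O,O,(0,1,0),O),
 flat(O,O,O,O,O,O,(0,1,0),(0,1,0)),
 flat((1,0,0),O,O,O,O,O,O,O),
 flat((0,1,0),O,O,O,O,O,O,O),
]
z = (0, 0, 1)
V_bad = [
 flat(z,O,O,O,O,O,O,O), flat(z,O,z,O,O,O,O,O), flat(z,O,O,z,z,O,O,O),
 flat(O,O,O,O,O,O,z,O), flat(O,O,O,O,O,O,z,z),
]

B = np.array(V_trans + V_rot + V_good + V_bad)                 # 24 x 24
assert np.linalg.matrix_rank(B) == 24                          # a basis of R^{3x8}
assert np.allclose(np.array(V_good) @ np.array(V_bad).T, 0)    # V_good ⟂ V_bad
```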
{\BBB Before analyzing the mapping $T$, we need to introduce some more notation for sums of the angles $\varphi_i$. From here on, we denote by $\boldsymbol{e}_1,\ldots,\boldsymbol{e}_{10}$ the canonical basis of $\mathbb{R}^{10}$, and we let $$\boldsymbol{a}_1 := \boldsymbol{e}_1 + \ldots + \boldsymbol{e}_6,\quad \boldsymbol{a}_2 := \boldsymbol{e}_1 + \boldsymbol{e}_7 + \boldsymbol{e}_8,\quad \boldsymbol{a}_3 := \boldsymbol{e}_2 + \boldsymbol{e}_9 + \boldsymbol{e}_{10}$$ be vectors in $\mathbb{R}^{10}$. Elementary geometry yields $T^a(\boldsymbol{x}^0)\cdot \boldsymbol{a}_1 = 4\pi$ and $T^a(\boldsymbol{x}^0) \cdot \boldsymbol{a}_j = 2\pi$ for $j=2,3$ as well as $T^a(\boldsymbol{x})\cdot \boldsymbol{a}_1 \le 4\pi$ and $T^a(\boldsymbol{x}) \cdot \boldsymbol{a}_j \le 2\pi$ for $j=2,3$ for each $\boldsymbol{x}\in\mathbb{R}^{3\times 8}$. Indeed, the sum of the interior angles in a hexagon is always smaller than or equal to $4\pi$, and exactly $4\pi$ if the hexagon is planar. Likewise one argues for a triple junction. } \begin{lemma}[Properties of $T$]\label{lemma: T} The mapping $T$ is smooth in a neighborhood of $\boldsymbol{x}^0$.
There is a constant $c_{\rm kink} >0$ such that \begin{align*} 1.& \ \ {\rm Ker}(D T(\boldsymbol{x}^0)) = {\rm span}(\mathcal{V}_{\rm degen} \cup \mathcal{V}_{\rm bad}), \ \ \ \ {\rm dim}({\rm Ker}(D T(\boldsymbol{x}^0))) = 11,\\ 2.& \ \ {\rm dim}({\rm Ker}(D T^a(\boldsymbol{x}^0))) = 17,\\ 3.& \ \ (\boldsymbol{v}^T D^2 T^a(\boldsymbol{x}^0) \boldsymbol{v})\cdot \boldsymbol{a}_j \le 0 \text{ for } \ j=1,2,3, \ \ \text{ for all } \ \ \boldsymbol{v} \in \mathbb{R}^{3 \times 8},\\ 4.& \ \ \sum_{j=1}^{3}(\boldsymbol{v}^T D^2 T^a(\boldsymbol{x}^0) \boldsymbol{v}) \cdot \boldsymbol{a}_j\le -c_{\rm kink}|\boldsymbol{v} - \boldsymbol{v}_{\rm degen}|^2 \ \ \text{ for all } \ \ \boldsymbol{v} \in {\rm span}(\mathcal{V}_{\rm degen} \cup \mathcal{V}_{\rm bad}), \\ & \ \ \ \text{ where $\boldsymbol{v}_{\rm degen}$ is the orthogonal projection of $\boldsymbol{v}$ onto ${\rm span}(\mathcal{V}_{\rm degen})$.} \end{align*} \end{lemma} \begin{proof} First, to see Property 1, we note that ${\rm span}(\mathcal{V}_{\rm degen} \cup \mathcal{V}_{\rm bad})$ is a subset of ${\rm Ker}(D T(\boldsymbol{x}^0))$ since no vector in $\mathcal{V}_{\rm degen} \cup \mathcal{V}_{\rm bad}$ changes bond lengths or angles to first order. On the other hand, each vector in $\mathcal{V}_{\rm good}$ changes bond lengths or angles to first order and is therefore not contained in the kernel of $D T(\boldsymbol{x}^0)$. Indeed, the first six vectors of $\mathcal{V}_{\rm good}$ are directions of perturbations that change bond lengths, but not angles, to first order. Vectors $\boldsymbol u_8, \dots, \boldsymbol u_{11}$ are perturbations that change angles, but not bond lengths, to first order, while $\boldsymbol u_7$ changes angles and also one bond length to first order. Vectors $ \boldsymbol u_{12}$ and $\boldsymbol u_{13}$ are in-plane displacements of a single atom and change both bond lengths and angles to first order.
\RRR More precisely, \vspace{0.2cm} for the changes of bond lengths we get\\ \begin{tabular}{ll}\vspace{0.1cm} $DT^b(\boldsymbol{x}^0) \boldsymbol{u}_1 \,\|\, (1,1,1,1,1,1,-1,-1)$, & $DT^b(\boldsymbol{x}^0) \boldsymbol{u}_2 \,\|\, (0,-1,1,1,0,0,0,0)$,\\ \vspace{0.1cm} $DT^b(\boldsymbol{x}^0) \boldsymbol{u}_3 \,\|\, (1,1,0,0,0,0,0,-1)$, & $DT^b(\boldsymbol{x}^0) \boldsymbol{u}_4 \,\|\, (2,-2,2,4,-2,0,0,-1)$,\\ \vspace{0.1cm} $DT^b(\boldsymbol{x}^0) \boldsymbol{u}_5 \,\|\, (0,0,0,0,0,0,1,0)$, & $DT^b(\boldsymbol{x}^0) \boldsymbol{u}_6 \,\|\, (0,0,0,0,0,0,1,1)$,\\ \vspace{0.2cm} $DT^b(\boldsymbol{x}^0) \boldsymbol{u}_{12} \,\|\, (0,0,-1,0,0,-1,2,0)$, & $DT^b(\boldsymbol{x}^0) \boldsymbol{u}_{13} \,\|\, (0,0,-1,0,0,1,0,0)$, \end{tabular}\\ where $\boldsymbol{w}_1 \,\|\, \boldsymbol{w}_2$ indicates that $\boldsymbol{w}_1$ and $\boldsymbol{w}_2$ are linearly dependent. Likewise, for the changes of angles we have \vspace{0.2cm} \\ \begin{tabular}{ll}\vspace{0.1cm} $DT^a(\boldsymbol{x}^0) \boldsymbol{u}_7 \,\|\, (4,0,-3,1,1,-3,-2,-2,0,0),$ & $DT^a(\boldsymbol{x}^0) \boldsymbol{u}_8 \,\|\, (-1,1,2,-2,0,0,1,0,-1,0)$, \\ \vspace{0.1cm} $DT^a(\boldsymbol{x}^0) \boldsymbol{u}_9 \,\|\, (-2,-2,1,1,1,1,1,1,1,1)$, & $DT^a(\boldsymbol{x}^0) \boldsymbol{u}_{10} \,\|\, (0,0,0,0,0,0,0,0,-1,1)$, \\ \vspace{0.1cm} $DT^a(\boldsymbol{x}^0) \boldsymbol{u}_{11} \,\|\, (0,0,0,0,0,0,-1,1,-1,1)$, & $DT^a(\boldsymbol{x}^0) \boldsymbol{u}_{12} \,\|\, (2,0,-1,0,0,-1,-1,-1,0,0)$,\\ \vspace{0.1cm} $DT^a(\boldsymbol{x}^0) \boldsymbol{u}_{13} \,\|\, (0,0,0,0,0,0,1,-1,0,0).$ & \vspace{0.2cm} \end{tabular}\\ \EEE (We prefer not to give details of the computation, but rather refer the Reader to Figures \ref{Eigenvectors_angle_fixed}-\ref{Eigenvectors_nofixed} where the situation of the different directions is indicated.). 
\RRR It is elementary to check that the vectors $DT(\boldsymbol{x}^0) \boldsymbol{u}_i$, $i =1,\ldots, 13$, are linearly independent which concludes the proof of Property 1 by a dimension counting. \EEE Since $ {\rm dim}({\rm Ker}(D T(\boldsymbol{x}^0))) =11$ and in $\mathcal{V}_{\rm good}$ only the first six vectors do not change angles to first order, Property 2 holds. Property 3 follows from the fact that the mapping $t \mapsto T^a(\boldsymbol{x}^0 + t\boldsymbol{v})\cdot \boldsymbol{a}_j$ has a local maximum at $t= 0$ for $j=1,2,3$ and for all $\boldsymbol{v} \in \mathbb{R}^{3 \times 8}$ as noticed before the statement of the lemma. To see Property 4, we first consider the special case $\boldsymbol{v} \in \mathcal{V}_{\rm bad}$. In this situation the property follows from an elementary computation, which we detail only in the case $\boldsymbol{v} = (e_3 |0| \ldots|0)$. In this case, after some calculations, we obtain $(T^a(\boldsymbol{x}^0 + t\boldsymbol{v}))_i = \arccos(-1/2 + 3t^2/2) + {\rm O}(t^3) \le 2\pi/3 -ct^2$ for some $c>0$ for $i=1,7,8$ (i.e. for the angles at the triple junction at point $x_1$). Using also Property 1, this indeed implies $(\boldsymbol{v}^T D^2 T^a(\boldsymbol{x}^0) \boldsymbol{v}) \cdot \boldsymbol{a}_2 \le -c$, i.e. by a perturbation out of the plane the sum of the angles is reduced to second order. For the other triple junction and the interior angles of the hexagon we argue analogously. This shows the property for perturbations in the directions $\mathcal{V}_{\rm bad}$. \BBB Likewise, we proceed for directions in ${\rm span}(\mathcal{V}_{\rm bad})$. 
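The expansion $\arccos(-1/2 + 3t^2/2) + {\rm O}(t^3)$ for the three angles at the perturbed triple junction can be reproduced numerically (atom coordinates as in the planar reference configuration $\boldsymbol{x}^0$; the perturbation pushes atom $1$ out of the plane by $t$):

```python
import numpy as np

def angle(u, v):
    return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# neighbors of atom 1 in x^0
x7 = np.array([-2.0, 0.0, 0.0])
x3 = np.array([-0.5, np.sqrt(3) / 2, 0.0])
x6 = np.array([-0.5, -np.sqrt(3) / 2, 0.0])

t = 1e-3
x1 = np.array([-1.0, 0.0, t])          # atom 1 displaced by t*e_3
d7, d3, d6 = x7 - x1, x3 - x1, x6 - x1
angles = [angle(d7, d3), angle(d7, d6), angle(d3, d6)]

# each angle equals arccos(-1/2 + 3t^2/2) up to O(t^3)
assert np.allclose(angles, np.arccos(-0.5 + 1.5 * t**2), atol=1e-8)
# the angle sum at the junction drops below 2*pi quadratically in t
assert 0 < 2 * np.pi - sum(angles) < 10 * t**2
```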
\EEE Now consider the general case $\boldsymbol{v} = \boldsymbol{v}_{\rm trans} + \boldsymbol{v}_{\rm rot} + \boldsymbol{v}_{\rm bad} \in {\rm span}(\mathcal{V}_{\rm degen} \cup \mathcal{V}_{\rm bad})$ for $\boldsymbol{v}_{\rm trans} \in {\rm span}(\mathcal{V}_{\rm trans})$, $\boldsymbol{v}_{\rm rot} \in {\rm span}(\mathcal{V}_{\rm rot})$, and $\boldsymbol{v}_{\rm bad} \BBB \in {\rm span}(\mathcal{V}_{\rm bad}) \EEE $. First, since $T(\boldsymbol{x} + \boldsymbol{w}) = T(\boldsymbol{x})$ for all $\boldsymbol{x} \in \mathbb{R}^{3 \times 8}$ and all $\boldsymbol{w} \in \mathcal{V}_{\rm trans}$, we get $DT(\boldsymbol{x}) \boldsymbol{w} = 0$ and $\boldsymbol{w}^TD^2 T(\boldsymbol{x}) \boldsymbol{w}' = 0$ for all $\boldsymbol{w} \in {\rm span}(\mathcal{V}_{\rm trans})$, $\boldsymbol{w}' \in \mathbb{R}^{3 \times 8}$, and $\boldsymbol{x} \in \mathbb{R}^{3 \times 8}$. Consequently, we deduce $\boldsymbol{v}^TD^2 T^a(\boldsymbol{x}^0) \boldsymbol{v} = (\boldsymbol{v}_{\rm rot} + \boldsymbol{v}_{\rm bad})^TD^2 T^a(\boldsymbol{x}^0) (\boldsymbol{v}_{\rm rot} + \boldsymbol{v}_{\rm bad})$. Moreover, let $A \in \mathbb{R}^{3 \times 3}_{\rm skew}$ such that $\boldsymbol{v}_{\rm rot} = A \boldsymbol{x}^0$ and observe that there is a rotation $R_t \in SO(3)$ such that $\boldsymbol{x}^0_t := R_t (\boldsymbol{x}^0 + t \boldsymbol{v}_{\rm rot})$ is contained in the plane $\mathbb{R}^2 \times \lbrace 0 \rbrace$ and one has $|R_t - (\mathbf{I} - tA)| = {\rm O}(|tA|^2)$, cf. \BBB \cite[(3.20)]{FrieseckeJamesMueller:02}. \EEE (Here $\mathbf{I} \in \mathbb{R}^{3 \times 3}$ denotes the identity matrix.) Consequently, we get $|\boldsymbol{x}^0 - \boldsymbol{x}^0_t| = {\rm O}(|tA|^2)$. 
This implies \begin{align*} T^a(\boldsymbol{x}^0 + t(\boldsymbol{v}_{\rm rot} + \boldsymbol{v}_{\rm bad}))& = T^a\big(R_t(\boldsymbol{x}^0 + t(\boldsymbol{v}_{\rm rot} + \boldsymbol{v}_{\rm bad}))\big) = T^a(\boldsymbol{x}^0_t + t R_t\boldsymbol{v}_{\rm bad}) \\ &= T^a(\boldsymbol{x}^0 + t \boldsymbol{v}_{\rm bad} + t^2 \boldsymbol{w} + {\rm O}(t^3)) \end{align*} for some $\boldsymbol{w} \in \mathbb{R}^{3 \times 8}$ with $|\boldsymbol{w}| \le c|A|^2$ and the property that the third component of each vector in $\boldsymbol{w}$ is zero. A Taylor expansion and Property 1 of the lemma then yield \begin{align*} T^a(\boldsymbol{x}^0 + t(\boldsymbol{v}_{\rm rot} + \boldsymbol{v}_{\rm bad}))& = T^a(\boldsymbol{x}^0) + t^2DT^a(\boldsymbol{x}^0)\boldsymbol{w} + \frac{t^2}{2}\boldsymbol{v}_{\rm bad}^TD^2T^a(\boldsymbol{x}^0)\boldsymbol{v}_{\rm bad} + {\rm O}(t^3). \end{align*} As the sum of the angles in the hexagon and at the triple junctions remains invariant under perturbation $\boldsymbol{w}$, we then deduce $$\sum_{j=1}^{3} T^a(\boldsymbol{x}^0 + t(\boldsymbol{v}_{\rm rot} + \boldsymbol{v}_{\rm bad})) \cdot \boldsymbol{a}_j = 8\pi + \sum_{j=1}^{3}\frac{t^2}{2}\boldsymbol{v}_{\rm bad}^TD^2T^a(\boldsymbol{x}^0)\boldsymbol{v}_{\rm bad} \cdot \boldsymbol{a}_j + {\rm O}(t^3). $$ The desired result now follows from the fact that $\sum_{j=1}^3\boldsymbol{v}_{\rm bad}^TD^2T^a(\boldsymbol{x}^0)\boldsymbol{v}_{\rm bad} \cdot \boldsymbol{a}_j \le -c|\boldsymbol{v}_{\rm bad}|^2$ has already been established in the first part of the proof, \BBB where we also note that $|\boldsymbol{v}_{\rm bad}| \ge c |\boldsymbol{v} - \boldsymbol{v}_{\rm degen}|$ with $\boldsymbol{v}_{\rm degen}$ being the orthogonal projection of $\boldsymbol{v}$ onto ${\rm span}(\mathcal{V}_{\rm degen})$. 
\EEE \end{proof} For later use we also introduce the mapping $\tilde{E}: {\RRR [0,2\pi]^{10}\times [0,+\infty)^8} \to \mathbb{R}$ defined by $$ \tilde{E}(\boldsymbol y) = \sum_{i=1}^{10} \kappa^a_i v_3(y_{i}) + \sum_{i=1}^{8} \kappa^b_i v_2(y_{i+10}) $$ for $\boldsymbol y \in {\BBB [0,2\pi]^{10}\times [0,+\infty)^8}$. Note that $E_{\rm cell}(\boldsymbol{x}) = \tilde{E}(T(\boldsymbol{x}))$ for all $\boldsymbol{x} \in \mathbb{R}^{3\times 8}$. \begin{lemma}[Properties of $\tilde{E}$]\label{tildeE} The mapping $\tilde{E}$ is \RRR smooth \EEE and there are constants $0 < c_{E,1}< c_{E,2}$ and $\ell_0 \in \mathbb{N}$ depending only on $v_2$ and $v_3$ such that for $\ell \ge \ell_0$ \begin{align*} \begin{split} 1.& \ \ (D \tilde{E}(T(\boldsymbol{x}_{\rm kink}^\ell)))_i =0 \ \ \ \text{ for } \ i=11,\ldots,18,\\ 2.& \ \ -c_{E,2} \ell^{-2} \le (D\tilde{E}(T(\boldsymbol{x}_{\rm kink}^\ell)))_i \le -c_{E,1} \ell^{-2} \ \ \ \text{ for } \ i=1,\ldots,10,\\ 3.& \ \ c_{E,1} \le (D^2 \tilde{E}(T(\boldsymbol{x}_{\rm kink}^\ell)))_{ii} \le c_{E,2} \ \ \text{ for } \ i=1,\ldots,18, \ \ \ (D^2 \tilde{E}(T(\boldsymbol{x}_{\rm kink}^\ell)))_{ij} = 0 \ \ \text{ for } i\neq j. \end{split} \end{align*} \end{lemma} \begin{proof} Property 1 follows from the fact that $T^b(\boldsymbol{x}^\ell_{\rm kink}) = (1,\ldots,1) \in \mathbb{R}^8$ and $v_2'(1) = 0$. To see Property 2, we apply Lemma \ref{aus} to find $(T^a(\boldsymbol{x}^\ell_{\rm kink}))_i \in (2\pi/3 - c_2\ell^{-2}, 2\pi/3 - c_1\ell^{-2})$ for $i=1,\ldots,10$ and use the fact that $v_3 \in C^2$ with $v_3'(2\pi/3) = 0$, $v_3''(2\pi/3)>0$. Likewise, Property 3 follows from $v_2''(1)>0$ and $v''_3(2\pi/3)>0$. \end{proof} \subsection{Convexity of the cell energy} \label{sec: convexity} The following theorem gives a first property of the Hessian of $E_{\rm cell}$ at the kink configuration $\boldsymbol{x}^\ell_{\rm kink}$. \begin{theorem}[Convexity of $E_{\rm cell}$ in good directions]\label{th: cell convexity1} Let $0 < r <1$.
Then there exist $\ell_0 \in \mathbb{N}$ and a constant $c>0$ depending only on $v_2$, $v_3$, and $r$ such that for $\ell \ge \ell_0$ and each $\boldsymbol{v} \in \mathbb{R}^{3 \times 8}$ with $$|\boldsymbol{v} \cdot \boldsymbol{w}| \le r|\boldsymbol{w}||\boldsymbol{v}| \ \ \ \text{ for all } \ \ \ \boldsymbol{w} \in {\rm span}(\mathcal{V}_{\rm degen} \cup \mathcal{V}_{\rm bad})$$ one has $$\boldsymbol{v}^TD^2E_{{\rm cell}}(\boldsymbol{x}^\ell_{\rm kink})\boldsymbol{v} \ge c|\boldsymbol{v}|^2.$$ \end{theorem} \begin{proof} First, by the regularity of the mapping $T$, Property 1 in Lemma \ref{lemma: T}, and the fact that $\boldsymbol{x}^\ell_{\rm kink} \to \boldsymbol{x}^0$ for $\ell \to \infty$, we find $\ell_0 \in \mathbb{N}$ sufficiently large such that for $\ell \ge \ell_0$ the kernel of $D T(\boldsymbol{x}^\ell_{\rm kink})$ has dimension at most $11$. Then we find universal constants $0< c_1 < c_2$ such that for all $\ell \ge \ell_0$, possibly after increasing $\ell_0$, we have \begin{align}\label{eq: conv1} \begin{split} & c_1|\boldsymbol{v}| \le |D T(\boldsymbol{x}^\ell_{\rm kink}) \boldsymbol{v}| \le c_2|\boldsymbol{v}| \ \text{ for all } \boldsymbol{v}\in {\rm span}(\mathcal{V}_{\rm degen} \cup \mathcal{V}_{\rm bad})^\bot,\\ &|D T(\boldsymbol{x}^\ell_{\rm kink}) \boldsymbol{v}| \le c_2| \boldsymbol{v}| \ell^{-1} \ \text{ for all } \boldsymbol{v}\in {\rm span}(\mathcal{V}_{\rm degen} \cup \mathcal{V}_{\rm bad}). \end{split} \end{align} For the second property we used \eqref{eq: distance}. Now let $\boldsymbol{v} \in \mathbb{R}^{3 \times 8}$ with $|\boldsymbol{v} \cdot \boldsymbol{w}| \le r|\boldsymbol{w}||\boldsymbol{v}| \ \text{ for all } \ \boldsymbol{w} \in {\rm span}(\mathcal{V}_{\rm degen} \cup \mathcal{V}_{\rm bad})$ be given.
The vector can be written as $\boldsymbol{v} = \boldsymbol{v}_{\rm good} + \boldsymbol{v}_{\rm good}^\bot$ with two orthogonal vectors $\boldsymbol{v}_{\rm good}, \boldsymbol{v}_{\rm good}^\bot$ satisfying $\boldsymbol{v}_{\rm good}^\bot \in {\rm span}(\mathcal{V}_{\rm degen} \cup \mathcal{V}_{\rm bad})$ and $|\boldsymbol{v}_{\rm good}| \ge \sqrt{1-r^2}|\boldsymbol{v}|$. (Indeed, taking $\boldsymbol{w} = \boldsymbol{v}_{\rm good}^\bot$ in the assumption gives $|\boldsymbol{v}_{\rm good}^\bot|^2 = |\boldsymbol{v} \cdot \boldsymbol{v}_{\rm good}^\bot| \le r|\boldsymbol{v}_{\rm good}^\bot||\boldsymbol{v}|$ and thus $|\boldsymbol{v}_{\rm good}^\bot| \le r|\boldsymbol{v}|$.) Consider the mapping $f_{\boldsymbol{v}}:\mathbb{R} \to \mathbb{R}$ defined by $f_{\boldsymbol{v}}(t) = \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink} + t\boldsymbol{v}))$. We compute \begin{align} f'_{\boldsymbol{v}}(t) & = D \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink} + t\boldsymbol{v})) \big(D T(\boldsymbol{x}^\ell_{\rm kink} + t\boldsymbol{v}) \boldsymbol{v}\big), \notag\\ f''_{\boldsymbol{v}}(t) & = \big(D T(\boldsymbol{x}^\ell_{\rm kink} + t\boldsymbol{v}) \boldsymbol{v}\big)^T D^2 \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink} + t\boldsymbol{v})) \big(D T(\boldsymbol{x}^\ell_{\rm kink} + t\boldsymbol{v}) \boldsymbol{v}\big) \notag \\& \ \ \ + D \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink} + t\boldsymbol{v})) \big(\boldsymbol{v}^TD^2 T(\boldsymbol{x}^\ell_{\rm kink} + t\boldsymbol{v}) \boldsymbol{v}\big). \label{eq: conv3-b} \end{align} We further observe that by Properties 1 and 2 of Lemma \ref{tildeE} there is a constant $c_3$ only depending on $c_{E,2}$ such that \begin{align}\label{eq: conv2} |D \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink})) \big(\boldsymbol{v}^TD^2 T(\boldsymbol{x}^\ell_{\rm kink}) \boldsymbol{v}\big)| \le c_3|\boldsymbol{v}|^2 \ell^{-2}.
\end{align} Then collecting \eqref{eq: conv1}-\eqref{eq: conv2} and using Property 3 of Lemma \ref{tildeE} we derive \begin{align*} \boldsymbol{v}^TD^2E_{{\rm cell}}(\boldsymbol{x}^\ell_{\rm kink})\boldsymbol{v} & = f_{\boldsymbol{v}}''(0) \\&\ge c_{E,1}c_1^2|\boldsymbol{v}_{\rm good}|^2 -2 c_{E,2} c^2_2|\boldsymbol{v}_{\rm good}||\boldsymbol{v}_{\rm good}^\bot| \ell^{-1} - c_3|\boldsymbol{v}|^2 \ell^{-2} \\ & \ge |\boldsymbol{v}|^2 \big(c_{E,1}c_1^2(1-r^2) - 2c_{E,2}c^2_2 \ell^{-1} - c_3 \ell^{-2}\big). \end{align*} For $\ell_0$ large enough \RRR (depending also on $r$) \EEE this implies the assertion of the theorem for $\ell \ge \ell_0$. \end{proof} To investigate the convexity properties in the directions $\mathcal{V}_{\rm bad}$, we need some further preparations. Recall the reflections introduced in \eqref{reflexion}. The following lemma is a consequence of Theorem \ref{th: cell convexity1} and shows that variations in the directions $\mathcal{V}_{\rm good}$ decrease the energy only to higher order. \begin{lemma}[Energy decrease in good directions]\label{lemma: in plane} There exist $\ell_0 \in \mathbb{N}$ and a constant $C>0$ depending only on $v_2$ and $v_3$ such that for $\ell \ge \ell_0$ and each $\boldsymbol{v} \in {\rm span}(\mathcal{V}_{\rm good})$ $$D \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink})) \big(D T(\boldsymbol{x}^\ell_{\rm kink}) \boldsymbol{v}\big) \ge - C|\boldsymbol{v}| \ell^{-3}.$$ \end{lemma} \begin{proof} Let $\boldsymbol{v} \in {\rm span}(\mathcal{V}_{\rm good})$ be given and define a perturbation of $\boldsymbol{v}$ by \begin{align}\label{eq:sv'} \boldsymbol{v}' = \boldsymbol{v} \RRR + \EEE s\ell^{-1}|\boldsymbol{v}|(0,0,{e}_3, {e}_{3}, {e}_{3}, {e}_{3},0,0) \in \mathbb{R}^{3 \times 8} \end{align} for some universal $s>0$ to be specified below. (Note that the direction $\boldsymbol{v}' - \boldsymbol{v}$ increases the third components of the points $x_3,\ldots,x_6$ {\BBB of the basic cell}).
By Properties 1 and 2 of Lemma \ref{tildeE} and the fact that $|\boldsymbol{v}-\boldsymbol{v}'| \le 4s|\boldsymbol{v}|\ell^{-1}$ it clearly suffices to show \begin{align}\label{eq: deriv=0} D \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink})) \big(D T(\boldsymbol{x}^\ell_{\rm kink}) \boldsymbol{v}'\big) \ge 0. \end{align} To this end, we will show that \begin{align}\label{eq: deriv=0.2} \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink} + t\boldsymbol{v}')) \ge \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink})) \end{align} for all $t >0$ small. Then \eqref{eq: deriv=0} follows by taking the limit $t \to 0$. Consider $\boldsymbol{x} = \boldsymbol{x}_{\rm kink}^\ell + t \boldsymbol{v}'$ for $t >0$ small. Possibly after applying a rigid motion we can assume that the second and third components of \RRR $(x_1 + x_7)/2 $ and $(x_2 + x_8)/2$ are zero, \EEE the points $x_1,x_2,x_7,x_8$ lie in the plane $\mathbb{R}^2 \times \lbrace 0\rbrace$ and that the points $x_3,x_4,x_5,x_6$ lie in a plane parallel to $\mathbb{R}^2 \times \lbrace 0\rbrace$. (Recall that $\boldsymbol{v}$ induces an in-plane perturbation, i.e. the third component of each vector in $\boldsymbol{v}$ is zero.) We replace $\boldsymbol{x}$ by a symmetrized version as follows. Define {\BBB $\boldsymbol{x}_{S_1} $ by \eqref{s1s2} } and note that $E_{\rm cell}(\boldsymbol{x}_{S_1} ) = E_{\rm cell}(\boldsymbol{x})$. Moreover, it is elementary to see that the third component of each vector in $\boldsymbol{w}_1 := \boldsymbol{x}_{S_1} - \boldsymbol{x}$ is zero. Consequently, $\boldsymbol{w}_1$ is perpendicular to $\mathcal{V}_{\rm bad}$, $\mathcal{V}_{\rm trans}$, and the rotations $\boldsymbol{v}_2, \boldsymbol{v}_3$. Clearly, as the reflection $S_1$ leaves the points \RRR $(x_1 + x_7)/2 $ and $(x_2 + x_8)/2$ \EEE unchanged, we also have that $\boldsymbol{w}_1$ is not parallel to the rotation $\boldsymbol{v}_1$.
Consequently, by Theorem \ref{th: cell convexity1} and a continuity argument with $t$ small enough, the mapping $t' \mapsto E_{\rm cell}(\boldsymbol{x} + t' \RRR \boldsymbol{w}_1 \EEE )$ is convex on $[0,1]$. This implies for $\boldsymbol{x}' = \frac{1}{2}(\boldsymbol{x} + \boldsymbol{x}_{S_1} )$ (see \eqref{reflection2-a}) that $E_{\rm cell}(\boldsymbol{x}') \le \frac{1}{2}(E_{\rm cell}(\boldsymbol{x}) + E_{\rm cell}(\boldsymbol{x}_{S_1} )) = E_{\rm cell}(\boldsymbol{x})$. Likewise, we consider {\BBB $\boldsymbol{x}'_{S_2} := \boldsymbol{x}_{\rm kink}^\ell + S_2(\boldsymbol{x}'- \boldsymbol{x}_{\rm kink}^\ell)$ and note that $E_{\rm cell}(\boldsymbol{x}'_{S_2}) = E_{\rm cell}(\boldsymbol{x}')$. Similarly as before, the vector $\boldsymbol{w}_2 := \boldsymbol{x}'_{S_2} - \boldsymbol{x}'$ is perpendicular to the vectors $\mathcal{V}_{\rm bad}$ and not parallel to $\mathcal{V}_{\rm degen}$. Using Theorem \ref{th: cell convexity1} we get $E_{\rm cell} (\mathcal{S}(\boldsymbol{x})) \le E_{\rm cell}(\boldsymbol{x}') \le E_{\rm cell}(\boldsymbol{x}) $ for $\mathcal{S}(\boldsymbol{x}) = \frac{1}{2}(\boldsymbol{x}' +\boldsymbol{x}'_{S_2})$ (see \eqref{reflection2-b}).} By this symmetrization procedure we get that the eight points $\mathcal{S}(\boldsymbol{x})$ are contained in two {\BBB kinked} planes (similarly as $\boldsymbol{x}_{\rm kink}^\ell$). We denote the angle between the two planes by $\gamma \le \pi$ and note that $\gamma \le \gamma_\ell$ if the constant $s>0$ \RRR in \eqref{eq:sv'} \EEE is chosen sufficiently large. The bond lengths satisfy $b_1 = b_2$, $b_3 = b_4 = b_5= b_6$ and $b_7 = b_{8}$. For the angles we have $\varphi_1 = \varphi_2$ and $\varphi_3 = \ldots = \varphi_{10}$.
Recalling \eqref{betaz} and \eqref{eq: cell-energy} we find $\alpha$ in a small neighborhood of $\alpha_\ell^{\rm us}$ such that $$E_{\rm cell}(\mathcal{S}(\boldsymbol{x})) \ge - 3+ 4 v_3(\alpha) + 2v_3\big(2\arcsin(\sin\alpha \sin(\gamma/2)) \big).$$ Now taking $\gamma \le \gamma_\ell$ into account and recalling that $\alpha_\ell^{\rm us}$ is {\BBB the optimal angle from} Proposition \ref{eq: old main result}, we find \begin{align*} E_{\rm cell}(\boldsymbol{x}) & \ge E_{\rm cell}(\mathcal{S}(\boldsymbol{x})) \ge - 3 + 4 v_3(\alpha) + 2v_3\big(2\arcsin(\sin\alpha \sin(\gamma_\ell/2)) \big) \\& \ge - 3 + 4 v_3(\alpha_\ell^{\rm us}) + 2v_3\big(2\arcsin(\sin\alpha_\ell^{\rm us} \sin(\gamma_\ell/2)) \big) = E_{\rm cell}(\boldsymbol{x}_{\rm kink}^\ell), \end{align*} \RRR where the last step follows with \eqref{kink}. \EEE This shows \eqref{eq: deriv=0.2} and concludes the proof. \end{proof} The next lemma shows that a perturbation of the angles, which does not change the sum of the angles, essentially does not decrease the energy to first order. \begin{lemma}\label{lemma: conv} There exist $\ell_0 \in \mathbb{N}$ and a constant $C>0$ depending only on $v_2$ and $v_3$ such that for $\ell \ge \ell_0$ and each $\boldsymbol{w} =(\boldsymbol{w}_1,\ldots,\boldsymbol{w}_{10}) \in \mathbb{R}^{10}$ with $\boldsymbol{w} \cdot \boldsymbol{a}_j = 0$ for $j=1,2,3$ we have $$\sum_{i=1}^{10} \big( D \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink})) \big)_i \boldsymbol{w}_i \ge - C|\boldsymbol{w}|\ell^{-3}.$$ \end{lemma} \begin{proof} From Property 2 of Lemma \ref{lemma: T} we have that the image of the linear mapping $DT^a(\boldsymbol{x}^0)$ has dimension 7. Moreover, we have $(DT^a(\boldsymbol{x}^0) \boldsymbol{v}) \cdot \boldsymbol{a}_j = 0$ for $j=1,2,3$ and all $\boldsymbol{v} \in \mathbb{R}^{3 \times 8}$.
Indeed, \RRR write $\boldsymbol{v} = \boldsymbol{v}_{\rm good} + \boldsymbol{v}_{\rm bad}$ with $\boldsymbol{v}_{\rm good} \in {\rm span}(\mathcal{V}_{\rm good})$ and $\boldsymbol{v}_{\rm bad} \in {\rm span}(\mathcal{V}_{\rm degen} \cup \mathcal{V}_{\rm bad})$. \EEE Note that $DT^a(\boldsymbol{x}^0) \boldsymbol{v} = DT^a( \boldsymbol{x}^0) \boldsymbol{v}_{\rm good}$ \RRR by Property 1 of Lemma \ref{lemma: T}. \EEE For each $t \in \mathbb{R}$ the eight points $ \boldsymbol{x}^0 + t \boldsymbol{v}_{\rm good}$ are contained in the plane $\mathbb{R}^2 \times \lbrace 0 \rbrace$. This implies $T^a( \boldsymbol{x}^0 + t \boldsymbol{v}_{\rm good}) \cdot \boldsymbol{a}_j \in \lbrace 2\pi,4\pi \rbrace$ for all $t \in \mathbb{R}$ and $j=1,2,3$, which gives $(DT^a( \boldsymbol{x}^0) \boldsymbol{v}_{\rm good}) \cdot \boldsymbol{a}_j = 0$ for $j=1,2,3$, as desired. The dimension of the image of $DT^a(\boldsymbol{x}^0)$ together with the fact that $\boldsymbol{w} \cdot \boldsymbol{a}_j = 0$ for $j=1,2,3$ shows that there exists a vector $\boldsymbol{v}' \in {\rm span}(\mathcal{V}_{\rm good})$ such that $DT^a(\boldsymbol{x}^0) \boldsymbol{v}' = \boldsymbol{w}$. Applying Lemma \ref{lemma: in plane} we get $$ D \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink})) \big(D T(\boldsymbol{x}^\ell_{\rm kink}) \boldsymbol{v}'\big) \ge - C'|\boldsymbol{v}'| \ell^{-3}, $$ where $C'$ is the constant from Lemma \ref{lemma: in plane}. By a continuity argument and \eqref{eq: distance} we get $|D T(\boldsymbol{x}^\ell_{\rm kink}) - D T(\boldsymbol{x}^0)| \le c\ell^{-1}$. This together with Property 2 of Lemma \ref{tildeE} shows $$ D \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink})) \big(D T(\boldsymbol{x}^0) \boldsymbol{v}'\big) \ge - C|\boldsymbol{v}'| \ell^{-3} $$ for $C=C(C',c_{E,2},c)$. The fact that $DT^a(\boldsymbol{x}^0) \boldsymbol{v}' = \boldsymbol{w}$, $|\boldsymbol{v}'| \le c|\boldsymbol{w}|$ for a constant $c>0$ (depending on $DT^a(\boldsymbol{x}^0)$) and Property 1 of Lemma \ref{tildeE} conclude the proof.
\end{proof} We now improve Theorem \ref{th: cell convexity1} and prove convexity of $E_{\rm cell}$ at the kink configuration $\boldsymbol{x}^\ell_{\rm kink}$. \begin{theorem}[Convexity of $E_{\rm cell}$]\label{th: cell convexity3} Let $0 < r <1$. Then there exist $\ell_0 \in \mathbb{N}$ and a constant $c>0$ depending only on $v_2$, $v_3$, and $r$ such that for $\ell \ge \ell_0$ and each $\boldsymbol{v} \in \mathbb{R}^{3 \times 8}$ with $$|\boldsymbol{v} \cdot \boldsymbol{w}| \le r|\boldsymbol{w}||\boldsymbol{v}| \ \ \ \text{ for all } \ \ \ \boldsymbol{w} \in {\rm span}(\mathcal{V}_{\rm degen})$$ one has $$\boldsymbol{v}^TD^2E_{\rm cell}(\boldsymbol{x}^\ell_{\rm kink})\boldsymbol{v} \ge c|\boldsymbol{v}|^2\ell^{-2}.$$ \end{theorem} \begin{proof} As in the proof of Theorem \ref{th: cell convexity1} we consider the mapping $f_{\boldsymbol{v}}$ as defined before \eqref{eq: conv3-b}. The goal is to show $f''_{\boldsymbol{v}}(0) \ge c |\boldsymbol{v}|^2 \ell^{-2}$. We write $\boldsymbol{v} = \boldsymbol{v}_{\rm degen}+ \boldsymbol{v}_{\rm bad} + \boldsymbol{v}_{\rm good} $ with three orthogonal vectors, where $\boldsymbol{v}_{\rm degen}+ \boldsymbol{v}_{\rm bad} \in {\rm span}(\mathcal{V}_{\rm degen} \cup \mathcal{V}_{\rm bad})$, $\boldsymbol{v}_{\rm degen} \in {\rm span}(\mathcal{V}_{\rm degen})$, $\boldsymbol{v}_{\rm bad} \in {\rm span}(\mathcal{V}_{\rm degen})^\bot$, and $\boldsymbol{v}_{\rm good} \in {\rm span}(\mathcal{V}_{\rm degen} \cup \mathcal{V}_{\rm bad})^\bot$. By assumption we obtain after a short calculation \begin{align}\label{givenumber} |\boldsymbol{v}_{\rm good}|^2 + |\boldsymbol{v}_{\rm bad}|^2 \ge (1-r^2)|\boldsymbol{v}|^2. \end{align} Indeed, taking $\boldsymbol{w} = \boldsymbol{v}_{\rm degen}$ in the assumption gives $|\boldsymbol{v}_{\rm degen}|^2 = |\boldsymbol{v} \cdot \boldsymbol{v}_{\rm degen}| \le r|\boldsymbol{v}_{\rm degen}||\boldsymbol{v}|$, hence $|\boldsymbol{v}_{\rm degen}| \le r|\boldsymbol{v}|$, and \eqref{givenumber} follows from $|\boldsymbol{v}|^2 = |\boldsymbol{v}_{\rm degen}|^2 + |\boldsymbol{v}_{\rm bad}|^2 + |\boldsymbol{v}_{\rm good}|^2$. Set $c_* := \max\lbrace 2 c_{2}/c_1, (8c_3/(c_{E,1}c^2_1))^{1/2} \rbrace$ with $c_1,c_2$ from \eqref{eq: conv1}, $c_3$ from \eqref{eq: conv2}, and $c_{E,1}$ from Lemma \ref{tildeE}. First, we suppose $|\boldsymbol{v}_{\rm good}| \ge c_*|\boldsymbol{v}|\ell^{-1}$.
We use \eqref{eq: conv1} and $\boldsymbol{v}_{\rm good} \in {\rm span}(\mathcal{V}_{\rm degen} \cup \mathcal{V}_{\rm bad})^\bot$ to find $$ |D T(\boldsymbol{x}^\ell_{\rm kink})\boldsymbol{v}| \ge c_1|\boldsymbol{v}_{\rm good}| - c_{2}|\boldsymbol{v}| \ell^{-1} \ge \frac{c_1}{2}|\boldsymbol{v}_{\rm good}|.$$ Then by Property 3 of Lemma \ref{tildeE}, \eqref{eq: conv3-b}, and \eqref{eq: conv2} we get \begin{align*} f''_{\boldsymbol{v}}(0) &= \boldsymbol{v}^TD^2E_{\rm cell}(\boldsymbol{x}^\ell_{\rm kink})\boldsymbol{v} \ge \big(D T(\boldsymbol{x}^\ell_{\rm kink}) \boldsymbol{v}\big)^T D^2 \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink})) \big(D T(\boldsymbol{x}^\ell_{\rm kink}) \boldsymbol{v}\big) - c_3 |\boldsymbol{v}|^2 \ell^{-2} \\ & \ge c_{E,1}|D T(\boldsymbol{x}^\ell_{\rm kink})\boldsymbol{v}|^2 - c_3 |\boldsymbol{v}|^2 \ell^{-2} \ge \frac{c_{E,1} c^2_1}{4}|\boldsymbol{v}_{\rm good}|^2- c_3 |\boldsymbol{v}|^2 \ell^{-2} \ge \frac{c_{E,1} c^2_1c_*^2}{8\ell^2} |\boldsymbol{v}|^2. \end{align*} Now suppose $|\boldsymbol{v}_{\rm good}| < c_*|\boldsymbol{v}|\ell^{-1}$. Since the first term of $f_{\boldsymbol{v}}''(0)$ given in \eqref{eq: conv3-b} is nonnegative, it suffices to consider the second term of $f_{\boldsymbol{v}}''(0)$. First, using Property 1 of Lemma \ref{tildeE} we have \begin{align}\label{eq: conv6} \sum_{i=11}^{18} \big( D \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink})) \big)_i \, \big(\boldsymbol{v}^TD^2 T(\boldsymbol{x}^\ell_{\rm kink}) \boldsymbol{v}\big)_i =0.
\end{align} Define for brevity $\boldsymbol{w} =(\boldsymbol{v}_{\rm degen} + \boldsymbol{v}_{\rm bad})^T D^2 T^a(\boldsymbol{x}^\ell_{\rm kink}) (\boldsymbol{v}_{\rm degen} + \boldsymbol{v}_{\rm bad}) \in \mathbb{R}^{10}$ and note that $|\boldsymbol{v}_{\rm good}| < c_*\ell^{-1}|\boldsymbol{v}|$ implies \begin{align}\label{new estimate} \Big|(\boldsymbol{v}^T D^2 T^a(\boldsymbol{x}^\ell_{\rm kink}) \boldsymbol{v})_i - \boldsymbol{w}_i \Big| \le c_4|\boldsymbol{v}|^2\ell^{-1}, \ \ \ \ i=1,\ldots,10, \end{align} for $c_4$ depending on $c_*$. By Properties 3 and 4 in Lemma \ref{lemma: T}, \eqref{eq: distance}, and a continuity argument we obtain constants $0 < c_5 < c_6$ (depending on $c_{\rm kink}$) such that for $\ell$ sufficiently large \begin{align*} \boldsymbol{w} \cdot \boldsymbol{a}_j \le c_6|\boldsymbol{v}|^2 \ell^{-1}, \ \ \ j=1,2,3, \ \ \ \ \ \ \sum_{j=1}^{3} \boldsymbol{w} \cdot \boldsymbol{a}_j \le -c_5|\boldsymbol{v}_{\rm bad}|^2 + c_6|\boldsymbol{v}|^2 \ell^{-1}. \end{align*} Consequently, we can find a decomposition $\boldsymbol{w} = \boldsymbol{w}' + \boldsymbol{w}''$ with the property \begin{align*} &\boldsymbol{w}' \cdot \boldsymbol{a}_j = 0, \ \ \ j=1,2,3, \ \ \ \ |\boldsymbol{w}'| \le c_7|\boldsymbol{v}|^2,\\ & \sum_{i=1}^{10} w''_i \le -c_5|\boldsymbol{v}_{\rm bad}|^2 + c_6|\boldsymbol{v}|^2 \ell^{-1}, \ \ \ \ w''_i \le c_6|\boldsymbol{v}|^2 \ell^{-1}, \ \ \ i=1,\ldots,10 \end{align*} for a universal constant $c_7>0$. \BBB (Choose, e.g., $w_3' = w_3 - \boldsymbol{w} \cdot \boldsymbol{a}_1$, $w_7' = w_7 - \boldsymbol{w} \cdot \boldsymbol{a}_2$, $w_9' = w_9 - \boldsymbol{w} \cdot \boldsymbol{a}_3$, and $w_i' = w_i$ else.) \EEE Let $I = \lbrace i=1,\ldots,10| \ \boldsymbol{w}_i'' \le 0 \rbrace$ and note $\sum_{i \in I} \boldsymbol{w}_i'' \le \sum_{i=1}^{10} \boldsymbol{w}''_i $. 
Then using Property 2 of Lemma \ref{tildeE} and Lemma \ref{lemma: conv} we derive \begin{align*} \sum_{i=1}^{10} & \big( D \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink})) \big)_i \boldsymbol{w}_i \\&= \sum_{i=1}^{10} \big( D \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink})) \big)_i \boldsymbol{w}_i' +\sum_{i \in I} \big( D \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink})) \big)_i \boldsymbol{w}_i'' + \sum_{i \notin I} \big( D \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink})) \big)_i \boldsymbol{w}_i''\\ &\ge - C|\boldsymbol{w}'| \ell^{-3} + c_{E,1}\ell^{-2} \sum_{i \in I} - \boldsymbol{w}_i'' - 10\cdot c_{E,2}c_6|\boldsymbol{v}|^2 \ell^{-3} \\ &\ge - Cc_7|\boldsymbol{v}|^2 \ell^{-3} + c_{E,1}\ell^{-2}\big(c_5|\boldsymbol{v}_{\rm bad}|^2 - c_6|\boldsymbol{v}|^2 \ell^{-1}\big) - 10\cdot c_{E,2}c_6|\boldsymbol{v}|^2 \ell^{-3} , \end{align*} where $C$ is the constant from Lemma \ref{lemma: conv}. Moreover, again using Lemma \ref{tildeE} and \eqref{new estimate} we get $$\sum_{i=1}^{10} {\BBB \left| \big( D \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink})) \big)_i \Big(\boldsymbol{w}_i - \big(\boldsymbol{v}^T D^2 T(\boldsymbol{x}^\ell_{\rm kink}) \boldsymbol{v}\big)_i \Big) \right|}\le 10 \cdot c_{E,2}c_4 |\boldsymbol{v}|^2\ell^{-3}.$$ We then use \eqref{eq: conv3-b}, \eqref{eq: conv6}, and the previous two estimates to find \begin{align*} f''_{\boldsymbol{v}}(0) &= \boldsymbol{v}^TD^2E_{\rm cell}(\boldsymbol{x}^\ell_{\rm kink})\boldsymbol{v} \ge D \tilde{E}(T(\boldsymbol{x}^\ell_{\rm kink})) \big(\boldsymbol{v}^TD^2 T(\boldsymbol{x}^\ell_{\rm kink}) \boldsymbol{v}\big) \\& \ge c_{E,1}c_5|\boldsymbol{v}_{\rm bad}|^2 \ell^{-2} - c' |\boldsymbol{v}|^2 \ell^{-3}, \end{align*} where $c' =c'(C,c_{E,1},c_{E,2},c_4,c_5,c_6,c_7)$ large enough. Since $|\boldsymbol{v}_{\rm good}| < c_*|\boldsymbol{v}|\ell^{-1}$, we get $|\boldsymbol{v}_{\rm bad}|^2 \ge \frac{1}{2}(1-r^2)|\boldsymbol{v}|^2$ for $\ell_0$ large enough by \eqref{givenumber}.
Then $f''_{\boldsymbol{v}}(0) \ge c\ell^{-2}|\boldsymbol{v}|^2$ follows when we choose $\ell_0 \in \mathbb{N}$ sufficiently large (depending also on $r$). \end{proof} \subsection{Proof of Theorem \ref{th: Ered}} As a last preparation for the proof of Theorem \ref{th: Ered}, we need to investigate how the angles between planes behave under reflection of a configuration (see \eqref{reflexion}-\eqref{reflection2}). Let a center $z_{i,j,k}$ be given and, as before, denote by $\boldsymbol{x} \in \mathbb{R}^{3 \times 8}$ the atoms of the corresponding cell. We introduce the angles between the planes as in Section \ref{sec: main proof}. By $\theta_l(\boldsymbol{x})$ we denote the angle between the planes $\{x_1 x_3 x_4\}$ and $\{x_1 x_6 x_5\}$. By $\theta_r(\boldsymbol{x})$ we denote the angle between the planes $\{x_3 x_4 x_2\}$ and $\{x_2 x_5 x_6\}$. Moreover, we let $\theta^{\rm dual}_l(\boldsymbol{x}) = \theta(x_1)$ and $\theta^{\rm dual}_r(\boldsymbol{x}) = \theta(x_2)$ with $\theta(x_i)$, $i=1,2$, as defined in \eqref{eq: thetaangle}. Recall also the definition of $\Delta(z_{i,j,k})$ in \eqref{delta}. 
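For orientation, the angles just introduced admit a simple representation: the angle between two planes meeting in a line can be computed from unit vectors orthogonal to the line of intersection. A minimal sketch (with generic planes $P_1, P_2$ and direction $d$, which are placeholders and not objects of the proof):

```latex
% Angle between two planes P_1, P_2 meeting in a line with unit direction d:
% choose unit vectors s_1 in P_1 and s_2 in P_2 with s_i orthogonal to d; then
\begin{equation*}
\theta = \arccos\big( s_1 \cdot s_2 \big), \qquad s_i \in P_i, \quad s_i \perp d, \quad |s_i| = 1 .
\end{equation*}
```

This is precisely the representation $\theta_k(\boldsymbol{x}) = \arccos\big(s_1^k(\boldsymbol{x}) \cdot s_2^k(\boldsymbol{x})\big)$ used in the proof of Lemma \ref{lemma: angle invariance} below.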
\begin{lemma}[Symmetry defect controls angle defect]\label{lemma: angle invariance} There are a universal constant $C>0$ and $\ell_0 \in \mathbb{N}$, and for each $\ell \ge \ell_0$ there is $\eta_\ell >0$ such that for all $\tilde{\mathcal{F}} \in \mathscr{P}_{\eta_\ell}(\mu)$, $\mu \in (2.6,3.1)$, and all centers $z_{i,j,k}$ we have \begin{align*} &\theta_l(\mathcal{S}(\boldsymbol{x})) + \theta_r(\mathcal{S}(\boldsymbol{x})) \le \theta_l(\boldsymbol{x}) + \theta_r(\boldsymbol{x})+ C\Delta(z_{i,j,k}),\\ &\theta^{\rm dual}_l(\mathcal{S}(\boldsymbol{x})) + \theta^{\rm dual}_r(\mathcal{S}(\boldsymbol{x})) \le \theta^{\rm dual}_l(\boldsymbol{x}) + \theta^{\rm dual}_r(\boldsymbol{x}) + C\Delta(z_{i,j,k}), \end{align*} where $\boldsymbol{x} \in \mathbb{R}^{3 \times 8}$ denotes the positions of the atoms in the cell with center $z_{i,j,k}$ and $\mathcal{S}(\boldsymbol{x})$ as in \eqref{reflection2-b}. \end{lemma} We postpone the proof of this lemma to the end of the section and now continue with the proof of Theorem \ref{th: Ered}. \begin{proof}[Proof of Theorem \ref{th: Ered}] Let a configuration $\tilde{\mathcal{F}} \in \mathscr{P}_{\eta_\ell}(\mu)$ be given for $\eta_\ell$ to be specified below and let $\boldsymbol{x} \in \mathbb{R}^{3 \times 8}$ be the points of one cell as introduced in Section \ref{sec: main proof}. {\BBB As usual}, possibly after a rigid motion we can assume that the second and third components of \RRR $(x_1 + x_7)/2$, $(x_2+x_8)/2$ are zero \EEE and the points $x_4$, $x_5$ lie in a plane parallel to $\mathbb{R}^2 \times \lbrace 0 \rbrace$. We now perform a symmetrization argument as in the proof of Lemma \ref{lemma: in plane}. We define $\boldsymbol{x}_{S_1}$ by \eqref{s1s2}, and clearly the vector $\boldsymbol{w}_1 := \boldsymbol{x}_{S_1} - \boldsymbol{x}$ is perpendicular to $\mathcal{V}_{\rm trans}$.
Moreover, we have $|\boldsymbol{w}_1\cdot \boldsymbol{v}_i| \le r |\boldsymbol{w}_1||\boldsymbol{v}_i|$ for $i=1,2,3$ for a universal $0 < r < 1$, in particular independent of the perturbation $\boldsymbol{x}$. Indeed, for $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$ this follows from the fact that the points \RRR $(x_1 + x_7)/2 $ and $(x_2 + x_8)/2$ \EEE are left unchanged. For $\boldsymbol{v}_3$ it follows from the assumption that the points $x_4$, $x_5$ lie in a plane parallel to $\mathbb{R}^2 \times \lbrace 0 \rbrace$. Consequently, by Theorem \ref{th: cell convexity3}, a continuity argument, and the definition of the perturbations $\mathscr{P}_{\eta_\ell}(\mu)$, the mapping $t \mapsto E_{\rm cell}(\boldsymbol{x} + t \boldsymbol{w}_1)$ is strictly convex on $[0,1]$ if $\eta_\ell$ is chosen small enough (independent of $\boldsymbol{x}$). This implies for $\boldsymbol{x}' = \frac{1}{2}(\boldsymbol{x} + \boldsymbol{x}_{S_1})$ (see \eqref{reflection2-a}) that $E_{\rm cell}(\boldsymbol{x}') + c\ell^{-2}|\boldsymbol{w}_1|^2 \le \frac{1}{2}(E_{\rm cell}(\boldsymbol{x}) + E_{\rm cell}(\boldsymbol{x}_{S_1})) = E_{\rm cell}(\boldsymbol{x})$, where $c$ only depends on the constant from Theorem \ref{th: cell convexity3}. Likewise, we consider {\BBB $\boldsymbol{x}'_{S_2} := \boldsymbol{x}_{\rm kink}^\ell + S_2(\boldsymbol{x}'- \boldsymbol{x}_{\rm kink}^\ell)$ } and similarly as before, the vector {\BBB $\boldsymbol{w}_2 := \boldsymbol{x}'_{S_2} - \boldsymbol{x}'$ } is perpendicular to $\mathcal{V}_{\rm trans}$ and satisfies $|\boldsymbol{w}_2\cdot \boldsymbol{v}_i| \le r |\boldsymbol{w}_2||\boldsymbol{v}_i|$ for $i=1,2,3$ for a universal $0 < r < 1$. Indeed, for $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$ this follows as before and for $\boldsymbol{v}_3$ it suffices to note that also for the configuration $\boldsymbol{x}' = (x_1',\ldots,x_8')$ the points $x'_4$, $x'_5$ lie in a plane parallel to $\mathbb{R}^2 \times \lbrace 0 \rbrace$.
\RRR Using again Theorem \ref{th: cell convexity3} we get $E_{\rm cell} (\mathcal{S}(\boldsymbol{x})) + c\ell^{-2}|\boldsymbol{w}_2|^2\le E_{\rm cell}(\boldsymbol{x}') $ with $\mathcal{S}(\boldsymbol{x})$ from \eqref{reflection2-b}. Possibly passing to a smaller not relabeled constant $c>0$ and using \eqref{delta}, we observe $$E_{\rm cell}(\mathcal{S}(\mathbf{x})) + c\ell^{-2}\Delta(z_{i,j,k}) \le E_{\rm cell}(\boldsymbol{x}).$$ \EEE By this symmetrization procedure we get that the eight points $\mathcal{S}(\mathbf{x})$ satisfy the symmetry conditions stated in \eqref{sym-assumption}. {\BBB In particular, $\widetilde\mu$ from \eqref{sym-assumption} is here equal to $|z^{\rm dual}_{i,j,k} - z^{\rm dual}_{i,j-1,k}|$, the latter quantity being unchanged after symmetrization \RRR since the second and third component of $z^{\rm dual}_{i,j,k}, z^{\rm dual}_{i,j-1,k}$ are assumed to be zero. \EEE} {\RRR Choose $M^\ell$ and $\eta_\ell$ small enough such that $|\lambda_1 - 1| + |\lambda_3 - 1| \le \ell^{-4}$, and $|\gamma_1 - \gamma_2| \le \ell^{-2}$ with $\lambda_1, \lambda_3,\gamma_1,\gamma_2$ from \eqref{sym-assumption}. This choice of $M^\ell$ is possible thanks to Property 2 in Proposition \ref{th: main2}}. Consequently, by Lemma \ref{lemma: sym-energy} we obtain $$ E_{\rm cell}(\boldsymbol{x}) = E_{\rm cell}(z_{i,j,k}) \ge E_{{\BBB \widetilde\mu},\gamma_1,\gamma_2}^{{\rm sym}}(\lambda_2,\alpha_1,\alpha_2) + c\ell^{-2}\Delta(z_{i,j,k}) - c_0 \ell^{-4} (\gamma_1 - \gamma_2)^2. $$ Using Property 2 of Proposition \ref{th: mainenergy} and \eqref{red} we get for $\ell_0$ sufficiently large \begin{align}\label{sym-red} E_{\rm cell}(z_{i,j,k}) \ge E_{\rm red}({\BBB \widetilde\mu}, \bar{\gamma},\bar{\gamma} ) + c\ell^{-2}\Delta(z_{i,j,k}), \end{align} where $\bar{\gamma} = (\gamma_1 + \gamma_2)/2$. 
By Lemma \ref{lemma: angle invariance} we obtain $\bar{\gamma} \le \bar{\theta}(z_{i,j,k}) + C\Delta(z_{i,j,k})$, where $\bar{\theta}(z_{i,j,k}) = \big(\theta_l(z_{i,j,k}) + \theta_r(z_{i,j,k}) + \theta_l(z^{\rm dual}_{i,j,k}) + \theta_r(z^{\rm dual}_{i,j-1,k}) \big)/4$. Thus, by the monotonicity of the reduced energy (see Property 3 of Proposition \ref{th: mainenergy}) and a Taylor expansion for the mapping $\gamma \mapsto E_{\rm red}({\BBB \widetilde\mu},\gamma,\gamma)$ we get \begin{align} E_{\rm red}({\BBB\widetilde\mu},\bar{\gamma},\bar{\gamma})& \ge E_{\rm red}\Big({\BBB\widetilde\mu},\bar{\theta}(z_{i,j,k}) , \bar{\theta}(z_{i,j,k}) \Big) -C\ell^{-3} \Delta(z_{i,j,k}) + {\rm O}\big((\Delta(z_{i,j,k}) )^2 \big) \notag\\ & \ge E_{\rm red}\Big({\BBB \widetilde\mu},\bar{\theta}(z_{i,j,k}) , \bar{\theta}(z_{i,j,k}) \Big) -\RRR 2 \EEE C\ell^{-3} \Delta(z_{i,j,k}) \label{sym-red2} \end{align} for $C>0$ large enough \RRR depending on $v_3$, \EEE where the last step follows for $\eta_\ell$ sufficiently small. The assertion of the theorem now follows for $\ell_0$ sufficiently large and $\ell \ge \ell_0$ from \eqref{sym-red}, \eqref{sym-red2}, {\BBB { and the fact that $\widetilde\mu = |z^{\rm dual}_{i,j,k} - z^{\rm dual}_{i,j-1,k}|$.}} \end{proof} Finally, we give the proof of Lemma \ref{lemma: angle invariance}. \begin{proof}[Proof of Lemma \ref{lemma: angle invariance}] The proof is mainly based on a careful Taylor expansion for the angles under the symmetrization of the atomic positions in the cell, which is induced by the reflections \eqref{reflexion}. In particular, the arguments for the angles $\theta_l, \theta_r$ and for the dual angles $\theta_l^{\rm dual}$, $\theta_r^{\rm dual}$ are very similar and differ only in notation. Therefore, we concentrate on the first inequality in the following. Let the configuration $\boldsymbol{x}$ be given for a center $z_{i,j,k}$.
Let $n^l_{1}(\boldsymbol{x})$ and $n^l_2(\boldsymbol{x})$ be \RRR unit \EEE normal vectors of the planes $\{x_1 x_3 x_4\}$ and $\{x_1 x_6 x_5\}$. Likewise, let $n^r_{1}(\boldsymbol{x})$ and $n^r_2(\boldsymbol{x})$ be unit normal vectors of the planes $\{ x_2 x_4 x_3 \}$ and $\{ x_2 x_5 x_6\}$. Let $n_l(\boldsymbol{x})$ and $n_r(\boldsymbol{x})$ be unit vectors perpendicular to $n^l_{1}(\boldsymbol{x}), n^l_2(\boldsymbol{x})$ and $n^r_{1}(\boldsymbol{x}), n^r_2(\boldsymbol{x})$, respectively. Let $s_1^l(\boldsymbol{x})$ be a unit vector perpendicular to $n_l(\boldsymbol{x})$, $n_1^l(\boldsymbol{x})$ and let $s_2^l(\boldsymbol{x})$ be a unit vector perpendicular to $n_l(\boldsymbol{x})$, $n_2^l(\boldsymbol{x})$ such that $s_1^l(\boldsymbol{x}) \cdot s_2^l(\boldsymbol{x})$ is near $-1$. Likewise, we define $s_1^r(\boldsymbol{x})$, $s_2^r(\boldsymbol{x})$. Note that these objects can be chosen smoothly with respect to $\boldsymbol{x}$ and that the angle in \eqref{eq: thetaangle} can be expressed as $$\theta_k(\boldsymbol{x}) = \arccos\big(s_1^k(\boldsymbol{x}) \cdot s_2^k(\boldsymbol{x})\big) \ \ \ \text{for } \ \ k=l,r. $$ We also introduce the mapping \begin{align}\label{concave0} g(\boldsymbol{x}) = \arccos\big(s_1^l(\boldsymbol{x}) \cdot s_2^l(\boldsymbol{x})\big) + \arccos \big( s_1^r(\boldsymbol{x}) \cdot s_2^r(\boldsymbol{x}) \big). \end{align} \emph{Step I.} Recall from the definition in \eqref{reflection2}, \eqref{delta} that there are two vectors $\boldsymbol{w}_1,\boldsymbol{w}_2 \in \mathbb{R}^{3 \times 8}$ such that the symmetrized configurations can be expressed as $\boldsymbol{x}' = \boldsymbol{x} + \boldsymbol{w}_1$ and $\mathcal{S}(\boldsymbol{x}) = \boldsymbol{x}' + \boldsymbol{w}_2$ with \begin{align}\label{bad/good} |\boldsymbol{w}_1|^2 + |\boldsymbol{w}_2|^2 \RRR = \EEE \Delta(z_{i,j,k}). \end{align}
The goal will be to investigate the Hessian of $g$ and to show \begin{align}\label{concave3} \boldsymbol{w}_1^TD^2g(\boldsymbol{x}')\boldsymbol{w}_1 + \boldsymbol{w}_2^TD^2g(\mathcal{S}(\boldsymbol{x}))\boldsymbol{w}_2 \ge -C(|\boldsymbol{w}_1|^2 + |\boldsymbol{w}_2|^2) \end{align} for $C>0$ universal. We defer the proof of \eqref{concave3} and first show that the assertion follows from it. We consider the mappings \begin{align}\label{eq: f1,f2} f_1(t) = g(\boldsymbol{x}' + t\boldsymbol{w}_1 ), \ \ \ \ f_2(t) = g(\mathcal{S}(\boldsymbol{x}) + t\boldsymbol{w}_2 ) \ \ \ \ \text{for} \ \ t\in [-1,1] \end{align} and observe that $f_1(-1) = g(\boldsymbol{x})$, $f_2(-1) = g(\boldsymbol{x}')$, {\BBB $f_1(1)=g(\boldsymbol{x}_{S_1})$, $f_2(1)=g(\boldsymbol{x}'_{S_2})$, where $\boldsymbol{x}_{S_1}=\boldsymbol{x}_{\rm kink}^\ell+S_1(\boldsymbol{x}-\boldsymbol{x}_{\rm kink}^\ell)$ and $\boldsymbol{x}'_{S_2}=\boldsymbol{x}_{\rm kink}^\ell+S_2(\boldsymbol{x}'-\boldsymbol{x}_{\rm kink}^\ell)$, see \eqref{reflexion}-\eqref{s1s2}.} Moreover, due to the fact that the symmetrized configurations are obtained by applying the reflections $S_1,S_2$, see \eqref{reflexion}, we get that $f_1,f_2$ are smooth, even functions; in particular, $f'_1(0) = f'_2(0) = 0$. Thus, by a Taylor expansion we find $\xi \in (-1,0)$ such that $$ g(\boldsymbol{x}) = f_1(-1) = f_1(0) - f_1'(0) + \frac{1}{2} f_1''(0) - \frac{1}{6}f_1'''(\xi) \ge g(\boldsymbol{x}' ) + \frac{1}{2}\boldsymbol{w}_1^TD^2g(\boldsymbol{x}')\boldsymbol{w}_1 - C|\boldsymbol{w}_1|^3, $$ where $C>0$ is a universal constant. Indeed, the constant is independent of $\boldsymbol{x}$ as all admissible $\boldsymbol{x}$ \RRR lie in a compact neighborhood of $\boldsymbol{x}_{\rm kink}^\ell$ where $g$ is smooth.
\EEE Applying Taylor once more, we get $$ g(\boldsymbol{x}) \ge g(\mathcal{S}(\boldsymbol{x}) ) + \frac{1}{2}\boldsymbol{w}_1^TD^2g(\boldsymbol{x}')\boldsymbol{w}_1 + \frac{1}{2}\boldsymbol{w}_2^TD^2g(\mathcal{S}(\boldsymbol{x}))\boldsymbol{w}_2 - C|\boldsymbol{w}_1|^3 - C|\boldsymbol{w}_2|^3.$$ Then we conclude for $\eta_\ell$ sufficiently small (and thus $|\boldsymbol{w}_1|, |\boldsymbol{w}_2|$ small) by \eqref{bad/good}-\eqref{concave3} $$ g(\boldsymbol{x}) \ge g(\mathcal{S}(\boldsymbol{x}) ) - C (|\boldsymbol{w}_1|^2 + |\boldsymbol{w}_2|^2) \RRR = \EEE g(\mathcal{S}(\boldsymbol{x}) ) - C\Delta(z_{i,j,k}).$$ Recalling \eqref{concave0} we obtain the assertion of the lemma. \emph{Step II.} It remains to confirm \eqref{concave3}. We first concern ourselves with the Hessian of the mapping $f_1$ as defined in \eqref{eq: f1,f2}. For $t \in [-1,1]$ we let $u^k_j(t) = s^k_j(\boldsymbol{x}' + t\boldsymbol{w}_1)$ for $j=1,2$ and $k=l,r$. By a Taylor expansion we obtain \begin{align}\label{expansion} u^k_j(t) = s^k_j(\boldsymbol{x}') + \big( v^{1,k}_j + w^{1,k}_j \big) t + \big( v^{2,k}_j + w^{2,k}_j\big) t^2 + {\rm O}(|\boldsymbol{w}_1|^3t^3) \ \ \ \ \text{ with $ |u^k_j(t)| = 1$}, \end{align} where $v^{1,k}_j,v^{2,k}_j$ are perpendicular to $n_k(\boldsymbol{x}')$ and $w^{1,k}_j,w^{2,k}_j$ are parallel to $n_k(\boldsymbol{x}')$ such that $\sum_{j=1,2} \sum_{k=l,r}(|v^{1,k}_j| + |w^{1,k}_j|) \le C|\boldsymbol{w}_1|$ and $\sum_{j=1,2} \sum_{k=l,r}(|v^{2,k}_j| + |w^{2,k}_j|) \le C|\boldsymbol{w}_1|^2$. (The constant $C$ is again universal as all admissible $\boldsymbol{x}$ lie in a compact set and the mappings $s^k_j$ are smooth.) 
{\BBB For $j=1,2$ and $k=l,r$, the two vectors $w_j^{1,k}$ and $w_{j}^{2,k}$ are orthogonal to $s_j^k(\boldsymbol{x}')$, and taking the first and the second derivative of the constraint $|s^k_j(\boldsymbol{x}'+t\boldsymbol{w}_1)|^2 = |u^{k}_j(t)|^2 = 1$ with respect to $t$ } yields by an elementary computation \begin{align}\label{u2} (a) \ \ v^{1,k}_j \cdot s^k_j(\boldsymbol{x}') = 0, \ \ \ \ \ \ (b) \ \ |v^{1,k}_j|^2 + |w^{1,k}_j|^2 + 2s^{k}_j(\boldsymbol{x}') \cdot v^{2,k}_j = 0. \end{align} Then we compute by \eqref{eq: f1,f2} \begin{align*} f_1(t) & = \sum_{k=l,r} \arccos\Big(s^k_1(\boldsymbol{x}') \cdot s^k_2(\boldsymbol{x}') + \big( v_1^{1,k} \cdot s^k_2(\boldsymbol{x}') + v_2^{1,k} \cdot s^k_1(\boldsymbol{x}') \big)t \\& \ \ \ \ \ \ \ \ \ + \big( v_1^{2,k} \cdot s^k_2(\boldsymbol{x}') + v_2^{2,k} \cdot s^k_1(\boldsymbol{x}')+ v_1^{1,k} \cdot v_2^{1,k} + w_1^{1,k} \cdot w_2^{1,k} \big) t^2 + {\rm O}(|\boldsymbol{w}_1|^3t^3) \Big). \end{align*} \RRR A Taylor expansion and the fact that $f_1$ is even yield $ f_1(t) -f_1(0) = f''_1(0)t^2/2 + {\rm O}(|\boldsymbol{w}_1|^3t^3)$. More precisely, \EEE we get, recalling $s^k_1(\boldsymbol{x}') \cdot s^k_2(\boldsymbol{x}') = \cos(\theta_k(\boldsymbol{x}'))$ for $k=l,r$, \begin{align} f_1(t) -f_1(0) & = \sum_{k=l,r} \arccos'(\cos(\theta_k(\boldsymbol{x}'))) \big( v_1^{2,k} \cdot s^k_2(\boldsymbol{x}') + v_2^{2,k} \cdot s^k_1(\boldsymbol{x}')+ v_1^{1,k} \cdot v_2^{1,k} + w_1^{1,k} \cdot w_2^{1,k} \big) t^2 \notag \\& \ \ \ + \sum_{k=l,r}\frac{1}{2}\arccos''(\cos(\theta_k(\boldsymbol{x}'))) \big( v_1^{1,k} \cdot s^k_2(\boldsymbol{x}') + v_2^{1,k} \cdot s^k_1(\boldsymbol{x}') \big)^2t^2 + {\rm O}(|\boldsymbol{w}_1|^3t^3). \label{taylor1} \end{align} We get $|v_1^{1,k} \cdot s^k_2(\boldsymbol{x}')| =|v_1^{1,k}|\sin(\theta_k(\boldsymbol{x}')) $ by \eqref{u2}(a). 
This together with \eqref{u2}(b) and $|v_1^{2,k}| \le C|\boldsymbol{w}_1|^2$ yields for $k= l,r$ \begin{align*} v_1^{2,k} \cdot s^k_2(\boldsymbol{x}') &= \Big( (v_1^{2,k}\cdot s^k_1(\boldsymbol{x}')) s^k_1(\boldsymbol{x}') + |v_1^{1,k} |^{-2} (v_1^{2,k}\cdot v_1^{1,k})v_1^{1,k} \Big) \cdot s^k_2(\boldsymbol{x}')\\& = -\frac{1}{2}(|v_1^{1,k}|^2 + |w_1^{1,k}|^2)\cos(\theta_k(\boldsymbol{x}')) + |v_1^{1,k} |^{-2}(v_1^{2,k}\cdot v_1^{1,k}) (v_1^{1,k} \cdot s^k_2(\boldsymbol{x}')) \\ & \le -\frac{1}{2}(|v_1^{1,k}|^2 + |w_1^{1,k}|^2)\cos(\theta_k(\boldsymbol{x}')) + C\sin(\theta_k(\boldsymbol{x}'))|\boldsymbol{w}_1|^2, \end{align*} and repeating the same calculation for $v_2^{2,k}$, we derive for $k=l,r$ \begin{equation}\label{cross12} \big( v_1^{2,k} \cdot s^k_2(\boldsymbol{x}') + v_2^{2,k} \cdot s^k_1(\boldsymbol{x}') \big) \le \sum_{j=1,2} -\frac{1}{2}(|v_j^{1,k}|^2 + |w_j^{1,k}|^2)\cos(\theta_k(\boldsymbol{x}')) +C\sin(\theta_k(\boldsymbol{x}'))|\boldsymbol{w}_1|^2. \end{equation} Note that $v_1^{1,k} \cdot v_2^{1,k} = |v_1^{1,k}||v_2^{1,k}| q \cos(\theta_k(\boldsymbol{x}'))$ for $q \in \lbrace -1 , 1 \rbrace$ by \eqref{u2}(a). An elementary computation then yields \begin{align}\label{taylor3} \big(v_1^{1,k} \cdot s^k_2(\boldsymbol{x}') + v_2^{1,k} \cdot s^k_1(\boldsymbol{x}')\big)^2 \ = \sin^2(\theta_k(\boldsymbol{x}')) (|v_1^{1,k}|-q|v_2^{1,k}| )^2. 
\end{align} Combining {\BBB \eqref{taylor1}-\eqref{cross12}-\eqref{taylor3}} and using that $\arccos'(x) = -(1-x^2)^{-1/2}$ and that $\arccos''(x)=-x(1-x^2)^{-3/2}$, we find \begin{align} f_1&(t) -f_1(0) \notag\\ & \ge \sum_{k=l,r} -\sin(\theta_k(\boldsymbol{x}'))^{-1}\Big( \sum_{j=1,2} - \frac{1}{2}(|v_j^{1,k}|^2 + |w_j^{1,k}|^2)\cos(\theta_k(\boldsymbol{x}')) + C\sin(\theta_k(\boldsymbol{x}'))|\boldsymbol{w}_1|^2\notag\\& \ \ \ + w_1^{1,k} \cdot w_2^{1,k} + |v_1^{1,k}||v_2^{1,k}| q \cos(\theta_k(\boldsymbol{x}')) \Big)t^2 \notag \\& \ \ \ - \frac{1}{2}\cos(\theta_k(\boldsymbol{x}')) (1-\cos^2(\theta_k(\boldsymbol{x}')))^{-3/2}\sin^2(\theta_k(\boldsymbol{x}')) (|v_1^{1,k}| - q|v_2^{1,k}|)^2 t^2 + {\rm O}(|\boldsymbol{w}_1|^3t^3)\notag \\ & = \sum_{k=l,r} -\sin(\theta_k(\boldsymbol{x}'))^{-1}\Big( \sum_{j=1,2} - \frac{1}{2}|w_j^{1,k}|^2\cos(\theta_k(\boldsymbol{x}')) + w_1^{1,k} \cdot w_2^{1,k} \Big)t^2 - C|\boldsymbol{w}_1|^2t^2 + {\rm O}(|\boldsymbol{w}_1|^3t^3)\notag \\ & \ge \sum_{k=l,r} -\sin(\theta_k(\boldsymbol{x}'))^{-1}\Big( \sum_{j=1,2} \frac{1}{2}|w_j^{1,k}|^2 + w_1^{1,k} \cdot w_2^{1,k} \Big)t^2 - C|\boldsymbol{w}_1|^2t^2 + {\rm O}(|\boldsymbol{w}_1|^3t^3). \label{taylor7} \end{align} In the last step we used that \RRR $\cos\theta\ge -1$. \EEE Before we proceed let us note that the same computation can be repeated for the second mapping $f_2$ defined in \eqref{eq: f1,f2}: considering an expansion as in \eqref{expansion} with $s^k_j(\mathcal{S}(\boldsymbol{x}))$ in place of $s^k_j(\boldsymbol{x}')$ and indicating the vectors by $\hat{v}^{i,k}_j$ and $\hat{w}^{i,k}_j$ (perpendicular and parallel to $n_k(\mathcal{S}(\boldsymbol{x}))$, respectively) we also obtain \begin{align}\label{taylor8} f_2&(t) -f_2(0) \notag\\ &\ge \sum_{k=l,r} -\frac{1}{\sin(\theta_k(\mathcal{S}(\boldsymbol{x})))}\Big( \sum_{j=1,2} \frac{1}{2}|\hat{w}_j^{1,k}|^2 + \hat{w}_1^{1,k} \cdot \hat{w}_2^{1,k} \Big)t^2 -C|\boldsymbol{w}_2|^2 t^2 + {\rm O}(|\boldsymbol{w}_2|^3t^3). 
\end{align} \emph{Step III.} We now investigate \eqref{taylor7}-\eqref{taylor8} in more detail. Consider first $f_1$. Due to the symmetry of the setting induced by the reflection $S_1$ (recall \eqref{reflexion}) we find $u^k_1(t)\cdot n_k(\boldsymbol{x}') = u^k_2(-t)\cdot n_k(\boldsymbol{x}') $ for $k=l,r$. In particular, {\BBB taking the derivative in $t$ and using \eqref{u2}(a),} this implies $w_1^{1,k} = -w_2^{1,k}$. Then by \eqref{taylor7} we obtain $$f_1(t) -f_1(0) \ge -C|\boldsymbol{w}_1|^2t^2 + {\rm O}(|\boldsymbol{w}_1|^3t^3)$$ and therefore taking $t \to 0$ we get $ \boldsymbol{w}_1^TD^2g(\boldsymbol{x}')\boldsymbol{w}_1 \ge -C|\boldsymbol{w}_1|^2$, which establishes the first part of \eqref{concave3}. Now consider $f_2$. Notice that one can show $\hat{w}_1^{1,k} = \hat{w}_2^{1,k} $ for $k=l,r$ by symmetry, so we cannot repeat the same argument as for $f_1$. However, in this case we can show \begin{align}\label{taylor9} |\hat{w}_1^{1,l}| + |\hat{w}_1^{1,r}| +|\hat{w}_2^{1,l}| + |\hat{w}_2^{1,r}| \le C|\boldsymbol{w}_2| \ell^{-1}. \end{align} Once this is proved, the assertion follows. Indeed, due to symmetry of $\mathcal{S}(\boldsymbol{x})$ we observe that $\theta_l(\mathcal{S}(\boldsymbol{x})) = \theta_r(\mathcal{S}(\boldsymbol{x}))$, denoted by $\varphi$ in the following. Recalling \eqref{kink} and the fact that $\mathcal{S}(\boldsymbol{x})$ is near $\boldsymbol{x}_{\rm kink}^\ell$, we get $\varphi \le \pi- c\ell^{-1}$ and $\sin(\varphi) \ge c\ell^{-1}$ for some $c>0$. Then by \eqref{taylor8} we have $$f_2(t) -f_2(0) \ge - C|\boldsymbol{w}_2|^2t^2 - C\ell \cdot |\boldsymbol{w}_2|^2 \ell^{-2} t^2 + {\rm O}(|\boldsymbol{w}_2|^3t^3),$$ which shows the second part of \eqref{concave3}. Let us finally show \eqref{taylor9}. Recall the definition of the \RRR unit \EEE normal vectors $n_1^k(\boldsymbol{x}), n_2^k(\boldsymbol{x})$, and $n_k(\boldsymbol{x})$ introduced before \eqref{concave0} for $k=l,r$. 
Observe that for symmetry reasons we have $n_k(\mathcal{S}(\boldsymbol{x})) = \pm e_1$ and $|n_j^k(\mathcal{S}(\boldsymbol{x})) \cdot e_2| = \sin(\frac{\pi - \varphi}{2})$ for $j=1,2$, $k=l,r$. Then a continuity argument gives $|n_k(\boldsymbol{x}') \cdot e_3| \le C|\boldsymbol{w}_2|$ and $|n^k_j(\boldsymbol{x}') \cdot e_2| \le \sin(\frac{\pi - \varphi}{2}) + C|\boldsymbol{w}_2|$ for $k=l,r$ and $j=1,2$. Moreover, as $\boldsymbol{x}'$ is invariant under the reflection $S_1$ (recall \eqref{reflexion}), we get $n_k(\boldsymbol{x}') \cdot e_2 = 0$. By definition of $s^k_j(\boldsymbol{x}')$ this implies $$|s^k_j(\boldsymbol{x}') \cdot e_1| = \big|\big(n_k(\boldsymbol{x}') \times n^k_j(\boldsymbol{x}') \big) \cdot e_1\big| {\BBB =|n_k(\boldsymbol{x}') \cdot e_3| |n^k_j(\boldsymbol{x}') \cdot e_2|} \le C\sin(\frac{\pi - \varphi}{2})|\boldsymbol{w}_2| + C|\boldsymbol{w}_2|^2.$$ For a small enough perturbation parameter $\eta_\ell$ we get $|\boldsymbol{w}_2| \le \ell^{-1}$ and thus $|s^k_j(\boldsymbol{x}') \cdot e_1| \le C|\boldsymbol{w}_2|\ell^{-1}$ \RRR since $\sin(\frac{\pi - \varphi}{2}) \le c\ell^{-1}$ by \eqref{kink}. \EEE As $s^k_j(\boldsymbol{x}')\cdot e_1 = s^k_j(\mathcal{S}(\boldsymbol{x}))\cdot e_1 - \hat{w}_j^{1,k} + {\rm O}(|\boldsymbol{w}_2|^2) = - \hat{w}_j^{1,k} + {\rm O}(|\boldsymbol{w}_2|^2) $ \RRR (see \eqref{expansion} and use the fact that $s^k_j(\mathcal{S}(\boldsymbol{x}))\cdot e_1 = 0$), \EEE this shows \eqref{taylor9} and concludes the proof. \end{proof} \section*{Acknowledgements} \RRR M.F. acknowledges support from the Alexander von Humboldt Stiftung. E.M. acknowledges support from the Austrian Science Fund (FWF) project M 1733-N20. U.S. acknowledges support from the Austrian Science Fund (FWF) projects P~27052, I~2375, and F~65 and from the Vienna Science and Technology Fund (WWTF) through project MA14-009. 
The authors would like to acknowledge the kind hospitality of the Erwin Schr\"odinger International Institute for Mathematics and Physics, where part of this research was developed within the framework of the Thematic Program {\it Nonlinear Flows}. \EEE \vspace{15mm} \bibliographystyle{alpha}
\section{Introduction} Let $(Y_n)_{n \in \mathbb{N}}$ be a sequence of random variables which admit a limit in law $Y_\infty$ as $n$ goes to infinity. We assume that the distribution of $Y_\infty$ is absolutely continuous with respect to the Lebesgue measure; thus, there is a density $p(x)$ such that $$\mathbb{P}[Y_\infty\in (a,b)] = \int_a^b p(x)\DD{x}.$$ The convergence in law $Y_n \rightharpoonup Y_\infty$ then amounts to $$\lim_{n \to \infty} \mathbb{P}[Y_n\in (a,b)] = \int_a^b p(x)\DD{x}$$ for any $a<b$. In this setting, a \emph{local limit theorem} for the sequence $(Y_n)_{n \in \mathbb{N}}$ is a statement of the following form: for some sequence $(s_n)_{n \in \mathbb{N}}$ going to $+\infty$, and any $x,a,b \in \mathbb{R}$, \begin{equation} \lim_{n \to \infty} s_n\,\,\mathbb{P}\!\left[Y_n-x \in \frac{1}{s_n}(a,b)\right] = p(x)\,(b-a).\label{eq:locallimit1} \end{equation} Thus, we are interested in the asymptotics of the probability for $Y_n$ to fall in a small interval of size $(s_n)^{-1}$. More generally, given a bounded measurable subset $B$ whose boundary $\partial B$ has Lebesgue measure $m(\partial B)=0$, we want to prove that for some scales $s_n \to +\infty$, \begin{equation} \lim_{n \to \infty} s_n\,\,\mathbb{P}\!\left[Y_n-x \in \frac{1}{s_n}\,B\right] = p(x)\,m(B).\label{eq:locallimit2} \end{equation} Notice that the convergence in law $Y_n \rightharpoonup Y_\infty$ does not imply this kind of result. Besides, for many convergent sequences $(Y_n)_{n\in \mathbb{N}}$, there exist some scales $(s_n)_{n\in \mathbb{N}}$ for which the probability on the left-hand side of Equation \eqref{eq:locallimit2} cannot be equivalent to $p(x)\,m(B)/s_n$. For instance, if $Y_n = \frac{N_n}{s_n}$ is a renormalisation of an \emph{integer-valued} statistic $N_n$, then at the scale $s_n$, Equation \eqref{eq:locallimit1} cannot be true, because $a,b \mapsto \mathbb{P}[N_n \in (a,b)]$ is not continuous in $a$ and $b$. 
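To illustrate the role of the scale $(s_n)_{n\in\mathbb{N}}$ in \eqref{eq:locallimit1}, the following sketch (a numerical illustration of ours, not part of the original argument; the binomial model and all parameter choices are ours) tests the Gaussian local approximation for the rescaled fair-coin sum $Y_n = (S_n - n/2)/\sqrt{n/4}$, with $x=0$, $(a,b)=(0,1)$ and the intermediate scale $s_n = n^{1/4}$; a half-open interval is used to avoid endpoint effects on the lattice.

```python
import math

def binom_pmf(n, k):
    # binomial(n, 1/2) pmf, computed in log-space to avoid overflow
    return math.exp(math.lgamma(n + 1) - math.lgamma(k + 1)
                    - math.lgamma(n - k + 1) - n * math.log(2.0))

n = 10_000
B_n = math.sqrt(n / 4)          # Y_n = (S_n - n/2)/B_n converges to N(0, 1)
s_n = n ** 0.25                 # intermediate scale: 1 << s_n << B_n
# P[Y_n in [0, 1/s_n)] = P[S_n in [n/2, n/2 + B_n/s_n)]
lo = n // 2
hi = lo + B_n / s_n
prob = sum(binom_pmf(n, k) for k in range(lo, math.ceil(hi)))
lhs = s_n * prob                    # left-hand side of the local limit theorem
rhs = 1 / math.sqrt(2 * math.pi)    # p(0) * (b - a) with (a, b) = (0, 1)
```

At the maximal scale $s_n = B_n$, by contrast, the rescaled interval contains at most one lattice point, and the approximation breaks down exactly as described above for integer-valued statistics.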
The goal of this paper is to show that in the setting of mod-$\phi$ convergent sequences, there is a large range of scales $(s_n)_{n \in \mathbb{N}}$ for which the local limit theorem is satisfied. \subsection{Mod-\texorpdfstring{$\phi$}{phi} convergence} We start by recalling the notion of mod-$\phi$ convergent sequences, which has been introduced in \cite{JKN11,DKN15,FMN16}. \begin{definition} Let $D\subseteq \mathbb{C}$ be a subset of the complex plane containing $0$, and $(X_n)_{n\in\mathbb{N}}$ be a sequence of real-valued random variables whose moment generating functions $\mathbb{E}[\mathrm{e}^{zX_n}]$ are well defined over $D$. We also fix a non-constant infinitely divisible distribution $\phi$ whose Laplace transform is well defined over $D$ and has L\'evy exponent $\eta$: \begin{equation*} \forall z\in D,\qquad \int_{\mathbb{R}}\mathrm{e}^{zx}\,\phi(\!\DD{x})=\mathrm{e}^{\eta(z)}. \end{equation*} We then say that $(X_n)_{n\in\mathbb{N}}$ converges mod-$\phi$ over $D$ with parameters $(t_n)_{n \in \mathbb{N}}$ and limiting function $\psi:D\rightarrow \mathbb{C}$ if, locally uniformly on $D$, \begin{equation*} \psi_n(z):=\mathbb{E}\!\left[\mathrm{e}^{zX_n}\right]\,\mathrm{e}^{-t_n\eta(z)}\longrightarrow\psi(z). \end{equation*} Here, $(t_n)_{n\in\mathbb{N}}$ is some deterministic sequence of positive numbers with $\lim_{n \to \infty} t_n =+\infty$, and $\psi(z)$ is a continuous function on $D$ such that $\psi(0)=1$. \end{definition} Let us comment a bit on this definition with respect to the choice of the domain. If $D=\mathrm{i}\mathbb{R}$, then we are looking at Fourier transforms, or ratios thereof. So, there is no problem of definition of these quantities (recall that the Fourier transform $\mathrm{e}^{\eta(\mathrm{i} \xi)}$ of an infinitely divisible distribution does not vanish, see \cite[Lemma 7.5]{Sato99}). 
We shall then simply speak of mod-$\phi$ convergence (or mod-$\phi$ convergence in the Fourier sense if we want to be precise), and denote $$\theta_n(\xi):= \mathbb{E}\!\left[\mathrm{e}^{\mathrm{i} \xi X_n}\right]\,\mathrm{e}^{-t_n\eta(\mathrm{i} \xi)}$$ and $\theta(\xi) = \lim_{n \to \infty} \theta_n(\xi)$. On the other hand, if $D$ is an open subset containing $0$, then the Laplace transforms must be analytic functions on this domain. Mod-$\phi$ convergence on such a domain, or even on the whole complex plane $\mathbb{C}$, occurs often when the reference law $\phi$ is the standard Gaussian distribution, with density $$\frac{1}{\sqrt{2\pi}}\,\mathrm{e}^{-\frac{x^2}{2}}$$ and with Lévy exponent $\eta(z)=\frac{z^2}{2}$. In this paper, we shall consider domains of convergence $D$ equal to $\mathrm{i} \mathbb{R}$, or $\mathbb{C}$, or $\mathbb{R}$ (in Section \ref{subsec:curieweiss}). In some cases, we shall also require that the local uniform convergence of the residues $\psi_n \to \psi$ or $\theta_n \to \theta$ occurs in $\mathrm{L}^1(D)$; see Definitions \ref{def:L1modphi} and \ref{def:L1R}. \medskip \subsection{Stable distributions on the real line} In this article, we shall only be interested in the case where $\phi$ is a stable distribution: \begin{definition} Let $c>0$, $\alpha \in (0,2]$ and $\beta \in [-1,1]$. The stable distribution of parameters $(c,\alpha,\beta)$ is the infinitely divisible law $\phi=\phi_{c,\alpha,\beta}$ whose Fourier transform $\int_{\mathbb{R}}\mathrm{e}^{\mathrm{i} \xi x}\phi(\!\DD{x})=\mathrm{e}^{\eta(\mathrm{i} \xi)}$ has L\'evy exponent $\eta=\eta_{c,\alpha,\beta}$ given by $$ \eta_{c,\alpha,\beta}(\mathrm{i}\xi)=-\left|c\xi\right|^\alpha\left(1-\mathrm{i}\beta h(\alpha,\xi)\,\mathrm{sgn}(\xi)\right), $$ where $\mathrm{sgn}(\xi)$ is the sign of $\xi$, and \begin{equation*} h(\alpha,\xi)=\begin{cases} \tan\left(\frac{\pi \alpha}{2}\right),\qquad \text{if}\ \alpha\not =1,\\ -\frac{2}{\pi}\log\left|\xi\right|,\quad\ \text{if}\ \alpha=1. 
\end{cases} \end{equation*} \end{definition} We refer to \cite[Chapter 3]{Sato99} for details on stable distributions; see in particular Theorem 14.15 in \emph{loc.~cit.}~for the formula for the Lévy exponents. Using the above definition, one sees that the Lévy exponent of a stable law satisfies the scaling property: \begin{equation*} t\eta_{c,\alpha,\beta}\left(\frac{\mathrm{i}\xi}{t^{1/\alpha}}\right)=\begin{cases} \eta_{c,\alpha,\beta}(\mathrm{i}\xi)&\text{if } \alpha\neq 1,\\ \eta_{c,\alpha,\beta}(\mathrm{i}\xi)-\left(\frac{2c\beta}{\pi}\log t\right)\mathrm{i}\xi &\text{if } \alpha=1. \end{cases} \end{equation*} Stable distributions include: \begin{itemize} \item the standard Gaussian law, corresponding to the triplet $(c,\alpha,\beta)=\left (\frac{1}{\sqrt{2}},2,0\right)$; \item the standard Cauchy law, corresponding to the triplet $(c,\alpha,\beta)=\left (1,1,0\right)$ ; \item the standard L\'evy law, corresponding to the triplet $(c,\alpha,\beta)=\left (1,\frac{1}{2},1\right)$. \end{itemize} Note that, since $|\mathrm{e}^{\eta_{c,\alpha,\beta}(\mathrm{i}\xi)}|= \mathrm{e}^{-\left|c\xi\right|^\alpha}$ is integrable, any stable law is absolutely continuous with respect to the Lebesgue measure. Throughout this article, we denote by $p_{c,\alpha,\beta}(x)\DD{x}$ the density of the stable distribution $\phi_{c,\alpha,\beta}$. On the other hand, it is an easy exercise to show that the definition of mod-stable convergence together with the scaling property of the L\'evy exponent implies the following proposition (see \cite[Proposition 1.3]{FMN17}). 
\begin{proposition}\label{prop:convlaw} If $(X_n)_{n\in\mathbb{N}}$ converges mod-$\phi_{c,\alpha,\beta}$ (in the Fourier sense on $D=\mathrm{i} \mathbb{R}$) with parameters $(t_n)_{n \in \mathbb{N}}$ and limiting function $\theta$, then \begin{equation*} Y_n:=\begin{cases} \frac{X_n}{(t_n)^{1/\alpha}}& \text{if } \alpha\neq 1,\\ \frac{X_n}{t_n}-\frac{2c\beta}{\pi}\log t_n&\text{if } \alpha=1 \end{cases} \end{equation*} converges in law towards $\phi_{c,\alpha,\beta}$. \end{proposition} \medskip The stable laws are well known to be the attractors of the laws of sums of independent and identically distributed random variables. More precisely, fix a cumulative distribution function $F: \mathbb{R} \to [0,1]$, and consider a sequence $(X_n)_{n \in \mathbb{N}}$ of independent random variables with $\mathbb{P}[X_n \leq x] = F(x)$ for any $n$. If the scaled sum $$\frac{S_n-A_n}{B_n} = \frac{X_1+X_2+\cdots+X_n - A_n}{B_n}$$ admits a limiting distribution for some choice of parameters $A_n$ and $B_n$, then this limiting distribution is necessarily a stable law $\phi_{c,\alpha,\beta}$ (up to a translation if $A_n$ is not chosen correctly); see \cite[Chapter 7]{GK68}. One then says that $F$ belongs to the domain of attraction of the stable law of parameter $(\alpha,\beta)$ ($c$ can be chosen by changing $B_n$). Necessary and sufficient conditions on $F$ for belonging to the domain of attraction of a stable distribution $\phi_{c,\alpha,\beta}$ are given in \cite[Chapter 7, \S35]{GK68} and \cite[Chapter 2, \S6]{IL71}. 
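Apart from the Gaussian, Cauchy and Lévy cases, the densities $p_{c,\alpha,\beta}$ appearing in Proposition \ref{prop:convlaw} have no elementary closed form, but they can be evaluated by numerical Fourier inversion, $p_{c,\alpha,\beta}(x) = \frac{1}{2\pi}\int_{\mathbb{R}} \mathrm{e}^{-\mathrm{i}\xi x}\,\mathrm{e}^{\eta_{c,\alpha,\beta}(\mathrm{i}\xi)}\DD{\xi}$. The following sketch (an illustration of ours; the quadrature grid and frequency cutoff are ad hoc choices) checks the parametrization above against the closed-form values $p(0)=1/\sqrt{2\pi}$ for the triplet $(\frac{1}{\sqrt{2}},2,0)$ and $p(0)=1/\pi$ for $(1,1,0)$.

```python
import numpy as np

def stable_density(x, c, alpha, beta):
    # numerical Fourier inversion of exp(eta_{c,alpha,beta}(i*xi));
    # the cutoff 60 and the grid size are ad hoc, for illustration only
    xi, dxi = np.linspace(-60.0, 60.0, 400_001, retstep=True)
    if alpha != 1:
        h = np.tan(np.pi * alpha / 2) * np.ones_like(xi)
    else:
        with np.errstate(divide="ignore"):
            h = -(2 / np.pi) * np.log(np.abs(xi))
        h[xi == 0] = 0.0
    eta = -np.abs(c * xi) ** alpha * (1 - 1j * beta * h * np.sign(xi))
    f = np.exp(-1j * xi * x + eta)
    # trapezoidal rule, written out for NumPy-version independence
    return float(((f[:-1] + f[1:]).sum() * dxi / 2).real) / (2 * np.pi)

# sanity checks against the two classical triplets
gauss = stable_density(0.0, 1 / np.sqrt(2), 2.0, 0.0)   # ~ 1/sqrt(2*pi)
cauchy = stable_density(0.0, 1.0, 1.0, 0.0)             # ~ 1/pi
```

The integrability of $\mathrm{e}^{-|c\xi|^\alpha}$ noted above is exactly what makes this inversion formula converge absolutely.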
In terms of Fourier transforms, one criterion is the following \cite[Theorem 2.6.5]{IL71}: a probability measure $\mu$ belongs to the domain of attraction of a stable law of parameter $(\alpha,\beta)$ if and only if its Fourier transform can be written in a neighborhood of the origin as $$\widehat{\mu}(\xi) = \mathrm{e}^{\mathrm{i} m \xi - |c\xi|^\alpha (1-\mathrm{i} \beta h(\alpha,\xi)\,\mathrm{sgn}(\xi))\,s(\xi)\,(1+\varepsilon(\xi))},$$ where $\lim_{\xi \to 0}\varepsilon(\xi)=0$, and $\xi \mapsto s(\xi)$ is a slowly varying function at $0$ in the sense of Karamata (see \cite{BGT87}), meaning that $s$ is positive and $$\forall a \neq 0,\,\,\,\lim_{\xi \to 0} \frac{s(a\xi)}{s(\xi)} = 1.$$ In this representation, $s(\xi)$ is positive real-valued, whereas the function $\varepsilon(\xi)$ can be complex. The central limit theorem for random variables in the domain of attraction of a stable law is completed by a local limit theorem \cite{Shepp64,Stone65,Feller67}. In this paper, we shall revisit this theorem by showing that it is a simple consequence of a result of approximation by smooth test functions (Section \ref{subsec:stonefeller}). \medskip \subsection{Main results and outline of the paper} We fix a stable law $\phi_{c,\alpha,\beta}$, and we consider a sequence $(X_n)_{n \in \mathbb{N}}$ of random variables that is mod-$\phi_{c,\alpha,\beta}$ convergent over some domain $D$, with parameters $(t_n)_{n \in \mathbb{N}}$ and limiting function $\psi$. The goal of this paper is to show that the central limit theorem of Proposition \ref{prop:convlaw} goes together with a local limit theorem \begin{equation*} \lim_{n\to\infty}(s_n)\,\mathbb{P}[Y_n - x\in (s_n)^{-1}B] =p_{c,\alpha,\beta}(x)\,m(B), \end{equation*} at scales $(s_n)_{n\in \mathbb{N}}$ that are determined by the quality of the convergence of the residues $\theta_n \to \theta$ (or $\psi_n \to \psi$ if $D = \mathbb{C}$ or $D=\mathbb{R}$). 
To be more precise, with $(X_n)_{n \in \mathbb{N}}$ and $(Y_n)_{n\in \mathbb{N}}$ as in Proposition \ref{prop:convlaw}, there are two possible situations: \begin{enumerate} \item There exists a minimal scale $s_n \to +\infty$ such that: \begin{itemize} \item if $s_n' =o(s_n)$ and $s_n'\to+\infty$, then $\mathbb{P}[Y_n-x \in (s_n')^{-1}B]$ can be approximated by the probability for the stable reference law; \item on the contrary, $\mathbb{P}[Y_n-x \in (s_n)^{-1}B]$ cannot be approximated by the stable distribution, because of combinatorial or arithmetic properties of the underlying random model (for instance, if $Y_n$ comes from a lattice-valued random variable). \end{itemize} In this case, the theory of zones of control introduced in \cite{FMN17} will enable us to determine the scale $s_n$, or at least a sequence $r_n =O(s_n)$ up to which the stable approximation holds. This is the content of Theorem \ref{thm:locallimit1}. \item The stable approximation of the probability $\mathbb{P}[Y_n-y \in (s_n)^{-1}B]$ holds as soon as $s_n \to +\infty$. Hence, for \emph{any} infinitesimal scale $\varepsilon_n \to 0$, the probability of $Y_n$ falling in an interval with this scale $\varepsilon_n$ is asymptotically equivalent to the probability given by the stable reference law. Theorem \ref{thm:locallimit2} gives a sufficient condition which relies on the notion of mod-$\phi$ convergence in $\mathrm{L}^1(\mathrm{i}\mathbb{R})$. Note that this cannot occur if the $X_n$'s are lattice valued. \end{enumerate} Both approaches extend the results of the paper \cite{DKN15}, which gave a set of conditions (\emph{cf.}~the hypotheses H1-H3 in \emph{loc.~cit.}) that implied a local limit theorem with respect to a symmetric stable law ($\beta=0$). In many cases, the results of our paper improve the range of validity of this local limit theorem. 
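A classical lattice-valued example of the first situation is mod-Poisson convergence of $X_n = \sum_{k=1}^n B_k$, where the $B_k$'s are independent Bernoulli random variables with $\mathbb{P}[B_k=1]=\frac{1}{k}$ (in law, the number of cycles of a uniform random permutation of size $n$). Here $\mathbb{E}[\mathrm{e}^{zX_n}] = \prod_{k=1}^n \big(1+\frac{\mathrm{e}^z-1}{k}\big)$, and with $\eta(z)=\mathrm{e}^z-1$ and $t_n = H_n = \sum_{k=1}^n \frac{1}{k}$, the residues $\psi_n(z)$ converge to $\psi(z) = \mathrm{e}^{-\gamma(\mathrm{e}^z-1)}/\Gamma(\mathrm{e}^z)$, with $\gamma$ the Euler--Mascheroni constant. The sketch below (a numerical check of ours, at the single point $z=1$; the sample sizes are arbitrary) observes this convergence.

```python
import math

x = math.e - 1                      # e^z - 1 evaluated at z = 1
gamma_euler = 0.57721566490153286   # Euler-Mascheroni constant

def residue(n):
    # psi_n(z) = E[e^{z X_n}] * exp(-H_n * eta(z)) with eta(z) = e^z - 1
    H_n = sum(1.0 / k for k in range(1, n + 1))
    psi_n = math.exp(-x * H_n)
    for k in range(1, n + 1):
        psi_n *= 1.0 + x / k
    return psi_n

limit = math.exp(-gamma_euler * x) / math.gamma(1 + x)
```

Since $X_n$ is integer-valued, the local limit theorem for this sequence can only hold up to the scale dictated by the lattice, in accordance with situation (1).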
On the other hand, both approaches rely on estimates for the differences $$\mathbb{E}[g_n(Y_n)]-\mathbb{E}[g_n(Y)],$$ where $Y\sim \phi_{c,\alpha,\beta}$ and where the $g_n$'s are integrable test functions whose Fourier transforms have compact support. These test functions already played an essential role in \cite{FMN17} when studying the speed of convergence in Proposition \ref{prop:convlaw}; we recall their main properties in Section \ref{sec:testfunction}. Note that recently, other techniques have been developed in order to obtain local limit theorems: for integer-valued random variables, let us mention in particular the use of Landau--Kolmogorov type inequalities \cite{RR15}, the use of translated Poisson random variables instead of normal variables, and estimates coming from Stein's method \cite{BRR17}. \medskip Let us now detail the content of the paper. The theoretical results are given in Sections \ref{subsec:compactlysupportedfourier}, \ref{sec:zone} and \ref{subsec:l1}. The other sections are devoted to a large variety of examples and applications: \begin{itemize} \item In Section \ref{subsec:stonefeller}, we first look at sequences $S_n=\sum_{i=1}^nA_{i}$, where the $A_i$'s are i.i.d.~random variables. In this case, we show that our results on test functions imply the well-known local limit theorems for distributions in the domain of attraction of a stable law. Namely, we recover the theorem of Shepp \cite{Shepp64} for laws with finite variance, and the generalizations of Stone and Feller \cite{Stone65,Feller67} for the cases $\alpha\in (0,2)$. \item Section \ref{sec:sum} is devoted to the analysis of sums of random variables which are not identically distributed, or not independent. 
We start with random variables that can be represented in law by sums or series of independent random variables: \begin{itemize} \item the size of a random integer partition or a random plane partition chosen with probability proportional to $q^{|\lambda|}$ (Section \ref{subsec:partition}); \item the number of zeroes of a random analytic series in a disc around the origin, and more generally the number of points of a determinantal point process that fall in a compact subset (Section \ref{subsec:zeroes}); \item the random zeta functions $$\log\left(\prod_{p\leq N} \frac{1}{1-\frac{X_p}{\sqrt{p}}}\right),$$ where the $X_p$'s are labeled by prime numbers, and are independent uniform random variables on the unit circle (Section \ref{subsec:randomzeta}). \end{itemize} The first example was kindly suggested to us by A.~Borodin. On the other hand, the random zeta functions have already been studied in \cite[Section 3, Example 2]{KN12} and \cite[Section 3.5]{DKN15}, in connection to Ramachandra's conjecture on the denseness in the complex plane of the values of the Riemann $\zeta$ function on the critical line. For these three examples, we establish the mod-Gaussian convergence with an adequate zone of control, and we deduce from it a local limit theorem. \item More generally, we can work with sums $S_n = \sum_{v\in V_n} A_v$ of random variables which have a dependency structure encoded in a dependency graph or in a weight\-ed dependency graph. In \cite[Theorem 9.1.8]{FMN16} and \cite[Proposition 5.3]{FMN17}, we proved that these hypotheses imply uniform bounds on the cumulants of $S_n$. From these bounds, it is usually possible to establish a zone of control for a renormalization of $(S_n)_{n\in\mathbb{N}}$, and the local limit theorem is then a straightforward application of Theorem \ref{thm:locallimit1}. 
In Sections \ref{subsec:dependencygraph} and \ref{subsec:markov}, we study in particular the subgraph counts in random Erdös--Rényi graphs, and the number of visits of a finite ergodic Markov chain. \item The next applications (Section \ref{sec:matrix}) are based on the results in \cite{DHR18,DHR19}, where mod-Gauss\-ian convergence has been proven for sequences stemming either from random matrices or from the Coulomb gas context. For all these examples, one can compute a large zone of control, which combined with our main result (Theorem \ref{thm:locallimit1}) provides a local limit theorem. The precise models that we shall study are: \begin{itemize} \item in Section \ref{subsec:charge}, the charge ensembles proposed in \cite{RSX13,SS14}, which consist of charge $1$ and charge $2$ particles located on the real line or the circle and interacting via their pairwise logarithmic repulsion, and with an harmonic attraction towards the origin in the real case. In the regime where the two types of particles have the same magnitude, the asymptotic behavior of the number of particles with charge $1$ was studied by Dal Borgo, Hovhannisyan and Rouault in \cite{DHR18}. \item in Section \ref{subsec:gue}, the logarithm of the determinant of a random matrix of the Gaussian Unitary Ensemble. The central limit theorem for this quantity was shown by Delannay and Le Ca\"er in \cite{DLC00}, and moderate deviations and Berry--Esseen bounds were established by D\"oring and Eichelsbacher in \cite{DE13}. \item in Section \ref{subsec:beta}, the logarithm of the determinant of a random matrix of the $\beta$-Laguerre, the $\beta$-Jacobi and the $\beta$-uniform Gram ensemble, for general $\beta>0$. The corresponding central limit theorems were established by Rouault in \cite{Rou07}. \item last, in Section \ref{subsec:circular}, the logarithm of the characteristic polynomial of a random matrix of the $\beta$-circular Jacobi ensemble, for general $\beta>0$. 
An asymptotic study of these quantities relying on the theory of deformed Verblunsky coefficients was proposed by Bourgade, Nikeghbali and Rouault in \cite[Section 5]{BNR09}. \end{itemize} To the best of our knowledge, the local limit theorems are new for all these examples. Using the polynomial structure of the partition function and applying an argument of Bender \cite[Theorem 2]{Bender}, Forrester gave in \cite[Section 7.10]{For10} a local limit theorem for a two-component Coulomb gas model on the circle with charge ratio $2:1$. This model is analogous to the ensemble proposed in \cite{SS14} and studied in our Section \ref{subsec:charge}, but different. \item In our last Section \ref{sec:l1mod}, we consider examples that correspond to the second case of the alternative previously described. In \cite{DKN15,FMN17}, it has been proved that the winding number of the planar Brownian motion starting at $z=1$ converges in the mod-Cauchy sense, with a large zone of control. We show in \S\ref{subsec:brownian} that we have in fact mod-Cauchy convergence in $\mathrm{L}^1(\mathrm{i} \mathbb{R})$, and therefore a local limit theorem that holds at any infinitesimal scale. On the other hand, in \cite[Section 3]{MN15}, a notion of mod-Gaussian convergence in $\mathrm{L}^1(\mathbb{R})$ was introduced, leading to a simple proof of classical results of Ellis and Newman \cite{EN78} on the magnetisation of the Curie--Weiss model at critical temperature. In Section \ref{subsec:curieweiss}, we extend and generalise \cite[Theorem 22]{MN15}, by proving a local limit theorem for this magnetisation, which holds for more scales than in \emph{loc.~cit.} \end{itemize} \bigskip \section{Smooth test functions}\label{sec:testfunction} In this section, we introduce the main tool for the proof of local limit theorems, namely, a space of smooth test functions $\mathscr{T}_0(\mathbb{R})$. 
This functional space already appeared in \cite{FMN17} when studying estimates of the speed of convergence in central limit theorems. We state in \S\ref{subsec:compactlysupportedfourier} an approximation lemma which will enable the proof of our local limit theorems, and in Section \ref{subsec:stonefeller}, we explain how to quickly recover the Stone--Feller local limit theorem by using the space of test functions. \medskip \subsection{Functions with compactly supported Fourier transforms}\label{subsec:compactlysupportedfourier} In this section, all the spaces of functions are spaces of complex functions on the real line $f : \mathbb{R} \to \mathbb{C}$. We denote by \begin{itemize} \item $\mathscr{C}^\infty(\mathbb{R})$ (respectively, $\mathscr{D}(\mathbb{R})$) the space of infinitely differentiable functions on $\mathbb{R}$ (respectively, infinitely differentiable and compactly supported). \item $\mathrm{L}^1(\mathbb{R})$ the space of measurable functions on $\mathbb{R}$ that are integrable with respect to the Lebesgue measure. \end{itemize} On the other hand, if $f \in \mathrm{L}^1(\mathbb{R})$, its Fourier transform is the continuous function $$\widehat{f}(\xi) = \int_{\mathbb{R}} f(x)\,\mathrm{e}^{\mathrm{i} x \xi}\DD{x}.$$ \begin{definition} A smooth test function is a function $f \in \mathrm{L}^1(\mathbb{R})$ whose Fourier transform is compactly supported: $$\exists K\geq 0,\,\,\,\forall \xi \notin [-K,K],\,\,\,\widehat{f}(\xi) =0.$$ \end{definition} We denote by $\mathscr{T}_0(\mathbb{R})$ the space of smooth test functions; it is a subspace of $\mathscr{C}^\infty(\mathbb{R})$, and if $f \in \mathscr{T}_0(\mathbb{R})$, then $f$ and all its derivatives tend to $0$ at infinity, and $f$ satisfies the Plancherel inversion formula $$f(x) = \frac{1}{2\pi}\int_{-K}^K \widehat{f}(\xi)\,\mathrm{e}^{-\mathrm{i} \xi x}\DD{\xi},$$ where $[-K,K]$ is a support for $\widehat{f}$. 
We refer to \cite[Section 2.2]{FMN17} for details on the functional space $\mathscr{T}_0(\mathbb{R})$. An essential property of $\mathscr{T}_0(\mathbb{R})$ is the following approximation result, proven in \cite[Theorem 4]{DKN15}: \begin{theorem}\label{thm:approx} Let $f \in \mathscr{D}(\mathbb{R})$. For any $\eta >0$, there exist two smooth test functions $g_1,g_2 \in \mathscr{T}_0(\mathbb{R})$ such that $g_1 \leq f \leq g_2$ and $$\int_{\mathbb{R}} (g_2(x)-g_1(x)) \DD{x}\leq \eta.$$ \end{theorem} \noindent Using approximations by smooth functions in $\mathscr{D}(\mathbb{R})$, one can extend Theorem \ref{thm:approx} to other functions. More precisely: \begin{corollary}\label{cor:approx} Let $B$ be a bounded measurable subset of $\mathbb{R}$ such that $\partial B$ has zero Lebesgue measure. For any $\eta>0$, there exist two smooth test functions $g_1,g_2 \in \mathscr{T}_0(\mathbb{R})$ such that $g_1 \leq 1_B \leq g_2$ and $$\int_{\mathbb{R}}(g_2(x)-g_1(x))\DD{x}\leq \eta.$$ \end{corollary} \begin{proof} The function $x \mapsto 1_B(x)$ is bounded, and since $m(\partial B)=0$, it is almost everywhere continuous. Therefore, it is Riemann integrable, and one can sandwich it between two step functions $f_1$ and $f_2$ (locally constant functions with a finite number of values). In turn, by a standard argument, one can approximate these two step functions by smooth compactly supported functions in $\mathscr{D}(\mathbb{R})$, and finally one can use Theorem \ref{thm:approx} to replace the smooth compactly supported functions in $\mathscr{D}(\mathbb{R})$ by smooth test functions in $\mathscr{T}_0(\mathbb{R})$. \end{proof} In the sequel, a bounded measurable subset $B \subset \mathbb{R}$ whose boundary $\partial B$ has zero Lebesgue measure will be called a \emph{Jordan measurable subset}. 
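Theorem \ref{thm:approx} provides an exact two-sided sandwich; numerically, one can at least observe the weaker phenomenon that band-limited functions approximate an indicator in $\mathrm{L}^1$-norm. The sketch below is our own illustration with arbitrary parameters: the convolution $1_{[0,1]} * F_K$ is band-limited (its Fourier transform is supported in $[-K,K]$) but is not one of the sandwiching functions $g_1, g_2$ of the theorem. It smooths $1_{[0,1]}$ with the Fejér kernel $F_K(x) = (1-\cos(Kx))/(\pi K x^2)$ and watches the $\mathrm{L}^1$ error decrease as $K$ grows.

```python
import math

def fejer(K, x):
    """Fejer kernel F_K: integrates to 1, Fourier support in [-K, K]."""
    if abs(x) < 1e-8:
        return K / (2 * math.pi)
    return (1 - math.cos(K * x)) / (math.pi * K * x * x)

def l1_error(K, grid=0.01, inner=200, window=4.0):
    """L^1 distance between 1_[0,1] and its convolution with F_K,
    computed by a crude double midpoint rule on [-window, 1 + window]."""
    err = 0.0
    n_out = int((1 + 2 * window) / grid)
    for i in range(n_out):
        x = -window + (i + 0.5) * grid
        # h(x) = int_0^1 F_K(x - t) dt, by an inner midpoint rule
        h = sum(fejer(K, x - (j + 0.5) / inner) for j in range(inner)) / inner
        indic = 1.0 if 0 <= x <= 1 else 0.0
        err += abs(h - indic) * grid
    return err

print(l1_error(8.0), l1_error(32.0))   # the error decreases as K grows
```

The decay is only of order $(\log K)/K$ here, because the Fejér kernel has slowly decaying tails; the constructions behind Theorem \ref{thm:approx} are sharper.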
\medskip \subsection{Stone--Feller local limit theorem}\label{subsec:stonefeller} In the next section, we shall use the approximation result of Theorem \ref{thm:approx} to prove local limit theorems in the mod-$\phi$ setting. As a warm-up, let us explain how to recover the Stone--Feller local limit theorem for sums of i.i.d.~random variables in the attraction domain of a stable law. \begin{theorem}[Stone, Feller] Let $\mu$ be a non-lattice distributed probability measure which is in the attraction domain of $\phi_{c,\alpha,\beta}$, and whose Fourier transform is given by $$\widehat{\mu}(\xi) = \mathrm{e}^{\mathrm{i} m \xi - |c\xi|^\alpha (1-\mathrm{i} \beta h(\alpha,\xi)\,\mathrm{sgn}(\xi))\,s(\xi)\,(1+\varepsilon(\xi))},$$ with $s$ slowly varying at $0$ and $\lim_{\xi \to 0} \varepsilon(\xi) = 0$. To simplify, we assume that we are not in the case $\alpha = 1,\beta \neq 0$. We consider a sum $S_n = X_1+\cdots+X_n$ of i.i.d.~random variables with law $\mu$, and we define $A_n$ and $B_n$ by $$A_n = nm \qquad;\qquad (B_n)^\alpha = n\,s\left(\frac{1}{B_n}\right).$$ Assume that $(B_n)_{n \in \mathbb{N}}$ goes to $+\infty$. Then, for any $x \in \mathbb{R}$ and any Jordan measurable subset $C$ with $m(C)>0$, $$\lim_{n \to \infty} B_n\,\mathbb{P}\!\left[S_n \in A_n+B_nx+C\right] = p_{c,\alpha,\beta}(x)\,m(C) ,$$ where $p_{c,\alpha,\beta}(x) $ is the density at $x$ of the stable law $\phi_{c,\alpha,\beta}$. \end{theorem} \begin{remark} The assumption $B_n \to + \infty$ is in fact a consequence of the other hypotheses, see \cite[Lemma in \S29]{GK68}; besides, in practice one can usually compute $B_n$ or an estimate of it. 
On the other hand, under the assumptions of the theorem, for any $\xi$ fixed in $\mathbb{R}$, \begin{align*} \mathbb{E}[\mathrm{e}^{\mathrm{i} \xi \frac{S_n-A_n}{B_n}}] &= \exp\left(-|c\xi|^\alpha (1-\mathrm{i} \beta\, h(\alpha,\xi)\,\mathrm{sgn}(\xi))\,\frac{s\!\left(\frac{\xi}{B_n}\right)}{s\!\left(\frac{1}{B_n}\right)}\left(1+\varepsilon\!\left(\frac{\xi}{B_n}\right)\right)\right) \\ &\to_{n \to \infty} \mathrm{e}^{-|c\xi|^\alpha (1-\mathrm{i} \beta h(\alpha,\xi)\,\mathrm{sgn}(\xi))} \end{align*} so we have the central limit theorem $\frac{S_n-A_n}{B_n} \rightharpoonup_{n \to \infty} \phi_{c,\alpha,\beta}$. The Stone--Feller theorem is a local version of this limiting result. \end{remark} \begin{proof} If $Y_n = S_n-A_n-B_nx$, then we are interested in the asymptotics of the quantity $\mathbb{P}[Y_n \in C]=\mathbb{E}[1_{C}(Y_n)]$. By Corollary \ref{cor:approx}, it suffices to prove that, for any $f \in \mathscr{T}_0(\mathbb{R})$, \begin{equation} \lim_{n\to \infty} B_n\,\mathbb{E}[f(Y_n)] = \left(\int_{\mathbb{R}} f(y) \DD{y}\right)\,p_{c,\alpha,\beta}(x) \label{eq:stonefeller} \end{equation} The same result then holds for $f=1_{C}$, whence the theorem. 
We compute the left-hand side of \eqref{eq:stonefeller}, denoting $[-K,K]$ a support for $\widehat{f}$: \begin{align*} B_n\,\mathbb{E}[f(Y_n)] &= \frac{B_n}{2\pi}\int_{\mathbb{R}} \widehat{f}(\xi)\,(\widehat{\mu}(-\xi))^n\, \mathrm{e}^{A_n\mathrm{i}\xi} \mathrm{e}^{B_n\mathrm{i} \xi x}\DD{\xi} \\ &= \frac{B_n}{2\pi}\int_{-K}^K \widehat{f}(\xi)\,\mathrm{e}^{-n|c\xi|^\alpha (1-\mathrm{i}\beta h(\alpha,-\xi)\,\mathrm{sgn}(-\xi))\,s(-\xi)\,(1+\varepsilon(-\xi))} \mathrm{e}^{ (A_n -nm)\mathrm{i} \xi +B_n\mathrm{i} \xi x}\DD{\xi} \\ &= \frac{1}{2\pi}\int_{-KB_n}^{KB_n} \widehat{f}\left(-\frac{t}{B_n}\right)\,\mathrm{e}^{-|ct|^\alpha (1-\mathrm{i}\beta h(\alpha,t)\,\mathrm{sgn}(t))\,\frac{s(\frac{t}{B_n})}{s(\frac{1}{B_n})}\,(1+\varepsilon(\frac{t}{B_n}))}\mathrm{e}^{-\mathrm{i} x t}\DD{t}. \end{align*} Since $s$ is slowly varying around $0$, the pointwise limit of the integrand as $n$ goes to infinity is $$\widehat{f}(0)\,\mathrm{e}^{-|ct|^\alpha (1-\mathrm{i}\beta h(\alpha,t)\,\mathrm{sgn}(t))}\,\mathrm{e}^{-\mathrm{i} x t}.$$ Let us explain why we can use the dominated convergence theorem. As $\mu$ is non-lattice distributed, the function $\widehat{\mu}(\xi)$ has modulus $1$ only for $\xi =0$, and therefore, the real part of the function $\xi \mapsto (1-\mathrm{i} \beta h(\alpha,\xi)\,\mathrm{sgn}(\xi))(1+\varepsilon(\xi))$ does not vanish on $\mathbb{R}$. In particular, this real part stays bounded from below by a positive constant $C_1$ on the interval $[-K,K]$. 
On the other hand, in order to evaluate the ratio $s(\frac{t}{B_n})/s(\frac{1}{B_n})$, we use Karamata's representation theorem, which states that for $\xi \leq K$, \begin{equation} s(\xi) = \exp\left(\eta_1(\xi) + \int_\xi^K \eta_2(u)\,\frac{\!\DD{u}}{u}\right) \label{eq:karamata} \end{equation} where $\eta_1$ is a bounded measurable function admitting a limit $\eta_1(0)$ as $\xi \to 0$, and $\eta_2$ is a bounded measurable function with $\lim_{\xi \to 0} \eta_2(\xi)=0$; see \cite[Section 1.3]{BGT87}. Up to a modification of the pair $(\eta_1,\eta_2)$, we can assume that $|\eta_2(\xi)|\leq \frac{\alpha}{2}$ for any $\xi \leq K$. Then, \begin{align*} \frac{s\!\left(\frac{t}{B_n}\right)}{s\!\left(\frac{1}{B_n}\right)} &= \exp\!\left( \eta_1\!\left(\frac{t}{B_n} \right)-\eta_1\!\left(\frac{1}{B_n} \right)+\int_{\frac{t}{B_n}}^{\frac{1}{B_n}} \eta_2(u)\,\frac{\!\DD{u}}{u}\right) \\ &\geq \exp\!\left(O(1) - \frac{\alpha}{2}\log t\right) \geq \frac{C_2}{t^\frac{\alpha}{2}} \end{align*} for some constant $C_2>0$, and any $t \leq KB_n$. Therefore, on the zone of integration, $$\mathrm{Re}\left(|t|^\alpha (1-\mathrm{i}\beta h(\alpha,t)\,\mathrm{sgn}(t))\,\frac{s(\frac{t}{B_n})}{s(\frac{1}{B_n})}\left(1+\varepsilon\!\left(\frac{t}{B_n}\right)\right)\right) \geq C_1C_2 |t|^{\frac{\alpha}{2}}.$$ This lower bound allows one to use the dominated convergence theorem, which shows that: \begin{align*} \lim_{n \to \infty} B_n \,\mathbb{E}[f(Y_n)] &= \frac{1}{2\pi} \int_\mathbb{R} \widehat{f}(0)\,\mathrm{e}^{-|ct|^\alpha (1-\mathrm{i}\beta h(\alpha,t)\,\mathrm{sgn}(t))}\,\mathrm{e}^{-\mathrm{i} xt}\DD{t} \\ &= \left(\int_{\mathbb{R}} f(y) \DD{y}\right)\,p_{c,\alpha,\beta}(x). \qedhere \end{align*} \end{proof} \medskip The proof adapts readily to the case $\alpha=1$, up to a modification of the parameters $A_n$ and $B_n$ when $\beta\neq 0$. 
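The statement of the theorem can be illustrated numerically in the simplest finite-variance case $\alpha = 2$ (the case due to Shepp). The Monte Carlo sketch below is ours and not part of the argument: with $\mu = \mathrm{Exp}(1)$ we have $m = 1$, $\sigma^2 = 1$, $A_n = n$, $B_n = \sqrt{n}$, and the limiting density at $x = 0$ is the standard Gaussian density $1/\sqrt{2\pi} \approx 0.3989$.

```python
import math, random

random.seed(7)
n, N = 400, 40000               # n summands Exp(1); N Monte Carlo repetitions
A_n, B_n = float(n), math.sqrt(n)
window = (0.0, 1.0)             # C = [0, 1], so m(C) = 1

hits = 0
for _ in range(N):
    s = random.gammavariate(n, 1.0)   # S_n = X_1 + ... + X_n, X_i ~ Exp(1)
    if A_n + window[0] <= s < A_n + window[1]:
        hits += 1

estimate = B_n * hits / N       # approximates B_n * P[S_n in A_n + C]
print(estimate, 1 / math.sqrt(2 * math.pi))
```

The statistical error here is of order $B_n/\sqrt{N \cdot \text{hits}} \approx 0.015$, so the agreement with $p_{c,\alpha,\beta}(0)\,m(C)$ is only at the level of a few percent.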
Notice that our approximation result by smooth test functions has reduced the notoriously difficult proof of the local limit theorem of Stone and Feller (due to Shepp in the case $\alpha=2$, for random variables with finite variance) to an application of Parseval's formula and of Karamata's representation theorem. \bigskip \section{Zones of control and local limit theorems}\label{sec:zone} In this section, we explain how to obtain local limit theorems in the setting of mod-stable convergent sequences. The main idea is that the scales at which the stable approximation of a mod-$\phi$ convergent sequence is valid are dictated by: \begin{enumerate} \item the behaviour of the residues $\theta_n(\xi)$ and $\theta(\xi)$ around $0$; \item the maximal size of a zone on which the growth of these residues can be controlled. \end{enumerate} In the following, we fix a sequence of real-valued random variables $(X_n)_{n \in \mathbb{N}}$, a sequence of parameters $t_n \to +\infty$ and a reference stable law $\phi_{c,\alpha,\beta}$. In Section \ref{subsec:zone}, we recall the definition of zone of control, which is in some sense a refinement of the definition of mod-stable convergence. In Section \ref{subsec:llt_modstable}, we prove local limit theorems under this hypothesis of zone of control. \subsection{The notion of zone of control}\label{subsec:zone} In \cite[Section 2.1]{FMN17}, the rate of convergence in Proposition \ref{prop:convlaw} was determined by using the notion of zone of control, which we recall here: \begin{definition}\label{def:zone} Let $(X_n)_{n \in \mathbb{N}}$ be a sequence of real-valued random variables, $\phi_{c,\alpha,\beta}$ be a stable law, and $t_n \to +\infty$. We set $\theta_n(\xi) = \mathbb{E}[\mathrm{e}^{\mathrm{i} \xi X_n}]\,\mathrm{e}^{-t_n\eta_{c,\alpha,\beta}(\mathrm{i} \xi)}$. Consider the following assertions: \begin{enumerate}[label=(Z\arabic*)] \item\label{hyp:zone1} Fix $\nu>0,\ \omega>0$ and $\gamma\in\mathbb{R}$. 
There exist positive constants $K$, $K_1$ and $K_2$, independent of $n$, such that, for all $\xi$ in the zone $\left[-K\left(t_n\right)^\gamma,K\left(t_n\right)^\gamma\right]$, $$ \left|\theta_n(\xi)-1\right|\le K_1\left|\xi\right|^\nu \exp\left(K_2\left|\xi\right|^\omega\right). $$ \item\label{hyp:zone2} One has $$ \alpha\le \omega, \qquad -\frac{1}{\alpha}< \gamma\le\frac{1}{\omega-\alpha},\qquad 0<K\le\left(\frac{c^\alpha}{2K_2}\right)^{\frac{1}{\omega-\alpha}}. $$ \end{enumerate} Note that if Condition \ref{hyp:zone1} holds for some parameters $\gamma>-\frac{1}{\alpha}$ and $\nu,\omega,K,K_1,K_2$, then \ref{hyp:zone2} can always be forced by increasing $\omega$, and then decreasing $K$ and $\gamma$. If Conditions \ref{hyp:zone1} and \ref{hyp:zone2} are satisfied, then we say that we have a zone of control $\left[-K\left(t_n\right)^\gamma,K\left(t_n\right)^\gamma\right]$ with index of control $(\nu,\omega)$. \end{definition} Let us make a few remarks on this definition. First, Conditions \ref{hyp:zone1} and \ref{hyp:zone2} imply that if $(Y_n)_{n \in \mathbb{N}}$ is defined in terms of $(X_n)_{n \in \mathbb{N}}$ in the same way as in Proposition \ref{prop:convlaw}, then one has the convergence in law $Y_n \rightharpoonup \phi_{c,\alpha,\beta}$ \cite[Proposition 2.3]{FMN17}. On the other hand, the mod-$\phi_{c,\alpha,\beta}$ convergence implies the existence of a zone of control $[-K,K]$ with $\gamma = 0$, with index $(\nu=0,\omega=\alpha)$ and with $K$ as large as wanted (and $K_2=0$). Therefore, Definition \ref{def:zone} is a generalisation of the notion of mod-stable convergence. Conversely, a zone of control does not imply mod-stable convergence, even if $\gamma\geq 0$. However, in all the examples that we are going to present, it will always be the case that the sequence under consideration converges mod-$\phi_{c,\alpha,\beta}$ with the same parameters $(t_n)_{n \in \mathbb{N}}$ as for the notion of zone of control. 
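To make the definition concrete, consider the symmetric binomial example (the normalisations below are our own choices for illustration, not a computation taken from the text): $S_n$ a sum of $n$ independent Bernoulli$(1/2)$ variables, $X_n = (S_n - n/2)/n^{1/3}$, Gaussian reference law, $t_n = n^{1/3}/4$, and residue $\theta_n(\xi) = \cos(\xi/(2n^{1/3}))^n\,\mathrm{e}^{n^{1/3}\xi^2/8}$. A Taylor expansion of $\log\cos$ gives $\log\theta_n(\xi) = -\xi^4/(192\,n^{1/3}) + O(\xi^6/n)$, a bound of the shape \ref{hyp:zone1} with $\nu = 4$, which can be checked numerically:

```python
import math

def theta(n, xi):
    """Residue for X_n = (S_n - n/2)/n^(1/3), S_n ~ Binomial(n, 1/2):
    E[exp(i xi X_n)] = cos(xi/(2 n^(1/3)))^n (real, by symmetry),
    compensated by the Gaussian factor exp(t_n xi^2/2), t_n = n^(1/3)/4."""
    cbrt = n ** (1 / 3)
    return math.cos(xi / (2 * cbrt)) ** n * math.exp(cbrt * xi ** 2 / 8)

# |theta_n(xi) - 1| ~ xi^4 / (192 n^(1/3)): a zone-of-control bound with nu = 4
for n in (10 ** 3, 10 ** 6):
    print(n, abs(theta(n, 1.0) - 1.0), 1.0 / (192 * n ** (1 / 3)))
```

The two printed quantities match to leading order, and both decay like $(t_n)^{-1}$, consistent with a control valid on a zone growing polynomially in $t_n$.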
\medskip \subsection{Local limit theorems for mod-stable random variables}\label{subsec:llt_modstable} We can now state our main result: \begin{theorem}\label{thm:locallimit1} Let $(X_n)_{n \in \mathbb{N}}$ be a sequence of real-valued random variables, $\phi_{c,\alpha,\beta}$ a reference stable law, $(t_n)_{n \in \mathbb{N}}$ a sequence growing to infinity, and $$\theta_n(\xi) = \mathbb{E}[\mathrm{e}^{\mathrm{i} \xi X_n}]\,\mathrm{e}^{-t_n\eta_{c,\alpha,\beta}(\mathrm{i} \xi)}.$$ We assume that there is a zone of control $\left[-K\left(t_n\right)^\gamma,K\left(t_n\right)^\gamma\right]$ with index $(\nu,\omega)$, and we denote by $(Y_n)_{n \in \mathbb{N}}$ the renormalisation of $(X_n)_{n \in \mathbb{N}}$ given by Proposition \ref{prop:convlaw}. Let $x \in \mathbb{R}$ and $B$ be a fixed Jordan measurable subset with $m(B)>0$. Then, for every exponent $\delta \in (0,\gamma+\frac{1}{\alpha})$, $$ \lim_{n \to \infty} (t_n)^{\delta}\,\,\mathbb{P}\!\left[Y_n - x \in \frac{1}{(t_n)^\delta}\,B\right] = p_{c,\alpha,\beta}(x)\,m(B).$$ \end{theorem} Before proving Theorem \ref{thm:locallimit1}, let us make a few comments. First, since this is an asymptotic result, we actually only need a zone of control on the residues $\theta_n$ for $n$ large enough. Secondly, for exponents $\delta \in (0,\frac{1}{\alpha}]$, this local limit theorem was proven in \cite[Theorem 5, Propositions 1 and 2]{DKN15}. Theorem \ref{thm:locallimit1} improves on these previous results by showing that the stable approximation holds at scales $(t_n)^{-\delta}$: \begin{itemize} \item which can be smaller than in \cite{DKN15}, \item and which are directly connected to the size of the zone of control. \end{itemize} \begin{lemma}\label{lem:estimatetestfunction} Consider a sequence $(X_n)_{n\in \mathbb{N}}$ that satisfies the assumptions of Theorem \ref{thm:locallimit1}. 
Let $f_n \in \mathscr{T}_0(\mathbb{R})$ be a smooth test function whose Fourier transform $\widehat{f}_n$ has its support included in the zone $\left[-K\left(t_n\right)^{\gamma+1/\alpha},K\left(t_n\right)^{\gamma+1/\alpha}\right]$. There exists a constant $C(c,\alpha,\nu)$ such that $$\left|\mathbb{E}[f_n(Y_n)] - \int_{\mathbb{R}} f_n(y)\,\phi_{c,\alpha,\beta}(\!\DD{y}) \right| \leq C(c,\alpha,\nu)\,K_1\,\frac{\|f_n\|_{\mathrm{L}^1}}{(t_n)^{\nu/\alpha}}.$$ \end{lemma} \begin{proof} See \cite[Proposition 2.12]{FMN17}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:locallimit1}] We fix $x,\delta$ and $B$ as in the statement of the theorem. Suppose that we can prove that $$\lim_{n \to \infty} (t_n)^{\delta}\,\mathbb{E}\!\left[f((t_n)^\delta (Y_n-x))\right] = p_{c,\alpha,\beta}(x)\,\left(\int_{\mathbb{R}}f(y)\DD{y}\right)$$ for any $f \in \mathscr{T}_0(\mathbb{R})$. Then, for any $\eta>0$, Corollary \ref{cor:approx} shows that there exist $f_1,f_2 \in \mathscr{T}_0(\mathbb{R})$ with $f_1\leq 1_{B} \leq f_2$ and $\int_{\mathbb{R}}f_2(x)-f_1(x)\DD{x} \leq \eta$, so \begin{align*} \limsup_{n \to \infty} \,(t_n)^{\delta}\,\,&\mathbb{P}\!\left[Y_n-x \in \frac{1}{(t_n)^\delta}\,B\right] \leq \lim_{n \to \infty} \,(t_n)^{\delta}\,\,\mathbb{E}\!\left[f_2((t_n)^\delta (Y_n-x))\right] \\ &\leq p_{c,\alpha,\beta}(x)\,\left(\int_{\mathbb{R}} f_2(y)\DD{y}\right) \\ &\leq p_{c,\alpha,\beta}(x)\,\left(\int_{\mathbb{R}} f_1(y)\DD{y}+\eta\right) \\ &\leq p_{c,\alpha,\beta}(x)\,\eta + \lim_{n \to \infty} (t_n)^{\delta}\,\,\mathbb{E}\!\left[f_1((t_n)^\delta (Y_n-x))\right] \\ &\leq p_{c,\alpha,\beta}(x)\,\eta + \liminf_{n \to \infty}\, (t_n)^{\delta}\,\,\mathbb{P}\!\left[Y_n-x \in \frac{1}{(t_n)^\delta}\,B\right] \end{align*} and the local limit theorem holds. Hence, as in the proof of the Stone--Feller local limit theorem, we have reduced our problem to estimates on test functions in $\mathscr{T}_0(\mathbb{R})$. 
Fix $f \in \mathscr{T}_0(\mathbb{R})$, and denote $f_n(y)=f((t_n)^{\delta}(y-x))$. If $[-C,C]$ is the support of $\widehat{f}$, then $[-C(t_n)^{\delta},C(t_n)^{\delta}]$ is the support of $\widehat{f}_n$, and since $\delta<\gamma+\frac{1}{\alpha}$, it is included in $[-K(t_n)^{\gamma+1/\alpha},K(t_n)^{\gamma+1/\alpha}]$ for $n$ large enough ($K$ being given by Condition \ref{hyp:zone1} of the zone of control). Hence, by the previous lemma, \begin{align*} \mathbb{E}[f_n(Y_n)] &= \left(\int_{\mathbb{R}} f_n(y)\,\phi_{c,\alpha,\beta}(\!\DD{y})\right) + O\left(\frac{\|f_n\|_{\mathrm{L}^1}}{(t_n)^{\frac{\nu}{\alpha}}}\right) =\left(\int_{\mathbb{R}} f_n(y)\,\phi_{c,\alpha,\beta}(\!\DD{y})\right) + O\left(\frac{\|f\|_{\mathrm{L}^1}}{(t_n)^{\frac{\nu}{\alpha}+\delta}}\right), \end{align*} which implies \begin{align*} (t_n)^{\delta} \,\,\mathbb{E}[f((t_n)^\delta (Y_n-x))]&=(t_n)^{\delta}\,\,\mathbb{E}[f_n(Y_n)] \\ &= (t_n)^{\delta}\, \left(\int_{\mathbb{R}} f_n(y)\,\phi_{c,\alpha,\beta}(\!\DD{y})\right) + O\!\left(\frac{\|f\|_{\mathrm{L}^1}}{(t_n)^{\frac{\nu}{\alpha}}}\right)\\ &=\int_{\mathbb{R}} f(u)\,p_{c,\alpha,\beta}\left(x+\frac{u}{(t_n)^{\delta}}\right)\DD{u} + o(1). \end{align*} Since $p_{c,\alpha,\beta}(y)=\frac{1}{2\pi}\int_{\mathbb{R}}\mathrm{e}^{\eta_{c,\alpha,\beta}(\mathrm{i}\xi)}\,\mathrm{e}^{-\mathrm{i} y \xi}\DD{\xi}$ is bounded by $\frac{1}{2\pi}\int_{\mathbb{R}} \mathrm{e}^{-|c\xi|^\alpha}\DD{\xi}$, by dominated convergence, the limit of the integral is \begin{equation*} p_{c,\alpha,\beta}(x)\,\left(\int_{\mathbb{R}} f(u)\DD{u}\right).\qedhere \end{equation*} \end{proof} \medskip If we want Theorem \ref{thm:locallimit1} to be meaningful, it is natural to ask when one can give an explicit formula for the density $p_{c,\alpha,\beta}$ at the real point $x$. Unfortunately, there is no general closed formula for the density of a stable law, and it is known explicitly only for the Lévy, Cauchy and normal distributions. 
However, the following proposition gives a sufficient condition for the existence of a closed formula at the origin. \begin{proposition} Suppose that $|\beta \tan(\frac{\alpha\pi}{2})|< 1$. Then, the density of the stable distribution $\phi_{c,\alpha,\beta}$ at $x=0$ is given by the convergent series $$ p_{c,\alpha,\beta}(0)= \frac{1}{\pi \alpha c}\,\sum_{k=0}^\infty (-1)^k\,\left(\beta \,\tan\left(\frac{\pi \alpha}{2}\right)\right)^{2k}\,\frac{\Gamma(2k+\frac{1}{\alpha})}{\Gamma(2k+1)}.$$ \end{proposition} \begin{proof} Suppose first that $\alpha \neq 1$. One computes \begin{align*} p_{c,\alpha,\beta}(0)&=\frac{1}{2\pi} \int_{\mathbb{R}}\mathrm{e}^{\eta(\mathrm{i} \xi)}\DD{\xi} = \frac{1}{\pi} \int_{0}^\infty \mathrm{e}^{-(c\xi)^\alpha}\,\cos\left((c\xi)^\alpha \beta\,\tan\left(\frac{\alpha\pi}{2}\right)\right) \DD{\xi} \\ &=\frac{1}{\pi\alpha c} \int_{0}^\infty \mathrm{e}^{-u}\,\cos\left(u \beta\,\tan\left(\frac{\alpha\pi}{2}\right)\right) u^{\frac{1}{\alpha}-1}\DD{u}. \end{align*} Under the assumption $|\beta \tan(\frac{\alpha\pi}{2})|<1$, one can expand the cosine in power series and exchange the order of summation to obtain the claimed formula; this ends the proof when $\alpha \neq 1$. If $\alpha=1$, then $|\beta \tan(\frac{\alpha\pi}{2})|< 1$ is satisfied if and only if $\beta=0$. In this case, one deals with the Cauchy law $$\frac{1}{\pi c}\,\frac{1}{1+\frac{x^2}{c^2}}\DD{x},$$ which has density $\frac{1}{c\pi}$ at $x=0$. 
This is also what is obtained by specialisation of the power series, because, if $\beta=0$, then for every $\alpha \in (0,2]$, the power series specialises to \begin{equation*} \frac{1}{\pi\alpha c}\,\,\Gamma\!\left(\frac{1}{\alpha}\right).\qedhere \end{equation*} \end{proof} \bigskip \section{Sums of random variables}\label{sec:sum} In this section, we apply our main result to various examples of random variables which admit a representation in law as a sum of elementary components which are independent (Sections \ref{subsec:zeroes}, \ref{subsec:partition} and \ref{subsec:randomzeta}) or dependent (Sections \ref{subsec:dependencygraph} and \ref{subsec:markov}). \subsection{Size of a random integer partition or plane partition}\label{subsec:partition} To illustrate our theory of zones of control and the related local limit theorems, we shall consider as a first example the size of a random integer partition or plane partition chosen with probability proportional to $q^{|\lambda|}$ or $q^{\mathrm{vol}(\lambda)}$, respectively. Let us start with \emph{integer partitions}; we refer to \cite[\S1.1]{Mac95} for the details of their combinatorics. An integer partition of size $n$ is a non-increasing sequence $\lambda = (\lambda_1\geq \lambda_2 \geq \cdots \geq \lambda_r)$ of positive integers such that $\lambda_1+\lambda_2+\cdots+\lambda_r = n$. We then denote $n=|\lambda|$, and we represent $\lambda$ by its Young diagram, which is the array of boxes with $\lambda_1$ boxes on the first row, $\lambda_2$ boxes on the second row, \emph{etc.} For instance, $\lambda = (5,5,3,2)$ is an integer partition of size $15$ represented by the Young diagram \ytableausetup{aligntableaux=bottom} $$ \ydiagram{2,3,5,5}\,\,. $$ Let $\mathfrak{Y}$ be the set of all integer partitions, and $\mathbb{P}_q$ be the probability measure on $\mathfrak{Y}$ which is proportional to $q^{|\lambda|}$, $q$ being a fixed parameter in $(0,1)$. 
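Before analysing this measure, let us record a quick numerical cross-check (ours, purely illustrative) of the series formula of the previous proposition: for parameters with $|\beta\tan(\alpha\pi/2)| < 1$, the series for $p_{c,\alpha,\beta}(0)$ should agree with a direct quadrature of the inversion integral $\frac{1}{\pi}\int_0^\infty \mathrm{e}^{-(c\xi)^\alpha}\cos\!\big((c\xi)^\alpha\beta\tan(\tfrac{\alpha\pi}{2})\big)\DD{\xi}$, and for $\beta = 0$ it should reduce to $\Gamma(1/\alpha)/(\pi\alpha c)$.

```python
import math

def p0_series(c, alpha, beta, terms=40):
    """Series formula for p_{c,alpha,beta}(0); needs |beta tan(alpha pi/2)| < 1."""
    b = beta * math.tan(math.pi * alpha / 2)
    s = sum((-1) ** k * b ** (2 * k)
            * math.gamma(2 * k + 1 / alpha) / math.gamma(2 * k + 1)
            for k in range(terms))
    return s / (math.pi * alpha * c)

def p0_quadrature(c, alpha, beta, U=50.0, n=100000):
    """Midpoint rule for (1/pi) int_0^U exp(-(c x)^alpha) cos((c x)^alpha b) dx."""
    b = beta * math.tan(math.pi * alpha / 2)
    h = U / n
    total = 0.0
    for j in range(n):
        u = (c * (j + 0.5) * h) ** alpha
        total += math.exp(-u) * math.cos(u * b)
    return total * h / math.pi

for params in [(1.0, 1.5, 0.3), (2.0, 0.8, 0.0)]:
    print(params, p0_series(*params), p0_quadrature(*params))
```

Both evaluations agree to several digits, which is exactly the content of the substitution $u = (c\xi)^\alpha$ performed in the proof.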
The corresponding partition function is given by Euler's formula $$Z(q) = \sum_{\lambda \in \mathfrak{Y}} q^{|\lambda|} = \prod_{n=1}^\infty \frac{1}{1-q^n}.$$ Thus, $\mathbb{P}_q[\lambda] = \left(\prod_{n=1}^\infty 1-q^n\right)\,q^{|\lambda|}$. We are interested in the asymptotics of the size $S_q$ of a random integer partition taken according to the probability measure $\mathbb{P}_q$. The Laplace transform of $S_q$ is $$\mathbb{E}[\mathrm{e}^{z S_q}] = \prod_{n=1}^\infty \frac{1-q^n}{1-q^n\mathrm{e}^{nz}};$$ it is well defined for $\mathrm{Re}(z)<-\log q$. This formula shows that $S_q$ has the same law as a random series $$S_q = \sum_{n=1}^\infty nY_n,$$ where the $Y_n$'s are independent, and $Y_n$ is a geometric random variable of parameter $1-q^n$, with distribution $\mathbb{P}[Y_n=k] = (1-q^n)q^{nk}$ for any $k \in \mathbb{N}$. Set $A_n=nY_n$, and $f_n(\xi) = \log \mathbb{E}[\mathrm{e}^{\mathrm{i} \xi A_n}] - \frac{nq^n\,\mathrm{i} \xi}{1-q^n} + \frac{n^2q^n}{(1-q^n)^2}\,\frac{\xi^2}{2} $. The function $f_n$ and its first two derivatives vanish at $0$, and $$f_n'''(\xi) = -\mathrm{i} n^3 \frac{(q\mathrm{e}^{\mathrm{i} \xi})^n + (q\mathrm{e}^{\mathrm{i} \xi})^{2n}}{(1-(q\mathrm{e}^{\mathrm{i} \xi})^n)^3}\qquad;\qquad |f_n'''(\xi)| \leq n^3 \frac{q^n + q^{2n}}{(1-q^n)^3}.$$ Set $M_q = \sum_{n=1}^\infty \frac{nq^n}{1-q^n}$ and $V_q = \sum_{n=1}^\infty \frac{n^2q^n}{(1-q^n)^2}$. \begin{lemma}\label{lem:meanvariancepartition} The mean $M_q$ and the variance $V_q$ of the random variable $S_q$ have the asymptotic behaviour \begin{align*} M_q &= \frac{\zeta(2)}{(\log q)^2} + \frac{1}{2\,\log q} +\frac{1}{24} + o(1) =\frac{\zeta(2)}{(1-q)^2}\,(1+o(1));\\ V_q &=\frac{2\zeta(2)}{(1- q)^3}\,(1+o(1)) \end{align*} as $q$ goes to $1$. \end{lemma} \begin{proof} For the first asymptotic expansion, we follow closely \cite[Theorem 2.2]{BW17}. 
Introduce the series $L(q^x) = \sum_{k=1}^\infty q^{kx}=\frac{q^x}{1-q^x}$, and consider the operator $D = \frac{\partial}{\partial x}$, as well as $$\frac{D}{\mathrm{e}^D-1} = \sum_{n=0}^\infty \frac{B_n\,D^n}{n!},$$ where the $B_n$'s are the Bernoulli numbers. We have \begin{align*} \left(\frac{D}{\mathrm{e}^D-1}\right) (L(q^x)) &= \sum_{k=1}^\infty \left(\frac{D}{\mathrm{e}^D-1}\right)(q^{kx}) = \sum_{k=1}^\infty \sum_{n=0}^\infty \frac{B_n}{n!}\,D^n(q^{kx}) \\ &= \sum_{k=1}^\infty \left(\sum_{n=0}^\infty \frac{B_n (k \log q)^n}{n!}\right)\,q^{kx} = (-\log q) \sum_{k=1}^\infty \frac{k\,q^{kx}}{1-q^k}. \end{align*} On the other hand, we have the expansion in powers of $x \log q$: $$L(q^x) = -\frac{1}{x\log q} + \sum_{k=0}^\infty \frac{\zeta(-k)}{k!}\,(x\log q)^k.$$ We have the relation $\frac{D}{\mathrm{e}^D-1}(x^k) = -k\,\zeta(1-k,x)$ where $\zeta(s,x)=\sum_{n=0}^\infty \frac{1}{(n+x)^s}$ is Hurwitz' zeta function (extended to a meromorphic function of the complex parameter $s$). Therefore, we obtain: \begin{align*} \sum_{k=1}^\infty \frac{k\,q^{kx}}{1-q^k} &= \frac{\zeta(2,x)}{(\log q)^2} + \frac{1}{2\log q}+\sum_{k=0}^\infty \frac{\zeta(-k-1)\,\zeta(-k,x)}{k!}\,(\log q)^{k}, \end{align*} hence the first expansion, by taking $x=1$. For the asymptotics of the variance, let us remark that \begin{align} V_q &= \sum_{n=1}^\infty \frac{n^2 q^n}{(1-q^n)^2} = \sum_{n=1}^\infty \sum_{k=1}^\infty n^2 k\, q^{nk} = \sum_{k=1}^\infty k\,\frac{q^k+q^{2k}}{(1-q^k)^3} \nonumber \\ &= \frac{1}{(1-q)^3}\sum_{k=1}^\infty \frac{k}{(1+q+\cdots+q^{k-1})^3}\,(q^{k}+q^{2k}).\label{eq:variancepartition} \end{align} As $q$ goes to $1$, each term of the series in \eqref{eq:variancepartition} converges to $\frac{2}{k^2}$, and it is an easy exercise to see that one can sum these limits. \end{proof} \medskip Set $X_q = \frac{S_q-M_q}{(V_q)^{4/9}}$. 
We have: \begin{align} \log \mathbb{E}[\mathrm{e}^{\mathrm{i} \xi X_q}] + \frac{(V_q)^{1/9}\xi^2}{2} &= \sum_{n=1}^\infty f_n\left(\frac{\xi}{(V_q)^{4/9}}\right) \nonumber \\ \left|\log \mathbb{E}[\mathrm{e}^{\mathrm{i} \xi X_q}] + \frac{(V_q)^{1/9}\xi^2}{2}\right| &\leq \frac{|\xi|^3}{6\,(V_q)^{4/3}} \sum_{n=1}^\infty \frac{n^3(q^n+q^{2n})}{(1-q^n)^3}.\label{eq:thirdmomentpartition} \end{align} As $q$ goes to $1$, the series in Equation \eqref{eq:thirdmomentpartition} behaves as \begin{align*} \sum_{n=1}^\infty &\frac{n^3(q^n+q^{2n})}{(1-q^n)^3} = \sum_{n=1}^\infty \sum_{k=1}^\infty n^3\,\frac{k(k+1)}{2} (q^{kn} + q^{(k+1)n}) \\ &= \sum_{k=1}^\infty \frac{k(k+1)}{2} \left(\frac{q^k(1+4q^k+q^{2k})}{(1-q^k)^4}+\frac{q^{k+1}(1+4q^{k+1}+q^{2k+2})}{(1-q^{k+1})^4}\right)\\ &=\frac{3(1+o(1))}{(1-q)^4} \sum_{k=1}^\infty \left(\frac{1}{k^2}+\frac{1}{k^3} + \frac{1}{(k+1)^2} - \frac{1}{(k+1)^3}\right) = \frac{6\,\zeta(2)\,(1+o(1))}{(1-q)^4}. \end{align*} It follows that for any constant $C>\frac{1}{2^{4/3}(\zeta(2))^{1/3}}$, there exists $q_0 \in (0,1)$ such that if $q \geq q_0$, then \begin{align*} \left|\log \mathbb{E}[\mathrm{e}^{\mathrm{i} \xi X_q}] + \frac{(V_q)^{1/9}\xi^2}{2}\right| &\leq C|\xi^3|; \\ |\theta_q(\xi)-1| &\leq C|\xi|^3\,\exp(C|\xi|^3), \end{align*} uniformly in $\xi \in \mathbb{R}$, with $\theta_q(\xi)=\mathbb{E}[\mathrm{e}^{\mathrm{i}\xi X_q}]\,\mathrm{e}^{\frac{(V_q)^{1/9}\xi^2}{2}}$. Hence, the family $(X_q)_{q \in (0,1)}$ has a zone of control of mod-Gaussian convergence for the parameter $t_q = (V_q)^{1/9}$, with index $(3,3)$ and with size $O((t_q)^{3/2})=O((V_q)^{1/6})$ if one wants Condition \ref{hyp:zone2} to be satisfied. We conclude with Theorem \ref{thm:locallimit1} and \cite[Theorem 2.16]{FMN17}: \begin{proposition} Let $S_q$ be the size of a random integer partition chosen with probability proportional to $q^{|\lambda|}$, and $M_q$ and $V_q$ be defined as in Lemma \ref{lem:meanvariancepartition}. 
As $q$ goes to $1$, the random variable $Y_q = (S_q-M_q)/\sqrt{V_q}$ converges in law to the standard Gaussian distribution, and one has more precisely: $$d_\mathrm{Kol}(Y_q\,,\,\mathcal{N}_{\R}(0,1)) = O\!\left((1-q)^{1/2}\right).$$ Moreover, one has the following local limit theorem: for any exponent $\delta \in (0,\frac{1}{2})$ and any Jordan measurable subset $B$ with $m(B)>0$, $$\lim_{q \to 1} \,(1-q)^{-\delta}\,\,\mathbb{P}\!\left[Y_q - x \in (1-q)^{\delta}\,B\right] = \frac{\mathrm{e}^{-\frac{x^2}{2}}}{\sqrt{2\pi}}\,m(B).$$ \end{proposition} \noindent Note that this result does not allow one to go up to the discrete scale. Indeed, the estimate of the variance shows that the Gaussian approximation for $S_q-M_q$ holds at scales $(1-q)^{\delta-3/2}$ with $\delta < \frac{1}{2}$, so one cannot describe what happens for the scales $(1-q)^{-\gamma}$ with $\gamma \in (0,1]$. \begin{remark} The asymptotics of the expectations $M_q$ are easy to retrieve from the Hardy--Ramanujan asymptotic formula for the number $p(n)$ of integer partitions of size $n$: $$p(n) = \frac{(1+o(1))}{4n\sqrt{3}}\,\exp\!\left(\pi \sqrt{\frac{2n}{3}}\right),$$ see \cite{HR18,Rad38}. It implies that the probability measure $\mathbb{P}_q[\lambda] = \frac{q^{|\lambda|}}{Z(q)}$ is concentrated on partitions of size $n$ such that $p(n)\,q^n$ is maximal, that is roughly with $$f_q(n)=\pi \sqrt{\frac{2n}{3}} + n \log q $$ maximal. When $q$ is fixed, the maximum of $f_q(n)$ is attained at $n=\frac{\pi^2}{6(\log q)^2}=\frac{\zeta(2)}{(\log q)^2}$; this is the leading term in the asymptotic expansion of $M_q$. \end{remark} \medskip Similarly, one can study the size of a random \emph{plane partition} chosen with probability proportional to $q^{\mathrm{vol}(\lambda)}$. 
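Before turning to plane partitions, the expansions of Lemma \ref{lem:meanvariancepartition} can be checked numerically; for $M_q$ the remaining error is in fact extremely small, all higher polynomial corrections in the expansion vanishing because $\zeta(-2k)=0$ for $k \geq 1$. The sketch below (ours, illustrative only) truncates the defining series at a rank where the tail is negligible:

```python
import math

def M(q, N=20000):
    """M_q = sum_{n>=1} n q^n / (1 - q^n), truncated at rank N."""
    return sum(n * q ** n / (1 - q ** n) for n in range(1, N + 1))

def V(q, N=20000):
    """V_q = sum_{n>=1} n^2 q^n / (1 - q^n)^2, truncated at rank N."""
    return sum(n * n * q ** n / (1 - q ** n) ** 2 for n in range(1, N + 1))

q = 0.99
zeta2 = math.pi ** 2 / 6
approx_M = zeta2 / math.log(q) ** 2 + 1 / (2 * math.log(q)) + 1 / 24
print(M(q) / approx_M)                     # extremely close to 1
print(V(q) * (1 - q) ** 3 / (2 * zeta2))   # tends to 1 as q -> 1
```

The second ratio converges more slowly, since the equivalent $V_q \sim 2\zeta(2)/(1-q)^3$ carries a relative correction of order $1-q$.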
A plane partition is a sequence $\lambda = (\lambda^{(1)},\lambda^{(2)},\ldots,\lambda^{(s)})$ of non-empty integer partitions such that the following inequalities hold: $$\forall i \leq s-1,\,\,\forall j \leq \ell(\lambda^{(i)}),\,\,\,\lambda^{(i)}_j \geq \lambda^{(i+1)}_j .$$ We refer to \cite[Chapter 11]{And76} for the combinatorics of these objects. They can be represented by $3$-dimensional Young diagrams, so for instance, \vspace{2mm} \newcounter{x} \newcounter{y} \newcounter{z} \newcommand\xaxis{210} \newcommand\yaxis{-30} \newcommand\zaxis{90} \newcommand\topside[3]{ \fill[fill=NavyBlue, draw=black,shift={(\xaxis:#1)},shift={(\yaxis:#2)}, shift={(\zaxis:#3)}] (0,0) -- (30:1) -- (0,1) --(150:1)--(0,0); } \newcommand\leftside[3]{ \fill[fill=red!50!white, draw=black,shift={(\xaxis:#1)},shift={(\yaxis:#2)}, shift={(\zaxis:#3)}] (0,0) -- (0,-1) -- (210:1) --(150:1)--(0,0); } \newcommand\rightside[3]{ \fill[fill=NavyBlue!50!white, draw=black,shift={(\xaxis:#1)},shift={(\yaxis:#2)}, shift={(\zaxis:#3)}] (0,0) -- (30:1) -- (-30:1) --(0,-1)--(0,0); } \newcommand\cube[3]{ \topside{#1}{#2}{#3} \leftside{#1}{#2}{#3} \rightside{#1}{#2}{#3} } \newcommand\planepartition[1]{ \setcounter{x}{-1} \foreach \a in {#1} { \addtocounter{x}{1} \setcounter{y}{-1} \foreach \b in \a { \addtocounter{y}{1} \setcounter{z}{-1} \foreach \c in {1,...,\b} { \addtocounter{z}{1} \cube{\value{x}}{\value{y}}{\value{z}} } } } } \begin{center} \begin{tikzpicture}[scale=0.8] \planepartition{{4,4,3,2,2},{4,2,2,1},{2,2},{1},{1}} \end{tikzpicture}\vspace{2mm} \end{center} is the diagram of the plane partition $((5,5,3,2),(4,3,1,1),(2,2),(1),(1))$. The volume of a plane partition is the number of boxes of its diagram, that is $\mathrm{vol}(\lambda)=|\lambda^{(1)}| + \cdots + |\lambda^{(s)}|$. 
The generating series of the volumes of the plane partitions is given by MacMahon's formula: $$\sum_{\lambda \text{ plane partition}}q^{\mathrm{vol}(\lambda)} = \prod_{n=1}^\infty \frac{1}{(1-q^n)^n}.$$ Therefore, if $S_q'$ is the size of a random plane partition chosen according to the probability measure $\mathbb{P}_q'[\lambda] = \frac{q^{\mathrm{vol}(\lambda)}}{Z'(q)} = (\prod_{n=1}^\infty (1-q^n)^n)\,q^{\mathrm{vol}(\lambda)}$, then the Laplace transform of $S_q'$ is $$\mathbb{E}[\mathrm{e}^{z S_q'}] = \prod_{n=1}^\infty \left(\frac{1-q^n}{1-q^n\mathrm{e}^{nz}}\right)^n.$$ Thus, $S_q'$ admits a representation in law as a random series of independent random variables $$S_q' = \sum_{n=1}^\infty \sum_{i=1}^n n\,Y_{n,i},$$ where $Y_{n,i}$ is a geometric random variable with parameter $(1-q^n)$. Set $M_q' = \sum_{n=1}^\infty \frac{n^2\,q^n}{1-q^n}$ and $V_q' = \sum_{n=1}^\infty \frac{n^3\,q^n}{(1-q^n)^2}$. Arguments similar to those used in the proof of Lemma \ref{lem:meanvariancepartition} show that \begin{align*} M_q' &= \frac{2\,\zeta(3)}{(-\log q)^3} + \frac{1}{12\,\log q} +o(1) = \frac{2\,\zeta(3)}{(1- q)^3}\,(1+o(1));\\ V_q' &= \frac{6\,\zeta(3)}{(1- q)^4}\,(1+o(1)) \end{align*} as $q$ goes to $1$. Again, the asymptotics of $M_q'$ are related to the asymptotic formula for the number of plane partitions with volume $n$: $$p'(n) = \frac{(1+o(1))\,(\zeta(3))^{7/36}}{\sqrt{12\pi}}\,\left(\frac{n}{2}\right)^{-\frac{25}{36}}\,\exp\!\left(3(\zeta(3))^{1/3}\left(\frac{n}{2}\right)^{2/3}+\zeta'(-1)\right),$$ see \cite{Wright31,KM06}. Set $$X_q' = \frac{S_q'-M_q'}{(V_q')^{5/12}}.$$ Since $S_q'$ involves the same geometric random variables as before, the same computations prove that $(X_q')_{q \in (0,1)}$ admits a zone of control of mod-Gaussian convergence for the parameter $t_q = (V_q')^{1/12}$. This zone of control has again index $(3,3)$, and its size can be taken equal to $O((V_q')^{1/8})$. 
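As for ordinary partitions, the stated expansion of $M_q'$ can be verified numerically (an illustrative sketch of ours; the remaining error beyond the two displayed terms is of order $O(1-q)$, hence $o(1)$):

```python
import math

def Mp(q, N=20000):
    """M'_q = sum_{n>=1} n^2 q^n / (1 - q^n), truncated at rank N."""
    return sum(n * n * q ** n / (1 - q ** n) for n in range(1, N + 1))

q = 0.99
zeta3 = sum(1.0 / n ** 3 for n in range(1, 200000))   # zeta(3), truncated
approx = 2 * zeta3 / (-math.log(q)) ** 3 + 1 / (12 * math.log(q))
print(Mp(q), approx, Mp(q) / approx)
```

At $q = 0.99$ the two-term expansion already matches the series to many digits, since the leading term is of order $(1-q)^{-3} \approx 10^6$ while the neglected corrections are tiny.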
We conclude: \begin{proposition} Let $S_q'$ be the size of a random plane partition chosen with probability proportional to $q^{\mathrm{vol}(\lambda)}$, and $M_q'$ and $V_q'$ be the expectation and the variance of $S_q'$. As $q$ goes to $1$, $Y_q' = \frac{S_q'-M_q'}{\sqrt{V_q'}}$ converges in law to the standard Gaussian distribution, and one has more precisely: $$d_\mathrm{Kol}(Y_q'\,,\,\mathcal{N}_{\R}(0,1)) = O\!\left((1-q)^{1/2}\right).$$ Moreover, for any exponent $\delta \in (0,\frac{1}{2})$ and any Jordan measurable subset $B$ with $m(B)>0$, $$\lim_{q \to 1} \,(1-q)^{-\delta}\,\,\mathbb{P}\!\left[Y_q' - x \in(1-q)^{\delta}\,B\right] = \frac{\mathrm{e}^{-\frac{x^2}{2}}}{\sqrt{2\pi}}\,m(B).$$ \end{proposition} \medskip \subsection{Determinantal point processes and zeroes of a random analytic series}\label{subsec:zeroes} Determinantal point processes form another framework that often yields mod-Gaussian random variables satisfying Theorem \ref{thm:locallimit1}. Consider a locally compact, separable and complete metric space $\mathfrak{X}$ endowed with a locally finite measure $\lambda$, and a Hermitian non-negative linear operator $\mathscr{K} : \mathrm{L}^2(\mathfrak{X},\lambda) \to \mathrm{L}^2(\mathfrak{X},\lambda)$ such that for any relatively compact subset $A \subset \mathfrak{X}$, the induced operator $\mathscr{K}_A = 1_A\,\mathscr{K}\,1_A$ on $\mathrm{L}^2(A,\lambda_{|A})$ is a trace class operator with spectrum included in $[0,1]$. 
The operator $\mathscr{K}$ can then be represented by a Hermitian locally square-integrable kernel $K$: $$(\mathscr{K}f)(x) = \int_\mathfrak{X} K(x,y)\,f(y)\,\lambda(\!\DD{y}).$$ In this setting, there is a unique law of a random point process $M=\sum_{i \in I} \delta_{X_i}$ on $\mathfrak{X}$ such that the correlation functions of $M$ with respect to the reference measure $\lambda$ are given by $$\rho_n(x_1,\ldots,x_n) = \det(K(x_i,x_j))_{1\leq i,j\leq n}.$$ One says that $M$ is the determinantal point process associated to the kernel $K$, see for instance \cite{Sosh00,Joh05} and \cite[Chapter 4]{HKPV09} for details. For any relatively compact set $A$, the number $M(A)$ of points of the random point process that fall in $A$ can then be written as $$M(A) =_{(\mathrm{law})} \sum_{j\in J} \mathrm{Ber}(p_{A,j}),$$ where the $p_{A,j}$'s are the eigenvalues of the trace class integral operator $\mathscr{K}_A$, and the Bernoulli random variables are independent; see \cite[Theorem 4.5.3]{HKPV09}. \begin{proposition}\label{prop:determinantal} We consider a determinantal point process $M$ as above, with a continuous kernel $K$ that is locally square-integrable but not square-integrable: $\int_{\mathfrak{X}^2} |K(x,y)|^2\,\lambda(\!\DD{x})\,\lambda(\!\DD{y})=+\infty$. We also fix a growing sequence $(A_n)_{n \in \mathbb{N}}$ of relatively compact subsets of $\mathfrak{X}$ such that $\bigcup_{n \in \mathbb{N}} A_n = \mathfrak{X}$, and such that the ratio $$r_n = \left(\frac{\int_{A_n} K(x,x)\,\lambda(\!\DD{x})}{\int_{(A_n)^2} |K(x,y)|^2\,\lambda(\!\DD{x})\,\lambda(\!\DD{y})}\right)$$ admits a limit $r \in (1,+\infty]$ as $n$ goes to $+\infty$ (we allow $r=+\infty$, and we shall see that $r_n \geq 1$ for any $n \in \mathbb{N}$). Then, with $m_n = \mathbb{E}[M(A_n)]$ and $v_n = \Var(M(A_n))$, we have mod-Gaussian convergence of $X_n = (M(A_n)-m_n)/(v_n)^{1/3}$ with parameters $t_n=(v_n)^{1/3}$, and with a zone of control of size $O((v_n)^{1/3})$ and with index $(3,3)$. 
Therefore, for any $\delta \in (0,\frac{1}{2})$ and any Jordan measurable subset $B$ with $m(B)>0$, $$\lim_{n \to \infty} (v_n)^{\frac{1}{2}-\delta}\,\mathbb{P}\!\left[M(A_n) - m_n - x(v_n)^{\frac{1}{2}} \in (v_n)^{\delta}\,B\right] = \frac{\mathrm{e}^{-\frac{x^2}{2}}}{\sqrt{2\pi}}\,m(B).$$ If $Y_n = \frac{M(A_n)-m_n}{\sqrt{v_n}}$, then we also have $d_\mathrm{Kol}(Y_n,\mathcal{N}_{\R}(0,1))=O((v_n)^{-1/2})$. \end{proposition} \begin{proof} Denote by $(p_{n,j})_{j \in \mathbb{N}}$ the non-increasing sequence of eigenvalues of the compact operator $\mathscr{K}_{A_n}$, these eigenvalues being counted with multiplicity; they all belong to $[0,1]$ by \cite[Theorem 3]{Sosh00}. The expectation of $M(A_n)$ is $$m_n= \sum_{j \in \mathbb{N}} p_{n,j} =\int_{A_n} \rho_1(x)\,\lambda(\!\DD{x}) = \int_{A_n} K(x,x)\,\lambda(\!\DD{x}),$$ and its variance is \begin{align*} v_n&= \sum_{j \in \mathbb{N}} p_{n,j}(1-p_{n,j}) = \mathbb{E}[M(A_n)] + \mathbb{E}[M(A_n)(M(A_n)-1)] - (\mathbb{E}[M(A_n)])^2 \\ &= \int_{A_n} K(x,x)\,\lambda(\!\DD{x}) + \int_{(A_n)^2} \left(\det\left(\begin{smallmatrix}K(x,x) & K(x,y) \\ K(y,x) & K(y,y)\end{smallmatrix} \right) - K(x,x)\,K(y,y) \right)\,\lambda(\!\DD{x})\,\lambda(\!\DD{y}) \\ &= \int_{A_n} K(x,x)\,\lambda(\!\DD{x}) - \int_{(A_n)^2} |K(x,y)|^2\,\lambda(\!\DD{x})\,\lambda(\!\DD{y}). \end{align*} In particular, we always have $r_n = \frac{m_n}{m_n - v_n} \geq 1$. Since $K$ is not square-integrable on $\mathfrak{X}^2$, $$\lim_{n \to \infty} m_n - v_n = \lim_{n \to \infty} \int_{(A_n)^2} |K(x,y)|^2\,\lambda(\!\DD{x})\,\lambda(\!\DD{y}) = +\infty.$$ If $r \in (1,+\infty)$, then $m_n$ and $v_n$ grow to infinity at the same speed $s_n=m_n-v_n$, but with different rates $rs_n$ and $(r-1)s_n$. 
If $r=+\infty$, then $m_n$ and $v_n$ grow to infinity faster than the speed $s_n$, and $m_n/v_n \to 1$.\bigskip Consider a real parameter $\zeta$ with $|\zeta| \leq c$ for some constant $c<1$ sufficiently small, so that $\sum_{n=3}^\infty \frac{c^{n-3}}{n} \leq \frac{1}{2}$. Then, with $p \in [0,1]$, the power series expansion of $\log(1+t)$ yields \begin{align*} &\left|\log(1+p(\mathrm{e}^{\mathrm{i} \zeta}-1)) - \mathrm{i} p\zeta +p(1-p)\frac{\zeta^2}{2}\right| \\ &\leq p \left|\mathrm{e}^{\mathrm{i} \zeta}-1-\mathrm{i}\zeta+\frac{\zeta^2}{2}\right| + \frac{p^2}{2} \left|(\mathrm{e}^{\mathrm{i} \zeta}-1)^2 + \zeta^2\right| + \frac{p^3 |\zeta|^3}{2} \leq A\, p\,|\zeta|^3 \end{align*} for some positive constant $A$, since $|\mathrm{e}^{\mathrm{i} \zeta}-1| \leq |\zeta|$ for any $\zeta$. Therefore, with $\zeta = \frac{\xi}{(v_n)^{1/3}}$, we obtain on the zone $\xi \in [-c(v_n)^{1/3},c(v_n)^{1/3}]$ the estimate $$\left|\sum_{j \in \mathbb{N}} \log\!\left(1+p_{n,j}\left(\mathrm{e}^{\frac{\mathrm{i} \xi}{(v_n)^{1/3}}}-1\right)\right) - \mathrm{i} \,\frac{m_n}{(v_n)^{1/3}}\, \xi + (v_n)^{1/3}\,\frac{\xi^2}{2}\right| \leq A\,\frac{m_n}{v_n}\,|\xi|^3.$$ So, if $t_n=(v_n)^{1/3}$ and $X_n = (M(A_n)-m_n)/(v_n)^{1/3}$, then the identity $\mathbb{E}[\mathrm{e}^{\mathrm{i} \zeta M(A_n)}]=\prod_{j \in \mathbb{N}} (1+p_{n,j}\,(\mathrm{e}^{\mathrm{i} \zeta}-1))$ leads to $$|\theta_n(\xi)-1|\leq A\,\frac{m_n}{v_n}\,|\xi|^3\,\exp\left(A\,\frac{m_n}{v_n}\,|\xi|^3\right) \leq K_1\,|\xi|^3\,\exp(K_2|\xi|^3)$$ with $K_1=K_2 = 2A\,\frac{r}{r-1}$, for $n$ large enough. Once this zone of control is established, the probabilistic estimates follow readily from Theorem \ref{thm:locallimit1} and \cite{FMN17}. \end{proof} As an application of the previous proposition, consider $G(z) = \sum_{n=0}^\infty G_n\,z^n$ a random analytic series, where the $G_n$'s are independent standard complex Gaussian variables. 
The radius of convergence of $G$ is almost surely equal to $1$, and the set of zeroes of $G$ is a determinantal point process on $D(0,1)=\{z \in \mathbb{C}\,|\,|z|<1\}$ with kernel $$K(z_1,z_2) = \frac{1}{\pi(1-z_1\overline{z_2})^2};$$ see \cite[Theorem 1]{PV05}. As a consequence, the number of zeroes $Z_R$ of the random series $G$ that fall in the disk $D(0,R)=\{z \in \mathbb{C}\,|\,|z|<R\}$ with $R<1$ admits the representation in law $$Z_R = \sum_{k=1}^\infty \mathrm{Ber}(R^{2k}),$$ where the Bernoulli random variables are taken independent (Theorem 2 in \emph{loc.~cit.}). In \cite[Section 7.1]{FMN16} and \cite[Section 3.1]{FMN17}, we used this representation to prove the mod-Gaussian convergence of $Z_R$ as $R$ goes to $1$. Here, we remark that \begin{align*} m_R &= \mathbb{E}[Z_R] = \sum_{k=1}^\infty R^{2k} = \frac{R^2}{1-R^2};\\ v_R &= \Var(Z_R) = \sum_{k=1}^\infty R^{2k}(1-R^{2k}) = \frac{R^2}{1-R^2} - \frac{R^4}{1-R^4} = \frac{R^2}{1-R^4}, \end{align*} so as $R$ goes to $1$, we have $m_R/(m_R-v_R) = 1+R^{-2} \to 2 \in (1,+\infty]$. Consequently, if we introduce the hyperbolic area $h=\frac{4\pi R^2}{(1-R^2)}$ of the disc $D(0,R)$, and if we use the conformal invariance of the point process of zeroes \cite[Section 2.3]{HKPV09}, we obtain: \begin{proposition} Denote by $Z^h$ the number of zeroes of a random analytic series $G=\sum_{n=0}^\infty G_n\,z^n$ that fall in a disc with hyperbolic area $h$. For any $\delta \in (0,\frac{1}{2})$ and any Jordan measurable subset $B$ with $m(B)>0$, $$\lim_{h \to +\infty} h^{\frac{1}{2}-\delta}\,\,\mathbb{P}\!\!\left[Z^h-\frac{h}{4\pi} - \frac{xh^{\frac{1}{2}}}{\sqrt{8\pi}} \in \frac{h^{\delta}}{\sqrt{8\pi}}\,B\right] = \frac{\mathrm{e}^{-\frac{x^2}{2}}}{\sqrt{2\pi}}\,m(B).$$ \end{proposition} \noindent This result is optimal, because $\delta=0$ corresponds to the discrete scale, where the Gaussian approximation cannot hold. 
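Before moving on, the closed forms for $m_R$ and $v_R$ and the limit $m_R/(m_R-v_R) \to 2$ are easy to confirm numerically from the Bernoulli representation (a side check, not part of the proof):

```python
# Side check (ours): truncate the Bernoulli representation
# Z_R = sum_{k>=1} Ber(R^{2k}) and compare its mean and variance with
# the closed forms m_R = R^2/(1-R^2) and v_R = R^2/(1-R^4).
K = 200_000  # truncation order; R^{2K} is negligible for the R below

for R in (0.9, 0.99):
    m = sum(R ** (2 * k) for k in range(1, K))
    v = sum(R ** (2 * k) * (1 - R ** (2 * k)) for k in range(1, K))
    assert abs(m - R**2 / (1 - R**2)) < 1e-7
    assert abs(v - R**2 / (1 - R**4)) < 1e-7
    assert abs(m / (m - v) - (1 + R**-2)) < 1e-6
print("closed forms confirmed")
```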
\medskip \subsection{Random zeta functions}\label{subsec:randomzeta} In this section, we consider a multi-dimensional example stemming from number theory and the study of the Riemann $\zeta$ function. Notice that Theorem \ref{thm:approx} and Lemma \ref{lem:estimatetestfunction} also hold in $\mathbb{R}^d$ with $d \geq 2$. Therefore, we have the following extension to $\mathbb{R}^{d}$, $d \geq 2$, of our main Theorem \ref{thm:locallimit1} (we only state this extension for mod-Gaussian sequences): \begin{proposition}\label{prop:locallimitmultidim} Let $(\mathbf{X}_n)_{n \in \mathbb{N}}$ be a sequence of random vectors in $\mathbb{R}^d$, and $(t_n)_{n \in \mathbb{N}}$ a sequence going to $+\infty$. We denote $$ \theta_n(\boldsymbol{\xi}) = \mathbb{E}[\mathrm{e}^{\mathrm{i} \scal{\boldsymbol{\xi}}{\mathbf{X}_n} }]\,\mathrm{e}^{\frac{t_n\,\|\boldsymbol{\xi}\|^2}{2}},\quad \text{with }\scal{\boldsymbol{\xi}}{\mathbf{X}_n} = \sum_{i=1}^d \xi_i X_{n,i}\text{ and }\|\boldsymbol{\xi}\|^2 = \sum_{i=1}^d (\xi_i)^2. $$ We assume that there is a zone $[-K(t_n)^{\gamma},K(t_n)^\gamma]^d$ such that, for any $\boldsymbol{\xi}$ in this zone, $$|\theta_n(\boldsymbol{\xi})-1|\leq K_1\,\|\boldsymbol{\xi}\|^v\,\exp(K_2\,\|\boldsymbol{\xi}\|^w)$$ where $v>0$, $w \geq 2$ and $-\frac{1}{2}<\gamma\leq \frac{1}{w-2}$ (with the convention $\frac{1}{w-2}=+\infty$ if $w=2$). \medskip \noindent Then, $\mathbf{Y}_n = \mathbf{X}_n/\sqrt{t_n}$ converges in law to the standard Gaussian distribution $\mathcal{N}_{\mathbb{R}^d}(0,I_d)$, and for any $\delta \in (0,\frac{1}{2}+\gamma)$, any $\mathbf{y} \in \mathbb{R}^d$ and any Jordan measurable subset $B \subset \mathbb{R}^d$ with $m(B)>0$, $$\lim_{n \to \infty} \,\,(t_n)^{d\delta}\,\mathbb{P}\!\!\left[\mathbf{Y}_n -\mathbf{y} \in (t_n)^{-\delta}B\right] = \frac{\mathrm{e}^{-\frac{\|\mathbf{y}\|^2}{2}}}{(2\pi)^{\frac{d}{2}}}\,m(B),$$ where $m(B)$ is the $d$-dimensional Lebesgue measure of $B$. \end{proposition} We refer to \cite[Theorem 4]{KN12} for a similar statement, with slightly different assumptions. 
The reader should be aware that in the theory of mod-$\phi$ convergent sequences, this local limit theorem is the only multi-dimensional extension of the results in dimension $1$ that is straightforward. Thus, for the speed of convergence estimates and the large deviation results, new phenomena occur in dimension $d \geq 2$, and the extension of the one-dimensional results is much more involved \cite{FMN17b}. Now, let us apply Proposition \ref{prop:locallimitmultidim} to the sequence of complex random variables \begin{equation} X_n = - \sum_{p \leq n} \log\left(1-\frac{U_p}{\sqrt{p}}\right), \label{eq:randomzeta} \end{equation} where the sum runs over prime numbers $p$ smaller than $n$, and the random variables $U_p$ are independent and uniformly distributed on the unit circle. The random variables $X_n$ are simple models of the logarithm of the random zeta function on the critical line, see \cite[Section 4.1]{JKN11} and \cite[Example 2]{KN12}. The $2$-dimensional Fourier transform of $X_n$ was computed in \cite{KN12}: $$\mathbb{E}[\mathrm{e}^{\mathrm{i} (\xi_1\mathrm{Re}(X_n) + \xi_2 \mathrm{Im}(X_n))}] = \prod_{p \leq n} \hypergeom{\frac{\mathrm{i} \xi_1+\xi_2}{2}}{\frac{\mathrm{i} \xi_1-\xi_2}{2}}{1}\left(\frac{1}{p}\right),$$ where $\hypergeom{a}{b}{c}(z)$ is the hypergeometric function defined by $\hypergeom{a}{b}{c}(z) = \sum_{m=0}^\infty \frac{a^{\uparrow m}\,b^{\uparrow m}}{c^{\uparrow m}\,m!}\,z^m,$ with $k^{\uparrow m}=k(k+1)\cdots (k+m-1)$. \bigskip Therefore, if $\theta_n(\boldsymbol{\xi}) = \mathbb{E}[\mathrm{e}^{\mathrm{i} \scal{\boldsymbol{\xi}}{X_n}}]\,\mathrm{e}^{\frac{t_n\|\boldsymbol{\xi}\|^2}{2}}$ with $t_n = \frac{1}{2}\sum_{p\leq n}\frac{1}{p}$, then \begin{align*} \theta_n(\boldsymbol{\xi}) = \prod_{p \leq n} \left(1 - \frac{\|\boldsymbol{\xi}\|^2}{4p} + \sum_{m \geq 2} \frac{(\frac{\mathrm{i} \xi_1+\xi_2}{2})^{\uparrow m}\,(\frac{\mathrm{i} \xi_1-\xi_2}{2})^{\uparrow m}}{(m!)^2}\,p^{-m}\right) \mathrm{e}^{\frac{\|\boldsymbol{\xi}\|^2}{4p}}. 
\end{align*} Denote by $T_p(\boldsymbol{\xi})$ the factors of the product on the right-hand side, and by $R_p(\boldsymbol{\xi})$ the sum over $m \geq 2$ appearing in them. We have \begin{align*} |R_p(\boldsymbol{\xi})|&=\left|\sum_{m \geq 2}\frac{(\frac{\mathrm{i} \xi_1+\xi_2}{2})^{\uparrow m}\,(\frac{\mathrm{i} \xi_1-\xi_2}{2})^{\uparrow m}}{(m!)^2}\,p^{-m} \right|\\ &\leq \sum_{m \geq 2}\left(\frac{(\frac{\|\boldsymbol{\xi}\|}{2})^{\uparrow m}}{m!}\right)^{\!2}\,p^{-m} = \frac{1}{2\pi} \int_{0}^{2\pi} \left|\sum_{m\geq 2}\frac{(\frac{\|\boldsymbol{\xi}\|}{2})^{\uparrow m}}{m!} \mathrm{e}^{\mathrm{i} m\theta}\,p^{-\frac{m}{2}}\right|^2\DD{\theta}\\ &\leq \frac{1}{2\pi} \int_{0}^{2\pi} \left|\frac{1}{(1-\mathrm{e}^{\mathrm{i}\theta}p^{-1/2})^{\frac{\|\boldsymbol{\xi}\|}{2}}} - 1 - \frac{\|\boldsymbol{\xi}\|}{2}\,\mathrm{e}^{\mathrm{i}\theta}p^{-1/2}\right|^2\DD{\theta} \\ &\leq \left(\frac{(\frac{\|\boldsymbol{\xi}\|}{2})(\frac{\|\boldsymbol{\xi}\|}{2}+1)}{2}\,\frac{1}{(1-p^{-1/2})^{\frac{\|\boldsymbol{\xi}\|}{2} +2}\,p}\right)^2. \end{align*} Suppose $\|\boldsymbol{\xi}\|\leq \sqrt{p}$. 
Then, $\mathrm{e}^{\frac{\|\boldsymbol{\xi}\|^2}{4p}} \leq \mathrm{e}^{\frac{1}{4}}$ and $(1-(1-\frac{\|\boldsymbol{\xi}\|^2}{4p})\mathrm{e}^{\frac{\|\boldsymbol{\xi}\|^2}{4p}})\leq (1-\frac{3}{4}\mathrm{e}^{\frac{1}{4}}) \frac{\|\boldsymbol{\xi}\|^4}{p^2}$, so \begin{align*} |R_p(\boldsymbol{\xi})| &\leq \frac{1}{64(1-p^{-1/2})^{4+p^{1/2}}}\,\frac{\|\boldsymbol{\xi}\|^2(\|\boldsymbol{\xi}\|+2)^2}{p^2}\leq \frac{12.06\,\|\boldsymbol{\xi}\|^2(\|\boldsymbol{\xi}\|+2)^2}{p^2};\\ |T_p(\boldsymbol{\xi})-1|&\leq \left(1-\left(1-\frac{\|\boldsymbol{\xi}\|^2}{4p}\right)\mathrm{e}^{\frac{\|\boldsymbol{\xi}\|^2}{4p}}\right)+ \mathrm{e}^{1/4}\,|R_p(\boldsymbol{\xi})| \leq \frac{16\,\|\boldsymbol{\xi}\|^2(\|\boldsymbol{\xi}\|+2)^2}{p^2} \\ &\leq \frac{16\,\|\boldsymbol{\xi}\|^2(\|\boldsymbol{\xi}\|+2)^2}{p^2}\,\mathrm{e}^{\frac{16\,\|\boldsymbol{\xi}\|^2(\|\boldsymbol{\xi}\|+2)^2}{p^2}};\\ |T_p(\boldsymbol{\xi})| &\leq 1+\frac{16\,\|\boldsymbol{\xi}\|^2(\|\boldsymbol{\xi}\|+2)^2}{p^2} \leq \mathrm{e}^{\frac{16\,\|\boldsymbol{\xi}\|^2(\|\boldsymbol{\xi}\|+2)^2}{p^2}}. \end{align*} On the first line, we used the fact that $p \in \mathbb{P} \mapsto (1-p^{-1/2})^{4+p^{1/2}}$ attains its minimum at $p=2$. On the other hand, if $\|\boldsymbol{\xi}\| \geq \sqrt{p}$, then \begin{align*} |T_p(\boldsymbol{\xi})| &= \left|\mathbb{E}\!\left[\mathrm{e}^{-\mathrm{i} \scal{\boldsymbol{\xi}}{\log\!\left(1-\frac{U_p}{\sqrt{p}}\right)}}\right]\right|\,\mathrm{e}^{\frac{\|\boldsymbol{\xi}\|^2}{4p}} \leq \mathrm{e}^{\frac{\|\boldsymbol{\xi}\|^2}{4p}} \leq \mathrm{e}^{\frac{16\,\|\boldsymbol{\xi}\|^2(\|\boldsymbol{\xi}\|+2)^2}{p^2}};\\ |T_p(\boldsymbol{\xi})-1| &\leq 1 + \mathrm{e}^{\frac{\|\boldsymbol{\xi}\|^2}{4p}} \leq 2\,\mathrm{e}^{\frac{\|\boldsymbol{\xi}\|^2}{4p}}\leq \frac{16\,\|\boldsymbol{\xi}\|^2(\|\boldsymbol{\xi}\|+2)^2}{p^2}\,\mathrm{e}^{\frac{16\,\|\boldsymbol{\xi}\|^2(\|\boldsymbol{\xi}\|+2)^2}{p^2}}. 
\end{align*} From these inequalities, one deduces that \emph{for any} $\boldsymbol{\xi} \in \mathbb{R}^2$, \begin{align*} |\theta_n(\boldsymbol{\xi})-1| &= \left|\left(\prod_{p \leq n} T_p(\boldsymbol{\xi})\right)-1\right| \leq \sum_{p\leq n} \left(\prod_{\substack{p'\leq n\\ p'\neq p}} |T_{p'}(\boldsymbol{\xi})|\right)\,|T_p(\boldsymbol{\xi})-1| \leq S \exp S \end{align*} where $S = \sum_{p\leq n} \frac{16\,\|\boldsymbol{\xi}\|^2(\|\boldsymbol{\xi}\|+2)^2}{p^2} \leq 8\,\|\boldsymbol{\xi}\|^2(\|\boldsymbol{\xi}\|+2)^2$. It follows immediately that one has a control of index $(2,4)$ over $\theta_n(\boldsymbol{\xi})-1$, which holds on the whole of $\mathbb{R}^2$. Hence: \begin{proposition} Let $X_n$ be the random log-zeta function defined by Equation \eqref{eq:randomzeta}. For any exponent $\delta \in (0,1)$ and any $z \in \mathbb{C}$, $$\lim_{n \to \infty} \left(\log \log n\right)^{2\delta}\,\mathbb{P}\!\left[X_n - z\,\sqrt{\log \log n} \in \left(\log \log n\right)^{\frac{1}{2}-\delta}\,B \right] = \frac{\mathrm{e}^{-|z|^2}}{\pi}\,m(B)$$ for any Jordan measurable bounded set $B \subset \mathbb{C}$ with $m(B)>0$. \end{proposition} \noindent This result improves on \cite[Section 3, Example 2]{KN12} and \cite[Section 3.5]{DKN15}, which dealt only with the case $\delta\leq \frac{1}{2}$. \medskip \subsection{Sums with sparse dependency graphs}\label{subsec:dependencygraph} In the previous paragraphs, we looked at sums of independent random variables, but the theory of zones of control is also useful when dealing with certain sums of weakly dependent random variables. A general setting where one can prove mod-Gaussian convergence with a zone of control is when one has strong bounds on the \emph{cumulants} of the random variables considered. 
Recall that if $X$ is a random variable with convergent Laplace transform $\mathbb{E}[\mathrm{e}^{zX}]$, then its cumulants $\kappa^{(r)}(X)$, $r \geq 1$, are the coefficients of the log-Laplace transform: $$\log \mathbb{E}[\mathrm{e}^{zX}] = \sum_{r=1}^\infty \frac{\kappa^{(r)}(X)}{r!}\,z^r.$$ If $(S_n)_{n \in \mathbb{N}}$ is a sequence of random variables, one says that one has uniform bounds on the cumulants with parameters $(D_n,N_n,A)$ if \begin{equation} \forall r \geq 1,\,\,\,|\kappa^{(r)}(S_n)| \leq r^{r-2}\,A^r\,(2D_n)^{r-1}\,N_n.\label{eq:boundoncumulants} \end{equation} This definition was introduced in \cite[Definition 4.1]{FMN17}. If $D_n = o(N_n)$ and $N_n \to +\infty$, then the uniform bounds on cumulants imply that $X_n = \frac{S_n-\mathbb{E}[S_n]}{(N_n)^{1/3}(D_n)^{2/3}}$ admits a zone of control of mod-Gaussian convergence, with parameters $$ t_n = \frac{\Var(S_n)}{(N_n)^{2/3}(D_n)^{4/3}},$$ index $(3,3)$ and size $O(t_n)$ \cite[Corollary 4.2]{FMN17}. Therefore: \begin{proposition} Let $(S_n)_{n \in \mathbb{N}}$ be a sequence of random variables that admit uniform bounds on cumulants (Equation \eqref{eq:boundoncumulants}). We suppose that $\frac{(\Var(S_n))^{3/2}}{N_n(D_n)^{2}}$ goes to $+\infty$, and that $D_n = o(N_n)$. 
Then, if $Y_n = \frac{S_n-\mathbb{E}[S_n]}{\sqrt{\Var(S_n)}}$, for any $\delta \in (0,1)$ and any Jordan measurable subset $B$ with $m(B)>0$, $$\lim_{n \to \infty} \left(\frac{(\Var(S_n))^{3/2}}{N_n(D_n)^2}\right)^{\!\delta} \,\,\mathbb{P}\!\left[Y_n - y \in \left(\frac{(\Var(S_n))^{3/2}}{N_n(D_n)^2}\right)^{\!-\delta}B\right] = \frac{\mathrm{e}^{-\frac{y^2}{2}}}{\sqrt{2\pi}}\, m(B).$$ In particular, if $\liminf_{n \to \infty} \frac{\Var(S_n)}{N_nD_n}>0$, then for any $\gamma \in (0,\frac{1}{2})$, $$\lim_{n \to \infty} \left(\frac{N_n}{D_n}\right)^{\!\gamma} \,\,\mathbb{P}\!\left[Y_n - y \in \left(\frac{D_n}{N_n}\right)^{\!\gamma}B\right] = \frac{\mathrm{e}^{-\frac{y^2}{2}}}{\sqrt{2\pi}}\, m(B).$$ \end{proposition} \begin{example} As a particular case, suppose that $S_n = \sum_{i=1}^{N_n} A_i$ is a sum of random variables with $\|A_i\|_\infty \leq A$ for all $i \leq N_n$, and such that there exists a \emph{dependency graph} $G = (\lle 1,N_n\rre,E_n)$ with the following properties: \begin{enumerate} \item We have $D_n = 1+\max_{i \in \lle 1,N_n\rre} \deg(i)$, where $\deg(i)$ is the degree of the vertex $i$ in $G$. \item If $I$ and $J$ are two disjoint sets of vertices of $\lle 1,N_n\rre$ without edge $e \in E_n$ connecting a vertex $i \in I$ with a vertex $j \in J$, then $(A_i)_{i \in I}$ and $(A_j)_{j \in J}$ are independent. \end{enumerate} Then, it was shown in \cite[Section 9]{FMN16} that $S_n$ has uniform bounds on cumulants with parameters $(D_n,N_n,A)$. Therefore, if $S_n$ is a sum of random variables with a sparse dependency graph, then $Y_n = (S_n - \mathbb{E}[S_n])/\sqrt{\Var(S_n)}$ usually satisfies a local limit theorem which holds up to the scale $\sqrt{D_n/N_n}$. \bigskip For instance, consider the graph subcount $I(H,G_n)$ of a motif $H$ in a random Erd\H{o}s--R\'enyi graph $G_n$ of parameters $(n,p)$, with $p \in (0,1)$ fixed (see \cite[Example 4.10]{FMN17} for the precise definitions). 
It is shown in \emph{loc.~cit.} that $I(H,G_n)$ admits a dependency graph with parameters \begin{align*} N_n &= n^{\downarrow k};\\ D_n &= 2\binom{k}{2}\,(n-2)^{\downarrow k-2}, \end{align*} where $k$ is the number of vertices of the graph $H$, and $n^{\downarrow k}=n(n-1)\cdots (n-k+1)$. Moreover, \begin{align*} \mathbb{E}[I(H,G_n)]&=p^h n^{\downarrow k};\\ \Var(I(H,G_n)) &= 2h^2 p^{2h-1}(1-p)\,n^{2k-2} + O(n^{2k-3}), \end{align*} where $h$ is the number of edges of $H$. Therefore, we have the local limit theorem: $$\mathbb{P}\!\left[\frac{I(H,G_n)}{p^h}-n^{\downarrow k} - 2h n^{k-1}x \in n^{k-1-\gamma}\,B\right] \simeq n^{-\gamma}\,\frac{\mathrm{e}^{-\frac{px^2}{1-p}}\,m(B)}{2h\sqrt{\pi\,(\frac{1}{p}-1)}}$$ for any $\gamma \in (0,1)$. For example, if $T_n$ is the number of triangles in a random Erd\H{o}s--R\'enyi graph $G(n,p)$, then for any $\gamma \in (0,1)$, $$\lim_{n \to \infty} n^{\gamma}\,\,\mathbb{P}\!\left[\frac{T_n}{p^3} - n^{\downarrow 3} -6n^2 x\in n^{2-\gamma}\,B\right] = \frac{\mathrm{e}^{-\frac{px^2}{1-p}}\,m(B)}{6\sqrt{\pi\,(\frac{1}{p}-1)}}.$$ With our method, we cannot attain the discrete scale (which would correspond to the exponent $\gamma=2$ in the case of triangles). In the specific case of triangles, this strong local limit theorem has been proved recently by Gilmer and Kopparty, see \cite{GK16}. Our local limit theorem holds at larger scales and for any graph subcount. \end{example} \medskip \subsection{Numbers of visits of a finite Markov chain}\label{subsec:markov} The method of cumulants can also be applied to sums of random variables that are all dependent (there is no sparse dependency graph), but still with a ``weak'' dependency structure. We refer to \cite[Section 5]{FMN17}, where this is made rigorous by means of the notion of weighted dependency graph. 
Consider for instance an ergodic Markov chain $(X_n)_{n \in \mathbb{N}}$ on a finite state space $\mathfrak{X}=\lle 1,M\rre$, where by ergodic we mean that the transition matrix $P$ of $(X_n)_{n \in \mathbb{N}}$ is irreducible and aperiodic. We also fix a state $a \in \lle 1,M\rre$, and we denote $\pi(a)$ the value of the unique stationary measure $\pi$ of the Markov chain at $a$. If $T_a$ is the first return time to $a$, then it is well known that $\pi(a) = \frac{1}{\mathbb{E}_a[T_a]}$. In the sequel, we assume for simplicity that the initial distribution of the Markov chain is the stationary measure $\pi$, and we denote by $\mathbb{P}$ and $\mathbb{E}$ the corresponding probabilities and expectations on trajectories in $\mathfrak{X}^{\mathbb{N}}$. If $$N_{n,a} = \mathrm{card} \{ i \in \lle 1,n\rre\,|\,X_i=a\}$$ is the number of visits to $a$ from time $1$ to time $n$, then $\mathbb{E}[N_{n,a}] =n\,\pi(a)$ and $$\lim_{n \to \infty}\frac{\Var(N_{n,a})}{n} = (\pi(a))^3\,\Var(T_a).$$ This identity is a particular case of the following general result: if $f$ is a function on $\mathfrak{X}$, then $$\lim_{n \to \infty} \frac{\Var(\sum_{i=1}^n f(X_i))}{n} = \pi(a)\,\mathbb{E}_a\!\left[\left(\sum_{i=1}^{T_a} \left(f(X_i)-\pi(f)\right)\right)^{\!2}\right].$$ In \cite[Theorem 5.14]{FMN17}, we proved that $N_{n,a}$ has uniform bounds on cumulants with parameters $A=1$, $N_n=n$, and $D_n = \frac{1+\theta_P}{1-\theta_P}$, where $\theta_P <1$ is a constant depending only on $P$ (it is the square root of the second largest eigenvalue of the multiplicative reversibilization $P\widetilde{P}$ of $P$, see \cite[\S2.1]{Fill91} for details on this construction). From this, we deduce: \begin{proposition} Let $(X_n)_{n \in \mathbb{N}}$ be a stationary finite ergodic Markov chain, and $a$ be an element of the space of states. 
The numbers of visits $N_{n,a}$ satisfy the local limit theorem $$\lim_{n \to \infty}\,n^{\gamma}\,\,\mathbb{P}\!\!\left[\frac{N_{n,a} - n\pi(a)}{\sqrt{\Var(N_{n,a})}} - x \in n^{-\gamma}B\right] = \frac{\mathrm{e}^{-\frac{x^2}{2}}\,m(B)}{\sqrt{2\pi}}$$ for any $\gamma \in (0,\frac{1}{2})$ and any Jordan measurable subset $B$ with $m(B)>0$. \end{proposition} For finite ergodic Markov chains, the discrete local limit theorem $(\gamma = \frac{1}{2})$ is known and due to Kolmogorov, see \cite{Kol49}. However, since there is no uniformity in the local estimates of $\mathbb{P}[N_{n,a}=k]$, our result is not a direct consequence of (and does not imply) the local limit theorem of Kolmogorov. \bigskip \section{Examples from random matrix theory}\label{sec:matrix} In this section, we examine examples stemming from or closely related to random matrix theory, and which exhibit mod-Gaussian behavior. \subsection{Number of charge one particles in a two charge system}\label{subsec:charge} Let $L,M$ be non-negative random integers and $n\in\mathbb{N}$ be a fixed natural number, such that $L+2M=2n$. We consider the two charge ensembles proposed in \cite{RSX13} and \cite{SS14}, where the particles are located on the real line, respectively on the unit circle. These models can be considered as interpolations between the classical ensembles GOE and GSE, respectively COE and CSE from random matrix theory. \medskip \paragraph{\emph{The real line.}} The system consists of $L$ particles with unit charge and $M$ particles with charge two, located on the real line at positions $\xi=(\xi_1,\ldots,\xi_L)$ and $\zeta=(\zeta_1,\ldots,\zeta_M)$. We denote by $E_{L,M}$ the total potential energy of the state $(\xi,\zeta)$. 
For this model $E_{L,M}$ is the sum of the total interaction energy between particles and of an external harmonic oscillator potential: \begin{align*} E_{L,M} &= -\sum_{1\leq i<j\leq L }\log |\xi_i-\xi_j| - 2 \sum_{i=1}^L\sum_{j=1}^M \log |\xi_i-\zeta_j|- 4\sum_{1\leq i<j\leq M }\log |\zeta_i-\zeta_j|\\ & \qquad+\sum_{i=1}^L \frac{(\xi_i)^2}{2} + 2\sum_{j=1}^M \frac{(\zeta_j)^2}{2}. \end{align*} The ensemble has population vector $(L,M)$ with probability proportional to $$X^LZ_{L,M},$$ where $X\ge 0$ is a parameter called \emph{fugacity} and $Z_{L,M}$ is given by \begin{equation*} Z_{L,M} = \frac{1}{L! M!} \int_{\mathbb{R}^L} \int_{\mathbb{R}^M} \mathrm{e}^{-E_{L,M} (\xi, \zeta)} d\mu^L(\xi) d\mu^M(\zeta), \end{equation*} $\mu^L,\ \mu^M$ being the Lebesgue measures on $\mathbb{R}^L$ and $\mathbb{R}^M$ respectively. We denote by $Z_n(X)$ the total partition function of the system, that is $$Z_n(X)=\sum_{L+2M=2n}X^L Z_{L,M}.$$ Let $L_n(\gamma)$ represent the number of charge one particles, in the scaling $X=\sqrt{2n\gamma}$ with $\gamma>0$. In this regime, the proportion of such particles is non-trivial. In \cite{RSX13}, the authors gave a representation of the total partition function $Z_n(X)$ as a product of generalized Laguerre polynomials with parameter $-\frac{1}{2}$. As a consequence, in \cite[Section 2]{DHR18}, it is shown that the normalized sequence $$\left(\frac{L_n(\gamma)-\mathbb{E}\left[L_n(\gamma)\right]}{n^{1/3}}\right)_{n\in\mathbb{N}}$$ is mod-Gaussian convergent on the complex plane with parameters $t_n=n^{1/3}\sigma_n^2(\gamma)$ and limiting function $\psi(z)=\exp\!\left(M(\gamma)\frac{z^3}{6}\right)$; here $\sigma_n^2(\gamma)=\frac{\Var\left(L_n(\gamma)\right)}{n}$ and $$M(\gamma) = \lim_{n \to \infty} \frac{\kappa^{(3)}(L_n(\gamma))}{n}.$$ The precise values of $\sigma^2(\gamma) = \lim_{n \to \infty} \sigma_n^2(\gamma)$ and $M(\gamma)$ are computed in \cite[Proposition 2.2]{DHR18}. 
On the other hand, the mod-Gaussian convergence has a zone of control of order $O\left(t_n\right)$ and index $(3,3)$. A straightforward application of Theorem \ref{thm:locallimit1} shows that for any $\delta \in (0,\frac{1}{2})$, \begin{equation*} \mathbb{P}\left[\frac{L_n(\gamma)-\mathbb{E}[L_n(\gamma)]}{n^{1/2}\,\sigma_n(\gamma)}-x\in n^{-\delta}B\right]\simeq n^{-\delta}\,\frac{\mathrm{e}^{-\frac{x^2}{2}}}{\sqrt{2\pi}}\,m(B). \end{equation*} \medskip \paragraph{\emph{The unit circle.}} In the circular version of the above ensemble introduced in \cite{SS14}, the particles are located on the unit circle instead of the real line. Unlike the previous model, the total energy of the system is given by the interaction energy between the particles, and there is no external field contributing to it: \begin{align*} E_{L,M} &= -\sum_{1\leq i<j\leq L }\log |\xi_i-\xi_j| - 2 \sum_{i=1}^L\sum_{j=1}^M \log |\xi_i-\zeta_j|- 4\sum_{1\leq i<j\leq M }\log |\zeta_i-\zeta_j|. \end{align*} Let $L_n(\rho)$ denote the number of charge one particles, in the scaling $X=2n\rho$ with $\rho>0$. Using the polynomial product structure of the partition function established by Forrester (see \cite[Section 7.10]{For10}), it is possible to prove \cite[Section 3]{DHR18} that the normalized sequence $$\left(\frac{L_n(\rho)-\mathbb{E}\left[L_n(\rho)\right]}{n^{1/3}}\right)_{n\in\mathbb{N}}$$ converges mod-Gaussian on the whole complex plane with parameters $t_n=n^{1/3}\sigma_n^2(\rho)$, limiting function $$\psi(z)=\exp \left(\frac{z^3}{6} \left(\rho\arctan\frac{1}{\rho}-\frac{\rho^4+3\rho^2}{\left(\rho^2+1\right)^2}\right) \right)$$ and with a zone of control of order $O\left(t_n\right)$ and index $(3,3)$. Again, $\sigma_n^2(\rho)=\frac{\Var\left(L_n(\rho)\right)}{n}$. 
Therefore, for any $\delta \in (0,\frac{1}{2})$, \begin{equation*} \mathbb{P}\left[\frac{L_n(\rho)-\mathbb{E}\left[L_n(\rho)\right]}{n^{1/2}\,\sigma_n(\rho)}-x\in n^{-\delta}B\right]\simeq n^{-\delta}\,\frac{\mathrm{e}^{-\frac{x^2}{2}}}{\sqrt{2\pi}}\,m(B). \end{equation*} \medskip \subsection{Determinant of the GUE}\label{subsec:gue} Let $W_n^H$ be an $n\times n$ random matrix in the Gaussian Unitary Ensemble. Denote by \begin{equation*} X_n^{H}:=\log|\det W_{n}^{H}|-\mu_n^H \end{equation*} the logarithm of the modulus of the determinant, properly centered. The centering is given by \begin{equation*} \mu_n^H=\frac{1}{2}\log2\pi-\frac{n}{2}+\frac{n}{2}\log n, \end{equation*} and corresponds (up to constant terms) to the expectation of $\log|\det W_{n}^{H}|$. The Mellin transform of this statistic has been calculated explicitly by Mehta and Normand in \cite{MN98}: $$ \mathbb{E}\!\left[|\det W_{n}^{H}|^z\right]=2^{\frac{nz}{2}}\,\prod_{k=1}^n\frac{\Gamma\!\left(\frac{z+1}{2}+\lfloor \frac{k}{2}\rfloor\right)}{\Gamma\!\left(\frac{1}{2}+\lfloor \frac{k}{2}\rfloor\right)}, $$ and it is analytic for all $z\in\mathbb{C}$ with $\mathrm{Re}(z)>-1$. Recently, the same representation has been derived by Edelman and La Croix in \cite{CE15}, by noticing that $|\det W_n^H|$ is distributed as the product of the singular values of the GUE. Relying on this explicit formula, it is possible to prove that the sequence $(X_n^H)_{n\in\mathbb{N}}$ is mod-Gaussian convergent on $D=\left(-1,+\infty\right)\times \mathrm{i}\mathbb{R}$, with parameters $t_n=\frac{1}{2}\log\frac{n}{2}$ and limiting function \begin{equation*} \psi(z)=\frac{\Gamma(\frac{1}{2})\,\left(G(\frac{1}{2})\right)^2}{\Gamma(\frac{z+1}{2})\,\left(G(\frac{z+1}{2})\right)^2}. \end{equation*} Moreover, this sequence has a zone of control of size $O(t_n)$ and index $(1,3)$. 
Hence, by Theorem \ref{thm:locallimit1}, we obtain that \begin{equation*} \mathbb{P}\left[\frac{X_n^H}{\sqrt{\frac{1}{2}\log\frac{n}{2}}}-x\in (\log n)^{-\delta}B\right]\simeq (\log n)^{-\delta}\,\frac{\mathrm{e}^{-\frac{x^2}{2}}}{\sqrt{2\pi}}\,m(B), \end{equation*} for every $\delta\in \left(0,\frac{3}{2}\right)$.\medskip \subsection{Determinants of \texorpdfstring{$\beta$}{beta}-ensembles}\label{subsec:beta} A result analogous to the previous one can be established for log-determinants of matrices in some well-known $\beta$-ensembles. Namely, let $W_{n}^{i,\beta}$ be a random matrix in the: \begin{enumerate} \item[\footnotesize$(i=L)$] Laguerre ensemble with parameters $(n,n,\beta)$, \item[\footnotesize$(i=J)$] Jacobi ensemble, with parameters $(\lfloor n\tau_1\rfloor,\lfloor n\tau_1\rfloor,\lfloor n\tau_2\rfloor,\beta)$, where $\tau_1,\,\tau_2>0$, \item[\footnotesize$(i=G)$] Uniform Gram ensemble with parameters $(n,n,\beta)$. \end{enumerate} We refer to \cite[Section 3]{DHR19} for the precise definitions. For all $i$, denote by \begin{equation*} X_n^{i,\beta}:=\log\det W_{n}^{i,\beta}-\mu_{n}^{i,\beta} \end{equation*} the logarithm of the determinant, properly centered. As in the GUE case, the centering parameters $\mu_{n}^{i,\beta}$ correspond (up to constant terms) to the expectation of the log-determinants; see \cite[Lemma 4.2]{DHR19} for their explicit expressions.\medskip For these statistics, the moment generating functions can be calculated by means of Selberg integrals. As a consequence, they are all given by a product of Gamma functions; for instance, for the $\beta$-Laguerre ensemble, $$\mathbb{E}\left[\mathrm{e}^{zX_n^{L,\beta}}\right]=\mathrm{e}^{-z\mu_{n}^{L,\beta}}2^{nz}\prod_{k=1}^n\frac{\Gamma(\frac{\beta}{2}k+z)}{\Gamma(\frac{\beta}{2}k)}. $$ Classical techniques of complex analysis enable us to find an asymptotic expansion of these product formulas as $n$ goes to infinity. 
In turn, these expansions imply mod-Gaussian convergence for all the sequences $(X_n^{i,\beta})_{n\in\mathbb{N}}$ on $$D=\left(-\frac{\beta}{2},+\infty\right)\times \mathrm{i}\mathbb{R},$$ with parameters $t_n=\frac{2}{\beta}\log n$ and a zone of control of size $O(t_n)$, with index $(1,3)$.\\ Therefore, for all $i=L,J,G$ and any $\delta\in\left(0,\frac{3}{2}\right)$, \begin{equation*} \mathbb{P}\left[\frac{X_n^{i,\beta}}{\sqrt{\frac{2}{\beta}\log n}}-x\in (\log n)^{-\delta}B\right]\simeq (\log n)^{-\delta}\,\frac{\mathrm{e}^{-\frac{x^2}{2}}}{\sqrt{2\pi}}\,m(B). \end{equation*} \medskip \subsection{Characteristic polynomial of the circular \texorpdfstring{$\beta$}{beta}-Jacobi ensemble} \label{subsec:circular} Let $W_n^{CJ,\beta}$ be a random matrix in the circular $\beta$-Jacobi ensemble of size $n$. We recall that the joint density function of the eigenangles $\left(\theta_1,\ldots,\theta_n\right)\in [0,2\pi]^n$ is proportional to $$\prod_{1\le k<l\le n}\left|\mathrm{e}^{\mathrm{i}\theta_k}-\mathrm{e}^{\mathrm{i}\theta_l}\right|^\beta\prod_{k=1}^n\left(1-\mathrm{e}^{-\mathrm{i}\theta_k}\right)^\delta\left(1-\mathrm{e}^{\mathrm{i}\theta_k}\right)^{\bar{\delta}},$$ with $\delta\in\mathbb{C}$, $\mathrm{Re} (\delta)>-\frac{1}{3}$. Denote by $$X_n^{CJ,\beta}:=\log\left|\det\left(\mathrm{Id}-W_n^{CJ,\beta}\right)\right|- \frac{\delta+\bar{\delta}}{\beta}$$ the logarithm of the modulus of the characteristic polynomial evaluated at $1$, properly shifted. Starting from the representation of the Laplace transform of $X_n^{CJ,\beta}$ computed in \cite[Formula 4.2]{BNR09}, one can establish the complex mod-Gaussian convergence of the sequence $(X_n^{CJ,\beta})_{n\in\mathbb{N}}$ on the subset $$D=\left(-\frac{\beta}{2},+\infty\right)\times \mathrm{i}\mathbb{R},$$ with parameters $t_n=\frac{1}{2\beta}\log n$ and a zone of control of size $O(t_n)$, with index $(1,3)$.
It follows then from Theorem \ref{thm:locallimit1} that for all $\delta\in\left(0,\frac{3}{2}\right)$, \begin{equation*} \mathbb{P}\left[\frac{X_n^{CJ,\beta}}{\sqrt{\frac{1}{2\beta}\log n}}-x\in (\log n)^{-\delta}\,B\right]\simeq (\log n)^{-\delta}\,\frac{\mathrm{e}^{-\frac{x^2}{2}}}{\sqrt{2\pi}}\,m(B). \end{equation*} \bigskip \section{\texorpdfstring{$\mathrm{L}^1$}{L1}-mod-\texorpdfstring{$\phi$}{phi} convergence and local limit theorems}\label{sec:l1mod} \subsection{Mod-\texorpdfstring{$\phi$}{phi} convergence in \texorpdfstring{$\mathrm{L}^1(\mathrm{i} \mathbb{R})$}{L1(iR)}}\label{subsec:l1} In all the previous examples, by using the notion of zone of control, we identified a range of scales $(t_n)^{-\delta}$ at which the stable approximation of the random variables $Y_n$ holds. However, in certain examples, one has a control over the residues $\theta_n(\xi)$ that is valid over the whole real line. This raises the question whether the theory of mod-$\phi$ convergence can be used to prove local limit theorems that hold \emph{for any} infinitesimal scale. A sufficient condition for these strong local limit theorems is the notion of mod-$\phi$ convergence in $\mathrm{L}^1(\mathrm{i} \mathbb{R})$. It is a more abstract and restrictive condition than the notion of zone of control used in Theorem \ref{thm:locallimit1}, but it yields stronger results. \begin{definition}\label{def:L1modphi} Fix a reference stable law $\phi=\phi_{c,\alpha,\beta}$. Let $(X_n)_{n\in\mathbb{N}}$ be a sequence that is mod-$\phi$ convergent on $D=\mathrm{i} \mathbb{R}$, with parameters $(t_n)_{n \in \mathbb{N}}$ and limiting function $\theta(\xi)$. 
We say that there is mod-$\phi$ convergence in $\mathrm{L}^1(\mathrm{i} \mathbb{R})$ if: \begin{itemize} \item $\theta$ and the functions $\theta_n(\xi) = \mathbb{E}[\mathrm{e}^{\mathrm{i} \xi X_n}]\,\mathrm{e}^{-t_n\eta_{c,\alpha,\beta}(\mathrm{i} \xi)}$ belong to $\mathrm{L}^1(\mathbb{R})$; \item the convergence \begin{equation*} \theta_n(\xi)\longrightarrow\theta(\xi) \end{equation*} takes place in $\mathrm{L}^1(\mathbb{R})$: $\|\theta_n -\theta\|_{\mathrm{L}^1(\mathbb{R})}\to 0$. \end{itemize} \end{definition} Roughly speaking, mod-convergence in $\mathrm{L}^1(\mathrm{i}\mathbb{R})$ is equivalent to the assumption that $\gamma=+\infty$ in the zone of control. The following theorem makes this statement more precise. \begin{theorem}\label{thm:locallimit2} Let $(X_n)_{n \in \mathbb{N}}$ be a sequence that converges mod-$\phi_{c,\alpha,\beta}$ in $\mathrm{L}^1(\mathrm{i}\mathbb{R})$, with parameters $(t_n)_{n \in \mathbb{N}}$ and limiting function $\theta$. Let $x \in \mathbb{R}$ and $B$ be a fixed Jordan measurable subset with $m(B)>0$. Then, for any sequence $s_n \to +\infty$, $$ \lim_{n \to \infty} s_n\,\,\mathbb{P}\!\!\left[Y_n-x \in \frac{1}{s_n}\,B\right] = p_{c,\alpha,\beta}(x)\,m(B),$$ where $Y_n$ is obtained from $X_n$ as in Proposition \ref{prop:convlaw}. \end{theorem} \begin{proof} For the same reasons as in the proof of Theorem \ref{thm:locallimit1}, it suffices to prove the estimate on test functions $g \in \mathscr{T}_0(\mathbb{R})$: $$\lim_{n \to \infty} s_n\,\,\mathbb{E}\!\left[g(s_n (Y_n-x))\right] = p_{c,\alpha,\beta}(x)\,\left(\int_{\mathbb{R}}g(y)\DD{y}\right).$$ By using Parseval's theorem and making the appropriate changes of variables, we get $$\mathbb{E}\!\left[g(s_n (Y_n-x))\right] = \frac{1}{2\pi\,s_n}\,\int_{\mathbb{R}} \widehat{g}\!\left(\frac{\xi}{s_n}\right)\,\theta_n\!\left(-\frac{\xi}{(t_n)^{1/\alpha}}\right)\,\mathrm{e}^{\eta(-\mathrm{i}\xi)+\mathrm{i} x \xi}\DD{\xi}.
$$ The function under the integral sign converges pointwise towards $\widehat{g}(0)\,\mathrm{e}^{\eta(-\mathrm{i} \xi)+\mathrm{i} x \xi}$, and this convergence actually occurs in $\mathrm{L}^1(\mathbb{R})$. Indeed, $$ \int_{\mathbb{R}} |\mathrm{e}^{\eta(-\mathrm{i} \xi)+\mathrm{i} x \xi}|\,\left|\widehat{g}\!\left(\frac{\xi}{s_n}\right)\,\theta_n\!\left(-\frac{\xi}{(t_n)^{1/\alpha}}\right)-\widehat{g}(0)\right|\DD{\xi} \leq A+B+C $$ with \begin{align*} A&= \int_{\mathbb{R}}|\mathrm{e}^{\eta(-\mathrm{i} \xi)}|\,\left|\widehat{g}\!\left(\frac{\xi}{s_n}\right)\right|\,\left|\theta_n\!\left(-\frac{\xi}{(t_n)^{1/\alpha}}\right)-\theta\!\left(-\frac{\xi}{(t_n)^{1/\alpha}}\right)\right|\DD{\xi};\\ B&=\int_{\mathbb{R}}|\mathrm{e}^{\eta(-\mathrm{i} \xi)}|\,\left|\widehat{g}\!\left(\frac{\xi}{s_n}\right)\right|\,\left|\theta\!\left(-\frac{\xi}{(t_n)^{1/\alpha}}\right)-1\right|\DD{\xi};\\ C&=\int_{\mathbb{R}}|\mathrm{e}^{\eta(-\mathrm{i} \xi)}|\,\left|\widehat{g}\!\left(\frac{\xi}{s_n}\right)-\widehat{g}(0)\right|\DD{\xi}. \end{align*} Since $\widehat{g}$ is bounded and $\widehat{g}(\frac{\xi}{s_n})-\widehat{g}(0)\to 0$ pointwise, one can apply the dominated convergence theorem to show that $C \to 0$. For $A$, we make another change of variables and write \begin{align*} A&=\int_{\mathbb{R}}(t_n)^{1/\alpha}\,\mathrm{e}^{-t_n |c\upsilon|^\alpha}\,\left|\widehat{g}\!\left(\frac{\upsilon}{s_n\,(t_n)^{-1/\alpha}}\right)\right|\,\left|\theta_n(-\upsilon)-\theta(-\upsilon)\right|\,d\upsilon\\ &\leq \|\widehat{g}\|_\infty\,\int_{\mathbb{R}}(t_n)^{1/\alpha}\,\mathrm{e}^{-t_n |c\upsilon|^\alpha}\,\left|\theta_n(-\upsilon)-\theta(-\upsilon)\right|\,d\upsilon. \end{align*} Fix $\varepsilon>0$ and a compact interval $[-C,C]$. Since $\theta_n$ converges locally uniformly towards $\theta$, for $n$ large enough we have $|\theta_n(-\upsilon)-\theta(-\upsilon)| \leq \varepsilon$ for any $\upsilon \in [-C,C]$.
The part of $A$ corresponding to this interval is therefore smaller than $$\varepsilon\,\|\widehat{g}\|_\infty\,\int_{-C}^C(t_n)^{1/\alpha}\,\mathrm{e}^{-t_n |c\upsilon|^\alpha}\,d\upsilon \leq \varepsilon \,\|\widehat{g}\|_\infty\,\int_{\mathbb{R}}\mathrm{e}^{ -|c\xi|^\alpha}\DD{\xi},$$ that is to say a constant times $\varepsilon$. On the other hand, for $|\upsilon| \geq C$, $$(t_n)^{1/\alpha}\,\mathrm{e}^{-t_n |c\upsilon|^\alpha} \leq (t_n)^{1/\alpha}\,\mathrm{e}^{-t_n (cC)^\alpha} \to 0,$$ and the part of $A$ corresponding to $\mathbb{R} \setminus [-C,C]$ is smaller than $$\|\widehat{g}\|_\infty\,(t_n)^{1/\alpha}\,\mathrm{e}^{-t_n (cC)^\alpha} \|\theta_n-\theta\|_{\mathrm{L}^1} \to 0 $$ since $\|\theta_n-\theta\|_{\mathrm{L}^1}$ goes to zero. Hence, $A$ goes to zero. The same argument allows one to show that $B \to 0$, using the continuity of $\theta$ at zero instead of the convergence $\theta_n \to \theta$ for the integral over an interval $[-C,C]$.\medskip As a consequence of the convergence in $\mathrm{L}^1$, one can now write $$\mathbb{E}\!\left[g(s_n (Y_n-x))\right] \simeq \frac{1}{2\pi\,s_n}\,\int_{\mathbb{R}} \widehat{g}(0)\,\mathrm{e}^{\eta(-\mathrm{i}\xi)+\mathrm{i} x\xi}\DD{\xi} = \frac{1}{s_n}\,p_{c,\alpha,\beta}(x)\,\left(\int_{\mathbb{R}}g(y)\DD{y}\right),$$ which is what we wanted to prove. \end{proof} \medskip \subsection{The winding number of the planar Brownian motion}\label{subsec:brownian} As an application of our theory of $\mathrm{L}^1$-mod-$\phi$ convergence, consider a standard planar Brownian motion $\left(Z_t\right)_{t\ge 0}$ starting at the point $(1,0)$. With probability 1, $Z_t$ does not visit the origin, so one can write $Z_t=R_t\, \mathrm{e}^{\mathrm{i}\varphi_t}$ with continuous functions $t\mapsto R_t$ and $t\mapsto \varphi_t$, and with $\varphi_0=0$. The process $\left(\varphi_t\right)_{t\ge 0}$ is called the \emph{winding number} of the Brownian motion around the origin.
Its Fourier transform has been calculated by Spitzer (see \cite{Spi58}), in terms of the modified Bessel function $I_\nu(z)=\sum_{k\ge 0}\frac{1}{k!\,\Gamma(\nu+k+1)}\left(\frac{z}{2}\right)^{\nu+2k}$. Thus, \begin{equation*} \mathbb{E}\!\left[\mathrm{e}^{\mathrm{i}\xi\varphi_t}\right]=\sqrt{\frac{\pi}{8t}}\,\mathrm{e}^{-\frac{1}{4t}}\left(I_{\frac{|\xi|-1}{2}}\left(\frac{1}{4t}\right)+I_{\frac{|\xi|+1}{2}}\left(\frac{1}{4t}\right)\right). \end{equation*} As a consequence, in \cite{FMN17} it is shown that $\left(\varphi_t\right)_{t\ge 0}$ converges mod-Cauchy with parameters $\log \sqrt{8t}$, limiting function $$\theta(\xi)=\frac{\sqrt{\pi}}{\Gamma (\frac{|\xi|+1}{2})}$$ and with a control of index $(1,1)$ \emph{over the whole real line} $\mathbb{R}$. Since $\frac{1}{\omega-\alpha} = +\infty$, this means that one can consider zones of control $[-K(t_n)^\gamma,K(t_n)^\gamma]$ with $\gamma$ as large as wanted. In the sequel, we shall rework this example by using the notion of mod-$\phi$ convergence in $\mathrm{L}^1(\mathrm{i}\mathbb{R})$. \medskip Notice first that $$\Gamma\left(\frac{|\xi|+1}{2}\right) \geq \frac{2}{1+|\xi|}\,\,\Gamma\!\left(1+\frac{|\xi|}{2}\right) \geq \frac{2}{1+|\xi|}\,\left(\frac{|\xi|}{2\mathrm{e}}\right)^{\frac{|\xi|}{2}},$$ so that the limiting function $\theta(\xi)=\sqrt{\pi}\,(\Gamma (\frac{|\xi|+1}{2}))^{-1}$ is in $\mathrm{L}^1(\mathbb{R})$. So are the functions $\theta_t$. 
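Spitzer's formula can be checked numerically straight from the series definition of $I_\nu$ quoted above. A minimal sketch (the function names are ad hoc, and the series is truncated after a fixed number of terms, which suffices for small arguments): at $\xi=0$ the right-hand side must equal $1$ for every $t$, since $I_{-1/2}(z)+I_{1/2}(z)=\sqrt{2/(\pi z)}\,\mathrm{e}^{z}$.

```python
import math

def bessel_i(nu, z, terms=30):
    """Truncated series I_nu(z) = sum_k (z/2)^(nu+2k) / (k! * Gamma(nu+k+1))."""
    return sum((z / 2) ** (nu + 2 * k) / (math.factorial(k) * math.gamma(nu + k + 1))
               for k in range(terms))

def spitzer_cf(xi, t):
    """E[exp(i*xi*phi_t)] via Spitzer's formula (real-valued for real xi)."""
    z = 1 / (4 * t)
    return math.sqrt(math.pi / (8 * t)) * math.exp(-z) * (
        bessel_i((abs(xi) - 1) / 2, z) + bessel_i((abs(xi) + 1) / 2, z))

# Normalisation check: the characteristic function equals 1 at xi = 0.
print(spitzer_cf(0.0, 1.0))  # ~ 1.0
```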
It remains to check that the convergence $\theta_t\rightarrow \theta$ happens in $\mathrm{L}^1(\mathbb{R})$: \begin{align*} \|\theta_t-\theta\|_{\mathrm{L}^1} & \leq \left\|\sum_{k\ge 0}\left(\frac{\sqrt{\pi}\mathrm{e}^{-\frac{1}{4t}}}{k!\Gamma(k+\frac{|\xi|+1}{2})}\left(\frac{1}{8t}\right)^{2k}+\frac{\sqrt{\pi}\mathrm{e}^{-\frac{1}{4t}}}{k!\Gamma(k+\frac{|\xi|+3}{2})}\left(\frac{1}{8t}\right)^{2k+1}\right)-\frac{\sqrt{\pi}}{\Gamma(\frac{|\xi|+1}{2})}\right\|_{\mathrm{L}^1}\\ &\leq \|\theta\|_{\mathrm{L}^1}\left(1-\mathrm{e}^{-\frac{1}{4t}}\right)+ \sqrt{\pi}\,\mathrm{e}^{-\frac{1}{4t}} \left( \sum_{k=1}^\infty \frac{1}{k!}\left(\frac{1}{8t}\right)^{\!2k}\left\|\frac{1}{\Gamma(k+\frac{|\xi|+1}{2})}\right\|_{\mathrm{L}^1} \right)\\ &\quad + \sqrt{\pi}\,\mathrm{e}^{-\frac{1}{4t}} \left( \sum_{k=0}^\infty \frac{1}{k!}\left(\frac{1}{8t}\right)^{\!2k+1}\left\|\frac{1}{\Gamma(k+\frac{|\xi|+3}{2})}\right\|_{\mathrm{L}^1} \right)\\ &\leq \|\theta\|_{\mathrm{L}^1}\left(\left(1-\mathrm{e}^{-\frac{1}{4t}}\right) + \mathrm{e}^{-\frac{1}{4t}} \left(\mathrm{e}^{\left(\frac{1}{8t}\right)^2}-1\right) + \frac{1}{8t}\, \mathrm{e}^{-\frac{1}{4t}+\left(\frac{1}{8t}\right)^2}\right) \stackrel{t\to \infty}{\longrightarrow} 0. \end{align*} Hence, $(\varphi_t)_{t \in \mathbb{R}_+}$ converges mod-Cauchy in $\mathrm{L}^1(\mathrm{i}\mathbb{R})$, and for any family $(s_t)_{t \in \mathbb{R}_+}$ growing to infinity, $$\lim_{t \to +\infty} s_t\,\,\mathbb{P}\!\left[\frac{\varphi_t}{\log \sqrt{8t}} - x \in \frac{1}{s_t}\,B\right] = \frac{m(B)}{\pi(1+x^2)}.$$ \medskip \subsection{The magnetisation of the Curie--Weiss model}\label{subsec:curieweiss} In \cite{MN15}, another notion of mod-$\phi$ convergence in $\mathrm{L}^1$ was introduced, in connection with models from statistical mechanics.
\begin{definition}\label{def:L1R} Let $(X_n)_{n\in \mathbb{N}}$ be a sequence of random variables that is mod-Gaussian convergent on $D=\mathbb{R}$ (beware that the domain here is $\mathbb{R}$ and not $\mathrm{i} \mathbb{R}$), with parameters $(t_n)_{n \in \mathbb{N}}$ and limiting function $\psi(x)$. We say that there is mod-Gaussian convergence in $\mathrm{L}^{1}(\mathbb{R})$ if: \begin{itemize} \item $\psi$ and the functions $\psi_n(x) = \mathbb{E}[\mathrm{e}^{xX_n}]\,\mathrm{e}^{-\frac{t_nx^2}{2}}$ belong to $\mathrm{L}^1(\mathbb{R})$; \item the convergence \begin{equation*} \psi_n(x)\longrightarrow \psi(x) \end{equation*} occurs in $\mathrm{L}^1(\mathbb{R})$: $\|\psi_n -\psi\|_{\mathrm{L}^1(\mathbb{R})}\to 0$. \end{itemize} \end{definition} \noindent This definition mimics Definition \ref{def:L1modphi} with a domain $\mathbb{R}$ instead of $\mathrm{i}\mathbb{R}$. This framework allows one to prove the convergence in distribution for sequences which are obtained from $(X_n)_{n\in\mathbb{N}}$ by an exponential change of measure. We recall this result without proof (see \cite[Theorem 6]{MN15}). \begin{theorem}\label{thm:changeofmeasure} Let $(X_n)_{n \in \mathbb{N}}$ be a sequence of real-valued random variables that converges mod-Gaussian in $\mathrm{L}^1(\mathbb{R})$, with parameters $(t_n)_{n \in \mathbb{N}}$ and limit $\psi$. Denote by $(Y_n)_{n \in \mathbb{N}}$ the sequence obtained by the change of measure \begin{equation*} \mathbb{P}_{Y_n}[\!\DD{x}] = \frac{\mathrm{e}^{\frac{x^2}{2t_n}} }{\mathbb{E}\!\left[\mathrm{e}^{\frac{(X_n)^2}{2t_n}}\right]}\,\mathbb{P}_{X_n}[\!\DD{x}]. \end{equation*} Then $(\frac{Y_n}{t_n})_{n \in \mathbb{N}}$ converges in law towards a random variable $W_\infty$ with density $\frac{\psi(x)\DD{x}}{\int_{\mathbb{R}}\psi(x)\DD{x}}$. \end{theorem} In this setting and with mild additional hypotheses, one can identify the infinitesimal scales at which a local limit theorem holds for $(Y_n)_{n \in \mathbb{N}}$.
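The exponential change of measure of Theorem \ref{thm:changeofmeasure} can be made completely explicit for spin systems. A sketch (anticipating the Curie--Weiss example of the next subsection, and working with exact finite-$n$ distributions rather than asymptotics): tilting the law of $X_n=\sum_i\sigma_i/n^{1/4}$, for i.i.d.\ uniform signs $\sigma_i$, by $\mathrm{e}^{x^2/2t_n}$ with $t_n=\sqrt{n}$ reproduces the Curie--Weiss weight $\mathrm{e}^{M_n^2/2n}$ exactly, since $x^2/2t_n=(2k-n)^2/2n$ when $M_n=2k-n$:

```python
import math
from math import comb

n = 4
t_n = math.sqrt(n)
# Law of M_n, a sum of n i.i.d. uniform +-1 spins: P[M_n = 2k - n] = C(n, k) / 2^n.
weights_X = [comb(n, k) / 2 ** n for k in range(n + 1)]
# Tilt by e^{x^2/(2 t_n)} with x = (2k - n)/n^{1/4}, then renormalise.
tilted = [w * math.exp(((2 * k - n) / n ** 0.25) ** 2 / (2 * t_n))
          for k, w in enumerate(weights_X)]
law_Y = [w / sum(tilted) for w in tilted]
# Curie-Weiss weights: proportional to C(n, k) * e^{(2k - n)^2 / (2n)}.
cw = [comb(n, k) * math.exp((2 * k - n) ** 2 / (2 * n)) for k in range(n + 1)]
law_CW = [w / sum(cw) for w in cw]
print(max(abs(a - b) for a, b in zip(law_Y, law_CW)))  # ~ 0: the two laws coincide
```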
The precise assumptions are the following: \begin{assumption} Let $(X_n)_{n \in \mathbb{N}}$ be a sequence of real-valued random variables. We assume that: \begin{enumerate}[label=(A\arabic*)] \item\label{hyp:change1} The sequence $(X_n)_{n \in \mathbb{N}}$ is mod-Gaussian convergent in $\mathrm{L}^1(\mathbb{R})$, with parameters $(t_n)_{n \in \mathbb{N}}$ and limit $\psi$. \item\label{hyp:change3} For every $M>0$, $$ \sup_{n \in \mathbb{N}}\sup_{m \in [-M,M]}\left(\int_{\mathbb{R}} |\psi_n(x+\mathrm{i} m)|\DD{x} \right)<+\infty.$$ We denote by $C(M)$ the constant in this bound. \end{enumerate} \end{assumption} \begin{theorem}\label{thm:locallimitchange} If Conditions \ref{hyp:change1} and \ref{hyp:change3} are satisfied, then for any $\varepsilon \in (0,1]$, $$\lim_{n \to \infty} (t_n)^{\varepsilon} \,\,\mathbb{P}\!\left[\frac{Y_n}{t_n}-x\in \frac{1}{(t_n)^{\varepsilon}}\,B\right] = \frac{\psi(x)\,m(B)}{\int_{\mathbb{R}}\psi(y)\DD{y}},$$ where $Y_n$ is obtained from $X_n$ by the exponential change of measure of Theorem \ref{thm:changeofmeasure}. \end{theorem} \begin{lemma}\label{lem:exponentialdecay} If $(X_n)_{n \in \mathbb{N}}$ satisfies Conditions \ref{hyp:change1} and \ref{hyp:change3}, then $$|\widehat{\psi_n}(\xi)| \leq 2C(M)\,\mathrm{e}^{-M|\xi|}\quad\text{for any $\xi\in \mathbb{R}$ and any $M>0$}.$$ \end{lemma} \begin{proof} This is the content of \cite[p.~132]{RS75}, which we reproduce here for the convenience of the reader. Set $\psi_{n,M}(x)=\psi_n(x+\mathrm{i} M)$.
Applying the Cauchy integral theorem, \begin{align*} \widehat{\psi_{n,M}}(\xi) &= \int_{\mathbb{R}}\psi_n(x+\mathrm{i} M)\,\mathrm{e}^{\mathrm{i} x \xi} \DD{x} = \left(\int_{\mathbb{R}}\psi_n(x+\mathrm{i} M)\,\mathrm{e}^{\mathrm{i} (x + \mathrm{i} M) \xi} \DD{x}\right)\mathrm{e}^{M\xi} \\ &= \left(\int_{\mathbb{R}}\psi_n(x)\,\mathrm{e}^{\mathrm{i} x\xi} \DD{x}\right)\mathrm{e}^{M\xi} = \widehat{\psi_n}(\xi)\,\mathrm{e}^{M\xi} \end{align*} by analyticity of the function $\psi_n(z)\,\mathrm{e}^{\mathrm{i} z\xi}$, and existence and boundedness of all the integrals $ \int_{\mathbb{R}}\psi_n(x+\mathrm{i} m)\,\mathrm{e}^{\mathrm{i} (x+\mathrm{i} m)\xi}\DD{x}$. Therefore, \begin{equation*} |\widehat{\psi_n}(\xi)|\,\mathrm{e}^{M|\xi|} \leq |\widehat{\psi_n}(\xi)|\left(\mathrm{e}^{M\xi}+\mathrm{e}^{-M\xi}\right)\leq |\widehat{\psi_{n,M}}(\xi)| +|\widehat{\psi_{n,-M}}(\xi)| \leq 2C(M).\qedhere \end{equation*} \end{proof} \medskip \begin{proof}[Proof of Theorem \ref{thm:locallimitchange}] In the sequel we set $I_n := \int_{\mathbb{R}} \psi_n(x)\DD{x}$ and $I_\infty := \int_{\mathbb{R}} \psi(x) \DD{x}$. 
As usual, it is sufficient to prove the estimate with test functions $g \in \mathscr{T}_0(\mathbb{R})$: $$\lim_{n \to \infty} (t_n)^{\varepsilon} \,\,\mathbb{E}\!\left[g\!\left((t_n)^{\varepsilon}\,\left(\frac{Y_n}{t_n}-x\right)\right)\right] = \frac{\psi(x)}{I_\infty}\,\left(\int_{\mathbb{R}} g(y)\DD{y}\right).$$ We compute, with $\widehat{g}$ compactly supported on $[-M,M]$ and $g_n(y)=g((t_n)^{\varepsilon}(y-x))$: \begin{align*} \mathbb{E}\!\left[g\!\left((t_n)^{\varepsilon}\,\left(\frac{Y_n}{t_n}-x\right)\right)\right] &= \mathbb{E}\!\left[g_n\!\left(\frac{Y_n}{t_n}\right)\right] = \frac{1}{2\pi I_n} \int_{\mathbb{R}} \widehat{g_n}(\xi) \,\mathrm{e}^{\frac{\xi^2}{2t_n}}\, \widehat{\psi_n}(-\xi) \DD{\xi} \\ &= \frac{1}{2\pi I_n\,(t_n)^{\varepsilon}} \int_{\mathbb{R}} \widehat{g}\left(\frac{\xi}{(t_n)^{\varepsilon}}\right) \,\mathrm{e}^{\frac{\xi^2}{2t_n}+\mathrm{i} x \xi}\,\widehat{\psi_n}(-\xi)\DD{\xi}. \end{align*} In the integral, $\widehat{g}(\frac{\xi}{(t_n)^{\varepsilon}}) \,\mathrm{e}^{\frac{\xi^2}{2t_n}+\mathrm{i} x \xi}\,\widehat{\psi_n}(-\xi)$ converges pointwise to $\widehat{g}(0)\,\mathrm{e}^{\mathrm{i} x \xi}\,\widehat{\psi}(-\xi)$. Moreover, it is dominated by $$2\,\|\widehat{g}\|_\infty\,C(M)\,\mathrm{e}^{-M\frac{|\xi|}{2}} $$ by Lemma \ref{lem:exponentialdecay}, and using the fact that $\varepsilon\leq 1$.
Hence, by dominated convergence, \begin{align*} \lim_{n \to \infty} (t_n)^{\varepsilon} \,\mathbb{E}\!\left[g\!\left((t_n)^{\varepsilon}\,\left(\frac{Y_n}{t_n}-x\right)\right)\right] &= \frac{\widehat{g}(0)}{2\pi I_\infty} \left(\int_{\mathbb{R}} \mathrm{e}^{\mathrm{i} x \xi} \widehat{\psi}(-\xi)\DD{\xi}\right) \\ &= \frac{\psi(x)}{I_\infty}\,\left(\int_{\mathbb{R}} g(y)\DD{y}\right).\qedhere \end{align*} \end{proof} \begin{example} The Curie--Weiss model at critical temperature is the probability law on spin configurations $\sigma=\left(\sigma_i\right)_{i\in \lle 1,n\rre}\in\{\pm 1\}^n$ given by \begin{equation*} \mathbb{CW}(\sigma)=\frac{\mathrm{e}^{\frac{1}{2n}\left(\sum_{i=1}^n\sigma_i\right)^2}}{\sum_{\sigma} \mathrm{e}^{\frac{1}{2n}\left(\sum_{i=1}^n\sigma_i\right)^2}}. \end{equation*} The random quantity $M_n:=\sum_{i=1}^n\sigma_i$ under the law $\mathbb{CW}$ is the \emph{total magnetization} of the Curie--Weiss model.\medskip One can interpret $M_n$ in the mod-Gaussian convergence setting. Namely, consider a sequence of i.i.d.~Bernoulli random variables $\left(\sigma_i\right)_{i\in\mathbb{N}}$ with $$\mathbb{P}\left[\sigma_i=1\right]=1-\mathbb{P}\left[\sigma_i=-1\right]=\frac{1}{2}.$$ Then (see \cite[Theorem 8]{MN15}), $X_n=\frac{\sum_{i=1}^n \sigma_i}{n^{1/4}}$ is mod-Gaussian convergent in $\mathrm{L}^1(\mathbb{R})$ with parameters $t_n=\sqrt{n}$ and limiting function $\psi(x)=\exp(-\frac{x^4}{12})$.
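The mod-Gaussian residues in this example are explicit, so the convergence can be observed directly. A minimal numerical sketch (the closed form $\mathbb{E}[\mathrm{e}^{xX_n}]=\cosh(x/n^{1/4})^n$ follows from independence of the spins; function names are ad hoc):

```python
import math

def psi_n(x, n):
    """psi_n(x) = E[e^{x X_n}] e^{-t_n x^2/2}, with X_n = (sum of spins)/n^{1/4}
    and t_n = sqrt(n), so E[e^{x X_n}] = cosh(x/n^{1/4})^n."""
    return math.exp(-math.sqrt(n) * x ** 2 / 2) * math.cosh(x / n ** 0.25) ** n

def psi(x):
    """Limiting function exp(-x^4/12)."""
    return math.exp(-x ** 4 / 12)

for n in (10 ** 2, 10 ** 4, 10 ** 6):
    print(n, abs(psi_n(1.0, n) - psi(1.0)))  # the gap shrinks as n grows
```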
The sequence $Y_n$ obtained by the exponential change of measure of Theorem \ref{thm:changeofmeasure} has the same distribution as the rescaled total magnetization in the Curie--Weiss model, that is $$Y_n\stackrel{\mathrm{law}}{=}\frac{M_n}{n^{1/4}}.$$ Note that, in this particular case, $$\psi_n(x)=\mathrm{e}^{-\sqrt{n}\frac{x^2}{2}}\left(\cosh\left(\frac{x}{n^{1/4}}\right)\right)^n,\qquad \psi(x)=\mathrm{e}^{-\frac{x^4}{12}}.$$ From Proposition 15 in \cite{MN15}, we have that for all $M>0$ $$\sup_{n\in\mathbb{N}}\sup_{m\in[-M,M]}\left(\int_\mathbb{R}|\psi_n(x+\mathrm{i} m)| \DD{x}\right)\lesssim C(M)$$ with $$ C(M)=\mathrm{e}^{\frac{13M^4}{12}}\left( 2\sqrt{3}M+I_\infty\right),$$ and where $\lesssim$ means that the inequality holds up to a multiplicative constant $(1+\varepsilon)$, with $\varepsilon>0$, and for $n$ big enough. Therefore, Conditions \ref{hyp:change1} and \ref{hyp:change3} are verified, and we can deduce from Theorem \ref{thm:locallimitchange} the following local limit theorem: for any $\varepsilon\in (0,\frac{1}{2}]$, $$\lim_{n \to \infty} n^{\varepsilon}\,\,\mathbb{P}\!\left[\frac{M_n}{n^{3/4}}-x\in \frac{B}{n^{\varepsilon}}\right] = \frac{\mathrm{e}^{-\frac{x^4}{12}}}{\int_{\mathbb{R}}\mathrm{e}^{-\frac{y^4}{12}}\DD{y}}\,m(B)$$ for any Jordan measurable subset $B$ with $m(B)>0$. This improves on \cite[Theorem 22]{MN15}, which only dealt with the case $x=0$ and $\varepsilon = \frac{1}{2}$. In the same setting, one can also show that the Kolmogorov distance between $\frac{Y_n}{t_n}$ and its limit in law $W_\infty$ is $O(\|\psi_n-\psi\|_{\mathrm{L}^1(\mathbb{R})}+\frac{1}{t_n})$; see Theorem 21 in \emph{loc.~cit.} \end{example} \bibliographystyle{alpha}
\section{Introduction}\label{Section:Intro} In the past decade, the theory of operator systems has drawn a fair amount of attention in non-commutative functional analysis and quantum information theory. Among some of these works, the matrix-ordered duals of operator systems play an important role in tensor product theory \cite{FP, Ka, KPTT1}, as well as quantum graph theory \cite{FKPT, PT2}. As a result of Choi-Effros abstract characterization of operator systems \cite[Theorem 4.4]{CE}, the matrix-ordered dual $S'$ of a finite-dimensional operator system $S$ remains an operator system with a suitable choice of order unit. \begin{thm}[Choi-Effros \cite{CE}]\label{DualFiniteDimension} Let $S$ be a finite-dimensional operator system. Then \begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=iii, leftmargin=*] \item there exists a faithful state $f \colon S \to \bb{C}$; and \item any faithful functional $f$ is an Archimedean matrix order unit for $S'$. \end{enumerate} Consequently, $S'$ with any faithful state is an operator system. \end{thm} For instance, if $S \subset M_n$ is an operator system, then $S'$ with the trace functional is an operator system. In general, when $\dim(S) = \infty$, it is not clear whether $S'$ possesses an Archimedean matrix order unit. A natural attempt to this question is approximation by inductive sequence of finite-dimensional operator systems. Inductive limits of complete operator systems were introduced by Kirchberg in \cite{K1995CAR}; his construction relies on the norm structure. Recently, inductive limits of (non-complete) operator systems have also been studied \cite{Li2017inductive, LK2016inductive}, and a systematic investigation was started in \cite{MT}. In particular, in \cite[\S 3.2]{MT} Mawhinney and Todorov showed that every inductive sequence of operator systems, via duality, induces a projective sequence of the corresponding state spaces, whose projective limit is homeomorphic to the state space of its inductive limit.
Motivated by their work, this paper aims to provide a construction of projective limits in the categories of AOU spaces and operator systems. We are also interested in the duality between inductive and projective limits when the dual objects remain in the same categories. As a main application, we generalize Theorem \ref{DualFiniteDimension} to separable operator systems; see Theorem \ref{DualSeparable}. The paper is organized as follows. We begin with the preliminaries in \S \ref{Section:Preliminaries}. These include the basics of AOU spaces, operator systems, and inductive limits in these categories developed in \cite{PT, PTT, MT} respectively. In \S \ref{Section:Projective limits}, we construct projective limits in the categories of AOU spaces and operator systems using their order structures and the corresponding order norms. In \S \ref{Section:Duality injective}, we then show that inductive and projective limits are in duality, provided the dual objects remain in the same categories and the maps are unital complete order embeddings. In \S \ref{Section:dualityMain}, we show the existence of a faithful state on separable operator systems, which allows us to generalize Theorem \ref{DualFiniteDimension} using inductive and projective limits. \section{Preliminaries}\label{Section:Preliminaries} We outline the basics of Archimedean order unit vector spaces and operator systems developed by Paulsen and Tomforde in \cite{PT}. We also summarize some results of \cite{MT} that will be used in \S \ref{Section:Duality injective}. \subsection{AOU spaces}\label{AOUspaces} A \textit{$\ast$-vector space} $V$ is a complex vector space equipped with an involution $\ast$. Elements in the real subspace $V_h = \{v \in V \colon v^* = v \}$ are called Hermitian or self-adjoint. Note that for $v \in V$, $v = \Re(v) + i \Im(v)$, where $\Re(v) = \frac{1}{2}(v+v^*)$ and $\Im(v) = \frac{1}{2i} (v - v^*)$ are Hermitian.
An \textit{ordered $\ast$-vector space} is a pair $(V, V^+)$, where $V$ is a $\ast$-vector space and $V^+$ is a proper cone in $V_h$. The cone induces a natural partial ordering on $V_h$, i.e. $v \leq w$ if and only if $w - v \in V^+$. Given an ordered $\ast$-vector space $(V, V^+)$, an element $e \in V^+$ is called an \textit{order unit} if for each $v \in V_h$, there exists $r \geq 0$ such that $v \leq re$. The triple $(V, V^+, e)$ is called an \textit{order unit space}. The order unit $e$ is called \textit{Archimedean}, provided that for all $v \in V_h$, \begin{equation*} \forall r > 0, re + v \in V^+ \Longrightarrow v \in V^+. \end{equation*} In this case, the triple $(V, V^+, e)$ is called an \textit{Archimedean order unit space}, or an AOU space for short. We often denote it by $(V, e)$ or simply $V$ whenever the context is clear. Given two AOU spaces $(V, V^+, e_V)$ and $(W, W^+, e_W)$, a linear map $\phi \colon V \to W$ is called \textit{positive} if $\phi(V^+) \subset W^+$ and \textit{unital} if $\phi(e_V) = e_W$. It is an \textit{order isomorphism} provided it is bijective and $\phi(v) \in W^+$ if and only if $v\in V^+$. A \textit{state} on $V$ is a unital positive functional $f \colon V \to \mathbb{C}$. We write $\mathfrak{S}(V)$ for the set of states on $V$ and call it the \textit{state space} of $V$. Note that $\mathfrak{S}(V)$ is a cone in the algebraic dual of $V$. Given an order unit space $(V, V^+, e)$, there is a seminorm on $V_h$ given by \begin{equation}\label{OrderNorm} || v ||_h = \inf \{ r > 0 \colon - re \leq v \leq re \}. \end{equation} We call $|| \cdot ||_h$ the \textit{order seminorm} on $V_h$ determined by $e$. By \cite[Theorem 2.30]{PT}, $e$ is Archimedean if and only if $V^+$ is closed in $V_h$ in the order topology induced by $|| \cdot ||_h$. In this case, by \cite[Proposition 2.23]{PT} $||\cdot||_h$ is a norm on $V_h$, and we call $|| \cdot ||_h$ the \textit{order norm} determined by $e$ on $V_h$.
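In the concrete AOU space $((M_2)_h, M_2^+, I_2)$ the order norm (\ref{OrderNorm}) admits a closed form: $-rI \leq v \leq rI$ exactly when the spectrum of $v$ lies in $[-r, r]$, so $\|v\|_h$ is the spectral radius $\max_i |\lambda_i(v)|$. A small sketch (pure Python; the helper name is hypothetical):

```python
import math

def order_norm_herm_2x2(a, b, c):
    """Order norm of the Hermitian matrix [[a, b], [conj(b), c]] in (M_2, M_2^+, I_2):
    inf{r > 0 : -r I <= v <= r I} = max |eigenvalue|  (a, c real; b may be complex)."""
    mean = (a + c) / 2
    rad = math.hypot((a - c) / 2, abs(b))  # eigenvalues are mean +- rad
    return max(abs(mean + rad), abs(mean - rad))

print(order_norm_herm_2x2(2.0, 0.0, -3.0))  # 3.0: eigenvalues are 2 and -3
```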
A \textit{$\ast$-seminorm} $|| \cdot ||$ on $V$ is a seminorm such that $||v^*|| = || v ||$; it is called an \textit{order seminorm} if $|| v || = || v ||_h$ for every $v \in V_h$. In \cite{PT}, Paulsen and Tomforde introduced the \textit{minimal order seminorm} \begin{equation} || v ||_m := \sup \{|f(v)| \colon f \in \mathfrak{S}(V)\}, \end{equation} and the \textit{maximal order seminorm} \begin{equation} || v ||_M := \inf \{ \sum_{i=1}^n | \lambda_i | ||v_i||_h \colon v = \sum_{i=1}^n \lambda_i v_i, \; \lambda_i \in \mathbb{C}, v_i \in V_h\}; \end{equation} and $|| \cdot ||_m \leq || \cdot || \leq || \cdot ||_M \leq 2|| \cdot ||_m$ for all order seminorms \cite[Proposition 4.9]{PT}. Moreover, if $(V, V^+, e)$ is an AOU space, then every order seminorm is an order norm. The following proposition is part of \cite[Theorem 4.22]{PT}. \begin{prop}\label{upContractive} A unital linear map $\phi$ between AOU spaces is positive if and only if it has norm one with respect to the minimal order seminorms of both spaces. \end{prop} We remark that the original statement also takes into account the \textit{decomposition order seminorm}, which is not used in this paper. We have a partial converse for the maximal order seminorms. \begin{lemma} Let $\phi \colon V \to W$ be a unital positive map between real AOU spaces. Then $\phi$ is contractive with respect to the order seminorms. \end{lemma} \begin{proof} For each $v \in V$ and $r > || v ||$, we have $re_V \pm v \in V^+$, so $r e_W \pm \phi(v) = \phi( re_V \pm v ) \geq 0$. By definition of the order seminorm, $|| \phi(v) || \leq r$; letting $r \downarrow || v ||$ gives $|| \phi(v) || \leq || v ||$. \end{proof} \begin{prop} Let $\phi \colon V \to W$ be a unital positive map between AOU spaces. Then $\phi$ is contractive with respect to the maximal order norms on $V$ and $W$; in fact, $|| \phi ||_M := \sup \{ ||\phi(v)||_M \colon || v ||_M \leq 1 \} = 1$. \end{prop} \begin{proof} Let $v \in V$ and write $v = \sum_{i=1}^n \lambda_i v_i$, where $\lambda_i \in \mathbb{C}$ and $v_i \in V_h$.
Then \begin{align*} || \phi(v) ||_M &\leq \sum_{i=1}^n | \lambda_i | || \phi(v_i) ||_h \leq \sum_{i=1}^n |\lambda_i| || v_i ||_h, \end{align*} where the last inequality follows from the previous lemma. By taking the infimum over all such representations $v = \sum_i \lambda_i v_i$, we deduce that $|| \phi(v) ||_M \leq || v ||_M$. Also, $|| \phi(e_V) ||_M = || e_W ||_M = 1$, hence $|| \phi ||_M = 1$. \end{proof} The partial ordering on the order unit space $(V, V^+, e)$ gives rise to an \textit{order topology}, which by \cite[Proposition 4.9]{PT}, is equivalent to the \textit{seminorm topology} induced by any of the order seminorms. Moreover, the subspace topology on $V_h$ is equivalent to the topology induced by $|| \cdot ||_h$ on $V_h$. By Proposition \ref{upContractive}, a unital positive map $\phi \colon V \to W$ between AOU spaces is continuous with respect to the order topology. We denote by $V'$ the space of continuous linear functionals in the order topology. We let $\phi' \colon W' \to V'$, $\phi'(f) = f \circ \phi$, be the \textit{dual map} or \textit{adjoint} of $\phi$. When $V$ is an AOU space, $V'$ is the dual normed space with respect to any of the order norms on $V$, hence it is a Banach space. We equip $V'$ with the weak*-topology generated by the order norm topology on $V$. By \cite[Theorem 5.2]{PT}, the state space $\mathfrak{S}(V)$ is a compact cone that spans $V'$. \begin{defn}[Ordered dual] Given an AOU space $(V, V^+, e)$, we define an involution on $V'$ by $f^*(v) := \overline{f(v)}$. We equip $V'$ with the natural order $f \in (V')^+$ if and only if $f$ is a positive linear functional. We call the ordered $\ast$-vector space $(V', (V')^+)$ the \textbf{ordered dual} of $V$ and denote it by $V'$. Note that the ordered dual of an AOU space need not be an AOU space.
\end{defn} We denote by \textbf{OU} the category whose objects are order unit spaces with morphisms being unital positive maps, and by \textbf{AOU} the category whose objects are AOU spaces with the same morphisms. The process of Archimedeanization is a functor from \textbf{OU} to \textbf{AOU} by forming some quotient of $V$ and taking closure of $V^+$, see \cite[\S 3.2]{PT}. \subsection{Operator Systems}\label{OS} Given a $\ast$-vector space $S$, for each $n \in \mathbb{N}$, we identify the vector space tensor product $M_n \otimes S = M_n(S)$, whose elements are $n$ by $n$ matrices with entries in $S$, equipped with the involution $[s_{ij}]^* := [s_{ji}^*]$. It follows that $M_n(S)$ is a $\ast$-vector space, and we denote $M_n(S)_h$ for its Hermitian subspace. A \textit{matrix ordering} on $S$ is a family of cones $C_n \subset M_n(S)_h$, $n \in \bb{N}$, satisfying: \begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=iii, leftmargin=*] \item $C_n \cap -C_n = \{ 0 \}$, for all $n \geq 1$; \item $M_n(S)$ is the complex span of $C_n$; \item $\alpha^* C_n \alpha \subset C_m$, for each $\alpha \in M_{n,m}(\mathbb{C})$. \end{enumerate} This last condition is often called \textit{compatibility} of $\{C_n\}$. A \textit{matrix-ordered $\ast$-vector space} is a pair $(S, \{C_n\}_{n=1}^{\infty})$, where $S$ is a $\ast$-vector space and $\{C_n\}_{n=1}^{\infty}$ is a matrix ordering. When the context is clear, we often write $M_n(S)^+$ for $C_n$. Note that in this case, for each $n\in\mathbb{N}$, $(M_n(S), M_n(S)^+)$ is an ordered $\ast$-vector space. An element $e \in C_1 = S^+$ is a \textit{matrix order unit}, provided that $I_n \otimes e$ is an order unit for $(M_n(S), M_n(S)^+)$ for every $n \in \mathbb{N}$; it is called an \textit{Archimedean matrix order unit} if $I_n \otimes e$ is an Archimedean order unit for $(M_n(S), M_n(S)^+)$ for each $n \in \mathbb{N}$.
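For the scalar operator system $S=\mathbb{C}$ with $C_n = M_n^+$, the compatibility condition $\alpha^* C_n \alpha \subset C_m$ is the familiar fact that congruence preserves positive semidefiniteness. A toy sketch with $n=2$, $m=1$, so that $\alpha$ is a column vector and $\alpha^* A \alpha$ a scalar that must be nonnegative:

```python
# A is positive semidefinite (symmetric, diagonally dominant); alpha is in M_{2,1}.
A = [[2.0, 1.0], [1.0, 2.0]]
alpha = [1.0, -3.0]
# alpha^* A alpha, a 1x1 "matrix", i.e. the quadratic form of A at alpha.
val = sum(alpha[i] * A[i][j] * alpha[j] for i in range(2) for j in range(2))
print(val)  # 14.0, which is >= 0 as compatibility requires
```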
The triple $(S, \{C_n\}_{n=1}^{\infty}, e)$ is called a \textit{matrix-ordered $\ast$-vector space with a matrix order unit}, or MOU space for short, provided $e$ is a matrix order unit; it is called an \textit{abstract operator system} if $e$ is an Archimedean matrix order unit. We often denote it by the triple $(S, \{C_n\} , e)$, by $(S, e)$, or simply by $S$ whenever the context is clear. Let $\phi \colon S \to T$ be a linear map between MOU spaces $S$ and $T$. For each $n \in \mathbb{N}$, we write $\phi^{(n)} = id_n \otimes \phi \colon M_n(S) \to M_n(T)$ for the map defined by $A \otimes s \mapsto A \otimes \phi(s)$. We say that $\phi$ is \textit{$n$-positive} if $\phi^{(n)}$ is positive between the order unit spaces $M_n(S)$ and $M_n(T)$, and that $\phi$ is \textit{completely positive} provided that $\phi$ is $n$-positive for every $n \in \mathbb{N}$. We write $CP(S, T)$ for the cone of completely positive maps from $S$ to $T$, and $UCP(S, T)$ for the subset of unital completely positive maps. We say that $\phi$ is a \textit{complete order isomorphism} if $\phi$ is bijective and both $\phi$ and $\phi^{-1}$ are completely positive; $\phi$ is a \textit{complete order embedding} if $\phi$ is a complete order isomorphism onto its range. We denote by \textbf{MOU} the category whose objects are MOU spaces with morphisms being unital completely positive maps, and by \textbf{OS} the category whose objects are operator systems with the same morphisms. The process of Archimedeanization from \textbf{MOU} to \textbf{OS} was explicitly studied in \cite[\S 3.1]{PTT}. A \textit{concrete operator system} is a unital selfadjoint subspace $S$ of $B(\cl{H})$, the C*-algebra of bounded linear operators on a Hilbert space $\cl{H}$. Naturally, $S$ inherits a matrix ordering $\{M_n(S)^+\}_{n=1}^{\infty}$ from $B(\cl{H})$ by $M_n(S)^+ := M_n(B(\cl{H}))^+ \cap M_n(S)$, where we identify $M_n(B(\cl{H})) \cong B( \oplus_{i=1}^n \cl{H} )$.
Moreover, the identity $I$ is an Archimedean matrix order unit for $(S, \{M_n(S)^+\}_{n=1}^{\infty})$, thus it is an abstract operator system. The converse was proved by Choi and Effros \cite[Theorem 4.4]{CE}. The next proposition can be found in \cite[Remark 1.2]{FNT2017}; it states a property of MOU spaces rather than operator systems. We include the proof for completeness, as this handy tool reduces the complexity of many proofs in the literature. \begin{prop}\label{MOUunit} Let $S$ be a matrix-ordered $\ast$-vector space. Then $e \in S^+$ is an order unit if and only if it is a matrix order unit. \end{prop} \begin{proof} A matrix order unit is in particular an order unit (take $n = 1$). Conversely, suppose $e$ is an order unit. Given $A \in M_n(S)_h$, decompose $A = \sum_i A_i \otimes x_i$, where $A_i \in (M_n)_h$ and $x_i \in S_h$, by \cite[Lemma 3.7]{PTT}. Since $e$ is an order unit, there exists $r > 0 $ such that $r e \pm x_i \in S^+$ for each $i$. Decompose $A_i = P_i - Q_i$, where $P_i, Q_i \in M_n^+$. Note that \begin{align*} r \sum_i (P_i + Q_i) \otimes e - A &= \sum_i P_i \otimes (re - x_i) + \sum_i Q_i \otimes (re + x_i) \end{align*} is in $M_n(S)^+$. By choosing $\lambda > 0 $ such that $\lambda I_n \geq r \sum_i (P_i + Q_i)$, we deduce that $\lambda I_n \otimes e - A \in M_n(S)^+$. Therefore, $e$ is a matrix order unit for $S$. \end{proof} We also need the following norm estimate. We write $|| T ||_{op}$ for the operator norm of an operator $T$ on a Hilbert space $\mathcal{H}$. \begin{lemma}\label{op-norm_estimate} Let $S \subset B(\mathcal{H})$ be a concrete operator system. Then for each $n \in \mathbb{N}$ and $[T_{ij}] \in M_n(S)$, $|| [T_{ij}] ||_{op} \leq n \cdot \max_{ij} || T_{ij} ||_{M}$. \end{lemma} \begin{proof} For every $T_{ij} \in S \subset B(\mathcal{H})$, a direct calculation shows that $|| [T_{ij}] ||_{op} \leq ( \sum_{i,j=1}^n || T_{ij} ||_{op}^2 )^{1/2} \leq n \cdot \max_{ij} || T_{ij} ||_{op}$. Since the operator norm on $S$ is also an order norm, $|| T_{ij} ||_{op} \leq || T_{ij} ||_M$ and the result follows.
\end{proof} \subsection{Matrix-ordered duals of operator systems}\label{Section:dualopsys} Given an operator system $(S, \{M_n(S)^+\}_{n=1}^\infty, e)$, its underlying space is the AOU space $(S, S^+, e)$, whose ordered dual is $S' = (S', (S')^+)$. A positive linear functional $f$ on $M_n(S)$ is identified with the matrix $[f_{ij}] \in M_n(S')$, where $f_{ij}(x) := f( E_{ij} \otimes x)$. By \cite[Theorem 4.3]{PTT}, this identification endows $S'$ with a canonical matrix ordering: \begin{equation*} [f_{ij}] \in M_n(S')^+ \overset{\text{def}}{\iff} f( [v_{ij}] ) = \sum_{ij} f_{ij}(v_{ij}) \; \text{is positive.} \end{equation*} On the other hand, a functional $f$ on $M_n(S)$ can be identified with $F \colon S \to M_n$ via $F(x) := [ f(E_{ij} \otimes x) ] = [f_{ij}(x)]$. By \cite[Theorem 6.1]{Pa2}, $f$ is positive if and only if $F$ is $n$-positive, if and only if $F$ is completely positive. Therefore, we obtain the following identification: \begin{equation}\label{DualMatrixOrder} [f_{ij}] \in M_n(S')^+ \iff F(x) = [f_{ij}(x)] \; \text{is completely positive.} \end{equation} We simply denote it by $M_n(S')^+ \cong CP(S, M_n)$. The identification $f \longleftrightarrow F$ in \cite[Chapter 6]{Pa2} carries factors of $n$ and $\frac{1}{n}$; we omit them, as they do not affect complete positivity. \begin{defn}[Matrix-ordered dual]\label{OSdual} We call this matrix-ordered $\ast$-vector space $(S', \{ M_n(S')^+ \}_{n=1}^{\infty})$ the \textbf{matrix-ordered dual} of $S$, and simply denote it by $S'$ whenever the context is clear. Following the discussion after \cite[Theorem 4.3]{PTT}, the identification $f \longleftrightarrow F$ asserts that the weak*-topology on $S'$ induces on $M_n(S')$ a topology that is equivalent to the weak*-topology on $M_n(S)'$. We call this topology, unambiguously, the \textit{weak*-topology} on $M_n(S')$.
\end{defn} \begin{remark}\label{Remark:ordered_dual} Some authors take the matrix-ordered dual to be the \textit{algebraic dual} $S^d$ of $S$, equipped with the same cones $M_n(S^d)^+ \cong CP(S, M_n)$. However, by \cite[Lemma 4.2]{PTT}, the complex span of these cones is again $M_n(S')$, so there is no loss of generality in replacing $S^d$ with $S'$, which already carries the weak*-topology. The fact that $S'$ and $M_n(S')$ are topological vector spaces turns out to be crucial in \S \ref{Section:Projective limits}. Also, recall that given an operator space $V$, the \textit{operator space dual} is the underlying space $V'$ equipped with the operator space structure given by the completely isometric identification $M_n(V') \cong CB(V, M_n)$, see \cite{BP, ER}. In a similar vein, for an operator system $S$, we have $M_n(S')^+ \cong CP(S, M_n)$, completely order isomorphically. Wittstock's decomposition theorem \cite[Theorem 8.5]{Pa2} asserts that $M_n(S')^+$ spans $M_n(S')$. We remark that if $S^d$ turns out to be an operator system, then $S^d = S'$. \end{remark} There exist infinite-dimensional operator systems whose matrix-ordered dual is again an operator system. For example, Paulsen and the author in \cite{NP2016} constructed the operator Hilbert system $SOH$, whose matrix-ordered dual is again an operator system. Below we give another example. \begin{exam} Given an operator space $V$, the \textit{Paulsen system} $S(V)$ is \begin{equation*} S(V) := \left\{ \begin{bmatrix} \lambda I & X \\ Y^* & \mu I \end{bmatrix} \in M_2(B(\mathcal{H})) \colon \lambda, \mu \in \bb{C}, X, Y \in V \right\}. \end{equation*} In \cite{Pa2}, it is shown that $S(V)$ is independent of the representation $V\subset B(\mathcal{H})$, up to complete order isomorphism.
One can check that the \textit{trace} functional \begin{equation*} tr \left( \begin{bmatrix} \lambda I & X \\ Y^* & \mu I \end{bmatrix} \right) := \lambda + \mu \end{equation*} is an Archimedean matrix order unit for $S(V)'$; thus, $S(V)'$ is an operator system. \end{exam} \subsection{Inductive limits}\label{inductlim} In \cite{MT}, Mawhinney and Todorov constructed inductive limits in \textbf{OU}, \textbf{AOU}, \textbf{MOU}, and \textbf{OS}. In this subsection we summarize their main results on \textbf{AOU} and \textbf{OS}. We start with the definition of an inductive limit in a general category \textbf{C}. \begin{defn} Let \textbf{C} be a category. An \textbf{inductive sequence} in \textbf{C} is a sequence of pairs $(A_k, f_k)_{k \in \mathbb{N}}$, where $A_k$ is an object and $f_k$ is a morphism such that $f_k \colon A_k \to A_{k+1}$, for each $k$. To avoid excessive notation, we denote it by $(A_k, f_k)$ whenever the context is clear. We call the $f_k$ the connecting morphisms. Observe that for $l > k$, $f_{k, l} := f_{l-1} \circ f_{l-2} \circ \dots \circ f_k$, with $f_{k,k} := id_{A_k}$, is a morphism from $A_k$ to $A_l$. A pair $(A, \{g_k \}_{k\in \mathbb{N}})$, where $A$ is an object in \textbf{C} and for each $k \in \mathbb{N}$, $g_k \colon A_k \to A$ is a morphism, is said to be \textbf{compatible} with $(A_k, f_k)$, provided for each $k \in \mathbb{N}$, $g_{k+1} \circ f_k = g_k$. An \textbf{inductive limit} of $(A_k, f_k)$ is a compatible pair $(A_{\infty}, \{f_{k, \infty}\}_{k\in\mathbb{N}})$ that satisfies the universal property: if $(B, \{g_k\}_{k \in \mathbb{N}})$ is another pair compatible with $(A_k, f_k)$, then there exists a unique morphism $u \colon A_{\infty} \to B$ such that $u \circ f_{k,\infty} = g_k $, for each $k$.
If $(A_k, f_k)$ has an inductive limit, then it is unique up to isomorphism in \textbf{C}, and it will be denoted $(A_{\infty}, \{f_{k,\infty}\}_{k\in\mathbb{N}} )$ or $(A_{\infty}, \{f_{k,\infty}\}) = \inductlim{C}(A_k, f_k)$, or $A_{\infty} = \inductlim{C}A_k$ whenever the context is clear. \end{defn} \subsubsection{OU and AOU spaces}\label{inductiveOUandAOU} We omit the details in \cite[\S 3]{MT} but briefly outline some basic facts. Given an inductive sequence $( (V_k, V_k^+, e_k), \phi_k)$ in \textbf{OU}, we consider the vector subspace $V_{\infty}^0$ of $\prod_{k\in\mathbb{N}} V_k$, where \begin{equation*} V_{\infty}^0 = \{ (x_k) \in \prod_{k\in\mathbb{N}} V_k \colon \exists m \in \mathbb{N} \; \text{so that} \; \phi_k(x_k) = x_{k+1}, \forall k \geq m \}. \end{equation*} Let $N^0 = \{ (x_k) \in V_{\infty}^0 \colon \exists m \in \mathbb{N} \; \text{so that} \; \forall k \geq m, x_k = 0 \}$ and $\ddot{V}_{\infty} := V_{\infty}^0 / N^0$. There are unital positive maps $\ddot{\phi}_{k,\infty} \colon V_k \to \ddot{V}_{\infty}$ that satisfy the following: \begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=iii, leftmargin=*] \item $\ddot{V}_{\infty} = \cup_{k\in\mathbb{N}} \ddot{\phi}_{k,\infty} (V_k)$, and $\ddot{\phi}_{k,\infty}(x_k) = \ddot{ \phi}_{l,\infty}(x_l)$ if and only if $\phi_{k,m}(x_k) = \phi_{l,m}(x_l)$ for some $m \geq k,l$. \item $\ddot{V}_{\infty}^+$ is the set of $\ddot{\phi}_{k,\infty}(x_k)$ such that there exists $m \geq k$ with $\phi_{k,m}(x_k) \in V_m^+$. Moreover, $\ddot{V}_{\infty}^+ = \cup_{k\in\mathbb{N}} \ddot{\phi}_{k,\infty} (V_k^+)$. \item $(\ddot{V}_{\infty}, \ddot{V}_{\infty}^+)$ together with $\ddot{e} = \ddot{\phi}_{k,\infty}(e_k)$ is an order unit space. \item $( \ddot{V}_{\infty}, \{\ddot{\phi}_{k,\infty} \} ) = \inductlim{OU}(V_k, \phi_k)$. \end{enumerate} Given an inductive sequence $(V_k, \phi_k)$ in \textbf{AOU}, we first obtain $\ddot{V}_{\infty}$ in \textbf{OU} and then Archimedeanize $\ddot{V}_{\infty}$ as follows.
For $x = \ddot{\phi}_{k,\infty}(x_k) \in \ddot{V}_{\infty}$, let $N$ be the \textit{null space} \begin{equation} N := \{ x \in \ddot{V}_{\infty} \colon \lim_{m\to\infty} || \phi_{k,m} (x_k) ||^m = 0 \}, \end{equation} where $|| \cdot ||^m$ is any order norm on $V_m$. It follows that $N$ is the kernel of one, and hence of every, order seminorm $|| \cdot ||^{\infty}$ on $\ddot{V}_{\infty}$. Let $V_{\infty} := \ddot{V}_{\infty}/N$; then $V_{\infty}$ is the Archimedeanization of $\ddot{V}_{\infty}$. \begin{remark}\label{inductiveAOU} Let $q_V \colon \ddot{V}_{\infty} \to V_{\infty}$ be the canonical quotient map and $\phi_{k,\infty} = q_V \circ \ddot{\phi}_{k,\infty}$. Then the pair $(V_{\infty}, \{\phi_{k,\infty} \})$ satisfies the following: \begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=iii, leftmargin=*] \item $V_{\infty} = \cup_{k\in\mathbb{N}}\phi_{k,\infty} (V_k)$, and $\phi_{k,\infty}(x_k) = \phi_{l,\infty}(x_l)$ if and only if $\ddot{\phi}_{k,\infty}(x_k) - \ddot{\phi}_{l,\infty}(x_l) \in N$, if and only if $\lim_{m\to \infty} || \phi_{k,m} (x_k) - \phi_{l,m}(x_l) ||^m = 0$. \item $V_{\infty}^+$ is the set of $\phi_{k,\infty} (x_k)$ such that for every $r > 0$, there exist $m \geq l > k$ and $y_l \in V_l$ with $\ddot{\phi}_{l,\infty}(y_l) \in N$ and $re_m + \phi_{k,m}(x_k) + \phi_{l,m}(y_l) \in V_m^+$. \item $(V_{\infty}, V_{\infty}^+)$ together with $e = {\phi}_{k,\infty}(e_k)$ is an AOU space. \item $(V_{\infty}, \{\phi_{k,\infty} \}) = \inductlim{AOU}(V_k, \phi_k)$.
\end{enumerate} \end{remark} By dualizing the inductive sequence $(V_k, \phi_k)$ in \textbf{OU}, we obtain the following reverse sequence of ordered duals: \begin{equation*} V_1' \overset{\phi_{1}^{'}}{\longleftarrow} \dots \overset{\phi_{k-1}^{'}}{\longleftarrow} V_k' \overset{\phi_{k}^{'}}{\longleftarrow} V_{k+1}' \overset{\phi_{k+1}^{'}}{\longleftarrow} \dots \end{equation*} Since each $\phi_k$ is unital, $\phi_k'$ maps states to states, so we have the following reverse sequence of compact Hausdorff topological spaces with respect to the weak*-topology: \begin{equation*} \mathfrak{S}(V_1) \overset{\phi_{1}^{'}}{\longleftarrow} \dots \overset{\phi_{k-1}^{'}}{\longleftarrow} \mathfrak{S}(V_k) \overset{\phi_{k}^{'}}{\longleftarrow} \mathfrak{S}(V_{k+1}) \overset{\phi_{k+1}^{'}}{\longleftarrow} \dots \end{equation*} This is a \textit{projective sequence} in \textbf{TOP}, the category whose objects are topological spaces and whose morphisms are continuous maps. By \cite{bourbakiTOP}, its projective limit is \begin{equation*} \projectlim{TOP} \mathfrak{S}(V_k) = \left\{ (f_k) \in \prod_{k\in \mathbb{N}} \mathfrak{S}(V_k) \colon f_k = \phi_{k+1}' (f_{k+1}) \right\}, \end{equation*} together with the product topology. Being a closed subspace of the product, which is compact by Tychonoff's theorem, it is a compact Hausdorff topological space. Moreover, there is a homeomorphism $\theta \colon \mathfrak{S}(\ddot{V}_{\infty}) \to \projectlim{TOP} \mathfrak{S}(V_k)$ given by $\theta(f) = (f_k)$, where $f_k \in \mathfrak{S}(V_k)$ is defined by $f_k(v_k) := f( \ddot{\phi}_{k,\infty} (v_k) )$. When $(V_k, \phi_k)$ is in \textbf{AOU}, $\mathfrak{S}(V_{\infty})$ is also homeomorphic to $\projectlim{TOP} \mathfrak{S}(V_k)$ by a standard lifting argument for quotients, see \cite[Proposition 3.16]{MT}. \subsubsection{MOU and OS} Again we briefly describe the constructions here. Let $(S_k, \phi_k)$ be an inductive sequence in \textbf{MOU}. At each matrix level, we have an inductive sequence $(M_n(S_k), \phi_k^{(n)})$ in \textbf{OU}.
Hence, for each $n \in \mathbb{N}$, we obtain $\inductlim{OU} M_n(S_k)$. Let $\ddot{S}_{\infty} = \inductlim{OU} S_k$. It turns out that $M_n(\ddot{S}_{\infty})$, equipped with the canonical structures, is order isomorphic to $\inductlim{OU} M_n(S_k)$. Moreover, the maps $\ddot{\phi}_{k,\infty}$ are unital completely positive, and it follows that the pair $(\ddot{S}_{\infty}, \{\ddot{\phi}_{k,\infty} \}) = \inductlim{MOU}(S_k,\phi_k)$. When $(S_k,\phi_k)$ is in \textbf{OS}, we Archimedeanize $\inductlim{MOU}(S_k,\phi_k)$ to obtain an operator system $S_{\infty}$. It is shown that $S_{\infty}$, with the unital completely positive maps $\phi_{k,\infty} := q_S \circ \ddot{\phi}_{k,\infty}$, is the inductive limit of $(S_k, \phi_k)$ in \textbf{OS}. \begin{remark} Let $(S_k, \phi_k)$ be an inductive sequence in \textbf{OS}. Then for each $n \in \mathbb{N}$, $(M_n(S_k), \phi_k^{(n)})$ is an inductive sequence in \textbf{AOU}. An alternative way to construct $\inductlim{OS}S_k$ is to first obtain $S_{\infty} = \inductlim{AOU} S_k$, and then identify $M_n(S_{\infty})$ with $\inductlim{AOU} M_n(S_k)$ for each $n$. This argument is valid due to the fact that the Archimedeanization from \textbf{MOU} to \textbf{OS} is obtained by forming the Archimedeanization from \textbf{OU} to \textbf{AOU} at each matrix level. \end{remark} \section{Projective limits}\label{Section:Projective limits} In this section, we construct projective limits in \textbf{AOU} and \textbf{OS}. We start with the definition of a projective limit in a general category \textbf{C}. \begin{defn} Let \textbf{C} be a category. A \textbf{projective sequence} in \textbf{C} is a sequence of pairs $\{ (A_k, f_k) \}_{k \in \mathbb{N}}$, where $A_k$ is an object and $f_k$ is a morphism such that $f_k \colon A_{k+1} \to A_k$, for each $k$. To avoid excessive notation, we denote it by $(A_k, f_k)$ whenever the context is clear. We call the $f_k$ the connecting morphisms.
Observe that for $l < k$, $f_{k, l} := f_{l} \circ f_{l+1} \circ \dots \circ f_{k-1}$, with $f_{k,k} := id_{A_k}$, is a morphism from $A_k$ to $A_l$. A pair $(A, \{p_k \}_{k\in \mathbb{N}})$, where $A$ is an object in \textbf{C} and for each $k \in \mathbb{N}$, $p_k \colon A \to A_k$ is a morphism, is said to be \textbf{compatible} with $(A_k, f_k)$, provided for each $k \in \mathbb{N}$, $f_k \circ p_{k+1} = p_k$. A \textbf{projective limit} of $(A_k, f_k)$ is a compatible pair $(A, \{p_k\}_{k\in\mathbb{N}})$ that satisfies the universal property: if $(B, \{q_k\}_{k \in \mathbb{N}})$ is another pair compatible with $(A_k, f_k)$, then there exists a unique morphism $u \colon B \to A$ such that $p_k \circ u = q_k $, for each $k$. \end{defn} \begin{remark} If $(A_k, f_k)$ has a projective limit $(A, \{ p_{k}\}_{k\in\mathbb{N} } )$, then it is unique up to isomorphism in \textbf{C}, and it will be denoted $(A, \{p_k\}_{k\in\mathbb{N} })$ or $(A, \{p_k\}) = \projectlim{C}(A_k, f_k)$, or $A=\projectlim{C}A_k$ whenever the context is clear. We summarize the above in the following diagram. \begin{equation}\label{Cuniversal_property} \xymatrix{ A_1 &\ar[l]_{f_1} \dots &\ar[l]_{f_{k-1}} A_k &\ar[l]_{f_k} A_{k+1} & \ar[l]_{f_{k+1}} \dots \\ & & A \ar[u]^{p_k} \ar@/_0.5pc/[ur]^{p_{k+1}} \ar@/^0.5pc/@{=>}[ul] \ar@/^1.0pc/[ull]^{p_1} \ar@/_1.0pc/@{=>}[urr]& & } \end{equation} \end{remark} \subsection{Projective limits of TVS}\label{projectlimTVS} Our ambient category is that of topological vector spaces with continuous linear maps, denoted by \textbf{TVS}. The material in this subsection is standard, see \cite{bourbakiTOP}. We recall a few facts that shall be used in the sequel. Given a family $\{V_i\}_{i \in I}$ of topological vector spaces, the Cartesian product $X := \prod_{i \in I} V_i = \{v=(v_i) \colon v_i \in V_i, i \in I \}$, endowed with the product topology, is a topological space.
For each $i \in I$, the \textit{projection} $\pi_i \colon X \to V_i$ defined by $\pi_i (v) := v_i$ is a continuous map. Also, $X$ equipped with the canonical vector addition and scalar multiplication is a topological vector space, hence an object of \textbf{TVS}. If each $V_i$ is Hausdorff, then so is $X$. Henceforth, we take $I = \mathbb{N}$ with the usual ordering. \begin{thm}\label{TVSprojective_limit} Suppose $(V_k, \phi_k)_{k \in \mathbb{N}}$ is a projective sequence in \textbf{TVS}. Define \begin{equation*} V := \left\{ v = (v_k) \in \prod_{k \in \mathbb{N}} V_k \colon \phi_k(v_{k+1}) = v_k, \; \text{for all} \; k \in \mathbb{N} \right\}, \end{equation*} and define the map $p_k \colon V \to V_k$ by $p_k := \pi_k \circ \iota$ for each $k \in \mathbb{N}$, where $\iota$ is the inclusion map from $V$ to $X$. Then $(V, \{ p_k \} )$ is the projective limit of $(V_k, \phi_k)$ in \textbf{TVS}. \end{thm} The existence part is standard; the uniqueness part will be emphasized in the following remark and proposition. \begin{remark} Indeed, $V$ is a closed subspace of $X$ with respect to the product topology, which is equivalent to the initial topology induced by $\{\pi_k\}_{k \in \mathbb{N}}$. The subspace topology on $V$ is equivalent to the \textit{initial topology} induced by $\{p_k\}_{k \in \mathbb{N}}$. Hence, if we first construct the projective limit $(W, \{q_k\}_{k \in \mathbb{N}})$ of $(V_k, \phi_k)$ in \textbf{VS}, and then equip $W$ with the initial topology induced by $\{q_k\}_{k \in \mathbb{N}}$, then $V$ and $W$ are isomorphic in \textbf{TVS}. This observation and the next proposition establish the uniqueness of the projective limit in \textbf{TVS}. \end{remark} \begin{prop}\label{uniquemorphism} Let $(V, \{p_k\})$ be given as in Theorem \ref{TVSprojective_limit} and let $W$ be a topological vector space. Then a linear map $\psi \colon W \to V$ is continuous if and only if $p_k \circ \psi \colon W \to V_k$ is continuous for each $k \in \mathbb{N}$.
Moreover, if $(W, \{q_k\})$ in \textbf{TVS} is compatible with $(V_k, \phi_k)$, then there exists a unique continuous linear map $\psi \colon W \to V$ such that $p_k \circ \psi= q_k$, for each $k \in \mathbb{N}$. \end{prop} \begin{proof} The first statement is a direct consequence of the characterization of the initial topology. For the second statement, note that the morphisms $q_k$ determine a unique morphism $\psi \colon W \to X$ by $\psi(w) := (q_k(w))$. The compatibility condition asserts that the image of $\psi$ lies in $V$ and $p_k \circ \psi = q_k$. \end{proof} \begin{remark} It follows that the uniqueness of projective limits in \textbf{AOU} and \textbf{OS} is deducible from this proposition. This is also the primary reason the author considered \textbf{TVS} instead of \textbf{VS} and emphasized the weak*-topology on the matrix-ordered dual in Remark \ref{Remark:ordered_dual}. \end{remark} \subsection{Projective limits of AOU}\label{projectlimAOU} In this subsection, let $(V_k, V_k^+, e_k )_{k \in \mathbb{N}}$ be a sequence of AOU spaces and $\phi_k \colon V_{k+1} \to V_k$ be a unital positive map for each $k \in \mathbb{N}$. We simply write $(V_k, \phi_k)$ for such a projective sequence in \textbf{AOU}. To construct a candidate for the projective limit of $(V_k, \phi_k)$ in \textbf{AOU}, we begin by working in the ambient category \textbf{TVS}. By the discussion in \S \ref{AOUspaces}, \textbf{AOU} is a subcategory of \textbf{TVS}. Let us apply the forgetful functor from \textbf{AOU} to \textbf{TVS} and obtain the projective limit $(V, \{ p_k \}) =$ $\projectlim{TVS} (V_k, \phi_k)$ as in Theorem \ref{TVSprojective_limit}. Equip $V$ with the canonical involution $v^* := (v_k^*)$, and define $V_h := \{ v \in V \colon v = v^*\}$ and $V^+ := V \cap \prod_{k \in \mathbb{N}} V_k^+$. Then $(V, V^+)$ is an ordered $\ast$-vector space such that diagram \eqref{Cuniversal_property} commutes in \textbf{TVS}. We now proceed to construct a candidate $V_{\infty} \subset V$ in \textbf{AOU}.
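The ambient construction just described, elements of the projective limit realized as coordinate-wise compatible sequences, can be sanity-checked in a small numerical sketch. The concrete choices below are illustrative assumptions, not objects from the paper: we take $V_k = \mathbb{C}^k$ with the entrywise order, unit $e_k = (1,\dots,1)$, and the truncation map as the unital positive connecting map $\phi_k$.

```python
import numpy as np

# Illustrative toy model (an assumption for this sketch, not from the paper):
# V_k = C^k with entrywise order, unit e_k = (1,...,1), and connecting maps
# phi_k : V_{k+1} -> V_k given by dropping the last coordinate.  Each phi_k
# is unital (phi_k(e_{k+1}) = e_k) and positive (entrywise).

def phi(v):
    """Connecting map phi_k : C^{k+1} -> C^k, truncation of the last entry."""
    return v[:-1]

def is_compatible(seq):
    """Check phi_k(v_{k+1}) = v_k for a finite truncation of a sequence;
    seq[k] is the component in V_{k+1}."""
    return all(np.array_equal(phi(seq[k + 1]), seq[k])
               for k in range(len(seq) - 1))

# A (truncated) element of the projective limit V: the first five components
# of the bounded sequence (1, 1/2, 1/3, ...).
target = np.array([1.0 / (j + 1) for j in range(5)])
seq = [target[:k] for k in range(1, 6)]
assert is_compatible(seq)

# On (C^k, entrywise order, e_k) the order norm of a Hermitian element is the
# sup-norm, so ||v||_h = sup_k ||v_k||_h is finite here: v lies in the AOU
# candidate V_infty, not merely in V.
norms = [np.max(np.abs(v)) for v in seq]
print(max(norms))
```

The compatibility check mirrors the defining condition $\phi_k(v_{k+1}) = v_k$ of Theorem \ref{TVSprojective_limit}, and the finiteness of the supremum of levelwise order norms is exactly the membership condition cutting $V_{\infty}$ out of $V$.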
For each $k \in \mathbb{N}$, we denote the minimal and maximal order norms for $v_k \in V_k$ by $|| v_k ||^k_{m}$ and $|| v_k ||^k_{M}$, respectively. Recall that all order norms on $V_k$ lie between $|| \cdot ||^k_{m}$ and $|| \cdot ||^k_{M} $, and they all coincide on $(V_k)_h$. For Hermitian $v_k \in (V_k)_h$, we simply write $||v_k||^k_h$ for the order norm determined by $e_k$. We define, for $v = (v_k) \in V$, \begin{equation*} || v ||_{\max} := \sup_k \{ || v_k ||^k_{M} \} \qquad \text{and} \qquad || v ||_{\min} := \sup_k \{ || v_k ||^k_{m} \}. \end{equation*} If $v \in V_h$, then $||v||_{\min} = ||v||_{\max}$, and we will write $||v ||_h$ to avoid excessive notation. Define \begin{equation}\label{Vinfty} V_{\infty} := \{ v \in V \colon || v ||_{\min} < \infty \}. \end{equation} Let $(V_{\infty})_h := V_{\infty} \cap V_h$ and $V_{\infty}^+ := V_{\infty} \cap V^+$. It follows that $(V_{\infty}, V_{\infty}^+)$ is an ordered $\ast$-subspace of $V$. We first claim that $(V_{\infty}, V_{\infty}^+)$ with $e := (e_k) \in V_{\infty}^+$ is indeed an AOU space. \begin{lemma}\label{orderunit} The triple $(V_{\infty}, V_{\infty}^+, e)$ is an AOU space. \end{lemma} \begin{proof} Given $v = (v_k) \in (V_{\infty})_h$, let $r = || v ||_h$; then for each $k \in \mathbb{N}$, $r \geq || v_k ||^k_h$, so $re_k + v_k \geq || v_k ||^k_h e_k + v_k \in V_k^+$. Thus $re+v \in V_{\infty}^+$, and $e$ is an order unit for $(V_{\infty}, V_{\infty}^+)$. To see that $e$ is Archimedean, suppose $v = (v_k) \in (V_{\infty})_h$ and $r e + v \in V_{\infty}^+$ for each $r > 0$. Then for each $k \in \mathbb{N}$, $r e_k + v_k \in V_k^+$ for every $r > 0$, and the Archimedean property of $e_k$ asserts that $v_k \in V_k^+$. Hence, $v \in V_{\infty} \cap V^+ = V_{\infty}^+$, and $(V_{\infty}, V_{\infty}^+, e)$ is an AOU space. \end{proof} \begin{remark} Since each $|| \cdot ||^k_{m}$ (resp. $|| \cdot ||^k_{M}$) is a $\ast$-norm on $V_k$, it is evident that $|| \cdot ||_{\min}$ (resp. $|| \cdot ||_{\max}$) is a $\ast$-norm on $V_{\infty}$.
In the next lemma, we will see that $|| \cdot ||_h$ is indeed the order seminorm on $(V_{\infty})_h$ determined by $e$, and thus $|| \cdot ||_{\min}$ and $|| \cdot ||_{\max}$ are order norms on $V_{\infty}$. We also remark that we can replace $|| \cdot ||_{\min}$ with $|| \cdot ||_{\max}$ in \eqref{Vinfty}, since all order norms are equivalent on each $V_k$. \end{remark} \begin{lemma}\label{order_seminorm} The seminorm $|| \cdot ||_h$ defined on $(V_{\infty})_h$ is the order seminorm determined by $e$. Moreover, $|| \cdot ||_{\min}$ and $|| \cdot ||_{\max}$ are order norms on $(V_{\infty}, V_{\infty}^+, e)$. \end{lemma} \begin{proof} Note that $e$ is an order unit for $(V_{\infty})_h$. For $v \in (V_{\infty})_h$, denote by $|| v ||$ the order seminorm determined by $e$ as in \eqref{OrderNorm}. We shall show that $|| v || = || v ||_h$, for $v \in (V_{\infty})_h$. Indeed, if $r = ||v||$, then $re \pm v \in V_{\infty}^+$, and for each $k \in \mathbb{N}$, $re_k \pm v_k \in V_k^+$, which implies that $r \geq || v_k ||^k_h$. Hence, $|| v ||_h = \sup_k ||v_k||^k_h \leq r = || v ||$. Conversely, let $||v|| > \varepsilon > 0$ and $r_{\varepsilon} = || v || - \varepsilon$. By the definition of the order seminorm, $r_{\varepsilon} e + v$ or $r_{\varepsilon} e - v$ is not positive. It follows that there exists $k \in \mathbb{N}$ such that $r_{\varepsilon} e_k + v_k$ or $r_{\varepsilon} e_k - v_k$ is not in $V_k^+$. Since $||v_k||^k_h e_k \pm v_k \in V_k^+$, again by definition, $r_{\varepsilon} < ||v_k||^k_h \leq || v ||_h$. Letting $\varepsilon \to 0$ yields $||v|| \leq ||v||_h$; therefore, $|| v || = || v ||_h$. The last statement follows from $|| v ||_{\min} = || v ||_h = || v ||_{\max}$, for all $v \in (V_{\infty})_h$. \end{proof} \begin{prop}\label{AOUcompatible} For each $k\in\mathbb{N}$, define $p_{\infty, k} \colon V_{\infty} \to V_k$ by $p_{\infty, k}:= p_k \circ \iota_{\infty}$, where $\iota_{\infty}$ is the natural inclusion from $V_{\infty}$ to $V$. Then $p_{\infty,k}$ is a unital positive map.
Furthermore, the pair $(V_{\infty}, \{p_{\infty, k} \}_{k \in \mathbb{N}} )$ is compatible with $(V_k, \phi_k)$ in \textbf{AOU}. \end{prop} \begin{proof} For each $k \in \mathbb{N}$, $p_{\infty,k}(v) = p_k \circ \iota_{\infty} (v) = v_k$ defines a unital positive linear map. Moreover, $\phi_{k} \circ p_{\infty, k+1}(v) = \phi_{k} ( v_{k+1} ) = v_k = p_{\infty, k} (v)$. Therefore, $(V_{\infty}, \{ p_{\infty, k}\} )$ is compatible with $(V_k, \phi_k)$ in \textbf{AOU}. \end{proof} \begin{remark} We summarize the relations above in the diagram below. \begin{equation} \xymatrix{ V_1 &\ar[l]_{\phi_1} \dots &\ar[l]_{\phi_{k-1}} V_k & \dots \ar[l]_{\phi_{k}} \\ & & V_{\infty} \ar[u]_{p_{\infty,k}} \ar@{=>}[ul] \ar@/^0.5pc/[ull]^{p_{\infty,1}} \ar@/_0.5pc/@{=>}[ur] \ar[d]^{\iota_{\infty}}& & \\ & & V \ar@/^3.0pc/[lluu]^{p_1} \ar@/^4.0pc/@{=>}[luu] \ar@/^2.0pc/[uu]^{p_k} \ar@/_4.0pc/@{=>}[ruu] & } \end{equation} We remark that, if $(V_k, \phi_k)$ is in \textbf{OU}, then the order norms used above become order seminorms. The same construction still goes through; in particular, arguments similar to those in Lemma \ref{orderunit} and Proposition \ref{AOUcompatible} show that $(V_{\infty}, \{ p_{\infty, k }\} )$ is a pair compatible with $(V_k, \phi_k)$ in \textbf{OU}. However, what makes the construction rather delicate is the topological structure and continuity, because $V_k$ and $V_{\infty}$ need not be Hausdorff. \end{remark} \begin{thm}\label{AOUprojective_limit} The pair $( (V_{\infty}, V_{\infty}^+, e), \{ p_{\infty, k} \}_{k\in\mathbb{N}})$ is the limit of the projective sequence $( (V_k, V_k^+, e_k), \phi_k )$ in \textbf{AOU}. \end{thm} \begin{proof} Suppose $( (W, W^+, e') , \{q_k\}_{k\in\mathbb{N}})$ in \textbf{AOU} is another pair compatible with $( (V_k, V_k^+, e_k), \phi_k)$.
We will show that there exists a unique unital positive map $\psi \colon W \to V_{\infty}$ such that for each $k \in \mathbb{N}$, $p_{\infty, k} \circ \psi = q_k$. Note that $(W, \{q_k\})$ is compatible with $(V_k, \phi_k)$ in \textbf{TVS}, so by Proposition \ref{uniquemorphism}, there exists a unique continuous linear map $\psi \colon W \to V$ such that $p_{k} \circ \psi = q_k$. We will show that $\psi$ is indeed a unital positive map from $W$ into $V_{\infty}$. By Proposition \ref{upContractive}, since $q_k$ is unital positive, for each $w \in W$ we have $|| q_k (w) ||^k_{m} \leq || w ||_{m}$ for each $k \in \mathbb{N}$; so $|| \psi(w) ||_{\min} \leq || w ||_m < \infty$ and $\psi(w) \in V_{\infty}$. Also, $\psi(e') = ( q_k(e') ) = (e_k) = e$; and for $w \in W^+$, $\psi(w) = (q_k(w)) \in V_{\infty}^+$. The uniqueness of $\psi$ in \textbf{AOU} follows from its uniqueness in \textbf{TVS}. Consequently, $\psi \colon W \to V_{\infty}$ is the unique morphism in \textbf{AOU} that makes the diagram below commute. \begin{equation}\label{AOUuniversal_diagram} \xymatrix{ V_k & & V \\ W \ar@/^1.75pc/@{-->}[urr]^{\psi} \ar[u]^{q_k} \ar@{-->}[rr]_{\psi} & & V_{\infty} \ar[u]_{\iota_{\infty}} \ar[llu]_{p_{\infty, k}} } \end{equation} \end{proof} We end this subsection by proving that $|| \cdot ||_{\min}$ and $|| \cdot ||_{\max}$ are indeed the minimal and maximal order norms, respectively, on $V_{\infty}$. \begin{prop}\label{AOUminOrderNorm} On $(V_{\infty}, V_{\infty}^+, e)$, $|| \cdot ||_{\min}$ is the minimal order norm. \end{prop} \begin{proof} Denote by $|| \cdot ||_m$ the minimal order norm on $V_{\infty}$. By Lemma \ref{order_seminorm}, $|| \cdot ||_{\min}$ is an order norm, so $|| \cdot ||_m \leq || \cdot ||_{\min}$. For the converse, note that for each $f_k \in \mathfrak{S}(V_k)$, $f_k \circ p_{\infty, k} \in \mathfrak{S}(V_{\infty})$.
Hence for $v = (v_k) \in V_{\infty}$, \begin{align*} || v ||_{\min} &= \sup_k || v_k ||^k_{m} = \sup_k \left\{ \sup \{ | f_k (v_k) | \colon f_k \in \mathfrak{S}(V_k) \} \right\} \\ &= \sup \left\{ | f_k \circ p_{\infty, k} (v) | \colon k \in \mathbb{N}, f_k \in \mathfrak{S}(V_k) \right\} \\ &\leq \sup \{ |f (v) | \colon f \in \mathfrak{S}(V_{\infty}) \} = || v ||_{m}. \end{align*} Therefore, $|| v ||_{\min} = ||v||_m$. \end{proof} \begin{prop} On $(V_{\infty}, V_{\infty}^+, e)$, $|| \cdot ||_{\max}$ is the maximal order norm. \end{prop} \begin{proof} Denote by $|| \cdot ||_M$ the maximal order norm on $V_{\infty}$. By Lemma \ref{order_seminorm}, $|| \cdot ||_{\max}$ is an order norm, so $|| \cdot ||_{\max} \leq || \cdot ||_{M}$. For the converse, consider $v = (v_k) \in V_{\infty}$ and fix a representation $v = \sum_{i=1}^n \lambda_i v_i$, where $\lambda_i \in \mathbb{C}$ and $v_i = (v_i^k)_{k \in \mathbb{N}} \in (V_{\infty})_h$, so that \begin{equation*} \sum_{i=1}^n | \lambda_i| || v_i ||_h = \sum_{i=1}^n |\lambda_i| \sup_k || v_i^k ||^k_h \geq \sup_k \sum_{i=1}^n |\lambda_i| || v_i^k ||^k_h. \end{equation*} For each $k \in \mathbb{N}$, this representation of $v$ implies that $v_k = \sum_{i=1}^n \lambda_i v_i^k$, with $v_i^k \in (V_k)_h$. In particular, by the definition of the maximal order norm on $V_k$, we have $\sum_{i=1}^n |\lambda_i| || v_i^k ||^k_h \geq ||v_k||_{M}^k$. Hence, \begin{equation*} \sum_{i=1}^n | \lambda_i| || v_i ||_h \geq \sup_k ||v_k||^k_{M} = || v ||_{\max}. \end{equation*} Taking the infimum over all such representations yields that $|| v ||_M \geq || v ||_{\max}$. Therefore, $||v||_{\max} = ||v||_M$. \end{proof} \subsection{Projective limits of OS}\label{projectlimOS} We proceed to construct projective limits in \textbf{OS}. A linear map $\phi$ between operator systems $S$ and $T$ is unital completely positive if and only if for each $n \in \mathbb{N}$, its amplification $\phi^{(n)} = id_n \otimes \phi \colon M_n(S) \to M_n(T)$ is unital positive.
In particular, $(M_n(S), M_n(S)^+, I_n \otimes e_S)$ is an AOU space and likewise for $M_n(T)$. Hence, a projective sequence $(S_k, \phi_k)$ in \textbf{OS} gives rise, for each $n \in \mathbb{N}$, to a projective sequence $( M_n(S_k), \phi_k^{(n)})_{k \in \mathbb{N}}$ in \textbf{AOU}. When $n = 1$, by Theorem \ref{AOUprojective_limit}, we denote by $(S_{\infty}, S_{\infty}^+, e)$ with morphisms $\{\phi_{\infty, k}\}_{k\in\mathbb{N}}$ the projective limit $\projectlim{AOU} (S_k, \phi_k)$. The key step of the construction is to realize that there is a natural AOU structure on the vector space $M_n(S_{\infty})$ induced by $\{M_n(S_k)^+\}_{k\in\bb{N}}$, which yields a matrix ordering on $S_{\infty}$. To avoid notational confusion, in this subsection we write $\vec{x} = (x_k) \in S_{\infty}$ and $[x^k_{ij}]$ for an element of $M_n(S_k)$. There is a canonical vector space identification between $M_n(S_{\infty})$ and $\projectlim{AOU} M_n(S_k)$; namely, given $[\vec{x}_{ij}] \in M_n(S_{\infty})$, we identify it with $( [x^k_{ij}] )_{k\in\mathbb{N} }$, and vice-versa. We shall see that this identification endows $M_n(S_{\infty})$ with the desired structure. Given $[\vec{x}_{ij}] \in M_n(S_{\infty})$, define $[\vec{x}_{ij}]^* := [ \vec{x}^*_{ji} ] = [ (x^k)^*_{ji} ]$; with this involution, $M_n(S_{\infty})$ is a $\ast$-vector space. We define a matrix ordering on $S_{\infty}$ by \begin{equation}\label{projectiveOScone} M_n(S_{\infty})^+ := \{ [\vec{x}_{ij}] \in M_n(S_{\infty}) \colon [ x^k_{ij} ] \in M_n(S_k)^+, \forall k \in \mathbb{N} \}. \end{equation} Note that for each $[\vec{x}_{ij}] \in M_n(S_{\infty})^+$ and $\alpha \in M_{m,n}(\mathbb{C})$, $\alpha [\vec{x}_{ij}] \alpha^* = ( \alpha [x^k_{ij}] \alpha^* )_{k\in\mathbb{N}}$ is in $M_m(S_{\infty})^+$. Hence, this definition is well-defined by linearity and compatibility of $\{\phi_{\infty, k}\}_{k\in\mathbb{N}}$. We shall show that it indeed defines a matrix ordering on $S_{\infty}$.
\begin{prop} The triple $(M_n(S_{\infty}), M_n(S_{\infty})^+, I_n \otimes e)$ is an AOU space. Moreover, the collection $\{M_n(S_{\infty})^+\}_{n=1}^{\infty}$ is a compatible family on $S_{\infty}$. Consequently, $(S_{\infty}, \{M_n(S_{\infty})^+\}_{n=1}^{\infty}, e)$ is an operator system. \end{prop} \begin{proof} Note that $M_n(S_{\infty})^+$ is a proper cone in $(M_n(S_{\infty}))_h$ since each $M_n(S_k)^+$ is a proper cone in $(M_n(S_k))_h$. It suffices to show that $I_n \otimes e$ is an Archimedean order unit. Given Hermitian $[\vec{x}_{ij}]$, take $r_{ij} = || \vec{x}_{ij} ||_{\max}$ and let $r = n \cdot \max_{ij} r_{ij}$. For each $k$, $[x^k_{ij}]$ is Hermitian in $M_n(S_k)$, so by Lemma \ref{op-norm_estimate}, $|| [x^k_{ij}] ||_h = || [x^k_{ij}] ||_{op} \leq r$, and $r I_n \otimes e_k - [x^k_{ij}] \in M_n(S_k)^+$. Hence, $r I_n \otimes e - [\vec{x}_{ij}] \in M_n(S_{\infty})^+$, and $I_n \otimes e$ is an order unit. It is Archimedean, for if $rI_n\otimes e + [\vec{x}_{ij}] \in M_n(S_{\infty})^+$ for each $r > 0$, then for each $k\in \mathbb{N}$ and each $r > 0$, $r I_n \otimes e_k + [x^k_{ij}] \in M_n(S_k)^+$. Hence, it follows that $[x^k_{ij}] \in M_n(S_k)^+$ and $[\vec{x}_{ij}] \in M_n(S_{\infty})^+$. Therefore, we conclude that $(M_n(S_{\infty}), M_n(S_{\infty})^+, I_n\otimes e)$ is an AOU space for every $n \in \mathbb{N}$. Compatibility of the cones $\{M_n(S_{\infty})^+\}_{n=1}^{\infty}$ follows from the definition and compatibility of $\{ M_n(S_k)^+ \}_{n=1}^{\infty}$ for each $k\in\mathbb{N}$. \end{proof} Now the AOU projective limit $S_{\infty}$ carries an operator system structure. We claim that it is also the projective limit of $(S_k, \phi_k)$ in \textbf{OS} with the same maps $\phi_{\infty, k}$. \begin{thm}\label{OS_projective_limit} The pair $( (S_{\infty}, \{M_n(S_{\infty})^+\}_{n=1}^{\infty}, e ), \{ \phi_{\infty, k} \}_{k \in \mathbb{N}} )$ is a projective limit of $(S_k, \phi_k)$ in \textbf{OS}.
\end{thm} \begin{proof} We claim that for each $n \in \mathbb{N}$, $(M_n(S_{\infty}), M_n(S_{\infty})^+, I_n \otimes e)$, with the maps $\phi_{\infty,k}^{(n)}$, is a projective limit of $(M_n(S_k), \phi_k^{(n)})_{k\in\mathbb{N}}$ in \textbf{AOU}. First we show that the map $\phi_{\infty,k}^{(n)}$ is positive. Indeed, if $[\vec{x}_{ij}] \in M_n(S_{\infty})^+$, then \begin{align*} \phi_{\infty,k}^{(n)} ([\vec{x}_{ij}]) &= [ \phi_{\infty,k} (\vec{x}_{ij}) ] = [x^k_{ij} ] \in M_n(S_k)^+, \end{align*} by definition of $M_n(S_{\infty})^+$; moreover, $\phi_{\infty,k}^{(n)}$ is clearly unital. Also, \begin{align*} \phi_k^{(n)} \circ \phi_{\infty, k+1}^{(n)} &= (id_n \otimes \phi_k) \circ (id_n \otimes \phi_{\infty,k+1} ) \\ &= id_n \otimes ( \phi_k \circ \phi_{\infty, k+1} ) = id_n \otimes \phi_{\infty, k} = \phi_{\infty, k}^{(n)}, \end{align*} so $(M_n(S_{\infty}), \{ \phi_{\infty, k}^{(n)} \}_{k\in\mathbb{N}} )$ is a compatible pair with $(M_n(S_k), \phi_k^{(n)})$ in \textbf{AOU}; consequently, $(S_{\infty}, \{ \phi_{\infty, k} \}_{k\in\mathbb{N}} )$ is compatible with $(S_k, \phi_k)$ in \textbf{OS}. For the universal property, let $(T, \{ \psi_k \}_{k\in\mathbb{N}})$ be compatible with $(S_k, \phi_k)$ in \textbf{OS}. Define $\Psi \colon T \to S_{\infty}$ by $\Psi (t) := ( \psi_k(t) )_{k \in \mathbb{N}}$. Then $\Psi$ is well-defined, unital, and positive just as in the proof of Theorem \ref{AOUprojective_limit}. Moreover, for each $n \in \mathbb{N}$ and every $Y \in M_n(T)^+$, $\Psi^{(n)}( Y )$ can be identified with $( \psi_k^{(n)} (Y) )_{k \in \mathbb{N}}$, where each $\psi_k^{(n)} (Y) \in M_n(S_k)^+$. Thus, $\Psi$ is unital completely positive. The uniqueness of $\Psi$ follows from the universal property of $S_{\infty}$ in \textbf{AOU} at the ground level. \end{proof} \begin{remark}\label{surjectivity} If each $\phi_k$ is surjective, then the map $\phi_{\infty, k}$ is also surjective for each $k$.
Indeed, given $x_k \in S_k$, take $\vec{x} = (x_i) \in S_{\infty}$ such that $x_i = \phi_{k,i}(x_k)$ for $i \leq k$ and, chosen inductively, $x_{i+1} \in \phi_{i}^{-1}( \{x_i \} ) \neq \emptyset$ for $i \geq k$. Then $\phi_{\infty, k}(\vec{x}) = x_k$. \end{remark} \begin{thm}\label{OSdiagram_commute} Let $(S_k, \phi_k)$ and $(T_k, \psi_k)$ be two projective sequences in \textbf{OS} such that for each $k \in \mathbb{N}$, there is a unital completely positive map $\theta_k \colon S_k \to T_k$ with $\theta_k \circ \phi_k = \psi_k \circ \theta_{k+1}$. Suppose $(S_{\infty}, \{\phi_{\infty,k} \})$ and $(T_{\infty}, \{ \psi_{\infty, k} \})$ are the projective limits of $(S_k, \phi_k)$ and $(T_k, \psi_k)$, respectively. Then there exists a unique morphism $\Theta \colon S_{\infty} \to T_{\infty}$ such that $\theta_k \circ \phi_{\infty, k} = \psi_{\infty,k} \circ \Theta$ for every $k \in \mathbb{N}$. \end{thm} \begin{proof} The pair $(S_{\infty}, \{ \theta_k \circ \phi_{\infty, k} \}_{k \in \mathbb{N}})$ is compatible with $(T_k, \psi_k)$ since $\psi_{k} \circ (\theta_{k+1} \circ \phi_{\infty, k+1} ) = (\theta_k \circ \phi_k) \circ \phi_{\infty, k+1} = \theta_k \circ \phi_{\infty, k}$. By the universal property of $T_{\infty}$, there exists a unique morphism $\Theta \colon S_{\infty} \to T_{\infty}$ such that $\psi_{\infty, k} \circ \Theta = \theta_k \circ \phi_{\infty,k}$. \end{proof} \section{Duality with injective limits}\label{Section:Duality injective} Let $(S_k, \phi_k)$ be an inductive sequence in \textbf{AOU} (resp. \textbf{OS}), where each $\phi_k$ is an order embedding (resp. a complete order embedding) and each $S_k'$ is an AOU space (resp. an operator system) with some suitable choice of Archimedean (resp. matrix) order unit. Under these assumptions, we show that the injective and projective limits of the corresponding sequences are in duality. \subsection{AOU Spaces}\label{dualityAOU} Suppose $(S_k, \phi_k)$ is an inductive sequence in \textbf{AOU} such that each $\phi_k$ is an order embedding.
Furthermore, suppose for each $k \in \mathbb{N}$, $(S_k', (S_k')^+, \delta_k)$ is an AOU space such that $\phi_k' \colon S_{k+1}' \to S_k'$ is unital. It follows that $\phi_k'$ is a surjective unital positive map. Hence, the inductive sequence $(S_k, \phi_k)$ in \textbf{AOU} induces a projective sequence of ordered dual spaces $(S_k', \phi_k')$ in \textbf{AOU}. By \S \ref{inductiveOUandAOU} or \cite[\S 3]{MT}, let $(S_{\infty}, \{ \phi_{k,\infty} \}_{k\in\mathbb{N}} )$ be the inductive limit of $(S_k, \phi_k)$. Let $(T_{\infty}, \{ \psi_{\infty,k} \}_{k\in\mathbb{N}} )$ be the projective limit of $(S_k', \phi_k')$ as in \S \ref{projectlimAOU}. By Remark \ref{surjectivity}, the map $\psi_{\infty, k} \colon T_{\infty} \to S_k'$ is surjective. \begin{remark} We caution the reader that we use the notation $\phi_{k,l}$ for the connecting morphism from $S_k$ to $S_l$, for $k \leq l$, as defined in \S \ref{inductlim}. We also write $\phi_{k,l}'$ for the dual map of $\phi_{k,l}$, so that $\{\phi_{k,l}'\}$ are the connecting morphisms for the projective sequence $(S_k', \phi_k')$. We summarize these conventions in the following diagram. \begin{equation*} \xymatrix{ S_k \ar[r]^{\phi_{k,l}} \ar[rd]_{\phi_{k,\infty}} & S_{l} \ar[d]^{\phi_{l,\infty}} & & S_k' & S_{l}' \ar[l]_{\phi_{k,l}'} \\ & S_{\infty} & & T_{\infty} \ar[u]^{\psi_{\infty,k}} \ar[ur]_{\psi_{\infty,l}} & } \end{equation*} \end{remark} \begin{defn} Define a bilinear pairing $\inner{\cdot}{\cdot} \colon S_{\infty} \times T_{\infty} \to \mathbb{C}$ by \begin{equation}\label{duality} \inner{\phi_{k,\infty}(x_k)}{(f_m)} := \lim_{m\to\infty} f_m( \phi_{k,m} (x_k ) ). \end{equation} \end{defn} \begin{prop}\label{AOU:duality} $S_{\infty}$ and $T_{\infty}$ are in duality induced by \eqref{duality}. \end{prop} \begin{proof} We first prove that \eqref{duality} is a well-defined bilinear mapping. Let $\phi_{k,\infty}(x_k) \in S_{\infty}$ and $(f_m) \in T_{\infty}$.
Let $m \geq k$ and consider \begin{align*}\tag{$\dagger$} f_m(\phi_{k,m}(x_k)) &= (f_m \circ \phi_{k,m})( \phi_{k,k}( x_k) ) = f_k( x_k ), \end{align*} which shows that the sequence in \eqref{duality} is constant for $m \geq k$; in particular, the limit exists. To see that the pairing is well-defined, let $\phi_{k,\infty}(x_k) = \phi_{l,\infty}(x_l) \in S_{\infty}$ for some $l \in \mathbb{N}$. By Remark \ref{inductiveAOU}, $|| \phi_{k,m}(x_k) - \phi_{l,m}(x_l) ||^m \to 0$ as $m \to \infty$. Note that $|| f_m || \leq || (f_m) ||_{\max}$ for all $m \in \mathbb{N}$, so \begin{equation*} |f_m(\phi_{k,m} (x_k) - \phi_{l,m}(x_l) )| \leq || (f_m) ||_{\max} \cdot || \phi_{k,m} (x_k) - \phi_{l,m} (x_l) ||^m \to 0, \end{equation*} as $m \to \infty$, so \eqref{duality} is well-defined. It is easy to see that \eqref{duality} is bilinear. For duality, suppose $\inner{\phi_{k,\infty}(x_k)}{(f_m)} = 0$ for every $k \in \mathbb{N}$ and every $x_k \in S_k$. We claim that $(f_m) = 0 \in T_{\infty}$. For fixed $k \in \mathbb{N}$, ($\dagger$) shows that $0 = \inner{\phi_{k,\infty}(x_k)}{(f_m)} = f_k ( \phi_{k,k} (x_k) ) = f_k(x_k)$. By duality between $S_k$ and $S_k'$, we have $f_k = 0$; therefore, $(f_m) = 0 \in T_{\infty}$. Now suppose $\inner{\phi_{k,\infty}(x_k)}{(f_m)} = 0$ for every $(f_m) \in T_{\infty}$. We must show that $\phi_{k,\infty}(x_k) = 0 \in S_{\infty}$ or $\ddot{\phi}_{k,\infty}(x_k) \in N$; or equivalently, by Remark \ref{inductiveAOU}, $\lim_{l \to \infty} || \phi_{k,l}(x_k) ||^l = 0$. For each $l \geq k$, $| f_l( \phi_{k,l} (x_k) ) | = | \inner{\phi_{k,\infty}(x_k)}{(f_m)} | = 0$. Since $\psi_{\infty, l}$ is surjective, taking the supremum over all $f_l$ in the unit ball of $S_l'$, we conclude that $|| \phi_{k,l}(x_k) ||^l = 0$; hence, $\phi_{k,\infty}(x_k) = 0 \in S_{\infty}$.
More precisely, \begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=ii, leftmargin=*] \item $\phi_{k,\infty}(x_k) \in S_{\infty}^+$ if and only if $\inner{\phi_{k,\infty}(x_k)}{(f_m)} \geq 0$ for all $(f_m) \in T_{\infty}^+$. \item $(f_m) \in T_{\infty}^+$ if and only if $\inner{\phi_{k,\infty}(x_k)}{(f_m)} \geq 0$ for all $\phi_{k,\infty}(x_k) \in S_{\infty}^+$. \end{enumerate} \end{prop} \begin{proof} For (i), let $\phi_{k,\infty}(x_k) \in S_{\infty}^+$. By Remark \ref{inductiveAOU}, for each $r > 0$, there exist $m \geq l > k$ and $y_l \in S_l$ with $|| \phi_{l,m}(y_l) ||^m \to 0$, such that $re_m + \phi_{k,m}(x_k) + \phi_{l,m}(y_l) \in S_m^+$. If $(f_m) \in T_{\infty}^+$, then $f_m \in (S_m')^+$ and \begin{equation*} r f_m(e_m) + f_m( \phi_{k,m}(x_k) ) + f_m( \phi_{l,m} (y_l) ) \geq 0. \end{equation*} Since $|| \phi_{l,m}(y_l)||^m \to 0$ and $|| f_m || \leq || (f_m) ||_{\max}$, letting $m \to \infty$ shows that $r || (f_m) ||_{\max} + \inner{\phi_{k,\infty}(x_k) }{(f_m)} \geq 0$ for each $r > 0$. Consequently, $\inner{\phi_{k,\infty} (x_k) }{(f_m)} \geq 0$. Conversely, suppose $\inner{\phi_{k,\infty}(x_k)}{(f_m)} \geq 0$ for all $(f_m) \in T_{\infty}^+$. Then by ($\dagger$), it follows that $f_k(x_k) \geq 0$. Since $\psi_{\infty, k}$ is surjective, letting $f_k$ vary over $(S_k')^+$ shows that $x_k \in S_k^+$ via the duality between $S_k$ and $S_k'$; therefore, $\phi_{k,\infty}(x_k) \in S_{\infty}^+$. For (ii), one direction follows just as in the first paragraph. Suppose $(f_m) \in T_{\infty}$ and $\inner{\phi_{k,\infty}(x_k)}{(f_m)} \geq 0$ for all $\phi_{k,\infty}(x_k) \in S_{\infty}^+$. Again by ($\dagger$), for each $k \in \mathbb{N}$, $f_k(x_k) = \inner{\phi_{k,\infty}(x_k)}{(f_m)} \geq 0$. Now letting $x_k$ vary over $S_k^+$, by the duality between $S_k$ and $S_k'$, we deduce that $f_k \in (S_k')^+$; therefore, $(f_m) \in T_{\infty}^+$.
\end{proof} Combining these two propositions, we obtain the following duality theorem between $S_{\infty}$ and $T_{\infty}$. We write $S_{\infty}'$ for the ordered dual of $S_{\infty}$. \begin{thm}\label{AOUoi} The dual pairing in \eqref{duality} induces an order isomorphism $\Gamma \colon T_{\infty} \to S_{\infty}' $ via $\Gamma \colon (f_k) \mapsto \inner{\cdot}{(f_k)}$. In particular, $(S_{\infty}', (S_{\infty}')^+) $ with $\Gamma( (\delta_k) )$ is an AOU space. \end{thm} \subsection{Operator Systems}\label{dualityOS} We now proceed to the case of \textbf{OS}. Let $(S_k, \phi_k)$ be an inductive sequence in \textbf{OS}, where each $\phi_k$ is a unital complete order embedding. The inductive sequence $(S_k, \phi_k)$ in \textbf{OS} then induces a projective sequence $(S_k', \phi_k')$ in \textbf{OS}, where each $\phi_k'$ is surjective. Let $(S_{\infty}, \{ \phi_{k,\infty} \}_{k\in\mathbb{N}} )$ be the inductive limit of $(S_k, \phi_k)$ and $(T_{\infty}, \{ \psi_{\infty,k} \}_{k\in\mathbb{N}} )$ be the projective limit of $(S_k', \phi_k')$. \begin{thm}\label{OScoi} The duality defined in \eqref{duality} induces a complete order isomorphism $\Gamma \colon T_{\infty} \to S_{\infty}' $ via $\Gamma \colon (f_k) \mapsto \inner{\cdot}{(f_k)}$. Consequently, the matrix-ordered dual $(S_{\infty}', \{ M_n( S_{\infty}' )^+ \}_{n=1}^{\infty} )$, equipped with $\Gamma( (\delta_k) )$, is an operator system. \end{thm} \begin{proof} By Theorem \ref{AOUoi}, $\Gamma$ is an order isomorphism. At the matrix level, we shall prove that $[ (f_k)^{ij} ] \in M_n(T_{\infty})^+$ if and only if the map $F \colon S_{\infty} \to M_n$ defined by $F( \phi_{k,\infty}(x_k) ) := [ \inner{\phi_{k,\infty}(x_k)} {(f_k)^{ij}} ]$ is completely positive. Suppose $[(f_k)^{ij}] \in M_n(T_{\infty})^+$. Then by the definition of the matrix-ordered dual and \eqref{projectiveOScone}, for each $k \in \mathbb{N}$, the map $F_k \colon S_k \to M_n$ given by $F_k(x_k) = [ f_k^{ij}(x_k) ]$ is completely positive.
By regarding $M_m(S_{\infty})$ as the inductive limit of $( M_m(S_k), \phi_k^{(m)} )$ in \textbf{AOU} and applying Remark \ref{inductiveAOU}, a matrix $[ \phi_{k,\infty}(x_k^{st}) ] \in M_m( S_{\infty} )^+$ if and only if for each $r > 0$, there exist $p \geq l > k$ with $[y_l^{st}] \in M_m( S_l)$ with $|| [ \phi_{l, p} ( y_l^{st} )] ||^p \to 0$, such that $r I_m \otimes e_p + [ \phi_{k,p} (x_k^{st}) ] + [\phi_{l,p} (y_l^{st}) ] \in M_m(S_p)^+$. By an argument similar to the proof of Proposition \ref{AOUdualcones}, applying $F_p^{(m)}$ to this matrix yields that \begin{equation*} r I_m \otimes F_p(e_p) + [ f_p^{ij}( \phi_{k,p} (x_k^{st} ) ) ] + [ f_p^{ij}( \phi_{l,p} (y_l^{st} ) ) ] \in (M_m \otimes M_n) ^+. \end{equation*} The third term vanishes as $p \to \infty$, so we have \begin{equation*} r \left| \left| \left[ (f_k)^{ij} \right] \right| \right|_{\max} I_m \otimes I_n + \left[\inner{\phi_{k,\infty}(x_k^{st}) }{(f_k)^{ij} } \right] \in (M_m \otimes M_n)^+, \end{equation*} for every $r > 0$. Therefore, $F^{(m)} ([ \phi_{k,\infty} (x_k^{st}) ] ) = [\inner{\phi_{k,\infty}(x_k^{st}) }{(f_k)^{ij} } ] \geq 0$, and $F$ is completely positive. Conversely, suppose $F$ is completely positive and $X_k = [ x_k^{st} ] \in M_m(S_k)^+$. By an argument similar to ($\dagger$), for large enough $p \geq k$, \begin{align*} F_k^{(m)} (X_k) &= [f_k^{ij} (x_k^{st} ) ] = [f_p^{ij} ( \phi_{k,p} (x_k^{st}) ) ] \\ &= \left[ \inner{ \phi_{k,\infty}(x_k^{st}) }{ (f_k)^{ij} } \right] = F^{(m)}( \phi_{k,\infty}^{(m)} (X_k) ), \end{align*} where the last quantity is positive by hypothesis. Letting $X_k$ vary over $M_m(S_k)^+$ for every $m \in \mathbb{N}$, it follows that $[f_k^{ij}] \in M_n(S_k')^+$ for every $k \in \mathbb{N}$. Consequently, $[(f_k)^{ij}] \in M_n(T_{\infty})^+$; and $\Gamma$ is a complete order isomorphism.
\end{proof} \section{Matrix-ordered duals of separable operator systems}\label{Section:dualityMain} Our goal is to generalize Theorem \ref{DualFiniteDimension} to separable operator systems via the duality between injective and projective limits of finite-dimensional operator systems. Henceforth, let $S$ be an operator system. We start by noting that the Archimedean property for order units and matrix order units in $S'$ is automatically inherited, due to the nature of the weak*-topology and the canonical identification $M_n(S') \cong M_n(S)'$. \begin{thm}\label{DualOrderUnitArch} Let $\delta \in S'$. The following are equivalent: \begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=iii, leftmargin=*] \item $\delta$ is an order unit for $S'$. \item $\delta$ is a matrix order unit for $S'$. \item $\delta$ is an Archimedean order unit for $S'$. \item $\delta$ is an Archimedean matrix order unit for $S'$. \end{enumerate} \end{thm} \begin{proof} For (i) $\iff$ (iii), one direction is trivial. Assume (i) and suppose $f \in S'_h$ is such that $r \delta + f \in (S')^+$ for each $r > 0$. Then for each $x \in S^+ \setminus \{0\}$, $\inner{r \delta + f }{x} \geq 0$. Letting $r \searrow 0$ implies that $f(x) \geq 0$, so $f \in (S')^+$ and $\delta$ is an Archimedean order unit. Now (i) $\iff$ (ii) follows from Proposition \ref{MOUunit}; and (ii) $\iff$ (iv) follows from (i) $\iff$ (iii) applied to $M_n(S') \cong M_n(S)'$. \end{proof} \begin{remark} We remark that a necessary condition for being an order unit for $S'$ is faithfulness. Indeed, suppose $\delta$ is an order unit for $S'$ and $x \in S^+$ is such that $\delta(x) = 0$. Then for each $f \in (S')^+$, there exists $r > 0$ so that $r \delta - f \in (S')^+$. Thus, $(r \delta - f)(x) = - f(x) \geq 0$ implies that $f(x) = 0$ for all $f \in (S')^+$, and hence, since $(S')^+$ spans $S'$, for all $f \in S'$. By duality, $x = 0$ and $\delta$ is faithful. The converse holds if $S$ in addition is reflexive as a normed space.
\end{remark} \begin{prop}\label{DualReflexive} If $S$ is reflexive as a normed space and $\delta \in (S')^+$ is faithful over $S$, then $\delta$ is an order unit for $S'$. Moreover, in this case, $S'$ is an operator system. \end{prop} \begin{proof} Recall that $S$ is reflexive if and only if the closed unit ball $B_1(S)$ of $S$ is weakly compact. Let $f \in S'_h$ and consider the weakly compact subset $K = B_1(S) \cap S^+$. Then $f$ attains its maximum on $K$. Also, since $\delta$ is faithful, $\delta$ is strictly positive on $K \setminus \{0\}$. Hence, there exists $r > 0$ such that $r \delta - f \geq 0$ on $S^+$; and $\delta$ is an order unit. The second statement now follows from Theorem \ref{DualOrderUnitArch}. \end{proof} \begin{prop}\label{SeparableFaithful} If $S$ is a separable operator system, then there exists a faithful linear functional $\delta$ over $S$. \end{prop} \begin{proof} Since $S$ is separable, $S'$ is weak*-metrizable. The state space $\mathfrak{S}(S)$ is weak*-compact in $S'$, so it is weak*-separable. Let $\{ \delta_n \}$ be a weak*-dense sequence in $\mathfrak{S}(S)$ and define the functional $\delta \colon S \to \mathbb{C}$ by $\delta(x) := \sum_n \frac{1}{2^n} \delta_n(x)$. Note that since $S'$ is a Banach space and $|| \delta_n || \leq 1$, $\delta$ is well-defined and $\delta \in \mathfrak{S}(S)$. To see that $\delta$ is faithful, suppose to the contrary that $\delta(x) = 0$ for some $x > 0$. Then $\delta_n(x) = 0$ for all $n \in \bb{N}$. By density, it follows that $f(x) = 0$ for all $f \in \mathfrak{S}(S)$. Since $S'$ is the span of $\mathfrak{S}(S)$, it follows that $f(x) = 0$ for all $f \in S'$, and $x = 0$, a contradiction. Therefore, $\delta$ is faithful over $S$. \end{proof} Since a finite-dimensional $S$ is both reflexive and separable, Theorem \ref{DualFiniteDimension} is a corollary of the above propositions. For infinite-dimensional $S$, it remains to prove the existence of an order unit for $S'$.
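For a concrete illustration of Proposition \ref{SeparableFaithful} (a standard example, included only for orientation), take $S = C([0,1])$, a separable operator system. Integration against Lebesgue measure, \begin{equation*} \delta(f) := \int_0^1 f(t) \, dt, \end{equation*} is a faithful state: if $f \geq 0$ and $\delta(f) = 0$, then $f = 0$ by continuity. Since the infinite-dimensional space $C([0,1])$ is not reflexive, Proposition \ref{DualReflexive} does not apply to it, and the existence of an order unit for $S'$ must instead be obtained from the limit arguments that follow.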
\begin{prop}\label{DualCountableDim} If $\dim(S)$ is countably infinite and $\delta$ is faithful over $S$, then $\delta$ is an Archimedean order unit for $S'$. \end{prop} \begin{proof} Let $\{x_i=x_i^* \}_{i=1}^{\infty}$ with $x_1 = e$ be a Hamel basis for $S$ and let $S_k$ be the span of $x_i$, $i = 1, \dots, k$. Denote by $\iota_k \colon S_k \to S_{k+1}$ the inclusion map and let $\delta_k$ be the restriction of $\delta$ to $S_k$. Note that $\iota_k'$ is the canonical complete order quotient map $q_k \colon f \mapsto f|_{S_k}$. It is clear that $S$ is the inductive limit of $(S_k, \iota_k)$ in \textbf{OS}. Since $\delta_k$ is faithful, by Proposition \ref{DualReflexive}, $(S_k', q_k)$ with $\delta_k$ is a projective sequence in \textbf{OS}. By Theorem \ref{OScoi}, $S'$ is completely order isomorphic to $\projectlim{OS} (S_k', q_k)$, whose Archimedean matrix order unit $( \delta_k )_{k\in\bb{N}}$ corresponds to $\delta \in S'$. \end{proof} For the separable case, we first consider a dense operator subsystem $T$ of $S$. The following lemma is surely well-known, but we could not find a precise reference. \begin{lemma}\label{DensityMatrixLevel} Let $T$ be an operator subsystem of $S$. Then $T$ is dense in $S$ in the order norm topology if and only if for every $n \in \bb{N}$, $M_n(T)$ is dense in $M_n(S)$ in the order norm topology. \end{lemma} \begin{proof} One direction is trivial. Suppose $S \subset B(\cl{H})$ is a concrete operator system and $T$ is dense in $S$ in the order norm topology. It suffices to show that $M_n(T)_h$ is dense in $M_n(S)_h$. Given $x \in M_n(S)_h$, by \cite[Lemma 3.7]{PTT}, decompose $x = \sum_{i=1}^N A_i \otimes x_i$, where $A_i \in (M_n)_h$ and $x_i \in S_h$. Since $T_h = T \cap S_h$ is dense in $S_h$ in the subspace topology, for $1\leq i \leq N$, there exists a sequence $(t_i^m)_{m \in \bb{N}}$ in $T_h$ such that $|| x_i - t_i^m ||_h \to 0$.
By \cite[Corollary 5.6]{PT}, the order norm $|| \cdot ||_h$ is the operator norm $|| \cdot ||_{op}$ inherited from $B(\cl{H})$. Let $t^m = \sum_{i=1}^N A_i \otimes t_i^m \in M_n(T)_h$. Then in $M_n(S) \subset M_n(B(\cl{H})) \cong B( \cl{H}^{(n)})$, \begin{align*} || t^m - x ||_h &= \left\| \sum_{i=1}^N A_i \otimes (x_i - t_i^m) \right\|_{op} \leq \sum_{i=1}^N \lambda_i || x_i - t_i^m ||_{op}, \end{align*} where $\lambda_i = ||A_i||_{op}$. The last quantity goes to $0$ as $m \to \infty$, so $M_n(T)$ is dense in $M_n(S)$ in the order norm topology. \end{proof} Suppose $T$ is a dense operator subsystem of a separable operator system $S$. By a standard density argument, every $f \in T'$ has a unique extension $\tilde{f} \in S'$ such that $\tilde{f}|_{T} = f$. In fact, the map $f \mapsto \tilde{f}$ is an isometric isomorphism. \begin{prop}\label{DualDensity} Let $S$ and $T$ be given as above. If $T'$ is an operator system with Archimedean matrix order unit $\delta$, then $S'$ is an operator system with Archimedean matrix order unit $\tilde{\delta}$. Furthermore, the map $f \mapsto \tilde{f}$ defines a unital complete order isomorphism between $T'$ and $S'$. \end{prop} \begin{proof} Given $\tilde{f} \in S'_h$, its restriction to $T$ is some $f \in T'_h$. By hypothesis, there exists $r > 0$ with $r \delta - f \in (T')^+$. We claim that $r \tilde{\delta} - \tilde{f} \in (S')^+$. If $x \in S^+$, then there exists a sequence $t_m \in T^+ = T \cap S^+$ such that $t_m \to x$ in the order norm. Thus, \begin{equation*} \inner{r\tilde{\delta} - \tilde{f}}{x} = \inner{r\tilde{\delta} - \tilde{f}}{ \lim_m t_m } = \lim_m \inner{r \delta - f }{t_m} \geq 0, \end{equation*} where the second equality follows from the uniqueness of the extension. Hence, $\tilde{\delta}$ is an order unit for $S'$. By Theorem \ref{DualOrderUnitArch}, $(S', \tilde{\delta})$ is an operator system. Finally, by density and Lemma \ref{DensityMatrixLevel}, the map $f \mapsto \tilde{f}$ carries $CP(T, M_n)$ bijectively onto $CP(S, M_n)$.
Hence, it is unital completely positive with inverse $f \mapsto f|_{T}$. Therefore, $f \mapsto \tilde{f}$ is a unital complete order isomorphism. \end{proof} We conclude with the main result of the paper. \begin{thm}\label{DualSeparable} Let $S$ be a separable operator system. Then \begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=ii, leftmargin=*] \item there exists a faithful functional $\delta \colon S \to \bb{C}$; and \item any faithful functional $\delta$ is an Archimedean matrix order unit for $S'$. \end{enumerate} Consequently, $S'$ with any faithful state is an operator system. \end{thm} \begin{proof} The first statement is Proposition \ref{SeparableFaithful}. Suppose $\delta \in (S')^+$ is faithful and $X$ is a countable dense subset of $S$. Define $T$ to be the span of $X$, $X^*$, and $e$ in $S$. Then $T$ is a dense operator subsystem of $S$ with countable dimension. By Proposition \ref{DualCountableDim}, $(T', \delta|_T)$ is an operator system. The result now follows from Proposition \ref{DualDensity}. \end{proof} \section*{Acknowledgement} The post-doctoral fellowship of Ng at the Hong Kong Polytechnic University is supported by the PolyU central research grant G-YBKR, the HK RGC grant PolyU 502512, and the AMSS-PolyU Joint Research Institute.
\section{Introduction} The study of the coherent dynamics of spin ensembles in solids has a long history.\cite{NMR} More recent advances allow the study of single spins in mesoscopic and nanoscale devices.\cite{Awschalom99,Sarma04} Physical confinement to low dimensions enhances interaction effects and leads to novel quantum coherent phenomena involving spins, such as spin-charge separation in Luttinger liquids\cite{GiamarchiBook} and skyrmions in quantum Hall ferromagnets.\cite{Sondhi93,Barrett95} In zero-dimensional semiconductor quantum dots, spin-dependent effects predominantly arise from the combination of repulsive Coulomb interactions and the Pauli exclusion principle.\cite{Hanson07} Motivated by quantum information applications,\cite{Loss98} there is now increasing interest in the coherent transport of spin in large arrays of tunnel-coupled quantum dots as a means to distribute quantum information, or to realize more efficient spin readout, across the array.\cite{Taylor05,Friesen07,Baart16,Fujita17,Mills18,Kandel19,Sigillito19b} A proposed method to achieve charge transport in quantum dot arrays is known as coherent transport by adiabatic passage (CTAP).\cite{Greentree04,Rahman10,Huneke13,Ban18,Platero19,Ban19} This protocol uses an electrical analog of the well-known stimulated Raman adiabatic passage (STIRAP) pulse sequence from atomic, molecular, and optical (AMO) physics to move the electron coherently across the array by keeping it in an adiabatic dark state.\cite{Vitanov01,Vitanov17} Charge coherence times in quantum dots are often relatively short ($\sim 1$ ns),\cite{Hayashi03,Petta04,Petersson10} so far preventing the realization of CTAP in practice. However, the elegance of this method motivates the search for spin-based analogs of CTAP (spin-CTAP) that may allow robust spin transport.
Single spins confined in semiconductor quantum dots can have long spin-dephasing times ($T_2^* >1~\mu$s) compared to the timescale of exchange-based spin dynamics ($\lesssim 10$ ns),\cite{Petta05,Veldhorst15,Reed16,He19} setting up much more favorable conditions for adiabatic transfer protocols. In this Article, we develop the theoretical framework of spin-CTAP using the Heisenberg exchange interaction in a linear array of quantum dots in a magnetic field gradient. The combination of exchange interactions and a magnetic field gradient leads to an effective Ising interaction.\cite{Meunier11,Russ18,Zajac18,Watson18} By modulating the exchange interaction in time, we can resonantly drive flip-flop transitions of electron spins on neighboring dots of a linear array.\cite{Nichol17,Sigillito19b,Takeda19} As we show here, applying this exchange modulation according to CTAP pulse sequences allows adiabatic spin-transfer across large quantum dot arrays. The investigation of spin transport in Heisenberg coupled spin chains dates back to foundational work on quantum magnetism,\cite{Bloch30} with many studies focused on optimized state transfer for quantum information applications.\cite{Bose03,Landahl04,Osborne04,Murphy10,Yao11,Makin12} Our approach differs in detail from these previous works because of the large magnetic field gradient imposed by a micromagnet and the use of local, time-dependent control of the exchange interaction throughout the array. For many spin systems, local control of exchange coupling is difficult to realize; however, it is readily achievable in quantum dot arrays through electrical driving of the gates used to form the dots.\cite{Petta05,Veldhorst15,Reed16,He19} Our spin transfer and entanglement generation protocols are immediately applicable to current experiments.\cite{Mills18,Kandel19,Volk19} The overall simplicity and robustness to pulse imperfections make adiabatic spin transfer a promising method for the readout of large quantum dot arrays. 
Motivated by similar considerations, a related adiabatic transfer scheme was recently implemented experimentally in an array of GaAs quantum dot spin-qubits.\cite{Kandel20} The paper is organized as follows: In Sec.~\ref{sec:arrays}, we introduce our theoretical model for extended arrays of quantum dots based on a Hubbard model. We then briefly review charge-CTAP in a quantum dot array containing a single electron. In Sec.~\ref{sec:spinctap}, we transition to a regime where each site in the quantum dot array is occupied by a single electron. We include the effects of a magnetic field gradient and develop the theory of spin-CTAP for three dot arrays, specifically considering the fully-polarized subspace with a single spin-flip. Varying the tunnel coupling, and therefore exchange between adjacent sites, along the array shifts subspaces with different numbers of spin flips out of resonance with the transfer protocol. We use this effect to realize a quantum-controlled version of spin-CTAP conditional on the spin state of the middle electron. We benchmark the performance of our spin-CTAP pulses in the presence of a realistic noise model and study the effects of imperfections in the adiabatic pulse sequences. In Sec.~\ref{sec:multispinctap}, spin-CTAP is generalized to arbitrarily large quantum dot arrays. In Sec.~\ref{sec:ghz}, we show how to use quantum-controlled spin-CTAP to generate many-qubit Greenberger-Horne-Zeilinger (GHZ) states.\cite{NielsenChuang} Including the effects of noise, high-fidelity GHZ state preparation is possible for three dots, with persistent entanglement achievable in arrays of up to 11 dots. We present our conclusions in Sec.~\ref{sec:conclusions}. 
\section{CTAP in Quantum dot arrays} \label{sec:arrays} Arrays of quantum dots with more than three independent, electrically controllable sites are now routinely studied in experiment.\cite{Zajac16,Nichol17,Mortemousque18,Mills18,Sigillito19,Kandel19,Volk19,Dehollain19} A common approach to analyze these experiments is to approximate the low-energy Hamiltonian by a single-band Hubbard model \be H = \sum_{i,j,\sigma} t_{c,ij} c_{i\sigma}^\dag c_{j \sigma} + \sum_i \left[ U_i n_i(n_i-1) - \mu_i n_i \right], \end{equation} where $t_{c,ij}$ is a tunnel coupling matrix element between the lowest orbital state on each dot, $U_i$ is the local Coulomb repulsion on each dot, and $\mu_i$ is the local chemical potential. Here, $c_{i \sigma}$ is a fermion annihilation operator on dot $i$ with spin $\sigma$ = $\uparrow$ or $\downarrow$, and $n_i = \sum_{\sigma} c_{i \sigma}^\dag c_{i \sigma}$. When there is only a single electron in a fixed spin state in the entire array, the Hamiltonian has a single-particle description \be H = \sum_{i,j} t_{c,ij} \ket{i}\bra{j} - \sum_i \mu_i \ket{i}\bra{i}, \end{equation} where $\ket{i} = c_{i \downarrow}^\dag \ket{0}$ is the electronic state with a single excess electron in dot $i$ in a spin-down state. For a linear three-dot array with uniform chemical potentials, this Hamiltonian has the representation in the basis $\{\ket{1},\ket{2},\ket{3}\}$ \be \label{eqn:hc} H = \left( \begin{array}{c c c} 0 & t_{c,12}(t) & 0 \\ t_{c,12}^*(t) & 0 & t_{c,23}(t) \\ 0 & t_{c,23}^*(t) & 0 \end{array} \right). \end{equation} The idea of CTAP is that the electron charge can be adiabatically transferred from dot 1 to dot 3 by taking advantage of special properties of three-level systems with this Hamiltonian.\cite{Greentree04} In particular, for any value of $t_{c,ij}$ there is a zero-energy eigenstate $\ket{D}$ of $H$ (i.e., $H \ket{D} = 0$) that takes the simple form \be \ket{D} \propto t_{c,23} \ket{1} - t_{c,12}^* \ket{3}. 
\end{equation} In AMO physics, this zero energy state is called a ``dark state'' because it is a nontrivial superposition state with zero population in the intermediate state $\ket{2}$ of the three-level system. Oftentimes, this intermediate state is an optically excited state that emits photons, which is the origin of the terminology.\cite{QuantumOpticsBook} The dark state is separated from the other two eigenstates of $H$ (often called ``bright states'') by a minimal energy gap \be |\Delta E_{\rm min}| = \sqrt{|t_{c,12}|^2 + |t_{c,23}|^2 }. \end{equation} For a general time-dependent Hamiltonian, the adiabaticity condition to remain in the adiabatic eigenstate $\ket{n}$ takes the form $\sum_{m \ne n}\hbar |\bra{m} \dot{H} \ket{n}|/|E_m - E_n|^2 \ll 1$. Since the adiabatic dark state always has a finite gap from the other two adiabatic bright states, any sufficiently slowly evolving pulse sequence $ \dot{t}_{c,ij} \ll |\Delta E_{\rm min}|^2/\hbar$ will satisfy the adiabaticity condition and maintain population in the dark state. State transfer is achieved for pulse sequences that start with $t_{c,12}(t) \ll t_{c,23}(t) $ and end with $t_{c,12} \gg t_{c,23}$, such that $\ket{D} $ transforms from $ \ket{1}$ at the beginning of the sequence to $\ket{3}$ at the end. 
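Both properties, $H\ket{D}=0$ for any couplings and bright states at $\pm\sqrt{|t_{c,12}|^2+|t_{c,23}|^2}$, can be confirmed numerically. A minimal sketch in Python (the coupling values are arbitrary illustrative numbers, not device parameters):

```python
import numpy as np

# Verify the dark-state properties of the three-site CTAP Hamiltonian.
# t12, t23 are arbitrary illustrative couplings (any energy units);
# |D> is dark for any values of t_c12, t_c23.
t12, t23 = 0.7, 1.3

H = np.array([[0.0, t12, 0.0],
              [np.conj(t12), 0.0, t23],
              [0.0, np.conj(t23), 0.0]])

# Dark state |D> ~ t_c23 |1> - t_c12* |3>, with no weight on site 2.
D = np.array([t23, 0.0, -np.conj(t12)])
D = D / np.linalg.norm(D)

print(np.abs(H @ D).max())  # ~ 0: H|D> = 0

# Bright states sit at +/- sqrt(|t12|^2 + |t23|^2), so the gap to |D>
# never closes while either coupling is on.
evals = np.sort(np.linalg.eigvalsh(H))
gap = np.sqrt(abs(t12)**2 + abs(t23)**2)
print(evals, gap)
```
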
In AMO physics, this adiabatic passage sequence, with its characteristic ``counterintuitive'' ordering, is commonly referred to as stimulated Raman adiabatic passage (STIRAP).\cite{Vitanov01} Applying such a pulse sequence for a single electron in a quantum dot array leads to coherent transport of charge by adiabatic passage (CTAP).\cite{Greentree04} By adiabatically turning on a large tunnel coupling on the middle dots to energetically isolate an extended zero energy state, this three-site CTAP protocol can be directly generalized to arbitrarily large arrays of dots.\cite{Greentree04} \section{Spin-CTAP in Quantum Dot Arrays} \label{sec:spinctap} We now consider the generalization of CTAP to the spin degree of freedom. Instead of working in the limit of a single electron in the quantum dot array, we consider the half-filled case with one electron per dot. Strong Coulomb repulsion ($U \sim 2$ meV) leads to the formation of a Mott insulating state where the only mobile degrees of freedom at low energies are the electron spins [see Fig.~\ref{fig:1}(a)]. Integrating out the double occupancies from a single-band, spinful Hubbard model at half-filling generically leads to an effective Heisenberg Hamiltonian for the spins at lowest order in $t_{c,ij}/U_{k}$ \be H= \sum_i g \mu_B \bm{B}_i^{\rm tot}\cdot \bm{s}_i + \sum_{i,j} J_{ij}(t) (\bm{s}_i \cdot \bm{s}_j - 1/4), \end{equation} where $J_{ij}(t)$ is the exchange interaction between the spins on dots $i$ and $j$, $\bm{B}_i^{\rm tot} = B_{\rm ext} \hat{z} + \bm{B}_i^M$ is the local magnetic field experienced by spin $i$ averaged over the orbital wavefunction, and $s_i^\mu = \frac{1}{2} \sum_{\alpha \beta} c_{i \alpha}^\dag \sigma_{\alpha \beta}^\mu c_{i \beta}$ is the local spin-1/2 operator on dot $i$ for the Pauli matrix $\sigma^\mu$ ($\mu = x,~y,~z)$. The electronic $g$-factor is $g \approx 2$ in silicon. 
The total field includes contributions from the global external field $ B_{\rm ext}$ and a local field $\bm{B}_i^M$ induced by an on-chip micromagnet.\cite{Russ18} The exchange interaction can be modulated in time by changing the tunnel barriers that separate the quantum dots.\cite{Petta05,Veldhorst15,Reed16,He19} In the regime we consider here, where the overall Zeeman energy is much greater than the temperature ($g \mu_B B_i^{\rm tot} \gg k_B T$), we can initialize the ground state of a single dot using energy-selective tunneling.\cite{Elzerman04} Other sites in the array can then be loaded by shuttling electrons\cite{Baart16,Fujita17,Mills18} or applying pairwise SWAP operations.\cite{Nichol17,Kandel19,Takeda19,Sigillito19b} Readout can also be accomplished through spin transport to dots used for spin-to-charge conversion and charge sensing in the array.\cite{Hanson07} \begin{figure}[tb] \begin{center} \includegraphics[width= 0.49\textwidth]{ConceptFig.pdf} \caption{(a) A quantum dot array realizes a spin-1/2 chain. Driving the tunnel barriers modulates the exchange interaction, allowing an adiabatic spin transport protocol that we refer to as spin-CTAP. (b) Exchange pulse profile for the spin-CTAP protocol with three dots. Counterintuitively, $j_{23}$ is turned on before $j_{12}$ to keep the system in an adiabatic dark state. 
} \label{fig:1} \end{center} \end{figure} Single-spin addressability can be achieved in these systems by applying a magnetic field whose variation across each pair of neighboring sites is larger than the pairwise exchange interaction.\cite{Loss98} In this regime, we can write an effective Hamiltonian in the adiabatic approximation as \be \label{eqn:Heff} H= \sum_i \hbar \omega_i s_i^z + \sum_{i,j} \bar{J}_{ij} s_i^z s_j^z +[j_{ij}(t) e^{i \omega_{ij}t} s_i^- s_j^+ + h.c.], \end{equation} where $\bar{J}_{ij}$ is the time-averaged exchange, $s_i^\pm$ are spin raising/lowering operators, $j_{ij}(t)$ is the amplitude of the exchange oscillating at a frequency $\omega_{ij}$ near the difference in Zeeman frequency $\Delta_{ij} =g\mu_B( B_i^{\rm tot} - B_j^{\rm tot})/\hbar$, and $\hbar \omega_i = g \mu_B B_i^{\rm tot} + \sum_{j} {\bar{J}_{ij}^2}/{2\hbar \Delta_{ij}}$ is the local spin frequency, including a perturbative correction from the time-averaged dc exchange interaction.\cite{Sigillito19b} The condition for the rotating wave approximation to be valid is that the difference in Zeeman energy between each pair of sites is much larger than the exchange and the detuning from resonance. Beyond this, we make no assumptions about the spatial profile of the magnetic field. Several recent experiments have operated in the same regime studied here, with a large magnetic field gradient and ac exchange driving used to realize spin transport or entangling gates.\cite{Nichol17,Takeda19,Sigillito19b} The effective Hamiltonian $H$ conserves $S_z^{\rm tot} = \sum_i s_i^z$, which implies that, when restricted to the fully-polarized subspace with a single spin-flip, the many-body dynamics has a single-particle description. In analogy to a particle in a discrete lattice, the transverse exchange interactions act as tunneling terms, while the longitudinal exchange interactions and magnetic fields act as local potentials. 
We exploit this simplified description to design spin-CTAP pulse sequences. Building on this, we then take advantage of the many-body interacting nature of the problem to realize a form of quantum-controlled spin-CTAP that can be used to generate GHZ states in quantum dot arrays. In the subsections below, we consider a linear array of three silicon quantum dots and show how to achieve state transfer $\ket{\uparrow \downarrow \downarrow} \to \ket{ \downarrow \downarrow \uparrow}$. In Sec.~\ref{sec:multispinctap}, we show how to generalize our results to arbitrarily large one-dimensional arrays. The basic control sequence is illustrated in Fig.~\ref{fig:1}(b). This pulse sequence has the ``counter-intuitive'' ordering that $j_{23}$ is turned on before $j_{12}$, which, as we show below, ensures that the system remains adiabatically in the dark state of the three-level system without ever directly exciting the intermediate state $\ket{\downarrow \uparrow \downarrow}$.\cite{Vitanov01,Greentree04,Vitanov17} We first study state transfer for idealized Gaussian pulses \begin{align} \label{eqn:ctap1} j_{12}(t) &= j_{0} \exp\bigg[ -\bigg( t- \frac{t_{0}+2\sigma}{2}\bigg)^2/2\sigma^2\bigg] , \\ \label{eqn:ctap2} j_{23}(t) &= j_{0} \exp\bigg[ -\bigg( t- \frac{t_{0}-2\sigma}{2}\bigg)^2/2\sigma^2\bigg] , \end{align} where $j_0$ is the peak amplitude, $t_0/2$ is the mean center of the two pulses, and $\sigma$ is the pulse width, with the two pulse centers offset from the mean by $\pm \sigma$. For $t<0$ we set $j_{12}=j_{23} = 0$ and define a maximal cutoff time $t_{\max}$ such that $j_{12}=j_{23}=0$ for $t>t_{\max}$. In practice, it may be difficult to realize ideal Gaussian pulses; however, the adiabatic transfer protocol only relies on the existence of a well-defined dark state that satisfies the adiabaticity condition. As a result, it is robust to small pulse imperfections, as we describe in more detail in Sec.~\ref{sec:pulse}. 
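The pulse shapes above are simple to transcribe. The following sketch (reduced units with $j_0 = 1$ and time in units of $1/j_0$; setting $t_0 = t_{\max}$ is an illustrative choice that centers the pulse pair in the window $[0, t_{\max}]$) checks the counterintuitive ordering directly:

```python
import numpy as np

# Direct transcription of the Gaussian pulse envelopes. Reduced units:
# j0 = 1, time in 1/j0; t0 = tmax is an illustrative choice.
def ctap_pulses(t, j0, t0, sigma):
    j12 = j0*np.exp(-(t - (t0 + 2.0*sigma)/2.0)**2/(2.0*sigma**2))
    j23 = j0*np.exp(-(t - (t0 - 2.0*sigma)/2.0)**2/(2.0*sigma**2))
    return j12, j23

j0, tmax = 1.0, 20.0*np.pi
sigma = tmax/8.0
t = np.linspace(0.0, tmax, 2001)
j12, j23 = ctap_pulses(t, j0, tmax, sigma)

# The "counterintuitive" ordering: j23 peaks first, and the ratio
# j12/j23 sweeps from ~0 to ~infinity over the pulse window.
print(t[np.argmax(j23)] < t[np.argmax(j12)])   # True
print(j23[0] > j12[0], j23[-1] < j12[-1])      # True True
```
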
\subsection{Resonantly Driven Spin Subspace} We now consider the transfer of the spin state across a three-dot array. Restricting to the $S_z^{\rm tot} =-1/2$ subspace and moving into a rotating frame $H \to U^\dag H U - i \hbar U^\dag dU/dt$ with $U = e^{- i \sum_{j=1}^{N-1} \delta_j s_j^z t}$ and $\delta_j =\sum_{k \ge j } \omega_{k k+1}$, the Hamiltonian in the basis $\{\ket{\uparrow \downarrow \downarrow},\ket{ \downarrow \uparrow \downarrow},\ket{ \downarrow \downarrow \uparrow}\}$ takes the form [see Fig.~\ref{fig:3dot}(a) for the level diagram] \be \label{eqn:h0} H_{0} = \left( \begin{array}{c c c} \eta_2^0 & j_{12}(t) & 0 \\ j_{12}^*(t) & \eta_1^0 & j_{23}(t) \\ 0 & j_{23}^*(t) & 0 \end{array} \right), \end{equation} where the ``two-photon'' energy detuning (terminology is taken from quantum optics, e.g., Ref.~\onlinecite{QuantumOpticsBook}) is $ \eta_2^{0} = E_1^0-E_3^0 -\hbar (\omega_{12} +\omega_{23})$, the ``single-photon'' energy detuning is $ \eta_1^0 = E_2^0-E_3^0- \hbar \omega_{23}$, the bare energies are $E_i^0 =E_0+ \hbar \omega_i - \sum_{j} \bar{J}_{ij}/2$, and $E_0 = -\sum_i \hbar \omega_i/2$ is an energy offset. The phase of $j_{ij}$ is set by the phase of the ac exchange drive.\cite{Sigillito19b} For illustrative purposes, we have chosen a magnetic field gradient profile with $B_1^{\rm tot} < B_3^{\rm tot} < B_2^{\rm tot} $, so that the level diagram in the $S_z^{\rm tot} = \pm 1/2$ subspace maps to a canonical $\Lambda/V$ system. This assumption is not required and our numerical simulations below are performed for the more natural profile $B_1^{\rm tot} < B_2^{\rm tot} < B_3^{\rm tot} $.\cite{Zajac18} Similar to Eq.~(\ref{eqn:hc}), we can write down the adiabatic dark state of $H_0$ for $\eta_2^0$ = 0 and any value of $\eta_1^0$ \be \ket{D_0 } \propto j_{23}(t) \ket{\uparrow \downarrow \downarrow} - j_{12}^* (t) \ket{\downarrow \downarrow \uparrow}, \end{equation} which satisfies $H_0(t) \ket{D_0(t)} = 0$ for all times $t$. 
This state is separated from the other two adiabatic eigenstates (the bright states) by a minimal energy gap \be |\Delta E_{\rm min}| = \sqrt{|j_{12}(t)|^2 + |j_{23}(t)|^2 + \eta_1^{0\,2}/4} - |\eta_1^0|/2. \end{equation} Thus, by choosing a sufficiently slowly varying exchange $\hbar \dot{j}_{ij}/|\Delta E_{\rm min}|^2 \ll 1$, we can ensure that the adiabaticity condition is satisfied. In this limit, the system will remain in the adiabatic eigenstates during the evolution. Note that the precise values of $\bar{J}_{ij}$ are not relevant to the design of the pulse sequence because these values only enter into the resonance conditions for the ac driving fields. In the next section, however, we will show that when the $S_z^{\rm tot} = -1/2$ subspace is tuned into resonance, the behavior of the $S_z^{\rm tot} = 1/2$ subspace depends sensitively on the relative values of $\bar{J}_{12}$ and $\bar{J}_{23}$. \begin{figure}[bt] \begin{center} \includegraphics[width= .49\textwidth]{3dotSpinCTAP.pdf} \caption{(a) Level diagram in the $S_z^{\rm tot} = -1/2$ subspace realizes a canonical three-level system. For illustrative purposes we took $B_1^z < B_3^z < B_2^z$ to realize a $\Lambda$ system, but our analysis does not rely on this condition. (b) Spin-up population $p_{i \uparrow} = 1/2+\mean{s_i^z}$ on dots 1 and 3 during the spin-CTAP pulse sequence, illustrating adiabatic transfer of the spin across the array. In these simulations, we took a gradient profile with $B_1^z < B_2^z < B_3^z$, $\Delta_{i i+1}/2\pi = -150~$MHz, $\bar{J}_{12/23}/h = 20/40$ MHz, $j_0/h = 3 $ MHz, $\omega_{12/23}/2\pi = -190/100$~MHz, $t_{\rm max} =20 \hbar \pi/j_0$, and $\sigma = t_{\rm max}/8$. } \label{fig:3dot} \end{center} \end{figure} As an example of the spin-CTAP performance, we show the population dynamics of the two spin states under this driving protocol in Fig.~\ref{fig:3dot}(b). 
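The population transfer can also be reproduced with a stripped-down integration of the resonant ($\eta_1^0=\eta_2^0=0$) three-level Hamiltonian. This sketch uses reduced units ($\hbar = 1$, energies in units of $j_0$, $t_0 = t_{\max}$) and an illustrative step count rather than the full simulation parameters of the figure:

```python
import numpy as np

# Minimal integration of the resonant three-level Hamiltonian under the
# Gaussian spin-CTAP pulses. Reduced units: hbar = 1, energies in j0,
# times in 1/j0; t0 = tmax and the step count are illustrative choices.
j0, tmax = 1.0, 20.0*np.pi
sigma = tmax/8.0

def pulses(t):
    j12 = j0*np.exp(-(t - (tmax + 2.0*sigma)/2.0)**2/(2.0*sigma**2))
    j23 = j0*np.exp(-(t - (tmax - 2.0*sigma)/2.0)**2/(2.0*sigma**2))
    return j12, j23

def evolve(psi, tgrid):
    # Exact exponential of the piecewise-constant Hamiltonian on each step.
    for k in range(len(tgrid) - 1):
        dt = tgrid[k+1] - tgrid[k]
        j12, j23 = pulses(0.5*(tgrid[k] + tgrid[k+1]))
        H = np.array([[0.0, j12, 0.0], [j12, 0.0, j23], [0.0, j23, 0.0]])
        w, V = np.linalg.eigh(H)
        psi = V @ (np.exp(-1j*w*dt)*(V.conj().T @ psi))
    return psi

tgrid = np.linspace(0.0, tmax, 4001)
psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)   # |up down down>
psi = evolve(psi0, tgrid)
p3 = abs(psi[2])**2                               # population of |down down up>
print(p3)   # close to 1 for these adiabatic parameters
```
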
When the initial state is $\ket{\psi_0} = \ket{\uparrow \downarrow \downarrow}$, it evolves adiabatically into the state $\ket{ \downarrow \downarrow \uparrow}$ with high fidelity $>99\%$. Finally, we remark that when the system is initialized in the state $\ket{\downarrow \downarrow\uparrow}$, the left-to-right spin-CTAP pulse sequence has the ``intuitive'' ordering and can still transfer the spin-up state across the array from right to left. There is an important difference, though: this right-to-left process is mediated by the two adiabatic bright states instead of the dark state. As a result, this ``backwards'' right-to-left transfer process generally has a lower fidelity than the left-to-right transfer process. \subsection{Blockaded Spin Subspace} We next describe how to realize a quantum-controlled version of spin-CTAP that is conditioned on the spin state of the middle electron. In the $S_z^{\rm tot}= 1/2$ subspace, the Hamiltonian in the basis $\{\ket{\downarrow \uparrow \uparrow},\ket{ \uparrow \downarrow \uparrow},\ket{\uparrow \uparrow \downarrow}\}$ takes the same form as Eq.~(\ref{eqn:h0}) with $j_{ij}(t) \to j_{ij}^*(t)$, $\omega_{ij} \to -\omega_{ij}$, and the shifted energies $E_i^1 = -E_0 - \hbar \omega_i - \sum_{j} \bar{J}_{ij}/2$ [see Fig.~\ref{fig:3dot2}(a)]. The complex conjugation can be understood as arising from a time-reversal operation associated with switching to this subspace. These modifications imply that if we set $\eta_1^0 = \eta_2^0 = 0$, then the $S_z^{\rm tot}=1/2$ sector will have a finite one- and two-photon detuning $ \eta_1^1 = - \bar{J}_{12}$ and $ \eta_2^1 = \bar{J}_{23}-\bar{J}_{12}$, respectively. As a result, for a finite exchange gradient $\delta J = \bar{J}_{23}-\bar{J}_{12}$, the two-photon detuning $\eta_2^1$ becomes nonzero. 
\begin{figure}[bt] \begin{center} \includegraphics[width= .49\textwidth]{3dotSpinCTAP_blockade.pdf} \caption{(a) Level diagram in the $S_z^{\rm tot} = +1/2$ subspace realizes a $V$ system for the same gradient profile as Fig.~\ref{fig:3dot}(a). When the system is tuned for spin-CTAP in the $S_z^{\rm tot} = -1/2$ subspace, but $\bar{J}_{12} \ne \bar{J}_{23}$, then transport in the $S_z^{\rm tot} = 1/2$ subspace is blocked because the adiabatic dark state begins and ends on one side of the array. This blockade effect can be used to generate GHZ states. (b) Spin-up population $p_{i \uparrow} = 1/2+\mean{s_i^z}$ in the blockaded subspace. The spin-up electron in dot 2 blocks spin-CTAP because the adiabatic dark state remains localized in dot 1. We took parameters as in Fig.~\ref{fig:3dot}(b).} \label{fig:3dot2} \end{center} \end{figure} Despite the different effective Hamiltonians, when $\bar{J}_{12} = \bar{J}_{23}$ the $S_z^{\rm tot} = 1/2$ subspace still undergoes a transfer process from the state $\ket{\uparrow \uparrow \downarrow}$ to $\ket{\downarrow \uparrow \uparrow }$. This transfer proceeds through a different mechanism, however, because it is effectively driving the transfer from right to left (3 to 1) instead of left to right (1 to 3). As we mentioned in the previous subsection, in the adiabatic limit, this reversed state transfer process is mediated by the two bright states, but the transfer fidelity still converges to one in the ideal limit. Thus, for $\bar{J}_{12} = \bar{J}_{23}$, the ideal transfer process will effectively map the spin population across the array in both subspaces. On the other hand, when $\bar{J}_{12}\ne \bar{J}_{23}$ and the system is tuned for spin-CTAP in the $S_z^{\rm tot} = -1/2$ subspace, we now show that the $S_z^{\rm tot} = 1/2$ subspace is blocked from adiabatic transport. 
Starting from the state $\ket{\uparrow \uparrow \downarrow}$ with $j_{12}=j_{23}=0$, we can calculate the associated adiabatic eigenstate for finite $j_{ij}$ in the limit $|j_{23}(t)| \ll |\eta_1^1|$ and $|j_{12}(t)j_{23}(t)/ \eta_1^1| \ll \eta_2^1$ \be \ket{D_1} \approx \bigg[ 1 - \frac{|j_{23}(t)|^2}{2 \eta_1^{1\,2}} \bigg] \ket{\uparrow \uparrow \downarrow} + \frac{j_{12}^* j_{23}^*}{ \eta_2^1 \eta_1^1} \ket{\downarrow \uparrow \uparrow} - \frac{j_{23}^*}{ \eta_1^1} \ket{\uparrow \downarrow \uparrow}. \end{equation} As a result, the adiabatic spin-state configuration in this subspace remains localized during the spin-CTAP pulse sequence. This implies that we can realize a quantum-controlled version of spin-CTAP where the spin state of the middle electron acts as the control qubit. As we show in Fig.~\ref{fig:3dot2}(b), when the middle spin is pointing up $\ket{\psi_0} = \ket{\uparrow \uparrow \downarrow}$, the spin population returns to dot 1 at the end of the pulse sequence. For the transfer process to be adiabatic, we require the pulse width $ \sigma$ and overall length $t_{\rm max}$ to be large compared to $\hbar j_0^{-1}$ and $\hbar \delta J^{-1}$. In Fig.~\ref{fig:3dot}(b) and Fig.~\ref{fig:3dot2}(b), we took $\delta J/j_0 = 6.67 $, $t_{\rm max} = 20 \pi \hbar/j_0$ and $\sigma = t_{\rm max}/8$. These values satisfy both of these constraints for the experimentally relevant parameters of $J_{12/23}/h = 20/40$ MHz and $t_{\rm max} = 3.33~\mu$s.\cite{Zajac18,Watson18} An interesting subject for future work will be to consider ``shortcuts to adiabaticity'' to speed up this transfer process without reducing the fidelity.\cite{Oh13,Torrontegui13,Li18,Ban19} \subsection{Effect of Noise} To characterize the performance of spin-CTAP under more realistic conditions, we numerically study the protocol in the presence of noise in both the local magnetic field on each dot and the exchange interaction. 
For illustrative purposes, we focus on the simplest realization of spin-CTAP with three quantum dots in the resonantly driven $S_z^{\rm tot} = -1/2$ subspace. We use a noise model, described in more detail in our recent work,\cite{Gullans19} which is parameterized by the coherence time $T_{2i}^{*}$ on each dot and a quality factor $Q_{e,ij}$ that determines the envelope decay rate for exchange oscillations between dots $i$ and $j$. The $T_2^*$ decoherence processes are modeled by adding $1/f$ noise to the $\omega_i$ parameter, while the $Q_{e,ij}$ decoherence is modeled by coupling the same $1/f$ noise field to the parameters $\bar{J}_{ij}$ and $j_{ij}$: \begin{align} \omega_i(t) &= \omega_i^0 + \omega^n_{i} v_{i}(t) , \\ J_{ij}(t) & = J_{ij}^0 \{1 + \delta J_{ij}^n [v_i(t) + v_j(t)] \}, \\ j_{ij}(t) & = j_{ij}^0 \{1 + \delta J_{ij}^n [v_i(t) + v_j(t)] \}, \end{align} where the amplitude of the noise on each dot $v_i$ is given by $\mean{v_i(t) v_j(t)} = \delta_{ij} v_0^2$, $ v_0 = \sqrt{ 2 A \log(f_c/f_\ell)}$, $A$ is the amplitude of the $1/f$ noise in eV$^2$/Hz, $f_{c/\ell}$ are the high/low frequency cutoffs, $\omega_i^n = (v_0 T_{2,i}^*)^{-1}$, and $\delta J_{ij}^n = (\sqrt{2} v_0 Q_{e,ij})^{-1}$. We make the simplifying assumptions that the noise is quasistatic over the relevant timescales and that $T_{2i}^{*}$ and $Q_{e,ij}$ do not vary throughout the array. In Fig.~\ref{fig:noise}(a), we show that, when transferring spin eigenstates, spin-CTAP becomes robust against noise already at relatively modest values of $Q_e > 20$ and $T_2^* > 1~\mu$s, as quantified by the projection fidelity $F_p = 1/2+\mean{s_3^z}$. Under these conditions, we find that the main source of decoherence is charge noise that leads to a finite $Q_e$. We see very little change when increasing $T_2^*$ from 1 to 10~$\mu$s. 
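One quasistatic realization of this noise model can be sampled as follows (parameter values follow the simulations described here; the random seed and the three-dot array size are arbitrary choices for this sketch):

```python
import numpy as np

# Sample one quasistatic realization of the 1/f noise model.
# Parameter values follow the noise simulations in the text; the rng
# seed and the array size (three dots) are illustrative choices.
A = (0.5e-6)**2           # 1/f amplitude in eV^2/Hz, i.e. sqrt(A) = 0.5 ueV/sqrt(Hz)
fc, fl = 100e3, 0.16e-3   # high/low frequency cutoffs (Hz)
v0 = np.sqrt(2.0*A*np.log(fc/fl))   # rms quasistatic amplitude (eV)

T2star = 1e-6             # s, per-dot dephasing time
Qe = 20.0                 # exchange-oscillation quality factor
omega_n = 1.0/(v0*T2star)            # frequency-noise sensitivity (rad/s per eV)
dJn = 1.0/(np.sqrt(2.0)*v0*Qe)       # fractional exchange sensitivity (1/eV)

rng = np.random.default_rng(7)
v = rng.normal(0.0, v0, size=3)      # one value per dot, <v_i v_j> = delta_ij v0^2

frac_J12 = dJn*(v[0] + v[1])         # fractional shift of J_12 this run
print(v0, frac_J12)
```
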
\begin{figure}[tb] \begin{center} \includegraphics[width= .49\textwidth]{NoiseSim.pdf} \caption{ (a) Projection fidelity $F_p = 1/2+\mean{s_3^z}$ for three-dot spin-CTAP in the presence of quasi-static noise. The maximal fidelity is limited by nonadiabatic corrections to $\sim$95$\%$ for these parameters: $\Delta_{i i+1}/2\pi = -150~$MHz, $J_{12/23}/h = 20/40$ MHz, $j_0/h = 3 $ MHz, $\omega_{12/23}/2\pi = -190/100$~MHz, $\sqrt{A} = 0.5~\mu$eV$/\sqrt{\rm Hz}$,\cite{Yoneda18,Mi18b} $ f_\ell = 0.16~$mHz, $f_c = 100$~kHz, $t_{\rm max} =10 \hbar \pi/j_0$, and $\sigma = t_{\rm max}/8$. We chose a relatively fast transfer time to balance effects from noise with nonadiabatic corrections. $Q_e$ and $T_2^*$ are taken to be uniform across the array. (inset) The average gate fidelity $F_g$ rapidly converges to one with increasing $t_{\max}$. (b) $F_g$ for parameters as in (a) with a maximal fidelity of $\sim 98\%$. Error bars denote one standard deviation due to fluctuations in noise realizations.} \label{fig:noise} \end{center} \end{figure} It is also of interest to consider the performance of the transfer protocol for more general quantum states. We characterize this fidelity by treating the spin-CTAP transfer process \be \ket{\psi} \otimes \ket{\downarrow \downarrow} \to \ket{\downarrow \downarrow} \otimes \ket{\psi} \end{equation} as a quantum channel $\mathcal{E}$ that maps an arbitrary quantum state on the first site to the last site and traces over the remaining sites in the system. In the ideal case, this channel acts as an identity operation (up to a deterministic $z$-rotation that we correct) on the single-qubit Hilbert space of the transferred site. As a result, we can use the average gate fidelity to characterize the performance of the transfer protocol\cite{NielsenChuang} \be F_g = \int d \psi \bra{\psi} \mathcal{E}(\ket{\psi} \bra{\psi}) \ket{\psi}, \end{equation} where $d \psi$ is the Haar measure over the quantum states of a single qubit. 
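The Haar average can be estimated by Monte Carlo sampling. The sketch below does this for a toy stand-in channel, quasistatic phase noise on the transferred qubit, rather than the actual spin-CTAP channel; the noise strength, shot counts, and seed are arbitrary choices:

```python
import numpy as np

# Monte Carlo estimate of F_g = \int d psi <psi| E(|psi><psi|) |psi> for a
# toy channel: shot-to-shot random phase accumulation on the qubit. This
# is a stand-in for the spin-CTAP channel, not the actual simulation.
rng = np.random.default_rng(0)

def haar_state(rng):
    """Haar-random single-qubit pure state."""
    z = rng.normal(size=2) + 1j*rng.normal(size=2)
    return z/np.linalg.norm(z)

def dephasing_channel(rho, phi_rms, rng, shots=200):
    """Average over shot-to-shot random phases (quasistatic dephasing)."""
    out = np.zeros((2, 2), dtype=complex)
    for _ in range(shots):
        U = np.diag([1.0, np.exp(1j*rng.normal(0.0, phi_rms))])
        out += U @ rho @ U.conj().T
    return out/shots

samples = []
for _ in range(500):
    psi = haar_state(rng)
    rho = np.outer(psi, psi.conj())
    samples.append(np.real(np.vdot(psi, dephasing_channel(rho, 0.5, rng) @ psi)))
Fg = float(np.mean(samples))
print(Fg)   # near (2 + exp(-phi_rms**2/2))/3 ~ 0.96
```

For a qubit channel that preserves populations and shrinks coherences by a factor $c$, the Haar average evaluates to $F_g = (2+c)/3$; the fully dephased limit $c \to 0$ gives $F_g = 2/3$, consistent with the plateau near $2/3$ discussed in the text.
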
In the inset to Fig.~\ref{fig:noise}(a), we show $F_g$ vs.\ $t_{\max}$ in the limit of zero noise, which illustrates that the ideal fidelity rapidly converges to one. The results for $F_g$ including noise as a function of $Q_e$ are shown in Fig.~\ref{fig:noise}(b). Interestingly, the fidelity first plateaus near $2/3$ before increasing towards the noiseless limit at large values of $Q_e > 200$. The initial plateau coincides with the convergence of the projection fidelity, while the slower increase with $Q_e$ arises because the transfer of superposition states is sensitive to phase fluctuations in the wavefunction that vary from shot to shot due to the noise. A related feature observed in the fidelity is the much stronger dependence on $T_2^*$. When the total transfer time [$t_{\max} = 1.67~\mu$s in Fig.~\ref{fig:noise}(b)] becomes comparable to $T_2^*$, the fidelity decreases substantially from the noiseless limit due to shot-to-shot variations in the phase accumulation during the transfer process. This behavior is in sharp contrast to what was observed for $F_p$, which is insensitive to phase fluctuations even when $t_{\max} \sim T_2^*$. Finally, we remark that the average gate fidelities calculated here are comparable to measured fidelities for SWAP gates under similar conditions.\cite{Nichol17,Takeda19,Sigillito19b} Thus, we conclude that, under some conditions, spin-CTAP is a viable alternative to sequential SWAP gates for transferring spin states in the array. \subsection{Imperfections in AC Exchange Driving} \label{sec:pulse} A central requirement of our proposal is the ability to simultaneously turn on exchange between every pair of sites across the array. Achieving this regime can be challenging and often leads to a nonlinear dependence of the exchange on the external gate voltages.\cite{Qiao20,Pan20} As a result, it may be difficult in practice to realize the ideally shaped Gaussian pulses considered in the previous section. 
Fortunately, the adiabatic nature of the control scheme renders spin-CTAP largely insensitive to these effects. Another source of non-idealities is the potential for crosstalk between gates.\cite{vanderWiel03,Hensgens17,Mills18,Volk19} In the context of our work, one needs to avoid an effect whereby modulating the exchange on one pair of dots induces non-negligible ac exchange driving on neighboring pairs. Provided the magnetic field gradient between sites is non-uniform across the array, which is typical in devices where the gradient is produced by a proximal micromagnet,\cite{Sigillito19} this ac exchange driving will be off-resonant. As a result, these cross-driving effects can be neglected in the weakly driven limit considered here. For example, for an ac exchange driving of 10 MHz and a gate crosstalk of 10\% or less, the variation or disorder in the magnetic field gradient should be much greater than 40~$\mu$T to avoid cross-driving effects. To study the impact of pulse distortions more quantitatively, we use a simple model for the exchange interaction described in Ref.~\onlinecite{Zajac18}. In a single-band Fermi-Hubbard model for a quantum dot array, the exchange has the scaling $J \sim |t_c|^2/U$, where $t_c\sim1-100~\mu$eV is the tunneling between the two dots and $U \sim 5~$meV is the on-site interaction (estimates are for Si/SiGe quantum dots \cite{Zajac18}). By modeling the barrier between the two quantum dots as a square well and using the WKB approximation, one can derive a functional form for the exchange \be \label{eqn:jvb} J \propto |t_c|^2 = \frac{16 E(V-E) }{V^2} \exp\big( - 2 W \sqrt{2 m|V-E|}/\hbar \big), \end{equation} where $V$ and $W$ are the potential barrier height and width, $E$ is the energy of the unperturbed states, and $m$ is the electron mass. 
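As a rough numerical illustration of this WKB scaling (free-electron mass, as in the expression above, and hypothetical barrier parameters rather than fitted device values):

```python
import numpy as np

# Physical constants (SI)
hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg
eV = 1.602176634e-19     # J

def J_wkb(V, E, W, m=m_e):
    """Relative exchange J ~ |t_c|^2 from the square-barrier WKB estimate.
    V, E in eV; W in meters. The overall prefactor is not normalized."""
    V_J, E_J = V*eV, E*eV
    return (16.0*E_J*(V_J - E_J)/V_J**2) * \
        np.exp(-2.0*W*np.sqrt(2.0*m*(V_J - E_J))/hbar)

# Hypothetical numbers loosely inspired by gate-defined dots: a 30 nm
# barrier and a level energy 1 meV below barriers of a few meV.
W, E = 30e-9, 1e-3
for V in [3e-3, 4e-3, 5e-3]:
    print(V, J_wkb(V, E, W))
```

Raising the barrier height suppresses the exchange near-exponentially, which is the behavior exploited for barrier-controlled exchange pulses. Note that the relevant effective mass in silicon is smaller than $m_e$, so these numbers indicate scaling only.
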
Using the approximation $V \propto - V_B(t) + {\rm offset}$, where $V_B(t)$ is the voltage on the barrier separating the two dots, we obtain a precise prediction for the dependence of the exchange $J[V_B(t)]$ on the barrier gate voltage, which provides a good match to experimental data.\cite{Zajac18} \begin{figure}[bt] \begin{center} \includegraphics[width= .49\textwidth]{3dotSpinCTAPDist.pdf} \caption{(a) Exchange pulse profile for spin-CTAP including pulse distortions from Eq.~(\ref{eqn:pulsedist}). We took a larger value of $j_0/h = 15~$MHz, with other parameters as in Fig.~\ref{fig:3dot}, to amplify the effect of the shift in the dc exchange and the ac exchange pulse distortions. (b) Spin-up population $p_{i \uparrow} = 1/2+\mean{s_i^z}$ on dots 1 and 3 during the spin-CTAP pulse sequence. We see that even these large pulse distortions do not spoil the state-transfer fidelity. } \label{fig:3dotdist} \end{center} \end{figure} Our spin-CTAP proposal can be realized by modulating the barrier gate voltages between dots $i$ and $j$ as $V_{B,ij}(t) = V_{B0,ij} + v_{ij}(t) \cos \omega_{ij} t$, where $v_{ij}(t)$ is a slowly-varying envelope for the ac modulation term. Assuming $v_{ij}$ is a weak perturbation, we can expand the exchange as \be \begin{split} J_{ij}[V_{B0,ij} &+ v_{ij} \cos \omega_{ij} t] = \bar{J}_{ij}^0 + J_{ij}^{(1)} v_{ij} \cos \omega_{ij} t \\ & + \frac{J_{ij}^{(2)}}{2} v_{ij}^2 \cos^2 \omega_{ij} t +\frac{J_{ij}^{(3)}}{6} v_{ij}^3 \cos^3 \omega_{ij} t, \end{split} \end{equation} where $J_{ij}^{(n)} = d^n J_{ij}/d V_{B,ij}^n|_{V_{B0,ij}}$ are the derivatives of the exchange profile. In the rotating wave approximation, we only need to account for the dc exchange term and the term that oscillates near the difference in Zeeman energies between the two dots. 
As a result, we can regroup the terms to arrive at the expression \be \begin{split} J_{ij}[V_{B,ij}( t)] &\approx \bar{J}_{ij}^0 + \frac{J_{ij}^{(2)}}{J_{ij}^{(1) 2}} [j_{ij}^{0}(t)]^2 \\ &+ \bigg(1+ \frac{J_{ij}^{(3)} [j_{ij}^0(t)]^2}{2 J_{ij}^{(1)3} }\bigg) 2 j_{ij}^0(t) \cos \omega_{ij} t, \end{split} \end{equation} where we defined $j_{ij}^0(t) = J_{ij}^{(1)} v_{ij}(t)/2$ and the first term corresponds to a slowly varying shift in the dc exchange due to the ac driving. For the dependence on $V_{B,ij}$ given by Eq.~(\ref{eqn:jvb}), we can calculate the leading order correction to the dc and ac exchange profile by approximating the dependence of the exchange on barrier gate voltage by a pure exponential $J_{ij}[V_{B0,ij}+v] \approx \bar{J}_{ij}^0 e^{\alpha v}$. This approximation leads to particularly simple expressions for the slowly-varying parameters \begin{align} \bar{J}_{ij}(t) &= \bigg(1 + \frac{[j_{ij}^0(t)]^2}{[\bar{J}_{ij}^{0}]^2} \bigg) \bar{J}_{ij}^0, \\ \label{eqn:pulsedist} j_{ij}(t)& = \bigg( 1 + \frac{[j_{ij}^{0}(t)]^2}{2 [\bar{J}_{ij}^0]^2} \bigg) j_{ij}^0(t). \end{align} Since $j_{ij}^0$ is directly proportional to the ac amplitude of the middle barrier voltage, this shows that the dc/ac exchange amplitude has a quadratic/cubic nonlinear correction in $v_{ij}(t)$. It is most natural in experiments to design a Gaussian envelope directly for the middle barrier voltage $v_{ij}$, which does not account for these nonlinear corrections. In Fig.~\ref{fig:3dotdist}(a), we show the exchange pulse profile for this control strategy, including the nonlinear correction from Eq.~(\ref{eqn:pulsedist}). We took similar parameters as in Fig.~\ref{fig:3dot}, but with a five times larger peak ac exchange value $j_0/h = 15$~MHz to amplify the effect of the shift in the dc exchange and the ac exchange pulse distortions. 
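These leading-order corrections can be checked against the exponential model by extracting the dc and fundamental Fourier components numerically (arbitrary test values for $\bar J^0$, $\alpha$, and $v$):

```python
import numpy as np

# Numerical check of the envelope corrections: for the pure exponential
# model J = Jbar0*exp(alpha*v*cos(theta)), extract the dc and fundamental
# Fourier components and compare with the quadratic/cubic corrections.
# Jbar0, alpha, v are arbitrary test numbers for this sketch.
Jbar0, alpha, v = 1.0, 1.0, 0.3
theta = np.linspace(0.0, 2.0*np.pi, 200000, endpoint=False)
Jt = Jbar0*np.exp(alpha*v*np.cos(theta))

Jbar_exact = Jt.mean()                  # time-averaged (dc) exchange
j_exact = (Jt*np.cos(theta)).mean()     # amplitude j in the 2*j*cos(wt) term

j0 = alpha*Jbar0*v/2.0                  # j0 = J^(1) * v / 2
Jbar_approx = Jbar0*(1.0 + (j0/Jbar0)**2)      # dc shift, quadratic in v
j_approx = j0*(1.0 + 0.5*(j0/Jbar0)**2)        # ac correction, cubic in v

print(Jbar_exact, Jbar_approx)
print(j_exact, j_approx)
```

The exact components are the modified Bessel functions $\bar J^0 I_0(\alpha v)$ and $\bar J^0 I_1(\alpha v)$, whose small-argument expansions reproduce the two formulas above.
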
In Fig.~\ref{fig:3dotdist}(b), we show the performance of spin-CTAP and blockaded spin-CTAP in the presence of these pulse imperfections. Although the intermediate dynamics has slight distortions compared to the ideal case, the fidelity for state transfer is nearly identical. This result is expected based on the intrinsic robustness of these transfer schemes to pulse imperfections and slowly varying perturbations, provided one chooses an adiabatic pulse that starts with $j_{12} \ll j_{23} $ and ends with $j_{12} \gg j_{23}$. \section{Multidot spin-CTAP} \label{sec:multispinctap} The long-range transfer of spin states in extended arrays is a long-standing goal for quantum-dot based spin qubits.\cite{Taylor05,Friesen07,Baart16,Fujita17,Mills18,Kandel19,Sigillito19b} In the context of charge-based transport, Greentree \textit{et al.\ }showed that a natural generalization of CTAP from three dots to arbitrarily large one-dimensional arrays of odd numbers of dots can be obtained by modulating a large tunnel coupling in the middle of the array.\cite{Greentree04} Partially motivated by recent experimental work in large quantum dot arrays,\cite{Zajac16,Mortemousque18,Mills18,Sigillito19,Kandel19,Volk19,Dehollain19} we now consider the multidot generalization of spin-CTAP. By applying a large ac exchange field on the middle $N-2$ dots for odd $N$, we can effectively isolate a single many-body spin state in the middle of the array that is coupled to the outer two spins by weaker driving of the ac exchange [see Fig.~\ref{fig:Ndot1}(a)]. 
For even $N$, adiabatic transfer is still possible, but it does not proceed through a zero energy dark state, which generally reduces the efficiency and transfer fidelities of the protocol.\cite{Greentree04} At a qualitative level, our approach is reminiscent of other methods for long-range coupling of spin qubits using intermediate states.\cite{Mehl14,Srinivasa15,Baart17,Croot18,Malinowski18} \begin{figure}[tb] \begin{center} \includegraphics[width= .49\textwidth]{MultidotCTAP_v3.pdf} \caption{ (a) Spin-CTAP protocol for extended arrays with an odd number of sites. The middle spins are taken to be strongly coupled via exchange to effectively create a single zero energy state in the middle of the array. (b) Pulse profile for multidot spin-CTAP. The primary difference from the three-dot case is the large ac exchange interaction that is turned on in the middle region during the transfer.} \label{fig:Ndot1} \end{center} \end{figure} To better understand the dynamics in this limit, we study the resonantly driven Hamiltonian in the rotating frame in the basis of states $\{ \sigma_i^+ \ket{\downarrow \cdots \downarrow}:i =1,\ldots,N\}$ \be H_0 = \left(\begin{array}{c c c c c c c} 0 & j_{12} & 0 & \cdots & 0 & 0 &0 \\ j_{12} & 0 & j_{M}& \cdots & 0 & 0 & 0\\ 0 & j_{M} & 0 & \cdots & 0 & 0 & 0\\ \vdots & \vdots& \vdots& \ddots & \vdots &\vdots& \vdots \\ 0 & 0 & 0 & \cdots & 0 & j_M & 0 \\ 0 & 0 & 0 & \cdots & j_M & 0 & j_{N-1N} \\ 0 & 0 & 0 & \cdots & 0 & j_{N-1N} & 0 \end{array} \right), \end{equation} where $j_M$ is the ac exchange interaction in the middle of the array (assumed to be uniform). Setting $j_{12} = j_{N-1N} = 0$, for odd $N$ there is a zero energy state \be \ket{0} = \frac{1}{\sqrt{(N-1)/2}} \sum_{n=1}^{(N-1)/2} (-1)^n \sigma_{2n}^{+} \ket{\downarrow \cdots \downarrow}. 
\end{equation} Denoting the energy eigenstates for the delocalized spin states as $\ket{-(N-3)/2},\ldots, \ket{(N-3)/2}$, the energy gaps $|E_n - E_{n+1}|$ between neighboring levels all scale as $j_M/N$. As a result, for sufficiently large $j_M$, we can reduce the problem to a three-level system in the basis $\{ \ket{\uparrow \cdots \downarrow}, \ket{0}, \ket{\downarrow \cdots \uparrow} \}$ \be \label{eqn:h0eff} H_{0} = \left( \begin{array}{c c c} 0 & j_{1}(t) & 0 \\ j_{1}(t) & 0 & j_{2}(t) \\ 0 & j_{2}(t) & 0 \end{array} \right), \end{equation} where $j_1 = - j_{12}/\sqrt{(N-1)/2}$ and $j_2 = (-1)^{(N-1)/2} j_{N-1 N}/\sqrt{(N-1)/2}$. Applying the spin-CTAP pulse sequence for $j_{1/2}$ given by Eqs.~(\ref{eqn:ctap1})-(\ref{eqn:ctap2}) now achieves spin transport across the entire array of $N$ dots. To achieve the multidot transfer process in an adiabatic manner, we also pulse on the exchange in the middle of the array. This approach is inspired by the original CTAP proposal.\cite{Greentree04} In particular, as illustrated in Fig.~\ref{fig:Ndot1}(b), we use an additional Gaussian ac exchange pulse on the middle spins \begin{align} \label{eqn:mult1} j_{ii+1}(t) &= j_{M} \exp\bigg[ -\bigg( t- \frac{t_{0}}{2}\bigg)^2/4\sigma^2\bigg] , \end{align} for $2\le i \le N-2$, with $j_{12}(t)$ and $j_{N-1N}(t)$ given by Eqs.~(\ref{eqn:ctap1})-(\ref{eqn:ctap2}). A schematic level diagram for the multidot spin-CTAP protocol is shown in Figs.~\ref{fig:Ndot}(a--b). For our perturbative description above to be valid, we require that $|j_i| = |j_{12,N-1N}|/\sqrt{N} \ll j_M/N$. Since the transfer time scales as $t_{\max} \sim 1/j_{i,\max}$, this implies that $t_{\max} \gg N/j_M$. As a result, $j_M$ has to scale linearly with $N$ and the maximum value of $j_{12,N-1N}$ has to scale as $\sqrt{N}$ to keep a constant transfer time in the large $N$ limit.
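The zero-energy dark state and the effective end couplings $j_{1,2}$ can be checked directly in the single-excitation sector; a short numerical sketch (with $N=9$ and arbitrary illustrative coupling values):

```python
import numpy as np

# Single-excitation Hamiltonian with the end couplings switched off
# (j_12 = j_{N-1,N} = 0): a uniform chain on the middle N-2 dots.
# N and the coupling values below are arbitrary illustrative choices.
N, jM = 9, 1.0                        # odd number of dots
H = np.zeros((N, N))
for i in range(1, N - 2):             # couples dots (2,3), ..., (N-2,N-1)
    H[i, i + 1] = H[i + 1, i] = jM

# Stated dark state: amplitude (-1)^n / sqrt((N-1)/2) on dot 2n.
ket0 = np.zeros(N)
for n in range(1, (N - 1) // 2 + 1):
    ket0[2 * n - 1] = (-1)**n         # dot 2n sits at 0-based index 2n-1
ket0 /= np.sqrt((N - 1) / 2)

print(np.linalg.norm(H @ ket0))       # ~0: a zero-energy eigenstate

# Effective end couplings once j_12 and j_{N-1,N} are restored:
j12 = jn = 0.01
j1_eff = j12 * ket0[1]                # matrix element to the left end spin
j2_eff = jn * ket0[N - 2]             # matrix element to the right end spin
print(j1_eff, -j12 / np.sqrt((N - 1) / 2))
print(j2_eff, (-1)**((N - 1) // 2) * jn / np.sqrt((N - 1) / 2))
```

The printed pairs agree, confirming the quoted expressions for $j_1$ and $j_2$, including their signs.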
We remark that the scaling for $j_M$ is expected from general bounds on the speed of information spreading in local Hamiltonian systems.\cite{Lieb72} \begin{figure}[tb] \begin{center} \includegraphics[width= .49\textwidth]{MultidotCTAP_v2.pdf} \caption{ (a-b) Level diagram for the $S_z^{\rm tot} = -(N-1)/2$ subspace in the energy eigenbasis with $j_{12,N-1,N}=0$ illustrating how the multidot system reduces to an effective three-level state transfer problem. (c) Nine-dot spin-CTAP projection fidelity $F_p = 1/2+\mean{s_9^z}$ vs. $t_{\rm max}$ without noise for realistic pulse parameters. We took $j_0/h = 5~$MHz, $j_M = 10 j_0$, $\sigma= t_{\rm max}/8$, $\bar{J}_{12}/h=\bar{J}_{N-1N}/h = 30~$MHz, $\bar{J}_{M}/h = 60$~MHz, $\Delta_{ii+1}/2\pi = -1.5$ GHz, and $\omega_{ij}= \Delta_{ij} - \sum_k (\bar{J}_{ik}-\bar{J}_{jk})/2\hbar$.} \label{fig:Ndot} \end{center} \end{figure} An example of the multidot spin-CTAP performance is shown in Fig.~\ref{fig:Ndot}(c) for nine dots in a linear array.\cite{Zajac16} We observe projection fidelities for transferring spin eigenstates that exceed $99\%$ for sufficiently long pulse times. As we noted above, the adiabaticity condition becomes more difficult to satisfy for large $N$ because of decreasing gaps between the dark state and other nearby eigenstates. In principle, this can be overcome by increasing the drive parameter $j_M$ on the middle dots; however, this becomes difficult to realize in practice. As a result, the requisite pulse time $t_{\rm max}$ will generally increase with $N$. \section{GHZ State Generation} \label{sec:ghz} We now show how to extend the pulse sequences described above to generate multipartite entanglement of the spins. The blockaded version of spin-CTAP for a linear array of three quantum dots can be realized whenever there is a difference in the dc exchange for each adjacent pair of dots in the array.
Under these conditions, there is a natural method to generate entangled GHZ states by applying the spin-CTAP protocol to the state \be \ket{\psi} = \frac{1}{\sqrt{2}} (\ket{\uparrow\downarrow\downarrow} + \ket{\uparrow\uparrow\downarrow}) \to \frac{1}{\sqrt{2}} (e^{i \phi} \ket{\downarrow\downarrow\uparrow} + \ket{\uparrow\uparrow\downarrow}), \end{equation} where $\phi$ is a phase that will vary with the pulse profile and external noise. \begin{figure}[tb] \begin{center} \includegraphics[width= .49\textwidth]{GHZGen.pdf} \caption{(a) GHZ state fidelity for spin-CTAP protocol with $t_{\rm max} = 10 \hbar \pi/j_0$ computed using full simulations of the spin dynamics. The noiseless fidelity, limited by nonadiabatic corrections from a finite $t_{\max}$, is $\sim 98\%$. We took other parameters as in Fig.~\ref{fig:Ndot}(b). (b) Fidelity for GHZ state preparation using repeated spin-CTAP vs. $Q_e$. We took $j_0/h = 3$~MHz, $j_M=10j_0$, $t_{\rm max}= (N-1)10 \hbar \pi/j_0$, $\Delta_{ii+1}/2\pi = -150~$MHz, $T_2^* = 10~\mu$s and other parameters as in Fig.~\ref{fig:Ndot}(b). Error bars denote one standard deviation due to fluctuations in noise realizations.} \label{fig:ghz} \end{center} \end{figure} Applying a $\pi$ pulse on spin three, we arrive at the state \be \ket{\psi} = \frac{1}{\sqrt{2}} (e^{i \phi} \ket{\downarrow\downarrow\downarrow} + \ket{\uparrow\uparrow\uparrow}), \end{equation} which is equal to a GHZ state $\ket{{\rm GHZ}} = 1/\sqrt{2}(\ket{\downarrow\downarrow\downarrow} + \ket{\uparrow\uparrow\uparrow})$ up to a single-qubit $Z$ rotation. In Fig.~\ref{fig:ghz}(a), we show the state fidelity $F = |\bra{{\rm GHZ}} \psi \rangle|^2$ in the presence of noise after correcting the random phase $\phi$. We see that the GHZ state fidelity is comparable to the fidelity for transferring spin eigenstates. 
The noiseless limit is higher in this case than $F_p$ shown in Fig.~\ref{fig:noise}(a) because the $\ket{\downarrow \downarrow \downarrow}$ state comprises half the amplitude of the GHZ state and incurs no errors in our model for the spin-CTAP process. To spectroscopically determine the phase $\phi$ and directly measure the state fidelity in experiment, one can perform a measurement of the parity operator $P = \prod_i \sigma_i^x$.\cite{NielsenChuang} Similar to the three-dot case, we can realize a type of quantum-controlled multidot spin-CTAP by taking the value of the time-averaged exchange in the middle of the array, $\bar{J}_{i i+1} = \bar{J}_M$ for $2<i<N-1$, to be different from the two ends $\bar{J}_{12}$ and $\bar{J}_{N-1N}$. Under these conditions, we can extend the GHZ state generation scheme to arbitrarily large arrays by sequentially growing the size of the GHZ state by two qubits in each time step as follows: assume we are given an $N-2$ GHZ state on the middle qubits \be \ket{\psi} = \frac{1}{\sqrt{2}} \ket{\downarrow} \otimes \big( \ket{\uparrow\ldots \uparrow} + \ket{\downarrow\ldots \downarrow} \big) \otimes \ket{\downarrow}. \end{equation} We next flip spin one into an up state and then apply the pulse sequences from Eq.~(\ref{eqn:ctap1}) and Eq.~(\ref{eqn:mult1}). Under ideal conditions, this operation will transform the state \be \begin{split} \ket{\psi} \to \frac{1}{\sqrt{2}} ( \ket{\uparrow\uparrow\ldots \uparrow\downarrow} + e^{i\phi} \ket{\downarrow\downarrow\ldots \downarrow\uparrow}) , \end{split} \end{equation} which is equal to a GHZ state up to a single-qubit $Z$-rotation and $\pi$ pulse on the rightmost dot \be \ket{{\rm GHZ}} = \frac{1}{\sqrt{2}} \big( \ket{\uparrow\uparrow\ldots \uparrow\uparrow} + \ket{\downarrow\downarrow\ldots \downarrow\downarrow} \big). 
\end{equation} The main challenge in applying this GHZ state preparation scheme is the long transfer time associated with each step in the operation, which makes the protocol sensitive to noise. In Fig.~\ref{fig:ghz}(b), we show the performance of this GHZ state generation scheme for characteristic parameters up to 11 dots obtained from full numerical simulations of the multidot spin dynamics. Although we can successfully generate 11-qubit entanglement with this approach, achieving the highest fidelities requires much larger values of $Q_e$ compared to the three-dot case. Furthermore, the transfer times become comparable to $T_2^*$ for $N>5$, which begins to limit the achievable fidelities. A more practical GHZ state preparation scheme for $N>3$ likely involves local CNOT gates applied to the two ends to sequentially grow the GHZ state.\cite{NielsenChuang} This method has the advantage over our proposal of not requiring full state transfer in each step. \section{Conclusions} \label{sec:conclusions} We have introduced an adiabatic protocol for spin transfer across arbitrarily large arrays of quantum dots that we refer to as spin-CTAP. The spin transfer protocol is realized in the one-excitation subspace above the ground state of a spin-1/2 chain of Heisenberg-exchange-coupled spins in the presence of a large magnetic field gradient. Our approach is based on time-dependent modulation of the exchange interaction near the resonance frequency for nearest-neighbor flip-flops in the array. By controlling the static exchange profile across the array, we can also realize a quantum-controlled version of spin-CTAP, whereby the presence of spin flips in the middle of the array blocks the spin transfer protocol. Quantum-controlled spin-CTAP can be used to generate large GHZ states. Spin-CTAP has several applications to quantum information processing with quantum dot spin qubits.
In particular, high-fidelity transfer of spin eigenstates is feasible even in the presence of modest amounts of noise in the spin sector. Thus, this approach may find immediate use in scaling up spin readout in two-dimensional arrays where the central spins cannot be directly coupled to a nearby charge sensor. The simplicity of the control sequence may have advantages for achieving high-fidelity state transfer for some applications. The adiabatic nature of the protocol makes it highly robust to pulse imperfections, but leads to relatively slow transfer times, making it more difficult to transfer superposition states than spin eigenstates. Reducing the strength of the noise by an additional order of magnitude would allow high-fidelity transfer of superposition states. Such a coherent transfer process could be used to distribute long-range entanglement across the array to implement nonlocal quantum gates. \begin{acknowledgements} We thank T. Ladd, G. Platero, A. Sigillito, and C. Tahan for helpful discussions. Funded by DARPA grant No.\ D18AC0025, Army Research Office grant No.\ W911NF-15-1-0149, and the Gordon and Betty Moore Foundation's EPiQS Initiative through Grant GBMF4535. \end{acknowledgements} \bibliographystyle{apsrev-nourl-title-PRX}
\section{Introduction} \label{intro} Pushing exoplanet detection thresholds down to lower and lower masses has been a significant science driver in exoplanetary science in recent years. Seventeen Doppler exoplanets have been published to date with minimum (i.e. $m \sin i$) masses of less than 25\,M$_{\rm Earth}$\ -- GJ\,436b \citep{Butler04}; HD\,219828b \citep{Melo07}; HD\,69830b,c,d \citep{Lovis06}; HD\,190360c \citep{Vogt05}; Gl\,581b,c,d \citep{Udry07}; HD\,4308b \citep{Udry06}; HD\,160691d \citep{Santos04}; Gl\,674b \citep{Bonfils07}; 55\,Cnc\,e \citep{McArthur04}; Gl\,876d \citep{rivera05}; and the recently announced HD\,40307b,c,d \citep{Mayor08}. The majority of these have been found in short period orbits, with only three (HD\,69830c,d and Gl\,581d) having orbital periods greater than 30\,d. Of these seventeen low-mass exoplanets, roughly equal numbers have been found orbiting M-, K- and G-dwarfs (6, 6 and 5 respectively in each of these spectral types). A further three microlensing planets with masses in this range have also been detected -- in each case orbiting stars of M-class or later \citep{Gould06,Beaulieu06,Bennett08} because these stars dominate the field star population that microlensing surveys probe. The roughly equal distribution with host spectral type for the low-mass Doppler exoplanets hides several selection effects. First, that finding very low-mass planets orbiting G-dwarfs is {\em much} harder than finding them orbiting M-dwarfs, since the lower mass of an M-dwarf primary will (for a planet of given mass and orbital period) make the Doppler amplitude of an M-dwarf exoplanet at least three times larger than a G-dwarf one. And second, that current planet search target lists are dominated by G-dwarfs. The detection of such low-mass exoplanets within the last 4-5 years has in large part been due to the dramatic improvements achieved in the intrinsic, internal measurement precisions of Doppler planet search facilities.
These have improved to such an extent that it is now clear that noise sources {\em intrinsic} to the parent stars themselves are the limiting factor for very low-mass exoplanet detection. Characterization of these noise sources (jitter, convective granulation and asteroseismological p-mode oscillations) has become an important focus of Doppler planet detection. A few obvious modifications to current observing strategies have emerged -- (1) target low-mass stars; (2) target chromospherically inactive and slowly rotating stars; (3) target high-gravity stars (where p-mode oscillations are minimized) and (4) extend the observations of stars over several p-mode fundamental periods, so that asteroseismological noise is averaged over. In this paper, we present first results from a major observing campaign -- the Anglo-Australian Rocky Planet Search -- that focussed on the last three of these observing strategies, in an effort to push to the lowest possible detection limits achievable with the Anglo-Australian Planet Search (AAPS) Doppler system. The AAPS began operation in 1998 January, and is currently surveying 250 stars. To date it has discovered thirty-one exoplanets with $m \sin i$\ ranging from 0.17 to 10\,M$_{\rm Jup}$\ \citep{AAPSI,AAPSIII,AAPSVII,AAPSXI,AAPSXIII,AAPSII,AAPSV, AAPSIV,AAPSVI,AAPSVIII,AAPSXII,AAPSIX,AAPSX,AAPSXIV,AAPSXV}. The Anglo-Australian Rocky Planet Search targets unevolved dwarfs with low activity levels from our main AAPS program. The observing strategy is to observe every target on every night of a contiguous 48 night observing run (modulo, of course, the vagaries of weather). This 48n observing run covered two bright lunations, and included an entire dark lunation. Each observation extends over at least 15 minutes in order to beat down p-mode oscillation noise to levels well below 1\,\mbox{m\,s$^{-1}$}\ (O'Toole et al. 2008).
The full Rocky Planet Search target list includes 55 objects, of which 24 were targeted on our first 48n campaign in 2007 Jan \& Feb. In this letter we present results for the most compelling new exoplanetary detection (\object[HD16417]{HD\,16417b}) to arise from this concentrated, campaign-mode observing run, together with subsequent AAPS and Keck Planet Search observations. \section{HD\,16417} \label{16417} HD16417 (GJ 101.1, HIP 12186) lies at a distance of 25.5$\pm$0.4\,pc \citep{Perry97}, and has a spectral type of G1V \citep{Houk82,Gray06}, an absolute magnitude of $M_{\rm V} = 3.74$ ($V=5.78$) and colour $B-V=0.653$. Hipparcos photometry finds it to be photometrically stable at the 7 milli-magnitude level over 212 observations over the course of the Hipparcos mission \citep{Perry97}. As a bright and nearby Sun-like star, HD\,16417 has been the subject of multiple detailed atmospheric and isochrone analyses -- the conclusions reached by the most recent of these are summarized in Table \ref{hd16417_atm}. The first point to notice is that all these analyses agree that, while the gravity of HD\,16417 is not low enough for it to be classified as a giant or sub-giant, it is somewhat lower than the $\log g \approx 4.5$ one would expect from a main sequence early-G dwarf, indicating that it has begun to evolve off the main sequence. Where ages have been estimated, they indicate HD\,16417 to be somewhat older than the Sun -- in the range 4-8\,Gyr. The mass of HD\,16417 is estimated to be somewhat larger than that for the Sun, with the most recent estimates of \citet{ValentiFischer05} being 1.38 and 1.18\,M$_{\odot}$\ (based, respectively, on spectroscopic analysis and isochrone analysis) and that of \citet{daSilva06} being 1.18\,M$_{\odot}$. In the analysis that follows we assume a mass of 1.2\,M$_{\odot}$. Metallicity estimates for HD\,16417 range from [Fe/H] of $-$0.01 to $+$0.19, with an average value of [Fe/H]=$+$0.06. 
Perhaps most critically for the purposes of this study, HD\,16417 is a slow rotator ($v \sin i = 2.1$\,\mbox{km\,s$^{-1}$}) and extremely inactive (log\,R$^\prime_{\rm HK}=-5.08$), making it an ideal target for Doppler planet searching at very high precision. The updated \ion{Ca}{2} jitter calibration of J.~Wright (priv.\ comm.) for HD16417 indicates a jitter of 2.2\,\mbox{m\,s$^{-1}$}. The somewhat lower gravity of HD16417 relative to the Sun indicates that Doppler observations will be slightly affected by noise due to p-mode oscillations; the relations of \citet{otoole08a} indicate an rms noise equivalent of less than 0.6\,\mbox{m\,s$^{-1}$}, for observations of more than 10 minutes. \section{Observations} \label{obs} AAPS Doppler measurements are made with the UCLES echelle spectrograph \citep{diego:90}. An iodine absorption cell provides wavelength calibration from 5000 to 6200\,\AA. The spectrograph point-spread function and wavelength calibration are derived from the iodine absorption lines embedded on every pixel of the spectrum by the cell \citep{val:95,BuMaWi96}. Observations of HD\,16417 began as part of the AAPS main program in 1998, and over the following seven years it was observed regularly with exposures of 300-600s (depending on observing conditions), giving a signal-to-noise ratio (SNR) of $\approx$200 per spectral resolution element in the iodine region. These are the observations listed in Table \ref{velhd16417} between JD=2450831.0428 - 245381.1880. In 2005 Jul, HD\,16417 (together with a number of other bright AAPS targets) was elevated within our observing program to high-SNR status, such that its target SNR per epoch became 400 per spectral pixel. The result was that the median internal uncertainties produced by our Doppler fitting process dropped from 1.59\,\mbox{m\,s$^{-1}$}\ to 0.77\,\mbox{m\,s$^{-1}$}.
This improvement gave us confidence that our Rocky Planet Search strategy (concentrating on a small number of targets observed as contiguously as possible over a long observing run) would significantly lower our noise levels for the detection of low-mass planets. Observations for our Rocky Planet Search program began on 2007 Jan 10 and continued through 2007 Feb 26. HD\,16417 was able to be observed on 24 of those nights. Since this 48 night run, it has been observed on a further ten nights at the AAT, and on ten nights on the Keck~I telescope with HIRES \citep{Vogt94}. The Doppler velocities derived from all these observations are listed in Table \ref{velhd16417}. \section{Analysis} The root-mean-square (rms) scatter about the mean velocity of all data taken before 2005 Jul was 4.9\,\mbox{m\,s$^{-1}$}. The rms about the mean for all data taken since that date is 4.4\,\mbox{m\,s$^{-1}$}. This is slightly smaller than that seen in the earlier, lower SNR data, but is still significantly larger than would be expected based on the internal measurement uncertainties, and the noise from jitter and p-modes in this star. However, in spite of being observed with this improved precision at 16 epochs over the period 2005 July to 2006 November, no convincing periodicity could be extracted from the resultant Doppler velocities. The 48 night run in 2007, however, provided a clear indication of periodicity at $\approx$ 17\,d. It was then prioritized for intensive observation over the following 18 months, and subsequent data has confirmed the detection first made in our large, contiguous observing block. The traditional Lomb-Scargle periodogram \citep{Lomb76,Scargle82} estimates power as a function of period by fitting sinusoids to a data set. The Two Dimensional Keplerian Lomb-Scargle periodogram \citep[2DKLS, ][]{AAPSXIV} extends this concept by fitting Keplerians as a function of both period and eccentricity. 
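As an illustration of the basic periodogram search (a sketch with synthetic velocities, using a plain Lomb-Scargle rather than the full 2DKLS, and with arbitrary epochs and noise levels), a circular signal of the amplitude and period reported below is readily recovered:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

# Synthetic radial-velocity set: a circular K = 4.8 m/s signal with a
# 17.22 d period, sampled at irregular epochs with 2.5 m/s Gaussian
# noise (a stand-in for the combined jitter and measurement error).
P_true, K = 17.22, 4.8
t = np.sort(rng.uniform(0, 600, 60))         # 60 epochs over ~600 days
rv = K * np.sin(2 * np.pi * t / P_true) + rng.normal(0, 2.5, t.size)

periods = np.linspace(2, 100, 20000)         # trial periods in days
power = lombscargle(t, rv - rv.mean(), 2 * np.pi / periods)

P_best = periods[np.argmax(power)]
print(P_best)                                # recovers ~17.2 d
```

Note that `scipy.signal.lombscargle` takes angular frequencies, hence the $2\pi/P$ conversion; the 2DKLS generalizes this sinusoid fit to eccentric Keplerians.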
We show in Figure \ref{2DKLS}, for four subsets of the HD16417 data, slices through the 2DKLS at the eccentricity corresponding to the peak power. The subsets examined are: (a) all AAT data taken since 2005 Jul 27, (b) the AAT data taken in 2007 Jan and Feb, (c) all AAT data taken since 2005 Jul {\em except} that taken in 2007 Jan-Feb, and (d) AAT and Keck data taken since 2005 Jul (with Keck data zero-point-corrected to the AAT system as described below). The first point to note is that Fig. \ref{2DKLS} clearly shows evidence for a periodicity at 17\,d. The second point to note is that the periodicity is clearly seen in just the 24 epochs obtained for HD\,16417 in the major campaign run in 2007 (Fig. \ref{2DKLS}b). But perhaps most interestingly, if the long series of continuous data from that major run is removed (Fig. \ref{2DKLS}c), no convincing evidence for any periodicity is detectable. The number of data points and the internal measurement uncertainties of the data that produce Fig. \ref{2DKLS}b and \ref{2DKLS}c are almost exactly the same. The difference is in the window function of the observations. This is a key point to which we return below. And finally, the Keck data (Fig. \ref{2DKLS}d) confirms and sharpens the 17\,d peak seen at the AAT. Using the 2DKLS to identify an initial period \citep{AAPSXIV}, a least-squares Keplerian fit to all AAT data obtained since 2005 Jul results in the orbital parameters for HD\,16417b shown in Table \ref{orbit}. Figure \ref{fit} displays this fit (and the residuals to it) as a function of both time and orbital phase. The rms scatter about this fit is 2.7\,\mbox{m\,s$^{-1}$}, and the reduced chi-squared ($\chi^2_\nu$) is 1.46. This fit indicates the presence of a planet with period 17.22$\pm$0.02\,d, eccentricity 0.22$\pm$0.11, semi-major axis 0.14$\pm$0.01\,AU and minimum mass, $m \sin i$, 21.3$\pm$2.3M$_{\rm Earth}$.
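The quoted minimum mass follows from the fitted $K$, $P$ and $e$ via the standard Doppler mass relation; a quick consistency check (assuming $m \sin i \ll M_\ast$ and the adopted stellar mass of 1.2\,M$_{\odot}$):

```python
import math

# Minimum mass from the radial-velocity semi-amplitude, assuming
# m sin(i) << M_star:
#   K = (2 pi G / P)^(1/3) * m sin(i) / (M_star^(2/3) * sqrt(1 - e^2))
G = 6.674e-11                         # m^3 kg^-1 s^-2
M_sun, M_earth = 1.989e30, 5.972e24   # kg

K = 4.8                  # m/s   (fitted semi-amplitude)
P = 17.22 * 86400.0      # s     (fitted period)
e = 0.22                 # fitted eccentricity (AAT-only solution)
M_star = 1.2 * M_sun     # adopted stellar mass

msini = K * math.sqrt(1 - e**2) * (P / (2 * math.pi * G))**(1/3) * M_star**(2/3)
print(msini / M_earth)   # ~21 Earth masses, consistent with the fit
```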
As an independent test of the validity of the Keplerian fit to the AAT data, observations of HD\,16417 were acquired on 10 epochs in 2008 Aug-Sep with the HIRES spectrograph on the Keck~I telescope. These data were processed as described by \citet{Vogt05}. Being acquired with a completely different telescope, spectrograph, and detector system, these data provide an independent test of our AAT orbit. The Doppler observations from these 10 epochs have a different arbitrary velocity zero-point from our AAT data, which we solve for by determining the mean difference (5.30$\pm$0.8\,\mbox{m\,s$^{-1}$}) between them and the AAT Keplerian fit listed in Table \ref{orbit}. The Keck data have an rms scatter about the AAT Keplerian fit of 2.6\,\mbox{m\,s$^{-1}$}, and are consistent with the AAT orbital fit. The scatter of the Keck data about the AAT fit is consistent with the scatter seen about the AAT data, and is also consistent with being dominated by the 2.2\,\mbox{m\,s$^{-1}$}\ stellar jitter assumed for HD\,16417. A Keplerian fit to both the AAT and Keck Doppler data is plotted in phased format in Fig. \ref{bothfit} and has the parameters listed in Table \ref{orbit}. Inspection of the Table shows that the AAT and AAT+Keck solutions are essentially identical. To test the probability that the noise in our data set might have resulted in a false detection, we have run Monte Carlo simulations using the ``scrambled velocity'' approach of \citet{marcy05}. This technique makes the null hypothesis that no planet is present, and then uses the actual data as the best available proxy for the combined noise due to our observing system and the star. Multiple realizations of that noise are then created by keeping the observed timestamps, and scrambling the observed velocities amongst them. We created 6000 scrambled AAT velocity sets, and then subjected them to the same analysis as our actual data set (i.e. 
identifying the strongest peak in the 2DKLS followed by a least-squares Keplerian fit). No trial amongst 6000 showed a $\chi_{\nu}^2$ better than that obtained for the original AAT data set, and the distribution of the scrambled reduced $\chi_{\nu}^2$ (see Fig. \ref{scrambled}) shows a clear separation from that obtained with the actual data. We conclude that the probability of obtaining a false planetary detection from our velocities of HD\,16417 is $<$0.017\%. \section{Discussion} The velocity semi-amplitude of HD\,16417b is quite low ($K=4.8$\,m\,s$^{-1}$), so we must consider the possibility that the observed variation could be due to a stellar effect, such as a rotating starspot, rather than a planet. Unfortunately, the velocity amplitude is much too small for an analysis of line bisectors to reveal any surface kinematics. However, from the activity measure log\,R$^{\prime}_{\rm HK}=-5.08$, we can predict a rotation period of 23-33\,d \citep{Wright04}. This is inconsistent with our measured orbital period of 17.22\,d. It is conceivable (as suggested by \citet{Vogt05} for the similarly short-period, low-mass planet orbiting the inactive star HD\,190360) that a $\sim$17\,d Doppler periodicity could be caused by {\em two} spot complexes at opposite longitudes on a star with a rotation period of $\sim$34\,d. However, the presence of two such complexes would also wash out their Doppler signal, such that each individual complex would need to be roughly twice as large as that required to produce a similar velocity signal from a single complex. Given the implausibility of the contrived spot features required on HD\,16417 to produce the observed Doppler periodicity, {\em and} the fact that the 17\,d periodicity has been observed to be coherent in phase over more than 3 years, we argue that the most probable explanation for the observed velocity signal is a low-mass planet in a 17\,d orbit. 
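The scrambled-velocity test described above can be sketched in a few lines, here with synthetic data standing in for the actual velocity set and a plain Lomb-Scargle peak height standing in for the 2DKLS Keplerian $\chi^2_\nu$:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)

# Synthetic "observed" velocities at fixed, irregular timestamps.
t = np.sort(rng.uniform(0, 600, 50))
rv = 4.8 * np.sin(2 * np.pi * t / 17.22) + rng.normal(0, 2.5, t.size)

freqs = 2 * np.pi / np.linspace(2, 100, 3000)   # angular trial frequencies

def peak_power(vels):
    return lombscargle(t, vels - vels.mean(), freqs).max()

observed = peak_power(rv)

# Null hypothesis: no planet.  Keep the timestamps, scramble the
# velocities among them, and count how often noise alone produces a
# periodogram peak at least as strong as the observed one.
n_trials = 500
n_better = sum(peak_power(rng.permutation(rv)) >= observed
               for _ in range(n_trials))

print(n_better / n_trials)   # empirical false-alarm probability
```

Scrambling preserves the window function and the velocity distribution while destroying any phase coherence, which is what makes it a conservative proxy for the noise.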
Given that multiple planet systems are being found around an increasing number of extra-solar planet hosting stars \citep{butler06b}, we have carried out some simple tests of our data to see if further planets may be present. The next most significant Doppler peak in our data (after the first planet has been removed) is found at $\sim$290\,d. Two-planet fits have been tested, and suggest the possibility of a second highly eccentric planet ($e > 0.8$) at P$\approx$289\,d. However, at present we are hesitant to propose this as a firm candidate given the low Doppler amplitudes involved. The rms scatter about our single planet fit is just 2.6\,\mbox{m\,s$^{-1}$}\ (from AAT and Keck data combined), which is consistent with being due to our measurement uncertainties (1\,\mbox{m\,s$^{-1}$}) and jitter (2.2\,\mbox{m\,s$^{-1}$}) alone. It is the nature of eccentric Keplerian fits that they are eminently capable of producing apparently good fits to roughly constant data sets with a few velocity outliers -- however if those outliers are truly due to noise, then such fits are essentially meaningless. As this is just the case we see here, more data will be required to confirm or deny the presence of further planets in this system via the repeated observations of periodic outliers to the single planet Keplerian solution. The orbit of HD\,16417b appears to be non-circular ($e=0.20\pm0.09$), adding to the growing list of short-period exoplanets with non-zero orbital eccentricities. Tidal interaction with the planet host star is expected to circularize the orbits of planets with short periods, with circularization timescales typically shorter than the ages of their hosts. We have used the relationship of \citet{GS66} to estimate the circularisation timescale to be $\sim$350\,Gyr. This is much longer than the upper limit to the age of HD\,16417 ($\sim$7\,Gyr).
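This estimate can be reproduced to order of magnitude from the Goldreich \& Soter (1966) expression, taking $Q_{\mathrm{p}}=10^5$ and a HAT-P-11b-like radius of $\approx 4.7$\,R$_{\rm Earth}$ as illustrative inputs (the precise values adopted for the published fit may differ):

```python
import math

# Goldreich & Soter (1966) circularization timescale,
#   tau = (4/63) (Q_p / n) (m_p / M_star) (a / R_p)^5,
# with n the orbital mean motion.  Q_p = 1e5 and the HAT-P-11b-like
# radius R_p ~ 4.7 R_Earth are assumptions, as is m_p = m sin(i).
G = 6.674e-11                         # m^3 kg^-1 s^-2
M_sun, M_earth = 1.989e30, 5.972e24   # kg
R_earth, AU, yr = 6.371e6, 1.496e11, 3.156e7

Qp = 1e5
M_star = 1.2 * M_sun
m_p = 21.3 * M_earth
a = 0.14 * AU
R_p = 4.7 * R_earth

n = math.sqrt(G * M_star / a**3)      # orbital mean motion (rad/s)
tau = (4 / 63) * (Qp / n) * (m_p / M_star) * (a / R_p)**5

print(tau / yr / 1e9)                 # circularization timescale in Gyr
```

With these inputs the timescale comes out at a few hundred Gyr, in line with the $\sim$350\,Gyr quoted above; the strong $(a/R_p)^5$ dependence makes the result sensitive to the assumed planetary radius.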
We used a tidal quality factor, $Q_{\mathrm{p}}$, of $10^5$, which is in line with the value estimated for solar system planets, and used a radius estimate based on the measured radius of the similar object HAT-P-11b \citep{BTP09}. The origin of the non-circular orbits is not entirely clear. \citet{Matsumura08} suggested that either basing tidal circularization calculations on our solar system is not appropriate, or that these systems are affected by an external perturbation -- i.e. an outer (possibly undetected) planet. In the case of low-mass, short-period exoplanets such as HD\,16417b, however, we advise caution in placing too great an emphasis on non-zero eccentricities. Two recent studies by \citet{otoole09} and \citet{ST08} have found that there is a bias \emph{against} measuring zero-eccentricity orbits when signal-to-noise ratios are low. We note that the fit uncertainty for HD\,16417b is also quite high ($\sigma_e=0.1$), and so, coupled with this bias, it is not clear that the orbital eccentricity is well constrained. Monitoring of the star is ongoing: this will provide future constraints on all orbital parameters. HD\,16417b raises the number of known planets with minimum ($m \sin i$) masses of Neptune-mass (or less) to eighteen. Interestingly, roughly equal numbers have been found in orbit around G-, K- and M-dwarfs (6, 6 and 6 respectively in each spectral type). Interpreting these numbers, though, is fraught with difficulty. On the one hand, low-mass planets are easier to find around K- and M-dwarfs as their host star masses are smaller. On the other hand, substantially fewer K- and M-dwarf stars are under survey by Doppler programs at present, and they tend to be fainter and more difficult to obtain optical Doppler velocities for.
What we can clearly conclude from our observations of HD\,16417 is that the efficiency of detecting low-mass planets in short period orbits can be {\em significantly} enhanced through the use of contiguous, targeted observing campaigns. As noted earlier, 24 epochs of data on HD\,16417 obtained over a 48 night observing run show clear evidence for the existence of a low-mass planet orbiting this star. The same quality and quantity of data spread sparsely over an 18 month period in observing blocks of 4-8 nights (and subject to the exigencies of both telescope scheduling and weather) is {\em not} able to detect the same planet. Such intensive observing -- extending across dark, as well as bright, lunations -- may well need to become the norm for future high-precision Doppler planet search observations to continue probing to lower mass planets in short period orbits. \acknowledgements We acknowledge support from the following grants; NSF AST-9988087, NASA NAG5-12182, PPARC/STFC PP/C000552/1, ARC Discovery DP774000; and travel support from the Carnegie Institution of Washington and the Anglo-Australian Observatory. We are extremely grateful for the extraordinary support we have received from the AAT technical staff -- E. Penny, R. Paterson, D. Stafford, F. Freeman, S. Lee, J. Pogson, S. James, J. Stevenson, K. Fiegert and G. Schaffer. \facility{AAT,Keck:I} \clearpage